<!-- hf_public_repos/transformers/docs/source/ko/tasks/audio_classification.md -->

<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Audio classification[[audio_classification]]

[[open-in-colab]]

<Youtube id="KWwzcmG98Ds"/>

Audio classification, just like text classification, assigns a class label to the input data. The only difference is that instead of text, you have a raw audio waveform. Practical applications of audio classification include identifying speaker intent, language classification, and even identifying animal species by their sounds.

This guide will show you how to:

1. Fine-tune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to classify speaker intent.
2. Use the fine-tuned model for inference.

<Tip>

The task illustrated in this tutorial is supported by the following model architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

It's a good idea to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load the MInDS-14 dataset[[load_minds_14_dataset]]

Start by loading the MInDS-14 dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Split the dataset's `train` split into smaller train and test sets with the [`~datasets.Dataset.train_test_split`] method. This gives you a chance to experiment and make sure everything works before spending more time on the full dataset.
```py
>>> minds = minds.train_test_split(test_size=0.2)
```

Now take a look at the dataset:

```py
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 450
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 113
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, this guide focuses on `audio` and `intent_class`. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:

```py
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```

Take a look at an example:

```py
>>> minds["train"][0]
{'audio': {'array': array([ 0.        ,  0.        ,  0.        , ..., -0.00048828,
         -0.00024414, -0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
  'sampling_rate': 8000},
 'intent_class': 2}
```

There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `intent_class`: represents the class id of the speaker's intent.

To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:

```py
>>> labels = minds["train"].features["intent_class"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
...     label2id[label] = str(i)
...     id2label[str(i)] = label
```

Now you can convert the label id to a label name:

```py
>>> id2label[str(2)]
'app_error'
```

## Preprocess[[preprocess]]

The next step is to load a Wav2Vec2 feature extractor to process the audio signal:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model:

```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05,  4.6582241e-05, -2.2803260e-05, ...,
         -2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
  'sampling_rate': 16000},
 'intent_class': 2}
```

Now create a preprocessing function that:

1. Calls the `audio` column to load, and if necessary, resample the audio file.
2. Checks that the sampling rate of the audio file matches the sampling rate of the audio data the model was pretrained with. You can find this information in the Wav2Vec2 [model card](https://huggingface.co/facebook/wav2vec2-base).
3. Sets a maximum input length to batch longer inputs without truncating them.
```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
...     )
...     return inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need, and rename `intent_class` to `label` because that's the name the model expects:

```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```

## Evaluate[[evaluate]]

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:

```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions = np.argmax(eval_pred.predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train[[train]]

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels, and the label mappings:

```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer

>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_mind_model",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=3e-5,
...     per_device_train_batch_size=32,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=32,
...     num_train_epochs=10,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=feature_extractor,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).

</Tip>

## Inference[[inference]]

Great, now that you've fine-tuned a model, you can use it for inference!

Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
    {'score': 0.09766869246959686, 'label': 'cash_deposit'},
    {'score': 0.07998877018690109, 'label': 'app_error'},
    {'score': 0.0781070664525032, 'label': 'joint_account'},
    {'score': 0.07667109370231628, 'label': 'pay_bill'},
    {'score': 0.0755252093076706, 'label': 'balance'}
]
```

You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Load a feature extractor to preprocess the audio file and return the `input` as PyTorch tensors:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```py
>>> import torch
>>> from transformers import AutoModelForAudioClassification

>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:

```py
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
</pt>
</frameworkcontent>
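If you want the full probability distribution over intents rather than just the top class, you can apply a softmax to the logits yourself. This is a minimal illustrative sketch building on the `logits` and model loaded above; the actual scores will depend on your own fine-tuned checkpoint:

```py
>>> # Convert the raw logits into probabilities over all intent classes
>>> probs = torch.softmax(logits, dim=-1)[0]

>>> # Pair each probability with its label name and sort from most to least likely
>>> ranked = sorted(
...     ((model.config.id2label[i], p.item()) for i, p in enumerate(probs)),
...     key=lambda x: x[1],
...     reverse=True,
... )
>>> ranked[:3]  # top-3 (label, probability) pairs
```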

<!-- hf_public_repos/transformers/docs/source/ko/tasks/token_classification.md -->

<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Token classification[[token-classification]]

[[open-in-colab]]

<Youtube id="wVHdVlPScxA"/>

Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.

This guide will show you how to:

1. Fine-tune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.
2. Use the fine-tuned model for inference.

<Tip>

The task illustrated in this tutorial is supported by the following model architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate seqeval
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load the WNUT 17 dataset[[load-wnut-17-dataset]]

Start by loading the WNUT 17 dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> wnut = load_dataset("wnut_17")
```

Then take a look at an example:

```py
>>> wnut["train"][0]
{'id': '0',
 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```

Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:

```py
>>> label_list = wnut["train"].features[f"ner_tags"].feature.names
>>> label_list
[
    "O",
    "B-corporation",
    "I-corporation",
    "B-creative-work",
    "I-creative-work",
    "B-group",
    "I-group",
    "B-location",
    "I-location",
    "B-person",
    "I-person",
    "B-product",
    "I-product",
]
```

The letter that prefixes each `ner_tag` indicates the token position of the entity:

- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is a part of an entity like `Empire State Building`).
- `O` indicates the token doesn't correspond to any entity.

## Preprocess[[preprocess]]

<Youtube id="iY2AZYdZAr0"/>

The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet, and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:

```py
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```

However, this adds the special tokens `[CLS]` and `[SEP]`, and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:

1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method (illustrated in the quick look below).
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function.
3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.
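To see what step 1 gives you, here's a quick inspection snippet (added for illustration) using the `tokenized_input` and `tokens` from above; the pairs shown should follow the tokenization printed earlier:

```py
>>> # word index per token: None for [CLS]/[SEP], subwords of one word share an index
>>> word_ids = tokenized_input.word_ids()
>>> list(zip(tokens, word_ids))[:5]
[('[CLS]', None), ('@', 0), ('paul', 0), ('##walk', 0), ('it', 1)]
```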
๋‹ค์Œ์€ ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ์žฌ์ •๋ ฌํ•˜๊ณ  DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def tokenize_and_align_labels(examples): ... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True) ... labels = [] ... for i, label in enumerate(examples[f"ner_tags"]): ... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. ... previous_word_idx = None ... label_ids = [] ... for word_idx in word_ids: # Set the special tokens to -100. ... if word_idx is None: ... label_ids.append(-100) ... elif word_idx != previous_word_idx: # Only label the first token of a given word. ... label_ids.append(label[word_idx]) ... else: ... label_ids.append(-100) ... previous_word_idx = word_idx ... labels.append(label_ids) ... tokenized_inputs["labels"] = labels ... return tokenized_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluation]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). Seqeval์€ ์‹ค์ œ๋กœ ์ •๋ฐ€๋„, ์žฌํ˜„๋ฅ , F1 ๋ฐ ์ •ํ™•๋„์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> seqeval = evaluate.load("seqeval") ``` ๋จผ์ € NER ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ, [`~evaluate.EvaluationModule.compute`]์— ์‹ค์ œ ์˜ˆ์ธก๊ณผ ์‹ค์ œ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> labels = [label_list[i] for i in example[f"ner_tags"]] >>> def compute_metrics(p): ... predictions, labels = p ... predictions = np.argmax(predictions, axis=2) ... true_predictions = [ ... [label_list[p] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... true_labels = [ ... [label_list[l] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... results = seqeval.compute(predictions=true_predictions, references=true_labels) ... return { ... "precision": results["overall_precision"], ... 
"recall": results["overall_recall"], ... "f1": results["overall_f1"], ... "accuracy": results["overall_accuracy"], ... } ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = { ... 0: "O", ... 1: "B-corporation", ... 2: "I-corporation", ... 3: "B-creative-work", ... 4: "I-creative-work", ... 5: "B-group", ... 6: "I-group", ... 7: "B-location", ... 8: "I-location", ... 9: "B-person", ... 10: "I-person", ... 11: "B-product", ... 12: "I-product", ... } >>> label2id = { ... "O": 0, ... "B-corporation": 1, ... "I-corporation": 2, ... "B-creative-work": 3, ... "I-creative-work": 4, ... "B-group": 5, ... "I-group": 6, ... "B-location": 7, ... "I-location": 8, ... "B-person": 9, ... "I-person": 10, ... "B-product": 11, ... "I-product": 12, ... } ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer >>> model = AutoModelForTokenClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” seqeval ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_wnut_model", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=2, ... weight_decay=0.01, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_wnut["train"], ... eval_dataset=tokenized_wnut["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! 
</Tip>

To fine-tune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
...     num_warmup_steps=0,
... )
```

Then load DistilBERT with [`TFAutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_wnut["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_wnut["validation"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_wnut_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training dataset, validation dataset, the number of epochs, and your callbacks to fine-tune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).

</Tip>

## Inference[[inference]]

Great, now that you've fine-tuned a model, you can use it for inference!
์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco." ``` ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ NER์˜ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”: ```py >>> from transformers import pipeline >>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model") >>> classifier(text) [{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10}, {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16}, {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25}, {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83}, {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predictions = torch.argmax(logits, dim=2) >>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </tf> </frameworkcontent>

<!-- hf_public_repos/transformers/docs/source/ko/tasks/asr.md -->

<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Automatic speech recognition[[automatic-speech-recognition]]

[[open-in-colab]]

<Youtube id="TksaY_FDgnk"/>

Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users every day, and there are many other useful user-facing applications like live captioning and note-taking during meetings.

This guide will show you how to:

1. Fine-tune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text.
2. Use the fine-tuned model for inference.

<Tip>

The task illustrated in this tutorial is supported by the following model architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate jiwer
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load the MInDS-14 dataset[[load-minds-14-dataset]]

Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This gives you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```

Split the dataset's `train` split into a train and test set with the [`~Dataset.train_test_split`] method:

```py
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:

```py
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 16
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 4
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, this guide focuses on `audio` and `transcription`. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:

```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```

Take a look at the example again:

```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414,  0.        ,  0.        , ...,  0.00024414,
          0.00024414,  0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 8000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `transcription`: the target text.

## Preprocess[[preprocess]]

The next step is to load a Wav2Vec2 processor to process the audio signal:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```

The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model:

```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
          2.78103951e-04,  2.38446111e-04,  1.18740834e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 16000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

As you can see in the `transcription` above, the text contains a mix of uppercase and lowercase characters.
Wav2Vec2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋Œ€๋ฌธ์ž ๋ฌธ์ž์— ๋Œ€ํ•ด์„œ๋งŒ ํ›ˆ๋ จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ…์ŠคํŠธ๊ฐ€ ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> def uppercase(example): ... return {"transcription": example["transcription"].upper()} >>> minds = minds.map(uppercase) ``` ์ด์ œ ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์—์„œ `input_values`๋ฅผ ์ถ”์ถœํ•˜๊ณ  ํ”„๋กœ์„ธ์„œ๋กœ `transcription` ์—ด์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def prepare_dataset(batch): ... audio = batch["audio"] ... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"]) ... batch["input_length"] = len(batch["input_values"][0]) ... return batch ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `num_proc` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4) ``` ๐Ÿค— Transformers์—๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ”์„ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์š”์†Œ์˜ ๊ธธ์ด์— ๋™์ ์œผ๋กœ ํŒจ๋”ฉํ•˜์—ฌ ๊ธธ์ด๋ฅผ ๊ท ์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. `tokenizer` ํ•จ์ˆ˜์—์„œ `padding=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๋™์  ํŒจ๋”ฉ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ ์ด ํŠน์ • ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” `input_values`์™€ `labels`์— ๋Œ€ํ•ด ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> from dataclasses import dataclass, field >>> from typing import Any, Dict, List, Optional, Union >>> @dataclass ... class DataCollatorCTCWithPadding: ... processor: AutoProcessor ... padding: Union[bool, str] = "longest" ... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: ... # ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค ... # ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค ... input_features = [{"input_values": feature["input_values"][0]} for feature in features] ... label_features = [{"input_ids": feature["labels"]} for feature in features] ... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt") ... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt") ... # ํŒจ๋”ฉ์— ๋Œ€ํ•ด ์†์‹ค์„ ์ ์šฉํ•˜์ง€ ์•Š๋„๋ก -100์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค ... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) ... batch["labels"] = labels ... return batch ``` ์ด์ œ `DataCollatorForCTCWithPadding`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest") ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
## Evaluate[[evaluate]]

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate (WER)](https://huggingface.co/spaces/evaluate-metric/wer) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> wer = evaluate.load("wer")
```

Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the WER:

```py
>>> import numpy as np


>>> def compute_metrics(pred):
...     pred_logits = pred.predictions
...     pred_ids = np.argmax(pred_logits, axis=-1)

...     pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

...     pred_str = processor.batch_decode(pred_ids)
...     label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

...     wer_score = wer.compute(predictions=pred_str, references=label_str)

...     return {"wer": wer_score}
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train[[train]]

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForCTC`]. Specify the reduction to apply to the CTC loss with the `ctc_loss_reduction` parameter. It is often better to use the average instead of the default summation:

```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer

>>> model = AutoModelForCTC.from_pretrained(
...     "facebook/wav2vec2-base",
...     ctc_loss_reduction="mean",
...     pad_token_id=processor.tokenizer.pad_token_id,
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). The [`Trainer`] will evaluate the WER and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_asr_mind_model",
...     per_device_train_batch_size=8,
...     gradient_accumulation_steps=2,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=2000,
...     gradient_checkpointing=True,
...     fp16=True,
...     group_by_length=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=8,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     load_best_model_at_end=True,
...     metric_for_best_model="wer",
...     greater_is_better=False,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=processor.feature_extractor,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>

<Tip>

For a more in-depth example of how to fine-tune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR.

</Tip>

## Inference[[inference]]

Great, now that you've fine-tuned a model, you can use it for inference!

Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it:

```py
>>> from transformers import pipeline

>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```

<Tip>

The transcription is decent, but it could be better! Try fine-tuning your model on more examples to get even better results!

</Tip>

You can also manually replicate the results of the `pipeline`:

<frameworkcontent>
<pt>
Load a processor to preprocess the audio file and transcription and return the `input` as PyTorch tensors:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```py
>>> import torch
>>> from transformers import AutoModelForCTC

>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text:

```py
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
</pt>
</frameworkcontent>
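If you still have the `wer` metric loaded from the evaluation section, you can also score this single transcription against the dataset's reference text. A small illustrative sketch reusing `wer`, `transcription`, and `dataset` from above:

```py
>>> # Compare the model's transcription against the uppercased reference text
>>> reference = dataset[0]["transcription"].upper()
>>> wer.compute(predictions=transcription, references=[reference])
```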

<!-- hf_public_repos/transformers/docs/source/ko/tasks/zero_shot_object_detection.md -->

<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Zero-shot object detection[[zeroshot-object-detection]]

[[open-in-colab]]

Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and they are limited to detecting the set of classes (labels) from the training data.

Zero-shot object detection takes a different approach with the [OWL-ViT](../model_doc/owlvit) model. OWL-ViT is an open-vocabulary object detector. It means that it can detect objects in images based on free-text queries without having to fine-tune the model on labeled datasets.

OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines the [CLIP](../model_doc/clip) model with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with CLIP's text encoder and using them as input to the object classification and localization heads, while a ViT processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using a bipartite matching loss.

With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.

In this guide, you will learn how to use OWL-ViT for:

- text-prompted object detection
- batch object detection
- image-guided object detection

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```

## Zero-shot object detection pipeline[[zeroshot-object-detection-pipeline]]

The simplest way to try out inference with OWL-ViT is to use it in a [`pipeline`]. Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads):

```python
>>> from transformers import pipeline

>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```

Next, choose an image you'd like to detect objects in.
Here we'll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.

```py
>>> import skimage
>>> import numpy as np
>>> from PIL import Image

>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")

>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/>
</div>

Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. The candidate labels can be simple words like in this example, or more descriptive. We also pass text descriptions for all the items we want to query the image for.

```py
>>> predictions = detector(
...     image,
...     candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
 {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
 {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
 {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
 {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
 {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```

Let's visualize the predictions:

```py
>>> from PIL import ImageDraw

>>> draw = ImageDraw.Draw(image)

>>> for prediction in predictions:
...     box = prediction["box"]
...     label = prediction["label"]
...     score = prediction["score"]

...     xmin, ymin, xmax, ymax = box.values()
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")

>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/>
</div>
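If you only want to keep the more confident detections, the zero-shot object detection pipeline also accepts a `threshold` argument. A minimal sketch (the 0.3 cutoff is an arbitrary illustrative value):

```py
>>> # Keep only predictions the detector scores above 0.3
>>> confident_predictions = detector(
...     image,
...     candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
...     threshold=0.3,
... )
```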
์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import requests >>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640" >>> im = Image.open(requests.get(url, stream=True).raw) >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” [`CLIPTokenizer`]๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> text_queries = ["hat", "book", "sunglasses", "camera"] >>> inputs = processor(text=text_queries, images=im, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌ ๋ฐ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ๋ชจ๋ธ์— ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅํ•˜๊ธฐ ์ „์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์—, [`~OwlViTImageProcessor.post_process_object_detection`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ์˜ˆ์ธก๊ฐ’์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค(bounding box)๊ฐ€ ์›๋ณธ ์ด๋ฏธ์ง€์˜ ์ขŒํ‘œ์™€ ์ƒ๋Œ€์ ์œผ๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) ... target_sizes = torch.tensor([im.size[::-1]]) ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(im) >>> scores = results["scores"].tolist() >>> labels = results["labels"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white") >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ผ๊ด„ ์ฒ˜๋ฆฌ[[batch-processing]] ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์—์„œ ์„œ๋กœ ๋‹ค๋ฅธ(๋˜๋Š” ๋™์ผํ•œ) ๊ฐ์ฒด๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๊ด„ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋Š” ์ด์ค‘ ๋ฆฌ์ŠคํŠธ๋กœ, ์ด๋ฏธ์ง€๋Š” PIL ์ด๋ฏธ์ง€, PyTorch ํ…์„œ, ๋˜๋Š” NumPy ๋ฐฐ์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋กœ ํ”„๋กœ์„ธ์„œ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> images = [image, im] >>> text_queries = [ ... ["human face", "rocket", "nasa badge", "star-spangled banner"], ... ["hat", "book", "sunglasses", "camera"], ... ] >>> inputs = processor(text=text_queries, images=images, return_tensors="pt") ``` ์ด์ „์—๋Š” ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ๋‹จ์ผ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ๋ฅผ ํ…์„œ๋กœ ์ „๋‹ฌํ–ˆ์ง€๋งŒ, ํŠœํ”Œ์„ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๊ณ , ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ํŠœํ”Œ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋‘ ์˜ˆ์ œ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์ƒ์„ฑํ•˜๊ณ , ๋‘ ๋ฒˆ์งธ ์ด๋ฏธ์ง€(`image_idx = 1`)๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model(**inputs) ... 
target_sizes = [x.size[::-1] for x in images] ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes) >>> image_idx = 1 >>> draw = ImageDraw.Draw(images[image_idx]) >>> scores = results[image_idx]["scores"].tolist() >>> labels = results[image_idx]["labels"].tolist() >>> boxes = results[image_idx]["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white") >>> images[image_idx] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€[[imageguided-object-detection]] ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ด์šฉํ•œ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ์™ธ์—๋„ OWL-ViT ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด ๋Œ€์ƒ ์ด๋ฏธ์ง€์—์„œ ์œ ์‚ฌํ•œ ๊ฐ์ฒด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ๋‹ค๋งŒ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ์™€ ๋‹ฌ๋ฆฌ, ์˜ˆ์ œ ์ด๋ฏธ์ง€๋Š” ํ•œ ์žฅ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์†ŒํŒŒ์— ๊ณ ์–‘์ด ๋‘ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋Œ€์ƒ ์ด๋ฏธ์ง€(target image)๋กœ, ๊ณ ์–‘์ด ํ•œ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image_target = Image.open(requests.get(url, stream=True).raw) >>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg" >>> query_image = Image.open(requests.get(query_url, stream=True).raw) ``` ๋‹ค์Œ ์ด๋ฏธ์ง€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> fig, ax = plt.subplots(1, 2) >>> ax[0].imshow(image_target) >>> ax[1].imshow(query_image) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/> </div> ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„์—์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ ๋Œ€์‹ ์— `query_images`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt") ``` ์˜ˆ์ธก์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๋Š” ๋Œ€์‹  [`~OwlViTForObjectDetection.image_guided_detection`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์ด ์—†๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์ด์ „๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด์ „๊ณผ ๋™์ผํ•˜๊ฒŒ ์ด๋ฏธ์ง€๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model.image_guided_detection(**inputs) ... target_sizes = torch.tensor([image_target.size[::-1]]) ... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(image_target) >>> scores = results["scores"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score in zip(boxes, scores): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4) >>> image_target ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/> </div> OWL-ViT ๋ชจ๋ธ๋กœ ์ง์ ‘ ์ถ”๋ก ํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด ์•„๋ž˜ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: <iframe src="https://adirik-owl-vit.hf.space" frameborder="0" width="850" height="450" ></iframe>
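๋ง๋ถ™์ด์ž๋ฉด, ์ด ๊ฐ€์ด๋“œ ์•ž๋ถ€๋ถ„์—์„œ ์„ค๋ช…ํ•œ ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€์˜ ํ•ต์‹ฌ ์•„์ด๋””์–ด๋ฅผ ์•„์ฃผ ๋‹จ์ˆœํ™”ํ•˜๋ฉด "๊ฐ ์ด๋ฏธ์ง€ ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๊ฐ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ ์ž„๋ฒ ๋”ฉ์˜ ์œ ์‚ฌ๋„๋ฅผ ์ ์ˆ˜ํ™”ํ•˜๋Š” ๊ฒƒ"์œผ๋กœ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ž„์˜๋กœ ์ •ํ•œ ์ฐจ์›์„ ๊ฐ€์ •ํ•œ ๊ฐœ๋…์  ์Šค์ผ€์น˜์ผ ๋ฟ, ์‹ค์ œ OWL-ViT ๊ตฌํ˜„(์ •๊ทœํ™”, ํ•™์Šต๋œ ๋ฐ”์ด์–ด์Šค, ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ํšŒ๊ท€ ํ—ค๋“œ ๋“ฑ)๊ณผ๋Š” ๋‹ค๋ฅด๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”:

```py
import torch

# ๊ฐ€์ƒ์˜ ์ฐจ์›: ํŒจ์น˜ 576๊ฐœ, ํ…์ŠคํŠธ ์ฟผ๋ฆฌ 4๊ฐœ, ์ž„๋ฒ ๋”ฉ ์ฐจ์› 512 (์˜ˆ์‹œ์šฉ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค)
patch_embeds = torch.randn(576, 512)  # ViT๊ฐ€ ๋งŒ๋“  ํŒจ์น˜๋ณ„ ์ด๋ฏธ์ง€ ์ž„๋ฒ ๋”ฉ
query_embeds = torch.randn(4, 512)    # CLIP ํ…์ŠคํŠธ ์ธ์ฝ”๋”๊ฐ€ ๋งŒ๋“  ์ฟผ๋ฆฌ ์ž„๋ฒ ๋”ฉ

# ๊ฐ ํŒจ์น˜-์ฟผ๋ฆฌ ์Œ์˜ ๋‚ด์ ์„ ํด๋ž˜์Šค ๋กœ์ง“์œผ๋กœ ์“ฐ๊ณ , ์ฟผ๋ฆฌ๋ณ„๋กœ ๋…๋ฆฝ์ ์ธ ์ ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค
logits = patch_embeds @ query_embeds.T  # ํ˜•ํƒœ: (ํŒจ์น˜ ์ˆ˜, ์ฟผ๋ฆฌ ์ˆ˜)
scores = logits.sigmoid()

# ๊ฐ€์žฅ ์ ์ˆ˜๊ฐ€ ๋†’์€ (ํŒจ์น˜, ์ฟผ๋ฆฌ) ์Œ ํ™•์ธ
best_patch, best_query = divmod(scores.argmax().item(), scores.shape[1])
```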
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[zeroshot-image-classification]] [[open-in-colab]] ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์˜ ์˜ˆ์‹œ๊ฐ€ ํฌํ•จ๋œ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ๋‹ฌ๋ฆฐ ํŠน์ • ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ ๋ชจ๋ธ ํ•™์Šต์ด ํ•„์š”ํ•˜๋ฉฐ, ์ด ๋ชจ๋ธ์€ ํŠน์ • ์ด๋ฏธ์ง€์˜ ํŠน์ง•์„ ๋ ˆ์ด๋ธ”์— "๋งคํ•‘"ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด ์žˆ๋Š” ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š”, ๋ชจ๋ธ์„ "์žฌ๋ณด์ •"ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด์™€ ๋Œ€์กฐ์ ์œผ๋กœ, ์ œ๋กœ์ƒท ๋˜๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open vocabulary) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€๊ทœ๋ชจ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์™€ ํ•ด๋‹น ์„ค๋ช…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ(multimodal) ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ํฌํ•จํ•œ ๋งŽ์€ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ •๋ ฌ๋œ(aligned) ๋น„์ „ ์–ธ์–ด ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์œ ์—ฐํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์œผ๋กœ, ์ถ”๊ฐ€ ํ•™์Šต ๋ฐ์ดํ„ฐ ์—†์ด ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด๋‚˜ ํ•™์Šตํ•˜์ง€ ๋ชปํ•œ ์นดํ…Œ๊ณ ๋ฆฌ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ์ผ๋ฐ˜ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž๊ฐ€ ๋Œ€์ƒ ๊ฐœ์ฒด์— ๋Œ€ํ•œ ์ž์œ  ํ˜•์‹์˜ ํ…์ŠคํŠธ ์„ค๋ช…์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์ถ”๋ก  ์‹คํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-image-classification-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import pipeline >>> checkpoint = "openai/clip-vit-large-patch14" >>> classifier = pipeline(model=checkpoint, task="zero-shot-image-classification") ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„๋ฅ˜ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”.
```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์ธ `candidate_labels`๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `candidate_labels`๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> predictions = classifier(image, candidate_labels=["fox", "bear", "seagull", "owl"]) >>> predictions [{'score': 0.9996670484542847, 'label': 'owl'}, {'score': 0.000199399160919711, 'label': 'seagull'}, {'score': 7.392891711788252e-05, 'label': 'fox'}, {'score': 5.96074532950297e-05, 'label': 'bear'}] ``` ## ์ง์ ‘ ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ํ•˜๊ธฐ[[zeroshot-image-classification-by-hand]] ์ด์ œ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification >>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> candidate_labels = ["tree", "car", "bike", "cat"] >>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ , ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits_per_image[0] >>> probs = logits.softmax(dim=-1).numpy() >>> scores = probs.tolist() >>> result = [ ... {"score": score, "label": candidate_label} ... for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0]) ... ] >>> result [{'score': 0.998572, 'label': 'car'}, {'score': 0.0010570387, 'label': 'bike'}, {'score': 0.0003393686, 'label': 'tree'}, {'score': 3.1572064e-05, 'label': 'cat'}] ```
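ํ•œ ๊ฐ€์ง€ ์ฐธ๊ณ ํ•  ์ ์€, [`pipeline`]์€ ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ `"This is a photo of {}."` ํ˜•ํƒœ์˜ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ(`hypothesis_template`)์„ ํ›„๋ณด ๋ ˆ์ด๋ธ”์— ์ ์šฉํ•œ ๋’ค ํ…์ŠคํŠธ๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ฐ˜๋ฉด, ์œ„์ฒ˜๋Ÿผ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ง์ ‘ ํ˜ธ์ถœํ•˜๋ฉด ๋ ˆ์ด๋ธ” ๋ฌธ์ž์—ด์ด ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉ๋˜์–ด ๋‘ ๋ฐฉ์‹์˜ ์ ์ˆ˜๊ฐ€ ์•ฝ๊ฐ„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ด๋Ÿฐ ํ…œํ”Œ๋ฆฟ์„ ์ง์ ‘ ์ ์šฉํ•ด ๋ณด๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜๋กœ, ์œ„ ์„ธ์…˜์˜ `processor`, `model`, `image`, `candidate_labels`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> # ํŒŒ์ดํ”„๋ผ์ธ์ด ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์€ ํ˜•ํƒœ์˜ ํ…œํ”Œ๋ฆฟ์„ ์ˆ˜๋™์œผ๋กœ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค
>>> template = "This is a photo of {}."
>>> texts = [template.format(label) for label in candidate_labels]

>>> inputs = processor(images=image, text=texts, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # ๋ ˆ์ด๋ธ” ์ˆœ์„œ๋Š” candidate_labels์™€ ๋™์ผํ•ฉ๋‹ˆ๋‹ค
>>> probs = outputs.logits_per_image[0].softmax(dim=-1)
```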
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling)[[masked-language-modeling]] [[open-in-colab]] <Youtube id="mqElG5QJWUg"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์œผ๋กœ, ๋ชจ๋ธ์ด ํ† ํฐ์˜ ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์–‘๋ฐฉํ–ฅ ๋ฌธ๋งฅ์— ๋ชจ๋‘ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•œ ๋ฌธ๋งฅ์  ์ดํ•ด๊ฐ€ ํ•„์š”ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•˜๋ฉฐ, BERT๊ฐ€ ๊ทธ ์˜ˆ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๋ฃฐ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [r/askscience](https://www.reddit.com/r/askscience/) ๋ถ€๋ถ„์„ ์‚ฌ์šฉํ•ด [DistilRoBERTa](https://huggingface.co/distilroberta-base) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก  ์‹œ์— ์ง์ ‘ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ์ฒ˜๋Ÿผ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์•„ํ‚คํ…์ฒ˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์„ ํƒํ•˜์„ธ์š”: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:
```bash pip install transformers datasets evaluate ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ r/askscience ์ค‘ ์ผ๋ถ€๋งŒ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ ํ•™์Šต์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks`๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ๋ฉ‹์ง„ ์ ์€ (๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ) *๋งˆ์Šคํ‚น๋œ ํ† ํฐ ์ž์ฒด๊ฐ€ ๋ ˆ์ด๋ธ”*์ด ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋ ˆ์ด๋ธ”์ด ๋”ฐ๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="8PmhEIXhBvI"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด, ๋‹ค์Œ ๋‹จ๊ณ„๋กœ DistilRoBERTa ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, `text` ํ•„๋“œ๋Š” `answers` ์•ˆ์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.
๋”ฐ๋ผ์„œ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ์—์„œ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ์ด์ œ ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” `answers` ์ ‘๋‘์‚ฌ(prefix)๋กœ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ณ„๋„์˜ ์—ด์ด ๋˜๊ณ , `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹  ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ ์˜ˆ์ œ์— ๋Œ€ํ•ด ๋ฌธ์ž์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ `join`ํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋„๋ก `batched=True`๋ฅผ ์„ค์ •ํ•˜๊ณ  `num_proc`๋กœ ์ฒ˜๋ฆฌ ํšŸ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ํ† ํฐ ์‹œํ€€์Šค๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ์ค‘ ์ผ๋ถ€๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊น๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ  - ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์ •์˜ํ•œ `block_size` ๋ณด๋‹ค ๋” ์งง์€ ๋ฉ์–ด๋ฆฌ๋กœ ๋ถ„ํ• ํ•˜๋Š”๋ฐ, ์ด ๋ฉ์–ด๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ์งง๊ณ  GPU RAM์ด ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ธธ์ด์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... 
# customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ์ด์ œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค collation ๋‹จ๊ณ„์—์„œ ๋งค ๋ฐฐ์น˜ ์•ˆ์—์„œ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) ``` </pt> <tf> ์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMaskedLM >>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๊ฐ€ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(collator)์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments( ... output_dir="my_awesome_eli5_mlm_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=lm_dataset["train"], ... eval_dataset=lm_dataset["test"], ... data_collator=data_collator, ...
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity)๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import math >>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 8.76 ``` ๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, Hub๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> TensorFlow๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer) ํ•จ์ˆ˜ ์„ค์ •, ํ•™์Šต๋ฅ (learning rate) ์Šค์ผ€์ฅด๋ง, ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์„ค์ •๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๋‹ค์Œ์œผ๋กœ [`TFAutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์†Œ๋“œ๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์—…๋กœ๋“œํ•  ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €์˜ ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์— ์ง€์ •ํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_mlm_model", ... tokenizer=tokenizer, ... ) ``` ๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜, ์ฝœ๋ฐฑ์ด ํฌํ•จ๋œ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ์ž๋™์œผ๋กœ Hub๋กœ ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ์ œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ง€๊ธˆ๊นŒ์ง€ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •์„ ์ž˜ ํ–ˆ์œผ๋‹ˆ, ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์ด ์ฑ„์šธ ๋นˆ์นธ์€ ์ŠคํŽ˜์…œ ํ† ํฐ(special token)์ธ `<mask>`๋กœ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "The Milky Way is a <mask> galaxy."
``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `fill-mask`ํƒœ์Šคํฌ๋กœ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. `top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•˜๋Š” ์˜ˆ์ธก์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model") >>> mask_filler(text, top_k=3) [{'score': 0.5150994658470154, 'token': 21300, 'token_str': ' spiral', 'sequence': 'The Milky Way is a spiral galaxy.'}, {'score': 0.07087188959121704, 'token': 2232, 'token_str': ' massive', 'sequence': 'The Milky Way is a massive galaxy.'}, {'score': 0.06434620916843414, 'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}] ``` <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="pt") >>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1] ``` ๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMaskedLM >>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์€ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist() >>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="tf") >>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1] ``` ๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์€ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy() >>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </tf> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต (Visual Question Answering) [[open-in-colab]] ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต(VQA)์€ ์ด๋ฏธ์ง€๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐœ๋ฐฉํ˜• ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ๋Œ€๋ถ€๋ถ„ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์˜ ์กฐํ•ฉ์ด๋ฉฐ, ์ถœ๋ ฅ์€ ์ž์—ฐ์–ด๋กœ ๋œ ๋‹ต๋ณ€์ž…๋‹ˆ๋‹ค. VQA์˜ ์ฃผ์š” ์‚ฌ์šฉ ์‚ฌ๋ก€๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์‹œ๊ฐ ์žฅ์• ์ธ์„ ์œ„ํ•œ ์ ‘๊ทผ์„ฑ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ๊ต์œก: ๊ฐ•์˜๋‚˜ ๊ต๊ณผ์„œ์— ๋‚˜์˜จ ์‹œ๊ฐ ์ž๋ฃŒ์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์ฒดํ—˜ํ˜• ์ „์‹œ์™€ ์œ ์  ๋“ฑ์—์„œ๋„ VQA๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ๊ณ ๊ฐ ์„œ๋น„์Šค ๋ฐ ์ „์ž์ƒ๊ฑฐ๋ž˜: VQA๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ์ œํ’ˆ์— ๋Œ€ํ•ด ์งˆ๋ฌธํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•จ์œผ๋กœ์จ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ๊ฒ€์ƒ‰: VQA ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ํŠน์„ฑ์„ ๊ฐ€์ง„ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์‚ฌ์šฉ์ž๋Š” "๊ฐ•์•„์ง€๊ฐ€ ์žˆ์–ด?"๋ผ๊ณ  ๋ฌผ์–ด๋ด์„œ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€ ๋ฌถ์Œ์—์„œ ๊ฐ•์•„์ง€๊ฐ€ ์žˆ๋Š” ๋ชจ๋“  ์ด๋ฏธ์ง€๋ฅผ ๋ฐ›์•„๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - VQA ๋ชจ๋ธ ์ค‘ ํ•˜๋‚˜์ธ [ViLT](../../en/model_doc/vilt)๋ฅผ [`Graphcore/vqa` ๋ฐ์ดํ„ฐ์…‹](https://huggingface.co/datasets/Graphcore/vqa) ์—์„œ ๋ฏธ์„ธ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• - ๋ฏธ์„ธ์กฐ์ •๋œ ViLT ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ• - BLIP-2 ๊ฐ™์€ ์ƒ์„ฑ ๋ชจ๋ธ๋กœ ์ œ๋กœ์ƒท VQA ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ• ## ViLT ๋ฏธ์„ธ ์กฐ์ • [[finetuning-vilt]] ViLT ๋ชจ๋ธ์€ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)์— ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ์„ ๋„ฃ์–ด ๋น„์ „/์–ธ์–ด ์‚ฌ์ „ํ›ˆ๋ จ(VLP; Vision-and-Language Pre-training)์„ ์œ„ํ•œ ๊ธฐ๋ณธ์ ์ธ ๋””์ž์ธ์„ ๊ฐ–์ท„์œผ๋ฉฐ, ์—ฌ๋Ÿฌ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. VQA ํƒœ์Šคํฌ์—์„œ๋Š” (`[CLS]` ํ† ํฐ์˜ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด์ธ) ๋ถ„๋ฅ˜ ํ—ค๋”๊ฐ€ ์žˆ์œผ๋ฉฐ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—ฌ๊ธฐ์—์„œ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต์€ **๋ถ„๋ฅ˜ ๋ฌธ์ œ**๋กœ ์ทจ๊ธ‰๋ฉ๋‹ˆ๋‹ค. ์ตœ๊ทผ์˜ BLIP, BLIP-2, InstructBLIP์™€ ๊ฐ™์€ ๋ชจ๋ธ๋“ค์€ VQA๋ฅผ ์ƒ์„ฑํ˜• ์ž‘์—…์œผ๋กœ ๊ฐ„์ฃผํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์ด๋“œ์˜ ํ›„๋ฐ˜๋ถ€์—์„œ๋Š” ์ด๋Ÿฐ ๋ชจ๋ธ๋“ค์„ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๋กœ์ƒท VQA ์ถ”๋ก ์„ ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „ ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ```bash pip install -q transformers datasets ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
๋ฉ”์‹œ์ง€๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „์—ญ ๋ณ€์ˆ˜๋กœ ์„ ์–ธํ•˜์„ธ์š”. ```py >>> model_checkpoint = "dandelin/vilt-b32-mlm" ``` ## ๋ฐ์ดํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” [๐Ÿค— Hub](https://huggingface.co/datasets/Graphcore/vqa) ์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ](https://huggingface.co/datasets/Graphcore/vqa) ์˜ ๋Œ€์•ˆ์œผ๋กœ ๊ณต์‹ [VQA ๋ฐ์ดํ„ฐ์„ธํŠธ ํŽ˜์ด์ง€](https://visualqa.org/download.html) ์—์„œ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ๊ตฌ์ถ•ํ•œ ๋ฐ์ดํ„ฐ๋กœ ํŠœํ† ๋ฆฌ์–ผ์„ ๋”ฐ๋ฅด๊ณ  ์‹ถ๋‹ค๋ฉด [์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์„ธํŠธ ๋งŒ๋“ค๊ธฐ](https://huggingface.co/docs/datasets/image_dataset#loading-script) ๋ผ๋Š” ๐Ÿค— Datasets ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์˜ ์ฒซ 200๊ฐœ ํ•ญ๋ชฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) ``` ์˜ˆ์ œ๋ฅผ ํ•˜๋‚˜ ๋ฝ‘์•„ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ์ดํ•ดํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} ``` ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํŠน์„ฑ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: * `question`: ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ * `image_id`: ์งˆ๋ฌธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ * `label`: ๋ฐ์ดํ„ฐ์˜ ๋ ˆ์ด๋ธ” (annotations) ๋‚˜๋จธ์ง€ ํŠน์„ฑ๋“ค์€ ํ•„์š”ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์‚ญ์ œํ•ด๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ `label` ํŠน์„ฑ์€ ํ•˜๋‚˜์˜ ์งˆ๋ฌธ์— ๋‹ต๋ณ€์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋‘ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๋ผ๋ฒจ๋Ÿฌ๋“ค๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ธ๋ฐ์š”. ์งˆ๋ฌธ์˜ ๋‹ต๋ณ€์€ ์ฃผ๊ด€์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์งˆ๋ฌธ์€ "๊ทธ๋Š” ์–ด๋””๋ฅผ ๋ณด๊ณ  ์žˆ๋‚˜์š”?" ์˜€์ง€๋งŒ, ์–ด๋–ค ์‚ฌ๋žŒ๋“ค์€ "์•„๋ž˜"๋กœ ๋ ˆ์ด๋ธ”์„ ๋‹ฌ์•˜๊ณ , ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์€ "ํ…Œ์ด๋ธ”" ๋˜๋Š” "์Šค์ผ€์ดํŠธ๋ณด๋“œ" ๋“ฑ์œผ๋กœ ์ฃผ์„์„ ๋‹ฌ์•˜์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ณ  ์–ด๋–ค ๋‹ต๋ณ€์„ ์„ ํƒํ•  ๊ฒƒ์ธ์ง€ ์ƒ๊ฐํ•ด ๋ณด์„ธ์š”: ```python >>> from PIL import Image >>> image = Image.open(dataset[0]['image_id']) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/> </div> ์งˆ๋ฌธ๊ณผ ๋‹ต๋ณ€์˜ ๋ชจํ˜ธ์„ฑ์œผ๋กœ ์ธํ•ด ์ด๋Ÿฌํ•œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋‹ต๋ณ€์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ๊ฒŒ๋‹ค๊ฐ€, ์›ํ•ซ(one-hot) ์ธ์ฝ”๋”ฉ ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ๋ณด๋‹ค๋Š” ๋ ˆ์ด๋ธ”์—์„œ ํŠน์ • ๋‹ต๋ณ€์ด ๋‚˜ํƒ€๋‚˜๋Š” ํšŸ์ˆ˜๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์†Œํ”„ํŠธ ์ธ์ฝ”๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
์œ„์˜ ์˜ˆ์‹œ์—์„œ "์•„๋ž˜"๋ผ๋Š” ๋‹ต๋ณ€์ด ๋‹ค๋ฅธ ๋‹ต๋ณ€๋ณด๋‹ค ํ›จ์”ฌ ๋” ์ž์ฃผ ์„ ํƒ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ `weight`๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ์ ์ˆ˜๋กœ 1.0์„ ๊ฐ€์ง€๋ฉฐ, ๋‚˜๋จธ์ง€ ๋‹ต๋ณ€๋“ค์€ 1.0 ๋ฏธ๋งŒ์˜ ์ ์ˆ˜๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ ์ ˆํ•œ ๋ถ„๋ฅ˜ ํ—ค๋”๋กœ ๋ชจ๋ธ์„ ๋‚˜์ค‘์— ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜, ๋ฐ˜๋Œ€๋กœ ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ”๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜ ์ด 2๊ฐœ์˜ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> import itertools >>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels)) >>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} ``` ์ด์ œ ๋งคํ•‘์ด ์™„๋ฃŒ๋˜์—ˆ์œผ๋ฏ€๋กœ ๋ฌธ์ž์—ด ๋‹ต๋ณ€์„ ํ•ด๋‹น id๋กœ ๊ต์ฒดํ•˜๊ณ , ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ๋” ํŽธ๋ฆฌํ•œ ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ํŽธํ‰ํ™” ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python >>> def replace_ids(inputs): ... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]] ... return inputs >>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-data]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์œ„ํ•ด ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด ViLT ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [`ViltProcessor`]๋Š” BERT ํ† ํฌ๋‚˜์ด์ €์™€ ViLT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ํŽธ๋ฆฌํ•˜๊ฒŒ ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์„œ๋กœ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import ViltProcessor >>> processor = ViltProcessor.from_pretrained(model_checkpoint) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ [`ViltProcessor`]๋กœ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” [`BertTokenizerFast`]๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์œ„ํ•ด `input_ids`, `attention_mask` ๋ฐ `token_type_ids`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” [`ViltImageProcessor`]๋กœ ์ด๋ฏธ์ง€๋ฅผ ํฌ๊ธฐ ์กฐ์ •ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋ฉฐ, `pixel_values`์™€ `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฐ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋Š” ๋ชจ๋‘ ๋‚ด๋ถ€์—์„œ ์ด๋ฃจ์–ด์ง€๋ฏ€๋กœ, `processor`๋ฅผ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์•„์ง ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”์ด ์™„์„ฑ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ํƒ€๊ฒŸ์˜ ํ‘œํ˜„์—์„œ ๊ฐ ์š”์†Œ๋Š” ๊ฐ€๋Šฅํ•œ ๋‹ต๋ณ€(๋ ˆ์ด๋ธ”)์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ๋‹ต๋ณ€์˜ ์š”์†Œ๋Š” ํ•ด๋‹น ์ ์ˆ˜(weight)๋ฅผ ์œ ์ง€์‹œํ‚ค๊ณ  ๋‚˜๋จธ์ง€ ์š”์†Œ๋Š” 0์œผ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ํ•จ์ˆ˜๊ฐ€ ์œ„์—์„œ ์„ค๋ช…ํ•œ๋Œ€๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— `processor`๋ฅผ ์ ์šฉํ•˜๊ณ  ๋ ˆ์ด๋ธ”์„ ํ˜•์‹์— ๋งž์ถฅ๋‹ˆ๋‹ค: ```py >>> import torch >>> def preprocess_data(examples): ... image_paths = examples['image_id'] ... images = [Image.open(image_path) for image_path in image_paths] ... texts = examples['question'] ... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt") ... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = [] ... for labels, scores in zip(examples['label.ids'], examples['label.weights']): ... target = torch.zeros(len(id2label)) ... for label, score in zip(labels, scores): ... target[label] = score ... targets.append(target) ... encoding["labels"] = targets ... 
return encoding ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•จ์œผ๋กœ์จ `map`์„ ๋” ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”. ```py >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights']) >>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) ``` ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ๋กœ ์“ธ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ## ๋ชจ๋ธ ํ›ˆ๋ จ [[train-the-model]] ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ์ค€๋น„๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`ViltForQuestionAnswering`]์œผ๋กœ ViLT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์˜ ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import ViltForQuestionAnswering >>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` ์ด ์‹œ์ ์—์„œ๋Š” ๋‹ค์Œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”: ```py >>> from transformers import TrainingArguments >>> repo_id = "MariaK/vilt_finetuned_200" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์„ธํŠธ, ํ”„๋กœ์„ธ์„œ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=processed_dataset, ... tokenizer=processor, ... ) ``` 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Hub์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ViLT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ๋‹ค๋ฉด ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200") ``` ์ด ๊ฐ€์ด๋“œ์˜ ๋ชจ๋ธ์€ 200๊ฐœ์˜ ์˜ˆ์ œ์—์„œ๋งŒ ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ ๊ทธ๋‹ค์ง€ ๋งŽ์€ ๊ฒƒ์„ ๊ธฐ๋Œ€ํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก  ๊ฒฐ๊ณผ๋ฅผ ์„ค๋ช…ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?" [{'score': 0.5498199462890625, 'answer': 'down'}] ``` ๋น„๋ก ํ™•์‹ ์€ ๋ณ„๋กœ ์—†์ง€๋งŒ, ๋ชจ๋ธ์€ ์‹ค์ œ๋กœ ๋ฌด์–ธ๊ฐ€๋ฅผ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ์˜ˆ์ œ์™€ ๋” ๊ธด ํ›ˆ๋ จ ๊ธฐ๊ฐ„์ด ์ฃผ์–ด์ง„๋‹ค๋ฉด ๋ถ„๋ช… ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค!
์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€์„œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ์ „์ฒ˜๋ฆฌ๋œ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋กœ์ง“์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ ์žˆ๋Š” ๋‹ต๋ณ€์˜ id๋ฅผ ๊ฐ€์ ธ์™€์„œ `id2label`์—์„œ ์‹ค์ œ ๋‹ต๋ณ€์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ```py >>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200") >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors="pt") >>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200") >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print("Predicted answer:", model.config.id2label[idx]) Predicted answer: down ``` ## ์ œ๋กœ์ƒท VQA [[zeroshot-vqa]] ์ด์ „ ๋ชจ๋ธ์€ VQA๋ฅผ ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌํ–ˆ์Šต๋‹ˆ๋‹ค. BLIP, BLIP-2 ๋ฐ InstructBLIP์™€ ๊ฐ™์€ ์ตœ๊ทผ์˜ ๋ชจ๋ธ์€ VQA๋ฅผ ์ƒ์„ฑ ์ž‘์—…์œผ๋กœ ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค. [BLIP-2](../../en/model_doc/blip-2)๋ฅผ ์˜ˆ๋กœ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋น„์ „ ์ธ์ฝ”๋”์™€ LLM์˜ ๋ชจ๋“  ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ๋กœ์šด ๋น„์ „-์ž์—ฐ์–ด ์‚ฌ์ „ ํ•™์Šต ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค. ([BLIP-2 ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/blip-2)๋ฅผ ํ†ตํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณผ ์ˆ˜ ์žˆ์–ด์š”) ์ด๋ฅผ ํ†ตํ•ด ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต์„ ํฌํ•จํ•œ ์—ฌ๋Ÿฌ ๋น„์ „-์ž์—ฐ์–ด ์ž‘์—…์—์„œ SOTA๋ฅผ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ์–ด๋–ป๊ฒŒ VQA์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์„ค๋ช…ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ GPU๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฒฝ์šฐ ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ GPU๋กœ ์ „์†กํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์ „์—๋Š” ํ›ˆ๋ จํ•  ๋•Œ ์“ฐ์ง€ ์•Š์€ ์ด์œ ๋Š” [`Trainer`]๊ฐ€ ์ด ๋ถ€๋ถ„์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) ``` ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์œผ๋ฏ€๋กœ, VQA ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ์—์„œ์™€ ๋™์ผํ•œ ์ด๋ฏธ์ง€/์งˆ๋ฌธ ์Œ์„ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] ``` BLIP-2๋ฅผ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ `Question: {} Answer:` ํ˜•์‹์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> prompt = f"Question: {question} Answer:" ``` ์ด์ œ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋กœ ์ด๋ฏธ์ง€/ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ , ์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ์„ ๋ชจ๋ธ์„ ํ†ตํ•ด ์ „๋‹ฌํ•˜๊ณ , ์ถœ๋ ฅ์„ ๋””์ฝ”๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) "He is looking at the crowd" ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๋ชจ๋ธ์€ ๊ตฐ์ค‘์„ ์ธ์‹ํ•˜๊ณ , ์–ผ๊ตด์˜ ๋ฐฉํ–ฅ(์•„๋ž˜์ชฝ์„ ๋ณด๊ณ  ์žˆ์Œ)์„ ์ธ์‹ํ–ˆ์ง€๋งŒ, ๊ตฐ์ค‘์ด ์Šค์ผ€์ดํ„ฐ ๋’ค์— ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์„ ๋†“์ณค์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ๋žŒ์ด ์ง์ ‘ ๋ผ๋ฒจ๋งํ•œ ๋ฐ์ดํ„ฐ์…‹์„ ์–ป์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ์—, ์ด ์ ‘๊ทผ๋ฒ•์€ ๋น ๋ฅด๊ฒŒ ์œ ์šฉํ•œ ๊ฒฐ๊ณผ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
<!--Telif Hakkฤฑ 2020 The HuggingFace Ekibi. Tรผm haklarฤฑ saklฤฑdฤฑr. Apache Lisansฤฑ, Sรผrรผm 2.0 (Lisans); bu dosyayฤฑ yรผrรผrlรผkteki yasalara uygun bir ลŸekilde kullanabilirsiniz. Lisansฤฑn bir kopyasฤฑnฤฑ aลŸaฤŸฤฑdaki adresten alabilirsiniz. http://www.apache.org/licenses/LICENSE-2.0 Lisansa tabi olmayan durumlarda veya yazฤฑlฤฑ anlaลŸma olmadฤฑkรงa, Lisans kapsamฤฑnda daฤŸฤฑtฤฑlan yazฤฑlฤฑm, herhangi bir tรผrde (aรงฤฑk veya zฤฑmni) garanti veya koลŸul olmaksฤฑzฤฑn, "OLDUฤžU GฤฐBฤฐ" ESASINA Gร–RE daฤŸฤฑtฤฑlฤฑr. Lisans hรผkรผmleri, รถzel belirli dil kullanฤฑmฤฑ, yetkileri ve kฤฑsฤฑtlamalarฤฑ belirler. โš ๏ธ Bu dosya Markdown biรงimindedir, ancak belge oluลŸturucumuz iรงin รถzgรผ sรถzdizimleri iรงerir (MDX gibi) ve muhtemelen Markdown gรถrรผntรผleyicinizde dรผzgรผn bir ลŸekilde gรถrรผntรผlenmeyebilir. --> # ๐Ÿค— Transformers [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) ve [JAX](https://jax.readthedocs.io/en/latest/) iรงin son teknoloji makine รถฤŸrenimi. ๐Ÿค— Transformers, gรผncel รถnceden eฤŸitilmiลŸ (pretrained) modelleri indirmenizi ve eฤŸitmenizi kolaylaลŸtฤฑran API'ler ve araรงlar sunar. ร–nceden eฤŸitilmiลŸ modeller kullanarak, hesaplama maliyetlerinizi ve karbon ayak izinizi azaltabilir, ve sฤฑfฤฑrdan bir modeli eฤŸitmek iรงin gereken zaman ve kaynaklardan tasarruf edebilirsiniz. Bu modeller farklฤฑ modalitelerde ortak gรถrevleri destekler. ร–rneฤŸin: ๐Ÿ“ **DoฤŸal Dil ฤฐลŸleme**: metin sฤฑnฤฑflandฤฑrma, adlandฤฑrฤฑlmฤฑลŸ varlฤฑk tanฤฑma, soru cevaplama, dil modelleme, รถzetleme, รงeviri, รงoktan seรงmeli ve metin oluลŸturma.<br> ๐Ÿ–ผ๏ธ **Bilgisayarlฤฑ Gรถrรผ**: gรถrรผntรผ sฤฑnฤฑflandฤฑrma, nesne tespiti ve bรถlรผmleme (segmentation).<br> ๐Ÿ—ฃ๏ธ **Ses**: otomatik konuลŸma tanฤฑma ve ses sฤฑnฤฑflandฤฑrma.<br> ๐Ÿ™ **ร‡oklu Model**: tablo soru cevaplama, optik karakter tanฤฑma, taranmฤฑลŸ belgelerden bilgi รงฤฑkarma, video sฤฑnฤฑflandฤฑrma ve gรถrsel soru cevaplama. ๐Ÿค— Transformers, PyTorch, TensorFlow ve JAX arasฤฑnda รงerรงeve (framework) uyumluluฤŸu saฤŸlar. Bu, bir modelin yaลŸam dรถngรผsรผnรผn her aลŸamasฤฑnda farklฤฑ bir รงerรงeve kullanma esnekliฤŸi sunar; bir รงerรงevede รผรง satฤฑr kodla bir modeli eฤŸitebilir ve baลŸka bir รงerรงevede tahminleme iรงin kullanabilirsiniz. Modeller ayrฤฑca รผretim ortamlarฤฑnda kullanฤฑlmak รผzere ONNX ve TorchScript gibi bir formata aktarฤฑlabilir. Bรผyรผyen topluluฤŸa [Hub](https://huggingface.co/models), [Forum](https://discuss.huggingface.co/) veya [Discord](https://discord.com/invite/JfAtkvEtRb) รผzerinden katฤฑlabilirsiniz! ## Hugging Face ekibinden รถzel destek arฤฑyorsanฤฑz <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Uzman Hฤฑzlandฤฑrma Programฤฑ" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## ฤฐรงindekiler Dokรผmantasyon, beลŸ bรถlรผme ayrฤฑlmฤฑลŸtฤฑr: - **BAลžLARKEN**, kรผtรผphanenin hฤฑzlฤฑ bir turunu ve รงalฤฑลŸmaya baลŸlamak iรงin kurulum talimatlarฤฑnฤฑ saฤŸlar. - **ร–ฤžRETฤฐCฤฐLER**, baลŸlangฤฑรง yapmak iรงin harika bir yerdir. Bu bรถlรผm, kรผtรผphane kullanmaya baลŸlamak iรงin ihtiyacฤฑnฤฑz olan temel becerileri kazanmanฤฑza yardฤฑmcฤฑ olacaktฤฑr. - **NASIL YAPILIR KILAVUZLARI**, รถnceden eฤŸitilmiลŸ bir modele dil modellemesi iรงin ince ayar (fine-tuning) yapmak veya รถzel bir model yazmak, ve paylaลŸmak gibi belirli bir hedefe nasฤฑl ulaลŸฤฑlacaฤŸฤฑnฤฑ gรถsterir. 
- **KAVRAMSAL REHBERLER**, modellerin, gรถrevlerin ve ๐Ÿค— Transformers tasarฤฑm felsefesinin temel kavramlarฤฑ ve fikirleri hakkฤฑnda daha fazla tartฤฑลŸma ve aรงฤฑklama sunar. - **API** tรผm sฤฑnฤฑflarฤฑ (class) ve fonksiyonlarฤฑ (functions) aรงฤฑklar: - **ANA SINIFLAR**, yapฤฑlandฤฑrma, model, tokenizer ve pipeline gibi en รถnemli sฤฑnฤฑflarฤฑ (classes) ayrฤฑntฤฑlandฤฑrฤฑr. - **MODELLER**, kรผtรผphanede kullanฤฑlan her modelle ilgili sฤฑnฤฑflarฤฑ ve fonksiyonlarฤฑ detaylฤฑ olarak inceler. - **DAHฤฐLฤฐ YARDIMCILAR**, kullanฤฑlan yardฤฑmcฤฑ sฤฑnฤฑflarฤฑ ve fonksiyonlarฤฑ detaylฤฑ olarak inceler. ## Desteklenen Modeller ve ร‡erรงeveler AลŸaฤŸฤฑdaki tablo, her bir model iรงin kรผtรผphanede yer alan mevcut desteฤŸi temsil etmektedir. Her bir model iรงin bir Python tokenizer'ฤฑna ("slow" olarak adlandฤฑrฤฑlฤฑr) sahip olup olmadฤฑklarฤฑ, ๐Ÿค— Tokenizers kรผtรผphanesi tarafฤฑndan desteklenen hฤฑzlฤฑ bir tokenizer'a sahip olup olmadฤฑklarฤฑ, Jax (Flax aracฤฑlฤฑฤŸฤฑyla), PyTorch ve/veya TensorFlow'da destek olup olmadฤฑklarฤฑnฤฑ gรถstermektedir. <!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!--> | Model | PyTorch support | TensorFlow support | Flax Support | |:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:| | [ALBERT](model_doc/albert) | โœ… | โœ… | โœ… | | [ALIGN](model_doc/align) | โœ… | โŒ | โŒ | | [AltCLIP](model_doc/altclip) | โœ… | โŒ | โŒ | | [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | โœ… | โŒ | โŒ | | [Autoformer](model_doc/autoformer) | โœ… | โŒ | โŒ | | [Bark](model_doc/bark) | โœ… | โŒ | โŒ | | [BART](model_doc/bart) | โœ… | โœ… | โœ… | | [BARThez](model_doc/barthez) | โœ… | โœ… | โœ… | | [BARTpho](model_doc/bartpho) | โœ… | โœ… | โœ… | | [BEiT](model_doc/beit) | โœ… | โŒ | โœ… | | [BERT](model_doc/bert) | โœ… | โœ… | โœ… | | [Bert Generation](model_doc/bert-generation) | โœ… | โŒ | โŒ | | [BertJapanese](model_doc/bert-japanese) | โœ… | โœ… | โœ… | | [BERTweet](model_doc/bertweet) | โœ… | โœ… | โœ… | | [BigBird](model_doc/big_bird) | โœ… | โŒ | โœ… | | [BigBird-Pegasus](model_doc/bigbird_pegasus) | โœ… | โŒ | โŒ | | [BioGpt](model_doc/biogpt) | โœ… | โŒ | โŒ | | [BiT](model_doc/bit) | โœ… | โŒ | โŒ | | [Blenderbot](model_doc/blenderbot) | โœ… | โœ… | โœ… | | [BlenderbotSmall](model_doc/blenderbot-small) | โœ… | โœ… | โœ… | | [BLIP](model_doc/blip) | โœ… | โœ… | โŒ | | [BLIP-2](model_doc/blip-2) | โœ… | โŒ | โŒ | | [BLOOM](model_doc/bloom) | โœ… | โŒ | โœ… | | [BORT](model_doc/bort) | โœ… | โœ… | โœ… | | [BridgeTower](model_doc/bridgetower) | โœ… | โŒ | โŒ | | [BROS](model_doc/bros) | โœ… | โŒ | โŒ | | [ByT5](model_doc/byt5) | โœ… | โœ… | โœ… | | [CamemBERT](model_doc/camembert) | โœ… | โœ… | โŒ | | [CANINE](model_doc/canine) | โœ… | โŒ | โŒ | | [Chinese-CLIP](model_doc/chinese_clip) | โœ… | โŒ | โŒ | | [CLAP](model_doc/clap) | โœ… | โŒ | โŒ | | [CLIP](model_doc/clip) | โœ… | โœ… | โœ… | | [CLIPSeg](model_doc/clipseg) | โœ… | โŒ | โŒ | | [CodeGen](model_doc/codegen) | โœ… | โŒ | โŒ | | [CodeLlama](model_doc/code_llama) | โœ… | โŒ | โŒ | | [Conditional DETR](model_doc/conditional_detr) | โœ… | โŒ | โŒ | | [ConvBERT](model_doc/convbert) | โœ… | โœ… | โŒ | | [ConvNeXT](model_doc/convnext) | โœ… | โœ… | โŒ | | [ConvNeXTV2](model_doc/convnextv2) | โœ… | โŒ | โŒ | | [CPM](model_doc/cpm) | โœ… | โœ… | โœ… | | [CPM-Ant](model_doc/cpmant) | โœ… | โŒ | โŒ | | 
| [CTRL](model_doc/ctrl) | ✅ | ✅ | ❌ |
| [CvT](model_doc/cvt) | ✅ | ✅ | ❌ |
| [Data2VecAudio](model_doc/data2vec) | ✅ | ❌ | ❌ |
| [Data2VecText](model_doc/data2vec) | ✅ | ❌ | ❌ |
| [Data2VecVision](model_doc/data2vec) | ✅ | ✅ | ❌ |
| [DeBERTa](model_doc/deberta) | ✅ | ✅ | ❌ |
| [DeBERTa-v2](model_doc/deberta-v2) | ✅ | ✅ | ❌ |
| [Decision Transformer](model_doc/decision_transformer) | ✅ | ❌ | ❌ |
| [Deformable DETR](model_doc/deformable_detr) | ✅ | ❌ | ❌ |
| [DeiT](model_doc/deit) | ✅ | ✅ | ❌ |
| [DePlot](model_doc/deplot) | ✅ | ❌ | ❌ |
| [DETA](model_doc/deta) | ✅ | ❌ | ❌ |
| [DETR](model_doc/detr) | ✅ | ❌ | ❌ |
| [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ |
| [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ |
| [DINOv2](model_doc/dinov2) | ✅ | ❌ | ❌ |
| [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ |
| [DiT](model_doc/dit) | ✅ | ❌ | ✅ |
| [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ |
| [DPR](model_doc/dpr) | ✅ | ✅ | ❌ |
| [DPT](model_doc/dpt) | ✅ | ❌ | ❌ |
| [EfficientFormer](model_doc/efficientformer) | ✅ | ✅ | ❌ |
| [EfficientNet](model_doc/efficientnet) | ✅ | ❌ | ❌ |
| [ELECTRA](model_doc/electra) | ✅ | ✅ | ✅ |
| [EnCodec](model_doc/encodec) | ✅ | ❌ | ❌ |
| [Encoder decoder](model_doc/encoder-decoder) | ✅ | ✅ | ✅ |
| [ERNIE](model_doc/ernie) | ✅ | ❌ | ❌ |
| [ErnieM](model_doc/ernie_m) | ✅ | ❌ | ❌ |
| [ESM](model_doc/esm) | ✅ | ✅ | ❌ |
| [FairSeq Machine-Translation](model_doc/fsmt) | ✅ | ❌ | ❌ |
| [Falcon](model_doc/falcon) | ✅ | ❌ | ❌ |
| [FLAN-T5](model_doc/flan-t5) | ✅ | ✅ | ✅ |
| [FLAN-UL2](model_doc/flan-ul2) | ✅ | ✅ | ✅ |
| [FlauBERT](model_doc/flaubert) | ✅ | ✅ | ❌ |
| [FLAVA](model_doc/flava) | ✅ | ❌ | ❌ |
| [FNet](model_doc/fnet) | ✅ | ❌ | ❌ |
| [FocalNet](model_doc/focalnet) | ✅ | ❌ | ❌ |
| [Funnel Transformer](model_doc/funnel) | ✅ | ✅ | ❌ |
| [Fuyu](model_doc/fuyu) | ✅ | ❌ | ❌ |
| [GIT](model_doc/git) | ✅ | ❌ | ❌ |
| [GLPN](model_doc/glpn) | ✅ | ❌ | ❌ |
| [GPT Neo](model_doc/gpt_neo) | ✅ | ❌ | ✅ |
| [GPT NeoX](model_doc/gpt_neox) | ✅ | ❌ | ❌ |
| [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | ✅ | ❌ | ❌ |
| [GPT-J](model_doc/gptj) | ✅ | ✅ | ✅ |
| [GPT-Sw3](model_doc/gpt-sw3) | ✅ | ✅ | ✅ |
| [GPTBigCode](model_doc/gpt_bigcode) | ✅ | ❌ | ❌ |
| [GPTSAN-japanese](model_doc/gptsan-japanese) | ✅ | ❌ | ❌ |
| [Graphormer](model_doc/graphormer) | ✅ | ❌ | ❌ |
| [GroupViT](model_doc/groupvit) | ✅ | ✅ | ❌ |
| [HerBERT](model_doc/herbert) | ✅ | ✅ | ✅ |
| [Hubert](model_doc/hubert) | ✅ | ✅ | ❌ |
| [I-BERT](model_doc/ibert) | ✅ | ❌ | ❌ |
| [IDEFICS](model_doc/idefics) | ✅ | ❌ | ❌ |
| [ImageGPT](model_doc/imagegpt) | ✅ | ❌ | ❌ |
| [Informer](model_doc/informer) | ✅ | ❌ | ❌ |
| [InstructBLIP](model_doc/instructblip) | ✅ | ❌ | ❌ |
| [Jukebox](model_doc/jukebox) | ✅ | ❌ | ❌ |
| [LayoutLM](model_doc/layoutlm) | ✅ | ✅ | ❌ |
| [LayoutLMv2](model_doc/layoutlmv2) | ✅ | ❌ | ❌ |
| [LayoutLMv3](model_doc/layoutlmv3) | ✅ | ✅ | ❌ |
| [LayoutXLM](model_doc/layoutxlm) | ✅ | ❌ | ❌ |
| [LED](model_doc/led) | ✅ | ✅ | ❌ |
| [LeViT](model_doc/levit) | ✅ | ❌ | ❌ |
| [LiLT](model_doc/lilt) | ✅ | ❌ | ❌ |
| [LLaMA](model_doc/llama) | ✅ | ❌ | ❌ |
| [Llama2](model_doc/llama2) | ✅ | ❌ | ❌ |
| [Longformer](model_doc/longformer) | ✅ | ✅ | ❌ |
| [LongT5](model_doc/longt5) | ✅ | ❌ | ✅ |
| [LUKE](model_doc/luke) | ✅ | ❌ | ❌ |
| [LXMERT](model_doc/lxmert) | ✅ | ✅ | ❌ |
| [M-CTC-T](model_doc/mctct) | ✅ | ❌ | ❌ |
| [M2M100](model_doc/m2m_100) | ✅ | ❌ | ❌ |
| [Marian](model_doc/marian) | ✅ | ✅ | ✅ |
| [MarkupLM](model_doc/markuplm) | ✅ | ❌ | ❌ |
| [Mask2Former](model_doc/mask2former) | ✅ | ❌ | ❌ |
| [MaskFormer](model_doc/maskformer) | ✅ | ❌ | ❌ |
| [MatCha](model_doc/matcha) | ✅ | ❌ | ❌ |
| [mBART](model_doc/mbart) | ✅ | ✅ | ✅ |
| [mBART-50](model_doc/mbart50) | ✅ | ✅ | ✅ |
| [MEGA](model_doc/mega) | ✅ | ❌ | ❌ |
| [Megatron-BERT](model_doc/megatron-bert) | ✅ | ❌ | ❌ |
| [Megatron-GPT2](model_doc/megatron_gpt2) | ✅ | ✅ | ✅ |
| [MGP-STR](model_doc/mgp-str) | ✅ | ❌ | ❌ |
| [Mistral](model_doc/mistral) | ✅ | ❌ | ❌ |
| [mLUKE](model_doc/mluke) | ✅ | ❌ | ❌ |
| [MMS](model_doc/mms) | ✅ | ✅ | ✅ |
| [MobileBERT](model_doc/mobilebert) | ✅ | ✅ | ❌ |
| [MobileNetV1](model_doc/mobilenet_v1) | ✅ | ❌ | ❌ |
| [MobileNetV2](model_doc/mobilenet_v2) | ✅ | ❌ | ❌ |
| [MobileViT](model_doc/mobilevit) | ✅ | ✅ | ❌ |
| [MobileViTV2](model_doc/mobilevitv2) | ✅ | ❌ | ❌ |
| [MPNet](model_doc/mpnet) | ✅ | ✅ | ❌ |
| [MPT](model_doc/mpt) | ✅ | ❌ | ❌ |
| [MRA](model_doc/mra) | ✅ | ❌ | ❌ |
| [MT5](model_doc/mt5) | ✅ | ✅ | ✅ |
| [MusicGen](model_doc/musicgen) | ✅ | ❌ | ❌ |
| [MVP](model_doc/mvp) | ✅ | ❌ | ❌ |
| [NAT](model_doc/nat) | ✅ | ❌ | ❌ |
| [Nezha](model_doc/nezha) | ✅ | ❌ | ❌ |
| [NLLB](model_doc/nllb) | ✅ | ❌ | ❌ |
| [NLLB-MOE](model_doc/nllb-moe) | ✅ | ❌ | ❌ |
| [Nougat](model_doc/nougat) | ✅ | ✅ | ✅ |
| [Nyströmformer](model_doc/nystromformer) | ✅ | ❌ | ❌ |
| [OneFormer](model_doc/oneformer) | ✅ | ❌ | ❌ |
| [OpenAI GPT](model_doc/openai-gpt) | ✅ | ✅ | ❌ |
| [OpenAI GPT-2](model_doc/gpt2) | ✅ | ✅ | ✅ |
| [OpenLlama](model_doc/open-llama) | ✅ | ❌ | ❌ |
| [OPT](model_doc/opt) | ✅ | ✅ | ✅ |
| [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ |
| [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ |
| [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ |
| [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ |
| [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ |
| [Persimmon](model_doc/persimmon) | ✅ | ❌ | ❌ |
| [PhoBERT](model_doc/phobert) | ✅ | ✅ | ✅ |
| [Pix2Struct](model_doc/pix2struct) | ✅ | ❌ | ❌ |
| [PLBart](model_doc/plbart) | ✅ | ❌ | ❌ |
| [PoolFormer](model_doc/poolformer) | ✅ | ❌ | ❌ |
| [Pop2Piano](model_doc/pop2piano) | ✅ | ❌ | ❌ |
| [ProphetNet](model_doc/prophetnet) | ✅ | ❌ | ❌ |
| [PVT](model_doc/pvt) | ✅ | ❌ | ❌ |
| [QDQBert](model_doc/qdqbert) | ✅ | ❌ | ❌ |
| [RAG](model_doc/rag) | ✅ | ✅ | ❌ |
| [REALM](model_doc/realm) | ✅ | ❌ | ❌ |
| [Reformer](model_doc/reformer) | ✅ | ❌ | ❌ |
| [RegNet](model_doc/regnet) | ✅ | ✅ | ✅ |
| [RemBERT](model_doc/rembert) | ✅ | ✅ | ❌ |
| [ResNet](model_doc/resnet) | ✅ | ✅ | ✅ |
| [RetriBERT](model_doc/retribert) | ✅ | ❌ | ❌ |
| [RoBERTa](model_doc/roberta) | ✅ | ✅ | ✅ |
| [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | ✅ | ✅ | ✅ |
| [RoCBert](model_doc/roc_bert) | ✅ | ❌ | ❌ |
| [RoFormer](model_doc/roformer) | ✅ | ✅ | ✅ |
| [RWKV](model_doc/rwkv) | ✅ | ❌ | ❌ |
| [SAM](model_doc/sam) | ✅ | ✅ | ❌ |
| [SeamlessM4T](model_doc/seamless_m4t) | ✅ | ❌ | ❌ |
| [SegFormer](model_doc/segformer) | ✅ | ✅ | ❌ |
| [SEW](model_doc/sew) | ✅ | ❌ | ❌ |
| [SEW-D](model_doc/sew-d) | ✅ | ❌ | ❌ |
| [Speech Encoder decoder](model_doc/speech-encoder-decoder) | ✅ | ❌ | ✅ |
| [Speech2Text](model_doc/speech_to_text) | ✅ | ✅ | ❌ |
| [SpeechT5](model_doc/speecht5) | ✅ | ❌ | ❌ |
| [Splinter](model_doc/splinter) | ✅ | ❌ | ❌ |
| [SqueezeBERT](model_doc/squeezebert) | ✅ | ❌ | ❌ |
| [SwiftFormer](model_doc/swiftformer) | ✅ | ❌ | ❌ |
| [Swin Transformer](model_doc/swin) | ✅ | ✅ | ❌ |
| [Swin Transformer V2](model_doc/swinv2) | ✅ | ❌ | ❌ |
| [Swin2SR](model_doc/swin2sr) | ✅ | ❌ | ❌ |
| [SwitchTransformers](model_doc/switch_transformers) | ✅ | ❌ | ❌ |
| [T5](model_doc/t5) | ✅ | ✅ | ✅ |
| [T5v1.1](model_doc/t5v1.1) | ✅ | ✅ | ✅ |
| [Table Transformer](model_doc/table-transformer) | ✅ | ❌ | ❌ |
| [TAPAS](model_doc/tapas) | ✅ | ✅ | ❌ |
| [TAPEX](model_doc/tapex) | ✅ | ✅ | ✅ |
| [Time Series Transformer](model_doc/time_series_transformer) | ✅ | ❌ | ❌ |
| [TimeSformer](model_doc/timesformer) | ✅ | ❌ | ❌ |
| [Trajectory Transformer](model_doc/trajectory_transformer) | ✅ | ❌ | ❌ |
| [Transformer-XL](model_doc/transfo-xl) | ✅ | ✅ | ❌ |
| [TrOCR](model_doc/trocr) | ✅ | ❌ | ❌ |
| [TVLT](model_doc/tvlt) | ✅ | ❌ | ❌ |
| [UL2](model_doc/ul2) | ✅ | ✅ | ✅ |
| [UMT5](model_doc/umt5) | ✅ | ❌ | ❌ |
| [UniSpeech](model_doc/unispeech) | ✅ | ❌ | ❌ |
| [UniSpeechSat](model_doc/unispeech-sat) | ✅ | ❌ | ❌ |
| [UPerNet](model_doc/upernet) | ✅ | ❌ | ❌ |
| [VAN](model_doc/van) | ✅ | ❌ | ❌ |
| [VideoMAE](model_doc/videomae) | ✅ | ❌ | ❌ |
| [ViLT](model_doc/vilt) | ✅ | ❌ | ❌ |
| [Vision Encoder decoder](model_doc/vision-encoder-decoder) | ✅ | ✅ | ✅ |
| [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | ✅ | ✅ | ✅ |
| [VisualBERT](model_doc/visual_bert) | ✅ | ❌ | ❌ |
| [ViT](model_doc/vit) | ✅ | ✅ | ✅ |
| [ViT Hybrid](model_doc/vit_hybrid) | ✅ | ❌ | ❌ |
| [VitDet](model_doc/vitdet) | ✅ | ❌ | ❌ |
| [ViTMAE](model_doc/vit_mae) | ✅ | ✅ | ❌ |
| [ViTMatte](model_doc/vitmatte) | ✅ | ❌ | ❌ |
| [ViTMSN](model_doc/vit_msn) | ✅ | ❌ | ❌ |
| [VITS](model_doc/vits) | ✅ | ❌ | ❌ |
| [ViViT](model_doc/vivit) | ✅ | ❌ | ❌ |
| [Wav2Vec2](model_doc/wav2vec2) | ✅ | ✅ | ✅ |
| [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | ✅ | ❌ | ❌ |
| [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | ✅ | ✅ | ✅ |
| [WavLM](model_doc/wavlm) | ✅ | ❌ | ❌ |
| [Whisper](model_doc/whisper) | ✅ | ✅ | ✅ |
| [X-CLIP](model_doc/xclip) | ✅ | ❌ | ❌ |
| [X-MOD](model_doc/xmod) | ✅ | ❌ | ❌ |
| [XGLM](model_doc/xglm) | ✅ | ✅ | ✅ |
| [XLM](model_doc/xlm) | ✅ | ✅ | ❌ |
| [XLM-ProphetNet](model_doc/xlm-prophetnet) | ✅ | ❌ | ❌ |
| [XLM-RoBERTa](model_doc/xlm-roberta) | ✅ | ✅ | ✅ |
| [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | ✅ | ❌ | ❌ |
| [XLM-V](model_doc/xlm-v) | ✅ | ✅ | ✅ |
| [XLNet](model_doc/xlnet) | ✅ | ✅ | ❌ |
| [XLS-R](model_doc/xls_r) | ✅ | ✅ | ✅ |
| [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | ✅ | ✅ | ✅ |
| [YOLOS](model_doc/yolos) | ✅ | ❌ | ❌ |
| [YOSO](model_doc/yoso) | ✅ | ❌ | ❌ |

<!-- End table-->
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/tr/_toctree.yml
- sections:
  - local: index
    title: 🤗 Transformers
  title: Get started
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art machine learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: question answering over tables or images, optical character recognition, information extraction from scanned documents, and video classification.

🤗 Transformers supports interoperability between PyTorch, TensorFlow, and JAX. This makes it possible to use a different framework at each stage of a model's life, for example to train a model in three lines of code with one framework and load it for inference with another. Models can also be exported to a format like ONNX and TorchScript for deployment in production environments.

Join the growing community on the [Hub](https://huggingface.co/models), the [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

The documentation is organized into 5 parts:

- **GET STARTED** offers a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are an excellent starting point for beginners. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** for different tasks, for example fine-tuning a pretrained model for text classification, or how to create and share your own model.
- **CONCEPTUAL GUIDES** for more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:

  - **MAIN CLASSES** details the most important classes, such as configuration, model, tokenizer, and pipeline.
  - **MODELS** details the classes and functions specific to each model in the library.
  - **INTERNAL UTILITIES** details utility classes and functions used internally.

### Supported models

<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->

1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus.
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[FastSpeech2Conformer](model_doc/fastspeech2_conformer)** (from ESPnet) released with the paper [Recent Developments On Espnet Toolkit Boosted By Conformer](https://arxiv.org/abs/2010.13956) by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shinji Watanabe, Kun Wei, Wangyou Zhang, and Yuekai Zhang.
1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nyströmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[Swin2SR](model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
1. **[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace).
1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
1. **[Trajectory Transformer](model_doc/trajectory_transformer)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
**[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. 
**[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. 
**[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Frameworks compatibles

Le tableau ci-dessous indique la prise en charge actuelle de chacun de ces modèles dans la bibliothèque : qu'ils disposent ou non d'un tokenizer Python (dit « slow »), d'un tokenizer rapide (« fast ») s'appuyant sur la bibliothèque 🤗 Tokenizers, et qu'ils soient pris en charge en Jax (via Flax), en PyTorch et/ou en TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!-->

| Modèle | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
| BridgeTower | ❌ | ❌ | ✅ | ❌ | ❌ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETA | ❌ | ❌ | ✅ | ❌ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| EfficientFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FastSpeech2Conformer | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Graphormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OneFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ❌ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| SpeechT5 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| Whisper | ✅ | ❌ | ✅ | ✅ | ❌ |
| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |

<!-- End table-->
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Visite rapide [[open-in-colab]] Soyez opรฉrationnel avec ๐Ÿค— Transformers ! Que vous soyez un dรฉveloppeur ou un utilisateur lambda, cette visite rapide vous aidera ร  dรฉmarrer et vous montrera comment utiliser le [`pipeline`] pour l'infรฉrence, charger un modรจle prรฉ-entraรฎnรฉ et un prรฉprocesseur avec une [AutoClass](./model_doc/auto), et entraรฎner rapidement un modรจle avec PyTorch ou TensorFlow. Si vous รชtes un dรฉbutant, nous vous recommandons de consulter nos tutoriels ou notre [cours](https://huggingface.co/course/chapter1/1) suivant pour des explications plus approfondies des concepts prรฉsentรฉs ici. Avant de commencer, assurez-vous que vous avez installรฉ toutes les bibliothรจques nรฉcessaires : ```bash !pip install transformers datasets ``` Vous aurez aussi besoin d'installer votre bibliothรจque d'apprentissage profond favorite : <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## Pipeline <Youtube id="tiZFewofSLM"/> Le [`pipeline`] est le moyen le plus simple d'utiliser un modรจle prรฉ-entraรฎnรฉ pour l'infรฉrence. Vous pouvez utiliser le [`pipeline`] prรชt ร  l'emploi pour de nombreuses tรขches dans diffรฉrentes modalitรฉs. Consultez le tableau ci-dessous pour connaรฎtre les tรขches prises en charge : | **Tรขche** | **Description** | **Modalitรฉ** | **Identifiant du pipeline** | |------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------|-----------------------------------------------| | Classification de texte | Attribue une catรฉgorie ร  une sรฉquence de texte donnรฉe | Texte | pipeline(task="sentiment-analysis") | | Gรฉnรฉration de texte | Gรฉnรจre du texte ร  partir d'une consigne donnรฉe | Texte | pipeline(task="text-generation") | | Reconnaissance de token nommรฉ | Attribue une catรฉgorie ร  chaque token dans une sรฉquence (personnes, organisation, localisation, etc.) 
| Texte | pipeline(task="ner") | | Question rรฉponse | Extrait une rรฉponse du texte en fonction du contexte et d'une question | Texte | pipeline(task="question-answering") | | Prรฉdiction de token masquรฉ | Prรฉdit correctement le token masquรฉ dans une sรฉquence | Texte | pipeline(task="fill-mask") | | Gรฉnรฉration de rรฉsumรฉ | Gรฉnรจre un rรฉsumรฉ d'une sรฉquence de texte donnรฉe ou d'un document | Texte | pipeline(task="summarization") | | Traduction | Traduit du texte d'un langage ร  un autre | Texte | pipeline(task="translation") | | Classification d'image | Attribue une catรฉgorie ร  une image | Image | pipeline(task="image-classification") | | Segmentation d'image | Attribue une catรฉgorie ร  chaque pixel d'une image (supporte la segmentation sรฉmantique, panoptique et d'instance) | Image | pipeline(task="image-segmentation") | | Dรฉtection d'objets | Prรฉdit les dรฉlimitations et catรฉgories d'objets dans une image | Image | pipeline(task="object-detection") | | Classification d'audio | Attribue une catรฉgorie ร  un fichier audio | Audio | pipeline(task="audio-classification") | | Reconnaissance automatique de la parole | Extrait le discours d'un fichier audio en texte | Audio | pipeline(task="automatic-speech-recognition") | | Question rรฉponse visuels | Etant donnรฉes une image et une question, rรฉpond correctement ร  une question sur l'image | Modalitรฉs multiples | pipeline(task="vqa") | Commencez par crรฉer une instance de [`pipeline`] et spรฉcifiez la tรขche pour laquelle vous souhaitez l'utiliser. Vous pouvez utiliser le [`pipeline`] pour n'importe laquelle des tรขches mentionnรฉes dans le tableau prรฉcรฉdent. Pour obtenir une liste complรจte des tรขches prises en charge, consultez la documentation de l'[API pipeline](./main_classes/pipelines). Dans ce guide, nous utiliserons le [`pipeline`] pour l'analyse des sentiments ร  titre d'exemple : ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Le [`pipeline`] tรฉlรฉcharge et stocke en cache un [modรจle prรฉ-entraรฎnรฉ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) et un tokenizer par dรฉfaut pour l'analyse des sentiments. Vous pouvez maintenant utiliser le `classifier` sur le texte de votre choix : ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` Si vous voulez classifier plus qu'un texte, donnez une liste de textes au [`pipeline`] pour obtenir une liste de dictionnaires en retour : ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, avec le score de: {round(result['score'], 4)}") label: POSITIVE, avec le score de: 0.9998 label: NEGATIVE, avec le score de: 0.5309 ``` Le [`pipeline`] peut aussi itรฉrer sur un jeu de donnรฉes entier pour n'importe quelle tรขche. Prenons par exemple la reconnaissance automatique de la parole : ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Chargez un jeu de donnรฉes audio (voir le ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) pour plus de dรฉtails) sur lequel vous souhaitez itรฉrer. 
Pour cet exemple, nous chargeons le jeu de données [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) :

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # doctest: +IGNORE_RESULT
```

Vous devez vous assurer que le taux d'échantillonnage de l'ensemble de données correspond au taux d'échantillonnage sur lequel [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) a été entraîné :

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```

Les fichiers audio sont automatiquement chargés et rééchantillonnés lors de l'appel de la colonne `"audio"`. Extrayez les tableaux de formes d'ondes brutes des quatre premiers échantillons et passez-les comme une liste au pipeline :

```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT']
```

Pour les ensembles de données plus importants où les entrées sont volumineuses (comme dans les domaines de la parole ou de la vision), utilisez plutôt un générateur au lieu d'une liste, afin d'éviter de charger toutes les entrées en mémoire (voir l'esquisse ci-dessous). Pour plus d'informations, consultez la documentation de l'[API pipeline](./main_classes/pipelines).
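À titre d'illustration (esquisse indicative, reprenant les variables `dataset` et `speech_recognizer` définies ci-dessus), un générateur peut être passé directement au [`pipeline`], qui renvoie alors un itérateur sur les résultats :

```py
>>> def audio_generator():
...     # Produit les entrées une par une au lieu de construire la liste complète en mémoire
...     for sample in dataset:
...         yield sample["audio"]

>>> for prediction in speech_recognizer(audio_generator()):  # doctest: +SKIP
...     print(prediction["text"])
```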
### Utiliser un autre modèle et tokenizer dans le pipeline

Le [`pipeline`] peut être utilisé avec n'importe quel modèle du [Hub](https://huggingface.co/models), ce qui permet d'adapter facilement le [`pipeline`] à d'autres cas d'utilisation. Par exemple, si vous souhaitez un modèle capable de traiter du texte français, utilisez les filtres du Hub pour trouver un modèle approprié. Le premier résultat renvoie un [modèle BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue finetuné pour l'analyse des sentiments que vous pouvez utiliser pour le texte français :

```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```

<frameworkcontent>
<pt>
Utilisez [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modèle pré-entraîné et le tokenizer adapté (plus de détails sur une `AutoClass` dans la section suivante) :

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Utilisez [`TFAutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modèle pré-entraîné et le tokenizer adapté (plus de détails sur une `TFAutoClass` dans la section suivante) :

```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>

Spécifiez le modèle et le tokenizer dans le [`pipeline`], et utilisez le `classifier` sur le texte en français :

```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```

Si vous ne parvenez pas à trouver un modèle adapté à votre cas d'utilisation, vous devrez finetuner un modèle pré-entraîné sur vos données. Jetez un coup d'œil à notre [tutoriel sur le finetuning](./training) pour apprendre comment faire. Enfin, après avoir finetuné votre modèle pré-entraîné, pensez à [partager](./model_sharing) le modèle avec la communauté sur le Hub afin de démocratiser l'apprentissage automatique pour tous ! 🤗

## AutoClass

<Youtube id="AhChOFRegn4"/>

Les classes [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] fonctionnent ensemble pour créer un [`pipeline`] comme celui que vous avez utilisé ci-dessus. Une [AutoClass](./model_doc/auto) est un raccourci qui récupère automatiquement l'architecture d'un modèle pré-entraîné à partir de son nom ou de son emplacement. Il vous suffit de sélectionner l'`AutoClass` appropriée à votre tâche et la classe de prétraitement qui lui est associée.

Reprenons l'exemple de la section précédente et voyons comment vous pouvez utiliser l'`AutoClass` pour reproduire les résultats du [`pipeline`].

### AutoTokenizer

Un tokenizer est chargé de prétraiter le texte pour en faire un tableau de chiffres qui servira d'entrée à un modèle. De nombreuses règles régissent le processus de tokenisation, notamment la manière de diviser un mot et le niveau auquel les mots doivent être divisés (pour en savoir plus sur la tokenisation, consultez le [résumé](./tokenizer_summary)). La chose la plus importante à retenir est que vous devez instancier un tokenizer avec le même nom de modèle pour vous assurer que vous utilisez les mêmes règles de tokenisation que celles avec lesquelles un modèle a été pré-entraîné.
Chargez un tokenizer avec [`AutoTokenizer`] :

```py
>>> from transformers import AutoTokenizer

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Passez votre texte au tokenizer :

```py
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

Le tokenizer retourne un dictionnaire contenant :

* [input_ids](./glossary#input-ids) : la représentation numérique des tokens.
* [attention_mask](./glossary#attention-mask) : indique quels tokens doivent être pris en compte par le modèle (les tokens de remplissage, eux, sont masqués).

Un tokenizer peut également accepter une liste de textes, et remplir et tronquer le texte pour retourner un échantillon de longueur uniforme :

<frameworkcontent>
<pt>

```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )
```
</pt>
<tf>

```py
>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )
```
</tf>
</frameworkcontent>
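À titre de vérification (esquisse ; la sortie exacte dépend du tokenizer et n'est donnée ici qu'à titre indicatif), vous pouvez reconvertir les `input_ids` en texte avec la méthode réelle [`~PreTrainedTokenizer.decode`] :

```py
>>> tokenizer.decode(encoding["input_ids"])  # doctest: +SKIP
'[CLS] we are very happy to show you the [UNK] transformers library. [SEP]'
```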
<Tip>

Consultez le tutoriel [prétraitement](./preprocessing) pour plus de détails sur la tokenisation, et sur la manière d'utiliser un [`AutoImageProcessor`], un [`AutoFeatureExtractor`] et un [`AutoProcessor`] pour prétraiter les images, l'audio et les contenus multimodaux.

</Tip>

### AutoModel

<frameworkcontent>
<pt>
🤗 Transformers fournit un moyen simple et unifié de charger des instances pré-entraînées. Cela signifie que vous pouvez charger un [`AutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule différence est de sélectionner l'[`AutoModel`] approprié pour la tâche. Pour une classification de texte (ou de séquence de textes), vous devez charger [`AutoModelForSequenceClassification`] :

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

Voir le [résumé de la tâche](./task_summary) pour vérifier si elle est prise en charge par une classe [`AutoModel`].

</Tip>

Maintenant, passez votre échantillon d'entrées prétraitées directement au modèle. Il vous suffit de décompresser le dictionnaire en ajoutant `**` :

```py
>>> pt_outputs = pt_model(**pt_batch)
```

Le modèle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour récupérer les probabilités :

```py
>>> from torch import nn

>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers fournit un moyen simple et unifié de charger des instances pré-entraînées. Cela signifie que vous pouvez charger un [`TFAutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule différence est de sélectionner le [`TFAutoModel`] approprié pour la tâche. Pour une classification de texte (ou de séquence de textes), vous devez charger [`TFAutoModelForSequenceClassification`] :

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

Voir le [résumé de la tâche](./task_summary) pour vérifier si elle est prise en charge par une classe [`AutoModel`].

</Tip>

Passez maintenant votre échantillon d'entrées prétraitées directement au modèle, en lui transmettant le dictionnaire de tenseurs tel quel :

```py
>>> tf_outputs = tf_model(tf_batch)
```

Le modèle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour récupérer les probabilités :

```py
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions  # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>

<Tip>

Tous les modèles 🤗 Transformers (PyTorch ou TensorFlow) produisent les tensors *avant* la fonction d'activation finale (comme softmax) car la fonction d'activation finale est souvent fusionnée avec le calcul de la perte. Les structures produites par le modèle sont des classes de données spéciales, de sorte que leurs attributs sont autocomplétés dans un environnement de développement. Les structures produites par le modèle se comportent comme un tuple ou un dictionnaire (vous pouvez les indexer avec un entier, une tranche ou une chaîne), auquel cas les attributs qui sont None sont ignorés.

</Tip>

### Sauvegarder un modèle

<frameworkcontent>
<pt>
Une fois que votre modèle est finetuné, vous pouvez le sauvegarder avec son tokenizer en utilisant [`PreTrainedModel.save_pretrained`] :

```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```

Lorsque vous voulez réutiliser le modèle, rechargez-le avec [`PreTrainedModel.from_pretrained`] :

```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Une fois que votre modèle est finetuné, vous pouvez le sauvegarder avec son tokenizer en utilisant [`TFPreTrainedModel.save_pretrained`] :

```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```

Lorsque vous voulez réutiliser le modèle, rechargez-le avec [`TFPreTrainedModel.from_pretrained`] :

```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>

Une fonctionnalité particulièrement cool de 🤗 Transformers est la possibilité d'enregistrer un modèle et de le recharger en tant que modèle PyTorch ou TensorFlow.
Le paramรจtre `from_pt` ou `from_tf` permet de convertir le modรจle d'un framework ร  l'autre : <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Constructions de modรจles personnalisรฉs Vous pouvez modifier la configuration du modรจle pour changer la faรงon dont un modรจle est construit. La configuration spรฉcifie les attributs d'un modรจle, tels que le nombre de couches ou de tรชtes d'attention. Vous partez de zรฉro lorsque vous initialisez un modรจle ร  partir d'une configuration personnalisรฉe. Les attributs du modรจle sont initialisรฉs de maniรจre alรฉatoire et vous devrez entraรฎner le modรจle avant de pouvoir l'utiliser pour obtenir des rรฉsultats significatifs. Commencez par importer [`AutoConfig`], puis chargez le modรจle prรฉ-entraรฎnรฉ que vous voulez modifier. Dans [`AutoConfig.from_pretrained`], vous pouvez spรฉcifier l'attribut que vous souhaitez modifier, tel que le nombre de tรชtes d'attention : ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`AutoModel.from_config`] : ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`TFAutoModel.from_config`] : ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Consultez le guide [Crรฉer une architecture personnalisรฉe](./create_a_model) pour plus d'informations sur la crรฉation de configurations personnalisรฉes. ## Trainer - une boucle d'entraรฎnement optimisรฉe par PyTorch Tous les modรจles sont des [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) standard, vous pouvez donc les utiliser dans n'importe quelle boucle d'entraรฎnement typique. Bien que vous puissiez รฉcrire votre propre boucle d'entraรฎnement, ๐Ÿค— Transformers fournit une classe [`Trainer`] pour PyTorch, qui contient la boucle d'entraรฎnement de base et ajoute des fonctionnalitรฉs supplรฉmentaires comme l'entraรฎnement distribuรฉ, la prรฉcision mixte, et plus encore. En fonction de votre tรขche, vous passerez gรฉnรฉralement les paramรจtres suivants ร  [`Trainer`] : 1. Un [`PreTrainedModel`] ou un [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille de l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage : ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... 
per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. Chargez un jeu de donnรฉes : ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Puis appliquez-la ร  l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร  partir de votre jeu de donnรฉes : ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] : ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement : ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร  sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`]. </Tip> Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร  l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies. L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร  la place. ## Entraรฎnement avec TensorFlow Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐Ÿค— Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras. 1. Vous commencez avec un modรจle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. 
Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 3. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. Appliquez le tokenizer ร  l'ensemble du jeu de donnรฉes avec [`~datasets.Dataset.map`] et passez ensuite le jeu de donnรฉes et le tokenizer ร  [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez รฉgalement modifier la taille de l'รฉchantillon et mรฉlanger le jeu de donnรฉes ici si vous le souhaitez : ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset, batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. Une fois que vous รชtes prรชt, appelez les fonctions `compile` et `fit` pour commencer l'entraรฎnement : ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) >>> model.fit(dataset) # doctest: +SKIP ``` ## Et aprรจs ? Maintenant que vous avez terminรฉ la visite rapide de ๐Ÿค— Transformers, consultez nos guides et apprenez ร  faire des choses plus spรฉcifiques comme crรฉer un modรจle personnalisรฉ, finetuner un modรจle pour une tรขche, et comment entraรฎner un modรจle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de ๐Ÿค— Transformers, jetez un ล“il ร  nos guides conceptuels !
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-->

# Installation

Installez 🤗 Transformers pour n'importe quelle librairie d'apprentissage profond avec laquelle vous avez l'habitude de travailler, configurez votre cache et configurez 🤗 Transformers pour un usage hors ligne (facultatif).

🤗 Transformers est testé avec Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ et Flax. Consultez les instructions d'installation ci-dessous pour la librairie d'apprentissage profond que vous utilisez :

* Instructions d'installation pour [PyTorch](https://pytorch.org/get-started/locally/).
* Instructions d'installation pour [TensorFlow 2.0](https://www.tensorflow.org/install/pip).
* Instructions d'installation pour [Flax](https://flax.readthedocs.io/en/latest/).

## Installation avec pip

Vous devriez installer 🤗 Transformers dans un [environnement virtuel](https://docs.python.org/3/library/venv.html). Si vous n'êtes pas à l'aise avec les environnements virtuels, consultez ce [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Utiliser un environnement virtuel permet de facilement gérer différents projets et d'éviter des erreurs de compatibilité entre les différentes dépendances.

Commencez par créer un environnement virtuel dans l'espace de travail de votre projet :

```bash
python -m venv .env
```

Activez l'environnement virtuel. Sur Linux ou MacOs :

```bash
source .env/bin/activate
```

Activez l'environnement virtuel sur Windows :

```bash
.env/Scripts/activate
```

Maintenant, 🤗 Transformers peut être installé avec la commande suivante :

```bash
pip install transformers
```

Pour une utilisation avec CPU seulement, 🤗 Transformers et la librairie d'apprentissage profond de votre choix peuvent être installés en une seule ligne. Par exemple, installez 🤗 Transformers et PyTorch avec la commande suivante :

```bash
pip install 'transformers[torch]'
```

🤗 Transformers et TensorFlow 2.0 :

```bash
pip install 'transformers[tf-cpu]'
```

<Tip warning={true}>

Pour les architectures Mac M1 / ARM, vous devez installer les outils suivants avant d'installer TensorFlow 2.0 :

```
brew install cmake
brew install pkg-config
```

</Tip>

🤗 Transformers et Flax :

```bash
pip install 'transformers[flax]'
```

Vérifiez que 🤗 Transformers a bien été installé avec la commande suivante. La commande va télécharger un modèle pré-entraîné :

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

Le label et le score sont ensuite affichés :

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
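Vous pouvez aussi, à titre indicatif, afficher la version installée pour savoir exactement quelle version de la librairie est active :

```bash
python -c "import transformers; print(transformers.__version__)"
```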
## Installation depuis le code source

Installez 🤗 Transformers depuis le code source avec la commande suivante :

```bash
pip install git+https://github.com/huggingface/transformers
```

Cette commande installe la version depuis la branche `main` au lieu de la dernière version stable. La version de la branche `main` est utile pour avoir les derniers développements, par exemple si un bug a été résolu depuis la dernière version stable mais n'a pas encore été publié officiellement. Cependant, cela veut aussi dire que la version de la branche `main` n'est pas toujours stable. Nous nous efforçons de maintenir la version de la branche `main` opérationnelle, et la plupart des problèmes sont généralement résolus en l'espace de quelques heures ou d'un jour. Si vous rencontrez un problème, n'hésitez pas à créer une [Issue](https://github.com/huggingface/transformers/issues) pour que l'on puisse trouver une solution au plus vite !

Vérifiez que 🤗 Transformers a bien été installé avec la commande suivante :

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```

## Installation modifiable

Vous aurez besoin d'une installation modifiable si vous souhaitez :

* Utiliser la version de la branche `main` du code source.
* Contribuer à 🤗 Transformers et tester vos modifications du code source.

Clonez le projet et installez 🤗 Transformers avec les commandes suivantes :

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```

Ces commandes créent des liens entre le dossier où le projet a été cloné et les chemins de vos librairies Python. Python regardera maintenant dans le dossier que vous avez cloné en plus des dossiers où sont installées vos autres librairies. Par exemple, si vos librairies Python sont installées dans `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cherchera aussi dans le dossier où vous avez cloné : `~/transformers/`.

<Tip warning={true}>

Vous devez garder le dossier `transformers` si vous voulez continuer d'utiliser la librairie.

</Tip>

Maintenant, vous pouvez facilement mettre à jour votre clone avec la dernière version de 🤗 Transformers en utilisant la commande suivante :

```bash
cd ~/transformers/
git pull
```

Votre environnement Python utilisera la version de la branche `main` lors de la prochaine exécution.

## Installation avec conda

Installation via le canal `conda-forge` de conda :

```bash
conda install conda-forge::transformers
```

## Configuration du cache

Les modèles pré-entraînés sont téléchargés et mis en cache localement dans le dossier suivant : `~/.cache/huggingface/hub`. C'est le dossier par défaut donné par la variable d'environnement `TRANSFORMERS_CACHE`. Sur Windows, le dossier par défaut est `C:\Users\nom_utilisateur\.cache\huggingface\hub`. Vous pouvez modifier les variables d'environnement indiquées ci-dessous - par ordre de priorité - pour spécifier un dossier de cache différent :

1. Variable d'environnement (par défaut) : `HUGGINGFACE_HUB_CACHE` ou `TRANSFORMERS_CACHE`.
2. Variable d'environnement : `HF_HOME`.
3. Variable d'environnement : `XDG_CACHE_HOME` + `/huggingface`.

<Tip>

🤗 Transformers utilisera les variables d'environnement `PYTORCH_TRANSFORMERS_CACHE` ou `PYTORCH_PRETRAINED_BERT_CACHE` si vous utilisez une version précédente de cette librairie et avez défini ces variables d'environnement, sauf si vous spécifiez la variable d'environnement `TRANSFORMERS_CACHE`.

</Tip>

## Mode hors ligne

🤗 Transformers peut fonctionner dans un environnement cloisonné ou hors ligne en n'utilisant que des fichiers locaux. Définissez la variable d'environnement `TRANSFORMERS_OFFLINE=1` pour activer ce mode.
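À titre d'illustration (esquisse, non issue du guide d'origine), la variable peut aussi être définie depuis Python, avant tout chargement de modèle :

```py
import os

# Doit être défini avant que 🤗 Transformers ne tente un accès réseau
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel

# Ne réussit que si le modèle est déjà présent dans le cache local
model = AutoModel.from_pretrained("bert-base-uncased")
```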
<Tip>

Ajoutez [🤗 Datasets](https://huggingface.co/docs/datasets/) à votre processus d'entraînement hors ligne en définissant la variable d'environnement `HF_DATASETS_OFFLINE=1`.

</Tip>

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

Le script devrait maintenant s'exécuter sans rester en attente ou attendre une expiration, car il n'essaiera pas de télécharger des modèles sur le Hub.

Vous pouvez aussi éviter de télécharger un modèle à chaque appel de la fonction [`~PreTrainedModel.from_pretrained`] en utilisant le paramètre `local_files_only`. Seuls les fichiers locaux sont chargés lorsque ce paramètre est activé (c.-à-d. `local_files_only=True`) :

```py
from transformers import T5Model

model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True)
```

### Récupérer des modèles et des tokenizers pour une utilisation hors ligne

Une autre option pour utiliser 🤗 Transformers hors ligne est de télécharger les fichiers à l'avance, puis d'utiliser les chemins locaux lorsque vous en avez besoin en mode hors ligne. Il existe trois façons de faire cela :

* Téléchargez un fichier via l'interface utilisateur sur le [Model Hub](https://huggingface.co/models) en cliquant sur l'icône ↓.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* Utilisez les fonctions [`PreTrainedModel.from_pretrained`] et [`PreTrainedModel.save_pretrained`] :

    1. Téléchargez vos fichiers à l'avance avec [`PreTrainedModel.from_pretrained`] :

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. Sauvegardez les fichiers dans un dossier de votre choix avec [`PreTrainedModel.save_pretrained`] :

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. Maintenant, lorsque vous êtes hors ligne, rechargez vos fichiers avec [`PreTrainedModel.from_pretrained`] depuis le dossier où vous les avez sauvegardés :

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
    ```

* Téléchargez des fichiers de manière automatique avec la librairie [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) :

    1. Installez la librairie `huggingface_hub` dans votre environnement virtuel :

    ```bash
    python -m pip install huggingface_hub
    ```

    2. Utilisez la fonction [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) pour télécharger un fichier vers un chemin de votre choix.
    Par exemple, la commande suivante télécharge le fichier `config.json` du modèle [T0](https://huggingface.co/bigscience/T0_3B) vers le chemin de votre choix :

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
    ```

Une fois que votre fichier est téléchargé et mis en cache localement, spécifiez son chemin local pour le charger et l'utiliser :

```py
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```

<Tip>

Consultez la section [How to download files from the Hub (Comment télécharger des fichiers depuis le Hub)](https://huggingface.co/docs/hub/how-to-downstream) pour plus de détails sur le téléchargement de fichiers stockés sur le Hub.

</Tip>
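Pour récupérer en une fois tous les fichiers d'un dépôt plutôt qu'un fichier à la fois, la librairie `huggingface_hub` fournit aussi la fonction `snapshot_download` (fonction réelle de la librairie ; les chemins utilisés ci-dessous sont indicatifs) :

```py
>>> from huggingface_hub import snapshot_download

>>> # Télécharge l'ensemble du dépôt dans le cache local et renvoie le chemin du dossier
>>> local_dir = snapshot_download(repo_id="bigscience/T0_3B", cache_dir="./your/path/bigscience_t0")
```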
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Traduction en cours.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Installation de Transformers ! pip install transformers datasets # Pour installer ร  partir du code source au lieu de la derniรจre version, commentez la commande ci-dessus et dรฉcommentez la suivante. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Chargement d'instances pré-entraînées avec une AutoClass

Avec autant d'architectures Transformer différentes, il peut être difficile d'en créer une pour votre ensemble de poids (aussi appelés "weights" ou "checkpoint" en anglais). Dans l'idée de créer une librairie facile, simple et flexible à utiliser, 🤗 Transformers fournit une `AutoClass` qui infère et charge automatiquement l'architecture correcte à partir d'un ensemble de poids donné. La fonction `from_pretrained()` vous permet de charger rapidement un modèle pré-entraîné pour n'importe quelle architecture afin que vous n'ayez pas à consacrer du temps et des ressources à l'entraînement d'un modèle à partir de zéro. Produire un tel code indépendant d'un ensemble de poids signifie que si votre code fonctionne pour un ensemble de poids, il fonctionnera avec un autre ensemble - tant qu'il a été entraîné pour une tâche similaire - même si l'architecture est différente.

<Tip>

Rappel : l'architecture fait référence au squelette du modèle et l'ensemble de poids contient les poids pour une architecture donnée. Par exemple, [BERT](https://huggingface.co/bert-base-uncased) est une architecture, tandis que `bert-base-uncased` est un ensemble de poids. Le terme modèle est général et peut signifier soit architecture soit ensemble de poids.

</Tip>

Dans ce tutoriel, vous apprendrez à :

* Charger un tokenizer pré-entraîné.
* Charger un processeur d'image pré-entraîné.
* Charger un extracteur de caractéristiques pré-entraîné.
* Charger un processeur pré-entraîné.
* Charger un modèle pré-entraîné.

## AutoTokenizer

Quasiment toutes les tâches de traitement du langage (NLP) commencent avec un tokenizer. Un tokenizer convertit votre texte initial dans un format qui peut être traité par le modèle.

Chargez un tokenizer avec [`AutoTokenizer.from_pretrained`] :

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

Puis, transformez votre texte initial comme montré ci-dessous :

```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

## AutoImageProcessor

Pour les tâches de vision, un processeur d'image traite l'image pour la formater correctement. Chargez un processeur d'image avec [`AutoImageProcessor.from_pretrained`] :
```py
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

## AutoFeatureExtractor

For audio tasks, a feature extractor processes the audio signal into the correct input format. Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
...     "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

## AutoProcessor

Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them. Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```

## AutoModel

<frameworkcontent>
<pt>
Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

<Tip warning={true}>

For PyTorch models, the `from_pretrained()` method uses `torch.load()` which internally uses `pickle` and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are [scanned for malware](https://huggingface.co/docs/hub/security-malware) at each commit. See the [Hub documentation](https://huggingface.co/docs/hub/security) for best practices like [signed commit verification](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) with GPG.

TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the `from_tf` and `from_flax` kwargs for the `from_pretrained` method to circumvent this issue.

</Tip>
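To see how the tokenizer and model classes fit together, here is a minimal sketch of a single forward pass reusing the `distilbert-base-uncased` checkpoint loaded above. Note that this checkpoint was not fine-tuned for classification, so its classification head is randomly initialized and the scores are not meaningful - the sketch only illustrates the plumbing:

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

>>> # tokenize a sentence and run it through the model
>>> inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> outputs.logits.shape
torch.Size([1, 2])
```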
Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of tokenizers and models respectively. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use a tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</pt>
<tf>
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of tokenizers and models respectively. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use a tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_toctree.yml
- sections:
  - local: index
    title: ๐Ÿค— Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: in_translation
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: in_translation
    title: Preprocess data
  - local: in_translation
    title: Fine-tune a pretrained model
  - local: in_translation
    title: Train with a script
  - local: in_translation
    title: Distributed training with ๐Ÿค— Accelerate
  - local: in_translation
    title: Load and train adapters with ๐Ÿค— PEFT
  - local: in_translation
    title: Share a model
  - local: in_translation
    title: Agents
  - local: in_translation
    title: Generation with LLMs
  title: Tutorials
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/testing.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Testing

Let's take a look at how ๐Ÿค— Transformers models are tested and how you can write new tests and improve the existing ones.

There are 2 test suites in the repository:

1. `tests` -- tests for the general API
2. `examples` -- tests for various applications that aren't part of the API

## How transformers are tested

1. Once a PR is submitted it gets tested with 9 CircleCI jobs. Every new commit to that PR gets retested. These jobs are defined in [this config file](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml), so that if needed you can reproduce the same environment on your machine.

   These CI jobs don't run `@slow` tests.

2. There are 3 jobs run by [GitHub Actions](https://github.com/huggingface/transformers/actions):

   - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): checks whether torch hub integration works.

   - [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): runs fast tests on GPU, only on commits on `main`. It only runs if a commit on `main` has updated the code in one of the following folders: `src`, `tests`, `.github` (to prevent running on added model cards, notebooks, etc.)

   - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): runs normal and slow tests on GPU in `tests` and `examples`:

```bash
RUN_SLOW=1 pytest tests/
RUN_SLOW=1 pytest examples/
```

   The results can be observed [here](https://github.com/huggingface/transformers/actions).

## Running tests

### Choosing which tests to run

This document goes into many details of how tests can be run. If after reading everything you need even more details, you will find them [here](https://docs.pytest.org/en/latest/usage.html).

Here are some of the most useful ways of running tests.

Run all:

```console
pytest
```

or:

```bash
make test
```

Note that the latter is defined as:

```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

which tells pytest to:

- run as many test processes as there are CPU cores (which could be too many if you don't have a ton of RAM!)
- ensure that all tests from the same file will be run by the same test process
- do not capture output
- run in verbose mode
ใ‘ๅฎŸ่กŒใ™ใ‚‹ใ‚ˆใ†ใซๆŒ‡็คบใ—ใพใ™ใ€‚ใŸใ ใ—ใ€RAMใŒๅๅˆ†ใงใชใ„ๅ ดๅˆใฏๆณจๆ„ใŒๅฟ…่ฆใงใ™ใ€‚ - ๅŒใ˜ใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ใฎใ™ในใฆใฎใƒ†ใ‚นใƒˆใฏใ€ๅŒใ˜ใƒ†ใ‚นใƒˆใƒ—ใƒญใ‚ปใ‚นใงๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ - ๅ‡บๅŠ›ใฎใ‚ญใƒฃใƒ—ใƒใƒฃใ‚’่กŒใ„ใพใ›ใ‚“ใ€‚ - ๅ†—้•ทใƒขใƒผใƒ‰ใงๅฎŸ่กŒใ—ใพใ™ใ€‚ ### Getting the list of all tests ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎใ™ในใฆใฎใƒ†ใ‚นใƒˆ๏ผš ```bash pytest --collect-only -q ``` ๆŒ‡ๅฎšใ•ใ‚ŒใŸใƒ†ใ‚นใƒˆ ใƒ•ใ‚กใ‚คใƒซใฎใ™ในใฆใฎใƒ†ใ‚นใƒˆ: ```bash pytest tests/test_optimization.py --collect-only -q ``` ### Run a specific test module ๅ€‹ๅˆฅใฎใƒ†ใ‚นใƒˆ ใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏ: ```bash pytest tests/utils/test_logging.py ``` ### Run specific tests ใปใจใ‚“ใฉใฎใƒ†ใ‚นใƒˆใงunittestใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€็‰นๅฎšใฎใ‚ตใƒ–ใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ใใ‚Œใ‚‰ใฎใƒ†ใ‚นใƒˆใ‚’ๅซใ‚€unittestใ‚ฏใƒฉใ‚นใฎๅๅ‰ใ‚’็Ÿฅใฃใฆใ„ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€ใใ‚Œใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผš ```bash pytest tests/test_optimization.py::OptimizationTest::test_adam_w ``` ใƒ†ใ‚นใƒˆใฎๅฎŸ่กŒๆ–นๆณ•: ใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซ: `tests/test_optimization.py` ใ‚ฏใƒฉใ‚นๅ: `OptimizationTest` ใƒ†ใ‚นใƒˆ้–ขๆ•ฐใฎๅๅ‰: `test_adam_w` ใƒ•ใ‚กใ‚คใƒซใซ่ค‡ๆ•ฐใฎใ‚ฏใƒฉใ‚นใŒๅซใพใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใฏใ€็‰นๅฎšใฎใ‚ฏใƒฉใ‚นใฎใƒ†ใ‚นใƒˆใฎใฟใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใ‚’้ธๆŠžใงใใพใ™ใ€‚ไพ‹ใˆใฐ๏ผš ```bash pytest tests/test_optimization.py::OptimizationTest ``` ใƒ†ใ‚นใƒˆใ‚ฏใƒฉใ‚นๅ†…ใฎใ™ในใฆใฎใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ๅ‰่ฟฐใฎ้€šใ‚Šใ€`OptimizationTest` ใ‚ฏใƒฉใ‚นใซๅซใพใ‚Œใ‚‹ใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใงใใพใ™๏ผš ```bash pytest tests/test_optimization.py::OptimizationTest --collect-only -q ``` ใ‚ญใƒผใƒฏใƒผใƒ‰ๅผใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใงใใพใ™ใ€‚ ๅๅ‰ใซ `adam` ใŒๅซใพใ‚Œใ‚‹ใƒ†ใ‚นใƒˆใฎใฟใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏ๏ผš ```bash pytest -k adam tests/test_optimization.py ``` `and`ใŠใ‚ˆใณ`or`ใฏใ€ใ™ในใฆใฎใ‚ญใƒผใƒฏใƒผใƒ‰ใŒไธ€่‡ดใ™ใ‚‹ใ‹ใ€ใ„ใšใ‚Œใ‹ใ‚’็คบใ™ใŸใ‚ใซไฝฟ็”จใงใใพใ™ใ€‚`not`ใฏๅฆๅฎšใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใงใใพใ™ใ€‚ `adam`ใจใ„ใ†ๅๅ‰ใ‚’ๅซใ‚€ใƒ†ใ‚นใƒˆใ‚’้™คใ„ใฆใ™ในใฆใฎใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏ๏ผš ```bash pytest -k "not adam" tests/test_optimization.py ``` ไปฅไธ‹ใฏใ€ๆไพ›ใ•ใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใฎๆ—ฅๆœฌ่ชž่จณใงใ™ใ€‚ ```bash pytest -k "ada and not adam" tests/test_optimization.py ``` ใŸใจใˆใฐใ€`test_adafactor`ใจ`test_adam_w`ใฎไธกๆ–นใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ไฝฟ็”จใงใใพใ™: ```bash pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py ``` ๆณจๆ„: ใ“ใ“ใงใฏใ€`or` ใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ใ‚ญใƒผใƒฏใƒผใƒ‰ใฎใ„ใšใ‚Œใ‹ไธ€ใคใŒไธ€่‡ดใ™ใ‚Œใฐใ€ไธกๆ–นใ‚’ๅซใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ ไธกๆ–นใฎใƒ‘ใ‚ฟใƒผใƒณใ‚’ๅซใ‚€ใƒ†ใ‚นใƒˆใฎใฟใ‚’ๅซใ‚ใŸใ„ๅ ดๅˆใฏใ€`and` ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ ```bash pytest -k "test and ada" tests/test_optimization.py ``` ### Run `accelerate` tests ๆ™‚ใ€…ใ€ใƒขใƒ‡ใƒซใซๅฏพใ—ใฆ `accelerate` ใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€`OPT` ๅฎŸ่กŒใซๅฏพใ—ใฆใ“ใ‚Œใ‚‰ใฎใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ—ใŸใ„ๅ ดๅˆใ€ใ‚ณใƒžใƒณใƒ‰ใซ `-m accelerate_tests` ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงๆธˆใฟใพใ™๏ผš ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ``` ### Run documentation tests ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎไพ‹ใŒๆญฃใ—ใ„ใ‹ใฉใ†ใ‹ใ‚’ใƒ†ใ‚นใƒˆใ™ใ‚‹ใซใฏใ€`doctests` ใŒๅˆๆ ผใ—ใฆใ„ใ‚‹ใ‹ใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ไพ‹ใจใ—ใฆใ€[`WhisperModel.forward` 
ใฎใƒ‰ใƒƒใ‚ฏใ‚นใƒˆใƒชใƒณใ‚ฐ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035)ใ‚’ไฝฟ็”จใ—ใพใ—ใ‚‡ใ†ใ€‚ ```python r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" ``` ๆŒ‡ๅฎšใ—ใŸใƒ•ใ‚กใ‚คใƒซๅ†…ใฎใ™ในใฆใฎใƒ‰ใƒƒใ‚ฏใ‚นใƒˆใƒชใƒณใ‚ฐไพ‹ใ‚’่‡ชๅ‹•็š„ใซใƒ†ใ‚นใƒˆใ™ใ‚‹ใŸใ‚ใซใ€ไปฅไธ‹ใฎ่กŒใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš ```bash pytest --doctest-modules <path_to_file_or_dir> ``` ใƒ•ใ‚กใ‚คใƒซใซใƒžใƒผใ‚ฏใƒ€ใ‚ฆใƒณๆ‹กๅผตๅญใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€`--doctest-glob="*.md"`ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ### Run only modified tests [pytest-picked](https://github.com/anapaulagomes/pytest-picked)ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ๆœชใ‚นใƒ†ใƒผใ‚ธใƒณใ‚ฐใฎใƒ•ใ‚กใ‚คใƒซใพใŸใฏ็พๅœจใฎใƒ–ใƒฉใƒณใƒ๏ผˆGitใซๅพ“ใฃใฆ๏ผ‰ใซ้–ข้€ฃใ™ใ‚‹ใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ€ๅค‰ๆ›ดๅ†…ๅฎนใซ้–ข้€ฃใ™ใ‚‹ใƒ†ใ‚นใƒˆใฎใฟๅฎŸ่กŒใ•ใ‚Œใ‚‹ใŸใ‚ใ€ๅค‰ๆ›ดใŒไฝ•ใ‚‚ๅฃŠใ‚Œใฆใ„ใชใ„ใ“ใจใ‚’่ฟ…้€Ÿใซ็ขบ่ชใ™ใ‚‹็ด ๆ™ดใ‚‰ใ—ใ„ๆ–นๆณ•ใงใ™ใ€‚ๅค‰ๆ›ดใ•ใ‚Œใฆใ„ใชใ„ใƒ•ใ‚กใ‚คใƒซใซ้–ข้€ฃใ™ใ‚‹ใƒ†ใ‚นใƒˆใฏๅฎŸ่กŒใ•ใ‚Œใพใ›ใ‚“ใ€‚ ```bash pip install pytest-picked ``` ```bash pytest --picked ``` ใ™ในใฆใฎใƒ†ใ‚นใƒˆใฏใ€ๅค‰ๆ›ดใ•ใ‚ŒใŸใŒใพใ ใ‚ณใƒŸใƒƒใƒˆใ•ใ‚Œใฆใ„ใชใ„ใƒ•ใ‚กใ‚คใƒซใจใƒ•ใ‚ฉใƒซใƒ€ใ‹ใ‚‰ๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ ### Automatically rerun failed tests on source modification [pytest-xdist](https://github.com/pytest-dev/pytest-xdist)ใฏใ€้žๅธธใซไพฟๅˆฉใชๆฉŸ่ƒฝใ‚’ๆไพ›ใ—ใฆใŠใ‚Šใ€ใ™ในใฆใฎๅคฑๆ•—ใ—ใŸใƒ†ใ‚นใƒˆใ‚’ๆคœๅ‡บใ—ใ€ใƒ•ใ‚กใ‚คใƒซใ‚’ไฟฎๆญฃใ™ใ‚‹้–“ใซใใ‚Œใ‚‰ใฎๅคฑๆ•—ใ—ใŸใƒ†ใ‚นใƒˆใ‚’้€ฃ็ถšใ—ใฆๅ†ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใใฎใŸใ‚ใ€ไฟฎๆญฃใ‚’่กŒใฃใŸๅพŒใซpytestใ‚’ๅ†่ตทๅ‹•ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ™ในใฆใฎใƒ†ใ‚นใƒˆใŒๅˆๆ ผใ™ใ‚‹ใพใง็นฐใ‚Š่ฟ”ใ•ใ‚Œใ€ใใฎๅพŒๅ†ๅบฆใƒ•ใƒซใƒฉใƒณใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ ```bash pip install pytest-xdist ``` ใƒขใƒผใƒ‰ใซๅ…ฅใ‚‹ใซใฏ๏ผš `pytest -f`ใพใŸใฏ`pytest --looponfail` ใƒ•ใ‚กใ‚คใƒซใฎๅค‰ๆ›ดใฏใ€`looponfailroots`ใƒซใƒผใƒˆใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใจใใฎๅ†…ๅฎนๅ…จไฝ“๏ผˆๅ†ๅธฐ็š„ใซ๏ผ‰ใ‚’่ฆ‹ใฆๆคœๅ‡บใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๅ€คใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใŒๆฉŸ่ƒฝใ—ใชใ„ๅ ดๅˆใ€`setup.cfg`ใง่จญๅฎšใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ๅค‰ๆ›ดใ—ใฆใƒ—ใƒญใ‚ธใ‚งใ‚ฏใƒˆๅ†…ใงๅค‰ๆ›ดใงใใพใ™ใ€‚ ```ini [tool:pytest] looponfailroots = transformers tests ``` ใพใŸใฏ `pytest.ini`/`tox.ini` ใƒ•ใ‚กใ‚คใƒซ: ```ini [pytest] looponfailroots = transformers tests ``` ใƒ•ใ‚กใ‚คใƒซใฎๅค‰ๆ›ดใ‚’ๆŽขใ™ใ“ใจใฏใ€iniใƒ•ใ‚กใ‚คใƒซใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’ๅŸบๆบ–ใซใ—ใฆๆŒ‡ๅฎšใ•ใ‚ŒใŸใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชๅ†…ใงใฎใฟ่กŒใ‚ใ‚Œใพใ™ใ€‚ [pytest-watch](https://github.com/joeyespo/pytest-watch) ใฏใ€ใ“ใฎๆฉŸ่ƒฝใฎไปฃๆ›ฟๅฎŸ่ฃ…ใงใ™ใ€‚ ### Skip a test module ็‰นๅฎšใฎใƒ†ใ‚นใƒˆใƒขใ‚ธใƒฅใƒผใƒซใ‚’้™คๅค–ใ—ใฆใ™ในใฆใฎใƒ†ใ‚นใƒˆใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๅฎŸ่กŒใ—ใŸใ„ๅ 
ดๅˆใ€ๅฎŸ่กŒใ™ใ‚‹ใƒ†ใ‚นใƒˆใฎๆ˜Ž็คบ็š„ใชใƒชใ‚นใƒˆใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ไพ‹ใˆใฐใ€`test_modeling_*.py` ใƒ†ใ‚นใƒˆใ‚’้™คๅค–ใ—ใฆใ™ในใฆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏๆฌกใฎใ‚ˆใ†ใซใ—ใพใ™๏ผš ```bash pytest *ls -1 tests/*py | grep -v test_modeling* ``` ### Clearing state CIใƒ“ใƒซใƒ‰ใŠใ‚ˆใณ้€Ÿๅบฆใซๅฏพใ™ใ‚‹้š”้›ขใŒ้‡่ฆใชๅ ดๅˆ๏ผˆใ‚ญใƒฃใƒƒใ‚ทใƒฅใซๅฏพใ—ใฆ๏ผ‰ใ€ใ‚ญใƒฃใƒƒใ‚ทใƒฅใ‚’ใ‚ฏใƒชใ‚ขใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```bash pytest --cache-clear tests ``` ### Running tests in parallel ๅ‰่ฟฐใฎใ‚ˆใ†ใซใ€`make test` ใฏ `pytest-xdist` ใƒ—ใƒฉใ‚ฐใ‚คใƒณใ‚’ไป‹ใ—ใฆใƒ†ใ‚นใƒˆใ‚’ไธฆๅˆ—ๅฎŸ่กŒใ—ใพใ™๏ผˆ`-n X` ๅผ•ๆ•ฐใ€ไพ‹: `-n 2` ใง2ใคใฎไธฆๅˆ—ใ‚ธใƒงใƒ–ใ‚’ๅฎŸ่กŒ๏ผ‰ใ€‚ `pytest-xdist` ใฎ `--dist=` ใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒ†ใ‚นใƒˆใŒใฉใฎใ‚ˆใ†ใซใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ•ใ‚Œใ‚‹ใ‹ใ‚’ๅˆถๅพกใงใใพใ™ใ€‚`--dist=loadfile` ใฏๅŒใ˜ใƒ•ใ‚กใ‚คใƒซใซใ‚ใ‚‹ใƒ†ใ‚นใƒˆใ‚’ๅŒใ˜ใƒ—ใƒญใ‚ปใ‚นใซ้…็ฝฎใ—ใพใ™ใ€‚ ใƒ†ใ‚นใƒˆใฎๅฎŸ่กŒ้ †ๅบใŒ็•ฐใชใ‚Šไบˆๆธฌไธๅฏ่ƒฝใงใ‚ใ‚‹ใŸใ‚ใ€`pytest-xdist` ใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใจๅคฑๆ•—ใŒ็™บ็”Ÿใ™ใ‚‹ๅ ดๅˆ๏ผˆใคใพใ‚Šใ€ใ„ใใคใ‹ใฎๆœชๆคœๅ‡บใฎ้€ฃๅ‹•ใƒ†ใ‚นใƒˆใŒใ‚ใ‚‹ๅ ดๅˆ๏ผ‰ใ€[pytest-replay](https://github.com/ESSS/pytest-replay) ใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚นใƒˆใ‚’ๅŒใ˜้ †ๅบใงๅ†็”Ÿใ—ใ€ใใฎๅพŒใ€ๅคฑๆ•—ใ™ใ‚‹ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๆœ€ๅฐ้™ใซใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ ### Test order and repetition ๆฝœๅœจ็š„ใช็›ธไบ’ไพๅญ˜ๆ€งใ‚„็Šถๆ…‹ใซ้–ข้€ฃใ™ใ‚‹ใƒใ‚ฐ๏ผˆใƒ†ใ‚ฃใ‚ขใƒ€ใ‚ฆใƒณ๏ผ‰ใ‚’ๆคœๅ‡บใ™ใ‚‹ใŸใ‚ใซใ€ใƒ†ใ‚นใƒˆใ‚’่ค‡ๆ•ฐๅ›žใ€้€ฃ็ถšใ—ใฆใ€ใƒฉใƒณใƒ€ใƒ ใซใ€ใพใŸใฏใ‚ปใƒƒใƒˆใง็นฐใ‚Š่ฟ”ใ™ใ“ใจใฏๆœ‰็”จใงใ™ใ€‚ใใ—ใฆใ€ๅ˜็ด”ใช่ค‡ๆ•ฐๅ›žใฎ็นฐใ‚Š่ฟ”ใ—ใฏใ€DLใฎใƒฉใƒณใƒ€ใƒ ๆ€งใซใ‚ˆใฃใฆๆ˜Žใ‚‰ใ‹ใซใชใ‚‹ใ„ใใคใ‹ใฎๅ•้กŒใ‚’ๆคœๅ‡บใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ #### Repeat tests - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder ``` ใใ—ใฆใ€ใ™ในใฆใฎใƒ†ใ‚นใƒˆใ‚’่ค‡ๆ•ฐๅ›žๅฎŸ่กŒใ—ใพใ™ (ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏ 50 ๅ›ž)ใ€‚ ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py ``` <Tip> ใ“ใฎใƒ—ใƒฉใ‚ฐใ‚คใƒณใฏใ€`pytest-xdist` ใฎ `-n` ใƒ•ใƒฉใ‚ฐใงใฏๅ‹•ไฝœใ—ใพใ›ใ‚“ใ€‚ </Tip> <Tip> ๅˆฅใฎใƒ—ใƒฉใ‚ฐใ‚คใƒณ `pytest-repeat` ใ‚‚ใ‚ใ‚Šใพใ™ใŒใ€ใ“ใ‚Œใฏ `unittest` ใงใฏๅ‹•ไฝœใ—ใพใ›ใ‚“ใ€‚ </Tip> #### Run tests in a random order ```bash pip install pytest-random-order ``` ้‡่ฆ: `pytest-random-order` ใŒๅญ˜ๅœจใ™ใ‚‹ใจใ€ใƒ†ใ‚นใƒˆใฏ่‡ชๅ‹•็š„ใซใƒฉใƒณใƒ€ใƒ ๅŒ–ใ•ใ‚Œใพใ™ใ€‚่จญๅฎšใฎๅค‰ๆ›ดใ‚„ๅค‰ๆ›ดใฏๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใ‚ชใƒ—ใ‚ทใƒงใƒณใฏๅฟ…้ ˆใงใ™ใ€‚ ๅ‰ใซ่ชฌๆ˜Žใ—ใŸใ‚ˆใ†ใซใ€ใ“ใ‚Œใซใ‚ˆใ‚Šใ€็ตๅˆใ•ใ‚ŒใŸใƒ†ใ‚นใƒˆ (1 ใคใฎใƒ†ใ‚นใƒˆใฎ็Šถๆ…‹ใŒๅˆฅใฎใƒ†ใ‚นใƒˆใฎ็Šถๆ…‹ใซๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹) ใฎๆคœๅ‡บใŒๅฏ่ƒฝใซใชใ‚Šใพใ™ใ€‚ใ„ใค `pytest-random-order` ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใจใ€ใใฎใ‚ปใƒƒใ‚ทใƒงใƒณใซไฝฟ็”จใ•ใ‚ŒใŸใƒฉใƒณใƒ€ใƒ  ใ‚ทใƒผใƒ‰ใŒๅ‡บๅŠ›ใ•ใ‚Œใพใ™ใ€‚ไพ‹: ```bash pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` ใใฎใŸใ‚ใ€ๆŒ‡ๅฎšใ•ใ‚ŒใŸ็‰นๅฎšใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใŒๅคฑๆ•—ใ—ใŸๅ ดๅˆใ€ใใฎๆญฃ็ขบใชใ‚ทใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใงใใ‚Œใ‚’ๅ†็พใงใใพใ™ใ€‚ไพ‹: ```bash pytest --random-order-seed=573663 [...] 
#### Run tests in a random order

```bash
pip install pytest-random-order
```

Important: the presence of `pytest-random-order` will automatically randomize tests; no configuration change or command line options are required.

As explained earlier, this allows detection of coupled tests - where one test's state affects the state of another. When `pytest-random-order` is installed it will print the random seed it used for that session, e.g:

```bash
pytest tests
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
```

So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.:

```bash
pytest --random-order-seed=573663
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
```

It will only reproduce the exact order if you use the exact same list of tests (or no list at all). Once you start manually narrowing down the list you can no longer rely on the seed, but have to list the tests manually in the exact order they failed and tell pytest not to randomize them, using `--random-order-bucket=none`, e.g.:

```bash
pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py
```

To disable the shuffling for all tests:

```bash
pytest --random-order-bucket=none
```

By default `--random-order-bucket=module` is implied, which will shuffle the files at the module level. It can also shuffle at the `class`, `package`, `global` and `none` levels. For the complete details please see its [documentation](https://github.com/jbasko/pytest-random-order).

Another randomization alternative is [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly). This module has very similar functionality/interface, but it doesn't have the bucket modes available in `pytest-random-order`. It has the same problem of imposing itself once installed.

### Look and feel variations

#### pytest-sugar

[pytest-sugar](https://github.com/Frozenball/pytest-sugar) is a plugin that improves the look-n-feel, adds a progress bar, and shows failing tests and asserts instantly. It gets activated automatically upon installation.

```bash
pip install pytest-sugar
```

To run tests without it, run:

```bash
pytest -p no:sugar
```

or uninstall it.

#### Report each sub-test name and its progress

For a single or a group of tests via `pytest` (after `pip install pytest-pspec`):

```bash
pytest --pspec tests/test_optimization.py
```

#### Instantly shows failed tests

[pytest-instafail](https://github.com/pytest-dev/pytest-instafail) shows failures and errors instantly instead of waiting until the end of the test session.

```bash
pip install pytest-instafail
```

```bash
pytest --instafail
```

### To GPU or not to GPU

On a GPU-enabled setup, to test in CPU-only mode add `CUDA_VISIBLE_DEVICES=""`:

```bash
CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py
```

or if you have multiple GPUs, you can specify which one is to be used by `pytest`. For example, to use only the second GPU if you have GPUs `0` and `1`, you can run:

```bash
CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
```

This is handy when you want to run different tasks on different GPUs.
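Inside test code, `transformers.testing_utils` also exposes a `torch_device` string that resolves to the device available in the current environment, which keeps the test itself device-agnostic. A minimal sketch, assuming PyTorch is installed:

```python
import torch

from transformers.testing_utils import torch_device

# `torch_device` resolves to "cuda", "cpu", etc. depending on what the
# current environment makes available, so the same test runs everywhere
x = torch.ones(2, 2, device=torch_device)
assert x.device.type == torch_device.split(":")[0]
```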
ไธ€้ƒจใฎใƒ†ใ‚นใƒˆใฏCPUใฎใฟใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ไป–ใฎใƒ†ใ‚นใƒˆใฏCPUใ€GPUใ€ใพใŸใฏTPUใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ใพใŸๅˆฅใฎใƒ†ใ‚นใƒˆใฏ่ค‡ๆ•ฐใฎGPUใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆฌกใฎใ‚นใ‚ญใƒƒใƒ—ใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใƒผใฏใ€ใƒ†ใ‚นใƒˆใฎCPU/GPU/TPUใซ้–ขใ™ใ‚‹่ฆไปถใ‚’่จญๅฎšใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™๏ผš - `require_torch` - ใ“ใฎใƒ†ใ‚นใƒˆใฏtorchใฎไธ‹ใงใฎใฟๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ - `require_torch_gpu` - `require_torch` ใซๅŠ ใˆใฆใ€ๅฐ‘ใชใใจใ‚‚1ใคใฎGPUใŒๅฟ…่ฆใงใ™ใ€‚ - `require_torch_multi_gpu` - `require_torch` ใซๅŠ ใˆใฆใ€ๅฐ‘ใชใใจใ‚‚2ใคใฎGPUใŒๅฟ…่ฆใงใ™ใ€‚ - `require_torch_non_multi_gpu` - `require_torch` ใซๅŠ ใˆใฆใ€0ใพใŸใฏ1ใคใฎGPUใŒๅฟ…่ฆใงใ™ใ€‚ - `require_torch_up_to_2_gpus` - `require_torch` ใซๅŠ ใˆใฆใ€0ใ€1ใ€ใพใŸใฏ2ใคใฎGPUใŒๅฟ…่ฆใงใ™ใ€‚ - `require_torch_tpu` - `require_torch` ใซๅŠ ใˆใฆใ€ๅฐ‘ใชใใจใ‚‚1ใคใฎTPUใŒๅฟ…่ฆใงใ™ใ€‚ ไปฅไธ‹ใฎ่กจใซGPUใฎ่ฆไปถใ‚’็คบใ—ใพใ™๏ผš | n gpus | decorator | |--------+--------------------------------| | `>= 0` | `@require_torch` | | `>= 1` | `@require_torch_gpu` | | `>= 2` | `@require_torch_multi_gpu` | | `< 2` | `@require_torch_non_multi_gpu` | | `< 3` | `@require_torch_up_to_2_gpus` | ใŸใจใˆใฐใ€ไฝฟ็”จๅฏ่ƒฝใช GPU ใŒ 2 ใคไปฅไธŠใ‚ใ‚Šใ€pytorch ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใซใฎใฟๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใƒ†ใ‚นใƒˆใ‚’ๆฌกใซ็คบใ—ใพใ™ใ€‚ ```python no-style @require_torch_multi_gpu def test_example_with_multi_gpu(): ``` ใƒ†ใ‚นใƒˆใซ `tensorflow` ใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€`require_tf` ใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ไพ‹ใˆใฐ๏ผš ```python no-style @require_tf def test_tf_thing_with_tensorflow(): ``` ใ“ใ‚Œใ‚‰ใฎใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใฏ็ฉใฟ้‡ใญใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ใƒ†ใ‚นใƒˆใŒ้…ใใ€pytorch ใงๅฐ‘ใชใใจใ‚‚ 1 ใคใฎ GPU ใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€ๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ่จญๅฎšๆ–นๆณ•: ```python no-style @require_torch_gpu @slow def test_example_slow_on_gpu(): ``` `@parametrized` ใฎใ‚ˆใ†ใชไธ€้ƒจใฎใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใฏใƒ†ใ‚นใƒˆๅใ‚’ๆ›ธใๆ›ใˆใ‚‹ใŸใ‚ใ€`@require_*` ใ‚นใ‚ญใƒƒใƒ— ใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใ‚’ใƒชใ‚นใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆœ€ๅพŒใซใใ‚Œใ‚‰ใŒๆญฃใ—ใๅ‹•ไฝœใ™ใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ๆญฃใ—ใ„ไฝฟ็”จไพ‹ใฏๆฌกใฎใจใŠใ‚Šใงใ™ ```python no-style @parameterized.expand(...) 
Some decorators like `@parametrized` rewrite test names, therefore `@require_*` skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage:

```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```

This order problem doesn't exist with `@pytest.mark.parametrize` - you can put it first or last and it will still work. But it only works with non-unittest tests.

Inside tests:

- How many GPUs are available:

```python
from transformers.testing_utils import get_gpu_count

n_gpu = get_gpu_count()  # works with torch and tf
```

### Testing with a specific PyTorch backend or device

To run the test suite on a specific torch device, add `TRANSFORMERS_TEST_DEVICE="$device"` where `$device` is the target backend. For example, to test on CPU only:

```bash
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
```

This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs, or testing in CPU-only mode.

Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`:

```bash
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
```

### Distributed training

`pytest` can't deal with distributed training directly. If this is attempted - the sub-processes don't do the right thing and end up thinking they are `pytest` and start running the test suite in loops. It works, however, if one spawns a normal process that then spawns off multiple workers and manages the IO pipes.

Here are some tests that use it:

- [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py)
- [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py)

To jump right into the execution point, search for the `execute_subprocess_async` call in those tests.

You will need at least 2 GPUs to see these tests in action:

```bash
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
```
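The pattern those tests follow looks roughly like the sketch below: a regular test method builds a `torch.distributed` launch command and lets `execute_subprocess_async` manage the subprocess. The script name `my_distributed_script.py` and the test class name are hypothetical placeholders:

```python
import sys

from transformers.testing_utils import TestCasePlus, execute_subprocess_async, get_gpu_count


class MyDistributedTest(TestCasePlus):
    def test_distributed_run(self):
        # build a `python -m torch.distributed.run ...` command line that
        # launches one worker per available GPU
        distributed_args = (
            f"-m torch.distributed.run --nproc_per_node={get_gpu_count()} "
            f"{self.test_file_dir}/my_distributed_script.py"
        ).split()
        cmd = [sys.executable] + distributed_args
        # spawn the process, stream its output, and raise on a non-zero exit code
        execute_subprocess_async(cmd, env=self.get_env())
```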
### Output capture

During test execution any output sent to `stdout` and `stderr` is captured. If a test or a setup method fails, its corresponding captured output will usually be shown along with the failure traceback.

To disable output capturing and to get the `stdout` and `stderr` normally, use `-s` or `--capture=no`:

```bash
pytest -s tests/utils/test_logging.py
```

To send test results to JUnit format output:

```bash
py.test tests --junitxml=result.xml
```

### Color control

To have no color (e.g., yellow on a white background is not readable):

```bash
pytest --color=no tests/utils/test_logging.py
```

### Sending test report to online pastebin service

Creating a URL for each test failure:

```bash
pytest --pastebin=failed tests/utils/test_logging.py
```

This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual, or add for example `-x` if you only want to send one particular failure.

Creating a URL for a whole test session log:

```bash
pytest --pastebin=all tests/utils/test_logging.py
```

## Writing tests

๐Ÿค— transformers tests are based on `unittest`, but run by `pytest`, so most of the time features from both systems can be used.

You can read [here](https://docs.pytest.org/en/stable/unittest.html) which features are supported, but the important thing to remember is that most `pytest` fixtures don't work. Neither does parametrization, but we use the module `parameterized` that works in a similar way.

### Parametrization

Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within the test, but then there is no way of running that test for just one set of arguments.

```python
# test_this1.py
import unittest
from parameterized import parameterized


class TestMathUnitTest(unittest.TestCase):
    @parameterized.expand(
        [
            ("negative", -1.5, -2.0),
            ("integer", 1, 1.0),
            ("large fraction", 1.6, 1),
        ]
    )
    def test_floor(self, name, input, expected):
        assert_equal(math.floor(input), expected)
```

Now, by default this test will be run 3 times, each time with the last 3 arguments of `test_floor` being assigned the corresponding arguments in the parameter list.

And you could run just the `negative` and `integer` sets of params with:

```bash
pytest -k "negative or integer" tests/test_mytest.py
```

or all but the `negative` sub-tests, with:

```bash
pytest -k "not negative" tests/test_mytest.py
```

Besides using the `-k` filter that was just mentioned, you can find out the exact name of each sub-test and run any or all of them using their exact names.

```bash
pytest test_this1.py --collect-only -q
```

and it will list:

```bash
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
```

So now you can run just 2 specific sub-tests:

```bash
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
```

The module [parameterized](https://pypi.org/project/parameterized/), which is already in the developer dependencies of `transformers`, works for both `unittest` and `pytest` tests.

If, however, the test is not a `unittest`, you may use `pytest.mark.parametrize` (or you may see it being used in some existing tests, mostly under `examples`).
ใฎไธ‹ใงไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใฎใ‚’่ฆ‹ใ‚‹ใ“ใจใŒใงใใพใ™๏ผ‰ใ€‚ ๆฌกใซใ€ๅŒใ˜ไพ‹ใ‚’็คบใ—ใพใ™ใŒใ€ไปŠๅบฆใฏ `pytest` ใฎ `parametrize` ใƒžใƒผใ‚ซใƒผใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™๏ผš ```python # test_this2.py import pytest @pytest.mark.parametrize( "name, input, expected", [ ("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1), ], ) def test_floor(name, input, expected): assert_equal(math.floor(input), expected) ``` `parameterized` ใจๅŒๆง˜ใซใ€`pytest.mark.parametrize` ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€`-k` ใƒ•ใ‚ฃใƒซใ‚ฟใŒๅฝน็ซ‹ใŸใชใ„ๅ ดๅˆใงใ‚‚ใ€ใ‚ตใƒ–ใƒ†ใ‚นใƒˆใฎๅฎŸ่กŒใ‚’็ดฐใ‹ใๅˆถๅพกใงใใพใ™ใ€‚ใŸใ ใ—ใ€ใ“ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟๅŒ–้–ขๆ•ฐใฏใ‚ตใƒ–ใƒ†ใ‚นใƒˆใฎๅๅ‰ใ‚’ใ‚ใšใ‹ใซ็•ฐใชใ‚‹ใ‚‚ใฎใซใ—ใพใ™ใ€‚ไปฅไธ‹ใซใใฎไพ‹ใ‚’็คบใ—ใพใ™๏ผš ```bash pytest test_this2.py --collect-only -q ``` ใ™ใ‚‹ใจๆฌกใฎใ‚‚ใฎใŒใƒชใ‚นใƒˆใ•ใ‚Œใพใ™: ```bash test_this2.py::test_floor[integer-1-1.0] test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[large fraction-1.6-1] ``` ใ“ใ‚Œใงใ€็‰นๅฎšใฎใƒ†ใ‚นใƒˆใฎใฟใ‚’ๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ ```bash pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0] ``` ๅ‰ใฎไพ‹ใจๅŒๆง˜ใซใ€‚ ### Files and directories ใƒ†ใ‚นใƒˆใฎไธญใงใ€็พๅœจใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ใฎ็›ธๅฏพไฝ็ฝฎใ‚’็Ÿฅใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใŒใ‚ˆใใ‚ใ‚Šใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใ“ใ‚Œใฏ็ฐกๅ˜ใชใ“ใจใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใชใœใชใ‚‰ใ€ใƒ†ใ‚นใƒˆใฏ่ค‡ๆ•ฐใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‹ใ‚‰ๅ‘ผใณๅ‡บใ•ใ‚Œใ‚‹ใ‹ใ€็•ฐใชใ‚‹ๆทฑใ•ใฎใ‚ตใƒ–ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซๅญ˜ๅœจใ™ใ‚‹ใ“ใจใŒใ‚ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚`transformers.test_utils.TestCasePlus` ใจใ„ใ†ใƒ˜ใƒซใƒ‘ใƒผใ‚ฏใƒฉใ‚นใฏใ€ใ™ในใฆใฎๅŸบๆœฌใƒ‘ใ‚นใ‚’ๆ•ด็†ใ—ใ€็ฐกๅ˜ใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจใงใ€ใ“ใฎๅ•้กŒใ‚’่งฃๆฑบใ—ใพใ™ใ€‚ - `pathlib` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆ๏ผˆใ™ในใฆๅฎŒๅ…จใซ่งฃๆฑบใ•ใ‚ŒใŸใ‚‚ใฎ๏ผ‰๏ผš - `test_file_path` - ็พๅœจใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใฎใƒ‘ใ‚นใ€ใคใพใ‚Š `__file__` - `test_file_dir` - ็พๅœจใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใ‚’ๅซใ‚€ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช - `tests_dir` - `tests` ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช - `examples_dir` - `examples` ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช - `repo_root_dir` - ใƒชใƒใ‚ธใƒˆใƒชใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช - `src_dir` - `transformers` ใ‚ตใƒ–ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒๅญ˜ๅœจใ™ใ‚‹ๅ ดๆ‰€ - ใƒ‘ใ‚นใฎๆ–‡ๅญ—ๅˆ—่กจ็พโ€•โ€•ไธŠ่จ˜ใจๅŒใ˜ใงใ™ใŒใ€ใ“ใ‚Œใ‚‰ใฏ `pathlib` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใงใฏใชใๆ–‡ๅญ—ๅˆ—ใจใ—ใฆใƒ‘ใ‚นใ‚’่ฟ”ใ—ใพใ™๏ผš - `test_file_path_str` - `test_file_dir_str` - `tests_dir_str` - `examples_dir_str` - `repo_root_dir_str` - `src_dir_str` ใ“ใ‚Œใ‚‰ใ‚’ไฝฟ็”จใ—ๅง‹ใ‚ใ‚‹ใซใฏใ€ใƒ†ใ‚นใƒˆใŒ `transformers.test_utils.TestCasePlus` ใฎใ‚ตใƒ–ใ‚ฏใƒฉใ‚นใซๅญ˜ๅœจใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ไพ‹๏ผš ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_local_locations(self): data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro" ``` ใ‚‚ใ—ใ€`pathlib` ใ‚’ไป‹ใ—ใฆใƒ‘ใ‚นใ‚’ๆ“ไฝœใ™ใ‚‹ๅฟ…่ฆใŒใชใ„ๅ ดๅˆใ€ใพใŸใฏๅ˜ใซๆ–‡ๅญ—ๅˆ—ใจใ—ใฆใƒ‘ใ‚นใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€`pathlib` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซ `str()` ใ‚’ๅ‘ผใณๅ‡บใ™ใ‹ใ€`_str` ใง็ต‚ใ‚ใ‚‹ใ‚ขใ‚ฏใ‚ปใ‚ตใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ไพ‹๏ผš ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_stringified_locations(self): examples_dir = self.examples_dir_str ``` ### Temporary files and directories 
ไธ€ๆ„ใฎไธ€ๆ™‚ใƒ•ใ‚กใ‚คใƒซใจใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใฎไฝฟ็”จใฏใ€ไธฆๅˆ—ใƒ†ใ‚นใƒˆใฎๅฎŸ่กŒใซใฏๆฌ ใ‹ใ›ใพใ›ใ‚“ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ†ใ‚นใƒˆใŒใŠไบ’ใ„ใฎใƒ‡ใƒผใ‚ฟใ‚’ไธŠๆ›ธใใ—ใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใพใŸใ€ใ“ใ‚Œใ‚‰ใ‚’ไฝœๆˆใ—ใŸๅ„ใƒ†ใ‚นใƒˆใฎ็ต‚ไบ†ๆ™‚ใซไธ€ๆ™‚ใƒ•ใ‚กใ‚คใƒซใจใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒๅ‰Š้™คใ•ใ‚Œใ‚‹ใ“ใจใ‚’ๆœ›ใฟใพใ™ใ€‚ใใฎใŸใ‚ใ€ใ“ใ‚Œใ‚‰ใฎใƒ‹ใƒผใ‚บใ‚’ๆบ€ใŸใ™ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใงใ‚ใ‚‹ `tempfile` ใฎใ‚ˆใ†ใชใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใฎไฝฟ็”จใฏ้‡่ฆใงใ™ใ€‚ ใ—ใ‹ใ—ใ€ใƒ†ใ‚นใƒˆใฎใƒ‡ใƒใƒƒใ‚ฐๆ™‚ใซใฏใ€ไธ€ๆ™‚ใƒ•ใ‚กใ‚คใƒซใ‚„ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซไฝ•ใŒๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’็ขบ่ชใงใใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ใƒ†ใ‚นใƒˆใ‚’ๅ†ๅฎŸ่กŒใ™ใ‚‹ใŸใณใซใƒฉใƒณใƒ€ใƒ ใซๅค‰ๆ›ดใ•ใ‚Œใชใ„ใใฎๆญฃ็ขบใชใƒ‘ใ‚นใ‚’็Ÿฅใ‚ŠใŸใ„ใจๆ€ใ„ใพใ™ใ€‚ `transformers.test_utils.TestCasePlus` ใจใ„ใ†ใƒ˜ใƒซใƒ‘ใƒผใ‚ฏใƒฉใ‚นใฏใ€ใ“ใฎใ‚ˆใ†ใช็›ฎ็š„ใซๆœ€้ฉใงใ™ใ€‚ใ“ใ‚Œใฏ `unittest.TestCase` ใฎใ‚ตใƒ–ใ‚ฏใƒฉใ‚นใงใ‚ใ‚‹ใŸใ‚ใ€ใƒ†ใ‚นใƒˆใƒขใ‚ธใƒฅใƒผใƒซใง็ฐกๅ˜ใซ็ถ™ๆ‰ฟใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ไปฅไธ‹ใฏใใฎไฝฟ็”จไพ‹ใงใ™๏ผš ```python from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` ใ“ใฎใ‚ณใƒผใƒ‰ใฏใƒฆใƒ‹ใƒผใ‚ฏใชไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’ไฝœๆˆใ—ใ€`tmp_dir` ใ‚’ใใฎๅ ดๆ‰€ใซ่จญๅฎšใ—ใพใ™ใ€‚ - ใƒฆใƒ‹ใƒผใ‚ฏใชไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` `tmp_dir` ใซใฏใ€ไฝœๆˆใ•ใ‚ŒใŸไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใธใฎใƒ‘ใ‚นใŒๅซใพใ‚Œใพใ™ใ€‚ๆœŸ้–“็ต‚ไบ†ๅพŒใฏ่‡ชๅ‹•็š„ใซๅ‰Š้™คใ•ใ‚Œใพใ™ ใƒ†ใ‚นใƒˆใ€‚ - ไปปๆ„ใฎไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’ไฝœๆˆใ—ใ€ใƒ†ใ‚นใƒˆใฎ้–‹ๅง‹ๅ‰ใซใใ‚ŒใŒ็ฉบใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€ใƒ†ใ‚นใƒˆๅพŒใซใฏ็ฉบใซใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir("./xxx") ``` ใ“ใ‚Œใฏใ€็‰นๅฎšใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’็›ฃ่ฆ–ใ—ใ€ๅ‰ใฎใƒ†ใ‚นใƒˆใŒใใ“ใซใƒ‡ใƒผใ‚ฟใ‚’ๆฎ‹ใ•ใชใ„ใ“ใจใ‚’็ขบ่ชใ—ใŸใ„ๅ ดๅˆใซใ€ใƒ‡ใƒใƒƒใ‚ฐใซๅฝน็ซ‹ใกใพใ™ใ€‚ - `before` ใจ `after` ๅผ•ๆ•ฐใ‚’็›ดๆŽฅใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ™ใ‚‹ใ“ใจใงใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ‹•ไฝœใ‚’ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใงใใพใ™ใ€‚ไปฅไธ‹ใฎใ„ใšใ‚Œใ‹ใฎๅ‹•ไฝœใซๅฐŽใใพใ™๏ผš - `before=True`๏ผšใƒ†ใ‚นใƒˆใฎ้–‹ๅง‹ๆ™‚ใซๅธธใซไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒใ‚ฏใƒชใ‚ขใ•ใ‚Œใพใ™ใ€‚ - `before=False`๏ผšไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒๆ—ขใซๅญ˜ๅœจใ™ใ‚‹ๅ ดๅˆใ€ๆ—ขๅญ˜ใฎใƒ•ใ‚กใ‚คใƒซใฏใใฎใพใพใซใชใ‚Šใพใ™ใ€‚ - `after=True`๏ผšใƒ†ใ‚นใƒˆใฎ็ต‚ไบ†ๆ™‚ใซๅธธใซไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒๅ‰Š้™คใ•ใ‚Œใพใ™ใ€‚ - `after=False`๏ผšใƒ†ใ‚นใƒˆใฎ็ต‚ไบ†ๆ™‚ใซๅธธใซไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใฏใใฎใพใพใซใชใ‚Šใพใ™ใ€‚ <Tip> `rm -r`ใฎ็›ธๅฝ“ใ‚’ๅฎ‰ๅ…จใซๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใซใ€ๆ˜Ž็คบ็š„ใช `tmp_dir` ใŒไฝฟ็”จใ•ใ‚Œใ‚‹ๅ ดๅˆใ€ใƒ—ใƒญใ‚ธใ‚งใ‚ฏใƒˆใƒชใƒใ‚ธใƒˆใƒชใฎใƒใ‚งใƒƒใ‚ฏใ‚ขใ‚ฆใƒˆใฎใ‚ตใƒ–ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใฎใฟใŒ่จฑๅฏใ•ใ‚Œใพใ™ใ€‚่ชคใฃใฆ `/tmp` ใชใฉใฎใƒ•ใ‚กใ‚คใƒซใ‚ทใ‚นใƒ†ใƒ ใฎ้‡่ฆใช้ƒจๅˆ†ใŒๅ‰Š้™คใ•ใ‚Œใชใ„ใ‚ˆใ†ใซใ€ๅธธใซ `./` ใ‹ใ‚‰ๅง‹ใพใ‚‹ใƒ‘ใ‚นใ‚’ๆธกใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> <Tip> ๅ„ใƒ†ใ‚นใƒˆใฏ่ค‡ๆ•ฐใฎไธ€ๆ™‚ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’็™ป้Œฒใงใใ€่ฆๆฑ‚ใŒใชใ„้™ใ‚Šใ™ในใฆ่‡ชๅ‹•ใงๅ‰Š้™คใ•ใ‚Œใพใ™ใ€‚ </Tip> ### Temporary sys.path override ๅˆฅใฎใƒ†ใ‚นใƒˆใ‹ใ‚‰ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ใŸใ‚ใซไธ€ๆ™‚็š„ใซ `sys.path` ใ‚’ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ€`ExtendSysPath` ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใƒžใƒใƒผใ‚ธใƒฃใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ไพ‹๏ผš ```python import os from transformers.testing_utils import ExtendSysPath bindir = 
### Temporary sys.path override

If you need to temporarily override `sys.path` to import from another test, for example, you can use the `ExtendSysPath` context manager. Example:

```python
import os
from transformers.testing_utils import ExtendSysPath

bindir = os.path.abspath(os.path.dirname(__file__))
with ExtendSysPath(f"{bindir}/.."):
    from test_trainer import TrainerIntegrationCommon  # noqa
```

### Skipping tests

This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to commit it to the main repository we need to make sure it's skipped during `make test`.

Methods:

- A **skip** means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example a database).

- A **xfail** means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it's an xpass and will be reported in the test summary.

One of the important differences between the two is that `skip` doesn't run the test, while `xfail` does. So if the buggy code causes some bad state that will affect other tests, do not use `xfail`.

#### Implementation

- Here is how to skip a whole test unconditionally:

```python no-style
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
```

or via pytest:

```python no-style
@pytest.mark.skip(reason="this bug needs to be fixed")
```

or the `xfail` way:

```python no-style
@pytest.mark.xfail
def test_feature_x():
```

- Here is how to skip a test based on some internal check inside the test:

```python
def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
```

or the whole module:

```python
import pytest

if not pytest.config.getoption("--custom-flag"):
    pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True)
```

or the `xfail` way:

```python
def test_feature_x():
    pytest.xfail("expected to fail until bug XYZ is fixed")
```

- Here is how to skip all tests in a module if some import is missing:

```python
docutils = pytest.importorskip("docutils", minversion="0.3")
```

- Skip a test based on a condition:

```python no-style
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
```

or:

```python no-style
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
```

or skip the whole module:

```python no-style
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
    def test_feature_x(self):
```

More details, examples and ways are [here](https://docs.pytest.org/en/latest/skipping.html).
ใƒ†ใ‚นใƒˆใƒฉใ‚คใƒ–ใƒฉใƒชใฏ็€ๅฎŸใซๆˆ้•ทใ—ใฆใŠใ‚Šใ€ใƒ†ใ‚นใƒˆใฎไธ€้ƒจใฏๆ•ฐๅˆ†ใ‹ใ‹ใ‚Šใพใ™ใ€‚ใใฎใŸใ‚ใ€CIใงใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎๅฎŒไบ†ใ‚’ๅพ…ใคใฎใฏ1ๆ™‚้–“ๅพ…ใคไฝ™่ฃ•ใŒใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ„ใใคใ‹ใฎไพ‹ๅค–ใ‚’้™คใ„ใฆใ€้…ใ„ใƒ†ใ‚นใƒˆใฏไปฅไธ‹ใฎไพ‹ใฎใ‚ˆใ†ใซใƒžใƒผใ‚ฏใ™ในใใงใ™๏ผš ```python no-style from transformers.testing_utils import slow @slow def test_integration_foo(): ``` ใƒ†ใ‚นใƒˆใŒ`@slow`ใจใ—ใฆใƒžใƒผใ‚ฏใ•ใ‚ŒใŸใ‚‰ใ€ใใฎใ‚ˆใ†ใชใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€็’ฐๅขƒๅค‰ๆ•ฐ `RUN_SLOW=1`ใ‚’่จญๅฎšใ—ใพใ™ใ€‚ไพ‹: ```bash RUN_SLOW=1 pytest tests ``` `@parameterized` ใฎใ‚ˆใ†ใชใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใฏใƒ†ใ‚นใƒˆๅใ‚’ๆ›ธใๆ›ใˆใ‚‹ใŸใ‚ใ€`@slow` ใŠใ‚ˆใณไป–ใฎใ‚นใ‚ญใƒƒใƒ—ใƒ‡ใ‚ณใƒฌใƒผใ‚ฟ `@require_*` ใฏๆญฃใ—ใๅ‹•ไฝœใ™ใ‚‹ใŸใ‚ใซใฏใ€ๆœ€ๅพŒใซใƒชใ‚นใƒˆใ‚ขใƒƒใƒ—ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใฏๆญฃใ—ใ„ไฝฟ็”จไพ‹ใฎไธ€ไพ‹ใงใ™๏ผš ```python no-style @parameteriz ed.expand(...) @slow def test_integration_foo(): ``` ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฎๅ†’้ ญใง่ชฌๆ˜Žใ—ใŸใ‚ˆใ†ใซใ€้…ใ„ใƒ†ใ‚นใƒˆใฏๅฎšๆœŸ็š„ใชใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒซใซๅพ“ใฃใฆๅฎŸ่กŒใ•ใ‚Œใ€PRใฎCIใƒใ‚งใƒƒใ‚ฏใงใฏๅฎŸ่กŒใ•ใ‚Œใพใ›ใ‚“ใ€‚ใใฎใŸใ‚ใ€ไธ€้ƒจใฎๅ•้กŒใŒPRใฎๆๅ‡บๆ™‚ใซ่ฆ‹่ฝใจใ•ใ‚Œใ€ใƒžใƒผใ‚ธใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใใฎใ‚ˆใ†ใชๅ•้กŒใฏๆฌกๅ›žใฎใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒซใ•ใ‚ŒใŸCIใ‚ธใƒงใƒ–ใงๆคœๅ‡บใ•ใ‚Œใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใใ‚ŒใฏใพใŸใ€PRใ‚’ๆๅ‡บใ™ใ‚‹ๅ‰ใซ่‡ชๅˆ†ใฎใƒžใ‚ทใƒณใง้…ใ„ใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹้‡่ฆๆ€งใ‚’ๆ„ๅ‘ณใ—ใฆใ„ใพใ™ใ€‚ ใฉใฎใƒ†ใ‚นใƒˆใ‚’้…ใ„ใƒ†ใ‚นใƒˆใจใ—ใฆใƒžใƒผใ‚ฏใ™ในใใ‹ใ‚’้ธๆŠžใ™ใ‚‹ใŸใ‚ใฎใ€ใŠใŠใพใ‹ใชๆ„ๆ€ๆฑบๅฎšใƒกใ‚ซใƒ‹ใ‚บใƒ ใŒๆฌกใซ็คบใ•ใ‚Œใฆใ„ใพใ™๏ผš - ใƒ†ใ‚นใƒˆใŒใƒฉใ‚คใƒ–ใƒฉใƒชใฎๅ†…้ƒจใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใฎ1ใคใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใ„ใ‚‹ๅ ดๅˆ๏ผˆไพ‹: ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใ€ใƒˆใƒผใ‚ฏใƒณๅŒ–ใƒ•ใ‚กใ‚คใƒซใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ๏ผ‰ใ€ใใฎใƒ†ใ‚นใƒˆใฏ้…ใ„ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใใ‚ŒใŒใƒฉใ‚คใƒ–ใƒฉใƒชใฎไป–ใฎๅด้ขใ€ใŸใจใˆใฐใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚„ไพ‹ใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใ„ใ‚‹ๅ ดๅˆใ€ใใ‚Œใ‚‰ใฎใƒ†ใ‚นใƒˆใฏ้…ใ„ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใใ—ใฆใ€ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚’ๆด—็ทดใ•ใ›ใ‚‹ใŸใ‚ใซไพ‹ๅค–ใ‚’่จญใ‘ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ้‡ใ„ใ‚ฆใ‚งใ‚คใƒˆใ‚ปใƒƒใƒˆใ‚„็ด„50MBไปฅไธŠใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ™ในใฆใฎใƒ†ใ‚นใƒˆ๏ผˆไพ‹: ใƒขใƒ‡ใƒซ็ตฑๅˆใƒ†ใ‚นใƒˆใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถ็ตฑๅˆใƒ†ใ‚นใƒˆใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ็ตฑๅˆใƒ†ใ‚นใƒˆ๏ผ‰ใฏ้…ใ„ใƒ†ใ‚นใƒˆใจใ—ใฆ่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅ ดๅˆใ€็ตฑๅˆใƒ†ใ‚นใƒˆ็”จใซใƒฉใƒณใƒ€ใƒ ใชใ‚ฆใ‚งใ‚คใƒˆใ‚’ๆŒใคๅฐใ•ใชใƒใƒผใ‚ธใƒงใƒณใ‚’ไฝœๆˆใ—ใ€ใƒใƒ–ใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใคใ„ใฆใฏไปฅไธ‹ใฎๆฎต่ฝใง่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใ€‚ - ็‰นใซ้ซ˜้€ŸๅŒ–ใ•ใ‚Œใฆใ„ใชใ„ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ใ™ในใฆใฎใƒ†ใ‚นใƒˆใฏ้…ใ„ใƒ†ใ‚นใƒˆใจใ—ใฆ่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ไธ€้ƒจใฎใ€Œ้…ใ„ใ€ใงใ‚ใ‚‹ในใใงใชใ„ใƒ†ใ‚นใƒˆใŒ้žๅธธใซ้…ใ„ๅ ดๅˆใ€ใŠใ‚ˆใณใใ‚Œใ‚‰ใ‚’ `@slow` ใจใ—ใฆ่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใซใฏไพ‹ๅค–ใ‚’ๅฐŽๅ…ฅใงใใพใ™ใ€‚ๅคงๅฎน้‡ใฎใƒ•ใ‚กใ‚คใƒซใ‚’ใƒ‡ใ‚ฃใ‚นใ‚ฏใซไฟๅญ˜ใŠใ‚ˆใณ่ชญใฟ่พผใฟใ™ใ‚‹่‡ชๅ‹•ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ†ใ‚นใƒˆใฏใ€`@slow` ใจใ—ใฆใƒžใƒผใ‚ฏใ•ใ‚ŒใŸใƒ†ใ‚นใƒˆใฎ่‰ฏใ„ไพ‹ใงใ™ใ€‚ - CIใง1็ง’ๆœชๆบ€ใงใƒ†ใ‚นใƒˆใŒๅฎŒไบ†ใ™ใ‚‹ๅ ดๅˆ๏ผˆใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ‚’ๅซใ‚€๏ผ‰ใ€ใใ‚Œใฏ้€šๅธธใฎใƒ†ใ‚นใƒˆใงใ‚ใ‚‹ในใใงใ™ใ€‚ 
ใ™ในใฆใฎ้ž้…ใ„ใƒ†ใ‚นใƒˆใฏใ€ใ•ใพใ–ใพใชๅ†…้ƒจ่ฆ็ด ใ‚’ๅฎŒๅ…จใซใ‚ซใƒใƒผใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใŒใ€้ซ˜้€Ÿใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€็‰นๅˆฅใซไฝœๆˆใ•ใ‚ŒใŸๅฐใ•ใชใƒขใƒ‡ใƒซ๏ผˆใƒฌใ‚คใƒคใƒผๆ•ฐใŒๆœ€ๅฐ้™ใงใ€่ชžๅฝ™ใ‚ตใ‚คใ‚บใŒๅฐใ•ใ„ใชใฉ๏ผ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใ‹ใชใ‚Šใฎใ‚ซใƒใƒฌใƒƒใ‚ธใ‚’ๅฎŸ็พใงใใพใ™ใ€‚ใใฎๅพŒใ€`@slow` ใƒ†ใ‚นใƒˆใงใฏๅคง่ฆๆจกใช้…ใ„ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆ่ณช็š„ใชใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใงใใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซ *tiny* ใƒขใƒ‡ใƒซใ‚’ๆŽขใ—ใฆใใ ใ•ใ„๏ผš ```bash grep tiny tests examples ``` [ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎไพ‹](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py)ใŒใ‚ใ‚Šใ€ใ“ใ‚Œใซใ‚ˆใ‚Š tiny-wmt19-en-de ใฎใ‚ˆใ†ใชๅฐใ•ใชใƒขใƒ‡ใƒซใŒไฝœๆˆใ•ใ‚Œใพใ™ใ€‚็‰นๅฎšใฎใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซ็ฐกๅ˜ใซ่ชฟๆ•ดใงใใพใ™ใ€‚ ๅฎŸ่กŒๆ™‚้–“ใ‚’่ชคใฃใฆๆธฌๅฎšใ™ใ‚‹ใ“ใจใŒ็ฐกๅ˜ใงใ™ใ€‚ใŸใจใˆใฐใ€ๅทจๅคงใชใƒขใƒ‡ใƒซใฎใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใซ้–ขใ™ใ‚‹ใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒใ‚ใ‚‹ๅ ดๅˆใ€ใƒญใƒผใ‚ซใƒซใงใƒ†ใ‚นใƒˆใ™ใ‚‹ใจใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ•ใ‚ŒใŸใƒ•ใ‚กใ‚คใƒซใŒใ‚ญใƒฃใƒƒใ‚ทใƒฅใ•ใ‚Œใ€ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ๆ™‚้–“ใŒ่จˆๆธฌใ•ใ‚Œใชใใชใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€CIใƒญใ‚ฐใฎๅฎŸ่กŒ้€Ÿๅบฆใƒฌใƒใƒผใƒˆ๏ผˆ`pytest --durations=0 tests` ใฎๅ‡บๅŠ›๏ผ‰ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎใƒฌใƒใƒผใƒˆใฏใ€้…ใ„ใƒ†ใ‚นใƒˆใจใ—ใฆใƒžใƒผใ‚ฏใ•ใ‚Œใฆใ„ใชใ„้…ใ„ๅค–ใ‚Œๅ€คใ‚„ใ€้ซ˜้€Ÿใซๆ›ธใ็›ดใ™ๅฟ…่ฆใŒใ‚ใ‚‹ใƒ†ใ‚นใƒˆใ‚’่ฆ‹ใคใ‘ใ‚‹ใฎใซใ‚‚ๅฝน็ซ‹ใกใพใ™ใ€‚ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใŒCIใง้…ใใชใ‚Šๅง‹ใ‚ใŸๅ ดๅˆใ€ใ“ใฎใƒฌใƒใƒผใƒˆใฎใƒˆใƒƒใƒ—ใƒชใ‚นใƒˆใซใฏๆœ€ใ‚‚้…ใ„ใƒ†ใ‚นใƒˆใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ ### Testing the stdout/stderr output `stdout` ใŠใ‚ˆใณ/ใพใŸใฏ `stderr` ใซๆ›ธใ่พผใ‚€้–ขๆ•ฐใ‚’ใƒ†ใ‚นใƒˆใ™ใ‚‹ใŸใ‚ใซใ€ใƒ†ใ‚นใƒˆใฏ `pytest` ใฎ [capsys ใ‚ทใ‚นใƒ†ใƒ ](https://docs.pytest.org/en/latest/capture.html) ใ‚’ไฝฟ็”จใ—ใฆใ“ใ‚Œใ‚‰ใฎใ‚นใƒˆใƒชใƒผใƒ ใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ™ใ€‚ไปฅไธ‹ใฏใใฎๆ–นๆณ•ใงใ™๏ผš ```python import sys def print_to_stdout(s): print(s) def print_to_stderr(s): sys.stderr.write(s) def test_result_and_stdout(capsys): msg = "Hello" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # consume the captured output streams # optional: if you want to replay the consumed streams: sys.stdout.write(out) sys.stderr.write(err) # test: assert msg in out assert msg in err ``` ใใ—ใฆใ‚‚ใกใ‚ใ‚“ใ€ใปใจใ‚“ใฉใฎๅ ดๅˆใ€`stderr`ใฏไพ‹ๅค–ใฎไธ€้ƒจใจใ—ใฆๆไพ›ใ•ใ‚Œใ‚‹ใŸใ‚ใ€ใใฎใ‚ˆใ†ใชๅ ดๅˆใซใฏ try/excel ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ‚ฑใƒผใ‚น๏ผš ```python def raise_exception(msg): raise ValueError(msg) def test_something_exception(): msg = "Not a good value" error = "" try: raise_exception(msg) except Exception as e: error = str(e) assert msg in error, f"{msg} is in the exception:\n{error}" ``` stdout ใ‚’ใ‚ญใƒฃใƒ—ใƒใƒฃใ™ใ‚‹ใ‚‚ใ† 1 ใคใฎใ‚ขใƒ—ใƒญใƒผใƒใฏใ€`contextlib.redirect_stdout`ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ```python from io import StringIO from contextlib import redirect_stdout def print_to_stdout(s): print(s) def test_result_and_stdout(): msg = "Hello" buffer = StringIO() with redirect_stdout(buffer): print_to_stdout(msg) out = buffer.getvalue() # optional: if you want to replay the consumed streams: sys.stdout.write(out) # test: assert msg in out ``` stdout ใ‚’ใ‚ญใƒฃใƒ—ใƒใƒฃใ™ใ‚‹้š›ใฎ้‡่ฆใชๆฝœๅœจ็š„ใชๅ•้กŒใฏใ€้€šๅธธใฎ `print` ใงใ“ใ‚Œใพใงใซๅ‡บๅŠ›ใ•ใ‚ŒใŸๅ†…ๅฎนใ‚’ใƒชใ‚ปใƒƒใƒˆใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ `\r` 
ๆ–‡ๅญ—ใŒๅซใพใ‚Œใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ“ใจใงใ™ใ€‚`pytest` ่‡ชไฝ“ใซใฏๅ•้กŒใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€`pytest -s` ใงใฏใ“ใ‚Œใ‚‰ใฎๆ–‡ๅญ—ใŒใƒใƒƒใƒ•ใ‚กใซๅซใพใ‚Œใ‚‹ใŸใ‚ใ€`-s` ใ‚ใ‚Šใจใชใ—ใงใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใซใฏใ€`re.sub(r'~.*\r', '', buf, 0, re.M)` ใ‚’ไฝฟ็”จใ—ใฆใ‚ญใƒฃใƒ—ใƒใƒฃใ•ใ‚ŒใŸๅ‡บๅŠ›ใซๅฏพใ—ใฆ่ฟฝๅŠ ใฎใ‚ฏใƒชใƒผใƒณใ‚ขใƒƒใƒ—ใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ—ใ‹ใ—ใ€ใใฎๅพŒใ€`\r` ใŒๅซใพใ‚Œใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใซใ‹ใ‹ใ‚ใ‚‰ใšใ€ใ™ในใฆใฎๆ“ไฝœใ‚’่‡ชๅ‹•็š„ใซๅ‡ฆ็†ใ™ใ‚‹ใƒ˜ใƒซใƒ‘ใƒผใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใƒžใƒใƒผใ‚ธใƒฃใƒฉใƒƒใƒ‘ใƒผใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ๆฌกใฎใ‚ˆใ†ใซ็ฐกๅ˜ใซ่กŒใˆใพใ™๏ผš ```python from transformers.testing_utils import CaptureStdout with CaptureStdout() as cs: function_that_writes_to_stdout() print(cs.out) ``` ๅฎŒๅ…จใชใƒ†ใ‚นใƒˆไพ‹ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ ```python from transformers.testing_utils import CaptureStdout msg = "Secret message\r" final = "Hello World" with CaptureStdout() as cs: print(msg + final) assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}" ``` `stderr` ใ‚’ใ‚ญใƒฃใƒ—ใƒใƒฃใ—ใŸใ„ๅ ดๅˆใฏใ€ไปฃใ‚ใ‚Šใซ `CaptureStderr` ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ ```python from transformers.testing_utils import CaptureStderr with CaptureStderr() as cs: function_that_writes_to_stderr() print(cs.err) ``` ไธกๆ–นใฎใ‚นใƒˆใƒชใƒผใƒ ใ‚’ไธ€ๅบฆใซใ‚ญใƒฃใƒ—ใƒใƒฃใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€่ฆชใฎ `CaptureStd` ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ```python from transformers.testing_utils import CaptureStd with CaptureStd() as cs: function_that_writes_to_stdout_and_stderr() print(cs.err, cs.out) ``` ใพใŸใ€ใƒ†ใ‚นใƒˆใฎๅ•้กŒใฎใƒ‡ใƒใƒƒใ‚ฐใ‚’ๆ”ฏๆดใ™ใ‚‹ใŸใ‚ใซใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใ€ใ“ใ‚Œใ‚‰ใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆ ใƒžใƒใƒผใ‚ธใƒฃใƒผใฏ็ต‚ไบ†ๆ™‚ใซใ‚ญใƒฃใƒ—ใƒใƒฃใ•ใ‚ŒใŸใ‚นใƒˆใƒชใƒผใƒ ใ‚’่‡ชๅ‹•็š„ใซๅ†็”Ÿใ—ใพใ™ใ€‚ ๆ–‡่„ˆใ‹ใ‚‰ใ€‚ ### Capturing logger stream ใƒญใ‚ฌใƒผใฎๅ‡บๅŠ›ใ‚’ๆคœ่จผใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€`CaptureLogger`ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ ```python from transformers import logging from transformers.testing_utils import CaptureLogger msg = "Testing 1, 2, 3" logging.set_verbosity_info() logger = logging.get_logger("transformers.models.bart.tokenization_bart") with CaptureLogger(logger) as cl: logger.info(msg) assert cl.out, msg + "\n" ``` ### Testing with environment variables ็‰นๅฎšใฎใƒ†ใ‚นใƒˆใง็’ฐๅขƒๅค‰ๆ•ฐใฎๅฝฑ้Ÿฟใ‚’ใƒ†ใ‚นใƒˆใ—ใŸใ„ๅ ดๅˆใฏใ€ใƒ˜ใƒซใƒ‘ใƒผ ใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ `transformers.testing_utils.mockenv` ```python from transformers.testing_utils import mockenv class HfArgumentParserTest(unittest.TestCase): @mockenv(TRANSFORMERS_VERBOSITY="error") def test_env_override(self): env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None) ``` ๅ ดๅˆใซใ‚ˆใฃใฆใฏใ€ๅค–้ƒจใƒ—ใƒญใ‚ฐใƒฉใƒ ใ‚’ๅ‘ผใณๅ‡บใ™ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€`os.environ` ใซ`PYTHONPATH`ใ‚’่จญๅฎšใ—ใฆใ‚คใƒณใ‚ฏใƒซใƒผใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ่ค‡ๆ•ฐใฎใƒญใƒผใ‚ซใƒซ ใƒ‘ใ‚นใ€‚ใƒ˜ใƒซใƒ‘ใƒผ ใ‚ฏใƒฉใ‚น `transformers.test_utils.TestCasePlus` ใŒๅฝนใซ็ซ‹ใกใพใ™ใ€‚ ```python from transformers.testing_utils import TestCasePlus class EnvExampleTest(TestCasePlus): def test_external_prog(self): env = self.get_env() # now call the external program, passing `env` to it ``` ใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใŒ `tests` ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใพใŸใฏ `examples` ใฎใฉใกใ‚‰ใซใ‚ใ‚‹ใ‹ใซๅฟœใ˜ใฆ `env[PYTHONPATH]` ใ‚’ไฝฟ็”จใ—ใฆใ€ใ“ใ‚Œใ‚‰ 2 ใคใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใฎใ„ใšใ‚Œใ‹ใ‚’ๅซใ‚ใพใ™ใ€‚ใพใŸใ€ใƒ†ใ‚นใƒˆใŒ็ขบๅฎŸใซ่กŒใ‚ใ‚Œใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใฎ 
Depending on whether the test file resides under the `tests` test suite or `examples`, it will correctly set up `env[PYTHONPATH]` to include one of these two directories, and also the `src` directory to ensure the testing is done against the current repo, and finally whatever `env[PYTHONPATH]` was already set to before the test was called, if anything.

This helper method creates a copy of the `os.environ` object, so the original remains intact.

### Getting reproducible results

In some situations you may want to remove randomness from your tests. To get identical, reproducible result sets, you will need to fix the seed:

```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
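Note that `transformers` also ships a convenience helper that seeds the same RNGs in one call, which is usually simpler in tests - a quick sketch:

```python
from transformers import set_seed

# seeds the `random`, NumPy and PyTorch (CPU and all CUDA devices) RNGs,
# plus TensorFlow's when it is installed
set_seed(42)
```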
ใ™ในใฆใŒๅฎ‰ๅฎšใ—ใฆใ„ใ‚‹ใ“ใจใŒๆ˜Ž็ขบใซใชใฃใŸใ‚‰ใ€ๆ–ฐใ—ใ„ๅค‰ๆ›ดใ‚’ๆ—ขๅญ˜ใฎใ‚ธใƒงใƒ–ใซ็ตฑๅˆใ—ใพใ™ใ€‚ ใ“ใฎใ‚ˆใ†ใซใ€CIๆฉŸ่ƒฝ่‡ชไฝ“ใฎๅฎŸ้จ“ใŒ้€šๅธธใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใซๅนฒๆธ‰ใ—ใชใ„ใ‚ˆใ†ใซใงใใพใ™ใ€‚ ใงใฏใ€ๆ–ฐใ—ใ„CIๆฉŸ่ƒฝใŒ้–‹็™บไธญใงใ‚ใ‚‹้–“ใ€ใ‚ธใƒงใƒ–ใ‚’ๅธธใซๆˆๅŠŸใ•ใ›ใ‚‹ใซใฏใฉใ†ใ™ใ‚Œใฐใ„ใ„ใงใ—ใ‚‡ใ†ใ‹๏ผŸ TravisCIใฎใ‚ˆใ†ใชไธ€้ƒจใฎCIใฏ `ignore-step-failure` ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใ€ๅ…จไฝ“ใฎใ‚ธใƒงใƒ–ใ‚’ๆˆๅŠŸใจใ—ใฆๅ ฑๅ‘Šใ—ใพใ™ใŒใ€ใ“ใฎๆ–‡ๆ›ธใŒไฝœๆˆใ•ใ‚ŒใŸๆ™‚็‚นใงใฏCircleCIใจGithub Actionsใฏใใ‚Œใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ›ใ‚“ใ€‚ ใ—ใŸใŒใฃใฆใ€ไปฅไธ‹ใฎใƒฏใƒผใ‚ฏใ‚ขใƒฉใ‚ฆใƒณใƒ‰ใ‚’ไฝฟ็”จใงใใพใ™๏ผš 1. bashใ‚นใ‚ฏใƒชใƒ—ใƒˆๅ†…ใงๆฝœๅœจ็š„ใชๅคฑๆ•—ใ‚’ๆŠ‘ๅˆถใ™ใ‚‹ใŸใ‚ใซๅฎŸ่กŒใ‚ณใƒžใƒณใƒ‰ใฎๅ†’้ ญใซ `set +euo pipefail` ใ‚’่จ˜่ฟฐใ—ใพใ™ใ€‚ 2. ๆœ€ๅพŒใฎใ‚ณใƒžใƒณใƒ‰ใฏๆˆๅŠŸใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐ `echo "done"` ใพใŸใฏๅ˜ใซ `true` ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ ไปฅไธ‹ใฏไพ‹ใงใ™๏ผš ```yaml - run: name: run CI experiment command: | set +euo pipefail echo "setting run-all-despite-any-errors-mode" this_command_will_fail echo "but bash continues to run" # emulate another failure false # but the last command must be a success echo "during experiment do not remove: reporting success to CI, even if there were failures" ``` ๅ˜็ด”ใชใ‚ณใƒžใƒณใƒ‰ใฎๅ ดๅˆใฏใ€ๆฌกใฎใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```bash cmd_that_may_fail || true ``` ใ‚‚ใกใ‚ใ‚“ใ€็ตๆžœใซๆบ€่ถณใ—ใŸใ‚‰ใ€ๅฎŸ้จ“็š„ใชใ‚นใƒ†ใƒƒใƒ—ใ‚„ใ‚ธใƒงใƒ–ใ‚’้€šๅธธใฎใ‚ธใƒงใƒ–ใจ็ตฑๅˆใ—ใ€`set +euo pipefail` ใชใฉใฎ่ฟฝๅŠ ใ—ใŸ่ฆ็ด ใ‚’ๅ‰Š้™คใ—ใฆใ€ๅฎŸ้จ“็š„ใชใ‚ธใƒงใƒ–ใŒ้€šๅธธใฎCIใฎๅ‹•ไฝœใซๅนฒๆธ‰ใ—ใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ใ“ใฎใƒ—ใƒญใ‚ปใ‚นๅ…จไฝ“ใฏใ€ๅฎŸ้จ“็š„ใชใ‚นใƒ†ใƒƒใƒ—ใซๅฏพใ—ใฆ `allow-failure` ใฎใ‚ˆใ†ใชใ‚‚ใฎใ‚’่จญๅฎšใ—ใ€PRใฎๅ…จไฝ“ใฎใ‚นใƒ†ใƒผใ‚ฟใ‚นใซๅฝฑ้Ÿฟใ‚’ไธŽใˆใšใซๅคฑๆ•—ใ•ใ›ใ‚‹ใ“ใจใŒใงใใ‚Œใฐใ€ใฏใ‚‹ใ‹ใซ็ฐกๅ˜ใซใชใฃใŸใงใ—ใ‚‡ใ†ใ€‚ใ—ใ‹ใ—ใ€ๅ‰่ฟฐใฎ้€šใ‚Šใ€็พๅœจใฏCircleCIใจGithub Actionsใฏใ“ใฎๆฉŸ่ƒฝใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ›ใ‚“ใ€‚ ใ“ใฎๆฉŸ่ƒฝใซ้–ขใ—ใฆใฎๆŠ•็ฅจใ‚„ใ€CIใซ็‰นๆœ‰ใฎใ‚นใƒฌใƒƒใƒ‰ใงใใฎ้€ฒๆ—็Šถๆณใ‚’็ขบ่ชใงใใพใ™๏ผš - [Github Actions:](https://github.com/actions/toolkit/issues/399) - [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/tokenizer_summary.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Summary of the tokenizers

[[open-in-colab]]

On this page, we will take a closer look at tokenization.

<Youtube id="VFp38yj8h3A"/>

As we saw in [the preprocessing tutorial](preprocessing), tokenizing a text means splitting it into words or subwords, which are then converted to ids through a look-up table. Converting words or subwords to ids is straightforward, so in this summary we will focus on splitting a text into words or subwords (i.e. tokenizing a text). More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers, namely [Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), and [SentencePiece](#sentencepiece), and show examples of which model uses which tokenizer type.

Note that on each model page you can check the documentation of the associated tokenizer to know which tokenizer type the pretrained model uses. For instance, if we look at [`BertTokenizer`], we can see that the model uses [WordPiece](#wordpiece).

## Introduction

Splitting a text into smaller chunks is a harder task than it looks, and there are multiple ways of doing it. For instance, consider the sentence "Don't you love 🤗 Transformers? We sure do."

<Youtube id="nhJxYji1aho"/>

A simple way to tokenize this text is to split it by spaces, which would give:

```
["Don't", "you", "love", "🤗", "Transformers?", "We", "sure", "do."]
```
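Space splitting amounts to nothing more than plain Python's `str.split`; a minimal sketch (no 🤗 Transformers functionality assumed):

```py
# Space tokenization in plain Python: split on whitespace, nothing more.
text = "Don't you love 🤗 Transformers? We sure do."
tokens = text.split()
print(tokens)
# ["Don't", 'you', 'love', '🤗', 'Transformers?', 'We', 'sure', 'do.']
```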
ใ‚’่ฆ‹ใ‚‹ใจใ€ๅฅ่ชญ็‚นใŒๅ˜่ชž "Transformer" ใจ "do" ใซ็ตๅˆใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใ€ใ“ใ‚Œใฏๆœ€้ฉใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๅฅ่ชญ็‚นใ‚’่€ƒๆ…ฎใซๅ…ฅใ‚Œใ‚‹ในใใงใ€ใƒขใƒ‡ใƒซใŒๅ˜่ชžใจใใ‚Œใซ็ถšใๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ใ™ในใฆใฎๅฅ่ชญ็‚น่จ˜ๅทใฎ็•ฐใชใ‚‹่กจ็พใ‚’ๅญฆใฐใชใ‘ใ‚Œใฐใชใ‚‰ใชใ„ใ“ใจใ‚’้ฟใ‘ใ‚‹ในใใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใŒๅญฆใฐใชใ‘ใ‚Œใฐใชใ‚‰ใชใ„่กจ็พใฎๆ•ฐใŒ็ˆ†็™บ็š„ใซๅข—ๅŠ ใ—ใพใ™ใ€‚ๅฅ่ชญ็‚นใ‚’่€ƒๆ…ฎใซๅ…ฅใ‚ŒใŸๅ ดๅˆใ€ไพ‹ๆ–‡ใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ``` ["Don", "'", "t", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ใŸใ ใ—ใ€ๅ˜่ชžใ€Œ"Don't"ใ€ใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ™ใ‚‹ๆ–นๆณ•ใซ้–ขใ—ใฆใฏใ€ไธๅˆฉใชๅด้ขใŒใ‚ใ‚Šใพใ™ใ€‚ ใ€Œ"Don't"ใ€ใฏใ€Œ"do not"ใ€ใ‚’่กจใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ใ€Œ["Do", "n't"]ใ€ใจใ—ใฆใƒˆใƒผใ‚ฏใƒณๅŒ–ใ™ใ‚‹ๆ–นใŒ้ฉใ—ใฆใ„ใพใ™ใ€‚ใ“ใ“ใ‹ใ‚‰ไบ‹ๆŸ„ใŒ่ค‡้›‘ใซใชใ‚Šใ€ๅ„ใƒขใƒ‡ใƒซใŒ็‹ฌ่‡ชใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚ฟใ‚คใƒ—ใ‚’ๆŒใค็†็”ฑใฎไธ€้ƒจใงใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ™ใ‚‹ใŸใ‚ใซ้ฉ็”จใ™ใ‚‹ใƒซใƒผใƒซใซๅฟœใ˜ใฆใ€ๅŒใ˜ใƒ†ใ‚ญใ‚นใƒˆใซๅฏพใ—ใฆ็•ฐใชใ‚‹ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ•ใ‚ŒใŸๅ‡บๅŠ›ใŒ็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใƒขใƒ‡ใƒซใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใ‚’ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ™ใ‚‹ใฎใซไฝฟ็”จใ•ใ‚ŒใŸใƒซใƒผใƒซใจๅŒใ˜ใƒซใƒผใƒซใงใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ•ใ‚ŒใŸๅ…ฅๅŠ›ใ‚’ๆไพ›ใ™ใ‚‹ๅ ดๅˆใซใฎใฟๆญฃๅธธใซๆฉŸ่ƒฝใ—ใพใ™ใ€‚ [spaCy](https://spacy.io/)ใจ[Moses](http://www.statmt.org/moses/?n=Development.GetStarted)ใฏใ€2ใคใฎไบบๆฐ—ใฎใ‚ใ‚‹ใƒซใƒผใƒซใƒ™ใƒผใ‚นใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใ‚’็งใŸใกใฎไพ‹ใซ้ฉ็”จใ™ใ‚‹ใจใ€*spaCy*ใจ*Moses*ใฏๆฌกใฎใ‚ˆใ†ใชๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ—ใพใ™๏ผš ``` ["Do", "n't", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ็ฉบ็™ฝใจๅฅ่ชญ็‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใ€ใŠใ‚ˆใณใƒซใƒผใƒซใƒ™ใƒผใ‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚็ฉบ็™ฝใจๅฅ่ชญ็‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใ€ใŠใ‚ˆใณใƒซใƒผใƒซใƒ™ใƒผใ‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฏใ€ๆ–‡ใ‚’ๅ˜่ชžใซๅˆ†ๅ‰ฒใ™ใ‚‹ใ“ใจใ‚’ใ‚†ใ‚‹ใ‚„ใ‹ใซๅฎš็พฉใ•ใ‚Œใ‚‹ๅ˜่ชžใƒˆใƒผใ‚ฏใƒณๅŒ–ใฎไพ‹ใงใ™ใ€‚ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใ‚ˆใ‚Šๅฐใ•ใชใƒใƒฃใƒณใ‚ฏใซๅˆ†ๅ‰ฒใ™ใ‚‹ใŸใ‚ใฎๆœ€ใ‚‚็›ดๆ„Ÿ็š„ใชๆ–นๆณ•ใงใ‚ใ‚‹ไธ€ๆ–นใ€ใ“ใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ๆ–นๆณ•ใฏๅคง่ฆๆจกใชใƒ†ใ‚ญใ‚นใƒˆใ‚ณใƒผใƒ‘ใ‚นใซๅฏพใ—ใฆๅ•้กŒใ‚’ๅผ•ใ่ตทใ“ใ™ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€็ฉบ็™ฝใจๅฅ่ชญ็‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฏ้€šๅธธใ€้žๅธธใซๅคงใใช่ชžๅฝ™๏ผˆใ™ในใฆใฎไธ€ๆ„ใชๅ˜่ชžใจใƒˆใƒผใ‚ฏใƒณใฎใ‚ปใƒƒใƒˆ๏ผ‰ใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€[Transformer XL](model_doc/transformerxl)ใฏ็ฉบ็™ฝใจๅฅ่ชญ็‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚’ไฝฟ็”จใ—ใฆใŠใ‚Šใ€่ชžๅฝ™ใ‚ตใ‚คใ‚บใฏ267,735ใงใ™๏ผ ใ“ใฎใ‚ˆใ†ใชๅคงใใช่ชžๅฝ™ใ‚ตใ‚คใ‚บใฏใ€ใƒขใƒ‡ใƒซใซ้žๅธธใซๅคงใใชๅŸ‹ใ‚่พผใฟ่กŒๅˆ—ใ‚’ๅ…ฅๅŠ›ใŠใ‚ˆใณๅ‡บๅŠ›ใƒฌใ‚คใƒคใƒผใจใ—ใฆๆŒใŸใ›ใ‚‹ใ“ใจใ‚’ๅผทๅˆถใ—ใ€ใƒกใƒขใƒชใŠใ‚ˆใณๆ™‚้–“ใฎ่ค‡้›‘ใ•ใฎๅข—ๅŠ ใ‚’ๅผ•ใ่ตทใ“ใ—ใพใ™ใ€‚ไธ€่ˆฌ็š„ใซใ€ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใฏใ€็‰นใซๅ˜ไธ€ใฎ่จ€่ชžใงไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸๅ ดๅˆใ€50,000ใ‚’่ถ…ใˆใ‚‹่ชžๅฝ™ใ‚ตใ‚คใ‚บใ‚’ๆŒใคใ“ใจใฏใปใจใ‚“ใฉใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ‚ทใƒณใƒ—ใƒซใช็ฉบ็™ฝใจๅฅ่ชญ็‚นใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใŒไธๅๅˆ†ใชๅ ดๅˆใ€ใชใœๅ˜ใซๆ–‡ๅญ—ๅ˜ไฝใงใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใชใ„ใฎใ‹ใจใ„ใ†็–‘ๅ•ใŒ็”Ÿใ˜ใพใ™ใ‹๏ผŸ <Youtube id="ssLq_EK2jLE"/> 
ๆ–‡ๅญ—ๅ˜ไฝใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฏ้žๅธธใซใ‚ทใƒณใƒ—ใƒซใงใ‚ใ‚Šใ€ใƒกใƒขใƒชใจๆ™‚้–“ใฎ่ค‡้›‘ใ•ใ‚’ๅคงๅน…ใซๅ‰Šๆธ›ใงใใพใ™ใŒใ€ใƒขใƒ‡ใƒซใซๆ„ๅ‘ณใฎใ‚ใ‚‹ๅ…ฅๅŠ›่กจ็พใ‚’ๅญฆ็ฟ’ใ•ใ›ใ‚‹ใ“ใจใŒ้žๅธธใซ้›ฃใ—ใใชใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€ๆ–‡ๅญ—ใ€Œ"t"ใ€ใฎใŸใ‚ใฎๆ„ๅ‘ณใฎใ‚ใ‚‹ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆ็‹ฌ็ซ‹ใฎ่กจ็พใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใ“ใจใฏใ€ๅ˜่ชžใ€Œ"today"ใ€ใฎใŸใ‚ใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆ็‹ฌ็ซ‹ใฎ่กจ็พใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใ‚ˆใ‚Šใ‚‚ใฏใ‚‹ใ‹ใซ้›ฃใ—ใ„ใงใ™ใ€‚ใใฎใŸใ‚ใ€ๆ–‡ๅญ—ๅ˜ไฝใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฏใ—ใฐใ—ใฐใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎไฝŽไธ‹ใ‚’ไผดใ„ใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใฏๅ˜่ชžใƒฌใƒ™ใƒซใจๆ–‡ๅญ—ใƒฌใƒ™ใƒซใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใฎใƒใ‚คใƒ–ใƒชใƒƒใƒ‰ใงใ‚ใ‚‹**ใ‚ตใƒ–ใƒฏใƒผใƒ‰**ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚’ไฝฟ็”จใ—ใฆใ€ไธกๆ–นใฎไธ–็•Œใฎๅˆฉ็‚นใ‚’ๆดปใ‹ใ—ใพใ™ใ€‚ ## Subword tokenization <Youtube id="zHvTiHr506c"/> ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใฏใ€้ ป็นใซไฝฟ็”จใ•ใ‚Œใ‚‹ๅ˜่ชžใ‚’ใ‚ˆใ‚Šๅฐใ•ใชใ‚ตใƒ–ใƒฏใƒผใƒ‰ใซๅˆ†ๅ‰ฒใ™ในใใงใฏใชใ„ใŒใ€็ใ—ใ„ๅ˜่ชžใฏๆ„ๅ‘ณใฎใ‚ใ‚‹ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใซๅˆ†่งฃใ•ใ‚Œใ‚‹ใจใ„ใ†ๅŽŸๅ‰‡ใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚ใŸใจใˆใฐใ€ใ€Œ"annoyingly"ใ€ใฏ็ใ—ใ„ๅ˜่ชžใจ่ฆ‹ใชใ•ใ‚Œใ€ใใฎๅ˜่ชžใฏใ€Œ"annoying"ใ€ใจใ€Œ"ly"ใ€ใซๅˆ†่งฃใ•ใ‚Œใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚็‹ฌ็ซ‹ใ—ใŸใ€Œ"annoying"ใ€ใจใ€Œ"ly"ใ€ใฏใ‚ˆใ‚Š้ ป็นใซ็พใ‚Œใพใ™ใŒใ€ใ€Œ"annoyingly"ใ€ใฎๆ„ๅ‘ณใฏใ€Œ"annoying"ใ€ใจใ€Œ"ly"ใ€ใฎๅˆๆˆ็š„ใชๆ„ๅ‘ณใซใ‚ˆใฃใฆไฟๆŒใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใฏ็‰นใซใƒˆใƒซใ‚ณ่ชžใชใฉใฎ็ตๅˆ่จ€่ชžใงๅฝน็ซ‹ใกใพใ™ใ€‚ใ“ใ“ใงใฏใ‚ตใƒ–ใƒฏใƒผใƒ‰ใ‚’้€ฃ็ตใ—ใฆ๏ผˆใปใผ๏ผ‰ไปปๆ„ใฎ้•ทใ„่ค‡้›‘ใชๅ˜่ชžใ‚’ๅฝขๆˆใงใใพใ™ใ€‚ ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒณๅŒ–ใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฏๅˆ็†็š„ใช่ชžๅฝ™ใ‚ตใ‚คใ‚บใ‚’ๆŒใคใ“ใจใŒใงใใ€ๆ„ๅ‘ณใฎใ‚ใ‚‹ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆ็‹ฌ็ซ‹ใฎ่กจ็พใ‚’ๅญฆ็ฟ’ใงใใพใ™ใ€‚ใ•ใ‚‰ใซใ€ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒณๅŒ–ใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฏไปฅๅ‰ใซ่ฆ‹ใŸใ“ใจใฎใชใ„ๅ˜่ชžใ‚’ๅ‡ฆ็†ใ—ใ€ใใ‚Œใ‚‰ใ‚’ๆ—ข็Ÿฅใฎใ‚ตใƒ–ใƒฏใƒผใƒ‰ใซๅˆ†่งฃใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ไพ‹ใˆใฐใ€[`~transformers.BertTokenizer`]ใฏ`"I have a new GPU!"`ใ‚’ไปฅไธ‹ใฎใ‚ˆใ†ใซใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™๏ผš ```py >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> tokenizer.tokenize("I have a new GPU!") ["i", "have", "a", "new", "gp", "##u", "!"] ``` ใ€Œuncasedใ€ใƒขใƒ‡ใƒซใ‚’่€ƒๆ…ฎใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ใพใšๆ–‡ใ‚’ๅฐๆ–‡ๅญ—ใซๅค‰ๆ›ใ—ใพใ—ใŸใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎ่ชžๅฝ™ใซใ€Œ["i", "have", "a", "new"]ใ€ใจใ„ใ†ๅ˜่ชžใŒๅญ˜ๅœจใ™ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใŒใ€ใ€Œ"gpu"ใ€ใจใ„ใ†ๅ˜่ชžใฏๅญ˜ๅœจใ—ใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€Œ"gpu"ใ€ใ‚’ๆ—ข็Ÿฅใฎใ‚ตใƒ–ใƒฏใƒผใƒ‰ใ€Œ["gp"ใ€"##u"]ใ€ใซๅˆ†ๅ‰ฒใ—ใพใ™ใ€‚ใ“ใ“ใงใ€Œ"##"ใ€ใฏใ€ใƒˆใƒผใ‚ฏใƒณใฎใƒ‡ใ‚ณใƒผใƒ‰ใพใŸใฏใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใฎ้€†่ปขใฎใŸใ‚ใซใ€ใƒˆใƒผใ‚ฏใƒณใฎๅ‰ใฎ้ƒจๅˆ†ใซใ‚นใƒšใƒผใ‚นใชใ—ใงๆŽฅ็ถšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ ๅˆฅใฎไพ‹ใจใ—ใฆใ€[`~transformers.XLNetTokenizer`]ใฏไปฅไธ‹ใฎใ‚ˆใ†ใซไปฅๅ‰ใฎใ‚ตใƒณใƒ—ใƒซใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™๏ผš ```py >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased") >>> tokenizer.tokenize("Don't you love ๐Ÿค— Transformers? 
We sure do.") ["โ–Don", "'", "t", "โ–you", "โ–love", "โ–", "๐Ÿค—", "โ–", "Transform", "ers", "?", "โ–We", "โ–sure", "โ–do", "."] ``` ใ“ใ‚Œใ‚‰ใฎใ€Œโ–ใ€ใฎๆ„ๅ‘ณใซใคใ„ใฆใฏใ€[SentencePiece](#sentencepiece)ใ‚’่ฆ‹ใ‚‹ใจใใซ่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใ€‚ใ”่ฆงใฎ้€šใ‚Šใ€ใ€ŒTransformersใ€ใจใ„ใ†็ใ—ใ„ๅ˜่ชžใฏใ€ใ‚ˆใ‚Š้ ป็นใซ็พใ‚Œใ‚‹ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใ€ŒTransformใ€ใจใ€Œersใ€ใซๅˆ†ๅ‰ฒใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ•ใฆใ€็•ฐใชใ‚‹ใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใŒใฉใฎใ‚ˆใ†ใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใฏใ™ในใฆใ€้€šๅธธใฏๅฏพๅฟœใ™ใ‚‹ใƒขใƒ‡ใƒซใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใ‚‹ใ‚ณใƒผใƒ‘ใ‚นใง่กŒใ‚ใ‚Œใ‚‹ๅฝขๅผใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚ <a id='byte-pair-encoding'></a> ### Byte-Pair Encoding๏ผˆBPE๏ผ‰ Byte-Pair Encoding๏ผˆBPE๏ผ‰ใฏใ€[Neural Machine Translation of Rare Words with Subword Units๏ผˆSennrich et al., 2015๏ผ‰](https://arxiv.org/abs/1508.07909)ใงๅฐŽๅ…ฅใ•ใ‚Œใพใ—ใŸใ€‚BPEใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใ‚’ๅ˜่ชžใซๅˆ†ๅ‰ฒใ™ใ‚‹ใƒ—ใƒชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚ใƒ—ใƒชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใฏใ€็ฉบ็™ฝใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใชใฉใ€้žๅธธใซๅ˜็ด”ใชใ‚‚ใฎใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€[GPT-2](model_doc/gpt2)ใ€[RoBERTa](model_doc/roberta)ใงใ™ใ€‚ใ‚ˆใ‚Š้ซ˜ๅบฆใชใƒ—ใƒชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใซใฏใ€ใƒซใƒผใƒซใƒ™ใƒผใ‚นใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณ๏ผˆ[XLM](model_doc/xlm)ใ€[FlauBERT](model_doc/flaubert)ใชใฉใŒๅคง้ƒจๅˆ†ใฎ่จ€่ชžใซMosesใ‚’ไฝฟ็”จ๏ผ‰ใ‚„ใ€[GPT](model_doc/gpt)๏ผˆSpacyใจftfyใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ณใƒผใƒ‘ใ‚นๅ†…ใฎๅ„ๅ˜่ชžใฎ้ ปๅบฆใ‚’ๆ•ฐใˆใ‚‹๏ผ‰ใชใฉใŒๅซใพใ‚Œใพใ™ใ€‚ ใƒ—ใƒชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใฎๅพŒใ€ไธ€ๆ„ใฎๅ˜่ชžใ‚ปใƒƒใƒˆใŒไฝœๆˆใ•ใ‚Œใ€ๅ„ๅ˜่ชžใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใงๅ‡บ็พใ—ใŸ้ ปๅบฆใŒๆฑบๅฎšใ•ใ‚Œใพใ™ใ€‚ๆฌกใซใ€BPEใฏใƒ™ใƒผใ‚น่ชžๅฝ™ใ‚’ไฝœๆˆใ—ใ€ใƒ™ใƒผใ‚น่ชžๅฝ™ใฎไบŒใคใฎใ‚ทใƒณใƒœใƒซใ‹ใ‚‰ๆ–ฐใ—ใ„ใ‚ทใƒณใƒœใƒซใ‚’ๅฝขๆˆใ™ใ‚‹ใŸใ‚ใฎใƒžใƒผใ‚ธใƒซใƒผใƒซใ‚’ๅญฆ็ฟ’ใ—ใพใ™ใ€‚ใ“ใฎใƒ—ใƒญใ‚ปใ‚นใฏใ€่ชžๅฝ™ใŒๆ‰€ๆœ›ใฎ่ชžๅฝ™ใ‚ตใ‚คใ‚บใซ้”ใ™ใ‚‹ใพใง็ถšใ‘ใ‚‰ใ‚Œใพใ™ใ€‚ใชใŠใ€ๆ‰€ๆœ›ใฎ่ชžๅฝ™ใ‚ตใ‚คใ‚บใฏใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ‰ใซๅฎš็พฉใ™ใ‚‹ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ไพ‹ใจใ—ใฆใ€ใƒ—ใƒชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใฎๅพŒใ€ๆฌกใฎใ‚ปใƒƒใƒˆใฎๅ˜่ชžใจใใฎๅ‡บ็พ้ ปๅบฆใŒๆฑบๅฎšใ•ใ‚ŒใŸใจไปฎๅฎšใ—ใพใ—ใ‚‡ใ†๏ผš ``` ("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5) ``` ใ—ใŸใŒใฃใฆใ€ใƒ™ใƒผใ‚น่ชžๅฝ™ใฏใ€Œ["b", "g", "h", "n", "p", "s", "u"]ใ€ใงใ™ใ€‚ใ™ในใฆใฎๅ˜่ชžใ‚’ใƒ™ใƒผใ‚น่ชžๅฝ™ใฎใ‚ทใƒณใƒœใƒซใซๅˆ†ๅ‰ฒใ™ใ‚‹ใจใ€ๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ``` ("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5) ``` ใใฎๅพŒใ€BPEใฏๅฏ่ƒฝใชใ™ในใฆใฎใ‚ทใƒณใƒœใƒซใƒšใ‚ขใฎ้ ปๅบฆใ‚’ๆ•ฐใˆใ€ๆœ€ใ‚‚้ ป็นใซ็™บ็”Ÿใ™ใ‚‹ใ‚ทใƒณใƒœใƒซใƒšใ‚ขใ‚’้ธๆŠžใ—ใพใ™ใ€‚ไธŠ่จ˜ใฎไพ‹ใงใฏใ€`"h"`ใฎๅพŒใซ`"u"`ใŒ15ๅ›ž๏ผˆ`"hug"`ใฎ10ๅ›žใ€`"hugs"`ใฎ5ๅ›ž๏ผ‰ๅ‡บ็พใ—ใพใ™ใ€‚ใ—ใ‹ใ—ใ€ๆœ€ใ‚‚้ ป็นใชใ‚ทใƒณใƒœใƒซใƒšใ‚ขใฏใ€ๅˆ่จˆใง20ๅ›ž๏ผˆ`"u"`ใฎ10ๅ›žใ€`"g"`ใฎ5ๅ›žใ€`"u"`ใฎ5ๅ›ž๏ผ‰ๅ‡บ็พใ™ใ‚‹`"u"`ใฎๅพŒใซ`"g"`ใŒ็ถšใใ‚ทใƒณใƒœใƒซใƒšใ‚ขใงใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒๆœ€ๅˆใซๅญฆ็ฟ’ใ™ใ‚‹ใƒžใƒผใ‚ธใƒซใƒผใƒซใฏใ€`"u"`ใฎๅพŒใซ`"g"`ใŒ็ถšใใ™ในใฆใฎ`"u"`ใ‚ทใƒณใƒœใƒซใ‚’ไธ€็ท’ใซใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ๆฌกใซใ€`"ug"`ใŒ่ชžๅฝ™ใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ๅ˜่ชžใฎใ‚ปใƒƒใƒˆใฏๆฌกใซใชใ‚Šใพใ™๏ผš 
```
("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5)
```

BPE then identifies the next most common symbol pair: `"u"` followed by `"n"`, which occurs 16 times. So `"u"` and `"n"` are merged into `"un"`, which is added to the vocabulary. The next most frequent symbol pair is `"h"` followed by `"ug"`, occurring 15 times. Again the pair is merged, and `"hug"` can be added to the vocabulary.

At this stage, the vocabulary is `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]`, and our set of unique words is represented as:

```
("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5)
```

Assuming that Byte-Pair Encoding training stops at this point, the learned merge rules are then applied to new words (as long as those new words do not contain symbols that were not in the base vocabulary). For instance, the word `"bug"` would be tokenized as `["b", "ug"]`, but `"mug"` would be tokenized as `["<unk>", "ug"]` since the symbol `"m"` is not in the base vocabulary. In general, single letters such as `"m"` are not replaced by the `"<unk>"` symbol, because the training data usually contains at least one occurrence of each letter, but it is likely to happen for very special characters like emojis.

As mentioned earlier, the vocabulary size, i.e. the base vocabulary size plus the number of merges, is a hyperparameter to choose. For instance, [GPT](model_doc/gpt) has a vocabulary size of 40,478 since it has 478 base characters and training was stopped after 40,000 merges.
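The merge loop walked through above is short enough to sketch in plain Python. The following is an illustrative toy implementation, not the actual 🤗 Tokenizers code; it reproduces the three merges (`"ug"`, `"un"`, `"hug"`) on the example corpus:

```py
from collections import Counter

# The toy corpus: words split into current symbols, with their frequencies.
corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}

def most_frequent_pair(corpus):
    # Count every adjacent symbol pair, weighted by word frequency.
    pairs = Counter()
    for word, freq in corpus.items():
        for left, right in zip(word, word[1:]):
            pairs[(left, right)] += freq
    return pairs.most_common(1)[0][0]

def apply_merge(corpus, pair):
    # Rewrite every word, fusing each occurrence of the chosen pair.
    merged = {}
    for word, freq in corpus.items():
        symbols, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                symbols.append(word[i] + word[i + 1])
                i += 2
            else:
                symbols.append(word[i])
                i += 1
        merged[tuple(symbols)] = freq
    return merged

for _ in range(3):  # the three merges described above
    pair = most_frequent_pair(corpus)
    print("merging", pair)
    corpus = apply_merge(corpus, pair)
# merging ('u', 'g'), then ('u', 'n'), then ('h', 'ug')
```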
ป็นใชใ‚ทใƒณใƒœใƒซใƒšใ‚ขใ‚’้ธๆŠžใ™ใ‚‹ใฎใงใฏใชใใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใซ่ฟฝๅŠ ใ—ใŸๅ ดๅˆใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใฎๅฐคๅบฆใ‚’ๆœ€ๅคงๅŒ–ใ™ใ‚‹ใ‚ทใƒณใƒœใƒซใƒšใ‚ขใ‚’้ธๆŠžใ—ใพใ™ใ€‚ ใ“ใ‚Œใฏๅ…ทไฝ“็š„ใซใฏใฉใ†ใ„ใ†ๆ„ๅ‘ณใงใ™ใ‹๏ผŸๅ‰ใฎไพ‹ใ‚’ๅ‚็…งใ™ใ‚‹ใจใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใฎๅฐคๅบฆใ‚’ๆœ€ๅคงๅŒ–ใ™ใ‚‹ใ“ใจใฏใ€ใใฎใ‚ทใƒณใƒœใƒซใƒšใ‚ขใฎ็ขบ็އใ‚’ใใฎๆœ€ๅˆใฎใ‚ทใƒณใƒœใƒซใซ็ถšใ2็•ช็›ฎใฎใ‚ทใƒณใƒœใƒซใฎ็ขบ็އใงๅ‰ฒใฃใŸใ‚‚ใฎใŒใ€ใ™ในใฆใฎใ‚ทใƒณใƒœใƒซใƒšใ‚ขใฎไธญใงๆœ€ใ‚‚ๅคงใใ„ๅ ดๅˆใซ่ฉฒๅฝ“ใ™ใ‚‹ใ‚ทใƒณใƒœใƒซใƒšใ‚ขใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใซ็ญ‰ใ—ใ„ใงใ™ใ€‚ ใŸใจใˆใฐใ€"u" ใฎๅพŒใซ "g" ใŒ็ถšใๅ ดๅˆใ€ไป–ใฎใฉใฎใ‚ทใƒณใƒœใƒซใƒšใ‚ขใ‚ˆใ‚Šใ‚‚ "ug" ใฎ็ขบ็އใ‚’ "u"ใ€"g" ใงๅ‰ฒใฃใŸ็ขบ็އใŒ้ซ˜ใ‘ใ‚Œใฐใ€ใใ‚Œใ‚‰ใฎใ‚ทใƒณใƒœใƒซใฏ็ตๅˆใ•ใ‚Œใพใ™ใ€‚็›ดๆ„Ÿ็š„ใซ่จ€ใˆใฐใ€WordPieceใฏ2ใคใฎใ‚ทใƒณใƒœใƒซใ‚’็ตๅˆใ™ใ‚‹ใ“ใจใซใ‚ˆใฃใฆๅคฑใ‚ใ‚Œใ‚‹ใ‚‚ใฎใ‚’่ฉ•ไพกใ—ใ€ใใ‚ŒใŒใใ‚Œใซๅ€คใ™ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ™ใ‚‹็‚นใงBPEใจใฏใ‚ใšใ‹ใซ็•ฐใชใ‚Šใพใ™ใ€‚ ### Unigram Unigramใฏใ€[Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf) ใงๅฐŽๅ…ฅใ•ใ‚ŒใŸใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใงใ™ใ€‚ BPEใ‚„WordPieceใจใฏ็•ฐใชใ‚Šใ€Unigramใฏใƒ™ใƒผใ‚นใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใ‚’ๅคšๆ•ฐใฎใ‚ทใƒณใƒœใƒซใงๅˆๆœŸๅŒ–ใ—ใ€ๅ„ใ‚ทใƒณใƒœใƒซใ‚’ๅ‰Šๆธ›ใ—ใฆใ‚ˆใ‚Šๅฐใ•ใชใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ ใƒ™ใƒผใ‚นใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใฏใ€ไบ‹ๅ‰ใซใƒˆใƒผใ‚ฏใƒณๅŒ–ใ•ใ‚ŒใŸใ™ในใฆใฎๅ˜่ชžใจๆœ€ใ‚‚ไธ€่ˆฌ็š„ใช้ƒจๅˆ†ๆ–‡ๅญ—ๅˆ—ใซๅฏพๅฟœใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ Unigramใฏtransformersใฎใƒขใƒ‡ใƒซใฎ็›ดๆŽฅใฎไฝฟ็”จใซใฏ้ฉใ—ใฆใ„ใพใ›ใ‚“ใŒใ€[SentencePiece](#sentencepiece)ใจ็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ๅ„ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚นใƒ†ใƒƒใƒ—ใงใ€Unigramใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใฏ็พๅœจใฎใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใจใƒฆใƒ‹ใ‚ฐใƒฉใƒ ่จ€่ชžใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟไธŠใฎๆๅคฑ๏ผˆ้€šๅธธใฏๅฏพๆ•ฐๅฐคๅบฆใจใ—ใฆๅฎš็พฉ๏ผ‰ใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ใใฎๅพŒใ€ใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชๅ†…ใฎๅ„ใ‚ทใƒณใƒœใƒซใซใคใ„ใฆใ€ใใฎใ‚ทใƒณใƒœใƒซใŒใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใ‹ใ‚‰ๅ‰Š้™คใ•ใ‚ŒใŸๅ ดๅˆใซๅ…จไฝ“ใฎๆๅคฑใŒใฉใ‚Œใ ใ‘ๅข—ๅŠ ใ™ใ‚‹ใ‹ใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚ Unigramใฏใ€ๆๅคฑใฎๅข—ๅŠ ใŒๆœ€ใ‚‚ไฝŽใ„p๏ผˆ้€šๅธธใฏ10๏ผ…ใพใŸใฏ20๏ผ…๏ผ‰ใƒ‘ใƒผใ‚ปใƒณใƒˆใฎใ‚ทใƒณใƒœใƒซใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚ใคใพใ‚Šใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟๅ…จไฝ“ใฎๆๅคฑใซๆœ€ใ‚‚ๅฝฑ้Ÿฟใ‚’ไธŽใˆใชใ„ใ€ๆœ€ใ‚‚ๆๅคฑใฎๅฐ‘ใชใ„ใ‚ทใƒณใƒœใƒซใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚ ใ“ใฎใƒ—ใƒญใ‚ปใ‚นใฏใ€ใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใŒๆœ›ใพใ—ใ„ใ‚ตใ‚คใ‚บใซ้”ใ™ใ‚‹ใพใง็นฐใ‚Š่ฟ”ใ•ใ‚Œใพใ™ใ€‚ Unigramใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใฏๅธธใซใƒ™ใƒผใ‚นๆ–‡ๅญ—ใ‚’ไฟๆŒใ™ใ‚‹ใŸใ‚ใ€ไปปๆ„ใฎๅ˜่ชžใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใงใใพใ™ใ€‚ Unigramใฏใƒžใƒผใ‚ธใƒซใƒผใƒซใซๅŸบใฅใ„ใฆใ„ใชใ„ใŸใ‚๏ผˆBPEใจWordPieceใจใฏๅฏพ็…ง็š„ใซ๏ผ‰ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅพŒใฎๆ–ฐใ—ใ„ใƒ†ใ‚ญใ‚นใƒˆใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใซใฏใ„ใใคใ‹ใฎๆ–นๆณ•ใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใจใ—ใฆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸUnigramใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒๆŒใคใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใŒๆฌกใฎใ‚ˆใ†ใชๅ ดๅˆ๏ผš ``` ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"], ``` `"hugs"`ใฏใ€`["hug", "s"]`ใ€`["h", "ug", "s"]`ใ€ใพใŸใฏ`["h", "u", "g", "s"]`ใฎใ‚ˆใ†ใซใƒˆใƒผใ‚ฏใƒณๅŒ–ใงใใพใ™ใ€‚ใงใฏใ€ใฉใ‚Œใ‚’้ธๆŠžใ™ในใใงใ—ใ‚‡ใ†ใ‹๏ผŸ 
Unigramใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ณใƒผใƒ‘ใ‚นๅ†…ใฎๅ„ใƒˆใƒผใ‚ฏใƒณใฎ็ขบ็އใ‚’ไฟๅญ˜ใ—ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅพŒใซๅ„ๅฏ่ƒฝใชใƒˆใƒผใ‚ฏใƒณๅŒ–ใฎ็ขบ็އใ‚’่จˆ็ฎ—ใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใ“ใฎใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใฏๅฎŸ้š›ใซใฏๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚’้ธๆŠžใ—ใพใ™ใŒใ€็ขบ็އใซๅพ“ใฃใฆๅฏ่ƒฝใชใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚’ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ™ใ‚‹ใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚‚ๆไพ›ใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎ็ขบ็އใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซไฝฟ็”จใ™ใ‚‹ๆๅคฑใซใ‚ˆใฃใฆๅฎš็พฉใ•ใ‚Œใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใŒๅ˜่ชž \\(x_{1}, \dots, x_{N}\\) ใงๆง‹ๆˆใ•ใ‚Œใ€ๅ˜่ชž \\(x_{i}\\) ใฎใ™ในใฆใฎๅฏ่ƒฝใชใƒˆใƒผใ‚ฏใƒณๅŒ–ใฎใ‚ปใƒƒใƒˆใŒ \\(S(x_{i})\\) ใจๅฎš็พฉใ•ใ‚Œใ‚‹ๅ ดๅˆใ€ๅ…จไฝ“ใฎๆๅคฑใฏๆฌกใฎใ‚ˆใ†ใซๅฎš็พฉใ•ใ‚Œใพใ™ใ€‚ $$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$ <a id='sentencepiece'></a> ### SentencePiece ใ“ใ‚Œใพใงใซ่ชฌๆ˜Žใ—ใŸใ™ในใฆใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใซใฏๅŒใ˜ๅ•้กŒใŒใ‚ใ‚Šใพใ™ใ€‚ใใ‚Œใฏใ€ๅ…ฅๅŠ›ใƒ†ใ‚ญใ‚นใƒˆใŒๅ˜่ชžใ‚’ๅŒบๅˆ‡ใ‚‹ใŸใ‚ใซใ‚นใƒšใƒผใ‚นใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใจไปฎๅฎšใ—ใฆใ„ใ‚‹ใจใ„ใ†ใ“ใจใงใ™ใ€‚ใ—ใ‹ใ—ใ€ใ™ในใฆใฎ่จ€่ชžใŒๅ˜่ชžใ‚’ๅŒบๅˆ‡ใ‚‹ใŸใ‚ใซใ‚นใƒšใƒผใ‚นใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ“ใฎๅ•้กŒใ‚’ไธ€่ˆฌ็š„ใซ่งฃๆฑบใ™ใ‚‹ใŸใ‚ใฎ1ใคใฎๆ–นๆณ•ใฏใ€่จ€่ชžๅ›บๆœ‰ใฎๅ‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™๏ผˆไพ‹๏ผš[XLM](model_doc/xlm)ใฏ็‰นๅฎšใฎไธญๅ›ฝ่ชžใ€ๆ—ฅๆœฌ่ชžใ€ใŠใ‚ˆใณใ‚ฟใ‚ค่ชžใฎๅ‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™๏ผ‰ใ€‚ใ‚ˆใ‚Šไธ€่ˆฌ็š„ใซใ“ใฎๅ•้กŒใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใซใ€[SentencePiece๏ผšใƒ‹ใƒฅใƒผใƒฉใƒซใƒ†ใ‚ญใ‚นใƒˆๅ‡ฆ็†ใฎใŸใ‚ใฎใ‚ทใƒณใƒ—ใƒซใง่จ€่ชž้žไพๅญ˜ใฎใ‚ตใƒ–ใƒฏใƒผใƒ‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŠใ‚ˆใณใƒ‡ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ๏ผˆKudo et al.ใ€2018๏ผ‰](https://arxiv.org/pdf/1808.06226.pdf) ใฏใ€ๅ…ฅๅŠ›ใ‚’็”Ÿใฎๅ…ฅๅŠ›ใ‚นใƒˆใƒชใƒผใƒ ใจใ—ใฆๆ‰ฑใ„ใ€ใ‚นใƒšใƒผใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ๆ–‡ๅญ—ใฎใ‚ปใƒƒใƒˆใซๅซใ‚ใพใ™ใ€‚ใใ‚Œใ‹ใ‚‰BPEใพใŸใฏunigramใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใ‚’ไฝฟ็”จใ—ใฆ้ฉๅˆ‡ใช่ชžๅฝ™ใ‚’ๆง‹็ฏ‰ใ—ใพใ™ใ€‚ ใŸใจใˆใฐใ€[`XLNetTokenizer`]ใฏSentencePieceใ‚’ไฝฟ็”จใ—ใฆใŠใ‚Šใ€ใใฎใŸใ‚ใซๅ‰่ฟฐใฎไพ‹ใง`"โ–"`ๆ–‡ๅญ—ใŒ่ชžๅฝ™ใซๅซใพใ‚Œใฆใ„ใพใ—ใŸใ€‚SentencePieceใ‚’ไฝฟ็”จใ—ใŸใƒ‡ใ‚ณใƒผใƒ‰ใฏ้žๅธธใซ็ฐกๅ˜ใงใ€ใ™ในใฆใฎใƒˆใƒผใ‚ฏใƒณใ‚’ๅ˜็ด”ใซ้€ฃ็ตใ—ใ€`"โ–"`ใฏใ‚นใƒšใƒผใ‚นใซ็ฝฎๆ›ใ•ใ‚Œใพใ™ใ€‚ ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎใ™ในใฆใฎtransformersใƒขใƒ‡ใƒซใฏใ€SentencePieceใ‚’unigramใจ็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จใ—ใพใ™ใ€‚SentencePieceใ‚’ไฝฟ็”จใ™ใ‚‹ใƒขใƒ‡ใƒซใฎไพ‹ใซใฏใ€[ALBERT](model_doc/albert)ใ€[XLNet](model_doc/xlnet)ใ€[Marian](model_doc/marian)ใ€ใŠใ‚ˆใณ[T5](model_doc/t5)ใŒใ‚ใ‚Šใพใ™ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/task_summary.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# What 🤗 Transformers can do

🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, it also includes non-Transformer models such as modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind them. Want to remove a background object from a picture taken with your smartphone? That is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the sections below!).

This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code!

## Audio

Audio and speech processing tasks are a little different from the other modalities, mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform cannot be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher and the audio more closely resembles the original audio source.

Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start by feeding the raw audio waveform directly to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features.

### Audio classification

Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include:

- acoustic scene classification: label audio with a scene label ("office", "beach", "stadium")
- acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking")
- tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
- music classification: label music with a genre label ("metal", "hip-hop", "country")

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
>>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4532, 'label': 'hap'},
 {'score': 0.3622, 'label': 'sad'},
 {'score': 0.0943, 'label': 'neu'},
 {'score': 0.0903, 'label': 'ang'}]
```

### Automatic speech recognition

Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks, partly because speech is such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather.

One of the key challenges that Transformer architectures have helped with is low-resource languages. By pretraining on large amounts of speech data and then fine-tuning on only one hour of labeled speech data in a low-resource language, it is still possible to obtain high-quality results compared to previous ASR systems, which were trained on 100x more labeled data.

```py
>>> from transformers import pipeline

>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```

## Computer vision

One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers using a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value, which makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image (see the sketch after the list below for a concrete illustration).

Two general ways in which computer vision tasks can be solved are:

1. Use convolutions to learn the hierarchical features of an image, from low-level features up to high-level abstract things.
2. Split the image into patches and use a Transformer to gradually learn how each image patch relates to the image as a whole. Unlike the bottom-up approach favored by CNNs, this is a bit like starting with a blurry image and gradually bringing it into focus.
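Here is a small sketch of the pixel-matrix idea, loading the sample image used later on this page as a NumPy array (it assumes Pillow, NumPy, and `requests` are installed):

```py
# An image is just an array of pixel values: (height, width, color channels).
import numpy as np
import requests
from PIL import Image

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
pixels = np.array(image)
print(pixels.shape)  # e.g. (height, width, 3): one value per RGB channel
```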
็”ปๅƒใ‚’ใƒ‘ใƒƒใƒใซๅˆ†ๅ‰ฒใ—ใ€ๅ„็”ปๅƒใƒ‘ใƒƒใƒใŒ็”ปๅƒๅ…จไฝ“ใจใฉใฎใ‚ˆใ†ใซ้–ข้€ฃใ—ใฆใ„ใ‚‹ใ‹ใ‚’ๅพใ€…ใซๅญฆ็ฟ’ใ™ใ‚‹ใŸใ‚ใซTransformerใ‚’ไฝฟ็”จใ™ใ‚‹ใ€‚CNNใŒๅฅฝใ‚€ใƒœใƒˆใƒ ใ‚ขใƒƒใƒ—ใ‚ขใƒ—ใƒญใƒผใƒใจใฏ็•ฐใชใ‚Šใ€ใ“ใ‚Œใฏใผใ‚“ใ‚„ใ‚Šใจใ—ใŸ็”ปๅƒใ‹ใ‚‰ๅง‹ใ‚ใฆๅพใ€…ใซ็„ฆ็‚นใ‚’ๅˆใ‚ใ›ใ‚‹ใ‚ˆใ†ใชใ‚‚ใฎใงใ™ใ€‚ ### Image classification ็”ปๅƒๅˆ†้กžใฏใ€ไบ‹ๅ‰ใซๅฎš็พฉใ•ใ‚ŒใŸใ‚ฏใƒฉใ‚นใฎใ‚ปใƒƒใƒˆใ‹ใ‚‰็”ปๅƒๅ…จไฝ“ใซใƒฉใƒ™ใƒซใ‚’ไป˜ใ‘ใพใ™ใ€‚ๅคšใใฎๅˆ†้กžใ‚ฟใ‚นใ‚ฏใจๅŒๆง˜ใซใ€็”ปๅƒๅˆ†้กžใซใฏๅคšใใฎๅฎŸ็”จ็š„ใช็”จ้€”ใŒใ‚ใ‚Šใ€ใใฎไธ€้ƒจใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš * ๅŒป็™‚๏ผš็–พๆ‚ฃใฎๆคœๅ‡บใ‚„ๆ‚ฃ่€…ใฎๅฅๅบทใฎ็›ฃ่ฆ–ใซไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซๅŒป็™‚็”ปๅƒใซใƒฉใƒ™ใƒซใ‚’ไป˜ใ‘ใ‚‹ * ็’ฐๅขƒ๏ผšๆฃฎๆž—ไผๆŽกใฎ็›ฃ่ฆ–ใ€้‡Ž็”Ÿๅœฐใฎ็ฎก็†ๆƒ…ๅ ฑใ€ใพใŸใฏๅฑฑ็ซไบ‹ใฎๆคœๅ‡บใซไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซ่ก›ๆ˜Ÿ็”ปๅƒใซใƒฉใƒ™ใƒซใ‚’ไป˜ใ‘ใ‚‹ * ่พฒๆฅญ๏ผšไฝœ็‰ฉใฎๅฅๅบทใ‚’็›ฃ่ฆ–ใ™ใ‚‹ใŸใ‚ใฎไฝœ็‰ฉใฎ็”ปๅƒใ‚„ใ€ๅœŸๅœฐๅˆฉ็”จใฎ็›ฃ่ฆ–ใฎใŸใ‚ใฎ่ก›ๆ˜Ÿ็”ปๅƒใซใƒฉใƒ™ใƒซใ‚’ไป˜ใ‘ใ‚‹ * ็”Ÿๆ…‹ๅญฆ๏ผš้‡Ž็”Ÿๅ‹•็‰ฉใฎๅ€‹ไฝ“ๆ•ฐใ‚’็›ฃ่ฆ–ใ—ใŸใ‚Šใ€็ตถๆป…ๅฑๆƒง็จฎใ‚’่ฟฝ่ทกใ™ใ‚‹ใŸใ‚ใซๅ‹•ๆค็‰ฉใฎ็จฎใฎ็”ปๅƒใซใƒฉใƒ™ใƒซใ‚’ไป˜ใ‘ใ‚‹ ```py >>> from transformers import pipeline >>> classifier = pipeline(task="image-classification") >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'} {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'} {'score': 0.0239, 'label': 'Egyptian cat'} {'score': 0.0229, 'label': 'tiger cat'} ``` ### Object detection ็”ปๅƒๅˆ†้กžใจใฏ็•ฐใชใ‚Šใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใฏ็”ปๅƒๅ†…ใฎ่ค‡ๆ•ฐใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’่ญ˜ๅˆฅใ—ใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎไฝ็ฝฎใ‚’็”ปๅƒๅ†…ใงๅฎš็พฉใ™ใ‚‹ๅขƒ็•Œใƒœใƒƒใ‚ฏใ‚นใซใ‚ˆใฃใฆ็‰นๅฎšใ—ใพใ™ใ€‚ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใฎไพ‹ใซใฏๆฌกใฎใ‚ˆใ†ใช็”จ้€”ใŒใ‚ใ‚Šใพใ™๏ผš * ่‡ชๅ‹•้‹่ปข่ปŠ๏ผšไป–ใฎ่ปŠไธกใ€ๆญฉ่กŒ่€…ใ€ไฟกๅทๆฉŸใชใฉใฎๆ—ฅๅธธใฎไบค้€šใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๆคœๅ‡บ * ใƒชใƒขใƒผใƒˆใ‚ปใƒณใ‚ทใƒณใ‚ฐ๏ผš็ฝๅฎณใƒขใƒ‹ใ‚ฟใƒชใƒณใ‚ฐใ€้ƒฝๅธ‚่จˆ็”ปใ€ๅคฉๅ€™ไบˆๆธฌ * ๆฌ ้™ฅๆคœๅ‡บ๏ผšๅปบ็‰ฉใฎใ‚ฏใƒฉใƒƒใ‚ฏใ‚„ๆง‹้€ ไธŠใฎๆๅ‚ทใ€่ฃฝ้€ ไธŠใฎๆฌ ้™ฅใ‚’ๆคœๅ‡บ ```py >>> from transformers import pipeline >>> detector = pipeline(task="object-detection") >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... 
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
>>> preds
[{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}]
```

### Image segmentation

Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which labels and predicts objects in an image using bounding boxes, because segmentation is more granular: it can detect objects at the pixel level. There are several types of image segmentation:

* instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2")
* panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class and also labels each distinct instance of an object

Segmentation tasks are helpful in self-driving vehicles for creating a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. They are also useful in medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes, or to create augmented reality experiences by overlaying objects on the real world through your camera.

```py
>>> from transformers import pipeline

>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.9879, 'label': 'LABEL_184'}
{'score': 0.9973, 'label': 'snow'}
{'score': 0.9972, 'label': 'cat'}
```

### Depth estimation

Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, self-driving cars need to understand how far away objects like pedestrians, traffic signs, and other vehicles are in order to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings.

There are two approaches to depth estimation:

* stereo: depths are estimated by comparing two images of the same scene taken from slightly different angles
* monocular: depths are estimated from a single image

```py
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```

## Natural language processing

NLP tasks are among the most common types of tasks, because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, a sequence of text can be represented as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!

### Text classification

Like classification tasks in any modality, text classification labels a sequence of text (which can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include:

* sentiment analysis: label text according to some polarity like `positive` or `negative`, which can inform and support decision-making in fields like politics, finance, and marketing
* content classification: label text according to some topic to help organize and filter information in news and social media feeds (`weather`, `sports`, `finance`, etc.)

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="sentiment-analysis")
>>> preds = classifier("Hugging Face is the best thing since sliced bread!")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```

### Token classification

In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](/glossary#token). Token classification assigns each token a label from a predefined set of classes.

Two common types of token classification are:

* named entity recognition (NER): label a token according to an entity category like organization, person, location, or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
* part-of-speech tagging (POS): label a token according to its part of speech, such as noun, verb, or adjective. POS is useful for helping translation systems understand how identical words are grammatically different (e.g. "bank" as a noun versus "bank" as a verb).

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="ner")
>>> preds = classifier("Hugging Face is a French company based in New York City.")
>>> preds = [
...     {
...         "entity": pred["entity"],
...         "score": round(pred["score"], 4),
...         "index": pred["index"],
...         "word": pred["word"],
...         "start": pred["start"],
...         "end": pred["end"],
...     }
...     for pred in preds
... ]
>>> print(*preds, sep="\n")
{'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2}
{'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7}
{'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12}
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```

### Question answering

Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support, and help search engines retrieve the relevant information you're asking for.

There are two common types of question answering:

* extractive: given a question and some context, the answer is a span of text from the context that the model must extract
* abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`]

```py
>>> from transformers import pipeline

>>> question_answerer = pipeline(task="question-answering")
>>> preds = question_answerer(
...     question="What is the name of the repository?",
...     context="The name of the repository is huggingface/transformers",
... )
>>> print(
...     f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
... )
score: 0.9327, start: 30, end: 54, answer: huggingface/transformers
```

### Summarization

Summarization creates a shorter version of a longer text while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task: it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that can save readers time and serve as a reading aid.

Like question answering, there are two types of summarization:

* extractive: identify and extract the most important sentences from the original text
* abstractive: generate the target summary (which may include new words not in the original document) from the original text; the [`SummarizationPipeline`] uses the abstractive approach

```py
>>> from transformers import pipeline

>>> summarizer = pipeline(task="summarization")
>>> summarizer(
"In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles." ... ) [{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}] ``` ### Translation ็ฟป่จณใฏใ€ใ‚ใ‚‹่จ€่ชžใฎใƒ†ใ‚ญใ‚นใƒˆใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅˆฅใฎ่จ€่ชžใซๅค‰ๆ›ใ™ใ‚‹ไฝœๆฅญใงใ™ใ€‚ใ“ใ‚Œใฏ็•ฐใชใ‚‹ใƒใƒƒใ‚ฏใ‚ฐใƒฉใ‚ฆใƒณใƒ‰ใ‚’ๆŒใคไบบใ€…ใŒใ‚ณใƒŸใƒฅใƒ‹ใ‚ฑใƒผใ‚ทใƒงใƒณใ‚’ใจใ‚‹ใฎใซๅฝน็ซ‹ใกใ€ๅบƒ็ฏ„ใช่ฆณๅฎขใซใ‚ณใƒณใƒ†ใƒณใƒ„ใ‚’็ฟป่จณใ—ใฆไผใˆใ‚‹ใฎใซๅฝน็ซ‹ใกใ€ๆ–ฐใ—ใ„่จ€่ชžใ‚’ๅญฆใถใฎใ‚’ๆ”ฏๆดใ™ใ‚‹ๅญฆ็ฟ’ใƒ„ใƒผใƒซใซใ‚‚ใชใ‚Šใพใ™ใ€‚่ฆ็ด„ใจๅ…ฑใซใ€็ฟป่จณใฏใ‚ทใƒผใ‚ฑใƒณใ‚น้–“ใฎใ‚ฟใ‚นใ‚ฏใงใ‚ใ‚Šใ€ใƒขใƒ‡ใƒซใฏๅ…ฅๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅ—ใ‘ๅ–ใ‚Šใ€ใ‚ฟใƒผใ‚ฒใƒƒใƒˆใฎๅ‡บๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’่ฟ”ใ—ใพใ™ใ€‚ ๅˆๆœŸใฎ็ฟป่จณใƒขใƒ‡ใƒซใฏไธปใซๅ˜ไธ€่จ€่ชžใงใ—ใŸใŒใ€ๆœ€่ฟ‘ใงใฏๅคš่จ€่ชžใƒขใƒ‡ใƒซใซๅฏพใ™ใ‚‹้–ขๅฟƒใŒ้ซ˜ใพใ‚Šใ€ๅคšใใฎ่จ€่ชžๅฏพใง็ฟป่จณใงใใ‚‹ใ‚ˆใ†ใชๅคš่จ€่ชžใƒขใƒ‡ใƒซใซๆณจ็›ฎใŒ้›†ใพใฃใฆใ„ใพใ™ใ€‚ ```py >>> from transformers import pipeline >>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." >>> translator = pipeline(task="translation", model="t5-small") >>> translator(text) [{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}] ``` ### ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใฎใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚ไบ‹ๅ‰ๅญฆ็ฟ’ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒซใฏใ€ๅคšใใฎไป–ใฎใƒ€ใ‚ฆใƒณใ‚นใƒˆใƒชใƒผใƒ ใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใงใใ‚‹ใŸใ‚ใ€้žๅธธใซไบบๆฐ—ใฎใ‚ใ‚‹NLPใ‚ฟใ‚นใ‚ฏใจใชใฃใฆใ„ใพใ™ใ€‚ๆœ€่ฟ‘ใงใฏใ€ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆใพใŸใฏใƒ•ใƒฅใƒผใ‚ทใƒงใƒƒใƒˆๅญฆ็ฟ’ใ‚’ๅฎŸ่จผใ™ใ‚‹ๅคง่ฆๆจกใช่จ€่ชžใƒขใƒ‡ใƒซ๏ผˆLLM๏ผ‰ใซๅคงใใช้–ขๅฟƒใŒๅฏ„ใ›ใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใƒขใƒ‡ใƒซใŒๆ˜Ž็คบ็š„ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใชใ„ใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใงใใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™๏ผ่จ€่ชžใƒขใƒ‡ใƒซใฏใ€ๆตๆšขใง่ชฌๅพ—ๅŠ›ใฎใ‚ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใงใใพใ™ใŒใ€ใƒ†ใ‚ญใ‚นใƒˆใŒๅธธใซๆญฃ็ขบใงใ‚ใ‚‹ใ‚ใ‘ใงใฏใชใ„ใŸใ‚ใ€ๆณจๆ„ใŒๅฟ…่ฆใงใ™ใ€‚ ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใซใฏ2ใคใฎใ‚ฟใ‚คใƒ—ใŒใ‚ใ‚Šใพใ™๏ผš * ๅ› ๆžœ็š„๏ผšใƒขใƒ‡ใƒซใฎ็›ฎๆจ™ใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎๆฌกใฎใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ™ใ‚‹ใ“ใจใงใ‚ใ‚Šใ€ๅฐ†ๆฅใฎใƒˆใƒผใ‚ฏใƒณใฏใƒžใ‚นใ‚ฏใ•ใ‚Œใพใ™ใ€‚ ```py >>> from transformers import pipeline >>> prompt = "Hugging Face is a community-based open-source platform for machine learning." 
>>> generator = pipeline(task="text-generation")
>>> generator(prompt)  # doctest: +SKIP
```

* masked: the model's objective is to predict a masked token in the sequence, with full access to the tokens in the sequence

```py
>>> text = "Hugging Face is a community-based open-source <mask> for machine learning."
>>> fill_mask = pipeline(task="fill-mask")
>>> preds = fill_mask(text, top_k=1)
>>> preds = [
...     {
...         "score": round(pred["score"], 4),
...         "token": pred["token"],
...         "token_str": pred["token_str"],
...         "sequence": pred["sequence"],
...     }
...     for pred in preds
... ]
>>> preds
[{'score': 0.2236,
  'token': 1761,
  'token_str': ' platform',
  'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}]
```

## Multimodal

Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task: the model takes an image as input and outputs a sequence of text describing the image or some properties of the image.

Although multimodal models work with different data types or modalities, internally the preprocessing steps help the model convert all the data types into embeddings (vectors or lists of numbers that hold meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings.

### Document question answering

Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task, which takes text as input, document question answering takes an image of a document as input along with a question about the document, and it returns an answer. Document question answering can be used to parse structured documents and extract key information from them. In the example below, the total amount and change due can be extracted from a receipt.

```py
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests

>>> url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices")
>>> preds = doc_question_answerer(
...     question="What is the total amount?",
...     image=image,
... )
>>> preds
[{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}]
```

Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next [section](tasks_explained), you'll learn **how** 🤗 Transformers works to solve these tasks.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/custom_tools.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Custom Tools and Prompts

<Tip>

If you are not aware of what tools and agents are in the context of transformers, we recommend you read the [Transformers Agents](transformers_agents) page first.

</Tip>

<Tip warning={true}>

Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

Creating and using custom tools and prompts is paramount to empowering the agent and having it perform new tasks. In this guide we'll take a look at:

- How to customize the prompt
- How to use custom tools
- How to create custom tools

## Customizing the prompt

As explained in [Transformers Agents](transformers_agents), agents can run in [`~Agent.run`] and [`~Agent.chat`] mode. Both the `run` and `chat` modes are based on the same logic: the language model powering the agent is conditioned on a long prompt and completes the prompt by generating the next tokens until the stop token is reached. The only difference between the two modes is that during `chat` mode the prompt is extended with previous user inputs and model generations. This gives the agent access to past interactions, seemingly giving it some kind of memory.

### Structure of the prompt

To understand how the prompt is structured and how it can best be customized, note that it is divided into roughly four parts:

1. The introduction: how the agent should behave, and an explanation of the concept of tools.
2. A description of all the tools. This is defined by a `<<all_tools>>` token that is dynamically replaced at runtime with the tools defined or chosen by the user.
3. A set of examples of tasks and their solutions.
4. The current example, and the request for a solution.

To understand each part better, let's look at a shortened version of how the `run` prompt can look:

````text
I will ask you to perform a task, and your job is to come up with a series of simple commands in Python that will perform the task.
[...]
ๆ„ๅ‘ณใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€ไธญ้–“็ตๆžœใ‚’่กจ็คบใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ใƒ„ใƒผใƒซ๏ผš - document_qa๏ผšใ“ใ‚Œใฏใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ๏ผˆpdf๏ผ‰ใซ้–ขใ™ใ‚‹่ณชๅ•ใซ็ญ”ใˆใ‚‹ใƒ„ใƒผใƒซใงใ™ใ€‚ๆƒ…ๅ ฑใ‚’ๅซใ‚€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใ‚ใ‚‹ `document` ใจใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใซ้–ขใ™ใ‚‹่ณชๅ•ใงใ‚ใ‚‹ `question` ใ‚’ๅ—ใ‘ๅ–ใ‚Šใ€่ณชๅ•ใซๅฏพใ™ใ‚‹ๅ›ž็ญ”ใ‚’ๅซใ‚€ใƒ†ใ‚ญใ‚นใƒˆใ‚’่ฟ”ใ—ใพใ™ใ€‚ - image_captioner๏ผšใ“ใ‚Œใฏ็”ปๅƒใฎ่ชฌๆ˜Žใ‚’็”Ÿๆˆใ™ใ‚‹ใƒ„ใƒผใƒซใงใ™ใ€‚ใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒณใซใ™ใ‚‹็”ปๅƒใงใ‚ใ‚‹ `image` ใจใ€่ชฌๆ˜Žใ‚’ๅซใ‚€่‹ฑ่ชžใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’่ฟ”ใ™ใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ—ใ‘ๅ–ใ‚Šใพใ™ใ€‚ [...] ใ‚ฟใ‚นใ‚ฏ: "ๅค‰ๆ•ฐ `question` ใซ้–ขใ™ใ‚‹่ณชๅ•ใซ็ญ”ใˆใ‚‹ใŸใ‚ใฎ็”ปๅƒใซใคใ„ใฆๅ›ž็ญ”ใ—ใฆใใ ใ•ใ„ใ€‚่ณชๅ•ใฏใƒ•ใƒฉใƒณใ‚น่ชžใงใ™ใ€‚" ๆฌกใฎใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ—ใพใ™๏ผš่ณชๅ•ใ‚’่‹ฑ่ชžใซ็ฟป่จณใ™ใ‚‹ใŸใ‚ใฎ `translator`ใ€ใใ—ใฆๅ…ฅๅŠ›็”ปๅƒใซ้–ขใ™ใ‚‹่ณชๅ•ใซ็ญ”ใˆใ‚‹ใŸใ‚ใฎ `image_qa`ใ€‚ ๅ›ž็ญ”๏ผš ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(image=image, question=translated_question) print(f"The answer is {answer}") ``` ใ‚ฟใ‚นใ‚ฏ๏ผšใ€Œ`document`ๅ†…ใงๆœ€ๅนด้•ทใฎไบบ็‰ฉใ‚’็‰นๅฎšใ—ใ€ใใฎ็ตๆžœใ‚’ใƒใƒŠใƒผใจใ—ใฆ่กจ็คบใ™ใ‚‹ใ€‚ใ€ ไปฅไธ‹ใฎใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ—ใพใ™๏ผš`document_qa`ใ‚’ไฝฟ็”จใ—ใฆใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆๅ†…ใงๆœ€ๅนด้•ทใฎไบบ็‰ฉใ‚’่ฆ‹ใคใ‘ใ€ใใฎๅ›ž็ญ”ใซๅพ“ใฃใฆ`image_generator`ใ‚’ไฝฟ็”จใ—ใฆ็”ปๅƒใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ ๅ›ž็ญ”๏ผš ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` [...] ใ‚ฟใ‚นใ‚ฏ: "ๅทใจๆน–ใฎ็ตตใ‚’ๆใ„ใฆใใ ใ•ใ„" ไปฅไธ‹ใฎใ‚‚ใฎใ‚’ไฝฟ็”จใ—ใพใ™ ```` ๅฐŽๅ…ฅ้ƒจๅˆ†๏ผˆ"Tools:"ใฎๅ‰ใฎใƒ†ใ‚ญใ‚นใƒˆ๏ผ‰ใฏใ€ใƒขใƒ‡ใƒซใฎๆŒฏใ‚‹่ˆžใ„ใจๅฎŸ่กŒใ™ในใใ‚ฟใ‚นใ‚ฏใ‚’ๆญฃ็ขบใซ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ ใ“ใฎ้ƒจๅˆ†ใฏใŠใใ‚‰ใใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๅธธใซๅŒใ˜ๆ–นๆณ•ใงๆŒฏใ‚‹่ˆžใ†ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ 2็•ช็›ฎใฎ้ƒจๅˆ†๏ผˆ"Tools"ใฎไธ‹ใฎ็ฎ‡ๆกๆ›ธใ๏ผ‰ใฏใ€`run`ใพใŸใฏ`chat`ใ‚’ๅ‘ผใณๅ‡บใ™ใŸใณใซๅ‹•็š„ใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ `agent.toolbox`ๅ†…ใฎใƒ„ใƒผใƒซใฎๆ•ฐใจๅŒใ˜ๆ•ฐใฎ็ฎ‡ๆกๆ›ธใใŒใ‚ใ‚Šใ€ใใ‚Œใžใ‚Œใฎ็ฎ‡ๆกๆ›ธใใซใฏใƒ„ใƒผใƒซใฎๅๅ‰ใจ่ชฌๆ˜ŽใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ ```text - <tool.name>: <tool.description> ``` ใ‚‚ใ†ใ™ใ็ขบ่ชใ—ใพใ—ใ‚‡ใ†ใ€‚ `document_qa` ใƒ„ใƒผใƒซใ‚’่ชญใฟ่พผใ‚“ใงๅๅ‰ใจ่ชฌๆ˜Žใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚ ```py from transformers import load_tool document_qa = load_tool("document-question-answering") print(f"- {document_qa.name}: {document_qa.description}") ``` which gives: ```text - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. 
The tool description consists of two parts: the first explains what the tool does, and the second states what input arguments and return values are expected.

A good tool name and description are very important for the agent to use the tool correctly. Note that the only information the agent has about a tool is its name and description, so both should be precisely written and should match the style of the existing tools in the toolbox. In particular, make sure the description mentions all the arguments expected by name in code style, along with their expected types and a description of what they are.

<Tip>

Check the naming and descriptions of the curated Transformers tools to better understand what name and description a tool is expected to have. You can see all tools with the [`Agent.toolbox`] property.

</Tip>

Customized examples: the third part includes a set of curated examples that show the agent exactly what code it should produce for what kind of user request. These examples are very important, because the large language models empowering the agent are extremely good at recognizing patterns in a prompt and repeating the pattern with new data. Therefore, the examples need to be written in a way that maximizes the likelihood of the agent generating correct, executable code in practice.

Let's have a look at one example:

````text
Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."

I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
````

ใƒ‘ใ‚ฟใƒผใƒณ๏ผšใƒขใƒ‡ใƒซใŒ็นฐใ‚Š่ฟ”ใ™ใ‚ˆใ†ใซๆŒ‡็คบใ•ใ‚Œใ‚‹ใƒ‘ใ‚ฟใƒผใƒณใซใฏใ€3ใคใฎ้ƒจๅˆ†ใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚ฟใ‚นใ‚ฏใฎๅฃฐๆ˜Žใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎๆ„ๅ›ณใ—ใŸๅ‹•ไฝœใฎ่ชฌๆ˜Žใ€ใใ—ใฆๆœ€ๅพŒใซ็”Ÿๆˆใ•ใ‚Œใ‚‹ใ‚ณใƒผใƒ‰ใงใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใฎไธ€้ƒจใงใ‚ใ‚‹ใ™ในใฆใฎไพ‹ใซใฏใ“ใฎๆญฃ็ขบใชใƒ‘ใ‚ฟใƒผใƒณใŒใ‚ใ‚Šใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๆ–ฐใ—ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’็”Ÿๆˆใ™ใ‚‹้š›ใซใ‚‚ๅŒใ˜ใƒ‘ใ‚ฟใƒผใƒณใ‚’ๅ†็พใ™ใ‚‹ใ‚ˆใ†ใซใ—ใฆใ„ใพใ™ใ€‚

ใƒ—ใƒญใƒณใƒ—ใƒˆใฎไพ‹ใฏTransformersใƒใƒผใƒ ใซใ‚ˆใฃใฆๅŽณ้ธใ•ใ‚Œใ€[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py)ใฎไธ€้€ฃใฎๅ•้กŒใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใงๅŽณๅฏ†ใซ่ฉ•ไพกใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ—ใƒญใƒณใƒ—ใƒˆใŒๅฎŸ้š›ใฎไฝฟ็”จใ‚ฑใƒผใ‚นใ‚’่งฃๆฑบใ™ใ‚‹ใ†ใˆใงใงใใ‚‹ใ ใ‘ๅ„ชใ‚ŒใŸใ‚‚ใฎใซใชใ‚Šใพใ™ใ€‚

ใƒ—ใƒญใƒณใƒ—ใƒˆใฎๆœ€ๅพŒใฎ้ƒจๅˆ†ใฏใ€ๆฌกใฎใ‚ˆใ†ใชๆœชๅฎŒๆˆใฎไพ‹ใซๅฏพๅฟœใ—ใฆใ„ใพใ™๏ผš

```text
Task: "Draw me a picture of rivers and lakes"

I will use the following
```

ใ“ใ‚ŒใŒใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซๅฎŒๆˆใ•ใ›ใ‚‹ๆœ€็ต‚็š„ใงๆœชๅฎŒๆˆใฎไพ‹ใงใ™ใ€‚ๆœชๅฎŒๆˆใฎไพ‹ใฏใ€ๅฎŸ้š›ใฎใƒฆใƒผใ‚ถใƒผๅ…ฅๅŠ›ใซๅŸบใฅใ„ใฆๅ‹•็š„ใซไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ไธŠ่จ˜ใฎไพ‹ใงใฏใ€ใƒฆใƒผใ‚ถใƒผใŒๆฌกใฎใ‚ˆใ†ใซๅฎŸ่กŒใ—ใพใ—ใŸ๏ผš

```py
agent.run("Draw me a picture of rivers and lakes")
```

ใƒฆใƒผใ‚ถใƒผใฎๅ…ฅๅŠ›ใ€ใ™ใชใ‚ใกใ‚ฟใ‚นใ‚ฏ "Draw me a picture of rivers and lakes"๏ผˆๅทใจๆน–ใฎ็ตตใ‚’ๆใ„ใฆใใ ใ•ใ„๏ผ‰ใฏใ€ๆฌกใฎใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใซๅค‰ๆ›ใ•ใ‚Œใพใ™๏ผš"Task: <task> \n\n I will use the following"ใ€‚ใ“ใฎๆ–‡ใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๆกไปถไป˜ใ‘ใ‚‰ใ‚ŒใŸใƒ—ใƒญใƒณใƒ—ใƒˆใฎๆœ€็ต‚่กŒใ‚’ๆง‹ๆˆใ—ใ€ใ—ใŸใŒใฃใฆใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซๅฏพใ—ใฆๅ‰ใฎไพ‹ใจใพใฃใŸใๅŒใ˜ๆ–นๆณ•ใงไพ‹ใ‚’็ต‚ไบ†ใ™ใ‚‹ใ‚ˆใ†ๅผทใๅฝฑ้Ÿฟใ—ใพใ™ใ€‚

่ฉณ็ดฐใซใฏ็ซ‹ใกๅ…ฅใ‚Šใพใ›ใ‚“ใŒใ€ใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏๅŒใ˜ใƒ—ใƒญใƒณใƒ—ใƒˆๆง‹้€ ใ‚’ๆŒใกใ€ไพ‹ใฏใ‚ใšใ‹ใซ็•ฐใชใ‚‹ใ‚นใ‚ฟใ‚คใƒซใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ไพ‹๏ผš

````text
[...]

=====

Human: Answer the question in the variable `question` about the image stored in the variable `image`.

Assistant: I will use the tool `image_qa` to answer the question on the input image.

```py
answer = image_qa(text=question, image=image)
print(f"The answer is {answer}")
```

Human: I tried this code, it worked but didn't give me a good result. The question is in French

Assistant: In this case, the question needs to be translated first. I will use the tool `translator` to do this.

```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(text=translated_question, image=image)
print(f"The answer is {answer}")
```

=====

[...]
````

`run`ใƒ—ใƒญใƒณใƒ—ใƒˆใฎไพ‹ใจใฏๅฏพ็…ง็š„ใซใ€ๅ„`chat`ใƒ—ใƒญใƒณใƒ—ใƒˆใฎไพ‹ใซใฏ*Human:*ใจ*Assistant:*ใฎ้–“ใง1ใคไปฅไธŠใฎใ‚„ใ‚Šใจใ‚ŠใŒใ‚ใ‚Šใพใ™ใ€‚ๅ„ใ‚„ใ‚Šใจใ‚Šใฏ`run`ใƒ—ใƒญใƒณใƒ—ใƒˆใฎไพ‹ใจๅŒๆง˜ใฎๆง‹้€ ใซใชใฃใฆใ„ใพใ™ใ€‚ใƒฆใƒผใ‚ถใƒผใฎๅ…ฅๅŠ›ใฏ*Human:*ใฎๅพŒใ‚ใซ่ฟฝๅŠ ใ•ใ‚Œใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใฏใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใ™ใ‚‹ๅ‰ใซไฝ•ใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚’ใพใš็”Ÿๆˆใ™ใ‚‹ใ‚ˆใ†ใซๆŒ‡็คบใ•ใ‚Œใพใ™ใ€‚ใ‚„ใ‚Šใจใ‚Šใฏไปฅๅ‰ใฎใ‚„ใ‚Šใจใ‚ŠใซๅŸบใฅใ„ใฆ่กŒใ‚ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใ€ไธŠใฎไพ‹ใงใƒฆใƒผใ‚ถใƒผใŒใ€ŒI tried **this** codeใ€ใจๅ…ฅๅŠ›ใ—ใŸใ‚ˆใ†ใซใ€ไปฅๅ‰ใซ็”Ÿๆˆใ•ใ‚ŒใŸใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใ‚ณใƒผใƒ‰ใ‚’ๅ‚็…งใงใใพใ™ใ€‚

`.chat`ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใจใ€ใƒฆใƒผใ‚ถใƒผใฎๅ…ฅๅŠ›ใพใŸใฏ*ใ‚ฟใ‚นใ‚ฏ*ใŒๆฌกใฎใ‚ˆใ†ใชๆœชๅฎŒไบ†ใฎๅฝขๅผใซๅค‰ๆ›ใ•ใ‚Œใพใ™๏ผš

```text
Human: <user-input>\n\nAssistant:
```

ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ“ใฎๆœชๅฎŒไบ†ใฎไพ‹ใ‚’่ฃœๅฎŒใ—ใพใ™ใ€‚`run` ใ‚ณใƒžใƒณใƒ‰ใจใฏๅฏพ็…ง็š„ใซใ€`chat` ใ‚ณใƒžใƒณใƒ‰ใฏๅฎŒไบ†ใ—ใŸไพ‹ใ‚’ใƒ—ใƒญใƒณใƒ—ใƒˆใซ่ฟฝๅŠ ใ—ใพใ™ใ€‚ใใฎใŸใ‚ใ€ๆฌกใฎ `chat` ใ‚ฟใƒผใƒณใฎใŸใ‚ใซใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ‚ˆใ‚Šๅคšใใฎๆ–‡่„ˆใ‚’ๆไพ›ใ—ใพใ™ใ€‚

ใ•ใฆใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใฎๆง‹้€ ใŒใ‚ใ‹ใฃใŸใจใ“ใ‚ใงใ€ใฉใฎใ‚ˆใ†ใซใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใงใใ‚‹ใ‹ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†๏ผ

### Writing good user inputs

ๅคง่ฆๆจกใช่จ€่ชžใƒขใƒ‡ใƒซใฏใƒฆใƒผใ‚ถใƒผใฎๆ„ๅ›ณใ‚’็†่งฃใ™ใ‚‹่ƒฝๅŠ›ใŒใพใ™ใพใ™ๅ‘ไธŠใ—ใฆใ„ใพใ™ใŒใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๆญฃใ—ใ„ใ‚ฟใ‚นใ‚ฏใ‚’้ธๆŠžใ™ใ‚‹ใฎใ‚’ๅŠฉใ‘ใ‚‹ใŸใ‚ใซใ€ใงใใ‚‹ใ ใ‘ๆญฃ็ขบใซ่จ˜่ฟฐใ™ใ‚‹ใ“ใจใŒ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ใงใใ‚‹ใ ใ‘ๆญฃ็ขบใงใ‚ใ‚‹ใจใฏไฝ•ใ‚’ๆ„ๅ‘ณใ™ใ‚‹ใฎใงใ—ใ‚‡ใ†ใ‹๏ผŸ

ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใงใƒ„ใƒผใƒซๅใจใใฎ่ชฌๆ˜Žใฎใƒชใ‚นใƒˆใ‚’่ฆ‹ใฆใ„ใพใ™ใ€‚ใƒ„ใƒผใƒซใŒ่ฟฝๅŠ ใ•ใ‚Œใ‚‹ใปใฉใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๆญฃใ—ใ„ใƒ„ใƒผใƒซใ‚’้ธๆŠžใ™ใ‚‹ใฎใŒ้›ฃใ—ใใชใ‚Šใ€ๅฎŸ่กŒใ™ใ‚‹ใƒ„ใƒผใƒซใฎๆญฃใ—ใ„้ †ๅบใ‚’้ธๆŠžใ™ใ‚‹ใฎใฏใ•ใ‚‰ใซ้›ฃใ—ใใชใ‚Šใพใ™ใ€‚ๅ…ฑ้€šใฎๅคฑๆ•—ไพ‹ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ใ“ใ“ใงใฏใ‚ณใƒผใƒ‰ใฎใฟใ‚’่ฟ”ใ™ใ“ใจใซใ—ใพใ™ใ€‚

```py
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

agent.run("Show me a tree", return_code=True)
```

gives:

```text
==Explanation from the agent==
I will use the following tool: `image_segmenter` to create a segmentation mask for the image.

==Code generated by the agent==
mask = image_segmenter(image, prompt="tree")
```

ใ“ใ‚ŒใฏใŠใใ‚‰ใ็งใŸใกใŒๆœ›ใ‚“ใงใ„ใŸใ‚‚ใฎใงใฏใชใ„ใงใ—ใ‚‡ใ†ใ€‚ใŠใใ‚‰ใใ€ไปฃใ‚ใ‚Šใซๆœจใฎ็”ปๅƒใ‚’็”Ÿๆˆใ—ใฆใปใ—ใ„ใฏใšใงใ™ใ€‚

็‰นๅฎšใฎใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’่ช˜ๅฐŽใ™ใ‚‹ใŸใ‚ใซใฏใ€ใƒ„ใƒผใƒซใฎๅๅ‰ใ‚„่ชฌๆ˜Žใซๅซใพใ‚Œใฆใ„ใ‚‹้‡่ฆใชใ‚ญใƒผใƒฏใƒผใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ใ•ใฆใ€่ฉณใ—ใ่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚

```py
agent.toolbox["image_generator"].description
```

```text
'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image.
``` ๅๅ‰ใจ่ชฌๆ˜Žๆ–‡ใซใฏใ€ใ‚ญใƒผใƒฏใƒผใƒ‰ใ€Œ็”ปๅƒใ€ใ€ใ€Œใƒ—ใƒญใƒณใƒ—ใƒˆใ€ใ€ใ€Œไฝœๆˆใ€ใ€ใŠใ‚ˆใณใ€Œ็”Ÿๆˆใ€ใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎ่จ€่‘‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€ใ“ใ“ใงใฎๅ‹•ไฝœใŒใ‚ˆใ‚ŠๅŠนๆžœ็š„ใซใชใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใงใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๅฐ‘ใ—่ฉณ็ดฐใซ่ชฟๆ•ดใ—ใพใ—ใ‚‡ใ†ใ€‚ ```py agent.run("Create an image of a tree", return_code=True) ``` gives: ```text ==Explanation from the agent== I will use the following tool `image_generator` to generate an image of a tree. ==Code generated by the agent== image = image_generator(prompt="tree") ``` ็ฐกๅ˜ใซ่จ€ใ†ใจใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒใ‚ฟใ‚นใ‚ฏใ‚’ๆญฃ็ขบใซ้ฉๅˆ‡ใชใƒ„ใƒผใƒซใซใƒžใƒƒใƒ”ใƒณใ‚ฐใงใใชใ„ๅ ดๅˆใฏใ€ใƒ„ใƒผใƒซใฎๅๅ‰ใ‚„่ชฌๆ˜Žใฎๆœ€ใ‚‚้–ข้€ฃๆ€งใฎใ‚ใ‚‹ใ‚ญใƒผใƒฏใƒผใƒ‰ใ‚’่ชฟในใฆใ€ใ‚ฟใ‚นใ‚ฏใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ใใ‚Œใซๅˆใ‚ใ›ใฆๆด—็ทดใ•ใ›ใฆใฟใฆใใ ใ•ใ„ใ€‚ ### Customizing the tool descriptions ไปฅๅ‰ใซใ‚‚่ฆ‹ใŸใ‚ˆใ†ใซใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏๅ„ใƒ„ใƒผใƒซใฎๅๅ‰ใจ่ชฌๆ˜Žใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ™ใ€‚ใƒ™ใƒผใ‚นใฎใƒ„ใƒผใƒซใฏ้žๅธธใซๆญฃ็ขบใชๅๅ‰ใจ่ชฌๆ˜Žใ‚’ๆŒใฃใฆใ„ใ‚‹ใฏใšใงใ™ใŒใ€็‰นๅฎšใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซๅˆใ‚ใ›ใฆใƒ„ใƒผใƒซใฎ่ชฌๆ˜Žใ‚„ๅๅ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใ“ใจใŒๅฝน็ซ‹ใคใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€้žๅธธใซ้กžไผผใ—ใŸ่ค‡ๆ•ฐใฎใƒ„ใƒผใƒซใ‚’่ฟฝๅŠ ใ—ใŸๅ ดๅˆใ‚„ใ€็‰นๅฎšใฎใƒ‰ใƒกใ‚คใƒณ๏ผˆใŸใจใˆใฐใ€็”ปๅƒ็”Ÿๆˆใ‚„ๅค‰ๆ›ใชใฉ๏ผ‰ใงใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใซ็‰นใซ้‡่ฆใซใชใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ‚ˆใใ‚ใ‚‹ๅ•้กŒใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒ็”ปๅƒ็”Ÿๆˆใ‚ฟใ‚นใ‚ฏใซ้ ป็นใซไฝฟ็”จใ•ใ‚Œใ‚‹ๅ ดๅˆใ€็”ปๅƒ็”Ÿๆˆใจ็”ปๅƒๅค‰ๆ›/ไฟฎๆญฃใ‚’ๆททๅŒใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ไพ‹๏ผš ```py agent.run("Make an image of a house and a car", return_code=True) ``` returns ```text ==Explanation from the agent== I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") house_car_image = image_transformer(image=car_image, prompt="A house") ``` ใ“ใ‚ŒใฏใŠใใ‚‰ใ็งใŸใกใŒใ“ใ“ใงๆœ›ใ‚“ใงใ„ใ‚‹ๆญฃ็ขบใชใ‚‚ใฎใงใฏใชใ„ใ‚ˆใ†ใงใ™ใ€‚ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€Œimage_generatorใ€ใจใ€Œimage_transformerใ€ใฎ้•ใ„ใ‚’็†่งฃใ™ใ‚‹ใฎใŒ้›ฃใ—ใ„ใ‚ˆใ†ใงใ€ใ—ใฐใ—ใฐไธกๆ–นใ‚’ไธ€็ท’ใซไฝฟ็”จใ—ใพใ™ใ€‚ ใ“ใ“ใงใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใŸใ‚ใซใ€"image_transformer"ใฎใƒ„ใƒผใƒซๅใจ่ชฌๆ˜Žใ‚’ๅค‰ๆ›ดใ—ใฆใ€ๅฐ‘ใ—"image"ใ‚„"prompt"ใ‹ใ‚‰ๅˆ‡ใ‚Š้›ขใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ไปฃใ‚ใ‚Šใซใใ‚Œใ‚’ใ€Œmodifierใ€ใจๅ‘ผใณใพใ—ใ‚‡ใ†๏ผš ```py agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer") agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace( "transforms an image according to a prompt", "modifies an image" ) ``` ใ€Œๅค‰ๆ›ดใ€ใฏใ€ไธŠ่จ˜ใฎใƒ—ใƒญใƒณใƒ—ใƒˆใซๆ–ฐใ—ใ„็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ™ใ‚‹ๅผทๅŠ›ใชๆ‰‹ใŒใ‹ใ‚Šใงใ™ใ€‚ใใ‚Œใงใฏใ€ใ‚‚ใ†ไธ€ๅบฆๅฎŸ่กŒใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```py agent.run("Make an image of a house and a car", return_code=True) ``` Now we're getting: ```text ==Explanation from the agent== I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car. 
==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
```

ใ“ใ‚Œใฏใ€็งใŸใกใŒ่€ƒใˆใฆใ„ใŸใ‚‚ใฎใซ็ขบๅฎŸใซ่ฟ‘ใฅใ„ใฆใ„ใพใ™๏ผใŸใ ใ—ใ€ๅฎถใจ่ปŠใ‚’ๅŒใ˜็”ปๅƒใซๅซใ‚ใŸใ„ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚ใ‚ฟใ‚นใ‚ฏใ‚’ๅ˜ไธ€ใฎ็”ปๅƒ็”Ÿๆˆใซๅ‘ใ‘ใ‚‹ใ“ใจใงใ€ใ‚ˆใ‚Š้ฉๅˆ‡ใชๆ–นๅ‘ใซ้€ฒใ‚ใ‚‹ใฏใšใงใ™๏ผš

```py
agent.run("Create image: 'A house and car'", return_code=True)
```

```text
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.

==Code generated by the agent==
image = image_generator(prompt="A house and car")
```

<Tip warning={true}>

ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€่ค‡ๆ•ฐใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๅซใ‚€็”ปๅƒใฎ็”Ÿๆˆใชใฉใ€ใ‚„ใ‚„่ค‡้›‘ใชใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซๅฏพใ—ใฆใฏใพใ ่„†ๅผฑใงใ™ใ€‚ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆ่‡ชไฝ“ใจใใฎๅŸบ็คŽใจใชใ‚‹ใƒ—ใƒญใƒณใƒ—ใƒˆใฏใ€ไปŠๅพŒๆ•ฐใƒถๆœˆใงใ•ใ‚‰ใซๆ”นๅ–„ใ•ใ‚Œใ€ใ•ใพใ–ใพใชใƒฆใƒผใ‚ถใƒผใฎๅ…ฅๅŠ›ใซๅฏพใ—ใฆใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒใ‚ˆใ‚Š้ ‘ๅฅใซใชใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚

</Tip>

### Customizing the whole project

ใƒฆใƒผใ‚ถใƒผใซๆœ€ๅคง้™ใฎๆŸ”่ปŸๆ€งใ‚’ๆไพ›ใ™ใ‚‹ใŸใ‚ใซใ€[ไธŠ่จ˜](#structure-of-the-prompt)ใง่ชฌๆ˜Žใ•ใ‚ŒใŸใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆๅ…จไฝ“ใ‚’ใƒฆใƒผใ‚ถใƒผใŒไธŠๆ›ธใใงใใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒ—ใƒญใƒณใƒ—ใƒˆใซใฏๅฐŽๅ…ฅใ‚ปใ‚ฏใ‚ทใƒงใƒณใ€ใƒ„ใƒผใƒซใ‚ปใ‚ฏใ‚ทใƒงใƒณใ€ไพ‹ใ‚ปใ‚ฏใ‚ทใƒงใƒณใ€ๆœชๅฎŒไบ†ใฎไพ‹ใ‚ปใ‚ฏใ‚ทใƒงใƒณใŒๅซใพใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚`run` ใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไธŠๆ›ธใใ—ใŸใ„ๅ ดๅˆใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซ่กŒใ†ใ“ใจใŒใงใใพใ™:

```py
template = """ [...] """

agent = HfAgent(your_endpoint, run_prompt_template=template)
```

<Tip warning={true}>

`<<all_tools>>` ๆ–‡ๅญ—ๅˆ—ใจ `<<prompt>>` ใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒไฝฟ็”จใงใใ‚‹ใƒ„ใƒผใƒซใ‚’่ช่ญ˜ใ—ใ€ใƒฆใƒผใ‚ถใƒผใฎใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆญฃใ—ใๆŒฟๅ…ฅใงใใ‚‹ใ‚ˆใ†ใซใ€`template` ใฎใฉใ“ใ‹ใซๅฎš็พฉใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚

</Tip>

ๅŒๆง˜ใซใ€`chat` ใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไธŠๆ›ธใใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใชใŠใ€`chat` ใƒขใƒผใƒ‰ใงใฏๅธธใซไปฅไธ‹ใฎๅฝขๅผใงใ‚„ใ‚Šใจใ‚ŠใŒ่กŒใ‚ใ‚Œใพใ™๏ผš

```text
Human: <<task>>

Assistant:
```

ใ—ใŸใŒใฃใฆใ€ใ‚ซใ‚นใ‚ฟใƒ `chat`ใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฎไพ‹ใ‚‚ใ“ใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซใ€ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ๆ™‚ใซ`chat`ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไธŠๆ›ธใใงใใพใ™ใ€‚

```py
template = """ [...]
""" agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template) ``` <Tip warning={true}> `<<all_tools>>` ใจใ„ใ†ๆ–‡ๅญ—ๅˆ—ใŒ `template` ๅ†…ใงๅฎš็พฉใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏไฝฟ็”จๅฏ่ƒฝใชใƒ„ใƒผใƒซใ‚’ๆŠŠๆกใงใใพใ™ใ€‚ </Tip> ไธกๆ–นใฎๅ ดๅˆใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฎไปฃใ‚ใ‚Šใซใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใฎ่ชฐใ‹ใŒใƒ›ใ‚นใƒˆใ—ใŸใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ใƒชใƒใ‚ธใƒˆใƒชIDใ‚’ๆธกใ™ใ“ใจใŒใงใใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ—ใƒญใƒณใƒ—ใƒˆใฏใ€[ใ“ใฎใƒชใƒใ‚ธใƒˆใƒช](https://huggingface.co/datasets/huggingface-tools/default-prompts) ใซใ‚ใ‚Šใพใ™ใฎใงใ€ๅ‚่€ƒใซใชใ‚Šใพใ™ใ€‚ ใ‚ซใ‚นใ‚ฟใƒ ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’Hubใฎใƒชใƒใ‚ธใƒˆใƒชใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใฆใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจๅ…ฑๆœ‰ใ™ใ‚‹ๅ ดๅˆใฏใ€ๆฌกใฎใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผš - ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใƒชใƒใ‚ธใƒˆใƒชใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจ - `run` ใ‚ณใƒžใƒณใƒ‰็”จใฎใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ `run_prompt_template.txt` ใจใ„ใ†ๅๅ‰ใฎใƒ•ใ‚กใ‚คใƒซใซ้…็ฝฎใ™ใ‚‹ใ“ใจ - `chat` ใ‚ณใƒžใƒณใƒ‰็”จใฎใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ `chat_prompt_template.txt` ใจใ„ใ†ๅๅ‰ใฎใƒ•ใ‚กใ‚คใƒซใซ้…็ฝฎใ™ใ‚‹ใ“ใจ ## Using custom tools ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€็”ปๅƒ็”Ÿๆˆใซ็‰นๅŒ–ใ—ใŸ2ใคใฎๆ—ขๅญ˜ใฎใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’ๅˆฉ็”จใ—ใพใ™๏ผš - [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation) ใ‚’ใ‚ˆใ‚Šๅคšใใฎ็”ปๅƒๅค‰ๆ›ดใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ใŸใ‚ใซ [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool) ใซ็ฝฎใๆ›ใˆใพใ™ใ€‚ - ็”ปๅƒใฎใ‚ขใƒƒใƒ—ใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐ็”จใฎๆ–ฐใ—ใ„ใƒ„ใƒผใƒซใ‚’ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใซ่ฟฝๅŠ ใ—ใพใ™๏ผš[diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool) ใฏๆ—ขๅญ˜ใฎ็”ปๅƒๅค‰ๆ›ใƒ„ใƒผใƒซใ‚’็ฝฎใๆ›ใˆใพใ™ใ€‚ ไพฟๅˆฉใช [`load_tool`] ้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py from transformers import load_tool controlnet_transformer = load_tool("diffusers/controlnet-canny-tool") upscaler = load_tool("diffusers/latent-upscaler-tool") ``` ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ใจใ€ใƒ„ใƒผใƒซใฎ่ชฌๆ˜Žใจๅๅ‰ใŒใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ—ใƒญใƒณใƒ—ใƒˆใซ่‡ชๅ‹•็š„ใซๅซใพใ‚Œใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใฎไฝฟ็”จๆ–นๆณ•ใ‚’็†่งฃใงใใ‚‹ใ‚ˆใ†ใซใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใซใฏ้ฉๅˆ‡ใซ่จ˜่ฟฐใ•ใ‚ŒใŸ่ชฌๆ˜Žใจๅๅ‰ใŒๅฟ…่ฆใงใ™ใ€‚ `controlnet_transformer`ใฎ่ชฌๆ˜Žใจๅๅ‰ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ๆœ€ๅˆใซใ€ไพฟๅˆฉใช[`load_tool`]้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ ```py print(f"Description: '{controlnet_transformer.description}'") print(f"Name: '{controlnet_transformer.name}'") ``` gives ```text Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.' 
Name: 'image_transformer' ``` ๅๅ‰ใจ่ชฌๆ˜Žใฏๆญฃ็ขบใงใ‚ใ‚Šใ€[ๅŽณ้ธใ•ใ‚ŒใŸใƒ„ใƒผใƒซ](./transformers_agents#a-curated-set-of-tools)ใฎใ‚นใ‚ฟใ‚คใƒซใซๅˆใฃใฆใ„ใพใ™ใ€‚ ๆฌกใซใ€`controlnet_transformer`ใจ`upscaler`ใ‚’ไฝฟใฃใฆใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใพใ™ใ€‚ ```py tools = [controlnet_transformer, upscaler] agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools) ``` ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใฏใ€ไปฅไธ‹ใฎๆƒ…ๅ ฑใ‚’ๆไพ›ใ—ใพใ™๏ผš ```text image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0 8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools` ``` ไธ€้€ฃใฎๅŽณ้ธใ•ใ‚ŒใŸใƒ„ใƒผใƒซใซใฏใ™ใงใซ `image_transformer` ใƒ„ใƒผใƒซใŒใ‚ใ‚Šใ€ใ“ใ‚Œใ‚’ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใง็ฝฎใๆ›ใˆใพใ™ใ€‚ <Tip> ๆ—ขๅญ˜ใฎใƒ„ใƒผใƒซใ‚’ไธŠๆ›ธใใ™ใ‚‹ใ“ใจใฏใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซๆ—ขๅญ˜ใฎใƒ„ใƒผใƒซใ‚’ใพใฃใŸใๅŒใ˜็›ฎ็š„ใงไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใซๆœ‰็›Šใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ใชใœใชใ‚‰ใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใใฎ็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใฎไฝฟ็”จๆ–นๆณ•ใซ็ฒพ้€šใ—ใฆใ„ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใฏๆ—ขๅญ˜ใฎใƒ„ใƒผใƒซใจใพใฃใŸใๅŒใ˜APIใซๅพ“ใ†ใ‹ใ€ใใฎใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใ™ในใฆใฎไพ‹ใŒๆ›ดๆ–ฐใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใƒ—ใƒญใƒณใƒ—ใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’้ฉๅฟœใ•ใ›ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> ใ‚ขใƒƒใƒ—ใ‚นใ‚ฑใƒผใƒฉใƒผใƒ„ใƒผใƒซใซใฏ `image_upscaler` ใจใ„ใ†ๅๅ‰ใŒไป˜ใ‘ใ‚‰ใ‚Œใ€ใ“ใ‚Œใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใซใฏใพใ ๅญ˜ๅœจใ—ใชใ„ใŸใ‚ใ€ๅ˜ใซใƒ„ใƒผใƒซใฎใƒชใ‚นใƒˆใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒ็พๅœจไฝฟ็”จๅฏ่ƒฝใชใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€`agent.toolbox` ๅฑžๆ€งใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ ```py print("\n".join([f"- {a}" for a in agent.toolbox.keys()])) ``` ```text - document_qa - image_captioner - image_qa - image_segmenter - transcriber - summarizer - text_classifier - text_qa - text_reader - translator - image_transformer - text_downloader - image_generator - video_generator - image_upscaler ``` ๆณจๆ„: `image_upscaler` ใŒใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใฎไธ€้ƒจใจใชใฃใŸใ“ใจใซๆณจ็›ฎใ—ใฆใใ ใ•ใ„ใ€‚ ใใ‚Œใงใฏใ€ๆ–ฐใ—ใ„ใƒ„ใƒผใƒซใ‚’่ฉฆใ—ใฆใฟใพใ—ใ‚‡ใ†๏ผ[Transformers Agents Quickstart](./transformers_agents#single-execution-run) ใง็”Ÿๆˆใ—ใŸ็”ปๅƒใ‚’ๅ†ๅˆฉ็”จใ—ใพใ™ใ€‚ ```py from diffusers.utils import load_image image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" ) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ็พŽใ—ใ„ๅ†ฌใฎ้ขจๆ™ฏใซใ“ใฎ็”ปๅƒใ‚’ๅค‰่บซใ•ใ›ใพใ—ใ‚‡ใ†๏ผš ```py image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_transformer` to transform the image. 
==Code generated by the agent== image = image_transformer(image, prompt="A frozen lake and snowy forest") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter.png" width=200> ๆ–ฐใ—ใ„็”ปๅƒๅ‡ฆ็†ใƒ„ใƒผใƒซใฏใ€้žๅธธใซๅผทๅŠ›ใช็”ปๅƒใฎๅค‰ๆ›ดใ‚’่กŒใ†ใ“ใจใŒใงใใ‚‹ControlNetใซๅŸบใฅใ„ใฆใ„ใพใ™ใ€‚ ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€็”ปๅƒๅ‡ฆ็†ใƒ„ใƒผใƒซใฏใ‚ตใ‚คใ‚บใŒ512x512ใƒ”ใ‚ฏใ‚ปใƒซใฎ็”ปๅƒใ‚’่ฟ”ใ—ใพใ™ใ€‚ใใ‚Œใ‚’ๆ‹กๅคงใงใใ‚‹ใ‹่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```py image = agent.run("Upscale the image", image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_upscaler` to upscale the image. ==Code generated by the agent== upscaled_image = image_upscaler(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter_upscale.png" width=400> ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใ€Œ็”ปๅƒใฎๆ‹กๅคงใ€ใ‚’ใ€ใใฎ่ชฌๆ˜Žใจใƒ„ใƒผใƒซใฎๅๅ‰ใ ใ‘ใ‚’ๅŸบใซใ€ๆ–ฐใŸใซ่ฟฝๅŠ ใ•ใ‚ŒใŸใ‚ขใƒƒใƒ—ใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐใƒ„ใƒผใƒซใซ่‡ชๅ‹•็š„ใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ—ใ€ๆญฃใ—ใๅฎŸ่กŒใงใใพใ—ใŸใ€‚ ๆฌกใซใ€ๆ–ฐใ—ใ„ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’ไฝœๆˆใ™ใ‚‹ๆ–นๆณ•ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ### Adding new tools ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซ่ฟฝๅŠ ใงใใ‚‹ๆ–ฐใ—ใ„ใƒ„ใƒผใƒซใฎไฝœๆˆๆ–นๆณ•ใ‚’็คบใ—ใพใ™ใ€‚ #### Creating a new tool ใพใšใ€ใƒ„ใƒผใƒซใฎไฝœๆˆใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†ใ€‚ๆฌกใฎใ‚ณใƒผใƒ‰ใงใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซ้–ขใ—ใฆHugging Face Hubใงๆœ€ใ‚‚ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅ–ๅพ—ใ™ใ‚‹ใ€ใ‚ใพใ‚Šๅฝน็ซ‹ใŸใชใ„ใ‘ใ‚Œใฉใ‚‚ๆฅฝใ—ใ„ใ‚ฟใ‚นใ‚ฏใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ใงใใ‚Œใ‚’่กŒใ†ใ“ใจใŒใงใใพใ™๏ผš ```python from huggingface_hub import list_models task = "text-classification" model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(model.id) ``` ใ‚ฟใ‚นใ‚ฏ `text-classification` ใฎๅ ดๅˆใ€ใ“ใ‚Œใฏ `'facebook/bart-large-mnli'` ใ‚’่ฟ”ใ—ใพใ™ใ€‚`translation` ใฎๅ ดๅˆใ€`'t5-base'` ใ‚’่ฟ”ใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚’ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๅˆฉ็”จใงใใ‚‹ใƒ„ใƒผใƒซใซๅค‰ๆ›ใ™ใ‚‹ๆ–นๆณ•ใฏไฝ•ใงใ—ใ‚‡ใ†ใ‹๏ผŸใ™ในใฆใฎใƒ„ใƒผใƒซใฏใ€ไธป่ฆใชๅฑžๆ€งใ‚’ไฟๆŒใ™ใ‚‹ใ‚นใƒผใƒ‘ใƒผใ‚ฏใƒฉใ‚น `Tool` ใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚็งใŸใกใฏใ€ใใ‚Œใ‚’็ถ™ๆ‰ฟใ—ใŸใ‚ฏใƒฉใ‚นใ‚’ไฝœๆˆใ—ใพใ™: ```python from transformers import Tool class HFModelDownloadsTool(Tool): pass ``` ใ“ใฎใ‚ฏใƒฉใ‚นใซใฏใ„ใใคใ‹ใฎๅฟ…่ฆใช่ฆ็ด ใŒใ‚ใ‚Šใพใ™๏ผš - `name` ๅฑžๆ€ง๏ผšใ“ใ‚Œใฏใƒ„ใƒผใƒซ่‡ชไฝ“ใฎๅๅ‰ใซๅฏพๅฟœใ—ใ€ไป–ใฎใƒ„ใƒผใƒซใจ่ชฟๅ’Œใ™ใ‚‹ใŸใ‚ใซ `model_download_counter` ใจๅไป˜ใ‘ใพใ™ใ€‚ - `description` ๅฑžๆ€ง๏ผšใ“ใ‚Œใฏใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๅŸ‹ใ‚ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ - `inputs` ใจ `outputs` ๅฑžๆ€ง๏ผšใ“ใ‚Œใ‚‰ใ‚’ๅฎš็พฉใ™ใ‚‹ใ“ใจใงใ€Python ใ‚คใƒณใ‚ฟใƒผใƒ—ใƒชใ‚ฟใƒผใŒๅž‹ใซ้–ขใ™ใ‚‹่ณขๆ˜Žใช้ธๆŠžใ‚’่กŒใ†ใฎใซๅฝน็ซ‹ใกใ€ใƒ„ใƒผใƒซใ‚’Hubใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹้š›ใซgradio-demoใ‚’็”Ÿๆˆใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏใ€ไบˆๆƒณใ•ใ‚Œใ‚‹ๅ€คใฎใƒชใ‚นใƒˆใงใ‚ใ‚Šใ€`text`ใ€`image`ใ€ใพใŸใฏ`audio`ใซใชใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ - `__call__` ใƒกใ‚ฝใƒƒใƒ‰๏ผšใ“ใ‚ŒใซใฏๆŽจ่ซ–ใ‚ณใƒผใƒ‰ใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚ŒใฏไธŠ่จ˜ใง่ฉฆใ—ใŸใ‚ณใƒผใƒ‰ใงใ™๏ผ ใ“ใกใ‚‰ใŒ็พๅœจใฎใ‚ฏใƒฉใ‚นใฎๅค–่ฆณใงใ™๏ผš ```python from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool(Tool): name = "model_download_counter" description = ( "This is a tool that returns the most downloaded 
model of a given task on the Hugging Face Hub. "
        "It takes the name of the category (such as text-classification, depth-estimation, etc), and "
        "returns the name of the checkpoint."
    )

    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, task: str):
        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id
```

ใ“ใ‚Œใงใƒ„ใƒผใƒซใŒไฝฟใˆใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ใ“ใฎใƒ„ใƒผใƒซใ‚’ใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ—ใ€ใƒกใ‚คใƒณใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‹ใ‚‰ใ‚คใƒณใƒใƒผใƒˆใ—ใพใ—ใ‚‡ใ†ใ€‚ใƒ•ใ‚กใ‚คใƒซๅใ‚’ `model_downloads.py` ใจใ™ใ‚‹ใจใ€ใ‚คใƒณใƒใƒผใƒˆใ‚ณใƒผใƒ‰ใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš

```python
from model_downloads import HFModelDownloadsTool

tool = HFModelDownloadsTool()
```

ไป–ใฎไบบใ€…ใซใ‚‚ๅˆฉ็”จใ—ใฆใ‚‚ใ‚‰ใ„ใ€ใ‚ˆใ‚Š็ฐกๅ˜ใซๅˆๆœŸๅŒ–ใงใใ‚‹ใ‚ˆใ†ใซใ€Hubใฎใ‚ใชใŸใฎๅๅ‰็ฉบ้–“ใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚’่กŒใ†ใซใฏใ€`tool` ๅค‰ๆ•ฐใง `push_to_hub` ใ‚’ๅ‘ผใณๅ‡บใ™ใ ใ‘ใงใ™๏ผš

```python
tool.push_to_hub("hf-model-downloads")
```

ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใ€ๆœ€็ต‚ใ‚นใƒ†ใƒƒใƒ—ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚

#### Having the agent use the tool

ใ“ใ‚Œใงใƒ„ใƒผใƒซใŒHubไธŠใซๅญ˜ๅœจใ™ใ‚‹ใ‚ˆใ†ใซใชใ‚Šใ€ๆฌกใฎใ‚ˆใ†ใซใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใงใใพใ™๏ผˆใƒฆใƒผใ‚ถใƒผๅใฏใ”่‡ช่บซใฎใƒ„ใƒผใƒซใซๅˆใ‚ใ›ใฆๅค‰ๆ›ดใ—ใฆใใ ใ•ใ„๏ผ‰๏ผš

```python
from transformers import load_tool

tool = load_tool("lysandre/hf-model-downloads")
```

ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใงไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎๅˆๆœŸๅŒ–ใƒกใ‚ฝใƒƒใƒ‰ใฎ `additional_tools` ใƒ‘ใƒฉใƒกใƒผใ‚ฟใซใใ‚Œใ‚’ๆธกใ™ใ ใ‘ใงใ™๏ผš

```python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

which outputs the following:

```text
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)

==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
``` ไปฅไธ‹ใฎใƒ†ใ‚ญใ‚นใƒˆใฏใ€ๆฌกใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ **Audio** | |------------------------------------------------------------------------------------------------------------------------------------------------------| | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/> | <Tip> ็‰นๅฎšใฎLLMใซไพๅญ˜ใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใ€ใ†ใพใๆฉŸ่ƒฝใ•ใ›ใ‚‹ใŸใ‚ใซใฏ้žๅธธใซๆญฃ็ขบใชใƒ—ใƒญใƒณใƒ—ใƒˆใŒๅฟ…่ฆใชใ‚‚ใฎใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใƒ„ใƒผใƒซใฎๅๅ‰ใจ่ชฌๆ˜Žใ‚’ๆ˜Ž็ขบใซๅฎš็พฉใ™ใ‚‹ใ“ใจใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ‚ˆใฃใฆๆดป็”จใ•ใ‚Œใ‚‹ใŸใ‚ใซ้žๅธธใซ้‡่ฆใงใ™ใ€‚ </Tip> ### Replacing existing tools ๆ—ขๅญ˜ใฎใƒ„ใƒผใƒซใ‚’็ฝฎใๆ›ใˆใ‚‹ใซใฏใ€ๆ–ฐใ—ใ„ใ‚ขใ‚คใƒ†ใƒ ใ‚’ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใซๅ‰ฒใ‚Šๅฝ“ใฆใ‚‹ใ ใ‘ใง่กŒใ†ใ“ใจใŒใงใใพใ™ใ€‚ไปฅไธ‹ใฏใใฎๆ–นๆณ•ใงใ™: ```python from transformers import HfAgent, load_tool agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool") ``` <Tip> ไป–ใฎใƒ„ใƒผใƒซใงใƒ„ใƒผใƒซใ‚’็ฝฎใๆ›ใˆใ‚‹้š›ใซใฏๆณจๆ„ใŒๅฟ…่ฆใงใ™๏ผใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎใƒ—ใƒญใƒณใƒ—ใƒˆใ‚‚่ชฟๆ•ดใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใŸใ‚ˆใ‚Š่‰ฏใ„ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆŒใฃใฆใ„ใ‚‹ๅ ดๅˆใซใฏ่‰ฏใ„ใ“ใจใงใ™ใŒใ€ไป–ใฎใƒ„ใƒผใƒซใŒ้ธๆŠžใ•ใ‚Œใ‚‹็ขบ็އใŒ้ซ˜ใใชใ‚Šใ€ๅฎš็พฉใ—ใŸใƒ„ใƒผใƒซใฎไปฃใ‚ใ‚Šใซไป–ใฎใƒ„ใƒผใƒซใŒ้ธๆŠžใ•ใ‚Œใ‚‹ใ“ใจใ‚‚ใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ </Tip> ## Leveraging gradio-tools [gradio-tools](https://github.com/freddyaboulton/gradio-tools)ใฏใ€Hugging Face Spacesใ‚’ใƒ„ใƒผใƒซใจใ—ใฆไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ๅผทๅŠ›ใชใƒฉใ‚คใƒ–ใƒฉใƒชใงใ™ใ€‚ๆ—ขๅญ˜ใฎๅคšใใฎSpacesใŠใ‚ˆใณใ‚ซใ‚นใ‚ฟใƒ Spacesใ‚’่จญ่จˆใ™ใ‚‹ใ“ใจใ‚‚ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ ๆˆ‘ใ€…ใฏใ€`gradio_tools`ใ‚’ไฝฟ็”จใ—ใฆ`StableDiffusionPromptGeneratorTool`ใƒ„ใƒผใƒซใ‚’ๆดป็”จใ—ใŸใ„ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ„ใƒผใƒซใฏ`gradio-tools`ใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใงๆไพ›ใ•ใ‚ŒใฆใŠใ‚Šใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆ”นๅ–„ใ—ใ€ใ‚ˆใ‚Š่‰ฏใ„็”ปๅƒใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ—ใพใ™ใ€‚ ใพใšใ€`gradio_tools`ใ‹ใ‚‰ใƒ„ใƒผใƒซใ‚’ใ‚คใƒณใƒใƒผใƒˆใ—ใ€ใใ‚Œใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใพใ™: ```python from gradio_tools import StableDiffusionPromptGeneratorTool gradio_tool = StableDiffusionPromptGeneratorTool() ``` ใใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ `Tool.from_gradio` ใƒกใ‚ฝใƒƒใƒ‰ใซๆธกใ—ใพใ™๏ผš ```python from transformers import Tool tool = Tool.from_gradio(gradio_tool) ``` ใ“ใ‚Œใ‹ใ‚‰ใฏใ€้€šๅธธใฎใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใจๅŒใ˜ใ‚ˆใ†ใซใใ‚Œใ‚’็ฎก็†ใงใใพใ™ใ€‚็งใŸใกใฏใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆ”นๅ–„ใ™ใ‚‹ใŸใ‚ใซใใ‚Œใ‚’ๆดป็”จใ—ใพใ™ใ€‚ ` a rabbit wearing a space suit`: ```python from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool]) agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit") ``` The model adequately leverages the tool: ```text ==Explanation from the agent== I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt. 
==Code generated by the agent==
improved_prompt = StableDiffusionPromptGenerator(prompt)
print(f"The improved prompt is {improved_prompt}.")
image = image_generator(improved_prompt)
```

ๆœ€็ต‚็š„ใซ็”ปๅƒใ‚’็”Ÿๆˆใ™ใ‚‹ๅ‰ใซใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใŒๆ”นๅ–„ใ•ใ‚Œใฆใ„ใพใ™๏ผš

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png">

<Tip warning={true}>

gradio-toolsใฏใ€็”ปๅƒใ‚„้Ÿณๅฃฐใชใฉใ•ใพใ–ใพใชใƒขใƒ€ใƒชใƒ†ใ‚ฃใ‚’ๆ‰ฑใ†ๅ ดๅˆใงใ‚‚ใ€*ใƒ†ใ‚ญใ‚นใƒˆ*ใฎๅ…ฅๅŠ›ใจๅ‡บๅŠ›ใ‚’ๅฟ…่ฆใจใ—ใพใ™ใ€‚็พๆ™‚็‚นใงใฏ็”ปๅƒใŠใ‚ˆใณ้Ÿณๅฃฐใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใจใฏไบ’ๆ›ๆ€งใŒใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ใ‚ตใƒใƒผใƒˆใฎๅ‘ไธŠใซๅ–ใ‚Š็ต„ใ‚“ใงใŠใ‚Šใ€ไบ’ๆ›ๆ€งใฏ่ฟ…้€ŸใซๆŽ็”Ÿใ•ใ‚Œใ‚‹่ฆ‹่พผใฟใงใ™ใ€‚

</Tip>

## Future compatibility with Langchain

็งใŸใกใฏLangchainใ‚’ๆ„›ใ—ใฆใŠใ‚Šใ€้žๅธธใซ้ญ…ๅŠ›็š„ใชใƒ„ใƒผใƒซ็พคใ‚’ๆŒใฃใฆใ„ใ‚‹ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒ„ใƒผใƒซใ‚’ๆ‰ฑใ†ใŸใ‚ใ€Langchainใฏใ•ใพใ–ใพใชใƒขใƒ€ใƒชใƒ†ใ‚ฃใงไฝœๆฅญใ™ใ‚‹ๅ ดๅˆใงใ‚‚ใ€*ใƒ†ใ‚ญใ‚นใƒˆ*ใฎๅ…ฅๅ‡บๅŠ›ใ‚’ๅฟ…่ฆใจใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎใ‚ทใƒชใ‚ขใƒซๅŒ–ใƒใƒผใ‚ธใƒงใƒณ๏ผˆใคใพใ‚Šใ€ใƒ‡ใ‚ฃใ‚นใ‚ฏใซไฟๅญ˜ใ•ใ‚ŒใŸใƒใƒผใ‚ธใƒงใƒณ๏ผ‰ใงใ‚ใ‚‹ใ“ใจใŒๅคšใ„ใงใ™ใ€‚ใ“ใฎ้•ใ„ใซใ‚ˆใ‚Šใ€transformers-agentsใจlangchain้–“ใงใฏใƒžใƒซใƒใƒขใƒ€ใƒชใƒ†ใ‚ฃใŒๅ‡ฆ็†ใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚ใ“ใฎๅˆถ้™ใฏๅฐ†ๆฅใฎใƒใƒผใ‚ธใƒงใƒณใง่งฃๆฑบใ™ใ‚‹ใ“ใจใ‚’็›ฎๆŒ‡ใ—ใฆใŠใ‚Šใ€็†ฑๅฟƒใชlangchainใƒฆใƒผใ‚ถใƒผใ‹ใ‚‰ใฎใ‚ใ‚‰ใ‚†ใ‚‹ๆ”ฏๆดใ‚’ๆญ“่ฟŽใ—ใพใ™ใ€‚

็งใŸใกใฏใ‚ˆใ‚Š่‰ฏใ„ใ‚ตใƒใƒผใƒˆใ‚’ๆไพ›ใ—ใŸใ„ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚ใŠๆ‰‹ไผใ„ใ„ใŸใ ใ‘ใ‚‹ๅ ดๅˆใฏใ€ใœใฒ[ๅ•้กŒใ‚’้–‹ใ„ใฆ](https://github.com/huggingface/transformers/issues/new)ใ€ใŠ่€ƒใˆใฎใ“ใจใ‚’ๅ…ฑๆœ‰ใ—ใฆใใ ใ•ใ„ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/peft.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Load adapters with ๐Ÿค— PEFT [[open-in-colab]] [Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) ใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐไธญใซๅ‡็ตใ—ใ€ใใฎไธŠใซใ‚ใšใ‹ใช่จ“็ทดๅฏ่ƒฝใชใƒ‘ใƒฉใƒกใƒผใ‚ฟ๏ผˆใ‚ขใƒ€ใƒ—ใ‚ฟใƒผ๏ผ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ‚ขใƒ—ใƒญใƒผใƒใงใ™ใ€‚ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฏใ€ใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใฎๆƒ…ๅ ฑใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใŸใ‚ใซ่จ“็ทดใ•ใ‚Œใพใ™ใ€‚ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏใ€ใƒกใƒขใƒชไฝฟ็”จ้‡ใŒๅฐ‘ใชใใ€ๅฎŒๅ…จใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใจๆฏ”่ผƒใ—ใฆ่จˆ็ฎ—ใƒชใ‚ฝใƒผใ‚นใ‚’ไฝŽใๆŠ‘ใˆใคใคใ€ๅŒ็ญ‰ใฎ็ตๆžœใ‚’็”Ÿๆˆใ™ใ‚‹ใ“ใจใŒ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚ PEFTใง่จ“็ทดใ•ใ‚ŒใŸใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฏ้€šๅธธใ€ๅฎŒๅ…จใชใƒขใƒ‡ใƒซใฎใ‚ตใ‚คใ‚บใ‚ˆใ‚Šใ‚‚1ๆกๅฐใ•ใใ€ๅ…ฑๆœ‰ใ€ไฟๅญ˜ใ€่ชญใฟ่พผใ‚€ใฎใŒไพฟๅˆฉใงใ™ใ€‚ <div class="flex flex-col justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/> <figcaption class="text-center">Hubใซๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹OPTForCausalLMใƒขใƒ‡ใƒซใฎใ‚ขใƒ€ใƒ—ใ‚ฟใƒผ้‡ใฟใฏใ€ใƒขใƒ‡ใƒซใฎๅ…จไฝ“ใ‚ตใ‚คใ‚บใฎ็ด„6MBใงใ€ใƒขใƒ‡ใƒซ้‡ใฟใฎๅ…จใ‚ตใ‚คใ‚บใฏ็ด„700MBใงใ™ใ€‚</figcaption> </div> ๐Ÿค— PEFTใƒฉใ‚คใƒ–ใƒฉใƒชใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€[ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ](https://huggingface.co/docs/peft/index)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ## Setup ๐Ÿค— PEFTใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆๅง‹ใ‚ใพใ—ใ‚‡ใ†๏ผš ```bash pip install peft ``` ๆ–ฐๆฉŸ่ƒฝใ‚’่ฉฆใ—ใฆใฟใŸใ„ๅ ดๅˆใ€ใ‚ฝใƒผใ‚นใ‹ใ‚‰ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใซ่ˆˆๅ‘ณใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผš ```bash pip install git+https://github.com/huggingface/peft.git ``` ## Supported PEFT models ๐Ÿค— Transformersใฏใ€ใ„ใใคใ‹ใฎPEFT๏ผˆParameter Efficient Fine-Tuning๏ผ‰ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ใƒใ‚คใƒ†ใ‚ฃใƒ–ใซใ‚ตใƒใƒผใƒˆใ—ใฆใŠใ‚Šใ€ใƒญใƒผใ‚ซใƒซใพใŸใฏHubใซๆ ผ็ดใ•ใ‚ŒใŸใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚ฆใ‚งใ‚คใƒˆใ‚’็ฐกๅ˜ใซ่ชญใฟ่พผใ‚“ใงๅฎŸ่กŒใพใŸใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใพใ™ใ€‚ไปฅไธ‹ใฎใƒกใ‚ฝใƒƒใƒ‰ใŒใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™๏ผš - [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora) - [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3) - [AdaLoRA](https://arxiv.org/abs/2303.10512) ไป–ใฎPEFTใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใ€ใƒ—ใƒญใƒณใƒ—ใƒˆๅญฆ็ฟ’ใ‚„ใƒ—ใƒญใƒณใƒ—ใƒˆ่ชฟๆ•ดใชใฉใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใ€ใพใŸใฏ๐Ÿค— PEFTใƒฉใ‚คใƒ–ใƒฉใƒชๅ…จ่ˆฌใซใคใ„ใฆใฏใ€[ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ](https://huggingface.co/docs/peft/index)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Load a PEFT adapter ๐Ÿค— Transformersใ‹ใ‚‰PEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚“ใงไฝฟ็”จใ™ใ‚‹ใซใฏใ€Hubใƒชใƒใ‚ธใƒˆใƒชใพใŸใฏใƒญใƒผใ‚ซใƒซใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซ `adapter_config.json` 
ใƒ•ใ‚กใ‚คใƒซใจใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚ฆใ‚งใ‚คใƒˆใŒๅซใพใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ๆฌกใซใ€`AutoModelFor` ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆPEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ็”จใฎPEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€ใซใฏ๏ผš 1. PEFTใƒขใƒ‡ใƒซใฎIDใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ 2. ใใ‚Œใ‚’[`AutoModelForCausalLM`] ใ‚ฏใƒฉใ‚นใซๆธกใ—ใพใ™ใ€‚ ```py from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id) ``` <Tip> PEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’`AutoModelFor`ใ‚ฏใƒฉใ‚นใพใŸใฏๅŸบๆœฌใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚น๏ผˆ`OPTForCausalLM`ใพใŸใฏ`LlamaForCausalLM`ใชใฉ๏ผ‰ใง่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใพใ™ใ€‚ </Tip> ใพใŸใ€`load_adapter`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใงใ€PEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’่ชญใฟ่พผใ‚€ใ“ใจใ‚‚ใงใใพใ™๏ผš ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "facebook/opt-350m" peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ``` ## Load in 8bit or 4bit `bitsandbytes` ็ตฑๅˆใฏใ€8ใƒ“ใƒƒใƒˆใŠใ‚ˆใณ4ใƒ“ใƒƒใƒˆใฎ็ฒพๅบฆใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใŠใ‚Šใ€ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€้š›ใซใƒกใƒขใƒชใ‚’็ฏ€็ด„ใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™๏ผˆ่ฉณ็ดฐใซใคใ„ใฆใฏ `bitsandbytes` ็ตฑๅˆใฎ[ใ‚ฌใ‚คใƒ‰](./quantization#bitsandbytes-integration)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€‚[`~PreTrainedModel.from_pretrained`] ใซ `load_in_8bit` ใพใŸใฏ `load_in_4bit` ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’่ฟฝๅŠ ใ—ใ€`device_map="auto"` ใ‚’่จญๅฎšใ—ใฆใƒขใƒ‡ใƒซใ‚’ๅŠนๆžœ็š„ใซใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซๅˆ†ๆ•ฃ้…็ฝฎใงใใพใ™๏ผš ```py from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True) ``` ## Add a new adapter ๆ—ขๅญ˜ใฎใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ๆŒใคใƒขใƒ‡ใƒซใซๆ–ฐใ—ใ„ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’่ฟฝๅŠ ใ™ใ‚‹ใŸใ‚ใซ [`~peft.PeftModel.add_adapter`] ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ใŸใ ใ—ใ€ๆ–ฐใ—ใ„ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฏ็พๅœจใฎใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใจๅŒใ˜ใ‚ฟใ‚คใƒ—ใงใ‚ใ‚‹้™ใ‚Šใ€ใ“ใ‚Œใ‚’่กŒใ†ใ“ใจใŒใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ใƒขใƒ‡ใƒซใซๆ—ขๅญ˜ใฎ LoRA ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใŒใ‚ขใ‚ฟใƒƒใƒใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆ๏ผš ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=["q_proj", "k_proj"], init_lora_weights=False ) model.add_adapter(lora_config, adapter_name="adapter_1") ``` ๆ–ฐใ—ใ„ใ‚ขใƒ€ใƒ—ใ‚ฟใ‚’่ฟฝๅŠ ใ™ใ‚‹ใซใฏ: ```py # attach new adapter with same config model.add_adapter(lora_config, adapter_name="adapter_2") ``` [`~peft.PeftModel.set_adapter`] ใ‚’ไฝฟ็”จใ—ใฆใ€ใฉใฎใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹ใ‚’่จญๅฎšใงใใพใ™๏ผš ```py # use adapter_1 model.set_adapter("adapter_1") output = model.generate(**inputs) print(tokenizer.decode(output_disabled[0], skip_special_tokens=True)) # use adapter_2 model.set_adapter("adapter_2") output_enabled = model.generate(**inputs) print(tokenizer.decode(output_enabled[0], skip_special_tokens=True)) ``` ## Enable and disable adapters ใƒขใƒ‡ใƒซใซใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’่ฟฝๅŠ ใ—ใŸใ‚‰ใ€ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๆœ‰ๅŠนใพใŸใฏ็„กๅŠนใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€ๆฌกใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใ—ใพใ™๏ผš ```py from transformers import 
AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" adapter_model_id = "ybelkada/opt-350m-lora" tokenizer = AutoTokenizer.from_pretrained(model_id) text = "Hello" inputs = tokenizer(text, return_tensors="pt") model = AutoModelForCausalLM.from_pretrained(model_id) peft_config = PeftConfig.from_pretrained(adapter_model_id) # to initiate with random weights peft_config.init_lora_weights = False model.add_adapter(peft_config) model.enable_adapters() output = model.generate(**inputs) ``` ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใƒขใ‚ธใƒฅใƒผใƒซใ‚’็„กๅŠนใซใ™ใ‚‹ใซใฏ๏ผš ```py model.disable_adapters() output = model.generate(**inputs) ``` ## Train a PEFT adapter PEFTใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฏ[`Trainer`]ใ‚ฏใƒฉใ‚นใงใ‚ตใƒใƒผใƒˆใ•ใ‚ŒใฆใŠใ‚Šใ€็‰นๅฎšใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซๅฏพใ—ใฆใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๆ•ฐ่กŒใฎใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงๆธˆใฟใพใ™ใ€‚ใŸใจใˆใฐใ€LoRAใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ ดๅˆ: <Tip> [`Trainer`]ใ‚’ไฝฟ็”จใ—ใŸใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ดใซๆ…ฃใ‚Œใฆใ„ใชใ„ๅ ดๅˆใฏใ€[ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ด](training)ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ </Tip> 1. ใ‚ฟใ‚นใ‚ฏใ‚ฟใ‚คใƒ—ใจใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใซๅฏพใ™ใ‚‹ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฎๆง‹ๆˆใ‚’ๅฎš็พฉใ—ใพใ™๏ผˆใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ่ฉณ็ดฐใซใคใ„ใฆใฏ[`~peft.LoraConfig`]ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€‚ ```py from peft import LoraConfig peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM", ) ``` 2. ใƒขใƒ‡ใƒซใซใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ€‚ ```py model.add_adapter(peft_config) ``` 3. ใ“ใ‚Œใงใ€ใƒขใƒ‡ใƒซใ‚’ [`Trainer`] ใซๆธกใ™ใ“ใจใŒใงใใพใ™๏ผ ```py trainer = Trainer(model=model, ...) trainer.train() ``` ไฟๅญ˜ใ™ใ‚‹ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใ‚ขใƒ€ใƒ—ใ‚ฟใจใใ‚Œใ‚’่ชญใฟ่พผใ‚€ใŸใ‚ใฎๆ‰‹้ †๏ผš
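ไปฅไธ‹ใฏไธ€่ˆฌ็š„ใชๆตใ‚Œใ‚’ๆƒณๅฎšใ—ใŸๆœ€ๅฐ้™ใฎใ‚นใ‚ฑใƒƒใƒใงใ™๏ผˆ`save_dir` ใฏไปฎใฎไฟๅญ˜ๅ…ˆใƒ‘ใ‚นใงใ‚ใ‚Šใ€ใŠไฝฟใ„ใฎ็’ฐๅขƒใซๅˆใ‚ใ›ใฆๅค‰ๆ›ดใ—ใฆใใ ใ•ใ„๏ผ‰๏ผš

```py
from transformers import AutoModelForCausalLM

# ไปฎใฎไฟๅญ˜ๅ…ˆใƒ‘ใ‚น๏ผˆ็’ฐๅขƒใซๅˆใ‚ใ›ใฆๅค‰ๆ›ดใ—ใฆใใ ใ•ใ„๏ผ‰
save_dir = "./my_adapter"

# ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผไป˜ใใฎใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใ™ใ‚‹ใจใ€ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฎ้‡ใฟใจ่จญๅฎšใŒไฟๅญ˜ใ•ใ‚Œใพใ™
model.save_pretrained(save_dir)

# ไฟๅญ˜ใ—ใŸใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ๅ†ๅบฆ่ชญใฟ่พผใ‚€
model = AutoModelForCausalLM.from_pretrained(save_dir)
```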
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/fast_tokenizers.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Use tokenizers from ๐Ÿค— Tokenizers [`PreTrainedTokenizerFast`]ใฏ[๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers)ใƒฉใ‚คใƒ–ใƒฉใƒชใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚๐Ÿค— Tokenizersใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ๅ–ๅพ—ใ—ใŸใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏใ€้žๅธธใซ็ฐกๅ˜ใซ๐Ÿค— Transformersใซใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ๅ…ทไฝ“็š„ใชๅ†…ๅฎนใซๅ…ฅใ‚‹ๅ‰ใซใ€ใพใšใฏใ„ใใคใ‹ใฎ่กŒใงใƒ€ใƒŸใƒผใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†๏ผš ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` ็งใŸใกใฏไปŠใ€ๅฎš็พฉใ—ใŸใƒ•ใ‚กใ‚คใƒซใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚’ใƒฉใƒณใ‚ฟใ‚คใƒ ใงๅผ•ใ็ถšใไฝฟ็”จใ™ใ‚‹ใ‹ใ€ ๅฐ†ๆฅใฎๅ†ๅˆฉ็”จใฎใŸใ‚ใซJSONใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ## Loading directly from the tokenizer object ๐Ÿค— Transformersใƒฉใ‚คใƒ–ใƒฉใƒชใงใ“ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ใฉใฎใ‚ˆใ†ใซๆดป็”จใงใใ‚‹ใ‹ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚[`PreTrainedTokenizerFast`]ใ‚ฏใƒฉใ‚นใฏใ€ *tokenizer*ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๅผ•ๆ•ฐใจใ—ใฆๅ—ใ‘ๅ…ฅใ‚Œใ€็ฐกๅ˜ใซใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` ใ“ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏใ€๐Ÿค— Transformers ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŒๅ…ฑๆœ‰ใ™ใ‚‹ใ™ในใฆใฎใƒกใ‚ฝใƒƒใƒ‰ใจไธ€็ท’ใซไฝฟ็”จใงใใพใ™๏ผ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใƒšใƒผใ‚ธ](main_classes/tokenizer)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ## Loading from a JSON file JSONใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’่ชญใฟ่พผใ‚€ใซใฏใ€ใพใšใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†๏ผš ```python >>> tokenizer.save("tokenizer.json") ``` ใ“ใฎใƒ•ใ‚กใ‚คใƒซใ‚’ไฟๅญ˜ใ—ใŸใƒ‘ใ‚นใฏใ€`PreTrainedTokenizerFast` ใฎๅˆๆœŸๅŒ–ใƒกใ‚ฝใƒƒใƒ‰ใซ `tokenizer_file` ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ไฝฟ็”จใ—ใฆๆธกใ™ใ“ใจใŒใงใใพใ™๏ผš ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` ใ“ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏใ€๐Ÿค— Transformers ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŒๅ…ฑๆœ‰ใ™ใ‚‹ใ™ในใฆใฎใƒกใ‚ฝใƒƒใƒ‰ใจไธ€็ท’ใซไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸ๏ผ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใƒšใƒผใ‚ธ](main_classes/tokenizer)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚
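ใŸใจใˆใฐใ€่ชญใฟ่พผใ‚“ใ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏๆฌกใฎใ‚ˆใ†ใซใใฎใพใพไฝฟ็”จใงใใพใ™ใ€‚ๅ…ฅๅŠ›ๆ–‡ใฏไปปๆ„ใฎไพ‹ใงใ‚ใ‚Šใ€ๅฎŸ้š›ใซๅพ—ใ‚‰ใ‚Œใ‚‹ใƒˆใƒผใ‚ฏใƒณIDใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซไฝฟ็”จใ—ใŸใƒ•ใ‚กใ‚คใƒซใซไพๅญ˜ใ—ใพใ™๏ผš

```python
>>> encoding = fast_tokenizer("Hello, world!")
>>> encoding["input_ids"]  # ๅญฆ็ฟ’ใ—ใŸ่ชžๅฝ™ใซๅฟœใ˜ใŸใƒˆใƒผใ‚ฏใƒณIDใฎใƒชใ‚นใƒˆ
>>> fast_tokenizer.decode(encoding["input_ids"])  # IDๅˆ—ใ‚’ใƒ†ใ‚ญใ‚นใƒˆใซๆˆปใ™
```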
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/benchmarks.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ ใ“ใฎใƒ•ใ‚กใ‚คใƒซใฏMarkdownใงใ™ใŒใ€Hugging Faceใฎdoc-builder๏ผˆMDXใซ้กžไผผ๏ผ‰ๅ‘ใ‘ใฎ็‰นๅฎšใฎๆง‹ๆ–‡ใ‚’ๅซใ‚“ใงใ„ใ‚‹ใŸใ‚ใ€ Markdownใƒ“ใƒฅใƒผใ‚ขใงใฏๆญฃใ—ใ่กจ็คบใ•ใ‚Œใชใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ --> # Benchmarks <Tip warning={true}> Hugging Faceใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใƒ„ใƒผใƒซใฏ้žๆŽจๅฅจใงใ‚ใ‚Šใ€Transformerใƒขใƒ‡ใƒซใฎ้€Ÿๅบฆใจใƒกใƒขใƒชใฎ่ค‡้›‘ใ•ใ‚’ๆธฌๅฎšใ™ใ‚‹ใŸใ‚ใซๅค–้ƒจใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ </Tip> [[open-in-colab]] ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ—ใ€ใƒ™ใ‚นใƒˆใƒ—ใƒฉใ‚ฏใƒ†ใ‚ฃใ‚นใ€ใ™ใงใซๅˆฉ็”จๅฏ่ƒฝใชใƒ™ใƒณใƒใƒžใƒผใ‚ฏใซใคใ„ใฆ่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใŸใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใฏ[ใ“ใกใ‚‰](https://github.com/huggingface/notebooks/tree/main/examples/benchmark.ipynb)ใงๅˆฉ็”จใงใใพใ™ใ€‚ ## How to benchmark ๐Ÿค— Transformers models [`PyTorchBenchmark`]ใ‚ฏใƒฉใ‚นใจ[`TensorFlowBenchmark`]ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ๆŸ”่ปŸใซใƒ™ใƒณใƒใƒžใƒผใ‚ฏใงใใพใ™ใ€‚ ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€_ใƒ”ใƒผใ‚ฏใƒกใƒขใƒชไฝฟ็”จ้‡_ ใŠใ‚ˆใณ _ๅฟ…่ฆใชๆ™‚้–“_ ใ‚’ _ๆŽจ่ซ–_ ใŠใ‚ˆใณ _ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ_ ใฎไธกๆ–นใซใคใ„ใฆๆธฌๅฎšใงใใพใ™ใ€‚ <Tip> ใ“ใ“ใงใฎ _ๆŽจ่ซ–_ ใฏใ€ๅ˜ไธ€ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใซใ‚ˆใฃใฆๅฎš็พฉใ•ใ‚Œใ€ _ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ_ ใฏๅ˜ไธ€ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใจ ใƒใƒƒใ‚ฏใƒฏใƒผใƒ‰ใƒ‘ใ‚นใซใ‚ˆใฃใฆๅฎš็พฉใ•ใ‚Œใพใ™ใ€‚ </Tip> ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ฏใƒฉใ‚น[`PyTorchBenchmark`]ใจ[`TensorFlowBenchmark`]ใฏใ€ใใ‚Œใžใ‚Œใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ฏใƒฉใ‚นใซๅฏพใ™ใ‚‹้ฉๅˆ‡ใช่จญๅฎšใ‚’ๅซใ‚€ [`PyTorchBenchmarkArguments`] ใŠใ‚ˆใณ [`TensorFlowBenchmarkArguments`] ใ‚ฟใ‚คใƒ—ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๅฟ…่ฆใจใ—ใพใ™ใ€‚ [`PyTorchBenchmarkArguments`] ใŠใ‚ˆใณ [`TensorFlowBenchmarkArguments`] ใฏใƒ‡ใƒผใ‚ฟใ‚ฏใƒฉใ‚นใงใ‚ใ‚Šใ€ใใ‚Œใžใ‚Œใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ฏใƒฉใ‚นใซๅฏพใ™ใ‚‹ใ™ในใฆใฎ้–ข้€ฃใ™ใ‚‹่จญๅฎšใ‚’ๅซใ‚“ใงใ„ใพใ™ใ€‚ ๆฌกใฎไพ‹ใงใฏใ€ใ‚ฟใ‚คใƒ— _bert-base-cased_ ใฎBERTใƒขใƒ‡ใƒซใ‚’ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ™ใ‚‹ๆ–นๆณ•ใŒ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚ <frameworkcontent> <pt> ```py >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments >>> args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]) >>> benchmark = PyTorchBenchmark(args) ``` </pt> <tf> ```py >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments >>> args = TensorFlowBenchmarkArguments( ... models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ... 
)

>>> benchmark = TensorFlowBenchmark(args)
```
</tf>
</frameworkcontent>

ใ“ใ“ใงใฏใ€ใƒ™ใƒณใƒใƒžใƒผใ‚ฏๅผ•ๆ•ฐใฎใƒ‡ใƒผใ‚ฟใ‚ฏใƒฉใ‚นใซๅฏพใ—ใฆใ€`models`ใ€`batch_sizes`ใ€ใŠใ‚ˆใณ`sequence_lengths`ใฎ3ใคใฎๅผ•ๆ•ฐใŒๆŒ‡ๅฎšใ•ใ‚Œใฆใ„ใพใ™ใ€‚ๅผ•ๆ•ฐ`models`ใฏๅฟ…้ ˆใงใ€[ใƒขใƒ‡ใƒซใƒใƒ–](https://huggingface.co/models)ใ‹ใ‚‰ใฎใƒขใƒ‡ใƒซ่ญ˜ๅˆฅๅญใฎ`ใƒชใ‚นใƒˆ`ใ‚’ๆœŸๅพ…ใ—ใพใ™ใ€‚`batch_sizes`ใจ`sequence_lengths`ใฎ2ใคใฎ`ใƒชใ‚นใƒˆ`ๅผ•ๆ•ฐใฏใ€ใƒขใƒ‡ใƒซใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏๅฏพ่ฑกใจใชใ‚‹`input_ids`ใฎใ‚ตใ‚คใ‚บใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚

ใƒ™ใƒณใƒใƒžใƒผใ‚ฏๅผ•ๆ•ฐใƒ‡ใƒผใ‚ฟใ‚ฏใƒฉใ‚นใ‚’ไป‹ใ—ใฆ่จญๅฎšใงใใ‚‹ไป–ใฎๅคšใใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€็›ดๆŽฅใƒ•ใ‚กใ‚คใƒซ `src/transformers/benchmark/benchmark_args_utils.py`ใ€`src/transformers/benchmark/benchmark_args.py`๏ผˆPyTorch็”จ๏ผ‰ใ€ใŠใ‚ˆใณ`src/transformers/benchmark/benchmark_args_tf.py`๏ผˆTensorflow็”จ๏ผ‰ใ‚’ๅ‚็…งใ™ใ‚‹ใ‹ใ€ๆฌกใฎใ‚ทใ‚งใƒซใ‚ณใƒžใƒณใƒ‰ใ‚’ใƒซใƒผใƒˆใ‹ใ‚‰ๅฎŸ่กŒใ™ใ‚‹ใจใ€PyTorchใจTensorflowใฎใใ‚Œใžใ‚Œใซๅฏพใ—ใฆ่จญๅฎšๅฏ่ƒฝใชใ™ในใฆใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ่จ˜่ฟฐ็š„ใชใƒชใ‚นใƒˆใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚

<frameworkcontent>
<pt>

```bash
python examples/pytorch/benchmarking/run_benchmark.py --help
```

ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ•ใ‚ŒใŸใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏใ€ๅ˜ใซ `benchmark.run()` ใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใงๅฎŸ่กŒใงใใพใ™ใ€‚

```py
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-uncased          8               8             0.006
bert-base-uncased          8               32            0.006
bert-base-uncased          8              128            0.018
bert-base-uncased          8              512            0.088
--------------------------------------------------------------------------------

==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased          8               8             1227
bert-base-uncased          8               32            1281
bert-base-uncased          8              128            1307
bert-base-uncased          8              512            1539
--------------------------------------------------------------------------------

==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 08:58:43.371351
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</pt>
<tf>

```bash
python examples/tensorflow/benchmarking/run_benchmark_tf.py --help
```

ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ•ใ‚ŒใŸใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏใ€ๅ˜ใซ `benchmark.run()` ใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใงๅฎŸ่กŒใงใใพใ™ใ€‚

```py
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-uncased          8               8             0.005
bert-base-uncased          8
32 0.008 bert-base-uncased 8 128 0.022 bert-base-uncased 8 512 0.105 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base-uncased 8 8 1330 bert-base-uncased 8 32 1330 bert-base-uncased 8 128 1330 bert-base-uncased 8 512 1770 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: Tensorflow - use_xla: False - framework_version: 2.2.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:26:35.617317 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` </tf> </frameworkcontent> ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€_ๆŽจ่ซ–ๆ™‚้–“_ ใจ _ๅฟ…่ฆใชใƒกใƒขใƒช_ ใŒใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ•ใ‚Œใพใ™ใ€‚ ไธŠ่จ˜ใฎไพ‹ใฎๅ‡บๅŠ›ใงใฏใ€ๆœ€ๅˆใฎ2ใคใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใŒ _ๆŽจ่ซ–ๆ™‚้–“_ ใจ _ๆŽจ่ซ–ใƒกใƒขใƒช_ ใซๅฏพๅฟœใ™ใ‚‹็ตๆžœใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ใ•ใ‚‰ใซใ€่จˆ็ฎ—็’ฐๅขƒใซ้–ขใ™ใ‚‹ใ™ในใฆใฎ้–ข้€ฃๆƒ…ๅ ฑใ€ ไพ‹ใˆใฐ GPU ใ‚ฟใ‚คใƒ—ใ€ใ‚ทใ‚นใƒ†ใƒ ใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใƒใƒผใ‚ธใƒงใƒณใชใฉใŒใ€_ENVIRONMENT INFORMATION_ ใฎไธ‹ใซ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๆƒ…ๅ ฑใฏใ€[`PyTorchBenchmarkArguments`] ใŠใ‚ˆใณ [`TensorFlowBenchmarkArguments`] ใซๅผ•ๆ•ฐ `save_to_csv=True` ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใงใ€ใ‚ชใƒ—ใ‚ทใƒงใƒณใง _.csv_ ใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ๅ„ใ‚ปใ‚ฏใ‚ทใƒงใƒณใฏๅˆฅใ€…ใฎ _.csv_ ใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ•ใ‚Œใพใ™ใ€‚_.csv_ ใƒ•ใ‚กใ‚คใƒซใธใฎใƒ‘ใ‚นใฏใ€ใƒ‡ใƒผใ‚ฟใ‚ฏใƒฉใ‚นใฎๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ‚ชใƒ—ใ‚ทใƒงใƒณใงๅฎš็พฉใงใใพใ™ใ€‚ ใƒขใƒ‡ใƒซ่ญ˜ๅˆฅๅญใ€ไพ‹ใˆใฐ `bert-base-uncased` ใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ๅˆฉ็”จๅฏ่ƒฝใชไปปๆ„ใฎใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใฎไปปๆ„ใฎ่จญๅฎšใ‚’ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ใƒ™ใƒณใƒใƒžใƒผใ‚ฏๅผ•ๆ•ฐใจๅ…ฑใซ่จญๅฎšใฎ `list` ใ‚’ๆŒฟๅ…ฅใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ <frameworkcontent> <pt> ```py >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig >>> args = PyTorchBenchmarkArguments( ... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ... 
)

>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)

>>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base                  8               8             0.006
bert-base                  8               32            0.006
bert-base                  8              128            0.018
bert-base                  8              512            0.088
bert-384-hid               8               8             0.006
bert-384-hid               8               32            0.006
bert-384-hid               8              128            0.011
bert-384-hid               8              512            0.054
bert-6-lay                 8               8             0.003
bert-6-lay                 8               32            0.004
bert-6-lay                 8              128            0.009
bert-6-lay                 8              512            0.044
--------------------------------------------------------------------------------

==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base                  8               8             1277
bert-base                  8               32            1281
bert-base                  8              128            1307
bert-base                  8              512            1539
bert-384-hid               8               8             1005
bert-384-hid               8               32            1027
bert-384-hid               8              128            1035
bert-384-hid               8              512            1255
bert-6-lay                 8               8             1097
bert-6-lay                 8               32            1101
bert-6-lay                 8              128            1127
bert-6-lay                 8              512            1359
--------------------------------------------------------------------------------

==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:35:25.143267
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</pt>
<tf>

```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig

>>> args = TensorFlowBenchmarkArguments(
...     models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)

>>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
          Model Name             Batch Size     Seq Length       Time in s
--------------------------------------------------------------------------------
          bert-base                  8               8             0.005
          bert-base                  8              32             0.008
          bert-base                  8             128             0.022
          bert-base                  8             512             0.106
        bert-384-hid                 8               8             0.005
        bert-384-hid                 8              32             0.007
        bert-384-hid                 8             128             0.018
        bert-384-hid                 8             512             0.064
         bert-6-lay                  8               8             0.002
         bert-6-lay                  8              32             0.003
         bert-6-lay                  8             128             0.0011
         bert-6-lay                  8             512             0.074
--------------------------------------------------------------------------------
====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
          Model Name             Batch Size     Seq Length      Memory in MB
--------------------------------------------------------------------------------
          bert-base                  8               8             1330
          bert-base                  8              32             1330
          bert-base                  8             128             1330
          bert-base                  8             512             1770
        bert-384-hid                 8               8             1330
        bert-384-hid                 8              32             1330
        bert-384-hid                 8             128             1330
        bert-384-hid                 8             512             1540
         bert-6-lay                  8               8             1330
         bert-6-lay                  8              32             1330
         bert-6-lay                  8             128             1330
         bert-6-lay                  8             512             1540
--------------------------------------------------------------------------------
====================        ENVIRONMENT INFORMATION         ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:38:15.487125
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent>

Again, inference time and required memory are benchmarked, but this time for customized configurations of the `BertModel` class. This feature can be especially helpful when deciding which configuration a model should be trained with.

## Benchmark best practices

This section lists a couple of best practices one should be aware of when benchmarking a model.

- Currently, only single-device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run. This can be done by setting the `CUDA_VISIBLE_DEVICES` environment variable in the shell, e.g. by running `export CUDA_VISIBLE_DEVICES=0` before executing the code (a runnable sketch combining these practices is shown after the next section).
- The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate memory measurement, it is recommended to run each memory benchmark in a separate process by making sure `no_multi_processing` is set to `True`.
- One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc., so benchmark results on their own are not very useful for the community.

## Sharing your benchmark

Previously, all available core models (10 at the time) were benchmarked for inference time across many different settings: using PyTorch with and without TorchScript, and using TensorFlow with and without XLA. All of these tests were done on CPU (except for TensorFlow XLA).

The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2) and the results are available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing).

With the new benchmark tools, it is easier than ever to share your benchmark results with the community:

- [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md).
- [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md).
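To tie these practices together, here is a minimal sketch of a shareable benchmark run: it pins the process to a single GPU and writes the timing, memory, and environment information to CSV files. The `save_to_csv` and `*_csv_file` options are assumptions about the benchmark arguments' API surface rather than something shown on this page, so check the version you have installed before relying on them:

```py
import os

# Pin the benchmark to a single GPU before any CUDA initialization,
# as recommended in the best practices above.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# `save_to_csv` and the `*_csv_file` arguments are assumed options of
# `PyTorchBenchmarkArguments`; adjust them to your installed version.
args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[8, 32, 128, 512],
    save_to_csv=True,
    inference_time_csv_file="inference_time.csv",
    inference_memory_csv_file="inference_memory.csv",
    env_info_csv_file="env_info.csv",  # share this file along with the results
)

benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```

Sharing the `env_info.csv` file along with the timing results gives readers the context (GPU device, library versions, and so on) that the best practices above call for.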
# ๐Ÿค— Transformers

State-of-the-art machine learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

๐Ÿค— Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models reduces your compute costs and carbon footprint, and saves the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

๐Ÿ“ **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, machine translation, multiple choice, and text generation.<br>
๐Ÿ–ผ๏ธ **Computer Vision**: image classification, object detection, and segmentation.<br>
๐Ÿ—ฃ๏ธ **Audio**: automatic speech recognition and audio classification.<br>
๐Ÿ™ **Multimodal**: table question answering, optical character recognition (OCR), information extraction from scanned documents, video classification, and visual question answering.

๐Ÿค— Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX or TorchScript for deployment in production environments, as shown in the sketch below.
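As a concrete illustration of this interoperability, here is a minimal sketch of saving a model loaded in PyTorch and reloading the same weights in TensorFlow. The checkpoint name `bert-base-uncased` and the local directory are placeholders chosen for the example:

```py
from transformers import AutoModel, TFAutoModel

# Load (or fine-tune) a model in PyTorch and save it to disk.
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./my-bert")

# Reload the same weights in TensorFlow; `from_pt=True` converts
# the PyTorch checkpoint on the fly.
tf_model = TFAutoModel.from_pretrained("./my-bert", from_pt=True)
```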
ดๆ‰€ใงใ™ใ€‚ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟใ„ๅง‹ใ‚ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชๅŸบๆœฌ็š„ใชใ‚นใ‚ญใƒซใ‚’็ฟ’ๅพ—ใงใใพใ™ใ€‚ - **HOW-TOใ‚ฌใ‚คใƒ‰** ใฏใ€่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎใŸใ‚ใซๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’finetuningใ™ใ‚‹ใ“ใจใ‚„ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใฎไฝœๆˆใจๅ…ฑๆœ‰ใฎๆ–นๆณ•ใชใฉใจใ„ใฃใŸ็‰นๅฎšใฎ็›ฎๆจ™ใ‚’้”ๆˆใ™ใ‚‹ใŸใ‚ใฎๆ–นๆณ•ใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ - **ใ‚ณใƒณใ‚ปใƒ—ใƒˆใ‚ฌใ‚คใƒ‰** ใฏใ€ใƒขใƒ‡ใƒซใ‚„ใ‚ฟใ‚นใ‚ฏใ€ใใ—ใฆ ๐Ÿค— Transformersใฎ่จญ่จˆๆ€ๆƒณใฎ่ƒŒๆ™ฏใซใ‚ใ‚‹ๅŸบๆœฌ็š„ใซใ‚ณใƒณใ‚ปใƒ—ใƒˆใ‚„่€ƒใˆๆ–นใซใคใ„ใฆใ‚ˆใ‚Šๆทฑใ่€ƒๅฏŸใ—่งฃ่ชฌใ—ใฆใ„ใพใ™ใ€‚ - **API** ๅ…จใฆใฎใ‚ฏใƒฉใ‚นใจ้–ขๆ•ฐใ‚’่ชฌๆ˜Žใ—ใพใ™: - **MAIN CLASSES** ใฏใ€configuration, model, tokenizer, pipelineใจใ„ใฃใŸๆœ€ใ‚‚้‡่ฆใชใ‚ฏใƒฉใ‚นใซใคใ„ใฆ่ฉณ็ดฐใซ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ - **MODELS** ใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใงๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใ‚‹ใใ‚Œใžใ‚Œใฎใƒขใƒ‡ใƒซใซ้–ข้€ฃใ—ใŸใ‚ฏใƒฉใ‚นใจ้–ขๆ•ฐใ‚’่ฉณ็ดฐใซ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ - **INTERNAL HELPERS** ใฏใ€ๅ†…้ƒจใงไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใ‚ฏใƒฉใ‚นใ‚„้–ขๆ•ฐใ‚’่ฉณ็ดฐใซ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ ### ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซ <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago ใ‹ใ‚‰) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) 1. **[AltCLIP](https://huggingface.co/docs/transformers/main/model_doc/altclip)** (BAAI ใ‹ใ‚‰) Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (MIT ใ‹ใ‚‰) Yuan Gong, Yu-An Chung, James Glass ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (Facebook ใ‹ใ‚‰) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (ร‰cole polytechnique ใ‹ใ‚‰) Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research ใ‹ใ‚‰) Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft ใ‹ใ‚‰) Hangbo Bao, Li Dong, Furu Wei ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 1. 
1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT)](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BLIP](https://huggingface.co/docs/transformers/main/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot.
1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou.
1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), and Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le.
1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** and **ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab.
1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GIT](https://huggingface.co/docs/transformers/main/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/main/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze.
1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](https://huggingface.co/docs/transformers/main/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nystrรถmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](https://huggingface.co/docs/transformers/main/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira.
1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya.
1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr.
1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook) released with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/main/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](https://huggingface.co/docs/transformers/main/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology) released with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook) released with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[Swin2SR](https://huggingface.co/docs/transformers/main/model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
1. **[SwitchTransformers](https://huggingface.co/docs/transformers/main/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[UPerNet](https://huggingface.co/docs/transformers/main/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
**[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (Multimedia Computing Group, Nanjing University ใ‹ใ‚‰) Zhan Tong, Yibing Song, Jue Wang, Limin Wang ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain ใ‹ใ‚‰) Wonjae Kim, Bokyung Son, Ildoo Kim ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (Google AI ใ‹ใ‚‰) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP ใ‹ใ‚‰) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (Google AI ใ‹ใ‚‰) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (Meta AI ใ‹ใ‚‰) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (Meta AI ใ‹ใ‚‰) Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (Facebook AI ใ‹ใ‚‰) Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI ใ‹ใ‚‰) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI ใ‹ใ‚‰) Qiantong Xu, Alexei Baevski, Michael Auli ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 1. 
**[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (Microsoft Research ใ‹ใ‚‰) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI ใ‹ใ‚‰) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (Microsoft Research ใ‹ใ‚‰) Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (Facebook ใ‹ใ‚‰) Guillaume Lample and Alexis Conneau ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (Microsoft Research ใ‹ใ‚‰) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (Facebook AI ใ‹ใ‚‰), Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI ใ‹ใ‚‰), Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU ใ‹ใ‚‰) Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 1. 
**[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI ใ‹ใ‚‰) Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (Facebook AI ใ‹ใ‚‰) Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (Huazhong University of Science & Technology ใ‹ใ‚‰) Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (the University of Wisconsin - Madison ใ‹ใ‚‰) Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh ใ‹ใ‚‰ๅ…ฌ้–‹ใ•ใ‚ŒใŸ็ ”็ฉถ่ซ–ๆ–‡: [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) ### ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ ไปฅไธ‹ใฎใƒ†ใƒผใƒ–ใƒซใฏใใ‚Œใžใ‚Œใฎใƒขใƒ‡ใƒซใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚"slow"ใจๅ‘ผใฐใ‚Œใ‚‹Pythonใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ€๐Ÿค— Tokenizers ใƒฉใ‚คใƒ–ใƒฉใƒชใซใ‚ˆใ‚‹"fast"ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ€PyTorch, TensorFlow, Flaxใฎ5ใคใฎใใ‚Œใžใ‚ŒใŒใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ <!--This table is updated automatically from the auto modules with _make fix-copies_. 
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->

| Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| Whisper | ✅ | ❌ | ✅ | ✅ | ❌ |
| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |

<!-- End table-->
# Export to TFLite

[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices, which have limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special, efficient portable format identified by the `.tflite` file extension.

🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through its `exporters.tflite` module. For the list of supported model architectures, please refer to the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).

To export a model to TFLite, install the required dependencies:

```bash
pip install optimum[exporters-tf]
```

To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view the help in the command line:

```bash
optimum-cli export tflite --help
```

To export a model's checkpoint from the 🤗 Hub, for example, `bert-base-uncased`, run the following command:

```bash
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
```

You should see logs indicating progress and showing where the resulting `model.tflite` was saved:

```bash
Validating TFLite model...
	-[✓] TFLite model output names match reference model (logits)
	- Validating TFLite Model output "logits":
		-[✓] (1, 128, 30522) matches (1, 128, 30522)
		-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
 The exported model was saved at: bert_tflite
```

The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weight files and tokenizer files in the same directory (`local_path`). When using the CLI, pass `local_path` to the `model` argument instead of the checkpoint name on the 🤗 Hub.
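Once exported, the model can be run with the TensorFlow Lite interpreter. The following is a minimal sketch, not part of the official export guide, assuming the `bert_tflite/model.tflite` layout produced by the command above and feeding zero-valued dummy inputs:

```python
import numpy as np
import tensorflow as tf

# Load the exported TFLite model
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

# Feed dummy data shaped and typed like each declared input (e.g. [1, 128] int32 token IDs)
for detail in interpreter.get_input_details():
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
logits = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(logits.shape)  # (1, 128, 30522) for the bert-base-uncased export above
```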
# How to convert a 🤗 Transformers model to TensorFlow?

Having multiple frameworks available to use with 🤗 Transformers gives you flexibility to play to each framework's strengths when designing your application, but it means that compatibility has to be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is easier than [adding a new model from scratch](add_new_model)! Whether you want to understand large TensorFlow models in depth, make a major open-source contribution, or enable TensorFlow for the model of your choice, this guide is for you.

This guide empowers you, a member of the community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a roller coaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experience is absolutely essential to make this process increasingly easier, so we strongly encourage you to suggest improvements to this guide!

Before you dive deeper, it is recommended that you check the following resources if you're new to 🤗 Transformers:
- [General overview of 🤗 Transformers](add_new_model#general-overview-of-transformers)
- [Hugging Face's TensorFlow philosophy](https://huggingface.co/blog/tensorflow-philosophy)

In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch weights into TensorFlow weights, and how to efficiently debug mismatches across ML frameworks. Let's get started!

<Tip>

Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?

&nbsp;

Check the `model_type` field of the `config.json` of your model of choice ([example](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in 🤗 Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)).

</Tip>
Transformersใฎ่ฉฒๅฝ“ใ™ใ‚‹ใƒขใƒ‡ใƒซใƒ•ใ‚ฉใƒซใƒ€ใซใ€ๅๅ‰ใŒ"modeling_tf"ใงๅง‹ใพใ‚‹ใƒ•ใ‚กใ‚คใƒซใŒใ‚ใ‚‹ๅ ดๅˆใ€ใใ‚Œใฏๅฏพๅฟœใ™ใ‚‹TensorFlow ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆŒใฃใฆใ„ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™๏ผˆ[ไพ‹](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)๏ผ‰ใ€‚ </Tip> ## Step-by-step guide to add TensorFlow model architecture code ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่จญ่จˆใ™ใ‚‹ๆ–นๆณ•ใฏใ•ใพใ–ใพใงใ‚ใ‚Šใ€ใใฎ่จญ่จˆใ‚’ๅฎŸ่ฃ…ใ™ใ‚‹ๆ–นๆณ•ใ‚‚ใ•ใพใ–ใพใงใ™ใ€‚ ใ—ใ‹ใ—ใ€[๐Ÿค— Transformersใฎไธ€่ˆฌ็š„ใชๆฆ‚่ฆ](add_new_model#general-overview-of-transformers)ใ‹ใ‚‰ ๆ€ใ„ๅ‡บใ—ใฆใ„ใŸใ ใ‘ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใŒใ€็งใŸใกใฏๆ„่ฆ‹ใฎใ‚ใ‚‹ใ‚ฐใƒซใƒผใƒ—ใงใ™ - ๐Ÿค— Transformersใฎไฝฟใ„ใ‚„ใ™ใ•ใฏไธ€่ฒซๆ€งใฎใ‚ใ‚‹่จญ่จˆใฎ้ธๆŠž่‚ขใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚็ตŒ้จ“ใ‹ใ‚‰ใ€TensorFlowใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹้š›ใซ้‡่ฆใชใ“ใจใ‚’ใ„ใใคใ‹ใŠไผใˆใงใใพใ™๏ผš - ่ปŠ่ผชใ‚’ๅ†็™บๆ˜Žใ—ใชใ„ใงใใ ใ•ใ„๏ผใปใจใ‚“ใฉใฎๅ ดๅˆใ€็ขบ่ชใ™ในใๅฐ‘ใชใใจใ‚‚2ใคใฎๅ‚็…งๅฎŸ่ฃ…ใŒใ‚ใ‚Šใพใ™ใ€‚ใใ‚Œใฏใ€ ใ‚ใชใŸใŒๅฎŸ่ฃ…ใ—ใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใฎPyTorchใƒใƒผใ‚ธใƒงใƒณใจใ€ๅŒใ˜็จฎ้กžใฎๅ•้กŒใซๅฏพใ™ใ‚‹ไป–ใฎTensorFlowใƒขใƒ‡ใƒซใงใ™ใ€‚ - ๅ„ชใ‚ŒใŸใƒขใƒ‡ใƒซๅฎŸ่ฃ…ใฏๆ™‚้–“ใฎ่ฉฆ็ทดใ‚’ไน—ใ‚Š่ถŠใˆใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ‚ณใƒผใƒ‰ใŒใใ‚Œใ„ใ ใ‹ใ‚‰ใงใฏใชใใ€ใ‚ณใƒผใƒ‰ใŒๆ˜Ž็ขบใงใ€ใƒ‡ใƒใƒƒใ‚ฐใ—ใ‚„ใ™ใใ€ ๆง‹็ฏ‰ใ—ใ‚„ใ™ใ„ใ‹ใ‚‰ใงใ™ใ€‚TensorFlowๅฎŸ่ฃ…ใงPyTorchๅฎŸ่ฃ…ใจไธ€่‡ดใ™ใ‚‹ใƒ‘ใ‚ฟใƒผใƒณใ‚’่ค‡่ฃฝใ—ใ€PyTorchๅฎŸ่ฃ…ใจใฎไธไธ€่‡ดใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‹ใ“ใจใงใ€ ใ‚ใชใŸใฎ่ฒข็ŒฎใŒ้•ทๆœŸ้–“ใซใ‚ใŸใฃใฆๆœ‰็”จใงใ‚ใ‚‹ใ“ใจใ‚’ไฟ่จผใ—ใพใ™ใ€‚ - ่กŒใ่ฉฐใพใฃใŸใ‚‰ๅŠฉใ‘ใ‚’ๆฑ‚ใ‚ใฆใใ ใ•ใ„๏ผ ๐Ÿค— Transformersใƒใƒผใƒ ใฏใ“ใ“ใซใ„ใพใ™ใ—ใ€ใŠใใ‚‰ใใ‚ใชใŸใŒ็›ด้ขใ—ใฆใ„ใ‚‹ๅŒใ˜ๅ•้กŒใซๅฏพใ™ใ‚‹่งฃๆฑบ็ญ–ใ‚’่ฆ‹ใคใ‘ใฆใ„ใพใ™ใ€‚ TensorFlowใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่ฟฝๅŠ ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใ‚นใƒ†ใƒƒใƒ—ใฎๆฆ‚่ฆใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš 1. ๅค‰ๆ›ใ—ใŸใ„ใƒขใƒ‡ใƒซใ‚’้ธๆŠž 2. transformersใฎ้–‹็™บ็’ฐๅขƒใ‚’ๆบ–ๅ‚™ 3. ๏ผˆใ‚ชใƒ—ใ‚ทใƒงใƒณ๏ผ‰็†่ซ–็š„ใชๅด้ขใจๆ—ขๅญ˜ใฎๅฎŸ่ฃ…ใ‚’็†่งฃ 4. ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๅฎŸ่ฃ… 5. ใƒขใƒ‡ใƒซใฎใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่ฃ… 6. ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ๆๅ‡บ 7. ๏ผˆใ‚ชใƒ—ใ‚ทใƒงใƒณ๏ผ‰ใƒ‡ใƒขใ‚’ๆง‹็ฏ‰ใ—ใฆไธ–็•Œใจๅ…ฑๆœ‰ ### 1.-3. Prepare your model contribution **1. 
ๅค‰ๆ›ใ—ใŸใ„ใƒขใƒ‡ใƒซใ‚’้ธๆŠžใ™ใ‚‹** ใพใšใ€ๅŸบๆœฌใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†ใ€‚ๆœ€ๅˆใซ็ŸฅใฃใฆใŠใๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใฏใ€ๅค‰ๆ›ใ—ใŸใ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใ™ใ€‚ ็‰นๅฎšใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆฑบใ‚ใฆใ„ใชใ„ๅ ดๅˆใ€๐Ÿค— Transformers ใƒใƒผใƒ ใซๆๆกˆใ‚’ๆฑ‚ใ‚ใ‚‹ใ“ใจใฏใ€ๅฝฑ้Ÿฟใ‚’ๆœ€ๅคง้™ใซใ™ใ‚‹็ด ๆ™ดใ‚‰ใ—ใ„ๆ–นๆณ•ใงใ™ใ€‚ ใƒใƒผใƒ ใฏใ€TensorFlow ใ‚ตใ‚คใƒ‰ใงไธ่ถณใ—ใฆใ„ใ‚‹ๆœ€ใ‚‚ๆณจ็›ฎใ•ใ‚Œใ‚‹ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅ‘ใ‘ใฆใ‚ฌใ‚คใƒ‰ใ—ใพใ™ใ€‚ TensorFlow ใงไฝฟ็”จใ—ใŸใ„็‰นๅฎšใฎใƒขใƒ‡ใƒซใซใ€๐Ÿค— Transformers ใซๆ—ขใซ TensorFlow ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎๅฎŸ่ฃ…ใŒๅญ˜ๅœจใ—ใฆใ„ใ‚‹ใŒใ€้‡ใฟใŒไธ่ถณใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ ใ“ใฎใƒšใƒผใ‚ธใฎ[้‡ใฟใฎ่ฟฝๅŠ ใ‚ปใ‚ฏใ‚ทใƒงใƒณ](#adding-tensorflow-weights-to-hub)ใซ็›ดๆŽฅ็งปๅ‹•ใ—ใฆใใ ใ•ใ„ใ€‚ ็ฐกๅ˜ใซใ™ใ‚‹ใŸใ‚ใซใ€ใ“ใฎใ‚ฌใ‚คใƒ‰ใฎๆฎ‹ใ‚Šใฎ้ƒจๅˆ†ใงใฏใ€TensorFlow ใƒใƒผใ‚ธใƒงใƒณใฎ *BrandNewBert* ใ‚’่ฒข็Œฎใ™ใ‚‹ใ“ใจใ‚’ๆฑบๅฎšใ—ใŸใจไปฎๅฎšใ—ใฆใ„ใพใ™ ๏ผˆใ“ใ‚Œใฏใ€[ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใฎ่ฟฝๅŠ ใ‚ฌใ‚คใƒ‰](add_new_model)ใงใฎไพ‹ใจๅŒใ˜ใงใ™๏ผ‰ใ€‚ <Tip> TensorFlow ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅ–ใ‚Š็ต„ใ‚€ๅ‰ใซใ€ใใ‚Œใ‚’่กŒใ†ใŸใ‚ใฎ้€ฒ่กŒไธญใฎๅ–ใ‚Š็ต„ใฟใŒใชใ„ใ‹ใ‚’ๅ†็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ GitHub ใƒšใƒผใ‚ธใฎ[ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆ](https://github.com/huggingface/transformers/pulls?q=is%3Apr)ใง `BrandNewBert` ใ‚’ๆคœ็ดขใ—ใฆใ€ TensorFlow ้–ข้€ฃใฎใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใŒใชใ„ใ“ใจใ‚’็ขบ่ชใงใใพใ™ใ€‚ </Tip> **2. transformers ้–‹็™บ็’ฐๅขƒใฎๆบ–ๅ‚™** ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’้ธๆŠžใ—ใŸใ‚‰ใ€ๆ„ๅ‘ใ‚’็คบใ™ใŸใ‚ใซใƒ‰ใƒฉใƒ•ใƒˆ PR ใ‚’้–‹ใใŸใ‚ใฎ็’ฐๅขƒใ‚’่จญๅฎšใ—ใฆใใ ใ•ใ„ใ€‚ ไปฅไธ‹ใฎๆ‰‹้ †ใซๅพ“ใฃใฆใ€็’ฐๅขƒใ‚’่จญๅฎšใ—ใ€ใƒ‰ใƒฉใƒ•ใƒˆ PR ใ‚’้–‹ใ„ใฆใใ ใ•ใ„ใ€‚ 1. ใƒชใƒใ‚ธใƒˆใƒชใฎใƒšใƒผใ‚ธใง 'Fork' ใƒœใ‚ฟใƒณใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ€[ใƒชใƒใ‚ธใƒˆใƒช](https://github.com/huggingface/transformers)ใ‚’ใƒ•ใ‚ฉใƒผใ‚ฏใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚ณใƒผใƒ‰ใฎใ‚ณใƒ”ใƒผใŒ GitHub ใƒฆใƒผใ‚ถใƒผใ‚ขใ‚ซใ‚ฆใƒณใƒˆใฎไธ‹ใซไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ 2. ใƒญใƒผใ‚ซใƒซใƒ‡ใ‚ฃใ‚นใ‚ฏใซใ‚ใ‚‹ 'transformers' ใƒ•ใ‚ฉใƒผใ‚ฏใ‚’ใ‚ฏใƒญใƒผใƒณใ—ใ€ใƒ™ใƒผใ‚นใƒชใƒใ‚ธใƒˆใƒชใ‚’ใƒชใƒขใƒผใƒˆใจใ—ใฆ่ฟฝๅŠ ใ—ใพใ™: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ้–‹็™บ็’ฐๅขƒใ‚’่จญๅฎšใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` ไพๅญ˜้–ขไฟ‚ใŒๅข—ใˆใฆใ„ใ‚‹ใŸใ‚ใ€OSใซๅฟœใ˜ใฆใ€Transformersใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฎไพๅญ˜้–ขไฟ‚ใฎๆ•ฐใŒๅข—ใˆใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใใฎๅ ดๅˆใฏใ€TensorFlowใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใ‹ใ‚‰ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„ใ€‚ ```bash pip install -e ".[quality]" ``` **ๆณจๆ„:** CUDAใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’CPUใงๅ‹•ไฝœใ•ใ›ใ‚‹ใ“ใจใŒๅๅˆ†ใงใ™ใ€‚ 4. ใƒกใ‚คใƒณใƒ–ใƒฉใƒณใƒใ‹ใ‚‰ใ‚ใ‹ใ‚Šใ‚„ใ™ใ„ๅๅ‰ใฎใƒ–ใƒฉใƒณใƒใ‚’ไฝœๆˆใ—ใฆใใ ใ•ใ„ใ€‚ ```bash git checkout -b add_tf_brand_new_bert ``` 5. ็พๅœจใฎmainใƒ–ใƒฉใƒณใƒใซใƒ•ใ‚งใƒƒใƒใ—ใฆใƒชใƒ™ใƒผใ‚นใ™ใ‚‹ ```bash git fetch upstream git rebase upstream/main ``` 6. `transformers/src/models/brandnewbert/`ใซ`modeling_tf_brandnewbert.py`ใจใ„ใ†ๅๅ‰ใฎ็ฉบใฎ`.py`ใƒ•ใ‚กใ‚คใƒซใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ‚ใชใŸใฎTensorFlowใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซใงใ™ใ€‚ 7. 
ไปฅไธ‹ใ‚’ไฝฟ็”จใ—ใฆๅค‰ๆ›ดๅ†…ๅฎนใ‚’ใ‚ขใ‚ซใ‚ฆใƒณใƒˆใซใƒ—ใƒƒใ‚ทใƒฅใ—ใพใ™๏ผš ```bash git add . git commit -m "initial commit" git push -u origin add_tf_brand_new_bert ``` 8. GitHubไธŠใงใƒ•ใ‚ฉใƒผใ‚ฏใ—ใŸใ‚ฆใ‚งใƒ–ใƒšใƒผใ‚ธใซ็งปๅ‹•ใ—ใ€ใ€Œใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ€ใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใพใ™ใ€‚ๅฐ†ๆฅใฎๅค‰ๆ›ดใซๅ‚™ใˆใฆใ€Hugging Face ใƒใƒผใƒ ใฎใƒกใƒณใƒใƒผใฎGitHubใƒใƒณใƒ‰ใƒซใ‚’ใƒฌใƒ“ใƒฅใ‚ขใƒผใจใ—ใฆ่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ใ€‚ 9. GitHubใฎใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚ฆใ‚งใƒ–ใƒšใƒผใ‚ธใฎๅณๅดใซใ‚ใ‚‹ใ€Œใƒ‰ใƒฉใƒ•ใƒˆใซๅค‰ๆ›ใ€ใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ€ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ใƒ‰ใƒฉใƒ•ใƒˆใซๅค‰ๆ›ดใ—ใพใ™ใ€‚ ใ“ใ‚Œใงใ€๐Ÿค— Transformersๅ†…ใซ*BrandNewBert*ใ‚’TensorFlowใซ็งปๆคใ™ใ‚‹ใŸใ‚ใฎ้–‹็™บ็’ฐๅขƒใŒ่จญๅฎšใ•ใ‚Œใพใ—ใŸใ€‚ **3. (ไปปๆ„) ็†่ซ–็š„ใชๅด้ขใจๆ—ขๅญ˜ใฎๅฎŸ่ฃ…ใ‚’็†่งฃใ™ใ‚‹** *BrandNewBert*ใฎ่ซ–ๆ–‡ใŒๅญ˜ๅœจใ™ใ‚‹ๅ ดๅˆใ€ใใฎ่จ˜่ฟฐ็š„ใชไฝœๆฅญใ‚’่ชญใ‚€ๆ™‚้–“ใ‚’ๅ–ใ‚‹ในใใงใ™ใ€‚่ซ–ๆ–‡ใซใฏ็†่งฃใŒ้›ฃใ—ใ„ๅคงใใชใ‚ปใ‚ฏใ‚ทใƒงใƒณใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใใฎๅ ดๅˆใงใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ - ๅฟƒ้…ใ—ใชใ„ใงใใ ใ•ใ„๏ผ็›ฎๆจ™ใฏ่ซ–ๆ–‡ใฎ็†่ซ–็š„ใช็†่งฃใ‚’ๆทฑใ‚ใ‚‹ใ“ใจใงใฏใชใใ€๐Ÿค— Transformersใ‚’ไฝฟ็”จใ—ใฆTensorFlowใงใƒขใƒ‡ใƒซใ‚’ๅŠนๆžœ็š„ใซๅ†ๅฎŸ่ฃ…ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชๆƒ…ๅ ฑใ‚’ๆŠฝๅ‡บใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใจใฏ่จ€ใˆใ€็†่ซ–็š„ใชๅด้ขใซใ‚ใพใ‚Šๆ™‚้–“ใ‚’ใ‹ใ‘ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ไปฃใ‚ใ‚Šใซใ€ๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒšใƒผใ‚ธ๏ผˆใŸใจใˆใฐใ€[BERTใฎใƒขใƒ‡ใƒซใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](model_doc/bert)ใชใฉ๏ผ‰ใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ‚‹ในใใงใ™ใ€‚ ๅฎŸ่ฃ…ใ™ใ‚‹ใƒขใƒ‡ใƒซใฎๅŸบๆœฌใ‚’ๆŠŠๆกใ—ใŸๅพŒใ€ๆ—ขๅญ˜ใฎๅฎŸ่ฃ…ใ‚’็†่งฃใ™ใ‚‹ใ“ใจใฏ้‡่ฆใงใ™ใ€‚ใ“ใ‚Œใฏใ€ๅ‹•ไฝœใ™ใ‚‹ๅฎŸ่ฃ…ใŒใƒขใƒ‡ใƒซใซๅฏพใ™ใ‚‹ๆœŸๅพ…ใจไธ€่‡ดใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹็ตถๅฅฝใฎๆฉŸไผšใงใ‚ใ‚Šใ€TensorFlowๅดใงใฎๆŠ€่ก“็š„ใช่ชฒ้กŒใ‚’ไบˆๆธฌใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ๆƒ…ๅ ฑใฎๅคšใ•ใซๅœงๅ€’ใ•ใ‚Œใฆใ„ใ‚‹ใจๆ„Ÿใ˜ใ‚‹ใฎใฏๅฎŒๅ…จใซ่‡ช็„ถใงใ™ใ€‚ใ“ใฎๆฎต้šŽใงใฏใƒขใƒ‡ใƒซใฎใ™ในใฆใฎๅด้ขใ‚’็†่งฃใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใŸใ ใ—ใ€[ใƒ•ใ‚ฉใƒผใƒฉใƒ ](https://discuss.huggingface.co/)ใงๆ€ฅใช่ณชๅ•ใ‚’่งฃๆฑบใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ### 4. 
### 4. Model implementation

Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of `modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into `modeling_tf_brand_new_bert.py`. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import `TFBrandNewBert` and `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model.

Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible (a short code sketch illustrating a couple of these points follows this list):

- Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`).
- Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to `tf.keras.layers.Dense`, and `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`. If you're not sure about a specific operation, you can refer to the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the [PyTorch documentation](https://pytorch.org/docs/stable/).
- Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn't have a direct replacement, the odds are that someone else has already had the same problem.
- By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line.
- Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (`1e-5` in PyTorch, `1e-3` in [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)). Double-check the documentation!
- PyTorch's `nn.Parameter` variables typically need to be initialized within a TF Layer's `build()`. See the following example: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220)
- If the PyTorch model has a `#copied from ...` on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
- Setting the `name` attribute correctly in TensorFlow functions is critical for the `from_pt=True` weight cross-loading to work. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not properly set, you will see it in the error message when loading the model weights.
- The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)). `TFBrandNewBertModel` will simply be a wrapper around this layer.
- Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel` will need to hold an example of inputs to the model, the `dummy_inputs` ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)).
- If you get stuck, ask for help - we're here to help you! 🤗
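To make a couple of these tips concrete, here is a minimal, hypothetical sketch (not taken from any real model) of how a PyTorch layer with an `nn.Parameter` and a `Linear` might look on the TensorFlow side, with `name` set for weight cross-loading and the parameter created in `build()`:

```python
import tensorflow as tf
import torch
from torch import nn


class ToyBlock(nn.Module):  # hypothetical PyTorch original
    def __init__(self, hidden_size):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x):
        return self.dense(x) + self.bias


class TFToyBlock(tf.keras.layers.Layer):  # hypothetical TensorFlow port
    def __init__(self, hidden_size, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size
        # `name` matches the PyTorch attribute so `from_pt=True` can map the weights
        self.dense = tf.keras.layers.Dense(hidden_size, name="dense")

    def build(self, input_shape):
        # nn.Parameter counterparts are created in build(), not in __init__
        self.bias = self.add_weight(name="bias", shape=(self.hidden_size,), initializer="zeros")
        super().build(input_shape)

    def call(self, x):
        return self.dense(x) + self.bias
```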
ใซใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚ŒใฏKerasใƒฌใ‚คใƒคใƒผใฎใ‚ตใƒ–ใ‚ฏใƒฉใ‚นใงใ™๏ผˆ[ไพ‹](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)๏ผ‰ใ€‚`TFBrandNewBertModel` ใฏใ€ๅ˜ใซใ“ใฎใƒฌใ‚คใƒคใƒผใฎใƒฉใƒƒใƒ‘ใƒผใงใ™ใ€‚ - ใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€ใŸใ‚ใซใฏใ€Kerasใƒขใƒ‡ใƒซใ‚’ใƒ“ใƒซใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใใฎใŸใ‚ใ€`TFBrandNewBertPreTrainedModel` ใฏใƒขใƒ‡ใƒซใธใฎๅ…ฅๅŠ›ใฎไพ‹ใ€`dummy_inputs` ใ‚’ๆŒใคๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผˆ[ไพ‹](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)๏ผ‰ใ€‚ - ่กจ็คบใŒๆญขใพใฃใŸๅ ดๅˆใฏใ€ๅŠฉใ‘ใ‚’ๆฑ‚ใ‚ใฆใใ ใ•ใ„ใ€‚็งใŸใกใฏใ‚ใชใŸใฎใŠๆ‰‹ไผใ„ใซใ“ใ“ใซใ„ใพใ™๏ผ ๐Ÿค— ใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซ่‡ชไฝ“ใ ใ‘ใงใชใใ€ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใจ้–ข้€ฃใ™ใ‚‹ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒšใƒผใ‚ธใธใฎใƒใ‚คใƒณใ‚ฟใƒผใ‚‚่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไป–ใฎPRใฎใƒ‘ใ‚ฟใƒผใƒณใซๅพ“ใฃใฆใ“ใฎ้ƒจๅˆ†ใ‚’ๅฎŒไบ†ใงใใพใ™ ๏ผˆ[ไพ‹](https://github.com/huggingface/transformers/pull/18020/files)๏ผ‰ใ€‚ ไปฅไธ‹ใฏๆ‰‹ๅ‹•ใงใฎๅค‰ๆ›ดใŒๅฟ…่ฆใชไธ€่ฆงใงใ™๏ผš - *BrandNewBert*ใฎใ™ในใฆใฎใƒ‘ใƒ–ใƒชใƒƒใ‚ฏใ‚ฏใƒฉใ‚นใ‚’ `src/transformers/__init__.py` ใซๅซใ‚ใ‚‹ - *BrandNewBert*ใ‚ฏใƒฉใ‚นใ‚’ `src/transformers/models/auto/modeling_tf_auto.py` ใฎๅฏพๅฟœใ™ใ‚‹Autoใ‚ฏใƒฉใ‚นใซ่ฟฝๅŠ  - ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใฎใƒชใ‚นใƒˆใซใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ `utils/documentation_tests.txt` - `src/transformers/utils/dummy_tf_objects.py` ใซ้–ข้€ฃใ™ใ‚‹ *BrandNewBert* ใซ้–ข้€ฃใ™ใ‚‹้…ๅปถใƒญใƒผใƒ‰ใ‚ฏใƒฉใ‚นใ‚’่ฟฝๅŠ  - `src/transformers/models/brand_new_bert/__init__.py` ใงใƒ‘ใƒ–ใƒชใƒƒใ‚ฏใ‚ฏใƒฉใ‚นใฎใ‚คใƒณใƒใƒผใƒˆๆง‹้€ ใ‚’ๆ›ดๆ–ฐ - `docs/source/en/model_doc/brand_new_bert.md` ใซ *BrandNewBert* ใฎใƒ‘ใƒ–ใƒชใƒƒใ‚ฏใƒกใ‚ฝใƒƒใƒ‰ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒใ‚คใƒณใ‚ฟใƒผใ‚’่ฟฝๅŠ  - `docs/source/en/model_doc/brand_new_bert.md` ใฎ *BrandNewBert* ใฎ่ฒข็Œฎ่€…ใƒชใ‚นใƒˆใซ่‡ชๅˆ†่‡ช่บซใ‚’่ฟฝๅŠ  - ๆœ€ๅพŒใซใ€`docs/source/en/index.md` ใฎ *BrandNewBert* ใฎTensorFlowๅˆ—ใซ็ท‘่‰ฒใฎใƒใ‚งใƒƒใ‚ฏใƒžใƒผใ‚ฏ โœ… ใ‚’่ฟฝๅŠ  ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒๆบ–ๅ‚™ใงใใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ไปฅไธ‹ใฎใƒใ‚งใƒƒใ‚ฏใƒชใ‚นใƒˆใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš 1. ่จ“็ทดๆ™‚ใซ็•ฐใชใ‚‹ๅ‹•ไฝœใ‚’ใ™ใ‚‹ใ™ในใฆใฎใƒฌใ‚คใƒคใƒผ๏ผˆไพ‹๏ผšDropout๏ผ‰ใฏใ€`training`ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆๅ‘ผใณๅ‡บใ•ใ‚Œใ€ใใ‚ŒใŒๆœ€ไธŠไฝใ‚ฏใƒฉใ‚นใ‹ใ‚‰ไผๆ’ญใ•ใ‚Œใพใ™ใ€‚ 2. ๅฏ่ƒฝใช้™ใ‚Š `#copied from ...` ใ‚’ไฝฟ็”จใ—ใพใ—ใŸ 3. `TFBrandNewBertMainLayer` ใŠใ‚ˆใณใใ‚Œใ‚’ไฝฟ็”จใ™ใ‚‹ใ™ในใฆใฎใ‚ฏใƒฉใ‚นใฎ `call` ้–ขๆ•ฐใŒ `@unpack_inputs` ใงใƒ‡ใ‚ณใƒฌใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™ 4. `TFBrandNewBertMainLayer` ใฏ `@keras_serializable` ใงใƒ‡ใ‚ณใƒฌใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™ 5. PyTorchใ‚ฆใ‚งใ‚คใƒˆใ‹ใ‚‰TensorFlowใ‚ฆใ‚งใ‚คใƒˆใ‚’ไฝฟ็”จใ—ใฆTensorFlowใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` 6. ไบˆๆœŸใ•ใ‚Œใ‚‹ๅ…ฅๅŠ›ๅฝขๅผใ‚’ไฝฟ็”จใ—ใฆTensorFlowใƒขใƒ‡ใƒซใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใŒใงใใพใ™ ### 5. 
### 5. Add model tests

Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in `tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary TensorFlow replacements. For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights.

After you're done, it's time for the moment of truth: run the tests! 😬

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and `breakpoint()`). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck.

When all tests pass, congratulations - your model is nearly ready to be added to the 🤗 Transformers library! 🎉
ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ๆๅ‡บใ™ใ‚‹** ๅฎŸ่ฃ…ใจใƒ†ใ‚นใƒˆใŒๅฎŒไบ†ใ—ใŸใ‚‰ใ€ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ๆๅ‡บใ™ใ‚‹ๆบ–ๅ‚™ใŒๆ•ดใ„ใพใ—ใŸใ€‚ใ‚ณใƒผใƒ‰ใ‚’ใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ๅ‰ใซใ€ ใ‚ณใƒผใƒ‰ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใงใ‚ใ‚‹ `make fixup` ๐Ÿช„ ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่‡ชๅ‹•็š„ใชใƒใ‚งใƒƒใ‚ฏใซๅคฑๆ•—ใ™ใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใฎๅ•้กŒใŒ่‡ชๅ‹•็š„ใซไฟฎๆญฃใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใงใ€ใƒ‰ใƒฉใƒ•ใƒˆใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ๅฎŸ้š›ใฎใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใซๅค‰ๆ›ใ™ใ‚‹ๆบ–ๅ‚™ใŒๆ•ดใ„ใพใ—ใŸใ€‚ ใ“ใ‚Œใ‚’่กŒใ†ใซใฏใ€ใ€Œใƒฌใƒ“ใƒฅใƒผๅพ…ใกใ€ใƒœใ‚ฟใƒณใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใ€Joao๏ผˆ`@gante`๏ผ‰ใจMatt๏ผˆ`@Rocketknight1`๏ผ‰ใ‚’ใƒฌใƒ“ใƒฅใƒฏใƒผใจใ—ใฆ่ฟฝๅŠ ใ—ใพใ™ใ€‚ ใƒขใƒ‡ใƒซใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใซใฏๅฐ‘ใชใใจใ‚‚3ไบบใฎใƒฌใƒ“ใƒฅใƒฏใƒผใŒๅฟ…่ฆใงใ™ใŒใ€ใƒขใƒ‡ใƒซใซ้ฉๅˆ‡ใช่ฟฝๅŠ ใฎใƒฌใƒ“ใƒฅใƒฏใƒผใ‚’่ฆ‹ใคใ‘ใ‚‹ใฎใฏๅฝผใ‚‰ใฎ่ฒฌไปปใงใ™ใ€‚ ใ™ในใฆใฎใƒฌใƒ“ใƒฅใƒฏใƒผใŒใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใฎ็Šถๆ…‹ใซๆบ€่ถณใ—ใŸใ‚‰ใ€ๆœ€ๅพŒใฎใ‚ขใ‚ฏใ‚ทใƒงใƒณใƒใ‚คใƒณใƒˆใฏใ€`.from_pretrained()` ๅ‘ผใณๅ‡บใ—ใง `from_pt=True` ใƒ•ใƒฉใ‚ฐใ‚’ๅ‰Š้™คใ™ใ‚‹ใ“ใจใงใ™ใ€‚ TensorFlowใฎใ‚ฆใ‚งใ‚คใƒˆใŒๅญ˜ๅœจใ—ใชใ„ใŸใ‚ใ€ใใ‚Œใ‚‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผใ“ใ‚Œใ‚’่กŒใ†ๆ–นๆณ•ใซใคใ„ใฆใฏใ€ไปฅไธ‹ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ๆœ€ๅพŒใซใ€TensorFlowใฎใ‚ฆใ‚งใ‚คใƒˆใŒใƒžใƒผใ‚ธใ•ใ‚Œใ€ๅฐ‘ใชใใจใ‚‚3ไบบใฎใƒฌใƒ“ใƒฅใƒผใ‚ขใŒๆ‰ฟ่ชใ—ใ€ใ™ในใฆใฎCIใƒใ‚งใƒƒใ‚ฏใŒ ๆˆๅŠŸใ—ใŸๅ ดๅˆใ€ใƒ†ใ‚นใƒˆใ‚’ใƒญใƒผใ‚ซใƒซใงๆœ€ๅพŒใซใ‚‚ใ†ไธ€ๅบฆ็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py ``` ใใ—ใฆใ€ใ‚ใชใŸใฎPRใ‚’ใƒžใƒผใ‚ธใ—ใพใ™๏ผใƒžใ‚คใƒซใ‚นใƒˆใƒผใƒณ้”ๆˆใŠใ‚ใงใจใ†ใ”ใ–ใ„ใพใ™ ๐ŸŽ‰ **7. (Optional) ใƒ‡ใƒขใ‚’ไฝœๆˆใ—ใฆไธ–็•Œใจๅ…ฑๆœ‰** ใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใฎๆœ€ใ‚‚้›ฃใ—ใ„้ƒจๅˆ†ใฎ1ใคใฏใ€็™บ่ฆ‹ใงใ™ใ€‚ใ‚ใชใŸใฎ็ด ๆ™ดใ‚‰ใ—ใ„TensorFlowใฎ่ฒข็ŒฎใŒๅญ˜ๅœจใ™ใ‚‹ใ“ใจใ‚’ไป–ใฎใƒฆใƒผใ‚ถใƒผใŒใฉใฎใ‚ˆใ†ใซ็Ÿฅใ‚‹ใ“ใจใŒใงใใ‚‹ใงใ—ใ‚‡ใ†ใ‹๏ผŸ้ฉๅˆ‡ใชใ‚ณใƒŸใƒฅใƒ‹ใ‚ฑใƒผใ‚ทใƒงใƒณใงใ™๏ผ ๐Ÿ“ฃ ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ไธป่ฆใชๆ–นๆณ•ใฏ2ใคใ‚ใ‚Šใพใ™ใ€‚ - ใƒ‡ใƒขใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใ“ใ‚ŒใซใฏGradioใƒ‡ใƒขใ€ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ€ใŠใ‚ˆใณใƒขใƒ‡ใƒซใ‚’็ดนไป‹ใ™ใ‚‹ใŸใ‚ใฎไป–ใฎๆฅฝใ—ใ„ๆ–นๆณ•ใŒๅซใพใ‚Œใพใ™ใ€‚[ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃ้ง†ๅ‹•ใฎใƒ‡ใƒข](https://huggingface.co/docs/transformers/community)ใซใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ - Twitterใ‚„LinkedInใชใฉใฎใ‚ฝใƒผใ‚ทใƒฃใƒซใƒกใƒ‡ใ‚ฃใ‚ขใงใ‚นใƒˆใƒผใƒชใƒผใ‚’ๅ…ฑๆœ‰ใ—ใพใ™ใ€‚ใ‚ใชใŸใฎไป•ไบ‹ใซ่ช‡ใ‚Šใ‚’ๆŒใกใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจใ‚ใชใŸใฎๆˆๆžœใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ในใใงใ™ - ใ‚ใชใŸใฎใƒขใƒ‡ใƒซใฏไปŠใ‚„ไธ–็•Œไธญใฎไฝ•ๅƒไบบใ‚‚ใฎใ‚จใƒณใ‚ธใƒ‹ใ‚ขใ‚„็ ”็ฉถ่€…ใซใ‚ˆใฃใฆไฝฟ็”จใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ ๐ŸŒ๏ผ็งใŸใกใฏใ‚ใชใŸใฎๆŠ•็จฟใ‚’ใƒชใƒ„ใ‚คใƒผใƒˆใ—ใฆๅ…ฑๅŒไฝ“ใจๅ…ฑๆœ‰ใ™ใ‚‹ใŠๆ‰‹ไผใ„ใ‚’ๅ–œใ‚“ใงใ—ใพใ™ใ€‚ ## Adding TensorFlow weights to ๐Ÿค— Hub TensorFlowใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒ๐Ÿค— Transformersใงๅˆฉ็”จๅฏ่ƒฝใชๅ ดๅˆใ€PyTorchใฎ้‡ใฟใ‚’TensorFlowใฎ้‡ใฟใซๅค‰ๆ›ใ™ใ‚‹ใ“ใจใฏ็ฐกๅ˜ใงใ™๏ผ ไปฅไธ‹ใŒใใฎๆ–นๆณ•ใงใ™๏ผš 1. ใ‚ฟใƒผใƒŸใƒŠใƒซใงHugging Faceใ‚ขใ‚ซใ‚ฆใƒณใƒˆใซใƒญใ‚ฐใ‚คใƒณใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใ‚ณใƒžใƒณใƒ‰`huggingface-cli login`ใ‚’ไฝฟ็”จใ—ใฆใƒญใ‚ฐใ‚คใƒณใงใใพใ™๏ผˆใ‚ขใ‚ฏใ‚ปใ‚นใƒˆใƒผใ‚ฏใƒณใฏ[ใ“ใกใ‚‰](https://huggingface.co/settings/tokens)ใง่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใพใ™๏ผ‰ใ€‚ 2. 
2. Run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the name of the model repository containing the PyTorch weights you want to convert.
3. Tag `@joaogante` and `@Rocketknight1` in the 🤗 Hub PR the command above has just created.

That's it! 🎉

## Debugging mismatches across ML frameworks 🐛

At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What's going on? 🤔

First of all, let's talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of them. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than `1e-5` at all stages of the model.

As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issue:

1. Locate the source of the mismatch. The model you're converting probably has near-identical inner variables up to a certain point. Place `breakpoint()` statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problem (a top-level comparison sketch is shown at the end of this section).
2. Now that you've pinpointed the source of the issue, get in touch with the 🤗 Transformers team. It is possible that we've seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues.
่งฃๆฑบ็ญ–ใŒ่ฆ‹ๅฝ“ใŸใ‚‰ใชใ„ๅ ดๅˆใ€ๅ•้กŒใ‚’ๆŽ˜ใ‚Šไธ‹ใ’ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚่‰ฏใ„ใƒ‹ใƒฅใƒผใ‚นใฏใ€ๅ•้กŒใฎๅŽŸๅ› ใ‚’็‰นๅฎšใ—ใŸใ“ใจใงใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ๅ•้กŒใฎใ‚ใ‚‹ๅ‘ฝไปคใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ€ใƒขใƒ‡ใƒซใฎๆฎ‹ใ‚Šใ‚’ๆŠฝ่ฑกๅŒ–ใงใใพใ™๏ผๆ‚ชใ„ใƒ‹ใƒฅใƒผใ‚นใฏใ€ใใฎๅ‘ฝไปคใฎใ‚ฝใƒผใ‚นๅฎŸ่ฃ…ใซ้€ฒใ‚€ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใงใ™ใ€‚ไธ€้ƒจใฎๅ ดๅˆใงใฏใ€ใƒชใƒ•ใ‚กใƒฌใƒณใ‚นๅฎŸ่ฃ…ใซๅ•้กŒใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ - ไธŠๆตใƒชใƒใ‚ธใƒˆใƒชใงๅ•้กŒใ‚’้–‹ใใฎใ‚’ๆŽงใˆใชใ„ใงใใ ใ•ใ„ใ€‚ ๐Ÿค— Transformersใƒใƒผใƒ ใจใฎ่ฉฑใ—ๅˆใ„ใงใ€ไธไธ€่‡ดใ‚’ไฟฎๆญฃใ™ใ‚‹ใ“ใจใŒๅ›ฐ้›ฃใงใ‚ใ‚‹ใ“ใจใŒๅˆคๆ˜Žใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ๅ‡บๅŠ›ใƒฌใ‚คใƒคใƒผใฎใƒขใƒ‡ใƒซใงไธไธ€่‡ดใŒ้žๅธธใซๅฐใ•ใ„ๅ ดๅˆ๏ผˆใŸใ ใ—ใ€้š ใ‚ŒใŸ็Šถๆ…‹ใงใฏๅคงใใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹๏ผ‰ใ€ใƒขใƒ‡ใƒซใ‚’้…ๅธƒใ™ใ‚‹ใŸใ‚ใซใใ‚Œใ‚’็„ก่ฆ–ใ™ใ‚‹ใ“ใจใซใ™ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ไธŠ่จ˜ใง่จ€ๅŠใ—ใŸ`pt-to-tf` CLIใซใฏใ€้‡ใฟๅค‰ๆ›ๆ™‚ใซใ‚จใƒฉใƒผใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’็„ก่ฆ–ใ™ใ‚‹ใŸใ‚ใฎ`--max-error`ใƒ•ใƒฉใ‚ฐใŒใ‚ใ‚Šใพใ™ใ€‚
# Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, enabling distributed CPU training efficiently.

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It currently only works on the Linux platform. Check out more detailed information at [torch-ccl](https://github.com/intel/torch-ccl).

### Intel® oneCCL Bindings for PyTorch installation:

Wheel files are available for the following Python versions:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0            |            | √          | √          | √          | √           |
| 1.11.0            |            | √          | √          | √          | √           |
| 1.10.0            | √          | √          | √          | √          |             |

```
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0. Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl). Versions of oneCCL and PyTorch must match.
<Tip warning={true}>

The oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0). PyTorch 1.12.1 should be used with oneccl_bindings_for_pytorch 1.12.100.

</Tip>

## Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.

for Intel® oneCCL >= 1.12.0

```
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

for Intel® oneCCL whose version < 1.12.0

```
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```

#### IPEX installation:

IPEX provides performance optimizations for CPU training with both Float32 and BFloat16; refer to the [single CPU section](./perf_train_cpu) for details.

The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.

## Usage in Trainer

To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** to the command arguments.

Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).

The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex
```
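For context on what `--ddp_backend ccl` does under the hood, the Trainer initializes `torch.distributed` with the `ccl` backend that the bindings register on import. A minimal standalone sketch of that handshake might look like the following; the `PMI_RANK`/`PMI_SIZE` variables are assumed here to be the ones set by the Intel® MPI launcher, and outside of `mpirun` the defaults below fall back to a single-process group:

```python
import os

import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 -- importing registers the "ccl" backend

# mpirun normally provides these; fall back to a single local process otherwise
rank = int(os.environ.get("PMI_RANK", 0))
world_size = int(os.environ.get("PMI_SIZE", 1))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)
print(f"initialized rank {dist.get_rank()} of {dist.get_world_size()}")
```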
ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใฏใ€2ใคใฎXeonใƒ—ใƒญใ‚ปใƒƒใ‚ต๏ผˆnode0ใจnode1ใ€node0ใ‚’ใƒกใ‚คใƒณใƒ—ใƒญใ‚ปใ‚นใจใ—ใฆไฝฟ็”จ๏ผ‰ใงๅˆ่จˆ4ใคใฎใƒ—ใƒญใ‚ปใ‚นใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚ppn๏ผˆใƒŽใƒผใƒ‰ใ”ใจใฎใƒ—ใƒญใ‚ปใ‚นๆ•ฐ๏ผ‰ใฏ2ใซ่จญๅฎšใ•ใ‚Œใ€1ใคใฎใ‚ฝใ‚ฑใƒƒใƒˆใ”ใจใซ1ใคใฎใƒ—ใƒญใ‚ปใ‚นใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ๆœ€้ฉใชใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅพ—ใ‚‹ใŸใ‚ใซใ€OMP_NUM_THREADS/CCL_WORKER_COUNTๅค‰ๆ•ฐใ‚’่ชฟๆ•ดใงใใพใ™ใ€‚ node0ใงใฏใ€ๅ„ใƒŽใƒผใƒ‰ใฎIPใ‚ขใƒ‰ใƒฌใ‚นใ‚’ๅซใ‚€ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใ—ใ€ใใฎๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใฎใƒ‘ใ‚นใ‚’ๅผ•ๆ•ฐใจใ—ใฆๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` ใƒŽใƒผใƒ‰0ใงๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใจใ€ใƒŽใƒผใƒ‰0ใจใƒŽใƒผใƒ‰1ใง**4DDP**ใŒBF16่‡ชๅ‹•ๆททๅˆ็ฒพๅบฆใงๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
# Glossary

This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation.

## A

### attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```

The encoded versions have different lengths:

```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this:

```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```

We can see that 0s have been added to make the first sentence the same length as the second one:

```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":

```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
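Building on the example above, the padding and the tensor conversion can also be done in one step by passing `return_tensors` (a small addition to the original example; the shapes shown are what `bert-base-cased` produces for these two sentences):

```python
>>> padded = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")
>>> padded["input_ids"].shape, padded["attention_mask"].shape
(torch.Size([2, 19]), torch.Size([2, 19]))
```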
### autoregressive models

[ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](#causal-language-modeling) ใŠใ‚ˆใณ [ใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซ](#decoder-models) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚

## B

### backbone

ใƒใƒƒใ‚ฏใƒœใƒผใƒณใฏใ€็”Ÿใฎ้š ใ‚ŒใŸ็Šถๆ…‹ใ‚„็‰นๅพดใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ๏ผˆๅŸ‹ใ‚่พผใฟใจๅฑค๏ผ‰ใงใ™ใ€‚้€šๅธธใ€ใใฎ็‰นๅพดใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ–ใฃใฆไบˆๆธฌใ‚’่กŒใ† [ใƒ˜ใƒƒใƒ‰](#head) ใซๆŽฅ็ถšใ•ใ‚Œใพใ™ใ€‚ใŸใจใˆใฐใ€[`ViTModel`] ใฏ็‰นๅฎšใฎใƒ˜ใƒƒใƒ‰ใ‚’ๆŒใŸใชใ„ใƒใƒƒใ‚ฏใƒœใƒผใƒณใงใ™ใ€‚[DPT](model_doc/dpt) ใชใฉใ€ไป–ใฎใƒขใƒ‡ใƒซใ‚‚ [`ViTModel`] ใ‚’ใƒใƒƒใ‚ฏใƒœใƒผใƒณใจใ—ใฆไฝฟ็”จใงใใพใ™ใ€‚

## C

### causal language modeling

ใƒขใƒ‡ใƒซใŒใƒ†ใ‚ญใ‚นใƒˆใ‚’้ †็•ชใซ่ชญใฟใ€ๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚้€šๅธธใ€ใƒขใƒ‡ใƒซใฏๆ–‡ๅ…จไฝ“ใ‚’่ชญใฟๅ–ใ‚Šใพใ™ใŒใ€็‰นๅฎšใฎใ‚ฟใ‚คใƒ ใ‚นใƒ†ใƒƒใƒ—ใงๆœชๆฅใฎใƒˆใƒผใ‚ฏใƒณใ‚’้š ใ™ใŸใ‚ใซใƒขใƒ‡ใƒซๅ†…ใงใƒžใ‚นใ‚ฏใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚

### channel

ใ‚ซใƒฉใƒผ็”ปๅƒใฏใ€่ตคใ€็ท‘ใ€้’๏ผˆRGB๏ผ‰ใฎ3ใคใฎใƒใƒฃใƒใƒซใฎๅ€คใฎ็ต„ใฟๅˆใ‚ใ›ใ‹ใ‚‰ๆˆใ‚Š็ซ‹ใฃใฆใŠใ‚Šใ€ใ‚ฐใƒฌใƒผใ‚นใ‚ฑใƒผใƒซ็”ปๅƒใฏ1ใคใฎใƒใƒฃใƒใƒซใ—ใ‹ๆŒใกใพใ›ใ‚“ใ€‚๐Ÿค— Transformers ใงใฏใ€ใƒใƒฃใƒใƒซใฏ็”ปๅƒใฎใƒ†ใƒณใ‚ฝใƒซใฎๆœ€ๅˆใพใŸใฏๆœ€ๅพŒใฎๆฌกๅ…ƒใซใชใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผš[`n_channels`, `height`, `width`] ใพใŸใฏ [`height`, `width`, `n_channels`]ใ€‚

### connectionist temporal classification (CTC)

ๅ…ฅๅŠ›ใจๅ‡บๅŠ›ใŒใฉใฎใ‚ˆใ†ใซๆ•ดๅˆ—ใ™ใ‚‹ใ‹ใ‚’ๆญฃ็ขบใซ็Ÿฅใ‚‰ใชใใฆใ‚‚ใƒขใƒ‡ใƒซใ‚’ๅญฆ็ฟ’ใ•ใ›ใ‚‹ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใงใ™ใ€‚CTC ใฏใ€็‰นๅฎšใฎๅ…ฅๅŠ›ใซๅฏพใ—ใฆใ™ในใฆใฎๅฏ่ƒฝใชๅ‡บๅŠ›ใฎๅˆ†ๅธƒใ‚’่จˆ็ฎ—ใ—ใ€ใใฎไธญใ‹ใ‚‰ๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ๅ‡บๅŠ›ใ‚’้ธๆŠžใ—ใพใ™ใ€‚CTC ใฏใ€ใ‚นใƒ”ใƒผใ‚ซใƒผใฎ็•ฐใชใ‚‹็™บ่ฉฑ้€Ÿๅบฆใชใฉใ€ใ•ใพใ–ใพใช็†็”ฑใง้ŸณๅฃฐใŒใƒˆใƒฉใƒณใ‚นใ‚ฏใƒชใƒ—ใƒˆใจๅฎŒๅ…จใซๆ•ดๅˆใ—ใชใ„ๅ ดๅˆใซใ€้Ÿณๅฃฐ่ช่ญ˜ใ‚ฟใ‚นใ‚ฏใงไธ€่ˆฌ็š„ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚

### convolution

ใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใฎไธ€็จฎใงใ€ๅ…ฅๅŠ›่กŒๅˆ—ใŒ่ฆ็ด ใ”ใจใซๅฐใ•ใช่กŒๅˆ—๏ผˆใ‚ซใƒผใƒใƒซใพใŸใฏใƒ•ใ‚ฃใƒซใ‚ฟใƒผ๏ผ‰ใจไน—็ฎ—ใ•ใ‚Œใ€ๅ€คใŒๆ–ฐใ—ใ„่กŒๅˆ—ใซๅˆ่จˆใ•ใ‚Œใ‚‹ใƒฌใ‚คใƒคใƒผใฎใ‚ฟใ‚คใƒ—ใ€‚ใ“ใ‚Œใฏๅ…ฅๅŠ›่กŒๅˆ—ๅ…จไฝ“ใซๅฏพใ—ใฆ็นฐใ‚Š่ฟ”ใ•ใ‚Œใ‚‹็•ณใฟ่พผใฟๆ“ไฝœใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใ€ๅ„ๆ“ไฝœใฏๅ…ฅๅŠ›่กŒๅˆ—ใฎ็•ฐใชใ‚‹ใ‚ปใ‚ฐใƒกใƒณใƒˆใซ้ฉ็”จใ•ใ‚Œใพใ™ใ€‚็•ณใฟ่พผใฟใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ๏ผˆCNN๏ผ‰ใฏใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใงไธ€่ˆฌ็š„ใซไฝฟ็”จใ•ใ‚Œใฆใ„ใพใ™ใ€‚

## D

### decoder input IDs

ใ“ใฎๅ…ฅๅŠ›ใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใซ็‰นๆœ‰ใงใ‚ใ‚Šใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใซไพ›็ตฆใ•ใ‚Œใ‚‹ๅ…ฅๅŠ›IDใ‚’ๅซใฟใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๅ…ฅๅŠ›ใฏใ€็ฟป่จณใ‚„่ฆ็ด„ใชใฉใฎ sequence-to-sequence ใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใ•ใ‚Œใ€้€šๅธธใ€ๅ„ใƒขใƒ‡ใƒซใซๅ›บๆœ‰ใฎๆ–นๆณ•ใงๆง‹็ฏ‰ใ•ใ‚Œใพใ™ใ€‚

ใปใจใ‚“ใฉใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซ๏ผˆBARTใ€T5๏ผ‰ใฏใ€`labels` ใ‹ใ‚‰็‹ฌ่‡ชใซ `decoder_input_ids` ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใ“ใฎใ‚ˆใ†ใชใƒขใƒ‡ใƒซใงใฏใ€`labels` ใ‚’ๆธกใ™ใ“ใจใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ‡ฆ็†ใ™ใ‚‹ๅ„ชใ‚ŒใŸๆ–นๆณ•ใงใ™ใ€‚

sequence-to-sequence ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใŠใ‘ใ‚‹ใ“ใ‚Œใ‚‰ใฎๅ…ฅๅŠ›IDใฎๅ‡ฆ็†ๆ–นๆณ•ใซใคใ„ใฆใฏใ€ๅ„ใƒขใƒ‡ใƒซใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚

### decoder models

ใ‚ชใƒผใƒˆใƒชใ‚ฐใƒฌใƒƒใ‚ทใƒงใƒณใƒขใƒ‡ใƒซใจใ‚‚ๅ‘ผใฐใ‚Œใ€ใƒขใƒ‡ใƒซใŒใƒ†ใ‚ญใ‚นใƒˆใ‚’้ †็•ชใซ่ชญใฟใ€ๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ฟใ‚นใ‚ฏ๏ผˆๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ๏ผ‰ใซ้–ขไธŽใ—ใพใ™ใ€‚้€šๅธธใ€ใƒขใƒ‡ใƒซใฏๆ–‡ๅ…จไฝ“ใ‚’่ชญใฟๅ–ใ‚Šใ€็‰นๅฎšใฎใ‚ฟใ‚คใƒ ใ‚นใƒ†ใƒƒใƒ—ใงๆœชๆฅใฎใƒˆใƒผใ‚ฏใƒณใ‚’้š ใ™ใƒžใ‚นใ‚ฏใ‚’ไฝฟ็”จใ—ใฆๅญฆ็ฟ’ใ•ใ‚Œใพใ™ใ€‚

<Youtube id="d_ixlCubqQw"/>

### deep learning (DL)

ใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚’ไฝฟ็”จใ™ใ‚‹ๆฉŸๆขฐๅญฆ็ฟ’ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใงใ€่ค‡ๆ•ฐใฎๅฑคใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚

## E

### encoder models

ใ‚ชใƒผใƒˆใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใƒขใƒ‡ใƒซใจใ—ใฆใ‚‚็Ÿฅใ‚‰ใ‚ŒใฆใŠใ‚Šใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใฏๅ…ฅๅŠ›๏ผˆใƒ†ใ‚ญใ‚นใƒˆใ‚„็”ปๅƒใชใฉ๏ผ‰ใ‚’ใ€ๅŸ‹ใ‚่พผใฟใจๅ‘ผใฐใ‚Œใ‚‹็ฐก็•ฅๅŒ–ใ•ใ‚ŒใŸๆ•ฐๅ€ค่กจ็พใซๅค‰ๆ›ใ—ใพใ™ใ€‚ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใฏใ€ใ—ใฐใ—ใฐ[ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](#masked-language-modeling-mlm)ใชใฉใฎๆŠ€่ก“ใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๆŠ€่ก“ใงใฏๅ…ฅๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎไธ€้ƒจใ‚’ใƒžใ‚นใ‚ฏใ™ใ‚‹ใ“ใจใงใ€ใƒขใƒ‡ใƒซใฏใ‚ˆใ‚Šๆ„ๅ‘ณใฎใ‚ใ‚‹่กจ็พใ‚’ไฝœใ‚‹ใ“ใจใ‚’ๅผทใ„ใ‚‰ใ‚Œใพใ™ใ€‚

<Youtube id="H39Z_720T5s"/>

## F

### feature extraction

็”Ÿใƒ‡ใƒผใ‚ฟใ‚’ใ€ใ‚ˆใ‚Šๆƒ…ๅ ฑ่ฑŠใ‹ใงๆฉŸๆขฐๅญฆ็ฟ’ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใซใจใฃใฆๆœ‰็”จใช็‰นๅพดใฎใ‚ปใƒƒใƒˆใซ้ธๆŠžใŠใ‚ˆใณๅค‰ๆ›ใ™ใ‚‹ใƒ—ใƒญใ‚ปใ‚นใ€‚็‰นๅพดๆŠฝๅ‡บใฎไพ‹ใซใฏใ€็”Ÿใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ˜่ชžๅŸ‹ใ‚่พผใฟใซๅค‰ๆ›ใ—ใŸใ‚Šใ€็”ปๅƒ/ใƒ“ใƒ‡ใ‚ชใƒ‡ใƒผใ‚ฟใ‹ใ‚‰ใ‚จใƒƒใ‚ธใ‚„ๅฝข็Šถใชใฉใฎ้‡่ฆใช็‰นๅพดใ‚’ๆŠฝๅ‡บใ—ใŸใ‚Šใ™ใ‚‹ใ“ใจใŒๅซใพใ‚Œใพใ™ใ€‚

### feed forward chunking

ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผๅ†…ใฎๅ„ๆฎ‹ๅทฎๆณจๆ„ใƒ–ใƒญใƒƒใ‚ฏใงใฏใ€้€šๅธธใ€่‡ชๅทฑๆณจๆ„ๅฑคใฎๅพŒใซ2ใคใฎใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ๅฑคใŒ็ถšใใพใ™ใ€‚
ใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ๅฑคใฎไธญ้–“ๅŸ‹ใ‚่พผใฟใ‚ตใ‚คใ‚บใฏใ€ใƒขใƒ‡ใƒซใฎ้š ใ‚ŒใŸใ‚ตใ‚คใ‚บใ‚ˆใ‚Šใ‚‚ๅคงใใ„ใ“ใจใŒใ‚ˆใใ‚ใ‚Šใพใ™๏ผˆใŸใจใˆใฐใ€`bert-base-uncased`ใฎๅ ดๅˆ๏ผ‰ใ€‚

ๅ…ฅๅŠ›ใ‚ตใ‚คใ‚บใŒ `[batch_sizeใ€sequence_length]` ใฎๅ ดๅˆใ€ไธญ้–“ใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ๅŸ‹ใ‚่พผใฟ `[batch_sizeใ€sequence_lengthใ€config.intermediate_size]` ใ‚’ไฟๅญ˜ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใƒกใƒขใƒชใฏใ€ใƒกใƒขใƒชใฎๅคง้ƒจๅˆ†ใ‚’ๅ ใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚[Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451)ใฎ่‘—่€…ใฏใ€่จˆ็ฎ—ใŒ `sequence_length` ๆฌกๅ…ƒใซไพๅญ˜ใ—ใชใ„ใŸใ‚ใ€ไธกๆ–นใฎใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ๅฑคใฎๅ‡บๅŠ›ๅŸ‹ใ‚่พผใฟ `[batch_sizeใ€config.hidden_size]_0ใ€...ใ€[batch_sizeใ€config.hidden_size]_n` ใ‚’ๅ€‹ๅˆฅใซ่จˆ็ฎ—ใ—ใ€ๅพŒใง `[batch_sizeใ€sequence_lengthใ€config.hidden_size]` ใซ้€ฃ็ตใ™ใ‚‹ใ“ใจใŒๆ•ฐๅญฆ็š„ใซ็ญ‰ไพกใงใ‚ใ‚‹ใ“ใจใซๆฐ—ไป˜ใใพใ—ใŸใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่จˆ็ฎ—ๆ™‚้–“ใฎๅข—ๅŠ ใจใƒกใƒขใƒชไฝฟ็”จ้‡ใฎๅ‰Šๆธ›ใจใ„ใ†ใƒˆใƒฌใƒผใƒ‰ใ‚ชใƒ•ใŒ็”Ÿใ˜ใพใ™ใŒใ€ๆ•ฐๅญฆ็š„ใซใฏ็ญ‰ไพกใช็ตๆžœใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚

[`apply_chunking_to_forward`] ้–ขๆ•ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€`chunk_size` ใฏไธฆๅˆ—ใซ่จˆ็ฎ—ใ•ใ‚Œใ‚‹ๅ‡บๅŠ›ๅŸ‹ใ‚่พผใฟใฎๆ•ฐใ‚’ๅฎš็พฉใ—ใ€ใƒกใƒขใƒชใจ่จˆ็ฎ—ๆ™‚้–“ใฎใƒˆใƒฌใƒผใƒ‰ใ‚ชใƒ•ใ‚’ๆฑบใ‚ใพใ™ใ€‚`chunk_size` ใŒ 0 ใซ่จญๅฎšใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใฎใƒใƒฃใƒณใ‚ญใƒณใ‚ฐใฏ่กŒใ‚ใ‚Œใพใ›ใ‚“ใ€‚

### finetuned models

ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใฏใ€ไบ‹ๅ‰ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅ–ใ‚Šใ€ใใฎ้‡ใฟใ‚’ๅ›บๅฎšใ—ใ€ๆ–ฐใ—ใ่ฟฝๅŠ ใ•ใ‚ŒใŸ[model head](#head)ใงๅ‡บๅŠ›ใƒฌใ‚คใƒคใƒผใ‚’็ฝฎใๆ›ใˆใ‚‹ๅฝขๅผใฎ่ปข็งปๅญฆ็ฟ’ใงใ™ใ€‚ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใฏๅฏพ่ฑกใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚

่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใ‚’ๅ‚็…งใ—ใฆใ€๐Ÿค— Transformersใ‚’ไฝฟ็”จใ—ใŸใƒขใƒ‡ใƒซใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใ‚’ๅญฆใณใพใ—ใ‚‡ใ†ใ€‚
Transformersใ‚’ไฝฟ็”จใ—ใŸใƒขใƒ‡ใƒซใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใ‚’ๅญฆใณใพใ—ใ‚‡ใ†ใ€‚ ## H ### head ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใฏใ€ใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใฎๆœ€ๅพŒใฎใƒฌใ‚คใƒคใƒผใ‚’ๆŒ‡ใ—ใ€็”Ÿใฎ้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใฆ็•ฐใชใ‚‹ๆฌกๅ…ƒใซๅฐ„ๅฝฑใ—ใพใ™ใ€‚ๅ„ใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆ็•ฐใชใ‚‹ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐ๏ผš * [`GPT2ForSequenceClassification`] ใฏใ€ใƒ™ใƒผใ‚นใฎ[`GPT2Model`]ใฎไธŠใซใ‚ใ‚‹ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰๏ผˆ็ทšๅฝขๅฑค๏ผ‰ใงใ™ใ€‚ * [`ViTForImageClassification`] ใฏใ€ใƒ™ใƒผใ‚นใฎ[`ViTModel`]ใฎ`CLS`ใƒˆใƒผใ‚ฏใƒณใฎๆœ€็ต‚้š ใ‚ŒใŸ็Šถๆ…‹ใฎไธŠใซใ‚ใ‚‹็”ปๅƒๅˆ†้กžใƒ˜ใƒƒใƒ‰๏ผˆ็ทšๅฝขๅฑค๏ผ‰ใงใ™ใ€‚ * [`Wav2Vec2ForCTC`] ใฏใ€[CTC](#connectionist-temporal-classification-(CTC))ใ‚’ๆŒใคใƒ™ใƒผใ‚นใฎ[`Wav2Vec2Model`]ใฎ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ˜ใƒƒใƒ‰ใงใ™ใ€‚ ## I ### image patch ใƒ“ใ‚ธใƒงใƒณใƒ™ใƒผใ‚นใฎใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใฏใ€็”ปๅƒใ‚’ใ‚ˆใ‚Šๅฐใ•ใชใƒ‘ใƒƒใƒใซๅˆ†ๅ‰ฒใ—ใ€ใใ‚Œใ‚‰ใ‚’็ทšๅฝขใซๅŸ‹ใ‚่พผใฟใ€ใƒขใƒ‡ใƒซใซใ‚ทใƒผใ‚ฑใƒณใ‚นใจใ—ใฆๆธกใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎ ### inference ๆŽจ่ซ–ใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒๅฎŒไบ†ใ—ใŸๅพŒใซๆ–ฐใ—ใ„ใƒ‡ใƒผใ‚ฟใงใƒขใƒ‡ใƒซใ‚’่ฉ•ไพกใ™ใ‚‹ใƒ—ใƒญใ‚ปใ‚นใงใ™ใ€‚ ๐Ÿค— Transformers ใ‚’ไฝฟ็”จใ—ใฆๆŽจ่ซ–ใ‚’ๅฎŸ่กŒใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฏใ€[ๆŽจ่ซ–ใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ](https://huggingface.co/docs/transformers/pipeline_tutorial) ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ### input IDs ๅ…ฅๅŠ›IDใฏใ€ใƒขใƒ‡ใƒซใธใฎๅ…ฅๅŠ›ใจใ—ใฆๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚‹ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใฎไธญใงๆœ€ใ‚‚ไธ€่ˆฌ็š„ใชใ‚‚ใฎใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏใƒˆใƒผใ‚ฏใƒณใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใงใ‚ใ‚Šใ€ใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆๅ…ฅๅŠ›ใจใ—ใฆไฝฟ็”จใ•ใ‚Œใ‚‹ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใƒˆใƒผใ‚ฏใƒณใฎๆ•ฐๅ€ค่กจ็พใงใ™ใ€‚ <Youtube id="VFp38yj8h3A"/> ๅ„ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏ็•ฐใชใ‚‹ๆ–นๆณ•ใงๅ‹•ไฝœใ—ใพใ™ใŒใ€ๅŸบๆœฌ็š„ใชใƒกใ‚ซใƒ‹ใ‚บใƒ ใฏๅŒใ˜ใงใ™ใ€‚ไปฅไธ‹ใฏBERTใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ—ใŸไพ‹ใงใ™ใ€‚BERTใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏ[WordPiece](https://arxiv.org/pdf/1609.08144.pdf)ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ™ใ€‚ ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> sequence = "A Titan RTX has 24GB of VRAM" ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ่ชžๅฝ™ใงไฝฟ็”จๅฏ่ƒฝใชใƒˆใƒผใ‚ฏใƒณใซๅˆ†ๅ‰ฒใ—ใพใ™ใ€‚ ```python >>> tokenized_sequence = tokenizer.tokenize(sequence) ``` ใƒˆใƒผใ‚ฏใƒณใฏๅ˜่ชžใพใŸใฏใ‚ตใƒ–ใƒฏใƒผใƒ‰ใงใ™ใ€‚ ใŸใจใˆใฐใ€ใ“ใ“ใงใฏ "VRAM" ใฏใƒขใƒ‡ใƒซใฎ่ชžๅฝ™ใซๅซใพใ‚Œใฆใ„ใชใ‹ใฃใŸใŸใ‚ใ€"V"ใ€"RA"ใ€"M" ใซๅˆ†ๅ‰ฒใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒˆใƒผใ‚ฏใƒณใŒๅˆฅใ€…ใฎๅ˜่ชžใงใฏใชใใ€ๅŒใ˜ๅ˜่ชžใฎไธ€้ƒจใงใ‚ใ‚‹ใ“ใจใ‚’็คบใ™ใŸใ‚ใซใ€"RA" ใจ "M" ใซใฏใƒ€ใƒ–ใƒซใƒใƒƒใ‚ทใƒฅใฎใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚นใŒ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ ```python >>> print(tokenized_sequence) ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M'] ``` ใ“ใ‚Œใ‚‰ใฎใƒˆใƒผใ‚ฏใƒณใฏใ€ใƒขใƒ‡ใƒซใŒ็†่งฃใงใใ‚‹ใ‚ˆใ†ใซIDใซๅค‰ๆ›ใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ€ๆ–‡ใ‚’ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใซ็›ดๆŽฅไพ›็ตฆใ—ใฆ่กŒใ†ใ“ใจใŒใงใใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅ‘ไธŠใฎใŸใ‚ใซ[๐Ÿค— Tokenizers](https://github.com/huggingface/tokenizers)ใฎRustๅฎŸ่ฃ…ใ‚’ๆดป็”จใ—ใฆใ„ใพใ™ใ€‚ ```python >>> inputs = tokenizer(sequence) ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏใ€ๅฏพๅฟœใ™ใ‚‹ใƒขใƒ‡ใƒซใŒๆญฃใ—ใๅ‹•ไฝœใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใ™ในใฆใฎๅผ•ๆ•ฐใ‚’ๅซใ‚€่พžๆ›ธใ‚’่ฟ”ใ—ใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒณใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใฏใ€ใ‚ญใƒผ `input_ids` ใฎไธ‹ใซใ‚ใ‚Šใพใ™ใ€‚ ```python 
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```

ๆณจๆ„๏ผšใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€้–ข้€ฃใ™ใ‚‹ใƒขใƒ‡ใƒซใŒๅฟ…่ฆใจใ™ใ‚‹ๅ ดๅˆใซใ€ใ€Œ็‰นๅˆฅใชใƒˆใƒผใ‚ฏใƒณใ€ใ‚’่‡ชๅ‹•็š„ใซ่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏใ€ใƒขใƒ‡ใƒซใŒไฝฟ็”จใ™ใ‚‹็‰นๅˆฅใชIDใงใ™ใ€‚

ๅ‰ใฎIDใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ใƒ‡ใ‚ณใƒผใƒ‰ใ™ใ‚‹ใจใ€

```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```

ๆฌกใฎใ‚ˆใ†ใซ่กจ็คบใ•ใ‚Œใพใ™๏ผš

```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```

ใ“ใ‚ŒใŒใ€[`BertModel`]ใŒๆœŸๅพ…ใ™ใ‚‹ๅ…ฅๅŠ›ใฎๅฝขๅผใงใ™ใ€‚

## L

### Labels

ใƒฉใƒ™ใƒซใฏใ€ใƒขใƒ‡ใƒซใŒๆๅคฑใ‚’่จˆ็ฎ—ใ™ใ‚‹ใŸใ‚ใซๆธกใ™ใ“ใจใŒใงใใ‚‹ใ‚ชใƒ—ใ‚ทใƒงใƒณใฎๅผ•ๆ•ฐใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒฉใƒ™ใƒซใฏใ€ใƒขใƒ‡ใƒซใฎไบˆๆธฌใฎๆœŸๅพ…ๅ€คใงใ‚ใ‚‹ในใใงใ™ใ€‚ใƒขใƒ‡ใƒซใฏใ€ๆจ™ๆบ–็š„ใชๆๅคฑ้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€ใใฎไบˆๆธฌใจๆœŸๅพ…ๅ€ค๏ผˆใƒฉใƒ™ใƒซ๏ผ‰ใจใฎ้–“ใฎๆๅคฑใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚

ใ“ใ‚Œใ‚‰ใฎใƒฉใƒ™ใƒซใฏใƒขใƒ‡ใƒซใฎใƒ˜ใƒƒใƒ‰ใซๅฟœใ˜ใฆ็•ฐใชใ‚Šใพใ™ใ€‚ใŸใจใˆใฐ๏ผš

- ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒขใƒ‡ใƒซ๏ผˆ[`BertForSequenceClassification`]๏ผ‰ใฎๅ ดๅˆใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ใƒใƒƒใƒๅ†…ใฎๅ„ๅ€คใŒใ‚ทใƒผใ‚ฑใƒณใ‚นๅ…จไฝ“ใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚
- ใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใƒขใƒ‡ใƒซ๏ผˆ[`BertForTokenClassification`]๏ผ‰ใฎๅ ดๅˆใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size, seq_length)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ๅ„ๅ€คใŒๅ„ๅ€‹ใ€…ใฎใƒˆใƒผใ‚ฏใƒณใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚
- ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎๅ ดๅˆ๏ผˆ[`BertForMaskedLM`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size, seq_length)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ๅ„ๅ€คใŒๅ„ๅ€‹ใ€…ใฎใƒˆใƒผใ‚ฏใƒณใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚ใ“ใ“ใงใฎใƒฉใƒ™ใƒซใฏใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใฎใƒˆใƒผใ‚ฏใƒณIDใงใ‚ใ‚Šใ€ไป–ใฎใƒˆใƒผใ‚ฏใƒณใซใฏ้€šๅธธ -100 ใชใฉใฎๅ€คใŒ่จญๅฎšใ•ใ‚Œใพใ™ใ€‚
- ใ‚ทใƒผใ‚ฑใƒณใ‚น้–“ใฎใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆ๏ผˆ[`BartForConditionalGeneration`]ใ€[`MBartForConditionalGeneration`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size, tgt_seq_length)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ๅ„ๅ€คใŒๅ„ๅ…ฅๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚ŒใŸใ‚ฟใƒผใ‚ฒใƒƒใƒˆใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅฏพๅฟœใ—ใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใ€BARTใจT5ใฎไธกๆ–นใฏ้ฉๅˆ‡ใช `decoder_input_ids` ใจใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒžใ‚นใ‚ฏใ‚’ๅ†…้ƒจใง็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใ€้€šๅธธใ€ใ“ใ‚Œใ‚‰ใ‚’ๆไพ›ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใ‚’ๅˆฉ็”จใ—ใชใ„ใƒขใƒ‡ใƒซใซใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“ใ€‚
- ็”ปๅƒๅˆ†้กžใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผˆ[`ViTForImageClassification`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ใƒใƒƒใƒๅ†…ใฎๅ„ๅ€คใŒๅ„ๅ€‹ใ€…ใฎ็”ปๅƒใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚
- ใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผˆ[`SegformerForSemanticSegmentation`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size, height, width)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ใƒใƒƒใƒๅ†…ใฎๅ„ๅ€คใŒๅ„ๅ€‹ใ€…ใฎใƒ”ใ‚ฏใ‚ปใƒซใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚
- ็‰ฉไฝ“ๆคœๅ‡บใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผˆ[`DetrForObjectDetection`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๅ„ๅ€‹ใ€…ใฎ็”ปๅƒใฎไบˆๆธฌใƒฉใƒ™ใƒซใจๅขƒ็•Œใƒœใƒƒใ‚ฏใ‚นใฎๆ•ฐใซๅฏพๅฟœใ™ใ‚‹ `class_labels` ใจ `boxes` ใ‚ญใƒผใ‚’ๆŒใค่พžๆ›ธใฎใƒชใ‚นใƒˆใ‚’ๆœŸๅพ…ใ—ใพใ™ใ€‚
- ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผˆ[`Wav2Vec2ForCTC`]๏ผ‰ใ€ใƒขใƒ‡ใƒซใฏๆฌกๅ…ƒใŒ `(batch_size, target_length)` ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๆœŸๅพ…ใ—ใ€ๅ„ๅ€คใŒๅ„ๅ€‹ใ€…ใฎใƒˆใƒผใ‚ฏใƒณใฎไบˆๆธฌใƒฉใƒ™ใƒซใซๅฏพๅฟœใ—ใพใ™ใ€‚

<Tip>

ๅ„ใƒขใƒ‡ใƒซใฎใƒฉใƒ™ใƒซใฏ็•ฐใชใ‚‹ๅ ดๅˆใŒใ‚ใ‚‹ใŸใ‚ใ€ๅธธใซๅ„ใƒขใƒ‡ใƒซใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ๅ‚็…งใ—ใฆใ€ใใ‚Œใžใ‚Œใฎใƒฉใƒ™ใƒซใซ้–ขใ™ใ‚‹่ฉณ็ดฐๆƒ…ๅ ฑใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผ

</Tip>

ใƒ™ใƒผใ‚นใƒขใƒ‡ใƒซ๏ผˆ[`BertModel`]๏ผ‰ใฏใƒฉใƒ™ใƒซใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚Œใ‚‰ใฏใƒ™ใƒผใ‚นใฎใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใงใ‚ใ‚Šใ€ๅ˜ใซ็‰นๅพดใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚

### large language models (LLM)

ๅคง้‡ใฎใƒ‡ใƒผใ‚ฟใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผ่จ€่ชžใƒขใƒ‡ใƒซ๏ผˆGPT-3ใ€BLOOMใ€OPT๏ผ‰ใ‚’ๆŒ‡ใ™ไธ€่ˆฌ็š„ใช็”จ่ชžใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏ้€šๅธธใ€ๅคšใใฎๅญฆ็ฟ’ๅฏ่ƒฝใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒใฃใฆใ„ใพใ™๏ผˆใŸใจใˆใฐใ€GPT-3ใฎๅ ดๅˆใ€1750ๅ„„ๅ€‹๏ผ‰ใ€‚

## M

### masked language modeling (MLM)

ใƒขใƒ‡ใƒซใŒใƒ†ใ‚ญใ‚นใƒˆใฎ็ ดๆใ—ใŸใƒใƒผใ‚ธใƒงใƒณใ‚’่ฆ‹ใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚้€šๅธธใฏใƒฉใƒณใƒ€ใƒ ใซไธ€้ƒจใฎใƒˆใƒผใ‚ฏใƒณใŒใƒžใ‚นใ‚ฏใ•ใ‚ŒใฆใŠใ‚Šใ€ใƒขใƒ‡ใƒซใฏๅ…ƒใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ไบˆๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

### multimodal

ใƒ†ใ‚ญใ‚นใƒˆใจๅˆฅใฎ็จฎ้กžใฎๅ…ฅๅŠ›๏ผˆใŸใจใˆใฐ็”ปๅƒ๏ผ‰ใ‚’็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚

## N

### Natural language generation (NLG)

ใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ใ“ใจใซ้–ข้€ฃใ™ใ‚‹ใ™ในใฆใฎใ‚ฟใ‚นใ‚ฏ๏ผˆใŸใจใˆใฐใ€[Transformersใงๆ›ธใ](https://transformer.huggingface.co/)ใ€็ฟป่จณใชใฉ๏ผ‰ใ€‚

### Natural language processing (NLP)

ใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆ‰ฑใ†ๆ–นๆณ•ใ‚’ไธ€่ˆฌ็š„ใซ่กจ็พใ—ใŸใ‚‚ใฎใงใ™ใ€‚

### Natural language understanding (NLU)

ใƒ†ใ‚ญใ‚นใƒˆๅ†…ใซไฝ•ใŒใ‚ใ‚‹ใ‹ใ‚’็†่งฃใ™ใ‚‹ใ“ใจใซ้–ข้€ฃใ™ใ‚‹ใ™ในใฆใฎใ‚ฟใ‚นใ‚ฏ๏ผˆใŸใจใˆใฐใ€ใƒ†ใ‚ญใ‚นใƒˆๅ…จไฝ“ใฎๅˆ†้กžใ€ๅ€‹ใ€…ใฎๅ˜่ชžใฎๅˆ†้กžใชใฉ๏ผ‰ใ€‚

## P

### pipeline

๐Ÿค— Transformersใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใ€ใƒ‡ใƒผใ‚ฟใฎๅ‰ๅ‡ฆ็†ใจๅค‰ๆ›ใ‚’็‰นๅฎšใฎ้ †ๅบใงๅฎŸ่กŒใ—ใฆใƒ‡ใƒผใ‚ฟใ‚’ๅ‡ฆ็†ใ—ใ€ใƒขใƒ‡ใƒซใ‹ใ‚‰ไบˆๆธฌใ‚’่ฟ”ใ™ไธ€้€ฃใฎใ‚นใƒ†ใƒƒใƒ—ใ‚’ๆŒ‡ใ™ๆŠฝ่ฑกๅŒ–ใงใ™ใ€‚ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซ่ฆ‹ใ‚‰ใ‚Œใ‚‹ใ„ใใคใ‹ใฎใ‚นใƒ†ใƒผใ‚ธใฎไพ‹ใซใฏใ€ใƒ‡ใƒผใ‚ฟใฎๅ‰ๅ‡ฆ็†ใ€็‰นๅพดๆŠฝๅ‡บใ€ๆญฃ่ฆๅŒ–ใชใฉใŒใ‚ใ‚Šใพใ™ใ€‚

่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ๆŽจ่ซ–ใฎใŸใ‚ใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ](https://huggingface.co/docs/transformers/pipeline_tutorial)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚

### pixel values

ใƒขใƒ‡ใƒซใซๆธกใ•ใ‚Œใ‚‹็”ปๅƒใฎๆ•ฐๅ€ค่กจ็พใฎใƒ†ใƒณใ‚ฝใƒซใงใ™ใ€‚ใƒ”ใ‚ฏใ‚ปใƒซๅ€คใฏใ€ๅฝข็ŠถใŒ [`ใƒใƒƒใƒใ‚ตใ‚คใ‚บ`, `ใƒใƒฃใƒใƒซๆ•ฐ`, `้ซ˜ใ•`, `ๅน…`] ใฎ่กŒๅˆ—ใงใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‹ใ‚‰็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚

### pooling

่กŒๅˆ—ใ‚’ๅฐใ•ใช่กŒๅˆ—ใซ็ธฎๅฐใ™ใ‚‹ๆ“ไฝœใงใ€ใƒ—ใƒผใƒซๅฏพ่ฑกใฎๆฌกๅ…ƒใฎๆœ€ๅคงๅ€คใพใŸใฏๅนณๅ‡ๅ€คใ‚’ๅ–ใ‚‹ใ“ใจใŒไธ€่ˆฌ็š„ใงใ™ใ€‚ใƒ—ใƒผใƒชใƒณใ‚ฐใƒฌใ‚คใƒคใƒผใฏไธ€่ˆฌ็š„ใซ็•ณใฟ่พผใฟใƒฌใ‚คใƒคใƒผใฎ้–“ใซ่ฆ‹ใ‚‰ใ‚Œใ€็‰นๅพด่กจ็พใ‚’ใƒ€ใ‚ฆใƒณใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚

### position IDs

ใƒˆใƒผใ‚ฏใƒณใ”ใจใฎไฝ็ฝฎใŒๅŸ‹ใ‚่พผใพใ‚Œใฆใ„ใ‚‹RNNใจใฏ็•ฐใชใ‚Šใ€ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใฏๅ„ใƒˆใƒผใ‚ฏใƒณใฎไฝ็ฝฎใ‚’ๆŠŠๆกใ—ใฆใ„ใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒขใƒ‡ใƒซใฏใƒˆใƒผใ‚ฏใƒณใฎไฝ็ฝฎใ‚’่ญ˜ๅˆฅใ™ใ‚‹ใŸใ‚ใซไฝ็ฝฎID๏ผˆ`position_ids`๏ผ‰ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚

ใ“ใ‚Œใฏใ‚ชใƒ—ใ‚ทใƒงใƒณใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใงใ™ใ€‚ใƒขใƒ‡ใƒซใซ `position_ids` ใŒๆธกใ•ใ‚Œใชใ„ๅ ดๅˆใ€IDใฏ่‡ชๅ‹•็š„ใซ็ตถๅฏพ็š„ใชไฝ็ฝฎๅŸ‹ใ‚่พผใฟใจใ—ใฆไฝœๆˆใ•ใ‚Œใพใ™ใ€‚

็ตถๅฏพ็š„ใชไฝ็ฝฎๅŸ‹ใ‚่พผใฟใฏ็ฏ„ๅ›ฒ `[0ใ€config.max_position_embeddings - 1]` ใ‹ใ‚‰้ธๆŠžใ•ใ‚Œใพใ™ใ€‚ไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏใ€ๆญฃๅผฆๆณขไฝ็ฝฎๅŸ‹ใ‚่พผใฟใ‚„็›ธๅฏพไฝ็ฝฎๅŸ‹ใ‚่พผใฟใชใฉใ€ไป–ใฎใ‚ฟใ‚คใƒ—ใฎไฝ็ฝฎๅŸ‹ใ‚่พผใฟใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚

### preprocessing
็”Ÿใƒ‡ใƒผใ‚ฟใ‚’ๆฉŸๆขฐๅญฆ็ฟ’ใƒขใƒ‡ใƒซใง็ฐกๅ˜ใซๅ‡ฆ็†ใงใใ‚‹ๅฝขๅผใซๆบ–ๅ‚™ใ™ใ‚‹ใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚ไพ‹ใˆใฐใ€ใƒ†ใ‚ญใ‚นใƒˆใฏ้€šๅธธใ€ใƒˆใƒผใ‚ฏใƒณๅŒ–ใซใ‚ˆใฃใฆๅ‰ๅ‡ฆ็†ใ•ใ‚Œใพใ™ใ€‚ไป–ใฎๅ…ฅๅŠ›ใ‚ฟใ‚คใƒ—ใซๅฏพใ™ใ‚‹ๅ‰ๅ‡ฆ็†ใฎๅ…ทไฝ“็š„ใชๆ–นๆณ•ใ‚’็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€[Preprocess](https://huggingface.co/docs/transformers/preprocessing) ใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ### pretrained model ใ‚ใ‚‹ใƒ‡ใƒผใ‚ฟ๏ผˆใŸใจใˆใฐใ€Wikipediaๅ…จไฝ“ใชใฉ๏ผ‰ใงไบ‹ๅ‰ใซๅญฆ็ฟ’ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใงใ™ใ€‚ไบ‹ๅ‰ๅญฆ็ฟ’ใฎๆ–นๆณ•ใซใฏใ€่‡ชๅทฑๆ•™ๅธซใ‚ใ‚Šใฎ็›ฎ็š„ใŒๅซใพใ‚Œใ€ใƒ†ใ‚ญใ‚นใƒˆใ‚’่ชญใฟๅ–ใ‚Šใ€ๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ—ใ‚ˆใ†ใจใ™ใ‚‹ใ‚‚ใฎ๏ผˆ[ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](#ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ)ใ‚’ๅ‚็…ง๏ผ‰ใ‚„ใ€ไธ€้ƒจใฎๅ˜่ชžใ‚’ใƒžใ‚นใ‚ฏใ—ใ€ใใ‚Œใ‚‰ใ‚’ไบˆๆธฌใ—ใ‚ˆใ†ใจใ™ใ‚‹ใ‚‚ใฎ๏ผˆ[ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](#ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ-mlm)ใ‚’ๅ‚็…ง๏ผ‰ใŒใ‚ใ‚Šใพใ™ใ€‚ ้Ÿณๅฃฐใจใƒ“ใ‚ธใƒงใƒณใƒขใƒ‡ใƒซใซใฏ็‹ฌ่‡ชใฎไบ‹ๅ‰ๅญฆ็ฟ’ใฎ็›ฎ็š„ใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€Wav2Vec2ใฏ้Ÿณๅฃฐใƒขใƒ‡ใƒซใงใ€ใƒขใƒ‡ใƒซใซๅฏพใ—ใฆใ€Œ็œŸใฎใ€้Ÿณๅฃฐ่กจ็พใ‚’ๅฝใฎ้Ÿณๅฃฐ่กจ็พใฎใ‚ปใƒƒใƒˆใ‹ใ‚‰่ญ˜ๅˆฅใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅฏพๆฏ”็š„ใชใ‚ฟใ‚นใ‚ฏใงไบ‹ๅ‰ๅญฆ็ฟ’ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ไธ€ๆ–นใ€BEiTใฏใƒ“ใ‚ธใƒงใƒณใƒขใƒ‡ใƒซใงใ€ไธ€้ƒจใฎ็”ปๅƒใƒ‘ใƒƒใƒใ‚’ใƒžใ‚นใ‚ฏใ—ใ€ใƒขใƒ‡ใƒซใซใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸใƒ‘ใƒƒใƒใ‚’ไบˆๆธฌใ•ใ›ใ‚‹ใ‚ฟใ‚นใ‚ฏ๏ผˆใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎ็›ฎ็š„ใจไผผใฆใ„ใพใ™๏ผ‰ใงไบ‹ๅ‰ๅญฆ็ฟ’ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ## R ### recurrent neural network (RNN) ใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซๅฑคใ‚’ใƒซใƒผใƒ—ใ•ใ›ใ‚‹ใƒขใƒ‡ใƒซใฎไธ€็จฎใงใ™ใ€‚ ### representation learning ็”Ÿใƒ‡ใƒผใ‚ฟใฎๆ„ๅ‘ณใฎใ‚ใ‚‹่กจ็พใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ๆฉŸๆขฐๅญฆ็ฟ’ใฎใ‚ตใƒ–ใƒ•ใ‚ฃใƒผใƒซใƒ‰ใงใ™ใ€‚่กจ็พๅญฆ็ฟ’ใฎๆŠ€่ก“ใฎไธ€้ƒจใซใฏๅ˜่ชžๅŸ‹ใ‚่พผใฟใ€ใ‚ชใƒผใƒˆใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ€Generative Adversarial Networks๏ผˆGANs๏ผ‰ใชใฉใŒใ‚ใ‚Šใพใ™ใ€‚ ## S ### sampling rate ็ง’ใ”ใจใซๅ–ใ‚‰ใ‚Œใ‚‹ใ‚ตใƒณใƒ—ใƒซ๏ผˆใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชไฟกๅทใชใฉ๏ผ‰ใฎๆ•ฐใ‚’ใƒ˜ใƒซใƒ„ๅ˜ไฝใงๆธฌๅฎšใ—ใŸใ‚‚ใฎใงใ™ใ€‚ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใƒฌใƒผใƒˆใฏ้Ÿณๅฃฐใชใฉใฎ้€ฃ็ถšไฟกๅทใ‚’้›ขๆ•ฃๅŒ–ใ™ใ‚‹็ตๆžœใงใ™ใ€‚ ### self-attention ๅ…ฅๅŠ›ใฎๅ„่ฆ็ด ใฏใ€ใฉใฎไป–ใฎ่ฆ็ด ใซๆณจๆ„ใ‚’ๆ‰•ใ†ในใใ‹ใ‚’ๆคœๅ‡บใ—ใพใ™ใ€‚ ### self-supervised learning ใƒขใƒ‡ใƒซใŒใƒฉใƒ™ใƒซใฎใชใ„ใƒ‡ใƒผใ‚ฟใ‹ใ‚‰่‡ชๅˆ†่‡ช่บซใฎๅญฆ็ฟ’็›ฎๆจ™ใ‚’ไฝœๆˆใ™ใ‚‹ๆฉŸๆขฐๅญฆ็ฟ’ๆŠ€่ก“ใฎใ‚ซใƒ†ใ‚ดใƒชใงใ™ใ€‚ใ“ใ‚Œใฏ[ๆ•™ๅธซใชใ—ๅญฆ็ฟ’](#ๆ•™ๅธซใชใ—ๅญฆ็ฟ’)ใ‚„[ๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’](#ๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’)ใจใฏ็•ฐใชใ‚Šใ€ๅญฆ็ฟ’ใƒ—ใƒญใ‚ปใ‚นใฏใƒฆใƒผใ‚ถใƒผใ‹ใ‚‰ใฏๆ˜Ž็คบ็š„ใซใฏ็›ฃ็ฃใ•ใ‚Œใฆใ„ใชใ„็‚นใŒ็•ฐใชใ‚Šใพใ™ใ€‚ ่‡ชๅทฑๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’ใฎ1ใคใฎไพ‹ใฏ[ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](#ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ-mlm)ใงใ€ใƒขใƒ‡ใƒซใซใฏไธ€้ƒจใฎใƒˆใƒผใ‚ฏใƒณใŒๅ‰Š้™คใ•ใ‚ŒใŸๆ–‡ใŒไธŽใˆใ‚‰ใ‚Œใ€ๆฌ ่ฝใ—ใŸใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ™ใ‚‹ใ‚ˆใ†ใซๅญฆ็ฟ’ใ—ใพใ™ใ€‚ ### semi-supervised learning ใƒฉใƒ™ใƒซไป˜ใใƒ‡ใƒผใ‚ฟใฎๅฐ‘้‡ใจใƒฉใƒ™ใƒซใฎใชใ„ใƒ‡ใƒผใ‚ฟใฎๅคง้‡ใ‚’็ต„ใฟๅˆใ‚ใ›ใฆใƒขใƒ‡ใƒซใฎ็ฒพๅบฆใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ๅบƒ็ฏ„ใชๆฉŸๆขฐๅญฆ็ฟ’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆŠ€่ก“ใฎใ‚ซใƒ†ใ‚ดใƒชใงใ™ใ€‚[ๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’](#ๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’)ใ‚„[ๆ•™ๅธซใชใ—ๅญฆ็ฟ’](#ๆ•™ๅธซใชใ—ๅญฆ็ฟ’)ใจใฏ็•ฐใชใ‚Šใ€ๅŠๆ•™ๅธซใ‚ใ‚Šๅญฆ็ฟ’ใฎใ‚ขใƒ—ใƒญใƒผใƒใฎ1ใคใฏใ€Œใ‚ปใƒซใƒ•ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ€ใงใ‚ใ‚Šใ€ใƒขใƒ‡ใƒซใฏใƒฉใƒ™ใƒซไป˜ใใƒ‡ใƒผใ‚ฟใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใ€ๆฌกใซใƒฉใƒ™ใƒซใฎใชใ„ใƒ‡ใƒผใ‚ฟใงไบˆๆธฌใ‚’่กŒใ„ใพใ™ใ€‚ใƒขใƒ‡ใƒซใŒๆœ€ใ‚‚่‡ชไฟกใ‚’ๆŒใฃใฆไบˆๆธฌใ™ใ‚‹้ƒจๅˆ†ใŒใƒฉใƒ™ใƒซไป˜ใใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซ่ฟฝๅŠ 
ใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใฎๅ†ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ### sequence-to-sequence (seq2seq) ๅ…ฅๅŠ›ใ‹ใ‚‰ๆ–ฐใ—ใ„ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’็”Ÿๆˆใ™ใ‚‹ใƒขใƒ‡ใƒซใงใ™ใ€‚็ฟป่จณใƒขใƒ‡ใƒซใ‚„่ฆ็ด„ใƒขใƒ‡ใƒซ๏ผˆ[Bart](model_doc/bart)ใ‚„[T5](model_doc/t5)ใชใฉ๏ผ‰ใชใฉใŒใ“ใ‚Œใซ่ฉฒๅฝ“ใ—ใพใ™ใ€‚ ### stride [็•ณใฟ่พผใฟ](#็•ณใฟ่พผใฟ)ใพใŸใฏ[ใƒ—ใƒผใƒชใƒณใ‚ฐ](#ใƒ—ใƒผใƒชใƒณใ‚ฐ)ใซใŠใ„ใฆใ€ใ‚นใƒˆใƒฉใ‚คใƒ‰ใฏใ‚ซใƒผใƒใƒซใŒ่กŒๅˆ—ไธŠใง็งปๅ‹•ใ™ใ‚‹่ท้›ขใ‚’ๆŒ‡ใ—ใพใ™ใ€‚ใ‚นใƒˆใƒฉใ‚คใƒ‰ใŒ1ใฎๅ ดๅˆใ€ใ‚ซใƒผใƒใƒซใฏ1ใƒ”ใ‚ฏใ‚ปใƒซใšใค็งปๅ‹•ใ—ใ€ใ‚นใƒˆใƒฉใ‚คใƒ‰ใŒ2ใฎๅ ดๅˆใ€ใ‚ซใƒผใƒใƒซใฏ2ใƒ”ใ‚ฏใ‚ปใƒซใšใค็งปๅ‹•ใ—ใพใ™ใ€‚ ### supervised learning ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใฎไธ€ใคใงใ€็›ดๆŽฅใƒฉใƒ™ใƒซไป˜ใใƒ‡ใƒผใ‚ฟใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใฎๆ€ง่ƒฝใ‚’ไฟฎๆญฃใ—ๆŒ‡ๅฐŽใ—ใพใ™ใ€‚ใƒ‡ใƒผใ‚ฟใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใซไพ›็ตฆใ•ใ‚Œใ€ใใฎไบˆๆธฌใŒๆ—ข็Ÿฅใฎใƒฉใƒ™ใƒซใจๆฏ”่ผƒใ•ใ‚Œใพใ™ใ€‚ใƒขใƒ‡ใƒซใฏไบˆๆธฌใŒใฉใ‚Œใ ใ‘่ชคใฃใฆใ„ใŸใ‹ใซๅŸบใฅใ„ใฆ้‡ใฟใ‚’ๆ›ดๆ–ฐใ—ใ€ใƒ—ใƒญใ‚ปใ‚นใฏใƒขใƒ‡ใƒซใฎๆ€ง่ƒฝใ‚’ๆœ€้ฉๅŒ–ใ™ใ‚‹ใŸใ‚ใซ็นฐใ‚Š่ฟ”ใ•ใ‚Œใพใ™ใ€‚ ## T ### token ๆ–‡ใฎไธ€้ƒจใงใ‚ใ‚Šใ€้€šๅธธใฏๅ˜่ชžใงใ™ใŒใ€ใ‚ตใƒ–ใƒฏใƒผใƒ‰๏ผˆไธ€่ˆฌ็š„ใงใชใ„ๅ˜่ชžใฏใ—ใฐใ—ใฐใ‚ตใƒ–ใƒฏใƒผใƒ‰ใซๅˆ†ๅ‰ฒใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผ‰ใพใŸใฏๅฅ่ชญ็‚นใฎ่จ˜ๅทใงใ‚ใ‚‹ใ“ใจใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ### token Type IDs ไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏใ€ๆ–‡ใฎใƒšใ‚ขใฎๅˆ†้กžใ‚„่ณชๅ•ๅฟœ็ญ”ใ‚’่กŒใ†ใ“ใจใ‚’็›ฎ็š„ใจใ—ใฆใ„ใพใ™ใ€‚ <Youtube id="0u3ioSwev3s"/> ใ“ใ‚Œใซใฏ็•ฐใชใ‚‹2ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅ˜ไธ€ใฎใ€Œinput_idsใ€ใ‚จใƒณใƒˆใƒชใซ็ตๅˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€้€šๅธธใฏๅˆ†้กžๅญ๏ผˆ`[CLS]`๏ผ‰ใ‚„ๅŒบๅˆ‡ใ‚Š่จ˜ๅท๏ผˆ`[SEP]`๏ผ‰ใชใฉใฎ็‰นๅˆฅใชใƒˆใƒผใ‚ฏใƒณใฎๅŠฉใ‘ใ‚’ๅ€Ÿใ‚ŠใฆๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ไพ‹ใˆใฐใ€BERTใƒขใƒ‡ใƒซใฏๆฌกใฎใ‚ˆใ†ใซ2ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นๅ…ฅๅŠ›ใ‚’ๆง‹็ฏ‰ใ—ใพใ™๏ผš ๆ—ฅๆœฌ่ชž่จณใ‚’ๆไพ›ใ—ใฆใ„ใŸใ ใใŸใ„ใงใ™ใ€‚Markdownๅฝขๅผใง่จ˜่ฟฐใ—ใฆใใ ใ•ใ„ใ€‚ ```python >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP] ``` ๆˆ‘ใ€…ใฏใ€ๅ‰่ฟฐใฎใ‚ˆใ†ใซใ€2ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’2ใคใฎๅผ•ๆ•ฐใจใ—ใฆ `tokenizer` ใซๆธกใ™ใ“ใจใงใ€ใ“ใฎใ‚ˆใ†ใชๆ–‡ใ‚’่‡ชๅ‹•็š„ใซ็”Ÿๆˆใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผˆไปฅๅ‰ใฎใ‚ˆใ†ใซใƒชใ‚นใƒˆใงใฏใชใ๏ผ‰ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซ๏ผš ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> sequence_a = "HuggingFace is based in NYC" >>> sequence_b = "Where is HuggingFace based?" >>> encoded_dict = tokenizer(sequence_a, sequence_b) >>> decoded = tokenizer.decode(encoded_dict["input_ids"]) ``` ใ“ใ‚Œใซๅฏพๅฟœใ™ใ‚‹ใ‚ณใƒผใƒ‰ใฏไปฅไธ‹ใงใ™๏ผš ```python >>> print(decoded) [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? 
[SEP] ``` ไธ€้ƒจใฎใƒขใƒ‡ใƒซใงใฏใ€1ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใŒใฉใ“ใง็ต‚ใ‚ใ‚Šใ€ๅˆฅใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใŒใฉใ“ใงๅง‹ใพใ‚‹ใ‹ใ‚’็†่งฃใ™ใ‚‹ใฎใซๅๅˆ†ใชๆƒ…ๅ ฑใŒๅ‚™ใ‚ใฃใฆใ„ใพใ™ใ€‚ใŸใ ใ—ใ€BERTใชใฉใฎไป–ใฎใƒขใƒ‡ใƒซใงใฏใ€ใƒˆใƒผใ‚ฏใƒณใ‚ฟใ‚คใƒ—ID๏ผˆใ‚ปใ‚ฐใƒกใƒณใƒˆIDใจใ‚‚ๅ‘ผใฐใ‚Œใ‚‹๏ผ‰ใ‚‚ไฝฟ็”จใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใƒขใƒ‡ใƒซๅ†…ใฎ2ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’่ญ˜ๅˆฅใ™ใ‚‹ใƒใ‚คใƒŠใƒชใƒžใ‚นใ‚ฏใจใ—ใฆ่กจใ•ใ‚Œใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€ใ“ใฎใƒžใ‚นใ‚ฏใ‚’ใ€Œtoken_type_idsใ€ใจใ—ใฆ่ฟ”ใ—ใพใ™ใ€‚ ```python >>> encoded_dict["token_type_ids"] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ๆœ€ๅˆใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ€ใคใพใ‚Š่ณชๅ•ใฎใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ‚‹ใ€Œใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ€ใฏใ€ใ™ในใฆใฎใƒˆใƒผใ‚ฏใƒณใŒใ€Œ0ใ€ใง่กจใ•ใ‚Œใฆใ„ใพใ™ใ€‚ไธ€ๆ–นใ€2็•ช็›ฎใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ€่ณชๅ•ใซๅฏพๅฟœใ™ใ‚‹ใ‚‚ใฎใฏใ€ใ™ในใฆใฎใƒˆใƒผใ‚ฏใƒณใŒใ€Œ1ใ€ใง่กจใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ไธ€้ƒจใฎใƒขใƒ‡ใƒซใ€ไพ‹ใˆใฐ [`XLNetModel`] ใฎใ‚ˆใ†ใซใ€่ฟฝๅŠ ใฎใƒˆใƒผใ‚ฏใƒณใŒใ€Œ2ใ€ใง่กจใ•ใ‚Œใพใ™ใ€‚ ### transfer learning ไบ‹ๅ‰ใซๅญฆ็ฟ’ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅ–ใ‚Šใ€ใใ‚Œใ‚’ใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซ้ฉๅฟœใ•ใ›ใ‚‹ๆŠ€่ก“ใ€‚ใ‚ผใƒญใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’่จ“็ทดใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใ‹ใ‚‰ๅพ—ใŸ็Ÿฅ่ญ˜ใ‚’ๅ‡บ็™บ็‚นใจใ—ใฆๆดป็”จใงใใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šๅญฆ็ฟ’ใƒ—ใƒญใ‚ปใ‚นใŒๅŠ ้€Ÿใ—ใ€ๅฟ…่ฆใช่จ“็ทดใƒ‡ใƒผใ‚ฟใฎ้‡ใŒๆธ›ๅฐ‘ใ—ใพใ™ใ€‚ ### transformer ่‡ชๅทฑๆณจๆ„ใƒ™ใƒผใ‚นใฎๆทฑๅฑคๅญฆ็ฟ’ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ€‚ ## U ### unsupervised learning ใƒขใƒ‡ใƒซใซๆไพ›ใ•ใ‚Œใ‚‹ใƒ‡ใƒผใ‚ฟใŒใƒฉใƒ™ใƒซไป˜ใ‘ใ•ใ‚Œใฆใ„ใชใ„ใƒขใƒ‡ใƒซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅฝขๆ…‹ใ€‚ๆ•™ๅธซใชใ—ๅญฆ็ฟ’ใฎๆŠ€่ก“ใฏใ€ใ‚ฟใ‚นใ‚ฏใซๅฝน็ซ‹ใคใƒ‘ใ‚ฟใƒผใƒณใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซใƒ‡ใƒผใ‚ฟๅˆ†ๅธƒใฎ็ตฑ่จˆๆƒ…ๅ ฑใ‚’ๆดป็”จใ—ใพใ™ใ€‚
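ใ“ใฎ็”จ่ชž้›†ใฎใพใจใ‚ใจใ—ใฆใ€ใ“ใ“ใง็™ปๅ ดใ—ใŸ `input_ids`ใ€`attention_mask`ใ€`token_type_ids`ใ€`labels` ใ‚’1ใคใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใซใพใจใ‚ใŸๆœ€ๅฐ้™ใฎใ‚นใ‚ฑใƒƒใƒใ‚’ไปฅไธ‹ใซ็คบใ—ใพใ™ใ€‚ใƒฉใƒ™ใƒซๅ€คใฏ่ชฌๆ˜Ž็”จใฎไปฎใฎใ‚‚ใฎใงใ‚ใ‚Šใ€ๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐๅ‰ใชใฎใงใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚Œใ‚‹็‚นใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased")

# ้•ทใ•ใฎ็•ฐใชใ‚‹2ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐไป˜ใใงใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ใจใ€
# input_idsใ€token_type_idsใ€attention_mask ใŒใพใจใ‚ใฆๅพ—ใ‚‰ใ‚Œใพใ™
inputs = tokenizer(
    ["This is a short sequence.", "This is a rather long sequence."],
    padding=True,
    return_tensors="pt",
)

labels = torch.tensor([0, 1])  # ่ชฌๆ˜Ž็”จใฎไปฎใฎใƒฉใƒ™ใƒซ

# labels ใ‚’ๆธกใ™ใจใ€ใƒขใƒ‡ใƒซใฏไบˆๆธฌใจใƒฉใƒ™ใƒซใฎ้–“ใฎๆๅคฑใ‚’่จˆ็ฎ—ใ—ใพใ™
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```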
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_hardware.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Custom hardware for training ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใฏใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใซๅคงใใชๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚GPUใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€Tim Dettmerใฎๅ„ชใ‚ŒใŸ[ใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ GPUใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใฎๅฎŸ็”จ็š„ใชใ‚ขใƒ‰ใƒใ‚คใ‚นใ‚’ใ„ใใคใ‹่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ## GPU ใ‚ˆใ‚Šๅคงใใชใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ ดๅˆใ€ๅŸบๆœฌ็š„ใซใฏไปฅไธ‹ใฎ3ใคใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใŒใ‚ใ‚Šใพใ™๏ผš - ใ‚ˆใ‚ŠๅคงใใชGPU - ใ‚ˆใ‚ŠๅคšใใฎGPU - ใ‚ˆใ‚ŠๅคšใใฎCPUใŠใ‚ˆใณNVMe๏ผˆ[DeepSpeed-Infinity](main_classes/deepspeed#nvme-support)ใซใ‚ˆใ‚‹ใ‚ชใƒ•ใƒญใƒผใƒ‰๏ผ‰ ใพใšใ€ๅ˜ไธ€ใฎGPUใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†ใ€‚ ### Power and Cooling ้ซ˜ไพกใชใƒใ‚คใ‚จใƒณใƒ‰GPUใ‚’่ณผๅ…ฅใ—ใŸๅ ดๅˆใ€ๆญฃใ—ใ„้›ปๅŠ›ไพ›็ตฆใจๅๅˆ†ใชๅ†ทๅดใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ใ€‚ **้›ปๅŠ›**๏ผš ไธ€้ƒจใฎ้ซ˜็ดšใ‚ณใƒณใ‚ทใƒฅใƒผใƒžGPUใ‚ซใƒผใƒ‰ใซใฏใ€2ใคใพใŸใฏ3ใคใฎPCI-E 8ใƒ”ใƒณ้›ปๆบใ‚ฝใ‚ฑใƒƒใƒˆใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚ซใƒผใƒ‰ใซใ‚ใ‚‹ใ‚ฝใ‚ฑใƒƒใƒˆใฎๆ•ฐใ ใ‘ใ€็‹ฌ็ซ‹ใ—ใŸ12V PCI-E 8ใƒ”ใƒณใ‚ฑใƒผใƒ–ใƒซใŒๆŽฅ็ถšใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ๅŒใ˜ใ‚ฑใƒผใƒ–ใƒซใฎไธ€็ซฏใซใ‚ใ‚‹2ใคใฎๅˆ†ๅฒ๏ผˆใพใŸใฏใƒ”ใƒƒใ‚ฐใƒ†ใƒผใƒซใ‚ฑใƒผใƒ–ใƒซใจใ—ใฆใ‚‚็Ÿฅใ‚‰ใ‚Œใฆใ„ใพใ™๏ผ‰ใ‚’ไฝฟ็”จใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ใคใพใ‚Šใ€GPUใซ2ใคใฎใ‚ฝใ‚ฑใƒƒใƒˆใŒใ‚ใ‚‹ๅ ดๅˆใ€PSUใ‹ใ‚‰ใ‚ซใƒผใƒ‰ใซๅ‘ใ‘ใฆ2ใคใฎPCI-E 8ใƒ”ใƒณใ‚ฑใƒผใƒ–ใƒซใ‚’ไฝฟ็”จใ—ใ€1ใคใฎใ‚ฑใƒผใƒ–ใƒซใฎ็ซฏใซ2ใคใฎPCI-E 8ใƒ”ใƒณใ‚ณใƒใ‚ฏใ‚ฟใŒใ‚ใ‚‹ใ‚‚ใฎใฏไฝฟ็”จใ—ใชใ„ใงใใ ใ•ใ„๏ผใใ†ใ—ใชใ„ใจใ€ใ‚ซใƒผใƒ‰ใ‹ใ‚‰ใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅๅˆ†ใซๅผ•ใๅ‡บใ™ใ“ใจใŒใงใใพใ›ใ‚“ใ€‚ ๅ„PCI-E 8ใƒ”ใƒณ้›ปๆบใ‚ฑใƒผใƒ–ใƒซใฏใ€PSUๅดใฎ12VใƒฌใƒผใƒซใซๆŽฅ็ถšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ๆœ€ๅคงใง150Wใฎ้›ปๅŠ›ใ‚’ไพ›็ตฆใงใใพใ™ใ€‚ ไธ€้ƒจใฎใ‚ซใƒผใƒ‰ใฏPCI-E 12ใƒ”ใƒณใ‚ณใƒใ‚ฏใ‚ฟใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใ€ใ“ใ‚Œใ‚‰ใฏๆœ€ๅคงใง500-600Wใฎ้›ปๅŠ›ใ‚’ไพ›็ตฆใงใใพใ™ใ€‚ ไฝŽไพกๆ ผๅธฏใฎใ‚ซใƒผใƒ‰ใฏ6ใƒ”ใƒณใ‚ณใƒใ‚ฏใ‚ฟใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใ€ๆœ€ๅคงใง75Wใฎ้›ปๅŠ›ใ‚’ไพ›็ตฆใ—ใพใ™ใ€‚ ใ•ใ‚‰ใซใ€ใ‚ซใƒผใƒ‰ใŒๅฟ…่ฆใจใ™ใ‚‹ๅฎ‰ๅฎšใ—ใŸ้›ปๅœงใ‚’ๆไพ›ใ™ใ‚‹้ซ˜ๅ“่ณชใช้›ปๆบใƒฆใƒ‹ใƒƒใƒˆ๏ผˆPSU๏ผ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ‚‚ใกใ‚ใ‚“ใ€PSUใซใฏใ‚ซใƒผใƒ‰ใ‚’้ง†ๅ‹•ใ™ใ‚‹ใŸใ‚ใซๅๅˆ†ใชๆœชไฝฟ็”จใฎ้›ปๅŠ›ใŒๅฟ…่ฆใงใ™ใ€‚ **ๅ†ทๅด**๏ผš GPUใŒ้Ž็†ฑใ™ใ‚‹ใจใ€ใ‚นใƒญใƒƒใƒˆใƒชใƒณใ‚ฐใŒ้–‹ๅง‹ใ•ใ‚Œใ€ใƒ•ใƒซใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๆไพ›ใ—ใชใใชใ‚Šใ€้Ž็†ฑใ—ใ™ใŽใ‚‹ใจใ‚ทใƒฃใƒƒใƒˆใƒ€ใ‚ฆใƒณใ™ใ‚‹ใ“ใจใ•ใˆใ‚ใ‚Šใพใ™ใ€‚ GPUใŒ้‡่ฆใช่ฒ 
่ทใฎไธ‹ใงใฉใฎใ‚ˆใ†ใชๆธฉๅบฆใ‚’็›ฎๆŒ‡ใ™ในใใ‹ใ‚’ๆญฃ็ขบใซ็คบใ™ใ“ใจใฏ้›ฃใ—ใ„ใงใ™ใŒใ€ใŠใใ‚‰ใ+80โ„ƒๆœชๆบ€ใงใ‚ใ‚Œใฐ่‰ฏใ„ใงใ—ใ‚‡ใ†ใŒใ€ใใ‚Œใ‚ˆใ‚ŠไฝŽใ„ๆ–นใŒ่‰ฏใ„ใงใ™ - ใŠใใ‚‰ใ70-75โ„ƒใŒๅ„ชใ‚ŒใŸ็ฏ„ๅ›ฒใงใ—ใ‚‡ใ†ใ€‚ใ‚นใƒญใƒƒใƒˆใƒชใƒณใ‚ฐใฎ้–‹ๅง‹ๆธฉๅบฆใฏใŠใใ‚‰ใ84-90โ„ƒใฎใ‚ใŸใ‚Šใ‹ใ‚‰ใงใ—ใ‚‡ใ†ใ€‚ใ‚นใƒญใƒƒใƒˆใƒชใƒณใ‚ฐใซใ‚ˆใ‚‹ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎไฝŽไธ‹ไปฅๅค–ใซใ‚‚ใ€้•ทๆ™‚้–“ใซใ‚ใŸใ‚‹้žๅธธใซ้ซ˜ใ„ๆธฉๅบฆใฏGPUใฎๅฏฟๅ‘ฝใ‚’็Ÿญ็ธฎใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใซใ€่ค‡ๆ•ฐใฎGPUใ‚’ๆŒใค้š›ใซๆœ€ใ‚‚้‡่ฆใชๅด้ขใฎไธ€ใคใงใ‚ใ‚‹ๆŽฅ็ถšใซใคใ„ใฆ่ฉณใ—ใ่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ### Multi-GPU Connectivity ่ค‡ๆ•ฐใฎGPUใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใ‚ซใƒผใƒ‰ใฎ็›ธไบ’ๆŽฅ็ถšๆ–นๆณ•ใฏใƒˆใƒผใ‚ฟใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ™‚้–“ใซๅคงใใชๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚GPUใŒๅŒใ˜็‰ฉ็†ใƒŽใƒผใƒ‰ใซใ‚ใ‚‹ๅ ดๅˆใ€ๆฌกใฎใ‚ˆใ†ใซๅฎŸ่กŒใงใใพใ™๏ผš ``` nvidia-smi topo -m ``` ใ‚‚ใกใ‚ใ‚“ใ€GPUใŒใฉใฎใ‚ˆใ†ใซ็›ธไบ’ๆŽฅ็ถšใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ใƒ‡ใƒฅใ‚ขใƒซGPUใ‚’ๆญ่ผ‰ใ—ใ€NVLinkใงๆŽฅ็ถšใ•ใ‚Œใฆใ„ใ‚‹ใƒžใ‚ทใƒณใงใฏใ€ใŠใใ‚‰ใไปฅไธ‹ใฎใ‚ˆใ†ใชๆƒ…ๅ ฑใŒ่กจ็คบใ•ใ‚Œใ‚‹ใงใ—ใ‚‡ใ†๏ผš ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` ๅˆฅใฎNVLinkใชใ—ใฎใƒžใ‚ทใƒณใงใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใช็ŠถๆณใŒ็™บ็”Ÿใ™ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผš ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` ใ“ใกใ‚‰ใŒไผ่ชฌใงใ™๏ผš ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` ๆœ€ๅˆใฎใƒฌใƒใƒผใƒˆใงใ‚ใ‚‹ `NV2` ใงใฏใ€GPUใฏ2ใคใฎNVLinkใงๆŽฅ็ถšใ•ใ‚ŒใฆใŠใ‚Šใ€2็•ช็›ฎใฎใƒฌใƒใƒผใƒˆใงใ‚ใ‚‹ `PHB` ใงใฏใ€ๅ…ธๅž‹็š„ใชๆถˆ่ฒป่€…ๅ‘ใ‘ใฎPCIe+Bridgeใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใŒ่กŒใ‚ใ‚Œใฆใ„ใพใ™ใ€‚ ใ‚ใชใŸใฎใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงใฉใฎ็จฎ้กžใฎๆŽฅ็ถšๆ€งใŒใ‚ใ‚‹ใ‹ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใ‚Œใ‚‰ใฎๆŽฅ็ถšๆ–นๆณ•ใฎใ„ใใคใ‹ใฏใ‚ซใƒผใƒ‰้–“ใฎ้€šไฟกใ‚’้€Ÿใใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผˆไพ‹๏ผšNVLink๏ผ‰ใ€ไป–ใฎใ‚‚ใฎใฏ้…ใใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผˆไพ‹๏ผšPHB๏ผ‰ใ€‚ ไฝฟ็”จใ•ใ‚Œใ‚‹ใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃใ‚ฝใƒชใƒฅใƒผใ‚ทใƒงใƒณใฎ็จฎ้กžใซๅฟœใ˜ใฆใ€ๆŽฅ็ถš้€Ÿๅบฆใฏๅคงใใชๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ใ“ใจใ‚‚ใ€ๅฐใ•ใชๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ใ“ใจใ‚‚ใ‚ใ‚Šใพใ™ใ€‚GPUใŒใ‚ใพใ‚Š้ ป็นใซๅŒๆœŸใ™ใ‚‹ๅฟ…่ฆใŒใชใ„ๅ ดๅˆใ€DDPใฎใ‚ˆใ†ใซใ€้…ใ„ๆŽฅ็ถšใฎๅฝฑ้Ÿฟใฏใใ‚Œใปใฉ้‡่ฆใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ—ใ‹ใ—ใ€GPUใŒ้ ป็นใซใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’้€ไฟกใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ€ZeRO-DPใฎใ‚ˆใ†ใซใ€้ซ˜้€ŸใฎๆŽฅ็ถšใŒใ‚ˆใ‚Š้ซ˜้€Ÿใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅฎŸ็พใ™ใ‚‹ใŸใ‚ใซ้žๅธธใซ้‡่ฆใซใชใ‚Šใพใ™ใ€‚ #### NVlink [NVLink](https://en.wikipedia.org/wiki/NVLink) ใฏใ€Nvidiaใซใ‚ˆใฃใฆ้–‹็™บใ•ใ‚ŒใŸๆœ‰็ทšใฎใ‚ทใƒชใ‚ขใƒซใƒžใƒซใƒใƒฌใƒผใƒณใฎ่ฟ‘่ท้›ข้€šไฟกใƒชใƒณใ‚ฏใงใ™ใ€‚ ๅ„ๆ–ฐไธ–ไปฃใงใฏใ€ใ‚ˆใ‚Š้ซ˜้€ŸใชๅธฏๅŸŸๅน…ใŒๆไพ›ใ•ใ‚Œใพใ™ใ€‚ใŸใจใˆใฐใ€[Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf) ใ‹ใ‚‰ใฎๅผ•็”จใงใ™ใ€‚ > Third-Generation NVLinkยฎ > GA102 GPUs 
utilize NVIDIAโ€™s third-generation NVLink interface, which includes four x4 links, > with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four > links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth > between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. > (Note that 3-Way and 4-Way SLI configurations are not supported.) ใ—ใŸใŒใฃใฆใ€`nvidia-smi topo -m` ใฎๅ‡บๅŠ›ใฎ `NVX` ใƒฌใƒใƒผใƒˆใงๅ–ๅพ—ใ™ใ‚‹ `X` ใŒ้ซ˜ใ„ใปใฉ่‰ฏใ„ใงใ™ใ€‚ไธ–ไปฃใฏใ‚ใชใŸใฎGPUใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซไพๅญ˜ใ—ใพใ™ใ€‚ ๅฐใ•ใชใ‚ตใƒณใƒ—ใƒซใฎwikitextใ‚’ไฝฟ็”จใ—ใŸgpt2่จ€่ชžใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅฎŸ่กŒใ‚’ๆฏ”่ผƒใ—ใพใ—ใ‚‡ใ†ใ€‚ ็ตๆžœใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš ๏ผˆใ“ใ“ใซ็ตๆžœใ‚’ๆŒฟๅ…ฅ๏ผ‰ ไธŠ่จ˜ใฎใƒ†ใ‚ญใ‚นใƒˆใฎๆ—ฅๆœฌ่ชž่จณใ‚’ๆไพ›ใ—ใพใ—ใŸใ€‚Markdownใ‚ณใƒผใƒ‰ใจใ—ใฆใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ—ใพใ—ใŸใ€‚ใฉใ‚“ใชไป–ใฎ่ณชๅ•ใŒใ‚ใ‚Œใฐใ€ใŠๆฐ—่ปฝใซใŠ็Ÿฅใ‚‰ใ›ใใ ใ•ใ„๏ผ | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | NVLinkใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒ็ด„23๏ผ…้€ŸใๅฎŒไบ†ใ™ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚2็•ช็›ฎใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใงใฏใ€`NCCL_P2P_DISABLE=1`ใ‚’ไฝฟ็”จใ—ใฆใ€GPUใŒNVLinkใ‚’ไฝฟ็”จใ—ใชใ„ใ‚ˆใ†ใซๆŒ‡็คบใ—ใฆใ„ใพใ™ใ€‚ ไปฅไธ‹ใฏใ€ๅฎŒๅ…จใชใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ณใƒผใƒ‰ใจๅ‡บๅŠ›ใงใ™๏ผš ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`) Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
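ใชใŠใ€`nvidia-smi topo -m` ใฎ่ฃœๅŠฉใจใ—ใฆใ€PyTorch ใ‹ใ‚‰ GPU ้–“ใฎ P2P๏ผˆ็›ดๆŽฅ๏ผ‰ใ‚ขใ‚ฏใ‚ปใ‚นใŒๅˆฉ็”จๅฏ่ƒฝใ‹ใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ไปฅไธ‹ใฏใ€CUDA ๅฏพๅฟœใฎ PyTorch ใจ2ๅŸบไปฅไธŠใฎ GPU ใ‚’ๅ‰ๆใจใ—ใŸๆœ€ๅฐ้™ใฎใ‚นใ‚ฑใƒƒใƒใงใ™๏ผˆP2P ใŒๅฏ่ƒฝใงใ‚ใฃใฆใ‚‚ใ€ใใ‚ŒใŒ NVLink ็ตŒ็”ฑใ‹ PCIe ็ตŒ็”ฑใ‹ใพใงใฏๅŒบๅˆฅใงใใชใ„็‚นใซๆณจๆ„ใ—ใฆใใ ใ•ใ„๏ผ‰ใ€‚

```python
import torch

# ๅˆฉ็”จๅฏ่ƒฝใชGPUใ‚’ๅˆ—ๆŒ™ใ—ใพใ™
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

# GPU 0 ใจ GPU 1 ใฎ้–“ใง P2P ใ‚ขใ‚ฏใ‚ปใ‚นใŒๅฏ่ƒฝใ‹ใ‚’ๅ•ใ„ๅˆใ‚ใ›ใพใ™
if torch.cuda.device_count() >= 2:
    print("P2P 0->1:", torch.cuda.can_device_access_peer(0, 1))
```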
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/hpo_train.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hyperparameter Search using Trainer API ๐Ÿค— Transformersใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๆœ€้ฉๅŒ–ใ™ใ‚‹[`Trainer`]ใ‚ฏใƒฉใ‚นใ‚’ๆไพ›ใ—ใ€็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๆ‰‹ๅ‹•ใง่จ˜่ฟฐใ›ใšใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้–‹ๅง‹ใ™ใ‚‹ใฎใŒ็ฐกๅ˜ใซใชใ‚Šใพใ™ใ€‚[`Trainer`]ใฏใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผๆคœ็ดขใฎAPIใ‚‚ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ€ใใ‚Œใ‚’ไพ‹็คบใ—ใพใ™ใ€‚ ## Hyperparameter Search backend [`Trainer`]ใฏ็พๅœจใ€4ใคใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผๆคœ็ดขใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™๏ผš [optuna](https://optuna.org/)ใ€[sigopt](https://sigopt.com/)ใ€[raytune](https://docs.ray.io/en/latest/tune/index.html)ใ€ใŠใ‚ˆใณ[wandb](https://wandb.ai/site/sweeps)ใ€‚ ใ“ใ‚Œใ‚‰ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผๆคœ็ดขใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```bash pip install optuna/sigopt/wandb/ray[tune] ``` ## How to enable Hyperparameter search in example ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆคœ็ดขใ‚นใƒšใƒผใ‚นใ‚’ๅฎš็พฉใ—ใ€็•ฐใชใ‚‹ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใซใฏ็•ฐใชใ‚‹ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใŒๅฟ…่ฆใงใ™ใ€‚ Sigoptใฎๅ ดๅˆใ€sigopt [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ใใ‚Œใฏไปฅไธ‹ใฎใ‚ˆใ†ใชใ‚‚ใฎใงใ™๏ผš ```py >>> def sigopt_hp_space(trial): ... return [ ... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"}, ... { ... "categorical_values": ["16", "32", "64", "128"], ... "name": "per_device_train_batch_size", ... "type": "categorical", ... }, ... ] ``` Optunaใซ้–ขใ—ใฆใฏใ€[object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ```py >>> def optuna_hp_space(trial): ... return { ... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True), ... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]), ... } ``` Optunaใฏใ€ๅคš็›ฎ็š„ใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟๆœ€้ฉๅŒ–๏ผˆHPO๏ผ‰ใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ `hyperparameter_search` ใง `direction` ใ‚’ๆธกใ—ใ€่ค‡ๆ•ฐใฎ็›ฎ็š„้–ขๆ•ฐๅ€คใ‚’่ฟ”ใ™ใŸใ‚ใฎ็‹ฌ่‡ชใฎ `compute_objective` ใ‚’ๅฎš็พฉใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ Pareto Front๏ผˆ`List[BestRun]`๏ผ‰ใฏ `hyperparameter_search` ใง่ฟ”ใ•ใ‚Œใ€[test_trainer](https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py) ใฎใƒ†ใ‚นใƒˆใ‚ฑใƒผใ‚น `TrainerHyperParameterMultiObjectOptunaIntegrationTest` ใ‚’ๅ‚็…งใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ```py >>> best_trials = trainer.hyperparameter_search( ... direction=["minimize", "maximize"], ... backend="optuna", ... 
hp_space=optuna_hp_space, ... n_trials=20, ... compute_objective=compute_objective, ... ) ``` Ray Tuneใซ้–ขใ—ใฆใ€[object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ```py >>> def ray_hp_space(trial): ... return { ... "learning_rate": tune.loguniform(1e-6, 1e-4), ... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]), ... } ``` Wandbใซใคใ„ใฆใฏใ€[object_parameter](https://docs.wandb.ai/guides/sweeps/configuration)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ใ“ใ‚Œใฏไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ```py >>> def wandb_hp_space(trial): ... return { ... "method": "random", ... "metric": {"name": "objective", "goal": "minimize"}, ... "parameters": { ... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4}, ... "per_device_train_batch_size": {"values": [16, 32, 64, 128]}, ... }, ... } ``` `model_init` ้–ขๆ•ฐใ‚’ๅฎš็พฉใ—ใ€ใใ‚Œใ‚’ [`Trainer`] ใซๆธกใ™ไพ‹ใ‚’็คบใ—ใพใ™๏ผš ```py >>> def model_init(trial): ... return AutoModelForSequenceClassification.from_pretrained( ... model_args.model_name_or_path, ... from_tf=bool(".ckpt" in model_args.model_name_or_path), ... config=config, ... cache_dir=model_args.cache_dir, ... revision=model_args.model_revision, ... token=True if model_args.use_auth_token else None, ... ) ``` [`Trainer`] ใ‚’ `model_init` ้–ขๆ•ฐใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅผ•ๆ•ฐใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ€ใƒ†ใ‚นใƒˆใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ€ใŠใ‚ˆใณ่ฉ•ไพก้–ขๆ•ฐใจๅ…ฑใซไฝœๆˆใ—ใฆใใ ใ•ใ„: ```py >>> trainer = Trainer( ... model=None, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... tokenizer=tokenizer, ... model_init=model_init, ... data_collator=data_collator, ... ) ``` ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใฎๆŽข็ดขใ‚’ๅ‘ผใณๅ‡บใ—ใ€ๆœ€่‰ฏใฎใƒˆใƒฉใ‚คใ‚ขใƒซ ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใฏ `"optuna"` / `"sigopt"` / `"wandb"` / `"ray"` ใงใ‚ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ๆ–นๅ‘ใฏ `"minimize"` ใพใŸใฏ `"maximize"` ใงใ‚ใ‚Šใ€็›ฎๆจ™ใ‚’ใ‚ˆใ‚Šๅคงใใใ™ใ‚‹ใ‹ๅฐใ•ใใ™ใ‚‹ใ‹ใ‚’็คบใ—ใพใ™ใ€‚ `compute_objective` ้–ขๆ•ฐใ‚’็‹ฌ่‡ชใซๅฎš็พฉใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ๅฎš็พฉใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ `compute_objective` ใŒๅ‘ผใณๅ‡บใ•ใ‚Œใ€F1ใชใฉใฎ่ฉ•ไพกใƒกใƒˆใƒชใƒƒใ‚ฏใฎๅˆ่จˆใŒ็›ฎๆจ™ๅ€คใจใ—ใฆ่ฟ”ใ•ใ‚Œใพใ™ใ€‚ ```py >>> best_trial = trainer.hyperparameter_search( ... direction="maximize", ... backend="optuna", ... hp_space=optuna_hp_space, ... n_trials=20, ... compute_objective=compute_objective, ... ) ``` ## Hyperparameter search For DDP finetune ็พๅœจใ€DDP๏ผˆDistributed Data Parallel๏ผ‰ใฎใŸใ‚ใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผๆคœ็ดขใฏใ€Optuna ใจ SigOpt ใซๅฏพใ—ใฆๆœ‰ๅŠนใซใชใฃใฆใ„ใพใ™ใ€‚ใƒฉใƒณใ‚ฏใ‚ผใƒญใƒ—ใƒญใ‚ปใ‚นใฎใฟใŒๆคœ็ดขใƒˆใƒฉใ‚คใ‚ขใƒซใ‚’็”Ÿๆˆใ—ใ€ไป–ใฎใƒฉใƒณใ‚ฏใซๅผ•ๆ•ฐใ‚’ๆธกใ—ใพใ™ใ€‚
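ๅ‚่€ƒใพใงใซใ€ๆœฌๆ–‡ใง `hyperparameter_search` ใซๆธกใ—ใฆใ„ใ‚‹ `compute_objective` ใ‚’็‹ฌ่‡ชๅฎš็พฉใ™ใ‚‹ๅ ดๅˆใฎๆœ€ๅฐ้™ใฎใ‚นใ‚ฑใƒƒใƒใ‚’ไปฅไธ‹ใซ็คบใ—ใพใ™ใ€‚ใƒกใƒˆใƒชใƒƒใ‚ฏๅ `eval_accuracy` ใฏ `compute_metrics` ใฎๅฎŸ่ฃ…ใซไพๅญ˜ใ™ใ‚‹ไปฎๅฎšใงใ‚ใ‚Šใ€ใŠไฝฟใ„ใฎ่ฉ•ไพก้–ขๆ•ฐใŒ่ฟ”ใ™ใ‚ญใƒผใซ็ฝฎใๆ›ใˆใฆใใ ใ•ใ„ใ€‚

```python
>>> def compute_objective(metrics):
...     # trainer.evaluate() ใŒ่ฟ”ใ™ใƒกใƒˆใƒชใƒƒใ‚ฏ่พžๆ›ธใ‹ใ‚‰ใ€
...     # ๆœ€ๅคงๅŒ–๏ผˆใพใŸใฏๆœ€ๅฐๅŒ–๏ผ‰ใ—ใŸใ„ๅ€คใ‚’1ใคๅ–ใ‚Šๅ‡บใ—ใฆ่ฟ”ใ—ใพใ™
...     return metrics["eval_accuracy"]  # "eval_accuracy" ใฏไปฎใฎใ‚ญใƒผๅใงใ™
```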
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/run_scripts.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Train with a script ๐Ÿค— Transformersใฎ[notebooks](./notebooks/README)ใจไธ€็ท’ใซใ€[PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch)ใ€[TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow)ใ€ใพใŸใฏ[JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax)ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใ‚’็คบใ™ใ‚ตใƒณใƒ—ใƒซใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ใพใŸใ€็งใŸใกใฎ[็ ”็ฉถใƒ—ใƒญใ‚ธใ‚งใ‚ฏใƒˆ](https://github.com/huggingface/transformers/tree/main/examples/research_projects)ใ‚„[ใƒฌใ‚ฌใ‚ทใƒผใฎไพ‹](https://github.com/huggingface/transformers/tree/main/examples/legacy)ใงไฝฟ็”จใ—ใŸใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚‚่ฆ‹ใคใ‹ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏ็พๅœจใƒกใƒณใƒ†ใƒŠใƒณใ‚นใ•ใ‚ŒใฆใŠใ‚‰ใšใ€ใŠใใ‚‰ใๆœ€ๆ–ฐใƒใƒผใ‚ธใƒงใƒณใฎใƒฉใ‚คใƒ–ใƒฉใƒชใจไบ’ๆ›ๆ€งใŒใชใ„็‰นๅฎšใฎ๐Ÿค— Transformersใฎใƒใƒผใ‚ธใƒงใƒณใŒๅฟ…่ฆใงใ™ใ€‚ ใ‚ตใƒณใƒ—ใƒซใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ™ในใฆใฎๅ•้กŒใงใใฎใพใพๅ‹•ไฝœใ™ใ‚‹ใ“ใจใฏๆœŸๅพ…ใ•ใ‚ŒใฆใŠใ‚‰ใšใ€่งฃๆฑบใ—ใ‚ˆใ†ใจใ—ใฆใ„ใ‚‹ๅ•้กŒใซใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’้ฉๅฟœใ•ใ›ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใฎ็‚นใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใŸใ‚ใซใ€ใปใจใ‚“ใฉใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใƒ‡ใƒผใ‚ฟใŒใฉใฎใ‚ˆใ†ใซๅ‰ๅ‡ฆ็†ใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’ๅฎŒๅ…จใซๅ…ฌ้–‹ใ—ใ€ๅฟ…่ฆใซๅฟœใ˜ใฆ็ทจ้›†ใงใใ‚‹ใ‚ˆใ†ใซใ—ใฆใ„ใพใ™ใ€‚ ใ‚ตใƒณใƒ—ใƒซใ‚นใ‚ฏใƒชใƒ—ใƒˆใงๅฎŸ่ฃ…ใ—ใŸใ„ๆฉŸ่ƒฝใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€[ใƒ•ใ‚ฉใƒผใƒฉใƒ ](https://discuss.huggingface.co/)ใ‹[ใ‚คใ‚ทใƒฅใƒผใƒˆใƒฉใƒƒใ‚ซใƒผ](https://github.com/huggingface/transformers/issues)ใง่ญฐ่ซ–ใ—ใฆใ‹ใ‚‰ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’ๆๅ‡บใ—ใฆใใ ใ•ใ„ใ€‚ใƒใ‚ฐไฟฎๆญฃใฏๆญ“่ฟŽใ—ใพใ™ใŒใ€่ชญใฟใ‚„ใ™ใ•ใฎใ‚ณใ‚นใƒˆใงๆฉŸ่ƒฝใ‚’่ฟฝๅŠ ใ™ใ‚‹ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใฏใปใจใ‚“ใฉใƒžใƒผใ‚ธใ•ใ‚Œใชใ„ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใงใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€[PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization)ใจ[TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization)ใงๅฎŸ่กŒใ™ใ‚‹ใ‚ตใƒžใƒชใ‚ผใƒผใ‚ทใƒงใƒณใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎๅฎŸ่กŒๆ–นๆณ•ใ‚’็คบใ—ใพใ™ใ€‚ใ™ในใฆใฎไพ‹ใฏใ€ๆ˜Ž็คบ็š„ใซๆŒ‡ๅฎšใ•ใ‚Œใฆใ„ใชใ„้™ใ‚Šใ€ไธกๆ–นใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใจใ‚‚ใซๅ‹•ไฝœใ™ใ‚‹ใ“ใจใŒๆœŸๅพ…ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ## Setup ๆœ€ๆ–ฐใƒใƒผใ‚ธใƒงใƒณใฎใ‚ตใƒณใƒ—ใƒซใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๆญฃๅธธใซๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ๆ–ฐใ—ใ„ไปฎๆƒณ็’ฐๅขƒใซ๐Ÿค— Transformersใ‚’ใ‚ฝใƒผใ‚นใ‹ใ‚‰ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` ไปฅๅ‰ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎใƒใƒผใ‚ธใƒงใƒณใซใคใ„ใฆใฏใ€ไปฅไธ‹ใฎใƒˆใ‚ฐใƒซใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„๏ผš <details> <summary>ไปฅๅ‰ใฎ๐Ÿค— Transformersใฎใƒใƒผใ‚ธใƒงใƒณใซ้–ขใ™ใ‚‹ไพ‹</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> ๆฌกใซใ€็พๅœจใฎ๐Ÿค— Transformersใฎใ‚ฏใƒญใƒผใƒณใ‚’็‰นๅฎšใฎใƒใƒผใ‚ธใƒงใƒณใซๅˆ‡ใ‚Šๆ›ฟใˆใฆใใ ใ•ใ„ใ€‚ใŸใจใˆใฐใ€v3.5.1ใชใฉใงใ™ใ€‚ ```bash git checkout tags/v3.5.1 ``` ้ฉๅˆ‡ใชใƒฉใ‚คใƒ–ใƒฉใƒชใƒใƒผใ‚ธใƒงใƒณใ‚’่จญๅฎšใ—ใŸใ‚‰ใ€ไปปๆ„ใฎไพ‹ใฎใƒ•ใ‚ฉใƒซใƒ€ใซ็งปๅ‹•ใ—ใ€ไพ‹ๅ›บๆœ‰ใฎ่ฆไปถใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใพใ™๏ผš ```bash pip install -r requirements.txt ``` ## Run a script <frameworkcontent> <pt> ใ“ใฎไพ‹ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใ€ๅ‰ๅ‡ฆ็†ใ‚’่กŒใ„ใพใ™ใ€‚ๆฌกใซใ€[Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) 
ใ‚’ไฝฟ็”จใ—ใฆ่ฆ็ด„ใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃไธŠใงใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใพใ™ใ€‚ไปฅไธ‹ใฎไพ‹ใงใฏใ€[CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆไธŠใง [T5-small](https://huggingface.co/t5-small) ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใŒ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚T5 ใƒขใƒ‡ใƒซใฏใ€ใใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใซ่ตทๅ› ใ—ใฆ่ฟฝๅŠ ใฎ `source_prefix` ๅผ•ๆ•ฐใŒๅฟ…่ฆใงใ™ใ€‚ใ“ใฎใƒ—ใƒญใƒณใƒ—ใƒˆใซใ‚ˆใ‚Šใ€T5 ใฏใ“ใ‚ŒใŒ่ฆ็ด„ใ‚ฟใ‚นใ‚ฏใงใ‚ใ‚‹ใ“ใจใ‚’็Ÿฅใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> ใ“ใฎไพ‹ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆๅ‰ๅ‡ฆ็†ใ—ใพใ™ใ€‚ใใฎๅพŒใ€ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏ่ฆ็ด„ใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃไธŠใง Keras ใ‚’ไฝฟ็”จใ—ใฆใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใพใ™ใ€‚ไปฅไธ‹ใฎไพ‹ใงใฏใ€[T5-small](https://huggingface.co/t5-small) ใ‚’ [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚T5 ใƒขใƒ‡ใƒซใฏใ€ใใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใซ่ตทๅ› ใ—ใฆ่ฟฝๅŠ ใฎ `source_prefix` ๅผ•ๆ•ฐใŒๅฟ…่ฆใงใ™ใ€‚ใ“ใฎใƒ—ใƒญใƒณใƒ—ใƒˆใฏใ€T5 ใซใ“ใ‚ŒใŒ่ฆ็ด„ใ‚ฟใ‚นใ‚ฏใงใ‚ใ‚‹ใ“ใจใ‚’็Ÿฅใ‚‰ใ›ใพใ™ใ€‚ ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Distributed training and mixed precision [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)ใฏใ€ๅˆ†ๆ•ฃใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๆททๅˆ็ฒพๅบฆใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ใคใพใ‚Šใ€ใ“ใฎๆฉŸ่ƒฝใ‚’ใ‚นใ‚ฏใƒชใƒ—ใƒˆใงไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๆฉŸ่ƒฝใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€ๆฌกใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ - `fp16`ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ—ใฆๆททๅˆ็ฒพๅบฆใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚ - `nproc_per_node`ๅผ•ๆ•ฐใงไฝฟ็”จใ™ใ‚‹GPUใฎๆ•ฐใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ไปฅไธ‹ใฏๆไพ›ใ•ใ‚ŒใŸBashใ‚ณใƒผใƒ‰ใงใ™ใ€‚ใ“ใฎใ‚ณใƒผใƒ‰ใฎๆ—ฅๆœฌ่ชž่จณใ‚’Markdownๅฝขๅผใง่จ˜่ผ‰ใ—ใพใ™ใ€‚ ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` TensorFlowใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€ๅˆ†ๆ•ฃใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซ[`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy)ใ‚’ไฝฟ็”จใ—ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚นใ‚ฏใƒชใƒ—ใƒˆใซ่ฟฝๅŠ ใฎๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚TensorFlowใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง่ค‡ๆ•ฐใฎGPUใŒๅˆฉ็”จๅฏ่ƒฝใชๅ ดๅˆใซใใ‚Œใ‚‰ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ## Run a script on a TPU <frameworkcontent> <pt> Tensor Processing Units 
(TPUs)ใฏใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅŠ ้€Ÿใ•ใ›ใ‚‹ใŸใ‚ใซ็‰นๅˆฅใซ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚PyTorchใฏใ€[XLA](https://www.tensorflow.org/xla)ใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใ‚ณใƒณใƒ‘ใ‚คใƒฉใ‚’ไฝฟ็”จใ—ใฆTPUsใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใŠใ‚Šใ€่ฉณ็ดฐใซใคใ„ใฆใฏ[ใ“ใกใ‚‰](https://github.com/pytorch/xla/blob/master/README.md)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚TPUใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€`xla_spawn.py`ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’่ตทๅ‹•ใ—ใ€`num_cores`ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆไฝฟ็”จใ™ใ‚‹TPUใ‚ณใ‚ขใฎๆ•ฐใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> ใ‚‚ใกใ‚ใ‚“ใ€Tensor Processing Units๏ผˆTPUs๏ผ‰ใฏๆ€ง่ƒฝใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใŸใ‚ใซ็‰นๅˆฅใซ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚TensorFlowใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€TPUsใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใซ[`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy)ใ‚’ๅˆฉ็”จใ—ใพใ™ใ€‚TPUใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€TPUใƒชใ‚ฝใƒผใ‚นใฎๅๅ‰ใ‚’`tpu`ๅผ•ๆ•ฐใซๆธกใ—ใพใ™ใ€‚ ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Run a script with ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)ใฏใ€PyTorchๅฐ‚็”จใฎใƒฉใ‚คใƒ–ใƒฉใƒชใงใ€CPUใฎใฟใ€่ค‡ๆ•ฐใฎGPUใ€TPUใชใฉใ€ใ•ใพใ–ใพใชใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎ็ตฑไธ€ใ•ใ‚ŒใŸๆ–นๆณ•ใ‚’ๆไพ›ใ—ใพใ™ใ€‚PyTorchใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๅฎŒๅ…จใซๅฏ่ฆ–ๅŒ–ใ—ใชใŒใ‚‰ๅฎŸ่กŒใงใใพใ™ใ€‚ใพใ ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใ„ใชใ„ๅ ดๅˆใฏใ€๐Ÿค— Accelerateใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใใ ใ•ใ„๏ผš > ๆณจๆ„๏ผšAccelerateใฏๆ€ฅ้€Ÿใซ้–‹็™บใŒ้€ฒ่กŒใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏaccelerateใฎgitใƒใƒผใ‚ธใƒงใƒณใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ ```bash pip install git+https://github.com/huggingface/accelerate ``` ไปฃใ‚ใ‚Šใซใ€`run_summarization_no_trainer.py` ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Accelerate ใŒใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚นใ‚ฏใƒชใƒ—ใƒˆใซใฏใ€ใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซ `task_no_trainer.py` ใƒ•ใ‚กใ‚คใƒซใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ใพใšใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใ—ใ€ไฟๅญ˜ใ—ใพใ™๏ผš ```bash accelerate config ``` ใƒ†ใ‚นใƒˆใ‚’่กŒใ„ใ€่จญๅฎšใŒๆญฃใ—ใๆง‹ๆˆใ•ใ‚Œใฆใ„ใ‚‹ใ‹็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผš ```bash accelerate test ``` Now you are ready to launch the training: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Use a custom dataset ่ฆ็ด„ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€CSVใพใŸใฏJSON Lineใƒ•ใ‚กใ‚คใƒซใงใ‚ใ‚Œใฐใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚็‹ฌ่‡ชใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใ„ใใคใ‹ใฎ่ฟฝๅŠ ใฎๅผ•ๆ•ฐใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - `train_file`ใŠใ‚ˆใณ`validation_file`ใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจใƒใƒชใƒ‡ใƒผใ‚ทใƒงใƒณใฎใƒ•ใ‚กใ‚คใƒซใธใฎใƒ‘ใ‚นใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ - 
`text_column`ใฏ่ฆ็ด„ใ™ใ‚‹ใŸใ‚ใฎๅ…ฅๅŠ›ใƒ†ใ‚ญใ‚นใƒˆใงใ™ใ€‚ - `summary_column`ใฏๅ‡บๅŠ›ใ™ใ‚‹ๅฏพ่ฑกใƒ†ใ‚ญใ‚นใƒˆใงใ™ใ€‚ ใ‚ซใ‚นใ‚ฟใƒ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใŸ่ฆ็ด„ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Test a script ใ™ในใฆใŒไบˆๆƒณ้€šใ‚Šใซๅ‹•ไฝœใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใ‚’ๅ‡ฆ็†ใ™ใ‚‹ๅ‰ใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎไธ€้ƒจใฎไพ‹ใงใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใฏ่‰ฏใ„ใ‚ขใ‚คใƒ‡ใ‚ขใงใ™ใ€‚ไปฅไธ‹ใฎๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๆœ€ๅคงใ‚ตใƒณใƒ—ใƒซๆ•ฐใซๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™๏ผš - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` ไธ€้ƒจใฎไพ‹ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€`max_predict_samples`ๅผ•ๆ•ฐใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎๅผ•ๆ•ฐใŒใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใŒใ‚ใ‹ใ‚‰ใชใ„ๅ ดๅˆใฏใ€`-h`ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ—ใฆ็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## Resume training from checkpoint ไปฅๅ‰ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ™ใ‚‹ใŸใ‚ใฎๅฝน็ซ‹ใคใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒไธญๆ–ญใ•ใ‚ŒใŸๅ ดๅˆใงใ‚‚ใ€ๆœ€ๅˆใ‹ใ‚‰ใ‚„ใ‚Š็›ดใ™ใ“ใจใชใใ€ไธญๆ–ญใ—ใŸใจใ“ใ‚ใ‹ใ‚‰ๅ†้–‹ใงใใพใ™ใ€‚ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ™ใ‚‹ใŸใ‚ใฎ2ใคใฎๆ–นๆณ•ใŒใ‚ใ‚Šใพใ™ใ€‚ ๆœ€ๅˆใฎๆ–นๆณ•ใฏใ€`output_dir previous_output_dir` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€`output_dir` ใซไฟๅญ˜ใ•ใ‚ŒใŸๆœ€ๆ–ฐใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ™ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚ใ“ใฎๅ ดๅˆใ€`overwrite_output_dir` ใ‚’ๅ‰Š้™คใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` 2็•ช็›ฎใฎๆ–นๆณ•ใงใฏใ€`resume_from_checkpoint path_to_specific_checkpoint` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€็‰นๅฎšใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใƒ•ใ‚ฉใƒซใƒ€ใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ—ใพใ™ใ€‚ ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint 
path_to_specific_checkpoint \ --predict_with_generate ``` ## Share your model ใ™ในใฆใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€ๆœ€็ต‚็š„ใชใƒขใƒ‡ใƒซใ‚’ [Model Hub](https://huggingface.co/models) ใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚้–‹ๅง‹ใ™ใ‚‹ๅ‰ใซ Hugging Face ใซใƒญใ‚ฐใ‚คใƒณใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ```bash huggingface-cli login ``` ๆฌกใซใ€ใ‚นใ‚ฏใƒชใƒ—ใƒˆใซ `push_to_hub` ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใฎๅผ•ๆ•ฐใฏใ€Hugging Face ใฎใƒฆใƒผใ‚ถใƒผๅใจ `output_dir` ใงๆŒ‡ๅฎšใ—ใŸใƒ•ใ‚ฉใƒซใƒ€ๅใงใƒชใƒใ‚ธใƒˆใƒชใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ็‰นๅฎšใฎๅๅ‰ใ‚’ใƒชใƒใ‚ธใƒˆใƒชใซไป˜ใ‘ใ‚‹ใซใฏใ€`push_to_hub_model_id` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆ่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใฎใƒชใƒใ‚ธใƒˆใƒชใฏ่‡ชๅ‹•็š„ใซใ‚ใชใŸใฎๅๅ‰็ฉบ้–“ใฎไธ‹ใซใƒชใ‚นใƒˆใ•ใ‚Œใพใ™ใ€‚ ไปฅไธ‹ใฎไพ‹ใฏใ€็‰นๅฎšใฎใƒชใƒใ‚ธใƒˆใƒชๅใงใƒขใƒ‡ใƒซใ‚’ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ™ใ‚‹ๆ–นๆณ•ใ‚’็คบใ—ใฆใ„ใพใ™: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
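アップロード後は、モデルが Hub から正しく読み込めることを確認しておくと安心です。以下は確認用の最小スケッチです(`your-username/finetuned-t5-cnn_dailymail` は仮のリポジトリ名で、ご自身のユーザー名と `push_to_hub_model_id` で指定した名前に置き換えてください)。

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "your-username/finetuned-t5-cnn_dailymail" ใฏไปฎใฎใƒชใƒใ‚ธใƒˆใƒชๅใงใ™
repo_id = "your-username/finetuned-t5-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# T5 ็ณปใƒขใƒ‡ใƒซใชใฎใงใ€ๅญฆ็ฟ’ๆ™‚ใจๅŒใ˜ "summarize: " ใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚นใ‚’ไป˜ใ‘ใพใ™
inputs = tokenizer("summarize: " + "่ฆ็ด„ใ—ใŸใ„่‹ฑๆ–‡่จ˜ไบ‹ใฎๆœฌๆ–‡ใ‚’ใ“ใ“ใซๅ…ฅใ‚Œใพใ™", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```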
hf_public_repos/transformers/docs/source/ja/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๆŽจ่ซ–ใฎใŸใ‚ใฎๅคš่จ€่ชžใƒขใƒ‡ใƒซ [[open-in-colab]] ๐Ÿค— Transformers ใซใฏใ„ใใคใ‹ใฎๅคš่จ€่ชžใƒขใƒ‡ใƒซใŒใ‚ใ‚Šใ€ใใ‚Œใ‚‰ใฎๆŽจ่ซ–ใฎไฝฟ็”จๆ–นๆณ•ใฏๅ˜ไธ€่จ€่ชžใƒขใƒ‡ใƒซใจใฏ็•ฐใชใ‚Šใพใ™ใ€‚ใŸใ ใ—ใ€ๅคš่จ€่ชžใƒขใƒ‡ใƒซใฎไฝฟ็”จๆ–นๆณ•ใŒใ™ในใฆ็•ฐใชใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) ใชใฉใฎไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏใ€ๅ˜ไธ€่จ€่ชžใƒขใƒ‡ใƒซใจๅŒๆง˜ใซไฝฟ็”จใงใใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€ๆŽจ่ซ–ใฎใŸใ‚ใซไฝฟ็”จๆ–นๆณ•ใŒ็•ฐใชใ‚‹ๅคš่จ€่ชžใƒขใƒ‡ใƒซใ‚’ใฉใฎใ‚ˆใ†ใซไฝฟใ†ใ‹ใ‚’็คบใ—ใพใ™ใ€‚ ## XLM XLM ใซใฏ10ใฎ็•ฐใชใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒใ‚ใ‚Šใ€ใใฎใ†ใกใฎ1ใคใ ใ‘ใŒๅ˜ไธ€่จ€่ชžใงใ™ใ€‚ ๆฎ‹ใ‚Šใฎ9ใคใฎใƒขใƒ‡ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€่จ€่ชžๅŸ‹ใ‚่พผใฟใ‚’ไฝฟ็”จใ™ใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจไฝฟ็”จใ—ใชใ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ2ใคใฎใ‚ซใƒ†ใ‚ดใƒชใซๅˆ†ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ### ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใŒใ‚ใ‚‹ XLM ๆฌกใฎ XLM ใƒขใƒ‡ใƒซใฏใ€่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ไฝฟ็”จใ—ใฆใ€ๆŽจ่ซ–ใงไฝฟ็”จใ•ใ‚Œใ‚‹่จ€่ชžใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ - `xlm-mlm-ende-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ‰ใ‚คใƒ„่ชž) - `xlm-mlm-enfr-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž) - `xlm-mlm-enro-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒซใƒผใƒžใƒ‹ใ‚ข่ชž) - `xlm-mlm-xnli15-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€XNLI ่จ€่ชž) - `xlm-mlm-tlm-xnli15-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ็ฟป่จณ + XNLI ่จ€่ชž) - `xlm-clm-enfr-1024` (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž) - `xlm-clm-ende-1024` (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ‰ใ‚คใƒ„่ชž) ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใฏใ€ใƒขใƒ‡ใƒซใซๆธกใ•ใ‚Œใ‚‹ `input_ids` ใจๅŒใ˜ๅฝข็Šถใฎใƒ†ใƒณใ‚ฝใƒซใจใ—ใฆ่กจใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒ†ใƒณใ‚ฝใƒซใฎๅ€คใฏใ€ไฝฟ็”จใ•ใ‚Œใ‚‹่จ€่ชžใซไพๅญ˜ใ—ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎ `lang2id` ใŠใ‚ˆใณ `id2lang` ๅฑžๆ€งใซใ‚ˆใฃใฆ่ญ˜ๅˆฅใ•ใ‚Œใพใ™ใ€‚ ใ“ใฎไพ‹ใงใฏใ€`xlm-clm-enfr-1024` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž)ใ€‚ ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎ `lang2id` ๅฑžๆ€งใฏใ€ใ“ใฎใƒขใƒ‡ใƒซใฎ่จ€่ชžใจใใฎ ID ใ‚’่กจ็คบใ—ใพใ™ใ€‚ ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` ๆฌกใซใ€ๅ…ฅๅŠ›ไพ‹ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` ่จ€่ชž ID ใ‚’ `en` ใซ่จญๅฎšใ—ใ€ใใ‚Œใ‚’ไฝฟ็”จใ—ใฆ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใฏใ€่‹ฑ่ชžใฎ่จ€่ชž 
ID ใงใ‚ใ‚‹ใŸใ‚ใ€`0` ใงๅŸ‹ใ‚ใ‚‰ใ‚ŒใŸใƒ†ใƒณใ‚ฝใƒซใงใ™ใ€‚ ใ“ใฎใƒ†ใƒณใ‚ฝใƒซใฏ `input_ids` ใจๅŒใ˜ใ‚ตใ‚คใ‚บใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` ใ“ใ‚Œใงใ€`input_ids` ใจ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ใƒขใƒ‡ใƒซใซๆธกใ™ใ“ใจใŒใงใใพใ™ใ€‚ ```py >>> outputs = model(input_ids, langs=langs) ``` [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€`xlm-clm` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใ€่จ€่ชžใŒๅŸ‹ใ‚่พผใพใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใงใใพใ™ใ€‚ ### ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใŒใชใ„XLM ๆฌกใฎ XLM ใƒขใƒ‡ใƒซใฏใ€ๆŽจ่ซ–ไธญใซ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฟ…่ฆใจใ—ใพใ›ใ‚“ใ€‚ - `xlm-mlm-17-1280` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€17ใฎ่จ€่ชž) - `xlm-mlm-100-1280` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏใ€ไปฅๅ‰ใฎ XLM ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจใฏ็•ฐใชใ‚Šใ€ไธ€่ˆฌ็š„ใชๆ–‡ใฎ่กจ็พใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ## BERT ไปฅไธ‹ใฎ BERT ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชžใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใพใ™ใ€‚ - `bert-base-multilingual-uncased` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ๆฌกใฎๆ–‡ใฎไบˆๆธฌใ€102ใฎ่จ€่ชž) - `bert-base-multilingual-cased` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ๆฌกใฎๆ–‡ใฎไบˆๆธฌใ€104ใฎ่จ€่ชž) ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏใ€ๆŽจ่ซ–ไธญใซ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฟ…่ฆใจใ—ใพใ›ใ‚“ใ€‚ ๆ–‡่„ˆใ‹ใ‚‰่จ€่ชžใ‚’่ญ˜ๅˆฅใ—ใ€ใใ‚Œใซๅฟœใ˜ใฆๆŽจๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ## XLM-RoBERTa ๆฌกใฎ XLM-RoBERTa ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชžใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใพใ™ใ€‚ - `xlm-roberta-base` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) - `xlm-roberta-large` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) XLM-RoBERTa ใฏใ€100ใฎ่จ€่ชžใงๆ–ฐใ—ใไฝœๆˆใŠใ‚ˆใณใ‚ฏใƒชใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ2.5 TB ใฎ CommonCrawl ใƒ‡ใƒผใ‚ฟใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใ‚Œใฏใ€ๅˆ†้กžใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒฉใƒ™ใƒซไป˜ใ‘ใ€่ณชๅ•ๅฟœ็ญ”ใชใฉใฎใƒ€ใ‚ฆใƒณใ‚นใƒˆใƒชใƒผใƒ ใ‚ฟใ‚นใ‚ฏใงใ€mBERT ใ‚„ XLM ใชใฉใฎไปฅๅ‰ใซใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸๅคš่จ€่ชžใƒขใƒ‡ใƒซใ‚’ๅคงๅน…ใซๆ”นๅ–„ใ—ใพใ™ใ€‚ ## M2M100 ๆฌกใฎ M2M100 ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชž็ฟป่จณใซไฝฟ็”จใงใใพใ™ใ€‚ - `facebook/m2m100_418M` (็ฟป่จณ) - `facebook/m2m100_1.2B` (็ฟป่จณ) ใ“ใฎไพ‹ใงใฏใ€`facebook/m2m100_418M` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใฆใ€ไธญๅ›ฝ่ชžใ‹ใ‚‰่‹ฑ่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ‚ฝใƒผใ‚น่จ€่ชžใ‚’่จญๅฎšใงใใพใ™ใ€‚ ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’." 
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```

ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™ใ€‚

```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```

M2M100 ใฏใ€ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใธ็ฟป่จณใ™ใ‚‹ใŸใ‚ใซใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚Œใ‚‹ใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใฎ ID ใ‚’ๅผทๅˆถใ—ใพใ™ใ€‚่‹ฑ่ชžใซ็ฟป่จณใ™ใ‚‹ใซใฏใ€`generate` ใƒกใ‚ฝใƒƒใƒ‰ใง `forced_bos_token_id` ใ‚’ `en` ใฎ่จ€่ชž ID ใซ่จญๅฎšใ—ใพใ™ใ€‚

```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
```

## MBart

ๅคš่จ€่ชž็ฟป่จณใซใฏใ€ๆฌกใฎ MBart ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™ใ€‚

- `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages)
- `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25`

ใ“ใฎไพ‹ใงใฏใ€`facebook/mbart-large-50-many-to-many-mmt` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใฆใ€ใƒ•ใ‚ฃใƒณใƒฉใƒณใƒ‰่ชžใ‚’่‹ฑ่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ‚ฝใƒผใ‚น่จ€่ชžใ‚’่จญๅฎšใงใใพใ™ใ€‚

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```

ใƒ•ใ‚ฃใƒณใƒฉใƒณใƒ‰่ชžใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™ใ€‚

```py
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
```

MBart ใฏใ€ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใธ็ฟป่จณใ™ใ‚‹ใŸใ‚ใซใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚Œใ‚‹ใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใฎ ID ใ‚’ๅผทๅˆถใ—ใพใ™ใ€‚่‹ฑ่ชžใซ็ฟป่จณใ™ใ‚‹ใซใฏใ€`generate` ใƒกใ‚ฝใƒƒใƒ‰ใง `forced_bos_token_id` ใ‚’ `en_XX` ใซ่จญๅฎšใ—ใพใ™ใ€‚

```py
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```

`facebook/mbart-large-50-many-to-one-mmt` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚Œใ‚‹ใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชž ID ใ‚’ๅผทๅˆถใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใใ‚Œไปฅๅค–ใฎๅ ดๅˆใ€ไฝฟ็”จๆ–นๆณ•ใฏๅŒใ˜ใงใ™ใ€‚
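ๅ‚่€ƒใพใงใซใ€`facebook/mbart-large-50-many-to-one-mmt` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟใ†ๅ ดๅˆใฎๆœ€ๅฐ้™ใฎใ‚นใ‚ฑใƒƒใƒใ‚’ไปฅไธ‹ใซ็คบใ—ใพใ™๏ผˆๅ‡บๅŠ›ใฏ็’ฐๅขƒใ‚„ใƒใƒผใ‚ธใƒงใƒณใซใ‚ˆใฃใฆ็•ฐใชใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ๆƒณๅฎšไพ‹ใงใ™๏ผ‰ใ€‚ใ‚ฟใƒผใ‚ฒใƒƒใƒˆใŒๅธธใซ่‹ฑ่ชžใชใฎใงใ€`forced_bos_token_id` ใฎๆŒ‡ๅฎšใฏไธ่ฆใงใ™ใ€‚

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

encoded_fi = tokenizer(fi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_fi)  # forced_bos_token_id ใฎๆŒ‡ๅฎšใฏไธ่ฆ
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```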
hf_public_repos/transformers/docs/source/ja/chat_templating.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Templates for Chat Models ## Introduction LLM๏ผˆLanguage Model๏ผ‰ใฎใพใ™ใพใ™ไธ€่ˆฌ็š„ใชไฝฟ็”จไบ‹ไพ‹ใฎ1ใคใฏใ€Œใƒใƒฃใƒƒใƒˆใ€ใงใ™ใ€‚ ใƒใƒฃใƒƒใƒˆใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใงใฏใ€้€šๅธธใฎ่จ€่ชžใƒขใƒ‡ใƒซใฎใ‚ˆใ†ใซๅ˜ไธ€ใฎใƒ†ใ‚ญใ‚นใƒˆใ‚นใƒˆใƒชใƒณใ‚ฐใ‚’็ถ™็ถšใ™ใ‚‹ใฎใงใฏใชใใ€ใƒขใƒ‡ใƒซใฏ1ใคไปฅไธŠใฎใ€Œใƒกใƒƒใ‚ปใƒผใ‚ธใ€ใ‹ใ‚‰ใชใ‚‹ไผš่ฉฑใ‚’็ถ™็ถšใ—ใพใ™ใ€‚ ๅ„ใƒกใƒƒใ‚ปใƒผใ‚ธใซใฏใ€Œใƒญใƒผใƒซใ€ใจใƒกใƒƒใ‚ปใƒผใ‚ธใƒ†ใ‚ญใ‚นใƒˆใŒๅซใพใ‚Œใพใ™ใ€‚ ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใซใ€ใ“ใ‚Œใ‚‰ใฎใƒญใƒผใƒซใฏใƒฆใƒผใ‚ถใƒผใ‹ใ‚‰ใฎใƒกใƒƒใ‚ปใƒผใ‚ธใซใฏใ€Œใƒฆใƒผใ‚ถใƒผใ€ใ€ใƒขใƒ‡ใƒซใ‹ใ‚‰ใฎใƒกใƒƒใ‚ปใƒผใ‚ธใซใฏใ€Œใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใ€ใŒๅ‰ฒใ‚Šๅฝ“ใฆใ‚‰ใ‚Œใพใ™ใ€‚ ไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏใ€Œใ‚ทใ‚นใƒ†ใƒ ใ€ใƒญใƒผใƒซใ‚‚ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ ใ‚ทใ‚นใƒ†ใƒ ใƒกใƒƒใ‚ปใƒผใ‚ธใฏ้€šๅธธไผš่ฉฑใฎ้–‹ๅง‹ๆ™‚ใซ้€ไฟกใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใฎๅ‹•ไฝœๆ–นๆณ•ใซ้–ขใ™ใ‚‹ๆŒ‡็คบใŒๅซใพใ‚Œใพใ™ใ€‚ ใ™ในใฆใฎ่จ€่ชžใƒขใƒ‡ใƒซใ€ใƒใƒฃใƒƒใƒˆ็”จใซๅพฎ่ชฟๆ•ดใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅซใ‚€ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏใ€ใƒˆใƒผใ‚ฏใƒณใฎใƒชใƒ‹ใ‚ขใ‚ทใƒผใ‚ฑใƒณใ‚นใงๅ‹•ไฝœใ—ใ€ใƒญใƒผใƒซใซ็‰นๆœ‰ใฎ็‰นๅˆฅใชๅ‡ฆ็†ใ‚’ๆŒใกใพใ›ใ‚“ใ€‚ ใคใพใ‚Šใ€ใƒญใƒผใƒซๆƒ…ๅ ฑใฏ้€šๅธธใ€ใƒกใƒƒใ‚ปใƒผใ‚ธ้–“ใซๅˆถๅพกใƒˆใƒผใ‚ฏใƒณใ‚’่ฟฝๅŠ ใ—ใฆๆณจๅ…ฅใ•ใ‚Œใ€ใƒกใƒƒใ‚ปใƒผใ‚ธใฎๅขƒ็•Œใจ้–ข้€ฃใ™ใ‚‹ใƒญใƒผใƒซใ‚’็คบใ™ใ“ใจใงๆไพ›ใ•ใ‚Œใพใ™ใ€‚ ๆฎ‹ๅฟตใชใŒใ‚‰ใ€ใƒˆใƒผใ‚ฏใƒณใฎไฝฟ็”จๆ–นๆณ•ใซใคใ„ใฆใฏ๏ผˆใพใ ๏ผ๏ผ‰ๆจ™ๆบ–ใŒๅญ˜ๅœจใ›ใšใ€็•ฐใชใ‚‹ใƒขใƒ‡ใƒซใฏใƒใƒฃใƒƒใƒˆ็”จใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚„ๅˆถๅพกใƒˆใƒผใ‚ฏใƒณใŒๅคงใใ็•ฐใชใ‚‹ๅฝขๅผใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ“ใ‚Œใฏใƒฆใƒผใ‚ถใƒผใซใจใฃใฆๅฎŸ้š›ใฎๅ•้กŒใซใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ๆญฃใ—ใ„ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใชใ„ใจใ€ใƒขใƒ‡ใƒซใฏๅ…ฅๅŠ›ใซๆททไนฑใ—ใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๆœฌๆฅใ‚ˆใ‚Šใ‚‚้ฅใ‹ใซไฝŽไธ‹ใ—ใพใ™ใ€‚ ใ“ใ‚ŒใŒใ€Œใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ€ใŒ่งฃๆฑบใ—ใ‚ˆใ†ใจใ™ใ‚‹ๅ•้กŒใงใ™ใ€‚ ใƒใƒฃใƒƒใƒˆไผš่ฉฑใฏ้€šๅธธใ€ๅ„่พžๆ›ธใŒใ€Œใƒญใƒผใƒซใ€ใจใ€Œใ‚ณใƒณใƒ†ใƒณใƒ„ใ€ใฎใ‚ญใƒผใ‚’ๅซใฟใ€ๅ˜ไธ€ใฎใƒใƒฃใƒƒใƒˆใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’่กจใ™ใƒชใ‚นใƒˆใจใ—ใฆ่กจ็พใ•ใ‚Œใพใ™ใ€‚ ใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใ€ๆŒ‡ๅฎšใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฎไผš่ฉฑใ‚’ๅ˜ไธ€ใฎใƒˆใƒผใ‚ฏใƒณๅŒ–ๅฏ่ƒฝใชใ‚ทใƒผใ‚ฑใƒณใ‚นใซใฉใฎใ‚ˆใ†ใซใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ™ใ‚‹ใ‹ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹Jinjaใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ๅซใ‚€ๆ–‡ๅญ—ๅˆ—ใงใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจใ“ใฎๆƒ…ๅ ฑใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใŒๆœŸๅพ…ใ™ใ‚‹ๅฝขๅผใฎๅ…ฅๅŠ›ใƒ‡ใƒผใ‚ฟใ‚’ๅ–ๅพ—ใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ใ•ใฃใใใ€`BlenderBot` ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใŸไพ‹ใ‚’็คบใ—ใฆๅ…ทไฝ“็š„ใซใ—ใพใ—ใ‚‡ใ†ใ€‚`BlenderBot` ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏ้žๅธธใซใ‚ทใƒณใƒ—ใƒซใงใ€ใปใจใ‚“ใฉใŒๅฏพ่ฉฑใฎใƒฉใ‚ฆใƒณใƒ‰้–“ใซ็ฉบ็™ฝใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ ```python >>> from transformers import AutoTokenizer >>> tokenizer = 
AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> chat = [ ... {"role": "user", "content": "Hello, how are you?"}, ... {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, ... {"role": "user", "content": "I'd like to show off how chat templating works!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) " Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>" ``` ๆŒ‡ๅฎšใ•ใ‚ŒใŸ้€šใ‚Šใ€ใƒใƒฃใƒƒใƒˆๅ…จไฝ“ใŒๅ˜ไธ€ใฎๆ–‡ๅญ—ๅˆ—ใซใพใจใ‚ใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ่จญๅฎšใงใ‚ใ‚‹ใ€Œtokenize=Trueใ€ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ ใใฎๆ–‡ๅญ—ๅˆ—ใ‚‚ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ•ใ‚Œใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใ‚ˆใ‚Š่ค‡้›‘ใชใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒๅฎŸ้š›ใซใฉใฎใ‚ˆใ†ใซๆฉŸ่ƒฝใ™ใ‚‹ใ‹ใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ ใ€Œmeta-llama/Llama-2-7b-chat-hfใ€ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ใŸใ ใ—ใ€ใ“ใฎใƒขใƒ‡ใƒซใฏใ‚ฒใƒผใƒˆไป˜ใใ‚ขใ‚ฏใ‚ปใ‚นใ‚’ๆŒใฃใฆใŠใ‚Šใ€ ใ“ใฎใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ™ใ‚‹ๅ ดๅˆใฏ[ใƒชใƒใ‚ธใƒˆใƒชใงใ‚ขใ‚ฏใ‚ปใ‚นใ‚’ใƒชใ‚ฏใ‚จใ‚นใƒˆ](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```python >> from transformers import AutoTokenizer >> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >> chat = [ ... {"role": "user", "content": "Hello, how are you?"}, ... {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, ... {"role": "user", "content": "I'd like to show off how chat templating works!"}, ... ] >> tokenizer.use_default_system_prompt = False >> tokenizer.apply_chat_template(chat, tokenize=False) "<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]" ``` ไปŠๅ›žใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏๅˆถๅพกใƒˆใƒผใ‚ฏใƒณ [INST] ใจ [/INST] ใ‚’่ฟฝๅŠ ใ—ใพใ—ใŸใ€‚ใ“ใ‚Œใ‚‰ใฏใƒฆใƒผใ‚ถใƒผใƒกใƒƒใ‚ปใƒผใ‚ธใฎ้–‹ๅง‹ใจ็ต‚ไบ†ใ‚’็คบใ™ใŸใ‚ใฎใ‚‚ใฎใงใ™๏ผˆใŸใ ใ—ใ€ใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใƒกใƒƒใ‚ปใƒผใ‚ธใซใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“๏ผ๏ผ‰ ## How do chat templates work? 
ใƒขใƒ‡ใƒซใฎใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใ€`tokenizer.chat_template`ๅฑžๆ€งใซๆ ผ็ดใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒ่จญๅฎšใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใ€ใใฎใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒไปฃใ‚ใ‚Šใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚`BlenderBot`ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer.default_chat_template "{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}" ``` ใ“ใ‚Œใฏๅฐ‘ใ—ๆŠ‘ๅœง็š„ใงใ™ใญใ€‚ๅฏ่ชญๆ€งใ‚’้ซ˜ใ‚ใ‚‹ใŸใ‚ใซใ€ๆ–ฐใ—ใ„่กŒใจใ‚คใƒณใƒ‡ใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใพใ—ใ‚‡ใ†ใ€‚ ๅ„ใƒ–ใƒญใƒƒใ‚ฏใฎ็›ดๅ‰ใฎ็ฉบ็™ฝใจใ€ใƒ–ใƒญใƒƒใ‚ฏใฎ็›ดๅพŒใฎๆœ€ๅˆใฎๆ”น่กŒใฏใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงJinjaใฎ `trim_blocks` ใŠใ‚ˆใณ `lstrip_blocks` ใƒ•ใƒฉใ‚ฐใ‚’ไฝฟ็”จใ—ใฆๅ‰Š้™คใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚คใƒณใƒ‡ใƒณใƒˆใจๆ”น่กŒใ‚’ๅซใ‚€ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ๆ›ธใ„ใฆใ‚‚ๆญฃๅธธใซๆฉŸ่ƒฝใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ ' ' }} {% endif %} {{ message['content'] }} {% if not loop.last %} {{ ' ' }} {% endif %} {% endfor %} {{ eos_token }} ``` ใ“ใ‚ŒใŒๅˆใ‚ใฆ่ฆ‹ใ‚‹ๆ–นใธใ€ใ“ใ‚Œใฏ[Jinjaใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆ](https://jinja.palletsprojects.com/en/3.1.x/templates/)ใงใ™ใ€‚ Jinjaใฏใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใฎใ‚ทใƒณใƒ—ใƒซใชใ‚ณใƒผใƒ‰ใ‚’่จ˜่ฟฐใงใใ‚‹ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆ่จ€่ชžใงใ™ใ€‚ๅคšใใฎ็‚นใงใ€ใ‚ณใƒผใƒ‰ใจ ๆง‹ๆ–‡ใฏPythonใซไผผใฆใ„ใพใ™ใ€‚็ด”็ฒ‹ใชPythonใงใฏใ€ใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚‹ใงใ—ใ‚‡ใ†๏ผš ```python for idx, message in enumerate(messages): if message['role'] == 'user': print(' ') print(message['content']) if not idx == len(messages) - 1: # Check for the last message in the conversation print(' ') print(eos_token) ``` ๅฎŸ้š›ใซใ€ใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏๆฌกใฎ3ใคใฎใ“ใจใ‚’่กŒใ„ใพใ™๏ผš 1. ๅ„ใƒกใƒƒใ‚ปใƒผใ‚ธใซๅฏพใ—ใฆใ€ใƒกใƒƒใ‚ปใƒผใ‚ธใŒใƒฆใƒผใ‚ถใƒผใƒกใƒƒใ‚ปใƒผใ‚ธใงใ‚ใ‚‹ๅ ดๅˆใ€ใใ‚Œใฎๅ‰ใซ็ฉบ็™ฝใ‚’่ฟฝๅŠ ใ—ใ€ใใ‚Œไปฅๅค–ใฎๅ ดๅˆใฏไฝ•ใ‚‚่กจ็คบใ—ใพใ›ใ‚“ใ€‚ 2. ใƒกใƒƒใ‚ปใƒผใ‚ธใฎๅ†…ๅฎนใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ 3. 
ใƒกใƒƒใ‚ปใƒผใ‚ธใŒๆœ€ๅพŒใฎใƒกใƒƒใ‚ปใƒผใ‚ธใงใชใ„ๅ ดๅˆใ€ใใฎๅพŒใซ2ใคใฎใ‚นใƒšใƒผใ‚นใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ๆœ€ๅพŒใฎใƒกใƒƒใ‚ปใƒผใ‚ธใฎๅพŒใซใฏEOSใƒˆใƒผใ‚ฏใƒณใ‚’่กจ็คบใ—ใพใ™ใ€‚ ใ“ใ‚Œใฏ้žๅธธใซใ‚ทใƒณใƒ—ใƒซใชใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใงใ™ใ€‚ๅˆถๅพกใƒˆใƒผใ‚ฏใƒณใ‚’่ฟฝๅŠ ใ—ใชใ„ใ—ใ€ใƒขใƒ‡ใƒซใซๅฏพใ™ใ‚‹ๆŒ‡็คบใ‚’ไผใˆใ‚‹ไธ€่ˆฌ็š„ใชๆ–นๆณ•ใงใ‚ใ‚‹ใ€Œใ‚ทใ‚นใƒ†ใƒ ใ€ใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ›ใ‚“ใ€‚ ใŸใ ใ—ใ€Jinjaใฏใ“ใ‚Œใ‚‰ใฎใ“ใจใ‚’่กŒใ†ใŸใ‚ใฎๅคšใใฎๆŸ”่ปŸๆ€งใ‚’ๆไพ›ใ—ใฆใ„ใพใ™๏ผ LLaMAใŒใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ™ใ‚‹ๆ–นๆณ•ใซ้กžไผผใ—ใŸๅ…ฅๅŠ›ใ‚’ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ™ใ‚‹ใŸใ‚ใฎJinjaใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ† ๏ผˆๅฎŸ้š›ใฎLLaMAใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใ‚ทใ‚นใƒ†ใƒ ใƒกใƒƒใ‚ปใƒผใ‚ธใฎๅ‡ฆ็†ใ‚„ใ€ไธ€่ˆฌ็š„ใชใ‚ทใ‚นใƒ†ใƒ ใƒกใƒƒใ‚ปใƒผใ‚ธใฎๅ‡ฆ็†ใŒ่‹ฅๅนฒ็•ฐใชใ‚‹ใŸใ‚ใ€ ๅฎŸ้š›ใฎใ‚ณใƒผใƒ‰ใงใฏใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใชใ„ใงใใ ใ•ใ„๏ผ๏ผ‰ ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ ' ' + message['content'] + ' ' + eos_token }} {% endif %} {% endfor %} ``` ้ก˜ใ‚ใใฐใ€ๅฐ‘ใ—่ฆ‹ใคใ‚ใฆใ„ใŸใ ใ‘ใ‚Œใฐใ€ใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒไฝ•ใ‚’่กŒใฃใฆใ„ใ‚‹ใ‹ใŒใ‚ใ‹ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใ€ๅ„ใƒกใƒƒใ‚ปใƒผใ‚ธใฎใ€Œๅฝนๅ‰ฒใ€ใซๅŸบใฅใ„ใฆ็‰นๅฎšใฎใƒˆใƒผใ‚ฏใƒณใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒˆใƒผใ‚ฏใƒณใฏใ€ใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’้€ไฟกใ—ใŸไบบใ‚’่กจใ™ใ‚‚ใฎใงใ™ใ€‚ ใƒฆใƒผใ‚ถใƒผใ€ใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใ€ใŠใ‚ˆใณใ‚ทใ‚นใƒ†ใƒ ใƒกใƒƒใ‚ปใƒผใ‚ธใฏใ€ใใ‚Œใ‚‰ใŒๅซใพใ‚Œใ‚‹ใƒˆใƒผใ‚ฏใƒณใซใ‚ˆใฃใฆใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆๆ˜Ž็ขบใซๅŒบๅˆฅใ•ใ‚Œใพใ™ใ€‚ ## How do I create a chat template? ็ฐกๅ˜ใงใ™ใ€‚ๅ˜็ด”ใซJinjaใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ๆ›ธใ„ใฆใ€`tokenizer.chat_template`ใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ไป–ใฎใƒขใƒ‡ใƒซใ‹ใ‚‰ๆ—ขๅญ˜ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ๅง‹็‚นใซใ—ใฆใ€ๅฟ…่ฆใซๅฟœใ˜ใฆ็ทจ้›†ใ™ใ‚‹ใจไพฟๅˆฉใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผ ไพ‹ใˆใฐใ€ไธŠ่จ˜ใฎLLaMAใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ๅ–ใฃใฆใ€ใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใƒกใƒƒใ‚ปใƒผใ‚ธใซ"[ASST]"ใจ"[/ASST]"ใ‚’่ฟฝๅŠ ใงใใพใ™ใ€‚ ``` {% for message in messages %} {% if message['role'] == 'user' %} {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }} {% elif message['role'] == 'system' %} {{ '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }} {% elif message['role'] == 'assistant' %} {{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }} {% endif %} {% endfor %} ``` ๆฌกใซใ€ๅ˜ใซ`tokenizer.chat_template`ๅฑžๆ€งใ‚’่จญๅฎšใ—ใฆใใ ใ•ใ„ใ€‚ ๆฌกๅ›žใ€[`~PreTrainedTokenizer.apply_chat_template`]ใ‚’ไฝฟ็”จใ™ใ‚‹้š›ใซใ€ๆ–ฐใ—ใ„ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒไฝฟ็”จใ•ใ‚Œใพใ™๏ผ ใ“ใฎๅฑžๆ€งใฏ`tokenizer_config.json`ใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ•ใ‚Œใ‚‹ใŸใ‚ใ€[`~utils.PushToHubMixin.push_to_hub`]ใ‚’ไฝฟ็”จใ—ใฆ ๆ–ฐใ—ใ„ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’Hubใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใ€ใฟใ‚“ใชใŒๆญฃใ—ใ„ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใงใใพใ™๏ผ ```python template = tokenizer.chat_template template = template.replace("SYS", "SYSTEM") # Change the system token tokenizer.chat_template = template # Set the new template tokenizer.push_to_hub("model_name") # Upload your new template to the Hub! 
``` [`~PreTrainedTokenizer.apply_chat_template`] ใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ใ‚ใชใŸใฎใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซ [`ConversationalPipeline`] ใ‚ฏใƒฉใ‚นใซใ‚ˆใฃใฆๅ‘ผใณๅ‡บใ•ใ‚Œใพใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ๆญฃใ—ใ„ใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’่จญๅฎšใ™ใ‚‹ใจใ€ใ‚ใชใŸใฎใƒขใƒ‡ใƒซใฏ่‡ชๅ‹•็š„ใซ [`ConversationalPipeline`] ใจไบ’ๆ›ๆ€งใŒใ‚ใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ## What are "default" templates? ใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฎๅฐŽๅ…ฅๅ‰ใซใ€ใƒใƒฃใƒƒใƒˆใฎๅ‡ฆ็†ใฏใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใƒฌใƒ™ใƒซใงใƒใƒผใƒ‰ใ‚ณใƒผใƒ‰ใ•ใ‚Œใฆใ„ใพใ—ใŸใ€‚ ๅพŒๆ–นไบ’ๆ›ๆ€งใฎใŸใ‚ใซใ€ใ“ใฎใ‚ฏใƒฉใ‚นๅ›บๆœ‰ใฎๅ‡ฆ็†ใ‚’ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใจใ—ใฆไฟๆŒใ—ใ€ใ‚ฏใƒฉใ‚นใƒฌใƒ™ใƒซใง่จญๅฎšใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใƒขใƒ‡ใƒซใซใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒ่จญๅฎšใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใ€ใŸใ ใ—ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒใ‚ใ‚‹ๅ ดๅˆใ€ `ConversationalPipeline`ใ‚ฏใƒฉใ‚นใ‚„`apply_chat_template`ใชใฉใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ‚ฏใƒฉใ‚นใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€`tokenizer.default_chat_template`ๅฑžๆ€งใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใฏใ€ๅพŒๆ–นไบ’ๆ›ๆ€งใฎใŸใ‚ใซ็ด”็ฒ‹ใซ่กŒใฃใฆใ„ใ‚‹ใ“ใจใงใ€ๆ—ขๅญ˜ใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใ‚’ๅฃŠใ•ใชใ„ใ‚ˆใ†ใซใ—ใฆใ„ใพใ™ใ€‚ ใƒขใƒ‡ใƒซใซใจใฃใฆใ‚ฏใƒฉใ‚นใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒ้ฉๅˆ‡ใงใ‚ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ—ใฆ `chat_template`ๅฑžๆ€งใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒฆใƒผใ‚ถใƒผใซใจใฃใฆ ใƒขใƒ‡ใƒซใŒใƒใƒฃใƒƒใƒˆ็”จใซๆญฃใ—ใๆง‹ๆˆใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใŒๆ˜Ž็ขบใซใชใ‚Šใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒๅค‰ๆ›ดใ•ใ‚ŒใŸใ‚Šๅปƒๆญขใ•ใ‚ŒใŸๅ ดๅˆใซๅ‚™ใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ## What template should I use? 
ใ™ใงใซใƒใƒฃใƒƒใƒˆใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ—ใ‘ใŸใƒขใƒ‡ใƒซใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’่จญๅฎšใ™ใ‚‹ๅ ดๅˆใ€ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซใƒขใƒ‡ใƒซใŒ่ฆ‹ใŸใƒกใƒƒใ‚ปใƒผใ‚ธใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใจใพใฃใŸใไธ€่‡ดใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใใ†ใงใชใ„ๅ ดๅˆใ€ๆ€ง่ƒฝใฎไฝŽไธ‹ใ‚’็ตŒ้จ“ใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใงใ™ใ€‚ใ“ใ‚Œใฏใƒขใƒ‡ใƒซใ‚’ใ•ใ‚‰ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ๅŒๆง˜ใงใ™ - ใƒใƒฃใƒƒใƒˆใƒˆใƒผใ‚ฏใƒณใ‚’ไธ€ๅฎšใซไฟใคใจใ€ใŠใใ‚‰ใๆœ€้ซ˜ใฎๆ€ง่ƒฝใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใฏใƒˆใƒผใ‚ฏใƒณๅŒ–ใจ้žๅธธใซ้กžไผผใ—ใฆใŠใ‚Šใ€้€šๅธธใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซไฝฟ็”จใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณๅŒ–ใจๆญฃ็ขบใซไธ€่‡ดใ™ใ‚‹ๅ ดๅˆใซใ€ๆŽจ่ซ–ใพใŸใฏใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใฎ้š›ใซๆœ€่‰ฏใฎๆ€ง่ƒฝใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ ไธ€ๆ–นใ€ใ‚ผใƒญใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ‹ใ€ใƒใƒฃใƒƒใƒˆใฎใŸใ‚ใซใƒ™ใƒผใ‚น่จ€่ชžใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ ดๅˆใ€้ฉๅˆ‡ใชใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’้ธๆŠžใ™ใ‚‹่‡ช็”ฑๅบฆใŒใ‚ใ‚Šใพใ™ใ€‚ LLM๏ผˆLanguage Model๏ผ‰ใฏใ•ใพใ–ใพใชๅ…ฅๅŠ›ๅฝขๅผใ‚’ๅ‡ฆ็†ใงใใ‚‹ใปใฉใ‚นใƒžใƒผใƒˆใงใ™ใ€‚ใ‚ฏใƒฉใ‚นๅ›บๆœ‰ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใŒใชใ„ใƒขใƒ‡ใƒซ็”จใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใ€ไธ€่ˆฌ็š„ใชใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซๅฏพใ—ใฆ่‰ฏใ„ๆŸ”่ปŸใช้ธๆŠž่‚ขใงใ™ใ€‚ ใ“ใ‚Œใฏใ€[ChatMLใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆ](https://github.com/openai/openai-python/blob/main/chatml.md)ใซๅพ“ใฃใŸใ‚‚ใฎใงใ€ๅคšใใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซ้ฉใ—ใฆใ„ใพใ™ใ€‚ๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ``` {% for message in messages %} {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}} {% endfor %} ``` If you like this one, here it is in one-liner form, ready to copy into your code: ``` tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}" ``` ใ“ใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏใ€ๅ„ใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’ใ€Œ``ใ€ใƒˆใƒผใ‚ฏใƒณใงๅ›ฒใฟใ€ๅฝนๅ‰ฒใ‚’ๆ–‡ๅญ—ๅˆ—ใจใ—ใฆๅ˜็ด”ใซ่จ˜่ฟฐใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงไฝฟ็”จใ™ใ‚‹ๅฝนๅ‰ฒใซๅฏพใ™ใ‚‹ๆŸ”่ปŸๆ€งใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ๅ‡บๅŠ›ใฏไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ``` <|im_start|>system You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I'm doing great!<|im_end|> ``` ใ€Œใƒฆใƒผใ‚ถใƒผใ€ใ€ใ€Œใ‚ทใ‚นใƒ†ใƒ ใ€ใ€ใŠใ‚ˆใณใ€Œใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใ€ใฎๅฝนๅ‰ฒใฏใ€ใƒใƒฃใƒƒใƒˆใฎๆจ™ๆบ–ใงใ™ใ€‚ ็‰นใซใ€[`ConversationalPipeline`]ใจใฎ้€ฃๆบใ‚’ใ‚นใƒ ใƒผใ‚บใซ่กŒใ†ๅ ดๅˆใซใฏใ€ใ“ใ‚Œใ‚‰ใฎๅฝนๅ‰ฒใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ใ“ใ‚Œใ‚‰ใฎๅฝนๅ‰ฒใซๅˆถ็ด„ใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏ้žๅธธใซๆŸ”่ปŸใงใ€ไปปๆ„ใฎๆ–‡ๅญ—ๅˆ—ใ‚’ๅฝนๅ‰ฒใจใ—ใฆไฝฟ็”จใงใใพใ™ใ€‚ ## I want to use chat templates! How should I get started? 
ใƒใƒฃใƒƒใƒˆใƒขใƒ‡ใƒซใ‚’ๆŒใฃใฆใ„ใ‚‹ๅ ดๅˆใ€ใใฎใƒขใƒ‡ใƒซใฎ`tokenizer.chat_template`ๅฑžๆ€งใ‚’่จญๅฎšใ—ใ€[`~PreTrainedTokenizer.apply_chat_template`]ใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚นใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใƒขใƒ‡ใƒซใฎๆ‰€ๆœ‰่€…ใงใชใ„ๅ ดๅˆใงใ‚‚้ฉ็”จใ•ใ‚Œใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใŒ็ฉบใฎใƒใƒฃใƒƒใƒˆใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใพใŸใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใ‚ฏใƒฉใ‚นใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ ใ“ใฎๅฑžๆ€งใ‚’้ฉๅˆ‡ใซ่จญๅฎšใงใใ‚‹ใ‚ˆใ†ใซ[ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆ](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)ใ‚’้–‹ใ„ใฆใใ ใ•ใ„ใ€‚ ไธ€ๅบฆๅฑžๆ€งใŒ่จญๅฎšใ•ใ‚Œใ‚Œใฐใ€ใใ‚ŒใงๅฎŒไบ†ใงใ™๏ผ `tokenizer.apply_chat_template`ใฏใ€ใใฎใƒขใƒ‡ใƒซใซๅฏพใ—ใฆๆญฃใ—ใๅ‹•ไฝœใ™ใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏใ€ `ConversationalPipeline`ใชใฉใฎๅ ดๆ‰€ใงใ‚‚่‡ชๅ‹•็š„ใซใ‚ตใƒใƒผใƒˆใ•ใ‚Œใพใ™ใ€‚ ใƒขใƒ‡ใƒซใŒใ“ใฎๅฑžๆ€งใ‚’ๆŒใคใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใงใ€ใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใƒขใƒ‡ใƒซใฎๅ…จใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใŒใใฎใƒ•ใƒซใƒ‘ใƒฏใƒผใ‚’ไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใฎไธไธ€่‡ดใฏใ“ใฎๅˆ†้‡Žใซๆ‚ฉใฟ็ถšใ‘ใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใซ้ป™ใฃใฆๅฝฑ้Ÿฟใ‚’ไธŽใˆใฆใใพใ—ใŸใ€‚ใใ‚Œใ‚’็ต‚ใ‚ใ‚‰ใ›ใ‚‹ๆ™‚ใŒๆฅใพใ—ใŸ๏ผ
hf_public_repos/transformers/docs/source/ja/big_models.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Instantiating a big model ้žๅธธใซๅคง่ฆๆจกใชไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€RAMใฎไฝฟ็”จ้‡ใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‹ใ“ใจใฏ่ชฒ้กŒใฎ1ใคใงใ™ใ€‚้€šๅธธใฎPyTorchใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš 1. ใƒฉใƒณใƒ€ใƒ ใช้‡ใฟใ‚’ๆŒใคใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ 2. ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ 3. ใ“ใ‚Œใ‚‰ใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒฉใƒณใƒ€ใƒ ใชใƒขใƒ‡ใƒซใซ้…็ฝฎใ—ใพใ™ใ€‚ ใ‚นใƒ†ใƒƒใƒ—1ใจ2ใฎไธกๆ–นใŒใƒกใƒขใƒชใซใƒขใƒ‡ใƒซใฎๅฎŒๅ…จใชใƒใƒผใ‚ธใƒงใƒณใ‚’ๅฟ…่ฆใจใ—ใ€ใปใจใ‚“ใฉใฎๅ ดๅˆใฏๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ใƒขใƒ‡ใƒซใฎใ‚ตใ‚คใ‚บใŒๆ•ฐใ‚ฎใ‚ฌใƒใ‚คใƒˆใซใชใ‚‹ใจใ€ใ“ใ‚Œใ‚‰ใฎ2ใคใฎใ‚ณใƒ”ใƒผใ‚’RAMใ‹ใ‚‰ๆŽ’้™คใ™ใ‚‹ใ“ใจใŒใงใใชใใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใ•ใ‚‰ใซๆ‚ชใ„ใ“ใจใซใ€ๅˆ†ๆ•ฃใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใซ`torch.distributed`ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ๅ„ใƒ—ใƒญใ‚ปใ‚นใฏไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใ€ใ“ใ‚Œใ‚‰ใฎ2ใคใฎใ‚ณใƒ”ใƒผใ‚’RAMใซไฟๅญ˜ใ—ใพใ™ใ€‚ <Tip> ใƒฉใƒณใƒ€ใƒ ใซไฝœๆˆใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฏใ€ใƒกใƒขใƒชๅ†…ใซใ€Œ็ฉบใฎใ€ใƒ†ใƒณใ‚ฝใƒซใงๅˆๆœŸๅŒ–ใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒฉใƒณใƒ€ใƒ ใชๅ€คใฏใ€ใƒกใƒขใƒชใฎ็‰นๅฎšใฎใƒใƒฃใƒณใ‚ฏใซใ‚ใฃใŸใ‚‚ใฎใ‚’ไฝฟ็”จใ—ใพใ™๏ผˆใ—ใŸใŒใฃใฆใ€ใƒฉใƒณใƒ€ใƒ ใชๅ€คใฏใใฎๆ™‚็‚นใงใฎใƒกใƒขใƒชใƒใƒฃใƒณใ‚ฏๅ†…ใฎๅ€คใงใ™๏ผ‰ใ€‚ใƒขใƒ‡ใƒซ/ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ็จฎ้กžใซ้ฉใ—ใŸๅˆ†ๅธƒ๏ผˆใŸใจใˆใฐใ€ๆญฃ่ฆๅˆ†ๅธƒ๏ผ‰ใซๅพ“ใ†ใƒฉใƒณใƒ€ใƒ ใชๅˆๆœŸๅŒ–ใฏใ€ใ‚นใƒ†ใƒƒใƒ—3ใงๅˆๆœŸๅŒ–ใ•ใ‚Œใฆใ„ใชใ„้‡ใฟใซๅฏพใ—ใฆใ€ใงใใ‚‹ใ ใ‘้ซ˜้€ŸใซๅฎŸ่กŒใ•ใ‚Œใพใ™๏ผ </Tip> ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€TransformersใŒใ“ใฎๅ•้กŒใซๅฏพๅ‡ฆใ™ใ‚‹ใŸใ‚ใซๆไพ›ใ™ใ‚‹ใ‚ฝใƒชใƒฅใƒผใ‚ทใƒงใƒณใ‚’ๆŽขใ‚Šใพใ™ใ€‚ใชใŠใ€ใ“ใ‚Œใฏ็พๅœจใ‚‚้–‹็™บใŒ้€ฒ่กŒไธญใฎๅˆ†้‡Žใงใ‚ใ‚Šใ€ๅฐ†ๆฅใ€ใ“ใ“ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹APIใŒใ‚ใšใ‹ใซๅค‰ๆ›ดใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ## Sharded checkpoints ใƒใƒผใ‚ธใƒงใƒณ4.18.0ใ‹ใ‚‰ใ€10GBใ‚’่ถ…ใˆใ‚‹ใ‚ตใ‚คใ‚บใฎใƒขใƒ‡ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏ่‡ชๅ‹•็š„ใซ่ค‡ๆ•ฐใฎๅฐใ•ใช้ƒจๅˆ†ใซๅˆ†ๅ‰ฒใ•ใ‚Œใพใ™ใ€‚`model.save_pretrained(save_dir)`ใ‚’ๅฎŸ่กŒใ™ใ‚‹้š›ใซ1ใคใฎๅ˜ไธ€ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๆŒใคไปฃใ‚ใ‚Šใซใ€ใ„ใใคใ‹ใฎ้ƒจๅˆ†็š„ใชใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ๏ผˆใใ‚Œใžใ‚Œใฎใ‚ตใ‚คใ‚บใŒ<10GB๏ผ‰ใจใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟๅใ‚’ใใ‚Œใ‚‰ใŒๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹ใƒ•ใ‚กใ‚คใƒซใซใƒžใƒƒใƒ—ใ™ใ‚‹ใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใŒ็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚ `max_shard_size`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใงใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๅ‰ใฎๆœ€ๅคงใ‚ตใ‚คใ‚บใ‚’ๅˆถๅพกใงใใ‚‹ใŸใ‚ใ€ไพ‹ใจใ—ใฆ้€šๅธธใ‚ตใ‚คใ‚บใฎใƒขใƒ‡ใƒซใจๅฐใ•ใชใ‚ทใƒฃใƒผใƒ‰ใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ๅพ“ๆฅใฎBERTใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```py from transformers import AutoModel model = AutoModel.from_pretrained("bert-base-cased") ``` 
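参考までに、読み込んだモデルのパラメータ数とおおよそのメモリフットプリントは、直前のスニペットに続けて次のように確認できます(概算のための一例です)。`bert-base-cased` はおよそ1億800万パラメータなので、float32 では約433MBに相当します。

```py
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M parameters")         # ใŠใ‚ˆใ 108M
print(f"~{num_params * 4 / 1e6:.0f} MB in float32")  # ใŠใ‚ˆใ 433 MB
```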
ใ‚‚ใ—[`~PreTrainedModel.save_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆไฟๅญ˜ใ™ใ‚‹ๅ ดๅˆใ€ๆ–ฐใ—ใ„ใƒ•ใ‚ฉใƒซใƒ€ใŒ2ใคใฎใƒ•ใ‚กใ‚คใƒซใ‚’ๅซใ‚€ๅฝขใงไฝœๆˆใ•ใ‚Œใพใ™: ใƒขใƒ‡ใƒซใฎ่จญๅฎšๆƒ…ๅ ฑใจใใฎ้‡ใฟๆƒ…ๅ ฑใงใ™ใ€‚ ```py >>> import os >>> import tempfile >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir) ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model.bin'] ``` ๆœ€ๅคงใ‚ทใƒฃใƒผใƒ‰ใ‚ตใ‚คใ‚บใ‚’200MBใซ่จญๅฎšใ—ใพใ™๏ผš ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... print(sorted(os.listdir(tmp_dir))) ['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json'] ``` ใƒขใƒ‡ใƒซใฎ่จญๅฎšใฎไธŠใซใ€3ใคใฎ็•ฐใชใ‚‹้‡ใฟใƒ•ใ‚กใ‚คใƒซใจใ€`index.json`ใƒ•ใ‚กใ‚คใƒซใŒ่ฆ‹ใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใฏ็งใŸใกใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใงใ™ใ€‚ ใ“ใฎใ‚ˆใ†ใชใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€[`~PreTrainedModel.from_pretrained`]ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆๅฎŒๅ…จใซๅ†ใƒญใƒผใƒ‰ใงใใพใ™๏ผš ```py >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... new_model = AutoModel.from_pretrained(tmp_dir) ``` ไธป่ฆใชๅˆฉ็‚นใฏใ€ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€ไธŠ่จ˜ใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใฎใ‚นใƒ†ใƒƒใƒ—2ใซใŠใ„ใฆใ€ๅ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎใ‚ทใƒฃใƒผใƒ‰ใŒๅ‰ใฎใ‚ทใƒฃใƒผใƒ‰ใฎๅพŒใซใƒญใƒผใƒ‰ใ•ใ‚Œใ€RAMใฎใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ใƒขใƒ‡ใƒซใฎใ‚ตใ‚คใ‚บใจๆœ€ๅคงใฎใ‚ทใƒฃใƒผใƒ‰ใฎใ‚ตใ‚คใ‚บใ‚’ๅˆใ‚ใ›ใŸใ‚‚ใฎใซๅˆถ้™ใงใใ‚‹ใ“ใจใงใ™ใ€‚ ๅ†…้ƒจใงใฏใ€ใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใƒ•ใ‚กใ‚คใƒซใŒไฝฟ็”จใ•ใ‚Œใ€ใฉใฎใ‚ญใƒผใŒใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซๅญ˜ๅœจใ—ใ€ๅฏพๅฟœใ™ใ‚‹้‡ใฟใŒใฉใ“ใซๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’ๅˆคๆ–ญใ—ใพใ™ใ€‚ใ“ใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใฏ้€šๅธธใฎJSONใƒ•ใ‚กใ‚คใƒซใฎใ‚ˆใ†ใซ่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใ€่พžๆ›ธใจใ—ใฆๅ–ๅพ—ใงใใพใ™ใ€‚ ```py >>> import json >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f: ... index = json.load(f) >>> print(index.keys()) dict_keys(['metadata', 'weight_map']) ``` ใƒกใ‚ฟใƒ‡ใƒผใ‚ฟใซใฏ็พๆ™‚็‚นใงใฏใƒขใƒ‡ใƒซใฎ็ทใ‚ตใ‚คใ‚บใฎใฟใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ ๅฐ†ๆฅ็š„ใซใฏไป–ใฎๆƒ…ๅ ฑใ‚’่ฟฝๅŠ ใ™ใ‚‹ไบˆๅฎšใงใ™๏ผš ```py >>> index["metadata"] {'total_size': 433245184} ``` ้‡ใฟใƒžใƒƒใƒ—ใฏใ“ใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใฎไธป่ฆใช้ƒจๅˆ†ใงใ‚ใ‚Šใ€ๅ„ใƒ‘ใƒฉใƒกใƒผใ‚ฟๅ๏ผˆ้€šๅธธใฏPyTorchใƒขใƒ‡ใƒซใฎ`state_dict`ใง่ฆ‹ใคใ‹ใ‚‹ใ‚‚ใฎ๏ผ‰ใ‚’ใใฎๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹ใƒ•ใ‚กใ‚คใƒซใซใƒžใƒƒใƒ—ใ—ใพใ™๏ผš ```py >>> index["weight_map"] {'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin', 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin', ... ``` ็›ดๆŽฅใƒขใƒ‡ใƒซๅ†…ใง[`~PreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ›ใšใซใ€ ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใŸใ„ๅ ดๅˆ๏ผˆใƒ•ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎๅ ดๅˆใซ`model.load_state_dict()`ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซ่กŒใ†ๆ–นๆณ•๏ผ‰ใ€[`~modeling_utils.load_sharded_checkpoint`]ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```py >>> from transformers.modeling_utils import load_sharded_checkpoint >>> with tempfile.TemporaryDirectory() as tmp_dir: ... model.save_pretrained(tmp_dir, max_shard_size="200MB") ... 
load_sharded_checkpoint(model, tmp_dir) ``` ## Low memory loading ใ‚ทใƒฃใƒผใƒ‰ใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€ไธŠ่จ˜ใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใฎใ‚นใƒ†ใƒƒใƒ—2ใซใŠใ‘ใ‚‹ใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ๅ‰Šๆธ›ใ—ใพใ™ใŒใ€ ไฝŽใƒกใƒขใƒชใฎ็’ฐๅขƒใงใใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซใ€Accelerateใƒฉใ‚คใƒ–ใƒฉใƒชใซๅŸบใฅใ„ใŸๅฝ“็คพใฎใƒ„ใƒผใƒซใ‚’ๆดป็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ไปฅไธ‹ใฎใ‚ฌใ‚คใƒ‰ใ‚’ใ”่ฆงใใ ใ•ใ„๏ผš[Accelerateใ‚’ไฝฟ็”จใ—ใŸๅคง่ฆๆจกใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟ](./main_classes/model#large-model-loading)
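一例として、Accelerate がインストール済みの環境で低メモリ読み込みを行う最小スケッチを以下に示します(モデル名は一例で、挙動はお使いのバージョンによって多少異なる可能性があります)。

```py
from transformers import AutoModelForCausalLM

# low_cpu_mem_usage=True ใซใฏ accelerate ใŒๅฟ…่ฆใงใ™: pip install accelerate
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # ไธ€ไพ‹ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ
    low_cpu_mem_usage=True,   # ้‡ใฟใ‚’้€ๆฌก่ชญใฟ่พผใฟใ€RAM ใฎใƒ”ใƒผใ‚ฏไฝฟ็”จ้‡ใ‚’ๆŠ‘ใˆใพใ™
    device_map="auto",        # ๅˆฉ็”จๅฏ่ƒฝใชใƒ‡ใƒใ‚คใ‚นใซ่‡ชๅ‹•ใง้…็ฝฎใ—ใพใ™
)
```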
hf_public_repos/transformers/docs/source/ja/transformers_agents.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Transformers Agents

<Tip warning={true}>

Transformers Agentsは、いつでも変更される可能性のある実験的なAPIです。エージェントが返す結果は、APIまたは基礎となるモデルが変更される可能性があるため、異なることがあります。

</Tip>

Transformersバージョンv4.29.0は、*ツール*と*エージェント*のコンセプトを基に構築されています。この[colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)で試すことができます。

要するに、これはtransformersの上に自然言語APIを提供するものです:私たちは一連の厳選されたツールを定義し、自然言語を解釈してこれらのツールを使用するエージェントを設計します。これは設計上拡張可能であり、いくつかの関連するツールを厳選しただけでなく、コミュニティによって開発された任意のツールを使えるようにシステムを簡単に拡張する方法も示します。

この新しいAPIで何ができるかのいくつかの例から始めましょう。特にマルチモーダルなタスクに関して強力ですので、画像を生成したりテキストを読み上げたりするのに最適です。

```py
agent.run("Caption the following image", image=image)
```

| **Input**                                                                                                                   | **Output**                        |
|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water |

---

```py
agent.run("Read the following text out loud", text=text)
```

| **Input** | **Output** |
|-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|
| A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element.
</audio> --- ```py agent.run( "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?", document=document, ) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|----------------| | <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer | ## Quickstart `agent.run`ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€ๅคง่ฆๆจกใช่จ€่ชžใƒขใƒ‡ใƒซ๏ผˆLLM๏ผ‰ใงใ™ใ€‚ OpenAIใƒขใƒ‡ใƒซใจBigCodeใ€OpenAssistantใ‹ใ‚‰ใฎใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใฎไปฃๆ›ฟใƒขใƒ‡ใƒซใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚OpenAIใƒขใƒ‡ใƒซใฏใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅ„ชใ‚Œใฆใ„ใพใ™ใŒใ€OpenAIใฎAPIใ‚ญใƒผใŒๅฟ…่ฆใงใ‚ใ‚Šใ€็„กๆ–™ใงไฝฟ็”จใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ไธ€ๆ–นใ€Hugging FaceใฏBigCodeใจOpenAssistantใƒขใƒ‡ใƒซใฎใ‚จใƒณใƒ‰ใƒใ‚คใƒณใƒˆใธใฎ็„กๆ–™ใ‚ขใ‚ฏใ‚ปใ‚นใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ ใพใšใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎไพๅญ˜้–ขไฟ‚ใ‚’ใ™ในใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใŸใ‚ใซ`agents`ใฎใ‚จใ‚ฏใ‚นใƒˆใƒฉใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใใ ใ•ใ„ใ€‚ ```bash pip install transformers[agents] ``` OpenAIใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€`openai`ใฎไพๅญ˜้–ขไฟ‚ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใŸๅพŒใ€`OpenAiAgent`ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใพใ™ใ€‚ ```bash pip install openai ``` ```py from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>") ``` BigCodeใพใŸใฏOpenAssistantใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ใพใšใƒญใ‚ฐใ‚คใƒณใ—ใฆInference APIใซใ‚ขใ‚ฏใ‚ปใ‚นใ—ใฆใใ ใ•ใ„ใ€‚ ```py from huggingface_hub import login login("<YOUR_TOKEN>") ``` ๆฌกใซใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใฆใใ ใ•ใ„ใ€‚ ```py from transformers import HfAgent # Starcoder agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") # StarcoderBase # agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase") # OpenAssistant # agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5") ``` ใ“ใ‚Œใฏใ€Hugging FaceใŒ็พๅœจ็„กๆ–™ใงๆไพ›ใ—ใฆใ„ใ‚‹ๆŽจ่ซ–APIใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎใƒขใƒ‡ใƒซ๏ผˆใพใŸใฏๅˆฅใฎใƒขใƒ‡ใƒซ๏ผ‰ใฎ็‹ฌ่‡ชใฎๆŽจ่ซ–ใ‚จใƒณใƒ‰ใƒใ‚คใƒณใƒˆใ‚’ใŠๆŒใกใฎๅ ดๅˆใฏใ€ไธŠ่จ˜ใฎURLใ‚จใƒณใƒ‰ใƒใ‚คใƒณใƒˆใ‚’ใ”่‡ชๅˆ†ใฎURLใ‚จใƒณใƒ‰ใƒใ‚คใƒณใƒˆใง็ฝฎใๆ›ใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ <Tip> StarCoderใจOpenAssistantใฏ็„กๆ–™ใงๅˆฉ็”จใงใใ€ใ‚ทใƒณใƒ—ใƒซใชใ‚ฟใ‚นใ‚ฏใซใฏ้žๅธธใซๅ„ชใ‚ŒใŸๆ€ง่ƒฝใ‚’็™บๆฎใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ใ‚ˆใ‚Š่ค‡้›‘ใชใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๅ‡ฆ็†ใ™ใ‚‹้š›ใซใฏใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒๅๅˆ†ใงใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใใฎใ‚ˆใ†ใชๅ ดๅˆใซใฏใ€็พๆ™‚็‚นใงใฏใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใงใฏใชใ„ใ‚‚ใฎใฎใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅ‘ไธŠใ™ใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹OpenAIใƒขใƒ‡ใƒซใ‚’่ฉฆใ—ใฆใฟใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ </Tip> ใ“ใ‚Œใงๆบ–ๅ‚™ใŒๆ•ดใ„ใพใ—ใŸ๏ผใ“ใ‚Œใ‹ใ‚‰ใ€ใ‚ใชใŸใŒๅˆฉ็”จใงใใ‚‹2ใคใฎAPIใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ### Single execution (run) ๅ˜ไธ€ๅฎŸ่กŒใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฎ [`~Agent.run`] ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใงใ™ใ€‚ ```py agent.run("Draw me a picture of rivers and lakes.") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> 
ใ“ใ‚Œใฏใ€ๅฎŸ่กŒใ—ใŸใ„ใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใŸใƒ„ใƒผใƒซ๏ผˆใพใŸใฏใƒ„ใƒผใƒซ๏ผ‰ใ‚’่‡ชๅ‹•็š„ใซ้ธๆŠžใ—ใ€้ฉๅˆ‡ใซๅฎŸ่กŒใ—ใพใ™ใ€‚1ใคใพใŸใฏ่ค‡ๆ•ฐใฎใ‚ฟใ‚นใ‚ฏใ‚’ๅŒใ˜ๅ‘ฝไปคใงๅฎŸ่กŒใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผˆใŸใ ใ—ใ€ๅ‘ฝไปคใŒ่ค‡้›‘ใงใ‚ใ‚‹ใปใฉใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒๅคฑๆ•—ใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใใชใ‚Šใพใ™๏ผ‰ใ€‚ ```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> [`~Agent.run`] ๆ“ไฝœใฏ็‹ฌ็ซ‹ใ—ใฆๅฎŸ่กŒใงใใพใ™ใฎใงใ€็•ฐใชใ‚‹ใ‚ฟใ‚นใ‚ฏใงไฝ•ๅบฆใ‚‚ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ๆณจๆ„็‚นใจใ—ใฆใ€ใ‚ใชใŸใฎ `agent` ใฏๅ˜ใชใ‚‹ๅคง่ฆๆจกใช่จ€่ชžใƒขใƒ‡ใƒซใงใ‚ใ‚‹ใŸใ‚ใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใฎใ‚ใšใ‹ใชๅค‰ๆ›ดใงใ‚‚ๅฎŒๅ…จใซ็•ฐใชใ‚‹็ตๆžœใŒๅพ—ใ‚‰ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ๅฎŸ่กŒใ—ใŸใ„ใ‚ฟใ‚นใ‚ฏใ‚’ใงใใ‚‹ใ ใ‘ๆ˜Ž็ขบใซ่ชฌๆ˜Žใ™ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ใ€‚่‰ฏใ„ใƒ—ใƒญใƒณใƒ—ใƒˆใฎๆ›ธใๆ–นใซใคใ„ใฆใฏใ€[ใ“ใกใ‚‰](custom_tools#writing-good-user-inputs) ใง่ฉณใ—ใ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ ๅฎŸ่กŒใ”ใจใซ็Šถๆ…‹ใ‚’ไฟๆŒใ—ใŸใ‚Šใ€ใƒ†ใ‚ญใ‚นใƒˆไปฅๅค–ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซๆธกใ—ใŸใ‚Šใ™ใ‚‹ๅ ดๅˆใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒไฝฟ็”จใ™ใ‚‹ๅค‰ๆ•ฐใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ไพ‹ใˆใฐใ€ๆœ€ๅˆใฎๅทใ‚„ๆน–ใฎ็”ปๅƒใ‚’็”Ÿๆˆใ—ใ€ใใฎ็”ปๅƒใซๅณถใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ‚ˆใ†ใซใƒขใƒ‡ใƒซใซๆŒ‡็คบใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ˆใ†ใซ่กŒใ†ใ“ใจใŒใงใใพใ™๏ผš ```python picture = agent.run("Generate a picture of rivers and lakes.") updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture) ``` <Tip> ใ“ใ‚Œใฏใ€ใƒขใƒ‡ใƒซใŒใ‚ใชใŸใฎใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’็†่งฃใงใใชใ„ๅ ดๅˆใ‚„ใ€ใƒ„ใƒผใƒซใ‚’ๆททๅŒใ™ใ‚‹ๅ ดๅˆใซๅฝน็ซ‹ใคใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐ๏ผš ```py agent.run("Draw me the picture of a capybara swimming in the sea") ``` ใ“ใ“ใงใฏใ€ใƒขใƒ‡ใƒซใฏ2ใคใฎๆ–นๆณ•ใง่งฃ้‡ˆใงใใพใ™๏ผš - `text-to-image`ใซๆตทใงๆณณใใ‚ซใƒ”ใƒใƒฉใ‚’็”Ÿๆˆใ•ใ›ใ‚‹ - ใพใŸใฏใ€`text-to-image`ใงใ‚ซใƒ”ใƒใƒฉใ‚’็”Ÿๆˆใ—ใ€ใใ‚Œใ‚’ๆตทใงๆณณใŒใ›ใ‚‹ใŸใ‚ใซ`image-transformation`ใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ ๆœ€ๅˆใฎใ‚ทใƒŠใƒชใ‚ชใ‚’ๅผทๅˆถใ—ใŸใ„ๅ ดๅˆใฏใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๅผ•ๆ•ฐใจใ—ใฆๆธกใ™ใ“ใจใŒใงใใพใ™๏ผš ```py agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea") ``` </Tip> ### Chat-based execution (ใƒใƒฃใƒƒใƒˆ) ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ€[`~Agent.chat`] ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€ใƒใƒฃใƒƒใƒˆใƒ™ใƒผใ‚นใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚‚ๅฏ่ƒฝใงใ™ใ€‚ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ```py agent.chat("Transform the picture so that there is a rock in there") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200> <br/> ใ“ใ‚Œใฏใ€ๆŒ‡็คบใ‚’ใพใŸใ„ใง็Šถๆ…‹ใ‚’ไฟๆŒใ—ใŸใ„ๅ ดๅˆใซไพฟๅˆฉใชใ‚ขใƒ—ใƒญใƒผใƒใงใ€ๅ˜ไธ€ใฎๆŒ‡็คบใซๆฏ”ในใฆ่ค‡้›‘ใชๆŒ‡็คบใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใฎใฏ้›ฃใ—ใ„ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผˆใใฎๅ ดๅˆใฏ [`~Agent.run`] ใƒกใ‚ฝใƒƒใƒ‰ใฎๆ–นใŒ้ฉใ—ใฆใ„ใพใ™๏ผ‰ใ€‚ ใ“ใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ€้žใƒ†ใ‚ญใ‚นใƒˆๅž‹ใฎๅผ•ๆ•ฐใ‚„็‰นๅฎšใฎใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆธกใ—ใŸใ„ๅ ดๅˆใซใ‚‚ไฝฟ็”จใงใใพใ™ใ€‚ ### โš ๏ธ Remote execution 
ใƒ‡ใƒขใƒณใ‚นใƒˆใƒฌใƒผใ‚ทใƒงใƒณใฎ็›ฎ็š„ใ‚„ใ™ในใฆใฎใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ€ใƒชใƒชใƒผใ‚นใฎใŸใ‚ใซใ„ใใคใ‹ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ„ใƒผใƒซ็”จใฎใƒชใƒขใƒผใƒˆๅฎŸ่กŒใƒ„ใƒผใƒซใ‚‚ไฝœๆˆใ—ใพใ—ใŸใ€‚ใ“ใ‚Œใ‚‰ใฏ [ๆŽจ่ซ–ใ‚จใƒณใƒ‰ใƒใ‚คใƒณใƒˆ](https://huggingface.co/inference-endpoints) ใ‚’ไฝฟ็”จใ—ใฆไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฏ็พๅœจใ‚ชใƒ•ใซใชใฃใฆใ„ใพใ™ใŒใ€ใƒชใƒขใƒผใƒˆๅฎŸ่กŒใƒ„ใƒผใƒซใ‚’่‡ชๅˆ†ใง่จญๅฎšใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฏใ€[ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚ฌใ‚คใƒ‰](./custom_tools) ใ‚’่ชญใ‚€ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ### What's happening here? What are tools, and what are agents? ![ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใจใƒ„ใƒผใƒซใฎใƒ€ใ‚คใ‚ขใ‚ฐใƒฉใƒ ](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png) #### Agents ใ“ใ“ใงใฎใ€Œใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ€ใจใฏใ€ๅคง่ฆๆจกใช่จ€่ชžใƒขใƒ‡ใƒซใฎใ“ใจใงใ‚ใ‚Šใ€็‰นๅฎšใฎไธ€้€ฃใฎใƒ„ใƒผใƒซใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใ‚ˆใ†ใซใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’่จญๅฎšใ—ใฆใ„ใพใ™ใ€‚ LLM๏ผˆๅคง่ฆๆจก่จ€่ชžใƒขใƒ‡ใƒซ๏ผ‰ใฏใ€ใ‚ณใƒผใƒ‰ใฎๅฐใ•ใชใ‚ตใƒณใƒ—ใƒซใ‚’็”Ÿๆˆใ™ใ‚‹ใฎใซใ‹ใชใ‚Šๅ„ชใ‚ŒใฆใŠใ‚Šใ€ใ“ใฎAPIใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซ็‰นๅฎšใฎใƒ„ใƒผใƒซใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใฆใ‚ฟใ‚นใ‚ฏใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ‚ณใƒผใƒ‰ใฎๅฐใ•ใชใ‚ตใƒณใƒ—ใƒซใ‚’็”Ÿๆˆใ•ใ›ใ‚‹ใ“ใจใซๅˆฉ็”จใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ—ใƒญใƒณใƒ—ใƒˆใฏใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ‚ฟใ‚นใ‚ฏใจใƒ„ใƒผใƒซใฎ่ชฌๆ˜Žใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจใงใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒไฝฟ็”จใ—ใฆใ„ใ‚‹ใƒ„ใƒผใƒซใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใซใ‚ขใ‚ฏใ‚ปใ‚นใ—ใ€้–ข้€ฃใ™ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ #### Tools ใƒ„ใƒผใƒซใฏ้žๅธธใซๅ˜็ด”ใงใ€ๅๅ‰ใจ่ชฌๆ˜Žใ‹ใ‚‰ใชใ‚‹ๅ˜ไธ€ใฎ้–ขๆ•ฐใงใ™ใ€‚ใใ‚Œใ‹ใ‚‰ใ€ใ“ใ‚Œใ‚‰ใฎใƒ„ใƒผใƒซใฎ่ชฌๆ˜Žใ‚’ไฝฟ็”จใ—ใฆใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ใƒ—ใƒญใƒณใƒ—ใƒˆใ—ใพใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’้€šใ˜ใฆใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ€ใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ—ใฆใ‚ฏใ‚จใƒชใง่ฆๆฑ‚ใ•ใ‚ŒใŸใ‚ฟใ‚นใ‚ฏใ‚’ใฉใฎใ‚ˆใ†ใซๅฎŸ่กŒใ™ใ‚‹ใ‹ใ‚’็คบใ—ใพใ™ใ€‚็‰นใซใ€ใƒ„ใƒผใƒซใฎๆœŸๅพ…ใ•ใ‚Œใ‚‹ๅ…ฅๅŠ›ใจๅ‡บๅŠ›ใ‚’็คบใ—ใพใ™ใ€‚ ใ“ใ‚Œใฏๆ–ฐใ—ใ„ใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ—ใฆใŠใ‚Šใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใงใฏใชใใƒ„ใƒผใƒซใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ใชใœใชใ‚‰ใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏ้žๅธธใซๅŽŸๅญ็š„ใชใƒ„ใƒผใƒซใงใ‚ˆใ‚Š่‰ฏใ„ใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใ™ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใ‚ˆใ‚Šใƒชใƒ•ใ‚กใ‚ฏใ‚ฟใƒชใƒณใ‚ฐใ•ใ‚Œใ€ใ—ใฐใ—ใฐ่ค‡ๆ•ฐใฎใ‚ฟใ‚นใ‚ฏใ‚’็ต„ใฟๅˆใ‚ใ›ใฆใ„ใพใ™ใ€‚ใƒ„ใƒผใƒซใฏ้žๅธธใซๅ˜็ด”ใชใ‚ฟใ‚นใ‚ฏใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ‚‹ใ“ใจใ‚’ๆ„ๅ›ณใ—ใฆใ„ใพใ™ใ€‚ #### Code-execution?! 
ใ“ใฎใ‚ณใƒผใƒ‰ใฏใ€ใƒ„ใƒผใƒซใจใƒ„ใƒผใƒซใจไธ€็ท’ใซๆธกใ•ใ‚Œใ‚‹ๅ…ฅๅŠ›ใฎใ‚ปใƒƒใƒˆใงใ€ๅฝ“็คพใฎๅฐ่ฆๆจกใชPythonใ‚คใƒณใ‚ฟใƒผใƒ—ใƒชใ‚ฟใงๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใ™ใงใซๆไพ›ใ•ใ‚ŒใŸใƒ„ใƒผใƒซใจprint้–ขๆ•ฐใ—ใ‹ๅ‘ผใณๅ‡บใ™ใ“ใจใŒใงใใชใ„ใŸใ‚ใ€ๅฎŸ่กŒใงใใ‚‹ใ“ใจใฏใ™ใงใซๅˆถ้™ใ•ใ‚Œใฆใ„ใพใ™ใ€‚Hugging Faceใฎใƒ„ใƒผใƒซใซๅˆถ้™ใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ๅฎ‰ๅ…จใ ใจ่€ƒใˆใฆใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ•ใ‚‰ใซใ€ๅฑžๆ€งใฎๆคœ็ดขใ‚„ใ‚คใƒณใƒใƒผใƒˆใฏ่จฑๅฏใ—ใฆใŠใ‚‰ใš๏ผˆใใ‚Œใ‚‰ใฏๆธกใ•ใ‚ŒใŸๅ…ฅๅŠ›/ๅ‡บๅŠ›ใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซใฏๅฟ…่ฆใชใ„ใฏใšใงใ™๏ผ‰ใ€ๆœ€ใ‚‚ๆ˜Žใ‚‰ใ‹ใชๆ”ปๆ’ƒใฏๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“๏ผˆใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใใ‚Œใ‚‰ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ˆใ†ใซใƒ—ใƒญใƒณใƒ—ใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผ‰ใ€‚่ถ…ๅฎ‰ๅ…จใชๅดใซ็ซ‹ใกใŸใ„ๅ ดๅˆใฏใ€่ฟฝๅŠ ใฎๅผ•ๆ•ฐ return_code=True ใ‚’ๆŒ‡ๅฎšใ—ใฆ run() ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅฎŸ่กŒใงใใพใ™ใ€‚ใใฎๅ ดๅˆใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏๅฎŸ่กŒใ™ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’่ฟ”ใ™ใ ใ‘ใงใ€ๅฎŸ่กŒใ™ใ‚‹ใ‹ใฉใ†ใ‹ใฏใ‚ใชใŸๆฌก็ฌฌใงใ™ใ€‚ ๅฎŸ่กŒใฏใ€้•ๆณ•ใชๆ“ไฝœใ‚’่ฉฆใฟใ‚‹่กŒใพใŸใฏใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใŒ็”Ÿๆˆใ—ใŸใ‚ณใƒผใƒ‰ใซ้€šๅธธใฎPythonใ‚จใƒฉใƒผใŒใ‚ใ‚‹ๅ ดๅˆใซๅœๆญขใ—ใพใ™ใ€‚ ### A curated set of tools ็งใŸใกใฏใ€ใ“ใฎใ‚ˆใ†ใชใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ๅผทๅŒ–ใงใใ‚‹ใƒ„ใƒผใƒซใฎใ‚ปใƒƒใƒˆใ‚’็‰นๅฎšใ—ใพใ™ใ€‚ไปฅไธ‹ใฏใ€`transformers`ใซ็ตฑๅˆใ•ใ‚ŒใŸใƒ„ใƒผใƒซใฎๆ›ดๆ–ฐใ•ใ‚ŒใŸใƒชใ‚นใƒˆใงใ™๏ผš - **ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ่ณชๅ•ๅฟœ็ญ”**: ็”ปๅƒๅฝขๅผใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ๏ผˆPDFใชใฉ๏ผ‰ใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใซ้–ขใ™ใ‚‹่ณชๅ•ใซๅ›ž็ญ”ใ—ใพใ™๏ผˆ[Donut](./model_doc/donut)๏ผ‰ - **ใƒ†ใ‚ญใ‚นใƒˆ่ณชๅ•ๅฟœ็ญ”**: ้•ทใ„ใƒ†ใ‚ญใ‚นใƒˆใจ่ณชๅ•ใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใƒ†ใ‚ญใ‚นใƒˆๅ†…ใฎ่ณชๅ•ใซๅ›ž็ญ”ใ—ใพใ™๏ผˆ[Flan-T5](./model_doc/flan-t5)๏ผ‰ - **็„กๆกไปถใฎ็”ปๅƒใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒณ**: ็”ปๅƒใซใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒณใ‚’ไป˜ใ‘ใพใ™๏ผ๏ผˆ[BLIP](./model_doc/blip)๏ผ‰ - **็”ปๅƒ่ณชๅ•ๅฟœ็ญ”**: ็”ปๅƒใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใใฎ็”ปๅƒใซ้–ขใ™ใ‚‹่ณชๅ•ใซๅ›ž็ญ”ใ—ใพใ™๏ผˆ[VILT](./model_doc/vilt)๏ผ‰ - **็”ปๅƒใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ**: ็”ปๅƒใจใƒ—ใƒญใƒณใƒ—ใƒˆใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใใฎใƒ—ใƒญใƒณใƒ—ใƒˆใฎใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒžใ‚นใ‚ฏใ‚’ๅ‡บๅŠ›ใ—ใพใ™๏ผˆ[CLIPSeg](./model_doc/clipseg)๏ผ‰ - **้Ÿณๅฃฐใ‹ใ‚‰ใƒ†ใ‚ญใ‚นใƒˆใธใฎๅค‰ๆ›**: ไบบใฎ่ฉฑใ—ๅฃฐใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช้Œฒ้ŸณใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใใฎ้Ÿณๅฃฐใ‚’ใƒ†ใ‚ญใ‚นใƒˆใซ่ปข่จ˜ใ—ใพใ™๏ผˆ[Whisper](./model_doc/whisper)๏ผ‰ - **ใƒ†ใ‚ญใ‚นใƒˆใ‹ใ‚‰้Ÿณๅฃฐใธใฎๅค‰ๆ›**: ใƒ†ใ‚ญใ‚นใƒˆใ‚’้Ÿณๅฃฐใซๅค‰ๆ›ใ—ใพใ™๏ผˆ[SpeechT5](./model_doc/speecht5)๏ผ‰ - **ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กž**: ใƒ†ใ‚ญใ‚นใƒˆใจใƒฉใƒ™ใƒซใฎใƒชใ‚นใƒˆใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใ€ใƒ†ใ‚ญใ‚นใƒˆใŒๆœ€ใ‚‚ๅฏพๅฟœใ™ใ‚‹ใƒฉใƒ™ใƒซใ‚’่ญ˜ๅˆฅใ—ใพใ™๏ผˆ[BART](./model_doc/bart)๏ผ‰ - **ใƒ†ใ‚ญใ‚นใƒˆ่ฆ็ด„**: ้•ทใ„ใƒ†ใ‚ญใ‚นใƒˆใ‚’1ใคใพใŸใฏๆ•ฐๆ–‡ใซ่ฆ็ด„ใ—ใพใ™๏ผˆ[BART](./model_doc/bart)๏ผ‰ - **็ฟป่จณ**: ใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆŒ‡ๅฎšใ•ใ‚ŒใŸ่จ€่ชžใซ็ฟป่จณใ—ใพใ™๏ผˆ[NLLB](./model_doc/nllb)๏ผ‰ ใ“ใ‚Œใ‚‰ใฎใƒ„ใƒผใƒซใฏtransformersใซ็ตฑๅˆใ•ใ‚ŒใฆใŠใ‚Šใ€ๆ‰‹ๅ‹•ใงใ‚‚ไฝฟ็”จใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ๆฌกใฎใ‚ˆใ†ใซไฝฟ็”จใงใใพใ™๏ผš ```py from transformers import load_tool tool = load_tool("text-to-speech") audio = tool("This is a text to speech tool") ``` ### Custom tools ็งใŸใกใฏใ€ๅŽณ้ธใ•ใ‚ŒใŸใƒ„ใƒผใƒซใฎใ‚ปใƒƒใƒˆใ‚’็‰นๅฎšใ™ใ‚‹ไธ€ๆ–นใ€ใ“ใฎๅฎŸ่ฃ…ใŒๆไพ›ใ™ใ‚‹ไธป่ฆใชไพกๅ€คใฏใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’่ฟ…้€Ÿใซไฝœๆˆใ—ใฆๅ…ฑๆœ‰ใงใใ‚‹่ƒฝๅŠ›ใ ใจๅผทใไฟกใ˜ใฆใ„ใพใ™ใ€‚ ใƒ„ใƒผใƒซใฎใ‚ณใƒผใƒ‰ใ‚’Hugging Face 
SpaceใพใŸใฏใƒขใƒ‡ใƒซใƒชใƒใ‚ธใƒˆใƒชใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใงใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใจ็›ดๆŽฅ้€ฃๆบใ—ใฆใƒ„ใƒผใƒซใ‚’ๆดป็”จใงใใพใ™ใ€‚[`huggingface-tools` organization](https://huggingface.co/huggingface-tools)ใซใฏใ€**transformers้žไพๅญ˜**ใฎใ„ใใคใ‹ใฎใƒ„ใƒผใƒซใŒ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸ๏ผš - **ใƒ†ใ‚ญใ‚นใƒˆใƒ€ใ‚ฆใƒณใƒญใƒผใƒ€ใƒผ**: ใ‚ฆใ‚งใƒ–URLใ‹ใ‚‰ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใฎใƒ„ใƒผใƒซ - **ใƒ†ใ‚ญใ‚นใƒˆใ‹ใ‚‰็”ปๅƒใธ**: ใƒ—ใƒญใƒณใƒ—ใƒˆใซๅพ“ใฃใฆ็”ปๅƒใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใฎใƒ„ใƒผใƒซใ€‚ๅฎ‰ๅฎšใ—ใŸๆ‹กๆ•ฃใ‚’ๆดป็”จใ—ใพใ™ - **็”ปๅƒๅค‰ๆ›**: ๅˆๆœŸ็”ปๅƒใจใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ๆŒ‡ๅฎšใ—ใฆ็”ปๅƒใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใŸใ‚ใฎใƒ„ใƒผใƒซใ€‚instruct pix2pixใฎๅฎ‰ๅฎšใ—ใŸๆ‹กๆ•ฃใ‚’ๆดป็”จใ—ใพใ™ - **ใƒ†ใ‚ญใ‚นใƒˆใ‹ใ‚‰ใƒ“ใƒ‡ใ‚ชใธ**: ใƒ—ใƒญใƒณใƒ—ใƒˆใซๅพ“ใฃใฆๅฐใ•ใชใƒ“ใƒ‡ใ‚ชใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใฎใƒ„ใƒผใƒซใ€‚damo-vilabใ‚’ๆดป็”จใ—ใพใ™ ๆœ€ๅˆใ‹ใ‚‰ไฝฟ็”จใ—ใฆใ„ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใ‹ใ‚‰็”ปๅƒใธใฎใƒ„ใƒผใƒซใฏใ€[*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)ใซใ‚ใ‚‹ใƒชใƒขใƒผใƒˆใƒ„ใƒผใƒซใงใ™๏ผไปŠๅพŒใ‚‚ใ€ใ“ใฎ็ต„็น”ใŠใ‚ˆใณไป–ใฎ็ต„็น”ใซใ•ใ‚‰ใซใ“ใฎใ‚ˆใ†ใชใƒ„ใƒผใƒซใ‚’ใƒชใƒชใƒผใ‚นใ—ใ€ใ“ใฎๅฎŸ่ฃ…ใ‚’ใ•ใ‚‰ใซๅผทๅŒ–ใ—ใฆใ„ใใพใ™ใ€‚ ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง[`huggingface-tools`](https://huggingface.co/huggingface-tools)ใซใ‚ใ‚‹ใƒ„ใƒผใƒซใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ™ใ€‚ ใƒ„ใƒผใƒซใฎไฝœๆˆใจๅ…ฑๆœ‰ๆ–นๆณ•ใ€ใพใŸHubใซๅญ˜ๅœจใ™ใ‚‹ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใ‚’ๆดป็”จใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฎ่ฉณ็ดฐใฏใ€[ๆฌกใฎใ‚ฌใ‚คใƒ‰](custom_tools)ใง่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ ### Code generation ใ“ใ‚Œใพใงใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใ‚ใชใŸใฎใŸใ‚ใซใ‚ขใ‚ฏใ‚ทใƒงใƒณใ‚’ๅฎŸ่กŒใ™ใ‚‹ๆ–นๆณ•ใ‚’็คบใ—ใพใ—ใŸใ€‚ใŸใ ใ—ใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใฏใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใ™ใ‚‹ใ ใ‘ใงใ€้žๅธธใซๅˆถ้™ใ•ใ‚ŒใŸPythonใ‚คใƒณใ‚ฟใƒผใƒ—ใƒชใ‚ฟใ‚’ไฝฟ็”จใ—ใฆๅฎŸ่กŒใ—ใพใ™ใ€‚็”Ÿๆˆใ•ใ‚ŒใŸใ‚ณใƒผใƒ‰ใ‚’็•ฐใชใ‚‹็’ฐๅขƒใงไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใ€ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใซใ‚ณใƒผใƒ‰ใ‚’่ฟ”ใ™ใ‚ˆใ†ใซๆŒ‡็คบใงใใพใ™ใ€‚ใƒ„ใƒผใƒซใฎๅฎš็พฉใจๆญฃ็ขบใชใ‚คใƒณใƒใƒผใƒˆใ‚‚ๅซใ‚ใฆใ€‚ ไพ‹ใˆใฐใ€ไปฅไธ‹ใฎๅ‘ฝไปค๏ผš ```python agent.run("Draw me a picture of rivers and lakes", return_code=True) ``` ๆฌกใฎใ‚ณใƒผใƒ‰ใ‚’่ฟ”ใ—ใพใ™ ```python from transformers import load_tool image_generator = load_tool("huggingface-tools/text-to-image") image = image_generator(prompt="rivers and lakes") ``` ใใฎๅพŒใ€่‡ชๅˆ†ใงๅค‰ๆ›ดใ—ใฆๅฎŸ่กŒใงใใพใ™ใ€‚
hf_public_repos/transformers/docs/source/ja/pipeline_tutorial.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ ใ“ใฎใƒ•ใ‚กใ‚คใƒซใฏMarkdownๅฝขๅผใงใ™ใŒใ€ๅฝ“็คพใฎdoc-builder๏ผˆMDXใซไผผใŸๆง‹ๆ–‡๏ผ‰ใ‚’ๅซใ‚€ใŸใ‚ใ€Markdownใƒ“ใƒฅใƒผใ‚ขใงๆญฃใ—ใ่กจ็คบใ•ใ‚Œใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ --> # Pipelines for inference [`pipeline`]ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€[Hub](https://huggingface.co/models)ใ‹ใ‚‰ใฎไปปๆ„ใฎใƒขใƒ‡ใƒซใ‚’่จ€่ชžใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใ€้Ÿณๅฃฐใ€ใŠใ‚ˆใณใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใ‚ฟใ‚นใ‚ฏใฎๆŽจ่ซ–ใซ็ฐกๅ˜ใซไฝฟ็”จใงใใพใ™ใ€‚ ็‰นๅฎšใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃใซ้–ขใ™ใ‚‹็ตŒ้จ“ใŒใชใ„ๅ ดๅˆใ‚„ใ€ใƒขใƒ‡ใƒซใฎ่ƒŒๅพŒใซใ‚ใ‚‹ใ‚ณใƒผใƒ‰ใซ็ฒพ้€šใ—ใฆใ„ใชใ„ๅ ดๅˆใงใ‚‚ใ€[`pipeline`]ใ‚’ไฝฟ็”จใ—ใฆๆŽจ่ซ–ใงใใพใ™๏ผ ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ๆฌกใฎใ“ใจใ‚’ๅญฆใณใพใ™๏ผš - ๆŽจ่ซ–ใฎใŸใ‚ใฎ[`pipeline`]ใฎไฝฟ็”จๆ–นๆณ•ใ€‚ - ็‰นๅฎšใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚„ใƒขใƒ‡ใƒซใฎไฝฟ็”จๆ–นๆณ•ใ€‚ - ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ€ใƒ“ใ‚ธใƒงใƒณใ€ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใฎ[`pipeline`]ใฎไฝฟ็”จๆ–นๆณ•ใ€‚ <Tip> ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ‚ฟใ‚นใ‚ฏใจๅˆฉ็”จๅฏ่ƒฝใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๅฎŒๅ…จใชไธ€่ฆงใซใคใ„ใฆใฏใ€[`pipeline`]ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ </Tip> ## Pipeline usage ๅ„ใ‚ฟใ‚นใ‚ฏใซใฏ้–ข้€ฃใ™ใ‚‹[`pipeline`]ใŒใ‚ใ‚Šใพใ™ใŒใ€ใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใฎ[`pipeline`]ใ‚’ไฝฟ็”จใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ใ™ในใฆใฎใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๅซใ‚€ไธ€่ˆฌ็š„ใช[`pipeline`]ใฎๆŠฝ่ฑกๅŒ–ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใ‚ˆใ‚Š็ฐกๅ˜ใงใ™ใ€‚[`pipeline`]ใฏ่‡ชๅ‹•็š„ใซใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซใจใ€ใ‚ฟใ‚นใ‚ฏใฎๆŽจ่ซ–ใŒๅฏ่ƒฝใชๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใ‚’่ชญใฟ่พผใฟใพใ™ใ€‚ 1. [`pipeline`]ใ‚’ไฝœๆˆใ—ใ€ๆŽจ่ซ–ใ‚ฟใ‚นใ‚ฏใ‚’ๆŒ‡ๅฎšใ—ใฆๅง‹ใ‚ใพใ™๏ผš ```py >>> from transformers import pipeline >>> generator = pipeline(task="automatic-speech-recognition") ``` 2. 
[`pipeline`]ใซๅ…ฅๅŠ›ใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆธกใ—ใพใ™๏ผš ```python >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} ``` ใƒใ‚งใƒƒใ‚ฏใ‚ขใ‚ฆใƒˆใงใใชใ‹ใฃใŸใ‹๏ผŸ [Hubใฎๆœ€ใ‚‚ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ•ใ‚ŒใŸ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ใƒขใƒ‡ใƒซ](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) ใฎใ„ใใคใ‹ใ‚’่ฆ‹ใฆใ€ใ‚ˆใ‚Š่‰ฏใ„่ปขๅ†™ใ‚’ๅพ—ใ‚‹ใ“ใจใŒใงใใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ [openai/whisper-large](https://huggingface.co/openai/whisper-large) ใ‚’่ฉฆใ—ใฆใฟใพใ—ใ‚‡ใ†๏ผš ```python >>> generator = pipeline(model="openai/whisper-large") >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` ใ“ใฎ็ตๆžœใฏใ‚ˆใ‚Šๆญฃ็ขบใซ่ฆ‹ใˆใพใ™ใญ๏ผ ็•ฐใชใ‚‹่จ€่ชžใ€ๅฐ‚้–€ๅˆ†้‡Žใซ็‰นๅŒ–ใ—ใŸใƒขใƒ‡ใƒซใ€ใใฎไป–ใฎใƒขใƒ‡ใƒซใซใคใ„ใฆใฏใ€Hubใ‚’ใƒใ‚งใƒƒใ‚ฏใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ Hubใงใฏใ€ใƒ–ใƒฉใ‚ฆใ‚ถใ‹ใ‚‰็›ดๆŽฅใƒขใƒ‡ใƒซใฎ็ตๆžœใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใ€ไป–ใฎใƒขใƒ‡ใƒซใ‚ˆใ‚Šใ‚‚้ฉใ—ใฆใ„ใ‚‹ใ‹ใ€็‰นๆฎŠใชใ‚ฑใƒผใ‚นใ‚’ใ‚ˆใ‚Šใ‚ˆใๅ‡ฆ็†ใงใใ‚‹ใ‹ใ‚’็ขบ่ชใงใใพใ™ใ€‚ ใใ—ใฆใ€ใ‚ใชใŸใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซ้ฉใ—ใŸใƒขใƒ‡ใƒซใŒ่ฆ‹ใคใ‹ใ‚‰ใชใ„ๅ ดๅˆใ€ใ„ใคใงใ‚‚[ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ](training)ใ‚’้–‹ๅง‹ใงใใพใ™๏ผ ่ค‡ๆ•ฐใฎๅ…ฅๅŠ›ใŒใ‚ใ‚‹ๅ ดๅˆใ€ๅ…ฅๅŠ›ใ‚’ใƒชใ‚นใƒˆใจใ—ใฆๆธกใ™ใ“ใจใŒใงใใพใ™๏ผš ```py generator( [ "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac", "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", ] ) ``` ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใ‚’็นฐใ‚Š่ฟ”ใ—ๅ‡ฆ็†ใ—ใŸใ‚Šใ€ใ‚ฆใ‚งใƒ–ใ‚ตใƒผใƒใƒผใงๆŽจ่ซ–ใซไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ๅฐ‚็”จใฎ้ƒจๅˆ†ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„ใ€‚ [ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝฟ็”จใ™ใ‚‹](#using-pipelines-on-a-dataset) [ใ‚ฆใ‚งใƒ–ใ‚ตใƒผใƒใƒผใงใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝฟ็”จใ™ใ‚‹](./pipeline_webserver) ## ใƒ‘ใƒฉใƒกใƒผใ‚ฟ [`pipeline`]ใฏๅคšใใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใŠใ‚Šใ€ไธ€้ƒจใฏใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใงใ‚ใ‚Šใ€ไธ€้ƒจใฏใ™ในใฆใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซๅ…ฑ้€šใงใ™ใ€‚ ไธ€่ˆฌ็š„ใซใฏใ€ใฉใ“ใงใ‚‚ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒ‡ๅฎšใงใใพใ™๏ผš ```py generator = pipeline(model="openai/whisper-large", my_parameter=1) out = generator(...) # ใ“ใ‚Œใฏ `my_parameter=1` ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ out = generator(..., my_parameter=2) # ใ“ใ‚ŒใฏไธŠๆ›ธใใ—ใฆ `my_parameter=2` ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ out = generator(...) 
# ใ“ใ‚Œใฏๅ†ใณ `my_parameter=1` ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ``` 3ใคใฎ้‡่ฆใชใ‚‚ใฎใ‚’็ขบ่ชใ—ใพใ—ใ‚‡ใ†๏ผš ### Device `device=n` ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใƒขใƒ‡ใƒซใ‚’ๆŒ‡ๅฎšใ—ใŸใƒ‡ใƒใ‚คใ‚นใซ่‡ชๅ‹•็š„ใซ้…็ฝฎใ—ใพใ™ใ€‚ ใ“ใ‚Œใฏใ€PyTorchใพใŸใฏTensorflowใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใซ้–ขไฟ‚ใชใๆฉŸ่ƒฝใ—ใพใ™ใ€‚ ```py generator = pipeline(model="openai/whisper-large", device=0) ``` ใ‚‚ใ—ใƒขใƒ‡ใƒซใŒๅ˜ไธ€ใฎGPUใซใฏๅคงใใ™ใŽใ‚‹ๅ ดๅˆใ€`device_map="auto"`ใ‚’่จญๅฎšใ—ใฆใ€๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) ใซใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚’ใฉใฎใ‚ˆใ†ใซใƒญใƒผใƒ‰ใ—ใ€ไฟๅญ˜ใ™ใ‚‹ใ‹ใ‚’่‡ชๅ‹•็š„ใซๆฑบๅฎšใ•ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```python #!pip install accelerate generator = pipeline(model="openai/whisper-large", device_map="auto") ``` ๆณจๆ„: `device_map="auto"` ใŒๆธกใ•ใ‚ŒใŸๅ ดๅˆใ€`pipeline` ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹้š›ใซ `device=device` ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใใ†ใ—ใชใ„ใจใ€ไบˆๆœŸใ—ใชใ„ๅ‹•ไฝœใซ้ญ้‡ใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™๏ผ ### Batch size ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏ่ฉณ็ดฐใซใคใ„ใฆ[ใ“ใกใ‚‰](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹็†็”ฑใ‹ใ‚‰ใ€ๆŽจ่ซ–ใ‚’ใƒใƒƒใƒๅ‡ฆ็†ใ—ใพใ›ใ‚“ใ€‚ใใฎ็†็”ฑใฏใ€ใƒใƒƒใƒๅ‡ฆ็†ใŒๅฟ…ใšใ—ใ‚‚้€Ÿใใชใ„ใŸใ‚ใงใ‚ใ‚Šใ€ๅฎŸ้š›ใซใฏใ„ใใคใ‹ใฎใ‚ฑใƒผใ‚นใงใ‹ใชใ‚Š้…ใใชใ‚‹ใ“ใจใŒใ‚ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ ใŸใ ใ—ใ€ใ‚ใชใŸใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใงๆฉŸ่ƒฝใ™ใ‚‹ๅ ดๅˆใฏใ€ๆฌกใฎใ‚ˆใ†ใซไฝฟ็”จใงใใพใ™๏ผš ```py generator = pipeline(model="openai/whisper-large", device=0, batch_size=2) audio_filenames = [f"audio_{i}.flac" for i in range(10)] texts = generator(audio_filenames) ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏๆไพ›ใ•ใ‚ŒใŸ10ๅ€‹ใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒ•ใ‚กใ‚คใƒซใงใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๅฎŸ่กŒใ—ใพใ™ใŒใ€ ใƒขใƒ‡ใƒซใซใฏใƒใƒƒใƒๅ‡ฆ็†ใŒใ‚ˆใ‚ŠๅŠนๆžœ็š„ใงใ‚ใ‚‹GPUไธŠใซใ‚ใ‚Šใ€ใƒใƒƒใƒๅ‡ฆ็†ใ‚’่กŒใ†ใŸใ‚ใฎ่ฟฝๅŠ ใฎใ‚ณใƒผใƒ‰ใฏๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๅ‡บๅŠ›ใฏๅธธใซใƒใƒƒใƒๅ‡ฆ็†ใชใ—ใงๅ—ใ‘ๅ–ใฃใŸใ‚‚ใฎใจไธ€่‡ดใ™ใ‚‹ใฏใšใงใ™ใ€‚ใ“ใ‚Œใฏๅ˜ใซใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‹ใ‚‰ใ‚ˆใ‚Š้ซ˜้€Ÿใชๅ‡ฆ็†ใ‚’ๅพ—ใ‚‹ใŸใ‚ใฎๆ–นๆณ•ใจใ—ใฆๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใ€ใƒใƒƒใƒๅ‡ฆ็†ใฎใ„ใใคใ‹ใฎ่ค‡้›‘ใ•ใ‚’่ปฝๆธ›ใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใชใœใชใ‚‰ใ€ไธ€้ƒจใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใงใฏใ€ ใƒขใƒ‡ใƒซใงๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซ1ใคใฎใ‚ขใ‚คใƒ†ใƒ ๏ผˆ้•ทใ„ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒ•ใ‚กใ‚คใƒซใฎใ‚ˆใ†ใชใ‚‚ใฎ๏ผ‰ใ‚’่ค‡ๆ•ฐใฎ้ƒจๅˆ†ใซๅˆ†ๅ‰ฒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใŒใ‚ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใ“ใ‚Œใ‚’ใ‚ใชใŸใฎใŸใ‚ใซๅฎŸ่กŒใ—ใพใ™ใ€‚[*ใƒใƒฃใƒณใ‚ฏใƒใƒƒใƒๅ‡ฆ็†*](./main_classes/pipelines#pipeline-chunk-batching)ใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใ‚‹ใ‚‚ใฎใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ### Task specific parameters ใ™ในใฆใฎใ‚ฟใ‚นใ‚ฏใฏใ€ใ‚ฟใ‚นใ‚ฏๅ›บๆœ‰ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆไพ›ใ—ใ€่ฟฝๅŠ ใฎๆŸ”่ปŸๆ€งใจใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ๆไพ›ใ—ใฆใ€ไฝœๆฅญใ‚’ใ‚นใƒ ใƒผใ‚บใซ้€ฒใ‚ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ ใŸใจใˆใฐใ€[`transformers.AutomaticSpeechRecognitionPipeline.__call__`]ใƒกใ‚ฝใƒƒใƒ‰ใซใฏใ€ใƒ“ใƒ‡ใ‚ชใฎๅญ—ๅน•ไฝœๆˆใซๆœ‰็”จใช`return_timestamps`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> # Not using whisper, as it cannot provide timestamps. 
### Batch size

By default, pipelines will not batch inference, for reasons explained in detail [here](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). The reason is that batching is not necessarily faster, and can actually be quite slower in some cases.

But if it works in your use case, you can use:

```py
generator = pipeline(model="openai/whisper-large", device=0, batch_size=2)
audio_filenames = [f"audio_{i}.flac" for i in range(10)]
texts = generator(audio_filenames)
```

This runs the pipeline on the 10 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching; it is only meant as a way to get more speed out of a pipeline.

Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching) for you.

### Task specific parameters

All tasks provide task-specific parameters which allow for additional flexibility and options to help you get your job done. For instance, the [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] method has a `return_timestamps` parameter which sounds promising for subtitling videos:

```py
>>> # Not using whisper, as it cannot provide timestamps.
>>> generator = pipeline(model="facebook/wav2vec2-large-960h-lv60-self", return_timestamps="word")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP AND LIVE OUT THE TRUE MEANING OF ITS CREED', 'chunks': [{'text': 'I', 'timestamp': (1.22, 1.24)}, {'text': 'HAVE', 'timestamp': (1.42, 1.58)}, {'text': 'A', 'timestamp': (1.66, 1.68)}, {'text': 'DREAM', 'timestamp': (1.76, 2.14)}, {'text': 'BUT', 'timestamp': (3.68, 3.8)}, {'text': 'ONE', 'timestamp': (3.94, 4.06)}, {'text': 'DAY', 'timestamp': (4.16, 4.3)}, {'text': 'THIS', 'timestamp': (6.36, 6.54)}, {'text': 'NATION', 'timestamp': (6.68, 7.1)}, {'text': 'WILL', 'timestamp': (7.32, 7.56)}, {'text': 'RISE', 'timestamp': (7.8, 8.26)}, {'text': 'UP', 'timestamp': (8.38, 8.48)}, {'text': 'AND', 'timestamp': (10.08, 10.18)}, {'text': 'LIVE', 'timestamp': (10.26, 10.48)}, {'text': 'OUT', 'timestamp': (10.58, 10.7)}, {'text': 'THE', 'timestamp': (10.82, 10.9)}, {'text': 'TRUE', 'timestamp': (10.98, 11.18)}, {'text': 'MEANING', 'timestamp': (11.26, 11.58)}, {'text': 'OF', 'timestamp': (11.66, 11.7)}, {'text': 'ITS', 'timestamp': (11.76, 11.88)}, {'text': 'CREED', 'timestamp': (12.0, 12.38)}]}
```

As you can see, the model inferred the text and also output **when** the various words were pronounced in the sentence.

There are many parameters available for each task, so check out each task's API reference to see what you can tinker with! For instance, the [`~transformers.AutomaticSpeechRecognitionPipeline`] has a `chunk_length_s` parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own.
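As an illustration of the `chunk_length_s` parameter just mentioned — a sketch, with a placeholder filename — chunked long-form transcription looks like this:

```py
from transformers import pipeline

# `long_audio.flac` is a placeholder path; chunk_length_s splits the audio
# into 30-second windows that the model can actually process.
generator = pipeline(
    model="facebook/wav2vec2-large-960h-lv60-self",
    chunk_length_s=30,
)
text = generator("long_audio.flac")
```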
If you can't find a parameter that would really help you out, feel free to [request it](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)!

## Using pipelines on a dataset

The pipeline can also run inference on a large dataset. The easiest way to do this is to use an iterator:

```py
def data():
    for i in range(1000):
        yield f"My example {i}"


pipe = pipeline(model="gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
    generated_characters += len(out[0]["generated_text"])
```

The iterator `data()` yields each result, and the pipeline automatically recognizes the input is iterable and will start fetching the data while it continues to process it on the GPU (this uses a [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) under the hood). This is important because you don't have to allocate memory for the whole dataset and you can feed the GPU as fast as possible.

Since batching could speed things up, it may be useful to try tuning the `batch_size` parameter here.

The simplest way to iterate over a dataset is to just load one from 🤗 [Datasets](https://github.com/huggingface/datasets/):

```py
# KeyDataset is a util that will just output the item we're interested in.
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```

## Using pipelines for a webserver

<Tip>
Creating an inference engine is a complex topic which deserves its own page.
</Tip>

[Link](./pipeline_webserver)

## Vision pipeline

Using a [`pipeline`] for vision tasks is practically identical.

Specify your task and pass your image to the classifier. The image can be a link, a local path, or a base64-encoded image. For example, what species of cat is shown below?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```
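The other input forms mentioned above work the same way; for instance — a sketch with a placeholder filename — a local file:

```py
# A local path works exactly like a URL (the filename is a placeholder).
>>> preds = vision_classifier(images="path/to/pipeline-cat-chonk.jpeg")
```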
## Text pipeline

Using a [`pipeline`] for NLP tasks is practically identical.

```py
>>> from transformers import pipeline

>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```

## Multimodal pipeline

The [`pipeline`] supports more than one modality. For example, a visual question answering (VQA) task combines text and images. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.

For example, if you use this [invoice image](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png):

```py
>>> from transformers import pipeline

>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.42515, 'answer': 'us-001', 'start': 16, 'end': 16}]
```

<Tip>

To run the example above, you need to have [`pytesseract`](https://pypi.org/project/pytesseract/) installed in addition to 🤗 Transformers:

```bash
sudo apt install -y tesseract-ocr
pip install pytesseract
```

</Tip>

## Using `pipeline` on large models with 🤗 `accelerate`:

First make sure you have installed `accelerate` with `pip install accelerate`.

Then load your model using `device_map="auto"`. We will use `facebook/opt-1.3b` for our example.

```python
# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```

You can also pass 8-bit loaded models if you install `bitsandbytes` and add the argument `load_in_8bit=True`:

```py
# pip install accelerate bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```

Note that you can replace the checkpoint with any Hugging Face model that supports large model loading, such as BLOOM!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/quicktour.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for the Hugging Face doc-builder, so it may not render correctly in a regular Markdown viewer.

-->

# Quick tour

[[open-in-colab]]

Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started, and will show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install transformers datasets
```

You'll also need to install your preferred machine learning framework:

<frameworkcontent>
<pt>
```bash
pip install torch
```
</pt>
<tf>
```bash
pip install tensorflow
```
</tf>
</frameworkcontent>

## Pipeline

<Youtube id="tiZFewofSLM"/>

The [`pipeline`] is the easiest and fastest way to use a pretrained model for inference. You can use the [`pipeline`] out-of-the-box for many tasks across different modalities, some of which are shown in the table below:

<Tip>

For a complete list of available tasks, check out the [pipeline API reference](./main_classes/pipelines).

</Tip>

| **Task**                     | **Description**                                                                                              | **Modality**    | **Pipeline identifier**                       |
|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|-----------------------------------------------|
| Text classification          | assign a label to a given sequence of text                                                                   | NLP             | pipeline(task="sentiment-analysis")           |
| Text generation              | generate text given a prompt                                                                                 | NLP             | pipeline(task="text-generation")              |
| Summarization                | generate a summary of a sequence of text or document                                                         | NLP             | pipeline(task="summarization")                |
| Image classification         | assign a label to an image                                                                                   | Computer vision | pipeline(task="image-classification")         |
| Image segmentation           | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation")           |
| Object detection             | predict the bounding boxes and classes of objects in an image                                                | Computer vision | pipeline(task="object-detection")             |
| Audio classification         | assign a label to some audio data                                                                            | Audio           | pipeline(task="audio-classification")         |
| Automatic speech recognition | transcribe speech into text                                                                                  | Audio           | pipeline(task="automatic-speech-recognition") |
| Visual question answering    | answer a question about the image, given an image and a question                                             | Multimodal      | pipeline(task="vqa")                          |
| Document question answering  | answer a question about the document, given a document and a question                                        | Multimodal      | pipeline(task="document-question-answering")  |
| Image captioning             | generate a caption for a given image                                                                         | Multimodal      | pipeline(task="image-to-text")                |
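Any identifier from the last column can be passed straight to [`pipeline`]; for example, a quick sketch using the summarization identifier from the table:

```py
>>> from transformers import pipeline

>>> summarizer = pipeline(task="summarization")
```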
ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บ | ็”ปๅƒๅ†…ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎๅขƒ็•Œใƒœใƒƒใ‚ฏใ‚นใจใ‚ฏใƒฉใ‚นใ‚’ไบˆๆธฌใ™ใ‚‹ | ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณ | pipeline(task="object-detection") | | ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๅˆ†้กž | ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒ‡ใƒผใ‚ฟใซใƒฉใƒ™ใƒซใ‚’ๅ‰ฒใ‚Šๅฝ“ใฆใ‚‹ | ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช | pipeline(task="audio-classification") | | ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ | ้Ÿณๅฃฐใ‚’ใƒ†ใ‚ญใ‚นใƒˆใซๅค‰ๆ›ใ™ใ‚‹ | ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช | pipeline(task="automatic-speech-recognition") | | ใƒ“ใ‚ธใƒฅใ‚ขใƒซใ‚ฏใ‚จใ‚นใƒใƒงใƒณๅฟœ็ญ” | ็”ปๅƒใจ่ณชๅ•ใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใซใ€็”ปๅƒใซ้–ขใ™ใ‚‹่ณชๅ•ใซๅ›ž็ญ”ใ™ใ‚‹ | ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ | pipeline(task="vqa") | | ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚ฏใ‚จใ‚นใƒใƒงใƒณๅฟœ็ญ” | ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใจ่ณชๅ•ใŒไธŽใˆใ‚‰ใ‚ŒใŸๅ ดๅˆใซใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใซ้–ขใ™ใ‚‹่ณชๅ•ใซๅ›ž็ญ”ใ™ใ‚‹ | ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ | pipeline(task="document-question-answering") | | ็”ปๅƒใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒ‹ใƒณใ‚ฐ | ไธŽใˆใ‚‰ใ‚ŒใŸ็”ปๅƒใซใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒณใ‚’็”Ÿๆˆใ™ใ‚‹ | ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ | pipeline(task="image-to-text") | ใพใšใ€[`pipeline`] ใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ไฝœๆˆใ—ใ€ไฝฟ็”จใ—ใŸใ„ใ‚ฟใ‚นใ‚ฏใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€ใ‚ปใƒณใƒใƒกใƒณใƒˆๅˆ†ๆžใฎใŸใ‚ใซ [`pipeline`] ใ‚’ไฝฟ็”จใ™ใ‚‹ไพ‹ใ‚’็คบใ—ใพใ™๏ผš ```python >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` [`pipeline`]ใฏใ€ๆ„Ÿๆƒ…ๅˆ†ๆžใฎใŸใ‚ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ[ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)ใจใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆใ‚ญใƒฃใƒƒใ‚ทใƒฅใ—ใ€ไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใงใ€`classifier`ใ‚’ๅฏพ่ฑกใฎใƒ†ใ‚ญใ‚นใƒˆใซไฝฟ็”จใงใใพใ™๏ผš ```python >>> classifier("็งใŸใกใฏ๐Ÿค— Transformersใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใŠ่ฆ‹ใ›ใงใใฆใจใฆใ‚‚ๅฌ‰ใ—ใ„ใงใ™ใ€‚") [{'label': 'POSITIVE', 'score': 0.9998}] ``` ่ค‡ๆ•ฐใฎๅ…ฅๅŠ›ใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€[`pipeline`]ใซๅ…ฅๅŠ›ใ‚’ใƒชใ‚นใƒˆใจใ—ใฆๆธกใ—ใฆใ€่พžๆ›ธใฎใƒชใ‚นใƒˆใ‚’่ฟ”ใ—ใพใ™๏ผš ```py >>> results = classifier(["๐Ÿค— Transformersใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ”็ดนไป‹ใงใใฆ้žๅธธใซๅฌ‰ใ—ใ„ใงใ™ใ€‚", "ๅซŒใ„ใซใชใ‚‰ใชใ„ใงใปใ—ใ„ใงใ™ใ€‚"]) >>> for result in results: ... 
The [`pipeline`] can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task:

```python
>>> import torch
>>> from transformers import pipeline

>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```

Load an audio dataset you'd like to iterate over (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details). For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:

```python
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")  # doctest: +IGNORE_RESULT
```

Make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```

The audio files are automatically loaded and resampled when calling the `"audio"` column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:

```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS', 'HOW DO I FURN A JOINA COUT']
```

For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list to avoid loading all the inputs in memory. Take a look at the [pipeline API reference](./main_classes/pipelines) for more information.
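For instance, a minimal sketch (reusing `dataset` and `speech_recognizer` from above) that streams examples one at a time instead of materializing a list:

```py
>>> def audio_data():
...     for example in dataset:
...         yield example["audio"]

>>> for prediction in speech_recognizer(audio_data()):
...     print(prediction["text"])  # doctest: +SKIP
```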
### Use another model and tokenizer in the pipeline

The [`pipeline`] can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis you can use for French text:

```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```

<frameworkcontent>
<pt>
Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` in the next section):

```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` in the next section):

```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>

Specify the model and tokenizer in the [`pipeline`], and now you can apply the `classifier` on French text:

```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```

If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our [finetuning tutorial](./training) to learn how. Finally, after you've finetuned your pretrained model, please consider sharing it on the Hub with the community to help democratize machine learning! 🤗
## AutoClass

<Youtube id="AhChOFRegn4"/>

Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`] you used above. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and its associated preprocessing class.

Let's return to the example from the previous section and see how you can use an `AutoClass` to replicate the results of the [`pipeline`].

### AutoTokenizer

A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the [tokenizer summary](./tokenizer_summary)). The most important thing to remember is that you need to instantiate a tokenizer with the same model name to make sure you're using the same tokenization rules the model was pretrained with.

Load a tokenizer with [`AutoTokenizer`]:

```python
>>> from transformers import AutoTokenizer

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Pass your text to the tokenizer:

```python
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer returns a dictionary containing:

- [input_ids](./glossary#input-ids): numerical representations of your tokens.
- [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.
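To see exactly what the model will read, you can map the ids back to text — a quick sketch; note that `decode` also surfaces the special `[CLS]`/`[SEP]` tokens the tokenizer added around the sentence:

```py
>>> tokenizer.decode(encoding["input_ids"])  # doctest: +SKIP
```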
Transformersใฏไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใฎใ‚ทใƒณใƒ—ใƒซใง็ตฑไธ€ใ•ใ‚ŒใŸๆ–นๆณ•ใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ใ“ใ‚Œใฏใ€[`TFAutoModel`]ใ‚’[`AutoTokenizer`]ใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใฎใจๅŒใ˜ใ‚ˆใ†ใซใƒญใƒผใƒ‰ใงใใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ ๅ”ฏไธ€ใฎ้•ใ„ใฏใ€ใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใŸ[`TFAutoModel`]ใ‚’้ธๆŠžใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆ๏ผˆใพใŸใฏใ‚ทใƒผใ‚ฑใƒณใ‚น๏ผ‰ๅˆ†้กžใฎๅ ดๅˆใ€[`TFAutoModelForSequenceClassification`]ใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[`AutoModel`]ใ‚ฏใƒฉใ‚นใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ‚ฟใ‚นใ‚ฏใซ้–ขใ™ใ‚‹ๆƒ…ๅ ฑใฏใ€[ใ‚ฟใ‚นใ‚ฏใฎๆฆ‚่ฆ](./task_summary)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> ๆฌกใซใ€ๅ‰ๅ‡ฆ็†ๆธˆใฟใฎใƒใƒƒใƒใ‚’็›ดๆŽฅใƒขใƒ‡ใƒซใซๆธกใ—ใพใ™ใ€‚ใƒ†ใƒณใ‚ฝใƒซใ‚’ใใฎใพใพๆธกใ™ใ“ใจใŒใงใใพใ™๏ผš ```python >>> tf_outputs = tf_model(tf_batch) ``` ใƒขใƒ‡ใƒซใฏ`logits`ๅฑžๆ€งใซๆœ€็ต‚็š„ใชใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚`logits`ใซใ‚ฝใƒ•ใƒˆใƒžใƒƒใ‚ฏใ‚น้–ขๆ•ฐใ‚’้ฉ็”จใ—ใฆ็ขบ็އใ‚’ๅ–ๅพ—ใ—ใพใ™๏ผš ```python >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> ๐Ÿค— Transformersใฎใ™ในใฆใฎใƒขใƒ‡ใƒซ๏ผˆPyTorchใพใŸใฏTensorFlow๏ผ‰ใฏใ€ๆœ€็ต‚็š„ใชๆดปๆ€งๅŒ–้–ขๆ•ฐ๏ผˆsoftmaxใชใฉ๏ผ‰*ๅ‰*ใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚ ๆœ€็ต‚็š„ใชๆดปๆ€งๅŒ–้–ขๆ•ฐใฏใ€ใ—ใฐใ—ใฐๆๅคฑใจ็ตๅˆใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใงใ™ใ€‚ใƒขใƒ‡ใƒซใฎๅ‡บๅŠ›ใฏ็‰นๅˆฅใชใƒ‡ใƒผใ‚ฟใ‚ฏใƒฉใ‚นใงใ‚ใ‚Šใ€ใใฎๅฑžๆ€งใฏIDEใง่‡ชๅ‹•่ฃœๅฎŒใ•ใ‚Œใพใ™ใ€‚ ใƒขใƒ‡ใƒซใฎๅ‡บๅŠ›ใฏใ€ใ‚ฟใƒ—ใƒซใพใŸใฏ่พžๆ›ธใฎใ‚ˆใ†ใซๅ‹•ไฝœใ—ใพใ™๏ผˆๆ•ดๆ•ฐใ€ใ‚นใƒฉใ‚คใ‚นใ€ใพใŸใฏๆ–‡ๅญ—ๅˆ—ใงใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใ‚’ไป˜ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™๏ผ‰ใ€‚ ใ“ใฎๅ ดๅˆใ€Noneใงใ‚ใ‚‹ๅฑžๆ€งใฏ็„ก่ฆ–ใ•ใ‚Œใพใ™ใ€‚ </Tip> ### Save a Model <frameworkcontent> <pt> ใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใŸใ‚‰ใ€[`PreTrainedModel.save_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจๅ…ฑใซไฟๅญ˜ใงใใพใ™๏ผš ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` ๅ†ใณใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ๆบ–ๅ‚™ใŒใงใใŸใ‚‰ใ€[`PreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆๅ†ๅบฆใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> ใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใŸใ‚‰ใ€ใใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใงใใพใ™ใ€‚[`TFPreTrainedModel.save_pretrained`]ใ‚’ไฝฟ็”จใ—ใพใ™๏ผš ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` ใƒขใƒ‡ใƒซใ‚’ๅ†ๅบฆไฝฟ็”จใ™ใ‚‹ๆบ–ๅ‚™ใŒใงใใŸใ‚‰ใ€[`TFPreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆๅ†ๅบฆใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> ๐Ÿค— Transformersใฎ็‰นใซ็ด ๆ™ดใ‚‰ใ—ใ„ๆฉŸ่ƒฝใฎไธ€ใคใฏใ€ใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใ—ใ€ใใ‚Œใ‚’PyTorchใƒขใƒ‡ใƒซใพใŸใฏTensorFlowใƒขใƒ‡ใƒซใจใ—ใฆๅ†ใƒญใƒผใƒ‰ใงใใ‚‹ใ“ใจใงใ™ใ€‚ 
`from_pt`ใพใŸใฏ`from_tf`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ้–“ใงๅค‰ๆ›ใงใใพใ™๏ผš <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Custom model builds ใƒขใƒ‡ใƒซใ‚’ๆง‹็ฏ‰ๆ–นๆณ•ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใซใฏใ€ใƒขใƒ‡ใƒซใฎ่จญๅฎšใ‚ฏใƒฉใ‚นใ‚’ๅค‰ๆ›ดใงใใพใ™ใ€‚่จญๅฎšใฏใƒขใƒ‡ใƒซใฎๅฑžๆ€งใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€้š ใ‚Œๅฑคใฎๆ•ฐใ‚„ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒ˜ใƒƒใƒ‰ใฎๆ•ฐใชใฉใŒใ“ใ‚Œใซๅซใพใ‚Œใพใ™ใ€‚ใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใ‚ฏใƒฉใ‚นใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹้š›ใซใฏใ€ใ‚ผใƒญใ‹ใ‚‰ๅง‹ใ‚ใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎๅฑžๆ€งใฏใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚Œใ€ๆœ‰ๆ„็พฉใช็ตๆžœใ‚’ๅพ—ใ‚‹ใŸใ‚ใซใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆœ€ๅˆใซ[`AutoConfig`]ใ‚’ใ‚คใƒณใƒใƒผใƒˆใ—ใ€ๅค‰ๆ›ดใ—ใŸใ„ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚[`AutoConfig.from_pretrained`]ๅ†…ใงใ€ๅค‰ๆ›ดใ—ใŸใ„ๅฑžๆ€ง๏ผˆไพ‹๏ผšใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒ˜ใƒƒใƒ‰ใฎๆ•ฐ๏ผ‰ใ‚’ๆŒ‡ๅฎšใงใใพใ™๏ผš ```python >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> [`AutoModel.from_config`]ใ‚’ไฝฟ็”จใ—ใฆใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```python >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> ใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ™ใ‚‹ใซใฏใ€[`TFAutoModel.from_config`]ใ‚’ไฝฟ็”จใ—ใพใ™๏ผš ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> [ใ‚ซใ‚นใ‚ฟใƒ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ไฝœๆˆ](./create_a_model)ใ‚ฌใ‚คใƒ‰ใ‚’ๅ‚็…งใ—ใฆใ€ใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆใฎ่ฉณ็ดฐๆƒ…ๅ ฑใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ## Trainer - PyTorchๅ‘ใ‘ใฎๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ— ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏๆจ™ๆบ–ใฎ[`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)ใงใ‚ใ‚‹ใŸใ‚ใ€้€šๅธธใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใงไฝฟ็”จใงใใพใ™ใ€‚ ็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ไฝœๆˆใงใใพใ™ใŒใ€๐Ÿค— TransformersใฏPyTorchๅ‘ใ‘ใซ[`Trainer`]ใ‚ฏใƒฉใ‚นใ‚’ๆไพ›ใ—ใฆใŠใ‚Šใ€ๅŸบๆœฌ็š„ใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใซๅŠ ใˆใฆใ€ ๅˆ†ๆ•ฃใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ€ๆททๅˆ็ฒพๅบฆใชใฉใฎๆฉŸ่ƒฝใฎ่ฟฝๅŠ ใ‚’่กŒใฃใฆใ„ใพใ™ใ€‚ ใ‚ฟใ‚นใ‚ฏใซๅฟœใ˜ใฆใ€้€šๅธธใฏ[`Trainer`]ใซไปฅไธ‹ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆธกใ—ใพใ™๏ผš 1. [`PreTrainedModel`]ใพใŸใฏ[`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)ใ‹ใ‚‰ๅง‹ใ‚ใพใ™๏ผš ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`]ใซใฏใ€ๅค‰ๆ›ดใงใใ‚‹ใƒขใƒ‡ใƒซใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒๅซใพใ‚ŒใฆใŠใ‚Šใ€ๅญฆ็ฟ’็އใ€ใƒใƒƒใƒใ‚ตใ‚คใ‚บใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚จใƒใƒƒใ‚ฏๆ•ฐใชใฉใŒๅค‰ๆ›ดใงใใพใ™ใ€‚ๆŒ‡ๅฎšใ—ใชใ„ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใŒไฝฟ็”จใ•ใ‚Œใพใ™๏ผš ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... 
## Trainer - a PyTorch optimized training loop

All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [`Trainer`] class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.

Depending on your task, you'll typically pass the following parameters to [`Trainer`]:

1. Start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

2. [`TrainingArguments`] contains the model hyperparameters you can change, like the learning rate, batch size, and the number of epochs to train for. The default values are used if you don't specify any training arguments:

```py
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(
...     output_dir="path/to/save/folder/",
...     learning_rate=2e-5,
...     per_device_train_batch_size=8,
...     per_device_eval_batch_size=8,
...     num_train_epochs=2,
... )
```

3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

4. Load a dataset:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("rotten_tomatoes")  # doctest: +IGNORE_RESULT
```

5. Create a function to tokenize the dataset:

```python
>>> def tokenize_dataset(dataset):
...     return tokenizer(dataset["text"])
```

Then apply it over the entire dataset with [`~datasets.Dataset.map`]:

```python
>>> dataset = dataset.map(tokenize_dataset, batched=True)
```

6. A [`DataCollatorWithPadding`] to create a batch of examples from your dataset:

```py
>>> from transformers import DataCollatorWithPadding

>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```

Now gather all these classes in [`Trainer`]:

```python
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=dataset["train"],
...     eval_dataset=dataset["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )  # doctest: +SKIP
```

When you're ready, call [`~Trainer.train`] to start training:

```py
>>> trainer.train()  # doctest: +SKIP
```

<Tip>

For tasks that use a sequence-to-sequence model, like translation or summarization, use the [`Seq2SeqTrainer`] and [`Seq2SeqTrainingArguments`] classes instead.

</Tip>

You can customize the training loop behavior by subclassing the methods inside [`Trainer`]. This allows you to customize features such as the loss function, the optimizer, and the scheduler. Take a look at the [`Trainer`] reference for which methods can be subclassed.

The other way to customize the training loop is by using [Callbacks](./main_classes/callbacks). You can use callbacks to integrate with other libraries and to inspect the training loop in order to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself; to customize something like the loss function, you need to subclass [`Trainer`] instead.
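As a minimal sketch of that subclassing pattern — the two-class weighting here is invented purely for illustration — overriding `compute_loss` might look like this:

```py
>>> import torch
>>> from transformers import Trainer

>>> class WeightedLossTrainer(Trainer):
...     def compute_loss(self, model, inputs, return_outputs=False):
...         labels = inputs.pop("labels")
...         outputs = model(**inputs)
...         # Hypothetical per-class weights for a binary task, for illustration only.
...         weights = torch.tensor([1.0, 2.0], device=outputs.logits.device)
...         loss = torch.nn.CrossEntropyLoss(weight=weights)(outputs.logits, labels)
...         return (loss, outputs) if return_outputs else loss
```

You would then instantiate `WeightedLossTrainer` with exactly the same arguments as the `Trainer` above.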
## Train with TensorFlow

All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so they can be trained in TensorFlow with the [Keras](https://keras.io/) API. 🤗 Transformers provides the [`~TFPreTrainedModel.prepare_tf_dataset`] method to easily load your dataset as a `tf.data.Dataset` so you can start training right away with Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) methods.

1. Start with a [`TFPreTrainedModel`] or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model):

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

3. Create a function to tokenize the dataset:

```python
>>> def tokenize_dataset(dataset):
...     return tokenizer(dataset["text"])  # doctest: +SKIP
```

4. Apply the tokenizer over the entire dataset with [`~datasets.Dataset.map`], and then pass the dataset and tokenizer to [`~TFPreTrainedModel.prepare_tf_dataset`]. You can also change the batch size and shuffle the dataset here:

```python
>>> dataset = dataset.map(tokenize_dataset)  # doctest: +SKIP
>>> tf_dataset = model.prepare_tf_dataset(
...     dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
... )  # doctest: +SKIP
```

5. When you're ready, call `compile` and `fit` to start training. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```python
>>> from tensorflow.keras.optimizers import Adam

>>> model.compile(optimizer=Adam(3e-5))  # No loss argument!
>>> model.fit(tf_dataset)  # doctest: +SKIP
```

## What's next?

Now that you've completed the 🤗 Transformers quick tour, check out our guides to learn how to do more specific things like writing a custom model, finetuning a model for a task, and how to train a model with a script. If you want to learn more about 🤗 Transformers core concepts, take a look at our Conceptual Guides!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/performance.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Performance and Scalability

Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than is available, or training may be slow. In the deployment phase, the model can struggle to handle the required throughput in a production environment.

This documentation aims to help you overcome these challenges and find the optimal settings for your use case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training, or CPU vs. GPU for inference.

Use this document as your starting point to navigate further to the methods that match your scenario.

## Training

Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU.

* [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, and more.
* [Multi-GPU training section](perf_train_gpu_many): learn about further optimization methods that apply to a multi-GPU setting, such as data, tensor, and pipeline parallelism.
* [CPU training section](perf_train_cpu): learn about mixed precision training on CPU.
* [Efficient training on multiple CPUs](perf_train_cpu_many): learn about distributed CPU training.
* [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for a walkthrough of training on TPUs and using XLA.
* [Custom hardware for training](perf_hardware): find tips and tricks for building your own deep learning rig.
* [Hyperparameter search using Trainer API](hpo_train)
## Inference

Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps for running inference on CPU and on single/multi-GPU setups.

* [Inference on a single CPU](perf_infer_cpu)
* [Inference on a single GPU](perf_infer_gpu_one)
* [Multi-GPU inference](perf_infer_gpu_many)
* [XLA integration for TensorFlow models](tf_xla)

## Training and inference

Here you'll find techniques, tips, and tricks that apply whether you are training a model or running inference with it.

* [Instantiating a big model](big_models)
* [Troubleshooting performance issues](debugging)

## Contribute

This document is far from being complete, and a lot more needs to be added, so if you have additions or corrections to make, please don't hesitate to open a PR, or start an Issue so we can discuss the details there.

When making contributions that say A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_infer_cpu.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Efficient Inference on CPU

This guide focuses on inferencing large models efficiently on CPU.

## `BetterTransformer` for faster inference

We have recently integrated `BetterTransformer` for faster inference on CPU for text, image, and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.

## PyTorch JIT-mode (TorchScript)

TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency.

Compared with the default eager mode, jit mode in PyTorch typically yields better performance for model inference through optimization methodologies such as operator fusion.

For a gentle introduction to TorchScript, see the [Introduction to PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules).
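As a quick illustration of tracing — a sketch, using a DistilBERT checkpoint and dummy inputs as stand-ins — a model can be exported to TorchScript like this:

```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# `torchscript=True` configures the model to return trace-friendly tuple outputs.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", torchscript=True)
model.eval()

# Trace with example inputs, then save the serialized program.
inputs = tokenizer("A TorchScript tracing example", return_tensors="pt")
traced_model = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced_model, "traced_model.pt")
```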
### IPEX Graph Optimization with JIT-mode

Intel® Extension for PyTorch provides further optimizations in jit mode for Transformers series models, and we highly recommend using it with jit mode. Some frequently used operator patterns from Transformers models are already supported by Intel® Extension for PyTorch jit-mode fusions. Those fusion patterns, like Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm fusion and so on, are enabled and perform well. The benefit of the fusion is delivered to users in a transparent fashion. According to our analysis, ~70% of the most popular NLP tasks in question-answering, text-classification, and token-classification can get performance benefits from these fusion patterns for both Float32 precision and BFloat16 mixed precision.

Check more detailed information for [IPEX Graph Optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html).

#### IPEX installation:

IPEX releases follow PyTorch; check the approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/).

### Usage of JIT-mode

To enable JIT-mode in Trainer for evaluation or prediction, users should add `jit_mode_eval` to the Trainer command arguments.

<Tip warning={true}>

For PyTorch >= 1.14.0, jit mode could benefit any model for prediction and evaluation, since dict inputs are supported in jit.trace.

For PyTorch < 1.14.0, jit mode could benefit a model whose forward parameter order matches the tuple input order in jit.trace, such as a question-answering model. In cases where the forward parameter order does not match the tuple input order in jit.trace, like a text-classification model, jit.trace will fail, and we capture this with an exception to make it fall back. Logging is used to notify users.

</Tip>

Take the [Transformers question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) as a reference:

- Inference using jit mode on CPU:

<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--jit_mode_eval </b></pre>

- Inference with IPEX using jit mode on CPU:

<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--use_ipex \</b>
<b>--jit_mode_eval</b></pre>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/custom_models.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Sharing custom models

The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs.

If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we'll show you how to write a custom model and its configuration so it can be used inside Transformers, and how to share it with the community (with the code it relies on) so that anyone can use it, even if it's not present in the library.

To illustrate this, we'll use a ResNet model, by wrapping the ResNet class of the [timm library](https://github.com/rwightman/pytorch-image-models) into a [`PreTrainedModel`].

## Writing a custom configuration

Before we dive into the model, let's first write its configuration. The configuration of a model is an object that holds all the information needed to build the model. As we'll see in the next section, the model can only take a `config` to be initialized, so we really need that object to be as complete as possible.

In our example, we'll take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking their validity.

```python
from transformers import PretrainedConfig
from typing import List


class ResnetConfig(PretrainedConfig):
    model_type = "resnet"

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
        if stem_type not in ["", "deep", "deep-tiered"]:
            raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")

        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
```
ใ‚’็ถ™ๆ‰ฟใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ใ‚ใชใŸใฎ `PretrainedConfig` ใฎ `__init__` ใฏไปปๆ„ใฎ kwargs ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ใ“ใ‚Œใ‚‰ใฎ `kwargs` ใฏ่ฆชใ‚ฏใƒฉใ‚นใฎ `__init__` ใซๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ็ถ™ๆ‰ฟใฏใ€๐Ÿค— Transformers ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ™ในใฆใฎๆฉŸ่ƒฝใ‚’ๅ–ๅพ—ใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใงใ™ใ€‚ไป–ใฎ2ใคใฎๅˆถ็ด„ใฏใ€ `PretrainedConfig` ใŒ่จญๅฎšใ—ใฆใ„ใ‚‹ใƒ•ใ‚ฃใƒผใƒซใƒ‰ไปฅๅค–ใซใ‚‚ๅคšใใฎใƒ•ใ‚ฃใƒผใƒซใƒ‰ใ‚’ๆŒใฃใฆใ„ใ‚‹ใ“ใจใ‹ใ‚‰ๆฅใฆใ„ใพใ™ใ€‚ `from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใง่จญๅฎšใ‚’ๅ†ใƒญใƒผใƒ‰ใ™ใ‚‹ๅ ดๅˆใ€ใ“ใ‚Œใ‚‰ใฎใƒ•ใ‚ฃใƒผใƒซใƒ‰ใฏใ‚ใชใŸใฎ่จญๅฎšใซๅ—ใ‘ๅ…ฅใ‚Œใ‚‰ใ‚Œใ€ ใใฎๅพŒใ€่ฆชใ‚ฏใƒฉใ‚นใซ้€ไฟกใ•ใ‚Œใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ่จญๅฎšใฎ `model_type` ใ‚’ๅฎš็พฉใ™ใ‚‹ใ“ใจ๏ผˆใ“ใ“ใงใฏ `model_type="resnet"`๏ผ‰ใฏใ€ ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใซใƒขใƒ‡ใƒซใ‚’็™ป้Œฒใ—ใŸใ„ๅ ดๅˆใ‚’้™คใ„ใฆใฏๅฟ…้ ˆใงใฏใ‚ใ‚Šใพใ›ใ‚“๏ผˆๆœ€ๅพŒใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ๅ‚็…ง๏ผ‰ใ€‚ ใ“ใ‚Œใงใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎไป–ใฎใƒขใƒ‡ใƒซ่จญๅฎšใจๅŒๆง˜ใซใ€่จญๅฎšใ‚’็ฐกๅ˜ใซไฝœๆˆใ—ใฆไฟๅญ˜ใงใใพใ™ใ€‚ ไปฅไธ‹ใฏใ€resnet50d ่จญๅฎšใ‚’ไฝœๆˆใ—ใฆไฟๅญ˜ใ™ใ‚‹ๆ–นๆณ•ใฎไพ‹ใงใ™๏ผš ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€`custom-resnet` ใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซ `config.json` ใจใ„ใ†ๅๅ‰ใฎใƒ•ใ‚กใ‚คใƒซใŒไฟๅญ˜ใ•ใ‚Œใพใ™ใ€‚ใใฎๅพŒใ€`from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆๆง‹ๆˆใ‚’ๅ†ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` ใพใŸใ€[`PretrainedConfig`] ใ‚ฏใƒฉใ‚นใฎไป–ใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใŸใจใˆใฐใ€[`~PretrainedConfig.push_to_hub`] ใ‚’ไฝฟ็”จใ—ใฆใ€่จญๅฎšใ‚’็›ดๆŽฅ Hub ใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ## Writing a custom model ResNet ใฎ่จญๅฎšใŒใงใใŸใฎใงใ€ใƒขใƒ‡ใƒซใ‚’ๆ›ธใๅง‹ใ‚ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๅฎŸ้š›ใซใฏ2ใคใฎใƒขใƒ‡ใƒซใ‚’ๆ›ธใใพใ™ใ€‚1ใคใฏใƒใƒƒใƒใฎ็”ปๅƒใ‹ใ‚‰้š ใ‚ŒใŸ็‰นๅพดใ‚’ๆŠฝๅ‡บใ™ใ‚‹ใƒขใƒ‡ใƒซ๏ผˆ[`BertModel`] ใฎใ‚ˆใ†ใชใ‚‚ใฎ๏ผ‰ใงใ€ใ‚‚ใ†1ใคใฏ็”ปๅƒๅˆ†้กžใซ้ฉใ—ใŸใƒขใƒ‡ใƒซ๏ผˆ[`BertForSequenceClassification`] ใฎใ‚ˆใ†ใชใ‚‚ใฎ๏ผ‰ใงใ™ใ€‚ ๅ‰่ฟฐใ—ใŸใ‚ˆใ†ใซใ€ใ“ใฎไพ‹ใ‚’ใ‚ทใƒณใƒ—ใƒซใซไฟใคใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใฎ็ทฉใ„ใƒฉใƒƒใƒ‘ใƒผใฎใฟใ‚’ๆ›ธใใพใ™ใ€‚ใ“ใฎใ‚ฏใƒฉใ‚นใ‚’ๆ›ธใๅ‰ใซ่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ๅ”ฏไธ€ใฎใ“ใจใฏใ€ใƒ–ใƒญใƒƒใ‚ฏใ‚ฟใ‚คใƒ—ใจๅฎŸ้š›ใฎใƒ–ใƒญใƒƒใ‚ฏใ‚ฏใƒฉใ‚นใฎ้–“ใฎใƒžใƒƒใƒ—ใงใ™ใ€‚ใใฎๅพŒใ€ใ™ในใฆใ‚’ `ResNet` ใ‚ฏใƒฉใ‚นใซๆธกใ—ใฆ่จญๅฎšใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ๅฎš็พฉใ—ใพใ™๏ผš ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` ็”ปๅƒใ‚’ๅˆ†้กžใ™ใ‚‹ใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€forwardใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใ ใ‘ใงใ™๏ผš ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): 
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config` (a bit like when you write a regular `torch.nn.Module`). The line that sets the `config_class` is not mandatory, unless you want to register your model with the auto classes (see the last section).

<Tip>

If your model is very similar to a model inside the library, you can re-use the same configuration as that model.

</Tip>

You can have your model return anything you want, but returning a dictionary that includes the loss when labels are passed (like we did for `ResnetModelForImageClassification`) will make your model directly usable inside the [`Trainer`] class. Using another output format is fine, as long as you are planning on using your own training loop or another library for training.

Now that we have our model class, let's create one:

```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```

Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`]. In the next section, we will see how to push the model weights to the Hugging Face Hub alongside the code of the model. But first, let's load some pretrained weights inside the model.

In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of resnet50d. Since our model is just a wrapper around it, transferring those weights is easy:

```py
import timm

pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```

Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the code of the model gets saved as well.

## Sending the code to the Hub

<Tip warning={true}>

This API is experimental and may have some slight breaking changes in the next releases.

</Tip>

First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files, as long as all the files are in the same directory (we don't support submodules for this feature yet).
ใ“ใฎไพ‹ใงใฏใ€็พๅœจใฎไฝœๆฅญใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชๅ†…ใซๅๅ‰ใŒใ€Œresnet_modelใ€ใฎใƒ•ใ‚ฉใƒซใƒ€ใ‚’ไฝœๆˆใ—ใ€ใใฎไธญใซ`modeling_resnet.py`ใƒ•ใ‚กใ‚คใƒซใจ`configuration_resnet.py`ใƒ•ใ‚กใ‚คใƒซใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใซใฏ`ResnetConfig`ใฎใ‚ณใƒผใƒ‰ใŒๅซใพใ‚Œใ€ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใซใฏ`ResnetModel`ใจ`ResnetModelForImageClassification`ใฎใ‚ณใƒผใƒ‰ใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` `__init__.py`ใฏ็ฉบใงใ‚ใฃใฆใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚PythonใŒ`resnet_model`ใ‚’ใƒขใ‚ธใƒฅใƒผใƒซใจใ—ใฆๆคœๅ‡บใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใซๅญ˜ๅœจใ—ใพใ™ใ€‚ <Tip warning={true}> ใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใ‚’ใ‚ณใƒ”ใƒผใ™ใ‚‹ๅ ดๅˆใ€ใƒ•ใ‚กใ‚คใƒซใฎๅ…ˆ้ ญใซใ‚ใ‚‹ใ™ในใฆใฎ็›ธๅฏพใ‚คใƒณใƒใƒผใƒˆใ‚’`transformers`ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใ‹ใ‚‰ใ‚คใƒณใƒใƒผใƒˆใซ็ฝฎใๆ›ใˆใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> ๆ—ขๅญ˜ใฎ่จญๅฎšใ‚„ใƒขใƒ‡ใƒซใ‚’ๅ†ๅˆฉ็”จ๏ผˆใพใŸใฏใ‚ตใƒ–ใ‚ฏใƒฉใ‚นๅŒ–๏ผ‰ใงใใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ใŸใ‚ใซใ€ๆฌกใฎๆ‰‹้ †ใซๅพ“ใฃใฆใใ ใ•ใ„๏ผšใพใšใ€ๆ–ฐใ—ใไฝœๆˆใ—ใŸใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ResNetใƒขใƒ‡ใƒซใจ่จญๅฎšใ‚’ใ‚คใƒณใƒใƒผใƒˆใ—ใพใ™๏ผš ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` ๆฌกใซใ€`save_pretrained`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ“ใ‚Œใ‚‰ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎใ‚ณใƒผใƒ‰ใƒ•ใ‚กใ‚คใƒซใ‚’ใ‚ณใƒ”ใƒผใ—ใ€็‰นๅฎšใฎAutoใ‚ฏใƒฉใ‚น๏ผˆ็‰นใซใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผ‰ใซๆญฃใ—ใ็™ป้Œฒใ™ใ‚‹ใ‚ˆใ†ใƒฉใ‚คใƒ–ใƒฉใƒชใซๆŒ‡็คบใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆฌกใฎใ‚ˆใ†ใซๅฎŸ่กŒใ—ใพใ™๏ผš ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` ๆณจๆ„: ่จญๅฎšใซใคใ„ใฆใฏ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“๏ผˆ่จญๅฎš็”จใฎ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใฏ1ใคใ—ใ‹ใชใใ€[`AutoConfig`]ใงใ™๏ผ‰ใŒใ€ ใƒขใƒ‡ใƒซใซใคใ„ใฆใฏ็•ฐใชใ‚Šใพใ™ใ€‚ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใฏๅคšใใฎ็•ฐใชใ‚‹ใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใŸใ‚ใ€ ใƒขใƒ‡ใƒซใŒๆญฃ็ขบใช่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใฎใ†ใกใฉใ‚Œใซ้ฉใ—ใฆใ„ใ‚‹ใ‹ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใซใ€ๅ‰่ฟฐใฎใ‚ˆใ†ใซ่จญๅฎšใจใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ—ใ‚‡ใ†๏ผš ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ใƒขใƒ‡ใƒซใ‚’Hubใซ้€ไฟกใ™ใ‚‹ใซใฏใ€ใƒญใ‚ฐใ‚คใƒณใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใ‚ฟใƒผใƒŸใƒŠใƒซใงๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใพใ™๏ผš ```bash huggingface-cli login ``` ใพใŸใฏใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‹ใ‚‰๏ผš ```py from huggingface_hub import notebook_login notebook_login() ``` ๆฌกใซใ€ๆฌกใฎใ‚ˆใ†ใซใ—ใฆใ€็‹ฌ่‡ชใฎๅๅ‰็ฉบ้–“ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™๏ผˆใพใŸใฏใ€ใƒกใƒณใƒใƒผใงใ‚ใ‚‹็ต„็น”ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™๏ผ‰๏ผš ```py resnet50d.push_to_hub("custom-resnet50d") ``` ใƒขใƒ‡ใƒชใƒณใ‚ฐใฎ้‡ใฟใจJSONๅฝขๅผใฎๆง‹ๆˆใซๅŠ ใˆใฆใ€ใ“ใฎใƒ•ใ‚ฉใƒซใƒ€ใƒผใ€Œcustom-resnet50dใ€ๅ†…ใฎใƒขใƒ‡ใƒชใƒณใ‚ฐใŠใ‚ˆใณๆง‹ๆˆใ€Œ.pyใ€ใƒ•ใ‚กใ‚คใƒซใ‚‚ใ‚ณใƒ”ใƒผใ•ใ‚Œใ€็ตๆžœใฏHubใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใพใ—ใŸใ€‚็ตๆžœใฏใ“ใฎ[model repo](https://huggingface.co/sgugger/custom-resnet50d)ใง็ขบ่ชใงใใพใ™ใ€‚ 
่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Hubใธใฎใƒ—ใƒƒใ‚ทใƒฅๆ–นๆณ•](model_sharing)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Using a model with custom code ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใจ `from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒชใƒใ‚ธใƒˆใƒชๅ†…ใฎใ‚ซใ‚นใ‚ฟใƒ ใ‚ณใƒผใƒ‰ใƒ•ใ‚กใ‚คใƒซใจๅ…ฑใซไปปๆ„ใฎๆง‹ๆˆใ€ใƒขใƒ‡ใƒซใ€ใพใŸใฏใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ Hubใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹ใ™ในใฆใฎใƒ•ใ‚กใ‚คใƒซใจใ‚ณใƒผใƒ‰ใฏใƒžใƒซใ‚ฆใ‚งใ‚ขใฎใ‚นใ‚ญใƒฃใƒณใŒๅฎŸๆ–ฝใ•ใ‚Œใพใ™๏ผˆ่ฉณ็ดฐใฏ[Hubใ‚ปใ‚ญใƒฅใƒชใƒ†ใ‚ฃ](https://huggingface.co/docs/hub/security#malware-scanning)ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€ใ—ใ‹ใ—ใ€ไพ็„ถใจใ—ใฆๆ‚ชๆ„ใฎใ‚ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ—ใชใ„ใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใจไฝœ่€…ใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ `trust_remote_code=True` ใ‚’่จญๅฎšใ—ใฆใ‚ซใ‚นใ‚ฟใƒ ใ‚ณใƒผใƒ‰ใ‚’ๆŒใคใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™๏ผš ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` ใ‚ณใƒŸใƒƒใƒˆใƒใƒƒใ‚ทใƒฅใ‚’ใ€Œrevisionใ€ใจใ—ใฆๆธกใ™ใ“ใจใ‚‚ๅผทใๆŽจๅฅจใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฎไฝœ่€…ใŒใ‚ณใƒผใƒ‰ใ‚’ๆ‚ชๆ„ใฎใ‚ใ‚‹ๆ–ฐใ—ใ„่กŒใงๆ›ดๆ–ฐใ—ใชใ‹ใฃใŸใ“ใจใ‚’็ขบ่ชใงใใพใ™๏ผˆใƒขใƒ‡ใƒซใฎไฝœ่€…ใ‚’ๅฎŒๅ…จใซไฟก้ ผใ—ใฆใ„ใ‚‹ๅ ดๅˆใ‚’้™คใใพใ™๏ผ‰ใ€‚ ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` ใƒขใƒ‡ใƒซใƒชใƒใ‚ธใƒˆใƒชใฎใ‚ณใƒŸใƒƒใƒˆๅฑฅๆญดใ‚’ใƒ–ใƒฉใ‚ฆใ‚ธใƒณใ‚ฐใ™ใ‚‹้š›ใซใฏใ€ไปปๆ„ใฎใ‚ณใƒŸใƒƒใƒˆใฎใ‚ณใƒŸใƒƒใƒˆใƒใƒƒใ‚ทใƒฅใ‚’็ฐกๅ˜ใซใ‚ณใƒ”ใƒผใงใใ‚‹ใƒœใ‚ฟใƒณใŒใ‚ใ‚Šใพใ™ใ€‚ ## Registering a model with custom code to the auto classes ๐Ÿค— Transformersใ‚’ๆ‹กๅผตใ™ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝœๆˆใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€็‹ฌ่‡ชใฎใƒขใƒ‡ใƒซใ‚’ๅซใ‚ใ‚‹ใŸใ‚ใซ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใ‚’ๆ‹กๅผตใ—ใŸใ„ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใ‚ณใƒผใƒ‰ใ‚’Hubใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใจใฏ็•ฐใชใ‚Šใ€ใƒฆใƒผใ‚ถใƒผใฏใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’ๅ–ๅพ—ใ™ใ‚‹ใŸใ‚ใซใ‚ใชใŸใฎใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ ๏ผˆHubใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใ‚’่‡ชๅ‹•็š„ใซใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ใฎใจใฏๅฏพ็…ง็š„ใงใ™๏ผ‰ใ€‚ ๆง‹ๆˆใซๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใ‚ฟใ‚คใƒ—ใจ็•ฐใชใ‚‹ `model_type` ๅฑžๆ€งใŒใ‚ใ‚‹้™ใ‚Šใ€ใพใŸใ‚ใชใŸใฎใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใŒ้ฉๅˆ‡ใช `config_class` ๅฑžๆ€งใ‚’ๆŒใฃใฆใ„ใ‚‹้™ใ‚Šใ€ ๆฌกใฎใ‚ˆใ†ใซใใ‚Œใ‚‰ใ‚’่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใซ่ฟฝๅŠ ใงใใพใ™๏ผš ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` ๆณจๆ„: `AutoConfig` ใซใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใ‚’็™ป้Œฒใ™ใ‚‹้š›ใฎๆœ€ๅˆใฎๅผ•ๆ•ฐใฏใ€ใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใฎ `model_type` ใจไธ€่‡ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใพใŸใ€ไปปๆ„ใฎ่‡ชๅ‹•ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใซใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’็™ป้Œฒใ™ใ‚‹้š›ใฎๆœ€ๅˆใฎๅผ•ๆ•ฐใฏใ€ใใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฎ `config_class` ใจไธ€่‡ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚
<!--- Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Installation

Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline.

🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

* [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
* [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.

## Install with pip

You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoids compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux and macOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env/Scripts/activate
```

Now you're ready to install 🤗 Transformers with the following command:

```bash
pip install transformers
```

For CPU-support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:

```bash
pip install transformers[torch]
```

🤗 Transformers and TensorFlow 2.0:

```bash
pip install transformers[tf-cpu]
```

🤗 Transformers and Flax:

```bash
pip install transformers[flax]
```

Finally, check that 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

Then it will print the label and score:

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```

## Install from source

Install 🤗 Transformers from source with the following command:

```bash
pip install git+https://github.com/huggingface/transformers
```

This command installs the bleeding-edge `main` version rather than the latest stable version. The `main` version is useful for staying up to date with the latest developments, for instance when a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours to a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!

Check that 🤗 Transformers has been properly installed by running the following command:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```

## Editable install

You will need an editable install if you'd like to:

* Use the `main` version of the source code.
* Contribute to 🤗 Transformers and need to test changes to the code.

Clone the repository and install 🤗 Transformers with the following commands:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```

These commands link the folder you cloned the repository to with your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`.

<Tip warning={true}>

You must keep the `transformers` folder if you want to keep using the library.

</Tip>

You can then easily update your clone to the latest version of 🤗 Transformers with the following commands:

```bash
cd ~/transformers/
git pull
```

Your Python environment will find the `main` version of 🤗 Transformers on the next run.

## Install with conda

Install from the conda channel `conda-forge`:

```bash
conda install conda-forge::transformers
```

## Cache setup

Pretrained models are downloaded and locally cached at `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below to specify a different cache directory. Their priority corresponds to the following order:

1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Shell environment variable: `HF_HOME`.
3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`.
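For example, here is a minimal sketch of overriding the cache location from Python rather than the shell; the directory path is just a placeholder, and the variable must be set before `transformers` is imported, since the cache location is read at import time:

```py
import os

# Set before importing transformers; placeholder path, adjust to your system.
os.environ["TRANSFORMERS_CACHE"] = "/data/hf-cache"

from transformers import AutoModel

# This download should now land in /data/hf-cache instead of ~/.cache/huggingface/hub.
model = AutoModel.from_pretrained("bert-base-uncased")
```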
<Tip>

🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`.

</Tip>

## Offline mode

🤗 Transformers is able to run in a firewalled or offline environment by only using local files. Set the environment variable `TRANSFORMERS_OFFLINE=1` to enable this behavior.

<Tip>

Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow by setting the environment variable `HF_DATASETS_OFFLINE=1`.

</Tip>

For example, on a normal network firewalled to external instances, you would typically run a program with a command like the following:

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

Run this same program in an offline instance with:

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

The script should now run without hanging or waiting to time out, because it knows it should only look for local files.

### Fetch models and tokenizers to use offline

Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this:

* Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:

    1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory:

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
    ```

* Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:
ไปฎๆƒณ็’ฐๅขƒใซ`huggingface_hub`ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใพใ™: ```bash python -m pip install huggingface_hub ``` 2. ๆŒ‡ๅฎšใฎใƒ‘ใ‚นใซใƒ•ใ‚กใ‚คใƒซใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใซใ€[`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub)้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใงใ€[T0](https://huggingface.co/bigscience/T0_3B)ใƒขใƒ‡ใƒซใฎ`config.json`ใƒ•ใ‚กใ‚คใƒซใ‚’ๆŒ‡ๅฎšใฎใƒ‘ใ‚นใซใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใงใใพใ™: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` ใƒ•ใ‚กใ‚คใƒซใŒใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ•ใ‚Œใ€ใƒญใƒผใ‚ซใƒซใซใ‚ญใƒฃใƒƒใ‚ทใƒฅใ•ใ‚ŒใŸใ‚‰ใ€ใใฎใƒญใƒผใ‚ซใƒซใƒ‘ใ‚นใ‚’ๆŒ‡ๅฎšใ—ใฆใƒ•ใ‚กใ‚คใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใฆไฝฟ็”จใ—ใพใ™: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Hubใซไฟๅญ˜ใ•ใ‚Œใฆใ„ใ‚‹ใƒ•ใ‚กใ‚คใƒซใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ๆ–นๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream)ใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ </Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Training on TPU with TensorFlow

<Tip>

If you don't need long explanations and just want TPU code samples to get started with training, check out [our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)

</Tip>

### What is a TPU?

A TPU is a **Tensor Processing Unit**. They are hardware designed by Google, used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google's cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.

Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we'll make sure to flag them up when we get to them.

### What kinds of TPU are available?
ๆ–ฐใ—ใ„ใƒฆใƒผใ‚ถใƒผใฏใ€ใ•ใพใ–ใพใชTPUใจใใฎใ‚ขใ‚ฏใ‚ปใ‚นๆ–นๆณ•ใซ้–ขใ™ใ‚‹ๅน…ๅบƒใ„ๆƒ…ๅ ฑใซใ‚ˆใๆททไนฑใ—ใพใ™ใ€‚็†่งฃใ™ใ‚‹ใŸใ‚ใฎๆœ€ๅˆใฎ้‡่ฆใช้•ใ„ใฏใ€**TPUใƒŽใƒผใƒ‰**ใจ**TPU VM**ใฎ้•ใ„ใงใ™ใ€‚ **TPUใƒŽใƒผใƒ‰**ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ไบ‹ๅฎŸไธŠใƒชใƒขใƒผใƒˆใฎTPUใซ้–“ๆŽฅ็š„ใซใ‚ขใ‚ฏใ‚ปใ‚นใ—ใพใ™ใ€‚ๅˆฅๅ€‹ใฎVMใŒๅฟ…่ฆใงใ€ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใจใƒ‡ใƒผใ‚ฟใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๅˆๆœŸๅŒ–ใ—ใ€ใใ‚Œใ‚‰ใ‚’ใƒชใƒขใƒผใƒˆใƒŽใƒผใƒ‰ใซ่ปข้€ใ—ใพใ™ใ€‚Google ColabใงTPUใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€**TPUใƒŽใƒผใƒ‰**ใ‚นใ‚ฟใ‚คใƒซใงใ‚ขใ‚ฏใ‚ปใ‚นใ—ใฆใ„ใพใ™ใ€‚ TPUใƒŽใƒผใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใใ‚Œใซๆ…ฃใ‚Œใฆใ„ใชใ„ไบบใ€…ใซใฏใ‹ใชใ‚ŠไบˆๆœŸใ—ใชใ„ๅ‹•ไฝœใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผ็‰นใซใ€TPUใฏPythonใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใ„ใ‚‹ใƒžใ‚ทใƒณใจ็‰ฉ็†็š„ใซ็•ฐใชใ‚‹ใ‚ทใ‚นใƒ†ใƒ ใซ้…็ฝฎใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ใƒ‡ใƒผใ‚ฟใฏใƒญใƒผใ‚ซใƒซใƒžใ‚ทใƒณใซใƒญใƒผใ‚ซใƒซใงๆ ผ็ดใ•ใ‚Œใฆใ„ใ‚‹ใƒ‡ใƒผใ‚ฟใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใŒๅฎŒๅ…จใซๅคฑๆ•—ใ—ใพใ™ใ€‚ไปฃใ‚ใ‚Šใซใ€ใƒ‡ใƒผใ‚ฟใฏGoogle Cloud Storageใซๆ ผ็ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ“ใงใƒ‡ใƒผใ‚ฟใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใƒชใƒขใƒผใƒˆใฎTPUใƒŽใƒผใƒ‰ใงๅฎŸ่กŒใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใƒ‡ใƒผใ‚ฟใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ™ใ€‚ <Tip> ใ™ในใฆใฎใƒ‡ใƒผใ‚ฟใ‚’`np.ndarray`ใพใŸใฏ`tf.Tensor`ใจใ—ใฆใƒกใƒขใƒชใซๅŽใ‚ใ‚‹ใ“ใจใŒใงใใ‚‹ๅ ดๅˆใ€ColabใพใŸใฏTPUใƒŽใƒผใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใƒ‡ใƒผใ‚ฟใ‚’Google Cloud Storageใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ›ใšใซ`fit()`ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใพใ™ใ€‚ </Tip> <Tip> **๐Ÿค— Hugging Faceๅ›บๆœ‰ใฎใƒ’ใƒณใƒˆ๐Ÿค—:** TFใ‚ณใƒผใƒ‰ใฎไพ‹ใงใ‚ˆใ่ฆ‹ใ‚‹ใงใ‚ใ‚ใ†`Dataset.to_tf_dataset()`ใจใใฎ้ซ˜ใƒฌใƒ™ใƒซใฎใƒฉใƒƒใƒ‘ใƒผใงใ‚ใ‚‹`model.prepare_tf_dataset()`ใฏใ€TPUใƒŽใƒผใƒ‰ใงๅคฑๆ•—ใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€`tf.data.Dataset`ใ‚’ไฝœๆˆใ—ใฆใ„ใ‚‹ใซใ‚‚ใ‹ใ‹ใ‚ใ‚‰ใšใ€ใใ‚ŒใŒใ€Œ็ด”็ฒ‹ใชใ€`tf.data`ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใงใฏใชใใ€`tf.numpy_function`ใพใŸใฏ`Dataset.from_generator()`ใ‚’ไฝฟ็”จใ—ใฆๅŸบ็›คใจใชใ‚‹HuggingFace `Dataset`ใ‹ใ‚‰ใƒ‡ใƒผใ‚ฟใ‚’ใ‚นใƒˆใƒชใƒผใƒ ใง่ชญใฟ่พผใ‚€ใ“ใจใ‹ใ‚‰ใงใ™ใ€‚ใ“ใฎHuggingFace `Dataset`ใฏใƒญใƒผใ‚ซใƒซใƒ‡ใ‚ฃใ‚นใ‚ฏไธŠใฎใƒ‡ใƒผใ‚ฟใ‚’ใƒใƒƒใ‚ฏใ‚ขใƒƒใƒ—ใ—ใฆใŠใ‚Šใ€ใƒชใƒขใƒผใƒˆTPUใƒŽใƒผใƒ‰ใŒ่ชญใฟๅ–ใ‚‹ใ“ใจใŒใงใใชใ„ใŸใ‚ใงใ™ใ€‚ </Tip> TPUใซใ‚ขใ‚ฏใ‚ปใ‚นใ™ใ‚‹็ฌฌไบŒใฎๆ–นๆณ•ใฏใ€**TPU VM**ใ‚’ไป‹ใ—ใฆใงใ™ใ€‚TPU VMใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€TPUใŒๆŽฅ็ถšใ•ใ‚Œใฆใ„ใ‚‹ใƒžใ‚ทใƒณใซ็›ดๆŽฅๆŽฅ็ถšใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏGPU VMใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใ†ใฎใจๅŒๆง˜ใงใ™ใ€‚TPU VMใฏไธ€่ˆฌ็š„ใซใƒ‡ใƒผใ‚ฟใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซ้–ขใ—ใฆใฏ็‰นใซไฝœๆฅญใŒใ—ใ‚„ใ™ใใ€ไธŠ่จ˜ใฎใ™ในใฆใฎ่ญฆๅ‘ŠใฏTPU VMใซใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“๏ผ ใ“ใ‚Œใฏไธป่ฆณ็š„ใชๆ–‡ๆ›ธใงใ™ใฎใงใ€ใ“ใกใ‚‰ใฎๆ„่ฆ‹ใงใ™๏ผš**ๅฏ่ƒฝใช้™ใ‚ŠTPUใƒŽใƒผใƒ‰ใฎไฝฟ็”จใ‚’้ฟใ‘ใฆใใ ใ•ใ„ใ€‚** TPU VMใ‚ˆใ‚Šใ‚‚ๆททไนฑใ—ใ‚„ใ™ใใ€ใƒ‡ใƒใƒƒใ‚ฐใŒ้›ฃใ—ใ„ใงใ™ใ€‚ๅฐ†ๆฅ็š„ใซใฏใ‚ตใƒใƒผใƒˆใ•ใ‚Œใชใใชใ‚‹ๅฏ่ƒฝๆ€งใ‚‚ใ‚ใ‚Šใพใ™ - Googleใฎๆœ€ๆ–ฐใฎTPUใงใ‚ใ‚‹TPUv4ใฏใ€TPU VMใจใ—ใฆใฎใฟใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใŸใ‚ใ€TPUใƒŽใƒผใƒ‰ใฏๅฐ†ๆฅ็š„ใซใฏใ€Œใƒฌใ‚ฌใ‚ทใƒผใ€ใฎใ‚ขใ‚ฏใ‚ปใ‚นๆ–นๆณ•ใซใชใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใงใ™ใ€‚ใŸใ ใ—ใ€็„กๆ–™ใงTPUใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใฎใฏColabใจKaggle Kernelsใฎๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ใใฎๅ ดๅˆใ€ใฉใ†ใ—ใฆใ‚‚ไฝฟ็”จใ—ใชใ‘ใ‚Œใฐใชใ‚‰ใชใ„ๅ ดๅˆใฎๅ–ใ‚Šๆ‰ฑใ„ๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใ‚ˆใ†ใจใ—ใพใ™๏ผ่ฉณ็ดฐใฏ[TPUใฎไพ‹ใฎใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)ใง่ฉณ็ดฐใช่ชฌๆ˜Žใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ### What sizes of TPU are 
A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice**.

When you access a free TPU via Colab, you generally get a single v2-8 TPU.

### I keep hearing about this XLA thing. What's XLA, and how does it relate to TPUs?

XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`, as shown in the sketch below. If you don't get any errors and performance is good, that's a great sign that you're ready to move to TPU!

Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don't have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working as you expect them to.

<Tip>

XLA compiled code is usually faster - so even if you're not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though!

</Tip>

<Tip warning={true}>

**Tip born of bitter experience:** Although using `jit_compile=True` is a good way to get a speed boost and check whether your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually running on TPU. XLA compilation happens implicitly on TPU, so remember to remove that line before actually running your code on a TPU!

</Tip>
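For reference, here is a minimal sketch of what enabling XLA on CPU/GPU looks like; the tiny Keras model is just a placeholder, and any Keras model (including the TF models from `transformers`) can be used in its place:

```python
import tensorflow as tf

# Any Keras model works here; this toy model is only a placeholder.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])

# jit_compile=True asks Keras to compile the train/predict functions with XLA.
model.compile(optimizer="adam", loss="mse", jit_compile=True)

# A few dummy steps are enough to confirm the model compiles and runs under XLA.
x = tf.random.normal((8, 4))
y = tf.random.normal((8, 2))
model.fit(x, y, epochs=1)
```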
ๅคšใใฎๅ ดๅˆใ€ใ‚ณใƒผใƒ‰ใฏใ™ใงใซXLAไบ’ๆ›ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผใŸใ ใ—ใ€XLAใงใฏๅ‹•ไฝœใ™ใ‚‹้€šๅธธใฎTensorFlowใงใ‚‚ๅ‹•ไฝœใ—ใชใ„ใ„ใใคใ‹ใฎ่ฆ็ด ใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใซใ€3ใคใฎไธป่ฆใชใƒซใƒผใƒซใซใพใจใ‚ใฆใ„ใพใ™๏ผš <Tip> **๐Ÿค— HuggingFaceๅ›บๆœ‰ใฎใƒ’ใƒณใƒˆ๐Ÿค—:** TensorFlowใƒขใƒ‡ใƒซใจๆๅคฑ้–ขๆ•ฐใ‚’XLAไบ’ๆ›ใซๆ›ธใ็›ดใ™ใŸใ‚ใซๅคšใใฎๅŠชๅŠ›ใ‚’ๆ‰•ใฃใฆใ„ใพใ™ใ€‚้€šๅธธใ€ใƒขใƒ‡ใƒซใจๆๅคฑ้–ขๆ•ฐใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใƒซใƒผใƒซ๏ผƒ1ใจ๏ผƒ2ใซๅพ“ใฃใฆใ„ใ‚‹ใŸใ‚ใ€`transformers`ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ“ใ‚Œใ‚‰ใ‚’ใ‚นใ‚ญใƒƒใƒ—ใงใใพใ™ใ€‚ใŸใ ใ—ใ€็‹ฌ่‡ชใฎใƒขใƒ‡ใƒซใจๆๅคฑ้–ขๆ•ฐใ‚’่จ˜่ฟฐใ™ใ‚‹ๅ ดๅˆใฏใ€ใ“ใ‚Œใ‚‰ใฎใƒซใƒผใƒซใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„๏ผ </Tip> #### XLA Rule #1: Your code cannot have โ€œdata-dependent conditionalsโ€ ใ“ใ‚Œใฏใ€ไปปๆ„ใฎ`if`ใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใŒ`tf.Tensor`ๅ†…ใฎๅ€คใซไพๅญ˜ใ—ใฆใ„ใชใ„ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€ๆฌกใฎใ‚ณใƒผใƒ‰ใƒ–ใƒญใƒƒใ‚ฏใฏXLAใงใ‚ณใƒณใƒ‘ใ‚คใƒซใงใใพใ›ใ‚“๏ผ ```python if tf.reduce_sum(tensor) > 10: tensor = tensor / 2.0 ``` ใ“ใ‚Œใฏๆœ€ๅˆใฏ้žๅธธใซๅˆถ้™็š„ใซๆ€ใˆใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใŒใ€ใปใจใ‚“ใฉใฎใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใ‚ณใƒผใƒ‰ใฏใ“ใ‚Œใ‚’่กŒใ†ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚้€šๅธธใ€ใ“ใฎๅˆถ็ด„ใ‚’ๅ›ž้ฟใ™ใ‚‹ใŸใ‚ใซ`tf.cond`ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹๏ผˆใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฏใ“ใกใ‚‰ใ‚’ๅ‚็…ง๏ผ‰ใ€ๆกไปถใ‚’ๅ‰Š้™คใ—ใฆไปฃใ‚ใ‚ŠใซๆŒ‡็คบๅค‰ๆ•ฐใ‚’ไฝฟ็”จใ—ใŸใ‚Šใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๆฌกใฎใ‚ˆใ†ใซ๏ผš ```python sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32) tensor = tensor / (1.0 + sum_over_10) ``` ใ“ใฎใ‚ณใƒผใƒ‰ใฏใ€ไธŠ่จ˜ใฎใ‚ณใƒผใƒ‰ใจใพใฃใŸใๅŒใ˜ๅŠนๆžœใ‚’ๆŒใฃใฆใ„ใพใ™ใŒใ€ๆกไปถใ‚’ๅ›ž้ฟใ™ใ‚‹ใ“ใจใงใ€XLAใงๅ•้กŒใชใใ‚ณใƒณใƒ‘ใ‚คใƒซใงใใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™๏ผ #### XLA Rule #2: Your code cannot have โ€œdata-dependent shapesโ€ ใ“ใ‚Œใฏใ€ใ‚ณใƒผใƒ‰ๅ†…ใฎใ™ในใฆใฎ `tf.Tensor` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎๅฝข็ŠถใŒใ€ใใฎๅ€คใซไพๅญ˜ใ—ใชใ„ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€`tf.unique` ้–ขๆ•ฐใฏXLAใงใ‚ณใƒณใƒ‘ใ‚คใƒซใงใใชใ„ใฎใงใ€ใ“ใฎใƒซใƒผใƒซใซ้•ๅใ—ใพใ™ใ€‚ใชใœใชใ‚‰ใ€ใ“ใ‚Œใฏๅ…ฅๅŠ› `Tensor` ใฎไธ€ๆ„ใฎๅ€คใฎๅ„ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ๅซใ‚€ `tensor` ใ‚’่ฟ”ใ™ใŸใ‚ใงใ™ใ€‚ใ“ใฎๅ‡บๅŠ›ใฎๅฝข็Šถใฏใ€ๅ…ฅๅŠ› `Tensor` ใฎ้‡่ค‡ๅ…ทๅˆใซใ‚ˆใฃใฆ็•ฐใชใ‚‹ใŸใ‚ใ€XLAใฏใใ‚Œใ‚’ๅ‡ฆ็†ใ—ใชใ„ใ“ใจใซใชใ‚Šใพใ™๏ผ ไธ€่ˆฌ็š„ใซใ€ใปใจใ‚“ใฉใฎใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚ณใƒผใƒ‰ใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใƒซใƒผใƒซ๏ผƒ2ใซๅพ“ใ„ใพใ™ใ€‚ใŸใ ใ—ใ€ใ„ใใคใ‹ใฎไธ€่ˆฌ็š„ใชใ‚ฑใƒผใ‚นใงใฏๅ•้กŒใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚้žๅธธใซไธ€่ˆฌ็š„ใชใ‚ฑใƒผใ‚นใฎ1ใคใฏใ€**ใƒฉใƒ™ใƒซใƒžใ‚นใ‚ญใƒณใ‚ฐ**ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใงใ™ใ€‚ใƒฉใƒ™ใƒซใ‚’็„ก่ฆ–ใ—ใฆๆๅคฑใ‚’่จˆ็ฎ—ใ™ใ‚‹ๅ ดๆ‰€ใ‚’็คบใ™ใŸใ‚ใซใ€ใƒฉใƒ™ใƒซใ‚’่ฒ ใฎๅ€คใซ่จญๅฎšใ™ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚NumPyใพใŸใฏPyTorchใฎใƒฉใƒ™ใƒซใƒžใ‚นใ‚ญใƒณใ‚ฐใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ๆๅคฑ้–ขๆ•ฐใ‚’่ฆ‹ใ‚‹ใจใ€ๆฌกใฎใ‚ˆใ†ใช[ใƒ–ใƒผใƒซใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚น](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing)ใ‚’ไฝฟ็”จใ—ใŸใ‚ณใƒผใƒ‰ใŒใ‚ˆใ่ฆ‹ใ‚‰ใ‚Œใพใ™๏ผš ```python label_mask = labels >= 0 masked_outputs = outputs[label_mask] masked_labels = labels[label_mask] loss = compute_loss(masked_outputs, masked_labels) mean_loss = torch.mean(loss) ``` 
ใ“ใฎใ‚ณใƒผใƒ‰ใฏNumPyใ‚„PyTorchใงใฏๅฎŒๅ…จใซๆฉŸ่ƒฝใ—ใพใ™ใŒใ€XLAใงใฏๅ‹•ไฝœใ—ใพใ›ใ‚“๏ผใชใœใชใ‚‰ใ€`masked_outputs`ใจ`masked_labels`ใฎๅฝข็Šถใฏใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸไฝ็ฝฎใฎๆ•ฐใซไพๅญ˜ใ™ใ‚‹ใŸใ‚ใ€ใ“ใ‚Œใฏ**ใƒ‡ใƒผใ‚ฟไพๅญ˜ใฎๅฝข็Šถ**ใซใชใ‚Šใพใ™ใ€‚ใŸใ ใ—ใ€ใƒซใƒผใƒซ๏ผƒ1ใจๅŒๆง˜ใซใ€ใ“ใฎใ‚ณใƒผใƒ‰ใ‚’ๆ›ธใ็›ดใ—ใฆใ€ใƒ‡ใƒผใ‚ฟไพๅญ˜ใฎๅฝข็Šถใชใ—ใงใพใฃใŸใๅŒใ˜ๅ‡บๅŠ›ใ‚’็”Ÿๆˆใงใใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ```python label_mask = tf.cast(labels >= 0, tf.float32) loss = compute_loss(outputs, labels) loss = loss * label_mask # Set negative label positions to 0 mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask) ``` ใ“ใ“ใงใฏใ€ใƒ‡ใƒผใ‚ฟไพๅญ˜ใฎๅฝข็Šถใ‚’้ฟใ‘ใ‚‹ใŸใ‚ใซใ€ๅ„ไฝ็ฝฎใงๆๅคฑใ‚’่จˆ็ฎ—ใ—ใฆใ‹ใ‚‰ใ€ๅนณๅ‡ใ‚’่จˆ็ฎ—ใ™ใ‚‹้š›ใซๅˆ†ๅญใจๅˆ†ๆฏใฎไธกๆ–นใงใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸไฝ็ฝฎใ‚’ใ‚ผใƒญๅŒ–ใ™ใ‚‹ๆ–นๆณ•ใ‚’็ดนไป‹ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๆœ€ๅˆใฎใ‚ขใƒ—ใƒญใƒผใƒใจใพใฃใŸใๅŒใ˜็ตๆžœใŒๅพ—ใ‚‰ใ‚Œใพใ™ใŒใ€XLAไบ’ๆ›ๆ€งใ‚’็ถญๆŒใ—ใพใ™ใ€‚ๆณจๆ„็‚นใจใ—ใฆใ€ใƒซใƒผใƒซ๏ผƒ1ใจๅŒใ˜ใƒˆใƒชใƒƒใ‚ฏใ‚’ไฝฟ็”จใ—ใพใ™ - `tf.bool`ใ‚’`tf.float32`ใซๅค‰ๆ›ใ—ใฆๆŒ‡ๆจ™ๅค‰ๆ•ฐใจใ—ใฆไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใฏ้žๅธธใซไพฟๅˆฉใชใƒˆใƒชใƒƒใ‚ฏใงใ™ใฎใงใ€่‡ชๅˆ†ใฎใ‚ณใƒผใƒ‰ใ‚’XLAใซๅค‰ๆ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใซใฏ่ฆšใˆใฆใŠใ„ใฆใใ ใ•ใ„๏ผ #### XLA Rule #3: XLA will need to recompile your model for every different input shape it sees ใ“ใ‚Œใฏ้‡่ฆใชใƒซใƒผใƒซใงใ™ใ€‚ใ“ใ‚Œใฏใคใพใ‚Šใ€ๅ…ฅๅŠ›ๅฝข็ŠถใŒ้žๅธธใซๅค‰ๅ‹•็š„ใชๅ ดๅˆใ€XLA ใฏใƒขใƒ‡ใƒซใ‚’ไฝ•ๅบฆใ‚‚ๅ†ใ‚ณใƒณใƒ‘ใ‚คใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€ๅคงใใชใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅ•้กŒใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใจใ„ใ†ใ“ใจใงใ™ใ€‚ใ“ใ‚Œใฏ NLP ใƒขใƒ‡ใƒซใงไธ€่ˆฌ็š„ใซ็™บ็”Ÿใ—ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บๅพŒใฎๅ…ฅๅŠ›ใƒ†ใ‚ญใ‚นใƒˆใฎ้•ทใ•ใŒ็•ฐใชใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ไป–ใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃใงใฏใ€้™็š„ใชๅฝข็ŠถใŒไธ€่ˆฌ็š„ใงใ‚ใ‚Šใ€ใ“ใฎใƒซใƒผใƒซใฏใปใจใ‚“ใฉๅ•้กŒใซใชใ‚Šใพใ›ใ‚“ใ€‚ ใƒซใƒผใƒซ๏ผƒ3ใ‚’ๅ›ž้ฟใ™ใ‚‹ๆ–นๆณ•ใฏไฝ•ใงใ—ใ‚‡ใ†ใ‹๏ผŸ้ตใฏใ€Œใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ€ใงใ™ - ใ™ในใฆใฎๅ…ฅๅŠ›ใ‚’ๅŒใ˜้•ทใ•ใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ—ใ€ๆฌกใซใ€Œattention_maskใ€ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€ๅฏๅค‰ๅฝข็ŠถใจๅŒใ˜็ตๆžœใ‚’ๅพ—ใ‚‹ใ“ใจใŒใงใใพใ™ใŒใ€XLA ใฎๅ•้กŒใฏ็™บ็”Ÿใ—ใพใ›ใ‚“ใ€‚ใŸใ ใ—ใ€้Žๅบฆใฎใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚‚ๆทฑๅˆปใช้…ๅปถใ‚’ๅผ•ใ่ตทใ“ใ™ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ - ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใงๆœ€ๅคงใฎ้•ทใ•ใซใ™ในใฆใฎใ‚ตใƒณใƒ—ใƒซใ‚’ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ใจใ€ๅคšใใฎ่จˆ็ฎ—ใจใƒกใƒขใƒชใ‚’็„ก้ง„ใซใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™๏ผ ใ“ใฎๅ•้กŒใซใฏๅฎŒ็’งใช่งฃๆฑบ็ญ–ใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ใ„ใใคใ‹ใฎใƒˆใƒชใƒƒใ‚ฏใ‚’่ฉฆใ™ใ“ใจใŒใงใใพใ™ใ€‚้žๅธธใซไพฟๅˆฉใชใƒˆใƒชใƒƒใ‚ฏใฎ1ใคใฏใ€**ใƒใƒƒใƒใฎใ‚ตใƒณใƒ—ใƒซใ‚’32ใพใŸใฏ64ใƒˆใƒผใ‚ฏใƒณใฎๅ€ๆ•ฐใพใงใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹**ใ“ใจใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒˆใƒผใ‚ฏใƒณๆ•ฐใŒใ‚ใšใ‹ใซๅข—ๅŠ ใ™ใ‚‹ใ ใ‘ใงใ€ใ™ในใฆใฎๅ…ฅๅŠ›ๅฝข็ŠถใŒ32ใพใŸใฏ64ใฎๅ€ๆ•ฐใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€ไธ€ๆ„ใฎๅ…ฅๅŠ›ๅฝข็Šถใฎๆ•ฐใŒๅคงๅน…ใซๆธ›ๅฐ‘ใ—ใพใ™ใ€‚ไธ€ๆ„ใฎๅ…ฅๅŠ›ๅฝข็ŠถใŒๅฐ‘ใชใ„ใจใ€XLA ใฎๅ†ใ‚ณใƒณใƒ‘ใ‚คใƒซใŒๅฐ‘ใชใใชใ‚Šใพใ™๏ผ <Tip> **๐Ÿค— HuggingFace ใซ้–ขใ™ใ‚‹ๅ…ทไฝ“็š„ใชใƒ’ใƒณใƒˆ๐Ÿค—:** ๅผŠ็คพใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใจใƒ‡ใƒผใ‚ฟใ‚ณใƒฌใ‚ฏใ‚ฟใƒผใซใฏใ€ใ“ใ“ใงๅฝน็ซ‹ใคใƒกใ‚ฝใƒƒใƒ‰ใŒใ‚ใ‚Šใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ๅ‘ผใณๅ‡บใ™้š›ใซ `padding="max_length"` ใพใŸใฏ `padding="longest"` 
ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ•ใ‚ŒใŸใƒ‡ใƒผใ‚ฟใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ˆใ†ใซ่จญๅฎšใงใใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใจใƒ‡ใƒผใ‚ฟใ‚ณใƒฌใ‚ฏใ‚ฟใƒผใซใฏใ€ไธ€ๆ„ใฎๅ…ฅๅŠ›ๅฝข็Šถใฎๆ•ฐใ‚’ๆธ›ใ‚‰ใ™ใฎใซๅฝน็ซ‹ใค `pad_to_multiple_of` ๅผ•ๆ•ฐใ‚‚ใ‚ใ‚Šใพใ™๏ผ </Tip> ### How do I actually train my model on TPU? ไธ€ๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒ XLA ไบ’ๆ›ๆ€งใŒใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€๏ผˆTPU Node/Colab ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏ๏ผ‰ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒ้ฉๅˆ‡ใซๆบ–ๅ‚™ใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€TPU ไธŠใงๅฎŸ่กŒใ™ใ‚‹ใ“ใจใฏ้ฉšใใปใฉ็ฐกๅ˜ใงใ™๏ผใ‚ณใƒผใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใฎใฏใ€ใ„ใใคใ‹ใฎ่กŒใ‚’่ฟฝๅŠ ใ—ใฆ TPU ใ‚’ๅˆๆœŸๅŒ–ใ—ใ€ใƒขใƒ‡ใƒซใจใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒ `TPUStrategy` ใ‚นใ‚ณใƒผใƒ—ๅ†…ใงไฝœๆˆใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจใ ใ‘ใงใ™ใ€‚ใ“ใ‚Œใ‚’ๅฎŸ้š›ใซ่ฆ‹ใ‚‹ใซใฏใ€[TPU ใฎใ‚ตใƒณใƒ—ใƒซใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)ใ‚’ใ”่ฆงใใ ใ•ใ„๏ผ ### Summary ใ“ใ“ใงใฏๅคšใใฎๆƒ…ๅ ฑใŒๆไพ›ใ•ใ‚Œใพใ—ใŸใฎใงใ€TPU ใงใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹้š›ใซไปฅไธ‹ใฎใƒใ‚งใƒƒใ‚ฏใƒชใ‚นใƒˆใ‚’ไฝฟ็”จใงใใพใ™๏ผš - ใ‚ณใƒผใƒ‰ใŒ XLA ใฎไธ‰ใคใฎใƒซใƒผใƒซใซๅพ“ใฃใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ - CPU/GPU ใง `jit_compile=True` ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ใ‚ณใƒณใƒ‘ใ‚คใƒซใ—ใ€XLA ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ - ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒกใƒขใƒชใซ่ชญใฟ่พผใ‚€ใ‹ใ€TPU ไบ’ๆ›ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ่ชญใฟ่พผใฟใ‚ขใƒ—ใƒญใƒผใƒใ‚’ไฝฟ็”จใ—ใพใ™๏ผˆ[ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ๅ‚็…ง](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)๏ผ‰ใ€‚ - ใ‚ณใƒผใƒ‰ใ‚’ Colab๏ผˆใ‚ขใ‚ฏใ‚ปใƒฉใƒฌใƒผใ‚ฟใ‚’ใ€ŒTPUใ€ใซ่จญๅฎš๏ผ‰ใพใŸใฏ Google Cloud ใฎ TPU VM ใซ็งป่กŒใ—ใพใ™ใ€‚ - TPU ๅˆๆœŸๅŒ–ใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ—ใพใ™๏ผˆ[ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ๅ‚็…ง](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)๏ผ‰ใ€‚ - `TPUStrategy` ใ‚’ไฝœๆˆใ—ใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎ่ชญใฟ่พผใฟใจใƒขใƒ‡ใƒซใฎไฝœๆˆใŒ `strategy.scope()` ๅ†…ใง่กŒใ‚ใ‚Œใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™๏ผˆ[ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ๅ‚็…ง](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)๏ผ‰ใ€‚ - TPU ใซ็งป่กŒใ™ใ‚‹้š›ใซ `jit_compile=True` ใ‚’ๅค–ใ™ใฎใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„๏ผ - ๐Ÿ™๐Ÿ™๐Ÿ™๐Ÿฅบ๐Ÿฅบ๐Ÿฅบ - `model.fit()` ใ‚’ๅ‘ผใณๅ‡บใ—ใพใ™ใ€‚ - ใŠใ‚ใงใจใ†ใ”ใ–ใ„ใพใ™๏ผ
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Training on TPUs

<Tip>

Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and the [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.

</Tip>

This document will be completed soon with information on how to train on TPUs.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Export to TorchScript

<Tip>

This is the very beginning of our experiments with TorchScript and we are still exploring its capabilities with variable-input-size models. It is a focus of interest to us, and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based code with compiled TorchScript.

</Tip>

According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html):

> TorchScript is a way to create serializable and optimizable models from PyTorch code.

TorchScript allows a model to be reused in other programs, such as efficiency-oriented C++ programs. We provide an interface that lets you export 🤗 Transformers models to TorchScript so they can be reused in an environment other than PyTorch-based Python programs. Here, we explain how to export and use models with TorchScript.

Exporting a model requires two things:

- model instantiation with the `torchscript` flag
- a forward pass with dummy inputs

These necessities imply several things developers should be careful about, as detailed below.

## TorchScript flag and tied weights

The `torchscript` flag is necessary because most of the 🤗 Transformers language models have tied weights between their `Embedding` layer and their `Decoding` layer. TorchScript does not allow you to export models that have tied weights, so it is necessary to untie and clone the weights beforehand.

Models instantiated with the `torchscript` flag have their `Embedding` layer and `Decoding` layer separated, which means that they should not be trained down the line. Training would desynchronize the two layers, leading to unexpected results.

This is not the case for models that do not have a language model head, as those do not have tied weights. These models can be safely exported without the `torchscript` flag.

## Dummy inputs and standard lengths
The dummy inputs are used for a model's forward pass. While the input values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the *trace* of the model.

The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised:

```
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
```

We recommend tracing the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill in the missing values. However, since the model is traced with a larger input size, the dimensions of the matrices will also be large, resulting in more calculations.

Be careful of the total number of operations executed on each input, and follow the performance closely when exporting models with varying sequence lengths.

## Using TorchScript in Python

This section demonstrates how to save and load models, as well as how to use a trace for inference.

### Saving a model

To export a `BertModel` with TorchScript, instantiate `BertModel` from the `BertConfig` class and then save it to disk under the filename `traced_bert.pt`:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```

### Loading a model

Now you can load the previously saved `BertModel`, `traced_bert.pt`, from disk and use it with the previously initialised `dummy_input`:

```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```

### Using a traced model for inference

Use the traced model for inference through its `__call__` dunder method:

```python
traced_model(tokens_tensor, segments_tensors)
```

## Deploy Hugging Face TorchScript models to AWS with the Neuron SDK

AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) instance family for low-cost, high-performance machine learning inference in the cloud. Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator specializing in deep learning inference workloads. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for Inferentia that supports tracing and optimizing transformer models for deployment on Inf1.

The Neuron SDK provides:

1. An easy-to-use API, with a single line of code change, to trace and optimize a TorchScript model for inference in the cloud.
2. Out-of-the-box performance optimizations for [improved cost-performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Support for Hugging Face transformer models built with either [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) or [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).

### Implications

Transformer models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, or its variants such as [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still run on Inf1, following the [AWS Neuron MarianMT tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). More information about models that can be converted out of the box on Inferentia can be found in the [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) section of the Neuron documentation.

### Dependencies

Converting a model for AWS Neuron requires a [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), which comes preconfigured on the [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

### Converting a model for AWS Neuron

To convert a model for AWS NEURON, use the same code from [Using TorchScript in Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the `torch.neuron` framework extension to access the components of the Neuron SDK through a Python API:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

You only need to modify the following line:

```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

This enables the Neuron SDK to trace the model and optimize it for Inf1 instances.

To learn more about AWS Neuron SDK features, tools, example tutorials, and the latest updates, please see the [AWS NeuronSDK documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Text generation strategies

Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, and translation. It also plays a role in a variety of mixed-modality applications that have text as an output, for example speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, and Whisper.

Here are a few examples that use the [`~transformers.generation_utils.GenerationMixin.generate`] method to produce text outputs for different tasks:
* [Text summarization](./tasks/summarization#inference)
* [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example)
* [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example)

The inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all of the inputs to generate(). You can learn more about each model's preprocessor in the corresponding model's documentation.

The process of selecting tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method uses. Modifying a decoding strategy does not change the values of any trainable parameters, but it can have a noticeable impact on the quality of the generated text. It can help reduce repetition in the text and make it more coherent.

This guide describes:
* the default text generation configuration
* common decoding strategies and their main parameters
* saving and sharing custom generation configurations with your fine-tuned model on the 🤗 Hub

## Default text generation configuration

A model's decoding strategy is defined in its generation configuration. When using a pretrained model for inference within a [`pipeline`], the model calls the `PreTrainedModel.generate()` method, which applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model.
่จญๅฎšใŒไฟๅญ˜ใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใซใ‚‚ไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ใƒขใƒ‡ใƒซใ‚’ๆ˜Ž็คบ็š„ใซ่ชญใฟ่พผใ‚€ๅ ดๅˆใ€ใใ‚Œใซไป˜ๅฑžใ™ใ‚‹็”Ÿๆˆ่จญๅฎšใ‚’ `model.generation_config` ใ‚’ไป‹ใ—ใฆ็ขบ่ชใงใใพใ™ใ€‚ ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256, } ``` `model.generation_config` ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใจใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ็”Ÿๆˆ่จญๅฎšใ‹ใ‚‰็•ฐใชใ‚‹ๅ€คใฎใฟใŒ่กจ็คบใ•ใ‚Œใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ€คใฏใƒชใ‚นใƒˆใ•ใ‚Œใพใ›ใ‚“ใ€‚ ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ็”Ÿๆˆ่จญๅฎšใงใฏใ€ๅ‡บๅŠ›ใฎใ‚ตใ‚คใ‚บใฏๅ…ฅๅŠ›ใƒ—ใƒญใƒณใƒ—ใƒˆใจใฎ็ต„ใฟๅˆใ‚ใ›ใงๆœ€ๅคง20ใƒˆใƒผใ‚ฏใƒณใซๅˆถ้™ใ•ใ‚ŒใฆใŠใ‚Šใ€ใƒชใ‚ฝใƒผใ‚นๅˆถ้™ใซ้”ใ—ใชใ„ใ‚ˆใ†ใซใ—ใฆใ„ใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใฏ่ฒชๆฌฒๆŽข็ดขใงใ€ๆœ€ใ‚‚็ขบ็އใฎ้ซ˜ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’ๆฌกใฎใƒˆใƒผใ‚ฏใƒณใจใ—ใฆ้ธๆŠžใ™ใ‚‹ๆœ€ใ‚‚ๅ˜็ด”ใชใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใงใ™ใ€‚ๅคšใใฎใ‚ฟใ‚นใ‚ฏใ‚„ๅฐใ•ใชๅ‡บๅŠ›ใ‚ตใ‚คใ‚บใฎๅ ดๅˆใ€ใ“ใ‚Œใฏใ†ใพใๆฉŸ่ƒฝใ—ใพใ™ใ€‚ใŸใ ใ—ใ€้•ทใ„ๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ‚‹ๅ ดๅˆใ€่ฒชๆฌฒๆŽข็ดขใฏ้ซ˜ๅบฆใซ็นฐใ‚Š่ฟ”ใ•ใ‚Œใ‚‹็ตๆžœใ‚’็”Ÿๆˆใ—ๅง‹ใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ## Customize text generation `generate` ใƒกใ‚ฝใƒƒใƒ‰ใซ็›ดๆŽฅใƒ‘ใƒฉใƒกใƒผใ‚ฟใจใใฎๅ€คใ‚’ๆธกใ™ใ“ใจใงใ€`generation_config` ใ‚’ไธŠๆ›ธใใงใใพใ™ใ€‚ ```python >>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP ``` ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใŒใปใจใ‚“ใฉใฎใ‚ฟใ‚นใ‚ฏใงใ†ใพใๆฉŸ่ƒฝใ™ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใ„ใใคใ‹ใฎ่จญๅฎšใ‚’ๅพฎ่ชฟๆ•ดใงใใพใ™ใ€‚ไธ€่ˆฌ็š„ใซ่ชฟๆ•ดใ•ใ‚Œใ‚‹ใƒ‘ใƒฉใƒกใƒผใ‚ฟใซใฏๆฌกใฎใ‚‚ใฎใŒใ‚ใ‚Šใพใ™๏ผš - `max_new_tokens`: ็”Ÿๆˆใ™ใ‚‹ใƒˆใƒผใ‚ฏใƒณใฎๆœ€ๅคงๆ•ฐใ€‚ใคใพใ‚Šใ€ๅ‡บๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใ‚ตใ‚คใ‚บใงใ‚ใ‚Šใ€ใƒ—ใƒญใƒณใƒ—ใƒˆๅ†…ใฎใƒˆใƒผใ‚ฏใƒณใฏๅซใพใ‚Œใพใ›ใ‚“ใ€‚ - `num_beams`: 1ใ‚ˆใ‚Šใ‚‚ๅคงใใชใƒ“ใƒผใƒ ๆ•ฐใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใงใ€่ฒชๆฌฒๆคœ็ดขใ‹ใ‚‰ใƒ“ใƒผใƒ ใ‚ตใƒผใƒใซๅˆ‡ใ‚Šๆ›ฟใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใ“ใฎๆˆฆ็•ฅใงใฏใ€ๅ„ๆ™‚้–“ใ‚นใƒ†ใƒƒใƒ—ใงใ„ใใคใ‹ใฎไปฎ่ชฌใ‚’่ฉ•ไพกใ—ใ€ๆœ€็ต‚็š„ใซๅ…จไฝ“ใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅฏพใ™ใ‚‹ๆœ€ใ‚‚้ซ˜ใ„็ขบ็އใ‚’ๆŒใคไปฎ่ชฌใ‚’้ธๆŠžใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅˆๆœŸใฎ็ขบ็އใŒไฝŽใ„ใƒˆใƒผใ‚ฏใƒณใงๅง‹ใพใ‚‹้ซ˜็ขบ็އใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใŒ่ฒชๆฌฒๆคœ็ดขใซใ‚ˆใฃใฆ็„ก่ฆ–ใ•ใ‚Œใ‚‹ใ“ใจใŒใชใใชใ‚Šใพใ™ใ€‚ - `do_sample`: ใ“ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’`True`ใซ่จญๅฎšใ™ใ‚‹ใจใ€ๅคš้ …ๅˆ†ๅธƒใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ€ใƒ“ใƒผใƒ ใ‚ตใƒผใƒๅคš้ …ๅˆ†ๅธƒใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ€Top-Kใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ€Top-pใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใชใฉใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใŒๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๆˆฆ็•ฅใฏใ€ๅ„ๆˆฆ็•ฅๅ›บๆœ‰ใฎ่ชฟๆ•ดใ‚’ๅซใ‚€ๅ˜่ชžๅฝ™ๅ…จไฝ“ใฎ็ขบ็އๅˆ†ๅธƒใ‹ใ‚‰ๆฌกใฎใƒˆใƒผใ‚ฏใƒณใ‚’้ธๆŠžใ—ใพใ™ใ€‚ - `num_return_sequences`: ๅ„ๅ…ฅๅŠ›ใซๅฏพใ—ใฆ่ฟ”ใ™ใ‚ทใƒผใ‚ฑใƒณใ‚นๅ€™่ฃœใฎๆ•ฐใ€‚ใ“ใ‚Œใฏใ€่ค‡ๆ•ฐใฎใ‚ทใƒผใ‚ฑใƒณใ‚นๅ€™่ฃœใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅ๏ผˆใƒ“ใƒผใƒ ใ‚ตใƒผใƒใ‚„ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใฎใƒใƒชใ‚จใƒผใ‚ทใƒงใƒณใชใฉ๏ผ‰ใซใฎใฟ้ฉ็”จใ•ใ‚Œใพใ™ใ€‚่ฒชๆฌฒๆคœ็ดขใ‚„ๅฏพ็…ง็š„ใชๆคœ็ดขใชใฉใ€ๅ˜ไธ€ใฎๅ‡บๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’่ฟ”ใ™ใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใงใฏไฝฟ็”จใงใใพใ›ใ‚“ใ€‚ ## Save a custom decoding strategy with your model ็‰นๅฎšใฎ็”Ÿๆˆๆง‹ๆˆใง่ชฟๆ•ดใ—ใŸใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ—ใŸใ„ๅ ดๅˆใ€ไปฅไธ‹ใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใงใใพใ™๏ผš * [`GenerationConfig`] ใ‚ฏใƒฉใ‚นใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ไฝœๆˆใ™ใ‚‹ * ใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ * 
[`GenerationConfig.save_pretrained`] ใ‚’ไฝฟ็”จใ—ใฆ็”Ÿๆˆๆง‹ๆˆใ‚’ไฟๅญ˜ใ—ใ€`config_file_name` ๅผ•ๆ•ฐใ‚’็ฉบใซใ™ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„ * `push_to_hub` ใ‚’ `True` ใซ่จญๅฎšใ—ใฆใ€ๆง‹ๆˆใ‚’ใƒขใƒ‡ใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใพใ™ ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` 1ใคใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซ่ค‡ๆ•ฐใฎ็”Ÿๆˆ่จญๅฎšใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใ‚‚ใงใใ€[`GenerationConfig.save_pretrained`] ใฎ `config_file_name` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ๅพŒใง [`GenerationConfig.from_pretrained`] ใงใ“ใ‚Œใ‚‰ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ€1ใคใฎใƒขใƒ‡ใƒซใซๅฏพใ—ใฆ่ค‡ๆ•ฐใฎ็”Ÿๆˆ่จญๅฎšใ‚’ไฟๅญ˜ใ—ใŸใ„ๅ ดๅˆใซไพฟๅˆฉใงใ™ ๏ผˆไพ‹๏ผšใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ‚’ไฝฟ็”จใ—ใŸใ‚ฏใƒชใ‚จใ‚คใƒ†ใ‚ฃใƒ–ใชใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆ็”จใฎ1ใคใจใ€ใƒ“ใƒผใƒ ใ‚ตใƒผใƒใ‚’ไฝฟ็”จใ—ใŸ่ฆ็ด„็”จใฎ1ใค๏ผ‰ใ€‚ใƒขใƒ‡ใƒซใซ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ใซใฏใ€้ฉๅˆ‡ใช Hub ๆจฉ้™ใŒๅฟ…่ฆใงใ™ใ€‚ ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... ) >>> # Tip: add `push_to_hub=True` to push to the Hub >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # You could then use the named generation config file to parameterize generation >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles ร  utiliser!'] ``` ## Streaming `generate()` ใฏใ€ใใฎ `streamer` ๅ…ฅๅŠ›ใ‚’ไป‹ใ—ใฆใ‚นใƒˆใƒชใƒผใƒŸใƒณใ‚ฐใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚`streamer` ๅ…ฅๅŠ›ใฏใ€ๆฌกใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๆŒใคใ‚ฏใƒฉใ‚นใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใจไบ’ๆ›ๆ€งใŒใ‚ใ‚Šใพใ™๏ผš`put()` ใจ `end()`ใ€‚ๅ†…้ƒจ็š„ใซใฏใ€`put()` ใฏๆ–ฐใ—ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’ใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ€`end()` ใฏใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใฎ็ต‚ไบ†ใ‚’ใƒ•ใƒฉใ‚ฐไป˜ใ‘ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ <Tip warning={true}> ใ‚นใƒˆใƒชใƒผใƒžใƒผใ‚ฏใƒฉใ‚นใฎAPIใฏใพใ ้–‹็™บไธญใงใ‚ใ‚Šใ€ๅฐ†ๆฅๅค‰ๆ›ดใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> ๅฎŸ้š›ใซใฏใ€ใ•ใพใ–ใพใช็›ฎ็š„ใซๅฏพใ—ใฆ็‹ฌ่‡ชใฎใ‚นใƒˆใƒชใƒผใƒŸใƒณใ‚ฐใ‚ฏใƒฉใ‚นใ‚’ไฝœๆˆใงใใพใ™๏ผใพใŸใ€ไฝฟ็”จใงใใ‚‹ๅŸบๆœฌ็š„ใชใ‚นใƒˆใƒชใƒผใƒŸใƒณใ‚ฐใ‚ฏใƒฉใ‚นใ‚‚็”จๆ„ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ไพ‹ใˆใฐใ€[`TextStreamer`] ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆใ€`generate()` ใฎๅ‡บๅŠ›ใ‚’็”ป้ขใซๅ˜่ชžใ”ใจใซใ‚นใƒˆใƒชใƒผใƒ ใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> inputs = tok(["An increasing sequence: 
one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ``` ## Decoding strategies ็‰นๅฎšใฎ `generate()` ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ็ต„ใฟๅˆใ‚ใ›ใ€ใใ—ใฆๆœ€็ต‚็š„ใซ `generation_config` ใฏใ€็‰นๅฎšใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใงใใพใ™ใ€‚ใ“ใฎใ‚ณใƒณใ‚ปใƒ—ใƒˆใŒๆ–ฐใ—ใ„ๅ ดๅˆใ€[ใ“ใฎใƒ–ใƒญใ‚ฐใƒใ‚นใƒˆ](https://huggingface.co/blog/how-to-generate)ใ‚’่ชญใ‚€ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใฎใƒ–ใƒญใ‚ฐใƒใ‚นใƒˆใงใฏใ€ไธ€่ˆฌ็š„ใชใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใŒใฉใฎใ‚ˆใ†ใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใŒ่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ“ใ“ใงใฏใ€ใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ๅˆถๅพกใ™ใ‚‹ใ„ใใคใ‹ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’็คบใ—ใ€ใใ‚Œใ‚‰ใ‚’ใฉใฎใ‚ˆใ†ใซไฝฟ็”จใงใใ‚‹ใ‹ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚ ### Greedy Search [`generate`] ใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง่ฒชๆฌฒๆŽข็ดขใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใ€ๆœ‰ๅŠนใซใ™ใ‚‹ใŸใ‚ใซใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆธกใ™ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟ `num_beams` ใŒ 1 ใซ่จญๅฎšใ•ใ‚Œใ€`do_sample=False` ใงใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "I look forward to" >>> checkpoint = "distilgpt2" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ``` ### Contrastive search ใ‚ณใƒณใƒˆใƒฉใ‚นใƒ†ใ‚ฃใƒ–ๆคœ็ดขใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใฏใ€2022ๅนดใฎ่ซ–ๆ–‡[A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417)ใงๆๆกˆใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใ‚Œใฏใ€้žๅๅพฉ็š„ใงใ‚ใ‚ŠใชใŒใ‚‰ไธ€่ฒซๆ€งใฎใ‚ใ‚‹้•ทใ„ๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซๅ„ชใ‚ŒใŸ็ตๆžœใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ใ‚ณใƒณใƒˆใƒฉใ‚นใƒ†ใ‚ฃใƒ–ๆคœ็ดขใฎๅ‹•ไฝœๅŽŸ็†ใ‚’ๅญฆใถใซใฏใ€[ใ“ใฎใƒ–ใƒญใ‚ฐใƒใ‚นใƒˆ](https://huggingface.co/blog/introducing-csearch)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ใ‚ณใƒณใƒˆใƒฉใ‚นใƒ†ใ‚ฃใƒ–ๆคœ็ดขใฎๅ‹•ไฝœใ‚’ๆœ‰ๅŠนใซใ—ใ€ๅˆถๅพกใ™ใ‚‹2ใคใฎไธป่ฆใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใฏใ€Œpenalty_alphaใ€ใจใ€Œtop_kใ€ใงใ™๏ผš ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. 
We look forward to hearing from you!'] ``` ### Multinomial sampling ๅธธใซๆœ€้ซ˜็ขบ็އใฎใƒˆใƒผใ‚ฏใƒณใ‚’ๆฌกใฎใƒˆใƒผใ‚ฏใƒณใจใ—ใฆ้ธๆŠžใ™ใ‚‹่ฒชๆฌฒๆคœ็ดขใจใฏ็•ฐใชใ‚Šใ€ๅคš้ …ๅˆ†ๅธƒใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐ๏ผˆใพใŸใฏ็ฅ–ๅ…ˆใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใจใ‚‚ๅ‘ผใฐใ‚Œใพใ™๏ผ‰ใฏใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใ‚‹่ชžๅฝ™ๅ…จไฝ“ใฎ็ขบ็އๅˆ†ๅธƒใซๅŸบใฅใ„ใฆๆฌกใฎใƒˆใƒผใ‚ฏใƒณใ‚’ใƒฉใƒณใƒ€ใƒ ใซ้ธๆŠžใ—ใพใ™ใ€‚ใ‚ผใƒญไปฅๅค–ใฎ็ขบ็އใ‚’ๆŒใคใ™ในใฆใฎใƒˆใƒผใ‚ฏใƒณใซใฏ้ธๆŠžใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใ€ใ“ใ‚Œใซใ‚ˆใ‚Š็นฐใ‚Š่ฟ”ใ—ใฎใƒชใ‚นใ‚ฏใŒๆธ›ๅฐ‘ใ—ใพใ™ใ€‚ ๅคš้ …ๅˆ†ๅธƒใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`do_sample=True` ใŠใ‚ˆใณ `num_beams=1` ใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # For reproducibility >>> checkpoint = "gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Today was an amazing day because" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."'] ``` ### Beam-search decoding ่ฒชๆฌฒๆŽข็ดขใจใฏ็•ฐใชใ‚Šใ€ใƒ“ใƒผใƒ ใ‚ตใƒผใƒใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใฏๅ„ๆ™‚้–“ใ‚นใƒ†ใƒƒใƒ—ใงใ„ใใคใ‹ใฎไปฎ่ชฌใ‚’ไฟๆŒใ—ใ€ๆœ€็ต‚็š„ใซใ‚ทใƒผใ‚ฑใƒณใ‚นๅ…จไฝ“ใงๆœ€ใ‚‚็ขบ็އใŒ้ซ˜ใ„ไปฎ่ชฌใ‚’้ธๆŠžใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่ฒชๆฌฒๆŽข็ดขใงใฏ็„ก่ฆ–ใ•ใ‚Œใฆใ—ใพใ†ๅˆๆœŸใƒˆใƒผใ‚ฏใƒณใฎ็ขบ็އใŒไฝŽใ„้ซ˜็ขบ็އใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’็‰นๅฎšใ™ใ‚‹ๅˆฉ็‚นใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`num_beams`๏ผˆ่ฟฝ่ทกใ™ใ‚‹ไปฎ่ชฌใฎๆ•ฐ๏ผ‰ใ‚’1ใ‚ˆใ‚Šใ‚‚ๅคงใใชๅ€คใซๆŒ‡ๅฎšใ—ใพใ™ใ€‚ ๅธŒๆœ›ใ•ใ‚Œใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใฎ็ฟป่จณใŒใŠๆ‰‹ไผใ„ใงใใฆๅฌ‰ใ—ใ„ใงใ™๏ผใ‚‚ใ—ใ•ใ‚‰ใชใ‚‹่ณชๅ•ใ‚„ใ‚ตใƒใƒผใƒˆใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€ใŠๆฐ—่ปฝใซใŠ็Ÿฅใ‚‰ใ›ใใ ใ•ใ„ใ€‚ ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "It is astonishing how one can" >>> checkpoint = "gpt2-medium" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have'] ``` ### Beam-search multinomial sampling ใใฎๅๅ‰ใ‹ใ‚‰ใ‚‚ใ‚ใ‹ใ‚‹ใ‚ˆใ†ใซใ€ใ“ใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใฏใƒ“ใƒผใƒ ใ‚ตใƒผใƒใจๅคš้ …ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ‚’็ต„ใฟๅˆใ‚ใ›ใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€`num_beams` ใ‚’1ใ‚ˆใ‚Šๅคงใใชๅ€คใซ่จญๅฎšใ—ใ€`do_sample=True` ใ‚’่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # For reproducibility >>> prompt = "translate English to German: The house is wonderful." 
>>> checkpoint = "t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ``` ### Diverse beam search decoding ๅคšๆง˜ใชใƒ“ใƒผใƒ ใ‚ตใƒผใƒใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใฏใ€ใƒ“ใƒผใƒ ใ‚ตใƒผใƒๆˆฆ็•ฅใฎๆ‹กๅผตใงใ‚ใ‚Šใ€้ธๆŠž่‚ขใ‹ใ‚‰ใ‚ˆใ‚Šๅคšๆง˜ใชใƒ“ใƒผใƒ ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’็”Ÿๆˆใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใ“ใฎไป•็ต„ใฟใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf) ใ‚’ใ”ๅ‚็…งใใ ใ•ใ„ใ€‚ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใซใฏใ€`num_beams`ใ€`num_beam_groups`ใ€ใŠใ‚ˆใณ `diversity_penalty` ใจใ„ใ†3ใคใฎไธป่ฆใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒใ‚ใ‚Šใพใ™ใ€‚ๅคšๆง˜ๆ€งใƒšใƒŠใƒซใƒ†ใ‚ฃใฏใ€ๅ‡บๅŠ›ใŒใ‚ฐใƒซใƒผใƒ—ใ”ใจใซ็•ฐใชใ‚‹ใ“ใจใ‚’ไฟ่จผใ—ใ€ใƒ“ใƒผใƒ ใ‚ตใƒผใƒใฏๅ„ใ‚ฐใƒซใƒผใƒ—ๅ†…ใงไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = "google/pegasus-xsum" >>> prompt = ( ... "The Permaculture Design Principles are a set of universal design principles " ... "that can be applied to any location, climate and culture, and they allow us to design " ... "the most efficient and sustainable human habitation and food production systems. " ... "Permaculture is a design system that encompasses a wide variety of disciplines, such " ... "as ecology, landscape design, environmental science and energy conservation, and the " ... "Permaculture design principles are drawn from these various disciplines. Each individual " ... "design principle itself embodies a complete conceptual framework based on sound " ... "scientific principles. When we bring all these separate principles together, we can " ... "create a design system that both looks at whole systems, the parts that these systems " ... "consist of, and how those parts interact with each other to create a complex, dynamic, " ... "living system. Each design principle serves as a tool that allows us to integrate all " ... "the separate parts of a design, referred to as elements, into a functional, synergistic, " ... "whole system, where the elements harmoniously interact and work together in the most " ... "efficient way possible." ... 
) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the' ``` ### Assisted Decoding ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใฏใ€ไธŠ่จ˜ใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ๅค‰ๆ›ดใ—ใŸใ‚‚ใฎใงใ€ๅŒใ˜ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ๏ผˆ็†ๆƒณ็š„ใซใฏใฏใ‚‹ใ‹ใซๅฐใ•ใชใƒขใƒ‡ใƒซ๏ผ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใ„ใใคใ‹ใฎๅ€™่ฃœใƒˆใƒผใ‚ฏใƒณใ‚’่ฒชๆฌฒใซ็”Ÿๆˆใ™ใ‚‹ใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใใฎๅพŒใ€ไธป่ฆใชใƒขใƒ‡ใƒซใฏๅ€™่ฃœใƒˆใƒผใ‚ฏใƒณใ‚’1ใคใฎๅ‰ๅ‘ใใƒ‘ใ‚นใงๆคœ่จผใ—ใ€ใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใ‚’้ซ˜้€ŸๅŒ–ใ—ใพใ™ใ€‚็พๅœจใ€ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใงใฏ่ฒชๆฌฒๆคœ็ดขใจใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใฎใฟใŒใ‚ตใƒใƒผใƒˆใ•ใ‚ŒใฆใŠใ‚Šใ€ใƒใƒƒใƒๅ…ฅๅŠ›ใฏใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ใ“ใฎใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://huggingface.co/blog/assisted-generation) ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`assistant_model` ๅผ•ๆ•ฐใ‚’ใƒขใƒ‡ใƒซใง่จญๅฎšใ—ใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใฏใ€ใ•ใพใ–ใพใชใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ไธป่ฆใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ใ•ใ‚‰ใซ้ซ˜ๅบฆใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใฏ [`generate`] ใƒกใ‚ฝใƒƒใƒ‰ใซๅญ˜ๅœจใ—ใ€[`generate`] ใƒกใ‚ฝใƒƒใƒ‰ใฎๅ‹•ไฝœใ‚’ใ•ใ‚‰ใซๅˆถๅพกใงใใพใ™ใ€‚ไฝฟ็”จๅฏ่ƒฝใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใฎๅฎŒๅ…จใชใƒชใ‚นใƒˆใซใคใ„ใฆใฏใ€[APIใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](./main_classes/text_generation.md) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. 
Alice is drinking a beer and Bob is drinking a'] ``` ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐๆ–นๆณ•ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใงใฏ `temperature` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€ๅคš้ …ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใจๅŒๆง˜ใซใƒฉใƒณใƒ€ใƒ ๆ€งใ‚’ๅˆถๅพกใงใใพใ™ใ€‚ใŸใ ใ—ใ€ใ‚ขใ‚ทใ‚นใƒˆใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใงใฏใ€ๆธฉๅบฆใ‚’ไฝŽใใ™ใ‚‹ใ“ใจใง้…ๅปถใฎๆ”นๅ–„ใซๅฝน็ซ‹ใกใพใ™ใ€‚ ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> set_seed(42) # For reproducibility >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are going to the same party. It is a small party, in a small'] ```
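
なお、上のパラメーター一覧で触れた `num_return_sequences` の動作を確かめたい場合は、次のような最小限のスケッチが参考になるかもしれません（チェックポイント `gpt2` とプロンプトは説明用の例であり、サンプリングを使うため出力は実行ごと・環境ごとに異なります）：

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

>>> set_seed(0)  # 再現性のため

>>> checkpoint = "gpt2"  # 説明用に選んだ小さなチェックポイント
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> inputs = tokenizer("Hugging Face is", return_tensors="pt")
>>> # サンプリングを有効にし、各入力に対して3つの候補シーケンスを返します
>>> outputs = model.generate(**inputs, do_sample=True, num_return_sequences=3, max_new_tokens=10)
>>> for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
...     print(text)  # 出力はシードと環境に依存します
```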
<!-- Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ このファイルはMarkdown形式ですが、特定のMDXに類似したドキュメントビルダーの構文を含んでおり、
Markdownビューアーで正しく表示されないことがあります。

-->

# Preprocess

[[open-in-colab]]

データセットでモデルをトレーニングする前に、それをモデルの期待する入力形式に前処理する必要があります。
データがテキスト、画像、またはオーディオであるかどうかにかかわらず、それらはテンソルのバッチに変換して組み立てる必要があります。
🤗 Transformersは、データをモデル用に準備するのに役立つ前処理クラスのセットを提供しています。
このチュートリアルでは、次のことを学びます：

* テキストの場合、[Tokenizer](./main_classes/tokenizer)を使用してテキストをトークンのシーケンスに変換し、トークンの数値表現を作成し、それらをテンソルに組み立てる方法。
* 音声とオーディオの場合、[Feature extractor](./main_classes/feature_extractor)を使用してオーディオ波形から連続的な特徴を抽出し、それらをテンソルに変換する方法。
* 画像入力の場合、[ImageProcessor](./main_classes/image)を使用して画像をテンソルに変換する方法。
* マルチモーダル入力の場合、[Processor](./main_classes/processors)を使用してトークナイザと特徴抽出器または画像プロセッサを組み合わせる方法。

<Tip>

`AutoProcessor`は常に動作し、使用するモデルに適切なクラスを自動的に選択します。
トークナイザ、画像プロセッサ、特徴抽出器、またはプロセッサを使用しているかにかかわらず、動作します。

</Tip>

始める前に、🤗 Datasetsをインストールして、いくつかのデータセットを試すことができるようにしてください：

```bash
pip install datasets
```

## Natural Language Processing

<Youtube id="Yffk5aydLzg"/>

テキストデータの前処理に使用する主要なツールは、[トークナイザ](main_classes/tokenizer)です。トークナイザは、一連のルールに従ってテキストを*トークン*に分割します。トークンは数値に変換され、その後テンソルに変換され、モデルの入力となります。モデルが必要とする追加の入力は、トークナイザによって追加されます。

<Tip>

事前学習済みモデルを使用する予定の場合、関連する事前学習済みトークナイザを使用することが重要です。これにより、テキストが事前学習コーパスと同じ方法で分割され、事前学習中に通常*ボキャブ*として参照される対応するトークンインデックスを使用します。

</Tip>
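
この点を確かめる簡単な方法として、同じ文を異なるチェックポイントのトークナイザで分割してみることができます。以下は最小限のスケッチです（`bert-base-cased` と `gpt2` は比較のために選んだ例であり、分割結果と対応するトークンIDはチェックポイントごとに異なります）：

```python
>>> from transformers import AutoTokenizer

>>> text = "Do not meddle in the affairs of wizards."
>>> for checkpoint in ["bert-base-cased", "gpt2"]:
...     tokenizer = AutoTokenizer.from_pretrained(checkpoint)
...     # 同じテキストでも、トークンへの分割方法と語彙はモデルごとに異なります
...     print(checkpoint, tokenizer.tokenize(text))
```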
[`AutoTokenizer.from_pretrained`]メソッドを使用して事前学習済みトークナイザをロードして、開始しましょう。これにより、モデルが事前学習された*ボキャブ*がダウンロードされます：

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```

次に、テキストをトークナイザに渡します：

```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

トークナイザは、重要な3つの項目を持つ辞書を返します：

* [input_ids](glossary#input-ids) は文中の各トークンに対応するインデックスです。
* [attention_mask](glossary#attention-mask) はトークンがアテンションを受ける必要があるかどうかを示します。
* [token_type_ids](glossary#token-type-ids) は複数のシーケンスがある場合、トークンがどのシーケンスに属しているかを識別します。

`input_ids` をデコードして入力を返します：

```python
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```

ご覧の通り、トークナイザはこの文章に2つの特別なトークン、`CLS`（クラシファイア）と`SEP`（セパレータ）を追加しました。
すべてのモデルが特別なトークンを必要とするわけではありませんが、必要な場合、トークナイザは自動的にそれらを追加します。

複数の文章を前処理する場合、トークナイザにリストとして渡してください：

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1]]}
```

### Pad

文章は常に同じ長さではないことがあり、これはテンソル（モデルの入力）が均一な形状を持つ必要があるため問題となります。
パディングは、短い文に特別な「パディングトークン」を追加して、テンソルを最長のシーケンスに合わせるための戦略です。

バッチ内の短いシーケンスを最長のシーケンスに合わせるために、`padding`パラメータを`True`に設定します：

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

1番目と3番目の文は、短いために`0`でパディングされています。

### Truncation

一方で、時にはモデルが処理するには長すぎるシーケンスもあります。この場合、シーケンスを短縮する必要があります。

モデルが受け入れる最大の長さにシーケンスを切り詰めるには、`truncation`パラメータを`True`に設定します：

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

<Tip>

異なるパディングと切り詰めの引数について詳しくは、[パディングと切り詰め](./pad_truncation)のコンセプトガイドをご覧ください。

</Tip>

### Build tensors

最後に、トークナイザがモデルに供給される実際のテンソルを返すように設定します。

`return_tensors`パラメータを`pt`（PyTorch用）または`tf`（TensorFlow用）に設定します：

<frameworkcontent>
<pt>

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
                      [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
                      [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
```

</pt>
<tf>

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
       [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
       [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```

</tf>
</frameworkcontent>

## Audio

オーディオタスクの場合、データセットをモデル用に準備するために[特徴抽出器](main_classes/feature_extractor)が必要です。
特徴抽出器は生のオーディオデータから特徴を抽出し、それらをテンソルに変換するために設計されています。

[PolyAI/minds14](https://huggingface.co/datasets/PolyAI/minds14)データセットをロードして（データセットのロード方法の詳細については🤗 [Datasetsチュートリアル](https://huggingface.co/docs/datasets/load_hub)を参照）、
オーディオデータセットで特徴抽出器をどのように使用できるかを確認してみましょう：

```python
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

`audio`列の最初の要素にアクセスして確認します。`audio`列を呼び出すと、自動的にオーディオファイルが読み込まれ、リサンプリングされます：

```py
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

これにより、3つのアイテムが返されます：

* `array` は読み込まれた音声信号で、1Dの配列として読み込まれます。必要に応じてリサンプリングされることもあります。
* `path` は音声ファイルの場所を指します。
* `sampling_rate` は音声信号内のデータポイントが1秒間にいくつ測定されるかを示します。

このチュートリアルでは、[Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)モデルを使用します。
モデルカードを確認すると、Wav2Vec2が16kHzのサンプリングされた音声オーディオで事前学習されていることがわかります。
モデルの事前学習に使用されたデータセットのサンプリングレートと、あなたのオーディオデータのサンプリングレートが一致することが重要です。
データのサンプリングレートが異なる場合、データをリサンプリングする必要があります。

1. 🤗 Datasetsの [`~datasets.Dataset.cast_column`] メソッドを使用して、サンプリングレートを16kHzにアップサンプリングします：

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```

2. 再び `audio` 列を呼び出してオーディオファイルをリサンプルします：

```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,
         3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 16000}
```

次に、入力を正規化しパディングするために特徴抽出器をロードします。テキストデータをパディングする場合、短いシーケンスには `0` が追加されます。同じ考え方がオーディオデータにも適用されます。特徴抽出器は `array` に `0` を追加します（これは無音として解釈されます）。

[`AutoFeatureExtractor.from_pretrained`]を使用して特徴抽出器をロードします：

```python
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

オーディオ `array` を特徴抽出器に渡します。発生しても気づきにくいエラー（silent error）をより良くデバッグできるように、特徴抽出器に `sampling_rate` 引数を追加することをお勧めします。

```python
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04,  2.7506407e-03,  2.8015103e-03, ...,
        5.6335266e-04,  4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```

トークナイザと同様に、バッチ内の可変長シーケンスを処理するためにパディングまたは切り詰めを適用できます。次に、これらの2つのオーディオサンプルのシーケンス長を確認してみましょう：

```python
>>> dataset[0]["audio"]["array"].shape
(173398,)

>>> dataset[1]["audio"]["array"].shape
(106496,)
```

オーディオサンプルの長さを揃えるために、データセットを前処理する関数を作成します。最大サンプル長を指定すると、特徴抽出器はシーケンスをそれに合わせてパディングまたは切り詰めます：

```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays,
...         sampling_rate=16000,
...         padding=True,
...         max_length=100000,
...         truncation=True,
...     )
...     return inputs
```

`preprocess_function`をデータセットの最初の数例に適用します：

```python
>>> processed_dataset = preprocess_function(dataset[:5])
```

サンプルの長さは現在同じで、指定された最大長と一致しています。これで処理されたデータセットをモデルに渡すことができます！

```py
>>> processed_dataset["input_values"][0].shape
(100000,)

>>> processed_dataset["input_values"][1].shape
(100000,)
```

## Computer Vision

コンピュータビジョンタスクでは、モデル用にデータセットを準備するための[画像プロセッサ](main_classes/image_processor)が必要です。
画像の前処理には、画像をモデルが期待する入力形式に変換するためのいくつかのステップが含まれています。これらのステップには、リサイズ、正規化、カラーチャネルの補正、および画像をテンソルに変換するなどが含まれます。

<Tip>

画像の前処理は、通常、画像の増強の形式に従います。画像の前処理と画像の増強の両方は画像データを変換しますが、異なる目的があります：

* 画像の増強は、過学習を防ぎ、モデルの堅牢性を向上させるのに役立つ方法で画像を変更します。データを増強する方法は無限で、明るさや色の調整、クロップ、回転、リサイズ、ズームなど、様々な方法があります。ただし、増強操作によって画像の意味が変わらないように注意する必要があります。
* 画像の前処理は、画像がモデルの期待する入力形式と一致することを保証します。コンピュータビジョンモデルをファインチューニングする場合、画像はモデルが最初にトレーニングされたときとまったく同じ方法で前処理する必要があります。

画像の増強には任意のライブラリを使用できます。画像の前処理には、モデルに関連付けられた`ImageProcessor`を使用します。

</Tip>

コンピュータビジョンのデータセットで画像プロセッサを使用する方法を示すために、[food101](https://huggingface.co/datasets/food101)データセットをロードします（データセットのロード方法の詳細については🤗 [Datasetsチュートリアル](https://huggingface.co/docs/datasets/load_hub)を参照）：

<Tip>

データセットがかなり大きいため、🤗 Datasetsの`split`パラメータを使用してトレーニングデータの小さなサンプルのみをロードします！

</Tip>

```python
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
```

次に、🤗 Datasetsの [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) 機能で画像を見てみましょう：

```python
>>> dataset[0]["image"]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/>
</div>

AutoImageProcessorを[`AutoImageProcessor.from_pretrained`]を使用してロードします：

```py
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

1. まず、画像の拡張を追加しましょう。好きなライブラリを使用できますが、このチュートリアルではtorchvisionの[`transforms`](https://pytorch.org/vision/stable/transforms.html)モジュールを使用します。別のデータ拡張ライブラリを使用したい場合は、[Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)または[Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)で詳細を学ぶことができます。

   ここでは、[`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)を使用していくつかの変換を連鎖させます - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)と[`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html)。
   サイズの変更に関しては、`image_processor`から画像サイズの要件を取得できます。
   一部のモデルでは、正確な高さと幅が必要ですが、他のモデルでは`shortest_edge`のみが定義されています。

```py
>>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose

>>> size = (
...     image_processor.size["shortest_edge"]
...     if "shortest_edge" in image_processor.size
...     else (image_processor.size["height"], image_processor.size["width"])
... )

>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
```

2. モデルは[`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)を入力として受け取ります。
   `ImageProcessor`は画像の正規化と適切なテンソルの生成を処理できます。
   一連の画像に対する画像拡張と画像前処理を組み合わせ、`pixel_values`を生成する関数を作成します：

```python
>>> def transforms(examples):
...     images = [_transforms(img.convert("RGB")) for img in examples["image"]]
...     examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
...     return examples
```

<Tip>

上記の例では、画像のサイズ変更を既に画像増強変換で行っているため、`do_resize=False`を設定しました。
適切な `image_processor` からの `size` 属性を活用しています。画像増強中に画像のサイズ変更を行わない場合は、このパラメータを省略してください。
デフォルトでは、`ImageProcessor` がサイズ変更を処理します。

画像を増強変換の一部として正規化したい場合は、`image_processor.image_mean` と `image_processor.image_std` の値を使用してください。

</Tip>

3. 次に、🤗 Datasetsの[`set_transform`](https://huggingface.co/docs/datasets/process#format-transform)を使用して、変換をリアルタイムで適用します：

```python
>>> dataset.set_transform(transforms)
```

4. 画像にアクセスすると、画像プロセッサが `pixel_values` を追加したことがわかります。これで処理済みのデータセットをモデルに渡すことができます！

```python
>>> dataset[0].keys()
```

以下は、変換が適用された後の画像の外観です。
画像はランダムに切り抜かれ、その色の特性も異なります。

```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/>
</div>

<Tip>

オブジェクト検出、意味セグメンテーション、インスタンスセグメンテーション、およびパノプティックセグメンテーションなどのタスクの場合、`ImageProcessor`は
ポスト処理メソッドを提供します。これらのメソッドは、モデルの生の出力を境界ボックスやセグメンテーションマップなどの意味のある予測に変換します。

</Tip>

### Pad

一部の場合、たとえば、[DETR](./model_doc/detr)をファインチューニングする場合、モデルはトレーニング時にスケールの変更を適用します。
これにより、バッチ内の画像のサイズが異なる場合があります。[`DetrImageProcessor`]から[`DetrImageProcessor.pad`]を使用し、
カスタムの`collate_fn`を定義して画像を一緒にバッチ処理できます。

```py
>>> def collate_fn(batch):
...     pixel_values = [item["pixel_values"] for item in batch]
...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
...     labels = [item["labels"] for item in batch]
...     batch = {}
...     batch["pixel_values"] = encoding["pixel_values"]
...     batch["pixel_mask"] = encoding["pixel_mask"]
...     batch["labels"] = labels
...     return batch
```

## Multi Modal

マルチモーダル入力を使用するタスクの場合、モデル用にデータセットを準備するための[プロセッサ](main_classes/processors)が必要です。プロセッサは、トークナイザや特徴量抽出器などの2つの処理オブジェクトを結合します。

自動音声認識（ASR）のためのプロセッサの使用方法を示すために、[LJ Speech](https://huggingface.co/datasets/lj_speech)データセットをロードします（データセットのロード方法の詳細については🤗 [Datasets チュートリアル](https://huggingface.co/docs/datasets/load_hub)を参照）：

```python
>>> from datasets import load_dataset

>>> lj_speech = load_dataset("lj_speech", split="train")
```

ASR（自動音声認識）の場合、主に `audio` と `text` に焦点を当てているため、他の列を削除できます：

```python
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```

次に、`audio`と`text`の列を見てみましょう：

```python
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
         7.3242188e-04,  2.1362305e-04,  6.1035156e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
 'sampling_rate': 22050}

>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```

常に、オーディオデータセットのサンプリングレートを、モデルの事前学習に使用されたデータセットのサンプリングレートと一致させるように[リサンプル](preprocessing#audio)する必要があります！

```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```

プロセッサを [`AutoProcessor.from_pretrained`] を使用してロードします：

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```

1. `array`内に含まれるオーディオデータを`input_values`に処理し、`text`を`labels`にトークン化する関数を作成します：

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]

...     example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))

...     return example
```

2. サンプルに`prepare_dataset`関数を適用します：

```py
>>> prepare_dataset(lj_speech[0])
```
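
単一のサンプルで動作を確認できたら、たとえば次のように 🤗 Datasets の `map` を使ってデータセット全体に適用できます。以下は、上で定義した `prepare_dataset` と列名（`audio`、`text`）をそのまま使うことを前提とした一例です：

```py
>>> # 前提: 上で定義した processor と prepare_dataset をそのまま使用します
>>> lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
>>> lj_speech[0].keys()  # input_values と labels が含まれているはずです
```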
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Model training anatomy

モデルトレーニングの効率を向上させるために適用できるパフォーマンス最適化テクニックを理解するには、トレーニング中にGPUがどのように利用されるか、および実行される操作に応じて計算強度がどのように変化するかを理解することが役立ちます。

まずは、GPUの利用状況とモデルのトレーニング実行に関する示唆に富む例を見ることから始めましょう。デモンストレーションのために、いくつかのライブラリをインストールする必要があります:

```bash
pip install transformers datasets accelerate nvidia-ml-py3
```

`nvidia-ml-py3` ライブラリは、Python内からモデルのメモリ使用状況をモニターすることを可能にします。ターミナルでの `nvidia-smi` コマンドについてはご存知かもしれませんが、このライブラリを使用すると、Pythonから同じ情報にアクセスできます。

それから、いくつかのダミーデータを作成します。100から30000の間のランダムなトークンIDと、分類器のためのバイナリラベルです。合計で、512のシーケンスがあり、それぞれの長さは512で、PyTorchフォーマットの [`~datasets.Dataset`] に格納されます。

```py
>>> import numpy as np
>>> from datasets import Dataset

>>> seq_len, dataset_size = 512, 512
>>> dummy_data = {
...     "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)),
...     "labels": np.random.randint(0, 1, (dataset_size)),
... }
>>> ds = Dataset.from_dict(dummy_data)
>>> ds.set_format("pt")
```

[`Trainer`]を使用してGPU利用率とトレーニング実行の要約統計情報を表示するために、2つのヘルパー関数を定義します。

```py
>>> from pynvml import *

>>> def print_gpu_utilization():
...     nvmlInit()
...     handle = nvmlDeviceGetHandleByIndex(0)
...     info = nvmlDeviceGetMemoryInfo(handle)
...     print(f"GPU memory occupied: {info.used//1024**2} MB.")

>>> def print_summary(result):
...     print(f"Time: {result.metrics['train_runtime']:.2f}")
...     print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
...     print_gpu_utilization()
```

まず、GPUメモリが空いた状態から始まっていることを確認しましょう：

```py
>>> print_gpu_utilization()
GPU memory occupied: 0 MB.
```

モデルを読み込む前なので、GPUメモリは期待どおり占有されていません。これがお使いのマシンでの状況でない場合は、GPUメモリを使用しているすべてのプロセスを停止してください。ただし、すべての空きGPUメモリをユーザーが使用できるわけではありません。モデルがGPUに読み込まれると、カーネルも読み込まれ、1〜2GBのメモリを使用することがあります。それがどれくらいかを確認するために、GPUに小さなテンソルを読み込んでみましょう。これによりカーネルも読み込まれます。

```py
>>> import torch

>>> torch.ones((1, 1)).to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 1343 MB.
```

カーネルだけで1.3GBのGPUメモリを使用していることがわかります。次に、モデルがどれだけのスペースを使用しているかを見てみましょう。

## Load Model

まず、`bert-large-uncased` モデルを読み込みます。モデルの重みを直接GPUに読み込むことで、重みだけがどれだけのスペースを使用しているかを確認できます。

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased").to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 2631 MB.
```

モデルの重みだけで、GPUメモリを1.3 GB使用していることがわかります。正確な数値は、使用している具体的なGPUに依存します。新しいGPUでは、モデルの重みが最適化された方法で読み込まれ、モデルの使用が高速化されるため、モデルがより多くのスペースを占有することがあります。さて、`nvidia-smi` CLIと同じ結果が得られるかを簡単に確認することもできます。

```bash
nvidia-smi
```

```bash
Tue Jan 11 08:58:05 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:04.0 Off |                    0 |
| N/A   37C    P0    39W / 300W |   2631MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3721      C   ...nvs/codeparrot/bin/python    2629MiB |
+-----------------------------------------------------------------------------+
```

前回と同じ数値を取得し、16GBのメモリを搭載したV100 GPUを使用していることがわかります。さて、モデルのトレーニングを開始し、GPUメモリの消費がどのように変化するかを確認してみましょう。まず、いくつかの標準的なトレーニング引数を設定します:

```py
default_args = {
    "output_dir": "tmp",
    "evaluation_strategy": "steps",
    "num_train_epochs": 1,
    "log_level": "error",
    "report_to": "none",
}
```

<Tip>

複数の実験を実行する予定がある場合、実験間でメモリを適切にクリアするために、実験の間に Python カーネルを再起動してください。

</Tip>

## Memory utilization at vanilla training

[`Trainer`] を使用して、GPU パフォーマンスの最適化テクニックを使用せずにバッチサイズ 4 でモデルをトレーニングしましょう：

```py
>>> from transformers import TrainingArguments, Trainer, logging

>>> logging.set_verbosity_error()

>>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args)
>>> trainer = Trainer(model=model, args=training_args, train_dataset=ds)
>>> result = trainer.train()
>>> print_summary(result)
```

```
Time: 57.82
Samples/second: 8.86
GPU memory occupied: 14949 MB.
```

比較的小さいバッチサイズでも、GPUのほとんどのメモリがすでに使用されていることがわかります。しかし、より大きなバッチサイズを使用すると、モデルの収束が速くなったり、最終的な性能が向上したりすることがよくあります。したがって、理想的には、バッチサイズはGPUの制限ではなく、モデルの要件に合わせて調整したいところです。興味深いことに、モデルのサイズよりもはるかに多くのメモリを使用しています。なぜそうなるのかを少し理解するために、モデルの操作とメモリの必要性を見てみましょう。

## Anatomy of Model's Operations

Transformerアーキテクチャには、計算強度によって以下の3つの主要な操作グループが含まれています。

1. **テンソルの収縮**

    線形層とMulti-Head Attentionのコンポーネントは、すべてバッチ処理された **行列-行列の乗算** を行います。これらの操作は、Transformerのトレーニングにおいて最も計算集約的な部分です。

2. **統計的正規化**

    Softmaxと層正規化は、テンソルの収縮よりも計算負荷が少なく、1つまたは複数の **縮約操作** を含み、その結果がマップを介して適用されます。

3. **要素ごとの演算子**

    これらは残りの演算子です：**バイアス、ドロップアウト、活性化、および残差接続** です。これらは最も計算集約的な操作ではありません。

パフォーマンスのボトルネックを分析する際に、この知識は役立つことがあります。

この要約は、[Data Movement Is All You Need: A Case Study on Optimizing Transformers (2020)](https://arxiv.org/abs/2007.00072)から派生しています。

## Anatomy of Model's Memory

モデルのトレーニングが、GPUに配置されたモデル自体よりもはるかに多くのメモリを使用することを見てきました。これは、トレーニング中にGPUメモリを使用する多くのコンポーネントが存在するためです。GPUメモリ上のコンポーネントは以下の通りです：

1. モデルの重み
2. オプティマイザの状態
3. 勾配
4. 勾配計算のために保存された前向き活性化
5. 一時バッファ
6. 機能固有のメモリ

通常、AdamWを使用して混合精度でトレーニングされたモデルは、モデルパラメータごとに18バイトとアクティベーションメモリが必要です。推論ではオプティマイザの状態と勾配は不要ですので、これらを差し引くことができます。したがって、混合精度の推論においては、モデルパラメータごとに6バイトとアクティベーションメモリが必要です。

詳細を見てみましょう。

**モデルの重み:**

- fp32トレーニングのパラメーター数 * 4バイト
- ミックスプレシジョントレーニングのパラメーター数 * 6バイト（メモリ内にfp32とfp16のモデルを維持）

**オプティマイザの状態:**

- 通常のAdamWのパラメーター数 * 8バイト（2つの状態を維持）
- 8-bit AdamWオプティマイザのパラメーター数 * 2バイト（[bitsandbytes](https://github.com/TimDettmers/bitsandbytes)のようなオプティマイザ）
- モーメンタムを持つSGDのようなオプティマイザのパラメーター数 * 4バイト（1つの状態を維持）

**勾配**

- fp32またはミックスプレシジョントレーニングのパラメーター数 * 4バイト（勾配は常にfp32で保持）

**フォワードアクティベーション**

- サイズは多くの要因に依存し、主要な要因はシーケンスの長さ、隠れ層のサイズ、およびバッチサイズです。

フォワードとバックワードの関数によって渡され、返される入力と出力、および勾配計算のために保存されるフォワードアクティベーションがあります。

**一時的なメモリ**

さらに、計算が完了した後に解放されるさまざまな一時変数がありますが、これらは一時的に追加のメモリを必要とし、OOMに達する可能性があります。したがって、コーディング時にはこのような一時変数について戦略的に考え、必要なくなったら明示的に解放することが非常に重要です。

**機能固有のメモリ**

次に、ソフトウェアには特別なメモリ要件がある場合があります。たとえば、ビームサーチを使用してテキストを生成する場合、ソフトウェアは複数の入力と出力のコピーを維持する必要があります。

**`forward`と`backward`の実行速度**

畳み込み層と線形層では、バックワードにフォワードと比べて2倍のFLOPSがあり、一般的には約2倍遅くなります（バックワードのサイズが扱いにくいことがあるため、それ以上になることもあります）。
アクティベーションは通常、バンド幅制限されており、バックワードでアクティベーションがフォワードよりも多くのデータを読むことが一般的です（たとえば、アクティベーションのフォワードは1回の読み取りと1回の書き込みを行い、アクティベーションのバックワードはgradOutputとフォワードの出力の合計2回の読み取りと、1回の書き込みを行います）。

ご覧の通り、GPUメモリを節約したり操作を高速化できる可能性のある場所がいくつかあります。
GPUの利用と計算速度に影響を与える要因を理解したので、パフォーマンス最適化の技術については、[単一GPUでの効率的なトレーニングのための方法とツール](perf_train_gpu_one)のドキュメンテーションページを参照してください。
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/create_a_model.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Create a custom architecture [`AutoClass`](model_doc/auto)ใฏใ€ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่‡ชๅ‹•็š„ใซๆŽจ่ซ–ใ—ใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ่จญๅฎšใจ้‡ใฟใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ไธ€่ˆฌ็š„ใซใฏใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซไพๅญ˜ใ—ใชใ„ใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซ`AutoClass`ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใŸใ ใ—ใ€็‰นๅฎšใฎใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒกใƒผใ‚ฟใซๅฏพใ™ใ‚‹ๅˆถๅพกใ‚’ใ‚ˆใ‚Š่ฉณ็ดฐใซ่กŒใ„ใŸใ„ใƒฆใƒผใ‚ถใƒผใฏใ€ใ„ใใคใ‹ใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใ‹ใ‚‰ใ‚ซใ‚นใ‚ฟใƒ ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’็ ”็ฉถใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ€ใพใŸใฏๅฎŸ้จ“ใ™ใ‚‹่ˆˆๅ‘ณใŒใ‚ใ‚‹ใƒฆใƒผใ‚ถใƒผใซ็‰นใซๅฝน็ซ‹ใคใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€`AutoClass`ใ‚’ไฝฟ็”จใ—ใชใ„ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใฎไฝœๆˆใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใ€‚ๆฌกใฎๆ–นๆณ•ใ‚’ๅญฆใณใพใ™๏ผš - ใƒขใƒ‡ใƒซใฎ่จญๅฎšใ‚’ใƒญใƒผใƒ‰ใŠใ‚ˆใณใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ™ใ‚‹ใ€‚ - ใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ไฝœๆˆใ™ใ‚‹ใ€‚ - ใƒ†ใ‚ญใ‚นใƒˆ็”จใฎ้…ใ„ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจ้ซ˜้€Ÿใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝœๆˆใ™ใ‚‹ใ€‚ - ใƒ“ใ‚ธใƒงใƒณใ‚ฟใ‚นใ‚ฏ็”จใฎ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝœๆˆใ™ใ‚‹ใ€‚ - ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‚ฟใ‚นใ‚ฏ็”จใฎ็‰นๅพดๆŠฝๅ‡บๅ™จใ‚’ไฝœๆˆใ™ใ‚‹ใ€‚ - ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใ‚ฟใ‚นใ‚ฏ็”จใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝœๆˆใ™ใ‚‹ใ€‚ ## Configuration [่จญๅฎš](main_classes/configuration)ใฏใ€ใƒขใƒ‡ใƒซใฎ็‰นๅฎšใฎๅฑžๆ€งใ‚’ๆŒ‡ใ—ใพใ™ใ€‚ๅ„ใƒขใƒ‡ใƒซใฎ่จญๅฎšใซใฏ็•ฐใชใ‚‹ๅฑžๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€ใ™ในใฆใฎNLPใƒขใƒ‡ใƒซใซใฏใ€`hidden_size`ใ€`num_attention_heads`ใ€`num_hidden_layers`ใ€ใŠใ‚ˆใณ`vocab_size`ๅฑžๆ€งใŒๅ…ฑ้€šใ—ใฆใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๅฑžๆ€งใฏใ€ใƒขใƒ‡ใƒซใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใŸใ‚ใฎๆณจๆ„ใƒ˜ใƒƒใƒ‰ใฎๆ•ฐใ‚„้š ใ‚Œๅฑคใฎๆ•ฐใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ [DistilBERT](model_doc/distilbert)ใ‚’ใ‚ˆใ‚Š่ฉณใ—ใ่ชฟในใ‚‹ใŸใ‚ใซใ€[`DistilBertConfig`]ใซใ‚ขใ‚ฏใ‚ปใ‚นใ—ใฆใใฎๅฑžๆ€งใ‚’่ชฟในใฆใฟใพใ—ใ‚‡ใ†๏ผš ```py >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`]ใฏใ€ๅŸบๆœฌใฎ[`DistilBertModel`]ใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ‚‹ใ™ในใฆใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅฑžๆ€งใ‚’่กจ็คบใ—ใพใ™ใ€‚ ใ™ในใฆใฎๅฑžๆ€งใฏใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บๅฏ่ƒฝใงใ€ๅฎŸ้จ“ใฎใŸใ‚ใฎใ‚นใƒšใƒผใ‚นใ‚’ๆไพ›ใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซใ‚’ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ—ใฆไปฅไธ‹ใฎใ‚ˆใ†ใชใ“ใจใŒใงใใพใ™๏ผš - 
`activation`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใง็•ฐใชใ‚‹ๆดปๆ€งๅŒ–้–ขๆ•ฐใ‚’่ฉฆใ™ใ€‚ - `attention_dropout`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใงๆณจๆ„็ขบ็އใฎ้ซ˜ใ„ใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆ็އใ‚’ไฝฟ็”จใ™ใ‚‹ใ€‚ ```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎๅฑžๆ€งใฏใ€[`~PretrainedConfig.from_pretrained`] ้–ขๆ•ฐใงๅค‰ๆ›ดใงใใพใ™๏ผš ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` Once you are satisfied with your model configuration, you can save it with [`PretrainedConfig.save_pretrained`]. Your configuration file is stored as a JSON file in the specified save directory. ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใ‚’ๅ†ๅˆฉ็”จใ™ใ‚‹ใซใฏใ€[`~PretrainedConfig.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใใ‚Œใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> ใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใ‚’่พžๆ›ธใจใ—ใฆไฟๅญ˜ใ™ใ‚‹ใ“ใจใ‚‚ใ€ใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆๅฑžๆ€งใจใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๆง‹ๆˆๅฑžๆ€งใฎ้•ใ„ใ ใ‘ใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผ่ฉณ็ดฐใซใคใ„ใฆใฏ[configuration](main_classes/configuration)ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ </Tip> ## Model ๆฌกใฎใ‚นใƒ†ใƒƒใƒ—ใฏใ€[ใƒขใƒ‡ใƒซ](main_classes/models)ใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใƒขใƒ‡ใƒซ๏ผˆใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใจใ‚‚็ทฉใ่จ€ใ‚ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผ‰ใฏใ€ๅ„ใƒฌใ‚คใƒคใƒผใŒไฝ•ใ‚’ใ—ใฆใ„ใ‚‹ใ‹ใ€ใฉใฎๆ“ไฝœใŒ่กŒใ‚ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ๆง‹ๆˆใ‹ใ‚‰ใฎ `num_hidden_layers` ใฎใ‚ˆใ†ใชๅฑžๆ€งใฏใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๅฎš็พฉใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏ [`PreTrainedModel`] ใ‚’ใƒ™ใƒผใ‚นใ‚ฏใƒฉใ‚นใจใ—ใ€ๅ…ฅๅŠ›ๅŸ‹ใ‚่พผใฟใฎใƒชใ‚ตใ‚คใ‚บใ‚„ใ‚ปใƒซใƒ•ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒ˜ใƒƒใƒ‰ใฎใƒ—ใƒซใƒผใƒ‹ใƒณใ‚ฐใชใฉใ€ๅ…ฑ้€šใฎใƒกใ‚ฝใƒƒใƒ‰ใŒใ„ใใคใ‹ใ‚ใ‚Šใพใ™ใ€‚ ใ•ใ‚‰ใซใ€ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏ [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html)ใ€[`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)ใ€ใพใŸใฏ [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) ใฎใ„ใšใ‚Œใ‹ใฎใ‚ตใƒ–ใ‚ฏใƒฉใ‚นใงใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใคใพใ‚Šใ€ใƒขใƒ‡ใƒซใฏใใ‚Œใžใ‚Œใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฎไฝฟ็”จๆณ•ใจไบ’ๆ›ๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ <frameworkcontent> <pt> ใƒขใƒ‡ใƒซใซใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆๅฑžๆ€งใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎ้‡ใฟใงใฏใชใใƒฉใƒณใƒ€ใƒ ใชๅ€คใ‚’ๆŒใคใƒขใƒ‡ใƒซใŒไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒ่กŒใ‚ใ‚Œใ‚‹ใพใงใ€ใพใ ๆœ‰็”จใชใ‚‚ใฎใจใ—ใฆไฝฟ็”จใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ‚ณใ‚นใƒˆใจๆ™‚้–“ใŒใ‹ใ‹ใ‚‹ใƒ—ใƒญใ‚ปใ‚นใงใ™ใ€‚ 
้€šๅธธใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅฟ…่ฆใชใƒชใ‚ฝใƒผใ‚นใฎไธ€้ƒจใ—ใ‹ไฝฟ็”จใ›ใšใ€ใ‚ˆใ‚Š้€Ÿใใ‚ˆใ‚Š่‰ฏใ„็ตๆžœใ‚’ๅพ—ใ‚‹ใŸใ‚ใซไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒ่‰ฏใ„ใงใ—ใ‚‡ใ†ใ€‚ [`~PreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased") ``` ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹้š›ใ€ใƒขใƒ‡ใƒซใŒ๐Ÿค— Transformersใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซ่จญๅฎšใŒ่‡ชๅ‹•็š„ใซใƒญใƒผใƒ‰ใ•ใ‚Œใพใ™ใ€‚ใŸใ ใ—ใ€ๅฟ…่ฆใซๅฟœใ˜ใฆใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซ่จญๅฎšๅฑžๆ€งใฎไธ€้ƒจใพใŸใฏใ™ในใฆใ‚’็‹ฌ่‡ชใฎใ‚‚ใฎใง็ฝฎใๆ›ใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </pt> <tf> ใƒขใƒ‡ใƒซใซใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšๅฑžๆ€งใ‚’ใƒญใƒผใƒ‰ใ—ใฆใใ ใ•ใ„๏ผš ```py >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> tf_model = TFDistilBertModel(my_config) ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใงใฏใชใใƒฉใƒณใƒ€ใƒ ใชๅ€คใ‚’ๆŒใคใƒขใƒ‡ใƒซใŒไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใฎใƒขใƒ‡ใƒซใ‚’ๆœ‰็”จใช็›ฎ็š„ใซใฏใพใ ไฝฟ็”จใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ‚ณใ‚นใƒˆใŒใ‹ใ‹ใ‚Šใ€ๆ™‚้–“ใŒใ‹ใ‹ใ‚‹ใƒ—ใƒญใ‚ปใ‚นใงใ™ใ€‚ ไธ€่ˆฌ็š„ใซใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅฟ…่ฆใชใƒชใ‚ฝใƒผใ‚นใฎไธ€้ƒจใ—ใ‹ไฝฟ็”จใ›ใšใซใ€ใ‚ˆใ‚Š้€Ÿใๅ„ชใ‚ŒใŸ็ตๆžœใ‚’ๅพ—ใ‚‹ใŸใ‚ใซไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒ่‰ฏใ„ใงใ—ใ‚‡ใ†ใ€‚ [`~TFPreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") ``` ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹้š›ใ€ใƒขใƒ‡ใƒซใŒ๐Ÿค— Transformersใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซๆง‹ๆˆใŒ่‡ชๅ‹•็š„ใซใƒญใƒผใƒ‰ใ•ใ‚Œใพใ™ใ€‚ใŸใ ใ—ใ€ๅฟ…่ฆใงใ‚ใ‚Œใฐใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒ‡ใƒซๆง‹ๆˆๅฑžๆ€งใฎไธ€้ƒจใพใŸใฏใ™ในใฆใ‚’็‹ฌ่‡ชใฎใ‚‚ใฎใง็ฝฎใๆ›ใˆใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผš ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent> ### Model heads ใ“ใฎๆ™‚็‚นใงใ€ใƒ™ใƒผใ‚นใฎDistilBERTใƒขใƒ‡ใƒซใŒใ‚ใ‚Šใ€ใ“ใ‚Œใฏ้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚้š ใ‚ŒใŸ็Šถๆ…‹ใฏใƒขใƒ‡ใƒซใฎใƒ˜ใƒƒใƒ‰ใธใฎๅ…ฅๅŠ›ใจใ—ใฆๆธกใ•ใ‚Œใ€ๆœ€็ต‚็š„ใชๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚๐Ÿค— Transformersใฏใ€ใƒขใƒ‡ใƒซใŒใใฎใ‚ฟใ‚นใ‚ฏใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใ‚‹้™ใ‚Šใ€ๅ„ใ‚ฟใ‚นใ‚ฏใซๅฏพๅฟœใ™ใ‚‹็•ฐใชใ‚‹ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใ‚’ๆไพ›ใ—ใพใ™๏ผˆใคใพใ‚Šใ€DistilBERTใ‚’็ฟป่จณใฎใ‚ˆใ†ใชใ‚ทใƒผใ‚ฑใƒณใ‚นๅฏพใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“๏ผ‰ใ€‚ <frameworkcontent> <pt> ใŸใจใˆใฐใ€[`DistilBertForSequenceClassification`]ใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใ‚’ๆŒใคใƒ™ใƒผใ‚นใฎDistilBERTใƒขใƒ‡ใƒซใงใ™ใ€‚ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏใ€ใƒ—ใƒผใƒซใ•ใ‚ŒใŸๅ‡บๅŠ›ใฎไธŠใซใ‚ใ‚‹็ทšๅฝขๅฑคใงใ™ใ€‚ ```py >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๆ–ฐใ—ใ„ใ‚ฟใ‚นใ‚ฏใซใ“ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’็ฐกๅ˜ใซๅ†ๅˆฉ็”จใ™ใ‚‹ใซใฏใ€็•ฐใชใ‚‹ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใซๅˆ‡ใ‚Šๆ›ฟใˆใพใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆใ€[`DistilBertForQuestionAnswering`] ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใƒ˜ใƒƒใƒ‰ใฏใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใจ้กžไผผใ—ใฆใ„ใพใ™ใŒใ€้š 
ใ‚Œ็Šถๆ…‹ใฎๅ‡บๅŠ›ใฎไธŠใซ็ทšๅฝขๅฑคใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </pt> <tf> ไพ‹ใˆใฐใ€[`TFDistilBertForSequenceClassification`]ใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใ‚’ๆŒใคใƒ™ใƒผใ‚นใฎDistilBERTใƒขใƒ‡ใƒซใงใ™ใ€‚ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏใ€ใƒ—ใƒผใƒซใ•ใ‚ŒใŸๅ‡บๅŠ›ใฎไธŠใซใ‚ใ‚‹็ทšๅฝขๅฑคใงใ™ใ€‚ ```py >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๅˆฅใฎใ‚ฟใ‚นใ‚ฏใซใ“ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’็ฐกๅ˜ใซๅ†ๅˆฉ็”จใ™ใ‚‹ใ“ใจใŒใงใใ€็•ฐใชใ‚‹ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใซๅˆ‡ใ‚Šๆ›ฟใˆใ‚‹ใ ใ‘ใงใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆใ€[`TFDistilBertForQuestionAnswering`]ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใƒ˜ใƒƒใƒ‰ใฏใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใจไผผใฆใ„ใพใ™ใŒใ€้š ใ‚Œ็Šถๆ…‹ใฎๅ‡บๅŠ›ใฎไธŠใซ็ทšๅฝขๅฑคใŒใ‚ใ‚‹ใ ใ‘ใงใ™ใ€‚ ```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </tf> </frameworkcontent> ## Tokenizer ใƒ†ใ‚ญใ‚นใƒˆใƒ‡ใƒผใ‚ฟใ‚’ใƒขใƒ‡ใƒซใงไฝฟ็”จใ™ใ‚‹ๅ‰ใซๅฟ…่ฆใชๆœ€ๅพŒใฎใƒ™ใƒผใ‚นใ‚ฏใƒฉใ‚นใฏใ€็”Ÿใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒ†ใƒณใ‚ฝใƒซใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถ](main_classes/tokenizer)ใงใ™ใ€‚ ๐Ÿค— Transformersใงไฝฟ็”จใงใใ‚‹2ใคใฎใ‚ฟใ‚คใƒ—ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒใ‚ใ‚Šใพใ™๏ผš - [`PreTrainedTokenizer`]: ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎPythonๅฎŸ่ฃ…ใงใ™ใ€‚ - [`PreTrainedTokenizerFast`]: Rustใƒ™ใƒผใ‚นใฎ[๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/)ใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใงใ™ใ€‚ ใ“ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎใ‚ฟใ‚คใƒ—ใฏใ€ใใฎRustๅฎŸ่ฃ…ใซใ‚ˆใ‚Šใ€็‰นใซใƒใƒƒใƒใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณไธญใซ้ซ˜้€Ÿใงใ™ใ€‚ ้ซ˜้€Ÿใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€ใƒˆใƒผใ‚ฏใƒณใ‚’ๅ…ƒใฎๅ˜่ชžใพใŸใฏๆ–‡ๅญ—ใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ™ใ‚‹*ใ‚ชใƒ•ใ‚ปใƒƒใƒˆใƒžใƒƒใƒ”ใƒณใ‚ฐ*ใชใฉใฎ่ฟฝๅŠ ใƒกใ‚ฝใƒƒใƒ‰ใ‚‚ๆไพ›ใ—ใพใ™ใ€‚ ไธกๆ–นใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€ใ‚จใƒณใ‚ณใƒผใƒ‰ใจใƒ‡ใ‚ณใƒผใƒ‰ใ€ๆ–ฐใ—ใ„ใƒˆใƒผใ‚ฏใƒณใฎ่ฟฝๅŠ ใ€็‰นๅˆฅใชใƒˆใƒผใ‚ฏใƒณใฎ็ฎก็†ใชใฉใ€ๅ…ฑ้€šใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ <Tip warning={true}> ใ™ในใฆใฎใƒขใƒ‡ใƒซใŒ้ซ˜้€Ÿใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใƒขใƒ‡ใƒซใŒ้ซ˜้€Ÿใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€ใ“ใฎ[่กจ](index#supported-frameworks)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ </Tip> ็‹ฌ่‡ชใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใŸๅ ดๅˆใ€*ใƒœใ‚ญใƒฃใƒ–ใƒฉใƒชใƒผ*ใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝœๆˆใงใใพใ™ใ€‚ ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` ใ‚ซใ‚นใ‚ฟใƒ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‹ใ‚‰็”Ÿๆˆใ•ใ‚Œใ‚‹่ชžๅฝ™ใฏใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŒ็”Ÿๆˆใ™ใ‚‹่ชžๅฝ™ใจใฏ็•ฐใชใ‚‹ใ“ใจใ‚’่ฆšใˆใฆใŠใใ“ใจใฏ้‡่ฆใงใ™ใ€‚ ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎ่ชžๅฝ™ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใใ†ใ—ใชใ„ใจใ€ๅ…ฅๅŠ›ใŒๆ„ๅ‘ณใ‚’ใชใ•ใชใใชใ‚Šใพใ™ใ€‚ [`DistilBertTokenizer`]ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎ่ชžๅฝ™ใ‚’ๆŒใคใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝœๆˆใ—ใพใ™: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = 
DistilBertTokenizer.from_pretrained("distilbert-base-uncased") ``` [`DistilBertTokenizerFast`]ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆ้ซ˜้€Ÿใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased") ``` <Tip> ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€[`AutoTokenizer`]ใฏ้ซ˜้€Ÿใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’่ชญใฟ่พผใ‚‚ใ†ใจใ—ใพใ™ใ€‚`from_pretrained`ๅ†…ใง`use_fast=False`ใ‚’่จญๅฎšใ™ใ‚‹ใ“ใจใงใ€ใ“ใฎๅ‹•ไฝœใ‚’็„กๅŠนใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ </Tip> ## Image Processor ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏใƒ“ใ‚ธใƒงใƒณๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏๅŸบๆœฌใ‚ฏใƒฉใ‚น [`~image_processing_utils.ImageProcessingMixin`] ใ‚’็ถ™ๆ‰ฟใ—ใฆใ„ใพใ™ใ€‚ ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ไฝฟ็”จใ—ใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚ŒใŸ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ใŸใจใˆใฐใ€็”ปๅƒๅˆ†้กžใซ[ViT](model_doc/vit)ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ [`ViTImageProcessor`] ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```py >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ‚’ๅฟ…่ฆใจใ—ใชใ„ๅ ดๅˆใ€ใƒขใƒ‡ใƒซใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใซใฏใ€ๅ˜็ด”ใซ`from_pretrained`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> [`ViTImageProcessor`]ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๅค‰ๆ›ดใ—ใฆใ€ใ‚ซใ‚นใ‚ฟใƒ ใฎ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝœๆˆใงใใพใ™๏ผš ```py >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` ## Feature Extractor ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผๆŠฝๅ‡บๅ™จใฏ้Ÿณๅฃฐๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏๅŸบๆœฌ็š„ใช [`~feature_extraction_utils.FeatureExtractionMixin`] ใ‚ฏใƒฉใ‚นใ‹ใ‚‰็ถ™ๆ‰ฟใ•ใ‚Œใ€้Ÿณๅฃฐๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎ [`SequenceFeatureExtractor`] ใ‚ฏใƒฉใ‚นใ‹ใ‚‰ใ‚‚็ถ™ๆ‰ฟใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ใƒขใƒ‡ใƒซใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚ŒใŸใƒ•ใ‚ฃใƒผใƒใƒฃใƒผๆŠฝๅ‡บๅ™จใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€้Ÿณๅฃฐๅˆ†้กžใซ [Wav2Vec2](model_doc/wav2vec2) ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ [`Wav2Vec2FeatureExtractor`] ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` <Tip> ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ‚’่กŒใ‚ใชใ„ๅ ดๅˆใ€ใƒขใƒ‡ใƒซใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ็‰นๅพดๆŠฝๅ‡บๅ™จใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใซใฏใ€ๅ˜ใซ `from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> [`Wav2Vec2FeatureExtractor`] ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ๅค‰ๆ›ดใ—ใฆใ€ใ‚ซใ‚นใ‚ฟใƒ ็‰นๅพดๆŠฝๅ‡บๅ™จใ‚’ไฝœๆˆใงใใพใ™: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": 
false, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 8000 } ``` ## Processor ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใ‚ฟใ‚นใ‚ฏใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใƒขใƒ‡ใƒซใซๅฏพใ—ใฆใ€๐Ÿค— Transformersใฏไพฟๅˆฉใชใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚ฏใƒฉใ‚นใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ ใ“ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚ฏใƒฉใ‚นใฏใ€็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ‚„ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใชใฉใฎๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใ‚’ไพฟๅˆฉใซใƒฉใƒƒใƒ—ใ—ใ€ๅ˜ไธ€ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซ็ตๅˆใ—ใพใ™ใ€‚ ใŸใจใˆใฐใ€่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ใ‚ฟใ‚นใ‚ฏ๏ผˆASR๏ผ‰็”จใซ[`Wav2Vec2Processor`]ใ‚’ไฝฟ็”จใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ASRใฏ้Ÿณๅฃฐใ‚’ใƒ†ใ‚ญใ‚นใƒˆใซ่ปขๅ†™ใ™ใ‚‹ใ‚ฟใ‚นใ‚ฏใงใ‚ใ‚Šใ€้Ÿณๅฃฐๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซ็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใจใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒๅฟ…่ฆใงใ™ใ€‚ ้Ÿณๅฃฐๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ™ใ‚‹็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ‚’ไฝœๆˆใ—ใพใ™๏ผš ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) ``` ใƒ†ใ‚ญใ‚นใƒˆๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝœๆˆใ—ใพใ™: ```py >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt") ``` [`Wav2Vec2Processor`]ใง็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใจใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’็ต„ใฟๅˆใ‚ใ›ใพใ™๏ผš ```py >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` ไบŒใคใฎๅŸบๆœฌใ‚ฏใƒฉใ‚น - ่จญๅฎšใจใƒขใƒ‡ใƒซ - ใŠใ‚ˆใณ่ฟฝๅŠ ใฎๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚น๏ผˆใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ€็‰นๅพดๆŠฝๅ‡บๅ™จใ€ใพใŸใฏใƒ—ใƒญใ‚ปใƒƒใ‚ต๏ผ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€๐Ÿค— Transformers ใŒใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใƒขใƒ‡ใƒซใฎใ„ใšใ‚Œใ‹ใ‚’ไฝœๆˆใงใใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใฏ่จญๅฎšๅฏ่ƒฝใงใ€ๅฟ…่ฆใช็‰นๆ€งใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็”จใซ็ฐกๅ˜ใซใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ—ใŸใ‚Šใ€ๆ—ขๅญ˜ใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚
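参考までに、上で組み立てた`processor`の使い方を示す最小スケッチです。ダミー音声を入力とし、前のコード例の`my_vocab_file.txt`が実在することを前提（仮定）としています。`__call__`の細かい挙動はライブラリのバージョンによって異なる場合があります。

```py
import numpy as np

# ダミーの1秒分の音声（16kHz）を特徴量に変換する
raw_audio = np.zeros(16000, dtype=np.float32)
inputs = processor(raw_audio, sampling_rate=16000, return_tensors="pt")
print(inputs.input_values.shape)  # (1, 16000)

# ラベル（転写テキスト）側は、ラップされたトークナイザに直接アクセスして処理できる
labels = processor.tokenizer("hello world").input_ids
```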
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/tasks_explained.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # How ๐Ÿค— Transformers solve tasks [๐Ÿค— Transformersใงใงใใ‚‹ใ“ใจ](task_summary)ใงใ€่‡ช็„ถ่จ€่ชžๅ‡ฆ็†๏ผˆNLP๏ผ‰ใ€้Ÿณๅฃฐใจใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใฎใ‚ฟใ‚นใ‚ฏใ€ใใ‚Œใ‚‰ใฎ้‡่ฆใชใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใซใคใ„ใฆๅญฆใณใพใ—ใŸใ€‚ใ“ใฎใƒšใƒผใ‚ธใงใฏใ€ใƒขใƒ‡ใƒซใŒใ“ใ‚Œใ‚‰ใฎใ‚ฟใ‚นใ‚ฏใ‚’ใฉใฎใ‚ˆใ†ใซ่งฃๆฑบใ™ใ‚‹ใ‹ใ‚’่ฉณใ—ใ่ฆ‹ใฆใ€ใƒขใƒ‡ใƒซใฎๅ†…้ƒจใงไฝ•ใŒ่ตทใ“ใฃใฆใ„ใ‚‹ใ‹ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใซใฏๅคšใใฎๆ–นๆณ•ใŒใ‚ใ‚Šใ€ไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏ็‰นๅฎšใฎใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใ‚’ๅฎŸ่ฃ…ใ™ใ‚‹ใ‹ใ€ใพใŸใฏๆ–ฐใ—ใ„่ฆณ็‚นใ‹ใ‚‰ใ‚ฟใ‚นใ‚ฏใซๅ–ใ‚Š็ต„ใ‚€ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใŒใ€Transformerใƒขใƒ‡ใƒซใซใจใฃใฆใ€ไธ€่ˆฌ็š„ใชใ‚ขใ‚คใƒ‡ใ‚ขใฏๅŒใ˜ใงใ™ใ€‚ๆŸ”่ปŸใชใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎใŠใ‹ใ’ใงใ€ใปใจใ‚“ใฉใฎใƒขใƒ‡ใƒซใฏใ‚จใƒณใ‚ณใƒผใƒ€ใ€ใƒ‡ใ‚ณใƒผใƒ€ใ€ใพใŸใฏใ‚จใƒณใ‚ณใƒผใƒ€-ใƒ‡ใ‚ณใƒผใƒ€ๆง‹้€ ใฎๅค‰็จฎใงใ™ใ€‚Transformerใƒขใƒ‡ใƒซไปฅๅค–ใซใ‚‚ใ€ๅฝ“็คพใฎใƒฉใ‚คใƒ–ใƒฉใƒชใซใฏใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใ‚ฟใ‚นใ‚ฏใซไปŠใงใ‚‚ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใ„ใใคใ‹ใฎ็•ณใฟ่พผใฟใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ๏ผˆCNN๏ผ‰ใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใพใŸใ€็พไปฃใฎCNNใŒใฉใฎใ‚ˆใ†ใซๆฉŸ่ƒฝใ™ใ‚‹ใ‹ใ‚‚่ชฌๆ˜Žใ—ใพใ™ใ€‚ ใ‚ฟใ‚นใ‚ฏใŒใฉใฎใ‚ˆใ†ใซ่งฃๆฑบใ•ใ‚Œใ‚‹ใ‹ใ‚’่ชฌๆ˜Žใ™ใ‚‹ใŸใ‚ใซใ€ใƒขใƒ‡ใƒซๅ†…้ƒจใงๆœ‰็”จใชไบˆๆธฌใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใŸใ‚ใซไฝ•ใŒ่ตทใ“ใ‚‹ใ‹ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ - [Wav2Vec2](model_doc/wav2vec2)๏ผšใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๅˆ†้กžใŠใ‚ˆใณ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜๏ผˆASR๏ผ‰ๅ‘ใ‘ - [Vision Transformer๏ผˆViT๏ผ‰](model_doc/vit)ใŠใ‚ˆใณ[ConvNeXT](model_doc/convnext)๏ผš็”ปๅƒๅˆ†้กžๅ‘ใ‘ - [DETR](model_doc/detr)๏ผšใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บๅ‘ใ‘ - [Mask2Former](model_doc/mask2former)๏ผš็”ปๅƒใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณๅ‘ใ‘ - [GLPN](model_doc/glpn)๏ผšๆทฑๅบฆๆŽจๅฎšๅ‘ใ‘ - [BERT](model_doc/bert)๏ผšใ‚จใƒณใ‚ณใƒผใƒ€ใ‚’ไฝฟ็”จใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใ€ใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใ€ใŠใ‚ˆใณ่ณชๅ•ๅฟœ็ญ”ใชใฉใฎNLPใ‚ฟใ‚นใ‚ฏๅ‘ใ‘ - [GPT2](model_doc/gpt2)๏ผšใƒ‡ใ‚ณใƒผใƒ€ใ‚’ไฝฟ็”จใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆ็”ŸๆˆใชใฉใฎNLPใ‚ฟใ‚นใ‚ฏๅ‘ใ‘ - [BART](model_doc/bart)๏ผšใ‚จใƒณใ‚ณใƒผใƒ€-ใƒ‡ใ‚ณใƒผใƒ€ใ‚’ไฝฟ็”จใ™ใ‚‹่ฆ็ด„ใŠใ‚ˆใณ็ฟป่จณใชใฉใฎNLPใ‚ฟใ‚นใ‚ฏๅ‘ใ‘ <Tip> ใ•ใ‚‰ใซ้€ฒใ‚€ๅ‰ใซใ€ๅ…ƒใฎTransformerใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎๅŸบๆœฌ็š„ใช็Ÿฅ่ญ˜ใ‚’ๆŒใคใจ่‰ฏใ„ใงใ™ใ€‚ใ‚จใƒณใ‚ณใƒผใƒ€ใ€ใƒ‡ใ‚ณใƒผใƒ€ใ€ใŠใ‚ˆใณๆณจๆ„ๅŠ›ใŒใฉใฎใ‚ˆใ†ใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใ‚’็ŸฅใฃใฆใŠใใจใ€็•ฐใชใ‚‹Transformerใƒขใƒ‡ใƒซใŒใฉใฎใ‚ˆใ†ใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใ‚’็†่งฃใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ๅง‹ใ‚ใฆใ„ใ‚‹ใ‹ใ€ใƒชใƒ•ใƒฌใƒƒใ‚ทใƒฅใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€่ฉณ็ดฐใชๆƒ…ๅ ฑใซใคใ„ใฆใฏๅฝ“็คพใฎ[ใ‚ณใƒผใ‚น](https://huggingface.co/course/chapter1/4?fw=pt)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„๏ผ </Tip> ## Speech and audio 
[Wav2Vec2](model_doc/wav2vec2)は、未ラベルの音声データで事前トレーニングされ、オーディオ分類や自動音声認識のラベル付きデータでファインチューニングされる、自己教師あり学習モデルです。

<div class="flex justify-center">
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/>
</div>

このモデルには主に次の4つのコンポーネントがあります。

1. *特徴エンコーダ*：生の音声波形を受け取り、平均値をゼロに正規化し、単位分散に変換し、それを20msごとの特徴ベクトルのシーケンスに変換します。
2. 波形は本来連続的なものであり、テキストを単語に分割するようには独立した単位に分割できません。そのため、特徴ベクトルは、離散的な音声ユニットの学習を目的とする*量子化モジュール*に渡されます。音声ユニットは、*コードブック*（語彙と考えることができます）として知られるコードワードのコレクションから選択されます。コードブックから、連続したオーディオ入力を最もよく表すベクトル（音声ユニット。ターゲットラベルと考えることができます）が選択され、モデルに送られます。
3. 特徴ベクトルの約半分はランダムにマスクされ、マスクされた特徴ベクトルは*コンテキストネットワーク*に供給されます。これは、相対的な位置エンベッディングも追加するTransformerエンコーダです。
4. コンテキストネットワークの事前トレーニングの目的は*コントラスティブタスク*です。モデルは、偽の候補の集合の中から、マスクされた位置の真の量子化音声表現を予測しなければならず、これによりモデルは最も似たコンテキストベクトルと量子化音声ユニット（ターゲットラベル）を見つけるように促されます。

これでWav2Vec2は事前トレーニングされているので、オーディオ分類または自動音声認識のためにデータをファインチューニングできます！

### Audio classification

事前トレーニングされたモデルをオーディオ分類に使用するには、基本的なWav2Vec2モデルの上にシーケンス分類ヘッドを追加します。分類ヘッドはエンコーダの隠れた状態を受け入れる線形層で、隠れた状態は各オーディオフレームから学習された特徴を表します。これらの隠れた状態は長さが異なる可能性があるため、最初に隠れた状態がプールされ、次にクラスラベルに対するロジットに変換されます。ロジットとターゲット間のクロスエントロピー損失が計算され、最も可能性の高いクラスを見つけるために使用されます。

オーディオ分類を試す準備はできましたか？Wav2Vec2をファインチューンして推論に使用する方法を学ぶための完全な[オーディオ分類ガイド](tasks/audio_classification)をチェックしてください！

### Automatic speech recognition

事前トレーニングされたモデルを自動音声認識に使用するには、[connectionist temporal classification（CTC）](glossary#connectionist-temporal-classification-ctc)のために、基本的なWav2Vec2モデルの上に言語モデリングヘッドを追加します。言語モデリングヘッドはエンコーダの隠れた状態を受け入れ、それらをロジットに変換します。各ロジットはトークンクラスを表し（トークン数はタスクの語彙から来ます）、ロジットとターゲット間のCTC損失が計算され、最も可能性の高いトークン列が転写テキストにデコードされます。

自動音声認識を試す準備はできましたか？Wav2Vec2をファインチューンして推論に使用する方法を学ぶための完全な[自動音声認識ガイド](tasks/asr)をチェックしてください！
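上で説明した2種類のヘッドの違いは、出力の形状を見ると分かりやすくなります。以下は出力形状を比べるだけの最小スケッチです。ベースのチェックポイントを読み込むとヘッド部分はランダムに初期化されるため、ファインチューニング前の予測自体には意味がない点に注意してください。

```py
import numpy as np
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification, Wav2Vec2ForCTC

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
dummy_audio = np.zeros(16000, dtype=np.float32)  # 1秒分のダミー音声（16kHz）
inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")

# オーディオ分類ヘッド：プールされた隠れ状態からクラスごとに1つのロジット
clf = Wav2Vec2ForSequenceClassification.from_pretrained("facebook/wav2vec2-base", num_labels=2)
print(clf(**inputs).logits.shape)  # (バッチ, クラス数) = (1, 2)

# CTCヘッド：音声フレームごとに語彙サイズ分のロジット
ctc = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")
print(ctc(**inputs).logits.shape)  # (バッチ, フレーム数, 語彙サイズ)
```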
ใ—ใพใ™ใ€‚่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ˜ใƒƒใƒ‰ใฏใ‚จใƒณใ‚ณใƒผใƒ€ใฎ้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ใใ‚Œใ‚‰ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ—ใพใ™ใ€‚ๅ„ใƒญใ‚ธใƒƒใƒˆใฏใƒˆใƒผใ‚ฏใƒณใ‚ฏใƒฉใ‚นใ‚’่กจใ—๏ผˆใƒˆใƒผใ‚ฏใƒณๆ•ฐใฏใ‚ฟใ‚นใ‚ฏใฎ่ชžๅฝ™ใ‹ใ‚‰ๆฅใพใ™๏ผ‰ใ€ใƒญใ‚ธใƒƒใƒˆใจใ‚ฟใƒผใ‚ฒใƒƒใƒˆ้–“ใฎCTCๆๅคฑใŒ่จˆ็ฎ—ใ•ใ‚Œใ€ๆฌกใซ่ปขๅ†™ใซๅค‰ๆ›ใ•ใ‚Œใพใ™ใ€‚ ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ใ‚’่ฉฆใ™ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸWav2Vec2ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒณใ—ใฆๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใฎๅฎŒๅ…จใช[่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ใ‚ฌใ‚คใƒ‰](tasks/asr)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„๏ผ ## Computer vision ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใฎใ‚ฟใ‚นใ‚ฏใ‚’ใ‚ขใƒ—ใƒญใƒผใƒใ™ใ‚‹ๆ–นๆณ•ใฏ2ใคใ‚ใ‚Šใพใ™ใ€‚ 1. ็”ปๅƒใ‚’ใƒ‘ใƒƒใƒใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅˆ†ๅ‰ฒใ—ใ€Transformerใ‚’ไฝฟ็”จใ—ใฆไธฆๅˆ—ใซๅ‡ฆ็†ใ—ใพใ™ใ€‚ 2. [ConvNeXT](model_doc/convnext)ใชใฉใฎใƒขใƒ€ใƒณใชCNNใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏ็•ณใฟ่พผใฟๅฑคใ‚’ไฝฟ็”จใ—ใพใ™ใŒใ€ใƒขใƒ€ใƒณใชใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ่จญ่จˆใ‚’ๆŽก็”จใ—ใฆใ„ใพใ™ใ€‚ <Tip> ใ‚ตใƒผใƒ‰ใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€Transformerใจ็•ณใฟ่พผใฟใ‚’็ต„ใฟๅˆใ‚ใ›ใŸใ‚‚ใฎใ‚‚ใ‚ใ‚Šใพใ™๏ผˆไพ‹๏ผš[Convolutional Vision Transformer](model_doc/cvt)ใพใŸใฏ[LeViT](model_doc/levit)๏ผ‰ใ€‚ใ“ใ‚Œใ‚‰ใซใคใ„ใฆใฏ่ญฐ่ซ–ใ—ใพใ›ใ‚“ใŒใ€ใ“ใ‚Œใ‚‰ใฏใ“ใ“ใง่ชฟในใ‚‹2ใคใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚’็ต„ใฟๅˆใ‚ใ›ใฆใ„ใพใ™ใ€‚ </Tip> ViTใจConvNeXTใฏ็”ปๅƒๅˆ†้กžใซใ‚ˆใไฝฟ็”จใ•ใ‚Œใพใ™ใŒใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใ€ใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ€ๆทฑๅบฆๆŽจๅฎšใชใฉใฎไป–ใฎใƒ“ใ‚ธใƒงใƒณใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆใฏใ€DETRใ€Mask2Formerใ€GLPNใชใฉใŒ้ฉใ—ใฆใ„ใพใ™ใ€‚ ### Image classification ViTใจConvNeXTใฎไธกๆ–นใ‚’็”ปๅƒๅˆ†้กžใซไฝฟ็”จใงใใพใ™ใ€‚ไธปใช้•ใ„ใฏใ€ViTใŒๆณจๆ„ใƒกใ‚ซใƒ‹ใ‚บใƒ ใ‚’ไฝฟ็”จใ—ใ€ConvNeXTใŒ็•ณใฟ่พผใฟใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ #### Transformer [ViT](model_doc/vit)ใฏ็•ณใฟ่พผใฟใ‚’ๅฎŒๅ…จใซTransformerใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใง็ฝฎใๆ›ใˆใพใ™ใ€‚ๅ…ƒใฎTransformerใซ็ฒพ้€šใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ViTใฎ็†่งฃใฏๆ—ขใซใปใจใ‚“ใฉๅฎŒไบ†ใ—ใฆใ„ใพใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/> </div> ViTใŒๅฐŽๅ…ฅใ—ใŸไธปใชๅค‰ๆ›ด็‚นใฏใ€็”ปๅƒใ‚’Transformerใซไพ›็ตฆใ™ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚ 1. ็”ปๅƒใฏๆญฃๆ–นๅฝขใง้‡ใชใ‚‰ใชใ„ใƒ‘ใƒƒใƒใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅˆ†ๅ‰ฒใ•ใ‚Œใ€ๅ„ใƒ‘ใƒƒใƒใฏใƒ™ใ‚ฏใƒˆใƒซใพใŸใฏ*ใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟ*ใซๅค‰ๆ›ใ•ใ‚Œใพใ™ใ€‚ใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใฏใ€้ฉๅˆ‡ใชๅ…ฅๅŠ›ๆฌกๅ…ƒใ‚’ไฝœๆˆใ™ใ‚‹ใŸใ‚ใซ2D็•ณใฟ่พผใฟๅฑคใ‹ใ‚‰็”Ÿๆˆใ•ใ‚Œใพใ™๏ผˆๅŸบๆœฌใฎTransformerใฎๅ ดๅˆใ€ๅ„ใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใซ768ใฎๅ€คใŒใ‚ใ‚Šใพใ™๏ผ‰ใ€‚224x224ใƒ”ใ‚ฏใ‚ปใƒซใฎ็”ปๅƒใŒใ‚ใ‚‹ๅ ดๅˆใ€ใใ‚Œใ‚’16x16ใฎ็”ปๅƒใƒ‘ใƒƒใƒใซๅˆ†ๅ‰ฒใงใใพใ™ใ€‚ใƒ†ใ‚ญใ‚นใƒˆใŒๅ˜่ชžใซใƒˆใƒผใ‚ฏใƒณๅŒ–ใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ€็”ปๅƒใฏใƒ‘ใƒƒใƒใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซใ€Œใƒˆใƒผใ‚ฏใƒณๅŒ–ใ€ใ•ใ‚Œใพใ™ใ€‚ 2. *ๅญฆ็ฟ’ๅŸ‹ใ‚่พผใฟ*ใ€ใคใพใ‚Š็‰นๅˆฅใช `[CLS]` ใƒˆใƒผใ‚ฏใƒณใŒใ€BERTใฎใ‚ˆใ†ใซใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใฎๅ…ˆ้ ญใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ `[CLS]` ใƒˆใƒผใ‚ฏใƒณใฎๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใฏใ€ไป˜ๅฑžใฎๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฎๅ…ฅๅŠ›ใจใ—ใฆไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ไป–ใฎๅ‡บๅŠ›ใฏ็„ก่ฆ–ใ•ใ‚Œใพใ™ใ€‚ใ“ใฎใƒˆใƒผใ‚ฏใƒณใฏใ€ใƒขใƒ‡ใƒซใŒ็”ปๅƒใฎ่กจ็พใ‚’ใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ 3. 
ใƒ‘ใƒƒใƒใจๅญฆ็ฟ’ๅŸ‹ใ‚่พผใฟใซ่ฟฝๅŠ ใ™ใ‚‹ๆœ€ๅพŒใฎ่ฆ็ด ใฏ*ไฝ็ฝฎๅŸ‹ใ‚่พผใฟ*ใงใ™ใ€‚ใƒขใƒ‡ใƒซใฏ็”ปๅƒใƒ‘ใƒƒใƒใŒใฉใฎใ‚ˆใ†ใซไธฆในใ‚‰ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚’็Ÿฅใ‚Šใพใ›ใ‚“ใฎใงใ€ไฝ็ฝฎๅŸ‹ใ‚่พผใฟใ‚‚ๅญฆ็ฟ’ๅฏ่ƒฝใงใ€ใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใจๅŒใ˜ใ‚ตใ‚คใ‚บใ‚’ๆŒใกใพใ™ใ€‚ๆœ€ๅพŒใซใ€ใ™ในใฆใฎๅŸ‹ใ‚่พผใฟใŒTransformerใ‚จใƒณใ‚ณใƒผใƒ€ใซๆธกใ•ใ‚Œใพใ™ใ€‚ 4. ๅ‡บๅŠ›ใ€ๅ…ทไฝ“็š„ใซใฏ `[CLS]` ใƒˆใƒผใ‚ฏใƒณใฎๅ‡บๅŠ›ใ ใ‘ใŒใ€ๅคšๅฑคใƒ‘ใƒผใ‚ปใƒ—ใƒˆใƒญใƒณใƒ˜ใƒƒใƒ‰๏ผˆMLP๏ผ‰ใซๆธกใ•ใ‚Œใพใ™ใ€‚ViTใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ็›ฎ็š„ใฏๅ˜็ด”ใซๅˆ†้กžใงใ™ใ€‚ไป–ใฎๅˆ†้กžใƒ˜ใƒƒใƒ‰ใจๅŒๆง˜ใซใ€MLPใƒ˜ใƒƒใƒ‰ใฏๅ‡บๅŠ›ใ‚’ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใซๅฏพใ™ใ‚‹ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ—ใ€ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใ‚’่จˆ็ฎ—ใ—ใฆๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใ‚ฏใƒฉใ‚นใ‚’่ฆ‹ใคใ‘ใพใ™ใ€‚ ็”ปๅƒๅˆ†้กžใ‚’่ฉฆใ™ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸViTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒณใ—ใฆๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใฎๅฎŒๅ…จใช[็”ปๅƒๅˆ†้กžใ‚ฌใ‚คใƒ‰](tasks/image_classification)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„๏ผ #### CNN <Tip> ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏ็•ณใฟ่พผใฟใซใคใ„ใฆ็ฐกๅ˜ใซ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใŒใ€็”ปๅƒใฎๅฝข็Šถใจใ‚ตใ‚คใ‚บใŒใฉใฎใ‚ˆใ†ใซๅค‰ๅŒ–ใ™ใ‚‹ใ‹ใ‚’ไบ‹ๅ‰ใซ็†่งฃใ—ใฆใ„ใ‚‹ใจๅฝน็ซ‹ใกใพใ™ใ€‚็•ณใฟ่พผใฟใซๆ…ฃใ‚Œใฆใ„ใชใ„ๅ ดๅˆใฏใ€fastaiใฎๆ›ธ็ฑใ‹ใ‚‰[Convolution Neural Networks chapter](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ </Tip> [ConvNeXT](model_doc/convnext)ใฏใ€ๆ€ง่ƒฝใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใŸใ‚ใซๆ–ฐใ—ใ„ใƒขใƒ€ใƒณใชใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ่จญ่จˆใ‚’ๆŽก็”จใ—ใŸCNNใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใ™ใ€‚ใŸใ ใ—ใ€็•ณใฟ่พผใฟใฏใƒขใƒ‡ใƒซใฎไธญๆ ธใซใพใ ใ‚ใ‚Šใพใ™ใ€‚้ซ˜ใƒฌใƒ™ใƒซใ‹ใ‚‰่ฆ‹ใŸๅ ดๅˆใ€[็•ณใฟ่พผใฟ๏ผˆconvolution๏ผ‰](glossary#convolution)ใฏใ€ๅฐใ•ใช่กŒๅˆ—๏ผˆ*ใ‚ซใƒผใƒใƒซ*๏ผ‰ใŒ็”ปๅƒใฎใƒ”ใ‚ฏใ‚ปใƒซใฎๅฐใ•ใชใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใซไน—็ฎ—ใ•ใ‚Œใ‚‹ๆ“ไฝœใงใ™ใ€‚ใใ‚Œใฏ็‰นๅฎšใฎใƒ†ใ‚ฏใ‚นใƒใƒฃใ‚„็ทšใฎๆ›ฒ็އใชใฉใฎ็‰นๅพดใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚ใใฎๅพŒใ€ๆฌกใฎใƒ”ใ‚ฏใ‚ปใƒซใฎใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใซ็งปๅ‹•ใ—ใพใ™ใ€‚็•ณใฟ่พผใฟใŒ็งปๅ‹•ใ™ใ‚‹่ท้›ขใฏ*ใ‚นใƒˆใƒฉใ‚คใƒ‰*ใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/> </div> <small>[Convolution Arithmetic for Deep Learning](https://arxiv.org/abs/1603.07285) ใ‹ใ‚‰ใฎๅŸบๆœฌ็š„ใชใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚„ใ‚นใƒˆใƒฉใ‚คใƒ‰ใฎใชใ„็•ณใฟ่พผใฟใ€‚</small> ใ“ใฎๅ‡บๅŠ›ใ‚’ๅˆฅใฎ็•ณใฟ่พผใฟๅฑคใซไพ›็ตฆใ—ใ€ๅ„้€ฃ็ถšใ—ใŸๅฑคใ”ใจใซใ€ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใฏใƒ›ใƒƒใƒˆใƒ‰ใƒƒใ‚ฐใ‚„ใƒญใ‚ฑใƒƒใƒˆใฎใ‚ˆใ†ใชใ‚ˆใ‚Š่ค‡้›‘ใงๆŠฝ่ฑก็š„ใชใ‚‚ใฎใ‚’ๅญฆ็ฟ’ใ—ใพใ™ใ€‚็•ณใฟ่พผใฟๅฑคใฎ้–“ใซใฏใ€็‰นๅพดใฎๆฌกๅ…ƒใ‚’ๅ‰Šๆธ›ใ—ใ€็‰นๅพดใฎไฝ็ฝฎใฎๅค‰ๅ‹•ใซๅฏพใ—ใฆใƒขใƒ‡ใƒซใ‚’ใ‚ˆใ‚Šๅ …็‰ขใซใ™ใ‚‹ใŸใ‚ใซใƒ—ใƒผใƒชใƒณใ‚ฐๅฑคใ‚’่ฟฝๅŠ ใ™ใ‚‹ใฎใŒไธ€่ˆฌ็š„ใงใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/> </div> ConvNeXTใฏใ€ไปฅไธ‹ใฎ5ใคใฎๆ–นๆณ•ใงCNNใ‚’ใƒขใƒ€ใƒณๅŒ–ใ—ใฆใ„ใพใ™ใ€‚ 1. ๅ„ใ‚นใƒ†ใƒผใ‚ธใฎใƒ–ใƒญใƒƒใ‚ฏๆ•ฐใ‚’ๅค‰ๆ›ดใ—ใ€็”ปๅƒใ‚’ใ‚ˆใ‚Šๅคงใใชใ‚นใƒˆใƒฉใ‚คใƒ‰ใจๅฏพๅฟœใ™ใ‚‹ใ‚ซใƒผใƒใƒซใ‚ตใ‚คใ‚บใง*ใƒ‘ใƒƒใƒๅŒ–*ใ—ใพใ™ใ€‚้‡ใชใ‚‰ใชใ„ใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใฏใ€ใ“ใ‚Œใซใ‚ˆใ‚Š็”ปๅƒใ‚’ใƒ‘ใƒƒใƒใซๅˆ†ๅ‰ฒใ™ใ‚‹ViTใฎๆˆฆ็•ฅใจไผผใฆใ„ใพใ™ใ€‚ 2. 
*ใƒœใƒˆใƒซใƒใƒƒใ‚ฏ* ใƒฌใ‚คใƒคใƒผใฏใƒใƒฃใƒใƒซๆ•ฐใ‚’็ธฎๅฐใ—ใ€ใใ‚Œใ‚’ๅพฉๅ…ƒใ—ใพใ™ใ€‚1x1ใฎ็•ณใฟ่พผใฟใ‚’ๅฎŸ่กŒใ™ใ‚‹ใฎใฏ้€Ÿใใ€ๆทฑใ•ใ‚’ๅข—ใ‚„ใ™ใ“ใจใŒใงใใพใ™ใ€‚้€†ใƒœใƒˆใƒซใƒใƒƒใ‚ฏใฏ้€†ใฎใ“ใจใ‚’่กŒใ„ใ€ใƒใƒฃใƒใƒซๆ•ฐใ‚’ๆ‹กๅผตใ—ใ€ใใ‚Œใ‚’็ธฎๅฐใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏใƒกใƒขใƒชๅŠน็އใŒ้ซ˜ใ„ใงใ™ใ€‚ 3. ใƒœใƒˆใƒซใƒใƒƒใ‚ฏใƒฌใ‚คใƒคใƒผๅ†…ใฎ้€šๅธธใฎ3x3ใฎ็•ณใฟ่พผใฟๅฑคใ‚’ใ€*ๆทฑๅบฆๆ–นๅ‘ใฎ็•ณใฟ่พผใฟ*ใง็ฝฎใๆ›ใˆใพใ™ใ€‚ใ“ใ‚Œใฏๅ„ๅ…ฅๅŠ›ใƒใƒฃใƒใƒซใซๅ€‹ๅˆฅใซ็•ณใฟ่พผใฟใ‚’้ฉ็”จใ—ใ€ๆœ€ๅพŒใซใใ‚Œใ‚‰ใ‚’็ฉใฟ้‡ใญใ‚‹็•ณใฟ่พผใฟใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๆ€ง่ƒฝๅ‘ไธŠใฎใŸใ‚ใซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏๅน…ใŒๅบƒใŒใ‚Šใพใ™ใ€‚ 4. ViTใฏใ‚ฐใƒญใƒผใƒใƒซๅ—ๅฎน้‡Žใ‚’ๆŒใฃใฆใ„ใ‚‹ใŸใ‚ใ€ใใฎๆณจๆ„ใƒกใ‚ซใƒ‹ใ‚บใƒ ใฎใŠใ‹ใ’ใงไธ€ๅบฆใซ็”ปๅƒใฎๅคšใใ‚’่ฆ‹ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ConvNeXTใฏใ“ใฎๅŠนๆžœใ‚’ๅ†็พใ—ใ‚ˆใ†ใจใ—ใ€ใ‚ซใƒผใƒใƒซใ‚ตใ‚คใ‚บใ‚’7x7ใซๅข—ใ‚„ใ—ใพใ™ใ€‚ 5. ConvNeXTใฏใพใŸใ€Transformerใƒขใƒ‡ใƒซใ‚’ๆจกๅ€ฃใ™ใ‚‹ใ„ใใคใ‹ใฎใƒฌใ‚คใƒคใƒผใƒ‡ใ‚ถใ‚คใƒณๅค‰ๆ›ดใ‚’่กŒใฃใฆใ„ใพใ™ใ€‚ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใจๆญฃ่ฆๅŒ–ใƒฌใ‚คใƒคใƒผใŒๅฐ‘ใชใใ€ๆดปๆ€งๅŒ–้–ขๆ•ฐใฏReLUใฎไปฃใ‚ใ‚ŠใซGELUใซๅˆ‡ใ‚Šๆ›ฟใˆใ€BatchNormใฎไปฃใ‚ใ‚ŠใซLayerNormใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ ็•ณใฟ่พผใฟใƒ–ใƒญใƒƒใ‚ฏใ‹ใ‚‰ใฎๅ‡บๅŠ›ใฏใ€ๅˆ†้กžใƒ˜ใƒƒใƒ‰ใซๆธกใ•ใ‚Œใ€ๅ‡บๅŠ›ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ—ใ€ๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒฉใƒ™ใƒซใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใŒ่จˆ็ฎ—ใ•ใ‚Œใพใ™ใ€‚ ### Object detection [DETR](model_doc/detr)ใ€*DEtection TRansformer*ใ€ใฏCNNใจTransformerใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใ‚’็ต„ใฟๅˆใ‚ใ›ใŸใ‚จใƒณใƒ‰ใƒ„ใƒผใ‚จใƒณใƒ‰ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใƒขใƒ‡ใƒซใงใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/> </div> 1. ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸCNN *ใƒใƒƒใ‚ฏใƒœใƒผใƒณ* ใฏใ€ใƒ”ใ‚ฏใ‚ปใƒซๅ€คใง่กจใ•ใ‚Œใ‚‹็”ปๅƒใ‚’ๅ—ใ‘ๅ–ใ‚Šใ€ใใ‚ŒใฎไฝŽ่งฃๅƒๅบฆใฎ็‰นๅพดใƒžใƒƒใƒ—ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚็‰นๅพดใƒžใƒƒใƒ—ใซใฏๆฌกๅ…ƒๅ‰Šๆธ›ใฎใŸใ‚ใซ1x1ใฎ็•ณใฟ่พผใฟใŒ้ฉ็”จใ•ใ‚Œใ€้ซ˜ใƒฌใƒ™ใƒซใฎ็”ปๅƒ่กจ็พใ‚’ๆŒใคๆ–ฐใ—ใ„็‰นๅพดใƒžใƒƒใƒ—ใŒไฝœๆˆใ•ใ‚Œใพใ™ใ€‚Transformerใฏ้€ฃ็ถšใƒขใƒ‡ใƒซใงใ‚ใ‚‹ใŸใ‚ใ€็‰นๅพดใƒžใƒƒใƒ—ใฏ็‰นๅพดใƒ™ใ‚ฏใƒˆใƒซใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅนณๅฆๅŒ–ใ•ใ‚Œใ€ไฝ็ฝฎใ‚จใƒณใƒ™ใƒ‡ใ‚ฃใƒณใ‚ฐใจ็ต„ใฟๅˆใ‚ใ›ใ‚‰ใ‚Œใพใ™ใ€‚ 2. ็‰นๅพดใƒ™ใ‚ฏใƒˆใƒซใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใซๆธกใ•ใ‚Œใ€ใใฎๆณจๆ„ใƒฌใ‚คใƒคใƒผใ‚’ไฝฟ็”จใ—ใฆ็”ปๅƒ่กจ็พใ‚’ๅญฆ็ฟ’ใ—ใพใ™ใ€‚ๆฌกใซใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใฏใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ*ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚ฏใ‚จใƒช*ใจ็ต„ใฟๅˆใ‚ใ•ใ‚Œใพใ™ใ€‚ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚ฏใ‚จใƒชใฏใ€็”ปๅƒใฎ็•ฐใชใ‚‹้ ˜ๅŸŸใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ‚‹ๅญฆ็ฟ’ๅŸ‹ใ‚่พผใฟใงใ€ๅ„ๆณจๆ„ใƒฌใ‚คใƒคใƒผใ‚’้€ฒ่กŒใ™ใ‚‹ใซใคใ‚Œใฆๆ›ดๆ–ฐใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใฏใ€ๅ„ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚ฏใ‚จใƒชใซๅฏพใ—ใฆใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚นใฎๅบงๆจ™ใจใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใ‚’ไบˆๆธฌใ™ใ‚‹ใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใซๆธกใ•ใ‚Œใพใ™ใ€‚ใพใŸใฏใ€ๅญ˜ๅœจใ—ใชใ„ๅ ดๅˆใฏ `no object` ใŒๆธกใ•ใ‚Œใพใ™ใ€‚ DETRใฏๅ„ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚ฏใ‚จใƒชใ‚’ไธฆ่กŒใ—ใฆใƒ‡ใ‚ณใƒผใƒ‰ใ—ใฆใ€*N*ใฎๆœ€็ต‚็š„ใชไบˆๆธฌ๏ผˆ*N*ใฏใ‚ฏใ‚จใƒชใฎๆ•ฐ๏ผ‰ใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚ๅ…ธๅž‹็š„ใช่‡ชๅทฑๅ›žๅธฐใƒขใƒ‡ใƒซใŒ1ใคใฎ่ฆ็ด ใ‚’1ๅ›žใšใคไบˆๆธฌใ™ใ‚‹ใฎใจใฏ็•ฐใชใ‚Šใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใฏใ‚ปใƒƒใƒˆไบˆๆธฌใ‚ฟใ‚นใ‚ฏ๏ผˆ`ใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚น`ใ€`ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซ`๏ผ‰ใงใ‚ใ‚Šใ€1ๅ›žใฎใƒ‘ใ‚นใง*N*ใฎไบˆๆธฌใ‚’่กŒใ„ใพใ™ใ€‚ 3. 
่จ“็ทดไธญใ€DETRใฏ*ไบŒ้ƒจใƒžใƒƒใƒใƒณใ‚ฐๆๅคฑ*ใ‚’ไฝฟ็”จใ—ใฆใ€ๅ›บๅฎšใ•ใ‚ŒใŸๆ•ฐใฎไบˆๆธฌใจๅ›บๅฎšใ•ใ‚ŒใŸไธ€้€ฃใฎๆญฃ่งฃใƒฉใƒ™ใƒซใ‚’ๆฏ”่ผƒใ—ใพใ™ใ€‚ *N*ใฎใƒฉใƒ™ใƒซใ‚ปใƒƒใƒˆใซๆญฃ่งฃใƒฉใƒ™ใƒซใŒๅฐ‘ใชใ„ๅ ดๅˆใ€ `no object` ใ‚ฏใƒฉใ‚นใงใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๆๅคฑ้–ขๆ•ฐใฏใ€DETRใซไบˆๆธฌใจๆญฃ่งฃใƒฉใƒ™ใƒซใจใฎ้–“ใง1ๅฏพ1ใฎๅ‰ฒใ‚Šๅฝ“ใฆใ‚’่ฆ‹ใคใ‘ใ‚‹ใ‚ˆใ†ใซไฟƒใ—ใพใ™ใ€‚ใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚นใพใŸใฏใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใฎใฉใกใ‚‰ใ‹ใŒๆญฃใ—ใใชใ„ๅ ดๅˆใ€ๆๅคฑใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ๅŒๆง˜ใซใ€DETRใŒๅญ˜ๅœจใ—ใชใ„ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ไบˆๆธฌใ—ใŸๅ ดๅˆใ€็ฝฐ้‡‘ใŒ็ง‘ใ›ใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€DETRใฏ1ใคใฎ้žๅธธใซ้ก•่‘—ใชใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ‚‹ใฎใงใฏใชใใ€็”ปๅƒๅ†…ใฎไป–ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’่ฆ‹ใคใ‘ใ‚‹ใ‚ˆใ†ใซไฟƒใ•ใ‚Œใพใ™ใ€‚ DETRใฎไธŠใซใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใฆใ€ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใจใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚นใฎๅบงๆจ™ใ‚’่ฆ‹ใคใ‘ใพใ™ใ€‚ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใƒ˜ใƒƒใƒ‰ใซใฏ2ใคใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใŒใ‚ใ‚Šใพใ™๏ผšใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใ‚’ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใฎใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ็ทšๅฝขๅฑคใ€ใŠใ‚ˆใณใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚นใ‚’ไบˆๆธฌใ™ใ‚‹ใŸใ‚ใฎMLPใงใ™ใ€‚ ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใ‚’่ฉฆใ™ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸDETROใฎๅฎŒๅ…จใช[ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใ‚ฌใ‚คใƒ‰](tasks/object_detection)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใ€DETROใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใจๆŽจ่ซ–ๆ–นๆณ•ใ‚’ๅญฆใ‚“ใงใใ ใ•ใ„๏ผ ### Image segmentation [Mask2Former](model_doc/mask2former)ใฏใ€ใ™ในใฆใฎ็จฎ้กžใฎ็”ปๅƒใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใฎใƒฆใƒ‹ใƒใƒผใ‚ตใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใ™ใ€‚ๅพ“ๆฅใฎใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใƒขใƒ‡ใƒซใฏ้€šๅธธใ€ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ€ใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ€ใพใŸใฏใƒ‘ใƒŽใƒ—ใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎ็‰นๅฎšใฎใ‚ตใƒ–ใ‚ฟใ‚นใ‚ฏใซๅˆใ‚ใ›ใฆ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚Mask2Formerใฏใ€ใใ‚Œใ‚‰ใฎใ‚ฟใ‚นใ‚ฏใฎใใ‚Œใžใ‚Œใ‚’*ใƒžใ‚นใ‚ฏๅˆ†้กž*ใฎๅ•้กŒใจใ—ใฆๆ‰ใˆใพใ™ใ€‚ใƒžใ‚นใ‚ฏๅˆ†้กžใฏใƒ”ใ‚ฏใ‚ปใƒซใ‚’*N*ใฎใ‚ปใ‚ฐใƒกใƒณใƒˆใซใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ—ใ€ไธŽใˆใ‚‰ใ‚ŒใŸ็”ปๅƒใซๅฏพใ—ใฆ*N*ใฎใƒžใ‚นใ‚ฏใจใใ‚Œใซๅฏพๅฟœใ™ใ‚‹ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใ‚’ไบˆๆธฌใ—ใพใ™ใ€‚ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€Mask2Formerใฎๅ‹•ไฝœๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใ€ๆœ€ๅพŒใซSegFormerใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’่ฉฆใ™ใ“ใจใŒใงใใพใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> Mask2Formerใฎไธป่ฆใชใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใฏๆฌกใฎ3ใคใงใ™ใ€‚ 1. [Swin](model_doc/swin)ใƒใƒƒใ‚ฏใƒœใƒผใƒณใฏ็”ปๅƒใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€3ใคใฎ้€ฃ็ถšใ™ใ‚‹3x3ใฎ็•ณใฟ่พผใฟใ‹ใ‚‰ไฝŽ่งฃๅƒๅบฆใฎ็”ปๅƒ็‰นๅพดใƒžใƒƒใƒ—ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ 2. ็‰นๅพดใƒžใƒƒใƒ—ใฏ*ใƒ”ใ‚ฏใ‚ปใƒซใƒ‡ใ‚ณใƒผใƒ€ใƒผ*ใซๆธกใ•ใ‚Œใ€ไฝŽ่งฃๅƒๅบฆใฎ็‰นๅพดใ‚’้ซ˜่งฃๅƒๅบฆใฎใƒ”ใ‚ฏใ‚ปใƒซๅŸ‹ใ‚่พผใฟใซๅพใ€…ใซใ‚ขใƒƒใƒ—ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚ใƒ”ใ‚ฏใ‚ปใƒซใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏๅฎŸ้š›ใซใฏ่งฃๅƒๅบฆ1/32ใ€1/16ใ€ใŠใ‚ˆใณ1/8ใฎใ‚ชใƒชใ‚ธใƒŠใƒซ็”ปๅƒใฎใƒžใƒซใƒใ‚นใ‚ฑใƒผใƒซ็‰นๅพด๏ผˆไฝŽ่งฃๅƒๅบฆใจ้ซ˜่งฃๅƒๅบฆใฎ็‰นๅพดใ‚’ๅซใ‚€๏ผ‰ใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ 3. 
ใ“ใ‚Œใ‚‰ใฎ็•ฐใชใ‚‹ใ‚นใ‚ฑใƒผใƒซใฎ็‰นๅพดใƒžใƒƒใƒ—ใฎใใ‚Œใžใ‚Œใฏใ€้ซ˜่งฃๅƒๅบฆใฎ็‰นๅพดใ‹ใ‚‰ๅฐใ•ใ„ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ใ‚ญใƒฃใƒ—ใƒใƒฃใ™ใ‚‹ใŸใ‚ใซ1ๅ›žใšใคใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใซๆธกใ•ใ‚Œใพใ™ใ€‚Mask2Formerใฎ่ฆ็‚นใฏใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ*ใƒžใ‚นใ‚ฏใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ*ใƒกใ‚ซใƒ‹ใ‚บใƒ ใงใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใŒ็”ปๅƒๅ…จไฝ“ใซๆณจๆ„ใ‚’ๅ‘ใ‘ใ‚‹ใ“ใจใŒใงใใ‚‹ใฎใซๅฏพใ—ใ€ใƒžใ‚นใ‚ฏใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใฏ็”ปๅƒใฎ็‰นๅฎšใฎ้ ˜ๅŸŸใซใฎใฟ็„ฆ็‚นใ‚’ๅฝ“ใฆใพใ™ใ€‚ใ“ใ‚Œใฏ้€Ÿใใ€ใƒญใƒผใ‚ซใƒซใช็”ปๅƒ็‰นๅพดใ ใ‘ใงใ‚‚ใƒขใƒ‡ใƒซใŒๅญฆ็ฟ’ใงใใ‚‹ใŸใ‚ใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅ‘ไธŠใ—ใพใ™ใ€‚ 4. [DETR](tasks_explained#object-detection)ใจๅŒๆง˜ใซใ€Mask2Formerใ‚‚ๅญฆ็ฟ’ใ•ใ‚ŒใŸใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚ฏใ‚จใƒชใ‚’ไฝฟ็”จใ—ใ€็”ปๅƒใฎ็‰นๅพดใจ็ต„ใฟๅˆใ‚ใ›ใฆใ‚ปใƒƒใƒˆใฎไบˆๆธฌ๏ผˆ`ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซ`ใ€`ใƒžใ‚นใ‚ฏไบˆๆธฌ`๏ผ‰ใ‚’่กŒใ„ใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใฏ็ทšๅฝขๅฑคใซๆธกใ•ใ‚Œใ€ใ‚ฏใƒฉใ‚นใƒฉใƒ™ใƒซใซๅฏพใ™ใ‚‹ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ•ใ‚Œใพใ™ใ€‚ใƒญใ‚ธใƒƒใƒˆใจๆญฃ่งฃใƒฉใƒ™ใƒซ้–“ใฎใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใŒๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใ‚‚ใฎใ‚’่ฆ‹ใคใ‘ใพใ™ใ€‚ ใƒžใ‚นใ‚ฏไบˆๆธฌใฏใ€ใƒ”ใ‚ฏใ‚ปใƒซๅŸ‹ใ‚่พผใฟใจๆœ€็ต‚็š„ใชใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใ‚’็ต„ใฟๅˆใ‚ใ›ใฆ็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚ใ‚ทใ‚ฐใƒขใ‚คใƒ‰ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผใ‚„ใƒ€ใ‚คใ‚นๆๅคฑใŒใƒญใ‚ธใƒƒใƒˆใจๆญฃ่งฃใƒžใ‚นใ‚ฏใฎ้–“ใงๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒžใ‚นใ‚ฏใ‚’่ฆ‹ใคใ‘ใพใ™ใ€‚ ใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚ฟใ‚นใ‚ฏใซๅ–ใ‚Š็ต„ใ‚€ๆบ–ๅ‚™ใŒใงใใพใ—ใŸใ‹๏ผŸSegFormerใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•ใจๆŽจ่ซ–ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[็”ปๅƒใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚ฌใ‚คใƒ‰](tasks/semantic_segmentation)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ ### Depth estimation [GLPN](model_doc/glpn)ใ€*Global-Local Path Network*ใ€ใฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใพใŸใฏๆทฑๅบฆๆŽจๅฎšใชใฉใฎๅฏ†ใชไบˆๆธฌใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใฆใ„ใพใ™ใ€‚[SegFormer](model_doc/segformer)ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‚’่ปฝ้‡ใƒ‡ใ‚ณใƒผใƒ€ใƒผใจ็ต„ใฟๅˆใ‚ใ›ใŸTransformerใƒ™ใƒผใ‚นใฎๆทฑๅบฆๆŽจๅฎšใƒขใƒ‡ใƒซใงใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/> </div> 1. ViTใฎใ‚ˆใ†ใซใ€็”ปๅƒใฏใƒ‘ใƒƒใƒใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅˆ†ๅ‰ฒใ•ใ‚Œใพใ™ใŒใ€ใ“ใ‚Œใ‚‰ใฎ็”ปๅƒใƒ‘ใƒƒใƒใฏๅฐใ•ใ„ใงใ™ใ€‚ใ“ใ‚Œใฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚„ๆทฑๅบฆๆŽจๅฎšใชใฉใฎๅฏ†ใชไบˆๆธฌใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใฆใ„ใพใ™ใ€‚็”ปๅƒใƒ‘ใƒƒใƒใฏใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใซๅค‰ๆ›ใ•ใ‚Œใพใ™๏ผˆใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใฎไฝœๆˆๆ–นๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[็”ปๅƒๅˆ†้กž](#image-classification)ใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใซๆธกใ•ใ‚Œใพใ™ใ€‚ 2. 
ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฏใƒ‘ใƒƒใƒๅŸ‹ใ‚่พผใฟใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€่ค‡ๆ•ฐใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ–ใƒญใƒƒใ‚ฏใ‚’้€šใ˜ใฆใใ‚Œใ‚‰ใ‚’ๆธกใ—ใพใ™ใ€‚ๅ„ใƒ–ใƒญใƒƒใ‚ฏใซใฏใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใจMix-FFNใƒฌใ‚คใƒคใƒผใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ๅพŒ่€…ใฎๅฝนๅ‰ฒใฏไฝ็ฝฎๆƒ…ๅ ฑใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ๅ„ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ–ใƒญใƒƒใ‚ฏใฎๆœ€ๅพŒใซใฏใ€้šŽๅฑค็š„่กจ็พใ‚’ไฝœๆˆใ™ใ‚‹ใŸใ‚ใฎ*ใƒ‘ใƒƒใƒใƒžใƒผใ‚ธใƒณใ‚ฐ*ใƒฌใ‚คใƒคใƒผใŒใ‚ใ‚Šใพใ™ใ€‚้šฃๆŽฅใ™ใ‚‹ใƒ‘ใƒƒใƒใฎใ‚ฐใƒซใƒผใƒ—ใ”ใจใฎ็‰นๅพดใŒ้€ฃ็ตใ•ใ‚Œใ€้€ฃ็ตใ•ใ‚ŒใŸ็‰นๅพดใซๅฏพใ—ใฆ็ทšๅฝขๅฑคใŒ้ฉ็”จใ•ใ‚Œใ€ใƒ‘ใƒƒใƒใฎๆ•ฐใ‚’1/4ใฎ่งฃๅƒๅบฆใซๅ‰Šๆธ›ใ—ใพใ™ใ€‚ใ“ใ‚ŒใŒๆฌกใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ–ใƒญใƒƒใ‚ฏใธใฎๅ…ฅๅŠ›ใจใชใ‚Šใ€ใ“ใ“ใงใฏใ“ใฎใƒ—ใƒญใ‚ปใ‚นๅ…จไฝ“ใŒ็นฐใ‚Š่ฟ”ใ•ใ‚Œใ€ๅ…ƒใฎ็”ปๅƒใฎ1/8ใ€1/16ใ€ใŠใ‚ˆใณ1/32ใฎ่งฃๅƒๅบฆใฎ็”ปๅƒ็‰นๅพดใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ 3. ่ปฝ้‡ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‹ใ‚‰ใฎๆœ€ๅพŒใฎ็‰นๅพดใƒžใƒƒใƒ—๏ผˆ1/32ใ‚นใ‚ฑใƒผใƒซ๏ผ‰ใ‚’ๅ—ใ‘ๅ–ใ‚Šใ€ใใ‚Œใ‚’1/16ใ‚นใ‚ฑใƒผใƒซใซใ‚ขใƒƒใƒ—ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚ใใฎๅพŒใ€็‰นๅพดใฏๅ„็‰นๅพดใซๅฏพใ™ใ‚‹ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒžใƒƒใƒ—ใ‹ใ‚‰ใƒญใƒผใ‚ซใƒซใจใ‚ฐใƒญใƒผใƒใƒซใช็‰นๅพดใ‚’้ธๆŠžใ—ใฆ็ต„ใฟๅˆใ‚ใ›ใ‚‹*ใ‚ปใƒฌใ‚ฏใƒ†ใ‚ฃใƒ–ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใƒ•ใƒฅใƒผใ‚ธใƒงใƒณ๏ผˆSFF๏ผ‰*ใƒขใ‚ธใƒฅใƒผใƒซใซๆธกใ•ใ‚Œใ€1/8ใซใ‚ขใƒƒใƒ—ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ใ“ใฎใƒ—ใƒญใ‚ปใ‚นใฏใƒ‡ใ‚ณใƒผใƒ‰ใ•ใ‚ŒใŸ็‰นๅพดใŒๅ…ƒใฎ็”ปๅƒใจๅŒใ˜ใ‚ตใ‚คใ‚บใซใชใ‚‹ใพใง็นฐใ‚Š่ฟ”ใ•ใ‚Œใพใ™ใ€‚ 4. ใƒ‡ใ‚ณใƒผใƒ‰ใ•ใ‚ŒใŸ็‰นๅพดใฏใ€ๆœ€็ต‚็š„ใชไบˆๆธฌใ‚’่กŒใ†ใŸใ‚ใซใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ€ๆทฑๅบฆๆŽจๅฎšใ€ใพใŸใฏใใฎไป–ใฎๅฏ†ใชไบˆๆธฌใ‚ฟใ‚นใ‚ฏใซไพ›็ตฆใ•ใ‚Œใพใ™ใ€‚ใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎๅ ดๅˆใ€็‰นๅพดใฏใ‚ฏใƒฉใ‚นๆ•ฐใซๅฏพใ™ใ‚‹ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ•ใ‚Œใ€ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใ‚’ไฝฟ็”จใ—ใฆๆœ€้ฉๅŒ–ใ•ใ‚Œใพใ™ใ€‚ๆทฑๅบฆๆŽจๅฎšใฎๅ ดๅˆใ€็‰นๅพดใฏๆทฑๅบฆใƒžใƒƒใƒ—ใซๅค‰ๆ›ใ•ใ‚Œใ€ๅนณๅ‡็ตถๅฏพ่ชคๅทฎ๏ผˆMAE๏ผ‰ใพใŸใฏๅนณๅ‡ไบŒไน—่ชคๅทฎ๏ผˆMSE๏ผ‰ๆๅคฑใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ## Natural language processing Transformerใฏๆœ€ๅˆใซๆฉŸๆขฐ็ฟป่จณใฎใŸใ‚ใซ่จญ่จˆใ•ใ‚Œใ€ใใ‚Œไปฅ้™ใ€ใปใจใ‚“ใฉใฎNLPใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใจใชใฃใฆใ„ใพใ™ใ€‚ไธ€้ƒจใฎใ‚ฟใ‚นใ‚ฏใฏTransformerใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผๆง‹้€ ใซ้ฉใ—ใฆใŠใ‚Šใ€ไป–ใฎใ‚ฟใ‚นใ‚ฏใฏใƒ‡ใ‚ณใƒผใƒ€ใƒผใซ้ฉใ—ใฆใ„ใพใ™ใ€‚ใ•ใ‚‰ใซใ€ไธ€้ƒจใฎใ‚ฟใ‚นใ‚ฏใงใฏTransformerใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผ-ใƒ‡ใ‚ณใƒผใƒ€ใƒผๆง‹้€ ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ### Text classification [BERT](model_doc/bert)ใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎใฟใฎใƒขใƒ‡ใƒซใงใ‚ใ‚Šใ€ใƒ†ใ‚ญใ‚นใƒˆใฎ่ฑŠใ‹ใช่กจ็พใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใŸใ‚ใซไธกๅดใฎๅ˜่ชžใซๆณจๆ„ใ‚’ๆ‰•ใ†ใ“ใจใงใ€ๆทฑใ„ๅŒๆ–นๅ‘ๆ€งใ‚’ๅŠนๆžœ็š„ใซๅฎŸ่ฃ…ใ—ใŸๆœ€ๅˆใฎใƒขใƒ‡ใƒซใงใ™ใ€‚ 1. BERTใฏ[WordPiece](tokenizer_summary#wordpiece)ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆใฎใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ๅ˜ไธ€ใฎๆ–‡ใจๆ–‡ใฎใƒšใ‚ขใ‚’ๅŒบๅˆฅใ™ใ‚‹ใŸใ‚ใซใ€็‰นๅˆฅใช `[SEP]` ใƒˆใƒผใ‚ฏใƒณใŒ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ `[CLS]` ใƒˆใƒผใ‚ฏใƒณใฏใ™ในใฆใฎใƒ†ใ‚ญใ‚นใƒˆใ‚ทใƒผใ‚ฑใƒณใ‚นใฎๅ…ˆ้ ญใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™ใ€‚ `[CLS]` ใƒˆใƒผใ‚ฏใƒณใจใจใ‚‚ใซๆœ€็ต‚ๅ‡บๅŠ›ใฏใ€ๅˆ†้กžใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใฎๅ…ฅๅŠ›ใจใ—ใฆไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚BERTใฏใพใŸใ€ใƒˆใƒผใ‚ฏใƒณใŒๆ–‡ใฎใƒšใ‚ขใฎๆœ€ๅˆใพใŸใฏ2็•ช็›ฎใฎๆ–‡ใซๅฑžใ™ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็คบใ™ใ‚ปใ‚ฐใƒกใƒณใƒˆๅŸ‹ใ‚่พผใฟใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ 2. 
BERTใฏใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใง2ใคใฎ็›ฎๆจ™ใ‚’ไฝฟ็”จใ—ใพใ™๏ผšใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใจๆฌกใฎๆ–‡ใฎไบˆๆธฌใงใ™ใ€‚ใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใงใฏใ€ๅ…ฅๅŠ›ใƒˆใƒผใ‚ฏใƒณใฎไธ€้ƒจใŒใƒฉใƒณใƒ€ใƒ ใซใƒžใ‚นใ‚ฏใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใฏใ“ใ‚Œใ‚‰ใ‚’ไบˆๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใŒๅ…จใฆใฎๅ˜่ชžใ‚’่ฆ‹ใฆใ€Œๆฌกใฎๅ˜่ชžใ€ใ‚’ไบˆๆธฌใ™ใ‚‹ใ“ใจใŒใงใใ‚‹ๅŒๆ–นๅ‘ๆ€งใฎๅ•้กŒใŒ่งฃๆฑบใ•ใ‚Œใพใ™ใ€‚ไบˆๆธฌใ•ใ‚ŒใŸใƒžใ‚นใ‚ฏใƒˆใƒผใ‚ฏใƒณใฎๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใฏใ€ใ‚ฝใƒ•ใƒˆใƒžใƒƒใ‚ฏใ‚นใ‚’ไฝฟ็”จใ—ใŸๅ˜่ชžใฎใƒžใ‚นใ‚ฏใ‚’ไบˆๆธฌใ™ใ‚‹ใŸใ‚ใฎใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใซๆธกใ•ใ‚Œใพใ™ใ€‚ 2็•ช็›ฎใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏๆฌกใฎๆ–‡ใฎไบˆๆธฌใงใ™ใ€‚ใƒขใƒ‡ใƒซใฏๆ–‡AใฎๅพŒใซๆ–‡BใŒ็ถšใใ‹ใฉใ†ใ‹ใ‚’ไบˆๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๅŠๅˆ†ใฎๅ ดๅˆใ€ๆ–‡Bใฏๆฌกใฎๆ–‡ใงใ‚ใ‚Šใ€ๆฎ‹ใ‚ŠใฎๅŠๅˆ†ใฎๅ ดๅˆใ€ๆ–‡Bใฏใƒฉใƒณใƒ€ใƒ ใชๆ–‡ใงใ™ใ€‚ไบˆๆธฌ๏ผˆๆฌกใฎๆ–‡ใ‹ใฉใ†ใ‹๏ผ‰ใฏใ€2ใคใฎใ‚ฏใƒฉใ‚น๏ผˆ`IsNext`ใŠใ‚ˆใณ`NotNext`๏ผ‰ใซๅฏพใ™ใ‚‹ใ‚ฝใƒ•ใƒˆใƒžใƒƒใ‚ฏใ‚นใ‚’ๆŒใคใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใซๆธกใ•ใ‚Œใพใ™ใ€‚ 3. ๅ…ฅๅŠ›ๅŸ‹ใ‚่พผใฟใฏใ€ๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใŸใ‚ใซ่ค‡ๆ•ฐใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใ‚’ไป‹ใ—ใฆๆธกใ•ใ‚Œใพใ™ใ€‚ ไบ‹ๅ‰่จ“็ทดๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใซไฝฟ็”จใ™ใ‚‹ใซใฏใ€ใƒ™ใƒผใ‚นใฎBERTใƒขใƒ‡ใƒซใฎไธŠใซใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ใใ‚Œใ‚‰ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ็ทšๅฝขๅฑคใงใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใฏใ€ใƒญใ‚ธใƒƒใƒˆใจใ‚ฟใƒผใ‚ฒใƒƒใƒˆ้–“ใงๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒฉใƒ™ใƒซใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซ่จˆ็ฎ—ใ•ใ‚Œใพใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใ‚’่ฉฆใ—ใฆใฟใ‚‹ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸDistilBERTใ‚’ๅพฎ่ชฟๆ•ดใ—ใ€ๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใ‚ฌใ‚คใƒ‰](tasks/sequence_classification)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ ### Token classification BERTใ‚’ๅๅ‰ใ‚จใƒณใƒ†ใ‚ฃใƒ†ใ‚ฃ่ช่ญ˜๏ผˆNER๏ผ‰ใชใฉใฎใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใ™ใ‚‹ใซใฏใ€ใƒ™ใƒผใ‚นใฎBERTใƒขใƒ‡ใƒซใฎไธŠใซใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ใใ‚Œใ‚‰ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ็ทšๅฝขๅฑคใงใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใฏใ€ใƒญใ‚ธใƒƒใƒˆใจๅ„ใƒˆใƒผใ‚ฏใƒณ้–“ใงๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒฉใƒ™ใƒซใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซ่จˆ็ฎ—ใ•ใ‚Œใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใ‚’่ฉฆใ—ใฆใฟใ‚‹ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸDistilBERTใ‚’ๅพฎ่ชฟๆ•ดใ—ใ€ๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[ใƒˆใƒผใ‚ฏใƒณๅˆ†้กžใ‚ฌใ‚คใƒ‰](tasks/token_classification)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ ### Question answering BERTใ‚’่ณชๅ•ๅฟœ็ญ”ใซไฝฟ็”จใ™ใ‚‹ใซใฏใ€ใƒ™ใƒผใ‚นใฎBERTใƒขใƒ‡ใƒซใฎไธŠใซใ‚นใƒ‘ใƒณๅˆ†้กžใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใฎ็ทšๅฝขๅฑคใฏๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ๅ›ž็ญ”ใซๅฏพๅฟœใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใฎใ€Œใ‚นใƒ‘ใƒณใ€้–‹ๅง‹ใจ็ต‚ไบ†ใฎใƒญใ‚ธใƒƒใƒˆใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใฏใ€ใƒญใ‚ธใƒƒใƒˆใจใƒฉใƒ™ใƒซไฝ็ฝฎใจใฎ้–“ใงๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒ†ใ‚ญใ‚นใƒˆใ‚นใƒ‘ใƒณใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซ่จˆ็ฎ—ใ•ใ‚Œใพใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใ‚’่ฉฆใ—ใฆใฟใ‚‹ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸDistilBERTใ‚’ๅพฎ่ชฟๆ•ดใ—ใ€ๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[่ณชๅ•ๅฟœ็ญ”ใ‚ฌใ‚คใƒ‰](tasks/question_answering)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ <Tip> ๐Ÿ’ก ๆณจๆ„ใ—ใฆใใ 
ใ•ใ„ใ€‚ไธ€ๅบฆไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒๅฎŒไบ†ใ—ใŸBERTใ‚’ไฝฟ็”จใ—ใฆใ•ใพใ–ใพใชใ‚ฟใ‚นใ‚ฏใซ็ฐกๅ˜ใซ้ฉ็”จใงใใ‚‹ใ“ใจใซๆณจ็›ฎใ—ใฆใใ ใ•ใ„ใ€‚ๅฟ…่ฆใชใฎใฏใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใƒขใƒ‡ใƒซใซ็‰นๅฎšใฎใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใฆใ€้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๆ‰€ๆœ›ใฎๅ‡บๅŠ›ใซๅค‰ๆ›ใ™ใ‚‹ใ“ใจใ ใ‘ใงใ™๏ผ </Tip> ### Text generation [GPT-2](model_doc/gpt2)ใฏๅคง้‡ใฎใƒ†ใ‚ญใ‚นใƒˆใงไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒ‡ใ‚ณใƒผใƒ€ใƒผๅฐ‚็”จใƒขใƒ‡ใƒซใงใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ไธŽใˆใ‚‹ใจ่ชฌๅพ—ๅŠ›ใฎใ‚ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ—ใ€ๆ˜Ž็คบ็š„ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใชใ„ใซใ‚‚ใ‹ใ‹ใ‚ใ‚‰ใšใ€่ณชๅ•ๅฟœ็ญ”ใชใฉใฎไป–ใฎNLPใ‚ฟใ‚นใ‚ฏใ‚‚ๅฎŒไบ†ใงใใพใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2ใฏ[ใƒใ‚คใƒˆใƒšใ‚ขใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐ๏ผˆBPE๏ผ‰](tokenizer_summary#bytepair-encoding-bpe)ใ‚’ไฝฟ็”จใ—ใฆๅ˜่ชžใ‚’ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ—ใ€ใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ไฝ็ฝฎใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใŒใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใซ่ฟฝๅŠ ใ•ใ‚Œใ€ๅ„ใƒˆใƒผใ‚ฏใƒณใฎไฝ็ฝฎใ‚’็คบใ—ใพใ™ใ€‚ๅ…ฅๅŠ›ๅŸ‹ใ‚่พผใฟใฏ่ค‡ๆ•ฐใฎใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒ–ใƒญใƒƒใ‚ฏใ‚’ไป‹ใ—ใฆๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใŸใ‚ใซๆธกใ•ใ‚Œใพใ™ใ€‚ๅ„ใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒ–ใƒญใƒƒใ‚ฏๅ†…ใงใ€GPT-2ใฏใ€Œใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่‡ชๅทฑๆณจๆ„ใ€ใƒฌใ‚คใƒคใƒผใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€GPT-2ใŒๆœชๆฅใฎใƒˆใƒผใ‚ฏใƒณใซๆณจๆ„ใ‚’ๆ‰•ใ†ใ“ใจใฏใงใใชใ„ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚GPT-2ใฏๅทฆๅดใฎใƒˆใƒผใ‚ฏใƒณใซใฎใฟๆณจๆ„ใ‚’ๆ‰•ใ†ใ“ใจใŒ่จฑๅฏใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚ŒใฏBERTใฎ[`mask`]ใƒˆใƒผใ‚ฏใƒณใจใฏ็•ฐใชใ‚Šใ€ใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่‡ชๅทฑๆณจๆ„ใงใฏๆœชๆฅใฎใƒˆใƒผใ‚ฏใƒณใซๅฏพใ—ใฆใ‚นใ‚ณใ‚ขใ‚’`0`ใซ่จญๅฎšใ™ใ‚‹ใŸใ‚ใฎๆณจๆ„ใƒžใ‚นใ‚ฏใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ 2. ใƒ‡ใ‚ณใƒผใƒ€ใƒผใ‹ใ‚‰ใฎๅ‡บๅŠ›ใฏใ€่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ˜ใƒƒใƒ‰ใซๆธกใ•ใ‚Œใ€ๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ็ทšๅฝขๅค‰ๆ›ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ใƒฉใƒ™ใƒซใฏใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎๆฌกใฎใƒˆใƒผใ‚ฏใƒณใงใ‚ใ‚Šใ€ใ“ใ‚Œใฏใƒญใ‚ธใƒƒใƒˆใ‚’ๅณใซ1ใคใšใ‚‰ใ—ใฆ็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใฏใ€ใ‚ทใƒ•ใƒˆใ•ใ‚ŒใŸใƒญใ‚ธใƒƒใƒˆใจใƒฉใƒ™ใƒซ้–“ใง่จˆ็ฎ—ใ•ใ‚Œใ€ๆฌกใซๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใ€‚ GPT-2ใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ็›ฎๆจ™ใฏๅฎŒๅ…จใซ[ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](glossary#causal-language-modeling)ใซๅŸบใฅใ„ใฆใŠใ‚Šใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€GPT-2ใฏใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใ‚’ๅซใ‚€ใ‚ฟใ‚นใ‚ฏใง็‰นใซๅ„ชใ‚ŒใŸๆ€ง่ƒฝใ‚’็™บๆฎใ—ใพใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใ‚’่ฉฆใ—ใฆใฟใ‚‹ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸDistilGPT-2ใ‚’ๅพฎ่ชฟๆ•ดใ—ใ€ๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ‚ฌใ‚คใƒ‰](tasks/language_modeling#causal-language-modeling)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ <Tip> ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใซ้–ขใ™ใ‚‹่ฉณ็ดฐใฏใ€[ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆๆˆฆ็•ฅ](generation_strategies)ใ‚ฌใ‚คใƒ‰ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ </Tip> ### Summarization [BART](model_doc/bart) ใ‚„ [T5](model_doc/t5) ใฎใ‚ˆใ†ใชใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใฏใ€่ฆ็ด„ใ‚ฟใ‚นใ‚ฏใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใƒปใƒˆใ‚ฅใƒปใ‚ทใƒผใ‚ฑใƒณใ‚นใƒปใƒ‘ใ‚ฟใƒผใƒณใซ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€BARTใฎๅ‹•ไฝœๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใ€ๆœ€ๅพŒใซT5ใฎๅพฎ่ชฟๆ•ดใ‚’่ฉฆใ™ใ“ใจใŒใงใใพใ™ใ€‚ <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. 
BARTใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฏใ€BERTใจ้žๅธธใซไผผใฆใŠใ‚Šใ€ใƒ†ใ‚ญใ‚นใƒˆใฎใƒˆใƒผใ‚ฏใƒณใจไฝ็ฝฎใ‚จใƒณใƒ™ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใพใ™ใ€‚BARTใฏใ€ๅ…ฅๅŠ›ใ‚’็ ดๅฃŠใ—ใฆใ‹ใ‚‰ใƒ‡ใ‚ณใƒผใƒ€ใƒผใงๅ†ๆง‹็ฏ‰ใ™ใ‚‹ใ“ใจใซใ‚ˆใฃใฆไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚็‰นๅฎšใฎ็ ดๅฃŠๆˆฆ็•ฅใ‚’ๆŒใคไป–ใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใจใฏ็•ฐใชใ‚Šใ€BARTใฏไปปๆ„ใฎ็จฎ้กžใฎ็ ดๅฃŠใ‚’้ฉ็”จใงใใพใ™ใ€‚ใŸใ ใ—ใ€*ใƒ†ใ‚ญใ‚นใƒˆใ‚คใƒณใƒ•ใ‚ฃใƒชใƒณใ‚ฐ*็ ดๅฃŠๆˆฆ็•ฅใŒๆœ€้ฉใงใ™ใ€‚ใƒ†ใ‚ญใ‚นใƒˆใ‚คใƒณใƒ•ใ‚ฃใƒชใƒณใ‚ฐใงใฏใ€ใ„ใใคใ‹ใฎใƒ†ใ‚ญใ‚นใƒˆใ‚นใƒ‘ใƒณใŒ**ๅ˜ไธ€ใฎ** [`mask`] ใƒˆใƒผใ‚ฏใƒณใง็ฝฎใๆ›ใˆใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใฏ้‡่ฆใงใ™ใ€ใชใœใชใ‚‰ใƒขใƒ‡ใƒซใฏใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ—ใชใ‘ใ‚Œใฐใชใ‚‰ใšใ€ใƒขใƒ‡ใƒซใซๆฌ ่ฝใƒˆใƒผใ‚ฏใƒณใฎๆ•ฐใ‚’ไบˆๆธฌใ•ใ›ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ๅ…ฅๅŠ›ๅŸ‹ใ‚่พผใฟใจใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸใ‚นใƒ‘ใƒณใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‚’ไป‹ใ—ใฆๆœ€็ต‚็š„ใช้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ๅ‡บๅŠ›ใ—ใพใ™ใŒใ€BERTใจใฏ็•ฐใชใ‚Šใ€BARTใฏๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ใŸใ‚ใฎๆœ€็ต‚็š„ใชใƒ•ใ‚ฃใƒผใƒ‰ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚’ๆœ€ๅพŒใซ่ฟฝๅŠ ใ—ใพใ›ใ‚“ใ€‚ 2. ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎๅ‡บๅŠ›ใฏใƒ‡ใ‚ณใƒผใƒ€ใƒผใซๆธกใ•ใ‚Œใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎๅ‡บๅŠ›ใ‹ใ‚‰ใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใจ้ž็ ดๅฃŠใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏๅ…ƒใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅพฉๅ…ƒใ™ใ‚‹ใฎใซๅฝน็ซ‹ใค่ฟฝๅŠ ใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใŒๆไพ›ใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใ‹ใ‚‰ใฎๅ‡บๅŠ›ใฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ˜ใƒƒใƒ‰ใซๆธกใ•ใ‚Œใ€้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ใƒญใ‚ธใƒƒใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใŸใ‚ใฎ็ทšๅฝขๅค‰ๆ›ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใฏใ€ใƒญใ‚ธใƒƒใƒˆใจใƒฉใƒ™ใƒซใฎ้–“ใง่จˆ็ฎ—ใ•ใ‚Œใ€ใƒฉใƒ™ใƒซใฏๅ˜ใซๅณใซใ‚ทใƒ•ใƒˆใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใงใ™ใ€‚ ่ฆ็ด„ใ‚’่ฉฆใ™ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸT5ใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[่ฆ็ด„ใ‚ฌใ‚คใƒ‰](tasks/summarization)ใ‚’ใ”่ฆงใใ ใ•ใ„๏ผ <Tip> ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใซ้–ขใ™ใ‚‹่ฉณ็ดฐใฏใ€[ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆๆˆฆ็•ฅ](generation_strategies)ใ‚ฌใ‚คใƒ‰ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ </Tip> ### Translation ็ฟป่จณใฏใ€ใ‚‚ใ†ไธ€ใคใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใƒปใƒˆใ‚ฅใƒปใ‚ทใƒผใ‚ฑใƒณใ‚นใƒปใ‚ฟใ‚นใ‚ฏใฎไพ‹ใงใ‚ใ‚Šใ€[BART](model_doc/bart) ใ‚„ [T5](model_doc/t5) ใฎใ‚ˆใ†ใชใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆๅฎŸ่กŒใงใใพใ™ใ€‚ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€BARTใฎๅ‹•ไฝœๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใ€ๆœ€ๅพŒใซT5ใฎๅพฎ่ชฟๆ•ดใ‚’่ฉฆใ™ใ“ใจใŒใงใใพใ™ใ€‚ BARTใฏใ€ใ‚ฝใƒผใ‚น่จ€่ชžใ‚’ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใซใƒ‡ใ‚ณใƒผใƒ‰ใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใซใ€ๅˆฅๅ€‹ใซใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใง็ฟป่จณใซ้ฉๅฟœใ—ใพใ™ใ€‚ใ“ใฎๆ–ฐใ—ใ„ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎๅŸ‹ใ‚่พผใฟใฏใ€ๅ…ƒใฎๅ˜่ชžๅŸ‹ใ‚่พผใฟใฎไปฃใ‚ใ‚Šใซไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใซๆธกใ•ใ‚Œใพใ™ใ€‚ใ‚ฝใƒผใ‚นใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฏใ€ใƒขใƒ‡ใƒซใฎๅ‡บๅŠ›ใ‹ใ‚‰ใฎใ‚ฏใƒญใ‚นใ‚จใƒณใƒˆใƒญใƒ”ใƒผๆๅคฑใ‚’็”จใ„ใฆใ‚ฝใƒผใ‚นใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ€ไฝ็ฝฎใ‚จใƒณใƒ™ใƒ‡ใ‚ฃใƒณใ‚ฐใ€ใŠใ‚ˆใณๅ…ฅๅŠ›ใ‚จใƒณใƒ™ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ๆ›ดๆ–ฐใ™ใ‚‹ใ“ใจใซใ‚ˆใฃใฆ่จ“็ทดใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๆœ€ๅˆใฎใ‚นใƒ†ใƒƒใƒ—ใงใฏใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒๅ›บๅฎšใ•ใ‚Œใ€ใ™ในใฆใฎใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒ2็•ช็›ฎใฎใ‚นใƒ†ใƒƒใƒ—ใงไธ€็ท’ใซ่จ“็ทดใ•ใ‚Œใพใ™ใ€‚ ใใฎๅพŒใ€็ฟป่จณใฎใŸใ‚ใซๅคš่จ€่ชž็‰ˆใฎmBARTใŒ็™ปๅ ดใ—ใ€ๅคš่จ€่ชžใงไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใจใ—ใฆๅˆฉ็”จๅฏ่ƒฝใงใ™ใ€‚ 
็ฟป่จณใ‚’่ฉฆใ™ๆบ–ๅ‚™ใฏใงใใพใ—ใŸใ‹๏ผŸT5ใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใถใŸใ‚ใซใ€ๅฎŒๅ…จใช[็ฟป่จณใ‚ฌใ‚คใƒ‰](tasks/summarization)ใ‚’ใ”่ฆงใใ ใ•ใ„๏ผ <Tip> ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใซ้–ขใ™ใ‚‹่ฉณ็ดฐใฏใ€[ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆๆˆฆ็•ฅ](generation_strategies)ใ‚ฌใ‚คใƒ‰ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„๏ผ </Tip>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/philosophy.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Philosophy ๐Ÿค— Transformersใฏใ€ๆฌกใฎใ‚ˆใ†ใช็›ฎ็š„ใงๆง‹็ฏ‰ใ•ใ‚ŒใŸๆ„่ฆ‹ใ‚’ๆŒใคใƒฉใ‚คใƒ–ใƒฉใƒชใงใ™๏ผš - ๅคง่ฆๆจกใชTransformersใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ€็ ”็ฉถใ€ใพใŸใฏๆ‹กๅผตใ—ใŸใ„ๆฉŸๆขฐๅญฆ็ฟ’็ ”็ฉถ่€…ใŠใ‚ˆใณๆ•™่‚ฒ่€…ใ€‚ - ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใŸใ‚Šใ€ๆœฌ็•ช็’ฐๅขƒใงๆไพ›ใ—ใŸใ‚Šใ€ใพใŸใฏใใฎไธกๆ–นใ‚’่กŒใ„ใŸใ„ๅฎŸๅ‹™ๅฎถใ€‚ - ไธŽใˆใ‚‰ใ‚ŒใŸๆฉŸๆขฐๅญฆ็ฟ’ใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใ™ใ‚‹ใŸใ‚ใซใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆไฝฟ็”จใ—ใŸใ„ใ‚จใƒณใ‚ธใƒ‹ใ‚ขใ€‚ ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€2ใคใฎๅผทๅŠ›ใช็›ฎๆจ™ใ‚’ๆŒใฃใฆ่จญ่จˆใ•ใ‚Œใพใ—ใŸ๏ผš 1. ใงใใ‚‹ใ ใ‘็ฐกๅ˜ใ‹ใค้ซ˜้€Ÿใซไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจ๏ผš - ใƒฆใƒผใ‚ถใƒผๅ‘ใ‘ใฎๆŠฝ่ฑกๅŒ–ใ‚’้™ใ‚Šใชใๅฐ‘ใชใใ—ใ€ๅฎŸ้š›ใ€ใปใจใ‚“ใฉใฎๅ ดๅˆใ€ๆŠฝ่ฑกๅŒ–ใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๅ„ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใช3ใคใฎๆจ™ๆบ–ใ‚ฏใƒฉใ‚นใ ใ‘ใŒๅญ˜ๅœจใ—ใพใ™๏ผš[ๆง‹ๆˆ](main_classes/configuration)ใ€ [ใƒขใƒ‡ใƒซ](main_classes/model)ใ€ใŠใ‚ˆใณๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚น๏ผˆNLP็”จใฎ[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถ](main_classes/tokenizer)ใ€ใƒ“ใ‚ธใƒงใƒณ็”จใฎ[ใ‚คใƒกใƒผใ‚ธใƒ—ใƒญใ‚ปใƒƒใ‚ต](main_classes/image_processor)ใ€ ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช็”จใฎ[็‰นๅพดๆŠฝๅ‡บๅ™จ](main_classes/feature_extractor)ใ€ใŠใ‚ˆใณใƒžใƒซใƒใƒขใƒผใƒ€ใƒซๅ…ฅๅŠ›็”จใฎ[ใƒ—ใƒญใ‚ปใƒƒใ‚ต](main_classes/processors)๏ผ‰ใ€‚ - ใ“ใ‚Œใ‚‰ใฎใ‚ฏใƒฉใ‚นใฏใ€ๅ…ฑ้€šใฎ`from_pretrained()`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‹ใ‚‰็ฐกๅ˜ใ‹ใค็ตฑไธ€ใ•ใ‚ŒใŸๆ–นๆณ•ใงๅˆๆœŸๅŒ–ใงใใพใ™ใ€‚ใ“ใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰้–ข้€ฃใ™ใ‚‹ใ‚ฏใƒฉใ‚นใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใจ้–ข้€ฃใƒ‡ใƒผใ‚ฟ๏ผˆๆง‹ๆˆใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎ่ชžๅฝ™ใ€ใƒขใƒ‡ใƒซใฎ้‡ใฟ๏ผ‰ใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰๏ผˆๅฟ…่ฆใชๅ ดๅˆใฏใ‚ญใƒฃใƒƒใ‚ทใƒฅ๏ผ‰ใ—ใฆ่ชญใฟ่พผใฟใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใฎไธŠใซใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฏ2ใคใฎAPIใ‚’ๆไพ›ใ—ใฆใ„ใพใ™๏ผš[ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ]ใฏใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใงใƒขใƒ‡ใƒซใ‚’ใ™ใฐใ‚„ใๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ใŸใ‚ใฎใ‚‚ใฎใงใ‚ใ‚Šใ€[`Trainer`]ใฏPyTorchใƒขใƒ‡ใƒซใ‚’่ฟ…้€Ÿใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใพใŸใฏๅพฎ่ชฟๆ•ดใ™ใ‚‹ใŸใ‚ใฎใ‚‚ใฎใงใ™๏ผˆใ™ในใฆใฎTensorFlowใƒขใƒ‡ใƒซใฏ`Keras.fit`ใจไบ’ๆ›ๆ€งใŒใ‚ใ‚Šใพใ™๏ผ‰ใ€‚ - ใใฎ็ตๆžœใ€ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใฎใƒขใ‚ธใƒฅใƒฉใƒผใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ๆ‹กๅผตใพใŸใฏๆง‹็ฏ‰ใ—ใŸใ„ๅ ดๅˆใฏใ€้€šๅธธใฎPythonใ€PyTorchใ€TensorFlowใ€Kerasใƒขใ‚ธใƒฅใƒผใƒซใ‚’ไฝฟ็”จใ—ใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใ‹ใ‚‰็ถ™ๆ‰ฟใ—ใฆใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟใจไฟๅญ˜ใชใฉใฎๆฉŸ่ƒฝใ‚’ๅ†ๅˆฉ็”จใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๅ“ฒๅญฆใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€[Repeat 
Yourself](https://huggingface.co/blog/transformers-design-philosophy)ใƒ–ใƒญใ‚ฐๆŠ•็จฟใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ 2. ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒขใƒ‡ใƒซใซใงใใ‚‹ใ ใ‘่ฟ‘ใ„ๆ€ง่ƒฝใ‚’ๆŒใคๆœ€ๆ–ฐใฎใƒขใƒ‡ใƒซใ‚’ๆไพ›ใ™ใ‚‹ใ“ใจ๏ผš - ๅ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅฏพใ—ใฆใ€ๅ…ฌๅผใช่‘—่€…ใ‹ใ‚‰ๆไพ›ใ•ใ‚ŒใŸ็ตๆžœใ‚’ๅ†็พใ™ใ‚‹ๅฐ‘ใชใใจใ‚‚1ใคใฎไพ‹ใ‚’ๆไพ›ใ—ใพใ™ใ€‚ - ใ‚ณใƒผใƒ‰ใฏ้€šๅธธใ€ๅฏ่ƒฝใช้™ใ‚Šๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใซ่ฟ‘ใ„ใ‚‚ใฎใงใ‚ใ‚Šใ€ใ“ใ‚ŒใฏPyTorchใ‚ณใƒผใƒ‰ใŒTensorFlowใ‚ณใƒผใƒ‰ใซๅค‰ๆ›ใ•ใ‚Œใ‚‹ใ“ใจใ‹ใ‚‰็”Ÿใ˜ใ€้€†ใ‚‚ใพใŸ็„ถใ‚Šใงใ™ใ€‚ ใใฎไป–ใฎใ„ใใคใ‹ใฎ็›ฎๆจ™๏ผš - ใƒขใƒ‡ใƒซใฎๅ†…้ƒจใ‚’ใงใใ‚‹ใ ใ‘ไธ€่ฒซใ—ใฆๅ…ฌ้–‹ใ™ใ‚‹ใ“ใจ๏ผš - ใƒ•ใƒซใช้š ใ‚Œ็Šถๆ…‹ใจๆณจๆ„ใฎ้‡ใฟใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ๅ˜ไธ€ใฎAPIใ‚’ๆไพ›ใ—ใพใ™ใ€‚ - ๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใจๅŸบๆœฌใƒขใƒ‡ใƒซใฎAPIใฏๆจ™ๆบ–ๅŒ–ใ•ใ‚Œใ€็ฐกๅ˜ใซใƒขใƒ‡ใƒซ้–“ใ‚’ๅˆ‡ใ‚Šๆ›ฟใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ - ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ดใจ่ชฟๆŸปใฎใŸใ‚ใฎๆœ‰ๆœ›ใชใƒ„ใƒผใƒซใ‚’ไธป่ฆณ็š„ใซ้ธๅฎšใ™ใ‚‹ใ“ใจ๏ผš - ่ชžๅฝ™ใจๅŸ‹ใ‚่พผใฟใซๆ–ฐใ—ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’่ฟฝๅŠ ใ™ใ‚‹ใŸใ‚ใฎ็ฐกๅ˜ใงไธ€่ฒซใ—ใŸๆ–นๆณ•ใ€‚ - Transformerใƒ˜ใƒƒใƒ‰ใ‚’ใƒžใ‚นใ‚ฏใŠใ‚ˆใณใƒ—ใƒซใƒผใƒณใ™ใ‚‹ใŸใ‚ใฎ็ฐกๅ˜ใชๆ–นๆณ•ใ€‚ - PyTorchใ€TensorFlow 2.0ใ€ใŠใ‚ˆใณFlaxใฎ้–“ใ‚’็ฐกๅ˜ใซๅˆ‡ใ‚Šๆ›ฟใˆใฆใ€1ใคใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใ€ๅˆฅใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงๆŽจ่ซ–ใ‚’่กŒใ†ใ“ใจใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ใ“ใจใ€‚ ## Main concepts ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€ๅ„ใƒขใƒ‡ใƒซใซใคใ„ใฆๆฌกใฎ3ใคใฎใ‚ฟใ‚คใƒ—ใฎใ‚ฏใƒฉใ‚นใ‚’ไธญๅฟƒใซๆง‹็ฏ‰ใ•ใ‚Œใฆใ„ใพใ™๏ผš - **ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚น**ใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใงๆไพ›ใ•ใ‚Œใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎ้‡ใฟใจไบ’ๆ›ๆ€งใฎใ‚ใ‚‹PyTorchใƒขใƒ‡ใƒซ๏ผˆ[torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)๏ผ‰ใ€Kerasใƒขใƒ‡ใƒซ๏ผˆ[tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผ‰ใพใŸใฏJAX/Flaxใƒขใƒ‡ใƒซ๏ผˆ[flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)๏ผ‰ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ - **ๆง‹ๆˆใ‚ฏใƒฉใ‚น**ใฏใ€ใƒขใƒ‡ใƒซใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆ ผ็ดใ—ใพใ™๏ผˆๅฑคใฎๆ•ฐใ‚„้š ใ‚Œๅฑคใฎใ‚ตใ‚คใ‚บใชใฉ๏ผ‰ใ€‚ใ“ใ‚Œใ‚‰ใ‚’่‡ชๅˆ†ใงใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚็‰นใซใ€ๅค‰ๆ›ดใ‚’ๅŠ ใˆใšใซไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ™ใ‚‹ใจ่‡ชๅ‹•็š„ใซๆง‹ๆˆใŒใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผˆใ“ใ‚Œใฏใƒขใƒ‡ใƒซใฎไธ€้ƒจใงใ™๏ผ‰ใ€‚ - **ๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚น**ใฏใ€็”Ÿใƒ‡ใƒผใ‚ฟใ‚’ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๅฝขๅผใซๅค‰ๆ›ใ—ใพใ™ใ€‚[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถ](main_classes/tokenizer)ใฏๅ„ใƒขใƒ‡ใƒซใฎ่ชžๅฝ™ใ‚’ไฟๅญ˜ใ—ใ€ๆ–‡ๅญ—ๅˆ—ใ‚’ใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใฎใƒชใ‚นใƒˆใซใ‚จใƒณใ‚ณใƒผใƒ‰ใŠใ‚ˆใณใƒ‡ใ‚ณใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๆไพ›ใ—ใพใ™ใ€‚[ใ‚คใƒกใƒผใ‚ธใƒ—ใƒญใ‚ปใƒƒใ‚ต](main_classes/image_processor)ใฏใƒ“ใ‚ธใƒงใƒณๅ…ฅๅŠ›ใ‚’ๅ‰ๅ‡ฆ็†ใ—ใ€[็‰นๅพดๆŠฝๅ‡บๅ™จ](main_classes/feature_extractor)ใฏใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๅ…ฅๅŠ›ใ‚’ๅ‰ๅ‡ฆ็†ใ—ใ€[ใƒ—ใƒญใ‚ปใƒƒใ‚ต](main_classes/processors)ใฏใƒžใƒซใƒใƒขใƒผใƒ€ใƒซๅ…ฅๅŠ›ใ‚’ๅ‡ฆ็†ใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใ™ในใฆใฎใ‚ฏใƒฉใ‚นใฏใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‹ใ‚‰ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใ€ใƒญใƒผใ‚ซใƒซใซไฟๅญ˜ใ—ใ€Hubใงๅ…ฑๆœ‰ใ™ใ‚‹ใ“ใจใŒใงใใ‚‹3ใคใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™๏ผš - 
`from_pretrained()`ใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒช่‡ชไฝ“ใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใ‚‹๏ผˆ[ใƒขใƒ‡ใƒซใƒใƒ–](https://huggingface.co/models)ใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใŒใ‚ใ‚Šใพใ™๏ผ‰ใ‹ใ€ใƒฆใƒผใ‚ถใƒผใซใ‚ˆใฃใฆใƒญใƒผใ‚ซใƒซใซไฟๅญ˜ใ•ใ‚ŒใŸ๏ผˆใพใŸใฏใ‚ตใƒผใƒใƒผใซไฟๅญ˜ใ•ใ‚ŒใŸ๏ผ‰ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใƒใƒผใ‚ธใƒงใƒณใ‹ใ‚‰ใƒขใƒ‡ใƒซใ€ๆง‹ๆˆใ€ๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ใŸใ‚ใฎใƒกใ‚ฝใƒƒใƒ‰ใงใ™ใ€‚ - `save_pretrained()`ใฏใ€ใƒขใƒ‡ใƒซใ€ๆง‹ๆˆใ€ๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใ‚’ใƒญใƒผใ‚ซใƒซใซไฟๅญ˜ใ—ใ€`from_pretrained()`ใ‚’ไฝฟ็”จใ—ใฆๅ†่ชญใฟ่พผใฟใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ - `push_to_hub()`ใฏใ€ใƒขใƒ‡ใƒซใ€ๆง‹ๆˆใ€ๅ‰ๅ‡ฆ็†ใ‚ฏใƒฉใ‚นใ‚’Hubใซๅ…ฑๆœ‰ใ—ใ€่ชฐใงใ‚‚็ฐกๅ˜ใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚
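これら3つのメソッドの典型的な使い方を示す最小のスケッチです（チェックポイント名と保存先ディレクトリは一例です）:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# 事前トレーニング済みチェックポイントからモデルと前処理クラスをインスタンス化
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# ローカルに保存し、同じディレクトリから再読み込み
model.save_pretrained("./my-checkpoint")
tokenizer.save_pretrained("./my-checkpoint")
model = AutoModelForSequenceClassification.from_pretrained("./my-checkpoint")

# Hubに共有する場合（要ログイン、リポジトリ名は仮のもの）
# model.push_to_hub("my-username/my-checkpoint")
```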
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/model_summary.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# The Transformer model family

2017年に導入されて以来、[元のTransformer](https://arxiv.org/abs/1706.03762)モデルは、自然言語処理（NLP）のタスクにとどまらない、数多くの新しいエキサイティングなモデルを生み出すきっかけとなりました。[タンパク質の折りたたみ構造を予測](https://huggingface.co/blog/deep-learning-with-proteins)するモデル、[チーターを走らせるためにトレーニングする](https://huggingface.co/blog/train-decision-transformers)モデル、そして[時系列予測](https://huggingface.co/blog/time-series-transformers)のためのモデルなどがあります。Transformerのさまざまなバリアントが利用可能ですが、大局を見落とすことがあります。これらのすべてのモデルに共通するのは、元のTransformerアーキテクチャに基づいていることです。一部のモデルはエンコーダまたはデコーダのみを使用し、他のモデルは両方を使用します。これは、Transformerファミリー内のモデルの高レベルの違いをカテゴライズし、調査するための有用な分類法を提供し、以前に出会ったことのないTransformerを理解するのに役立ちます。

元のTransformerモデルに慣れていないか、リフレッシュが必要な場合は、Hugging Faceコースの[Transformerの動作原理](https://huggingface.co/course/chapter1/4?fw=pt)章をチェックしてください。

<div align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/H39Z_720T5s" title="YouTubeビデオプレーヤー" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

## Computer vision

<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FacQBpeFBVvrDUlzFlkejoz%2FModelscape-timeline%3Fnode-id%3D0%253A1%26t%3Dm0zJ7m2BQ9oe0WtO-1" allowfullscreen></iframe>

### Convolutional network

長い間、畳み込みネットワーク（CNN）はコンピュータビジョンのタスクにおいて支配的なパラダイムでしたが、[ビジョンTransformer](https://arxiv.org/abs/2010.11929)はそのスケーラビリティと効率性を示しました。それでも、並進不変性（translation invariance）のように、CNNの持つ優れた特性の一部は（特に特定のタスクにとっては）非常に強力であるため、一部のTransformerはアーキテクチャに畳み込みを組み込んでいます。[ConvNeXt](model_doc/convnext)は、畳み込みを現代化するためにTransformerから設計の選択肢を取り入れています。例えば、ConvNeXtは画像をパッチに分割するために重なり合わないスライディングウィンドウと、グローバル受容野を増加させるための大きなカーネルを使用します。ConvNeXtは、メモリ効率とパフォーマンスを向上させるためにいくつかのレイヤーデザインの選択肢も提供し、Transformerと競合的になります！

### Encoder[[cv-encoder]]

[ビジョン トランスフォーマー（ViT）](model_doc/vit) は、畳み込みを使用しないコンピュータビジョンタスクの扉を開きました。ViT は標準のトランスフォーマーエンコーダーを使用しますが、画像の扱い方が主要なブレークスルーでした。画像を固定サイズのパッチに分割し、それらをトークンのように使用して埋め込みを作成します。ViT は、トランスフォーマーの効率的なアーキテクチャを活用して、当時のCNNと競争力のある結果を、より少ないトレーニングリソースで示しました。ViT に続いて、セグメンテーションや検出などの密なビジョンタスクを処理できる他のビジョンモデルも登場しました。

これらのモデルの1つが[Swin](model_doc/swin) トランスフォーマーです。Swin トランスフォーマーは、より小さなサイズのパッチから階層的な特徴マップ（CNNのようで ViT とは異なります）を構築し、より深い層では隣接するパッチをマージしていきます。注意はローカルウィンドウ内でのみ計算され、ウィンドウは注意のレイヤー間でシフトされ、モデルがより良く学習するのをサポートする接続を作成します。Swin トランスフォーマーは階層的な特徴マップを生成できるため、セグメンテーションや検出などの密な予測タスクに適しています。[SegFormer](model_doc/segformer) も階層的な特徴マップを構築するためにトランスフォーマーエンコーダーを使用しますが、すべての特徴マップを組み合わせて予測するためにシンプルなマルチレイヤーパーセプトロン（MLP）デコーダーを追加します。

BeIT および ViTMAE などの他のビジョンモデルは、BERTの事前トレーニング目標からインスピレーションを得ました。[BeIT](model_doc/beit) は *masked image modeling (MIM)* によって事前トレーニングされています。画像パッチはランダムにマスクされ、画像も視覚トークンにトークン化されます。BeIT はマスクされたパッチに対応する視覚トークンを予測するようにトレーニングされます。[ViTMAE](model_doc/vitmae) も似たような事前トレーニング目標を持っており、視覚トークンの代わりにピクセルを予測する必要があります。特筆すべきは、画像パッチの75%がマスクされていることです！デコーダーはマスクされたトークンとエンコードされたパッチからピクセルを再構築します。事前トレーニングの後、デコーダーは捨てられ、エンコーダーはダウンストリームのタスクで使用できる状態です。

### Decoder[[cv-decoder]]

デコーダーのみのビジョンモデルは珍しいです。なぜなら、ほとんどのビジョンモデルは画像表現を学ぶためにエンコーダーを使用するからです。しかし、画像生成などのユースケースでは、デコーダーは自然な適応です。GPT-2などのテキスト生成モデルで見てきたように、[ImageGPT](model_doc/imagegpt) も同様のアーキテクチャを使用しますが、シーケンス内の次のトークンを予測する代わりに、画像内の次のピクセルを予測します。画像生成に加えて、ImageGPT は画像分類のためにもファインチューニングできます。

### Encoder-decoder[[cv-encoder-decoder]]

ビジョンモデルは一般的にエンコーダー（バックボーンとも呼ばれます）を使用して重要な画像特徴を抽出し、それをトランスフォーマーデコーダーに渡します。[DETR](model_doc/detr) は事前トレーニング済みのバックボーンを持っていますが、オブジェクト検出のために完全なトランスフォーマーエンコーダーデコーダーアーキテクチャも使用しています。エンコーダーは画像表現を学び、デコーダー内のオブジェクトクエリ（各オブジェクトクエリは画像内の領域またはオブジェクトに焦点を当てた学習された埋め込みです）と組み合わせます。DETR は各オブジェクトクエリに対する境界ボックスの座標とクラスラベルを予測します。

## Natural language processing

<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FUhbQAZDlpYW5XEpdFy6GoG%2Fnlp-model-timeline%3Fnode-id%3D0%253A1%26t%3D4mZMr4r1vDEYGJ50-1" allowfullscreen></iframe>

### Encoder[[nlp-encoder]]

[BERT](model_doc/bert) はエンコーダー専用のTransformerで、入力の一部のトークンをランダムにマスクし、モデルがそれを「カンニング」できないようにします。事前トレーニングの目標は、周囲の文脈に基づいてマスクされたトークンを予測することです。これにより、BERTは入力のより深く豊かな表現を学習するために左右の文脈を完全に活用できます。しかし、BERTの事前トレーニング戦略にはまだ改善の余地がありました。[RoBERTa](model_doc/roberta) は、トレーニングをより長時間行い、より大きなバッチでトレーニングし、前処理中に一度だけでなく各エポックでトークンをランダムにマスクし、次文予測の目標を削除する新しい事前トレーニングレシピを導入することでこれを改善しました。
ๆ€ง่ƒฝใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ไธป่ฆใชๆˆฆ็•ฅใฏใƒขใƒ‡ใƒซใฎใ‚ตใ‚คใ‚บใ‚’ๅข—ใ‚„ใ™ใ“ใจใงใ™ใŒใ€ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏ่จˆ็ฎ—ใ‚ณใ‚นใƒˆใŒใ‹ใ‹ใ‚Šใพใ™ใ€‚่จˆ็ฎ—ใ‚ณใ‚นใƒˆใ‚’ๅ‰Šๆธ›ใ™ใ‚‹ๆ–นๆณ•ใฎ1ใคใฏใ€[DistilBERT](model_doc/distilbert) ใฎใ‚ˆใ†ใชๅฐใ•ใชใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚DistilBERTใฏ[็Ÿฅ่ญ˜่’ธ็•™](https://arxiv.org/abs/1503.02531) - ๅœง็ธฎๆŠ€่ก“ - ใ‚’ไฝฟ็”จใ—ใฆใ€BERTใฎใปใผใ™ในใฆใฎ่จ€่ชž็†่งฃๆฉŸ่ƒฝใ‚’ไฟๆŒใ—ใชใŒใ‚‰ใ€ใ‚ˆใ‚Šๅฐใ•ใชใƒใƒผใ‚ธใƒงใƒณใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ใ—ใ‹ใ—ใ€ใปใจใ‚“ใฉใฎTransformerใƒขใƒ‡ใƒซใฏๅผ•ใ็ถšใใ‚ˆใ‚Šๅคšใใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅŠน็އใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใŒ็™ปๅ ดใ—ใฆใ„ใพใ™ใ€‚[ALBERT](model_doc/albert) ใฏใ€2ใคใฎๆ–นๆณ•ใงใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆ•ฐใ‚’ๆธ›ใ‚‰ใ™ใ“ใจใซใ‚ˆใฃใฆใƒกใƒขใƒชๆถˆ่ฒป้‡ใ‚’ๅ‰Šๆธ›ใ—ใพใ™ใ€‚ๅคงใใช่ชžๅฝ™ๅŸ‹ใ‚่พผใฟใ‚’2ใคใฎๅฐใ•ใช่กŒๅˆ—ใซๅˆ†ๅ‰ฒใ—ใ€ใƒฌใ‚คใƒคใƒผใŒใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๅ…ฑๆœ‰ใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚[DeBERTa](model_doc/deberta) ใฏใ€ๅ˜่ชžใจใใฎไฝ็ฝฎใ‚’2ใคใฎใƒ™ใ‚ฏใƒˆใƒซใงๅˆฅใ€…ใซใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹่งฃใ‹ใ‚ŒใŸๆณจๆ„ๆฉŸๆง‹ใ‚’่ฟฝๅŠ ใ—ใพใ—ใŸใ€‚ๆณจๆ„ใฏใ“ใ‚Œใ‚‰ใฎๅˆฅใ€…ใฎใƒ™ใ‚ฏใƒˆใƒซใ‹ใ‚‰่จˆ็ฎ—ใ•ใ‚Œใพใ™ใ€‚ๅ˜่ชžใจไฝ็ฝฎใฎๅŸ‹ใ‚่พผใฟใŒๅซใพใ‚Œใ‚‹ๅ˜ไธ€ใฎใƒ™ใ‚ฏใƒˆใƒซใงใฏใชใใ€[Longformer](model_doc/longformer) ใฏใ€็‰นใซ้•ทใ„ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซๆณจๆ„ใ‚’ใ‚ˆใ‚ŠๅŠน็އ็š„ใซใ™ใ‚‹ใ“ใจใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใพใ—ใŸใ€‚ๅ›บๅฎšใ•ใ‚ŒใŸใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใ‚ตใ‚คใ‚บใฎๅ‘จใ‚Šใฎๅ„ใƒˆใƒผใ‚ฏใƒณใ‹ใ‚‰่จˆ็ฎ—ใ•ใ‚Œใ‚‹ใƒญใƒผใ‚ซใƒซใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆไป˜ใๆณจๆ„๏ผˆ็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใƒˆใƒผใ‚ฏใƒณ๏ผˆๅˆ†้กžใฎใŸใ‚ใฎ `[CLS]` ใชใฉ๏ผ‰ใฎใฟใฎใŸใ‚ใฎใ‚ฐใƒญใƒผใƒใƒซใชๆณจๆ„ใ‚’ๅซใ‚€๏ผ‰ใฎ็ต„ใฟๅˆใ‚ใ›ใ‚’ไฝฟ็”จใ—ใฆใ€ๅฎŒๅ…จใชๆณจๆ„่กŒๅˆ—ใงใฏใชใ็–Žใชๆณจๆ„่กŒๅˆ—ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ### Decoder[[nlp-decoder]] [GPT-2](model_doc/gpt2)ใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ใƒ‡ใ‚ณใƒผใƒ€ใƒผๅฐ‚็”จใฎTransformerใงใ™ใ€‚ใƒขใƒ‡ใƒซใฏๅ…ˆใ‚’่ฆ‹ใ‚‹ใ“ใจใŒใงใใชใ„ใ‚ˆใ†ใซใƒˆใƒผใ‚ฏใƒณใ‚’ๅณใซใƒžใ‚นใ‚ฏใ—ใ€"ใฎใžใ่ฆ‹"ใ‚’้˜ฒใŽใพใ™ใ€‚ๅคง้‡ใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใŸใ“ใจใซใ‚ˆใ‚Šใ€GPT-2ใฏใƒ†ใ‚ญใ‚นใƒˆ็”ŸๆˆใŒ้žๅธธใซๅพ—ๆ„ใงใ€ใƒ†ใ‚ญใ‚นใƒˆใŒๆญฃ็ขบใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚‹ใซใ—ใฆใ‚‚ใ€ๆ™‚ๆŠ˜ๆญฃ็ขบใงใฏใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใ‹ใ—ใ€GPT-2ใซใฏBERTใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‹ใ‚‰ใฎๅŒๆ–นๅ‘ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใŒไธ่ถณใ—ใฆใŠใ‚Šใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซใฏ้ฉใ—ใฆใ„ใพใ›ใ‚“ใงใ—ใŸใ€‚[XLNET](model_doc/xlnet)ใฏใ€ๅŒๆ–นๅ‘ใซๅญฆ็ฟ’ใงใใ‚‹้ †ๅˆ—่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ็›ฎๆจ™๏ผˆPLM๏ผ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€BERTใจGPT-2ใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็›ฎๆจ™ใฎใƒ™ใ‚นใƒˆใ‚’็ต„ใฟๅˆใ‚ใ›ใฆใ„ใพใ™ใ€‚ 
GPT-2ใฎๅพŒใ€่จ€่ชžใƒขใƒ‡ใƒซใฏใ•ใ‚‰ใซๅคงใใๆˆ้•ทใ—ใ€ไปŠใงใฏ*ๅคง่ฆๆจก่จ€่ชžใƒขใƒ‡ใƒซ๏ผˆLLM๏ผ‰*ใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ๅคง่ฆๆจกใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใ‚Œใฐใ€LLMใฏใปใผใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆๅญฆ็ฟ’ใ‚’็คบใ™ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚[GPT-J](model_doc/gptj)ใฏใ€6Bใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒใคLLMใงใ€400Bใฎใƒˆใƒผใ‚ฏใƒณใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚GPT-Jใซใฏ[OPT](model_doc/opt)ใŒ็ถšใใ€ใใฎใ†ใกๆœ€ๅคงใฎใƒขใƒ‡ใƒซใฏ175Bใงใ€180Bใฎใƒˆใƒผใ‚ฏใƒณใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚ๅŒใ˜ๆ™‚ๆœŸใซ[BLOOM](model_doc/bloom)ใŒใƒชใƒชใƒผใ‚นใ•ใ‚Œใ€ใ“ใฎใƒ•ใ‚กใƒŸใƒชใƒผใฎๆœ€ๅคงใฎใƒขใƒ‡ใƒซใฏ176Bใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒใกใ€46ใฎ่จ€่ชžใจ13ใฎใƒ—ใƒญใ‚ฐใƒฉใƒŸใƒณใ‚ฐ่จ€่ชžใง366Bใฎใƒˆใƒผใ‚ฏใƒณใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ### Encoder-decoder[[nlp-encoder-decoder]] [BART](model_doc/bart)ใฏใ€ๅ…ƒใฎTransformerใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ไฟๆŒใ—ใฆใ„ใพใ™ใŒใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็›ฎๆจ™ใ‚’*ใƒ†ใ‚ญใ‚นใƒˆ่ฃœๅฎŒ*ใฎ็ ดๆใซๅค‰ๆ›ดใ—ใฆใ„ใพใ™ใ€‚ไธ€้ƒจใฎใƒ†ใ‚ญใ‚นใƒˆใ‚นใƒ‘ใƒณใฏๅ˜ไธ€ใฎ`mask`ใƒˆใƒผใ‚ฏใƒณใง็ฝฎๆ›ใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏ็ ดๆใ—ใฆใ„ใชใ„ใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ—๏ผˆๆœชๆฅใฎใƒˆใƒผใ‚ฏใƒณใฏใƒžใ‚นใ‚ฏใ•ใ‚Œใพใ™๏ผ‰ใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚ŒใŸ็Šถๆ…‹ใ‚’ไฝฟ็”จใ—ใฆไบˆๆธฌใ‚’่ฃœๅŠฉใ—ใพใ™ใ€‚[Pegasus](model_doc/pegasus)ใฏBARTใซไผผใฆใ„ใพใ™ใŒใ€Pegasusใฏใƒ†ใ‚ญใ‚นใƒˆใ‚นใƒ‘ใƒณใฎไปฃใ‚ใ‚Šใซๆ–‡ๅ…จไฝ“ใ‚’ใƒžใ‚นใ‚ฏใ—ใพใ™ใ€‚ใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใซๅŠ ใˆใฆใ€Pegasusใฏใ‚ฎใƒฃใƒƒใƒ—ๆ–‡็”Ÿๆˆ๏ผˆGSG๏ผ‰ใซใ‚ˆใฃใฆไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚GSGใฎ็›ฎๆจ™ใฏใ€ๆ–‡ๆ›ธใซ้‡่ฆใชๆ–‡ใ‚’ใƒžใ‚นใ‚ฏใ—ใ€ใใ‚Œใ‚‰ใ‚’`mask`ใƒˆใƒผใ‚ฏใƒณใง็ฝฎๆ›ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏๆฎ‹ใ‚Šใฎๆ–‡ใ‹ใ‚‰ๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ—ใชใ‘ใ‚Œใฐใชใ‚Šใพใ›ใ‚“ใ€‚[T5](model_doc/t5)ใฏใ€ใ™ในใฆใฎNLPใ‚ฟใ‚นใ‚ฏใ‚’็‰นๅฎšใฎใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚นใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆๅฏพใƒ†ใ‚ญใ‚นใƒˆใฎๅ•้กŒใซๅค‰ๆ›ใ™ใ‚‹ใ‚ˆใ‚Šใƒฆใƒ‹ใƒผใ‚ฏใชใƒขใƒ‡ใƒซใงใ™ใ€‚ใŸใจใˆใฐใ€ใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚น`Summarize:`ใฏ่ฆ็ด„ใ‚ฟใ‚นใ‚ฏใ‚’็คบใ—ใพใ™ใ€‚T5ใฏๆ•™ๅธซใ‚ใ‚Šใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ๏ผˆGLUEใจSuperGLUE๏ผ‰ใจ่‡ชๅทฑๆ•™ๅธซใ‚ใ‚Šใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ๏ผˆใƒˆใƒผใ‚ฏใƒณใฎ15๏ผ…ใ‚’ใƒฉใƒณใƒ€ใƒ ใซใ‚ตใƒณใƒ—ใƒซใ—ใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆ๏ผ‰ใซใ‚ˆใฃใฆไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ## Audio <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2Fvrchl8jDV9YwNVPWu2W0kK%2Fspeech-and-audio-model-timeline%3Fnode-id%3D0%253A1%26t%3DmM4H8pPMuK23rClL-1" allowfullscreen></iframe> ### Encoder[[audio-encoder]] [Wav2Vec2](model_doc/wav2vec2) ใฏใ€็”Ÿใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๆณขๅฝขใ‹ใ‚‰็›ดๆŽฅ้Ÿณๅฃฐ่กจ็พใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใŸใ‚ใฎTransformerใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€ๅฏพ็…ง็š„ใชใ‚ฟใ‚นใ‚ฏใงไบ‹ๅ‰ๅญฆ็ฟ’ใ•ใ‚Œใ€ไธ€้€ฃใฎๅฝใฎ่กจ็พใ‹ใ‚‰็œŸใฎ้Ÿณๅฃฐ่กจ็พใ‚’็‰นๅฎšใ—ใพใ™ใ€‚ [HuBERT](model_doc/hubert) ใฏWav2Vec2ใซไผผใฆใ„ใพใ™ใŒใ€็•ฐใชใ‚‹ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ใ‚ฟใƒผใ‚ฒใƒƒใƒˆใƒฉใƒ™ใƒซใฏใ€้กžไผผใ—ใŸใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‚ปใ‚ฐใƒกใƒณใƒˆใŒใ‚ฏใƒฉใ‚นใ‚ฟใซๅ‰ฒใ‚Šๅฝ“ใฆใ‚‰ใ‚Œใ€ใ“ใ‚ŒใŒ้š ใ‚Œใƒฆใƒ‹ใƒƒใƒˆใซใชใ‚‹ใ‚ฏใƒฉใ‚นใ‚ฟใƒชใƒณใ‚ฐใ‚นใƒ†ใƒƒใƒ—ใซใ‚ˆใฃใฆไฝœๆˆใ•ใ‚Œใพใ™ใ€‚้š ใ‚Œใƒฆใƒ‹ใƒƒใƒˆใฏๅŸ‹ใ‚่พผใฟใซใƒžใƒƒใƒ—ใ•ใ‚Œใ€ไบˆๆธฌใ‚’่กŒใ„ใพใ™ใ€‚ ### Encoder-decoder[[audio-encoder-decoder]] [Speech2Text](model_doc/speech_to_text) 
ใฏใ€่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜๏ผˆASR๏ผ‰ใŠใ‚ˆใณ้Ÿณๅฃฐ็ฟป่จณใฎใŸใ‚ใซ่จญ่จˆใ•ใ‚ŒใŸ้Ÿณๅฃฐใƒขใƒ‡ใƒซใงใ™ใ€‚ใ“ใฎใƒขใƒ‡ใƒซใฏใ€ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๆณขๅฝขใ‹ใ‚‰ๆŠฝๅ‡บใ•ใ‚ŒใŸใƒญใ‚ฐใƒกใƒซใƒ•ใ‚ฃใƒซใ‚ฟใƒผใƒใƒณใ‚ฏใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ่‡ชๅทฑๅ›žๅธฐ็š„ใซใƒˆใƒฉใƒณใ‚นใ‚ฏใƒชใƒ—ใƒˆใพใŸใฏ็ฟป่จณใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ [Whisper](model_doc/whisper) ใ‚‚ASRใƒขใƒ‡ใƒซใงใ™ใŒใ€ไป–ใฎๅคšใใฎ้Ÿณๅฃฐใƒขใƒ‡ใƒซใจใฏ็•ฐใชใ‚Šใ€โœจ ใƒฉใƒ™ใƒซไป˜ใ โœจ ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒˆใƒฉใƒณใ‚นใ‚ฏใƒชใƒ—ใ‚ทใƒงใƒณใƒ‡ใƒผใ‚ฟใ‚’ๅคง้‡ใซไบ‹ๅ‰ใซๅญฆ็ฟ’ใ—ใฆใ€ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅฎŸ็พใ—ใพใ™ใ€‚ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅคง้ƒจๅˆ†ใซใฏ้ž่‹ฑ่ชžใฎ่จ€่ชžใ‚‚ๅซใพใ‚ŒใฆใŠใ‚Šใ€WhisperใฏไฝŽใƒชใ‚ฝใƒผใ‚น่จ€่ชžใซใ‚‚ไฝฟ็”จใงใใพใ™ใ€‚ๆง‹้€ ็š„ใซใฏใ€WhisperใฏSpeech2Textใซไผผใฆใ„ใพใ™ใ€‚ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชไฟกๅทใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใซใ‚ˆใฃใฆใ‚จใƒณใ‚ณใƒผใƒ‰ใ•ใ‚ŒใŸใƒญใ‚ฐใƒกใƒซใ‚นใƒšใ‚ฏใƒˆใƒญใ‚ฐใƒฉใƒ ใซๅค‰ๆ›ใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใจๅ‰ใฎใƒˆใƒผใ‚ฏใƒณใ‹ใ‚‰ใƒˆใƒฉใƒณใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’่‡ชๅทฑๅ›žๅธฐ็š„ใซ็”Ÿๆˆใ—ใพใ™ใ€‚ ## Multimodal <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FcX125FQHXJS2gxeICiY93p%2Fmultimodal%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### Encoder[[mm-encoder]] [VisualBERT](model_doc/visual_bert) ใฏใ€BERTใฎๅพŒใซใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸใƒ“ใ‚ธใƒงใƒณ่จ€่ชžใ‚ฟใ‚นใ‚ฏๅ‘ใ‘ใฎใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใƒขใƒ‡ใƒซใงใ™ใ€‚ใ“ใ‚ŒใฏBERTใจไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ็‰ฉไฝ“ๆคœๅ‡บใ‚ทใ‚นใƒ†ใƒ ใ‚’็ต„ใฟๅˆใ‚ใ›ใ€็”ปๅƒ็‰นๅพดใ‚’ใƒ“ใ‚ธใƒฅใ‚ขใƒซๅŸ‹ใ‚่พผใฟใซๆŠฝๅ‡บใ—ใ€ใƒ†ใ‚ญใ‚นใƒˆๅŸ‹ใ‚่พผใฟใจไธ€็ท’ใซBERTใซๆธกใ—ใพใ™ใ€‚VisualBERTใฏ้žใƒžใ‚นใ‚ฏใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅŸบใซใ—ใŸใƒžใ‚นใ‚ฏใƒ†ใ‚ญใ‚นใƒˆใ‚’ไบˆๆธฌใ—ใ€ใƒ†ใ‚ญใ‚นใƒˆใŒ็”ปๅƒใจๆ•ดๅˆใ—ใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใ‚‚ไบˆๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ViTใŒใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸ้š›ใ€[ViLT](model_doc/vilt) ใฏ็”ปๅƒๅŸ‹ใ‚่พผใฟใ‚’ๅ–ๅพ—ใ™ใ‚‹ใŸใ‚ใซใ“ใฎๆ–นๆณ•ใ‚’ๆŽก็”จใ—ใพใ—ใŸใ€‚็”ปๅƒๅŸ‹ใ‚่พผใฟใฏใƒ†ใ‚ญใ‚นใƒˆๅŸ‹ใ‚่พผใฟใจๅ…ฑใซๅ…ฑๅŒใงๅ‡ฆ็†ใ•ใ‚Œใพใ™ใ€‚ใใ‚Œใ‹ใ‚‰ใ€ViLTใฏ็”ปๅƒใƒ†ใ‚ญใ‚นใƒˆใƒžใƒƒใƒใƒณใ‚ฐใ€ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€ใŠใ‚ˆใณๅ…จๅ˜่ชžใƒžใ‚นใ‚ญใƒณใ‚ฐใซใ‚ˆใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒ่กŒใ‚ใ‚Œใพใ™ใ€‚ [CLIP](model_doc/clip) ใฏ็•ฐใชใ‚‹ใ‚ขใƒ—ใƒญใƒผใƒใ‚’ๅ–ใ‚Šใ€(`็”ปๅƒ`ใ€`ใƒ†ใ‚ญใ‚นใƒˆ`) ใฎใƒšใ‚ขไบˆๆธฌใ‚’่กŒใ„ใพใ™ใ€‚็”ปๅƒใ‚จใƒณใ‚ณใƒผใƒ€ใƒผ๏ผˆViT๏ผ‰ใจใƒ†ใ‚ญใ‚นใƒˆใ‚จใƒณใ‚ณใƒผใƒ€ใƒผ๏ผˆTransformer๏ผ‰ใฏใ€(`็”ปๅƒ`ใ€`ใƒ†ใ‚ญใ‚นใƒˆ`) ใƒšใ‚ขใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆไธŠใงๅ…ฑๅŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใ€(`็”ปๅƒ`ใ€`ใƒ†ใ‚ญใ‚นใƒˆ`) ใƒšใ‚ขใฎ็”ปๅƒใจใƒ†ใ‚ญใ‚นใƒˆใฎๅŸ‹ใ‚่พผใฟใฎ้กžไผผๆ€งใ‚’ๆœ€ๅคงๅŒ–ใ—ใพใ™ใ€‚ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅพŒใ€CLIPใ‚’ไฝฟ็”จใ—ใฆ็”ปๅƒใ‹ใ‚‰ใƒ†ใ‚ญใ‚นใƒˆใ‚’ไบˆๆธฌใ—ใŸใ‚Šใ€ใใฎ้€†ใ‚’่กŒใ†ใ“ใจใŒใงใใพใ™ใ€‚[OWL-ViT](model_doc/owlvit) ใฏใ€ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆ็‰ฉไฝ“ๆคœๅ‡บใฎใƒใƒƒใ‚ฏใƒœใƒผใƒณใจใ—ใฆCLIPใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅพŒใ€็‰ฉไฝ“ๆคœๅ‡บใƒ˜ใƒƒใƒ‰ใŒ่ฟฝๅŠ ใ•ใ‚Œใ€(`ใ‚ฏใƒฉใ‚น`ใ€`ใƒใ‚ฆใƒณใƒ‡ใ‚ฃใƒณใ‚ฐใƒœใƒƒใ‚ฏใ‚น`) ใƒšใ‚ขใซๅฏพใ™ใ‚‹ใ‚ปใƒƒใƒˆไบˆๆธฌใŒ่กŒใ‚ใ‚Œใพใ™ใ€‚ ### Encoder-decoder[[mm-encoder-decoder]] ๅ…‰ๅญฆๆ–‡ๅญ—่ช่ญ˜๏ผˆOCR๏ผ‰ใฏใ€้€šๅธธใ€็”ปๅƒใ‚’็†่งฃใ—ใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซ่ค‡ๆ•ฐใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใŒ้–ขไธŽใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆ่ช่ญ˜ใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚ [TrOCR](model_doc/trocr) 
ใฏใ€ใ‚จใƒณใƒ‰ใƒ„ใƒผใ‚จใƒณใƒ‰ใฎTransformerใ‚’ไฝฟ็”จใ—ใฆใ“ใฎใƒ—ใƒญใ‚ปใ‚นใ‚’็ฐก็•ฅๅŒ–ใ—ใพใ™ใ€‚ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฏ็”ปๅƒใ‚’ๅ›บๅฎšใ‚ตใ‚คใ‚บใฎใƒ‘ใƒƒใƒใจใ—ใฆๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎViTใ‚นใ‚ฟใ‚คใƒซใฎใƒขใƒ‡ใƒซใงใ‚ใ‚Šใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ€ใƒ†ใ‚ญใ‚นใƒˆใ‚’่‡ชๅทฑๅ›žๅธฐ็š„ใซ็”Ÿๆˆใ—ใพใ™ใ€‚[Donut](model_doc/donut) ใฏOCRใƒ™ใƒผใ‚นใฎใ‚ขใƒ—ใƒญใƒผใƒใซไพๅญ˜ใ—ใชใ„ใ‚ˆใ‚Šไธ€่ˆฌ็š„ใชใƒ“ใ‚ธใƒฅใ‚ขใƒซใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ็†่งฃใƒขใƒ‡ใƒซใงใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใจใ—ใฆSwin Transformerใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใจใ—ใฆๅคš่จ€่ชžBARTใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ Donutใฏ็”ปๅƒใจใƒ†ใ‚ญใ‚นใƒˆใฎๆณจ้‡ˆใซๅŸบใฅใ„ใฆๆฌกใฎๅ˜่ชžใ‚’ไบˆๆธฌใ™ใ‚‹ใ“ใจใซใ‚ˆใ‚Šใ€ใƒ†ใ‚ญใ‚นใƒˆใ‚’่ชญใ‚€ใŸใ‚ใซไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใ‚ณใƒผใƒ€ใƒผใฏใƒ—ใƒญใƒณใƒ—ใƒˆใ‚’ไธŽใˆใ‚‰ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใฏๅ„ใƒ€ใ‚ฆใƒณใ‚นใƒˆใƒชใƒผใƒ ใ‚ฟใ‚นใ‚ฏใ”ใจใซ็‰นๅˆฅใชใƒˆใƒผใ‚ฏใƒณใ‚’ไฝฟ็”จใ—ใฆ่กจ็พใ•ใ‚Œใพใ™ใ€‚ไพ‹ใˆใฐใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฎ่งฃๆžใซใฏ`่งฃๆž`ใƒˆใƒผใ‚ฏใƒณใŒใ‚ใ‚Šใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎ้š ใ‚Œ็Šถๆ…‹ใจ็ต„ใฟๅˆใ‚ใ•ใ‚Œใฆใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ๆง‹้€ ๅŒ–ใ•ใ‚ŒใŸๅ‡บๅŠ›ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆ๏ผˆJSON๏ผ‰ใซ่งฃๆžใ—ใพใ™ใ€‚ ## Reinforcement learning <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FiB3Y6RvWYki7ZuKO6tNgZq%2Freinforcement-learning%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### Decoder[[rl-decoder]] ๆ„ๆ€ๆฑบๅฎšใจ่ปŒ่ทกใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใฏใ€็Šถๆ…‹ใ€ใ‚ขใ‚ฏใ‚ทใƒงใƒณใ€ๅ ฑ้…ฌใ‚’ใ‚ทใƒผใ‚ฑใƒณใ‚นใƒขใƒ‡ใƒชใƒณใ‚ฐใฎๅ•้กŒใจใ—ใฆๆ‰ใˆใพใ™ใ€‚ [Decision Transformer](model_doc/decision_transformer) ใฏใ€ใƒชใ‚ฟใƒผใƒณใƒปใƒˆใ‚ฅใƒปใ‚ดใƒผใ€้ŽๅŽปใฎ็Šถๆ…‹ใ€ใŠใ‚ˆใณใ‚ขใ‚ฏใ‚ทใƒงใƒณใซๅŸบใฅใ„ใฆๅฐ†ๆฅใฎๅธŒๆœ›ใƒชใ‚ฟใƒผใƒณใซใคใชใŒใ‚‹ใ‚ขใ‚ฏใ‚ทใƒงใƒณใฎ็ณปๅˆ—ใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ๆœ€ๅพŒใฎ *K* ใ‚ฟใ‚คใƒ ใ‚นใƒ†ใƒƒใƒ—ใงใฏใ€3ใคใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃใใ‚Œใžใ‚ŒใŒใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใซๅค‰ๆ›ใ•ใ‚Œใ€ๅฐ†ๆฅใฎใ‚ขใ‚ฏใ‚ทใƒงใƒณใƒˆใƒผใ‚ฏใƒณใ‚’ไบˆๆธฌใ™ใ‚‹ใŸใ‚ใซGPTใฎใ‚ˆใ†ใชใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆๅ‡ฆ็†ใ•ใ‚Œใพใ™ใ€‚[Trajectory Transformer](model_doc/trajectory_transformer) ใ‚‚็Šถๆ…‹ใ€ใ‚ขใ‚ฏใ‚ทใƒงใƒณใ€ๅ ฑ้…ฌใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใ€GPTใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงๅ‡ฆ็†ใ—ใพใ™ใ€‚ๅ ฑ้…ฌ่ชฟๆ•ดใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใŸDecision Transformerใจใฏ็•ฐใชใ‚Šใ€Trajectory Transformerใฏใƒ“ใƒผใƒ ใ‚ตใƒผใƒใ‚’ไฝฟ็”จใ—ใฆๅฐ†ๆฅใฎใ‚ขใ‚ฏใ‚ทใƒงใƒณใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/attention.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ このファイルはMarkdown形式ですが、ドキュメンテーションビルダー用の特定の構文を含んでおり、Markdownビューアーでは正しく表示されないことに注意してください。

-->

# Attention mechanism

ほとんどのTransformerモデルは、アテンション行列が正方形であるという意味で完全なアテンションを使用します。これは、長いテキストを扱う場合に計算のボトルネックとなることがあります。LongformerやReformerは、より効率的でトレーニングを高速化するためにアテンション行列のスパースバージョンを使用しようとするモデルです。

## LSH attention

[Reformer](#reformer)はLSH（locality-sensitive hashing、局所性鋭敏型ハッシュ）アテンションを使用します。softmax(QK^t)では、行列QK^tの中で（ソフトマックス次元で）最も大きな要素のみが有用な寄与を提供します。したがって、各クエリqについて、クエリqに近いキーkのみを考慮すれば十分です。qとkが近いかどうかを決定するために、ハッシュ関数が使用されます。アテンションマスクは変更され、現在のトークンをマスクします（最初の位置を除く）。なぜなら、現在のトークンはクエリとキーが等しい（つまり非常に似ている）ペアを与えてしまうからです。ハッシュは多少ランダムになり得るため、実際にはいくつかのハッシュ関数が使用され（n_roundsパラメータで決定されます）、それらが平均化されます。

## Local attention

[Longformer](#longformer)はローカルアテンションを使用します。しばしば、ローカルコンテキスト（例：左右の2つのトークンは何か？）だけでも、特定のトークンについて判断を下すのに十分です。また、小さなウィンドウを持つアテンションレイヤーを積み重ねることで、最後のレイヤーはウィンドウ内のトークンを超えた受容野を持つようになり、文全体の表現を構築できます。

一部の事前選択された入力トークンにはグローバルアテンションも与えられます。これらの少数のトークンに対して、アテンション行列はすべてのトークンにアクセスでき、このプロセスは対称的です。つまり、他のすべてのトークンも（ローカルウィンドウ内のトークンに加えて）これらの特定のトークンにアクセスできます。これは、論文の図2dに示されており、以下はサンプルのアテンションマスクです：

<div class="flex justify-center">
    <img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/>
</div>
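ローカルアテンションとグローバルアテンションの組み合わせがどのような疎なマスクを生むかは、小さなスケッチで確認できます。以下はウィンドウ幅やグローバルトークンの位置を仮に決めた説明用のコードであり、Longformerの実際の実装そのものではありません:

```python
import torch

seq_len, window = 8, 2   # window: 前後に見るトークン数（仮の値）
global_tokens = [0]      # 例: 先頭の [CLS] にグローバルアテンションを与える

i = torch.arange(seq_len)
# ローカルアテンション: |i - j| <= window の位置のみ True
mask = (i[:, None] - i[None, :]).abs() <= window

# グローバルトークンは全体を見られ、全体からも見える（対称的）
for g in global_tokens:
    mask[g, :] = True
    mask[:, g] = True

print(mask.int())  # 対角帯 + グローバル行・列のみ 1 の疎な行列
```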
## Other tricks

### Axial positional encodings

[Reformer](#reformer)は軸方向の位置エンコーディングを使用しています。伝統的なトランスフォーマーモデルでは、位置エンコーディングEはサイズが \\(l\\) × \\(d\\) の行列で、\\(l\\) はシーケンスの長さ、\\(d\\) は隠れ状態の次元です。非常に長いテキストを扱う場合、この行列は非常に大きく、GPU上で大量のスペースを占有します。これを緩和するために、軸方向の位置エンコーディングは、この大きな行列Eを2つの小さな行列E1とE2に分解します。それぞれの行列はサイズ \\(l_{1} \times d_{1}\\) および \\(l_{2} \times d_{2}\\) を持ち、 \\(l_{1} \times l_{2} = l\\) および \\(d_{1} + d_{2} = d\\) という条件を満たします（長さの積を考えると、これがはるかに小さくなります）。行列E内の時刻 \\(j\\) の埋め込みは、E1内の時刻 \\(j \% l1\\) の埋め込みとE2内の時刻 \\(j // l1\\) の埋め込みを連結することによって得られます。
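上の分解をそのままコードにすると次のようになります（行列の中身は乱数で、サイズは説明用の仮の値です）:

```python
import torch

l1, l2 = 4, 8   # l = l1 * l2 = 32 （シーケンス長）
d1, d2 = 3, 5   # d = d1 + d2 = 8 （隠れ状態の次元）
E1 = torch.randn(l1, d1)
E2 = torch.randn(l2, d2)

def axial_embedding(j: int) -> torch.Tensor:
    # 位置 j の埋め込み = E1[j % l1] と E2[j // l1] の連結
    return torch.cat([E1[j % l1], E2[j // l1]])

print(axial_embedding(17).shape)  # torch.Size([8])
```

保持するパラメータ数は \\(l \times d\\) から \\(l_{1} d_{1} + l_{2} d_{2}\\) に減り、長いシーケンスほど節約が大きくなります。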
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/bertology.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# BERTology

大規模なトランスフォーマー、例えばBERTの内部動作を調査する研究領域が急成長しています（これを「BERTology」とも呼びます）。この分野の良い例は以下です：

- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: [論文リンク](https://arxiv.org/abs/1905.05950)
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: [論文リンク](https://arxiv.org/abs/1905.10650)
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: [論文リンク](https://arxiv.org/abs/1906.04341)
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: [論文リンク](https://arxiv.org/abs/2210.04633)

この新しい分野の発展を支援するために、BERT/GPT/GPT-2モデルにいくつかの追加機能を組み込み、人々が内部表現にアクセスできるようにしました。これらの機能は、主にPaul Michel氏の優れた研究（[論文リンク](https://arxiv.org/abs/1905.10650)）に基づいています。具体的には、以下の機能が含まれています：

- BERT/GPT/GPT-2のすべての隠れ状態にアクセスすることができます。
- BERT/GPT/GPT-2の各ヘッドの注意重みにアクセスできます。
- ヘッドの出力値と勾配を取得し、ヘッドの重要性スコアを計算し、[論文リンク](https://arxiv.org/abs/1905.10650)で説明されているようにヘッドを削減できます。

これらの機能を理解し、使用するのを支援するために、特定のサンプルスクリプト「[bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py)」を追加しました。このスクリプトは、GLUEで事前トレーニングされたモデルから情報を抽出し、ヘッドを削減する役割を果たします。
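これらの機能の使い方を示す最小のスケッチです（チェックポイントと削減するヘッドの選択は一例です）:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# 隠れ状態と注意重みを出力するように指定してモデルを読み込む
model = AutoModel.from_pretrained(
    "bert-base-uncased", output_attentions=True, output_hidden_states=True
)

inputs = tokenizer("BERTology is fun.", return_tensors="pt")
outputs = model(**inputs)
print(len(outputs.hidden_states))   # 埋め込み層 + 12層の隠れ状態 → 13
print(outputs.attentions[0].shape)  # (バッチ, ヘッド数, 系列長, 系列長)

# ヘッドの削減（pruning）: 層0のヘッド1と2を削除する例
model.prune_heads({0: [1, 2]})
```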
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perplexity.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Perplexity of fixed-length models

[[open-in-colab]]

パープレキシティ（PPL）は言語モデルの評価に最も一般的な指標の1つです。深入りする前に、この指標は特に古典的な言語モデル（時にはオートレグレッシブまたは因果言語モデルとも呼ばれる）に適用され、BERTなどのマスクされた言語モデルには適していないことに注意すべきです（[モデルの概要](model_summary)を参照してください）。

パープレキシティは、シーケンスの平均負対数尤度を指数化したものとして定義されます。トークン化されたシーケンス \\(X = (x_0, x_1, \dots, x_t)\\) がある場合、\\(X\\) のパープレキシティは次のように表されます。

$$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$

ここで、\\(\log p_\theta (x_i|x_{<i})\\) はモデルによる、先行トークン列 \\(x_{<i}\\) を条件とした第iトークンの対数尤度です。直感的には、これはモデルがコーパス内の指定されたトークンの集合に対して一様に予測する能力の評価と考えることができます。重要なのは、トークン化手法がモデルのパープレキシティに直接影響を与えるため、異なるモデルを比較する際には常に考慮すべきであるということです。

これはまた、データとモデルの予測との間の交差エントロピーを指数化したものと同等です。パープレキシティおよびビット・パー・キャラクター（BPC）とデータ圧縮との関係についての詳細な情報については、この[素晴らしい The Gradient のブログ記事](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/)を参照してください。

## Calculating PPL with fixed-length models

モデルのコンテキストサイズに制約がない場合、モデルのパープレキシティは、以下に示すように、シーケンスを自己回帰的に因子分解し、各ステップで先行するサブシーケンス全体を条件として計算することで評価できます。

<img width="600" alt="完全なコンテキスト長のシーケンスの分解" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/>

しかし、通常、近似モデルを使用する場合、モデルが処理できるトークン数に制約があります。例えば、最大の[GPT-2](model_doc/gpt2)のバージョンは1024トークンの固定長を持っているため、1024よりも大きい \\(t\\) に対して \\(p_\theta(x_t|x_{<t})\\) を直接計算することはできません。
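（補足）冒頭の定義は、次のような小さな数値例で確かめられます。対数尤度の値は説明用の仮のものです:

```python
import torch

# 仮の値: 各トークンの対数尤度 log p(x_i | x_{<i})
log_likelihoods = torch.tensor([-2.1, -0.5, -1.3, -3.0])

# パープレキシティ = 平均負対数尤度の指数
ppl = torch.exp(-log_likelihoods.mean())
print(ppl)  # 例: 約 5.61
```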
ไปฃใ‚ใ‚Šใซใ€้€šๅธธใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใฏใƒขใƒ‡ใƒซใฎๆœ€ๅคงๅ…ฅๅŠ›ใ‚ตใ‚คใ‚บใซ็ญ‰ใ—ใ„ใ‚ตใƒ–ใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅˆ†ๅ‰ฒใ•ใ‚Œใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎๆœ€ๅคงๅ…ฅๅŠ›ใ‚ตใ‚คใ‚บใŒ \\(k\\) ใฎๅ ดๅˆใ€ใƒˆใƒผใ‚ฏใƒณ \\(x_t\\) ใฎๅฐคๅบฆใ‚’่ฟ‘ไผผใ™ใ‚‹ใซใฏใ€ๅฎŒๅ…จใชใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใงใฏใชใใ€ใใ‚Œใ‚’ๅ…ˆ่กŒใ™ใ‚‹ \\(k-1\\) ใƒˆใƒผใ‚ฏใƒณใซใฎใฟๆกไปถใ‚’ไป˜ใ‘ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒขใƒ‡ใƒซใฎใƒ‘ใƒผใƒ—ใƒฌใ‚ญใ‚ทใƒ†ใ‚ฃใ‚’่ฉ•ไพกใ™ใ‚‹้š›ใ€่ช˜ๆƒ‘็š„ใงใ™ใŒ้žๅŠน็އใชๆ–นๆณ•ใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅˆ†ๅ‰ฒใ—ใ€ๅ„ใ‚ปใ‚ฐใƒกใƒณใƒˆใฎๅˆ†่งฃๅฏพๆ•ฐๅฐคๅบฆใ‚’็‹ฌ็ซ‹ใซๅˆ็ฎ—ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ <img width="600" alt="ๅˆฉ็”จๅฏ่ƒฝใชๅฎŒๅ…จใชใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆดป็”จใ—ใชใ„้žๆœ€้ฉใชPPL" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> ใ“ใ‚Œใฏๅ„ใ‚ปใ‚ฐใƒกใƒณใƒˆใฎใƒ‘ใƒผใƒ—ใƒฌใ‚ญใ‚ทใƒ†ใ‚ฃใŒ1ๅ›žใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใง่จˆ็ฎ—ใงใใ‚‹ใŸใ‚ใ€่จˆ็ฎ—ใŒ่ฟ…้€Ÿใงใ™ใŒใ€้€šๅธธใ€ใƒขใƒ‡ใƒซใฏใปใจใ‚“ใฉใฎไบˆๆธฌใ‚นใƒ†ใƒƒใƒ—ใงใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใŒๅฐ‘ใชใ„ใŸใ‚ใ€ๅฎŒๅ…จใซๅ› ๅญๅˆ†่งฃใ•ใ‚ŒใŸใƒ‘ใƒผใƒ—ใƒฌใ‚ญใ‚ทใƒ†ใ‚ฃใฎๆ‚ชใ„่ฟ‘ไผผใจใชใ‚Šใ€้€šๅธธใ€ใ‚ˆใ‚Š้ซ˜ใ„๏ผˆๆ‚ชใ„๏ผ‰PPLใ‚’่ฟ”ใ—ใพใ™ใ€‚ ไปฃใ‚ใ‚Šใซใ€ๅ›บๅฎš้•ทใƒขใƒ‡ใƒซใฎPPLใฏใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆๆˆฆ็•ฅใ‚’็”จใ„ใฆ่ฉ•ไพกใ™ใ‚‹ในใใงใ™ใ€‚ใ“ใ‚Œใซใฏใ€ใƒขใƒ‡ใƒซใŒๅ„ไบˆๆธฌใ‚นใƒ†ใƒƒใƒ—ใงใ‚ˆใ‚Šๅคšใใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆŒใคใ‚ˆใ†ใซใ€ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใ‚’็นฐใ‚Š่ฟ”ใ—ใ‚นใƒฉใ‚คใƒ‰ใ•ใ›ใ‚‹ใจใ„ใ†ๆ–นๆณ•ใŒๅซใพใ‚Œใพใ™ใ€‚ <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> ใ“ใ‚Œใฏใ‚ทใƒผใ‚ฑใƒณใ‚นใฎ็ขบ็އใฎใ‚ˆใ‚Šๆญฃ็ขบใชๅˆ†่งฃใซ่ฟ‘ใ„ใ‚‚ใฎใงใ‚ใ‚Šใ€้€šๅธธใฏใ‚ˆใ‚Šๆœ‰ๅˆฉใชใ‚นใ‚ณใ‚ขใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ๆฌ ็‚นใฏใ€ใ‚ณใƒผใƒ‘ใ‚นๅ†…ใฎๅ„ใƒˆใƒผใ‚ฏใƒณใซๅฏพใ—ใฆๅˆฅๅ€‹ใฎๅ‰ๆ–นใƒ‘ใ‚นใŒๅฟ…่ฆใงใ™ใ€‚ๅฎŸ็”จ็š„ใชๅฆฅๅ”ๆกˆใฏใ€1ใƒˆใƒผใ‚ฏใƒณใšใคใ‚นใƒฉใ‚คใƒ‰ใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ใ‚ˆใ‚Šๅคงใใชใ‚นใƒˆใƒฉใ‚คใƒ‰ใงใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ‚’็งปๅ‹•ใ™ใ‚‹ใ‚นใƒˆใƒฉใ‚คใƒ‰ๅž‹ใฎใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่จˆ็ฎ—ใŒใฏใ‚‹ใ‹ใซ้ซ˜้€Ÿใซ้€ฒ่กŒใงใใ‚‹ไธ€ๆ–นใงใ€ใƒขใƒ‡ใƒซใซใฏๅ„ใ‚นใƒ†ใƒƒใƒ—ใงไบˆๆธฌใ‚’่กŒใ†ใŸใ‚ใฎๅคงใใชใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใŒๆไพ›ใ•ใ‚Œใพใ™ใ€‚ ## Example: Calculating perplexity with GPT-2 in ๐Ÿค— Transformers GPT-2ใ‚’ไฝฟ็”จใ—ใฆใ“ใฎใƒ—ใƒญใ‚ปใ‚นใ‚’ใƒ‡ใƒขใƒณใ‚นใƒˆใƒฌใƒผใ‚ทใƒงใƒณใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` WikiText-2ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’่ชญใฟ่พผใฟใ€็•ฐใชใ‚‹ใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆๆˆฆ็•ฅใ‚’ไฝฟ็”จใ—ใฆใƒ‘ใƒผใƒ—ใƒฌใ‚ญใ‚ทใƒ†ใ‚ฃใ‚’่ฉ•ไพกใ—ใพใ™ใ€‚ใ“ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏๅฐ่ฆๆจกใงใ€ใ‚ปใƒƒใƒˆๅ…จไฝ“ใซๅฏพใ—ใฆๅ˜ไธ€ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ ใ‘ใชใฎใงใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใ‚’ใƒกใƒขใƒชใซ่ชญใฟ่พผใ‚“ใงใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ใ ใ‘ใงๅๅˆ†ใงใ™ใ€‚ ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` ๐Ÿค— Transformersใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ๅ˜็ด”ใซ`input_ids`ใ‚’ใƒขใƒ‡ใƒซใฎ`labels`ใจใ—ใฆๆธกใ™ใ“ใจใงใ€ๅ„ใƒˆใƒผใ‚ฏใƒณใฎๅนณๅ‡่ฒ 
ใฎๅฏพๆ•ฐๅฐคๅบฆใŒๆๅคฑใจใ—ใฆ่ฟ”ใ•ใ‚Œใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใฎใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€ๅ„ใ‚คใƒ†ใƒฌใƒผใ‚ทใƒงใƒณใงใƒขใƒ‡ใƒซใซๆธกใ™ใƒˆใƒผใ‚ฏใƒณใซใ‚ชใƒผใƒใƒผใƒฉใƒƒใƒ—ใŒใ‚ใ‚Šใพใ™ใ€‚็งใŸใกใฏใ€ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใจใ—ใฆๆ‰ฑใฃใฆใ„ใ‚‹ใƒˆใƒผใ‚ฏใƒณใฎๅฏพๆ•ฐๅฐคๅบฆใ‚’ๆๅคฑใซๅซใ‚ใŸใใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใใฎใŸใ‚ใ€ใ“ใ‚Œใ‚‰ใฎๅฏพ่ฑกใ‚’ `-100` ใซ่จญๅฎšใ—ใฆ็„ก่ฆ–ใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ไปฅไธ‹ใฏใ€ใ‚นใƒˆใƒฉใ‚คใƒ‰ใ‚’ `512` ใจใ—ใŸๅ ดๅˆใฎไพ‹ใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฏไปปๆ„ใฎใƒˆใƒผใ‚ฏใƒณใฎๆกไปถไป˜ใ‘ใฎๅฐคๅบฆใ‚’่จˆ็ฎ—ใ™ใ‚‹้š›ใซใ€ๅฐ‘ใชใใจใ‚‚ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใจใ—ใฆ 512 ใƒˆใƒผใ‚ฏใƒณใ‚’ๆŒใคใ“ใจใซใชใ‚Šใพใ™๏ผˆ512 ๅ€‹ใฎๅ‰ใฎใƒˆใƒผใ‚ฏใƒณใŒๅˆฉ็”จๅฏ่ƒฝใงใ‚ใ‚‹ๅ ดๅˆ๏ผ‰ใ€‚ ```python import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1. neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) ``` ใ‚นใƒˆใƒฉใ‚คใƒ‰้•ทใŒๆœ€ๅคงๅ…ฅๅŠ›้•ทใจๅŒใ˜ๅ ดๅˆใ€ไธŠ่ฟฐใฎๆœ€้ฉใงใชใ„ใ‚นใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆๆˆฆ็•ฅใจๅŒ็ญ‰ใงใ™ใ€‚ใ‚นใƒˆใƒฉใ‚คใƒ‰ใŒๅฐใ•ใ„ใปใฉใ€ใƒขใƒ‡ใƒซใฏๅ„ไบˆๆธฌใ‚’่กŒใ†้š›ใซใ‚ˆใ‚Šๅคšใใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆŒใคใŸใ‚ใ€้€šๅธธใ€ๅ ฑๅ‘Šใ•ใ‚Œใ‚‹ๅ›ฐ้›ฃๅบฆ๏ผˆperplexity๏ผ‰ใŒๅ‘ไธŠใ—ใพใ™ใ€‚ ไธŠ่จ˜ใฎใ‚ณใƒผใƒ‰ใ‚’ `stride = 1024` ใงๅฎŸ่กŒใ™ใ‚‹ใจใ€ใ‚ชใƒผใƒใƒผใƒฉใƒƒใƒ—ใŒใชใ„็Šถๆ…‹ใงใ€็ตๆžœใฎๅ›ฐ้›ฃๅบฆ๏ผˆperplexity๏ผ‰ใฏ `19.44` ใซใชใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏ GPT-2 ใฎ่ซ–ๆ–‡ใซๅ ฑๅ‘Šใ•ใ‚ŒใŸ `19.93` ใจใปใผๅŒ็ญ‰ใงใ™ใ€‚ไธ€ๆ–นใ€`stride = 512` ใ‚’ไฝฟ็”จใ—ใ€ใ“ใฎใ‚ˆใ†ใซใ‚นใƒˆใƒฉใ‚คใƒ‡ใ‚ฃใƒณใ‚ฐใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆๆˆฆ็•ฅใ‚’ๆŽก็”จใ™ใ‚‹ใจใ€ๅ›ฐ้›ฃๅบฆ๏ผˆperplexity๏ผ‰ใŒ `16.45` ใซๅ‘ไธŠใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ‚ˆใ‚Šๅฅฝๆ„็š„ใชใ‚นใ‚ณใ‚ขใ ใ‘ใงใชใใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎๅฐคๅบฆใฎ็œŸใฎ่‡ชๅทฑๅ›žๅธฐๅˆ†่งฃใซใ‚ˆใ‚Š่ฟ‘ใ„ๆ–นๆณ•ใง่จˆ็ฎ—ใ•ใ‚Œใฆใ„ใพใ™ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/serialization.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Export to ONNX ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ๆœฌ็•ช็’ฐๅขƒใซๅฑ•้–‹ใ™ใ‚‹้š›ใซใฏใ€ใƒขใƒ‡ใƒซใ‚’็‰นๆฎŠใชใƒฉใƒณใ‚ฟใ‚คใƒ ใŠใ‚ˆใณใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใง่ชญใฟ่พผใฟใ€ๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใ€ใƒขใƒ‡ใƒซใ‚’ใ‚ทใƒชใ‚ขใƒฉใ‚คใ‚บใ•ใ‚ŒใŸๅฝขๅผใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ใ“ใจใŒๅฟ…่ฆใงใ‚ใ‚‹ใ‹ใ€ใใฎๆฉๆตใ‚’ๅ—ใ‘ใ‚‹ใ“ใจใŒใงใใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Optimumใฏใ€Transformersใฎๆ‹กๅผตๆฉŸ่ƒฝใงใ‚ใ‚Šใ€PyTorchใพใŸใฏTensorFlowใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ONNXใ‚„TFLiteใชใฉใฎใ‚ทใƒชใ‚ขใƒฉใ‚คใ‚บใ•ใ‚ŒใŸๅฝขๅผใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ใ“ใจใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ใ€Œexportersใ€ใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ใพใŸใ€๐Ÿค— Optimumใฏใ€ๆœ€ๅคงใฎๅŠน็އใงใ‚ฟใƒผใ‚ฒใƒƒใƒˆใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นๆœ€้ฉๅŒ–ใƒ„ใƒผใƒซใ‚‚ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’๐Ÿค— Optimumใ‚’ไฝฟ็”จใ—ใฆONNXใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ๆ–นๆณ•ใ‚’็คบใ—ใฆใŠใ‚Šใ€ใƒขใƒ‡ใƒซใ‚’TFLiteใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฏ[Export to TFLiteใƒšใƒผใ‚ธ](tflite)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Export to ONNX [ONNX๏ผˆOpen Neural Network eXchange๏ผ‰](http://onnx.ai)ใฏใ€PyTorchใŠใ‚ˆใณTensorFlowใ‚’ๅซใ‚€ใ•ใพใ–ใพใชใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงๆทฑๅฑคๅญฆ็ฟ’ใƒขใƒ‡ใƒซใ‚’่กจ็พใ™ใ‚‹ใŸใ‚ใฎๅ…ฑ้€šใฎไธ€้€ฃใฎๆผ”็ฎ—ๅญใจใƒ•ใ‚กใ‚คใƒซๅฝขๅผใ‚’ๅฎš็พฉใ™ใ‚‹ใ‚ชใƒผใƒ—ใƒณใ‚นใ‚ฟใƒณใƒ€ใƒผใƒ‰ใงใ™ใ€‚ใƒขใƒ‡ใƒซใŒONNXๅฝขๅผใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ•ใ‚Œใ‚‹ใจใ€ใ“ใ‚Œใ‚‰ใฎๆผ”็ฎ—ๅญใฏใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚’ไป‹ใ™ใ‚‹ใƒ‡ใƒผใ‚ฟใฎๆตใ‚Œใ‚’่กจใ™่จˆ็ฎ—ใ‚ฐใƒฉใƒ•๏ผˆไธ€่ˆฌ็š„ใซใฏใ€Œไธญ้–“่กจ็พใ€ใจๅ‘ผใฐใ‚Œใ‚‹๏ผ‰ใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ๆจ™ๆบ–ๅŒ–ใ•ใ‚ŒใŸๆผ”็ฎ—ๅญใจใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ๅ‚™ใˆใŸใ‚ฐใƒฉใƒ•ใ‚’ๅ…ฌ้–‹ใ™ใ‚‹ใ“ใจใงใ€ONNXใฏใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ้–“ใฎๅˆ‡ใ‚Šๆ›ฟใˆใ‚’ๅฎนๆ˜“ใซใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€PyTorchใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฏONNXๅฝขๅผใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ—ใ€ใใ‚Œใ‚’TensorFlowใงใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผˆ้€†ใ‚‚ๅŒๆง˜ใงใ™๏ผ‰ใ€‚ ONNXๅฝขๅผใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซไฝฟ็”จใงใใพใ™๏ผš - [ใ‚ฐใƒฉใƒ•ๆœ€้ฉๅŒ–](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization)ใ‚„[้‡ๅญๅŒ–](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)ใชใฉใฎใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใ‚’ไฝฟ็”จใ—ใฆๆŽจ่ซ–ใฎใŸใ‚ใซๆœ€้ฉๅŒ–ใ€‚ - [`ORTModelForXXX`ใ‚ฏใƒฉใ‚น](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort)ใ‚’ไป‹ใ—ใฆONNX RuntimeใงๅฎŸ่กŒใ—ใ€๐Ÿค— TransformersใงใŠใชใ˜ใฟใฎ`AutoModel` APIใซๅพ“ใ„ใพใ™ใ€‚ - [ๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸๆŽจ่ซ–ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)ใ‚’ไป‹ใ—ใฆๅฎŸ่กŒใ—ใ€๐Ÿค— 
Transformersใฎ[`pipeline`]้–ขๆ•ฐใจๅŒใ˜APIใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ ๐Ÿค— Optimumใฏใ€่จญๅฎšใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๆดป็”จใ—ใฆONNXใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใŠใ‚Šใ€ใ“ใ‚Œใ‚‰ใฎ่จญๅฎšใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏๅคšใใฎใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ็”จใซไบ‹ๅ‰ใซไฝœๆˆใ•ใ‚ŒใฆใŠใ‚Šใ€ไป–ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซใ‚‚็ฐกๅ˜ใซๆ‹กๅผตใงใใ‚‹ใ‚ˆใ†ใซ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ไบ‹ๅ‰ใซไฝœๆˆใ•ใ‚ŒใŸ่จญๅฎšใฎใƒชใ‚นใƒˆใซใคใ„ใฆใฏใ€[๐Ÿค— Optimumใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/optimum/exporters/onnx/overview)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ONNXใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ๆ–นๆณ•ใฏ2ใคใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใงใฏไธกๆ–นใฎๆ–นๆณ•ใ‚’็คบใ—ใพใ™๏ผš - export with ๐Ÿค— Optimum via CLI. - export with ๐Ÿค— Optimum with `optimum.onnxruntime`. ### Exporting a ๐Ÿค— Transformers model to ONNX with CLI ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ONNXใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ใซใฏใ€ใพใš่ฟฝๅŠ ใฎไพๅญ˜้–ขไฟ‚ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใใ ใ•ใ„๏ผš ```bash pip install optimum[exporters] ``` ใ™ในใฆใฎๅˆฉ็”จๅฏ่ƒฝใชๅผ•ๆ•ฐใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€[๐Ÿค— Optimumใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ใพใŸใฏใ€ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใงใƒ˜ใƒซใƒ—ใ‚’่กจ็คบใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผš ```bash optimum-cli export onnx --help ``` ๐Ÿค— Hubใ‹ใ‚‰ใƒขใƒ‡ใƒซใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ใซใฏใ€ไพ‹ใˆใฐ `distilbert-base-uncased-distilled-squad` ใ‚’ไฝฟใ„ใŸใ„ๅ ดๅˆใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš ```bash optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ ``` ้€ฒ่กŒ็Šถๆณใ‚’็คบใ—ใ€็ตๆžœใฎ `model.onnx` ใŒไฟๅญ˜ใ•ใ‚Œใ‚‹ๅ ดๆ‰€ใ‚’่กจ็คบใ™ใ‚‹ใƒญใ‚ฐใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซ่กจ็คบใ•ใ‚Œใ‚‹ใฏใšใงใ™๏ผš ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx... 
	-[✓] ONNX model output names match reference model (start_logits, end_logits)
	- Validating ONNX Model output "start_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
	- Validating ONNX Model output "end_logits":
		-[✓] (2, 16) matches (2, 16)
		-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
```

上記の例は🤗 Hubからのチェックポイントのエクスポートを示しています。ローカルモデルをエクスポートする場合、まずモデルの重みとトークナイザのファイルを同じディレクトリ（`local_path`）に保存してください。CLIを使用する場合、🤗 Hubのチェックポイント名の代わりに`model`引数に`local_path`を渡し、`--task`引数を指定してください。[🤗 Optimumドキュメント](https://huggingface.co/docs/optimum/exporters/task_manager)でサポートされているタスクのリストを確認できます。`task`引数が指定されていない場合、タスク固有のヘッドを持たないモデルアーキテクチャがデフォルトで選択されます。

```bash
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
```

エクスポートされた `model.onnx` ファイルは、ONNX標準をサポートする[多くのアクセラレータ](https://onnx.ai/supported-tools.html#deployModel)の1つで実行できます。たとえば、[ONNX Runtime](https://onnxruntime.ai/)を使用してモデルを読み込み、実行する方法は以下の通りです：

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```

🤗 HubからTensorFlowのチェックポイントをエクスポートするプロセスも同様です。例えば、[Keras organization](https://huggingface.co/keras-io)から純粋なTensorFlowのチェックポイントをエクスポートする方法は以下の通りです：

```bash
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
```

### Exporting a 🤗 Transformers model to ONNX with `optimum.onnxruntime`

CLIの代わりに、🤗 TransformersモデルをONNXにプログラム的にエクスポートすることもできます。以下のように行います：

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert_base_uncased_squad"
>>> save_directory = "onnx/"

>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```

### Exporting a model for an unsupported architecture

現在エクスポートできないモデルをサポートするために貢献したい場合、まず[`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview)でサポートされているかどうかを確認し、サポートされていない場合は[🤗 Optimumに貢献](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute)してください。

### Exporting a model with `transformers.onnx`

<Tip warning={true}>

`transformers.onnx`はもはやメンテナンスされていないため、モデルは上記で説明したように🤗 Optimumでエクスポートしてください。このセクションは将来のバージョンで削除されます。

</Tip>

🤗 TransformersモデルをONNXにエクスポートするには、追加の依存関係をインストールしてください：

```bash
pip install transformers[onnx]
```

`transformers.onnx`パッケージをPythonモジュールとして使用して、事前に用意された設定でチェックポイントをエクスポートする方法は以下の通りです：

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

この方法は、`--model`引数で定義されたチェックポイントのONNXグラフをエクスポートします。🤗 Hubのいずれかのチェックポイントまたはローカルに保存されたチェックポイントを渡すことができます。エクスポートされた`model.onnx`ファイルは、ONNX標準をサポートする多くのアクセラレータで実行できます。例えば、ONNX Runtimeを使用してモデルを読み込んで実行する方法は以下の通りです：

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

必要な出力名（例: `["last_hidden_state"]`）は、各モデルのONNX構成を確認することで取得できます。例えば、DistilBERTの場合、次のようになります：

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```

ハブから純粋なTensorFlowのチェックポイントをプログラム的にエクスポートするプロセスも、以下のように同様です：

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

ローカルに保存されたモデルをエクスポートする場合、モデルの重みとトークナイザのファイルを同じディレクトリに保存してください（例： `local-pt-checkpoint`）。その後、`transformers.onnx`パッケージの `--model`引数を希望するディレクトリに向けて設定して、ONNXにエクスポートします：

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
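補足として、🤗 Optimumでエクスポートしたモデルは、🤗 Transformersの`pipeline`と組み合わせて使うこともできます。以下は、前述の`distilbert_base_uncased_squad_onnx`ディレクトリが存在すると仮定した最小のスケッチです:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

# エクスポート済みのONNXモデルを読み込む（ディレクトリ名は上の例を流用した仮定）
tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")

# ORTModelは通常のモデルと同じようにpipelineに渡せる
onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(onnx_qa(question="What am I using?", context="Using DistilBERT with ONNX Runtime!"))
```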
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_train_gpu_one.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Methods and tools for efficient training on a single GPU

ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€ใƒกใƒขใƒชใฎๅˆฉ็”จๅŠน็އใ‚’ๆœ€้ฉๅŒ–ใ—ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใ“ใจใงใ€ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅŠน็އใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใŸใ‚ใซไฝฟ็”จใงใใ‚‹ๅฎŸ็”จ็š„ใชใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใ‚’็ดนไป‹ใ—ใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซGPUใŒใฉใฎใ‚ˆใ†ใซๅˆฉ็”จใ•ใ‚Œใ‚‹ใ‹ใ‚’็†่งฃใ—ใŸใ„ๅ ดๅˆใฏใ€ๆœ€ๅˆใซใ€Œ[ใƒขใƒ‡ใƒซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ่งฃๅ‰–ๅญฆ](model_memory_anatomy)ใ€ใฎใ‚ณใƒณใ‚ปใƒ—ใƒˆใ‚ฌใ‚คใƒ‰ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใฎใ‚ฌใ‚คใƒ‰ใฏๅฎŸ็”จ็š„ใชใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใ„ใพใ™ใ€‚

<Tip>

่ค‡ๆ•ฐใฎGPUใ‚’ๆญ่ผ‰ใ—ใŸใƒžใ‚ทใƒณใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใ“ใ‚Œใ‚‰ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏไพ็„ถใจใ—ใฆๆœ‰ๅŠนใงใ™ใ€‚ใ•ใ‚‰ใซใ€[ใƒžใƒซใƒGPUใ‚ปใ‚ฏใ‚ทใƒงใƒณ](perf_train_gpu_many)ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹่ฟฝๅŠ ใฎๆ–นๆณ•ใ‚’ๆดป็”จใงใใพใ™ใ€‚

</Tip>

ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹้š›ใ€ๅŒๆ™‚ใซ่€ƒๆ…ฎใ™ในใ2ใคใฎๅด้ขใŒใ‚ใ‚Šใพใ™๏ผš

* ใƒ‡ใƒผใ‚ฟใฎใ‚นใƒซใƒผใƒ—ใƒƒใƒˆ/ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ™‚้–“
* ใƒขใƒ‡ใƒซใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚น

ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆ๏ผˆใ‚ตใƒณใƒ—ใƒซ/็ง’๏ผ‰ใ‚’ๆœ€ๅคงๅŒ–ใ™ใ‚‹ใ“ใจใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ณใ‚นใƒˆใ‚’ไฝŽๆธ›ใ•ใ›ใพใ™ใ€‚ใ“ใ‚Œใฏไธ€่ˆฌ็š„ใซใ€GPUใ‚’ใงใใ‚‹ใ ใ‘ๅŠนๆžœ็š„ใซๆดป็”จใ—ใ€GPUใƒกใƒขใƒชใ‚’้™็•ŒใพใงๅŸ‹ใ‚ใ‚‹ใ“ใจใซใ‚ˆใฃใฆ้”ๆˆใ•ใ‚Œใพใ™ใ€‚ๅธŒๆœ›ใ™ใ‚‹ใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒGPUใƒกใƒขใƒชใฎๅˆถ้™ใ‚’่ถ…ใˆใ‚‹ๅ ดๅˆใ€ๅ‹พ้…่“„็ฉใชใฉใฎใƒกใƒขใƒชๆœ€้ฉๅŒ–ใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใŒๅฝน็ซ‹ใกใพใ™ใ€‚

ใ—ใ‹ใ—ใ€ๅฅฝใฟใฎใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒใƒกใƒขใƒชใซๅŽใพใ‚‹ๅ ดๅˆใ€ใƒกใƒขใƒชใ‚’ๆœ€้ฉๅŒ–ใ™ใ‚‹ใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใ‚’้ฉ็”จใ™ใ‚‹็†็”ฑใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๅคงใใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จใงใใ‚‹ใ‹ใ‚‰ใจใ„ใฃใฆใ€ใใ‚Œใ‚’ๅฟ…ใšใ—ใ‚‚ไฝฟ็”จใ™ในใใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ่ชฟๆ•ดใฎไธ€็’ฐใจใ—ใฆใ€ใฉใฎใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒๆœ€่‰ฏใฎ็ตๆžœใ‚’็”Ÿใฟๅ‡บใ™ใ‹ใ‚’ๆฑบๅฎšใ—ใ€ใƒชใ‚ฝใƒผใ‚นใ‚’้ฉๅˆ‡ใซๆœ€้ฉๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใ‚ซใƒใƒผใ•ใ‚Œใฆใ„ใ‚‹ๆ–นๆณ•ใจใƒ„ใƒผใƒซใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใซไธŽใˆใ‚‹ๅฝฑ้ŸฟใซๅŸบใฅใ„ใฆๅˆ†้กžใงใใพใ™๏ผš

| Method/tool                                                 | Improves training speed | Optimizes memory utilization |
|:------------------------------------------------------------|:------------------------|:-----------------------------|
| [Batch size choice](#batch-size-choice)                      | Yes                     | Yes                          |
| [Gradient accumulation](#gradient-accumulation)              | No                      | Yes                          |
| [Gradient checkpointing](#gradient-checkpointing)            | No                      | Yes                          |
| [Mixed precision training](#mixed-precision-training)        | Yes                     | (No)                         |
| [Optimizer choice](#optimizer-choice)                        | Yes                     | Yes                          |
| [Data preloading](#data-preloading)                          | Yes                     | No                           |
| [DeepSpeed Zero](#deepspeed-zero)                            | No                      | Yes                          |
| [torch.compile](#using-torchcompile)                         | Yes                     | No                           |

<Tip>

**ๆณจๆ„**: ๅฐใ•ใชใƒขใƒ‡ใƒซใจๅคงใใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใƒกใƒขใƒชใฎ็ฏ€็ด„ใŒ่กŒใ‚ใ‚Œใพใ™ใŒใ€ๅคงใใชใƒขใƒ‡ใƒซใจๅฐใ•ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€้€†ใซใƒกใƒขใƒชใฎไฝฟ็”จ้‡ใŒๅข—ๅŠ ใ—ใพใ™ใ€‚

</Tip>

ใ“ใ‚Œใ‚‰ใฎใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใฏใ€[`Trainer`]ใงใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใฆใ„ใ‚‹ๅ ดๅˆใ‚„ใ€็ด”็ฒ‹ใชPyTorchใƒซใƒผใƒ—ใ‚’่จ˜่ฟฐใ—ใฆใ„ใ‚‹ๅ ดๅˆใฎไธกๆ–นใงๅˆฉ็”จใงใใพใ™ใ€‚่ฉณ็ดฐใชๆœ€้ฉๅŒ–ใฎ่จญๅฎšใซใคใ„ใฆใฏใ€๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ—ใฆ[ใ“ใ‚Œใ‚‰ใฎๆœ€้ฉๅŒ–ใ‚’่จญๅฎšใงใใพใ™](#using-accelerate)ใ€‚

ใ“ใ‚Œใ‚‰ใฎๆ–นๆณ•ใŒๅๅˆ†ใชๅˆฉ็›Šใ‚’ใ‚‚ใŸใ‚‰ใ•ใชใ„ๅ ดๅˆใ€ไปฅไธ‹ใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ๆคœ่จŽใงใใพใ™๏ผš

* [ๅŠน็އ็š„ใชใ‚ฝใƒ•ใƒˆใ‚ฆใ‚งใ‚ขใƒ—ใƒชใƒ“ใƒซใƒ‰ใ‚’ๅ‚™ใˆใŸใ‚ซใ‚นใ‚ฟใƒ Dockerใ‚ณใƒณใƒ†ใƒŠใฎไฝœๆˆ](#efficient-software-prebuilds)
* [Mixture of Experts๏ผˆMoE๏ผ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใƒขใƒ‡ใƒซใ‚’ๆคœ่จŽ](#mixture-of-experts)
* [ใƒขใƒ‡ใƒซใ‚’BetterTransformerใซๅค‰ๆ›ใ—ใฆใ€PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚’ๆดป็”จ](#using-pytorch-native-attention)

ๆœ€ๅพŒใซใ€ใ“ใ‚Œใ‚‰ใฎๆ–นๆณ•ใ‚’ใ™ในใฆ้ฉ็”จใ—ใฆใ‚‚ใพใ ๅๅˆ†ใงใชใ„ๅ ดๅˆใฏใ€A100ใชใฉใฎใ‚ตใƒผใƒใƒผใ‚ฐใƒฌใƒผใƒ‰GPUใธใฎๅˆ‡ใ‚Šๆ›ฟใˆใ‚’ๆคœ่จŽใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใ‚Œใ‚‰ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏใƒžใƒซใƒGPUใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงใ‚‚ๆœ‰ๅŠนใงใ‚ใ‚Šใ€[ใƒžใƒซใƒGPUใ‚ปใ‚ฏใ‚ทใƒงใƒณ](perf_train_gpu_many)ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹่ฟฝๅŠ ใฎไธฆๅˆ—ๅŒ–ๆŠ€่ก“ใ‚‚ๆดป็”จใงใใพใ™ใ€‚

## Batch size choice

ๆœ€้ฉใชใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅฎŸ็พใ™ใ‚‹ใŸใ‚ใซใ€้ฉๅˆ‡ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’็‰นๅฎšใ™ใ‚‹ใ“ใจใ‹ใ‚‰ๅง‹ใ‚ใพใ—ใ‚‡ใ†ใ€‚ใƒใƒƒใƒใ‚ตใ‚คใ‚บใŠใ‚ˆใณๅ…ฅๅŠ›/ๅ‡บๅŠ›ใƒ‹ใƒฅใƒผใƒญใƒณๆ•ฐใซใฏ2^Nใฎใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒๆŽจๅฅจใ•ใ‚Œใฆใ„ใพใ™ใ€‚้€šๅธธใ€ใ“ใ‚Œใฏ8ใฎๅ€ๆ•ฐใงใ™ใŒใ€ไฝฟ็”จใ™ใ‚‹ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใจใƒขใƒ‡ใƒซใฎใƒ‡ใƒผใ‚ฟๅž‹ใซไพๅญ˜ใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚

ๅ‚่€ƒใพใงใซใ€NVIDIAใฎ[ๅ…ฅๅŠ›/ๅ‡บๅŠ›ใƒ‹ใƒฅใƒผใƒญใƒณๆ•ฐใฎๆŽจๅฅจไบ‹้ …](https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#input-features)ใจ[ใƒใƒƒใƒใ‚ตใ‚คใ‚บ](https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#batch-size)ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผˆใ“ใ‚Œใ‚‰ใฏGEMM๏ผˆไธ€่ˆฌ็š„ใช่กŒๅˆ—ไน—็ฎ—๏ผ‰ใซ้–ขไธŽใ—ใพใ™๏ผ‰ใ€‚

[Tensor Core่ฆไปถ](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc)ใงใฏใ€ใƒ‡ใƒผใ‚ฟๅž‹ใจใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซๅŸบใฅใ„ใฆไน—ๆ•ฐใŒๅฎš็พฉใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใŸใจใˆใฐใ€fp16ใƒ‡ใƒผใ‚ฟๅž‹ใฎๅ ดๅˆใ€64ใฎๅ€ๆ•ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒๆŽจๅฅจใ•ใ‚Œใพใ™๏ผˆA100 GPUใฎๅ ดๅˆใ‚’้™คใ๏ผ‰ใ€‚

ๅฐใ•ใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๅ ดๅˆใ€[ๆฌกๅ…ƒ้‡ๅญๅŒ–ๅŠนๆžœ](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#dim-quantization)ใ‚‚่€ƒๆ…ฎใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใ‚Œใฏใ‚ฟใ‚คใƒชใƒณใ‚ฐใŒ่กŒใ‚ใ‚Œใ€้ฉๅˆ‡ใชไน—ๆ•ฐใŒๅคงๅน…ใช้ซ˜้€ŸๅŒ–ใ‚’ใ‚‚ใŸใ‚‰ใ™ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚

## Gradient Accumulation

**ๅ‹พ้…่“„็ฉ**ใƒกใ‚ฝใƒƒใƒ‰ใฏใ€GPUใฎใƒกใƒขใƒชๅฎน้‡ใฎๅˆถ็ด„ใซใ‚ˆใฃใฆ่ชฒใ›ใ‚‰ใ‚Œใ‚‹ๅˆถ้™ใ‚’่ถ…ใˆใŸๅŠนๆžœ็š„ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ๅฎŸ็พใ™ใ‚‹ใŸใ‚ใซใ€ๅ‹พ้…ใ‚’ๅฐใ•ใชๅข—ๅˆ†ใง่จˆ็ฎ—ใ™ใ‚‹ใ“ใจใ‚’็›ฎ็š„ใจใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€ใƒขใƒ‡ใƒซใ‚’้ †ๆ–นๅ‘ใŠใ‚ˆใณ้€†ๆ–นๅ‘ใซๅฐใ•ใชใƒใƒƒใƒใงๅๅพฉ็š„ใซ่จˆ็ฎ—ใ—ใ€ใใฎ้Ž็จ‹ใงๅ‹พ้…ใ‚’่“„็ฉใ—ใพใ™ใ€‚ๅๅˆ†ใชๆ•ฐใฎๅ‹พ้…ใŒ่“„็ฉใ•ใ‚ŒใŸใ‚‰ใ€ใƒขใƒ‡ใƒซใฎๆœ€้ฉๅŒ–ใ‚นใƒ†ใƒƒใƒ—ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ๅ‹พ้…่“„็ฉใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€GPUใฎใƒกใƒขใƒชๅฎน้‡ใซใ‚ˆใ‚‹ๅˆถ็ด„ใ‚’่ถ…ใˆใฆ**ๅŠนๆžœ็š„ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บ**ใ‚’ๅข—ใ‚„ใ™ใ“ใจใŒใงใใพใ™ใŒใ€ๅ‹พ้…่“„็ฉใซใ‚ˆใฃใฆๅฐŽๅ…ฅใ•ใ‚Œใ‚‹่ฟฝๅŠ ใฎ้ †ๆ–นๅ‘ใŠใ‚ˆใณ้€†ๆ–นๅ‘ใฎ่จˆ็ฎ—ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใ‚’้…ใใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใŒๅฟ…่ฆใงใ™ใ€‚

`TrainingArguments`ใซ`gradient_accumulation_steps`ๅผ•ๆ•ฐใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใงใ€ๅ‹พ้…่“„็ฉใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš

```py
training_args = TrainingArguments(per_device_train_batch_size=1, gradient_accumulation_steps=4, **default_args)
```

ไธŠ่จ˜ใฎไพ‹ใงใฏใ€ๅŠนๆžœ็š„ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใฏ4ใซใชใ‚Šใพใ™ใ€‚

ใพใŸใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๅฎŒๅ…จใซๅˆถๅพกใ™ใ‚‹ใŸใ‚ใซ๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚๐Ÿค— Accelerateใฎไพ‹ใฏ[ใ“ใฎใ‚ฌใ‚คใƒ‰ใฎๅพŒๅŠ](#using-accelerate)ใง่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚

ใงใใ‚‹ใ ใ‘GPUใฎไฝฟ็”จ็އใ‚’ๆœ€ๅคง้™ใซใ™ใ‚‹ใ“ใจใŒๆŽจๅฅจใ•ใ‚Œใฆใ„ใพใ™ใŒใ€้ซ˜ใ„ๅ‹พ้…่“„็ฉใ‚นใƒ†ใƒƒใƒ—ๆ•ฐใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ้…ๅปถใ‚’ใ‚ˆใ‚Š้ก•่‘—ใซใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใฎไพ‹ใ‚’่€ƒใˆใฆใฟใพใ—ใ‚‡ใ†ใ€‚`per_device_train_batch_size=4`ใฎๅ ดๅˆใ€ๅ‹พ้…่“„็ฉใ‚’ไฝฟ็”จใ—ใชใ„ใจGPUใฎๅˆถ้™ใซ้”ใ—ใพใ™ใ€‚ใƒใƒƒใƒใ‚ตใ‚คใ‚บ64ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใŸใ„ๅ ดๅˆใ€`per_device_train_batch_size`ใ‚’1ใซ่จญๅฎšใ—ใ€`gradient_accumulation_steps`ใ‚’64ใซ่จญๅฎšใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ไปฃใ‚ใ‚Šใซใ€`per_device_train_batch_size=4`ใ‚’ไฟๆŒใ—ใ€`gradient_accumulation_steps=16`ใ‚’่จญๅฎšใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅŒใ˜ๅŠนๆžœ็š„ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒๅพ—ใ‚‰ใ‚Œใ€ๅˆฉ็”จๅฏ่ƒฝใชGPUใƒชใ‚ฝใƒผใ‚นใŒๅŠนๆžœ็š„ใซๆดป็”จใ•ใ‚Œใพใ™ใ€‚

่ฉณ็ดฐใชๆƒ…ๅ ฑใซใคใ„ใฆใฏใ€[RTX-3090็”จใฎใƒใƒƒใƒใ‚ตใ‚คใ‚บใจๅ‹พ้…่“„็ฉใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏ](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537)ใŠใ‚ˆใณ[A100็”จใฎใƒใƒƒใƒใ‚ตใ‚คใ‚บใจๅ‹พ้…่“„็ฉใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏ](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚

## Gradient Checkpointing

ไธ€้ƒจใฎๅคงใใชใƒขใƒ‡ใƒซใฏใ€ใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’1ใซ่จญๅฎšใ—ใ€ๅ‹พ้…่“„็ฉใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ใƒกใƒขใƒชใฎๅ•้กŒใซ็›ด้ขใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใƒกใƒขใƒชใ‚นใƒˆใƒฌใƒผใ‚ธใŒๅฟ…่ฆใชไป–ใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ‚‚ๅญ˜ๅœจใ™ใ‚‹ใŸใ‚ใงใ™ใ€‚

ๅ‰ๅ‘ใใƒ‘ใ‚นใ‹ใ‚‰ใฎใ™ในใฆใฎใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’ไฟๅญ˜ใ—ใฆใ€้€†ๅ‘ใใƒ‘ใ‚นใงๅ‹พ้…ใ‚’่จˆ็ฎ—ใ™ใ‚‹ใจใ€ใ‹ใชใ‚Šใฎใƒกใƒขใƒชใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒ็™บ็”Ÿใ—ใพใ™ใ€‚้€†ๅ‘ใใƒ‘ใ‚นใงๅฟ…่ฆใชใจใใซใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’็ ดๆฃ„ใ—ใฆๅ†่จˆ็ฎ—ใ™ใ‚‹ไปฃๆ›ฟใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€่จˆ็ฎ—ใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒๅคงๅน…ใซๅข—ๅŠ ใ—ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใŒ้…ใใชใ‚Šใพใ™ใ€‚

**ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ**ใฏใ€ใ“ใ‚Œใ‚‰ใฎ2ใคใฎใ‚ขใƒ—ใƒญใƒผใƒใฎๆŠ˜่กทๆกˆใ‚’ๆไพ›ใ—ใพใ™ใ€‚่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ๅ…จไฝ“ใงๆˆฆ็•ฅ็š„ใซ้ธๆŠžใ•ใ‚ŒใŸใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใฎใฟใ‚’ไฟๅญ˜ใ™ใ‚‹ใŸใ‚ใ€ๅ‹พ้…ใฎใŸใ‚ใซๅ†่จˆ็ฎ—ใŒๅฟ…่ฆใซใชใ‚‹ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใฏใ”ใไธ€้ƒจใงๆธˆใฟใพใ™ใ€‚ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ใ“ใฎ็ด ๆ™ดใ‚‰ใ—ใ„่จ˜ไบ‹](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚
[`Trainer`]ใงๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€[`TrainingArguments`]ใซๅฏพๅฟœใ™ใ‚‹ใƒ•ใƒฉใ‚ฐใ‚’ๆธกใ—ใพใ™๏ผš ```py training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, **default_args ) ``` ไปฃๆ›ฟๆ‰‹ๆฎตใจใ—ใฆใ€๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ - ๐Ÿค— Accelerateใฎไพ‹ใฏ[ใ“ใฎใ‚ฌใ‚คใƒ‰ใฎใ•ใ‚‰ใซๅพŒใ‚ใซใ‚ใ‚Šใพใ™](#using-accelerate)ใ€‚ <Tip> ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใƒกใƒขใƒชๅŠน็އใŒๅ‘ไธŠใ™ใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใŒใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ้€Ÿๅบฆใฏ็ด„20%้…ใใชใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> ## Mixed precision training **ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ**ใฏใ€ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ่จˆ็ฎ—ๅŠน็އใ‚’ๆœ€้ฉๅŒ–ใ™ใ‚‹ๆŠ€่ก“ใงใ€็‰นๅฎšใฎๅค‰ๆ•ฐใซๅฏพใ—ใฆไฝŽ็ฒพๅบฆใฎๆ•ฐๅ€คใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚’ๅˆฉ็”จใ—ใพใ™ใ€‚ๅพ“ๆฅใ€ใปใจใ‚“ใฉใฎใƒขใƒ‡ใƒซใฏๅค‰ๆ•ฐใ‚’่กจ็พใ—ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซ32ใƒ“ใƒƒใƒˆๆตฎๅ‹•ๅฐๆ•ฐ็‚น็ฒพๅบฆ๏ผˆfp32ใพใŸใฏfloat32๏ผ‰ใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใ™ในใฆใฎๅค‰ๆ•ฐใŒๆญฃ็ขบใช็ตๆžœใ‚’ๅพ—ใ‚‹ใŸใ‚ใซใ“ใฎ้ซ˜็ฒพๅบฆใฎใƒฌใƒ™ใƒซใ‚’ๅฟ…่ฆใจใ—ใชใ„ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ไธ€้ƒจใฎๅค‰ๆ•ฐใฎ็ฒพๅบฆใ‚’16ใƒ“ใƒƒใƒˆๆตฎๅ‹•ๅฐๆ•ฐ็‚น๏ผˆfp16ใพใŸใฏfloat16๏ผ‰ใชใฉใฎใ‚ˆใ‚ŠไฝŽใ„ๆ•ฐๅ€คใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใซๅค‰ๆ›ดใ™ใ‚‹ใ“ใจใงใ€่จˆ็ฎ—ใ‚’้ซ˜้€ŸๅŒ–ใงใใพใ™ใ€‚ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€ไธ€้ƒจใฎ่จˆ็ฎ—ใฏๅŠ็ฒพๅบฆใง่กŒใ‚ใ‚Œใ€ไธ€้ƒจใฏใพใ ๅฎŒๅ…จใช็ฒพๅบฆใง่กŒใ‚ใ‚Œใ‚‹ใŸใ‚ใ€ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๅ‘ผใฐใ‚Œใฆใ„ใพใ™ใ€‚ ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใซๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ€fp16๏ผˆfloat16๏ผ‰ใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ไฝฟ็”จใ—ใฆๅฎŸ็พใ•ใ‚Œใพใ™ใŒใ€ไธ€้ƒจใฎGPUใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ๏ผˆใ‚ขใƒณใƒšใ‚ขใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใชใฉ๏ผ‰ใงใฏbf16ใŠใ‚ˆใณtf32๏ผˆCUDAๅ†…้ƒจใƒ‡ใƒผใ‚ฟๅž‹๏ผ‰ใƒ‡ใƒผใ‚ฟๅž‹ใ‚‚ๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒ‡ใƒผใ‚ฟๅž‹ใฎ้•ใ„ใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€[NVIDIAใฎใƒ–ใƒญใ‚ฐ](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ### fp16 ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎไธปใชๅˆฉ็‚นใฏใ€ๅŠ็ฒพๅบฆ๏ผˆfp16๏ผ‰ใงใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใ‹ใ‚‰ๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ ๅ‹พ้…ใ‚‚ๅŠ็ฒพๅบฆใง่จˆ็ฎ—ใ•ใ‚Œใพใ™ใŒใ€ๆœ€้ฉๅŒ–ใ‚นใƒ†ใƒƒใƒ—ใงใฏๅ†ใณๅฎŒๅ…จ็ฒพๅบฆใซๅค‰ๆ›ใ•ใ‚Œใ‚‹ใŸใ‚ใ€ใ“ใ“ใงใฏใƒกใƒขใƒชใฏไฟๅญ˜ใ•ใ‚Œใพใ›ใ‚“ใ€‚ ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏ่จˆ็ฎ—้€Ÿๅบฆใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ไธ€ๆ–นใ€็‰นใซๅฐใ•ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใฎๅ ดๅˆใ€ใ‚ˆใ‚ŠๅคšใใฎGPUใƒกใƒขใƒชใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใ€ใƒขใƒ‡ใƒซใŒGPUไธŠใซ16ใƒ“ใƒƒใƒˆใŠใ‚ˆใณ32ใƒ“ใƒƒใƒˆ็ฒพๅบฆใฎไธกๆ–นใงๅญ˜ๅœจใ™ใ‚‹ใŸใ‚ใงใ™๏ผˆGPUไธŠใฎๅ…ƒใฎใƒขใƒ‡ใƒซใฎ1.5ๅ€๏ผ‰ใ€‚ ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`fp16`ใƒ•ใƒฉใ‚ฐใ‚’`True`ใซ่จญๅฎšใ—ใพใ™๏ผš ```py training_args = TrainingArguments(per_device_train_batch_size=4, fp16=True, **default_args) ``` ๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€๐Ÿค— Accelerateใฎไพ‹ใฏ[ใ“ใฎใ‚ฌใ‚คใƒ‰ใฎใ•ใ‚‰ใซๅพŒใ‚ใซใ‚ใ‚Šใพใ™](#using-accelerate)ใ€‚ ### BF16 AmpereใพใŸใฏใใ‚Œไปฅ้™ใฎใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ๅ ดๅˆใ€ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจ่ฉ•ไพกใซbf16ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚bf16ใฏfp16ใ‚ˆใ‚Šใ‚‚็ฒพๅบฆใŒๅŠฃใ‚Šใพใ™ใŒใ€ใฏใ‚‹ใ‹ใซๅคงใใชๅ‹•็š„็ฏ„ๅ›ฒใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚fp16ใงใฏใ€ๆŒใคใ“ใจใŒใงใใ‚‹ๆœ€ๅคงใฎๆ•ฐใฏ `65535` ใงใ‚ใ‚Šใ€ใใ‚Œใ‚’่ถ…ใˆใ‚‹ๆ•ฐๅ€คใฏใ‚ชใƒผใƒใƒผใƒ•ใƒญใƒผใ‚’ๅผ•ใ่ตทใ“ใ—ใพใ™ใ€‚ไธ€ๆ–นใ€bf16ใฎๆ•ฐๅ€คใฏ `3.39e+38` 
ใฎใ‚ˆใ†ใซๅคงใใใ€ใ“ใ‚Œใฏfp32ใจใปใผๅŒใ˜ใงใ™ - ใฉใกใ‚‰ใ‚‚ๆ•ฐๅ€ค็ฏ„ๅ›ฒใซ8ใƒ“ใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใŸใ‚ใงใ™ใ€‚ BF16ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€๐Ÿค— Trainerใงไปฅไธ‹ใฎใ‚ˆใ†ใซ่จญๅฎšใ—ใพใ™๏ผš ```python training_args = TrainingArguments(bf16=True, **default_args) ``` ### TF32 ใ‚ขใƒณใƒšใ‚ขใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใฏใ€tf32ใจใ„ใ†็‰นๅˆฅใชใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€fp32ใจๅŒใ˜ๆ•ฐๅ€ค็ฏ„ๅ›ฒ๏ผˆ8ใƒ“ใƒƒใƒˆ๏ผ‰ใ‚’ๆŒใฃใฆใ„ใพใ™ใŒใ€23ใƒ“ใƒƒใƒˆใฎ็ฒพๅบฆใงใฏใชใใ€10ใƒ“ใƒƒใƒˆใฎ็ฒพๅบฆ๏ผˆfp16ใจๅŒใ˜๏ผ‰ใ‚’ๆŒใกใ€ๅˆ่จˆใง19ใƒ“ใƒƒใƒˆใ—ใ‹ไฝฟ็”จใ—ใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏ้€šๅธธใฎfp32ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณๆŽจ่ซ–ใ‚ณใƒผใƒ‰ใ‚’ไฝฟ็”จใ—ใ€tf32ใ‚ตใƒใƒผใƒˆใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใงใ€ๆœ€ๅคง3ๅ€ใฎใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใฎๅ‘ไธŠใŒๅพ—ใ‚‰ใ‚Œใ‚‹็‚นใงใ€Œ้ญ”ๆณ•ใฎใ‚ˆใ†ใ€ใงใ™ใ€‚่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ใฎใฏใ€ๆฌกใฎใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงใ™๏ผš ``` import torch torch.backends.cuda.matmul.allow_tf32 = True torch.backends.cudnn.allow_tf32 = True ``` ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹GPUใŒใ‚ขใƒณใƒšใ‚ขใ‚ทใƒชใƒผใ‚บใงใ‚ใ‚‹ใจไปฎๅฎšใ—ใ€CUDAใฏๅฏ่ƒฝใช้™ใ‚Štf32ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซ่‡ชๅ‹•็š„ใซๅˆ‡ใ‚Šๆ›ฟใˆใพใ™ใ€‚ [NVIDIAใฎ็ ”็ฉถใซใ‚ˆใ‚Œใฐ](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)ใ€ใปใจใ‚“ใฉใฎๆฉŸๆขฐๅญฆ็ฟ’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒฏใƒผใ‚ฏใƒญใƒผใƒ‰ใฏtf32ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจfp32ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงๅŒใ˜้›ฃ่งฃๅบฆใจๅŽๆŸใ‚’็คบใ—ใพใ™ใ€‚ใ™ใงใซfp16ใพใŸใฏbf16ๆททๅˆ็ฒพๅบฆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใฎๅ‘ไธŠใซๅฝน็ซ‹ใคใ“ใจใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Trainerใงใ“ใฎใƒขใƒผใƒ‰ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```python TrainingArguments(tf32=True, **default_args) ``` <Tip> tf32ใฏ`tensor.to(dtype=torch.tf32)`ใ‚’ไป‹ใ—ใฆ็›ดๆŽฅใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏๅ†…้ƒจใฎCUDAใƒ‡ใƒผใ‚ฟๅž‹ใงใ™ใ€‚tf32ใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€`torch>=1.7`ใŒๅฟ…่ฆใงใ™ใ€‚ </Tip> tf32ใจไป–ใฎ็ฒพๅบฆใซ้–ขใ™ใ‚‹่ฉณ็ดฐใชๆƒ…ๅ ฑใซใคใ„ใฆใฏใ€ไปฅไธ‹ใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผš [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004390803)ใŠใ‚ˆใณ [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189)ใ€‚ ## Flash Attention 2 transformersใงFlash Attention 2็ตฑๅˆใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚Flash Attention 2ใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๅซใ‚€ใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟๆ–นๆณ•ใซใคใ„ใฆใฏใ€[single GPU section](./perf_infer_gpu_one#Flash-Attention-2)ใฎ้ฉๅˆ‡ใชใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’็ขบ่ชใ—ใฆ่ฉณ็ดฐใ‚’ๅญฆใณใพใ—ใ‚‡ใ†ใ€‚ ## ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ้ธๆŠž Transformerใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใซๆœ€ใ‚‚ไธ€่ˆฌ็š„ใซไฝฟ็”จใ•ใ‚Œใ‚‹ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฏAdamใพใŸใฏAdamW๏ผˆ้‡ใฟๆธ›่กฐใ‚’ไผดใ†Adam๏ผ‰ใงใ™ใ€‚Adamใฏๅ‰ๅ›žใฎๅ‹พ้…ใฎ็งปๅ‹•ๅนณๅ‡ใ‚’ไฟๅญ˜ใ™ใ‚‹ใ“ใจใงๅŽๆŸใ‚’้”ๆˆใ—ใพใ™ใŒใ€ใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆ•ฐใฎใ‚ชใƒผใƒ€ใƒผใฎ่ฟฝๅŠ ใƒกใƒขใƒชใƒ•ใƒƒใƒˆใƒ—ใƒชใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚’่งฃๆถˆใ™ใ‚‹ใŸใ‚ใซใ€ไปฃๆ›ฟใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ใŸใจใˆใฐใ€[NVIDIA/apex](https://github.com/NVIDIA/apex)ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€`adamw_apex_fused`ใฏใ™ในใฆใฎใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹AdamWใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎไธญใงๆœ€ใ‚‚้ซ˜้€Ÿใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไฝ“้จ“ใ‚’ๆไพ›ใ—ใพใ™ใ€‚ 
[`Trainer`]ใฏใ€็›ดๆŽฅไฝฟ็”จใงใใ‚‹ใ•ใพใ–ใพใชใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’็ตฑๅˆใ—ใฆใŠใ‚Šใ€`adamw_hf`ใ€`adamw_torch`ใ€`adamw_torch_fused`ใ€`adamw_apex_fused`ใ€`adamw_anyprecision`ใ€`adafactor`ใ€ใพใŸใฏ`adamw_bnb_8bit`ใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ใ‚ตใƒผใƒ‰ใƒ‘ใƒผใƒ†ใ‚ฃใฎๅฎŸ่ฃ…ใ‚’ไป‹ใ—ใฆใ•ใ‚‰ใซๅคšใใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’่ฟฝๅŠ ใงใใพใ™ใ€‚ AdamWใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎไปฃๆ›ฟๆ‰‹ๆฎตใซใคใ„ใฆ่ฉณใ—ใ่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†๏ผš 1. [`Trainer`]ใงไฝฟ็”จๅฏ่ƒฝใช`adafactor` 2. Trainerใงไฝฟ็”จๅฏ่ƒฝใช`adamw_bnb_8bit`ใฏใ€ใƒ‡ใƒขใƒณใ‚นใƒˆใƒฌใƒผใ‚ทใƒงใƒณ็”จใซไปฅไธ‹ใงใ‚ตใƒผใƒ‰ใƒ‘ใƒผใƒ†ใ‚ฃใฎ็ตฑๅˆใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ๆฏ”่ผƒใฎใŸใ‚ใ€3Bใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒขใƒ‡ใƒซ๏ผˆไพ‹๏ผšใ€Œt5-3bใ€๏ผ‰ใฎๅ ดๅˆ๏ผš * ๆจ™ๆบ–ใฎAdamWใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฏใ€ๅ„ใƒ‘ใƒฉใƒกใƒผใ‚ฟใซ8ใƒใ‚คใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใ€24GBใฎGPUใƒกใƒขใƒชใŒๅฟ…่ฆใงใ™๏ผˆ8 * 3 => 24GB๏ผ‰ใ€‚ * Adafactorใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฏ12GBไปฅไธŠๅฟ…่ฆใงใ™ใ€‚ๅ„ใƒ‘ใƒฉใƒกใƒผใ‚ฟใซใ‚ใšใ‹4ใƒใ‚คใƒˆไปฅไธŠใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใ€4 * 3ใจๅฐ‘ใ—ไฝ™ๅˆ†ใซใชใ‚Šใพใ™ใ€‚ * 8ใƒ“ใƒƒใƒˆใฎBNB้‡ๅญๅŒ–ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฏใ€ใ™ในใฆใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใŒ้‡ๅญๅŒ–ใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ใ‚ใšใ‹6GBใ—ใ‹ไฝฟ็”จใ—ใพใ›ใ‚“ใ€‚ ### Adafactor Adafactorใฏใ€้‡ใฟ่กŒๅˆ—ใฎๅ„่ฆ็ด ใฎใŸใ‚ใซๅ‰ๅ›žใฎๅนณๅ‡ใ‚’ไฟๅญ˜ใ—ใพใ›ใ‚“ใ€‚ไปฃใ‚ใ‚Šใซใ€๏ผˆ่กŒใ”ใจใจๅˆ—ใ”ใจใฎๅนณๅ‡ใฎๅˆ่จˆใชใฉ๏ผ‰้›† ```py training_args = TrainingArguments(per_device_train_batch_size=4, optim="adafactor", **default_args) ``` ไป–ใฎใ‚ขใƒ—ใƒญใƒผใƒ๏ผˆๅ‹พ้…่“„็ฉใ€ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ€ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ๏ผ‰ใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใงใ€ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’็ถญๆŒใ—ใชใŒใ‚‰ๆœ€ๅคง3ๅ€ใฎๅ‘ไธŠใŒ่ฆ‹ใ‚‰ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผใŸใ ใ—ใ€ๅ‰่ฟฐใฎใ‚ˆใ†ใซใ€AdafactorใฎๅŽๆŸๆ€งใฏAdamใ‚ˆใ‚Šใ‚‚ๆ‚ชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ### 8ใƒ“ใƒƒใƒˆ Adam Adafactorใฎใ‚ˆใ†ใซใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใ‚’้›†็ด„ใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€8ใƒ“ใƒƒใƒˆใฎAdamใฏๅฎŒๅ…จใช็Šถๆ…‹ใ‚’ไฟๆŒใ—ใ€ใใ‚Œใ‚’้‡ๅญๅŒ–ใ—ใพใ™ใ€‚้‡ๅญๅŒ–ใจใฏใ€็Šถๆ…‹ใ‚’ไฝŽใ„็ฒพๅบฆใงไฟๅญ˜ใ—ใ€ๆœ€้ฉๅŒ–ใฎใŸใ‚ใ ใ‘ใซ้ž้‡ๅญๅŒ–ใ™ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ใ“ใ‚Œใฏๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ่ƒŒๅพŒใซใ‚ใ‚‹ใ‚ขใ‚คใƒ‡ใ‚ขใจไผผใฆใ„ใพใ™ใ€‚ `adamw_bnb_8bit`ใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ๅ˜ใซ[`TrainingArguments`]ใง`optim="adamw_bnb_8bit"`ใ‚’่จญๅฎšใ™ใ‚‹ใ ใ‘ใงใ™๏ผš ```py training_args = TrainingArguments(per_device_train_batch_size=4, optim="adamw_bnb_8bit", **default_args) ``` ใŸใ ใ—ใ€ใƒ‡ใƒขใƒณใ‚นใƒˆใƒฌใƒผใ‚ทใƒงใƒณ็›ฎ็š„ใง8ใƒ“ใƒƒใƒˆใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ใ‚ตใƒผใƒ‰ใƒ‘ใƒผใƒ†ใ‚ฃใฎๅฎŸ่ฃ…ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ“ใ‚Œใ‚’็ตฑๅˆใ™ใ‚‹ๆ–นๆณ•ใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใงใ™ใ€‚ ใพใšใ€8ใƒ“ใƒƒใƒˆAdamใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ๅฎŸ่ฃ…ใ—ใŸ`bitsandbytes`ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใŸใ‚ใซใ€GitHub [ใƒชใƒใ‚ธใƒˆใƒช](https://github.com/TimDettmers/bitsandbytes)ๅ†…ใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซใ‚ฌใ‚คใƒ‰ใซๅพ“ใฃใฆใใ ใ•ใ„ใ€‚ ๆฌกใซใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใฏ2ใคใฎใ‚นใƒ†ใƒƒใƒ—ใŒๅซใพใ‚Œใพใ™๏ผš * ใพใšใ€ใƒขใƒ‡ใƒซใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’2ใคใฎใ‚ฐใƒซใƒผใƒ—ใซๅˆ†ใ‘ใพใ™ - ้‡ใฟๆธ›่กฐใ‚’้ฉ็”จใ™ใ‚‹ในใใ‚ฐใƒซใƒผใƒ—ใจใ€้ฉ็”จใ™ในใใงใชใ„ใ‚ฐใƒซใƒผใƒ—ใงใ™ใ€‚้€šๅธธใ€ใƒใ‚คใ‚ขใ‚นใจใƒฌใ‚คใƒคใƒผๆญฃ่ฆๅŒ–ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฏ้‡ใฟๆธ›่กฐใ•ใ‚Œใพใ›ใ‚“ใ€‚ * ๆฌกใซใ€ไปฅๅ‰ใซไฝฟ็”จใ—ใŸAdamWใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใจๅŒใ˜ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซใ€ใ„ใใคใ‹ใฎๅผ•ๆ•ฐใฎ่ชฟๆ•ดใ‚’่กŒใ„ใพใ™ใ€‚ ```py import bitsandbytes as bnb from torch 
import nn from transformers.trainer_pt_utils import get_parameter_names training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) decay_parameters = get_parameter_names(model, [nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": training_args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0, }, ] optimizer_kwargs = { "betas": (training_args.adam_beta1, training_args.adam_beta2), "eps": training_args.adam_epsilon, } optimizer_kwargs["lr"] = training_args.learning_rate adam_bnb_optim = bnb.optim.Adam8bit( optimizer_grouped_parameters, betas=(training_args.adam_beta1, training_args.adam_beta2), eps=training_args.adam_epsilon, lr=training_args.learning_rate, ) ``` ๆœ€ๅพŒใซใ€ใ‚ซใ‚นใ‚ฟใƒ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’`Trainer`ใซๅผ•ๆ•ฐใจใ—ใฆๆธกใ—ใพใ™๏ผš ```py trainer = Trainer(model=model, args=training_args, train_dataset=ds, optimizers=(adam_bnb_optim, None)) ``` ไป–ใฎใ‚ขใƒ—ใƒญใƒผใƒ๏ผˆๅ‹พ้…่“„็ฉใ€ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ€ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ๏ผ‰ใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใงใ€Adafactorใฎไฝฟ็”จใจๅŒ็ญ‰ไปฅไธŠใฎ3ๅ€ใฎใƒกใƒขใƒชๆ”นๅ–„ใŠใ‚ˆใณใ‚ใšใ‹ใซ้ซ˜ใ„ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ๆœŸๅพ…ใงใใพใ™ใ€‚ ### multi_tensor pytorch-nightlyใฏใ€ๅคšใใฎๅฐใ•ใช็‰นๅพดใƒ†ใƒณใ‚ฝใƒซใŒใ‚ใ‚‹็Šถๆณใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ๅคงๅน…ใซ้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใฏใšใฎ`torch.optim._multi_tensor`ใ‚’ๅฐŽๅ…ฅใ—ใพใ—ใŸใ€‚ใ“ใ‚Œใฏๆœ€็ต‚็š„ใซใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใซใชใ‚‹ใฏใšใงใ™ใŒใ€ใใ‚Œใ‚’ๆ—ฉใ่ฉฆใ—ใฆใฟใŸใ„ๅ ดๅˆใฏใ€ใ“ใฎGitHub [issue](https://github.com/huggingface/transformers/issues/9965)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ## ใƒ‡ใƒผใ‚ฟใฎไบ‹ๅ‰่ชญใฟ่พผใฟ ๅ„ชใ‚ŒใŸใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ้€Ÿๅบฆใซๅˆฐ้”ใ™ใ‚‹ใŸใ‚ใฎ้‡่ฆใช่ฆไปถใฎ1ใคใฏใ€GPUใŒๅ‡ฆ็†ใงใใ‚‹ๆœ€ๅคง้€Ÿๅบฆใงใƒ‡ใƒผใ‚ฟใ‚’ไพ›็ตฆใงใใ‚‹่ƒฝๅŠ›ใงใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ™ในใฆใŒใƒกใ‚คใƒณใƒ—ใƒญใ‚ปใ‚นใง่กŒใ‚ใ‚Œใ€ใƒ‡ใƒผใ‚ฟใ‚’ใƒ‡ใ‚ฃใ‚นใ‚ฏใ‹ใ‚‰ๅๅˆ†้€Ÿใ่ชญใฟๅ–ใ‚‹ใ“ใจใŒใงใใชใ„ๅ ดๅˆใ€GPUใฎใ‚ขใƒณใƒ€ใƒผใƒฆใƒผใƒ†ใ‚ฃใƒชใ‚ผใƒผใ‚ทใƒงใƒณใ‚’ๅผ•ใ่ตทใ“ใ™ใƒœใƒˆใƒซใƒใƒƒใ‚ฏใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใƒœใƒˆใƒซใƒใƒƒใ‚ฏใ‚’ๆธ›ใ‚‰ใ™ใŸใ‚ใซใ€ไปฅไธ‹ใฎๅผ•ๆ•ฐใ‚’่จญๅฎšใ—ใพใ™๏ผš - `DataLoader(pin_memory=True, ...)` - ใƒ‡ใƒผใ‚ฟใ‚’CPUใฎใƒ”ใƒณใƒกใƒขใƒชใซไบ‹ๅ‰่ชญใฟ่พผใฟใ—ใ€้€šๅธธใ€CPUใ‹ใ‚‰GPUใƒกใƒขใƒชใธใฎ่ปข้€ใŒใฏใ‚‹ใ‹ใซ้ซ˜้€ŸๅŒ–ใ•ใ‚Œใพใ™ใ€‚ - `DataLoader(num_workers=4, ...)` - ใƒ‡ใƒผใ‚ฟใ‚’ใ‚ˆใ‚Š้€Ÿใไบ‹ๅ‰่ชญใฟ่พผใฟใ™ใ‚‹ใŸใ‚ใซ่ค‡ๆ•ฐใฎใƒฏใƒผใ‚ซใƒผใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซGPUใฎๅˆฉ็”จ็Šถๆณใฎ็ตฑ่จˆๆƒ…ๅ ฑใ‚’็ขบ่ชใ—ใ€100๏ผ…ใ‹ใ‚‰้ ใ„ๅ ดๅˆใ€ใƒฏใƒผใ‚ซใƒผใฎๆ•ฐใ‚’ๅข—ใ‚„ใ™ๅฎŸ้จ“ใ‚’่กŒใฃใฆใใ ใ•ใ„ใ€‚ใ‚‚ใกใ‚ใ‚“ใ€ๅ•้กŒใฏไป–ใฎๅ ดๆ‰€ใซใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใฎใงใ€ๅคšใใฎใƒฏใƒผใ‚ซใƒผใŒๅฟ…ใšใ—ใ‚‚ๆ€ง่ƒฝๅ‘ไธŠใซใคใชใŒใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ [`Trainer`]ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ๅฏพๅฟœใ™ใ‚‹[`TrainingArguments`]ใฏ`dataloader_pin_memory`๏ผˆใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏ`True`๏ผ‰ใŠใ‚ˆใณ`dataloader_num_workers`๏ผˆใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฏ`0`๏ผ‰ใงใ™ใ€‚ ## DeepSpeed ZeRO DeepSpeedใฏใ€๐Ÿค— Transformersใจ๐Ÿค— Accelerateใจ็ตฑๅˆใ•ใ‚ŒใŸใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใฎใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐๆœ€้ฉๅŒ–ใƒฉใ‚คใƒ–ใƒฉใƒชใงใ™ใ€‚ 
ๅคง่ฆๆจกใชใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅŠน็އใจใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใŸใ‚ใซ่จญ่จˆใ•ใ‚ŒใŸใ•ใพใ–ใพใชๆฉŸ่ƒฝใจๆœ€้ฉๅŒ–ใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ใƒขใƒ‡ใƒซใŒๅ˜ไธ€ใฎGPUใซๅŽใพใ‚Šใ€ๅฐใ•ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ๅŽใ‚ใ‚‹ใ‚นใƒšใƒผใ‚นใŒใ‚ใ‚‹ๅ ดๅˆใ€DeepSpeedใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใใ‚Œใฏใ‚€ใ—ใ‚้…ใใชใ‚Šใพใ™ใ€‚ใŸใ ใ—ใ€ใƒขใƒ‡ใƒซใŒๅ˜ไธ€ใฎGPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆใ€ใพใŸใฏๅฐใ•ใชใƒใƒƒใƒใ‚’ๅŽใ‚ใ‚‹ใ“ใจใŒใงใใชใ„ๅ ดๅˆใ€DeepSpeed ZeRO + CPU OffloadใพใŸใฏNVMe Offloadใ‚’ๅˆฉ็”จใงใใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€[ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ๅˆฅ้€”ใ‚คใƒณใ‚นใƒˆใƒผใƒซ](main_classes/deepspeed#installation)ใ—ใ€่จญๅฎšใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใ—ใ€DeepSpeedใ‚’่ตทๅ‹•ใ™ใ‚‹ใŸใ‚ใฎใ‚ฌใ‚คใƒ‰ใ‚’ใƒ•ใ‚ฉใƒญใƒผใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš * [`Trainer`]ใจใฎDeepSpeed็ตฑๅˆใฎ่ฉณ็ดฐใ‚ฌใ‚คใƒ‰ใซใคใ„ใฆใฏใ€[่ฉฒๅฝ“ใ™ใ‚‹ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ](main_classes/deepspeed)ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚็‰นใซใ€[ๅ˜ไธ€GPU็”จใฎใƒ‡ใƒ—ใƒญใ‚คใƒกใƒณใƒˆ](main_classes/deepspeed#deployment-with-one-gpu)ใซ้–ขใ™ใ‚‹ใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใ™ใ€‚DeepSpeedใ‚’ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใงไฝฟ็”จใ™ใ‚‹ใซใฏใ„ใใคใ‹ใฎ่ชฟๆ•ดใŒๅฟ…่ฆใงใ™ใฎใงใ€[่ฉฒๅฝ“ใ™ใ‚‹ใ‚ฌใ‚คใƒ‰](main_classes/deepspeed#deployment-in-notebooks)ใ‚‚ใ”่ฆงใใ ใ•ใ„ใ€‚ * ๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€[๐Ÿค— Accelerate DeepSpeedใ‚ฌใ‚คใƒ‰](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## torch.compileใฎไฝฟ็”จ PyTorch 2.0ใฏๆ–ฐใ—ใ„ใ‚ณใƒณใƒ‘ใ‚คใƒซ้–ขๆ•ฐใ‚’ๅฐŽๅ…ฅใ—ใพใ—ใŸใ€‚ใ“ใ‚Œใฏๆ—ขๅญ˜ใฎPyTorchใ‚ณใƒผใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€1่กŒใฎใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใงใ‚ณใƒผใƒ‰ใ‚’ๆœ€้ฉๅŒ–ใงใใพใ™๏ผš`model = torch.compile(model)`ใ€‚ [`Trainer`]ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€[`TrainingArguments`]ๅ†…ใฎ`torch_compile`ใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ๆธกใ™ใ ใ‘ใงใ™๏ผš ```python training_args = TrainingArguments(torch_compile=True, **default_args) ``` `torch.compile`ใฏใ€ๆ—ขๅญ˜ใฎPyTorchใƒ—ใƒญใ‚ฐใƒฉใƒ ใ‹ใ‚‰ใ‚ฐใƒฉใƒ•ใ‚’่‡ชๅ‹•็š„ใซไฝœๆˆใ™ใ‚‹ใŸใ‚ใซPythonใฎใƒ•ใƒฌใƒผใƒ ่ฉ•ไพกAPIใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ‚ฐใƒฉใƒ•ใ‚’ใ‚ญใƒฃใƒ—ใƒใƒฃใ—ใŸๅพŒใ€็•ฐใชใ‚‹ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใ‚’ๅฑ•้–‹ใ—ใฆๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸใ‚จใƒณใ‚ธใƒณใซๅค‰ๆ›ใงใใพใ™ใ€‚ ่ฉณ็ดฐใŠใ‚ˆใณใƒ™ใƒณใƒใƒžใƒผใ‚ฏใซใคใ„ใฆใฏใ€[PyTorchใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://pytorch.org/get-started/pytorch-2.0/)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ `torch.compile`ใซใฏใ€ใ‚ชใƒ—ใ‚ทใƒงใƒณใฎไพๅญ˜้–ขไฟ‚ใ‚’ๆŒใคๆˆ้•ทไธญใฎใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใฎใƒชใ‚นใƒˆใŒใ‚ใ‚Šใ€`torchdynamo.list_backends()`ใ‚’ๅ‘ผใณๅ‡บใ—ใฆ็ขบ่ชใงใใพใ™ใ€‚ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใซไฝฟ็”จใ•ใ‚Œใ‚‹ไธ€้ƒจใฎใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ **ใƒ‡ใƒใƒƒใ‚ฐ็”จใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰**๏ผš * `dynamo.optimize("eager")` - ๆŠฝๅ‡บใ•ใ‚ŒใŸGraphModuleใ‚’ๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใซPyTorchใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏTorchDynamoใฎๅ•้กŒใ‚’ใƒ‡ใƒใƒƒใ‚ฐใ™ใ‚‹้š›ใซ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ * `dynamo.optimize("aot_eager")` - ใ‚ณใƒณใƒ‘ใ‚คใƒฉใƒผใ‚’ไฝฟ็”จใ—ใชใ„AotAutogradใ‚’ไฝฟ็”จใ—ใฆAotAutogradใฎๆŠฝๅ‡บใ•ใ‚ŒใŸใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใŠใ‚ˆใณใƒใƒƒใ‚ฏใƒฏใƒผใƒ‰ใ‚ฐใƒฉใƒ•ใซๅฏพใ—ใฆๅ˜ใซPyTorch eagerใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใƒ‡ใƒใƒƒใ‚ฐใซๅฝน็ซ‹ใกใ€้ซ˜้€ŸๅŒ–ใฏๆœŸๅพ…ใงใใพใ›ใ‚“ใ€‚ **ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณๆŽจ่ซ–ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰**๏ผš * `dynamo.optimize("inductor")` - TorchInductorใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใ‚’ไฝฟ็”จใ—ใ€AotAutogradใŠใ‚ˆใณcudagraphsใ‚’ๆดป็”จใ—ใฆใ‚ณใƒผใƒ‰็”Ÿๆˆใ•ใ‚ŒใŸTritonใ‚ซใƒผใƒใƒซใ‚’ไฝฟ็”จใ—ใพใ™ 
[่ฉณ็ดฐใฏใ“ใกใ‚‰](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747) * `dynamo.optimize("nvfuser")` - nvFuser with TorchScriptใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593) * `dynamo.optimize("aot_nvfuser")` - nvFuser with AotAutogradใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593) * `dynamo.optimize("aot_cudagraphs")` - AotAutogradใ‚’ไฝฟ็”จใ—ใฆcudagraphsใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://github.com/pytorch/torchdynamo/pull/757) **ๆŽจ่ซ–ๅฐ‚็”จใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰**๏ผš * `dynamo.optimize("ofi")` - Torchscriptใฎ`optimize_for_inference`ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html) * `dynamo.optimize("fx2trt")` - Nvidia TensorRTใ‚’ไฝฟ็”จใ—ใŸๆŽจ่ซ–ใฎๆœ€้ฉๅŒ–ใซNvidia TensorRTใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html) * `dynamo.optimize("onnxrt")` - CPU/GPUใงใฎๆŽจ่ซ–ใซONNX Runtimeใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://onnxruntime.ai/) * `dynamo.optimize("ipex")` - CPUใงใฎๆŽจ่ซ–ใซIPEXใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ [่ฉณ็ดฐใฏใ“ใกใ‚‰](https://github.com/intel/intel-extension-for-pytorch) ๐Ÿค— Transformersใ‚’ไฝฟ็”จใ—ใŸ`torch.compile`ใฎไฝฟ็”จไพ‹ใซใคใ„ใฆใฏใ€ใ“ใฎ[ใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://www.philschmid.de/getting-started-pytorch-2-0-transformers)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ## Using ๐Ÿค— Accelerate [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate/index)ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ไธŠ่จ˜ใฎๆ–นๆณ•ใ‚’ไฝฟ็”จใ—ใชใŒใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๅฎŒๅ…จใซๅˆถๅพกใงใใ€ๅŸบๆœฌ็š„ใซใฏ็ด”็ฒ‹ใชPyTorchใงใƒซใƒผใƒ—ใ‚’ๆ›ธใใ“ใจใŒใงใใพใ™ใ€‚ ๆฌกใซใ€[`TrainingArguments`]ๅ†…ใงๆ–นๆณ•ใ‚’็ต„ใฟๅˆใ‚ใ›ใŸๅ ดๅˆใ‚’ๆƒณ ```py training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, fp16=True, **default_args, ) ``` ๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ—ใŸๅฎŒๅ…จใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใฎไพ‹ใฏใ€ใปใ‚“ใฎๆ•ฐ่กŒใฎใ‚ณใƒผใƒ‰ใงใ™๏ผš ```py from accelerate import Accelerator from torch.utils.data.dataloader import DataLoader dataloader = DataLoader(ds, batch_size=training_args.per_device_train_batch_size) if training_args.gradient_checkpointing: model.gradient_checkpointing_enable() accelerator = Accelerator(fp16=training_args.fp16) model, optimizer, dataloader = accelerator.prepare(model, adam_bnb_optim, dataloader) model.train() for step, batch in enumerate(dataloader, start=1): loss = model(**batch).loss loss = loss / training_args.gradient_accumulation_steps accelerator.backward(loss) if step % training_args.gradient_accumulation_steps == 0: optimizer.step() optimizer.zero_grad() ``` ใพใšใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’[`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)ใงใƒฉใƒƒใƒ—ใ—ใพใ™ใ€‚ ๆฌกใซใ€ใƒขใƒ‡ใƒซใฎ[`~PreTrainedModel.gradient_checkpointing_enable`]ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใงๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๆœ‰ๅŠนใซใงใใพใ™ใ€‚ 
[`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator)ใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹้š›ใซใ€ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’[`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare)ใฎๅ‘ผใณๅ‡บใ—ใงๆŒ‡ๅฎšใ—ใ€่ค‡ๆ•ฐใฎGPUใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€`prepare`ใฎ้–“ใซใƒ‡ใƒผใ‚ฟใƒญใƒผใƒ€ใƒผใ‚‚ใƒฏใƒผใ‚ซใƒผ้–“ใงๅˆ†ๆ•ฃใ•ใ‚Œใพใ™ใ€‚ๅŒใ˜[8ใƒ“ใƒƒใƒˆใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถ](#8-bit-adam)ใ‚’ๅ‰ใฎไพ‹ใ‹ใ‚‰ไฝฟ็”จใ—ใพใ™ใ€‚ ๆœ€ๅพŒใซใ€ไธป่ฆใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’่ฟฝๅŠ ใงใใพใ™ใ€‚`backward`ใฎๅ‘ผใณๅ‡บใ—ใฏ๐Ÿค— Accelerateใซใ‚ˆใฃใฆๅ‡ฆ็†ใ•ใ‚Œใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใพใŸใ€ๅ‹พ้…ใฎ่“„็ฉใŒใฉใฎใ‚ˆใ†ใซๆฉŸ่ƒฝใ™ใ‚‹ใ‹ใ‚‚็ขบ่ชใงใใพใ™ใ€‚ๆๅคฑใ‚’ๆญฃ่ฆๅŒ–ใ—ใฆใ„ใ‚‹ใŸใ‚ใ€่“„็ฉใฎๆœ€ๅพŒใซๅนณๅ‡ใ‚’ๅพ—ใฆใ€ๅๅˆ†ใชใ‚นใƒ†ใƒƒใƒ—ใŒใ‚ใ‚‹ใจๆœ€้ฉๅŒ–ใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎๆœ€้ฉๅŒ–ๆŠ€่ก“ใ‚’๐Ÿค— Accelerateใ‚’ไฝฟ็”จใ—ใฆๅฎŸ่ฃ…ใ™ใ‚‹ใฎใฏใ€ใ‚ใšใ‹ใชใ‚ณใƒผใƒ‰่กŒใง่กŒใ†ใ“ใจใŒใงใใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใฎๆŸ”่ปŸๆ€งใŒๅ‘ไธŠใ—ใพใ™ใ€‚ใ™ในใฆใฎๆฉŸ่ƒฝใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Accelerateใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/accelerate/index)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Efficient Software Prebuilds PyTorchใฎ[pipใจcondaใƒ“ใƒซใƒ‰](https://pytorch.org/get-started/locally/#start-locally)ใฏใ€PyTorchใ‚’ๅฎŸ่กŒใ™ใ‚‹ใฎใซๅๅˆ†ใชcudaใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใงไบ‹ๅ‰ใซใƒ“ใƒซใƒ‰ใ•ใ‚Œใฆใ„ใพใ™ใŒใ€cudaๆ‹กๅผตใ‚’ใƒ“ใƒซใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใซใฏไธๅๅˆ†ใงใ™ใ€‚ ๆ™‚ๆŠ˜ใ€่ฟฝๅŠ ใฎๅŠชๅŠ›ใŒๅฟ…่ฆใชๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€ไบ‹ๅ‰ใซใ‚ณใƒณใƒ‘ใ‚คใƒซใ•ใ‚Œใฆใ„ใชใ„`apex`ใชใฉใฎใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใงใ™ใ€‚ใพใŸใ€ใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใง้ฉๅˆ‡ใชcudaใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๆ–นๆณ•ใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒ้›ฃใ—ใ„ๅ ดๅˆใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใ‚ทใƒŠใƒชใ‚ชใซๅฏพๅ‡ฆใ™ใ‚‹ใŸใ‚ใซใ€PyTorchใจNVIDIAใฏcudaๆ‹กๅผตใŒใ™ใงใซไบ‹ๅ‰ใซใƒ“ใƒซใƒ‰ใ•ใ‚Œใฆใ„ใ‚‹NGC dockerใ‚ณใƒณใƒ†ใƒŠใฎๆ–ฐใ—ใ„ใƒใƒผใ‚ธใƒงใƒณใ‚’ใƒชใƒชใƒผใ‚นใ—ใพใ—ใŸใ€‚ใƒ—ใƒญใ‚ฐใƒฉใƒ ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ ใ‘ใงใ€ใใฎใพใพๅฎŸ่กŒใงใใพใ™ใ€‚ ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏใ€PyTorchใฎใ‚ฝใƒผใ‚นใ‚’่ชฟๆ•ดใ—ใŸใ‚Šใ€ๆ–ฐใ—ใ„ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ•ใ‚ŒใŸใƒ“ใƒซใƒ‰ใ‚’ไฝœๆˆใ—ใŸใ‚Šใ—ใŸใ„ๅ ดๅˆใซใ‚‚ๅฝน็ซ‹ใกใพใ™ใ€‚ ๆฌฒใ—ใ„dockerใ‚คใƒกใƒผใ‚ธใƒใƒผใ‚ธใƒงใƒณใ‚’่ฆ‹ใคใ‘ใ‚‹ใซใฏใ€ใพใš[PyTorchใฎใƒชใƒชใƒผใ‚นใƒŽใƒผใƒˆ](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/)ใ‹ใ‚‰ๅง‹ใ‚ใ€ๆœ€ๆ–ฐใฎๆœˆๆฌกใƒชใƒชใƒผใ‚นใฎใ„ใšใ‚Œใ‹ใ‚’้ธๆŠžใ—ใพใ™ใ€‚ๅธŒๆœ›ใฎใƒชใƒชใƒผใ‚นใฎใƒชใƒชใƒผใ‚นใƒŽใƒผใƒˆใซ็งปๅ‹•ใ—ใ€็’ฐๅขƒใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใŒๅฟ…่ฆใชใ‚‚ใฎใจไธ€่‡ดใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™๏ผˆNVIDIA Driverใฎ่ฆไปถใ‚‚ๅซใ‚€๏ผ๏ผ‰ใ€ใใฎๆ–‡ๆ›ธใฎไธ€็•ชไธŠใซ่กŒใใ€ๅฏพๅฟœใ™ใ‚‹NGCใƒšใƒผใ‚ธใซ็งปๅ‹•ใ—ใพใ™ใ€‚ใชใœใ‹ใ‚ใ‹ใ‚‰ใชใ„ๅ ดๅˆใฏใ€[ใ™ในใฆใฎPyTorch NGCใ‚คใƒกใƒผใ‚ธใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚น](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)ใงใ™ใ€‚ ๆฌกใซใ€dockerใ‚คใƒกใƒผใ‚ธใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆๅฑ•้–‹ใ™ใ‚‹ๆ‰‹้ †ใซๅพ“ใ„ใพใ™ใ€‚ ## Mixture of Experts ๆœ€่ฟ‘ใฎ่ซ–ๆ–‡ใซใ‚ˆใ‚Œใฐใ€Transformerใƒขใƒ‡ใƒซใซๅฐ‚้–€ๅฎถใฎๆททๅˆ๏ผˆMoE๏ผ‰ใ‚’็ตฑๅˆใ™ใ‚‹ใ“ใจใงใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ้€ŸๅบฆใŒ4ใ€œ5ๅ€ๅ‘ไธŠใ—ใ€ๆŽจ่ซ–ใ‚‚้ซ˜้€ŸๅŒ–ใ•ใ‚Œใ‚‹ใ“ใจใŒๅ ฑๅ‘Šใ•ใ‚Œใฆใ„ใพใ™ใ€‚ 
ใ‚ˆใ‚Šๅคšใใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒใ‚ˆใ‚Š่‰ฏใ„ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใซใคใชใŒใ‚‹ใ“ใจใŒใ‚ใ‹ใฃใฆใ„ใ‚‹ใŸใ‚ใ€ใ“ใฎๆŠ€่ก“ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ณใ‚นใƒˆใ‚’ๅข—ใ‚„ใ™ใ“ใจใชใใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆ•ฐใ‚’ๆก้•ใ„ใซๅข—ใ‚„ใ™ใ“ใจใ‚’ๅฏ่ƒฝใซใ—ใพใ™ใ€‚ ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใงใฏใ€ไป–ใฎFFNๅฑคใฎไปฃใ‚ใ‚ŠใซMoEๅฑคใŒ้…็ฝฎใ•ใ‚Œใ€ๅ„ๅฐ‚้–€ๅฎถใ‚’ใƒˆใƒผใ‚ฏใƒณใฎไฝ็ฝฎใซๅฟœใ˜ใฆใƒใƒฉใƒณใ‚นใ‚ˆใใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ‚ฒใƒผใƒˆ้–ขๆ•ฐใงๆง‹ๆˆใ•ใ‚Œใพใ™ใ€‚ ![MoE Transformer 2x block](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perf-moe-transformer.png) ๏ผˆๅ‡บๅ…ธ: [GLAM](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)๏ผ‰ ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฎไธปใชๆฌ ็‚นใฏใ€GPUใƒกใƒขใƒชใ‚’ใปใผๆก้•ใ„ใซๅคšใๅฟ…่ฆใจใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใƒกใƒขใƒช่ฆไปถใŒใฏใ‚‹ใ‹ใซๅคงใใ„ใ“ใจใŒใใฎใพใพๅๆ˜ ใ•ใ‚Œใพใ™ใ€‚ใ‚ˆใ‚Š้ซ˜ใ„ใƒกใƒขใƒช่ฆไปถใ‚’ๅ…‹ๆœใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฏใ€ใ•ใพใ–ใพใช่’ธ็•™ใŠใ‚ˆใณใ‚ขใƒ—ใƒญใƒผใƒใŒๆๆกˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใŸใ ใ—ใ€็›ดๆŽฅใฎใƒˆใƒฌใƒผใƒ‰ใ‚ชใƒ•ใŒใ‚ใ‚Šใพใ™ใ€‚ๆ•ฐไบบใฎๅฐ‚้–€ๅฎถใ‚’ไฝฟ็”จใ—ใฆใƒ™ใƒผใ‚นใƒขใƒ‡ใƒซใ‚’2ใ€œ3ๅ€ๅฐใ•ใใ™ใ‚‹ใ“ใจใงใ€5ๅ€ๅฐใ•ใชใƒขใƒ‡ใƒซใซใ—ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ้€Ÿๅบฆใ‚’้ฉๅบฆใซๅ‘ไธŠใ•ใ›ใ€ใƒกใƒขใƒช่ฆไปถใ‚’้ฉๅบฆใซๅข—ใ‚„ใ™ใ“ใจใŒใงใใพใ™ใ€‚ ้–ข้€ฃใ™ใ‚‹ใปใจใ‚“ใฉใฎ่ซ–ๆ–‡ใŠใ‚ˆใณๅฎŸ่ฃ…ใฏTensorflow/TPUใ‚’ไธญๅฟƒใซๆง‹็ฏ‰ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ - [GShard: Conditional Computation and Automatic Shardingใ‚’ๆดป็”จใ—ใŸๅทจๅคงใƒขใƒ‡ใƒซใฎใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐ](https://arxiv.org/abs/2006.16668) - [Switch Transformers: ใ‚ทใƒณใƒ—ใƒซใงๅŠน็އ็š„ใชใ‚นใƒ‘ใƒผใ‚นๆ€งใ‚’ๅ‚™ใˆใŸใƒˆใƒชใƒชใ‚ชใƒณใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒขใƒ‡ใƒซใธใฎใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐ](https://arxiv.org/abs/2101.03961) - [GLaM: Generalist Language Model (GLaM)](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html) PytorchใซใฏDeepSpeedใŒๆง‹็ฏ‰ใ—ใŸใ‚‚ใฎใ‚‚ใ‚ใ‚Šใพใ™: [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https://arxiv.org/abs/2201.05596)ใ€[Mixture of Experts](https://www.deepspeed.ai/tutorials/mixture-of-experts/) - ใƒ–ใƒญใ‚ฐ่จ˜ไบ‹: [1](https://www.microsoft.com/en-us/research/blog/deepspeed-powers-8x-larger-moe-model-training-with-high-performance/)ใ€[2](https://www.microsoft.com/en-us/research/publication/scalable-and-efficient-moe-training-for-multitask-multilingual-models/)ใ€ๅคง่ฆๆจกใชTransformerใƒ™ใƒผใ‚นใฎ่‡ช็„ถ่จ€่ชž็”Ÿๆˆใƒขใƒ‡ใƒซใฎๅ…ทไฝ“็š„ใชๅฑ•้–‹ใซใคใ„ใฆใฏใ€[ใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://www.deepspeed.ai/2021/12/09/deepspeed-moe-nlg.html)ใ€[Megatron-Deepspeedใƒ–ใƒฉใƒณใƒ](https://github.com/microsoft/Megatron-DeepSpeed/tree/moe-training)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใจFlash Attentionใฎไฝฟ็”จ PyTorch 2.0ใงใฏใ€ใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎ[`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)๏ผˆSDPA๏ผ‰ใŒใƒชใƒชใƒผใ‚นใ•ใ‚Œใ€[ใƒกใƒขใƒชๅŠน็އใฎ้ซ˜ใ„ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ](https://arxiv.org/abs/2112.05682)ใ‚„[ใƒ•ใƒฉใƒƒใ‚ทใƒฅใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ](https://arxiv.org/abs/2205.14135)ใชใฉใฎ่žๅˆใ•ใ‚ŒใŸGPUใ‚ซใƒผใƒใƒซใฎไฝฟ็”จใ‚’ๅฏ่ƒฝใซใ—ใพใ™ใ€‚ [`optimum`](https://github.com/huggingface/optimum)ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใŸๅพŒใ€้–ข้€ฃใ™ใ‚‹ๅ†…้ƒจใƒขใ‚ธใƒฅใƒผใƒซใ‚’็ฝฎใๆ›ใˆใฆใ€PyTorchใฎใƒใ‚คใƒ†ใ‚ฃใƒ–ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซ่จญๅฎšใ—ใพใ™๏ผš ```python model = model.to_bettertransformer() ``` 
ๅค‰ๆ›ๅพŒใ€้€šๅธธ้€šใ‚Šใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใฆใใ ใ•ใ„ใ€‚ <Tip warning={true}> PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎ`scaled_dot_product_attention`ๆผ”็ฎ—ๅญใฏใ€`attention_mask`ใŒๆไพ›ใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใซใฎใฟFlash Attentionใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใงใใพใ™ใ€‚ ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒขใƒผใƒ‰ใงBetterTransformer็ตฑๅˆใฏใƒžใ‚นใ‚ฏใ‚ตใƒใƒผใƒˆใ‚’ๅ‰Š้™คใ—ใ€ใƒใƒƒใƒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒžใ‚นใ‚ฏใŒๅฟ…่ฆใชใ„ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใ—ใ‹ไฝฟ็”จใงใใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€ไพ‹ใˆใฐใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ‚„ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎใ‚ˆใ†ใชใ€ใƒใƒƒใƒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒžใ‚นใ‚ฏใŒไธ่ฆใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅ ดๅˆใซ่ฉฒๅฝ“ใ—ใพใ™ใ€‚BetterTransformerใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒžใ‚นใ‚ฏใŒๅฟ…่ฆใชใ‚ฟใ‚นใ‚ฏใซๅฏพใ™ใ‚‹ใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ดใซใฏ้ฉใ—ใฆใ„ใพใ›ใ‚“ใ€‚ </Tip> SDPAใ‚’ไฝฟ็”จใ—ใŸใ‚ขใ‚ฏใ‚ปใƒฉใƒฌใƒผใ‚ทใƒงใƒณใจใƒกใƒขใƒชใฎ็ฏ€็ด„ใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใฏใ€ใ“ใฎ[ใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://pytorch.org/blog/out-of-the-box-acceleration/)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/community.md
<!--โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Community ใ“ใฎใƒšใƒผใ‚ธใฏใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใซใ‚ˆใฃใฆ้–‹็™บใ•ใ‚ŒใŸ๐Ÿค— Transformersใซ้–ขใ™ใ‚‹ใƒชใ‚ฝใƒผใ‚นใ‚’ใพใจใ‚ใŸใ‚‚ใฎใงใ™ใ€‚ ## Community resources: | ใƒชใ‚ฝใƒผใ‚น | ่ชฌๆ˜Ž | ไฝœ่€… | |:----------|:-------------|------:| | [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | [Transformers Docs Glossary](glossary)ใซๅŸบใฅใ„ใŸใƒ•ใƒฉใƒƒใ‚ทใƒฅใ‚ซใƒผใƒ‰ใ‚ปใƒƒใƒˆใงใ™ใ€‚ใ“ใฎใ‚ปใƒƒใƒˆใฏใ€้•ทๆœŸใฎ็Ÿฅ่ญ˜ๅฎš็€ใ‚’็‰นใซ่€ƒๆ…ฎใ—ใฆ่จญ่จˆใ•ใ‚ŒใŸใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใฎใ‚ฏใƒญใ‚นใƒ—ใƒฉใƒƒใƒˆใƒ•ใ‚ฉใƒผใƒ ใ‚ขใƒ—ใƒชใงใ‚ใ‚‹[Anki](https://apps.ankiweb.net/)ใ‚’ไฝฟ็”จใ—ใฆ็ฐกๅ˜ใซๅญฆ็ฟ’/ๅพฉ็ฟ’ใงใใ‚‹ๅฝขๅผใซใชใฃใฆใ„ใพใ™ใ€‚[ใƒ•ใƒฉใƒƒใ‚ทใƒฅใ‚ซใƒผใƒ‰ใฎไฝฟ็”จๆ–นๆณ•ใซ้–ขใ™ใ‚‹็ดนไป‹ใƒ“ใƒ‡ใ‚ชใฏใ“ใกใ‚‰](https://www.youtube.com/watch?v=Dji_h7PILrw)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ | [Darigov Research](https://www.darigovresearch.com/) | ## Community notebooks: | ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏ | ่ชฌๆ˜Ž | ่‘—่€… | | |:----------|:-------------|:-------------|------:| | [ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎTransformerใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆๆญŒ่ฉžใ‚’็”Ÿๆˆ](https://github.com/AlekseyKorshuk/huggingartists) | GPT-2ใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆใŠๆฐ—ใซๅ…ฅใ‚Šใฎใ‚ขใƒผใƒ†ใ‚ฃใ‚นใƒˆใฎใ‚นใ‚ฟใ‚คใƒซใงๆญŒ่ฉžใ‚’็”Ÿๆˆใ™ใ‚‹ๆ–นๆณ• | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Tensorflow 2ใงT5ใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/snapthat/TF-T5-text-to-text) | Tensorflow 2ใ‚’ไฝฟ็”จใ—ใฆไปปๆ„ใฎใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆT5ใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใ€‚ใ“ใฎใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใฏTensorflow 2ใ‚’ไฝฟ็”จใ—ใฆSQUADใงๅฎŸ่ฃ…ใ•ใ‚ŒใŸ่ณชๅ•ใจๅ›ž็ญ”ใ‚ฟใ‚นใ‚ฏใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ | [Muhammad Harris](https://github.com/HarrisDePerceptron) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [TPUใงT5ใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | TransformersใจNlpใ‚’ไฝฟ็”จใ—ใฆSQUADใงT5ใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Suraj Patil](https://github.com/patil-suraj) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [ๅˆ†้กžใจๅคš่‚ข้ธๆŠžใฎใŸใ‚ใซT5ใ‚’ๅพฎ่ชฟๆ•ด](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | PyTorch Lightningใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆๅฏพใƒ†ใ‚ญใ‚นใƒˆๅฝขๅผใงT5ใ‚’ๅˆ†้กžใจๅคš่‚ข้ธๆŠžใ‚ฟใ‚นใ‚ฏใซๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ• | [Suraj Patil](https://github.com/patil-suraj) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [ๆ–ฐใ—ใ„ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใจ่จ€่ชžใงDialoGPTใ‚’ๅพฎ่ชฟๆ•ด](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | DialoGPTใƒขใƒ‡ใƒซใ‚’ๆ–ฐใ—ใ„ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใ‚ชใƒผใƒ—ใƒณใƒ€ใ‚คใ‚ขใƒญใ‚ฐไผš่ฉฑ็”จใฎๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ• | [Nathan Cooper](https://github.com/ncoop57) | 
[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Reformerใ‚’ไฝฟ็”จใ—ใŸ้•ทใ„ใ‚ทใƒผใ‚ฑใƒณใ‚นใƒขใƒ‡ใƒชใƒณใ‚ฐ](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Reformerใ‚’ไฝฟ็”จใ—ใฆ500,000ใƒˆใƒผใ‚ฏใƒณใพใงใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [่ฆ็ด„ใฎใŸใ‚ใซBARTใ‚’ๅพฎ่ชฟๆ•ด](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | Blurrใ‚’ไฝฟ็”จใ—ใฆ่ฆ็ด„ใฎใŸใ‚ใซBARTใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ• | [Wayde Gilliam](https://ohmeow.com/) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | [ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎTransformerใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆ่ชฐใ‹ใฎใƒ„ใ‚คใƒผใƒˆใ‚’็”Ÿๆˆ](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | GPT-2ใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆใŠๆฐ—ใซๅ…ฅใ‚ŠใฎTwitterใ‚ขใ‚ซใ‚ฆใƒณใƒˆใฎใ‚นใ‚ฟใ‚คใƒซใงใƒ„ใ‚คใƒผใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ๆ–นๆณ• | [Boris Dayma](https://github.com/borisdayma) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [๐Ÿค— Hugging Faceใƒขใƒ‡ใƒซใ‚’Weights & Biasesใงๆœ€้ฉๅŒ–](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Hugging FaceใจWeights & Biasesใฎ็ตฑๅˆใ‚’็คบใ™ๅฎŒๅ…จใชใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซ | [Boris Dayma](https://github.com/borisdayma) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformerใฎไบ‹ๅ‰ๅญฆ็ฟ’](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | ๆ—ขๅญ˜ใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใ€Œ้•ทใ„ใ€ใƒใƒผใ‚ธใƒงใƒณใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ๆ–นๆณ• | [Iz Beltagy](https://beltagy.net) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [QAใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซLongformerใ‚’ๅพฎ่ชฟๆ•ด](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | QAใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซLongformerใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ• | [Suraj Patil](https://github.com/patil-suraj) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [๐Ÿค—nlpใ‚’ไฝฟ็”จใ—ใŸใƒขใƒ‡ใƒซใฎ่ฉ•ไพก](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | `nlp`ใ‚’ไฝฟ็”จใ—ใฆTriviaQAใงLongformerใ‚’่ฉ•ไพกใ™ใ‚‹ๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | 
[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [ๆ„Ÿๆƒ…ใ‚นใƒ‘ใƒณๆŠฝๅ‡บใฎใŸใ‚ใซT5ใ‚’ๅพฎ่ชฟๆ•ด](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | PyTorch Lightningใ‚’ไฝฟ็”จใ—ใฆๆ„Ÿๆƒ…ใ‚นใƒ‘ใƒณๆŠฝๅ‡บใฎใŸใ‚ใซT5ใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ• | [Lorenzo Ampil](https://github.com/enzoampil) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [DistilBertใ‚’ใƒžใƒซใƒใ‚ฏใƒฉใ‚นๅˆ†้กžใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | PyTorchใ‚’ไฝฟ็”จใ—ใฆDistilBertใ‚’ใƒžใƒซใƒใ‚ฏใƒฉใ‚นๅˆ†้กžใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[BERTใ‚’ใƒžใƒซใƒใƒฉใƒ™ใƒซๅˆ†้กžใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|PyTorchใ‚’ไฝฟ็”จใ—ใฆBERTใ‚’ใƒžใƒซใƒใƒฉใƒ™ใƒซๅˆ†้กžใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[T5ใ‚’่ฆ็ด„ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|PyTorchใ‚’ไฝฟ็”จใ—ใฆT5ใ‚’่ฆ็ด„ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใ€WandBใงๅฎŸ้จ“ใ‚’ใƒˆใƒฉใƒƒใ‚ญใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| |[ใƒ€ใ‚คใƒŠใƒŸใƒƒใ‚ฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐ/ใƒใ‚ฑใƒƒใƒ†ใ‚ฃใƒณใ‚ฐใ‚’ไฝฟ็”จใ—ใฆTransformersใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’้ซ˜้€ŸๅŒ–](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)|ใƒ€ใ‚คใƒŠใƒŸใƒƒใ‚ฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐ/ใƒใ‚ฑใƒƒใƒ†ใ‚ฃใƒณใ‚ฐใ‚’ไฝฟ็”จใ—ใฆใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’2ๅ€้ซ˜้€ŸๅŒ–ใ™ใ‚‹ๆ–นๆณ•|[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[ใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎใŸใ‚ใฎReformerใฎไบ‹ๅ‰ๅญฆ็ฟ’](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)|ๅŒๆ–นๅ‘ใ‚ปใƒซใƒ•ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒฌใ‚คใƒคใƒผใ‚’ๅ‚™ใˆใŸReformerใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ–นๆณ•|[Patrick von Platen](https://github.com/patrickvonplaten) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| 
|[Sci-BERTใ‚’ๆ‹กๅผตใ—ใฆใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)|AllenAIใฎCORDใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎSciBERTใƒขใƒ‡ใƒซใฎ่ชžๅฝ™ใ‚’ๆ‹กๅผตใ—ใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณๅŒ–ใ™ใ‚‹ๆ–นๆณ•|[Tanmay Thakur](https://github.com/lordtt13) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Trainer APIใ‚’ไฝฟ็”จใ—ใฆBlenderBotSmallใ‚’่ฆ็ด„ใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)|ใ‚ซใ‚นใ‚ฟใƒ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงBlenderBotSmallใ‚’่ฆ็ด„ใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใ€Trainer APIใ‚’ไฝฟ็”จ|[Tanmay Thakur](https://github.com/lordtt13) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Electraใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใฆCaptum Integrated Gradientsใง่งฃ้‡ˆ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) |Electraใ‚’ๆ„Ÿๆƒ…ๅˆ†ๆžใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใ€Captum Integrated Gradientsใงไบˆๆธฌใ‚’่งฃ้‡ˆใ™ใ‚‹ๆ–นๆณ•|[Eliza Szczechla](https://elsanns.github.io) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[Trainerใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆ้ž่‹ฑ่ชžใฎGPT-2ใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) |Trainerใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆ้ž่‹ฑ่ชžใฎGPT-2ใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Philipp Schmid](https://www.philschmid.de) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[DistilBERTใƒขใƒ‡ใƒซใ‚’ใƒžใƒซใƒใƒฉใƒ™ใƒซๅˆ†้กžใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) |DistilBERTใƒขใƒ‡ใƒซใ‚’ใƒžใƒซใƒใƒฉใƒ™ใƒซๅˆ†้กžใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Dhaval Taunk](https://github.com/DhavalTaunk08) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[ALBERTใ‚’ๆ–‡ใƒšใ‚ขๅˆ†้กžใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) |ALBERTใƒขใƒ‡ใƒซใพใŸใฏไป–ใฎBERTใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใ‚’ๆ–‡ใƒšใ‚ขๅˆ†้กžใ‚ฟใ‚นใ‚ฏใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Nadir El Manouzi](https://github.com/NadirEM) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[RoBERTaใ‚’ๆ„Ÿๆƒ…ๅˆ†ๆžใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) 
|RoBERTaใƒขใƒ‡ใƒซใ‚’ๆ„Ÿๆƒ…ๅˆ†ๆžใฎใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•|[Dhaval Taunk](https://github.com/DhavalTaunk08) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[่ณชๅ•็”Ÿๆˆใƒขใƒ‡ใƒซใฎ่ฉ•ไพก](https://github.com/flexudy-pipe/qugeev) | seq2seqใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆ็”Ÿๆˆใ•ใ‚ŒใŸ่ณชๅ•ใฎๅ›ž็ญ”ใฎๆญฃ็ขบใ•ใ‚’่ฉ•ไพกใ™ใ‚‹ๆ–นๆณ• | [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[DistilBERTใจTensorflowใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅˆ†้กž](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | TensorFlowใงใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใฎใŸใ‚ใซDistilBERTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[CNN/Dailymailใงใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผ่ฆ็ด„ใซBERTใ‚’ๆดป็”จ](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | *bert-base-uncased* ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆCNN/Dailymailใฎ่ฆ็ด„ใฎใŸใ‚ใซ *EncoderDecoderModel* ใ‚’ใ‚ฆใ‚ฉใƒผใƒ ใ‚นใ‚ฟใƒผใƒˆใ™ใ‚‹ๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[BBC XSumใงใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผ่ฆ็ด„ใซRoBERTaใ‚’ๆดป็”จ](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | *roberta-base* ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆBBC/XSumใฎ่ฆ็ด„ใฎใŸใ‚ใฎๅ…ฑๆœ‰ *EncoderDecoderModel* ใ‚’ใ‚ฆใ‚ฉใƒผใƒ ใ‚นใ‚ฟใƒผใƒˆใ™ใ‚‹ๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[TAPASใ‚’ใ‚ทใƒผใ‚ฑใƒณใ‚ทใƒฃใƒซ่ณชๅ•ๅฟœ็ญ”๏ผˆSQA๏ผ‰ใงใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | ใ‚ทใƒผใ‚ฑใƒณใ‚ทใƒฃใƒซ่ณชๅ•ๅฟœ็ญ”๏ผˆSQA๏ผ‰ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง *tapas-base* ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆ *TapasForQuestionAnswering* ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[TabFactใงTAPASใ‚’่ฉ•ไพก](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | *tapas-base-finetuned-tabfact* ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ *TapasForSequenceClassification* ใ‚’่ฉ•ไพกใ™ใ‚‹ๆ–นๆณ•ใ€๐Ÿค— datasets ใจ ๐Ÿค— transformers ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จ | [Niels Rogge](https://github.com/nielsrogge) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[็ฟป่จณใฎใŸใ‚ใฎmBARTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Seq2SeqTrainerใ‚’ไฝฟ็”จใ—ใฆHindiใ‹ใ‚‰Englishใธใฎ็ฟป่จณใฎใŸใ‚ใซmBARTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[FUNSD๏ผˆใƒ•ใ‚ฉใƒผใƒ ็†่งฃใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ๏ผ‰ใงLayoutLMใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | ใ‚นใ‚ญใƒฃใƒณใ•ใ‚ŒใŸใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‹ใ‚‰ใฎๆƒ…ๅ ฑๆŠฝๅ‡บใฎใŸใ‚ใซFUNSDใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง *LayoutLMForTokenClassification* ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| | [DistilGPT2ใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใจใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆ](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | DistilGPT2ใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใจใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆๆ–นๆณ• | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| | [ๆœ€ๅคง8Kใƒˆใƒผใ‚ฏใƒณใงใฎLEDใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | ใƒญใƒณใ‚ฐใƒฌใƒณใ‚ธ่ฆ็ด„ใฎใŸใ‚ใฎpubmedใงLEDใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| | [ArxivใงใฎLEDใฎ่ฉ•ไพก](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | ใƒญใƒณใ‚ฐใƒฌใƒณใ‚ธ่ฆ็ด„ใฎใŸใ‚ใฎLEDใฎๅŠนๆžœ็š„ใช่ฉ•ไพกๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| | [RVL-CDIP๏ผˆๆ–‡ๆ›ธ็”ปๅƒๅˆ†้กžใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ๏ผ‰ใงใฎLayoutLMใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | ใ‚นใ‚ญใƒฃใƒณใ•ใ‚ŒใŸๆ–‡ๆ›ธใฎๅˆ†้กžใฎใŸใ‚ใฎRVL-CDIPใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*LayoutLMForSequenceClassification*ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/nielsrogge) | 
[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| | [Wav2Vec2 CTCใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใจGPT2ใฎ่ชฟๆ•ด](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | ่จ€่ชžใƒขใƒ‡ใƒซใฎ่ชฟๆ•ดใ‚’ไผดใ†CTCใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒ‡ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆ–นๆณ• | [Eric Lam](https://github.com/voidful) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| | [Trainerใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใŸ2่จ€่ชžใฎ่ฆ็ด„็”จใซBARTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | ใƒˆใƒฌใƒผใƒŠใƒผใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ—ใฆ2ใคใฎ่จ€่ชžใงใฎ่ฆ็ด„็”จใซBARTใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Eliza Szczechla](https://github.com/elsanns) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| | [PubMedใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงBigBirdใฎ่ฉ•ไพก](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Trivia QAใฎ้•ทใ„ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ่ณชๅ•ๅฟœ็ญ”ใงBigBirdใฎ่ฉ•ไพกๆ–นๆณ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Wav2Vec2ใ‚’ไฝฟ็”จใ—ใฆใƒ“ใƒ‡ใ‚ชใฎๅญ—ๅน•ใ‚’ไฝœๆˆใ™ใ‚‹](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Wav2Vecใงใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‚’่ปข่จ˜ใ—ใฆไปปๆ„ใฎใƒ“ใƒ‡ใ‚ชใ‹ใ‚‰YouTubeใฎๅญ—ๅน•ใ‚’ไฝœๆˆใ™ใ‚‹ๆ–นๆณ• | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [PyTorch Lightningใ‚’ไฝฟ็”จใ—ใŸCIFAR-10ใงใฎVision Transformerใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | HuggingFace Transformersใ€Datasetsใ€ใŠใ‚ˆใณPyTorch Lightningใ‚’ไฝฟ็”จใ—ใฆCIFAR-10ใงVision Transformer๏ผˆViT๏ผ‰ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/nielsrogge) |[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [๐Ÿค— Trainerใ‚’ไฝฟ็”จใ—ใŸCIFAR-10ใงใฎVision Transformerใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | HuggingFace Transformersใ€Datasetsใ€ใŠใ‚ˆใณ๐Ÿค— Trainerใ‚’ไฝฟ็”จใ—ใฆCIFAR-10ใงVision Transformer๏ผˆViT๏ผ‰ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/nielsrogge) 
|[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Open Entityใ€ใ‚จใƒณใƒ†ใ‚ฃใƒ†ใ‚ฃใ‚ฟใ‚คใƒ”ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงLUKEใฎ่ฉ•ไพก](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Open Entityใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*LukeForEntityClassification*ใฎ่ฉ•ไพกๆ–นๆณ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [TACREDใ€้–ขไฟ‚ๆŠฝๅ‡บใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงLUKEใฎ่ฉ•ไพก](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | TACREDใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*LukeForEntityPairClassification*ใฎ่ฉ•ไพกๆ–นๆณ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [CoNLL-2003ใ€้‡่ฆใชNERใƒ™ใƒณใƒใƒžใƒผใ‚ฏใงLUKEใฎ่ฉ•ไพก](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | CoNLL-2003ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*LukeForEntitySpanClassification*ใฎ่ฉ•ไพกๆ–นๆณ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [PubMedใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงBigBird-Pegasusใฎ่ฉ•ไพก](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | PubMedใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*BigBirdPegasusForConditionalGeneration*ใฎ่ฉ•ไพกๆ–นๆณ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Wav2Vec2ใ‚’ไฝฟ็”จใ—ใŸใ‚นใƒ”ใƒผใƒใ‚จใƒขใƒผใ‚ทใƒงใƒณๅˆ†้กž](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | MEGAใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใฎๆ„Ÿๆƒ…ๅˆ†้กžใฎใŸใ‚ใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟWav2Vec2ใƒขใƒ‡ใƒซใฎๅˆฉ็”จๆ–นๆณ• | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [DETRใ‚’ไฝฟ็”จใ—ใฆ็”ปๅƒๅ†…ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๆคœๅ‡บใ™ใ‚‹](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟ*DetrForObjectDetection*ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆ็”ปๅƒๅ†…ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ๆคœๅ‡บใ—ใ€ๆณจๆ„ใ‚’ๅฏ่ฆ–ๅŒ–ใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/NielsRogge) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [ใ‚ซใ‚นใ‚ฟใƒ 
ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงDETRใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | ใ‚ซใ‚นใ‚ฟใƒ ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆๆคœๅ‡บใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใง*DetrForObjectDetection*ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Niels Rogge](https://github.com/NielsRogge) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Named Entity RecognitionใฎใŸใ‚ใซT5ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Named Entity Recognitionใ‚ฟใ‚นใ‚ฏใงT5ใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ• | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Colabใง้–‹ใ](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/training.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fine-tune a pretrained model [[open-in-colab]] ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€่จˆ็ฎ—ใ‚ณใ‚นใƒˆใ‚’ๅ‰Šๆธ›ใ—ใ€็‚ญ็ด ๆŽ’ๅ‡บ้‡ใ‚’ๆธ›ๅฐ‘ใ•ใ›ใ€ใ‚ผใƒญใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅฟ…่ฆใชใ—ใซๆœ€ๆ–ฐใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใ‚‹ๅˆฉ็‚นใŒใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Transformersใฏใ€ใ•ใพใ–ใพใชใ‚ฟใ‚นใ‚ฏใซๅฏพๅฟœใ—ใŸๆ•ฐๅƒใ‚‚ใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใธใฎใ‚ขใ‚ฏใ‚ปใ‚นใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€ใใ‚Œใ‚’็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซๅˆใ‚ใ›ใŸใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใ€้žๅธธใซๅผทๅŠ›ใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆŠ€่ก“ใงใ™ใ€‚ ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’้ธๆŠžใ—ใŸใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™๏ผš * ๐Ÿค— Transformersใฎ[`Trainer`]ใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ€‚ * TensorFlowใจKerasใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ€‚ * ใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎPyTorchใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ€‚ <a id='data-processing'></a> ## Prepare a dataset <Youtube id="_BZearw7f0w"/> ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ‰ใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็”จใซๆบ–ๅ‚™ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๅ‰ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใฎๅ‡ฆ็†ๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใพใ—ใŸใŒใ€ใ“ใ‚Œใ‹ใ‚‰ใฏใใ‚Œใ‚‰ใฎใ‚นใ‚ญใƒซใ‚’ๆดปใ‹ใ™ๆฉŸไผšใŒใ‚ใ‚Šใพใ™๏ผ ใพใšใ€[Yelp Reviews](https://huggingface.co/datasets/yelp_review_full)ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’่ชญใฟ่พผใ‚“ใงใฟใพใ—ใ‚‡ใ†๏ผš ```python >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. 
She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ‡ฆ็†ใ—ใ€ๅฏๅค‰ใฎใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใจๅˆ‡ใ‚Šๆจใฆๆˆฆ็•ฅใ‚’ๅซใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใ‚’ใ”ๅญ˜็Ÿฅใฎ้€šใ‚Šใ€ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’1ใคใฎใ‚นใƒ†ใƒƒใƒ—ใงๅ‡ฆ็†ใ™ใ‚‹ใซใฏใ€๐Ÿค— Datasets ใฎ [`map`](https://huggingface.co/docs/datasets/process#map) ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใซๅ‰ๅ‡ฆ็†้–ขๆ•ฐใ‚’้ฉ็”จใ—ใพใ™๏ผš ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` ใŠๅฅฝใฟใงใ€ๅฎŸ่กŒๆ™‚้–“ใ‚’็Ÿญ็ธฎใ™ใ‚‹ใŸใ‚ใซใƒ•ใƒซใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅฐใ•ใชใ‚ตใƒ–ใ‚ปใƒƒใƒˆใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Train ใ“ใฎๆ™‚็‚นใงใ€ไฝฟ็”จใ—ใŸใ„ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใซๅฏพๅฟœใ™ใ‚‹ใ‚ปใ‚ฏใ‚ทใƒงใƒณใซๅพ“ใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๅณๅดใฎใ‚ตใ‚คใƒ‰ใƒใƒผใฎใƒชใƒณใ‚ฏใ‚’ไฝฟ็”จใ—ใฆใ€ใ‚ธใƒฃใƒณใƒ—ใ—ใŸใ„ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใซ็งปๅ‹•ใงใใพใ™ใ€‚ ใใ—ใฆใ€็‰นๅฎšใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฎใ™ในใฆใฎใ‚ณใƒณใƒ†ใƒณใƒ„ใ‚’้ž่กจ็คบใซใ—ใŸใ„ๅ ดๅˆใฏใ€ใใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฎใƒ–ใƒญใƒƒใ‚ฏๅณไธŠใซใ‚ใ‚‹ใƒœใ‚ฟใƒณใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„๏ผ <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> ## Train with Pytorch Trainer ๐Ÿค— Transformersใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๆœ€้ฉๅŒ–ใ—ใŸ[`Trainer`]ใ‚ฏใƒฉใ‚นใ‚’ๆไพ›ใ—ใ€็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๆ‰‹ๅ‹•ใง่จ˜่ฟฐใ›ใšใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้–‹ๅง‹ใ—ใ‚„ใ™ใใ—ใฆใ„ใพใ™ใ€‚ [`Trainer`] APIใฏใ€ใƒญใ‚ฐ่จ˜้Œฒใ€ๅ‹พ้…็ดฏ็ฉใ€ๆททๅˆ็ฒพๅบฆใชใฉใ€ใ•ใพใ–ใพใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ชใƒ—ใ‚ทใƒงใƒณใจๆฉŸ่ƒฝใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ ใพใšใ€ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใ€ไบˆๆƒณใ•ใ‚Œใ‚‹ใƒฉใƒ™ใƒซใฎๆ•ฐใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields)ใ‹ใ‚‰ใ€5ใคใฎใƒฉใƒ™ใƒซใŒใ‚ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™๏ผš ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` <Tip> ไธ€้ƒจใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใŒไฝฟ็”จใ•ใ‚Œใšใ€ไธ€้ƒจใฎ้‡ใฟใŒใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸ่ญฆๅ‘ŠใŒ่กจ็คบใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ๅฟƒ้…ใ—ใชใ„ใงใใ ใ•ใ„ใ€ใ“ใ‚ŒใฏๅฎŒๅ…จใซๆญฃๅธธใงใ™๏ผ BERTใƒขใƒ‡ใƒซใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎใƒ˜ใƒƒใƒ‰ใฏ็ ดๆฃ„ใ•ใ‚Œใ€ใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸๅˆ†้กžใƒ˜ใƒƒใƒ‰ใง็ฝฎใๆ›ใˆใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใฎๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใƒ˜ใƒƒใƒ‰ใ‚’ใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กžใ‚ฟใ‚นใ‚ฏใงใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ—ใ€ไบ‹ๅ‰ๅญฆ็ฟ’ใƒขใƒ‡ใƒซใฎ็Ÿฅ่ญ˜ใ‚’ใใ‚Œใซ่ปข้€ใ—ใพใ™ใ€‚ </Tip> ### Training 
Hyperparameters

ๆฌกใซใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใƒˆใ™ใ‚‹ใŸใ‚ใฎใ™ในใฆใฎใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใจใ€่ชฟๆ•ดใงใใ‚‹ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๅซใ‚€[`TrainingArguments`]ใ‚ฏใƒฉใ‚นใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ[ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟ](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)ใงๅง‹ใ‚ใ‚‹ใ“ใจใŒใงใใพใ™ใŒใ€ๆœ€้ฉใช่จญๅฎšใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซ่‡ช็”ฑใซๅฎŸ้จ“ใ—ใฆใ‚‚ๆง‹ใ„ใพใ›ใ‚“ใ€‚

ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฟๅญ˜ใ™ใ‚‹ๅ ดๆ‰€ใ‚’ๆŒ‡ๅฎšใ—ใพใ™๏ผš

```python
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(output_dir="test_trainer")
```

### Evaluate

[`Trainer`]ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซ่‡ชๅ‹•็š„ใซใƒขใƒ‡ใƒซใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’่ฉ•ไพกใ—ใพใ›ใ‚“ใ€‚ใƒกใƒˆใƒชใ‚ฏใ‚นใ‚’่จˆ็ฎ—ใ—ใฆๅ ฑๅ‘Šใ™ใ‚‹้–ขๆ•ฐใ‚’[`Trainer`]ใซๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚[๐Ÿค— Evaluate](https://huggingface.co/docs/evaluate/index)ใƒฉใ‚คใƒ–ใƒฉใƒชใงใฏใ€[`evaluate.load`]้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆ่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใ‚‹ใ‚ทใƒณใƒ—ใƒซใช[`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy)้–ขๆ•ฐใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ™๏ผˆ่ฉณ็ดฐใซใคใ„ใฆใฏ[ใ“ใกใ‚‰ใฎใ‚ฏใ‚คใƒƒใ‚ฏใƒ„ใ‚ขใƒผ](https://huggingface.co/docs/evaluate/a_quick_tour)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰๏ผš

```python
>>> import numpy as np
>>> import evaluate

>>> metric = evaluate.load("accuracy")
```

`metric`ใฎ`~evaluate.compute`ใ‚’ๅ‘ผใณๅ‡บใ—ใฆใ€ไบˆๆธฌใฎๆญฃ็ขบๅบฆใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚`compute`ใซไบˆๆธฌใ‚’ๆธกใ™ๅ‰ใซใ€ใƒญใ‚ธใƒƒใƒˆใ‚’ไบˆๆธฌใซๅค‰ๆ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผˆใ™ในใฆใฎ๐Ÿค— Transformersใƒขใƒ‡ใƒซใฏใƒญใ‚ธใƒƒใƒˆใ‚’่ฟ”ใ™ใ“ใจใ‚’่ฆšใˆใฆใŠใ„ใฆใใ ใ•ใ„๏ผ‰๏ผš

```py
>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     predictions = np.argmax(logits, axis=-1)
...     return metric.compute(predictions=predictions, references=labels)
```

่ฉ•ไพกใƒกใƒˆใƒชใ‚ฏใ‚นใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐไธญใซ็›ฃ่ฆ–ใ—ใŸใ„ๅ ดๅˆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅผ•ๆ•ฐใง`evaluation_strategy`ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒ‡ๅฎšใ—ใฆใ€ๅ„ใ‚จใƒใƒƒใ‚ฏใฎ็ต‚ไบ†ๆ™‚ใซ่ฉ•ไพกใƒกใƒˆใƒชใ‚ฏใ‚นใ‚’ๅ ฑๅ‘Šใ—ใพใ™๏ผš

```python
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```

### Trainer

ใƒขใƒ‡ใƒซใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅผ•ๆ•ฐใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณใƒ†ใ‚นใƒˆใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ€่ฉ•ไพก้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆ[`Trainer`]ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ไฝœๆˆใ—ใพใ™๏ผš

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

ใใฎๅพŒใ€[`~transformers.Trainer.train`]ใ‚’ๅ‘ผใณๅ‡บใ—ใฆใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใพใ™๏ผš

```python
>>> trainer.train()
```

</pt>
<tf>
<a id='keras'></a>

<Youtube id="rnTGBy2ax1c"/>

## Kerasใ‚’ไฝฟ็”จใ—ใฆTensorFlowใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹

Keras APIใ‚’ไฝฟ็”จใ—ใฆ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’TensorFlowใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผ

### Loading Data for Keras

๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’Keras APIใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’KerasใŒ็†่งฃใงใใ‚‹ๅฝขๅผใซๅค‰ๆ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒๅฐใ•ใ„ๅ ดๅˆใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใ‚’NumPy้…ๅˆ—ใซๅค‰ๆ›ใ—ใฆKerasใซๆธกใ™ใ“ใจใŒใงใใพใ™ใ€‚่ค‡้›‘ใชใ“ใจใ‚’ใ™ใ‚‹ๅ‰ใซใ€ใพใšใใ‚Œใ‚’่ฉฆใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚

ใพใšใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’่ชญใฟ่พผใฟใพใ™ใ€‚ๅ˜็ด”ใชใƒใ‚คใƒŠใƒชใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใ‚ฟใ‚นใ‚ฏใงใ‚ใ‚‹[GLUE Benchmark](https://huggingface.co/datasets/glue)ใฎCoLAใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใ€ไปŠใฎใจใ“ใ‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅˆ†ๅ‰ฒใฎใฟใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚

```py
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
dataset = dataset["train"]  # ไปŠใฎใจใ“ใ‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅˆ†ๅ‰ฒใฎใฟใ‚’ไฝฟ็”จใ—ใพใ™
```

ๆฌกใซใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒญใƒผใƒ‰ใ—ใ€ใƒ‡ใƒผใ‚ฟใ‚’NumPy้…ๅˆ—ใจใ—ใฆใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™ใ€‚ใƒฉใƒ™ใƒซใฏๆ—ขใซ`0`ใจ`1`ใฎใƒชใ‚นใƒˆใงใ‚ใ‚‹ใŸใ‚ใ€ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ›ใšใซ็›ดๆŽฅNumPy้…ๅˆ—ใซๅค‰ๆ›ใงใใพใ™๏ผ

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏBatchEncodingใ‚’่ฟ”ใ—ใพใ™ใŒใ€ใใ‚Œใ‚’Keras็”จใซ่พžๆ›ธใซๅค‰ๆ›ใ—ใพใ™
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # ใƒฉใƒ™ใƒซใฏใ™ใงใซ0ใจ1ใฎ้…ๅˆ—ใงใ™
```

ๆœ€ๅพŒใซใ€ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใ€[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)ใจ[`fit`](https://keras.io/api/models/model_training_apis/#fit-method)ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ๆณจๆ„็‚นใจใ—ใฆใ€Transformersใƒขใƒ‡ใƒซใฏใ™ในใฆใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใ‚ฟใ‚นใ‚ฏใซ้–ข้€ฃใ—ใŸๆๅคฑ้–ขๆ•ฐใ‚’ๆŒใฃใฆใ„ใ‚‹ใŸใ‚ใ€่‡ชๅˆ†ใงๆŒ‡ๅฎšใ—ใŸใ„ๅ ดๅˆใ‚’้™คใใ€ๆๅคฑ้–ขๆ•ฐใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“๏ผš

```python
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใฆใ‚ณใƒณใƒ‘ใ‚คใƒซใ™ใ‚‹
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใซใฏ้€šๅธธใ€ๅญฆ็ฟ’็އใ‚’ไธ‹ใ’ใ‚‹ใจ่‰ฏใ„ใงใ™
model.compile(optimizer=Adam(3e-5))  # ๆๅคฑ้–ขๆ•ฐใฎๆŒ‡ๅฎšใฏไธ่ฆใงใ™๏ผ

model.fit(tokenized_data, labels)
```

<Tip>

ใƒขใƒ‡ใƒซใ‚’`compile()`ใ™ใ‚‹้š›ใซ`loss`ๅผ•ๆ•ฐใ‚’ๆธกใ™ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“๏ผHugging Faceใƒขใƒ‡ใƒซใฏใ€ใ“ใฎๅผ•ๆ•ฐใ‚’็ฉบ็™ฝใฎใพใพใซใ—ใฆใŠใใจใ€ใ‚ฟใ‚นใ‚ฏใจใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซ้ฉใ—ใŸๆๅคฑใ‚’่‡ชๅ‹•็š„ใซ้ธๆŠžใ—ใพใ™ใ€‚ๅฟ…่ฆใซๅฟœใ˜ใฆ่‡ชๅˆ†ใงๆๅคฑใ‚’ๆŒ‡ๅฎšใ—ใฆใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผ

</Tip>
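ๅ‚่€ƒใพใงใซใ€ๆๅคฑใ‚’่‡ชๅˆ†ใงๆŒ‡ๅฎšใ—ใฆใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ—ใŸใ„ๅ ดๅˆใฎ็ฐกๅ˜ใชใ‚นใ‚ฑใƒƒใƒใ‚’ไปฅไธ‹ใซ็คบใ—ใพใ™ใ€‚ไธŠ่จ˜ใฎใ‚ณใƒผใƒ‰ใง็”จๆ„ใ—ใŸ`model`ใ€`tokenized_data`ใ€`labels`ใ‚’ใใฎใพใพไฝฟใ†ใ“ใจใ€ใŠใ‚ˆใณใƒขใƒ‡ใƒซใŒใƒญใ‚ธใƒƒใƒˆใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ“ใจใ‚’ๅ‰ๆใจใ—ใŸไพ‹ใงใ‚ใ‚Šใ€ๅฟ…้ ˆใฎๆ‰‹้ †ใงใฏใ‚ใ‚Šใพใ›ใ‚“๏ผš

```python
import tensorflow as tf
from tensorflow.keras.optimizers import Adam

# ๐Ÿค— Transformersใƒขใƒ‡ใƒซใฏ้€šๅธธใƒญใ‚ธใƒƒใƒˆใ‚’่ฟ”ใ™ใŸใ‚ใ€from_logits=Trueใ‚’ๆŒ‡ๅฎšใ—ใพใ™
model.compile(
    optimizer=Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(tokenized_data, labels)
```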
ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏใ€ๅฐ่ฆๆจกใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซใฏ้ฉใ—ใฆใ„ใพใ™ใŒใ€ๅคง่ฆๆจกใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๅฏพใ—ใฆใฏๅ•้กŒใซใชใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใชใœใชใ‚‰ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ•ใ‚ŒใŸ้…ๅˆ—ใจใƒฉใƒ™ใƒซใฏใƒกใƒขใƒชใซๅฎŒๅ…จใซ่ชญใฟ่พผใพใ‚Œใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ใพใŸNumPyใฏใ€Œใ‚ธใƒฃใ‚ฎใƒผใ€ใช้…ๅˆ—ใ‚’ๅ‡ฆ็†ใ—ใชใ„ใŸใ‚ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ•ใ‚ŒใŸๅ„ใ‚ตใƒณใƒ—ใƒซใ‚’ๅ…จไฝ“ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ†…ใงๆœ€ใ‚‚้•ทใ„ใ‚ตใƒณใƒ—ใƒซใฎ้•ทใ•ใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€้…ๅˆ—ใŒใ•ใ‚‰ใซๅคงใใใชใ‚Šใ€ใ™ในใฆใฎใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้…ใใ™ใ‚‹ๅŽŸๅ› ใซใชใ‚Šใพใ™๏ผ ### Loading data as a tf.data.Dataset ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้…ใใ›ใšใซใƒ‡ใƒผใ‚ฟใ‚’่ชญใฟ่พผใ‚€ใซใฏใ€ใƒ‡ใƒผใ‚ฟใ‚’`tf.data.Dataset`ใจใ—ใฆ่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใพใ™ใ€‚็‹ฌ่‡ชใฎ`tf.data`ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใŒใ€ใ“ใ‚Œใ‚’่กŒใ†ใŸใ‚ใฎไพฟๅˆฉใชๆ–นๆณ•ใŒ2ใคใ‚ใ‚Šใพใ™๏ผš - [`~TFPreTrainedModel.prepare_tf_dataset`]: ใ“ใ‚Œใฏใปใจใ‚“ใฉใฎๅ ดๅˆใงๆŽจๅฅจใ™ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚ใƒขใƒ‡ใƒซไธŠใฎใƒกใ‚ฝใƒƒใƒ‰ใชใฎใงใ€ใƒขใƒ‡ใƒซใ‚’ๆคœๆŸปใ—ใฆใƒขใƒ‡ใƒซๅ…ฅๅŠ›ใจใ—ใฆไฝฟ็”จๅฏ่ƒฝใชๅˆ—ใ‚’่‡ชๅ‹•็š„ใซๆŠŠๆกใ—ใ€ไป–ใฎๅˆ—ใ‚’็ ดๆฃ„ใ—ใฆใ‚ˆใ‚Šๅ˜็ด”ใง้ซ˜ๆ€ง่ƒฝใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝœๆˆใงใใพใ™ใ€‚ - [`~datasets.Dataset.to_tf_dataset`]: ใ“ใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ‚ˆใ‚ŠไฝŽใƒฌใƒ™ใƒซใงใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒใฉใฎใ‚ˆใ†ใซไฝœๆˆใ•ใ‚Œใ‚‹ใ‹ใ‚’ๆญฃ็ขบใซๅˆถๅพกใ™ใ‚‹ๅ ดๅˆใซไพฟๅˆฉใงใ™ใ€‚`columns`ใจ`label_cols`ใ‚’ๆŒ‡ๅฎšใ—ใฆใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๅซใ‚ใ‚‹ๅˆ—ใ‚’ๆญฃ็ขบใซๆŒ‡ๅฎšใงใใพใ™ใ€‚ [`~TFPreTrainedModel.prepare_tf_dataset`]ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ๆฌกใฎใ‚ณใƒผใƒ‰ใ‚ตใƒณใƒ—ใƒซใซ็คบใ™ใ‚ˆใ†ใซใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎๅ‡บๅŠ›ใ‚’ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๅˆ—ใจใ—ใฆ่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```py def tokenize_dataset(data): # ่ฟ”ใ•ใ‚ŒใŸ่พžๆ›ธใฎใ‚ญใƒผใฏใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๅˆ—ใจใ—ใฆ่ฟฝๅŠ ใ•ใ‚Œใพใ™ return tokenizer(data["text"]) dataset = dataset.map(tokenize_dataset) ``` Hugging Faceใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใƒ‡ใ‚ฃใ‚นใ‚ฏใซไฟๅญ˜ใ•ใ‚Œใ‚‹ใŸใ‚ใ€ใ“ใ‚Œใซใ‚ˆใ‚Šใƒกใƒขใƒชใฎไฝฟ็”จ้‡ใŒๅข—ใˆใ‚‹ใ“ใจใฏใ‚ใ‚Šใพใ›ใ‚“๏ผ ๅˆ—ใŒ่ฟฝๅŠ ใ•ใ‚ŒใŸใ‚‰ใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‹ใ‚‰ใƒใƒƒใƒใ‚’ใ‚นใƒˆใƒชใƒผใƒ ใ—ใ€ๅ„ใƒใƒƒใƒใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’่ฟฝๅŠ ใงใใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ…จไฝ“ใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅ ดๅˆใจๆฏ”ในใฆใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใฎๆ•ฐใŒๅคงๅน…ใซๅ‰Šๆธ›ใ•ใ‚Œใพใ™ใ€‚ ```python >>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer) ``` ไธŠ่จ˜ใฎใ‚ณใƒผใƒ‰ใ‚ตใƒณใƒ—ใƒซใงใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’`prepare_tf_dataset`ใซๆธกใ—ใฆใ€ใƒใƒƒใƒใ‚’ๆญฃใ—ใ่ชญใฟ่พผใ‚€้š›ใซๆญฃใ—ใใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎใ™ในใฆใฎใ‚ตใƒณใƒ—ใƒซใŒๅŒใ˜้•ทใ•ใงใ‚ใ‚Šใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใŒไธ่ฆใชๅ ดๅˆใฏใ€ใ“ใฎๅผ•ๆ•ฐใ‚’ใ‚นใ‚ญใƒƒใƒ—ใงใใพใ™ใ€‚ ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐไปฅๅค–ใฎ่ค‡้›‘ใชๅ‡ฆ็†ใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆ๏ผˆไพ‹๏ผšใƒžใ‚นใ‚ฏ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใฎใŸใ‚ใฎใƒˆใƒผใ‚ฏใƒณใฎ็ ดๆใชใฉ๏ผ‰ใ€ ไปฃใ‚ใ‚Šใซ`collate_fn`ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€ใ‚ตใƒณใƒ—ใƒซใฎใƒชใ‚นใƒˆใ‚’ใƒใƒƒใƒใซๅค‰ๆ›ใ—ใ€ๅฟ…่ฆใชๅ‰ๅ‡ฆ็†ใ‚’้ฉ็”จใ™ใ‚‹้–ขๆ•ฐใ‚’ๆธกใ™ใ“ใจใŒใงใใพใ™ใ€‚ ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚’ๅฎŸ้š›ใซไฝฟ็”จใ—ใŸไพ‹ใซใคใ„ใฆใฏใ€ [examples](https://github.com/huggingface/transformers/tree/main/examples)ใ‚„ [notebooks](https://huggingface.co/docs/transformers/notebooks)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ 
`tf.data.Dataset`ใ‚’ไฝœๆˆใ—ใŸใ‚‰ใ€ไปฅๅ‰ใจๅŒๆง˜ใซใƒขใƒ‡ใƒซใ‚’ใ‚ณใƒณใƒ‘ใ‚คใƒซใ—ใ€้ฉๅˆใ•ใ›ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš

```python
model.compile(optimizer=Adam(3e-5))  # ๆๅคฑๅผ•ๆ•ฐใฏไธ่ฆใงใ™๏ผ

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## Train in native PyTorch

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`]ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ๅ‡ฆ็†ใ—ใ€1่กŒใฎใ‚ณใƒผใƒ‰ใงใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’็‹ฌ่‡ชใซ่จ˜่ฟฐใ—ใŸใ„ใƒฆใƒผใ‚ถใƒผใฎใŸใ‚ใซใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’ใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎPyTorchใงใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚

ใ“ใฎๆ™‚็‚นใงใ€ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ๅ†่ตทๅ‹•ใ™ใ‚‹ใ‹ใ€ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใƒกใƒขใƒชใ‚’่งฃๆ”พใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผš

```py
import torch

del model
del trainer
torch.cuda.empty_cache()
```

1. ใƒขใƒ‡ใƒซใฏ็”Ÿใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ–ใ‚‰ใชใ„ใŸใ‚ใ€`text`ๅˆ—ใ‚’ๅ‰Š้™คใ—ใพใ™๏ผš

```py
>>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
```

2. `label`ๅˆ—ใ‚’`labels`ใซๅๅ‰ใ‚’ๅค‰ๆ›ดใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใฏๅผ•ๆ•ฐใฎๅๅ‰ใŒ`labels`ใงใ‚ใ‚‹ใ“ใจใ‚’ๆœŸๅพ…ใ—ใฆใ„ใพใ™๏ผš

```py
>>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```

3. ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅฝขๅผใ‚’ใƒชใ‚นใƒˆใงใฏใชใPyTorchใƒ†ใƒณใ‚ฝใƒซใ‚’่ฟ”ใ™ใ‚ˆใ†ใซ่จญๅฎšใ—ใพใ™๏ผš

```py
>>> tokenized_datasets.set_format("torch")
```

ไปฅๅ‰ใซ็คบใ—ใŸใ‚ˆใ†ใซใ€ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใŸใ‚ใซใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅฐใ•ใชใ‚ตใƒ–ใ‚ปใƒƒใƒˆใ‚’ไฝœๆˆใ—ใพใ™๏ผš

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

### DataLoader

ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใจใƒ†ใ‚นใƒˆใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ็”จใฎ`DataLoader`ใ‚’ไฝœๆˆใ—ใฆใ€ใƒ‡ใƒผใ‚ฟใฎใƒใƒƒใƒใ‚’ใ‚คใƒ†ใƒฌใƒผใƒˆใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™๏ผš

```py
>>> from torch.utils.data import DataLoader

>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```

ใƒญใƒผใƒ‰ใ™ใ‚‹ใƒขใƒ‡ใƒซใจๆœŸๅพ…ใ•ใ‚Œใ‚‹ใƒฉใƒ™ใƒซใฎๆ•ฐใ‚’ๆŒ‡ๅฎšใ—ใฆใใ ใ•ใ„๏ผš

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

### Optimizer and learning rate scheduler

ใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใจๅญฆ็ฟ’็އใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใ‚’ไฝœๆˆใ—ใพใ—ใ‚‡ใ†ใ€‚PyTorchใ‹ใ‚‰[`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ไฝฟ็”จใ—ใพใ™๏ผš

```python
>>> from torch.optim import AdamW

>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```

ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅญฆ็ฟ’็އใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใ‚’[`Trainer`]ใ‹ใ‚‰ไฝœๆˆใ—ใพใ™๏ผš

```py
>>> from transformers import get_scheduler

>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
...     name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```

ๆœ€ๅพŒใซใ€GPUใ‚’ๅˆฉ็”จใงใใ‚‹ๅ ดๅˆใฏ`device`ใ‚’ๆŒ‡ๅฎšใ—ใฆใใ ใ•ใ„ใ€‚GPUใงใ‚ใ‚Œใฐๆ•ฐๅˆ†ใงๅฎŒไบ†ใ™ใ‚‹ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚‚ใ€CPUใงใฏๆ•ฐๆ™‚้–“ใ‹ใ‹ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚

```py
>>> import torch

>>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
>>> model.to(device)
```

<Tip>

ใ‚ฏใƒฉใ‚ฆใƒ‰GPUใŒๅˆฉ็”จใงใใชใ„ๅ ดๅˆใ€[Colaboratory](https://colab.research.google.com/)ใ‚„[SageMaker StudioLab](https://studiolab.sagemaker.aws/)ใชใฉใฎใƒ›ใ‚นใƒˆใ•ใ‚ŒใŸใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ไฝฟ็”จใ—ใฆ็„กๆ–™ใงGPUใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใพใ™ใ€‚

</Tip>

ใ•ใฆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๆบ–ๅ‚™ใŒๆ•ดใ„ใพใ—ใŸ๏ผ ๐Ÿฅณ

### Training loop

ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ้€ฒๆ—ใ‚’่ฟฝ่ทกใ™ใ‚‹ใŸใ‚ใซใ€[tqdm](https://tqdm.github.io/)ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚นใƒ†ใƒƒใƒ—ใฎๆ•ฐใซๅฏพใ—ใฆ้€ฒ่กŒ็Šถๆณใƒใƒผใ‚’่ฟฝๅŠ ใ—ใพใ™๏ผš

```py
>>> from tqdm.auto import tqdm

>>> progress_bar = tqdm(range(num_training_steps))

>>> model.train()
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         batch = {k: v.to(device) for k, v in batch.items()}
...         outputs = model(**batch)
...         loss = outputs.loss
...         loss.backward()

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

### Evaluate

[`Trainer`]ใซ่ฉ•ไพก้–ขๆ•ฐใ‚’่ฟฝๅŠ ใ—ใŸใฎใจๅŒๆง˜ใซใ€็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใ‚’ไฝœๆˆใ™ใ‚‹้š›ใซใ‚‚ๅŒๆง˜ใฎๆ“ไฝœใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใ ใ—ใ€ๅ„ใ‚จใƒใƒƒใ‚ฏใฎๆœ€ๅพŒใซใƒกใƒˆใƒชใƒƒใ‚ฏใ‚’่จˆ็ฎ—ใŠใ‚ˆใณๅ ฑๅ‘Šใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ไปŠๅ›žใฏ[`~evaluate.add_batch`]ใ‚’ไฝฟ็”จใ—ใฆใ™ในใฆใฎใƒใƒƒใƒใ‚’่“„็ฉใ—ใ€ๆœ€ๅพŒใซใƒกใƒˆใƒชใƒƒใ‚ฏใ‚’่จˆ็ฎ—ใ—ใพใ™ใ€‚

```python
>>> import evaluate

>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```

</pt>
</frameworkcontent>

<a id='additional-resources'></a>

## ่ฟฝๅŠ ใƒชใ‚ฝใƒผใ‚น

ใ•ใ‚‰ใชใ‚‹ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใฎไพ‹ใซใคใ„ใฆใฏใ€ไปฅไธ‹ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผš

- [๐Ÿค— Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples)ใซใฏใ€PyTorchใจTensorFlowใงไธ€่ˆฌ็š„ใชNLPใ‚ฟใ‚นใ‚ฏใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ‚นใ‚ฏใƒชใƒ—ใƒˆใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚
- [๐Ÿค— Transformers Notebooks](notebooks)ใซใฏใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใซ้–ขใ™ใ‚‹ใ•ใพใ–ใพใชใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/pad_truncation.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Padding and truncation

ใƒใƒƒใƒๅ…ฅๅŠ›ใฏใ—ใฐใ—ใฐ็•ฐใชใ‚‹้•ทใ•ใงใ‚ใ‚Šใ€ๅ›บๅฎšใ‚ตใ‚คใ‚บใฎใƒ†ใƒณใ‚ฝใƒซใซๅค‰ๆ›ใงใใชใ„ใŸใ‚ใ€ๅค‰ๅ‹•ใ™ใ‚‹้•ทใ•ใฎใƒใƒƒใƒใ‹ใ‚‰้•ทๆ–นๅฝขใฎใƒ†ใƒณใ‚ฝใƒซใ‚’ไฝœๆˆใ™ใ‚‹ใŸใ‚ใฎๆˆฆ็•ฅใจใ—ใฆใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใจๅˆ‡ใ‚Š่ฉฐใ‚ใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใฏใ€็Ÿญใ„ใ‚ทใƒผใ‚ฑใƒณใ‚นใŒใƒใƒƒใƒๅ†…ใฎๆœ€้•ทใ‚ทใƒผใ‚ฑใƒณใ‚นใพใŸใฏใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ทใจๅŒใ˜้•ทใ•ใซใชใ‚‹ใ‚ˆใ†ใซใ€็‰นๅˆฅใช**ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณ**ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ๅˆ‡ใ‚Š่ฉฐใ‚ใฏใ€้•ทใ„ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅˆ‡ใ‚Š่ฉฐใ‚ใ‚‹ใ“ใจใง้€†ๆ–นๅ‘ใซๆฉŸ่ƒฝใ—ใพใ™ใ€‚

ใปใจใ‚“ใฉใฎๅ ดๅˆใ€ใƒใƒƒใƒใ‚’ๆœ€้•ทใ‚ทใƒผใ‚ฑใƒณใ‚นใฎ้•ทใ•ใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ—ใ€ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ทใซๅˆ‡ใ‚Š่ฉฐใ‚ใ‚‹ใ“ใจใงใ€ใ†ใพใๅ‹•ไฝœใ—ใพใ™ใ€‚ใŸใ ใ—ใ€APIใฏใใ‚ŒไปฅไธŠใฎๆˆฆ็•ฅใ‚‚ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ๅฟ…่ฆใช3ใคใฎๅผ•ๆ•ฐใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš`padding`ใ€`truncation`ใ€ใŠใ‚ˆใณ`max_length`ใ€‚

`padding`ๅผ•ๆ•ฐใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ใƒ–ใƒผใƒซๅ€คใพใŸใฏๆ–‡ๅญ—ๅˆ—ใงใ‚ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš

- `True`ใพใŸใฏ`'longest'`๏ผšใƒใƒƒใƒๅ†…ใฎๆœ€้•ทใ‚ทใƒผใ‚ฑใƒณใ‚นใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’่ฟฝๅŠ ใ—ใพใ™๏ผˆใ‚ทใƒผใ‚ฑใƒณใ‚นใŒ1ใคใ—ใ‹ๆไพ›ใ•ใ‚Œใชใ„ๅ ดๅˆใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“๏ผ‰ใ€‚
- `'max_length'`๏ผš`max_length`ๅผ•ๆ•ฐใงๆŒ‡ๅฎšใ•ใ‚ŒใŸ้•ทใ•ใ€ใพใŸใฏ`max_length`ใŒๆไพ›ใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆ๏ผˆ`max_length=None`๏ผ‰ใฏใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ทใพใงใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ใ‚ทใƒผใ‚ฑใƒณใ‚นใŒ1ใคใ—ใ‹ๆไพ›ใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใงใ‚‚ใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใฏ้ฉ็”จใ•ใ‚Œใพใ™ใ€‚
- `False`ใพใŸใฏ`'do_not_pad'`๏ผšใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚ŒใŒใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ‹•ไฝœใงใ™ใ€‚

`truncation`ๅผ•ๆ•ฐใฏๅˆ‡ใ‚Š่ฉฐใ‚ใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ใƒ–ใƒผใƒซๅ€คใพใŸใฏๆ–‡ๅญ—ๅˆ—ใงใ‚ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš

- `True`ใพใŸใฏ`'longest_first'`๏ผšๆœ€ๅคง้•ทใ‚’`max_length`ๅผ•ๆ•ฐใงๆŒ‡ๅฎšใ™ใ‚‹ใ‹ใ€ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ท๏ผˆ`max_length=None`๏ผ‰ใพใงๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™ใ€‚ใ“ใ‚Œใฏใƒˆใƒผใ‚ฏใƒณใ”ใจใซๅˆ‡ใ‚Š่ฉฐใ‚ใ€้ฉๅˆ‡ใช้•ทใ•ใซ้”ใ™ใ‚‹ใพใงใƒšใ‚ขๅ†…ใฎๆœ€้•ทใ‚ทใƒผใ‚ฑใƒณใ‚นใ‹ใ‚‰ใƒˆใƒผใ‚ฏใƒณใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚
- `'only_second'`๏ผšๆœ€ๅคง้•ทใ‚’`max_length`ๅผ•ๆ•ฐใงๆŒ‡ๅฎšใ™ใ‚‹ใ‹ใ€ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ท๏ผˆ`max_length=None`๏ผ‰ใพใงๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™ใ€‚ใ“ใ‚Œใฏใƒšใ‚ขใฎ2็•ช็›ฎใฎๆ–‡ใ ใ‘ใ‚’ๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™๏ผˆใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒšใ‚ขใพใŸใฏใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒใƒƒใƒใฎใƒšใ‚ขใŒๆไพ›ใ•ใ‚ŒใŸๅ ดๅˆ๏ผ‰ใ€‚
- `'only_first'`๏ผšๆœ€ๅคง้•ทใ‚’`max_length`ๅผ•ๆ•ฐใงๆŒ‡ๅฎšใ™ใ‚‹ใ‹ใ€ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคง้•ท๏ผˆ`max_length=None`๏ผ‰ใพใงๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™ใ€‚ใ“ใ‚Œใฏใƒšใ‚ขใฎๆœ€ๅˆใฎๆ–‡ใ ใ‘ใ‚’ๅˆ‡ใ‚Š่ฉฐใ‚ใพใ™๏ผˆใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒšใ‚ขใพใŸใฏใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒใƒƒใƒใฎใƒšใ‚ขใŒๆไพ›ใ•ใ‚ŒใŸๅ ดๅˆ๏ผ‰ใ€‚
- `False`ใพใŸใฏ`'do_not_truncate'`๏ผšๅˆ‡ใ‚Š่ฉฐใ‚ใฏ้ฉ็”จใ•ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚ŒใŒใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ‹•ไฝœใงใ™ใ€‚

`max_length`ๅผ•ๆ•ฐใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใจๅˆ‡ใ‚Š่ฉฐใ‚ใฎ้•ทใ•ใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ๆ•ดๆ•ฐใพใŸใฏ`None`ใงใ‚ใ‚Šใ€ใ“ใฎๅ ดๅˆใ€ใƒขใƒ‡ใƒซใŒๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๆœ€ๅคงๅ…ฅๅŠ›้•ทใซใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง่จญๅฎšใ•ใ‚Œใพใ™ใ€‚ใƒขใƒ‡ใƒซใซ็‰นๅฎšใฎๆœ€ๅคงๅ…ฅๅŠ›้•ทใŒใชใ„ๅ ดๅˆใ€`max_length`ใธใฎๅˆ‡ใ‚Š่ฉฐใ‚ใพใŸใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใฏ็„กๅŠนใซใชใ‚Šใพใ™ใ€‚

ไปฅไธ‹ใฎ่กจใฏใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใจๅˆ‡ใ‚Š่ฉฐใ‚ใ‚’่จญๅฎšใ™ใ‚‹ๆŽจๅฅจๆ–นๆณ•ใ‚’่ฆ็ด„ใ—ใฆใ„ใพใ™ใ€‚ไปฅไธ‹ใฎไพ‹ใฎใ„ใšใ‚Œใ‹ใงๅ…ฅๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒšใ‚ขใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€`truncation=True`ใ‚’`['only_first', 'only_second', 'longest_first']`ใ‹ใ‚‰้ธๆŠžใ—ใŸ`STRATEGY`ใซ็ฝฎใๆ›ใˆใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใคใพใ‚Šใ€`truncation='only_second'`ใพใŸใฏ`truncation='longest_first'`ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒšใ‚ขๅ†…ใฎไธกๆ–นใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅ‰่ฟฐใฎใ‚ˆใ†ใซๅˆ‡ใ‚Š่ฉฐใ‚ใ‚‹ๆ–นๆณ•ใ‚’ๅˆถๅพกใงใใพใ™ใ€‚

| Truncation | Padding | Instruction |
|--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------|
| no truncation | no padding | `tokenizer(batch_sentences)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or |
| | | `tokenizer(batch_sentences, padding='longest')` |
| | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` |
| | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` |
| | padding to a multiple of a value | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` |
| truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or |
| | | `tokenizer(batch_sentences, truncation=STRATEGY)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or |
| | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` |
| | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or |
| | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` |
| | padding to specific length | Not possible |
| truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` |
| | padding to max model input length | Not possible |
| | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_infer_gpu_many.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Efficient Inference on Multiple GPUs

ใ“ใฎๆ–‡ๆ›ธใซใฏใ€่ค‡ๆ•ฐใฎGPUใงๅŠน็އ็š„ใซๆŽจ่ซ–ใ‚’่กŒใ†ๆ–นๆณ•ใซ้–ขใ™ใ‚‹ๆƒ…ๅ ฑใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚

<Tip>

ๆณจๆ„: ่ค‡ๆ•ฐใฎGPUใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงใฏใ€[ๅ˜ไธ€ใฎGPUใ‚ปใ‚ฏใ‚ทใƒงใƒณ](./perf_infer_gpu_one)ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹ใปใจใ‚“ใฉใฎๆˆฆ็•ฅใ‚’ใใฎใพใพไฝฟ็”จใงใใพใ™ใ€‚ใŸใ ใ—ใ€ใ‚ˆใ‚Šใ†ใพใๆดป็”จใ™ใ‚‹ใŸใ‚ใซๅˆฉ็”จใงใใ‚‹็ฐกๅ˜ใชใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใซใคใ„ใฆใ‚‚็ŸฅใฃใฆใŠใๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

</Tip>

## Flash Attention 2

Flash Attention 2ใฎ็ตฑๅˆใฏใ€่ค‡ๆ•ฐใฎGPUใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงใ‚‚ๆฉŸ่ƒฝใ—ใพใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ๅ˜ไธ€ใฎGPUใ‚ปใ‚ฏใ‚ทใƒงใƒณ](./perf_infer_gpu_one#Flash-Attention-2)ใฎ้ฉๅˆ‡ใชใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚

## BetterTransformer

[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview)ใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎ้ซ˜้€ŸๅฎŸ่กŒใƒ‘ใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซๅค‰ๆ›ใ—ใ€ใใฎไธ‹ใงFlash Attentionใชใฉใฎๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸใ‚ซใƒผใƒใƒซใ‚’ๅ‘ผใณๅ‡บใ—ใพใ™ใ€‚

BetterTransformerใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใ€็”ปๅƒใ€้Ÿณๅฃฐใƒขใƒ‡ใƒซใฎๅ˜ไธ€GPUใŠใ‚ˆใณ่ค‡ๆ•ฐGPUใงใฎ้ซ˜้€ŸๆŽจ่ซ–ใ‚‚ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚

<Tip>

Flash Attentionใฏใ€fp16ใพใŸใฏbf16 dtypeใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใซใฎใฟไฝฟ็”จใงใใพใ™ใ€‚BetterTransformerใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ใƒขใƒ‡ใƒซใ‚’้ฉๅˆ‡ใชdtypeใซใ‚ญใƒฃใ‚นใƒˆใ—ใฆใใ ใ•ใ„ใ€‚

</Tip>

### Decoder models

ใƒ†ใ‚ญใ‚นใƒˆใƒขใƒ‡ใƒซใ€็‰นใซใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซ๏ผˆGPTใ€T5ใ€Llamaใชใฉ๏ผ‰ใฎๅ ดๅˆใ€BetterTransformer APIใฏใ™ในใฆใฎๆณจๆ„ๆ“ไฝœใ‚’[`torch.nn.functional.scaled_dot_product_attention`ใ‚ชใƒšใƒฌใƒผใ‚ฟใƒผ](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention)๏ผˆSDPA๏ผ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซๅค‰ๆ›ใ—ใพใ™ใ€‚ใ“ใ‚ŒใฏPyTorch 2.0ไปฅ้™ใงใฎใฟไฝฟ็”จๅฏ่ƒฝใงใ™ใ€‚

ใƒขใƒ‡ใƒซใ‚’BetterTransformerใซๅค‰ๆ›ใ™ใ‚‹ใซใฏ๏ผš

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# convert the model to BetterTransformer
model.to_bettertransformer()

# Use it for training or inference
```

SDPAใฏใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใ‚„ๅ•้กŒใฎใ‚ตใ‚คใ‚บใชใฉใฎ็‰นๅฎšใฎ่จญๅฎšใง[Flash Attention](https://arxiv.org/abs/2205.14135)ใ‚ซใƒผใƒใƒซใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใ‚‚ใงใใพใ™ใ€‚Flash Attentionใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ‹ใ€็‰นๅฎšใฎ่จญๅฎš๏ผˆใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใ€ๅ•้กŒใฎใ‚ตใ‚คใ‚บ๏ผ‰ใงๅˆฉ็”จๅฏ่ƒฝใ‹ใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€[`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel)ใ‚’ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใƒžใƒใƒผใ‚ธใƒฃใจใ—ใฆไฝฟ็”จใ—ใพใ™ใ€‚

```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m").to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

ใ‚‚ใ—ใƒˆใƒฌใƒผใ‚นใƒใƒƒใ‚ฏใงๆฌกใฎใ‚ˆใ†ใชใ‚จใƒฉใƒผใƒกใƒƒใ‚ปใƒผใ‚ธใŒ่กจ็คบใ•ใ‚ŒใŸๅ ดๅˆ๏ผš

```bash
RuntimeError: No available kernel. Aborting execution.
```

ใใฎๅ ดๅˆใฏใ€Flash Attentionใฎใ‚ซใƒใƒฌใƒƒใ‚ธใŒใ‚ˆใ‚Šๅบƒ็ฏ„ๅ›ฒใงใ‚ใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹PyTorch Nightlyใƒใƒผใ‚ธใƒงใƒณใ‚’่ฉฆใ™ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚

```bash
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```

[ใ“ใฎใƒ–ใƒญใ‚ฐๆŠ•็จฟ](https://pytorch.org/blog/out-of-the-box-acceleration/)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใ€BetterTransformer + SDPA APIใงๅฏ่ƒฝใชใ“ใจใซใคใ„ใฆ่ฉณใ—ใๅญฆใณใพใ—ใ‚‡ใ†ใ€‚

### Encoder Models

ๆŽจ่ซ–ไธญใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใงใฏใ€BetterTransformerใฏใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใฎforwardๅ‘ผใณๅ‡บใ—ใ‚’ใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใฎ[`torch.nn.TransformerEncoderLayer`](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html)ใฎ็›ธๅฝ“ใ™ใ‚‹ใ‚‚ใฎใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใฎ้ซ˜้€ŸๅฎŸ่ฃ…ใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚

`torch.nn.TransformerEncoderLayer`ใฎ้ซ˜้€ŸๅฎŸ่ฃ…ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใชใ„ใŸใ‚ใ€ไปฃใ‚ใ‚Šใซ`torch.nn.functional.scaled_dot_product_attention`ใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒใ‚นใƒˆใ•ใ‚ŒใŸใƒ†ใƒณใ‚ฝใƒซใ‚’ๆดป็”จใ—ใชใ„Flash AttentionใพใŸใฏMemory-Efficient Attentionใฎ่žๅˆใ‚ซใƒผใƒใƒซใ‚’ไฝฟ็”จใงใใพใ™ใ€‚

BetterTransformerใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ใ“ใฎ[ใƒ–ใƒญใ‚ฐๆŠ•็จฟ](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)ใ‚’ใ”่ฆงใ„ใŸใ ใ‘ใพใ™ใ€‚ใพใŸใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซ็”จใฎBetterTransformerใซใคใ„ใฆใฏใ€ใ“ใฎ[ใƒ–ใƒญใ‚ฐ](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/)ใง่ฉณใ—ใๅญฆใถใ“ใจใŒใงใใพใ™ใ€‚

## Advanced usage: mixing FP4 (or Int8) and BetterTransformer

ใƒขใƒ‡ใƒซใฎๆœ€่‰ฏใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅพ—ใ‚‹ใŸใ‚ใซใ€ไธŠ่จ˜ใง่ชฌๆ˜Žใ—ใŸ็•ฐใชใ‚‹ๆ–นๆณ•ใ‚’็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ไพ‹ใˆใฐใ€FP4ใƒŸใƒƒใ‚ฏใ‚นใƒ—ใƒฌใ‚ทใ‚ธใƒงใƒณๆŽจ่ซ–ใจFlash Attentionใ‚’ไฝฟ็”จใ—ใŸBetterTransformerใ‚’็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
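ๅ‚่€ƒใพใงใซใ€ๅ‰่ฟฐใฎใ€ŒEncoder Modelsใ€ใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ชฌๆ˜Žใ—ใŸใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใ‚‚ใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใจๅŒใ˜`to_bettertransformer()` APIใงๅค‰ๆ›ใงใใพใ™ใ€‚ไปฅไธ‹ใฏ`bert-base-uncased`ใ‚’ไพ‹ใจใ—ใŸ็ฐกๅ˜ใชๆŽจ่ซ–ใ‚นใ‚ฑใƒƒใƒใงใ™๏ผˆใƒขใƒ‡ใƒซๅใจๅ…ฅๅŠ›ๆ–‡ใฏ่ชฌๆ˜Ž็”จใฎไปฎๅฎšใงใ™๏ผ‰๏ผš

```py
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Flash Attentionใ‚’ๆดป็”จใ™ใ‚‹ใŸใ‚ใ€ๅ‰่ฟฐใฎTipใซๅพ“ใฃใฆfp16ใงใƒญใƒผใƒ‰ใ—ใพใ™
model = AutoModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16).to("cuda")

# ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒฌใ‚คใƒคใƒผใ‚’้ซ˜้€ŸๅฎŸ่ฃ…ใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ™ใ‚‹ใ‚ˆใ†ใซๅค‰ๆ›ใ—ใพใ™
model = model.to_bettertransformer()

inputs = tokenizer("Hello my dog is cute", return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model(**inputs)
```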
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/troubleshooting.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Troubleshoot ๆ™‚ใซใฏใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใŒใ€็งใŸใกใฏใ“ใ“ใซใ„ใพใ™๏ผใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€็งใŸใกใŒใ‚ˆใ่ฆ‹ใ‚‹ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใชๅ•้กŒใจใ€ใใ‚Œใ‚‰ใ‚’่งฃๆฑบใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ใ“ใฎใ‚ฌใ‚คใƒ‰ใฏใ™ในใฆใฎ ๐Ÿค— Transformers ใฎๅ•้กŒใฎๅŒ…ๆ‹ฌ็š„ใชใ‚ณใƒฌใ‚ฏใ‚ทใƒงใƒณใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๅ•้กŒใ‚’ใƒˆใƒฉใƒ–ใƒซใ‚ทใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎ่ฉณ็ดฐใชใƒ˜ใƒซใƒ—ใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€ไปฅไธ‹ใฎๆ–นๆณ•ใ‚’่ฉฆใ—ใฆใฟใฆใใ ใ•ใ„๏ผš <Youtube id="S2EEG3JIt2A"/> 1. [ใƒ•ใ‚ฉใƒผใƒฉใƒ ](https://discuss.huggingface.co/)ใงๅŠฉใ‘ใ‚’ๆฑ‚ใ‚ใ‚‹ใ€‚ [ๅˆๅฟƒ่€…ๅ‘ใ‘](https://discuss.huggingface.co/c/beginners/5) ใพใŸใฏ [๐Ÿค— Transformers](https://discuss.huggingface.co/c/transformers/9) ใชใฉใ€่ณชๅ•ใ‚’ๆŠ•็จฟใงใใ‚‹็‰นๅฎšใฎใ‚ซใƒ†ใ‚ดใƒชใŒใ‚ใ‚Šใพใ™ใ€‚ๅ•้กŒใŒ่งฃๆฑบใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใ‚’ๆœ€ๅคง้™ใซใ™ใ‚‹ใŸใ‚ใซใ€ๅ†็พๅฏ่ƒฝใชใ‚ณใƒผใƒ‰ใ‚’ๅซใ‚€่‰ฏใ„่ชฌๆ˜Ž็š„ใชใƒ•ใ‚ฉใƒผใƒฉใƒ ๆŠ•็จฟใ‚’ๆ›ธใใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผ <Youtube id="_PAli-V4wj0"/> 2. ใƒใ‚ฐใŒใƒฉใ‚คใƒ–ใƒฉใƒชใซ้–ข้€ฃใ™ใ‚‹ๅ ดๅˆใฏใ€๐Ÿค— Transformers ใƒชใƒใ‚ธใƒˆใƒชใง [Issue](https://github.com/huggingface/transformers/issues/new/choose) ใ‚’ไฝœๆˆใ—ใฆใใ ใ•ใ„ใ€‚ใƒใ‚ฐใ‚’่ชฌๆ˜Žใ™ใ‚‹ใŸใ‚ใฎใงใใ‚‹ใ ใ‘ๅคšใใฎๆƒ…ๅ ฑใ‚’ๅซใ‚ใ‚‹ใ‚ˆใ†ใซๅฟƒใŒใ‘ใ€ไฝ•ใŒๅ•้กŒใงใ€ใฉใฎใ‚ˆใ†ใซไฟฎๆญฃใงใใ‚‹ใ‹ใ‚’ใ‚ˆใ‚Š่‰ฏใ็†่งฃใงใใ‚‹ใ‚ˆใ†ใซใ—ใฆใใ ใ•ใ„ใ€‚ 3. ใ‚ˆใ‚Šๅคใ„ใƒใƒผใ‚ธใƒงใƒณใฎ ๐Ÿค— Transformers ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€[Migration](migration) ใ‚ฌใ‚คใƒ‰ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใƒใƒผใ‚ธใƒงใƒณ้–“ใง้‡่ฆใชๅค‰ๆ›ดใŒๅฐŽๅ…ฅใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใงใ™ใ€‚ ใƒˆใƒฉใƒ–ใƒซใ‚ทใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐใจใƒ˜ใƒซใƒ—ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€Hugging Faceใ‚ณใƒผใ‚นใฎ [็ฌฌ8็ซ ](https://huggingface.co/course/chapter8/1?fw=pt) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Firewalled environments ไธ€้ƒจใฎใ‚ฏใƒฉใ‚ฆใƒ‰ไธŠใฎGPUใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚„ใ‚คใƒณใƒˆใƒฉใƒใƒƒใƒˆใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใฏใ€ๅค–้ƒจๆŽฅ็ถšใซๅฏพใ—ใฆใƒ•ใ‚กใ‚คใ‚ขใ‚ฆใ‚ฉใƒผใƒซใงไฟ่ญทใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ๆŽฅ็ถšใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚นใ‚ฏใƒชใƒ—ใƒˆใŒใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚„ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใ‚ˆใ†ใจใ™ใ‚‹ใจใ€ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใŒ้€”ไธญใงๆญขใพใ‚Šใ€ๆฌกใฎใƒกใƒƒใ‚ปใƒผใ‚ธใจใ‚ฟใ‚คใƒ ใ‚ขใ‚ฆใƒˆใ‚จใƒฉใƒผใŒ่กจ็คบใ•ใ‚Œใพใ™๏ผš ``` ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. 
```

ใ“ใฎๅ ดๅˆใ€ๆŽฅ็ถšใ‚จใƒฉใƒผใ‚’ๅ›ž้ฟใ™ใ‚‹ใŸใ‚ใซ[ใ‚ชใƒ•ใƒฉใ‚คใƒณใƒขใƒผใƒ‰](installation#offline-mode)ใง๐Ÿค— Transformersใ‚’ๅฎŸ่กŒใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚

## CUDA out of memory

ๆ•ฐ็™พไธ‡ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒใคๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ€้ฉๅˆ‡ใชใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใชใ—ใงใฏ่ชฒ้กŒใงใ™ใ€‚GPUใฎใƒกใƒขใƒชใŒไธ่ถณใ™ใ‚‹ใจใ‚ˆใใ‚ใ‚‹ใ‚จใƒฉใƒผใฎ1ใคใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš

```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```

ไปฅไธ‹ใฏใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ๆธ›ใ‚‰ใ™ใŸใ‚ใซ่ฉฆใ™ใ“ใจใŒใงใใ‚‹ใ„ใใคใ‹ใฎ่งฃๆฑบ็ญ–ใงใ™๏ผš

- [`TrainingArguments`]ใฎไธญใง[`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size)ใฎๅ€คใ‚’ๆธ›ใ‚‰ใ™ใ€‚
- [`TrainingArguments`]ใฎไธญใง[`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps)ใ‚’ไฝฟ็”จใ—ใฆใ€ๅ…จไฝ“็š„ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ๅŠนๆžœ็š„ใซๅข—ใ‚„ใ™ใ“ใจใ‚’่ฉฆใ™ใ€‚

<Tip>

ใƒกใƒขใƒช็ฏ€็ด„ใฎใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏใซใคใ„ใฆใฎ่ฉณ็ดฐใฏใ€[ใ‚ฌใ‚คใƒ‰](performance)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚

</Tip>

## Unable to load a saved TensorFlow model

TensorFlowใฎ[model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model)ใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ใƒขใƒ‡ใƒซๅ…จไฝ“ - ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ€้‡ใฟใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ่จญๅฎš - ใ‚’1ใคใฎใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ—ใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซใ‚’ๅ†ๅบฆ่ชญใฟ่พผใ‚€้š›ใซใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏใ€๐Ÿค— TransformersใŒใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซๅ†…ใฎใ™ในใฆใฎTensorFlow้–ข้€ฃใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’่ชญใฟ่พผใพใชใ„ใŸใ‚ใงใ™ใ€‚TensorFlowใƒขใƒ‡ใƒซใฎไฟๅญ˜ใจ่ชญใฟ่พผใฟใซ้–ขใ™ใ‚‹ๅ•้กŒใ‚’ๅ›ž้ฟใ™ใ‚‹ใŸใ‚ใซใ€ๆฌกใฎใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™๏ผš

- ใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚’`h5`ใƒ•ใ‚กใ‚คใƒซๆ‹กๅผตๅญใงไฟๅญ˜ใ—ใ€[`~TFPreTrainedModel.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ๅ†่ชญใฟ่พผใฟใ™ใ‚‹๏ผš

```py
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras

>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
```

- [`~TFPreTrainedModel.save_pretrained`]ใงใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใ—ใ€[`~TFPreTrainedModel.from_pretrained`]ใงๅ†ๅบฆ่ชญใฟ่พผใ‚€๏ผš

```py
>>> from transformers import TFPreTrainedModel

>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
```

## ImportError

ใ‚‚ใ†ไธ€ใคใ‚ˆใใ‚ใ‚‹ใ‚จใƒฉใƒผใฏใ€็‰นใซๆ–ฐใ—ใใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฎๅ ดๅˆใซ้ญ้‡ใ™ใ‚‹ใ“ใจใŒใ‚ใ‚‹`ImportError`ใงใ™๏ผš

```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```

ใ“ใ‚Œใ‚‰ใฎใ‚จใƒฉใƒผใ‚ฟใ‚คใƒ—ใซ้–ขใ—ใฆใฏใ€ๆœ€ๆ–ฐใƒใƒผใ‚ธใƒงใƒณใฎ๐Ÿค— TransformersใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใ€ๆœ€ๆ–ฐใฎใƒขใƒ‡ใƒซใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ใ‚ˆใ†ใซใ—ใฆใใ ใ•ใ„๏ผš

```bash
pip install transformers --upgrade
```

## CUDA error: device-side assert triggered

ๆ™‚ใ€…ใ€ใƒ‡ใƒใ‚คใ‚นใ‚ณใƒผใƒ‰ใงใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ—ใŸใจใ„ใ†ไธ€่ˆฌ็š„ใชCUDAใ‚จใƒฉใƒผใซ้ญ้‡ใ™ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚

```
RuntimeError: CUDA error: device-side assert triggered
```

ใ‚ˆใ‚Šๅ…ทไฝ“็š„ใชใ‚จใƒฉใƒผใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’ๅ–ๅพ—ใ™ใ‚‹ใŸใ‚ใซใ€ใพใšใฏCPUไธŠใงใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใฟใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ไปฅไธ‹ใฎ็’ฐๅขƒๅค‰ๆ•ฐใ‚’ใ‚ณใƒผใƒ‰ใฎๅ†’้ ญใซ่ฟฝๅŠ ใ—ใฆใ€CPUใซๅˆ‡ใ‚Šๆ›ฟใˆใฆใฟใฆใใ ใ•ใ„๏ผš

```py
>>> import os

>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
GPUใ‹ใ‚‰ใ‚ˆใ‚Š่‰ฏใ„ใƒˆใƒฌใƒผใ‚นใƒใƒƒใ‚ฏใ‚’ๅ–ๅพ—ใ™ใ‚‹ๅˆฅใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฏใ€ๆฌกใฎ็’ฐๅขƒๅค‰ๆ•ฐใ‚’ใ‚ณใƒผใƒ‰ใฎๅ…ˆ้ ญใซ่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚จใƒฉใƒผใฎ็™บ็”Ÿๆบใ‚’ๆŒ‡ใ™ใƒˆใƒฌใƒผใ‚นใƒใƒƒใ‚ฏใŒๅพ—ใ‚‰ใ‚Œใพใ™๏ผš ```py >>> import os >>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1" ``` ## Incorrect output when padding tokens aren't masked ไธ€้ƒจใฎใ‚ฑใƒผใ‚นใงใฏใ€`input_ids`ใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใŒๅซใพใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ๅ‡บๅŠ›ใฎ`hidden_state`ใŒๆญฃใ—ใใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‡ใƒขใƒณใ‚นใƒˆใƒฌใƒผใ‚ทใƒงใƒณใฎใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใจใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎ`pad_token_id`ใซใ‚ขใ‚ฏใ‚ปใ‚นใ—ใฆใ€ใใฎๅ€คใ‚’็ขบ่ชใงใใพใ™ใ€‚ไธ€้ƒจใฎใƒขใƒ‡ใƒซใงใฏ`pad_token_id`ใŒ`None`ใซใชใ‚‹ใ“ใจใ‚‚ใ‚ใ‚Šใพใ™ใŒใ€ๅธธใซๆ‰‹ๅ‹•ใง่จญๅฎšใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```py >>> from transformers import AutoModelForSequenceClassification >>> import torch >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") >>> model.config.pad_token_id 0 ``` ไปฅไธ‹ใฎไพ‹ใฏใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ใƒžใ‚นใ‚ฏใ›ใšใซๅ‡บๅŠ›ใ‚’่กจ็คบใ—ใŸใ‚‚ใฎใงใ™๏ผš ```py >>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>) ``` ไปฅไธ‹ใฏใ€็ฌฌ2ใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใฎๅฎŸ้š›ใฎๅ‡บๅŠ›ใงใ™๏ผš ```py >>> input_ids = torch.tensor([[7592]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๅคงๆŠตใฎๅ ดๅˆใ€ใƒขใƒ‡ใƒซใซใฏ `attention_mask` ใ‚’ๆไพ›ใ—ใฆใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’็„ก่ฆ–ใ—ใ€ใ“ใฎใ‚ˆใ†ใช็„ก้Ÿณใฎใ‚จใƒฉใƒผใ‚’ๅ›ž้ฟใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€2็•ช็›ฎใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใฎๅ‡บๅŠ›ใŒๅฎŸ้š›ใฎๅ‡บๅŠ›ใจไธ€่‡ดใ™ใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ <Tip> ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใซๅŸบใฅใ„ใฆ `attention_mask` ใ‚’่‡ชๅ‹•ใงไฝœๆˆใ—ใพใ™ใ€‚ </Tip> ```py >>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]) >>> output = model(input_ids, attention_mask=attention_mask) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๐Ÿค— Transformersใฏใ€ๆไพ›ใ•ใ‚Œใ‚‹ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ใƒžใ‚นใ‚ฏใ™ใ‚‹ใŸใ‚ใซ่‡ชๅ‹•็š„ใซ`attention_mask`ใ‚’ไฝœๆˆใ—ใพใ›ใ‚“ใ€‚ใใฎ็†็”ฑใฏไปฅไธ‹ใฎ้€šใ‚Šใงใ™๏ผš - ไธ€้ƒจใฎใƒขใƒ‡ใƒซใซใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใŒๅญ˜ๅœจใ—ใชใ„ๅ ดๅˆใŒใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ - ไธ€้ƒจใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใงใฏใ€ใƒฆใƒผใ‚ถใƒผใŒใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใซใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚’ๅ‘ใ‘ใ‚‹ใ“ใจใ‚’ๆœ›ใ‚€ๅ ดๅˆใŒใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ ## ValueError: Unrecognized configuration class XYZ for this kind of AutoModel ไธ€่ˆฌ็š„ใซใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใซใฏ[`AutoModel`]ใ‚ฏใƒฉใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใฎใ‚ฏใƒฉใ‚นใฏใ€่จญๅฎšใซๅŸบใฅใ„ใฆไธŽใˆใ‚‰ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ๆญฃใ—ใ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่‡ชๅ‹•็š„ใซๆŽจๆธฌใŠใ‚ˆใณใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹้š›ใซใ“ใฎ`ValueError`ใŒ่กจ็คบใ•ใ‚Œใ‚‹ๅ 
ดๅˆใ€Autoใ‚ฏใƒฉใ‚นใฏไธŽใˆใ‚‰ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ่จญๅฎšใ‹ใ‚‰ใ€ใƒญใƒผใƒ‰ใ—ใ‚ˆใ†ใจใ—ใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใฎ็จฎ้กžใธใฎใƒžใƒƒใƒ”ใƒณใ‚ฐใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใชใ‹ใฃใŸใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใซใฏใ€็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใ‚’ใ‚ตใƒใƒผใƒˆใ—ใชใ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒใ‚ใ‚‹ๅ ดๅˆใซใ“ใฎใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ ไพ‹ใˆใฐใ€่ณชๅ•ๅฟœ็ญ”ใฎใŸใ‚ใฎGPT2ใŒๅญ˜ๅœจใ—ใชใ„ๅ ดๅˆใ€ๆฌกใฎไพ‹ใงใ“ใฎใ‚จใƒฉใƒผใŒ่กจ็คบใ•ใ‚Œใพใ™๏ผš ไธŠ่จ˜ใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ๆ—ฅๆœฌ่ชžใซ็ฟป่จณใ—ใ€Markdownใƒ•ใ‚กใ‚คใƒซใจใ—ใฆใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ—ใพใ—ใŸใ€‚ ```py >>> from transformers import AutoProcessor, AutoModelForQuestionAnswering >>> processor = AutoProcessor.from_pretrained("gpt2-medium") >>> model = AutoModelForQuestionAnswering.from_pretrained("gpt2-medium") ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ... ```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/model_sharing.md
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Share a Model

The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to make artificial intelligence available to everyone. We encourage you to consider sharing your model with the community to help others save time and resources.

In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):

- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.

<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<Tip>

To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.

</Tip>

## Repository Features

Each repository on the Model Hub behaves like a typical GitHub repository. Repositories offer versioning, commit history, and the ability to visualize differences.

The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag, or branch.

As a result, you can load a specific model version with the `revision` parameter:

```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, branch name, or commit hash
... )
```

Files are also easily edited in a repository, and you can view the commit history as well as the differences:

![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png)

## Set Up

Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):

```bash
huggingface-cli login
```

If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact programmatically with the Hub.

```bash
pip install huggingface_hub
```

Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with:

```python
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Convert a Model for all frameworks

To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users can still load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on the fly.

Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.

<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:

```python
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:

```python
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
```

Then you can save your new TensorFlow model with its new checkpoint:

```python
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:

```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
...     "path/to/awesome-name-you-picked", from_pt=True
... )
```
</jax>
</frameworkcontent>

## Push a model during training

<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>

Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:

```py
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
```

Pass your training arguments as usual to [`Trainer`]:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results, and framework versions to your model card!

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:

- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.

```python
>>> from transformers import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
```

Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub:

```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>

## Use the `push_to_hub` function

You can also call `push_to_hub` directly on your model to upload it to the Hub.

Specify your model name in `push_to_hub`:

```py
>>> pt_model.push_to_hub("my-awesome-model")
```

This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function:

```py
>>> from transformers import AutoModel

>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
```

If you belong to an organization and want to push your model under the organization name instead, add it to the `repo_id`:

```python
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
```

The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:

```py
>>> tokenizer.push_to_hub("my-awesome-model")
```

Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:

```python
>>> tf_model.push_to_hub("my-awesome-model")
```

Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.

For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).

## Upload with the web interface

Users who prefer a no-code approach can upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png)

From here, add some information about your model:

- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.

Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png)

## Add a model card

To make sure users understand your model's capabilities, limitations, potential biases, and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:

* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.

Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file, such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
<!-- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# How to add a model to 🤗 Transformers?

The 🤗 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the 🤗 Transformers library and of the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models, and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)).

<Tip>

If you're interested in implementing a TensorFlow model, take a look at the [How to convert a 🤗 Transformers model to TensorFlow](add_tensorflow_model) guide!

</Tip>

Along the way, you'll:

- get insights into open-source best practices
- understand the design principles behind one of the most popular deep learning libraries
- learn how to efficiently test large models
- learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code

A member of the Hugging Face team will be there to support you, so you'll never be alone. 🤗 ❤️

To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in 🤗 Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on them.

Once you've opened a new model request, the first step is to get familiar with 🤗 Transformers if you aren't already!

## General overview of 🤗 Transformers

First, you should get a general overview of 🤗 Transformers. 🤗 Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scaling 🤗 Transformers while keeping maintenance costs at a reasonable level.

A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). As a result of our way of working, there are some choices that we try to apply to all models:

- Composition is generally favored over abstraction.
- Duplicating code is not always bad if it strongly improves the readability or accessibility of a model.
- Model files should be as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_....py` file.

In our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for inference, but also the very product itself.

### Overview of models

To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to 🤗 Transformers `BrandNewBert`.

Let's take a look:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>

As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel`, which in turn inherits from [`PreTrainedModel`], and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM`, does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that is called in its forward pass to keep the level of abstraction low.

Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and can thus be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config  # model has access to its config
```

Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats: the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`] as well, so that both the model and the configuration are saved.
ใƒขใƒ‡ใƒซใฏ*pytorch_model.bin*ใƒ•ใ‚กใ‚คใƒซใซใ€่จญๅฎšใฏ*config.json*ใƒ•ใ‚กใ‚คใƒซใซใ‚ทใƒชใ‚ขใƒซๅŒ–ใ•ใ‚Œใพใ™ใ€‚[`~PreTrainedModel.save_pretrained`]ใ‚’ๅ‘ผใณๅ‡บใ™ใจใ€่‡ชๅ‹•็š„ใซ[`~PretrainedConfig.save_pretrained`]ใ‚‚ๅ‘ผใณๅ‡บใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใจ่จญๅฎšใฎไธกๆ–นใŒไฟๅญ˜ใ•ใ‚Œใพใ™ใ€‚ ### Code style ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹้š›ใซใฏใ€Transformersใฏๆ„่ฆ‹ใŒใ‚ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใงใ‚ใ‚Šใ€ใ‚ณใƒผใƒ‰ใฎๆ›ธใๆ–นใซ้–ขใ—ใฆใ„ใใคใ‹ใฎ็‹ฌ่‡ชใฎ่€ƒใˆๆ–นใŒใ‚ใ‚Šใพใ™ :-) 1. ใƒขใƒ‡ใƒซใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฏใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใซๅฎŒๅ…จใซ่จ˜่ฟฐใ•ใ‚Œใ€ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎไป–ใฎใƒขใƒ‡ใƒซใจใฏๅฎŒๅ…จใซ็‹ฌ็ซ‹ใ—ใฆใ„ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไป–ใฎใƒขใƒ‡ใƒซใ‹ใ‚‰ใƒ–ใƒญใƒƒใ‚ฏใ‚’ๅ†ๅˆฉ็”จใ—ใŸใ„ๅ ดๅˆใ€ใ‚ณใƒผใƒ‰ใ‚’ใ‚ณใƒ”ใƒผใ—ใฆใƒˆใƒƒใƒ—ใซ`# Copied from`ใ‚ณใƒกใƒณใƒˆใ‚’ไป˜ใ‘ใฆ่ฒผใ‚Šไป˜ใ‘ใพใ™๏ผˆ่‰ฏใ„ไพ‹ใฏ[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)ใ€ใ‚ณใƒ”ใƒผใซ้–ขใ™ใ‚‹่ฉณ็ดฐใชใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฏ[ใ“ใ“](pr_checks#check-copies)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€‚ 2. ใ‚ณใƒผใƒ‰ใฏๅฎŒๅ…จใซ็†่งฃๅฏ่ƒฝใงใชใ‘ใ‚Œใฐใชใ‚Šใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏ่จ˜่ฟฐ็š„ใชๅค‰ๆ•ฐๅใ‚’้ธๆŠžใ—ใ€็œ็•ฅๅฝขใ‚’้ฟใ‘ใ‚‹ในใใงใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ไพ‹ใˆใฐใ€`act`ใงใฏใชใ`activation`ใŒๅฅฝใพใ‚Œใพใ™ใ€‚1ๆ–‡ๅญ—ใฎๅค‰ๆ•ฐๅใฏใ€forใƒซใƒผใƒ—ๅ†…ใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใงใชใ„้™ใ‚Šใ€ๅผทใ้žๆŽจๅฅจใงใ™ใ€‚ 3. ใ‚ˆใ‚Šไธ€่ˆฌ็š„ใซใ€้ญ”ๆณ•ใฎใ‚ˆใ†ใช็Ÿญใ„ใ‚ณใƒผใƒ‰ใ‚ˆใ‚Šใ‚‚้•ทใใฆๆ˜Ž็คบ็š„ใชใ‚ณใƒผใƒ‰ใ‚’ๅฅฝใฟใพใ™ใ€‚ 4. PyTorchใงใฏ`nn.Sequential`ใ‚’ใ‚ตใƒ–ใ‚ฏใƒฉใ‚นๅŒ–ใ›ใšใซใ€`nn.Module`ใ‚’ใ‚ตใƒ–ใ‚ฏใƒฉใ‚นๅŒ–ใ—ใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’่จ˜่ฟฐใ—ใ€ใ‚ณใƒผใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ไป–ใฎไบบใŒ็ฐกๅ˜ใซใƒ‡ใƒใƒƒใ‚ฐใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใƒ—ใƒชใƒณใƒˆใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚„ใƒ–ใƒฌใƒผใ‚ฏใƒใ‚คใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใฆใƒ‡ใƒใƒƒใ‚ฐใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ 5. ้–ขๆ•ฐใฎใ‚ทใ‚ฐใƒใƒใƒฃใฏๅž‹ใ‚ขใƒŽใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ไป˜ใ‘ใ‚‹ในใใงใ™ใ€‚ใใฎไป–ใฎ้ƒจๅˆ†ใซ้–ขใ—ใฆใฏใ€ๅž‹ใ‚ขใƒŽใƒ†ใƒผใ‚ทใƒงใƒณใ‚ˆใ‚Šใ‚‚่‰ฏใ„ๅค‰ๆ•ฐๅใŒ่ชญใฟใ‚„ใ™ใ็†่งฃใ—ใ‚„ใ™ใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ### Overview of tokenizers ใพใ ๅฎŒไบ†ใ—ใฆใ„ใพใ›ใ‚“ :-( ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใฏ่ฟ‘ๆ—ฅไธญใซ่ฟฝๅŠ ใ•ใ‚Œใพใ™๏ผ ## Step-by-step recipe to add a model to ๐Ÿค— Transformers ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ๆ–นๆณ•ใฏไบบใใ‚Œใžใ‚Œ็•ฐใชใ‚‹ใŸใ‚ใ€ไป–ใฎใ‚ณใƒณใƒˆใƒชใƒ“ใƒฅใƒผใ‚ฟใƒผใŒ๐Ÿค— Transformersใซใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹้š›ใฎ่ฆ็ด„ใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใŒ้žๅธธใซๅฝน็ซ‹ใคๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใฏใ€ไป–ใฎใ‚ณใƒณใƒˆใƒชใƒ“ใƒฅใƒผใ‚ฟใƒผใŒ๐Ÿค— Transformersใซใƒขใƒ‡ใƒซใ‚’ใƒใƒผใƒˆใ™ใ‚‹้š›ใฎใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใƒ–ใƒญใ‚ฐๆŠ•็จฟใฎใƒชใ‚นใƒˆใงใ™ใ€‚ 1. [GPT2ใƒขใƒ‡ใƒซใฎใƒใƒผใƒ†ใ‚ฃใƒณใ‚ฐ](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf) 2. 
### Overview of tokenizers

Not quite ready yet :-( This section will be added soon!

## Step-by-step recipe to add a model to 🤗 Transformers

Everyone has different preferences for how to port a model, so it can be very helpful to look at summaries of how other contributors ported models to 🤗 Transformers. Here is a list of community blog posts on porting a model:

1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for a new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that your model's tokenizer might be based on one model implementation while the model's modeling code is based on another one. *E.g.*, FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all the theoretical aspects of the model's paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step of adding your model. Don't hesitate to ask if you notice you are not making progress.

In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers.

The following list is a summary of everything that has to be done to add a model, and can be used by you as a To-Do list:

- ☐ (Optional) Understood the model's theoretical aspects
- ☐ Prepared the 🤗 Transformers dev environment
- ☐ Set up a debugging environment of the original repository
- ☐ Created a script that successfully runs the `forward()` pass using the original repository and checkpoint
- ☐ Successfully added the model skeleton to 🤗 Transformers
- ☐ Successfully converted the original checkpoint to a 🤗 Transformers checkpoint
- ☐ Successfully ran the `forward()` pass in 🤗 Transformers that gives identical output to the original checkpoint
- ☐ Finished the model tests in 🤗 Transformers
- ☐ Successfully added the tokenizer in 🤗 Transformers
- ☐ Ran the end-to-end integration tests
- ☐ Finished the docs
- ☐ Uploaded the model weights to the Hub
- ☐ Submitted the pull request
- ☐ (Optional) Added a demo notebook

To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on the job*, it is totally fine to directly dive into `BrandNewBert`'s code base.
ใ“ใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฏใ€ใ‚จใƒณใ‚ธใƒ‹ใ‚ขใƒชใƒณใ‚ฐใฎใ‚นใ‚ญใƒซใŒ็†่ซ–็š„ใชใ‚นใ‚ญใƒซใ‚ˆใ‚Šใ‚‚ๅ„ชใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ `BrandNewBert`ใฎ่ซ–ๆ–‡ใ‚’็†่งฃใ™ใ‚‹ใฎใซ่‹ฆๅŠดใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใพใŸใฏ็ง‘ๅญฆ็š„ใช่ซ–ๆ–‡ใ‚’่ชญใ‚€ใ‚ˆใ‚Šใ‚‚ใƒ—ใƒญใ‚ฐใƒฉใƒŸใƒณใ‚ฐใ‚’ๆฅฝใ—ใ‚“ใงใ„ใ‚‹ๅ ดๅˆใซ้ฉใ—ใฆใ„ใพใ™ใ€‚ ### 1. (Optional) Theoretical aspects of BrandNewBert BrandNewBertใฎ่ซ–ๆ–‡ใŒใ‚ใ‚‹ๅ ดๅˆใ€ใใฎ่ชฌๆ˜Žใ‚’่ชญใ‚€ใŸใ‚ใฎๆ™‚้–“ใ‚’ๅ–ใ‚‹ในใใงใ™ใ€‚่ซ–ๆ–‡ใฎไธญใซใฏ็†่งฃใŒ้›ฃใ—ใ„้ƒจๅˆ†ใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใใฎๅ ดๅˆใงใ‚‚ๅฟƒ้…ใ—ใชใ„ใงใใ ใ•ใ„ใ€‚็›ฎๆจ™ใฏ่ซ–ๆ–‡ใฎๆทฑใ„็†่ซ–็š„็†่งฃใ‚’ๅพ—ใ‚‹ใ“ใจใงใฏใชใใ€ ๐Ÿค— Transformersใงใƒขใƒ‡ใƒซใ‚’ๅŠนๆžœ็š„ใซๅ†ๅฎŸ่ฃ…ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชๆƒ…ๅ ฑใ‚’ๆŠฝๅ‡บใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ใŸใ ใ—ใ€็†่ซ–็š„ใชๅด้ขใซใ‚ใพใ‚Šๅคšใใฎๆ™‚้–“ใ‚’ใ‹ใ‘ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ไปฃใ‚ใ‚Šใซใ€ๅฎŸ่ทต็š„ใชๅด้ขใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใพใ—ใ‚‡ใ†ใ€‚ๅ…ทไฝ“็š„ใซใฏๆฌกใฎ็‚นใงใ™๏ผš - *brand_new_bert*ใฏใฉใฎ็จฎ้กžใฎใƒขใƒ‡ใƒซใงใ™ใ‹๏ผŸ BERTใฎใ‚ˆใ†ใชใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎใฟใฎใƒขใƒ‡ใƒซใงใ™ใ‹๏ผŸ GPT2ใฎใ‚ˆใ†ใชใƒ‡ใ‚ณใƒผใƒ€ใƒผใฎใฟใฎใƒขใƒ‡ใƒซใงใ™ใ‹๏ผŸ BARTใฎใ‚ˆใ†ใชใ‚จใƒณใ‚ณใƒผใƒ€ใƒผ-ใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใงใ™ใ‹๏ผŸ [model_summary](model_summary)ใ‚’ๅ‚็…งใ—ใฆใ€ใ“ใ‚Œใ‚‰ใฎ้•ใ„ใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚ŠใŸใ„ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ - *brand_new_bert*ใฎๅฟœ็”จๅˆ†้‡Žใฏไฝ•ใงใ™ใ‹๏ผŸ ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใงใ™ใ‹๏ผŸ ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใงใ™ใ‹๏ผŸ Seq2Seqใ‚ฟใ‚นใ‚ฏใ€ไพ‹ใˆใฐ่ฆ็ด„ใงใ™ใ‹๏ผŸ - ใƒขใƒ‡ใƒซใ‚’BERT/GPT-2/BARTใจใฏ็•ฐใชใ‚‹ใ‚‚ใฎใซใ™ใ‚‹ๆ–ฐใ—ใ„ๆฉŸ่ƒฝใฏไฝ•ใงใ™ใ‹๏ผŸ - ๆ—ขๅญ˜ใฎ[๐Ÿค— Transformersใƒขใƒ‡ใƒซ](https://huggingface.co/transformers/#contents)ใฎไธญใง*brand_new_bert*ใซๆœ€ใ‚‚ไผผใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใฏใฉใ‚Œใงใ™ใ‹๏ผŸ - ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎ็จฎ้กžใฏไฝ•ใงใ™ใ‹๏ผŸ SentencePieceใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใงใ™ใ‹๏ผŸ WordPieceใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใงใ™ใ‹๏ผŸ BERTใ‚„BARTใงไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจๅŒใ˜ใงใ™ใ‹๏ผŸ ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎ่‰ฏใ„ๆฆ‚่ฆใ‚’ๅพ—ใŸใจๆ„Ÿใ˜ใŸใ‚‰ใ€Hugging Faceใƒใƒผใƒ ใซ่ณชๅ•ใ‚’้€ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ใ“ใ‚Œใซใฏใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ€ๆณจๆ„ๅฑคใชใฉใซ้–ขใ™ใ‚‹่ณชๅ•ใŒๅซใพใ‚Œใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ็งใŸใกใฏๅ–œใ‚“ใงใŠๆ‰‹ไผใ„ใ—ใพใ™ใ€‚ ### 2. Next prepare your environment 1. ใƒชใƒใ‚ธใƒˆใƒชใฎใƒšใƒผใ‚ธใงใ€ŒForkใ€ใƒœใ‚ฟใƒณใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ€[ใƒชใƒใ‚ธใƒˆใƒช](https://github.com/huggingface/transformers)ใ‚’ใƒ•ใ‚ฉใƒผใ‚ฏใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚ณใƒผใƒ‰ใฎใ‚ณใƒ”ใƒผใŒGitHubใƒฆใƒผใ‚ถใƒผใ‚ขใ‚ซใ‚ฆใƒณใƒˆใฎไธ‹ใซไฝœๆˆใ•ใ‚Œใพใ™ใ€‚ 2. ใƒญใƒผใ‚ซใƒซใƒ‡ใ‚ฃใ‚นใ‚ฏใซใ‚ใ‚‹`transformers`ใƒ•ใ‚ฉใƒผใ‚ฏใ‚’ใ‚ฏใƒญใƒผใƒณใ—ใ€ใƒ™ใƒผใ‚นใƒชใƒใ‚ธใƒˆใƒชใ‚’ใƒชใƒขใƒผใƒˆใจใ—ใฆ่ฟฝๅŠ ใ—ใพใ™๏ผš ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` 3. 
้–‹็™บ็’ฐๅขƒใ‚’ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ™ใ‚‹ใŸใ‚ใซใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` ใŠไฝฟใ„ใฎOSใซๅฟœใ˜ใฆใ€ใŠใ‚ˆใณTransformersใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฎไพๅญ˜้–ขไฟ‚ใฎๆ•ฐใŒๅข—ใˆใฆใ„ใ‚‹ใŸใ‚ใ€ใ“ใฎใ‚ณใƒžใƒณใƒ‰ใงใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ใใฎๅ ดๅˆใฏใ€ไฝœๆฅญใ—ใฆใ„ใ‚‹Deep Learningใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ๏ผˆPyTorchใ€TensorFlowใ€ใŠใ‚ˆใณ/ใพใŸใฏFlax๏ผ‰ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใ€ๆฌกใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš ```bash pip install -e ".[quality]" ``` ใ“ใ‚Œใฏใปใจใ‚“ใฉใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซใฏๅๅˆ†ใงใ‚ใ‚‹ใฏใšใงใ™ใ€‚ใใฎๅพŒใ€่ฆชใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซๆˆปใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```bash cd .. ``` 4. Transformersใซ*brand_new_bert*ใฎPyTorchใƒใƒผใ‚ธใƒงใƒณใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚PyTorchใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใซใฏใ€ https://pytorch.org/get-started/locally/ ใฎๆŒ‡็คบใซๅพ“ใฃใฆใใ ใ•ใ„ใ€‚ **ๆณจๆ„:** CUDAใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’CPUใงๅ‹•ไฝœใ•ใ›ใ‚‹ใ“ใจใงๅๅˆ†ใงใ™ใ€‚ 5. *brand_new_bert*ใ‚’็งปๆคใ™ใ‚‹ใซใฏใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใธใฎใ‚ขใ‚ฏใ‚ปใ‚นใ‚‚ๅฟ…่ฆใงใ™ใ€‚ ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` *brand_new_bert*ใ‚’๐Ÿค— Transformersใซใƒใƒผใƒˆใ™ใ‚‹ใŸใ‚ใฎ้–‹็™บ็’ฐๅขƒใ‚’่จญๅฎšใ—ใพใ—ใŸใ€‚ ### 3.-4. Run a pretrained checkpoint using the original repository ๆœ€ๅˆใซใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎ*brand_new_bert*ใƒชใƒใ‚ธใƒˆใƒชใงไฝœๆฅญใ—ใพใ™ใ€‚้€šๅธธใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎๅฎŸ่ฃ…ใฏ้žๅธธใซใ€Œ็ ”็ฉถ็š„ใ€ใงใ‚ใ‚Šใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใŒไธ่ถณใ—ใฆใ„ใŸใ‚Šใ€ใ‚ณใƒผใƒ‰ใŒ็†่งฃใ—ใซใใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใ‹ใ—ใ€ใ“ใ‚ŒใŒ*brand_new_bert*ใ‚’ๅ†ๅฎŸ่ฃ…ใ™ใ‚‹ๅ‹•ๆฉŸใจใชใ‚‹ในใใงใ™ใ€‚Hugging Faceใงใฏใ€ไธป่ฆใช็›ฎๆจ™ใฎ1ใคใŒใ€ๅ‹•ไฝœใ™ใ‚‹ใƒขใƒ‡ใƒซใ‚’ๅ–ใ‚Šใ€ใใ‚Œใ‚’ใงใใ‚‹ใ ใ‘**ใ‚ขใ‚ฏใ‚ปใ‚นๅฏ่ƒฝใงใƒฆใƒผใ‚ถใƒผใƒ•ใƒฌใƒณใƒ‰ใƒชใƒผใง็พŽใ—ใ„**ใ‚‚ใฎใซๆ›ธใ็›ดใ™ใ“ใจใงใ™ใ€‚ใ“ใ‚Œใฏใ€๐Ÿค— Transformersใซใƒขใƒ‡ใƒซใ‚’ๅ†ๅฎŸ่ฃ…ใ™ใ‚‹ๆœ€ใ‚‚้‡่ฆใชๅ‹•ๆฉŸใงใ™ - ่ค‡้›‘ใชๆ–ฐใ—ใ„NLPๆŠ€่ก“ใ‚’**่ชฐใซใงใ‚‚**ใ‚ขใ‚ฏใ‚ปใ‚นๅฏ่ƒฝใซใ—ใ‚ˆใ†ใจใ™ใ‚‹่ฉฆใฟใงใ™ใ€‚ ใพใšใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใซๅ…ฅใ‚Š่พผใ‚€ใ“ใจใ‹ใ‚‰ๅง‹ใ‚ใ‚‹ในใใงใ™ใ€‚ ๅ…ฌๅผใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใงๆญฃๅธธใซๅฎŸ่กŒใ™ใ‚‹ใ“ใจใฏใ€้€šๅธธใ€**ๆœ€ใ‚‚ๅ›ฐ้›ฃใช**ใ‚นใƒ†ใƒƒใƒ—ใงใ™ใ€‚ ็งใŸใกใฎ็ตŒ้จ“ใ‹ใ‚‰ใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใซๆ…ฃใ‚Œใ‚‹ใฎใซๆ™‚้–“ใ‚’ใ‹ใ‘ใ‚‹ใ“ใจใŒ้žๅธธใซ้‡่ฆใงใ™ใ€‚ไปฅไธ‹ใฎใ“ใจใ‚’็†่งฃใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš - ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใฉใ“ใง่ฆ‹ใคใ‘ใ‚‹ใ‹๏ผŸ - ๅฏพๅฟœใ™ใ‚‹ใƒขใƒ‡ใƒซใซไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ๆ–นๆณ•ใฏ๏ผŸ - ใƒขใƒ‡ใƒซใ‹ใ‚‰็‹ฌ็ซ‹ใ—ใฆใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ๅฎŸ่กŒใ™ใ‚‹ๆ–นๆณ•ใฏ๏ผŸ - 1ใคใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’่ฟฝ่ทกใ—ใฆใ€ๅ˜็ด”ใชใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใซๅฟ…่ฆใชใ‚ฏใƒฉใ‚นใจ้–ขๆ•ฐใŒใ‚ใ‹ใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚้€šๅธธใ€ใ“ใ‚Œใ‚‰ใฎ้–ขๆ•ฐใ ใ‘ใ‚’ๅ†ๅฎŸ่ฃ…ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ใƒขใƒ‡ใƒซใฎ้‡่ฆใชใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ‚’็‰นๅฎšใงใใ‚‹ใ“ใจ๏ผšใƒขใƒ‡ใƒซใฎใ‚ฏใƒฉใ‚นใฏใฉใ“ใซใ‚ใ‚Šใพใ™ใ‹๏ผŸใƒขใƒ‡ใƒซใฎใ‚ตใƒ–ใ‚ฏใƒฉใ‚นใ€*ไพ‹* EncoderModelใ€DecoderModelใŒใ‚ใ‚Šใพใ™ใ‹๏ผŸ่‡ชๅทฑๆณจๆ„ใƒฌใ‚คใƒคใƒผใฏใฉใ“ใซใ‚ใ‚Šใพใ™ใ‹๏ผŸ่ค‡ๆ•ฐใฎ็•ฐใชใ‚‹ๆณจๆ„ใƒฌใ‚คใƒคใƒผใ€*ไพ‹* *่‡ชๅทฑๆณจๆ„*ใ€*ใ‚ฏใƒญใ‚นใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ*ใชใฉใŒๅญ˜ๅœจใ—ใพใ™ใ‹๏ผŸ - 
ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใฎ็’ฐๅขƒใงใƒขใƒ‡ใƒซใ‚’ใƒ‡ใƒใƒƒใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใฏ๏ผŸ*print*ใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ€*ipdb*ใฎใ‚ˆใ†ใชๅฏพ่ฉฑๅž‹ใƒ‡ใƒใƒƒใ‚ฌใ‚’ไฝฟ็”จใงใใ‚‹ใ‹ใ€PyCharmใฎใ‚ˆใ†ใชๅŠน็އ็š„ใชIDEใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ใƒ‡ใƒใƒƒใ‚ฐใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ‹๏ผŸ ้‡่ฆใชใฎใฏใ€ใƒใƒผใƒ†ใ‚ฃใƒณใ‚ฐใƒ—ใƒญใ‚ปใ‚นใ‚’้–‹ๅง‹ใ™ใ‚‹ๅ‰ใซใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใงใ‚ณใƒผใƒ‰ใ‚’**ๅŠน็އ็š„ใซ**ใƒ‡ใƒใƒƒใ‚ฐใงใใ‚‹ใ“ใจใงใ™๏ผใพใŸใ€ใ“ใ‚Œใฏใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใƒฉใ‚คใƒ–ใƒฉใƒชใงไฝœๆฅญใ—ใฆใ„ใ‚‹ใ“ใจใ‚’่ฆšใˆใฆใŠใ„ใฆใใ ใ•ใ„ใ€‚ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใงใ‚ณใƒผใƒ‰ใ‚’่ชฟในใ‚‹่ชฐใ‹ใ‚’ๆญ“่ฟŽใ™ใ‚‹ใŸใ‚ใซใ€ๅ•้กŒใ‚’ใ‚ชใƒผใƒ—ใƒณใซใ—ใŸใ‚Šใ€ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้€ไฟกใ—ใŸใ‚Šใ™ใ‚‹ใ“ใจใ‚’ใŸใ‚ใ‚‰ใ‚ใชใ„ใงใใ ใ•ใ„ใ€‚ใ“ใฎใƒชใƒใ‚ธใƒˆใƒชใฎใƒกใƒณใƒ†ใƒŠใƒผใฏใ€ๅฝผใ‚‰ใฎใ‚ณใƒผใƒ‰ใ‚’่ชฟในใฆใใ‚Œใ‚‹ไบบใซๅฏพใ—ใฆ้žๅธธใซๅ–œใ‚“ใงใ„ใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใงใ™๏ผ ใ“ใฎๆฎต้šŽใงใฏใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒขใƒ‡ใƒซใฎใƒ‡ใƒใƒƒใ‚ฐใซใฉใฎใ‚ˆใ†ใช็’ฐๅขƒใจๆˆฆ็•ฅใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹ใฏใ€ใ‚ใชใŸๆฌก็ฌฌใงใ™ใ€‚ๆœ€ๅˆใซใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใซ้–ขใ™ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’ใƒ‡ใƒใƒƒใ‚ฐใงใใ‚‹ใ“ใจใŒ้žๅธธใซ้‡่ฆใงใ™ใ€‚ใพใŸใ€GPU็’ฐๅขƒใ‚’ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ™ใ‚‹ใ“ใจใฏใŠๅ‹งใ‚ใ—ใพใ›ใ‚“ใ€‚ใพใšใ€CPUไธŠใงไฝœๆฅญใ—ใ€ใƒขใƒ‡ใƒซใŒใ™ใงใซ๐Ÿค— Transformersใซๆญฃๅธธใซใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ๆœ€ๅพŒใซใ€ใƒขใƒ‡ใƒซใŒGPUไธŠใงใ‚‚ๆœŸๅพ…้€šใ‚Šใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’ๆคœ่จผใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ไธ€่ˆฌ็š„ใซใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒขใƒ‡ใƒซใ‚’ๅฎŸ่กŒใ™ใ‚‹ใŸใ‚ใฎ2ใคใฎใƒ‡ใƒใƒƒใ‚ฐ็’ฐๅขƒใŒใ‚ใ‚Šใพใ™๏ผš - [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb) - ใƒญใƒผใ‚ซใƒซใชPythonใ‚นใ‚ฏใƒชใƒ—ใƒˆใ€‚ JupyterใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใฏใ€ใ‚ปใƒซใ”ใจใซๅฎŸ่กŒใงใใ‚‹ใŸใ‚ใ€่ซ–็†็š„ใชใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ‚’ใ‚ˆใ‚Šๅˆ†ๅ‰ฒใ—ใ€ไธญ้–“็ตๆžœใ‚’ไฟๅญ˜ใงใใ‚‹ใŸใ‚ใ€ใƒ‡ใƒใƒƒใ‚ฐใ‚ตใ‚คใ‚ฏใƒซใŒ้€Ÿใใชใ‚‹ใจใ„ใ†ๅˆฉ็‚นใŒใ‚ใ‚Šใพใ™ใ€‚ใพใŸใ€ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใฏไป–ใฎๅ…ฑๅŒไฝœๆฅญ่€…ใจ็ฐกๅ˜ใซๅ…ฑๆœ‰ใงใใ‚‹ใ“ใจใŒๅคšใใ€Hugging Faceใƒใƒผใƒ ใซๅŠฉใ‘ใ‚’ๆฑ‚ใ‚ใ‚‹ๅ ดๅˆใซ้žๅธธใซๅฝน็ซ‹ใคๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚JupyterใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใซ็ฒพ้€šใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใใ‚Œ ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` ใƒ‡ใƒใƒƒใ‚ฐๆˆฆ็•ฅใซใคใ„ใฆใฏใ€้€šๅธธใ€ใ„ใใคใ‹ใฎ้ธๆŠž่‚ขใŒใ‚ใ‚Šใพใ™๏ผš - ๅ…ƒใฎใƒขใƒ‡ใƒซใ‚’ๅคšใใฎๅฐใ•ใชใƒ†ใ‚นใƒˆๅฏ่ƒฝใชใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†่งฃใ—ใ€ใใ‚Œใžใ‚Œใซๅฏพใ—ใฆๅ‰ๆ–นใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ—ใฆๆคœ่จผใ—ใพใ™ - ๅ…ƒใฎใƒขใƒ‡ใƒซใ‚’ๅ…ƒใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจๅ…ƒใฎใƒขใƒ‡ใƒซใซใฎใฟๅˆ†่งฃใ—ใ€ใใ‚Œใ‚‰ใซๅฏพใ—ใฆๅ‰ๆ–นใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ—ใ€ๆคœ่จผใฎใŸใ‚ใซไธญ้–“ใฎใƒ—ใƒชใƒณใƒˆใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใพใŸใฏใƒ–ใƒฌใƒผใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใพใ™ ๅ†ๅบฆใ€ใฉใฎๆˆฆ็•ฅใ‚’้ธๆŠžใ™ใ‚‹ใ‹ใฏใ‚ใชใŸๆฌก็ฌฌใงใ™ใ€‚ๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใซไพๅญ˜ใ™ใ‚‹ใ“ใจใŒๅคšใใ€ๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใซๅฟœใ˜ใฆไธ€ๆ–นใพใŸใฏไป–ๆ–นใŒๆœ‰ๅˆฉใชใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใŒใƒขใƒ‡ใƒซใ‚’ๅฐใ•ใชใ‚ตใƒ–ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†่งฃใงใใ‚‹ๅ ดๅˆใ€*ไพ‹ใˆใฐ*ๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใŒ็ฐกๅ˜ใซใ‚คใƒผใ‚ฌใƒผใƒขใƒผใƒ‰ใงๅฎŸ่กŒใงใใ‚‹ๅ 
ดๅˆใ€ใใ‚Œใ‚’่กŒใ†ไพกๅ€คใŒ้€šๅธธใ‚ใ‚Šใพใ™ใ€‚ๆœ€ๅˆใ‹ใ‚‰ใ‚ˆใ‚Š้›ฃใ—ใ„ๆ–นๆณ•ใ‚’้ธๆŠžใ™ใ‚‹ใ“ใจใซใฏใ„ใใคใ‹ใฎ้‡่ฆใชๅˆฉ็‚นใŒใ‚ใ‚Šใพใ™๏ผš - ๅพŒใงๅ…ƒใฎใƒขใƒ‡ใƒซใ‚’๐Ÿค— TransformersใฎๅฎŸ่ฃ…ใจๆฏ”่ผƒใ™ใ‚‹้š›ใซใ€ๅ„ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใŒๅฏพๅฟœใ™ใ‚‹๐Ÿค— TransformersๅฎŸ่ฃ…ใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใจไธ€่‡ดใ™ใ‚‹ใ“ใจใ‚’่‡ชๅ‹•็š„ใซๆคœ่จผใงใใ‚‹ใŸใ‚ใ€่ฆ–่ฆš็š„ใชๆฏ”่ผƒใซไพๅญ˜ใ›ใšใซๆธˆใฟใพใ™ - ๅคงใใชๅ•้กŒใ‚’ๅฐใ•ใชๅ•้กŒใซๅˆ†่งฃใ™ใ‚‹ใ€ใคใพใ‚Šๅ€‹ใ€…ใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใฎใฟใ‚’ใƒใƒผใƒ†ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ๅ•้กŒใซๅˆ†ๅ‰ฒใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใ€ไฝœๆฅญใ‚’ๆง‹้€ ๅŒ–ใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ - ใƒขใƒ‡ใƒซใ‚’่ซ–็†็š„ใชๆ„ๅ‘ณใฎใ‚ใ‚‹ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†ๅ‰ฒใ™ใ‚‹ใ“ใจใงใ€ใƒขใƒ‡ใƒซใฎ่จญ่จˆใ‚’ใ‚ˆใ‚Šใ‚ˆใ็†่งฃใ—ใ‚„ใ™ใใ—ใ€ใƒขใƒ‡ใƒซใ‚’ใ‚ˆใ‚Šใ‚ˆใ็†่งฃใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ - ๅพŒใงใ€ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ”ใจใฎใƒ†ใ‚นใƒˆใ‚’่กŒใ†ใ“ใจใงใ€ใ‚ณใƒผใƒ‰ใ‚’ๅค‰ๆ›ดใ—็ถšใ‘ใ‚‹้š›ใซใƒชใ‚ฐใƒฌใƒƒใ‚ทใƒงใƒณใŒ็™บ็”Ÿใ—ใชใ„ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ [Lysandreใฎ](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) ELECTRAใฎ็ตฑๅˆใƒใ‚งใƒƒใ‚ฏใฏใ€ใ“ใ‚ŒใŒใฉใฎใ‚ˆใ†ใซ่กŒใ‚ใ‚Œใ‚‹ใ‹ใฎ่‰ฏใ„ไพ‹ใงใ™ใ€‚ ใŸใ ใ—ใ€ๅ…ƒใฎใ‚ณใƒผใƒ‰ใƒ™ใƒผใ‚นใŒ้žๅธธใซ่ค‡้›‘ใงใ€ไธญ้–“ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ‚’ใ‚ณใƒณใƒ‘ใ‚คใƒซใƒขใƒผใƒ‰ใงๅฎŸ่กŒใ™ใ‚‹ใ“ใจใ—ใ‹่จฑๅฏใ—ใชใ„ๅ ดๅˆใ€ใƒขใƒ‡ใƒซใ‚’ๅฐใ•ใชใƒ†ใ‚นใƒˆๅฏ่ƒฝใชใ‚ตใƒ–ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†่งฃใ™ใ‚‹ใ“ใจใŒๆ™‚้–“ใŒใ‹ใ‹ใ‚Šใ™ใŽใ‚‹ใ‹ใ€ไธๅฏ่ƒฝใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ่‰ฏใ„ไพ‹ใฏ[T5ใฎMeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow)ใƒฉใ‚คใƒ–ใƒฉใƒชใงใ‚ใ‚Šใ€้žๅธธใซ่ค‡้›‘ใงใƒขใƒ‡ใƒซใ‚’ใ‚ตใƒ–ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†่งฃใ™ใ‚‹็ฐกๅ˜ใชๆ–นๆณ•ใ‚’ๆไพ›ใ—ใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎใ‚ˆใ†ใชใƒฉใ‚คใƒ–ใƒฉใƒชใงใฏใ€้€šๅธธใ€ใƒ—ใƒชใƒณใƒˆใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’ๆคœ่จผใ™ใ‚‹ใ“ใจใซไพๅญ˜ใ—ใพใ™ใ€‚ ใฉใฎๆˆฆ็•ฅใ‚’้ธๆŠžใ—ใฆใ‚‚ใ€ๆŽจๅฅจใ•ใ‚Œใ‚‹ๆ‰‹้ †ใฏ้€šๅธธๅŒใ˜ใงใ€ๆœ€ๅˆใฎใƒฌใ‚คใƒคใƒผใ‹ใ‚‰ใƒ‡ใƒใƒƒใ‚ฐใ‚’้–‹ๅง‹ใ—ใ€ๆœ€ๅพŒใฎใƒฌใ‚คใƒคใƒผใ‹ใ‚‰ใƒ‡ใƒใƒƒใ‚ฐใ‚’่กŒใ†ในใใงใ™ใ€‚ ้€šๅธธใ€ไปฅไธ‹ใฎ้ †ๅบใงๆฌกใฎใƒฌใ‚คใƒคใƒผใ‹ใ‚‰ใฎๅ‡บๅŠ›ใ‚’ๅ–ๅพ—ใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™๏ผš 1. ใƒขใƒ‡ใƒซใซๆธกใ•ใ‚ŒใŸๅ…ฅๅŠ›IDใ‚’ๅ–ๅพ—ใ™ใ‚‹ 2. ๅ˜่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅ–ๅพ—ใ™ใ‚‹ 3. ๆœ€ๅˆใฎTransformerใƒฌใ‚คใƒคใƒผใฎๅ…ฅๅŠ›ใ‚’ๅ–ๅพ—ใ™ใ‚‹ 4. ๆœ€ๅˆใฎTransformerใƒฌใ‚คใƒคใƒผใฎๅ‡บๅŠ›ใ‚’ๅ–ๅพ—ใ™ใ‚‹ 5. ๆฌกใฎn - 1ใคใฎTransformerใƒฌใ‚คใƒคใƒผใฎๅ‡บๅŠ›ใ‚’ๅ–ๅพ—ใ™ใ‚‹ 6. 
BrandNewBertใƒขใƒ‡ใƒซๅ…จไฝ“ใฎๅ‡บๅŠ›ใ‚’ๅ–ๅพ—ใ™ใ‚‹ ๅ…ฅๅŠ›IDใฏๆ•ดๆ•ฐใฎ้…ๅˆ—ใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€*ไพ‹๏ผš* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ไปฅไธ‹ใฎใƒฌใ‚คใƒคใƒผใฎๅ‡บๅŠ›ใฏๅคšๆฌกๅ…ƒใฎๆตฎๅ‹•ๅฐๆ•ฐ็‚น้…ๅˆ—ใงใ‚ใ‚‹ใ“ใจใŒๅคšใใ€ๆฌกใฎใ‚ˆใ†ใซใชใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™๏ผš ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` ๐Ÿค— Transformersใซ่ฟฝๅŠ ใ•ใ‚Œใ‚‹ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏใ€็ตฑๅˆใƒ†ใ‚นใƒˆใ‚’ๆ•ฐๅ›žๅˆๆ ผใ™ใ‚‹ใ“ใจใŒๆœŸๅพ…ใ•ใ‚ŒใฆใŠใ‚Šใ€ๅ…ƒใฎใƒขใƒ‡ใƒซใจ๐Ÿค— Transformersใงๅ†ๅฎŸ่ฃ…ใ•ใ‚ŒใŸใƒใƒผใ‚ธใƒงใƒณใŒใ€0.001ใฎ็ฒพๅบฆใพใงใพใฃใŸใๅŒใ˜ๅ‡บๅŠ›ใ‚’ๆไพ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ็•ฐใชใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงๅŒใ˜ใƒขใƒ‡ใƒซใ‚’ๆ›ธใ„ใŸๅ ดๅˆใ€ใ‚ใšใ‹ใซ็•ฐใชใ‚‹ๅ‡บๅŠ›ใ‚’่ฟ”ใ™ใ“ใจใŒๆญฃๅธธใงใ‚ใ‚‹ใŸใ‚ใ€่ชคๅทฎ่จฑๅฎนๅ€คใจใ—ใฆ1e-3๏ผˆ0.001๏ผ‰ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใฆใ„ใพใ™ใ€‚ใƒขใƒ‡ใƒซใŒใปใผๅŒใ˜ๅ‡บๅŠ›ใ‚’่ฟ”ใ™ใ ใ‘ใงใฏไธๅๅˆ†ใงใ€ใปใผๅŒไธ€ใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใใฎใŸใ‚ใ€๐Ÿค— Transformersใƒใƒผใ‚ธใƒงใƒณใฎไธญ้–“ๅ‡บๅŠ›ใ‚’ๅ…ƒใฎ*brand_new_bert*ใฎๅฎŸ่ฃ…ใฎไธญ้–“ๅ‡บๅŠ›ใจ่ค‡ๆ•ฐๅ›žใซใ‚ใŸใฃใฆๆฏ”่ผƒใ™ใ‚‹ใ“ใจใซใชใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ใใฎ้š›ใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใฎ**ๅŠน็އ็š„ใช**ใƒ‡ใƒใƒƒใ‚ฐ็’ฐๅขƒใŒ้žๅธธใซ้‡่ฆใงใ™ใ€‚ไปฅไธ‹ใฏใ€ใƒ‡ใƒใƒƒใ‚ฐ็’ฐๅขƒใ‚’ใงใใ‚‹ใ ใ‘ๅŠน็އ็š„ใซใ™ใ‚‹ใŸใ‚ใฎใ‚ขใƒ‰ใƒใ‚คใ‚นใงใ™ใ€‚ - ไธญ้–“็ตๆžœใ‚’ใƒ‡ใƒใƒƒใ‚ฐใ™ใ‚‹ๆœ€้ฉใชๆ–นๆณ•ใ‚’่ฆ‹ใคใ‘ใ‚‹ใ€‚ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใฏPyTorchใงๆ›ธใ‹ใ‚Œใฆใ„ใพใ™ใ‹๏ผŸใใฎๅ ดๅˆใ€ๅ…ƒใฎใƒขใƒ‡ใƒซใ‚’ใ‚ˆใ‚Šๅฐใ•ใชใ‚ตใƒ–ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใซๅˆ†่งฃใ—ใฆไธญ้–“ๅ€คใ‚’ๅ–ๅพ—ใ™ใ‚‹้•ทใ„ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๆ›ธใใ“ใจใŒใŠใใ‚‰ใ้ฉๅˆ‡ใงใ™ใ€‚ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใŒTensorflow 1ใงๆ›ธใ‹ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€[tf.print](https://www.tensorflow.org/api_docs/python/tf/print)ใชใฉใฎTensorFlowใฎใƒ—ใƒชใƒณใƒˆๆ“ไฝœใ‚’ไฝฟ็”จใ—ใฆไธญ้–“ๅ€คใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใŒJaxใงๆ›ธใ‹ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฎๅฎŸ่กŒๆ™‚ใซใƒขใƒ‡ใƒซใŒ**jittedใ•ใ‚Œใฆใ„ใชใ„**ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ไพ‹๏ผš[ใ“ใฎใƒชใƒณใ‚ฏ](https://github.com/google/jax/issues/196)ใ‚’ใƒใ‚งใƒƒใ‚ฏใ€‚ - ไฝฟ็”จๅฏ่ƒฝใชๆœ€ๅฐใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒๅฐใ•ใ„ใปใฉใ€ใƒ‡ใƒใƒƒใ‚ฐใ‚ตใ‚คใ‚ฏใƒซใŒ้€Ÿใใชใ‚Šใพใ™ใ€‚ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใŒใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใซ10็ง’ไปฅไธŠใ‹ใ‹ใ‚‹ๅ ดๅˆใ€ๅŠน็އ็š„ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚้žๅธธใซๅคงใใชใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ—ใ‹ๅˆฉ็”จใงใใชใ„ๅ ดๅˆใ€ๆ–ฐใ—ใ„็’ฐๅขƒใงใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸใ‚ฆใ‚งใ‚คใƒˆใ‚’ๆŒใคใƒ€ใƒŸใƒผใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใ€ใใ‚Œใ‚‰ใฎใ‚ฆใ‚งใ‚คใƒˆใ‚’๐Ÿค— Transformersใƒใƒผใ‚ธใƒงใƒณใฎใƒขใƒ‡ใƒซใจๆฏ”่ผƒใ™ใ‚‹ๆ–นใŒ่‰ฏใ„ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ - ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใงใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’ๅ‘ผใณๅ‡บใ™ๆœ€ใ‚‚็ฐกๅ˜ใชๆ–นๆณ•ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ 
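On the 🤗 Transformers side, one generic way to capture such intermediate outputs (a sketch, assuming a BERT-like layout where the Transformer layers are exposed as `model.encoder.layer`) is to register forward hooks:

```python
import torch

captured = {}


def save_output(name):
    def hook(module, inputs, output):
        captured[name] = output  # a tensor or a tuple, depending on the layer
    return hook


# assumption: a BERT-like model whose layers live in model.encoder.layer
for i, layer in enumerate(model.encoder.layer):
    layer.register_forward_hook(save_output(f"layer_{i}"))

_ = model(torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]]))
print(captured["layer_0"])  # compare against the original implementation's first-layer output
```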
ใ•ใ„ใ€‚็†ๆƒณ็š„ใซใฏใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใง**ๅ˜ไธ€ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚น**ใ‚’ๅ‘ผใณๅ‡บใ™้–ขๆ•ฐใ‚’่ฆ‹ใคใ‘ใŸใ„ใงใ™ใ€‚ใ“ใ‚Œใฏ้€šๅธธใ€Œpredictใ€ใ€ใ€Œevaluateใ€ใ€ใ€Œforwardใ€ใ€ใ€Œ__call__ใ€ใจๅ‘ผใฐใ‚Œใพใ™ใ€‚่ค‡ๆ•ฐๅ›žใ€Œforwardใ€ใ‚’ๅ‘ผใณๅ‡บใ™้–ขๆ•ฐใ‚’ใƒ‡ใƒใƒƒใ‚ฐใ—ใŸใใ‚ใ‚Šใพใ›ใ‚“ใ€‚ไพ‹๏ผšใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซใ€Œautoregressive_sampleใ€ใ€ใ€Œgenerateใ€ใจๅ‘ผใฐใ‚Œใ‚‹้–ขๆ•ฐใ€‚ - ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใจใƒขใƒ‡ใƒซใฎใ€Œใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใ€ใƒ‘ใ‚นใ‚’ๅˆ†้›ขใ—ใ‚ˆใ†ใจใ—ใฆใใ ใ•ใ„ใ€‚ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใŒๅ…ฅๅŠ›ๆ–‡ๅญ—ๅˆ—ใ‚’ๅ…ฅๅŠ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ไพ‹ใ‚’็คบใ™ๅ ดๅˆใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใ‚ณใƒผใƒซๅ†…ใงๆ–‡ๅญ—ๅˆ—ๅ…ฅๅŠ›ใŒๅ…ฅๅŠ›IDใซๅค‰ๆ›ดใ•ใ‚Œใ‚‹ๅ ดๆ‰€ใ‚’็‰นๅฎšใ—ใ€ใ“ใฎใƒใ‚คใƒณใƒˆใ‹ใ‚‰้–‹ๅง‹ใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’่‡ชๅˆ†ใงๆ›ธใใ‹ใ€ๅ…ฅๅŠ›ๆ–‡ๅญ—ๅˆ—ใงใฏใชใๅ…ฅๅŠ›IDใ‚’็›ดๆŽฅๅ…ฅๅŠ›ใงใใ‚‹ใ‚ˆใ†ใซๅ…ƒใฎใ‚ณใƒผใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ - ใƒ‡ใƒใƒƒใ‚ฐใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ๅ†…ใฎใƒขใƒ‡ใƒซใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒขใƒผใƒ‰ใงใฏใชใ„ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒขใƒผใƒ‰ใงใฏใ€ใƒขใƒ‡ใƒซๅ†…ใฎ่ค‡ๆ•ฐใฎใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆใƒฌใ‚คใƒคใƒผใฎใŸใ‚ใซใƒฉใƒณใƒ€ใƒ ใชๅ‡บๅŠ›ใŒ็”Ÿๆˆใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‡ใƒใƒƒใ‚ฐ็’ฐๅขƒใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใŒ**ๆฑบๅฎš่ซ–็š„**ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€ใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆใƒฌใ‚คใƒคใƒผใŒไฝฟ็”จใ•ใ‚Œใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ใพใŸใฏใ€ๆ–ฐใ—ใ„ๅฎŸ่ฃ…ใŒๅŒใ˜ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏๅ†…ใซใ‚ใ‚‹ๅ ดๅˆใ€*transformers.utils.set_seed*ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ ไปฅไธ‹ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€*brand_new_bert*ใซใคใ„ใฆใ“ใ‚Œใ‚’ๅ…ทไฝ“็š„ใซใฉใฎใ‚ˆใ†ใซ่กŒใ†ใ‹ใซใคใ„ใฆใฎ่ฉณ็ดฐ/ใƒ’ใƒณใƒˆใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ### 5.-14. Port BrandNewBert to ๐Ÿค— Transformers ๆฌกใซใ€ใคใ„ใซๆ–ฐใ—ใ„ใ‚ณใƒผใƒ‰ใ‚’๐Ÿค— Transformersใซ่ฟฝๅŠ ใงใใพใ™ใ€‚๐Ÿค— Transformersใฎใƒ•ใ‚ฉใƒผใ‚ฏใฎใ‚ฏใƒญใƒผใƒณใซ็งปๅ‹•ใ—ใฆใใ ใ•ใ„๏ผš ```bash cd transformers ``` ็‰นๅˆฅใชใ‚ฑใƒผใ‚นใจใ—ใฆใ€ๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใจๅฎŒๅ…จใซไธ€่‡ดใ™ใ‚‹ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅ ดๅˆใ€ [ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณ](#write-a-conversion-script)ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹ใ‚ˆใ†ใซใ€ๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงๆธˆใฟใพใ™ใ€‚ ใ“ใฎๅ ดๅˆใ€ๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใฎๅฎŒๅ…จใชใƒขใƒ‡ใƒซใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๅ†ๅˆฉ็”จใงใใพใ™ใ€‚ ใใ‚Œไปฅๅค–ใฎๅ ดๅˆใ€ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใฎ็”Ÿๆˆใ‚’้–‹ๅง‹ใ—ใพใ™ใ€‚ใ“ใ“ใง2ใคใฎ้ธๆŠž่‚ขใŒใ‚ใ‚Šใพใ™๏ผš - `transformers-cli add-new-model-like`ใ‚’ไฝฟ็”จใ—ใฆๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใฎใ‚ˆใ†ใชๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ—ใพใ™ - `transformers-cli add-new-model`ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‹ใ‚‰ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ—ใพใ™๏ผˆใƒขใƒ‡ใƒซใฎใ‚ฟใ‚คใƒ—ใซๅฟœใ˜ใฆBERTใพใŸใฏBartใฎใ‚ˆใ†ใซ่ฆ‹ใˆใพใ™๏ผ‰ ใฉใกใ‚‰ใฎๅ ดๅˆใงใ‚‚ใ€ใƒขใƒ‡ใƒซใฎๅŸบๆœฌๆƒ…ๅ ฑใ‚’ๅ…ฅๅŠ›ใ™ใ‚‹ใŸใ‚ใฎ่ณชๅ•ไบ‹้ …ใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ 2็•ช็›ฎใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€`cookiecutter`ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏ[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ **ไธป่ฆใช huggingface/transformers ใƒชใƒใ‚ธใƒˆใƒชใงใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้–‹ใ** ่‡ชๅ‹•็”Ÿๆˆใ•ใ‚ŒใŸใ‚ณใƒผใƒ‰ใ‚’้ฉๅฟœใ—ๅง‹ใ‚ใ‚‹ๅ‰ใซใ€๐Ÿค— Transformers ใซใ€Œไฝœๆฅญไธญ๏ผˆWIP๏ผ‰ใ€ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้–‹ใใ‚ฟใ‚คใƒŸใƒณใ‚ฐใงใ™ใ€‚ ไพ‹๏ผšใ€Œ[WIP] *brand_new_bert* ใ‚’่ฟฝๅŠ ใ€ใชใฉใงใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒฆใƒผใ‚ถใƒผใจ Hugging Face ใƒใƒผใƒ ใŒ๐Ÿค— Transformers 
ใซใƒขใƒ‡ใƒซใ‚’็ตฑๅˆใ™ใ‚‹ไฝœๆฅญใ‚’ไธฆ่กŒใ—ใฆ่กŒใ†ใ“ใจใŒใงใใพใ™ใ€‚ ไปฅไธ‹ใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„๏ผš 1. ใƒกใ‚คใƒณใƒ–ใƒฉใƒณใƒใ‹ใ‚‰ๅˆ†ใ‹ใ‚Šใ‚„ใ™ใ„ๅๅ‰ใฎใƒ–ใƒฉใƒณใƒใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```bash git checkout -b add_brand_new_bert ``` 2. ่‡ชๅ‹•็”Ÿๆˆใ•ใ‚ŒใŸใ‚ณใƒผใƒ‰ใ‚’ใ‚ณใƒŸใƒƒใƒˆใ—ใฆใใ ใ•ใ„: ```bash git add . git commit ``` 3. ็พๅœจใฎ main ใƒ–ใƒฉใƒณใƒใซใƒ•ใ‚งใƒƒใƒใ—ใฆใƒชใƒ™ใƒผใ‚น ```bash git fetch upstream git rebase upstream/main ``` 4. ๅค‰ๆ›ดใ‚’ใ‚ใชใŸใฎใ‚ขใ‚ซใ‚ฆใƒณใƒˆใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ไฝฟ็”จใ—ใพใ™๏ผš ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. ๆบ€่ถณใ—ใŸใ‚‰ใ€GitHubไธŠใฎใƒ•ใ‚ฉใƒผใ‚ฏใฎใ‚ฆใ‚งใƒ–ใƒšใƒผใ‚ธใซ็งปๅ‹•ใ—ใพใ™ใ€‚[ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆ]ใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใพใ™ใ€‚ๅฐ†ๆฅใฎๅค‰ๆ›ดใซๅ‚™ใˆใฆใ€Hugging Face ใƒใƒผใƒ ใฎใƒกใƒณใƒใƒผใฎGitHubใƒใƒณใƒ‰ใƒซใ‚’ใƒฌใƒ“ใƒฅใ‚ขใƒผใจใ—ใฆ่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ใ€‚ 6. GitHubใฎใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚ฆใ‚งใƒ–ใƒšใƒผใ‚ธใฎๅณๅดใซใ‚ใ‚‹ใ€Œใƒ‰ใƒฉใƒ•ใƒˆใซๅค‰ๆ›ใ€ใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ€PRใ‚’ใƒ‰ใƒฉใƒ•ใƒˆใซๅค‰ๆ›ดใ—ใพใ™ใ€‚ ไปฅไธ‹ใงใฏใ€้€ฒๆ—ใŒใ‚ใฃใŸๅ ดๅˆใฏๅธธใซไฝœๆฅญใ‚’ใ‚ณใƒŸใƒƒใƒˆใ—ใ€ใƒ—ใƒƒใ‚ทใƒฅใ—ใฆใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใซ่กจ็คบใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ—ใฆใใ ใ•ใ„ใ€‚ใ•ใ‚‰ใซใ€ๅฎšๆœŸ็š„ใซใƒกใ‚คใƒณใ‹ใ‚‰ใฎๆœ€ๆ–ฐใฎๅค‰ๆ›ดใ‚’ๅ–ใ‚Š่พผใ‚€ใŸใ‚ใซใ€ๆฌกใฎใ‚ˆใ†ใซ่กŒใ†ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„๏ผš ```bash git fetch upstream git merge upstream/main ``` ไธ€่ˆฌ็š„ใซใ€ใƒขใƒ‡ใƒซใ‚„ๅฎŸ่ฃ…ใซ้–ขใ™ใ‚‹่ณชๅ•ใฏPull Request (PR) ใง่กŒใ„ใ€PRๅ†…ใง่ญฐ่ซ–ใ—ใ€่งฃๆฑบใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€Hugging Face ใƒใƒผใƒ ใฏๆ–ฐใ—ใ„ใ‚ณใƒผใƒ‰ใ‚’ใ‚ณใƒŸใƒƒใƒˆใ™ใ‚‹้š›ใ‚„่ณชๅ•ใŒใ‚ใ‚‹ๅ ดๅˆใซๅธธใซ้€š็Ÿฅใ‚’ๅ—ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ่ณชๅ•ใ‚„ๅ•้กŒใŒ่งฃๆฑบใ•ใ‚ŒใŸ้š›ใซใ€ๅ•้กŒใ‚„่ณชๅ•ใŒ็†่งฃใ•ใ‚Œใ‚„ใ™ใ„ใ‚ˆใ†ใซใ€Hugging Face ใƒใƒผใƒ ใซใ‚ณใƒผใƒ‰ใ‚’ๆŒ‡ๆ‘˜ใ™ใ‚‹ใ“ใจใŒ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ ใ“ใฎใŸใ‚ใซใฏใ€ใ€ŒFiles changedใ€ใ‚ฟใƒ–ใซ็งปๅ‹•ใ—ใฆใ™ในใฆใฎๅค‰ๆ›ดใ‚’่กจ็คบใ—ใ€่ณชๅ•ใ—ใŸใ„่กŒใซ็งปๅ‹•ใ—ใฆใ€Œ+ใ€ใ‚ทใƒณใƒœใƒซใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ‚ณใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ ่ณชๅ•ใ‚„ๅ•้กŒใŒ่งฃๆฑบใ•ใ‚ŒใŸๅ ดๅˆใฏใ€ไฝœๆˆใ•ใ‚ŒใŸใ‚ณใƒกใƒณใƒˆใฎใ€ŒResolveใ€ใƒœใ‚ฟใƒณใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใงใใพใ™ใ€‚ ๅŒๆง˜ใซใ€Hugging Face ใƒใƒผใƒ ใฏใ‚ณใƒผใƒ‰ใ‚’ใƒฌใƒ“ใƒฅใƒผใ™ใ‚‹้š›ใซใ‚ณใƒกใƒณใƒˆใ‚’้–‹ใใพใ™ใ€‚ PRไธŠใงใฎใปใจใ‚“ใฉใฎ่ณชๅ•ใฏGitHubไธŠใง่กŒใ†ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ไธ€่ˆฌ็š„ใช่ณชๅ•ใซ้–ขใ—ใฆใฏใ€ๅ…ฌใซใฏใ‚ใพใ‚Šๅฝน็ซ‹ใŸใชใ„่ณชๅ•ใซใคใ„ใฆใฏใ€Slackใ‚„ใƒกใƒผใƒซใงHugging Face ใƒใƒผใƒ ใซ้€ฃ็ตกใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ **5. 
็”Ÿๆˆใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใ‚’"brand_new_bert"ใซ้ฉๅฟœใ•ใ›ใ‚‹** ๆœ€ๅˆใซใ€ใƒขใƒ‡ใƒซ่‡ชไฝ“ใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใซใฏๆฐ—ใซใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ ้–ข้€ฃใ™ใ‚‹ใ‚ณใƒผใƒ‰ใฏใ€็”Ÿๆˆใ•ใ‚ŒใŸใƒ•ใ‚กใ‚คใƒซ`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใŠใ‚ˆใณ`src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`ใง่ฆ‹ใคใ‹ใ‚‹ใฏใšใงใ™ใ€‚ ใ•ใฆใ€ใคใ„ใซใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ๅง‹ใ‚ใ‚‹ใ“ใจใŒใงใใพใ™ :smile:ใ€‚ `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใซใ‚ใ‚‹็”Ÿๆˆใ•ใ‚ŒใŸใ‚ณใƒผใƒ‰ใฏใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใฎใฟใฎใƒขใƒ‡ใƒซใงใ‚ใ‚ŒใฐBERTใจๅŒใ˜ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆŒใฃใฆใ„ใ‚‹ใ‹ใ€ใ‚จใƒณใ‚ณใƒผใƒ€ใƒผ-ใƒ‡ใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใงใ‚ใ‚ŒใฐBARTใจๅŒใ˜ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ๆŒใฃใฆใ„ใ‚‹ใฏใšใงใ™ใ€‚ ใ“ใฎๆฎต้šŽใงใฏใ€ใƒขใƒ‡ใƒซใฎ็†่ซ–็š„ใชๅด้ขใซใคใ„ใฆๅญฆใ‚“ใ ใ“ใจใ‚’ๆ€ใ„ๅ‡บใ™ในใใงใ™ใ€‚ใคใพใ‚Šใ€ใ€Œใ“ใฎใƒขใƒ‡ใƒซใฏBERTใพใŸใฏBARTใจใฉใฎใ‚ˆใ†ใซ็•ฐใชใ‚‹ใฎใ‹๏ผŸใ€ใจใ„ใ†ใ“ใจใงใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎๅค‰ๆ›ดใ‚’ๅฎŸ่ฃ…ใ—ใพใ™ใŒใ€ใ“ใ‚Œใฏ้€šๅธธใ€ใ‚ปใƒซใƒ•ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒฌใ‚คใƒคใƒผใ€ๆญฃ่ฆๅŒ–ใƒฌใ‚คใƒคใƒผใฎ้ †ๅบใชใฉใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ ๅ†ใณใ€ใ‚ใชใŸใฎใƒขใƒ‡ใƒซใŒใฉใฎใ‚ˆใ†ใซๅฎŸ่ฃ…ใ•ใ‚Œใ‚‹ในใใ‹ใ‚’ใ‚ˆใ‚Š่‰ฏใ็†่งฃใ™ใ‚‹ใŸใ‚ใซใ€Transformersๅ†…ใซๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใฎ้กžไผผใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่ฆ‹ใ‚‹ใ“ใจใŒๅฝน็ซ‹ใคใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใฎๆ™‚็‚นใงใฏใ€ใ‚ณใƒผใƒ‰ใŒๅฎŒๅ…จใซๆญฃ็ขบใพใŸใฏใ‚ฏใƒชใƒผใƒณใงใ‚ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ‚€ใ—ใ‚ใ€ใพใšใฏๅฟ…่ฆใชใ‚ณใƒผใƒ‰ใฎๆœ€ๅˆใฎ*ใ‚ฏใƒชใƒผใƒณใงใชใ„*ใ‚ณใƒ”ใƒผ๏ผ†ใƒšใƒผใ‚นใƒˆใƒใƒผใ‚ธใƒงใƒณใ‚’ `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใซ่ฟฝๅŠ ใ—ใ€ๅฟ…่ฆใชใ‚ณใƒผใƒ‰ใŒใ™ในใฆ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใ‚‹ใจๆ„Ÿใ˜ใ‚‹ใพใงๆ”นๅ–„/ไฟฎๆญฃใ‚’ๅๅพฉ็š„ใซ่กŒใ†ใ“ใจใŒใŠๅ‹งใ‚ใงใ™ใ€‚ ็งใŸใกใฎ็ตŒ้จ“ใ‹ใ‚‰ใ€ๅฟ…่ฆใชใ‚ณใƒผใƒ‰ใฎๆœ€ๅˆใฎใƒใƒผใ‚ธใƒงใƒณใ‚’่ฟ…้€Ÿใซ่ฟฝๅŠ ใ—ใ€ๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ชฌๆ˜Žใ™ใ‚‹ๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝฟ็”จใ—ใฆใ‚ณใƒผใƒ‰ใ‚’็นฐใ‚Š่ฟ”ใ—ๆ”นๅ–„/ไฟฎๆญฃใ™ใ‚‹ๆ–นใŒๅŠน็އ็š„ใงใ‚ใ‚‹ใ“ใจใŒๅคšใ„ใงใ™ใ€‚ ใ“ใฎๆ™‚็‚นใงๅ‹•ไฝœใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใฎใฏใ€๐Ÿค— Transformersใฎ"brand_new_bert"ใฎๅฎŸ่ฃ…ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใงใใ‚‹ใ“ใจใ ใ‘ใงใ™ใ€‚ใคใพใ‚Šใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใŒๆฉŸ่ƒฝใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` ไธŠ่จ˜ใฎใ‚ณใƒžใƒณใƒ‰ใฏใ€`BrandNewBertConfig()` ใงๅฎš็พฉใ•ใ‚ŒใŸใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ‘ใƒฉใƒกใƒผใ‚ฟใซๅพ“ใฃใฆใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใ€ ใ™ในใฆใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใฎ `init()` ใƒกใ‚ฝใƒƒใƒ‰ใŒๆญฃๅธธใซๅ‹•ไฝœใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ใ™ในใฆใฎใƒฉใƒณใƒ€ใƒ ใชๅˆๆœŸๅŒ–ใฏใ€`BrandnewBertPreTrainedModel` ใ‚ฏใƒฉใ‚นใฎ `_init_weights` ใƒกใ‚ฝใƒƒใƒ‰ใง่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ€่จญๅฎšๅค‰ๆ•ฐใซไพๅญ˜ใ™ใ‚‹ใ™ในใฆใฎใƒชใƒผใƒ•ใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใฏใ€BERT ใฎ `_init_weights` ใƒกใ‚ฝใƒƒใƒ‰ใฎไพ‹ใงใ™๏ผš ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): 
module.bias.data.zero_() module.weight.data.fill_(1.0) ``` ็‰นๅฎšใฎใƒขใ‚ธใƒฅใƒผใƒซใซ็‰นๅˆฅใชๅˆๆœŸๅŒ–ใŒๅฟ…่ฆใชๅ ดๅˆใ€ใ‚ซใ‚นใ‚ฟใƒ ใ‚นใ‚ญใƒผใƒ ใ‚’ใ•ใ‚‰ใซๆŒใคใ“ใจใŒใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ `Wav2Vec2ForPreTraining`ใงใฏใ€ๆœ€ๅพŒใฎ2ใคใฎ็ทšๅฝขๅฑคใซใฏ้€šๅธธใฎPyTorchใฎ`nn.Linear`ใฎๅˆๆœŸๅŒ–ใŒๅฟ…่ฆใงใ™ใŒใ€ ไป–ใฎใ™ในใฆใฎๅฑคใฏไธŠ่จ˜ใฎใ‚ˆใ†ใชๅˆๆœŸๅŒ–ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏไปฅไธ‹ใฎใ‚ˆใ†ใซใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™๏ผš ```py def _init_weights(self, module): """Initialize the weights""" if isinstnace(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` `_is_hf_initialized`ใƒ•ใƒฉใ‚ฐใฏใ€ใ‚ตใƒ–ใƒขใ‚ธใƒฅใƒผใƒซใ‚’ไธ€ๅบฆใ ใ‘ๅˆๆœŸๅŒ–ใ™ใ‚‹ใ“ใจใ‚’็ขบๅฎŸใซใ™ใ‚‹ใŸใ‚ใซๅ†…้ƒจใงไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ `module.project_q`ใจ`module.project_hid`ใฎใŸใ‚ใซใใ‚Œใ‚’`True`ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใงใ€ ใ‚ซใ‚นใ‚ฟใƒ ๅˆๆœŸๅŒ–ใŒๅพŒใงไธŠๆ›ธใใ•ใ‚Œใชใ„ใ‚ˆใ†ใซใ—ใ€`_init_weights`้–ขๆ•ฐใŒใใ‚Œใ‚‰ใซ้ฉ็”จใ•ใ‚Œใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ **6. ๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๆ›ธใ** ๆฌกใซใ€*brand_new_bert* ใฎๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใงใƒ‡ใƒใƒƒใ‚ฐใซไฝฟ็”จใ—ใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใ€ๆ–ฐใ—ใไฝœๆˆใ—ใŸ ๐Ÿค— Transformers ๅฎŸ่ฃ…ใฎ *brand_new_bert* ใจไบ’ๆ›ๆ€งใฎใ‚ใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซๅค‰ๆ›ใงใใ‚‹ๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๆ›ธใๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ใ‚ผใƒญใ‹ใ‚‰ๆ›ธใใ“ใจใฏใŠๅ‹งใ‚ใ•ใ‚Œใพใ›ใ‚“ใŒใ€ไปฃใ‚ใ‚Šใซ ๐Ÿค— Transformers ใงๆ—ขใซๅญ˜ๅœจใ™ใ‚‹้กžไผผใฎใƒขใƒ‡ใƒซใ‚’ๅŒใ˜ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใงๅค‰ๆ›ใ—ใŸใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’่ชฟในใ‚‹ใ“ใจใŒ่‰ฏใ„ใงใ—ใ‚‡ใ†ใ€‚ ้€šๅธธใ€ๆ—ขๅญ˜ใฎๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ใ‚ณใƒ”ใƒผใ—ใฆใ€่‡ชๅˆ†ใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซใ‚ใšใ‹ใซ้ฉๅฟœใ•ใ›ใ‚‹ใ“ใจใงๅๅˆ†ใงใ™ใ€‚ Hugging Face ใƒใƒผใƒ ใซๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใซ้กžไผผใ—ใŸๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๆ•™ใˆใฆใ‚‚ใ‚‰ใ†ใ“ใจใ‚‚่บŠ่บ‡ใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ - TensorFlowใ‹ใ‚‰PyTorchใซใƒขใƒ‡ใƒซใ‚’็งปๆคใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€่‰ฏใ„ๅ‡บ็™บ็‚นใฏBERTใฎๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91) - PyTorchใ‹ใ‚‰PyTorchใซใƒขใƒ‡ใƒซใ‚’็งปๆคใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€่‰ฏใ„ๅ‡บ็™บ็‚นใฏBARTใฎๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) ไปฅไธ‹ใงใฏใ€PyTorchใƒขใƒ‡ใƒซใŒๅฑคใฎ้‡ใฟใ‚’ใฉใฎใ‚ˆใ†ใซไฟๅญ˜ใ—ใ€ๅฑคใฎๅๅ‰ใ‚’ๅฎš็พฉใ™ใ‚‹ใ‹ใซใคใ„ใฆ็ฐกๅ˜ใซ่ชฌๆ˜Žใ—ใพใ™ใ€‚ PyTorchใงใฏใ€ๅฑคใฎๅๅ‰ใฏๅฑคใซไธŽใˆใ‚‹ใ‚ฏใƒฉใ‚นๅฑžๆ€งใฎๅๅ‰ใซใ‚ˆใฃใฆๅฎš็พฉใ•ใ‚Œใพใ™ใ€‚ PyTorchใง `SimpleModel` ใจใ„ใ†ใƒ€ใƒŸใƒผใƒขใƒ‡ใƒซใ‚’ๅฎš็พฉใ—ใพใ—ใ‚‡ใ†๏ผš ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` ใ“ใ‚Œใงใ€ใ“ใฎใƒขใƒ‡ใƒซๅฎš็พฉใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ไฝœๆˆใ—ใ€`dense`ใ€`intermediate`ใ€`layer_norm`ใฎใ™ในใฆใฎ้‡ใฟใ‚’ใƒฉใƒณใƒ€ใƒ 
ใช้‡ใฟใงๅŸ‹ใ‚ใŸใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใงใใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใ‚’ๅฐๅˆทใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```python model = SimpleModel() print(model) ``` ใ“ใ‚Œใฏไปฅไธ‹ใ‚’ๅ‡บๅŠ›ใ—ใพใ™๏ผš ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` ๅฑคใฎๅๅ‰ใฏPyTorchใฎใ‚ฏใƒฉใ‚นๅฑžๆ€งใฎๅๅ‰ใซใ‚ˆใฃใฆๅฎš็พฉใ•ใ‚Œใฆใ„ใพใ™ใ€‚็‰นๅฎšใฎๅฑคใฎ้‡ใฟๅ€คใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```python print(model.dense.weight.data) ``` ใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซ ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` ใ‚นใ‚ฏใƒชใƒ—ใƒˆๅ†…ใฎๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆใงใฏใ€ใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใ‚’ใ€ๅฏพๅฟœใ™ใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆๅ†…ใฎๆญฃ็ขบใช้‡ใฟใงๅŸ‹ใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซ็ฟป่จณใ—ใพใ™๏ผš ```python # retrieve matching layer weights, e.g. 
by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` PyTorchใƒขใƒ‡ใƒซใฎๅ„ใƒฉใƒณใƒ€ใƒ ๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใจๅฏพๅฟœใ™ใ‚‹ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใŒ **ๅฝข็Šถใจๅๅ‰ใฎไธกๆ–น**ใงๆญฃ็ขบใซไธ€่‡ดใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใ‚’่กŒใ†ใŸใ‚ใซใ€ๅฝข็Šถใซๅฏพใ™ใ‚‹assertใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใฎๅๅ‰ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ“ใจใŒ **ๅฟ…่ฆไธๅฏๆฌ **ใงใ™ใ€‚ไพ‹ใˆใฐใ€ๆฌกใฎใ‚ˆใ†ใชใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผš ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` ใพใŸใ€ไธกๆ–นใฎ้‡ใฟใฎๅๅ‰ใ‚’ๅฐๅˆทใ—ใฆใ€ไธ€่‡ดใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€ๆฌกใฎใ‚ˆใ†ใซใ—ใพใ™๏ผš ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` ใ‚‚ใ—ๅฝข็ŠถใพใŸใฏๅๅ‰ใฎใ„ใšใ‚Œใ‹ใŒไธ€่‡ดใ—ใชใ„ๅ ดๅˆใ€ใŠใใ‚‰ใ่ชคใฃใฆ๐Ÿค— TransformersใฎๅฎŸ่ฃ…ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸใƒฌใ‚คใƒคใƒผใซ้–“้•ใฃใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใ‚’ๅ‰ฒใ‚Šๅฝ“ใฆใฆใ—ใพใฃใŸๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ่ชคใฃใŸๅฝข็Šถใฏใ€ใŠใใ‚‰ใ`BrandNewBertConfig()`ใงใฎ่จญๅฎšใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใŒใ€ๅค‰ๆ›ใ—ใŸใ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใงไฝฟ็”จใ•ใ‚ŒใŸใ‚‚ใฎใจๆญฃ็ขบใซไธ€่‡ดใ—ใชใ„ใŸใ‚ใงใ™ใ€‚ ใŸใ ใ—ใ€PyTorchใฎใƒฌใ‚คใƒคใƒผใฎๅฎŸ่ฃ…ใซใ‚ˆใฃใฆใฏใ€้‡ใฟใ‚’ไบ‹ๅ‰ใซ่ปข็ฝฎใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ๆœ€ๅพŒใซใ€**ใ™ในใฆ**ใฎๅฟ…่ฆใช้‡ใฟใŒๅˆๆœŸๅŒ–ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€ๅˆๆœŸๅŒ–ใซไฝฟ็”จใ•ใ‚Œใชใ‹ใฃใŸใ™ในใฆใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใ‚’่กจ็คบใ—ใฆใ€ใƒขใƒ‡ใƒซใŒๆญฃใ—ใๅค‰ๆ›ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ๅค‰ๆ›ใƒˆใƒฉใ‚คใ‚ขใƒซใŒ่ชคใฃใŸๅฝข็Šถใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใพใŸใฏ่ชคใฃใŸๅๅ‰ๅ‰ฒใ‚Šๅฝ“ใฆใงๅคฑๆ•—ใ™ใ‚‹ใฎใฏๅฎŒๅ…จใซๆญฃๅธธใงใ™ใ€‚ ใ“ใ‚ŒใฏใŠใใ‚‰ใใ€`BrandNewBertConfig()`ใง่ชคใฃใŸใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ไฝฟ็”จใ—ใŸใ‹ใ€๐Ÿค— TransformersใฎๅฎŸ่ฃ…ใซ่ชคใฃใŸใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒใ‚ใ‚‹ใ‹ใ€๐Ÿค— TransformersใฎๅฎŸ่ฃ…ใฎ1ใคใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใฎ`init()`้–ขๆ•ฐใซใƒใ‚ฐใŒใ‚ใ‚‹ใ‹ใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใฎ1ใคใ‚’่ปข็ฝฎใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ ใ“ใฎใ‚นใƒ†ใƒƒใƒ—ใฏใ€ไปฅๅ‰ใฎใ‚นใƒ†ใƒƒใƒ—ใจ็นฐใ‚Š่ฟ”ใ™ในใใงใ™ใ€‚ใ™ในใฆใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ้‡ใฟใŒๆญฃใ—ใ๐Ÿค— Transformersใƒขใƒ‡ใƒซใซ่ชญใฟ่พผใพใ‚Œใ‚‹ใพใง็นฐใ‚Š่ฟ”ใ™ในใใงใ™ใ€‚ ๐Ÿค— TransformersๅฎŸ่ฃ…ใซๆญฃใ—ใใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’่ชญใฟ่พผใ‚“ใ ๅพŒใ€้ธๆŠžใ—ใŸใƒ•ใ‚ฉใƒซใƒ€ใƒผใซใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใงใใพใ™ `/path/to/converted/checkpoint/folder`ใ€‚ใ“ใฎใƒ•ใ‚ฉใƒซใƒ€ใซใฏ`pytorch_model.bin`ใƒ•ใ‚กใ‚คใƒซใจ`config.json`ใƒ•ใ‚กใ‚คใƒซใฎไธกๆ–นใŒๅซใพใ‚Œใ‚‹ใฏใšใงใ™ใ€‚ ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. 
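Putting the pieces of this section together, the core loop of such a conversion script could look roughly like the sketch below. It assumes the original checkpoint has been loaded into a flat dict `checkpoint` mapping names to NumPy arrays whose keys already line up with the 🤗 model's parameter names; real scripts usually also need an explicit name mapping:

```python
import torch

# assumptions: `model` is the randomly initialized 🤗 implementation and `checkpoint`
# is a dict {parameter_name: numpy_array} extracted from the original repository
for name, param in model.named_parameters():
    pretrained_weight = torch.from_numpy(checkpoint[name])
    assert (
        param.shape == pretrained_weight.shape
    ), f"Shape of {name} mismatched: {param.shape} vs {pretrained_weight.shape}"
    print(f"Initialize PyTorch weight {name}")
    param.data = pretrained_weight
```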
้ †ไผๆ’ญ๏ผˆforward pass๏ผ‰ใฎๅฎŸ่ฃ…** ๐Ÿค— TransformersๅฎŸ่ฃ…ใงไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ๆญฃใ—ใ่ชญใฟ่พผใ‚“ใ ๅพŒใ€้ †ไผๆ’ญใŒๆญฃใ—ใๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚[ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใ‚’็†่งฃใ™ใ‚‹](#34-run-a-pretrained-checkpoint-using-the-original-repository)ใงใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใฎ้ †ไผๆ’ญใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ใ™ใงใซไฝœๆˆใ—ใพใ—ใŸใ€‚ไปŠๅบฆใฏใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชใฎไปฃใ‚ใ‚Šใซ๐Ÿค— TransformersๅฎŸ่ฃ…ใ‚’ไฝฟ็”จใ—ใฆ้กžไผผใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝœๆˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไปฅไธ‹ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™๏ผš ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` ๐Ÿค— TransformersใฎๅฎŸ่ฃ…ใจๅ…ƒใฎใƒขใƒ‡ใƒซใฎๅฎŸ่ฃ…ใŒๆœ€ๅˆใฎๅฎŸ่กŒใงๅฎŒๅ…จใซๅŒใ˜ๅ‡บๅŠ›ใ‚’ๆไพ›ใ—ใชใ„ใ‹ใ€ ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใงใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒ้žๅธธใซ้ซ˜ใ„ใงใ™ใ€‚ๅคฑๆœ›ใ—ใชใ„ใงใใ ใ•ใ„ - ใ“ใ‚Œใฏไบˆๆƒณใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใงใ™๏ผ ใพใšใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใŒใ‚จใƒฉใƒผใ‚’ใ‚นใƒญใƒผใ—ใชใ„ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ้–“้•ใฃใŸๆฌกๅ…ƒใŒไฝฟ็”จใ•ใ‚Œใ€*ๆฌกๅ…ƒใฎไธไธ€่‡ด*ใ‚จใƒฉใƒผใ‚„ใ€่ชคใฃใŸใƒ‡ใƒผใ‚ฟๅž‹ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒไฝฟ็”จใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ˆใใ‚ใ‚Šใพใ™ใ€‚ ไพ‹ใˆใฐใ€`torch.long`ใงใฏใชใ`torch.float32`ใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚็‰นๅฎšใฎใ‚จใƒฉใƒผใ‚’่งฃๆฑบใงใใชใ„ๅ ดๅˆใฏใ€ Hugging Faceใƒใƒผใƒ ใซๅŠฉใ‘ใ‚’ๆฑ‚ใ‚ใ‚‹ใ“ใจใ‚’่บŠ่บ‡ใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ ๐Ÿค— TransformersๅฎŸ่ฃ…ใŒๆญฃใ—ใๆฉŸ่ƒฝใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๆœ€็ต‚็š„ใช้ƒจๅˆ†ใฏใ€ๅ‡บๅŠ›ใŒ`1e-3`ใฎ็ฒพๅบฆใงๅŒ็ญ‰ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ใพใšใ€ๅ‡บๅŠ›ใฎๅฝข็ŠถใŒๅŒไธ€ใงใ‚ใ‚‹ใ“ใจใ€ใคใพใ‚Šใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎ๐Ÿค— TransformersๅฎŸ่ฃ…ใจๅ…ƒใฎๅฎŸ่ฃ…ใฎไธกๆ–นใง`outputs.shape`ใŒๅŒใ˜ๅ€คใ‚’็”Ÿๆˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใซใ€ๅ‡บๅŠ›ๅ€คใŒๅŒไธ€ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹้š›ใฎๆœ€ใ‚‚้›ฃใ—ใ„้ƒจๅˆ†ใฎ1ใคใงใ™ใ€‚ ๅ‡บๅŠ›ใŒๅŒไธ€ใงใชใ„็†็”ฑใฎไธ€่ˆฌ็š„ใช้–“้•ใ„ใฏไปฅไธ‹ใฎ้€šใ‚Šใงใ™ใ€‚ - ไธ€้ƒจใฎใƒฌใ‚คใƒคใƒผใŒ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใชใ„ใ€ใคใพใ‚Š*ๆดปๆ€งๅŒ–*ใƒฌใ‚คใƒคใƒผใŒ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใชใ„ใ‹ใ€ใƒชใ‚ถใƒใƒซๆŽฅ็ถšใŒๅฟ˜ใ‚Œใ‚‰ใ‚Œใฆใ„ใ‚‹ - ๅ˜่ชžๅŸ‹ใ‚่พผใฟ่กŒๅˆ—ใŒ็ตใฐใ‚Œใฆใ„ใชใ„ - ใ‚ชใƒชใ‚ธใƒŠใƒซใฎๅฎŸ่ฃ…ใŒใ‚ชใƒ•ใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใŸใ‚ใ€่ชคใฃใŸไฝ็ฝฎๅŸ‹ใ‚่พผใฟใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ - ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นไธญใซใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆใŒ้ฉ็”จใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚’ไฟฎๆญฃใ™ใ‚‹ใซใฏใ€*model.trainingใŒFalse*ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นไธญใซ่ชคใฃใฆใƒ‰ใƒญใƒƒใƒ—ใ‚ขใ‚ฆใƒˆใƒฌใ‚คใƒคใƒผใŒใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ–ๅŒ–ใ•ใ‚Œใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ *ใคใพใ‚Š* [PyTorchใฎfunctional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)ใซ*model.training*ใ‚’ๆธกใ—ใพใ™ใ€‚ ๅ•้กŒใ‚’ไฟฎๆญฃใ™ใ‚‹ๆœ€่‰ฏใฎๆ–นๆณ•ใฏใ€้€šๅธธใ€ๅ…ƒใฎๅฎŸ่ฃ…ใจ๐Ÿค— TransformersๅฎŸ่ฃ…ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’ไธฆในใฆ่กจ็คบใ—ใ€้•ใ„ใŒใ‚ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ็†ๆƒณ็š„ใซใฏใ€ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฎไธกๆ–นใฎๅฎŸ่ฃ…ใฎไธญ้–“ๅ‡บๅŠ›ใ‚’ใƒ‡ใƒใƒƒใ‚ฐ/ใƒ—ใƒชใƒณใƒˆใ‚ขใ‚ฆใƒˆใ—ใฆใ€๐Ÿค— TransformersๅฎŸ่ฃ…ใŒๅ…ƒใฎๅฎŸ่ฃ…ใจ็•ฐใชใ‚‹ๅ‡บๅŠ›ใ‚’็คบใ™ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏๅ†…ใฎๆญฃ็ขบใชไฝ็ฝฎใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ 
ๆœ€ๅˆใซใ€ไธกๆ–นใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎใƒใƒผใƒ‰ใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ•ใ‚ŒใŸ`input_ids`ใŒๅŒไธ€ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ๆฌกใซใ€`input_ids`ใฎๆœ€ๅˆใฎๅค‰ๆ›๏ผˆ้€šๅธธใ€ๅ˜่ชžๅŸ‹ใ‚่พผใฟ๏ผ‰ใฎๅ‡บๅŠ›ใŒๅŒไธ€ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ใใฎๅพŒใ€ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใฎๆœ€ๅพŒใฎใƒฌใ‚คใƒคใƒผใพใงไฝœๆฅญใ‚’้€ฒใ‚ใพใ™ใ€‚ ใ„ใšใ‚Œใ‹ใฎๆ™‚็‚นใงใ€2ใคใฎๅฎŸ่ฃ…้–“ใง้•ใ„ใŒใ‚ใ‚‹ใ“ใจใซๆฐ—ไป˜ใใฏใšใงใ€ใใ‚Œใซใ‚ˆใ‚Š๐Ÿค— TransformersๅฎŸ่ฃ…ใฎใƒใ‚ฐใฎๅ ดๆ‰€ใŒ็‰นๅฎšใ•ใ‚Œใพใ™ใ€‚ ็ตŒ้จ“ไธŠใ€ๅ…ƒใฎๅฎŸ่ฃ…ใจ๐Ÿค— TransformersๅฎŸ่ฃ…ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฎๅŒใ˜ไฝ็ฝฎใซๅคšใใฎใƒ—ใƒชใƒณใƒˆใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใ€ ไธญ้–“ใƒ—ใƒฌใ‚ผใƒณใƒ†ใƒผใ‚ทใƒงใƒณใงๅŒใ˜ๅ€คใ‚’็คบใ™ใƒ—ใƒชใƒณใƒˆใ‚นใƒ†ใƒผใƒˆใƒกใƒณใƒˆใ‚’ๆฎต้šŽ็š„ใซๅ‰Š้™คใ™ใ‚‹ใฎใŒใ‚ทใƒณใƒ—ใƒซใ‹ใคๅŠนๆžœ็š„ใชๆ–นๆณ•ใงใ™ใ€‚ ไธกๆ–นใฎๅฎŸ่ฃ…ใŒๅŒใ˜ๅ‡บๅŠ›ใ‚’็”Ÿๆˆใ™ใ‚‹ใ“ใจใซ่‡ชไฟกใ‚’ๆŒใฃใฆใ„ใ‚‹ๅ ดๅˆใ€`torch.allclose(original_output, output, atol=1e-3)`ใ‚’ไฝฟ็”จใ—ใฆๅ‡บๅŠ›ใ‚’็ขบ่ชใ™ใ‚‹ใจใ€ๆœ€ใ‚‚้›ฃใ—ใ„้ƒจๅˆ†ใŒๅฎŒไบ†ใ—ใพใ™๏ผ ใŠใ‚ใงใจใ†ใ”ใ–ใ„ใพใ™ - ๅฎŒไบ†ใ™ใ‚‹ไฝœๆฅญใฏ็ฐกๅ˜ใชใ‚‚ใฎใซใชใ‚‹ใฏใšใงใ™ ๐Ÿ˜Šใ€‚ **8. ๅฟ…่ฆใชใ™ในใฆใฎใƒขใƒ‡ใƒซใƒ†ใ‚นใƒˆใ‚’่ฟฝๅŠ ** ใ“ใฎๆ™‚็‚นใงใ€ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใŒๆญฃๅธธใซ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸใ€‚ ใŸใ ใ—ใ€ใƒขใƒ‡ใƒซใŒใพใ ๅฟ…่ฆใช่จญ่จˆใซๅฎŒๅ…จใซๆบ–ๆ‹ ใ—ใฆใ„ใชใ„ๅฏ่ƒฝๆ€งใŒ้žๅธธใซ้ซ˜ใ„ใงใ™ใ€‚ ๐Ÿค— TransformersใจๅฎŒๅ…จใซไบ’ๆ›ๆ€งใŒใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ใ™ในใฆใฎไธ€่ˆฌ็š„ใชใƒ†ใ‚นใƒˆใŒใƒ‘ใ‚นใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ CookiecutterใฏใŠใใ‚‰ใใƒขใƒ‡ใƒซ็”จใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใ‚’่‡ชๅ‹•็š„ใซ่ฟฝๅŠ ใ—ใฆใ„ใ‚‹ใฏใšใงใ€ใŠใใ‚‰ใๅŒใ˜ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซ`tests/models/brand_new_bert/test_modeling_brand_new_bert.py`ใจใ—ใฆๅญ˜ๅœจใ—ใพใ™ใ€‚ ใ“ใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใ‚’ๅฎŸ่กŒใ—ใฆใ€ใ™ในใฆใฎไธ€่ˆฌ็š„ใชใƒ†ใ‚นใƒˆใŒใƒ‘ใ‚นใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผš ```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py ``` ใ™ในใฆใฎไธ€่ˆฌ็š„ใชใƒ†ใ‚นใƒˆใ‚’ไฟฎๆญฃใ—ใŸใ‚‰ใ€ไปŠๅบฆใฏๅฎŸ่กŒใ—ใŸใ™ในใฆใฎ็ด ๆ™ดใ‚‰ใ—ใ„ไฝœๆฅญใŒ้ฉๅˆ‡ใซใƒ†ใ‚นใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใ“ใจใŒ้žๅธธใซ้‡่ฆใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ - a) ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใฏ*brand_new_bert*ใฎ็‰นๅฎšใฎใƒ†ใ‚นใƒˆใ‚’่ฆ‹ใ‚‹ใ“ใจใงใ€ใ‚ใชใŸใฎไฝœๆฅญใ‚’็ฐกๅ˜ใซ็†่งฃใงใใพใ™ใ€‚ - b) ใƒขใƒ‡ใƒซใธใฎๅฐ†ๆฅใฎๅค‰ๆ›ดใŒใƒขใƒ‡ใƒซใฎ้‡่ฆใชๆฉŸ่ƒฝใ‚’ๅฃŠใ•ใชใ„ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ใพใšใ€็ตฑๅˆใƒ†ใ‚นใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎ็ตฑๅˆใƒ†ใ‚นใƒˆใฏใ€ๅŸบๆœฌ็š„ใซใฏใƒ‡ใƒใƒƒใ‚ฐใ‚นใ‚ฏใƒชใƒ—ใƒˆใจๅŒใ˜ใ“ใจใ‚’่กŒใ„ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใƒ†ใ‚นใƒˆใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใฏCookiecutterใซใ‚ˆใฃใฆๆ—ขใซ่ฟฝๅŠ ใ•ใ‚ŒใฆใŠใ‚Šใ€ใ€ŒBrandNewBertModelIntegrationTestsใ€ใจๅ‘ผใฐใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ†ใ‚นใƒˆใ‚’่จ˜ๅ…ฅใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒ†ใ‚นใƒˆใŒๅˆๆ ผใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Windowsใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€`RUN_SLOW=1`ใ‚’`SET RUN_SLOW=1`ใซ็ฝฎใๆ›ใˆใฆใใ ใ•ใ„ใ€‚ </Tip> ๆฌกใซใ€*brand_new_bert*ใซ็‰นๆœ‰ใฎใ™ในใฆใฎ็‰นๅพดใฏใ€ๅˆฅๅ€‹ใฎใƒ†ใ‚นใƒˆๅ†…ใง่ฟฝๅŠ ใ•ใ‚Œใ‚‹ในใใงใ™ใ€‚ `BrandNewBertModelTester`/`BrandNewBertModelTest`ใฎไธ‹ใซใ€‚ใ“ใฎ้ƒจๅˆ†ใฏใ‚ˆใๅฟ˜ใ‚Œใ‚‰ใ‚Œใพใ™ใŒใ€2ใคใฎ็‚นใง้žๅธธใซๅฝน็ซ‹ใกใพใ™๏ผš - ใƒขใƒ‡ใƒซใฎ่ฟฝๅŠ 
ไธญใซ็ฒๅพ—ใ—ใŸ็Ÿฅ่ญ˜ใ‚’ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใซไผใˆใ€*brand_new_bert*ใฎ็‰นๅˆฅใชๆฉŸ่ƒฝใŒใฉใฎใ‚ˆใ†ใซๅ‹•ไฝœใ™ใ‚‹ใ‹ใ‚’็คบใ™ใ“ใจใซใ‚ˆใฃใฆใ€็Ÿฅ่ญ˜ใฎๅ…ฑๆœ‰ใ‚’ๆ”ฏๆดใ—ใพใ™ใ€‚ - ๅฐ†ๆฅใฎ่ฒข็Œฎ่€…ใฏใ€ใ“ใ‚Œใ‚‰ใฎ็‰นๅˆฅใชใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใงใƒขใƒ‡ใƒซใธใฎๅค‰ๆ›ดใ‚’่ฟ…้€Ÿใซใƒ†ใ‚นใƒˆใงใใพใ™ใ€‚ **9. ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎๅฎŸ่ฃ…** ๆฌกใซใ€*brand_new_bert*ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚้€šๅธธใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏ๐Ÿค— Transformersใฎๆ—ขๅญ˜ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใจๅŒ็ญ‰ใ‹้žๅธธใซไผผใฆใ„ใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒๆญฃใ—ใๅ‹•ไฝœใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใฏใ€ใพใšใ€ๅ…ƒใฎใƒชใƒใ‚ธใƒˆใƒชๅ†…ใงๆ–‡ๅญ—ๅˆ—ใ‚’ๅ…ฅๅŠ›ใ—ใ€`input_ids`ใ‚’่ฟ”ใ™ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ใ“ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€ๆฌกใฎใ‚ˆใ†ใซ่ฆ‹ใˆใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผˆ็–‘ไผผใ‚ณใƒผใƒ‰ใง็คบใ—ใพใ™๏ผ‰๏ผš ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใ‚’่ฉณใ—ใ่ชฟๆŸปใ—ใ€ๆญฃใ—ใ„ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎ้–ขๆ•ฐใ‚’่ฆ‹ใคใ‘ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใพใŸใฏใ€ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใฎใ‚ฏใƒญใƒผใƒณใ‚’ๅค‰ๆ›ดใ—ใฆใ€`input_ids`ใ ใ‘ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ‚ชใƒชใ‚ธใƒŠใƒซใฎใƒชใƒใ‚ธใƒˆใƒชใ‚’ไฝฟ็”จใ—ใŸๆฉŸ่ƒฝ็š„ใชใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ผใƒผใ‚ทใƒงใƒณใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝœๆˆใ—ใŸๅพŒใ€ ๐Ÿค— Transformersๅ‘ใ‘ใฎ้กžไผผใ—ใŸใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝœๆˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ไปฅไธ‹ใฎใ‚ˆใ†ใซ่ฆ‹ใˆใ‚‹ในใใงใ™๏ผš ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` `input_ids`ใŒๅŒใ˜ๅ€คใ‚’็”Ÿๆˆใ—ใŸๅ ดๅˆใ€ๆœ€็ต‚ใ‚นใƒ†ใƒƒใƒ—ใจใ—ใฆใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฎใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใ‚‚่ฟฝๅŠ ใ™ใ‚‹ในใใงใ™ใ€‚ *brand_new_bert*ใฎใƒขใƒ‡ใƒซใƒณใ‚ฐใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใจๅŒๆง˜ใซใ€*brand_new_bert*ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใซใฏใ€ใ„ใใคใ‹ใฎใƒใƒผใƒ‰ใ‚ณใƒผใƒ‰ใ•ใ‚ŒใŸ็ตฑๅˆใƒ†ใ‚นใƒˆใŒๅซใพใ‚Œใ‚‹ในใใงใ™ใ€‚ **10. 
**10. Run end-to-end integration tests**

Having added the tokenizer, you should also add a couple of end-to-end integration tests that use both the model and the tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers. Such a test should show on a meaningful text-to-text sample that the 🤗 Transformers implementation works as expected. Meaningful text-to-text samples include *e.g.* a source-to-target translation pair, an article-to-summary pair, or a question-to-answer pair. If none of the ported checkpoints has been fine-tuned on a downstream task, it is enough to simply rely on the model tests. In a final step to ensure that the model is fully functional, it is advised that you also run all tests on a GPU. It can happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which would show up as an error in such a test. In case you have no access to a GPU, the Hugging Face team can run those tests for you instead.

**11. Add the documentation**

Now, all the necessary functionality for *brand_new_bert* has been added - you're almost done! The only things left to add are good documentation and a documentation page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually first look at this page before using the model, so the documentation has to be understandable and concise. It is very useful for the community to add some *Tips* that show how the model should be used. Don't hesitate to ping the Hugging Face team regarding the documentation.

Next, make sure that the docstrings added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` are correct and include all necessary inputs and outputs. There is a detailed guide about writing documentation and our docstring format [here](writing-documentation). Always keep in mind that documentation should be treated at least as carefully as the code in 🤗 Transformers, since the documentation is usually the community's first point of contact with the model.

**Code refactor**

Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential incorrect code style by running:

```bash
make style
```

and verify that your coding style passes the quality check:

```bash
make quality
```
ผใ—ใฆใ„ใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ„ใใคใ‹ใฎไป–ใฎใƒ†ใ‚นใƒˆใŒๅญ˜ๅœจใ™ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ“ใ‚Œใฏใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆๆ–‡ๅญ—ๅˆ—ใซๆƒ…ๅ ฑใŒไธ่ถณใ—ใฆใ„ใ‚‹ใ‹ใ€ๅๅ‰ใŒ้–“้•ใฃใฆใ„ใ‚‹ใ“ใจใŒๅŽŸๅ› ใงใ‚ใ‚‹ใ“ใจใŒๅคšใ„ใงใ™ใ€‚Hugging Faceใƒใƒผใƒ ใฏใ€ใ“ใ“ใง่ฉฐใพใฃใฆใ„ใ‚‹ๅ ดๅˆใซใฏๅฟ…ใšๅŠฉใ‘ใฆใใ‚Œใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ ๆœ€ๅพŒใซใ€ใ‚ณใƒผใƒ‰ใŒๆญฃใ—ใๆฉŸ่ƒฝใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใŸๅพŒใ€ใ‚ณใƒผใƒ‰ใ‚’ใƒชใƒ•ใ‚กใ‚ฏใ‚ฟใƒชใƒณใ‚ฐใ™ใ‚‹ใฎใฏๅธธใซ่‰ฏใ„ใ‚ขใ‚คใƒ‡ใ‚ขใงใ™ใ€‚ ใ™ในใฆใฎใƒ†ใ‚นใƒˆใŒใƒ‘ใ‚นใ—ใŸไปŠใ€่ฟฝๅŠ ใ—ใŸใ‚ณใƒผใƒ‰ใ‚’ๅ†ๅบฆ็ขบ่ชใ—ใฆใƒชใƒ•ใ‚กใ‚ฏใ‚ฟใƒชใƒณใ‚ฐใ‚’่กŒใ†ใฎใฏ่‰ฏใ„ใ‚ฟใ‚คใƒŸใƒณใ‚ฐใงใ™ใ€‚ ใ“ใ‚Œใงใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใฎ้ƒจๅˆ†ใฏๅฎŒไบ†ใ—ใพใ—ใŸใ€ใŠใ‚ใงใจใ†ใ”ใ–ใ„ใพใ™๏ผ ๐ŸŽ‰ ใ‚ใชใŸใฏ็ด ๆ™ดใ‚‰ใ—ใ„ใงใ™๏ผ ๐Ÿ˜Ž **12. ใƒขใƒ‡ใƒซใ‚’ใƒขใƒ‡ใƒซใƒใƒ–ใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰** ๆœ€ๅพŒใฎใƒ‘ใƒผใƒˆใงใฏใ€ใ™ในใฆใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒขใƒ‡ใƒซใƒใƒ–ใซๅค‰ๆ›ใ—ใฆใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใ€ๅ„ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใŸใƒขใƒ‡ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซใƒขใƒ‡ใƒซใ‚ซใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใƒขใƒ‡ใƒซใƒใƒ–ใฎๆฉŸ่ƒฝใซใคใ„ใฆ่ฉณใ—ใใฏใ€[Model sharing and uploading Page](model_sharing)ใ‚’่ชญใ‚“ใง็†่งฃใงใใพใ™ใ€‚ ใ“ใ“ใงใฏใ€*brand_new_bert*ใฎ่‘—่€…็ต„็น”ใฎไธ‹ใซใƒขใƒ‡ใƒซใ‚’ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใงใใ‚‹ใ‚ˆใ†ใซๅฟ…่ฆใชใ‚ขใ‚ฏใ‚ปใ‚นๆจฉใ‚’ๅ–ๅพ—ใ™ใ‚‹ใŸใ‚ใซใ€Hugging Faceใƒใƒผใƒ ใจๅ”ๅŠ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ `transformers`ใฎใ™ในใฆใฎใƒขใƒ‡ใƒซใซๅญ˜ๅœจใ™ใ‚‹`push_to_hub`ใƒกใ‚ฝใƒƒใƒ‰ใฏใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒใƒ–ใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹่ฟ…้€Ÿใ‹ใคๅŠน็އ็š„ใชๆ–นๆณ•ใงใ™ใ€‚ ไปฅไธ‹ใซใ€ๅฐ‘ใ—ใฎใ‚ณใƒผใƒ‰ใ‚นใƒ‹ใƒšใƒƒใƒˆใ‚’็คบใ—ใพใ™๏ผš ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` ๅ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซ้ฉๅˆ‡ใชใƒขใƒ‡ใƒซใ‚ซใƒผใƒ‰ใ‚’ไฝœๆˆใ™ใ‚‹ไพกๅ€คใŒใ‚ใ‚Šใพใ™ใ€‚ใƒขใƒ‡ใƒซใ‚ซใƒผใƒ‰ใฏใ€ใ“ใฎ็‰นๅฎšใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ็‰นๆ€งใ‚’ใƒใ‚คใƒฉใ‚คใƒˆใ™ใ‚‹ในใใงใ™ใ€‚ไพ‹ใˆใฐใ€ใ“ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใฉใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงไบ‹ๅ‰ๅญฆ็ฟ’/ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใ‹ใ€ใฉใฎใ‚ˆใ†ใชไธ‹ๆตใ‚ฟใ‚นใ‚ฏใงใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ในใใ‹ใ‚’็คบใ™ในใใงใ™ใ€‚ใพใŸใ€ใƒขใƒ‡ใƒซใฎๆญฃใ—ใ„ไฝฟ็”จๆ–นๆณ•ใซ้–ขใ™ใ‚‹ใ‚ณใƒผใƒ‰ใ‚‚ๅซใ‚ใ‚‹ในใใงใ™ใ€‚ **13.๏ผˆใ‚ชใƒ—ใ‚ทใƒงใƒณ๏ผ‰ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใฎ่ฟฝๅŠ ** *brand_new_bert*ใ‚’ๆŽจ่ซ–ใพใŸใฏไธ‹ๆตใ‚ฟใ‚นใ‚ฏใฎใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใซใฉใฎใ‚ˆใ†ใซ่ฉณ็ดฐใซไฝฟ็”จใงใใ‚‹ใ‹ใ‚’็คบใ™ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใฏ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ใ“ใ‚Œใฏใ‚ใชใŸใฎPRใ‚’ใƒžใƒผใ‚ธใ™ใ‚‹ใŸใ‚ใซๅฟ…้ ˆใงใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใซใจใฃใฆ้žๅธธใซๆœ‰็”จใงใ™ใ€‚ **14. ๅฎŒๆˆใ—ใŸPRใฎๆๅ‡บ** ใƒ—ใƒญใ‚ฐใƒฉใƒŸใƒณใ‚ฐใŒๅฎŒไบ†ใ—ใŸใ‚‰ใ€ๆœ€ๅพŒใฎใ‚นใƒ†ใƒƒใƒ—ใซ็งปๅ‹•ใ—ใ€PRใ‚’ใƒกใ‚คใƒณใƒ–ใƒฉใƒณใƒใซใƒžใƒผใ‚ธใ—ใพใ—ใ‚‡ใ†ใ€‚้€šๅธธใ€Hugging Faceใƒใƒผใƒ ใฏใ“ใฎๆ™‚็‚นใงๆ—ขใซใ‚ใชใŸใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใ‚‹ใฏใšใงใ™ใŒใ€PRใซ่‰ฏใ„่ชฌๆ˜Žใ‚’่ฟฝๅŠ ใ—ใ€ใ‚ณใƒผใƒ‰ใซใ‚ณใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ—ใฆใ€ใƒฌใƒ“ใƒฅใ‚ขใƒผใซ็‰นๅฎšใฎ่จญ่จˆใฎ้ธๆŠž่‚ขใ‚’ๆŒ‡ๆ‘˜ใ—ใŸใ„ๅ ดๅˆใฏใ‚ณใƒกใƒณใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใ‚‚ไพกๅ€คใŒใ‚ใ‚Šใพใ™ใ€‚ ### Share your work!! 
ใ•ใ‚ใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใ‹ใ‚‰ใ‚ใชใŸใฎไฝœๆฅญใซๅฏพใ™ใ‚‹่ฉ•ไพกใ‚’ๅพ—ใ‚‹ๆ™‚ใŒๆฅใพใ—ใŸ๏ผใƒขใƒ‡ใƒซใฎ่ฟฝๅŠ ใ‚’ๅฎŒไบ†ใ™ใ‚‹ใ“ใจใฏใ€TransformersใŠใ‚ˆใณNLPใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใซใจใฃใฆ้‡่ฆใช่ฒข็Œฎใงใ™ใ€‚ใ‚ใชใŸใฎใ‚ณใƒผใƒ‰ใจใƒใƒผใƒˆใ•ใ‚ŒใŸไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฏใ€ไฝ•็™พไบบใ€ไฝ•ๅƒไบบใจใ„ใ†้–‹็™บ่€…ใ‚„็ ”็ฉถ่€…ใซใ‚ˆใฃใฆ็ขบๅฎŸใซไฝฟ็”จใ•ใ‚Œใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ใ‚ใชใŸใฎไป•ไบ‹ใซ่ช‡ใ‚Šใ‚’ๆŒใกใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจใ‚ใชใŸใฎๆˆๆžœใ‚’ๅ…ฑๆœ‰ใ—ใพใ—ใ‚‡ใ†ใ€‚ **ใ‚ใชใŸใฏใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใฎ่ชฐใงใ‚‚็ฐกๅ˜ใซใ‚ขใ‚ฏใ‚ปใ‚นใงใใ‚‹ๅˆฅใฎใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ—ใŸ๏ผ ๐Ÿคฏ**
<!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Using pipelines for a webserver

<Tip>

Creating an inference engine is a complex topic, and the "best" solution will most likely depend on your problem space. Are you on CPU or GPU? Do you want the lowest latency, the highest throughput, support for many models, or just to highly optimize one specific model? There are many ways to tackle this topic, so what we are going to present is a good default to get started, which may not necessarily be the most optimal solution for you.

</Tip>

The key thing to understand is that, since a webserver is a system that waits for requests and treats them as they come in, we can use an iterator, just like when [iterating over a dataset](pipeline_tutorial#using-pipelines-on-a-dataset).

Usually webservers are multiplexed (multithreaded, async, etc.) to handle various requests concurrently. Pipelines, on the other hand (and mostly the underlying models), are not really great for parallelism; they take up a lot of RAM, so it's best to give them all the available resources while they are running, or it's a compute-intensive job.

We are going to solve that by having the webserver handle the light load of receiving and sending requests, and having a single thread handle the actual work. This example is going to use `starlette`. The actual framework is not really important, but you might have to tune or change the code if you are using another one to achieve the same effect.

Create `server.py`:

```py
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from transformers import pipeline
import asyncio


async def homepage(request):
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)


async def server_loop(q):
    pipe = pipeline(model="bert-base-uncased")
    while True:
        (string, response_q) = await q.get()
        out = pipe(string)
        await response_q.put(out)


app = Starlette(
    routes=[
        Route("/", homepage, methods=["POST"]),
    ],
)


@app.on_event("startup")
async def startup_event():
    q = asyncio.Queue()
    app.model_queue = q
    asyncio.create_task(server_loop(q))
```

Now you can start it with:

```bash
uvicorn server:app
```

And you can query it:

```bash
curl -X POST -d "test [MASK]" http://localhost:8000/
#[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...]
```
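If you prefer to query the server from Python rather than curl, a minimal client using the `requests` library could look like this:

```py
import requests

response = requests.post("http://localhost:8000/", data="test [MASK]")
print(response.json())
```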
And there you go, now you have a good idea of how to create a webserver!

What is really important is to load the model only **once**, so there are no copies of the model on the webserver. This way, no unnecessary RAM is being used. The queuing mechanism then allows you to do fancy stuff like accumulating a few items before inferring in order to use dynamic batching:

<Tip warning={true}>

The code sample below is intentionally written like pseudo-code for readability. Do not run it without checking whether it makes sense for your system resources!

</Tip>

```py
(string, rq) = await q.get()
strings = [string]
queues = [rq]
while True:
    try:
        (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001)  # 1ms
    except asyncio.exceptions.TimeoutError:
        break
    strings.append(string)
    queues.append(rq)
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
    await rq.put(out)
```

First of all, there is no batch size limit, which is usually not a great idea. Next, the timeout is reset on every queue fetch, meaning you could wait much longer than 1ms before running the inference (delaying the first request by that much). It would be better to have a single 1ms deadline instead. The snippet will also always wait for 1ms even if the queue is empty, which might not be the best choice since you probably want to start inferring as soon as there is something in the queue. But maybe it does make sense if batching is really crucial for your use case. Again, there is really no one best solution.

## Few things you might want to consider

### Error checking

There's a lot that can go wrong in production: out of memory, out of space, loading the model might fail, the query might be wrong, or the query might be correct but still fail to run because of a model misconfiguration, and so on. Generally, it's good if the server outputs the errors to the user, so adding a lot of `try..except` statements to show those errors is a good idea. But keep in mind that, depending on your security context, revealing all of those errors may also be a security risk.

### Circuit breaking

Webservers usually look better when they do circuit breaking: they return proper errors when they're overloaded, instead of just waiting for the query indefinitely - for example, a 503 error right away instead of waiting for a super long time, or a 504 after a long time. Since there is a single queue in the proposed code, looking at the queue size is a basic way to start returning errors before your webserver fails under load.
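As an illustration, a minimal sketch of such a check in the `homepage` handler from above could look as follows; the `QUEUE_MAX` threshold is an assumption you would have to tune for your own workload:

```py
QUEUE_MAX = 32  # assumed threshold, tune for your own workload


async def homepage(request):
    if request.app.model_queue.qsize() >= QUEUE_MAX:
        # shed load instead of queueing indefinitely
        return JSONResponse({"error": "server overloaded"}, status_code=503)
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)
```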
็พๅœจใ€PyTorchใฏ้žๅŒๆœŸใ‚’่ช่ญ˜ใ—ใฆใ„ใชใ„ใŸใ‚ใ€่จˆ็ฎ—ใฏใƒกใ‚คใƒณใ‚นใƒฌใƒƒใƒ‰ใ‚’ใƒ–ใƒญใƒƒใ‚ฏใ—ใพใ™ใ€‚ใคใพใ‚Šใ€PyTorchใŒ็‹ฌ่‡ชใฎใ‚นใƒฌใƒƒใƒ‰/ใƒ—ใƒญใ‚ปใ‚นใงๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใจ่‰ฏใ„ใงใ—ใ‚‡ใ†ใ€‚ๆๆกˆใ•ใ‚ŒใŸใ‚ณใƒผใƒ‰ใฏใ€ใ‚นใƒฌใƒƒใƒ‰ใจ้žๅŒๆœŸใจใ‚ญใƒฅใƒผใŒใ†ใพใ้€ฃๆบใ—ใชใ„ใŸใ‚ใ€ใ“ใ‚Œใฏ่กŒใ‚ใ‚Œใฆใ„ใพใ›ใ‚“ใŒใ€ๆœ€็ต‚็š„ใซใฏๅŒใ˜ใ“ใจใ‚’่กŒใ„ใพใ™ใ€‚ ใ“ใ‚Œใฏใ€ๅ˜ไธ€ใฎใ‚ขใ‚คใƒ†ใƒ ใฎๆŽจ่ซ–ใŒ้•ทใ„ๅ ดๅˆ๏ผˆ>1็ง’๏ผ‰ใซ้‡่ฆใงใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ๆŽจ่ซ–ไธญใซใ™ในใฆใฎใ‚ฏใ‚จใƒชใŒ1็ง’ๅพ…ใŸใชใ‘ใ‚Œใฐใชใ‚‰ใชใ„ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ ### Dynamic batching ไธ€่ˆฌ็š„ใซใ€ใƒใƒƒใƒๅ‡ฆ็†ใฏ1ๅ›žใฎใ‚ขใ‚คใƒ†ใƒ ใ‚’1ๅ›žๆธกใ™ใ‚ˆใ‚Šใ‚‚ๆ”นๅ–„ใ•ใ‚Œใ‚‹ใ“ใจใฏๅฟ…ใšใ—ใ‚‚ใ‚ใ‚Šใพใ›ใ‚“๏ผˆ่ฉณ็ดฐใฏ[ใƒใƒƒใƒๅ‡ฆ็†ใฎ่ฉณ็ดฐ](./main_classes/pipelines#pipeline-batching)ใ‚’ๅ‚็…ง๏ผ‰ใ€‚ใ—ใ‹ใ—ใ€ๆญฃใ—ใ„่จญๅฎšใงไฝฟ็”จใ™ใ‚‹ใจ้žๅธธใซๅŠนๆžœ็š„ใงใ™ใ€‚APIใงใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงๅ‹•็š„ใƒใƒƒใƒๅ‡ฆ็†ใฏ่กŒใ‚ใ‚Œใพใ›ใ‚“๏ผˆ้…ๅปถใฎๆฉŸไผšใŒๅคšใ™ใŽใพใ™๏ผ‰ใ€‚ใ—ใ‹ใ—ใ€้žๅธธใซๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใงใ‚ใ‚‹BLOOMๆŽจ่ซ–ใฎๅ ดๅˆใ€ๅ‹•็š„ใƒใƒƒใƒๅ‡ฆ็†ใฏ**้‡่ฆ**ใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ™ในใฆใฎใƒฆใƒผใ‚ถใƒผใซใจใฃใฆใพใจใ‚‚ใชใ‚จใ‚ฏใ‚นใƒšใƒชใ‚จใƒณใ‚นใ‚’ๆไพ›ใงใใพใ™ใ€‚ ไปฅไธŠใŒใ€ๆไพ›ใ•ใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใฎMarkdownๅฝขๅผใฎ็ฟป่จณใงใ™ใ€‚
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Efficient Training on Multiple GPUs

When training on a single GPU is too slow or the model weights don't fit into a single GPU's memory, we use a multi-GPU setup. Switching from a single GPU to multiple GPUs requires some form of parallelism, as the work needs to be distributed. There are several techniques for achieving parallelism, such as data, tensor, or pipeline parallelism. However, there is no one solution that fits them all, and which settings work best depends on the hardware you run on. This article focuses on PyTorch-based implementations, while the main concepts most likely apply to other frameworks as well.

<Tip>

**Note**: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) are generic and apply to training models in general, so make sure to have a look at it before diving into the following sections such as multi-GPU or CPU training.

</Tip>

We will first discuss in depth various 1D parallelism techniques and their pros and cons, and then look at how they can be combined into 2D and 3D parallelism to enable even faster training and to support even bigger models. Various other powerful alternative approaches will be presented as well.

## Concepts

The following is a brief description of the main concepts that will be described in more depth later in this document.

1. **DataParallel (DP)** - the same setup is replicated multiple times, and each replica is fed a slice of the data. The processing is done in parallel, and all setups are synchronized at the end of each training step.
2. **TensorParallel (TP)** - each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. During processing, each shard gets processed separately and in parallel on different GPUs, and the results are synced at the end of the step. This is what one may call horizontal parallelism, as the splitting happens at a horizontal level.
3. **PipelineParallel (PP)** - the model is split up vertically (at the layer level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes a different stage of the pipeline in parallel, working on a small chunk of the batch.
4. **Zero Redundancy Optimizer (ZeRO)** - also performs sharding of the tensors somewhat similar to TP, except the whole tensor gets reconstructed in time for a forward or backward computation, so the model doesn't need to be modified. It also supports various offloading techniques to compensate for limited GPU memory.
5. **Sharded DDP** - another name for the foundational ZeRO concept as used by the various ZeRO implementations.

Before diving deeper into the specifics of each concept, let's first have a look at the rough decision process when training large models on a large infrastructure.

## Scalability Strategy

**⇨ Single Node / Multi-GPU**

* Model fits onto a single GPU:

    1. DDP - Distributed DataParallel
    2. ZeRO - may or may not be faster depending on the situation and configuration used

* Model doesn't fit onto a single GPU:

    1. PP
    2. ZeRO
    3. TP

    With very fast intra-node connectivity (such as NVLINK or NVSwitch), all three should be mostly on par; without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment to find the winner on your particular setup.

    TP is almost always used within a single node, that is, TP size <= GPUs per node.

* Largest layer not fitting into a single GPU:

    1. If not using ZeRO - you must use TP, as PP alone won't be able to fit it.
    2. With ZeRO, see the same entry for "Single GPU" below

**⇨ Multi-Node / Multi-GPU**

* When you have fast inter-node connectivity:

    1. ZeRO - as it requires close to no modifications to the model
    2. PP+TP+DP - less communication, but requires massive changes to the model

* When you have slow inter-node connectivity and are still low on GPU memory:
    1. DP+PP+TP+ZeRO-1

## Data Parallelism

Most users with just 2 GPUs already enjoy the increased training speed provided by `DataParallel` (DP) and `DistributedDataParallel` (DDP), which are almost trivial to use and built into PyTorch. In general, it is advised to use DDP, as it works for all models, whereas DP may fail for some. The [PyTorch documentation](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html) itself recommends the use of DDP.

### DP vs DDP

`DistributedDataParallel` (DDP) is typically faster than `DataParallel` (DP), but it is not always the case:
* while DP is Python threads-based, DDP is multiprocess-based - and as such it has none of the Python thread limitations, such as the GIL (Global Interpreter Lock)
* on the other hand, slow inter-connectivity between the GPU cards could lead to an actually slower outcome with DDP

Here are the main differences in the inter-GPU communication between the two modes:

[DDP](https://pytorch.org/docs/master/notes/ddp.html):
- At start time, the main process replicates the model from GPU 0 to the rest of the GPUs.
- Then for each batch:
   1. Each GPU directly consumes its own mini-batch of data.
   2. During `backward`, once the local gradients are ready, they are averaged across all processes.

[DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html):

For each batch:
   1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU.
   2. The up-to-date model is replicated from GPU 0 to each GPU.
   3. `forward` is run, and the output from each GPU is sent to GPU 0 to compute the loss.
   4. The loss is scattered from GPU 0 to all GPUs, and `backward` is run.
ๅ„GPUใ‹ใ‚‰GPU 0ใซๅ‹พ้…ใ‚’้€ไฟกใ—ใ€ใใ‚Œใ‚‰ใ‚’ๅนณๅ‡ๅŒ–ใ—ใพใ™ใ€‚ DDPใฏใƒใƒƒใƒใ”ใจใซ่กŒใ†้€šไฟกใฏๅ‹พ้…ใฎ้€ไฟกใฎใฟใงใ‚ใ‚Šใ€ไธ€ๆ–นใ€DPใฏใƒใƒƒใƒใ”ใจใซ5ใคใฎ็•ฐใชใ‚‹ใƒ‡ใƒผใ‚ฟไบคๆ›ใ‚’่กŒใ„ใพใ™ใ€‚ DPใฏใƒ—ใƒญใ‚ปใ‚นๅ†…ใงใƒ‡ใƒผใ‚ฟใ‚’Pythonใ‚นใƒฌใƒƒใƒ‰ใ‚’ไป‹ใ—ใฆใ‚ณใƒ”ใƒผใ—ใพใ™ใŒใ€DDPใฏ[torch.distributed](https://pytorch.org/docs/master/distributed.html)ใ‚’ไป‹ใ—ใฆใƒ‡ใƒผใ‚ฟใ‚’ใ‚ณใƒ”ใƒผใ—ใพใ™ใ€‚ DPใงใฏGPU 0ใฏไป–ใฎGPUใ‚ˆใ‚Šใ‚‚ใฏใ‚‹ใ‹ใซๅคšใใฎไฝœๆฅญใ‚’่กŒใ†ใŸใ‚ใ€GPUใฎๆœชไฝฟ็”จ็އใŒ้ซ˜ใใชใ‚Šใพใ™ใ€‚ DDPใฏ่ค‡ๆ•ฐใฎใƒžใ‚ทใƒณ้–“ใงไฝฟ็”จใงใใพใ™ใŒใ€DPใฎๅ ดๅˆใฏใใ†ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ DPใจDDPใฎไป–ใซใ‚‚้•ใ„ใŒใ‚ใ‚Šใพใ™ใŒใ€ใ“ใฎ่ญฐ่ซ–ใซใฏ้–ขไฟ‚ใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ“ใ‚Œใ‚‰2ใคใฎใƒขใƒผใƒ‰ใ‚’ๆทฑใ็†่งฃใ—ใŸใ„ๅ ดๅˆใ€ใ“ใฎ[่จ˜ไบ‹](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/)ใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚็ด ๆ™ดใ‚‰ใ—ใ„ใƒ€ใ‚คใ‚ขใ‚ฐใƒฉใƒ ใ‚’ๅซใฟใ€ใ•ใพใ–ใพใชใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใฎ่ค‡ๆ•ฐใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใจใƒ—ใƒญใƒ•ใ‚กใ‚คใƒฉใฎๅ‡บๅŠ›ใ‚’็คบใ—ใ€็ŸฅใฃใฆใŠใๅฟ…่ฆใŒใ‚ใ‚‹ใ™ในใฆใฎๅพฎๅฆ™ใชใƒ‹ใƒฅใ‚ขใƒณใ‚นใ‚’่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ ๅฎŸ้š›ใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†๏ผš | Type | NVlink | Time | | :----- | ----- | ---: | | 2:DP | Y | 110s | | 2:DDP | Y | 101s | | 2:DDP | N | 131s | ่งฃๆž๏ผš ใ“ใ“ใงใ€DPใฏNVlinkใ‚’ไฝฟ็”จใ—ใŸDDPใซๆฏ”ในใฆ็ด„10๏ผ…้…ใใ€NVlinkใ‚’ไฝฟ็”จใ—ใชใ„DDPใซๆฏ”ในใฆ็ด„15๏ผ…้ซ˜้€Ÿใงใ‚ใ‚‹ใ“ใจใŒ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ๅฎŸ้š›ใฎ้•ใ„ใฏใ€ๅ„GPUใŒไป–ใฎGPUใจๅŒๆœŸใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใƒ‡ใƒผใ‚ฟใฎ้‡ใซไพๅญ˜ใ—ใพใ™ใ€‚ๅŒๆœŸใ™ใ‚‹ใƒ‡ใƒผใ‚ฟใŒๅคšใ„ใปใฉใ€้…ใ„ใƒชใƒณใ‚ฏใŒๅˆ่จˆใฎๅฎŸ่กŒๆ™‚้–“ใ‚’้…ใใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใใชใ‚Šใพใ™ใ€‚ ไปฅไธ‹ใฏๅฎŒๅ…จใชใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚ณใƒผใƒ‰ใจๅ‡บๅŠ›ใงใ™๏ผš `NCCL_P2P_DISABLE=1`ใ‚’ไฝฟ็”จใ—ใฆใ€ๅฏพๅฟœใ™ใ‚‹ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใงNVLinkๆฉŸ่ƒฝใ‚’็„กๅŠนใซใ—ใพใ—ใŸใ€‚ ``` # DP rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69} # DDP w/ NVlink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVlink rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ข: 2x TITAN RTXใ€ๅ„24GB + 2ใคใฎNVLink๏ผˆ`nvidia-smi topo -m`ใง `NV2`๏ผ‰ ใ‚ฝใƒ•ใƒˆใ‚ฆใ‚งใ‚ข: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0` ## ZeRO Data Parallelism 
## ZeRO Data Parallelism

ZeRO-powered data parallelism (ZeRO-DP) is illustrated in the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/).

![DeepSpeed-Image-1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)

It can be difficult to wrap one's head around it, but in reality the concept is quite simple. This is just the usual `DataParallel` (DP), except that, instead of replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of them. Then, at run-time, when the full layer parameters are needed for a given layer, all GPUs synchronize to give each other the parts that they are missing - this is it.

Consider this simple model with 3 layers, where each layer has 3 parameters:

```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```

Layer La has the weights a0, a1 and a2.

If we have 3 GPUs, Sharded DDP (= ZeRO-DP) splits the model across the 3 GPUs like so:

```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0

GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1

GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```

In a way, this is the same horizontal slicing as in tensor parallelism, if you imagine the typical DNN diagram. Vertical slicing is where whole layer groups are placed on different GPUs. But this is just the starting point.

Each of these GPUs now gets the usual mini-batch, exactly as in DP:

```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```

First, the input data is applied to layer La.

Let's focus on GPU0: x0 needs the a0, a1 and a2 parameters to do its forward path, but GPU0 only has a0 - it receives a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together.

In parallel, GPU1 gets mini-batch x1 and only has a1, but needs the a0 and a2 parameters, so it gets those from GPU0 and GPU2.

The same happens to GPU2, which gets input x2. It receives a0 and a1 from GPU0 and GPU1 and, together with its a2, reconstructs the full tensor.

All 3 GPUs reconstruct the full tensors, and the forward computation happens.

As soon as the computation is done, the data that is no longer needed is dropped - it's only used during the computation. The reconstruction is done efficiently via a pre-fetch.

Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
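The per-layer gather can be sketched in plain PyTorch by simulating the 3 "GPUs" with three tensor shards; real implementations such as DeepSpeed do this with an all-gather collective plus prefetching, so this is only an illustration of the idea:

```python
import torch

torch.manual_seed(0)
full_weight = torch.randn(3, 4)  # layer La: rows a0, a1, a2
shards = list(full_weight.chunk(3, dim=0))  # each "GPU" keeps only one row

def forward_on_rank(x):
    # before computing, every rank reconstructs the full weight from all shards
    # (an all-gather in real ZeRO); it is dropped again right after the compute
    gathered = torch.cat(shards, dim=0)
    return x @ gathered.T

print(forward_on_rank(torch.randn(2, 4)).shape)  # torch.Size([2, 3])
```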
ไบบCใฏๆ–งใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ ไปŠใ€ๅฝผใ‚‰ใฏๆฏŽๆ™ฉๆŒใฃใฆใ„ใ‚‹ใ‚‚ใฎใ‚’ๅ…ฑๆœ‰ใ—ใ€ไป–ใฎไบบใ‹ใ‚‰ๆŒใฃใฆใ„ใชใ„ใ‚‚ใฎใ‚’ใ‚‚ใ‚‰ใ„ใ€ๆœใซใฏๅ‰ฒใ‚Šๅฝ“ใฆใ‚‰ใ‚ŒใŸใ‚ฟใ‚คใƒ—ใฎใ‚ฎใ‚ขใ‚’่ฉฐใ‚ใฆๆ—…ใ‚’็ถšใ‘ใพใ™ใ€‚ใ“ใ‚ŒใŒSharded DDP / Zero DPใงใ™ใ€‚ ใ“ใฎๆˆฆ็•ฅใ‚’ใ€ๅ„ไบบใŒ็‹ฌ่‡ชใฎใƒ†ใƒณใƒˆใ€ใ‚นใƒˆใƒผใƒ–ใ€ๆ–งใ‚’ๆŒใฃใฆ้‹ใฐใชใ‘ใ‚Œใฐใชใ‚‰ใชใ„ใ‚ทใƒณใƒ—ใƒซใชๆˆฆ็•ฅใจๆฏ”่ผƒใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ใ“ใ‚ŒใŒPyTorchใฎDataParallel๏ผˆDPใŠใ‚ˆใณDDP๏ผ‰ใงใ™ใ€‚ ใ“ใฎใƒˆใƒ”ใƒƒใ‚ฏใฎๆ–‡็Œฎใ‚’่ชญใ‚€้š›ใซใ€ไปฅไธ‹ใฎ้กž็พฉ่ชžใซๅ‡บไผšใ†ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“๏ผšShardedใ€Partitionedใ€‚ ZeROใŒใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚’ๅˆ†ๅ‰ฒใ™ใ‚‹ๆ–นๆณ•ใซๆณจๆ„ใ‚’ๆ‰•ใ†ใจใ€ใ“ใ‚Œใฏใƒ†ใƒณใ‚ฝใƒซใƒ‘ใƒฉใƒฌใƒชใ‚บใƒ ใจ้žๅธธใซไผผใฆใ„ใ‚‹ใ‚ˆใ†ใซ่ฆ‹ใˆใพใ™ใ€‚ใ“ใ‚ŒใฏๅพŒใง่ญฐ่ซ–ใ•ใ‚Œใ‚‹ๅž‚็›ดใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒฌใƒชใ‚บใƒ ใจใฏ็•ฐใชใ‚Šใ€ๅ„ใƒฌใ‚คใƒคใƒผใฎ้‡ใฟใ‚’ใƒ‘ใƒผใƒ†ใ‚ฃใ‚ทใƒงใƒณ/ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ—ใพใ™ใ€‚ Implementations: - [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/) ZeRO-DP stages 1+2+3 - [`transformers` integration](main_classes/trainer#trainer-integrations) ## Naive Model Parallelism (Vertical) and Pipeline Parallelism ใƒŠใ‚คใƒผใƒ–ใƒขใƒ‡ใƒซใƒ‘ใƒฉใƒฌใƒชใ‚บใƒ ๏ผˆMP๏ผ‰ใฏใ€ใƒขใƒ‡ใƒซใฎๅฑคใ‚’่ค‡ๆ•ฐใฎGPUใซๅˆ†ๆ•ฃใ•ใ›ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚ใ“ใฎใƒกใ‚ซใƒ‹ใ‚บใƒ ใฏๆฏ”่ผƒ็š„ๅ˜็ด”ใงใ€ๅธŒๆœ›ใ™ใ‚‹ๅฑคใ‚’`.to()`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆ็‰นๅฎšใฎใƒ‡ใƒใ‚คใ‚นใซๅˆ‡ใ‚Šๆ›ฟใˆใ‚‹ใ ใ‘ใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ‡ใƒผใ‚ฟใŒใ“ใ‚Œใ‚‰ใฎๅฑคใ‚’้€š้Žใ™ใ‚‹ใŸใณใซใ€ใƒ‡ใƒผใ‚ฟใ‚‚ๅฑคใจๅŒใ˜ใƒ‡ใƒใ‚คใ‚นใซๅˆ‡ใ‚Šๆ›ฟใˆใ‚‰ใ‚Œใ€ๆฎ‹ใ‚Šใฎ้ƒจๅˆ†ใฏๅค‰ๆ›ดใ•ใ‚Œใพใ›ใ‚“ใ€‚ ็งใŸใกใฏใ“ใ‚Œใ‚’ใ€Œๅž‚็›ดMPใ€ใจๅ‘ผใณใพใ™ใ€‚ใชใœใชใ‚‰ใ€ใปใจใ‚“ใฉใฎใƒขใƒ‡ใƒซใŒใฉใฎใ‚ˆใ†ใซๆใ‹ใ‚Œใ‚‹ใ‹ใ‚’ๆ€ใ„ๅ‡บใ™ใจใ€ๅฑคใ‚’ๅž‚็›ดใซใ‚นใƒฉใ‚คใ‚นใ™ใ‚‹ใ‹ใ‚‰ใงใ™ใ€‚ใŸใจใˆใฐใ€ไปฅไธ‹ใฎๅ›ณใฏ8ๅฑคใฎใƒขใƒ‡ใƒซใ‚’็คบใ—ใฆใ„ใพใ™๏ผš ``` =================== =================== | 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 | =================== =================== gpu0 gpu1 ``` ๆˆ‘ใ€…ใฏใ€ใƒขใƒ‡ใƒซใ‚’ๅž‚็›ดใซ2ใคใซๅˆ†ๅ‰ฒใ—ใ€ใƒฌใ‚คใƒคใƒผ0ใ‹ใ‚‰3ใ‚’GPU0ใซ้…็ฝฎใ—ใ€ใƒฌใ‚คใƒคใƒผ4ใ‹ใ‚‰7ใ‚’GPU1ใซ้…็ฝฎใ—ใพใ—ใŸใ€‚ ใƒ‡ใƒผใ‚ฟใŒใƒฌใ‚คใƒคใƒผ0ใ‹ใ‚‰1ใ€1ใ‹ใ‚‰2ใ€2ใ‹ใ‚‰3ใซ็งปๅ‹•ใ™ใ‚‹้–“ใฏ้€šๅธธใฎใƒขใƒ‡ใƒซใจๅŒใ˜ใงใ™ใ€‚ใ—ใ‹ใ—ใ€ใƒ‡ใƒผใ‚ฟใŒใƒฌใ‚คใƒคใƒผ3ใ‹ใ‚‰ใƒฌใ‚คใƒคใƒผ4ใซ็งปๅ‹•ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ€GPU0ใ‹ใ‚‰GPU1ใธใฎ็งปๅ‹•ใŒ็™บ็”Ÿใ—ใ€้€šไฟกใฎใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ๅ‚ๅŠ ใ—ใฆใ„ใ‚‹GPUใŒๅŒใ˜ใ‚ณใƒณใƒ”ใƒฅใƒผใƒˆใƒŽใƒผใƒ‰๏ผˆไพ‹๏ผšๅŒใ˜็‰ฉ็†ใƒžใ‚ทใƒณ๏ผ‰ใซใ‚ใ‚‹ๅ ดๅˆใ€ใ“ใฎใ‚ณใƒ”ใƒผใฏ้žๅธธใซ้ซ˜้€Ÿใงใ™ใŒใ€็•ฐใชใ‚‹ใ‚ณใƒณใƒ”ใƒฅใƒผใƒˆใƒŽใƒผใƒ‰๏ผˆไพ‹๏ผš่ค‡ๆ•ฐใฎใƒžใ‚ทใƒณ๏ผ‰ใซใ‚ใ‚‹ๅ ดๅˆใ€้€šไฟกใฎใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใฏๅคงๅน…ใซๅข—ๅŠ ใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ใใฎๅพŒใ€ใƒฌใ‚คใƒคใƒผ4ใ‹ใ‚‰5ใ€6ใ‹ใ‚‰7ใพใงใฏ้€šๅธธใฎใƒขใƒ‡ใƒซใจๅŒๆง˜ใซๅ‹•ไฝœใ—ใ€7็•ช็›ฎใฎใƒฌใ‚คใƒคใƒผใŒๅฎŒไบ†ใ™ใ‚‹ใจใ€ใƒ‡ใƒผใ‚ฟใ‚’ใ—ใฐใ—ใฐใƒฌใ‚คใƒคใƒผ0ใซๆˆปใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผˆใพใŸใฏใƒฉใƒ™ใƒซใ‚’ๆœ€ๅพŒใฎใƒฌใ‚คใƒคใƒผใซ้€ไฟกใ—ใพใ™๏ผ‰ใ€‚ใ“ใ‚Œใงๆๅคฑใ‚’่จˆ็ฎ—ใ—ใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใŒไฝœๆฅญใ‚’้–‹ๅง‹ใงใใพใ™ใ€‚ ๅ•้กŒ็‚น๏ผš - ไธปใชๆฌ ็‚นใ€ใŠใ‚ˆใณใชใœใ“ใ‚Œใ‚’ใ€Œๅ˜็ด”ใชใ€MPใจๅ‘ผใถใฎใ‹ใฏใ€1ใคใ‚’้™คใ„ใฆใ™ในใฆใฎGPUใŒใฉใ‚“ใช็žฌ้–“ใงใ‚‚ใ‚ขใ‚คใƒ‰ใƒซ็Šถๆ…‹ใงใ‚ใ‚‹ใ“ใจใงใ™ใ€‚ใ—ใŸใŒใฃใฆใ€4ใคใฎGPUใ‚’ไฝฟ็”จใ™ใ‚‹ๅ 
ดๅˆใ€ๅ˜็ด”ใชMPใฏใ€1ใคใฎGPUใฎใƒกใƒขใƒชๅฎน้‡ใ‚’4ๅ€ใซใ™ใ‚‹ใฎใจใปใผๅŒใ˜ใงใ‚ใ‚Šใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใฎๆฎ‹ใ‚Šใ‚’็„ก่ฆ–ใ—ใพใ™ใ€‚ใ•ใ‚‰ใซใ€ใƒ‡ใƒผใ‚ฟใฎใ‚ณใƒ”ใƒผใฎใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒใ‚ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใฆใฏใ„ใ‘ใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€4ๆžšใฎ6GBใฎใ‚ซใƒผใƒ‰ใฏใ€ใƒ‡ใƒผใ‚ฟใฎใ‚ณใƒ”ใƒผใฎใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒใชใ„1ๆžšใฎ24GBใฎใ‚ซใƒผใƒ‰ใจๅŒใ˜ใ‚ตใ‚คใ‚บใ‚’ๅŽๅฎนใงใใ‚‹ใงใ—ใ‚‡ใ†ใŒใ€ๅพŒ่€…ใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ใ‚ˆใ‚Š่ฟ…้€ŸใซๅฎŒไบ†ใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ใŸใจใˆใฐ40GBใฎใ‚ซใƒผใƒ‰ใŒใ‚ใ‚Šใ€45GBใฎใƒขใƒ‡ใƒซใ‚’ๅŽใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ€ๅ‹พ้…ใจใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใฎใŸใ‚ใซใปใจใ‚“ใฉๅŽใ‚ใ‚‹ใ“ใจใŒใงใใพใ›ใ‚“ใ€‚ - ๅ…ฑๆœ‰ใฎๅŸ‹ใ‚่พผใฟใฏใ€GPU้–“ใงใ‚ณใƒ”ใƒผใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณไธฆๅˆ—ๅ‡ฆ็†๏ผˆPP๏ผ‰ใฏใ€ใปใผๅ˜็ด”ใชMPใจๅŒใ˜ใงใ™ใŒใ€GPUใŒใ‚ขใ‚คใƒ‰ใƒซ็Šถๆ…‹ใซใชใ‚‹ๅ•้กŒใ‚’่งฃๆฑบใ—ใ€ๅ…ฅๅŠ›ใƒใƒƒใƒใ‚’ใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใซๅˆ†ๅ‰ฒใ—ใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไบบๅทฅ็š„ใซไฝœๆˆใ™ใ‚‹ใ“ใจใซใ‚ˆใ‚Šใ€็•ฐใชใ‚‹GPUใŒ่จˆ็ฎ—ใƒ—ใƒญใ‚ปใ‚นใซๅŒๆ™‚ใซๅ‚ๅŠ ใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ไปฅไธ‹ใฏใ€[GPipe่ซ–ๆ–‡](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)ใ‹ใ‚‰ใฎๅ›ณใงใ€ไธŠ้ƒจใซใฏๅ˜็ด”ใชMPใ€ไธ‹้ƒจใซใฏPPใŒ็คบใ•ใ‚Œใฆใ„ใพใ™๏ผš ![mp-pp](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-gpipe-bubble.png) ใ“ใฎๅ›ณใ‹ใ‚‰ใ€PPใŒGPUใŒใ‚ขใ‚คใƒ‰ใƒซ็Šถๆ…‹ใฎ้ ˜ๅŸŸใงใ‚ใ‚‹ใ€Œใƒใƒ–ใƒซใ€ใ‚’ๅฐ‘ใชใๆŒใคใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ใ‚ขใ‚คใƒ‰ใƒซ็Šถๆ…‹ใฎ้ƒจๅˆ†ใฏใ€Œใƒใƒ–ใƒซใ€ใจๅ‘ผใฐใ‚Œใพใ™ใ€‚ ๅ›ณใฎไธกๆ–นใฎ้ƒจๅˆ†ใฏใ€4ใคใฎGPUใŒใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซๅ‚ๅŠ ใ—ใฆใ„ใ‚‹4ใฎๆฌกๅ…ƒใฎไธฆๅˆ—ๆ€งใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ใคใพใ‚Šใ€4ใคใฎใƒ‘ใ‚คใƒ—ใ‚นใƒ†ใƒผใ‚ธF0ใ€F1ใ€F2ใ€F3ใฎใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใŒใ‚ใ‚Šใ€้€†้ †ใฎใƒใƒƒใ‚ฏใƒฏใƒผใƒ‰ใƒ‘ใ‚นB3ใ€B2ใ€B1ใ€B0ใŒใ‚ใ‚Šใพใ™ใ€‚ PPใฏ่ชฟๆ•ดใ™ใ‚‹ๆ–ฐใ—ใ„ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๅฐŽๅ…ฅใ—ใพใ™ใ€‚ใใ‚Œใฏ `chunks` ใงใ€ๅŒใ˜ใƒ‘ใ‚คใƒ—ใ‚นใƒ†ใƒผใ‚ธใ‚’้€šใ˜ใฆ้€ฃ็ถšใ—ใฆ้€ไฟกใ•ใ‚Œใ‚‹ใƒ‡ใƒผใ‚ฟใฎใƒใƒฃใƒณใ‚ฏใฎๆ•ฐใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€ไธ‹ใฎๅ›ณใงใฏ `chunks=4` ใŒ่กจ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚GPU0ใฏใƒใƒฃใƒณใ‚ฏ0ใ€1ใ€2ใ€3๏ผˆF0,0ใ€F0,1ใ€F0,2ใ€F0,3๏ผ‰ใงๅŒใ˜ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ—ใ€ไป–ใฎGPUใŒไฝœๆฅญใ‚’้–‹ๅง‹ใ—ๅง‹ใ‚ใ‚‹ใฎใ‚’ๅพ…ใฃใฆใ‹ใ‚‰ใ€GPU0ใฏใƒใƒฃใƒณใ‚ฏ3ใ€2ใ€1ใ€0๏ผˆB0,3ใ€B0,2ใ€B0,1ใ€B0,0๏ผ‰ใง้€†้ †ใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ๆณจๆ„ใ™ในใใฏใ€ๆฆ‚ๅฟต็š„ใซใฏใ“ใ‚ŒใŒๅ‹พ้…่“„็ฉใ‚นใƒ†ใƒƒใƒ—๏ผˆGAS๏ผ‰ใจๅŒใ˜ใ‚ณใƒณใ‚ปใƒ—ใƒˆใงใ‚ใ‚‹ใ“ใจใงใ™ใ€‚PyTorchใฏ `chunks` ใ‚’ไฝฟ็”จใ—ใ€DeepSpeedใฏๅŒใ˜ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’GASใจๅ‘ผใณใพใ™ใ€‚ `chunks` ใฎๅฐŽๅ…ฅใซใ‚ˆใ‚Šใ€PPใฏใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒ๏ผˆMBS๏ผ‰ใฎๆฆ‚ๅฟตใ‚’ๅฐŽๅ…ฅใ—ใพใ™ใ€‚DPใฏใ‚ฐใƒญใƒผใƒใƒซใƒ‡ใƒผใ‚ฟใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ใƒŸใƒ‹ใƒใƒƒใƒใซๅˆ†ๅ‰ฒใ—ใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€DPใฎๆฌกๆ•ฐใŒ4ใงใ€ใ‚ฐใƒญใƒผใƒใƒซใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒ1024ใฎๅ ดๅˆใ€4ใคใฎใƒŸใƒ‹ใƒใƒƒใƒ๏ผˆใใ‚Œใžใ‚Œ256๏ผ‰ใซๅˆ†ๅ‰ฒใ•ใ‚Œใพใ™๏ผˆ1024/4๏ผ‰ใ€‚ใใ—ใฆใ€`chunks`๏ผˆใพใŸใฏGAS๏ผ‰ใฎๆ•ฐใŒ32ใงใ‚ใ‚‹ๅ ดๅˆใ€ใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใ‚ตใ‚คใ‚บใฏ8ใซใชใ‚Šใพใ™๏ผˆ256/32๏ผ‰ใ€‚ๅ„ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚นใƒ†ใƒผใ‚ธใฏ1ใคใฎใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใงไฝœๆฅญใ—ใพใ™ใ€‚ DP + PPใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใฎใ‚ฐใƒญใƒผใƒใƒซใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’่จˆ็ฎ—ใ™ใ‚‹ใซใฏใ€`mbs*chunks*dp_degree`๏ผˆ`8*32*4=1024`๏ผ‰ใ‚’่กŒใ„ใพใ™ใ€‚ ๅ›ณใซๆˆปใ‚Šใพใ—ใ‚‡ใ†ใ€‚ `chunks=1` ใงใ‚ใ‚Œใฐใ€้žๅŠน็އใชๅ˜็ด”ใชMPใซใชใ‚Šใพใ™ใ€‚้žๅธธใซๅคงใใช `chunks` 
With a very large `chunks` value, on the other hand, you end up with tiny micro-batch sizes, which may not be very efficient either. So one has to experiment to find the value that leads to the most efficient utilization of the GPUs. This corresponds to minimizing the size of the bubble, enabling high concurrent GPU utilization across all participating GPUs.

There are 2 groups of solutions: the traditional Pipeline API solutions, and more modern solutions that require far fewer modifications to the user's model.

Traditional Pipeline API solutions:
- PyTorch
- DeepSpeed
- Megatron-LM

Modern solutions:
- Varuna
- SageMaker

Problems with the traditional Pipeline API solutions:
- The model has to be modified quite heavily, because Pipeline requires rewriting the normal flow of modules into an `nn.Sequential` sequence of the same, which may require changing the design of the model.
- Currently the Pipeline API is very restricted. If you have a bunch of Python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since the pipeline is going to chunk each mini-batch into micro-batches. Possible improvements are being discussed here: https://github.com/pytorch/pytorch/pull/50693
- Conditional control flow at the level of pipe stages is not possible - e.g., encoder-decoder models like T5 require special workarounds to handle a conditional encoder stage.
- Each layer has to be arranged so that the output of one model becomes an input to the other model.

We have not experimented yet with Varuna and SageMaker, but their papers report that they have overcome the list of problems mentioned above and that they require much smaller changes to the user's model.
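To make the `nn.Sequential` requirement mentioned above concrete, here is a minimal sketch following PyTorch's experimental `Pipe` API; it assumes two GPUs, and the layer sizes are arbitrary:

```python
import os

import torch
import torch.distributed.rpc as rpc
import torch.nn as nn
from torch.distributed.pipeline.sync import Pipe

# the Pipe API requires the RPC framework to be initialized first
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
rpc.init_rpc("worker", rank=0, world_size=1)

# the model must be rewritten as an nn.Sequential, pre-placed on the devices
fc1 = nn.Linear(16, 8).cuda(0)
fc2 = nn.Linear(8, 4).cuda(1)
model = nn.Sequential(fc1, fc2)

# chunks = how many micro-batches each mini-batch is split into
model = Pipe(model, chunks=8)
output = model(torch.randn(16, 16).cuda(0)).local_value()
```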
Implementations:
- [Pytorch](https://pytorch.org/docs/stable/pipeline.html) (initial support in pytorch-1.8, progressively improved in 1.9 and more so in 1.10). Some [examples](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)
- [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation - no API.
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS.
- [OSLO](https://github.com/tunib-ai/oslo) - implemented based on Hugging Face Transformers.

🤗 Transformers status: as of this writing, none of the models supports full PP (pipeline parallelism). The GPT2 and T5 models have naive MP (model parallelism) support. The main obstacle is being unable to convert the models to `nn.Sequential` and have all the inputs be Tensors. The current models include many features that make the conversion very complicated, and these would need to be removed first.

Other approaches:

DeepSpeed, Varuna and SageMaker use the concept of an [interleaved pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html), where the bubble (idle time) is further minimized by prioritizing backward passes.

Varuna further tries to improve the schedule by using simulations to discover the most efficient scheduling.

OSLO implements pipeline parallelism based on Transformers, without `nn.Sequential` conversion.

## Tensor Parallelism

In Tensor Parallelism, each GPU processes only a slice of a tensor and only aggregates the full tensor for operations that require the whole thing.

In this section we use concepts and diagrams from the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) paper: [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473).

The main building block of any transformer is a fully connected `nn.Linear` followed by a nonlinear activation `GeLU`.

Following the notation of the Megatron paper, we can write the matrix-multiplication part of it as `Y = GeLU(XA)`, where `X` and `Y` are the input and output vectors, and `A` is the weight matrix.

If we look at the computation in matrix form, it's easy to see how the matrix multiplication can be split between multiple GPUs:

![Parallel GEMM](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_gemm.png)

If we split the weight matrix `A` column-wise across `N` GPUs and perform the matrix multiplications `XA_1` through `XA_n` in parallel, we end up with `N` output vectors `Y_1, Y_2, ..., Y_n`, which can be fed into `GeLU` independently:

![Independent GeLU](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-independent-gelu.png)

Using this principle, we can update an MLP of arbitrary depth without any synchronization between GPUs until the very end. The Megatron-LM authors provide a helpful illustration for that:

![Parallel shard processing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_shard_processing.png)

Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel due to having multiple independent heads!

![Parallel self-attention](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_self_attention.png)
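The column split is easy to verify in plain PyTorch. This self-contained sketch simulates 2 "GPUs" with two weight shards and checks that concatenating their independent `GeLU` outputs reproduces the unsharded result:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(4, 8)  # input
A = torch.randn(8, 6)  # full weight matrix

# split A column-wise into two shards, one per "GPU"
A1, A2 = A.chunk(2, dim=1)

# each device computes its slice and applies GeLU independently
Y1 = F.gelu(X @ A1)
Y2 = F.gelu(X @ A2)

# concatenating the slices reproduces the unsharded Y = GeLU(XA)
assert torch.allclose(torch.cat([Y1, Y2], dim=1), F.gelu(X @ A), atol=1e-6)
```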
Special considerations: TP requires a very fast network, so it's not advisable to do TP across more than one node. In practice, if a node has 4 GPUs, the highest TP degree is 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs.

This section is based on the original, much more [detailed TP overview](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530) by [@anton-l](https://github.com/anton-l).

SageMaker combines TP with DP for more efficient processing.

Alternative names:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed) calls it "tensor slicing"; see the [DeepSpeed features](https://www.deepspeed.ai/training/#model-parallelism) for details.

Implementations:
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal, model-specific implementation.
- [parallelformers](https://github.com/tunib-ai/parallelformers) (inference only at the moment).
- [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS.
- [OSLO](https://github.com/tunib-ai/oslo) has a tensor parallelism implementation based on Transformers.

🤗 Transformers status:
- core: not yet implemented in the core.
- but if you need inference, [parallelformers](https://github.com/tunib-ai/parallelformers) provides this support for most of our models. Until this is implemented in the core, you can use theirs; hopefully training mode will be supported too.
- Deepspeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in its super-fast CUDA-kernel-based inference mode; see more [here](https://www.deepspeed.ai/tutorials/inference-tutorial/).

## DP+PP

The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates how to combine DP with PP.

![dp-pp-2d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero-dp-pp.png)

Here it's important to see how DP rank 0 doesn't see GPU2 and DP rank 1 doesn't see GPU3. To DP, there are just GPUs 0 and 1, and it feeds data to them as if there were only 2 GPUs. GPU0 "secretly" offloads some of its load to GPU2 using PP, and GPU1 does the same by enlisting GPU3 to its aid.

Since each dimension requires at least 2 GPUs, here you'd need at least 4 GPUs.

Implementations:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972)
- [OSLO](https://github.com/tunib-ai/oslo)

🤗 Transformers status: not yet implemented

## DP+PP+TP

To get even more efficient training, 3D parallelism is used, where PP is combined with TP and DP. This can be seen in the following diagram.

![dp-pp-tp-3d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-deepspeed-3d.png)
ใ“ใฎๅ›ณใฏ[3Dใƒ‘ใƒฉใƒฌใƒชใ‚บใƒ ๏ผšๅ…†ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒขใƒ‡ใƒซใธใฎใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐ](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)ใจใ„ใ†ใƒ–ใƒญใ‚ฐๆŠ•็จฟใ‹ใ‚‰ๅ–ๅพ—ใ•ใ‚ŒใŸใ‚‚ใฎใงใ€ใŠใ™ใ™ใ‚ใฎ่ชญใฟ็‰ฉใงใ™ใ€‚ ๅ„ๆฌกๅ…ƒใซใฏๅฐ‘ใชใใจใ‚‚2ใคใฎGPUใŒๅฟ…่ฆใงใ™ใฎใงใ€ใ“ใ“ใงใฏๅฐ‘ใชใใจใ‚‚8ใคใฎGPUใŒๅฟ…่ฆใงใ™ใ€‚ ๅฎŸ่ฃ…ไพ‹: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeedใซใฏใ€ใ•ใ‚‰ใซๅŠน็އ็š„ใชDPใงใ‚ใ‚‹ZeRO-DPใจๅ‘ผใฐใ‚Œใ‚‹ใ‚‚ใฎใ‚‚ๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) ๐Ÿค— Transformersใฎ็Šถๆณ: ใพใ ๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚PPใจTPใŒใชใ„ใŸใ‚ใ€‚ ## ZeRO DP+PP+TP DeepSpeedใฎไธป่ฆใชๆฉŸ่ƒฝใฎ1ใคใฏZeROใงใ€ใ“ใ‚ŒใฏDPใฎๆ‹กๅผตๆฉŸ่ƒฝใงใ™ใ€‚ใ“ใ‚Œใซใคใ„ใฆใฏใ™ใงใซใ€ŒZeROใƒ‡ใƒผใ‚ฟไธฆๅˆ—ๅŒ–ใ€ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใพใ™ใ€‚้€šๅธธใ€ใ“ใ‚Œใฏๅ˜็‹ฌใงๅ‹•ไฝœใ™ใ‚‹ๆฉŸ่ƒฝใงใ€PPใ‚„TPใฏๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ—ใ‹ใ—ใ€PPใจTPใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ZeRO-DPใŒPPใจ็ต„ใฟๅˆใ‚ใ•ใ‚Œใ‚‹ๅ ดๅˆใ€้€šๅธธใฏZeROใ‚นใƒ†ใƒผใ‚ธ1๏ผˆใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐ๏ผ‰ใฎใฟใŒๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ ZeROใ‚นใƒ†ใƒผใ‚ธ2๏ผˆๅ‹พ้…ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐ๏ผ‰ใ‚’ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณไธฆๅˆ—ๅŒ–ใจ็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จใ™ใ‚‹็†่ซ–็š„ใชๅฏ่ƒฝๆ€งใฏใ‚ใ‚Šใพใ™ใŒใ€ๆ€ง่ƒฝใซๆ‚ชๅฝฑ้Ÿฟใ‚’ๅŠใผใ—ใพใ™ใ€‚ๅ„ใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใ”ใจใซๅ‹พ้…ใ‚’ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ๅ‰ใซใ€ๅ‹พ้…ใ‚’้›†็ด„ใ™ใ‚‹ใŸใ‚ใฎ่ฟฝๅŠ ใฎใƒชใƒ€ใ‚ฏใ‚ทใƒงใƒณใ‚นใ‚ญใƒฃใƒƒใ‚ฟใƒผ้›†่จˆใŒๅฟ…่ฆใงใ€้€šไฟกใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณไธฆๅˆ—ๅŒ–ใฎๆ€ง่ณชไธŠใ€ๅฐใ•ใชใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใŒไฝฟ็”จใ•ใ‚Œใ€่จˆ็ฎ—ใฎ้›†ไธญๅบฆ๏ผˆใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒใ‚ตใ‚คใ‚บ๏ผ‰ใ‚’ใƒใƒฉใƒณใ‚นใซใ‹ใ‘ใ€ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใƒใƒ–ใƒซ๏ผˆใƒžใ‚คใ‚ฏใƒญใƒใƒƒใƒๆ•ฐ๏ผ‰ใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‹ใ“ใจใซ็„ฆ็‚นใŒๅฝ“ใฆใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ“ใ‚Œใ‚‰ใฎ้€šไฟกใ‚ณใ‚นใƒˆใฏๅฝฑ้Ÿฟใ‚’ๅŠใผใ™ใงใ—ใ‚‡ใ†ใ€‚ ใ•ใ‚‰ใซใ€PPใซใฏ้€šๅธธใ‚ˆใ‚Šใ‚‚ๅฐ‘ใชใ„ๅฑคใŒๅซใพใ‚ŒใฆใŠใ‚Šใ€ใƒกใƒขใƒชใฎ็ฏ€็ด„ใฏใใ‚Œใปใฉๅคงใใใ‚ใ‚Šใพใ›ใ‚“ใ€‚PPใฏๆ—ขใซๅ‹พ้…ใ‚ตใ‚คใ‚บใ‚’ใ€Œ1/PPใ€ใซๅ‰Šๆธ›ใ™ใ‚‹ใŸใ‚ใ€ๅ‹พ้…ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใฎ็ฏ€็ด„ใฏ็ด”็ฒ‹ใชDPใ‚ˆใ‚Šใ‚‚ใฏใ‚‹ใ‹ใซ้‡่ฆใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ZeROใ‚นใƒ†ใƒผใ‚ธ3ใ‚‚ๅŒๆง˜ใฎ็†็”ฑใง้ฉใ—ใฆใ„ใพใ›ใ‚“ - ใ‚ˆใ‚ŠๅคšใใฎใƒŽใƒผใƒ‰้–“้€šไฟกใŒๅฟ…่ฆใงใ™ใ€‚ ใใ—ใฆใ€ZeROใ‚’ๆŒใฃใฆใ„ใ‚‹ใฎใงใ€ใ‚‚ใ†ไธ€ใคใฎๅˆฉ็‚นใฏZeRO-Offloadใงใ™ใ€‚ใ“ใ‚Œใฏใ‚นใƒ†ใƒผใ‚ธ1ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใ‚นใƒ†ใƒผใƒˆใ‚’CPUใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ๅฎŸ่ฃ…ไพ‹: - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed)ใจ[BigScienceใ‹ใ‚‰ใฎMegatron-Deepspeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)ใฏใ€ๅ‰่€…ใฎใƒชใƒใ‚ธใƒˆใƒชใฎใƒ•ใ‚ฉใƒผใ‚ฏใงใ™ใ€‚ - [OSLO](https://github.com/tunib-ai/oslo) ้‡่ฆใช่ซ–ๆ–‡: - [DeepSpeedใจMegatronใ‚’ไฝฟ็”จใ—ใŸMegatron-Turing NLG 530Bใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ](https://arxiv.org/abs/2201.11990) ๐Ÿค— Transformersใฎ็Šถๆณ: ใพใ ๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚PPใจTPใŒใชใ„ใŸใ‚ใ€‚ ## FlexFlow [FlexFlow](https://github.com/flexflow/FlexFlow)ใฏใ€ใ‚ใšใ‹ใซ็•ฐใชใ‚‹ใ‚ขใƒ—ใƒญใƒผใƒใงไธฆๅˆ—ๅŒ–ใฎๅ•้กŒใ‚’่งฃๆฑบใ—ใพใ™ใ€‚ ่ซ–ๆ–‡: [Zhihao Jiaใ€Matei Zahariaใ€Alex Aikenใซใ‚ˆใ‚‹ "Deep Neural 
Networksใฎใƒ‡ใƒผใ‚ฟใจใƒขใƒ‡ใƒซใฎไธฆๅˆ—ๅŒ–ใ‚’่ถ…ใˆใฆ"](https://arxiv.org/abs/1807.05358) FlexFlowใฏใ€ใ‚ตใƒณใƒ—ใƒซ-ใ‚ชใƒšใƒฌใƒผใ‚ฟ-ๅฑžๆ€ง-ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ4Dไธฆๅˆ—ๅŒ–ใ‚’่กŒใ„ใพใ™ใ€‚ 1. ใ‚ตใƒณใƒ—ใƒซ = ใƒ‡ใƒผใ‚ฟไธฆๅˆ—ๅŒ–๏ผˆใ‚ตใƒณใƒ—ใƒซๅ˜ไฝใฎไธฆๅˆ—ๅŒ–๏ผ‰ 2. ใ‚ชใƒšใƒฌใƒผใ‚ฟ = ๅ˜ไธ€ใฎๆ“ไฝœใ‚’ใ„ใใคใ‹ใฎใ‚ตใƒ–ๆ“ไฝœใซไธฆๅˆ—ๅŒ– 3. ๅฑžๆ€ง = ใƒ‡ใƒผใ‚ฟไธฆๅˆ—ๅŒ–๏ผˆ้•ทใ•ๆ–นๅ‘ใฎไธฆๅˆ—ๅŒ–๏ผ‰ 4. ใƒ‘ใƒฉใƒกใƒผใ‚ฟ = ใƒขใƒ‡ใƒซไธฆๅˆ—ๅŒ–๏ผˆๆฌกๅ…ƒใซ้–ขไฟ‚ใชใใ€ๆฐดๅนณใพใŸใฏๅž‚็›ด๏ผ‰ ไพ‹: * ใ‚ตใƒณใƒ—ใƒซ ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ท512ใฎ10ใƒใƒƒใƒใ‚’่€ƒใˆใฆใฟใพใ—ใ‚‡ใ†ใ€‚ใ“ใ‚Œใ‚‰ใ‚’ใ‚ตใƒณใƒ—ใƒซๆฌกๅ…ƒใง2ใคใฎใƒ‡ใƒใ‚คใ‚นใซไธฆๅˆ—ๅŒ–ใ™ใ‚‹ใจใ€10 x 512ใŒ5 x 2 x 512ใซใชใ‚Šใพใ™ใ€‚ * ใ‚ชใƒšใƒฌใƒผใ‚ฟ ๅฑคๆญฃ่ฆๅŒ–ใ‚’่กŒใ†ๅ ดๅˆใ€ใพใšstdใ‚’่จˆ็ฎ—ใ—ใ€ๆฌกใซmeanใ‚’่จˆ็ฎ—ใ—ใ€ใƒ‡ใƒผใ‚ฟใ‚’ๆญฃ่ฆๅŒ–ใงใใพใ™ใ€‚ใ‚ชใƒšใƒฌใƒผใ‚ฟใฎไธฆๅˆ—ๅŒ–ใซใ‚ˆใ‚Šใ€stdใจmeanใ‚’ไธฆๅˆ—ใซ่จˆ็ฎ—ใงใใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ‚ชใƒšใƒฌใƒผใ‚ฟๆฌกๅ…ƒใง2ใคใฎใƒ‡ใƒใ‚คใ‚น๏ผˆcuda:0ใ€cuda:1๏ผ‰ใซไธฆๅˆ—ๅŒ–ใ™ใ‚‹ใจใ€ๆœ€ๅˆใซๅ…ฅๅŠ›ใƒ‡ใƒผใ‚ฟใ‚’ไธกๆ–นใฎใƒ‡ใƒใ‚คใ‚นใซใ‚ณใƒ”ใƒผใ—ใ€cuda:0ใงstdใ‚’่จˆ็ฎ—ใ—ใ€cuda:1ใงmeanใ‚’ๅŒๆ™‚ใซ่จˆ็ฎ—ใ—ใพใ™ใ€‚ * ๅฑžๆ€ง 10ใƒใƒƒใƒใฎ512้•ทใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใ‚’ๅฑžๆ€งๆฌกๅ…ƒใง2ใคใฎใƒ‡ใƒใ‚คใ‚นใซไธฆๅˆ—ๅŒ–ใ™ใ‚‹ใจใ€10 x 512ใŒ10 x 2 x 256ใซใชใ‚Šใพใ™ใ€‚ * ใƒ‘ใƒฉใƒกใƒผใ‚ฟ ใ“ใ‚Œใฏใƒ†ใƒณใ‚ฝใƒซใƒขใƒ‡ใƒซใฎไธฆๅˆ—ๅŒ–ใพใŸใฏๅ˜็ด”ใชๅฑคใ”ใจใฎใƒขใƒ‡ใƒซใฎไธฆๅˆ—ๅŒ–ใจไผผใฆใ„ใพใ™ใ€‚ ใ“ใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฎ้‡่ฆๆ€งใฏใ€๏ผˆ1๏ผ‰GPU/TPU/CPUๅฏพ๏ผˆ2๏ผ‰RAM/DRAMๅฏพ๏ผˆ3๏ผ‰้ซ˜้€Ÿๅ†…้ƒจๆŽฅ็ถš/ไฝŽ้€Ÿๅค–้ƒจๆŽฅ็ถšใชใฉใฎใƒชใ‚ฝใƒผใ‚นใ‚’ๅ–ใ‚Šใ€ใ“ใ‚Œใ‚‰ใ™ในใฆใ‚’ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใซใ‚ˆใฃใฆ่‡ชๅ‹•็š„ใซๆœ€้ฉๅŒ–ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใฉใฎไธฆๅˆ—ๅŒ–ใ‚’ใฉใ“ใงไฝฟ็”จใ™ใ‚‹ใ‹ใ‚’ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ็š„ใซๆฑบๅฎšใ—ใพใ™ใ€‚ ้žๅธธใซ้‡่ฆใชๅด้ขใฎ1ใคใฏใ€FlexFlowใฏ้™็š„ใงๅ›บๅฎšใฎใƒฏใƒผใ‚ฏใƒญใƒผใƒ‰ใ‚’ๆŒใคใƒขใƒ‡ใƒซใฎใŸใ‚ใซ่จญ่จˆใ•ใ‚ŒใฆใŠใ‚Šใ€ๅ‹•็š„ใชๅ‹•ไฝœใ‚’ๆŒใคใƒขใƒ‡ใƒซใฏใ‚คใƒ†ใƒฌใƒผใ‚ทใƒงใƒณใ”ใจใซ็•ฐใชใ‚‹ไธฆๅˆ—ๅŒ–ๆˆฆ็•ฅใ‚’ๅฅฝใ‚€ๅ ดๅˆใŒใ‚ใ‚‹ใ“ใจใงใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ“ใฎใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฎ็ด„ๆŸใฏ้žๅธธใซ้ญ…ๅŠ›็š„ใงใ™ใ€‚้ธๆŠžใ—ใŸใ‚ฏใƒฉใ‚นใ‚ฟใง30ๅˆ†้–“ใฎใ‚ทใƒŸใƒฅใƒฌใƒผใ‚ทใƒงใƒณใ‚’ๅฎŸ่กŒใ—ใ€ใ“ใฎ็‰นๅฎšใฎ็’ฐๅขƒใ‚’ๆœ€้ฉใซๅˆฉ็”จใ™ใ‚‹ใŸใ‚ใฎๆœ€่‰ฏใฎๆˆฆ็•ฅใ‚’ๆไพ›ใ—ใพใ™ใ€‚้ƒจๅˆ†ใ‚’่ฟฝๅŠ /ๅ‰Š้™ค/็ฝฎๆ›ใ™ใ‚‹ใจใ€ใใ‚Œใซๅฏพใ—ใฆๅฎŸ่กŒใ—ใฆๅ†ๆœ€้ฉๅŒ–ใƒ—ใƒฉใƒณใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใใฎๅพŒใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใพใ™ใ€‚็•ฐใชใ‚‹ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใซใฏ็‹ฌ่‡ชใฎๆœ€้ฉๅŒ–ใŒใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Transformersใฎ็พๅœจใฎ็Šถๆณ: ใพใ ็ตฑๅˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚ใ™ใงใซ[transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py)ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใŒFXใƒˆใƒฌใƒผใ‚นๅฏ่ƒฝใงใ‚ใ‚‹ใŸใ‚ใ€FlexFlowใ‚’ๅ‹•ไฝœใ•ใ›ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชๆ‰‹้ †ใ‚’่ชฐใ‹ใŒ่ฆ‹ใคใ‘ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ## Which Strategy To Use When ใ“ใ“ใงใฏใ€ใฉใฎไธฆๅˆ—ๅŒ–ๆˆฆ็•ฅใ‚’ใ„ใคไฝฟ็”จใ™ใ‚‹ใ‹ใฎ้žๅธธใซใŠใŠใพใ‹ใชใ‚ขใ‚ฆใƒˆใƒฉใ‚คใƒณใ‚’็คบใ—ใพใ™ใ€‚ๅ„ใƒชใ‚นใƒˆใฎๆœ€ๅˆใŒ้€šๅธธใ‚ˆใ‚Šใ‚‚้€Ÿใ„ใ“ใจใŒไธ€่ˆฌ็š„ใงใ™ใ€‚ **โ‡จ ๅ˜ไธ€GPU** * ใƒขใƒ‡ใƒซใŒๅ˜ไธ€GPUใซๅŽใพใ‚‹ๅ ดๅˆ๏ผš 1. ้€šๅธธใฎไฝฟ็”จ * ใƒขใƒ‡ใƒซใŒๅ˜ไธ€GPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆ๏ผš 1. ZeRO + CPUใ‚’ใ‚ชใƒ•ใƒญใƒผใƒ‰ใ—ใ€ใ‚ชใƒ—ใ‚ทใƒงใƒณใงNVMeใ‚’ใ‚ชใƒ•ใƒญใƒผใƒ‰ 2. 
ไธŠ่จ˜ใซๅŠ ใˆใฆใ€ๆœ€ๅคงใฎใƒฌใ‚คใƒคใƒผใŒๅ˜ไธ€GPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆใ€[Memory Centric Tiling](https://deepspeed.readthedocs.io/en/latest/zero3.html#memory-centric-tiling)๏ผˆ่ฉณ็ดฐใฏไปฅไธ‹ๅ‚็…ง๏ผ‰ใ‚’ๆœ‰ๅŠนๅŒ– * ๆœ€ๅคงใฎใƒฌใ‚คใƒคใƒผใŒๅ˜ไธ€GPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆ๏ผš 1. ZeROใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆ - TPใ‚’ๆœ‰ๅŠนๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใชใœใชใ‚‰ใ€PPใ ใ‘ใงใฏๅŽใ‚ใ‚‹ใ“ใจใŒใงใใชใ„ใ‹ใ‚‰ใงใ™ใ€‚ 2. ZeROใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ไธŠ่จ˜ใฎใ€Œๅ˜ไธ€GPUใ€ใฎใ‚จใƒณใƒˆใƒชใจๅŒใ˜ใ‚‚ใฎใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ **โ‡จ ๅ˜ไธ€ใƒŽใƒผใƒ‰/ใƒžใƒซใƒGPU** * ใƒขใƒ‡ใƒซใŒๅ˜ไธ€GPUใซๅŽใพใ‚‹ๅ ดๅˆ๏ผš 1. DDP - ๅˆ†ๆ•ฃใƒ‡ใƒผใ‚ฟไธฆๅˆ— 2. ZeRO - ็Šถๆณใจไฝฟ็”จใ•ใ‚Œใ‚‹ๆง‹ๆˆใซไพๅญ˜ใ—ใฆ้€Ÿใ„ใ‹ใฉใ†ใ‹ใŒ็•ฐใชใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ * ใƒขใƒ‡ใƒซใŒๅ˜ไธ€GPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆ๏ผš 1. PP 2. ZeRO 3. TP ้žๅธธใซ้ซ˜้€ŸใชใƒŽใƒผใƒ‰ๅ†…ๆŽฅ็ถšใŒNVLINKใพใŸใฏNVSwitchใงใ‚ใ‚‹ๅ ดๅˆใ€ใ“ใ‚Œใ‚‰ใฎใ™ในใฆใฏใปใจใ‚“ใฉๅŒ็ญ‰ใฎๆ€ง่ƒฝใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใŒใชใ„ๅ ดๅˆใ€PPใฏTPใพใŸใฏZeROใ‚ˆใ‚Šใ‚‚้€Ÿใใชใ‚Šใพใ™ใ€‚TPใฎๅบฆๅˆใ„ใ‚‚้•ใ„ใ‚’็”Ÿใ˜ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚็‰นๅฎšใฎใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใงๅ‹่€…ใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซๅฎŸ้จ“ใ™ใ‚‹ใฎใŒๆœ€ๅ–„ใงใ™ใ€‚ TPใฏใปใจใ‚“ใฉๅธธใซๅ˜ไธ€ใƒŽใƒผใƒ‰ๅ†…ใงไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ใคใพใ‚Šใ€TPใ‚ตใ‚คใ‚บ <= ใƒŽใƒผใƒ‰ใ‚ใŸใ‚ŠใฎGPUใงใ™ใ€‚ * ๆœ€ๅคงใฎใƒฌใ‚คใƒคใƒผใŒๅ˜ไธ€GPUใซๅŽใพใ‚‰ใชใ„ๅ ดๅˆ๏ผš 1. ZeROใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆ - TPใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใชใœใชใ‚‰ใ€PPใ ใ‘ใงใฏๅŽใ‚ใ‚‹ใ“ใจใŒใงใใชใ„ใ‹ใ‚‰ใงใ™ใ€‚ 2. ZeROใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ไธŠ่จ˜ใฎใ€Œๅ˜ไธ€GPUใ€ใฎใ‚จใƒณใƒˆใƒชใจๅŒใ˜ใ‚‚ใฎใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ **โ‡จ ใƒžใƒซใƒใƒŽใƒผใƒ‰/ใƒžใƒซใƒGPU** * ้ซ˜้€ŸใชใƒŽใƒผใƒ‰้–“ๆŽฅ็ถšใŒใ‚ใ‚‹ๅ ดๅˆ๏ผš 1. ZeRO - ใƒขใƒ‡ใƒซใธใฎใปใจใ‚“ใฉใฎๅค‰ๆ›ดใŒไธ่ฆใงใ™ 2. PP+TP+DP - ้€šไฟกใŒๅฐ‘ใชใใ€ใƒขใƒ‡ใƒซใซๅคง่ฆๆจกใชๅค‰ๆ›ดใŒๅฟ…่ฆใงใ™ * ้…ใ„ใƒŽใƒผใƒ‰้–“ๆŽฅ็ถšใŒใ‚ใ‚Šใ€GPUใƒกใƒขใƒชใŒๅฐ‘ใชใ„ๅ ดๅˆ๏ผš 1. DP+PP+TP+ZeRO-1
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/pr_checks.md
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Checks on a Pull Request ๐Ÿค— Transformersใƒชใƒใ‚ธใƒˆใƒชใงใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้–‹ใใจใ€่ฟฝๅŠ ใ—ใฆใ„ใ‚‹ใƒ‘ใƒƒใƒใŒๆ—ขๅญ˜ใฎใ‚‚ใฎใ‚’ๅฃŠใ—ใฆใ„ใชใ„ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใซใ€ใ‹ใชใ‚Šใฎๆ•ฐใฎใƒใ‚งใƒƒใ‚ฏใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒใ‚งใƒƒใ‚ฏใซใฏใ€ๆฌกใฎ4ใคใฎใ‚ฟใ‚คใƒ—ใŒใ‚ใ‚Šใพใ™๏ผš - ้€šๅธธใฎใƒ†ใ‚นใƒˆ - ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎใƒ“ใƒซใƒ‰ - ใ‚ณใƒผใƒ‰ใจใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎใ‚นใ‚ฟใ‚คใƒซ - ใƒชใƒใ‚ธใƒˆใƒชๅ…จไฝ“ใฎไธ€่ฒซๆ€ง ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ€ใ“ใ‚Œใ‚‰ใฎใ•ใพใ–ใพใชใƒใ‚งใƒƒใ‚ฏใจใใฎ่ƒŒๅพŒใซใ‚ใ‚‹็†็”ฑใ€ใใ—ใฆใใ‚Œใ‚‰ใฎใ„ใšใ‚Œใ‹ใŒใ‚ใชใŸใฎใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใงๅคฑๆ•—ใ—ใŸๅ ดๅˆใฎใƒญใƒผใ‚ซใƒซใงใฎใƒ‡ใƒใƒƒใ‚ฐๆ–นๆณ•ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ใชใŠใ€็†ๆƒณ็š„ใซใฏใ€้–‹็™บ่€…็”จใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซใŒๅฟ…่ฆใงใ™๏ผš ```bash pip install transformers[dev] ``` ใพใŸใฏ็ทจ้›†ๅฏ่ƒฝใชใ‚คใƒณใ‚นใƒˆใƒผใƒซใฎๅ ดๅˆ๏ผš ```bash pip install -e .[dev] ``` ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใ‚บใฎใƒชใƒใ‚ธใƒˆใƒชๅ†…ใงไฝœๆฅญใ—ใฆใ„ใพใ™ใ€‚ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใ‚บใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฎไพๅญ˜้–ขไฟ‚ใฎๆ•ฐใŒๅข—ใˆใŸใŸใ‚ใ€ใ™ในใฆใ‚’ๅ–ๅพ—ใงใใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚้–‹็™บ็”จใ‚คใƒณใ‚นใƒˆใƒผใƒซใŒๅคฑๆ•—ใ—ใŸๅ ดๅˆใ€ไฝœๆฅญใ—ใฆใ„ใ‚‹ใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏ๏ผˆPyTorchใ€TensorFlowใ€ใŠใ‚ˆใณ/ใพใŸใฏFlax๏ผ‰ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใ€ๆฌกใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„ใ€‚ ```bash pip install transformers[quality] ``` ใพใŸใฏ็ทจ้›†ๅฏ่ƒฝใชใ‚คใƒณใ‚นใƒˆใƒผใƒซใฎๅ ดๅˆ๏ผš ```bash pip install -e .[quality] ``` ## Tests `ci/circleci: run_tests_` ใงๅง‹ใพใ‚‹ใ™ในใฆใฎใ‚ธใƒงใƒ–ใฏใ€Transformersใฎใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎไธ€้ƒจใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใ‚ธใƒงใƒ–ใฏใ€็‰นๅฎšใฎ็’ฐๅขƒใงใƒฉใ‚คใƒ–ใƒฉใƒชใฎไธ€้ƒจใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใŸใจใˆใฐใ€`ci/circleci: run_tests_pipelines_tf` ใฏใ€TensorFlowใฎใฟใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚ŒใŸ็’ฐๅขƒใงใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฎใƒ†ใ‚นใƒˆใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฎไธ€้ƒจใฎใฟใŒๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใƒ†ใ‚นใƒˆใ‚นใ‚คใƒผใƒˆใฏใ€ๅค‰ๆ›ดๅ‰ใจๅค‰ๆ›ดๅพŒใฎPRใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฎ้•ใ„ใ‚’ๆฑบๅฎšใ—ใ€ใใฎ้•ใ„ใซๅฝฑ้Ÿฟใ‚’ๅ—ใ‘ใ‚‹ใƒ†ใ‚นใƒˆใ‚’้ธๆŠžใ™ใ‚‹ใŸใ‚ใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใŒๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใ“ใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใฏใ€ใƒญใƒผใ‚ซใƒซใงไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆๅฎŸ่กŒใงใใพใ™๏ผš ```bash python utils/tests_fetcher.py ``` 1. ใƒชใƒใ‚ธใƒˆใƒชใฎใƒซใƒผใƒˆใ‹ใ‚‰ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ใ“ใ‚Œใฏๆฌกใฎใ‚นใƒ†ใƒƒใƒ—ใ‚’ๅฎŸ่กŒใ—ใพใ™๏ผš 1. ๅทฎๅˆ†ๅ†…ใฎๅ„ใƒ•ใ‚กใ‚คใƒซใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใ€ๅค‰ๆ›ดใŒใ‚ณใƒผใƒ‰ๅ†…ใซใ‚ใ‚‹ใ‹ใ€ใ‚ณใƒกใƒณใƒˆใ‚„ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณๆ–‡ๅญ—ๅˆ—ใฎใฟใซใ‚ใ‚‹ใ‹ใ‚’็ขบ่ชใ—ใพใ™ใ€‚ๅฎŸ้š›ใฎใ‚ณใƒผใƒ‰ๅค‰ๆ›ดใŒใ‚ใ‚‹ใƒ•ใ‚กใ‚คใƒซใฎใฟใ‚’ไฟๆŒใ—ใพใ™ใ€‚ 2. 
ๅ†…้ƒจใฎใƒžใƒƒใƒ—ใ‚’ๆง‹็ฏ‰ใ—ใพใ™ใ€‚ใ“ใฎใƒžใƒƒใƒ—ใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ‚ฝใƒผใ‚นใ‚ณใƒผใƒ‰ใฎๅ„ใƒ•ใ‚กใ‚คใƒซใŒๅ†ๅธฐ็š„ใซๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ใ™ในใฆใฎใƒ•ใ‚กใ‚คใƒซใ‚’ๆไพ›ใ—ใพใ™ใ€‚ใƒขใ‚ธใƒฅใƒผใƒซAใŒใƒขใ‚ธใƒฅใƒผใƒซBใซๅฝฑ้Ÿฟใ‚’ไธŽใˆใ‚‹ใจใฏใ€ใƒขใ‚ธใƒฅใƒผใƒซBใŒใƒขใ‚ธใƒฅใƒผใƒซAใ‚’ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ๅ ดๅˆใ‚’ๆŒ‡ใ—ใพใ™ใ€‚ๅ†ๅธฐ็š„ใชๅฝฑ้Ÿฟใ‚’ๅพ—ใ‚‹ใซใฏใ€ใƒขใ‚ธใƒฅใƒผใƒซAใ‹ใ‚‰ใƒขใ‚ธใƒฅใƒผใƒซBใธใฎใƒขใ‚ธใƒฅใƒผใƒซใฎใƒใ‚งใƒผใƒณใŒๅฟ…่ฆใงใ€ๅ„ใƒขใ‚ธใƒฅใƒผใƒซใฏๅ‰ใฎใƒขใ‚ธใƒฅใƒผใƒซใ‚’ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ 3. ใ“ใฎใƒžใƒƒใƒ—ใ‚’ใ‚นใƒ†ใƒƒใƒ—1ใงๅŽ้›†ใ—ใŸใƒ•ใ‚กใ‚คใƒซใซ้ฉ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€PRใซๅฝฑ้Ÿฟใ‚’ๅ—ใ‘ใ‚‹ใƒขใƒ‡ใƒซใƒ•ใ‚กใ‚คใƒซใฎใƒชใ‚นใƒˆใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ 4. ใ“ใ‚Œใ‚‰ใฎใƒ•ใ‚กใ‚คใƒซใ‚’ใใ‚Œใซๅฏพๅฟœใ™ใ‚‹ใƒ†ใ‚นใƒˆใƒ•ใ‚กใ‚คใƒซใซใƒžใƒƒใƒ—ใ—ใ€ๅฎŸ่กŒใ™ใ‚‹ใƒ†ใ‚นใƒˆใฎใƒชใ‚นใƒˆใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ 2. ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ใƒญใƒผใ‚ซใƒซใงๅฎŸ่กŒใ™ใ‚‹ๅ ดๅˆใ€ใ‚นใƒ†ใƒƒใƒ—1ใ€3ใ€ใŠใ‚ˆใณ4ใฎ็ตๆžœใŒ่กจ็คบใ•ใ‚Œใ€ๅฎŸ่กŒใ™ใ‚‹ใƒ†ใ‚นใƒˆใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใพใŸใ€`test_list.txt` ใจใ„ใ†ๅๅ‰ใฎใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ใ“ใฎใƒ•ใ‚กใ‚คใƒซใซใฏๅฎŸ่กŒใ™ใ‚‹ใƒ†ใ‚นใƒˆใฎใƒชใ‚นใƒˆใŒๅซใพใ‚ŒใฆใŠใ‚Šใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใงใƒญใƒผใ‚ซใƒซใงๅฎŸ่กŒใงใใพใ™๏ผš ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` ## Documentation build `build_pr_documentation` ใ‚ธใƒงใƒ–ใฏใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎใƒ“ใƒซใƒ‰ใ‚’่กŒใ„ใ€ใ‚ใชใŸใฎPRใŒใƒžใƒผใ‚ธใ•ใ‚ŒใŸๅพŒใซใ™ในใฆใŒๆญฃๅธธใซ่กจ็คบใ•ใ‚Œใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ใƒœใƒƒใƒˆใŒใƒ—ใƒฌใƒ“ใƒฅใƒผใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใธใฎใƒชใƒณใ‚ฏใ‚’PRใซ่ฟฝๅŠ ใ—ใพใ™ใ€‚PRใซๅฏพใ™ใ‚‹ๅค‰ๆ›ดใฏใ€ใƒ—ใƒฌใƒ“ใƒฅใƒผใซ่‡ชๅ‹•็š„ใซๅๆ˜ ใ•ใ‚Œใพใ™ใ€‚ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎใƒ“ใƒซใƒ‰ใซๅคฑๆ•—ใ—ใŸๅ ดๅˆใ€ๅคฑๆ•—ใ—ใŸใ‚ธใƒงใƒ–ใฎ้šฃใซใ‚ใ‚‹ใ€Œ่ฉณ็ดฐใ€ใ‚’ใ‚ฏใƒชใƒƒใ‚ฏใ—ใฆใ€ไฝ•ใŒๅ•้กŒใซใชใฃใฆใ„ใ‚‹ใ‹ใ‚’็ขบ่ชใงใใพใ™ใ€‚ๅคšใใฎๅ ดๅˆใ€ๅ•้กŒใฏ`toctree`ๅ†…ใฎใƒ•ใ‚กใ‚คใƒซใŒไธ่ถณใ—ใฆใ„ใ‚‹ใชใฉใ€ๅ˜็ด”ใชใ‚‚ใฎใงใ™ใ€‚ ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ใƒญใƒผใ‚ซใƒซใงใƒ“ใƒซใƒ‰ใพใŸใฏใƒ—ใƒฌใƒ“ใƒฅใƒผใ—ใŸใ„ๅ ดๅˆใฏใ€[docsใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใฎ`README.md`](https://github.com/huggingface/transformers/tree/main/docs)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ## Code and documentation style ใ™ในใฆใฎใ‚ฝใƒผใ‚นใƒ•ใ‚กใ‚คใƒซใ€ไพ‹ใ€ใƒ†ใ‚นใƒˆใซใฏใ€`black`ใจ`ruff`ใ‚’ไฝฟ็”จใ—ใฆใ‚ณใƒผใƒ‰ใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใŒ้ฉ็”จใ•ใ‚Œใพใ™ใ€‚ใพใŸใ€ใƒ‰ใƒƒใ‚ฏใ‚นใƒˆใƒชใƒณใ‚ฐใจ`rst`ใƒ•ใ‚กใ‚คใƒซใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ€Transformersใฎ`__init__.py`ใƒ•ใ‚กใ‚คใƒซใงๅฎŸ่กŒใ•ใ‚Œใ‚‹้…ๅปถใ‚คใƒณใƒใƒผใƒˆใฎ้ †ๅบใซใคใ„ใฆใ‚‚ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใŒๅญ˜ๅœจใ—ใพใ™๏ผˆ`utils/style_doc.py`ใจ`utils/custom_init_isort.py`๏ผ‰ใ€‚ใ“ใ‚Œใ‚‰ใ™ในใฆใฏใ€ไปฅไธ‹ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ“ใจใง่ตทๅ‹•ใงใใพใ™ใ€‚ ```bash make style ``` CIใฏใ€`ci/circleci: check_code_quality` ใƒใ‚งใƒƒใ‚ฏๅ†…ใงใ“ใ‚Œใ‚‰ใฎใƒใ‚งใƒƒใ‚ฏใŒ้ฉ็”จใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ใพใŸใ€`ruff` ใ‚’ๅฎŸ่กŒใ—ใ€ๆœชๅฎš็พฉใฎๅค‰ๆ•ฐใ‚„ไฝฟ็”จใ•ใ‚Œใฆใ„ใชใ„ๅค‰ๆ•ฐใŒใ‚ใ‚‹ๅ ดๅˆใซใ‚จใƒฉใƒผใ‚’ๅ ฑๅ‘Šใ—ใพใ™ใ€‚ใ“ใฎใƒใ‚งใƒƒใ‚ฏใ‚’ใƒญใƒผใ‚ซใƒซใงๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ ```bash make quality ``` ๆ™‚้–“ใŒใ‹ใ‹ใ‚‹ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€็พๅœจใฎใƒ–ใƒฉใƒณใƒใงๅค‰ๆ›ดใ—ใŸใƒ•ใ‚กใ‚คใƒซใฎใฟใงๅŒใ˜ใ“ใจใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ```bash make fixup ``` ใ“ใฎๆœ€ๅพŒใฎใ‚ณใƒžใƒณใƒ‰ใฏใ€ใƒชใƒใ‚ธใƒˆใƒชใฎๆ•ดๅˆๆ€งใฎใŸใ‚ใฎใ™ในใฆใฎ่ฟฝๅŠ 
ใฎใƒใ‚งใƒƒใ‚ฏใ‚‚ๅฎŸ่กŒใ—ใพใ™ใ€‚ใใ‚Œใ‚‰ใ‚’่ฉณใ—ใ่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ## Repository consistency ใ“ใ‚Œใซใฏใ€ใ‚ใชใŸใฎPRใŒใƒชใƒใ‚ธใƒˆใƒชใ‚’้ฉๅˆ‡ใช็Šถๆ…‹ใซไฟใฃใŸใพใพใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใฎใ™ในใฆใฎใƒ†ใ‚นใƒˆใŒๅซใพใ‚ŒใฆใŠใ‚Šใ€ci/`circleci: check_repository_consistency` ใƒใ‚งใƒƒใ‚ฏใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใƒญใƒผใ‚ซใƒซใงใ“ใฎใƒใ‚งใƒƒใ‚ฏใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ไปฅไธ‹ใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ```bash make repo-consistency ``` ใ“ใ‚Œใ‚’็ขบ่ชใ—ใพใ™๏ผš - `utils/check_repo.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€init ใซ่ฟฝๅŠ ใ•ใ‚ŒใŸใ™ในใฆใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒๆ–‡ๆ›ธๅŒ–ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ - `utils/check_inits.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ใ™ในใฆใฎ `__init__.py` ใƒ•ใ‚กใ‚คใƒซใŒใใฎ2ใคใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงๅŒใ˜ๅ†…ๅฎนใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ - `utils/check_copies.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ไป–ใฎใƒขใ‚ธใƒฅใƒผใƒซใ‹ใ‚‰ใฎใ‚ณใƒ”ใƒผใจใ—ใฆ่ญ˜ๅˆฅใ•ใ‚ŒใŸใ™ในใฆใฎใ‚ณใƒผใƒ‰ใŒๅ…ƒใฎใ‚ณใƒผใƒ‰ใจไธ€่‡ดใ—ใฆใ„ใพใ™ใ€‚ - `utils/check_config_docstrings.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ใ™ในใฆใฎ่จญๅฎšใ‚ฏใƒฉใ‚นใซใฏๅฐ‘ใชใใจใ‚‚1ใคใฎๆœ‰ๅŠนใชใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆๆ–‡ๅญ—ๅˆ—ใซ่จ˜่ผ‰ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ - `utils/check_config_attributes.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ใ™ในใฆใฎ่จญๅฎšใ‚ฏใƒฉใ‚นใซใฏใ€ๅฏพๅฟœใ™ใ‚‹ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใงไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ๅฑžๆ€งใฎใฟใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ - `utils/check_copies.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€README ใจใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฎใ‚คใƒณใƒ‡ใƒƒใ‚ฏใ‚นใฎ็ฟป่จณใŒใ€ใƒกใ‚คใƒณใฎREADME ใจๅŒใ˜ใƒขใƒ‡ใƒซใƒชใ‚นใƒˆใ‚’ๆŒใฃใฆใ„ใพใ™ใ€‚ - `utils/check_table.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใฎ่‡ชๅ‹•็”Ÿๆˆใƒ†ใƒผใƒ–ใƒซใŒๆœ€ๆ–ฐใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ - `utils/check_dummies.py` ใซใ‚ˆใฃใฆๅฎŸ่กŒใ•ใ‚Œใ‚‹ใ€ใ™ในใฆใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒๅˆฉ็”จๅฏ่ƒฝใงใ‚ใ‚Šใ€ใ‚ชใƒ—ใ‚ทใƒงใƒณใฎไพๅญ˜้–ขไฟ‚ใŒใ™ในใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใชใใฆใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ“ใฎใƒใ‚งใƒƒใ‚ฏใŒๅคฑๆ•—ใ™ใ‚‹ๅ ดๅˆใ€ๆœ€ๅˆใฎ2ใคใฎ้ …็›ฎใฏๆ‰‹ๅ‹•ใงไฟฎๆญฃใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ๆœ€ๅพŒใฎ4ใคใฏใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆ่‡ชๅ‹•็š„ใซไฟฎๆญฃใงใใพใ™ใ€‚ ```bash make fix-copies ``` ่ฟฝๅŠ ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹Pull Request๏ผˆPR๏ผ‰ใซ้–ข้€ฃใ—ใฆใ„ใพใ™ใ€‚ไธปใซๆฌกใฎ็‚นใ‚’็ขบ่ชใ—ใพใ™๏ผš - ใ™ในใฆใฎ่ฟฝๅŠ ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฏใ€Auto-mapping๏ผˆ`utils/check_repo.py`ใงๅฎŸ่กŒ๏ผ‰ใซๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ <!-- TODO Sylvainใ€ๅ…ฑ้€šใฎใƒ†ใ‚นใƒˆใŒๅฎŸ่ฃ…ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ใƒใ‚งใƒƒใ‚ฏใ‚’่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ใ€‚--> - ใ™ในใฆใฎใƒขใƒ‡ใƒซใŒ้ฉๅˆ‡ใซใƒ†ใ‚นใƒˆใ•ใ‚Œใฆใ„ใพใ™๏ผˆ`utils/check_repo.py`ใงๅฎŸ่กŒ๏ผ‰ใ€‚ <!-- TODO Sylvainใ€ไปฅไธ‹ใ‚’่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ - ใ™ในใฆใฎใƒขใƒ‡ใƒซใŒใƒกใ‚คใƒณใฎREADMEใŠใ‚ˆใณใƒกใ‚คใƒณใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆๅ†…ใซ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ - ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใ™ในใฆใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒๅฎŸ้š›ใซHubใซๅญ˜ๅœจใ—ใฆใ„ใพใ™ --> ### Check copies Transformersใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€ใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใซ้–ขใ—ใฆ้žๅธธใซๆ„่ฆ‹ใŒใ‚ใ‚‹ใŸใ‚ใ€ๅ„ใƒขใƒ‡ใƒซใฏไป–ใฎใƒขใƒ‡ใƒซใซไพๅญ˜ใ›ใšใซๅฎŒๅ…จใซ1ใคใฎใƒ•ใ‚กใ‚คใƒซใซๅฎŸ่ฃ…ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€็‰นๅฎšใฎใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‰ใฎใ‚ณใƒ”ใƒผใŒๅ…ƒใฎใ‚ณใƒผใƒ‰ใจไธ€่ฒซใ—ใฆใ„ใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ™ใ‚‹ไป•็ต„ใฟใ‚’่ฟฝๅŠ ใ—ใพใ—ใŸใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒใ‚ฐไฟฎๆญฃใŒใ‚ใ‚‹ๅ ดๅˆใ€ไป–ใฎๅฝฑ้Ÿฟใ‚’ๅ—ใ‘ใ‚‹ใƒขใƒ‡ใƒซใ‚’ใ™ในใฆ็ขบ่ชใ—ใ€ๅค‰ๆ›ดใ‚’ไผ้”ใ™ใ‚‹ใ‹ใ‚ณใƒ”ใƒผใ‚’็ 
ดๆฃ„ใ™ใ‚‹ใ‹ใ‚’้ธๆŠžใงใใพใ™ใ€‚ <Tip> ใƒ•ใ‚กใ‚คใƒซใŒๅˆฅใฎใƒ•ใ‚กใ‚คใƒซใฎๅฎŒๅ…จใชใ‚ณใƒ”ใƒผใงใ‚ใ‚‹ๅ ดๅˆใ€ใใ‚Œใ‚’`utils/check_copies.py`ใฎ`FULL_COPIES`ๅฎšๆ•ฐใซ็™ป้Œฒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> ใ“ใฎไป•็ต„ใฟใฏใ€`# Copied from xxx`ใจใ„ใ†ๅฝขๅผใฎใ‚ณใƒกใƒณใƒˆใซไพๅญ˜ใ—ใฆใ„ใพใ™ใ€‚`xxx`ใฏใ€ใ‚ณใƒ”ใƒผใ•ใ‚Œใฆใ„ใ‚‹ใ‚ฏใƒฉใ‚นใพใŸใฏ้–ขๆ•ฐใฎๅฎŒๅ…จใชใƒ‘ใ‚นใ‚’ๅซใ‚€ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€`RobertaSelfOutput`ใฏ`BertSelfOutput`ใ‚ฏใƒฉใ‚นใฎ็›ดๆŽฅใฎใ‚ณใƒ”ใƒผใงใ™ใฎใงใ€[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289)ใซใ‚ณใƒกใƒณใƒˆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py # Copied from transformers.models.bert.modeling_bert.BertSelfOutput ``` ๆณจๆ„็‚นใจใ—ใฆใ€ใ“ใ‚Œใ‚’ใ‚ฏใƒฉใ‚นๅ…จไฝ“ใซ้ฉ็”จใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ใ‚ณใƒ”ใƒผๅ…ƒใฎ้–ข้€ฃใƒกใ‚ฝใƒƒใƒ‰ใซ้ฉ็”จใงใใพใ™ใ€‚ใŸใจใˆใฐใ€[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598)ใงใฏใ€`RobertaPreTrainedModel._init_weights` ใŒ `BertPreTrainedModel` ใ‹ใ‚‰ใ‚ณใƒ”ใƒผใ•ใ‚ŒใฆใŠใ‚Šใ€ไปฅไธ‹ใฎใ‚ณใƒกใƒณใƒˆใŒใ‚ใ‚Šใพใ™๏ผš ```py # Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta ``` ๆณจ๏ผš็Ÿขๅฐใฎๅ‘จใ‚Šใซใฏใ‚นใƒšใƒผใ‚นใŒๅซใพใ‚Œใฆใ„ใฆใฏใ„ใ‘ใพใ›ใ‚“๏ผˆใ‚‚ใกใ‚ใ‚“ใ€ใใฎใ‚นใƒšใƒผใ‚นใŒ็ฝฎๆ›ใƒ‘ใ‚ฟใƒผใƒณใฎไธ€้ƒจใงใ‚ใ‚‹ๅ ดๅˆใ‚’้™คใใพใ™๏ผ‰ใ€‚ ใ‚ซใƒณใƒžใงๅŒบๅˆ‡ใ‚‰ใ‚ŒใŸ่ค‡ๆ•ฐใฎใƒ‘ใ‚ฟใƒผใƒณใ‚’่ฟฝๅŠ ใงใใพใ™ใ€‚ไพ‹ใˆใฐใ€ใ“ใ“ใงใฏ `CamemberForMaskedLM` ใฏ `RobertaForMaskedLM` ใฎ็›ดๆŽฅใฎใ‚ณใƒ”ใƒผใงใ€2ใคใฎ็ฝฎๆ›ใŒใ‚ใ‚Šใพใ™๏ผš `Roberta` ใ‹ใ‚‰ `Camembert` ใธใ€ใใ—ใฆ `ROBERTA` ใ‹ใ‚‰ `CAMEMBERT` ใธใจ็ฝฎๆ›ใ•ใ‚Œใพใ™ใ€‚[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929)ใงใ€ใ“ใฎไฝœๆฅญใฏใ‚ณใƒกใƒณใƒˆไป˜ใใง่กŒใ‚ใ‚Œใฆใ„ใพใ™ใ€‚ ```py # Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT ``` ใ‚‚ใ—้ †ๅบใŒ้‡่ฆใชๅ ดๅˆ๏ผˆไปฅๅ‰ใฎ็ฝฎๆ›ใจ็ซถๅˆใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใŸใ‚๏ผ‰ใ€็ฝฎๆ›ใฏๅทฆใ‹ใ‚‰ๅณใซๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ <Tip> ใ‚‚ใ—็ฝฎๆ›ใŒใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚’ๅค‰ๆ›ดใ™ใ‚‹ๅ ดๅˆ๏ผˆใŸใจใˆใฐใ€็Ÿญใ„ๅๅ‰ใ‚’้žๅธธใซ้•ทใ„ๅๅ‰ใซ็ฝฎใๆ›ใˆใ‚‹ๅ ดๅˆใชใฉ๏ผ‰ใ€่‡ชๅ‹•ใƒ•ใ‚ฉใƒผใƒžใƒƒใ‚ฟใ‚’้ฉ็”จใ—ใŸๅพŒใซใ‚ณใƒ”ใƒผใŒ็ขบ่ชใ•ใ‚Œใพใ™ใ€‚ </Tip> ใƒ‘ใ‚ฟใƒผใƒณใŒๅŒใ˜็ฝฎๆ›ใฎ็•ฐใชใ‚‹ใ‚ฑใƒผใ‚น๏ผˆๅคงๆ–‡ๅญ—ใจๅฐๆ–‡ๅญ—ใฎใƒใƒชใ‚ขใƒณใƒˆใŒใ‚ใ‚‹๏ผ‰ใฎๅ ดๅˆใ€ใ‚ชใƒ—ใ‚ทใƒงใƒณใจใ—ใฆ `all-casing` ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใฎๅˆฅใฎๆ–นๆณ•ใ‚‚ใ‚ใ‚Šใพใ™ใ€‚[ใ“ใกใ‚‰](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237)ใฏใ€`MobileBertForSequenceClassification` ๅ†…ใฎไพ‹ใงใ€ใ‚ณใƒกใƒณใƒˆใŒใคใ„ใฆใ„ใพใ™ใ€‚ ```py # Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing ``` ใ“ใฎๅ ดๅˆใ€ใ‚ณใƒผใƒ‰ใฏใ€ŒBertForSequenceClassificationใ€ใ‹ใ‚‰ใ‚ณใƒ”ใƒผใ•ใ‚Œใ€ๆฌกใฎใ‚ˆใ†ใซ็ฝฎๆ›ใ•ใ‚Œใพใ™๏ผš - `Bert` ใ‚’ `MobileBert` ใซ็ฝฎใๆ›ใˆใ‚‹๏ผˆไพ‹๏ผš`init`ใง `MobileBertModel` ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆ๏ผ‰ - `bert` ใ‚’ `mobilebert` ใซ็ฝฎใๆ›ใˆใ‚‹๏ผˆไพ‹๏ผš`self.mobilebert` ใ‚’ๅฎš็พฉใ™ใ‚‹ๅ ดๅˆ๏ผ‰ - `BERT` ใ‚’ `MOBILEBERT` ใซ็ฝฎใๆ›ใˆใ‚‹๏ผˆๅฎšๆ•ฐ 
`MOBILEBERT_INPUTS_DOCSTRING` ๅ†…ใง๏ผ‰
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/autoclass_tutorial.md
<!-- ่‘—ไฝœๆจฉ 2023 The HuggingFace Teamใ€‚ๅ…จ่‘—ไฝœๆจฉๆ‰€ๆœ‰ใ€‚ Apache Licenseใ€Version 2.0๏ผˆไปฅไธ‹ใ€Œใƒฉใ‚คใ‚ปใƒณใ‚นใ€ใจๅ‘ผใณใพใ™๏ผ‰ใซๅŸบใฅใใƒฉใ‚คใ‚ปใƒณใ‚นใงใ€ ใƒฉใ‚คใ‚ปใƒณใ‚นใซๅพ“ใ‚ใชใ„้™ใ‚Šใ€ใ“ใฎใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟ็”จใงใใพใ›ใ‚“ใ€‚ ใƒฉใ‚คใ‚ปใƒณใ‚นใฎใ‚ณใƒ”ใƒผใฏไปฅไธ‹ใ‹ใ‚‰ๅ…ฅๆ‰‹ใงใใพใ™๏ผš http://www.apache.org/licenses/LICENSE-2.0 ้ฉ็”จๆณ•ใซๅพ“ใ†ใ‹ใ€ๆ›ธ้ขใซใ‚ˆใ‚‹ๅŒๆ„ใŒใ‚ใ‚‹้™ใ‚Šใ€ใƒฉใ‚คใ‚ปใƒณใ‚นใฎไธ‹ใงใ‚ฝใƒ•ใƒˆใ‚ฆใ‚งใ‚ขใฏ้…ๅธƒใ•ใ‚Œใพใ™ใ€‚ ใƒฉใ‚คใ‚ปใƒณใ‚นใซๅŸบใฅใ็‰นๅฎšใฎ่จ€่ชžใงใฎๆกไปถใ‚’็ขบ่ชใ™ใ‚‹ใ‹ใ€ใƒฉใ‚คใ‚ปใƒณใ‚นใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎใƒ•ใ‚กใ‚คใƒซใฏMarkdownๅฝขๅผใงใ™ใŒใ€doc-builder๏ผˆMDXใซ้กžไผผใ—ใŸใ‚‚ใฎ๏ผ‰ใฎ็‰นๅฎšใฎๆง‹ๆ–‡ใ‚’ๅซใ‚“ใงใŠใ‚Šใ€ ใŠไฝฟใ„ใฎMarkdownใƒ“ใƒฅใƒผใ‚ขใงๆญฃใ—ใใƒฌใƒณใƒ€ใƒชใƒณใ‚ฐใ•ใ‚Œใชใ„ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ --> # AutoClassใ‚’ไฝฟ็”จใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ ใ•ใพใ–ใพใชTransformerใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒๅญ˜ๅœจใ™ใ‚‹ใŸใ‚ใ€่‡ชๅˆ†ใฎใ‚ฟใ‚นใ‚ฏใซๅˆใฃใŸใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ™ใ‚‹ใฎใฏ้›ฃใ—ใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ ๐Ÿค— Transformersใฎใ‚ณใ‚ขๅ“ฒๅญฆใฎไธ€็’ฐใจใ—ใฆใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ—ใ‚„ใ™ใใ€ใ‚ทใƒณใƒ—ใƒซใงๆŸ”่ปŸใซใ™ใ‚‹ใŸใ‚ใซใ€ `AutoClass`ใฏไธŽใˆใ‚‰ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ๆญฃใ—ใ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’่‡ชๅ‹•็š„ใซๆŽจ่ซ–ใ—ใฆใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ `from_pretrained()`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’็ด ๆ—ฉใใƒญใƒผใƒ‰ใงใใ‚‹ใŸใ‚ใ€ใƒขใƒ‡ใƒซใ‚’ใ‚ผใƒญใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใซๆ™‚้–“ใจใƒชใ‚ฝใƒผใ‚นใ‚’่ฒปใ‚„ใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ“ใฎ็จฎใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซไพๅญ˜ใ—ใชใ„ใ‚ณใƒผใƒ‰ใ‚’็”Ÿๆˆใ™ใ‚‹ใ“ใจใฏใ€ ใ‚ณใƒผใƒ‰ใŒ1ใคใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใงๅ‹•ไฝœใ™ใ‚Œใฐใ€ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒ็•ฐใชใฃใฆใ„ใฆใ‚‚ใ€ๅŒใ˜ใ‚ฟใ‚นใ‚ฏใซๅ‘ใ‘ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸๅ ดๅˆใฏๅˆฅใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใงใ‚‚ๅ‹•ไฝœใ™ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ <Tip> ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฏใƒขใƒ‡ใƒซใฎ้ชจๆ ผใ‚’ๆŒ‡ใ—ใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏ็‰นๅฎšใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎ้‡ใฟใงใ™ใ€‚ ใŸใจใˆใฐใ€[BERT](https://huggingface.co/bert-base-uncased)ใฏใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใ‚ใ‚Šใ€`bert-base-uncased`ใฏใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใงใ™ใ€‚ ใƒขใƒ‡ใƒซใฏใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใพใŸใฏใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎใฉใกใ‚‰ใ‚’ๆŒ‡ใ™ไธ€่ˆฌ็š„ใช็”จ่ชžใงใ™ใ€‚ </Tip> ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ไปฅไธ‹ใ‚’ๅญฆ็ฟ’ใ—ใพใ™๏ผš * ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใ€‚ * ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใ€‚ * ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟ็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใ€‚ * ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใ€‚ * ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใ€‚ ## AutoTokenizer ใปใจใ‚“ใฉใฎNLPใ‚ฟใ‚นใ‚ฏใฏใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใงๅง‹ใพใ‚Šใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใฏๅ…ฅๅŠ›ใ‚’ใƒขใƒ‡ใƒซใงๅ‡ฆ็†ใงใใ‚‹ๅฝขๅผใซๅค‰ๆ›ใ—ใพใ™ใ€‚ [`AutoTokenizer.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ``` ๆฌกใซใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซๅ…ฅๅŠ›ใ‚’ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚บใ—ใพใ™๏ผš ```py >>> sequence = "In a hole in the ground there lived a hobbit." 
>>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoImageProcessor ใƒ“ใ‚ธใƒงใƒณใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใŒ็”ปๅƒใ‚’ๆญฃใ—ใ„ๅ…ฅๅŠ›ๅฝขๅผใซๅค‰ๆ›ใ—ใพใ™ใ€‚ ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ## AutoFeatureExtractor ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆใ€็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใŒใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชไฟกๅทใ‚’ๆญฃใ—ใ„ๅ…ฅๅŠ›ๅฝขๅผใซๅค‰ๆ›ใ—ใพใ™ใ€‚ [`AutoFeatureExtractor.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆ็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™. ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใ‚ฟใ‚นใ‚ฏใฎๅ ดๅˆใ€2ใคใฎๅ‰ๅ‡ฆ็†ใƒ„ใƒผใƒซใ‚’็ต„ใฟๅˆใ‚ใ›ใ‚‹ใƒ—ใƒญใ‚ปใƒƒใ‚ตใŒๅฟ…่ฆใงใ™ใ€‚ใŸใจใˆใฐใ€ [LayoutLMV2](model_doc/layoutlmv2)ใƒขใƒ‡ใƒซใฏ็”ปๅƒใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใจใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใŒๅฟ…่ฆใงใ™ใ€‚ ใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏใ“ใ‚Œใ‚‰ใฎไธกๆ–นใ‚’็ต„ใฟๅˆใ‚ใ›ใพใ™ใ€‚ [`AutoProcessor.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™๏ผš ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel <frameworkcontent> <pt> ๆœ€ๅพŒใซใ€`AutoModelFor`ใ‚ฏใƒฉใ‚นใฏ็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผˆไฝฟ็”จๅฏ่ƒฝใชใ‚ฟใ‚นใ‚ฏใฎๅฎŒๅ…จใชไธ€่ฆงใซใคใ„ใฆใฏ[ใ“ใกใ‚‰](model_doc/auto)ใ‚’ๅ‚็…ง๏ผ‰ใ€‚ ใŸใจใˆใฐใ€[`AutoModelForSequenceClassification.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กž็”จใฎใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผš ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๅŒใ˜ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๅ†ๅˆฉ็”จใ—ใฆ็•ฐใชใ‚‹ใ‚ฟใ‚นใ‚ฏใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผš ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` <Tip warning={true}> PyTorchใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€ `from_pretrained()`ใƒกใ‚ฝใƒƒใƒ‰ใฏๅ†…้ƒจใง`torch.load()`ใ‚’ไฝฟ็”จใ—ใ€ๅ†…้ƒจ็š„ใซใฏ`pickle`ใ‚’ไฝฟ็”จใ—ใฆใŠใ‚Šใ€ใ‚ปใ‚ญใƒฅใƒชใƒ†ใ‚ฃใฎๅ•้กŒใŒ็Ÿฅใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ ไธ€่ˆฌ็š„ใซใฏใ€ไฟก้ ผๆ€งใฎใชใ„ใ‚ฝใƒผใ‚นใ‹ใ‚‰ๅ–ๅพ—ใ—ใŸๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใƒขใƒ‡ใƒซใ‚„ๆ”นใ–ใ‚“ใ•ใ‚ŒใŸๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ ใ“ใฎใ‚ปใ‚ญใƒฅใƒชใƒ†ใ‚ฃใƒชใ‚นใ‚ฏใฏใ€`Hugging Face Hub`ใงใƒ›ใ‚นใƒˆใ•ใ‚Œใฆใ„ใ‚‹ๅ…ฌ้–‹ใƒขใƒ‡ใƒซใซๅฏพใ—ใฆ้ƒจๅˆ†็š„ใซ็ทฉๅ’Œใ•ใ‚ŒใฆใŠใ‚Šใ€ๅ„ใ‚ณใƒŸใƒƒใƒˆใงใƒžใƒซใ‚ฆใ‚งใ‚ขใฎใ‚นใ‚ญใƒฃใƒณใŒ่กŒใ‚ใ‚Œใฆใ„ใพใ™ใ€‚ GPGใ‚’ไฝฟ็”จใ—ใŸ็ฝฒๅๆธˆใฟใ‚ณใƒŸใƒƒใƒˆใฎๆคœ่จผใชใฉใฎใƒ™ใ‚นใƒˆใƒ—ใƒฉใ‚ฏใƒ†ใ‚ฃใ‚นใซใคใ„ใฆใฏใ€Hubใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ TensorFlowใŠใ‚ˆใณFlaxใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซใฏๅฝฑ้ŸฟใŒใชใใ€`from_pretrained`ใƒกใ‚ฝใƒƒใƒ‰ใฎ`from_tf`ใŠใ‚ˆใณ`from_flax`ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆPyTorchใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃๅ†…ใงใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ </Tip> 
ไธ€่ˆฌ็š„ใซใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใซ`AutoTokenizer`ใ‚ฏใƒฉใ‚นใจ`AutoModelFor`ใ‚ฏใƒฉใ‚นใฎไฝฟ็”จใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅธธใซๆญฃใ—ใ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ๆฌกใฎ[tutorial](preprocessing)ใงใฏใ€ๆ–ฐใ—ใใƒญใƒผใƒ‰ใ—ใŸใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ€็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ€ใŠใ‚ˆใณใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ็”จใซใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๅ‰ๅ‡ฆ็†ใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใณใพใ™ใ€‚ </pt> <tf> ๆœ€ๅพŒใซใ€`TFAutoModelFor`ใ‚ฏใƒฉใ‚นใฏ็‰นๅฎšใฎใ‚ฟใ‚นใ‚ฏใซๅฏพใ—ใฆไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผˆไฝฟ็”จๅฏ่ƒฝใชใ‚ฟใ‚นใ‚ฏใฎๅฎŒๅ…จใชไธ€่ฆงใซใคใ„ใฆใฏใ“ใกใ‚‰ใ‚’ๅ‚็…ง๏ผ‰ใ€‚ ใŸใจใˆใฐใ€[`TFAutoModelForSequenceClassification.from_pretrained`]ใ‚’ไฝฟ็”จใ—ใฆใ‚ทใƒผใ‚ฑใƒณใ‚นๅˆ†้กž็”จใฎใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผš ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๅŒใ˜ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๅ†ๅˆฉ็”จใ—ใฆ็•ฐใชใ‚‹ใ‚ฟใ‚นใ‚ฏใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ใƒญใƒผใƒ‰ใงใใพใ™๏ผš ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` ไธ€่ˆฌ็š„ใซใฏใ€ไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒขใƒ‡ใƒซใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใซ`AutoTokenizer`ใ‚ฏใƒฉใ‚นใจ`TFAutoModelFor`ใ‚ฏใƒฉใ‚นใฎไฝฟ็”จใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅธธใซๆญฃใ—ใ„ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ๆฌกใฎ[tutorial](preproccesing)ใงใฏใ€ๆ–ฐใ—ใใƒญใƒผใƒ‰ใ—ใŸใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ€็‰นๅพด้‡ๆŠฝๅ‡บๅ™จใ€ใŠใ‚ˆใณใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ็”จใซใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๅ‰ๅ‡ฆ็†ใ™ใ‚‹ๆ–นๆณ•ใ‚’ๅญฆใณใพใ™ใ€‚ </tf> </frameworkcontent>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: ใ‚ฏใ‚คใƒƒใ‚ฏใƒ„ใ‚ขใƒผ - local: installation title: ใ‚คใƒณใ‚นใƒˆใƒผใƒซ title: Get started - sections: - local: pipeline_tutorial title: ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝฟ็”จใ—ใฆๆŽจ่ซ–ใ‚’ๅฎŸ่กŒใ™ใ‚‹ - local: autoclass_tutorial title: AutoClass ใ‚’ไฝฟ็”จใ—ใฆ็งปๆคๅฏ่ƒฝใชใ‚ณใƒผใƒ‰ใ‚’ไฝœๆˆใ™ใ‚‹ - local: preprocessing title: ใƒ‡ใƒผใ‚ฟใฎๅ‰ๅ‡ฆ็† - local: training title: ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ - local: run_scripts title: ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ - local: accelerate title: ๐Ÿค— Accelerate ใ‚’ไฝฟ็”จใ—ใฆๅˆ†ๆ•ฃใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ™ใ‚‹ - local: peft title: ๐Ÿค— PEFT ใ‚’ไฝฟ็”จใ—ใฆใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ใƒญใƒผใƒ‰ใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ - local: model_sharing title: ใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ - local: transformers_agents title: ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆ - local: llm_tutorial title: LLM ใ‚’ไฝฟ็”จใ—ใŸ็”Ÿๆˆ title: Tutorials - sections: - isExpanded: false sections: - local: tasks/sequence_classification title: ใƒ†ใ‚ญใ‚นใƒˆใฎๅˆ†้กž - local: tasks/token_classification title: ใƒˆใƒผใ‚ฏใƒณใฎๅˆ†้กž - local: tasks/question_answering title: ่ณช็–‘ๅฟœ็ญ” - local: tasks/language_modeling title: ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ - local: tasks/masked_language_modeling title: ใƒžใ‚นใ‚ฏใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ - local: tasks/translation title: ็ฟป่จณ - local: tasks/summarization title: ่ฆ็ด„ - local: tasks/multiple_choice title: ่ค‡ๆ•ฐใฎ้ธๆŠž่‚ข title: ่‡ช็„ถ่จ€่ชžๅ‡ฆ็† - isExpanded: false sections: - local: tasks/audio_classification title: ้Ÿณๅฃฐใฎๅˆ†้กž - local: tasks/asr title: ่‡ชๅ‹•้Ÿณๅฃฐ่ช่ญ˜ title: ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช - isExpanded: false sections: - local: tasks/image_classification title: ็”ปๅƒๅˆ†้กž - local: tasks/semantic_segmentation title: ใ‚ปใƒžใƒณใƒ†ใ‚ฃใƒƒใ‚ฏใ‚ปใ‚ฐใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณ - local: tasks/video_classification title: ใƒ“ใƒ‡ใ‚ชใฎๅˆ†้กž - local: tasks/object_detection title: ็‰ฉไฝ“ๆคœๅ‡บ - local: tasks/zero_shot_object_detection title: ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆ็‰ฉไฝ“ๆคœๅ‡บ - local: tasks/zero_shot_image_classification title: ใ‚ผใƒญใ‚ทใƒงใƒƒใƒˆ็”ปๅƒๅˆ†้กž - local: tasks/monocular_depth_estimation title: ๆทฑใ•ใฎๆŽจๅฎš - local: tasks/image_to_image title: ็”ปๅƒใ‹ใ‚‰็”ปๅƒใธ - local: tasks/knowledge_distillation_for_image_classification title: ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณใฎใŸใ‚ใฎ็Ÿฅ่ญ˜ใฎ่’ธ็•™ title: ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒ“ใ‚ธใƒงใƒณ - isExpanded: false sections: - local: tasks/image_captioning title: ็”ปๅƒใฎใ‚ญใƒฃใƒ—ใ‚ทใƒงใƒณ - local: tasks/document_question_answering title: ๆ–‡ๆ›ธใฎ่ณชๅ•ใธใฎๅ›ž็ญ” - local: tasks/visual_question_answering title: ่ฆ–่ฆš็š„ใช่ณชๅ•ใธใฎๅ›ž็ญ” - local: tasks/text-to-speech title: ใƒ†ใ‚ญใ‚นใƒˆ่ชญใฟไธŠใ’ title: ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ - isExpanded: false sections: - local: generation_strategies title: ็”Ÿๆˆๆˆฆ็•ฅใ‚’ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ™ใ‚‹ title: ไธ–ไปฃ - isExpanded: false sections: - local: tasks/idefics title: IDEFICS ใ‚’ไฝฟ็”จใ—ใŸใ‚คใƒกใƒผใ‚ธ ใ‚ฟใ‚นใ‚ฏ - local: tasks/prompting title: LLM ใƒ—ใƒญใƒณใƒ—ใƒˆ ใ‚ฌใ‚คใƒ‰ title: ใƒ—ใƒญใƒณใƒ—ใƒˆ title: Task Guides - sections: - local: fast_tokenizers title: ๐Ÿค— ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎ้ซ˜้€Ÿใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ - local: multilingual title: ๅคš่จ€่ชžใƒขใƒ‡ใƒซใงๆŽจ่ซ–ใ‚’ๅฎŸ่กŒใ™ใ‚‹ - local: create_a_model title: ใƒขใƒ‡ใƒซๅ›บๆœ‰ใฎ API ใ‚’ไฝฟ็”จใ™ใ‚‹ - local: custom_models title: ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ - local: chat_templating title: ใƒใƒฃใƒƒใƒˆใƒขใƒ‡ใƒซใฎใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆ - local: serialization title: ONNX 
ใธใฎใ‚จใ‚ฏใ‚นใƒใƒผใƒˆ - local: tflite title: TFLite ใธใฎใ‚จใ‚ฏใ‚นใƒใƒผใƒˆ - local: torchscript title: ใƒˆใƒผใƒใ‚นใ‚ฏใƒชใƒ—ใƒˆใธใฎใ‚จใ‚ฏใ‚นใƒใƒผใƒˆ - local: benchmarks title: ใƒ™ใƒณใƒใƒžใƒผใ‚ฏ - local: community title: ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใƒชใ‚ฝใƒผใ‚น - local: custom_tools title: ใ‚ซใ‚นใ‚ฟใƒ ใƒ„ใƒผใƒซใจใƒ—ใƒญใƒณใƒ—ใƒˆ - local: troubleshooting title: ใƒˆใƒฉใƒ–ใƒซใ‚ทใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐ title: ้–‹็™บ่€…ใ‚ฌใ‚คใƒ‰ - sections: - local: performance title: ๆฆ‚่ฆ - sections: - local: perf_train_gpu_one title: ๅ˜ไธ€ใฎ GPU ใงๅŠน็އ็š„ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎๆ–นๆณ•ใจใƒ„ใƒผใƒซ - local: perf_train_gpu_many title: ่ค‡ๆ•ฐใฎ GPU ใจไธฆๅˆ—ๅ‡ฆ็† - local: perf_train_cpu title: CPU ใงใฎๅŠน็އ็š„ใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ - local: perf_train_cpu_many title: ๅˆ†ๆ•ฃCPUใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ - local: perf_train_tpu title: TPU ใซ้–ขใ™ใ‚‹ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ - local: perf_train_tpu_tf title: TensorFlow ใ‚’ไฝฟ็”จใ—ใŸ TPU ใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ - local: perf_train_special title: ็‰นๆฎŠใชใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซ้–ขใ™ใ‚‹ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ - local: perf_hardware title: ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็”จใฎใ‚ซใ‚นใ‚ฟใƒ  ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ข - local: hpo_train title: Trainer API ใ‚’ไฝฟ็”จใ—ใŸใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟๆคœ็ดข title: ๅŠน็އ็š„ใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒ†ใ‚ฏใƒ‹ใƒƒใ‚ฏ - sections: - local: perf_infer_cpu title: CPUใงใฎๆŽจ่ซ– - local: perf_infer_gpu_one title: 1 ใคใฎ GPU ใงใฎๆŽจ่ซ– - local: perf_infer_gpu_many title: ๅคšใใฎ GPU ใงใฎๆŽจ่ซ– - local: perf_infer_special title: ็‰นๆฎŠใชใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใฎๆŽจ่ซ– title: ๆŽจ่ซ–ใฎๆœ€้ฉๅŒ– - local: big_models title: ๅคงใใชใƒขใƒ‡ใƒซใฎใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ– - local: tf_xla title: TensorFlowใƒขใƒ‡ใƒซใฎXLA็ตฑๅˆ - local: perf_torch_compile title: torch.compile()ใ‚’ไฝฟ็”จใ—ใŸๆŽจ่ซ–ใฎๆœ€้ฉๅŒ– title: ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใจใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃ - sections: - local: add_new_model title: ๐Ÿค— Transformersใซใƒขใƒ‡ใƒซใ‚’่ฟฝๅŠ ใ™ใ‚‹ๆ–นๆณ• - local: add_tensorflow_model title: ๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’TensorFlowใซๅค‰ๆ›ใ™ใ‚‹ๆ–นๆณ• - local: testing title: ใƒ†ใ‚นใƒˆ - local: pr_checks title: ใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใฎใƒใ‚งใƒƒใ‚ฏ title: ่ฒข็Œฎใ™ใ‚‹ - sections: - local: philosophy title: ใƒ•ใ‚ฃใƒญใ‚ฝใƒ•ใ‚ฃใƒผ - local: glossary title: ็”จ่ชž้›† - local: task_summary title: ๐Ÿค— TransformersใฎๆฉŸ่ƒฝ - local: tasks_explained title: ๐Ÿค— TransformersใŒใ‚ฟใ‚นใ‚ฏใ‚’่งฃๆฑบใ™ใ‚‹ๆ–นๆณ• - local: model_summary title: Transformerใƒขใƒ‡ใƒซใƒ•ใ‚กใƒŸใƒชใƒผ - local: tokenizer_summary title: ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎๆฆ‚่ฆ - local: attention title: ๆณจๆ„ๆฉŸๆง‹ - local: pad_truncation title: ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใจๅˆ‡ใ‚Š่ฉฐใ‚ - local: bertology title: BERTology - local: perplexity title: ๅ›บๅฎš้•ทใƒขใƒ‡ใƒซใฎใƒ‘ใƒผใƒ—ใƒฌใ‚ญใ‚ทใƒ†ใ‚ฃ - local: pipeline_webserver title: Webใ‚ตใƒผใƒใƒผๆŽจ่ซ–็”จใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ - local: model_memory_anatomy title: ใƒขใƒ‡ใƒซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ่งฃๅ‰–ๅญฆ title: ใ‚ณใƒณใ‚ปใƒ—ใƒใƒฅใ‚ขใƒซใ‚ฌใ‚คใƒ‰ - sections: - sections: - local: main_classes/agent title: ใ‚จใƒผใ‚ธใ‚งใƒณใƒˆใจใƒ„ใƒผใƒซ - local: model_doc/auto title: Auto Classes - local: main_classes/callback title: ใ‚ณใƒผใƒซใƒใƒƒใ‚ฏ - local: main_classes/configuration title: ๆง‹ๆˆ - local: main_classes/data_collator title: ใƒ‡ใƒผใ‚ฟ็…งๅˆ่€… - local: main_classes/keras_callbacks title: Keras ใ‚ณใƒผใƒซใƒใƒƒใ‚ฏ - local: main_classes/logging title: ใƒญใ‚ฎใƒณใ‚ฐ - local: main_classes/model title: ใƒขใƒ‡ใƒซ - local: main_classes/text_generation title: ใƒ†ใ‚ญใ‚นใƒˆใฎ็”Ÿๆˆ - local: main_classes/onnx title: ONNX - local: main_classes/optimizer_schedules title: ๆœ€้ฉๅŒ– - local: 
main_classes/output title: ใƒขใƒ‡ใƒซใฎๅ‡บๅŠ› - local: main_classes/pipelines title: ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ - local: main_classes/processors title: ใƒ—ใƒญใ‚ปใƒƒใ‚ตใƒผ - local: main_classes/quantization title: ้‡ๅญๅŒ– - local: main_classes/tokenizer title: ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ - local: main_classes/trainer title: ใƒˆใƒฌใƒผใƒŠใƒผ - local: main_classes/deepspeed title: ใƒ‡ใ‚ฃใƒผใƒ—ใ‚นใƒ”ใƒผใƒ‰ใฎ็ตฑๅˆ - local: main_classes/feature_extractor title: ็‰นๅพดๆŠฝๅ‡บๅ™จ - local: main_classes/image_processor title: ็”ปๅƒๅ‡ฆ็†ใƒ—ใƒญใ‚ปใƒƒใ‚ต title: ไธป่ฆใชใ‚ฏใƒฉใ‚น - sections: - isExpanded: false sections: - local: model_doc/albert title: ALBERT - local: model_doc/bart title: BART - local: model_doc/barthez title: BARThez - local: model_doc/bartpho title: BARTpho - local: model_doc/bert title: BERT - local: model_doc/bert-generation title: BertGeneration - local: model_doc/bert-japanese title: BertJapanese - local: model_doc/bertweet title: Bertweet - local: model_doc/big_bird title: BigBird - local: model_doc/bigbird_pegasus title: BigBirdPegasus - local: model_doc/biogpt title: BioGpt - local: model_doc/blenderbot title: Blenderbot - local: model_doc/blenderbot-small title: Blenderbot Small - local: model_doc/bloom title: BLOOM - local: model_doc/bort title: BORT - local: model_doc/byt5 title: ByT5 - local: model_doc/camembert title: CamemBERT - local: model_doc/canine title: CANINE - local: model_doc/codegen title: CodeGen - local: model_doc/code_llama title: CodeLlama - local: model_doc/convbert title: ConvBERT - local: model_doc/cpm title: CPM - local: model_doc/cpmant title: CPMANT - local: model_doc/ctrl title: CTRL - local: model_doc/deberta title: DeBERTa - local: model_doc/deberta-v2 title: DeBERTa-v2 title: ๆ–‡็ซ ใƒขใƒ‡ใƒซ - isExpanded: false sections: - local: model_doc/beit title: BEiT - local: model_doc/bit title: BiT - local: model_doc/conditional_detr title: Conditional DETR - local: model_doc/convnext title: ConvNeXT - local: model_doc/convnextv2 title: ConvNeXTV2 - local: model_doc/cvt title: CvT - local: model_doc/deformable_detr title: Deformable DETR title: ใƒ“ใ‚ธใƒงใƒณใƒขใƒ‡ใƒซ - isExpanded: false sections: - local: model_doc/audio-spectrogram-transformer title: Audio Spectrogram Transformer - local: model_doc/bark title: Bark - local: model_doc/clap title: CLAP title: ้Ÿณๅฃฐใƒขใƒ‡ใƒซ - isExpanded: false sections: - local: model_doc/align title: ALIGN - local: model_doc/altclip title: AltCLIP - local: model_doc/blip title: BLIP - local: model_doc/blip-2 title: BLIP-2 - local: model_doc/bridgetower title: BridgeTower - local: model_doc/bros title: BROS - local: model_doc/chinese_clip title: Chinese-CLIP - local: model_doc/clip title: CLIP - local: model_doc/clipseg title: CLIPSeg - local: model_doc/clvp title: CLVP - local: model_doc/data2vec title: Data2Vec title: ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซใƒขใƒ‡ใƒซ - isExpanded: false sections: - local: model_doc/decision_transformer title: Decision Transformer title: ๅผทๅŒ–ๅญฆ็ฟ’ใƒขใƒ‡ใƒซ - isExpanded: false sections: - local: model_doc/autoformer title: Autoformer title: ๆ™‚็ณปๅˆ—ใƒขใƒ‡ใƒซ title: ใƒขใƒ‡ใƒซ - sections: - local: internal/modeling_utils title: ใ‚ซใ‚นใ‚ฟใƒ ใƒฌใ‚คใƒคใƒผใจใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/pipelines_utils title: ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/tokenization_utils title: ใƒˆ=ใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/trainer_utils title: ใƒˆใƒฌใƒผใƒŠใƒผ็”จใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/generation_utils title: ็™บ้›ป็”จใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: 
internal/image_processing_utils title: ็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ต็”จใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/audio_utils title: ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๅ‡ฆ็†็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ - local: internal/file_utils title: ไธ€่ˆฌๅ…ฌๅ…ฑไบ‹ๆฅญ - local: internal/time_series_utils title: ๆ™‚็ณปๅˆ—็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ title: ๅ†…้ƒจใƒ˜ใƒซใƒ‘ใƒผ title: API
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_infer_special.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inference on Specialized Hardware ใ“ใกใ‚‰ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฏใ€ๅฐ‚็”จใฎใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใฎๆŽจ่ซ–ๆ–นๆณ•ใซใคใ„ใฆใฎๆƒ…ๅ ฑใŒใพใ‚‚ใชใๆไพ›ใ•ใ‚Œใพใ™ใ€‚ใใฎ้–“ใซใ€CPUใงใฎๆŽจ่ซ–ใซ้–ขใ™ใ‚‹ใ‚ฌใ‚คใƒ‰ใ‚’ใ”่ฆงใ„ใŸใ ใ‘ใพใ™ใ€‚[the guide for inference on CPUs](perf_infer_cpu).
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Efficient Training on CPU ใ“ใฎใ‚ฌใ‚คใƒ‰ใฏใ€CPUไธŠใงๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚’ๅŠน็އ็š„ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๆ–นๆณ•ใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใ„ใพใ™ใ€‚ ## Mixed precision with IPEX IPEXใฏAVX-512ไปฅไธŠใฎCPUใซๆœ€้ฉๅŒ–ใ•ใ‚ŒใฆใŠใ‚Šใ€AVX2ใฎใฟใฎCPUใงใ‚‚ๆฉŸ่ƒฝ็š„ใซๅ‹•ไฝœใ—ใพใ™ใ€‚ใใฎใŸใ‚ใ€AVX-512ไปฅไธŠใฎIntel CPUไธ–ไปฃใงใฏใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅ‘ไธŠใŒๆœŸๅพ…ใ•ใ‚Œใพใ™ใŒใ€AVX2ใฎใฟใฎCPU๏ผˆไพ‹๏ผšAMD CPUใพใŸใฏๅคใ„Intel CPU๏ผ‰ใงใฏIPEXใฎไธ‹ใงใ‚ˆใ‚Š่‰ฏใ„ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅพ—ใ‚‰ใ‚Œใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใŒใ€ไฟ่จผใ•ใ‚Œใพใ›ใ‚“ใ€‚IPEXใฏใ€Float32ใจBFloat16ใฎไธกๆ–นใงCPUใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๆœ€้ฉๅŒ–ใ—ใพใ™ใ€‚ไปฅไธ‹ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€BFloat16ใฎไฝฟ็”จใซ้‡็‚นใ‚’็ฝฎใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ไฝŽ็ฒพๅบฆใƒ‡ใƒผใ‚ฟๅž‹ใงใ‚ใ‚‹BFloat16ใฏใ€AVX512ๅ‘ฝไปคใ‚ปใƒƒใƒˆใ‚’ๅ‚™ใˆใŸ็ฌฌ3ไธ–ไปฃXeonยฎ Scalable Processors๏ผˆๅˆฅๅCooper Lake๏ผ‰ใงใƒใ‚คใƒ†ใ‚ฃใƒ–ใ‚ตใƒใƒผใƒˆใ•ใ‚ŒใฆใŠใ‚Šใ€ใ•ใ‚‰ใซ้ซ˜ๆ€ง่ƒฝใชIntelยฎ Advanced Matrix Extensions๏ผˆIntelยฎ AMX๏ผ‰ๅ‘ฝไปคใ‚ปใƒƒใƒˆใ‚’ๅ‚™ใˆใŸๆฌกไธ–ไปฃใฎIntelยฎ Xeonยฎ Scalable Processorsใงใ‚‚ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใพใ™ใ€‚CPUใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰็”จใฎ่‡ชๅ‹•ๆททๅˆ็ฒพๅบฆใŒPyTorch-1.10ไปฅ้™ใงๆœ‰ๅŠนใซใชใฃใฆใ„ใพใ™ใ€‚ๅŒๆ™‚ใซใ€Intelยฎ Extension for PyTorchใงใฎCPU็”จBFloat16ใฎ่‡ชๅ‹•ๆททๅˆ็ฒพๅบฆใ‚ตใƒใƒผใƒˆใจใ€ใ‚ชใƒšใƒฌใƒผใ‚ฟใƒผใฎBFloat16ๆœ€้ฉๅŒ–ใฎใ‚ตใƒใƒผใƒˆใŒๅคงๅน…ใซๅ‘ไธŠใ—ใ€ไธ€้ƒจใŒPyTorchใฎใƒกใ‚คใƒณใƒ–ใƒฉใƒณใƒใซใ‚ขใƒƒใƒ—ใ‚นใƒˆใƒชใƒผใƒ ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใƒฆใƒผใ‚ถใƒผใฏIPEX Auto Mixed Precisionใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€ใ‚ˆใ‚Šๅ„ชใ‚ŒใŸใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใจใƒฆใƒผใ‚ถใƒผใ‚จใ‚ฏใ‚นใƒšใƒชใ‚จใƒณใ‚นใ‚’ๅพ—ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ่ฉณ็ดฐใชๆƒ…ๅ ฑใซใคใ„ใฆใฏใ€[Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html)ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ### IPEX installation: IPEXใฎใƒชใƒชใƒผใ‚นใฏPyTorchใซๅพ“ใฃใฆใŠใ‚Šใ€pipใ‚’ไฝฟ็”จใ—ใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใงใใพใ™๏ผš | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | ``` pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` [IPEXใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซๆ–นๆณ•](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html)ใซใคใ„ใฆใ€ใ•ใ‚‰ใชใ‚‹ใ‚ขใƒ—ใƒญใƒผใƒใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ### Trainerใงใฎไฝฟ็”จๆ–นๆณ• TrainerใงIPEXใฎ่‡ชๅ‹•ๆททๅˆ็ฒพๅบฆใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€ใƒฆใƒผใ‚ถใƒผใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ณใƒžใƒณใƒ‰ๅผ•ๆ•ฐใซ `use_ipex`ใ€`bf16`ใ€ใŠใ‚ˆใณ `no_cuda` ใ‚’่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ [Transformersใฎ่ณชๅ•ๅฟœ็ญ”](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)ใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใ‚’ไพ‹ใซ่ชฌๆ˜Žใ—ใพใ™ใ€‚ - 
CPUไธŠใงBF16่‡ชๅ‹•ๆททๅˆ็ฒพๅบฆใ‚’ไฝฟ็”จใ—ใฆIPEXใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใ†ๅ ดๅˆ๏ผš <pre> python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex \</b> <b>--bf16 --no_cuda</b></pre> ### Practice example Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Accelerate ใ‚’็”จใ„ใŸๅˆ†ๆ•ฃๅญฆ็ฟ’ ใƒขใƒ‡ใƒซใŒๅคงใใใชใ‚‹ใซใคใ‚Œใฆใ€้™ใ‚‰ใ‚ŒใŸใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใ‚ˆใ‚Šๅคงใใชใƒขใƒ‡ใƒซใ‚’่จ“็ทดใ—ใ€่จ“็ทด้€Ÿๅบฆใ‚’ๅคงๅน…ใซไธŠๆ˜‡ใ•ใ›ใ‚‹ใŸใ‚ใฎๆ–นๆณ•ใจใ—ใฆไธฆๅˆ—ๅ‡ฆ็†ใŒๆตฎไธŠใ—ใฆใใพใ—ใŸใ€‚1ๅฐใฎใƒžใ‚ทใƒณใซ่ค‡ๆ•ฐใฎGPUใŒใ‚ใฃใฆใ‚‚ใ€่ค‡ๆ•ฐใฎใƒžใ‚ทใƒณใซใพใŸใŒใ‚‹่ค‡ๆ•ฐใฎGPUใŒใ‚ใฃใฆใ‚‚ใ€ใ‚ใ‚‰ใ‚†ใ‚‹ใ‚ฟใ‚คใƒ—ใฎๅˆ†ๆ•ฃๅ‡ฆ็†ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ไธŠใงใƒฆใƒผใ‚ถใƒผใŒ็ฐกๅ˜ใซ ๐Ÿค— Transformers ใƒขใƒ‡ใƒซใ‚’่จ“็ทดใงใใ‚‹ใ‚ˆใ†ใซใ€ Hugging Face ใงใฏ [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝœๆˆใ—ใพใ—ใŸใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€PyTorch ใฎ่จ“็ทดใƒซใƒผใƒ—ใ‚’ใ‚ซใ‚นใ‚ฟใƒžใ‚คใ‚บใ—ใฆใ€ๅˆ†ๆ•ฃๅ‡ฆ็†็’ฐๅขƒใงใฎ่จ“็ทดใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆๅญฆใณใพใ™ใ€‚ ## ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ— ใฏใ˜ใ‚ใซ ๐Ÿค— Accelerate ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใพใ—ใ‚‡ใ†: ```bash pip install accelerate ``` ใใ—ใŸใ‚‰ใ‚คใƒณใƒใƒผใƒˆใ—ใฆ [`~accelerate.Accelerator`] ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ไฝœๆˆใ—ใพใ—ใ‚‡ใ†ใ€‚[`~accelerate.Accelerator`] ใฏๅˆ†ๆ•ฃๅ‡ฆ็†ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ‚’่‡ชๅ‹•็š„ใซๆคœๅ‡บใ—ใ€่จ“็ทดใฎใŸใ‚ใซๅฟ…่ฆใชๅ…จใฆใฎใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใ‚’ๅˆๆœŸๅŒ–ใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใ‚’ใƒ‡ใƒใ‚คใ‚นใซๆ˜Ž็คบ็š„ใซ้…็ฝฎใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## Accelerate ใ™ใ‚‹ๆบ–ๅ‚™ใ‚’ใ—ใพใ—ใ‚‡ใ† ๆฌกใซใ€้–ข้€ฃใ™ใ‚‹ๅ…จใฆใฎ่จ“็ทดใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ [`~accelerate.Accelerator.prepare`] ใƒกใ‚ฝใƒƒใƒ‰ใซๆธกใ—ใพใ™ใ€‚ใ“ใ‚Œใซใฏใ€่จ“็ทดใจ่ฉ•ไพกใใ‚Œใžใ‚ŒใฎDataloaderใ€ใƒขใƒ‡ใƒซใ€optimizer ใŒๅซใพใ‚Œใพใ™: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## Backward ๆœ€ๅพŒใซ่จ“็ทดใƒซใƒผใƒ—ๅ†…ใฎ `loss.backward()` ใ‚’ ๐Ÿค— Accelerate ใฎ [`~accelerate.Accelerator.backward`] ใƒกใ‚ฝใƒƒใƒ‰ใง็ฝฎใๆ›ใˆใพใ™๏ผš ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... 
progress_bar.update(1) ``` ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ใง็ขบ่ชใงใใ‚‹้€šใ‚Šใ€่จ“็ทดใƒซใƒผใƒ—ใซ4่กŒใฎใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงๅˆ†ๆ•ฃๅญฆ็ฟ’ใŒๅฏ่ƒฝใงใ™๏ผ ```diff + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: - batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss - loss.backward() + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ``` ## ่จ“็ทดใ™ใ‚‹ ้–ข้€ฃใ™ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’่ฟฝๅŠ ใ—ใŸใ‚‰ใ€ใ‚นใ‚ฏใƒชใƒ—ใƒˆใพใŸใฏ Colaboratory ใชใฉใฎใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใง่จ“็ทดใ‚’้–‹ๅง‹ใ—ใพใ™ใ€‚ ### ใ‚นใ‚ฏใƒชใƒ—ใƒˆใง่จ“็ทดใ™ใ‚‹ ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‹ใ‚‰่จ“็ทดใ‚’ใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€่จญๅฎšใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใƒปไฟๅญ˜ใ™ใ‚‹ใŸใ‚ใซไปฅไธ‹ใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใใ ใ•ใ„: ```bash accelerate config ``` ใใ—ใฆๆฌกใฎใ‚ˆใ†ใซใ—ใฆ่จ“็ทดใ‚’้–‹ๅง‹ใ—ใพใ™: ```bash accelerate launch train.py ``` ### ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใง่จ“็ทดใ™ใ‚‹ Colaboratory ใฎ TPU ใฎๅˆฉ็”จใ‚’ใŠ่€ƒใˆใฎๅ ดๅˆใ€๐Ÿค— Accelerate ใฏใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏไธŠใงๅฎŸ่กŒใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚่จ“็ทดใซๅฟ…่ฆใชๅ…จใฆใฎใ‚ณใƒผใƒ‰ใ‚’้–ขๆ•ฐใซๅซใ‚ใ€[`~accelerate.notebook_launcher`] ใซๆธกใ—ใฆใใ ใ•ใ„: ```py >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) ``` ๐Ÿค— Accelerate ใจ่ฑŠๅฏŒใชๆฉŸ่ƒฝใซใคใ„ใฆใ‚‚ใฃใจ็Ÿฅใ‚ŠใŸใ„ๆ–นใฏ[ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/accelerate)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # XLA Integration for TensorFlow Models [[open-in-colab]] ๅŠ ้€Ÿ็ทšๅฝขไปฃๆ•ฐ๏ผˆAccelerated Linear Algebra๏ผ‰ใ€้€š็งฐXLAใฏใ€TensorFlowใƒขใƒ‡ใƒซใฎใƒฉใƒณใ‚ฟใ‚คใƒ ใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใŸใ‚ใฎใ‚ณใƒณใƒ‘ใ‚คใƒฉใงใ™ใ€‚[ๅ…ฌๅผใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://www.tensorflow.org/xla)ใซใ‚ˆใ‚Œใฐใ€XLA๏ผˆAccelerated Linear Algebra๏ผ‰ใฏ็ทšๅฝขไปฃๆ•ฐใฎใŸใ‚ใฎใƒ‰ใƒกใ‚คใƒณๅ›บๆœ‰ใฎใ‚ณใƒณใƒ‘ใ‚คใƒฉใงใ€TensorFlowใƒขใƒ‡ใƒซใ‚’ๆฝœๅœจ็š„ใซใ‚ฝใƒผใ‚นใ‚ณใƒผใƒ‰ใฎๅค‰ๆ›ดใชใ—ใง้ซ˜้€ŸๅŒ–ใงใใพใ™ใ€‚ TensorFlowใงXLAใ‚’ไฝฟ็”จใ™ใ‚‹ใฎใฏ็ฐกๅ˜ใงใ™ใ€‚XLAใฏ`tensorflow`ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใซใƒ‘ใƒƒใ‚ฑใƒผใ‚ธๅŒ–ใ•ใ‚ŒใฆใŠใ‚Šใ€[`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)ใชใฉใฎใ‚ฐใƒฉใƒ•ใ‚’ไฝœๆˆใ™ใ‚‹้–ขๆ•ฐๅ†…ใง`jit_compile`ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒชใ‚ฌใƒผใงใใพใ™ใ€‚`fit()`ใ‚„`predict()`ใชใฉใฎKerasใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€`model.compile()`ใซ`jit_compile`ๅผ•ๆ•ฐใ‚’ๆธกใ™ใ ใ‘ใงXLAใ‚’ๆœ‰ๅŠนใซใงใใพใ™ใ€‚ใŸใ ใ—ใ€XLAใฏใ“ใ‚Œใ‚‰ใฎใƒกใ‚ฝใƒƒใƒ‰ใซ้™ๅฎšใ•ใ‚Œใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ไปปๆ„ใฎ`tf.function`ใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใŸใ‚ใซใ‚‚ไฝฟ็”จใงใใพใ™ใ€‚ ๐Ÿค— Transformersๅ†…ใฎใ„ใใคใ‹ใฎTensorFlowใƒกใ‚ฝใƒƒใƒ‰ใฏใ€XLAใจไบ’ๆ›ๆ€งใŒใ‚ใ‚‹ใ‚ˆใ†ใซๆ›ธใ็›ดใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใซใฏใ€[GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)ใ€[T5](https://huggingface.co/docs/transformers/model_doc/t5)ใ€[OPT](https://huggingface.co/docs/transformers/model_doc/opt)ใชใฉใฎใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใƒขใƒ‡ใƒซใ‚„ใ€[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)ใชใฉใฎ้Ÿณๅฃฐๅ‡ฆ็†ใƒขใƒ‡ใƒซใ‚‚ๅซใพใ‚Œใพใ™ใ€‚ ้€Ÿๅบฆๅ‘ไธŠใฎๅ…ทไฝ“็š„ใช้‡ใฏใƒขใƒ‡ใƒซใซ้žๅธธใซไพๅญ˜ใ—ใพใ™ใŒใ€๐Ÿค— Transformersๅ†…ใฎTensorFlowใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใƒขใƒ‡ใƒซใงใฏใ€็ด„100ๅ€ใฎ้€Ÿๅบฆๅ‘ไธŠใ‚’็ขบ่ชใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ€ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใซXLAใ‚’ไฝฟ็”จใ—ใฆๆœ€ๅคงใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅพ—ใ‚‹ๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚ใพใŸใ€ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใจXLA็ตฑๅˆใฎใƒ‡ใ‚ถใ‚คใƒณๅ“ฒๅญฆใซใคใ„ใฆ่ฉณใ—ใๅญฆใณใŸใ„ๅ ดๅˆใฎ่ฟฝๅŠ ใƒชใ‚ฝใƒผใ‚นใธใฎใƒชใƒณใ‚ฏใ‚‚ๆไพ›ใ—ใพใ™ใ€‚ ## Running TF functions with XLA ไปฅไธ‹ใฎTensorFlowใƒขใƒ‡ใƒซใ‚’่€ƒใˆใฆใฟใพใ—ใ‚‡ใ†๏ผš ```py import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")] ) ``` ไธŠ่จ˜ใฎใƒขใƒ‡ใƒซใฏใ€ๆฌกๅ…ƒใŒ`(10, )`ใฎๅ…ฅๅŠ›ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใพใ™ใ€‚ใ“ใฎใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใงๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ˆใ†ใซใ—ใพใ™๏ผš ```py # Generate random inputs for the model. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # Run a forward pass. 
_ = model(random_inputs) ``` XLAใงใ‚ณใƒณใƒ‘ใ‚คใƒซใ•ใ‚ŒใŸ้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใ‚’ๅฎŸ่กŒใ™ใ‚‹ใซใฏใ€ไปฅไธ‹ใฎใ‚ˆใ†ใซใ—ใพใ™๏ผš ```py xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) ``` `model`ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ `call()` ้–ขๆ•ฐใฏXLAใ‚ฐใƒฉใƒ•ใ‚’ใ‚ณใƒณใƒ‘ใ‚คใƒซใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ใŸใ ใ—ใ€XLAใซใ‚ณใƒณใƒ‘ใ‚คใƒซใ—ใŸใ„ไป–ใฎใƒขใƒ‡ใƒซ้–ขๆ•ฐใŒใ‚ใ‚‹ๅ ดๅˆใ€ใใ‚Œใ‚‚ๅฏ่ƒฝใงใ™ใ€‚ไปฅไธ‹ใฏใใฎๆ–นๆณ•ใงใ™๏ผš ```py my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ``` ## Running a TF text generation model with XLA from ๐Ÿค— Transformers ๐Ÿค— Transformersๅ†…ใงXLAใงใฎ้ซ˜้€ŸๅŒ–ใ•ใ‚ŒใŸ็”Ÿๆˆใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€ๆœ€ๆ–ฐใƒใƒผใ‚ธใƒงใƒณใฎ`transformers`ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใงใใพใ™๏ผš ```bash pip install transformers --upgrade ``` ๆฌกใซใ€ๆฌกใฎใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใงใใพใ™๏ผš ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # Will error if the minimal version of Transformers is not installed. from transformers.utils import check_min_version check_min_version("4.21.0") tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] # One line to create an XLA generation function xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the ``` `generate()`ใงXLAใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใฎใฏใ€ใŸใฃใŸไธ€่กŒใฎใ‚ณใƒผใƒ‰ใงใ™ใ€‚ใ‚ณใƒผใƒ‰ใฎๆฎ‹ใ‚Š้ƒจๅˆ†ใฏๅค‰ๆ›ดใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚ใŸใ ใ—ใ€XLAๅ›บๆœ‰ใฎใ„ใใคใ‹ใฎๆณจๆ„็‚นใŒไธŠ่จ˜ใฎใ‚ณใƒผใƒ‰ใ‚นใƒ‹ใƒšใƒƒใƒˆใซใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใซๆณจๆ„ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€XLAใŒใ‚‚ใŸใ‚‰ใ™้€Ÿๅบฆๅ‘ไธŠใ‚’ๅฎŸ็พใ™ใ‚‹ใŸใ‚ใซใใ‚Œใ‚‰ใ‚’ๆŠŠๆกใ™ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ใ€‚ๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใ“ใ‚Œใ‚‰ใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ## Gotchas to be aware of XLAใ‚’ๆœ‰ๅŠนใซใ—ใŸ้–ขๆ•ฐ๏ผˆไธŠ่จ˜ใฎ`xla_generate()`ใชใฉ๏ผ‰ใ‚’ๅˆใ‚ใฆๅฎŸ่กŒใ™ใ‚‹ใจใ€ๅ†…้ƒจใง่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ใ‚’ๆŽจ่ซ–ใ—ใ‚ˆใ†ใจใ—ใพใ™ใŒใ€ใ“ใ‚Œใฏๆ™‚้–“ใŒใ‹ใ‹ใ‚Šใพใ™ใ€‚ใ“ใฎใƒ—ใƒญใ‚ปใ‚นใฏ["ใƒˆใƒฌใƒผใ‚ทใƒณใ‚ฐ"๏ผˆtracing๏ผ‰](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing)ใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ ็”Ÿๆˆๆ™‚้–“ใŒ้ซ˜้€Ÿใงใฏใชใ„ใ“ใจใซๆฐ—ไป˜ใใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚`xla_generate()`๏ผˆใพใŸใฏไป–ใฎXLAๅฏพๅฟœ้–ขๆ•ฐ๏ผ‰ใฎ้€ฃ็ถšๅ‘ผใณๅ‡บใ—ใงใฏใ€้–ขๆ•ฐใธใฎๅ…ฅๅŠ›ใŒๆœ€ๅˆใซ่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ใŒๆง‹็ฏ‰ใ•ใ‚ŒใŸใจใใจๅŒใ˜ๅฝข็Šถใซๅพ“ใฃใฆใ„ใ‚‹ๅ ดๅˆใ€่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ใ‚’ๆŽจ่ซ–ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€ๅ…ฅๅŠ›ๅฝข็ŠถใŒๅ›บๅฎšใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ€ใƒชใƒ†ใ‚ฃ๏ผˆไพ‹๏ผš็”ปๅƒ๏ผ‰ใซใฏๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ๅค‰ๆ•ฐใฎๅ…ฅๅŠ›ๅฝข็Šถใƒขใƒ€ใƒชใƒ†ใ‚ฃ๏ผˆไพ‹๏ผšใƒ†ใ‚ญใ‚นใƒˆ๏ผ‰ใ‚’ๆ‰ฑใ†ๅ ดๅˆใซใฏๆณจๆ„ใŒๅฟ…่ฆใงใ™ใ€‚ `xla_generate()`ใŒๅธธใซๅŒใ˜ๅ…ฅๅŠ›ๅฝข็Šถใงๅ‹•ไฝœใ™ใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใซใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ๅ‘ผใณๅ‡บใ™้š›ใซ`padding`ๅผ•ๆ•ฐใ‚’ๆŒ‡ๅฎšใงใใพใ™ใ€‚ ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", 
padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] xla_generate = tf.function(model.generate, jit_compile=True) # Here, we call the tokenizer with padding options. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€`xla_generate()`ใธใฎๅ…ฅๅŠ›ใŒๅธธใซใƒˆใƒฌใƒผใ‚นใ•ใ‚ŒใŸๅฝข็Šถใฎๅ…ฅๅŠ›ใ‚’ๅ—ใ‘ๅ–ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใ€็”Ÿๆˆๆ™‚้–“ใฎ้ซ˜้€ŸๅŒ–ใ‚’ๅฎŸ็พใงใใพใ™ใ€‚ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ใงใ“ใ‚Œใ‚’็ขบ่ชใงใใพใ™๏ผš ```py import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ``` Tesla T4 GPUใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ๆฌกใฎใ‚ˆใ†ใชๅ‡บๅŠ›ใŒๆœŸๅพ…ใ•ใ‚Œใพใ™๏ผš ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms ``` ๆœ€ๅˆใฎ`xla_generate()`ๅ‘ผใณๅ‡บใ—ใฏใƒˆใƒฌใƒผใ‚ทใƒณใ‚ฐใฎใŸใ‚ใซๆ™‚้–“ใŒใ‹ใ‹ใ‚Šใพใ™ใŒใ€้€ฃ็ถšใ™ใ‚‹ๅ‘ผใณๅ‡บใ—ใฏๆก้•ใ„ใซ้ซ˜้€Ÿใงใ™ใ€‚็”Ÿๆˆใ‚ชใƒ—ใ‚ทใƒงใƒณใฎใ„ใ‹ใชใ‚‹ๅค‰ๆ›ดใ‚‚ใ€ๅ†ใƒˆใƒฌใƒผใ‚ทใƒณใ‚ฐใ‚’ๅผ•ใ่ตทใ“ใ—ใ€็”Ÿๆˆๆ™‚้–“ใฎ้…ๅปถใ‚’ๅผ•ใ่ตทใ“ใ™ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ€๐Ÿค— TransformersใŒๆไพ›ใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ใ™ในใฆ็ถฒ็พ…ใ—ใฆใ„ใพใ›ใ‚“ใ€‚้ซ˜ๅบฆใชใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซใคใ„ใฆใฏใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ## Additional Resources ใ“ใ“ใงใฏใ€๐Ÿค— Transformersใจไธ€่ˆฌ็š„ใชXLAใซใคใ„ใฆใ•ใ‚‰ใซ่ฉณใ—ใๅญฆใณใŸใ„ๅ ดๅˆใฎใ„ใใคใ‹ใฎ่ฟฝๅŠ ใƒชใ‚ฝใƒผใ‚นใ‚’ๆไพ›ใ—ใพใ™ใ€‚ * [ใ“ใฎColab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb)ใงใฏใ€XLAๅฏพๅฟœใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒ‡ใ‚ณใƒผใƒ€ใƒผ๏ผˆ[T5](https://huggingface.co/docs/transformers/model_doc/t5)ใชใฉ๏ผ‰ใŠใ‚ˆใณใƒ‡ใ‚ณใƒผใƒ€ใƒผๅฐ‚็”จ๏ผˆ[GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)ใชใฉ๏ผ‰ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใƒขใƒ‡ใƒซใ‚’่ฉฆใ™ใŸใ‚ใฎๅฏพ่ฉฑๅž‹ใƒ‡ใƒขใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ * [ใ“ใฎใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://huggingface.co/blog/tf-xla-generate)ใงใฏใ€XLAๅฏพๅฟœใƒขใƒ‡ใƒซใฎๆฏ”่ผƒใƒ™ใƒณใƒใƒžใƒผใ‚ฏใฎๆฆ‚่ฆใจใ€TensorFlowใงใฎXLAใซใคใ„ใฆใฎๅ‹ๅฅฝ็š„ใช็ดนไป‹ใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ * [ใ“ใฎใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html)ใงใฏใ€๐Ÿค— TransformersใฎTensorFlowใƒขใƒ‡ใƒซใซXLAใ‚ตใƒใƒผใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹้š›ใฎ่จญ่จˆๅ“ฒๅญฆใซใคใ„ใฆ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ * ไธ€่ˆฌ็š„ใชXLAใจTensorFlowใ‚ฐใƒฉใƒ•ใซใคใ„ใฆ่ฉณใ—ใๅญฆใถใŸใ‚ใฎใŠใ™ใ™ใ‚ใฎๆŠ•็จฟ๏ผš * [XLA: ๆฉŸๆขฐๅญฆ็ฟ’็”จใฎๆœ€้ฉๅŒ–ใ‚ณใƒณใƒ‘ใ‚คใƒฉ](https://www.tensorflow.org/xla) * [ใ‚ฐใƒฉใƒ•ใจ`tf.function`ใฎ็ดนไป‹](https://www.tensorflow.org/guide/intro_to_graphs) * 
[`tf.function`ใ‚’ไฝฟ็”จใ—ใŸใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นๅ‘ไธŠ](https://www.tensorflow.org/guide/function)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_train_special.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Training on Specialized Hardware

<Tip>

注意: [単一GPUセクション](perf_train_gpu_one)で紹介されたほとんどの戦略(混合精度トレーニングや勾配蓄積など)および[マルチGPUセクション](perf_train_gpu_many)は一般的なトレーニングモデルに適用される汎用的なものですので、このセクションに入る前にそれを確認してください。

</Tip>

このドキュメントには、専用ハードウェアでトレーニングする方法に関する情報を近日中に追加予定です。
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/llm_tutorial.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Generation with LLMs [[open-in-colab]] LLMใ€ใพใŸใฏLarge Language Models๏ผˆๅคง่ฆๆจก่จ€่ชžใƒขใƒ‡ใƒซ๏ผ‰ใฏใ€ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใฎ้ตใจใชใ‚‹่ฆ็ด ใงใ™ใ€‚่ฆใ™ใ‚‹ใซใ€ใ“ใ‚Œใ‚‰ใฏๅคง่ฆๆจกใชไบ‹ๅ‰่จ“็ทดๆธˆใฟใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒซใงใ€ไธŽใˆใ‚‰ใ‚ŒใŸๅ…ฅๅŠ›ใƒ†ใ‚ญใ‚นใƒˆใซๅŸบใฅใ„ใฆๆฌกใฎๅ˜่ชž๏ผˆใพใŸใฏใ€ใ‚ˆใ‚Šๆญฃ็ขบใซใฏใƒˆใƒผใ‚ฏใƒณ๏ผ‰ใ‚’ไบˆๆธฌใ™ใ‚‹ใ‚ˆใ†ใซ่จ“็ทดใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒณใ‚’1ใคใšใคไบˆๆธฌใ™ใ‚‹ใŸใ‚ใ€ใƒขใƒ‡ใƒซใ‚’ๅ‘ผใณๅ‡บใ™ใ ใ‘ใงใฏๆ–ฐใ—ใ„ๆ–‡ใ‚’็”Ÿๆˆใ™ใ‚‹ใŸใ‚ใซไฝ•ใ‹ใ‚ˆใ‚Š็ฒพๅทงใชใ“ใจใ‚’ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚่‡ชๅทฑๅ›žๅธฐ็”Ÿๆˆใ‚’่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ่‡ชๅทฑๅ›žๅธฐ็”Ÿๆˆใฏใ€ๆŽจ่ซ–ๆ™‚ใฎๆ‰‹็ถšใใงใ€ใ„ใใคใ‹ใฎๅˆๆœŸๅ…ฅๅŠ›ใ‚’ไธŽใˆใŸ็Šถๆ…‹ใงใ€ใƒขใƒ‡ใƒซใ‚’ๅๅพฉ็š„ใซๅ‘ผใณๅ‡บใ™ๆ‰‹ๆณ•ใงใ™ใ€‚๐Ÿค— Transformersใงใฏใ€ใ“ใ‚Œใฏ[`~generation.GenerationMixin.generate`]ใƒกใ‚ฝใƒƒใƒ‰ใซใ‚ˆใฃใฆๅ‡ฆ็†ใ•ใ‚Œใ€ใ“ใ‚Œใฏ็”Ÿๆˆ่ƒฝๅŠ›ใ‚’ๆŒใคใ™ในใฆใฎใƒขใƒ‡ใƒซใงๅˆฉ็”จๅฏ่ƒฝใงใ™ใ€‚ ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ไปฅไธ‹ใฎใ“ใจใ‚’็คบใ—ใพใ™๏ผš * LLMใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใ™ใ‚‹ๆ–นๆณ• * ไธ€่ˆฌ็š„ใช่ฝใจใ—็ฉดใ‚’ๅ›ž้ฟใ™ใ‚‹ๆ–นๆณ• * LLMใ‚’ๆœ€ๅคง้™ใซๆดป็”จใ™ใ‚‹ใŸใ‚ใฎๆฌกใฎใ‚นใƒ†ใƒƒใƒ— ๅง‹ใ‚ใ‚‹ๅ‰ใซใ€ๅฟ…่ฆใชใƒฉใ‚คใƒ–ใƒฉใƒชใŒใ™ในใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„๏ผš ```bash pip install transformers bitsandbytes>=0.39.0 -q ``` ## Generate text [ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ](tasks/language_modeling)ใฎใŸใ‚ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒซใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใƒˆใƒผใ‚ฏใƒณใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ–ใ‚Šใ€ๆฌกใฎใƒˆใƒผใ‚ฏใƒณใฎ็ขบ็އๅˆ†ๅธƒใ‚’่ฟ”ใ—ใพใ™ใ€‚ <!-- [GIF 1 -- FWD PASS] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov" ></video> <figcaption>"Forward pass of an LLM"</figcaption> </figure> LLM๏ผˆLanguage Model๏ผ‰ใซใ‚ˆใ‚‹่‡ชๅทฑๅ›žๅธฐ็”Ÿๆˆใฎ้‡่ฆใชๅด้ขใฎ1ใคใฏใ€ใ“ใฎ็ขบ็އๅˆ†ๅธƒใ‹ใ‚‰ๆฌกใฎใƒˆใƒผใ‚ฏใƒณใ‚’้ธๆŠžใ™ใ‚‹ๆ–นๆณ•ใงใ™ใ€‚ใ“ใฎใ‚นใƒ†ใƒƒใƒ—ใงใฏใ€ๆฌกใฎใ‚คใƒ†ใƒฌใƒผใ‚ทใƒงใƒณใฎใŸใ‚ใฎใƒˆใƒผใ‚ฏใƒณใŒๅพ—ใ‚‰ใ‚Œใ‚‹้™ใ‚Šใ€ไฝ•ใงใ‚‚ๅฏ่ƒฝใงใ™ใ€‚ใ“ใ‚Œใฏใ€็ขบ็އๅˆ†ๅธƒใ‹ใ‚‰ๆœ€ใ‚‚ๅฏ่ƒฝๆ€งใฎ้ซ˜ใ„ใƒˆใƒผใ‚ฏใƒณใ‚’้ธๆŠžใ™ใ‚‹ใ ใ‘ใฎใ‚ทใƒณใƒ—ใƒซใชๆ–นๆณ•ใ‹ใ‚‰ใ€็ตๆžœใฎๅˆ†ๅธƒใ‹ใ‚‰ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ™ใ‚‹ๅ‰ใซๆ•ฐใ€…ใฎๅค‰ๆ›ใ‚’้ฉ็”จใ™ใ‚‹ใปใฉ่ค‡้›‘ใชๆ–นๆณ•ใพใงใ€ใ‚ใ‚‰ใ‚†ใ‚‹ๆ–นๆณ•ใŒ่€ƒใˆใ‚‰ใ‚Œใพใ™ใ€‚ <!-- [GIF 2 -- TEXT GENERATION] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline 
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov" ></video> <figcaption>"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"</figcaption> </figure> ไธŠ่จ˜ใฎใƒ—ใƒญใ‚ปใ‚นใฏใ€ใ‚ใ‚‹ๅœๆญขๆกไปถใŒๆบ€ใŸใ•ใ‚Œใ‚‹ใพใงๅๅพฉ็š„ใซ็นฐใ‚Š่ฟ”ใ•ใ‚Œใพใ™ใ€‚็†ๆƒณ็š„ใซใฏใ€ๅœๆญขๆกไปถใฏใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆๆŒ‡็คบใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใฏ็ต‚ไบ†ใ‚ทใƒผใ‚ฑใƒณใ‚น๏ผˆ`EOS`๏ผ‰ใƒˆใƒผใ‚ฏใƒณใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ฟใ‚คใƒŸใƒณใ‚ฐใ‚’ๅญฆ็ฟ’ใ™ในใใงใ™ใ€‚ใ“ใ‚ŒใŒใใ†ใงใชใ„ๅ ดๅˆใ€็”Ÿๆˆใฏใ‚ใ‚‰ใ‹ใ˜ใ‚ๅฎš็พฉใ•ใ‚ŒใŸๆœ€ๅคง้•ทใซ้”ใ—ใŸใจใใซๅœๆญขใ—ใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒณ้ธๆŠžใ‚นใƒ†ใƒƒใƒ—ใจๅœๆญขๆกไปถใ‚’้ฉๅˆ‡ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใฏใ€ใƒขใƒ‡ใƒซใŒใ‚ฟใ‚นใ‚ฏใงๆœŸๅพ…ใฉใŠใ‚ŠใซๆŒฏใ‚‹่ˆžใ†ใŸใ‚ใซ้‡่ฆใงใ™ใ€‚ใใ‚ŒใŒใ€ๅ„ใƒขใƒ‡ใƒซใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚ŒใŸ [`~generation.GenerationConfig`] ใƒ•ใ‚กใ‚คใƒซใŒใ‚ใ‚‹็†็”ฑใงใ‚ใ‚Šใ€ใ“ใ‚Œใซใฏๅ„ชใ‚ŒใŸใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ็”Ÿๆˆใƒ‘ใƒฉใƒกใƒผใ‚ฟๅŒ–ใŒๅซใพใ‚Œใ€ใƒขใƒ‡ใƒซใจไธ€็ท’ใซ่ชญใฟ่พผใพใ‚Œใพใ™ใ€‚ ใ‚ณใƒผใƒ‰ใซใคใ„ใฆ่ฉฑใ—ใพใ—ใ‚‡ใ†๏ผ <Tip> ๅŸบๆœฌ็š„ใชLLMใฎไฝฟ็”จใซ่ˆˆๅ‘ณใŒใ‚ใ‚‹ๅ ดๅˆใ€้ซ˜ใƒฌใƒ™ใƒซใฎ [`Pipeline`](pipeline_tutorial) ใ‚คใƒณใ‚ฟใƒผใƒ•ใ‚งใƒผใ‚นใŒ่‰ฏใ„ๅ‡บ็™บ็‚นใงใ™ใ€‚ใŸใ ใ—ใ€LLMใฏใ—ใฐใ—ใฐ้‡ๅญๅŒ–ใ‚„ใƒˆใƒผใ‚ฏใƒณ้ธๆŠžใ‚นใƒ†ใƒƒใƒ—ใฎ็ดฐใ‹ใ„ๅˆถๅพกใชใฉใฎ้ซ˜ๅบฆใชๆฉŸ่ƒฝใŒๅฟ…่ฆใงใ‚ใ‚Šใ€ใ“ใ‚Œใฏ [`~generation.GenerationMixin.generate`] ใ‚’ไป‹ใ—ใฆๆœ€่‰ฏใซ่กŒใ‚ใ‚Œใพใ™ใ€‚LLMใจใฎ่‡ชๅทฑๅ›žๅธฐ็”Ÿๆˆใฏใƒชใ‚ฝใƒผใ‚นใŒๅคšใๅฟ…่ฆใงใ‚ใ‚Šใ€้ฉๅˆ‡ใชใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใฎใŸใ‚ใซGPUใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> <!-- TODO: llama 2๏ผˆใพใŸใฏใ‚ˆใ‚Šๆ–ฐใ—ใ„ไธ€่ˆฌ็š„ใชใƒ™ใƒผใ‚นใƒฉใ‚คใƒณ๏ผ‰ใŒๅˆฉ็”จๅฏ่ƒฝใซใชใฃใŸใ‚‰ใ€ไพ‹ใ‚’ๆ›ดๆ–ฐใ™ใ‚‹ --> ใพใšใ€ใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( ... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True ... 
) ``` `from_pretrained` ๅ‘ผใณๅ‡บใ—ใง2ใคใฎใƒ•ใƒฉใ‚ฐใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„๏ผš - `device_map` ใฏใƒขใƒ‡ใƒซใ‚’ใ‚ใชใŸใฎGPUใซ็งปๅ‹•ใ•ใ›ใพใ™ - `load_in_4bit` ใฏ[4ใƒ“ใƒƒใƒˆใฎๅ‹•็š„้‡ๅญๅŒ–](main_classes/quantization)ใ‚’้ฉ็”จใ—ใฆใƒชใ‚ฝใƒผใ‚น่ฆไปถใ‚’ๅคงๅน…ใซๅ‰Šๆธ›ใ—ใพใ™ ใƒขใƒ‡ใƒซใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹ไป–ใฎๆ–นๆณ•ใ‚‚ใ‚ใ‚Šใพใ™ใŒใ€ใ“ใ‚ŒใฏLLMใ‚’ๅง‹ใ‚ใ‚‹ใŸใ‚ใฎ่‰ฏใ„ๅŸบๆบ–ใงใ™ใ€‚ ๆฌกใซใ€[ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถ](tokenizer_summary)ใ‚’ไฝฟ็”จใ—ใฆใƒ†ใ‚ญใ‚นใƒˆๅ…ฅๅŠ›ใ‚’ๅ‰ๅ‡ฆ็†ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b") >>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda") ``` `model_inputs` ๅค‰ๆ•ฐใฏใ€ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ•ใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆๅ…ฅๅŠ›ใจใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒžใ‚นใ‚ฏใ‚’ไฟๆŒใ—ใฆใ„ใพใ™ใ€‚ [`~generation.GenerationMixin.generate`] ใฏใ€ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒžใ‚นใ‚ฏใŒๆธกใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใงใ‚‚ใ€ๆœ€ๅ–„ใฎๅŠชๅŠ›ใ‚’ใ—ใฆใใ‚Œใ‚’ๆŽจๆธฌใ—ใ‚ˆใ†ใจใ—ใพใ™ใŒใ€ใงใใ‚‹้™ใ‚Šๆธกใ™ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ๆœ€้ฉใช็ตๆžœใ‚’ๅพ—ใ‚‹ใŸใ‚ใงใ™ใ€‚ ๆœ€ๅพŒใซใ€[`~generation.GenerationMixin.generate`] ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ—ใฆ็”Ÿๆˆใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใ‚’ๅ–ๅพ—ใ—ใ€ใใ‚Œใ‚’่กจ็คบใ™ใ‚‹ๅ‰ใซใƒ†ใ‚ญใ‚นใƒˆใซๅค‰ๆ›ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A list of colors: red, blue, green, yellow, black, white, and brown' ``` ใ“ใ‚ŒใงๅฎŒไบ†ใงใ™๏ผใ‚ใšใ‹ใชใ‚ณใƒผใƒ‰่กŒๆ•ฐใงใ€LLM๏ผˆLarge Language Model๏ผ‰ใฎใƒ‘ใƒฏใƒผใ‚’ๆดป็”จใงใใพใ™ใ€‚ ## Common pitfalls [็”Ÿๆˆๆˆฆ็•ฅ](generation_strategies)ใฏใŸใใ•ใ‚“ใ‚ใ‚Šใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ€คใŒใ‚ใชใŸใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซ้ฉใ—ใฆใ„ใชใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ๅ‡บๅŠ›ใŒๆœŸๅพ…้€šใ‚Šใงใชใ„ๅ ดๅˆใ€ๆœ€ใ‚‚ไธ€่ˆฌ็š„ใช่ฝใจใ—็ฉดใจใใฎๅ›ž้ฟๆ–นๆณ•ใฎใƒชใ‚นใƒˆใ‚’ไฝœๆˆใ—ใพใ—ใŸใ€‚ ```py >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b") >>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default >>> model = AutoModelForCausalLM.from_pretrained( ... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True ... 
)
```

### Generated output is too short/long

[`~generation.GenerationConfig`] ファイルで指定されていない場合、`generate` はデフォルトで最大 20 トークンまで返します。`generate` コールで `max_new_tokens` を手動で設定することを強くお勧めします。これにより、返される新しいトークンの最大数を制御できます。LLM(正確には、[デコーダー専用モデル](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt))も出力の一部として入力プロンプトを返すことに注意してください。

```py
>>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda")

>>> # By default, the output will contain up to 20 tokens
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5'

>>> # Setting `max_new_tokens` allows you to control the maximum length
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=50)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
```

### Incorrect generation mode

デフォルトでは、[`~generation.GenerationConfig`] ファイルで指定されていない限り、`generate` は各イテレーションで最も可能性の高いトークンを選択します(貪欲デコーディング)。タスクに応じて、これは望ましくないことがあります。チャットボットやエッセイのような創造的なタスクではサンプリングが有益です。一方、音声の転写や翻訳のような入力に基づくタスクでは、貪欲デコーディングが有益です。`do_sample=True` でサンプリングを有効にできます。このトピックについての詳細は、この[ブログポスト](https://huggingface.co/blog/how-to-generate)で学ぶことができます。

```py
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed
>>> set_seed(0)

>>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")

>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'

>>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.\nI just need to be. I am always.\nEvery time'
```

### Wrong padding side

LLM(Large Language Models)は[デコーダー専用](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)のアーキテクチャであり、入力プロンプトを繰り返し処理することを意味します。入力が同じ長さでない場合、それらをパディングする必要があります。LLMはパッドトークンからの続きを学習していないため、入力は左パディングする必要があります。また、生成に対してアテンションマスクを渡し忘れないようにしてください!

```py
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence,
>>> # which is shorter, has padding on the right side. Generation fails.
>>> model_inputs = tokenizer(
...
["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt" ... ).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0] '' >>> # With left-padding, it works as expected! >>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left") >>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default >>> model_inputs = tokenizer( ... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt" ... ).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] '1, 2, 3, 4, 5, 6,' ``` ## Further resources ใ‚ชใƒผใƒˆใƒชใ‚ฐใƒฌใƒƒใ‚ทใƒ–็”Ÿๆˆใƒ—ใƒญใ‚ปใ‚นใฏๆฏ”่ผƒ็š„็ฐกๅ˜ใงใ™ใŒใ€LLMใ‚’ๆœ€ๅคง้™ใซๆดป็”จใ™ใ‚‹ใ“ใจใฏๅคšใใฎ่ฆ็ด ใŒ็ตกใ‚€ใŸใ‚ใ€ๆŒ‘ๆˆฆ็š„ใช่ฉฆใฟใจใชใ‚Šใพใ™ใ€‚LLMใฎไฝฟ็”จใจ็†่งฃใ‚’ใ•ใ‚‰ใซๆทฑใ‚ใ‚‹ใŸใ‚ใฎๆฌกใฎใ‚นใƒ†ใƒƒใƒ—ใซใคใ„ใฆใฏไปฅไธ‹ใฎใƒชใ‚ฝใƒผใ‚นใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ <!-- TODO: ๆ–ฐใ—ใ„ใ‚ฌใ‚คใƒ‰ใงๅฎŒไบ† --> ### Advanced generate usage 1. [ใ‚ฌใ‚คใƒ‰](generation_strategies)๏ผš็•ฐใชใ‚‹็”Ÿๆˆๆ–นๆณ•ใ‚’ๅˆถๅพกใ™ใ‚‹ๆ–นๆณ•ใ€็”Ÿๆˆๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใฎ่จญๅฎšๆ–นๆณ•ใ€ๅ‡บๅŠ›ใฎใ‚นใƒˆใƒชใƒผใƒŸใƒณใ‚ฐๆ–นๆณ•ใซใคใ„ใฆใฎใ‚ฌใ‚คใƒ‰; 2. [`~generation.GenerationConfig`]ใ€[`~generation.GenerationMixin.generate`]ใ€ใŠใ‚ˆใณ[็”Ÿๆˆ้–ข้€ฃใ‚ฏใƒฉใ‚น](internal/generation_utils)ใซ้–ขใ™ใ‚‹APIใƒชใƒ•ใ‚กใƒฌใƒณใ‚นใ€‚ ### LLM leaderboards 1. [Open LLM ใƒชใƒผใƒ€ใƒผใƒœใƒผใƒ‰](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)๏ผšใ‚ชใƒผใƒ—ใƒณใ‚ฝใƒผใ‚นใƒขใƒ‡ใƒซใฎๅ“่ณชใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใŸใƒชใƒผใƒ€ใƒผใƒœใƒผใƒ‰; 2. [Open LLM-Perf ใƒชใƒผใƒ€ใƒผใƒœใƒผใƒ‰](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)๏ผšLLMใฎใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใŸใƒชใƒผใƒ€ใƒผใƒœใƒผใƒ‰ใ€‚ ### Latency and throughput 1. [ใ‚ฌใ‚คใƒ‰](main_classes/quantization)๏ผšใƒ€ใ‚คใƒŠใƒŸใƒƒใ‚ฏใ‚ฏใ‚ชใƒณใ‚ฟใ‚คใ‚บใซ้–ขใ™ใ‚‹ใ‚ฌใ‚คใƒ‰ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใƒกใƒขใƒช่ฆไปถใ‚’ๅЇ็š„ใซๅ‰Šๆธ›ใ™ใ‚‹ๆ–นๆณ•ใŒ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ### Related libraries 1. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference)๏ผšLLM็”จใฎๆœฌ็•ชๅ‘ใ‘ใ‚ตใƒผใƒใƒผ; 2. [`optimum`](https://github.com/huggingface/optimum)๏ผš็‰นๅฎšใฎใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใƒ‡ใƒใ‚คใ‚นๅ‘ใ‘ใซๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸ๐Ÿค— Transformersใฎๆ‹กๅผตใ€‚
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_torch_compile.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Optimize inference using torch.compile()

このガイドは、[`torch.compile()`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) を使用した推論速度の向上に関するベンチマークを提供することを目的としています。これは、[🤗 Transformers のコンピュータビジョンモデル](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers&sort=trending)向けのものです。

## Benefits of torch.compile

モデルとGPUによっては、`torch.compile()`は推論時に最大30%の高速化をもたらすことがあります。`torch.compile()`を使用するには、バージョン2.0以上のtorchをインストールするだけです。

モデルのコンパイルには時間がかかるため、推論のたびにコンパイルするのではなく、モデルを1度だけコンパイルして使い回す場合に役立ちます。任意のコンピュータビジョンモデルをコンパイルするには、以下のようにモデルに`torch.compile()`を呼び出します:

```diff
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
```

`compile()` は、コンパイルに関する異なるモードを備えており、基本的にはコンパイル時間と推論のオーバーヘッドが異なります。`max-autotune` は `reduce-overhead` よりも時間がかかりますが、推論速度が速くなります。デフォルトモードはコンパイルにおいては最速ですが、推論時間においては `reduce-overhead` に比べて効率が良くありません。このガイドでは、デフォルトモードを使用しました。詳細については、[こちら](https://pytorch.org/get-started/pytorch-2.0/#user-experience) を参照してください。

`torch` バージョン 2.0.1 で異なるコンピュータビジョンモデル、タスク、ハードウェアの種類、およびバッチサイズを使用して `torch.compile` をベンチマークしました。

## Benchmarking code

以下に、各タスクのベンチマークコードを示します。推論前にGPUをウォームアップし、毎回同じ画像を使用して300回の推論の平均時間を取得します。

### Image Classification with ViT

```python
import torch
from PIL import Image
import requests
import numpy as np
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)

processed_input = processor(image, return_tensors='pt').to(device="cuda")

with torch.no_grad():
    _ = model(**processed_input)
```

### Object Detection with DETR

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)

inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**inputs)
```

### Image Segmentation with Segformer

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)

seg_inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**seg_inputs)
```

以下は、私たちがベンチマークを行ったモデルのリストです。

**Image Classification**
- [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
- [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k)
- [facebook/convnext-large-224](https://huggingface.co/facebook/convnext-large-224)
- [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50)

**Image Segmentation**
- [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [facebook/mask2former-swin-tiny-coco-panoptic](https://huggingface.co/facebook/mask2former-swin-tiny-coco-panoptic)
- [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade)
- [google/deeplabv3_mobilenet_v2_1.0_513](https://huggingface.co/google/deeplabv3_mobilenet_v2_1.0_513)

**Object Detection**
- [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32)
- [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101)
- [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50)

以下は、`torch.compile()`を使用した場合と使用しない場合の推論時間の可視化と、異なるハードウェアとバッチサイズの各モデルに対するパフォーマンス向上の割合です。

<div class="flex">
  <div>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/a100_batch_comp.png" />
  </div>
  <div>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_batch_comp.png" />
  </div>
  <div>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/t4_batch_comp.png" />
  </div>
</div>

<div class="flex">
  <div>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_duration.png" />
  </div>
  <div>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_percentage.png" />
  </div>
</div>

![Duration Comparison on V100 with Batch Size of 1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_1_duration.png)

![Percentage Improvement on T4 with Batch Size of
4](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/T4_4_percentage.png) ไธ‹่จ˜ใฏใ€ๅ„ใƒขใƒ‡ใƒซใซใคใ„ใฆ`compile()`ใ‚’ไฝฟ็”จใ—ใŸๅ ดๅˆใจไฝฟ็”จใ—ใชใ‹ใฃใŸๅ ดๅˆใฎๆŽจ่ซ–ๆ™‚้–“๏ผˆใƒŸใƒช็ง’ๅ˜ไฝ๏ผ‰ใงใ™ใ€‚ใชใŠใ€OwlViTใฏๅคงใใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใงใฎไฝฟ็”จๆ™‚ใซใƒกใƒขใƒชไธ่ถณ๏ผˆOOM๏ผ‰ใŒ็™บ็”Ÿใ™ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ### A100 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 9.325 | 7.584 | | Image Segmentation/Segformer | 11.759 | 10.500 | | Object Detection/OwlViT | 24.978 | 18.420 | | Image Classification/BeiT | 11.282 | 8.448 | | Object Detection/DETR | 34.619 | 19.040 | | Image Classification/ConvNeXT | 10.410 | 10.208 | | Image Classification/ResNet | 6.531 | 4.124 | | Image Segmentation/Mask2former | 60.188 | 49.117 | | Image Segmentation/Maskformer | 75.764 | 59.487 | | Image Segmentation/MobileNet | 8.583 | 3.974 | | Object Detection/Resnet-101 | 36.276 | 18.197 | | Object Detection/Conditional-DETR | 31.219 | 17.993 | ### A100 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 14.832 | 14.499 | | Image Segmentation/Segformer | 18.838 | 16.476 | | Image Classification/BeiT | 13.205 | 13.048 | | Object Detection/DETR | 48.657 | 32.418| | Image Classification/ConvNeXT | 22.940 | 21.631 | | Image Classification/ResNet | 6.657 | 4.268 | | Image Segmentation/Mask2former | 74.277 | 61.781 | | Image Segmentation/Maskformer | 180.700 | 159.116 | | Image Segmentation/MobileNet | 14.174 | 8.515 | | Object Detection/Resnet-101 | 68.101 | 44.998 | | Object Detection/Conditional-DETR | 56.470 | 35.552 | ### A100 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 40.944 | 40.010 | | Image Segmentation/Segformer | 37.005 | 31.144 | | Image Classification/BeiT | 41.854 | 41.048 | | Object Detection/DETR | 164.382 | 161.902 | | Image Classification/ConvNeXT | 82.258 | 75.561 | | Image Classification/ResNet | 7.018 | 5.024 | | Image Segmentation/Mask2former | 178.945 | 154.814 | | Image Segmentation/Maskformer | 638.570 | 579.826 | | Image Segmentation/MobileNet | 51.693 | 30.310 | | Object Detection/Resnet-101 | 232.887 | 155.021 | | Object Detection/Conditional-DETR | 180.491 | 124.032 | ### V100 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 10.495 | 6.00 | | Image Segmentation/Segformer | 13.321 | 5.862 | | Object Detection/OwlViT | 25.769 | 22.395 | | Image Classification/BeiT | 11.347 | 7.234 | | Object Detection/DETR | 33.951 | 19.388 | | Image Classification/ConvNeXT | 11.623 | 10.412 | | Image Classification/ResNet | 6.484 | 3.820 | | Image Segmentation/Mask2former | 64.640 | 49.873 | | Image Segmentation/Maskformer | 95.532 | 72.207 | | Image Segmentation/MobileNet | 9.217 | 4.753 | | Object Detection/Resnet-101 | 52.818 | 28.367 | | Object Detection/Conditional-DETR | 39.512 | 20.816 | ### V100 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 15.181 | 14.501 | | Image Segmentation/Segformer | 16.787 | 16.188 | | Image Classification/BeiT | 15.171 | 14.753 | | Object Detection/DETR | 88.529 | 64.195 | | Image 
Classification/ConvNeXT | 29.574 | 27.085 | | Image Classification/ResNet | 6.109 | 4.731 | | Image Segmentation/Mask2former | 90.402 | 76.926 | | Image Segmentation/Maskformer | 234.261 | 205.456 | | Image Segmentation/MobileNet | 24.623 | 14.816 | | Object Detection/Resnet-101 | 134.672 | 101.304 | | Object Detection/Conditional-DETR | 97.464 | 69.739 | ### V100 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 52.209 | 51.633 | | Image Segmentation/Segformer | 61.013 | 55.499 | | Image Classification/BeiT | 53.938 | 53.581 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 109.682 | 100.771 | | Image Classification/ResNet | 14.857 | 12.089 | | Image Segmentation/Mask2former | 249.605 | 222.801 | | Image Segmentation/Maskformer | 831.142 | 743.645 | | Image Segmentation/MobileNet | 93.129 | 55.365 | | Object Detection/Resnet-101 | 482.425 | 361.843 | | Object Detection/Conditional-DETR | 344.661 | 255.298 | ### T4 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 16.520 | 15.786 | | Image Segmentation/Segformer | 16.116 | 14.205 | | Object Detection/OwlViT | 53.634 | 51.105 | | Image Classification/BeiT | 16.464 | 15.710 | | Object Detection/DETR | 73.100 | 53.99 | | Image Classification/ConvNeXT | 32.932 | 30.845 | | Image Classification/ResNet | 6.031 | 4.321 | | Image Segmentation/Mask2former | 79.192 | 66.815 | | Image Segmentation/Maskformer | 200.026 | 188.268 | | Image Segmentation/MobileNet | 18.908 | 11.997 | | Object Detection/Resnet-101 | 106.622 | 82.566 | | Object Detection/Conditional-DETR | 77.594 | 56.984 | ### T4 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 43.653 | 43.626 | | Image Segmentation/Segformer | 45.327 | 42.445 | | Image Classification/BeiT | 52.007 | 51.354 | | Object Detection/DETR | 277.850 | 268.003 | | Image Classification/ConvNeXT | 119.259 | 105.580 | | Image Classification/ResNet | 13.039 | 11.388 | | Image Segmentation/Mask2former | 201.540 | 184.670 | | Image Segmentation/Maskformer | 764.052 | 711.280 | | Image Segmentation/MobileNet | 74.289 | 48.677 | | Object Detection/Resnet-101 | 421.859 | 357.614 | | Object Detection/Conditional-DETR | 289.002 | 226.945 | ### T4 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 163.914 | 160.907 | | Image Segmentation/Segformer | 192.412 | 163.620 | | Image Classification/BeiT | 188.978 | 187.976 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 422.886 | 388.078 | | Image Classification/ResNet | 44.114 | 37.604 | | Image Segmentation/Mask2former | 756.337 | 695.291 | | Image Segmentation/Maskformer | 2842.940 | 2656.88 | | Image Segmentation/MobileNet | 299.003 | 201.942 | | Object Detection/Resnet-101 | 1619.505 | 1262.758 | | Object Detection/Conditional-DETR | 1137.513 | 897.390| ## PyTorch Nightly ใพใŸใ€PyTorchใฎใƒŠใ‚คใƒˆใƒชใƒผใƒใƒผใ‚ธใƒงใƒณ๏ผˆ2.1.0dev๏ผ‰ใงใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚’่กŒใ„ใ€ใ‚ณใƒณใƒ‘ใ‚คใƒซใ•ใ‚Œใฆใ„ใชใ„ใƒขใƒ‡ใƒซใจใ‚ณใƒณใƒ‘ใ‚คใƒซๆธˆใฟใƒขใƒ‡ใƒซใฎไธกๆ–นใงใƒฌใ‚คใƒ†ใƒณใ‚ทใƒผใฎๅ‘ไธŠใ‚’่ฆณๅฏŸใ—ใพใ—ใŸใ€‚ใƒ›ใ‚คใƒผใƒซใฏ[ใ“ใกใ‚‰](https://download.pytorch.org/whl/nightly/cu118)ใ‹ใ‚‰ๅ…ฅๆ‰‹ใงใใพใ™ใ€‚ ### A100 | **Task/Model** | **Batch 
Size** | **torch 2.0 - no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 12.462 | 6.954 |
| Image Classification/BeiT | 4 | 14.109 | 12.851 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 30.484 | 15.221 |
| Object Detection/DETR | 4 | 46.816 | 30.942 |
| Object Detection/DETR | 16 | 163.749 | 163.706 |

### T4

| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 14.408 | 14.052 |
| Image Classification/BeiT | 4 | 47.381 | 46.604 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 68.382 | 53.481 |
| Object Detection/DETR | 4 | 269.615 | 204.785 |
| Object Detection/DETR | 16 | OOM | OOM |

### V100

| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 13.477 | 7.926 |
| Image Classification/BeiT | 4 | 15.103 | 14.378 |
| Image Classification/BeiT | 16 | 52.517 | 51.691 |
| Object Detection/DETR | Unbatched | 28.706 | 19.077 |
| Object Detection/DETR | 4 | 88.402 | 62.949 |
| Object Detection/DETR | 16 | OOM | OOM |

## Reduce Overhead

Nightlyビルドで、A100およびT4向けの `reduce-overhead` コンパイルモードをベンチマークしました。

### A100

| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 11.758 | 7.335 |
| Image Classification/ConvNeXT | 4 | 23.171 | 21.490 |
| Image Classification/ResNet | Unbatched | 7.435 | 3.801 |
| Image Classification/ResNet | 4 | 7.261 | 2.187 |
| Object Detection/Conditional-DETR | Unbatched | 32.823 | 11.627 |
| Object Detection/Conditional-DETR | 4 | 50.622 | 33.831 |
| Image Segmentation/MobileNet | Unbatched | 9.869 | 4.244 |
| Image Segmentation/MobileNet | 4 | 14.385 | 7.946 |

### T4

| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 32.137 | 31.84 |
| Image Classification/ConvNeXT | 4 | 120.944 | 110.209 |
| Image Classification/ResNet | Unbatched | 9.761 | 7.698 |
| Image Classification/ResNet | 4 | 15.215 | 13.871 |
| Object Detection/Conditional-DETR | Unbatched | 72.150 | 57.660 |
| Object Detection/Conditional-DETR | 4 | 301.494 | 247.543 |
| Image Segmentation/MobileNet | Unbatched | 22.266 | 19.339 |
| Image Segmentation/MobileNet | 4 | 78.311 | 50.983 |
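参考までに、本文の「ウォームアップ後に同じ入力で 300 回推論し平均時間を取る」という手順を再現する計測ループの最小スケッチを示します。ウォームアップ回数や同期のとり方は想定に基づくものです:

```python
import time
import torch

def benchmark(model, inputs, n_warmup=5, n_runs=300):
    """ウォームアップ後、n_runs 回のフォワードパスの平均時間(ミリ秒)を返す説明用スケッチ。"""
    with torch.no_grad():
        for _ in range(n_warmup):  # ウォームアップ(初回のコンパイルやキャッシュ構築を含む)
            _ = model(**inputs)
        torch.cuda.synchronize()  # GPU 上の非同期実行が終わるのを待ってから計測を開始する
        start = time.perf_counter()
        for _ in range(n_runs):
            _ = model(**inputs)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs * 1000

# 使用例(本文の ViT の例の model / processed_input を想定):
# latency_ms = benchmark(model, processed_input)
```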
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ja/perf_infer_gpu_one.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Efficient Inference on a Single GPU ใ“ใฎใ‚ฌใ‚คใƒ‰ใซๅŠ ใˆใฆใ€[1ใคใฎGPUใงใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ฌใ‚คใƒ‰](perf_train_gpu_one)ใจ[CPUใงใฎๆŽจ่ซ–ใ‚ฌใ‚คใƒ‰](perf_infer_cpu)ใซ้–ข้€ฃใ™ใ‚‹ๆƒ…ๅ ฑใŒใ‚ใ‚Šใพใ™ใ€‚ ## Flash Attention 2 <Tip> ใ“ใฎๆฉŸ่ƒฝใฏๅฎŸ้จ“็š„ใงใ‚ใ‚Šใ€ๅฐ†ๆฅใฎใƒใƒผใ‚ธใƒงใƒณใงๅคงๅน…ใซๅค‰ๆ›ดใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€Flash Attention 2 APIใฏ่ฟ‘ใ„ๅฐ†ๆฅ`BetterTransformer` APIใซ็งป่กŒใ™ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ </Tip> Flash Attention 2ใฏใ€ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๆŽจ่ซ–้€Ÿๅบฆใ‚’ๅคงๅน…ใซ้ซ˜้€ŸๅŒ–ใงใใพใ™ใ€‚Flash Attention 2ใฏใ€Tri Daoๆฐใซใ‚ˆใฃใฆ[ๅ…ฌๅผใฎFlash Attentionใƒชใƒใ‚ธใƒˆใƒช](https://github.com/Dao-AILab/flash-attention)ใงๅฐŽๅ…ฅใ•ใ‚Œใพใ—ใŸใ€‚Flash Attentionใซ้–ขใ™ใ‚‹็ง‘ๅญฆ่ซ–ๆ–‡ใฏ[ใ“ใกใ‚‰](https://arxiv.org/abs/2205.14135)ใง่ฆ‹ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ Flash Attention 2ใ‚’ๆญฃใ—ใใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใซใฏใ€ไธŠ่จ˜ใฎใƒชใƒใ‚ธใƒˆใƒชใซ่จ˜่ผ‰ใ•ใ‚Œใฆใ„ใ‚‹ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ‚ฌใ‚คใƒ‰ใซๅพ“ใฃใฆใใ ใ•ใ„ใ€‚ ไปฅไธ‹ใฎใƒขใƒ‡ใƒซใซๅฏพใ—ใฆFlash Attention 2ใ‚’ใƒใ‚คใƒ†ใ‚ฃใƒ–ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™๏ผš - Llama - Falcon ใ•ใ‚‰ใซๅคšใใฎใƒขใƒ‡ใƒซใซFlash Attention 2ใฎใ‚ตใƒใƒผใƒˆใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใ‚’GitHubใงๆๆกˆใ™ใ‚‹ใ“ใจใ‚‚ใงใใ€ๅค‰ๆ›ดใ‚’็ตฑๅˆใ™ใ‚‹ใŸใ‚ใซใƒ—ใƒซใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้–‹ใใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใฏใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅซใ‚€ใ€ๆŽจ่ซ–ใจใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซไฝฟ็”จใงใใพใ™๏ผˆ็พๅœจใฎ`BetterTransformer` APIใงใฏใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„๏ผ‰ใ€‚ <Tip> Flash Attention 2ใฏใ€ใƒขใƒ‡ใƒซใฎdtypeใŒ`fp16`ใพใŸใฏ`bf16`ใฎๅ ดๅˆใซใฎใฟไฝฟ็”จใงใใ€NVIDIA-GPUใƒ‡ใƒใ‚คใ‚นใงใฎใฟๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ใ“ใฎๆฉŸ่ƒฝใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ใƒขใƒ‡ใƒซใ‚’้ฉๅˆ‡ใชdtypeใซใ‚ญใƒฃใ‚นใƒˆใ—ใ€ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใƒ‡ใƒใ‚คใ‚นใซใƒญใƒผใƒ‰ใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> ### Quick usage ใƒขใƒ‡ใƒซใงFlash Attention 2ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`from_pretrained`ใฎๅผ•ๆ•ฐใซ`attn_implementation="flash_attention_2"`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) ``` ใ“ใกใ‚‰ใฏใ€็”ŸๆˆใพใŸใฏๅพฎ่ชฟๆ•ดใฎใŸใ‚ใซไฝฟ็”จใ™ใ‚‹ใƒ†ใ‚ญใ‚นใƒˆใงใ™ใ€‚ ### Expected speedups ็‰นใซ้•ทใ„ใ‚ทใƒผใ‚ฑใƒณใ‚นใซๅฏพใ—ใฆใ€ๅพฎ่ชฟๆ•ดใจๆŽจ่ซ–ใฎ้š›ใซใฏใ€ใ‹ใชใ‚Šใฎ้ซ˜้€ŸๅŒ–ใŒๆœŸๅพ…ใงใใพใ™ใ€‚ใŸใ ใ—ใ€Flash Attentionใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ไฝฟ็”จใ—ใฆใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚นใ‚ณใ‚ขใ‚’่จˆ็ฎ—ใ—ใชใ„ใŸใ‚ใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใŒๅซใพใ‚Œใ‚‹ๅ 
ดๅˆใ€ใƒใƒƒใƒๆŽจ่ซ–ใซใŠใ„ใฆใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚นใ‚ณใ‚ขใ‚’ๆ‰‹ๅ‹•ใงใƒ‘ใƒƒใƒ‰/ใ‚ขใƒณใƒ‘ใƒƒใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ๅซใ‚€ใƒใƒƒใƒ็”Ÿๆˆใฎๅคงๅน…ใช้…ๅปถใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚’ๅ…‹ๆœใ™ใ‚‹ใŸใ‚ใซใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซใ‚ทใƒผใ‚ฑใƒณใ‚นใซใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ไฝฟ็”จใ›ใšใซFlash Attentionใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™๏ผˆใŸใจใˆใฐใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใƒ‘ใƒƒใ‚ฏใ™ใ‚‹ใ“ใจใซใ‚ˆใ‚Šใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๆœ€ๅคงใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใซ้”ใ™ใ‚‹ใพใง้€ฃ็ตใ™ใ‚‹ใ“ใจใชใฉ๏ผ‰ใ€‚ใ“ใ“ใซ[ไพ‹](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516)ใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ไปฅไธ‹ใฏใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใฎใชใ„ๅ ดๅˆใซใ€ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใŒ4096ใฎ[tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b)ใซๅฏพใ™ใ‚‹ๅ˜็ด”ใชใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฎไบˆๆƒณใ•ใ‚Œใ‚‹้ซ˜้€ŸๅŒ–ใงใ™ใ€‚ใ•ใพใ–ใพใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒ็คบใ•ใ‚Œใฆใ„ใพใ™๏ผš <div style="text-align: center"> <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/falcon-7b-inference-large-seqlen.png"> </div> ไปฅไธ‹ใฏใ€ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใฎใชใ„ๅ ดๅˆใซใ€ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใŒ4096ใฎ[`meta-llama/Llama-7b-hf`](https://hf.co/meta-llama/Llama-7b-hf)ใซๅฏพใ™ใ‚‹ๅ˜็ด”ใชใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใฎไบˆๆƒณใ•ใ‚Œใ‚‹้ซ˜้€ŸๅŒ–ใงใ™ใ€‚ใ•ใพใ–ใพใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒ็คบใ•ใ‚Œใฆใ„ใพใ™๏ผš <div style="text-align: center"> <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-7b-inference-large-seqlen.png"> </div> ใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ๅซใ‚€ใ‚ทใƒผใ‚ฑใƒณใ‚น๏ผˆใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใ‚’ไฝฟ็”จใ—ใฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใพใŸใฏ็”Ÿๆˆใ™ใ‚‹๏ผ‰ใฎๅ ดๅˆใ€ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใ‚นใ‚ณใ‚ขใ‚’ๆญฃใ—ใ่จˆ็ฎ—ใ™ใ‚‹ใŸใ‚ใซๅ…ฅๅŠ›ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ใ‚ขใƒณใƒ‘ใƒƒใƒ‰/ใƒ‘ใƒƒใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆฏ”่ผƒ็š„ๅฐใ•ใ„ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใฎๅ ดๅˆใ€็ด”็ฒ‹ใชใƒ•ใ‚ฉใƒฏใƒผใƒ‰ใƒ‘ใ‚นใงใฏใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใƒˆใƒผใ‚ฏใƒณใŒ30%ๆœชๆบ€ใ—ใ‹ๅŸ‹ใ‚ใ‚‰ใ‚Œใฆใ„ใชใ„ใŸใ‚ใ€ใ“ใ‚Œใฏใ‚ใšใ‹ใช้ซ˜้€ŸๅŒ–ใ‚’ใ‚‚ใŸใ‚‰ใ—ใพใ™ใ€‚ <div style="text-align: center"> <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-small-seqlen-padding.png"> </div> ใ—ใ‹ใ—ใ€ๅคงใใชใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใฎๅ ดๅˆใ€็ด”็ฒ‹ใชๆŽจ่ซ–๏ผˆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚‚ๅซใ‚€๏ผ‰ใซใฏ่ˆˆๅ‘ณๆทฑใ„้ซ˜้€ŸๅŒ–ใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ Flash Attentionใฏใ€ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ่จˆ็ฎ—ใ‚’ใ‚ˆใ‚ŠใƒกใƒขใƒชๅŠน็އใฎ่‰ฏใ„ใ‚‚ใฎใซใ—ใ€ๅคงใใชใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใงใฎCUDA OOMใฎๅ•้กŒใ‚’ๅ›ž้ฟใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ๅคงใใชใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใซๅฏพใ—ใฆๆœ€ๅคง20ใฎใƒกใƒขใƒชๅ‰Šๆธ›ใ‚’ใ‚‚ใŸใ‚‰ใ™ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ๅ…ฌๅผใฎFlash Attentionใƒชใƒใ‚ธใƒˆใƒช](https://github.com/Dao-AILab/flash-attention)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ <div style="text-align: center"> <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-large-seqlen-padding.png"> </div> ### Advanced usage ใ“ใฎๆฉŸ่ƒฝใ‚’ใƒขใƒ‡ใƒซใฎๆœ€้ฉๅŒ–ใซๅคšใใฎๆ—ขๅญ˜ใฎๆฉŸ่ƒฝใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ไปฅไธ‹ใซใ„ใใคใ‹ใฎไพ‹ใ‚’็คบใ—ใพใ™๏ผš ### Combining Flash Attention 2 and 8-bit models ใ“ใฎๆฉŸ่ƒฝใ‚’8ใƒ“ใƒƒใƒˆใฎ้‡ๅญๅŒ–ใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, load_in_8bit=True, 
attn_implementation="flash_attention_2", ) ``` ### Combining Flash Attention 2 and 4-bit models ใ“ใฎๆฉŸ่ƒฝใ‚’ 4 ใƒ“ใƒƒใƒˆใฎ้‡ๅญๅŒ–ใจ็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™๏ผš ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, load_in_4bit=True, attn_implementation="flash_attention_2", ) ``` ### Combining Flash Attention 2 and PEFT ใ“ใฎๆฉŸ่ƒฝใ‚’ไฝฟ็”จใ—ใฆใ€Flash Attention 2ใ‚’ใƒ™ใƒผใ‚นใซใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹้š›ใซPEFTใ‚’็ต„ใฟๅˆใ‚ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM from peft import LoraConfig model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, load_in_4bit=True, attn_implementation="flash_attention_2", ) lora_config = LoraConfig( r=8, task_type="CAUSAL_LM" ) model.add_adapter(lora_config) ... # train your model ``` ## BetterTransformer [BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview)ใฏใ€๐Ÿค— Transformersใƒขใƒ‡ใƒซใ‚’PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎ้ซ˜้€Ÿใƒ‘ใ‚นๅฎŸ่กŒใซๅค‰ๆ›ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€Flash Attentionใชใฉใฎๆœ€้ฉๅŒ–ใ•ใ‚ŒใŸใ‚ซใƒผใƒใƒซใŒๅ†…้ƒจใงๅ‘ผใณๅ‡บใ•ใ‚Œใพใ™ใ€‚ BetterTransformerใฏใ€ใƒ†ใ‚ญใ‚นใƒˆใ€็”ปๅƒใ€ใŠใ‚ˆใณใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒขใƒ‡ใƒซใฎๅ˜ไธ€ใŠใ‚ˆใณใƒžใƒซใƒGPUใงใฎ้ซ˜้€ŸใชๆŽจ่ซ–ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ <Tip> Flash Attentionใฏใ€fp16ใพใŸใฏbf16ใฎdtypeใ‚’ไฝฟ็”จใ™ใ‚‹ใƒขใƒ‡ใƒซใซใฎใฟไฝฟ็”จใงใใพใ™ใ€‚BetterTransformerใ‚’ไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ€ใƒขใƒ‡ใƒซใ‚’้ฉๅˆ‡ใชdtypeใซใ‚ญใƒฃใ‚นใƒˆใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> ### Encoder models PyTorchใƒใ‚คใƒ†ใ‚ฃใƒ–ใฎ[`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/)ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ้ซ˜้€Ÿใƒ‘ใ‚นใ€BetterTransformerใจๅ‘ผใฐใ‚Œใ‚‹ใ‚‚ใฎใฏใ€[๐Ÿค— Optimumใƒฉใ‚คใƒ–ใƒฉใƒช](https://huggingface.co/docs/optimum/bettertransformer/overview)ใฎ็ตฑๅˆใ‚’้€šใ˜ใฆTransformersใจไธ€็ท’ใซไฝฟ็”จใงใใพใ™ใ€‚ PyTorchใฎใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ้ซ˜้€Ÿใƒ‘ใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใ‚ซใƒผใƒใƒซใƒ•ใƒฅใƒผใ‚ธใƒงใƒณใจ[ใƒใ‚นใƒˆใ•ใ‚ŒใŸใƒ†ใƒณใ‚ฝใƒซ](https://pytorch.org/docs/stable/nested.html)ใฎไฝฟ็”จใซใ‚ˆใ‚Šใ€ๆŽจ่ซ–ใ‚’้ซ˜้€ŸๅŒ–ใงใใพใ™ใ€‚่ฉณ็ดฐใชใƒ™ใƒณใƒใƒžใƒผใ‚ฏๆƒ…ๅ ฑใฏ[ใ“ใฎใƒ–ใƒญใ‚ฐ่จ˜ไบ‹](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)ใซใ‚ใ‚Šใพใ™ใ€‚ [`optimum`](https://github.com/huggingface/optimum)ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใŸๅพŒใ€ๆŽจ่ซ–ไธญใซBetter Transformerใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€้–ข้€ฃใ™ใ‚‹ๅ†…้ƒจใƒขใ‚ธใƒฅใƒผใƒซใ‚’ๅ‘ผใณๅ‡บใ™ใ“ใจใง็ฝฎใๆ›ใˆใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™[`~PreTrainedModel.to_bettertransformer`]: ```python model = model.to_bettertransformer() ``` ใƒกใ‚ฝใƒƒใƒ‰ [`~PreTrainedModel.reverse_bettertransformer`] ใฏใ€ใƒขใƒ‡ใƒซใ‚’ไฟๅญ˜ใ™ใ‚‹ๅ‰ใซไฝฟ็”จใ™ในใใงใ€ๆจ™ๆบ–ใฎใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒขใƒ‡ใƒชใƒณใ‚ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใฎใ‚‚ใฎใงใ™๏ผš ```python model = model.reverse_bettertransformer() model.save_pretrained("saved_model") ``` BetterTransformer APIใ‚’ไฝฟใฃใŸใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใƒขใƒ‡ใƒซใฎๅฏ่ƒฝๆ€งใซใคใ„ใฆ่ฉณใ—ใ็Ÿฅใ‚‹ใซใฏใ€[ใ“ใฎใƒ–ใƒญใ‚ฐใƒใ‚นใƒˆ](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ### 
Decoder models

テキストモデル、特にデコーダーベースのモデル(GPT、T5、Llamaなど)にとって、BetterTransformer APIはすべての注意操作を[`torch.nn.functional.scaled_dot_product_attention`オペレーター](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention)(SDPA)を使用するように変換します。このオペレーターはPyTorch 2.0以降でのみ利用可能です。

モデルをBetterTransformerに変換するには、以下の手順を実行してください:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# convert the model to BetterTransformer
model.to_bettertransformer()

# Use it for training or inference
```

SDPAは、ハードウェアや問題のサイズに応じて[Flash Attention](https://arxiv.org/abs/2205.14135)カーネルを使用することもできます。Flash Attentionを有効にするか、特定の設定(ハードウェア、問題サイズ)で使用可能かどうかを確認するには、[`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel)をコンテキストマネージャとして使用します。

```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

もしトレースバックに以下のようなエラーが表示された場合

```bash
RuntimeError: No available kernel. Aborting execution.
```

Flash Attention の広範なカバレッジを持つかもしれない PyTorch のナイトリーバージョンを試してみることをお勧めします。

```bash
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```

または、モデルが正しくfloat16またはbfloat16にキャストされていることを確認してください。

`BetterTransformer` + SDPA APIを使用して何が可能かについて詳しく読むには、[この詳細なブログポスト](https://pytorch.org/blog/out-of-the-box-acceleration/)をご覧ください。

## `bitsandbytes` integration for FP4 mixed-precision inference
`bitsandbytes`をインストールし、GPUで簡単なモデルの圧縮を利用できます。FP4量子化を使用すると、ネイティブのフルプレシジョンバージョンと比較してモデルサイズを最大8倍削減できることが期待できます。以下を確認して、どのように始めるかをご覧ください。

<Tip>

この機能は、マルチGPUセットアップでも使用できることに注意してください。

</Tip>

### Requirements [[requirements-for-fp4-mixedprecision-inference]]

- Latest `bitsandbytes` library `pip install bitsandbytes>=0.39.0`
- Install latest `accelerate` from source `pip install git+https://github.com/huggingface/accelerate.git`
- Install latest `transformers` from source `pip install git+https://github.com/huggingface/transformers.git`

### Running FP4 models - single GPU setup - Quickstart

以下のコードを実行することで、簡単に単一のGPUでFP4モデルを実行できます:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```

注意: `device_map`はオプションですが、推論時に `device_map = 'auto'` を設定することが推奨されています。これにより、利用可能なリソースに効率的にモデルがディスパッチされます。

### Running FP4 models - multi GPU setup

混合4ビットモデルを複数のGPUにロードする方法は、単一GPUセットアップと同じです(単一GPUセットアップと同じコマンドです):

```py
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```

しかし、`accelerate`を使用して、各GPUに割り当てるGPU RAMを制御することができます。以下のように、`max_memory`引数を使用します:

```py
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
```

この例では、最初のGPUは600MBのメモリを使用し、2番目のGPUは1GBを使用します。

### Advanced usage

このメソッドのさらなる高度な使用法については、[量子化](main_classes/quantization)のドキュメンテーションページをご覧ください。

## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition

<Tip>

この機能は、マルチGPU環境でも使用できます。

</Tip>

論文[`LLM.int8():スケーラブルなTransformer向けの8ビット行列乗算`](https://arxiv.org/abs/2208.07339)によれば、Hugging Face統合がHub内のすべてのモデルでわずか数行のコードでサポートされています。このメソッドは、半精度(`float16`および`bfloat16`)の重みの場合に`nn.Linear`サイズを2倍、単精度(`float32`)の重みの場合は4倍に縮小し、外れ値に対してほとんど影響を与えません。

![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png)

Int8混合精度行列分解は、行列乗算を2つのストリームに分割することによって動作します:(1) システマティックな特徴外れ値ストリームがfp16で行列乗算(0.01%)、(2)
int8行列乗算の通常のストリーム(99.9%)。この方法を使用すると、非常に大きなモデルに対して予測の劣化なしにint8推論が可能です。

このメソッドの詳細については、[論文](https://arxiv.org/abs/2208.07339)または[この統合に関するブログ記事](https://huggingface.co/blog/hf-bitsandbytes-integration)をご確認ください。

![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif)

なお、この機能を使用するにはGPUが必要であり、カーネルはGPU専用にコンパイルされている必要があります。この機能を使用する前に、モデルの1/4(またはハーフ精度の重みの場合は1/2)を保存するのに十分なGPUメモリがあることを確認してください。

このモジュールを使用する際のヘルプに関する詳細は、以下のノートをご覧いただくか、[Google Colabのデモ](#colab-demos)をご覧ください。

### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]

- `bitsandbytes<0.37.0`を使用する場合、NVIDIA GPUを使用していることを確認し、8ビットテンソルコアをサポートしていることを確認してください(Turing、Ampere、またはそれ以降のアーキテクチャー、例:T4、RTX20s RTX30s、A40-A100など)。`bitsandbytes>=0.37.0`の場合、すべてのGPUがサポートされるはずです。
- 正しいバージョンの`bitsandbytes`をインストールするには、次のコマンドを実行してください: `pip install bitsandbytes>=0.31.5`
- `accelerate`をインストールします: `pip install accelerate>=0.12.0`

### Running mixed-Int8 models - single GPU setup

必要なライブラリをインストールした後、ミックス 8 ビットモデルを読み込む方法は次の通りです:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

以下はシンプルな例です:

* `pipeline()` 関数の代わりに、モデルの `generate()` メソッドを使用することをお勧めします。`pipeline()` 関数を使用して推論することは可能ですが、混合8ビットモデルに最適化されておらず、`generate()` メソッドを使用するよりも遅くなります。また、一部のサンプリング戦略(例:ヌクレウスサンプリング)は、`pipeline()` 関数では混合8ビットモデルではサポートされていません。
* すべての入力をモデルと同じデバイスに配置してください。

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

### Running mixed-int8 models - multi GPU setup

複数のGPUに混合8ビットモデルをロードする方法は、次の通りです(シングルGPUセットアップと同じコマンドです):

```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name,
device_map="auto", load_in_8bit=True)
```

`accelerate`を使用して各GPUに割り当てるGPU RAMを制御する際には、以下のように`max_memory`引数を使用します:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```

この例では、最初のGPUは1GBのメモリを使用し、2番目のGPUは2GBを使用します。

### Colab demos

この方法を使用すると、以前のGoogle Colabでは推論できなかったモデルに対して推論を行うことができます。以下は、Google Colabで8ビット量子化を使用してT5-11b(fp32で42GB)を実行するデモのリンクです:

[![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)

また、BLOOM-3Bのデモもご覧いただけます:

[![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)

## Advanced usage: mixing FP4 (or Int8) and BetterTransformer

異なる方法を組み合わせて、モデルの最適なパフォーマンスを得ることができます。例えば、BetterTransformerを使用してFP4ミックスプレシジョン推論とフラッシュアテンションを組み合わせることができます。

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
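参考までに、量子化によるメモリ削減を手元で確認するための最小スケッチを示します。`get_memory_footprint()` がモデルのおおよそのメモリ使用量(バイト単位)を返すという想定に基づきます:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"

# 4ビット量子化でロードし、メモリフットプリントを表示する(説明用スケッチ)
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
print(f"4bit: {model_4bit.get_memory_footprint() / 1024**3:.2f} GB")
```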
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/pipelines_utils.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณ็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ ใ“ใฎใƒšใƒผใ‚ธใซใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใŒใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซๆไพ›ใ™ใ‚‹ใ™ในใฆใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ้–ขๆ•ฐใŒใƒชใ‚นใƒˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใปใจใ‚“ใฉใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‰ใ‚’็ ”็ฉถใ™ใ‚‹ๅ ดๅˆใซใฎใฟๅฝนใซ็ซ‹ใกใพใ™ใ€‚ ## Argument handling [[autodoc]] pipelines.ArgumentHandler [[autodoc]] pipelines.ZeroShotClassificationArgumentHandler [[autodoc]] pipelines.QuestionAnsweringArgumentHandler ## Data format [[autodoc]] pipelines.PipelineDataFormat [[autodoc]] pipelines.CsvPipelineDataFormat [[autodoc]] pipelines.JsonPipelineDataFormat [[autodoc]] pipelines.PipedPipelineDataFormat ## Utilities [[autodoc]] pipelines.PipelineException
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/image_processing_utils.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 画像プロセッサ用ユーティリティ

このページには、画像プロセッサーで使用されるすべてのユーティリティー関数がリストされています。その多くは、画像を処理するために使用される機能的な変換です。

これらのほとんどは、ライブラリ内の画像プロセッサのコードを学習する場合にのみ役に立ちます。

## Image Transformations

[[autodoc]] image_transforms.center_crop

[[autodoc]] image_transforms.center_to_corners_format

[[autodoc]] image_transforms.corners_to_center_format

[[autodoc]] image_transforms.id_to_rgb

[[autodoc]] image_transforms.normalize

[[autodoc]] image_transforms.pad

[[autodoc]] image_transforms.rgb_to_id

[[autodoc]] image_transforms.rescale

[[autodoc]] image_transforms.resize

[[autodoc]] image_transforms.to_pil_image

## ImageProcessingMixin

[[autodoc]] image_processing_utils.ImageProcessingMixin
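以下は、これらの変換関数の使い方を示す簡単なスケッチです(ダミーの NumPy 画像を使用した一例であり、引数の既定値は環境によって異なる場合があります)。

```python
import numpy as np

from transformers.image_transforms import center_crop, resize, to_pil_image

# Dummy channels-first image (C, H, W)
image = np.random.randint(0, 256, (3, 480, 640), dtype=np.uint8)

image = resize(image, size=(256, 256))       # resize to 256x256
image = center_crop(image, size=(224, 224))  # crop the central 224x224 region
pil_image = to_pil_image(image)              # convert to a PIL.Image

print(pil_image.size)  # (224, 224)
```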
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/time_series_utils.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๆ™‚็ณปๅˆ—ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ ใ“ใฎใƒšใƒผใ‚ธใซใฏใ€ๆ™‚็ณปๅˆ—ใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใซไฝฟ็”จใงใใ‚‹ใ™ในใฆใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ้–ขๆ•ฐใจใ‚ฏใƒฉใ‚นใŒใƒชใ‚นใƒˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใปใจใ‚“ใฉใฏใ€ๆ™‚็ณปๅˆ—ใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‰ใ‚’็ ”็ฉถใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใพใŸใฏๅˆ†ๆ•ฃๅ‡บๅŠ›ใ‚ฏใƒฉใ‚นใฎใ‚ณใƒฌใ‚ฏใ‚ทใƒงใƒณใซ่ฟฝๅŠ ใ—ใŸใ„ๅ ดๅˆใซใฎใฟๅฝน็ซ‹ใกใพใ™ใ€‚ ## Distributional Output [[autodoc]] time_series_utils.NormalOutput [[autodoc]] time_series_utils.StudentTOutput [[autodoc]] time_series_utils.NegativeBinomialOutput
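以下は、これらの分布出力クラスの典型的な使い方を想定した簡単なスケッチです(特徴量の次元やバッチサイズは説明用の仮の値です)。モデルの特徴量を分布パラメータに射影し、そこから分布を構築してサンプリングします。

```python
import torch

from transformers.time_series_utils import StudentTOutput

distribution_output = StudentTOutput(dim=1)

# Project model features (e.g. decoder hidden states) to distribution parameters
projection = distribution_output.get_parameter_projection(in_features=32)
features = torch.randn(8, 32)      # hypothetical batch of features
distr_args = projection(features)  # tuple of parameter tensors (df, loc, scale)

distribution = distribution_output.distribution(distr_args)
sample = distribution.sample()     # draw one sample per batch element
```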
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/audio_utils.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # `FeatureExtractor` ็”จใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ ใ“ใฎใƒšใƒผใ‚ธใซใฏใ€*็Ÿญๆ™‚้–“ใƒ•ใƒผใƒชใ‚จๅค‰ๆ›* ใ‚„ *ใƒญใ‚ฐ ใƒกใƒซ ใ‚นใƒšใ‚ฏใƒˆใƒญใ‚ฐใƒฉใƒ * ใชใฉใฎไธ€่ˆฌ็š„ใชใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใ‚’ไฝฟ็”จใ—ใฆ็”Ÿใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใ‹ใ‚‰็‰นๅˆฅใช็‰นๅพดใ‚’่จˆ็ฎ—ใ™ใ‚‹ใŸใ‚ใซใ€ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช [`FeatureExtractor`] ใงไฝฟ็”จใงใใ‚‹ใ™ในใฆใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ้–ขๆ•ฐใŒใƒชใ‚นใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใปใจใ‚“ใฉใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช ใƒ—ใƒญใ‚ปใƒƒใ‚ตใฎใ‚ณใƒผใƒ‰ใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ๅ ดๅˆใซใฎใฟๅฝนใซ็ซ‹ใกใพใ™ใ€‚ ## ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชๅค‰ๆ› [[autodoc]] audio_utils.hertz_to_mel [[autodoc]] audio_utils.mel_to_hertz [[autodoc]] audio_utils.mel_filter_bank [[autodoc]] audio_utils.optimal_fft_length [[autodoc]] audio_utils.window_function [[autodoc]] audio_utils.spectrogram [[autodoc]] audio_utils.power_to_db [[autodoc]] audio_utils.amplitude_to_db
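以下は、これらの関数を組み合わせて生のオーディオからログメル スペクトログラムを計算する簡単なスケッチです(サンプリングレートやフレーム長などの値は説明用の一例です)。

```python
import numpy as np

from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

# One second of a 440 Hz sine wave at 16 kHz
waveform = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000).astype(np.float32)

mel_filters = mel_filter_bank(
    num_frequency_bins=201,  # fft_length // 2 + 1
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=16000,
)

log_mel = spectrogram(
    waveform,
    window=window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    fft_length=400,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (80, num_frames)
```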
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/file_utils.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ไธ€่ˆฌ็š„ใชใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ ใ“ใฎใƒšใƒผใ‚ธใซใฏใ€ใƒ•ใ‚กใ‚คใƒซ `utils.py` ใซใ‚ใ‚‹ Transformers ใฎไธ€่ˆฌ็š„ใชใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ้–ขๆ•ฐใŒใ™ในใฆใƒชใ‚นใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใปใจใ‚“ใฉใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใงไธ€่ˆฌ็š„ใชใ‚ณใƒผใƒ‰ใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ๅ ดๅˆใซใฎใฟๅฝนใซ็ซ‹ใกใพใ™ใ€‚ ## ๅˆ—ๆŒ™ๅž‹ใจๅๅ‰ไป˜ใใ‚ฟใƒ—ใƒซ [[autodoc]] utils.ExplicitEnum [[autodoc]] utils.PaddingStrategy [[autodoc]] utils.TensorType ## ็‰นๅˆฅใชใƒ‡ใ‚ณใƒฌใƒผใ‚ฟใƒผ [[autodoc]] utils.add_start_docstrings [[autodoc]] utils.add_start_docstrings_to_model_forward [[autodoc]] utils.add_end_docstrings [[autodoc]] utils.add_code_sample_docstrings [[autodoc]] utils.replace_return_docstrings ## ็‰นๆฎŠใชใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃ [[autodoc]] utils.cached_property ## ใใฎไป–ใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ [[autodoc]] utils._LazyModule
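以下は、[`~utils.ExplicitEnum`] の動作を示す簡単なスケッチです。通常の `Enum` と異なり、存在しない値を指定した場合に、許可される値の一覧を含む分かりやすいエラーメッセージが表示されます。

```python
from transformers.utils import ExplicitEnum, PaddingStrategy


class Color(ExplicitEnum):
    RED = "red"
    GREEN = "green"


print(PaddingStrategy.LONGEST)        # PaddingStrategy.LONGEST
print(PaddingStrategy("max_length"))  # look up a member by its value

try:
    Color("blue")  # not a valid value
except ValueError as err:
    print(err)  # the error message lists the allowed values
```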
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/modeling_utils.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ใ‚ซใ‚นใ‚ฟใƒ ใƒฌใ‚คใƒคใƒผใจใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ ใ“ใฎใƒšใƒผใ‚ธใซใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใงไฝฟ็”จใ•ใ‚Œใ‚‹ใ™ในใฆใฎใ‚ซใ‚นใ‚ฟใƒ  ใƒฌใ‚คใƒคใƒผใจใ€ใƒขใƒ‡ใƒชใƒณใ‚ฐใซๆไพ›ใ•ใ‚Œใ‚‹ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃ้–ขๆ•ฐใŒใƒชใ‚นใƒˆใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใปใจใ‚“ใฉใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‰ใ‚’็ ”็ฉถใ™ใ‚‹ๅ ดๅˆใซใฎใฟๅฝนใซ็ซ‹ใกใพใ™ใ€‚ ## Pytorch custom modules [[autodoc]] pytorch_utils.Conv1D [[autodoc]] modeling_utils.PoolerStartLogits - forward [[autodoc]] modeling_utils.PoolerEndLogits - forward [[autodoc]] modeling_utils.PoolerAnswerClass - forward [[autodoc]] modeling_utils.SquadHeadOutput [[autodoc]] modeling_utils.SQuADHead - forward [[autodoc]] modeling_utils.SequenceSummary - forward ## PyTorch Helper Functions [[autodoc]] pytorch_utils.apply_chunking_to_forward [[autodoc]] pytorch_utils.find_pruneable_heads_and_indices [[autodoc]] pytorch_utils.prune_layer [[autodoc]] pytorch_utils.prune_conv1d_layer [[autodoc]] pytorch_utils.prune_linear_layer ## TensorFlow custom layers [[autodoc]] modeling_tf_utils.TFConv1D [[autodoc]] modeling_tf_utils.TFSequenceSummary ## TensorFlow loss functions [[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss [[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss [[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss [[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss [[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss [[autodoc]] modeling_tf_utils.TFTokenClassificationLoss ## TensorFlow Helper Functions [[autodoc]] modeling_tf_utils.get_initializer [[autodoc]] modeling_tf_utils.keras_serializable [[autodoc]] modeling_tf_utils.shape_list
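以下は、[`~pytorch_utils.prune_linear_layer`] を使った簡単なスケッチです(レイヤーのサイズは説明用の仮の値です)。指定したインデックスのエントリのみを残した新しい `nn.Linear` が返されます。

```python
import torch
from torch import nn

from transformers.pytorch_utils import prune_linear_layer

layer = nn.Linear(in_features=8, out_features=4)

# Keep only output units 0 and 2 (dim=0 prunes along the output dimension)
index = torch.tensor([0, 2])
pruned = prune_linear_layer(layer, index, dim=0)

print(pruned)  # Linear(in_features=8, out_features=2, bias=True)
```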
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/generation_utils.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 生成ユーティリティ

このページには、[`~generation.GenerationMixin.generate`]、[`~generation.GenerationMixin.greedy_search`]、[`~generation.GenerationMixin.contrastive_search`]、[`~generation.GenerationMixin.sample`]、[`~generation.GenerationMixin.beam_search`]、[`~generation.GenerationMixin.beam_sample`]、[`~generation.GenerationMixin.group_beam_search`]、および [`~generation.GenerationMixin.constrained_beam_search`] で使用されるすべてのユーティリティ関数がリストされています。

これらのほとんどは、ライブラリ内の生成メソッドのコードを学習する場合にのみ役に立ちます。

## 出力を生成する

[`~generation.GenerationMixin.generate`] の出力は、[`~utils.ModelOutput`] のサブクラスのインスタンスです。この出力は、[`~generation.GenerationMixin.generate`] によって返されるすべての情報を含むデータ構造ですが、タプルまたは辞書としても使用できます。

以下に例を示します。

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```

`generation_output` オブジェクトは [`~generation.GenerateDecoderOnlyOutput`] です(以下のそのクラスのドキュメントを参照してください)。これは、次の属性を持つことを意味します。

- `sequences`: 生成されたトークンのシーケンス
- `scores` (オプション): 各生成ステップの言語モデリング ヘッドの予測スコア
- `hidden_states` (オプション): 生成ステップごとのモデルの隠れ状態
- `attentions` (オプション): 生成ステップごとのモデルのアテンションの重み

ここでは、`output_scores=True`を渡したので `scores` がありますが、`output_hidden_states=True` または `output_attentions=True` を渡さなかったため、`hidden_states` と `attentions` はありません。

通常と同じように各属性にアクセスできます。その属性がモデルから返されなかった場合は `None` が返されます。ここでは、たとえば `generation_output.scores` は言語モデリング ヘッドの生成されたすべての予測スコアであり、`generation_output.attentions` は `None` です。

`generation_output` オブジェクトをタプルとして使用する場合、`None` 値を持たない属性のみが保持されます。
ใŸใจใˆใฐใ€ใ“ใ“ใซใฏ 2 ใคใฎ่ฆ็ด ใ€`loss`ใ€ๆฌกใซ`logits`ใŒใ‚ใ‚Šใพใ™ใ€‚ ```python generation_output[:2] ``` ใŸใจใˆใฐใ€ใ‚ฟใƒ—ใƒซ `(generation_output.sequences,generation_output.scores)` ใ‚’่ฟ”ใ—ใพใ™ใ€‚ `generation_output` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’่พžๆ›ธใจใ—ใฆไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ€`None` ใ‚’ๆŒใŸใชใ„ๅฑžๆ€งใฎใฟใŒไฟๆŒใ•ใ‚Œใพใ™ใ€‚ ใ“ใ“ใงใฏใ€ใŸใจใˆใฐใ€`sequences`ใจ`scores`ใจใ„ใ† 2 ใคใฎใ‚ญใƒผใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ“ใงใฏใ™ในใฆใฎๅ‡บๅŠ›ใ‚ฟใ‚คใƒ—ใ‚’ๆ–‡ๆ›ธๅŒ–ใ—ใพใ™ใ€‚ ### PyTorch [[autodoc]] generation.GenerateDecoderOnlyOutput [[autodoc]] generation.GenerateEncoderDecoderOutput [[autodoc]] generation.GenerateBeamDecoderOnlyOutput [[autodoc]] generation.GenerateBeamEncoderDecoderOutput ### TensorFlow [[autodoc]] generation.TFGreedySearchEncoderDecoderOutput [[autodoc]] generation.TFGreedySearchDecoderOnlyOutput [[autodoc]] generation.TFSampleEncoderDecoderOutput [[autodoc]] generation.TFSampleDecoderOnlyOutput [[autodoc]] generation.TFBeamSearchEncoderDecoderOutput [[autodoc]] generation.TFBeamSearchDecoderOnlyOutput [[autodoc]] generation.TFBeamSampleEncoderDecoderOutput [[autodoc]] generation.TFBeamSampleDecoderOnlyOutput [[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput [[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput ### FLAX [[autodoc]] generation.FlaxSampleOutput [[autodoc]] generation.FlaxGreedySearchOutput [[autodoc]] generation.FlaxBeamSearchOutput ## LogitsProcessor [`LogitsProcessor`] ใ‚’ไฝฟ็”จใ—ใฆใ€่จ€่ชžใƒขใƒ‡ใƒซใฎใƒ˜ใƒƒใƒ‰ใฎไบˆๆธฌใ‚นใ‚ณใ‚ขใ‚’ๅค‰ๆ›ดใงใใพใ™ใ€‚ ไธ–ไปฃใ€‚ ### PyTorch [[autodoc]] AlternatingCodebooksLogitsProcessor - __call__ [[autodoc]] ClassifierFreeGuidanceLogitsProcessor - __call__ [[autodoc]] EncoderNoRepeatNGramLogitsProcessor - __call__ [[autodoc]] EncoderRepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] EpsilonLogitsWarper - __call__ [[autodoc]] EtaLogitsWarper - __call__ [[autodoc]] ExponentialDecayLengthPenalty - __call__ [[autodoc]] ForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] ForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] ForceTokensLogitsProcessor - __call__ [[autodoc]] HammingDiversityLogitsProcessor - __call__ [[autodoc]] InfNanRemoveLogitsProcessor - __call__ [[autodoc]] LogitNormalization - __call__ [[autodoc]] LogitsProcessor - __call__ [[autodoc]] LogitsProcessorList - __call__ [[autodoc]] LogitsWarper - __call__ [[autodoc]] MinLengthLogitsProcessor - __call__ [[autodoc]] MinNewTokensLengthLogitsProcessor - __call__ [[autodoc]] NoBadWordsLogitsProcessor - __call__ [[autodoc]] NoRepeatNGramLogitsProcessor - __call__ [[autodoc]] PrefixConstrainedLogitsProcessor - __call__ [[autodoc]] RepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] SequenceBiasLogitsProcessor - __call__ [[autodoc]] SuppressTokensAtBeginLogitsProcessor - __call__ [[autodoc]] SuppressTokensLogitsProcessor - __call__ [[autodoc]] TemperatureLogitsWarper - __call__ [[autodoc]] TopKLogitsWarper - __call__ [[autodoc]] TopPLogitsWarper - __call__ [[autodoc]] TypicalLogitsWarper - __call__ [[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor - __call__ [[autodoc]] WhisperTimeStampLogitsProcessor - __call__ ### TensorFlow [[autodoc]] TFForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] TFForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] TFForceTokensLogitsProcessor - __call__ [[autodoc]] TFLogitsProcessor - __call__ [[autodoc]] TFLogitsProcessorList - __call__ [[autodoc]] TFLogitsWarper - __call__ [[autodoc]] TFMinLengthLogitsProcessor - __call__ [[autodoc]] 
TFNoBadWordsLogitsProcessor - __call__ [[autodoc]] TFNoRepeatNGramLogitsProcessor - __call__ [[autodoc]] TFRepetitionPenaltyLogitsProcessor - __call__ [[autodoc]] TFSuppressTokensAtBeginLogitsProcessor - __call__ [[autodoc]] TFSuppressTokensLogitsProcessor - __call__ [[autodoc]] TFTemperatureLogitsWarper - __call__ [[autodoc]] TFTopKLogitsWarper - __call__ [[autodoc]] TFTopPLogitsWarper - __call__ ### FLAX [[autodoc]] FlaxForcedBOSTokenLogitsProcessor - __call__ [[autodoc]] FlaxForcedEOSTokenLogitsProcessor - __call__ [[autodoc]] FlaxForceTokensLogitsProcessor - __call__ [[autodoc]] FlaxLogitsProcessor - __call__ [[autodoc]] FlaxLogitsProcessorList - __call__ [[autodoc]] FlaxLogitsWarper - __call__ [[autodoc]] FlaxMinLengthLogitsProcessor - __call__ [[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor - __call__ [[autodoc]] FlaxSuppressTokensLogitsProcessor - __call__ [[autodoc]] FlaxTemperatureLogitsWarper - __call__ [[autodoc]] FlaxTopKLogitsWarper - __call__ [[autodoc]] FlaxTopPLogitsWarper - __call__ [[autodoc]] FlaxWhisperTimeStampLogitsProcessor - __call__ ## StoppingCriteria [`StoppingCriteria`] ใ‚’ไฝฟ็”จใ—ใฆใ€(EOS ใƒˆใƒผใ‚ฏใƒณไปฅๅค–ใฎ) ็”Ÿๆˆใ‚’ๅœๆญขใ™ใ‚‹ใ‚ฟใ‚คใƒŸใƒณใ‚ฐใ‚’ๅค‰ๆ›ดใงใใพใ™ใ€‚ใ“ใ‚Œใฏ PyTorch ๅฎŸ่ฃ…ใงใฎใฟๅˆฉ็”จๅฏ่ƒฝใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ [[autodoc]] StoppingCriteria - __call__ [[autodoc]] StoppingCriteriaList - __call__ [[autodoc]] MaxLengthCriteria - __call__ [[autodoc]] MaxTimeCriteria - __call__ ## Constraints [`Constraint`] ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€็”Ÿๆˆๆ™‚ใซๅ‡บๅŠ›ใซ็‰นๅฎšใฎใƒˆใƒผใ‚ฏใƒณใพใŸใฏใ‚ทใƒผใ‚ฑใƒณใ‚นใŒๅซใพใ‚Œใ‚‹ใ‚ˆใ†ใซๅผทๅˆถใงใใพใ™ใ€‚ใ“ใ‚Œใฏ PyTorch ๅฎŸ่ฃ…ใงใฎใฟๅˆฉ็”จๅฏ่ƒฝใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ [[autodoc]] Constraint [[autodoc]] PhrasalConstraint [[autodoc]] DisjunctiveConstraint [[autodoc]] ConstraintListState ## BeamSearch [[autodoc]] BeamScorer - process - finalize [[autodoc]] BeamSearchScorer - process - finalize [[autodoc]] ConstrainedBeamSearchScorer - process - finalize ## Utilities [[autodoc]] top_k_top_p_filtering [[autodoc]] tf_top_k_top_p_filtering ## Streamers [[autodoc]] TextStreamer [[autodoc]] TextIteratorStreamer
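以下は、上記の [`TextStreamer`] を使用して、生成されたトークンを逐次標準出力に表示する簡単なスケッチです(モデル名は説明用の一例です)。

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
streamer = TextStreamer(tokenizer, skip_prompt=True)

inputs = tokenizer("Hello, my dog is", return_tensors="pt")

# Tokens are printed to stdout one by one as they are generated
model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```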
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/tokenization_utils.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Utilities for Tokenizers

このページには、トークナイザーによって使用されるすべてのユーティリティ関数(主にクラス)がリストされます。主なものは、[`PreTrainedTokenizer`] と [`PreTrainedTokenizerFast`] の間の共通メソッドを実装する [`~tokenization_utils_base.PreTrainedTokenizerBase`] と、ミックスイン [`~tokenization_utils_base.SpecialTokensMixin`] です。

これらのほとんどは、ライブラリ内のトークナイザーのコードを学習する場合にのみ役に立ちます。

## PreTrainedTokenizerBase

[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
    - __call__
    - all

## SpecialTokensMixin

[[autodoc]] tokenization_utils_base.SpecialTokensMixin

## Enums and namedtuples

[[autodoc]] tokenization_utils_base.TruncationStrategy

[[autodoc]] tokenization_utils_base.CharSpan

[[autodoc]] tokenization_utils_base.TokenSpan
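以下は、[`~tokenization_utils_base.SpecialTokensMixin`] が提供する機能で特殊トークンを追加する簡単なスケッチです。GPT-2 にはデフォルトでパディング トークンがないため、ここで追加しています(追加後はモデル側の埋め込みのリサイズが必要になる点に注意してください)。

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# GPT-2 has no padding token by default; SpecialTokensMixin lets us add one
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
print(num_added)            # 1
print(tokenizer.pad_token)  # '[PAD]'

# After adding tokens, remember to resize the model embeddings:
# model.resize_token_embeddings(len(tokenizer))
```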
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/internal/trainer_utils.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# トレーナー用ユーティリティ

このページには、[`Trainer`] で使用されるすべてのユーティリティ関数がリストされています。

これらのほとんどは、ライブラリ内のトレーナーのコードを学習する場合にのみ役に立ちます。

## Utilities

[[autodoc]] EvalPrediction

[[autodoc]] IntervalStrategy

[[autodoc]] enable_full_determinism

[[autodoc]] set_seed

[[autodoc]] torch_distributed_zero_first

## Callbacks internals

[[autodoc]] trainer_callback.CallbackHandler

## Distributed Evaluation

[[autodoc]] trainer_pt_utils.DistributedTensorGatherer

## Trainer Argument Parser

[[autodoc]] HfArgumentParser

## Debug Utilities

[[autodoc]] debug_utils.DebugUnderflowOverflow
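以下は、[`set_seed`] と [`EvalPrediction`] の典型的な使い方を示す簡単なスケッチです([`Trainer`] の `compute_metrics` に渡す関数の一例で、メトリクスの中身は説明用です)。

```python
import numpy as np

from transformers import EvalPrediction, set_seed

set_seed(42)  # seeds python, numpy and torch (if available) in one call


def compute_metrics(p: EvalPrediction) -> dict:
    # p.predictions are the raw logits, p.label_ids the gold labels
    preds = np.argmax(p.predictions, axis=-1)
    return {"accuracy": float((preds == p.label_ids).mean())}
```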
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/feature_extractor.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Feature Extractor ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚จใ‚ฏใ‚นใƒˆใƒฉใ‚ฏใ‚ฟใฏใ€ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใพใŸใฏใƒ“ใ‚ธใƒงใƒณใƒขใƒ‡ใƒซใฎใŸใ‚ใฎๅ…ฅๅŠ›ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใฎๆบ–ๅ‚™ใ‚’ๆ‹…ๅฝ“ใ—ใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใซใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใ‹ใ‚‰ใฎใƒ•ใ‚ฃใƒผใƒใƒฃใƒผๆŠฝๅ‡บ๏ผˆไพ‹๏ผšใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชใƒ•ใ‚กใ‚คใƒซใฎๅ‰ๅ‡ฆ็†ใ‹ใ‚‰Log-Melใ‚นใƒšใ‚ฏใƒˆใƒญใ‚ฐใƒฉใƒ ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใธใฎๅค‰ๆ›๏ผ‰ใ€็”ปๅƒใ‹ใ‚‰ใฎใƒ•ใ‚ฃใƒผใƒใƒฃใƒผๆŠฝๅ‡บ๏ผˆไพ‹๏ผš็”ปๅƒใƒ•ใ‚กใ‚คใƒซใฎใ‚ฏใƒญใƒƒใƒ”ใƒณใ‚ฐ๏ผ‰ใ€ใพใŸใƒ‘ใƒ‡ใ‚ฃใƒณใ‚ฐใ€ๆญฃ่ฆๅŒ–ใ€ใใ—ใฆNumpyใ€PyTorchใ€TensorFlowใƒ†ใƒณใ‚ฝใƒซใธใฎๅค‰ๆ›ใ‚‚ๅซใพใ‚Œใพใ™ใ€‚ ## FeatureExtractionMixin [[autodoc]] feature_extraction_utils.FeatureExtractionMixin - from_pretrained - save_pretrained ## SequenceFeatureExtractor [[autodoc]] SequenceFeatureExtractor - pad ## BatchFeature [[autodoc]] BatchFeature ## ImageFeatureExtractionMixin [[autodoc]] image_utils.ImageFeatureExtractionMixin
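以下は、特徴抽出器のパディング機能を示す簡単なスケッチです(無音のダミー波形を使用した一例です)。長さの異なる入力が最長のサンプルに合わせてパディングされ、[`BatchFeature`] として返されます。

```python
import numpy as np

from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# Two raw waveforms of different lengths (dummy silence)
speech = [np.zeros(1600, dtype=np.float32), np.zeros(3200, dtype=np.float32)]

batch = feature_extractor(speech, sampling_rate=16000, padding=True, return_tensors="np")
print(batch["input_values"].shape)  # (2, 3200) - padded to the longest sample
```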
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/output.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Model outputs

すべてのモデルの出力は、[`~utils.ModelOutput`] のサブクラスのインスタンスです。これらは、モデルによって返されるすべての情報を含むデータ構造ですが、タプルや辞書としても使用できます。

これがどのようになるかを例で見てみましょう。

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)
```

`outputs`オブジェクトは [`~modeling_outputs.SequenceClassifierOutput`] です。これは、オプションの `loss`、`logits`、`hidden_states`、`attentions` 属性を持つことを意味します。ここでは `labels` を渡したので `loss` がありますが、`output_hidden_states=True` や `output_attentions=True` を渡していないため、`hidden_states` と `attentions` はありません。

<Tip>

`output_hidden_states=True`を渡すと、`outputs.hidden_states[-1]` が `outputs.last_hidden_state` と正確に一致することを期待するかもしれません。しかし、必ずしもそうなるとは限りません。モデルによっては、最後の隠れ状態を返すときに、正規化やその後の処理を適用するものもあります。

</Tip>

通常と同じように各属性にアクセスできます。その属性がモデルから返されなかった場合は `None` が返されます。ここでは、たとえば `outputs.loss` はモデルによって計算された損失であり、`outputs.attentions` は `None` です。

`outputs`オブジェクトをタプルとして考える場合、`None` 値を持たない属性のみが考慮されます。たとえば、ここには 2 つの要素、`loss`、次に`logits`があります。

```python
outputs[:2]
```

たとえば、タプル `(outputs.loss, outputs.logits)` を返します。

`outputs`オブジェクトを辞書として考える場合、`None` 値を持たない属性のみが考慮されます。たとえば、ここには `loss` と `logits` という 2 つのキーがあります。

ここでは、複数のモデル タイプで使用される汎用モデルの出力を文書化します。モデル固有の出力タイプは、対応するモデルのページに記載されています。

## 
ModelOutput [[autodoc]] utils.ModelOutput - to_tuple ## BaseModelOutput [[autodoc]] modeling_outputs.BaseModelOutput ## BaseModelOutputWithPooling [[autodoc]] modeling_outputs.BaseModelOutputWithPooling ## BaseModelOutputWithCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions ## BaseModelOutputWithPoolingAndCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions ## BaseModelOutputWithPast [[autodoc]] modeling_outputs.BaseModelOutputWithPast ## BaseModelOutputWithPastAndCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions ## Seq2SeqModelOutput [[autodoc]] modeling_outputs.Seq2SeqModelOutput ## CausalLMOutput [[autodoc]] modeling_outputs.CausalLMOutput ## CausalLMOutputWithCrossAttentions [[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions ## CausalLMOutputWithPast [[autodoc]] modeling_outputs.CausalLMOutputWithPast ## MaskedLMOutput [[autodoc]] modeling_outputs.MaskedLMOutput ## Seq2SeqLMOutput [[autodoc]] modeling_outputs.Seq2SeqLMOutput ## NextSentencePredictorOutput [[autodoc]] modeling_outputs.NextSentencePredictorOutput ## SequenceClassifierOutput [[autodoc]] modeling_outputs.SequenceClassifierOutput ## Seq2SeqSequenceClassifierOutput [[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput ## MultipleChoiceModelOutput [[autodoc]] modeling_outputs.MultipleChoiceModelOutput ## TokenClassifierOutput [[autodoc]] modeling_outputs.TokenClassifierOutput ## QuestionAnsweringModelOutput [[autodoc]] modeling_outputs.QuestionAnsweringModelOutput ## Seq2SeqQuestionAnsweringModelOutput [[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput ## Seq2SeqSpectrogramOutput [[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput ## SemanticSegmenterOutput [[autodoc]] modeling_outputs.SemanticSegmenterOutput ## ImageClassifierOutput [[autodoc]] modeling_outputs.ImageClassifierOutput ## ImageClassifierOutputWithNoAttention [[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention ## DepthEstimatorOutput [[autodoc]] modeling_outputs.DepthEstimatorOutput ## Wav2Vec2BaseModelOutput [[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput ## XVectorOutput [[autodoc]] modeling_outputs.XVectorOutput ## Seq2SeqTSModelOutput [[autodoc]] modeling_outputs.Seq2SeqTSModelOutput ## Seq2SeqTSPredictionOutput [[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput ## SampleTSPredictionOutput [[autodoc]] modeling_outputs.SampleTSPredictionOutput ## TFBaseModelOutput [[autodoc]] modeling_tf_outputs.TFBaseModelOutput ## TFBaseModelOutputWithPooling [[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling ## TFBaseModelOutputWithPoolingAndCrossAttentions [[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions ## TFBaseModelOutputWithPast [[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast ## TFBaseModelOutputWithPastAndCrossAttentions [[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions ## TFSeq2SeqModelOutput [[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput ## TFCausalLMOutput [[autodoc]] modeling_tf_outputs.TFCausalLMOutput ## TFCausalLMOutputWithCrossAttentions [[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions ## TFCausalLMOutputWithPast [[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast ## TFMaskedLMOutput [[autodoc]] modeling_tf_outputs.TFMaskedLMOutput ## TFSeq2SeqLMOutput [[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput ## TFNextSentencePredictorOutput [[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput 
## TFSequenceClassifierOutput [[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput ## TFSeq2SeqSequenceClassifierOutput [[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput ## TFMultipleChoiceModelOutput [[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput ## TFTokenClassifierOutput [[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput ## TFQuestionAnsweringModelOutput [[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput ## TFSeq2SeqQuestionAnsweringModelOutput [[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput ## FlaxBaseModelOutput [[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput ## FlaxBaseModelOutputWithPast [[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast ## FlaxBaseModelOutputWithPooling [[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling ## FlaxBaseModelOutputWithPastAndCrossAttentions [[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions ## FlaxSeq2SeqModelOutput [[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput ## FlaxCausalLMOutputWithCrossAttentions [[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions ## FlaxMaskedLMOutput [[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput ## FlaxSeq2SeqLMOutput [[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput ## FlaxNextSentencePredictorOutput [[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput ## FlaxSequenceClassifierOutput [[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput ## FlaxSeq2SeqSequenceClassifierOutput [[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput ## FlaxMultipleChoiceModelOutput [[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput ## FlaxTokenClassifierOutput [[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput ## FlaxQuestionAnsweringModelOutput [[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput ## FlaxSeq2SeqQuestionAnsweringModelOutput [[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
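以下は、上記の属性・タプル・辞書という 3 通りのアクセス方法をまとめて示す簡単なスケッチです(冒頭の例と同じモデルを使用しています)。

```python
import torch

from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]).unsqueeze(0))

print(outputs.loss)     # attribute access
print(outputs["loss"])  # dict-style access

# Tuple view: only the non-None fields are kept
loss, logits = outputs.to_tuple()
```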
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/text_generation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Generation

各フレームワークには、それぞれの `GenerationMixin` クラスに実装されたテキスト生成のための generate メソッドがあります。

- PyTorch [`~generation.GenerationMixin.generate`] は [`~generation.GenerationMixin`] に実装されています。
- TensorFlow [`~generation.TFGenerationMixin.generate`] は [`~generation.TFGenerationMixin`] に実装されています。
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] は [`~generation.FlaxGenerationMixin`] に実装されています。

選択したフレームワークに関係なく、[`~generation.GenerationConfig`] クラスのインスタンスを使用して生成メソッドをパラメータ化できます。生成メソッドの動作を制御する生成パラメータの完全なリストについては、このクラスを参照してください。

モデルの生成構成を検査する方法、デフォルトが何か、パラメーターをアドホックに変更する方法、カスタマイズされた生成構成を作成して保存する方法については、[テキスト生成戦略ガイド](../generation_strategies)を参照してください。このガイドでは、トークンストリーミングなどの関連機能の使用方法についても説明しています。

## GenerationConfig

[[autodoc]] generation.GenerationConfig
    - from_pretrained
    - from_model_config
    - save_pretrained

## GenerationMixin

[[autodoc]] generation.GenerationMixin
    - generate
    - compute_transition_scores
    - greedy_search
    - sample
    - beam_search
    - beam_sample
    - contrastive_search
    - group_beam_search
    - constrained_beam_search

## TFGenerationMixin

[[autodoc]] generation.TFGenerationMixin
    - generate
    - compute_transition_scores

## FlaxGenerationMixin

[[autodoc]] generation.FlaxGenerationMixin
    - generate
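以下は、[`~generation.GenerationConfig`] を作成・保存・再読み込みする簡単なスケッチです(保存先のディレクトリ名やパラメータ値は説明用の一例です)。

```python
from transformers import AutoModelForCausalLM, GenerationConfig

model = AutoModelForCausalLM.from_pretrained("gpt2")
print(model.generation_config)  # defaults attached to the model

generation_config = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    pad_token_id=model.config.eos_token_id,
)
generation_config.save_pretrained("my_generation_config")

reloaded = GenerationConfig.from_pretrained("my_generation_config")
# model.generate(**inputs, generation_config=reloaded)
```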
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/logging.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Logging ๐Ÿค— Transformersใซใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎ่ฉณ็ดฐๅบฆใ‚’็ฐกๅ˜ใซ่จญๅฎšใงใใ‚‹ไธญๅคฎ้›†ไธญๅž‹ใฎใƒญใ‚ฎใƒณใ‚ฐใ‚ทใ‚นใƒ†ใƒ ใŒใ‚ใ‚Šใพใ™ใ€‚ ็พๅœจใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎ่ฉณ็ดฐๅบฆใฏใ€ŒWARNINGใ€ใงใ™ใ€‚ ่ฉณ็ดฐๅบฆใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใซใฏใ€็›ดๆŽฅ่จญๅฎšใƒกใ‚ฝใƒƒใƒ‰ใฎ1ใคใ‚’ไฝฟ็”จใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ไพ‹ใˆใฐใ€่ฉณ็ดฐๅบฆใ‚’INFOใƒฌใƒ™ใƒซใซๅค‰ๆ›ดใ™ใ‚‹ๆ–นๆณ•ใฏไปฅไธ‹ใฎ้€šใ‚Šใงใ™ใ€‚ ```python import transformers transformers.logging.set_verbosity_info() ``` ็’ฐๅขƒๅค‰ๆ•ฐ `TRANSFORMERS_VERBOSITY` ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎๅ†—้•ทๆ€งใ‚’ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚่จญๅฎšใงใใพใ™ `debug`ใ€`info`ใ€`warning`ใ€`error`ใ€`critical` ใฎใ„ใšใ‚Œใ‹ใซๅค‰ๆ›ดใ—ใพใ™ใ€‚ไพ‹ใˆใฐ๏ผš ```bash TRANSFORMERS_VERBOSITY=error ./myprogram.py ``` ใ•ใ‚‰ใซใ€ไธ€้ƒจใฎใ€Œ่ญฆๅ‘Šใ€ใฏ็’ฐๅขƒๅค‰ๆ•ฐใ‚’่จญๅฎšใ™ใ‚‹ใ“ใจใง็„กๅŠนใซใงใใพใ™ใ€‚ `TRANSFORMERS_NO_ADVISORY_WARNINGS` ใ‚’ *1* ใชใฉใฎ true ๅ€คใซ่จญๅฎšใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๆฌกใ‚’ไฝฟ็”จใ—ใฆใƒญใ‚ฐใซ่จ˜้Œฒใ•ใ‚Œใ‚‹่ญฆๅ‘ŠใŒ็„กๅŠนใซใชใ‚Šใพใ™ใ€‚ [`logger.warning_advice`]ใ€‚ไพ‹ใˆใฐ๏ผš ```bash TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py ``` ไปฅไธ‹ใฏใ€็‹ฌ่‡ชใฎใƒขใ‚ธใƒฅใƒผใƒซใพใŸใฏใ‚นใ‚ฏใƒชใƒ—ใƒˆใงใƒฉใ‚คใƒ–ใƒฉใƒชใจๅŒใ˜ใƒญใ‚ฌใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ๆ–นๆณ•ใฎไพ‹ใงใ™ใ€‚ ```python from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger("transformers") logger.info("INFO") logger.warning("WARN") ``` ใ“ใฎใƒญใ‚ฎใƒณใ‚ฐ ใƒขใ‚ธใƒฅใƒผใƒซใฎใ™ในใฆใฎใƒกใ‚ฝใƒƒใƒ‰ใฏไปฅไธ‹ใซๆ–‡ๆ›ธๅŒ–ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ไธปใชใƒกใ‚ฝใƒƒใƒ‰ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ [`logging.get_verbosity`] ใƒญใ‚ฌใƒผใฎ็พๅœจใฎๅ†—้•ทใƒฌใƒ™ใƒซใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ [`logging.set_verbosity`] ใ‚’ไฝฟ็”จใ—ใฆใ€ๅ†—้•ทๆ€งใ‚’้ธๆŠžใ—ใŸใƒฌใƒ™ใƒซใซ่จญๅฎšใ—ใพใ™ใ€‚้ †็•ชใซ๏ผˆๅฐ‘ใชใ„ใ‚‚ใฎใ‹ใ‚‰๏ผ‰ ๅ†—้•ทใ‹ใ‚‰ๆœ€ใ‚‚ๅ†—้•ทใพใง)ใ€ใใ‚Œใ‚‰ใฎใƒฌใƒ™ใƒซ (ๆ‹ฌๅผงๅ†…ใฏๅฏพๅฟœใ™ใ‚‹ int ๅ€ค) ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ - `transformers.logging.CRITICAL` ใพใŸใฏ `transformers.logging.FATAL` (int ๅ€คใ€50): ๆœ€ใ‚‚ๅคšใ„ใ‚‚ใฎใฎใฟใ‚’ใƒฌใƒใƒผใƒˆใ—ใพใ™ใ€‚ ้‡ๅคงใชใ‚จใƒฉใƒผใ€‚ - `transformers.logging.ERROR` (int ๅ€คใ€40): ใ‚จใƒฉใƒผใฎใฟใ‚’ๅ ฑๅ‘Šใ—ใพใ™ใ€‚ - `transformers.logging.WARNING` ใพใŸใฏ `transformers.logging.WARN` (int ๅ€คใ€30): ใ‚จใƒฉใƒผใจ ่ญฆๅ‘Šใ€‚ใ“ใ‚Œใฏใƒฉใ‚คใƒ–ใƒฉใƒชใงไฝฟ็”จใ•ใ‚Œใ‚‹ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒฌใƒ™ใƒซใงใ™ใ€‚ - `transformers.logging.INFO` (int ๅ€คใ€20): ใ‚จใƒฉใƒผใ€่ญฆๅ‘Šใ€ใŠใ‚ˆใณๅŸบๆœฌๆƒ…ๅ ฑใ‚’ใƒฌใƒใƒผใƒˆใ—ใพใ™ใ€‚ - `transformers.logging.DEBUG` (int ๅ€คใ€10): ใ™ในใฆใฎๆƒ…ๅ ฑใ‚’ใƒฌใƒใƒผใƒˆใ—ใพใ™ใ€‚ ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ใƒขใƒ‡ใƒซใฎใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ไธญใซใ€Œtqdmใ€้€ฒ่กŒ็ŠถๆณใƒใƒผใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ [`logging.disable_progress_bar`] ใŠใ‚ˆใณ [`logging.enable_progress_bar`] 
ใ‚’ไฝฟ็”จใ—ใฆใ€ใ“ใฎๅ‹•ไฝœใ‚’ๆŠ‘ๅˆถใพใŸใฏๆŠ‘ๅˆถ่งฃ้™คใงใใพใ™ใ€‚ ## `logging` vs `warnings` Python ใซใฏใ€ใ‚ˆใ็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จโ€‹โ€‹ใ•ใ‚Œใ‚‹ 2 ใคใฎใƒญใ‚ฎใƒณใ‚ฐ ใ‚ทใ‚นใƒ†ใƒ ใŒใ‚ใ‚Šใพใ™ใ€‚ไธŠใง่ชฌๆ˜Žใ—ใŸ `logging` ใจ `warnings` ใงใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€็‰นๅฎšใฎใƒใ‚ฑใƒƒใƒˆๅ†…ใฎ่ญฆๅ‘Šใ‚’ใ•ใ‚‰ใซๅˆ†้กžใงใใพใ™ (ไพ‹: ๆฉŸ่ƒฝใพใŸใฏใƒ‘ใ‚นใฎ`FutureWarning`) ใ“ใ‚Œใฏใ™ใงใซ้žๆŽจๅฅจใซใชใฃใฆใŠใ‚Šใ€`DeprecationWarning`ใฏไปŠๅพŒใฎ้žๆŽจๅฅจใ‚’็คบใ—ใพใ™ใ€‚ ไธกๆ–นใจใ‚‚`transformers`ใƒฉใ‚คใƒ–ใƒฉใƒชใงไฝฟ็”จใ—ใพใ™ใ€‚ `logging`ใฎ`captureWarning`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๆดป็”จใ—ใฆ้ฉๅฟœใ•ใ›ใฆใ€ ใ“ใ‚Œใ‚‰ใฎ่ญฆๅ‘Šใƒกใƒƒใ‚ปใƒผใ‚ธใฏใ€ไธŠ่จ˜ใฎๅ†—้•ท่จญๅฎšใƒ„ใƒผใƒซใซใ‚ˆใฃใฆ็ฎก็†ใ•ใ‚Œใพใ™ใ€‚ ใใ‚Œใฏใƒฉใ‚คใƒ–ใƒฉใƒชใฎ้–‹็™บ่€…ใซใจใฃใฆไฝ•ใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ‹?ๆฌกใฎใƒ’ใƒฅใƒผใƒชใ‚นใƒ†ใ‚ฃใƒƒใ‚ฏใ‚’ๅฐŠ้‡ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - `warnings`ใฏใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใŠใ‚ˆใณ`transformers`ใซไพๅญ˜ใ™ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใฎ้–‹็™บ่€…ใซๅ„ชๅ…ˆใ•ใ‚Œใ‚‹ในใใงใ™ใ€‚ - `logging`ใฏใ€ๆ—ฅๅธธใฎใƒ—ใƒญใ‚ธใ‚งใ‚ฏใƒˆใงใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ™ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ‚จใƒณใƒ‰ใƒฆใƒผใ‚ถใƒผใซไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ไปฅไธ‹ใฎ`captureWarnings`ใƒกใ‚ฝใƒƒใƒ‰ใฎใƒชใƒ•ใ‚กใƒฌใƒณใ‚นใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ [[autodoc]] logging.captureWarnings ## Base setters [[autodoc]] logging.set_verbosity_error [[autodoc]] logging.set_verbosity_warning [[autodoc]] logging.set_verbosity_info [[autodoc]] logging.set_verbosity_debug ## Other functions [[autodoc]] logging.get_verbosity [[autodoc]] logging.set_verbosity [[autodoc]] logging.get_logger [[autodoc]] logging.enable_default_handler [[autodoc]] logging.disable_default_handler [[autodoc]] logging.enable_explicit_format [[autodoc]] logging.reset_format [[autodoc]] logging.enable_progress_bar [[autodoc]] logging.disable_progress_bar
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/configuration.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 構成

基本クラス [`PretrainedConfig`] は、設定をロード/保存するための一般的なメソッドを実装します。構成は、ローカル ファイルまたはディレクトリから、あるいはライブラリが提供する事前トレーニング済みモデル構成(HuggingFace の AWS S3 リポジトリからダウンロードされたもの)からロードできます。

各派生構成クラスはモデル固有の属性を実装します。すべての構成クラスに存在する共通の属性は、`hidden_size`、`num_attention_heads`、および `num_hidden_layers` です。テキスト モデルはさらに `vocab_size` を実装します。

## PretrainedConfig

[[autodoc]] PretrainedConfig
    - push_to_hub
    - all
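以下は、構成の読み込み・属性の上書き・保存を示す簡単なスケッチです(保存先のディレクトリ名は説明用の一例です)。

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size, config.num_attention_heads, config.num_hidden_layers)

# Override attributes at load time and save the result
config = AutoConfig.from_pretrained("bert-base-uncased", num_hidden_layers=6)
config.save_pretrained("my-bert-config")
```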
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/processors.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Processors

Transformers ライブラリでは、プロセッサは 2 つの異なる意味を持ちます。
- [Wav2Vec2](../model_doc/wav2vec2)(音声とテキスト)や [CLIP](../model_doc/clip)(テキストとビジョン)などのマルチモーダル モデルの入力を前処理するオブジェクト
- 古いバージョンのライブラリで GLUE または SQUAD のデータを前処理するために使用されていたオブジェクト(非推奨)

## Multi-modal processors

マルチモーダル モデルでは、モデルが複数のモダリティ(テキスト、視覚、音声)からの入力を必要とします。これは、トークナイザー(テキスト モダリティ用)、画像プロセッサー(視覚用)、特徴抽出器(オーディオ用)など、2 つ以上の処理オブジェクトをグループ化する、プロセッサーと呼ばれるオブジェクトによって処理されます。

これらのプロセッサは、保存およびロード機能を実装する次の基本クラスを継承します。

[[autodoc]] ProcessorMixin

## Deprecated processors

すべてのプロセッサは、同じアーキテクチャ [`~data.processors.utils.DataProcessor`] に従っています。プロセッサは [`~data.processors.utils.InputExample`] のリストを返します。これらの [`~data.processors.utils.InputExample`] は、モデルに入力するために [`~data.processors.utils.InputFeatures`] に変換できます。

[[autodoc]] data.processors.utils.DataProcessor

[[autodoc]] data.processors.utils.InputExample

[[autodoc]] data.processors.utils.InputFeatures

## GLUE

[一般言語理解評価 (GLUE)](https://gluebenchmark.com/) は、既存の多様な NLU タスクにわたるモデルのパフォーマンスを評価するベンチマークです。論文 [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) と同時に公開されました。

このライブラリは、MRPC、MNLI、MNLI(不一致)、CoLA、SST2、STSB、QQP、QNLI、RTE、WNLI の各タスク用のプロセッサをホストしています。

それらのプロセッサは次のとおりです。

- [`~data.processors.utils.MrpcProcessor`]
- [`~data.processors.utils.MnliProcessor`]
- [`~data.processors.utils.MnliMismatchedProcessor`]
- [`~data.processors.utils.Sst2Processor`]
- [`~data.processors.utils.StsbProcessor`]
- [`~data.processors.utils.QqpProcessor`]
- [`~data.processors.utils.QnliProcessor`]
- [`~data.processors.utils.RteProcessor`]
- [`~data.processors.utils.WnliProcessor`]

さらに、次のメソッドを使用して、データ ファイルから値をロードし、[`~data.processors.utils.InputExample`] のリストに変換することができます。
[[autodoc]] data.processors.glue.glue_convert_examples_to_features

## XNLI

[クロスリンガル NLI コーパス (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) は、言語を超えたテキスト表現の品質を評価するベンチマークです。XNLI は、[*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/) に基づくクラウドソースのデータセットです。テキストのペアには、15 のさまざまな言語(英語などの高リソース言語とスワヒリ語などの低リソース言語の両方を含む)でのテキスト含意アノテーションがラベル付けされています。

論文 [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) と同時にリリースされました。

このライブラリは、XNLI データをロードするプロセッサをホストします。

- [`~data.processors.utils.XnliProcessor`]

テスト セットにはゴールド ラベルが付いているため、評価はテスト セットで行われますのでご了承ください。

これらのプロセッサを使用する例は、[run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) スクリプトに示されています。

## SQuAD

[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) は、質問応答に関するモデルのパフォーマンスを評価するベンチマークです。 v1.1 と v2.0 の 2 つのバージョンが利用可能です。最初のバージョン (v1.1) は、論文 [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250) とともにリリースされました。 2 番目のバージョン (v2.0) は、論文 [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822) と同時にリリースされました。

このライブラリは、次の 2 つのバージョンのそれぞれのプロセッサをホストします。

### Processors

それらのプロセッサは次のとおりです。

- [`~data.processors.utils.SquadV1Processor`]
- [`~data.processors.utils.SquadV2Processor`]

どちらも抽象クラス [`~data.processors.utils.SquadProcessor`] を継承しています。

[[autodoc]] data.processors.squad.SquadProcessor
    - all

さらに、次のメソッドを使用して、SQuAD の例を、モデルの入力として使用できる [`~data.processors.utils.SquadFeatures`] に変換できます。

[[autodoc]] data.processors.squad.squad_convert_examples_to_features

これらのプロセッサと前述のメソッドは、データを含むファイルだけでなく、*tensorflow_datasets* パッケージでも使用できます。以下に例を示します。

### Example usage

以下にプロセッサを使用した例と、データ ファイルを使用した変換方法を示します。

```python
from transformers import SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features

# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)

# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=args.doc_stride,
    max_query_length=max_query_length,
    is_training=not evaluate,
)
```

*tensorflow_datasets* の使用は、データ ファイルを使用するのと同じくらい簡単です。

```python
# tensorflow_datasets only handle Squad V1.
import tensorflow_datasets as tfds

tfds_examples = tfds.load("squad") examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) ``` ใ“ใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ™ใ‚‹ๅˆฅใฎไพ‹ใฏใ€[run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) ใ‚นใ‚ฏใƒชใƒ—ใƒˆใซ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚
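同様に、GLUE プロセッサも *tensorflow_datasets* と組み合わせて使用できます。以下はそのことを想定した簡単なスケッチです(TensorFlow と *tensorflow_datasets* のインストールが前提です)。

```python
import tensorflow_datasets as tfds

from transformers import AutoTokenizer, glue_convert_examples_to_features

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

data = tfds.load("glue/mrpc")
train_dataset = glue_convert_examples_to_features(
    data["train"], tokenizer, max_length=128, task="mrpc"
)
```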
0
hf_public_repos/transformers/docs/source/ja
hf_public_repos/transformers/docs/source/ja/main_classes/pipelines.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Pipelines

パイプラインは、推論にモデルを使うための簡単で優れた方法です。パイプラインは、ライブラリから複雑なコードのほとんどを抽象化したオブジェクトで、固有表現認識、マスク言語モデリング、感情分析、特徴抽出、質問応答などのタスクに特化したシンプルな API を提供します。詳細については、[タスク概要](../task_summary)を参照してください。

パイプラインの抽象化には 2 つのカテゴリーがあります:

- [`pipeline`] は、他のすべてのパイプラインをカプセル化する最も強力なオブジェクトです。
- タスク固有のパイプラインは、[オーディオ](#audio)、[コンピューター ビジョン](#computer-vision)、[自然言語処理](#natural-language-processing)、および [マルチモーダル](#multimodal) タスクで使用できます。

## The pipeline abstraction

*パイプライン* 抽象化は、他のすべての利用可能なパイプラインのラッパーです。他のパイプラインと同様にインスタンス化されますが、さらに使い勝手が向上しています。

1 つの項目に対する単純な呼び出し:

```python
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```

[ハブ](https://huggingface.co) の特定のモデルを使用したい場合、ハブ上のモデルがすでにタスクを定義していれば、タスクの指定を省略できます。

```python
>>> pipe = pipeline(model="roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```

多くの項目に対してパイプラインを呼び出すには、*list* を使用してパイプラインを呼び出すことができます。

```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
 {'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```

完全なデータセットを反復処理するには、`Dataset` を直接使用することをお勧めします。これは、データセット全体を一度にメモリに割り当てる必要も、自分でバッチ処理を行う必要もないことを意味します。これは GPU 上のカスタム ループと同じくらい高速に動作するはずです。そうならない場合は、ためらわずに issue を作成してください。
ใƒซใƒผใƒ—ใจๅŒใ˜ใใ‚‰ใ„้€Ÿใๅ‹•ไฝœใ™ใ‚‹ใฏใšใงใ™ใ€‚ GPUใ€‚ใใ‚ŒใŒๅ•้กŒใงใชใ„ๅ ดๅˆใฏใ€ใŸใ‚ใ‚‰ใ‚ใšใซๅ•้กŒใ‚’ไฝœๆˆใ—ใฆใใ ใ•ใ„ใ€‚ ```python import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset for out in tqdm(pipe(KeyDataset(dataset, "file"))): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` ไฝฟใ„ใ‚„ใ™ใใ™ใ‚‹ใŸใ‚ใซใ€ใ‚ธใ‚งใƒใƒฌใƒผใ‚ฟใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```python from transformers import pipeline pipe = pipeline("text-classification") def data(): while True: # This could come from a dataset, a database, a queue or HTTP request # in a server # Caveat: because this is iterative, you cannot use `num_workers > 1` variable # to use multiple threads to preprocess data. You can still have 1 thread that # does the preprocessing while the main runs the big inference yield "This is a test" for out in pipe(data()): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` [[autodoc]] pipeline ## Pipeline batching ใ™ในใฆใฎใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใงใƒใƒƒใƒๅ‡ฆ็†ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ†ใพใใ„ใใพใ™ ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใŒใ‚นใƒˆใƒชใƒผใƒŸใƒณใ‚ฐๆฉŸ่ƒฝใ‚’ไฝฟ็”จใ™ใ‚‹ใจใใฏๅธธใซ (ใคใพใ‚Šใ€ใƒชใ‚นใƒˆใ€`dataset`ใ€ใพใŸใฏ `generator`ใ‚’ๆธกใ™ใจใ)ใ€‚ ```python from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset import datasets dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised") pipe = pipeline("text-classification", device=0) for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"): print(out) # [{'label': 'POSITIVE', 'score': 0.9998743534088135}] # Exactly the same output as before, but the content are passed # as batches to the model ``` <Tip warning={true}> ใŸใ ใ—ใ€ใ“ใ‚Œใซใ‚ˆใฃใฆใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒ่‡ชๅ‹•็š„ใซๅ‘ไธŠใ™ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚็Šถๆณใซๅฟœใ˜ใฆใ€10 ๅ€ใฎ้ซ˜้€ŸๅŒ–ใพใŸใฏ 5 ๅ€ใฎไฝŽ้€ŸๅŒ–ใฎใ„ใšใ‚Œใ‹ใซใชใ‚Šใพใ™ใ€‚ ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใ€ใƒ‡ใƒผใ‚ฟใ€ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ๅฎŸ้š›ใฎใƒขใƒ‡ใƒซใซใคใ„ใฆใ€‚ ไธปใซ้ซ˜้€ŸๅŒ–ใงใ‚ใ‚‹ไพ‹: </Tip> ```python from transformers import pipeline from torch.utils.data import Dataset from tqdm.auto import tqdm pipe = pipeline("text-classification", device=0) class MyDataset(Dataset): def __len__(self): return 5000 def __getitem__(self, i): return "This is a test" dataset = MyDataset() for batch_size in [1, 8, 64, 256]: print("-" * 30) print(f"Streaming batch_size={batch_size}") for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)): pass ``` ``` # On GTX 970 ------------------------------ Streaming no batching 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 5000/5000 [00:26<00:00, 187.52it/s] ------------------------------ Streaming batch_size=8 
For ease of use, a generator is also possible:

```python
from transformers import pipeline

pipe = pipeline("text-classification")


def data():
    while True:
        # This could come from a dataset, a database, a queue or HTTP request
        # in a server
        # Caveat: because this is iterative, you cannot use `num_workers > 1` variable
        # to use multiple threads to preprocess data. You can still have 1 thread that
        # does the preprocessing while the main runs the big inference
        yield "This is a test"


for out in pipe(data()):
    print(out)
    # {'label': 'POSITIVE', 'score': 0.9998743534088135}
    # {'label': 'POSITIVE', 'score': ...}
    # ....
```

[[autodoc]] pipeline

## Pipeline batching

All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing
lists, a `dataset`, or a `generator`).

```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
    # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
    # Exactly the same output as before, but the content are passed
    # as batches to the model
```

<Tip warning={true}>

However, this is not automatically a win for performance. It can be either a 10x speedup or a 5x slowdown depending
on hardware, data and the actual model being used.

Example where it's mostly a speedup:

</Tip>

```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm

pipe = pipeline("text-classification", device=0)


class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"


dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
```

```
# On GTX 970
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```

Example where it's mostly a slowdown:

```python
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
```

This is an occasional very long sentence compared to the others. In that case, the **whole** batch will need to be
400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to a big slowdown. Even worse, on
bigger batches, the program simply crashes.
```
------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
  0%|                                                                                 | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/nicolas/src/transformers/test.py", line 42, in <module>
    for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
    q = q / math.sqrt(dim_per_head)  # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```

There is no good (general) solution for this problem, and your mileage may vary depending on your use case. Rule of
thumb:

For users, a rule of thumb is:

- **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
  only way to go.**
- If you are latency constrained (live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you are using throughput (you want to run your model on a bunch of static data), on GPU, then:
  - If you have no clue about the size of the sequence_length ("natural" data), by default don't batch; measure and
    try tentatively to add it, and add OOM checks to recover when it fails (and it will fail at some point if you
    don't control the sequence_length).
  - If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push
    it until you get OOMs.
  - The larger the GPU, the more likely batching is going to be interesting.
- As soon as you enable batching, make sure you can handle OOMs nicely.
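As one hedged illustration of that last point, a caller can probe batch sizes and fall back on CUDA out-of-memory
errors. The helper below is a sketch, not a `transformers` API (on PyTorch older than 1.13, catch `RuntimeError`
instead of `torch.cuda.OutOfMemoryError`):

```python
import torch
from transformers import pipeline

pipe = pipeline("text-classification", device=0)


def run_with_fallback(pipe, data, batch_size=64):
    # Try a large batch first, halving on CUDA OOM until it fits.
    while batch_size >= 1:
        try:
            return list(pipe(data, batch_size=batch_size))
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # release the failed allocation
            batch_size //= 2
    raise RuntimeError("OOM even with batch_size=1")
```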
ใ—ใŸใŒใฃใฆใ€`transformers`ใŒใ‚ใชใŸใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ๅ˜็ด”ใซ่ฉฆใ—ใฆใฟใŸใ„ๅ ดๅˆใฏใ€ๆฌกใฎใ“ใจใŒใงใใพใ™ใ€‚ - ้ธๆŠžใ—ใŸใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ใ‚ตใƒ–ใ‚ฏใƒฉใ‚นๅŒ–ใ—ใพใ™ ```python class MyPipeline(TextClassificationPipeline): def postprocess(): # Your code goes here scores = scores * 100 # And here my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...) # or if you use *pipeline* function, then: my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline) ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅฟ…่ฆใชใ‚ซใ‚นใ‚ฟใƒ  ใ‚ณใƒผใƒ‰ใ‚’ใ™ในใฆๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ## Implementing a pipeline [Implementing a new pipeline](../add_new_pipeline) ## Audio ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช ใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใ‚‹ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซใฏๆฌกใฎใ‚‚ใฎใŒใ‚ใ‚Šใพใ™ใ€‚ ### AudioClassificationPipeline [[autodoc]] AudioClassificationPipeline - __call__ - all ### AutomaticSpeechRecognitionPipeline [[autodoc]] AutomaticSpeechRecognitionPipeline - __call__ - all ### TextToAudioPipeline [[autodoc]] TextToAudioPipeline - __call__ - all ### ZeroShotAudioClassificationPipeline [[autodoc]] ZeroShotAudioClassificationPipeline - __call__ - all ## Computer vision ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒผ ใƒ“ใ‚ธใƒงใƒณ ใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใ‚‹ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซใฏๆฌกใฎใ‚‚ใฎใŒใ‚ใ‚Šใพใ™ใ€‚ ### DepthEstimationPipeline [[autodoc]] DepthEstimationPipeline - __call__ - all ### ImageClassificationPipeline [[autodoc]] ImageClassificationPipeline - __call__ - all ### ImageSegmentationPipeline [[autodoc]] ImageSegmentationPipeline - __call__ - all ### ImageToImagePipeline [[autodoc]] ImageToImagePipeline - __call__ - all ### ObjectDetectionPipeline [[autodoc]] ObjectDetectionPipeline - __call__ - all ### VideoClassificationPipeline [[autodoc]] VideoClassificationPipeline - __call__ - all ### ZeroShotImageClassificationPipeline [[autodoc]] ZeroShotImageClassificationPipeline - __call__ - all ### ZeroShotObjectDetectionPipeline [[autodoc]] ZeroShotObjectDetectionPipeline - __call__ - all ## Natural Language Processing ่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใ‚‹ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซใฏๆฌกใฎใ‚‚ใฎใŒใ‚ใ‚Šใพใ™ใ€‚ ### ConversationalPipeline [[autodoc]] Conversation [[autodoc]] ConversationalPipeline - __call__ - all ### FillMaskPipeline [[autodoc]] FillMaskPipeline - __call__ - all ### NerPipeline [[autodoc]] NerPipeline ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[`TokenClassificationPipeline`] ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ### QuestionAnsweringPipeline [[autodoc]] QuestionAnsweringPipeline - __call__ - all ### SummarizationPipeline [[autodoc]] SummarizationPipeline - __call__ - all ### TableQuestionAnsweringPipeline [[autodoc]] TableQuestionAnsweringPipeline - __call__ ### TextClassificationPipeline [[autodoc]] TextClassificationPipeline - __call__ - all ### TextGenerationPipeline [[autodoc]] TextGenerationPipeline - __call__ - all ### Text2TextGenerationPipeline [[autodoc]] Text2TextGenerationPipeline - __call__ - all ### TokenClassificationPipeline [[autodoc]] TokenClassificationPipeline - __call__ - all ### TranslationPipeline [[autodoc]] TranslationPipeline - __call__ - all ### ZeroShotClassificationPipeline [[autodoc]] ZeroShotClassificationPipeline - __call__ - all ## Multimodal ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ ใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใ‚‹ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใซใฏๆฌกใฎใ‚‚ใฎใŒใ‚ใ‚Šใพใ™ใ€‚ ### DocumentQuestionAnsweringPipeline [[autodoc]] DocumentQuestionAnsweringPipeline - __call__ - all ### FeatureExtractionPipeline [[autodoc]] FeatureExtractionPipeline - __call__ - all ### 
### ImageToTextPipeline

[[autodoc]] ImageToTextPipeline
    - __call__
    - all

### VisualQuestionAnsweringPipeline

[[autodoc]] VisualQuestionAnsweringPipeline
    - __call__
    - all

## Parent class: `Pipeline`

[[autodoc]] Pipeline
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Image Processor

An image processor is in charge of preparing input features for vision models and post-processing their outputs. This
includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and NumPy
tensors. It may also include model-specific post-processing, such as converting logits to segmentation masks.

## ImageProcessingMixin

[[autodoc]] image_processing_utils.ImageProcessingMixin
    - from_pretrained
    - save_pretrained

## BatchFeature

[[autodoc]] BatchFeature

## BaseImageProcessor

[[autodoc]] image_processing_utils.BaseImageProcessor
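To make the classes above concrete, here is a minimal sketch of the typical preprocessing flow (the ViT checkpoint
and the image URL are illustrative choices, not requirements):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

# any vision checkpoint works; this ViT model is just an example
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resizing + normalization, returned as a BatchFeature of PyTorch tensors
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 224, 224])
```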
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Optimization

The `.optimization` module provides:

- an optimizer with weight decay fixed that can be used to fine-tune models, and
- several schedules in the form of schedule objects that inherit from `_LRSchedule`, and
- a gradient accumulation class to accumulate the gradients of multiple batches

## AdamW (PyTorch)

[[autodoc]] AdamW

## AdaFactor (PyTorch)

[[autodoc]] Adafactor

## AdamWeightDecay (TensorFlow)

[[autodoc]] AdamWeightDecay

[[autodoc]] create_optimizer

## Schedules

### Learning Rate Schedules (PyTorch)

[[autodoc]] SchedulerType

[[autodoc]] get_scheduler

[[autodoc]] get_constant_schedule

[[autodoc]] get_constant_schedule_with_warmup

<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>

[[autodoc]] get_cosine_schedule_with_warmup

<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>

[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup

<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>

[[autodoc]] get_linear_schedule_with_warmup

<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>

[[autodoc]] get_polynomial_decay_schedule_with_warmup

[[autodoc]] get_inverse_sqrt_schedule

### Warmup (TensorFlow)

[[autodoc]] WarmUp

## Gradient Strategies

### GradientAccumulator (TensorFlow)

[[autodoc]] GradientAccumulator
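As an illustration of how these pieces fit together, here is a hedged sketch pairing an AdamW optimizer with
`get_linear_schedule_with_warmup` (the tiny `torch.nn.Linear` stand-in model and the step counts are placeholders):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for a real transformers model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

num_training_steps = 1000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)

for step in range(num_training_steps):
    # ... compute the loss and call loss.backward() here ...
    optimizer.step()
    scheduler.step()  # advance the learning rate schedule once per optimizer step
    optimizer.zero_grad()
```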
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quantize 🤗 Transformers models

## `AutoGPTQ` Integration

🤗 Transformers has integrated the `optimum` API to perform GPTQ quantization on language models. You can load and
quantize your models in 8, 4, 3 or even 2 bits without a big drop in performance and with faster inference speed.
This is supported by most GPU hardware.

To learn more about the quantization method, check out:

- the [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) paper
- the `optimum` [guide](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization) on GPTQ quantization
- the [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) library used as the backend

### Requirements

You need to have the following requirements installed to run the code below:

- Install the latest `AutoGPTQ` library: `pip install auto-gptq`
- Install the latest `optimum` from source: `pip install git+https://github.com/huggingface/optimum.git`
- Install the latest `transformers` from source: `pip install git+https://github.com/huggingface/transformers.git`
- Install the latest `accelerate` library: `pip install --upgrade accelerate`

Note that the GPTQ integration supports text models only for now, so you may encounter unexpected behaviour for
vision, speech or multimodal models.

### Load and quantize a model

GPTQ is a quantization method that requires weight calibration before using the quantized model. If you want to
quantize a transformers model from scratch, it might take some time to produce the quantized model (around 5 minutes
on a Google Colab for the `facebook/opt-350m` model).

Hence, there are two different scenarios for using a GPTQ-quantized model. The first use case is to load a model that
has already been quantized by other users on the Hub; the second use case is to quantize a model from scratch and
save it or push it to the Hub so that other users can also use it.

#### GPTQ Configuration

In order to load and quantize a model, you need to create a [`GPTQConfig`].
ใ‚’ไฝœๆˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๆบ–ๅ‚™ใ™ใ‚‹ใซใฏใ€`bits`ใฎๆ•ฐใ€้‡ๅญๅŒ–ใ‚’่ชฟๆ•ดใ™ใ‚‹ใŸใ‚ใฎ`dataset`ใ€ใŠใ‚ˆใณใƒขใƒ‡ใƒซใฎ`Tokenizer`ใ‚’ๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```python model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) gptq_config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer) ``` ็‹ฌ่‡ชใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๆ–‡ๅญ—ๅˆ—ใฎใƒชใ‚นใƒˆใจใ—ใฆๆธกใ™ใ“ใจใŒใงใใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใŸใ ใ—ใ€GPTQ ่ซ–ๆ–‡ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ```python dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."] quantization = GPTQConfig(bits=4, dataset = dataset, tokenizer=tokenizer) ``` #### Quantization `from_pretrained` ใ‚’ไฝฟ็”จใ—ใ€`quantization_config` ใ‚’่จญๅฎšใ™ใ‚‹ใ“ใจใงใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใงใใพใ™ใ€‚ ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config) ``` ใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใ™ใ‚‹ใซใฏ GPU ใŒๅฟ…่ฆใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใƒขใƒ‡ใƒซใ‚’ CPU ใซ้…็ฝฎใ—ใ€้‡ๅญๅŒ–ใ™ใ‚‹ใŸใ‚ใซใƒขใ‚ธใƒฅใƒผใƒซใ‚’ GPU ใซๅ‰ๅพŒใซ็งปๅ‹•ใ•ใ›ใพใ™ใ€‚ CPU ใ‚ชใƒ•ใƒญใƒผใƒ‰ใฎไฝฟ็”จไธญใซ GPU ใฎไฝฟ็”จ้‡ใ‚’ๆœ€ๅคงๅŒ–ใ—ใŸใ„ๅ ดๅˆใฏใ€`device_map = "auto"` ใ‚’่จญๅฎšใงใใพใ™ใ€‚ ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config) ``` ใƒ‡ใ‚ฃใ‚นใ‚ฏ ใ‚ชใƒ•ใƒญใƒผใƒ‰ใฏใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ•ใ‚‰ใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒๅŽŸๅ› ใงใƒกใƒขใƒชใŒไธ่ถณใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€`from_pretained` ใง `max_memory` ใ‚’ๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ `device_map`ใจ`max_memory`ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ใ“ใฎ [ใ‚ฌใ‚คใƒ‰](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ <Tip warning={true}> GPTQ ้‡ๅญๅŒ–ใฏใ€็พๆ™‚็‚นใงใฏใƒ†ใ‚ญใ‚นใƒˆ ใƒขใƒ‡ใƒซใงใฎใฟๆฉŸ่ƒฝใ—ใพใ™ใ€‚ใ•ใ‚‰ใซใ€้‡ๅญๅŒ–ใƒ—ใƒญใ‚ปใ‚นใฏใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซใ‚ˆใฃใฆใฏ้•ทๆ™‚้–“ใ‹ใ‹ใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ (NVIDIA A100 ใ‚’ไฝฟ็”จใ—ใŸๅ ดๅˆใ€175B ใƒขใƒ‡ใƒซ = 4 gpu ๆ™‚้–“)ใ€‚ใƒขใƒ‡ใƒซใฎ GPTQ ้‡ๅญๅŒ–ใƒใƒผใ‚ธใƒงใƒณใŒๅญ˜ๅœจใ—ใชใ„ๅ ดๅˆใฏใ€ใƒใƒ–ใง็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใใ†ใงใชใ„ๅ ดๅˆใฏใ€github ใง่ฆๆฑ‚ใ‚’้€ไฟกใงใใพใ™ใ€‚ </Tip> ### Push quantized model to ๐Ÿค— Hub ไป–ใฎ ๐Ÿค— ใƒขใƒ‡ใƒซใจๅŒๆง˜ใซใ€`push_to_hub` ใ‚’ไฝฟ็”จใ—ใฆ้‡ๅญๅŒ–ใƒขใƒ‡ใƒซใ‚’ใƒใƒ–ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™ใ€‚้‡ๅญๅŒ–ๆง‹ๆˆใฏไฟๅญ˜ใ•ใ‚Œใ€ใƒขใƒ‡ใƒซใซๆฒฟใฃใฆใƒ—ใƒƒใ‚ทใƒฅใ•ใ‚Œใพใ™ใ€‚ ```python quantized_model.push_to_hub("opt-125m-gptq") tokenizer.push_to_hub("opt-125m-gptq") ``` ้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใ‚ซใƒซ ใƒžใ‚ทใƒณใซไฟๅญ˜ใ—ใŸใ„ๅ ดๅˆใฏใ€`save_pretrained` ใ‚’ไฝฟ็”จใ—ใฆ่กŒใ†ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```python quantized_model.save_pretrained("opt-125m-gptq") tokenizer.save_pretrained("opt-125m-gptq") ``` `device_map` ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใ—ใŸๅ ดๅˆใฏใ€ไฟๅญ˜ใ™ใ‚‹ๅ‰ใซใƒขใƒ‡ใƒซๅ…จไฝ“ใ‚’ GPU ใพใŸใฏ `cpu` ใฎใ„ใšใ‚Œใ‹ใซ็งปๅ‹•ใ—ใฆใใ ใ•ใ„ใ€‚ ```python quantized_model.to("cpu") quantized_model.save_pretrained("opt-125m-gptq") ``` ### Load a quantized model from the ๐Ÿค— Hub `from_pretrained`ใ‚’ไฝฟ็”จใ—ใฆใ€้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ใƒใƒ–ใ‹ใ‚‰ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ๅฑžๆ€ง `quantization_config` 
ใŒใƒขใƒ‡ใƒซ่จญๅฎšใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซๅญ˜ๅœจใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใ€ใƒ—ใƒƒใ‚ทใƒฅใ•ใ‚ŒใŸ้‡ใฟใŒ้‡ๅญๅŒ–ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq") ``` ๅฟ…่ฆไปฅไธŠใฎใƒกใƒขใƒชใ‚’ๅ‰ฒใ‚Šๅฝ“ใฆใšใซใƒขใƒ‡ใƒซใ‚’ใ‚ˆใ‚Š้€Ÿใใƒญใƒผใƒ‰ใ—ใŸใ„ๅ ดๅˆใฏใ€`device_map` ๅผ•ๆ•ฐใฏ้‡ๅญๅŒ–ใƒขใƒ‡ใƒซใงใ‚‚ๆฉŸ่ƒฝใ—ใพใ™ใ€‚ `accelerate`ใƒฉใ‚คใƒ–ใƒฉใƒชใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto") ``` ### Exllama kernels for faster inference 4 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€ๆŽจ่ซ–้€Ÿๅบฆใ‚’้ซ˜ใ‚ใ‚‹ใŸใ‚ใซ exllama ใ‚ซใƒผใƒใƒซใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงๆœ‰ๅŠนใซใชใฃใฆใ„ใพใ™ใ€‚ [`GPTQConfig`] ใง `disable_exllama` ใ‚’ๆธกใ™ใ“ใจใงใ€ใใฎๅ‹•ไฝœใ‚’ๅค‰ๆ›ดใงใใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่จญๅฎšใซไฟๅญ˜ใ•ใ‚Œใฆใ„ใ‚‹้‡ๅญๅŒ–่จญๅฎšใŒไธŠๆ›ธใใ•ใ‚Œใพใ™ใ€‚ใ‚ซใƒผใƒใƒซใซ้–ข้€ฃใ™ใ‚‹ๅฑžๆ€งใฎใฟใ‚’ไธŠๆ›ธใใงใใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ•ใ‚‰ใซใ€exllama ใ‚ซใƒผใƒใƒซใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ใƒขใƒ‡ใƒซๅ…จไฝ“ใ‚’ GPU ไธŠใซ็ฝฎใๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py import torch gptq_config = GPTQConfig(bits=4, disable_exllama=False) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config = gptq_config) ``` ็พๆ™‚็‚นใงใฏ 4 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใฎใฟใŒใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ•ใ‚‰ใซใ€peft ใ‚’ไฝฟ็”จใ—ใฆ้‡ๅญๅŒ–ใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€exllama ใ‚ซใƒผใƒใƒซใ‚’้žใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ–ๅŒ–ใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ #### Fine-tune a quantized model Hugging Face ใ‚จใ‚ณใ‚ทใ‚นใƒ†ใƒ ใฎใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใฎๅ…ฌๅผใ‚ตใƒใƒผใƒˆใซใ‚ˆใ‚Šใ€GPTQ ใง้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใงใใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[`peft`](https://github.com/huggingface/peft) ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ### Example demo GPTQ ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใ™ใ‚‹ๆ–นๆณ•ใจใ€peft ใ‚’ไฝฟ็”จใ—ใฆ้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ๆ–นๆณ•ใซใคใ„ใฆใฏใ€Google Colab [ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏ](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ### GPTQConfig [[autodoc]] GPTQConfig ## `bitsandbytes` Integration ๐Ÿค— Transformers ใฏใ€`bitsandbytes` ใงๆœ€ใ‚‚ใ‚ˆใไฝฟ็”จใ•ใ‚Œใ‚‹ใƒขใ‚ธใƒฅใƒผใƒซใจ็ทŠๅฏ†ใซ็ตฑๅˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ๆ•ฐ่กŒใฎใ‚ณใƒผใƒ‰ใงใƒขใƒ‡ใƒซใ‚’ 8 ใƒ“ใƒƒใƒˆ็ฒพๅบฆใงใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ใ“ใ‚Œใฏใ€`bitsandbytes`ใฎ `0.37.0`ใƒชใƒชใƒผใ‚นไปฅ้™ใ€ใปใจใ‚“ใฉใฎ GPU ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ้‡ๅญๅŒ–ๆ–นๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[LLM.int8()](https://arxiv.org/abs/2208.07339) ่ซ–ๆ–‡ใ€ใพใŸใฏ [ใƒ–ใƒญใ‚ฐๆŠ•็จฟ](https://huggingface.co/blog/hf-bitsandbytes-) ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚็ตฑๅˆ๏ผ‰ใ‚ณใƒฉใƒœใƒฌใƒผใ‚ทใƒงใƒณใซใคใ„ใฆใ€‚ `0.39.0`ใƒชใƒชใƒผใ‚นไปฅ้™ใ€FP4 ใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ๆดป็”จใ—ใ€4 ใƒ“ใƒƒใƒˆ้‡ๅญๅŒ–ใ‚’ไฝฟ็”จใ—ใฆ`device_map`ใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ไปปๆ„ใฎใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ็‹ฌ่‡ชใฎ pytorch ใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใ—ใŸใ„ๅ ดๅˆใฏใ€๐Ÿค— Accelerate ใƒฉใ‚คใƒ–ใƒฉใƒชใฎ [ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) ใ‚’ใƒใ‚งใƒƒใ‚ฏใ—ใฆใใ ใ•ใ„ใ€‚ `bitsandbytes`็ตฑๅˆใ‚’ไฝฟ็”จใ—ใฆใงใใ‚‹ใ“ใจใฏๆฌกใฎใจใŠใ‚Šใงใ™ ### General usage 
ใƒขใƒ‡ใƒซใŒ ๐Ÿค— Accelerate ใซใ‚ˆใ‚‹่ชญใฟ่พผใฟใ‚’ใ‚ตใƒใƒผใƒˆใ—ใ€`torch.nn.Linear` ใƒฌใ‚คใƒคใƒผใŒๅซใพใ‚Œใฆใ„ใ‚‹้™ใ‚Šใ€ [`~PreTrainedModel.from_pretrained`] ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ™ใจใใซ `load_in_8bit` ใพใŸใฏ `load_in_4bit` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’้‡ๅญๅŒ–ใงใใพใ™ใ€‚ใ“ใ‚Œใฏใฉใฎใ‚ˆใ†ใชใƒขใƒ€ใƒชใƒ†ใ‚ฃใงใ‚‚ๅŒๆง˜ใซๆฉŸ่ƒฝใ™ใ‚‹ใฏใšใงใ™ใ€‚ ```python from transformers import AutoModelForCausalLM model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True) model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True) ``` ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ไป–ใฎใ™ในใฆใฎใƒขใ‚ธใƒฅใƒผใƒซ (ไพ‹: `torch.nn.LayerNorm`) ใฏ `torch.float16` ใซๅค‰ๆ›ใ•ใ‚Œใพใ™ใŒใ€ใใฎ `dtype` ใ‚’ๅค‰ๆ›ดใ—ใŸใ„ๅ ดๅˆใฏใ€`torch_dtype` ๅผ•ๆ•ฐใ‚’ไธŠๆ›ธใใงใใพใ™ใ€‚ ```python >>> import torch >>> from transformers import AutoModelForCausalLM >>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32) >>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype torch.float32 ``` ### FP4 quantization #### Requirements ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ ใ‚นใƒ‹ใƒšใƒƒใƒˆใ‚’ๅฎŸ่กŒใ™ใ‚‹ๅ‰ใซใ€ไปฅไธ‹ใฎ่ฆไปถใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ - ๆœ€ๆ–ฐใฎ`bitsandbytes`ใƒฉใ‚คใƒ–ใƒฉใƒช `pip install bitsandbytes>=0.39.0` - ๆœ€ๆ–ฐใฎ`accelerate`ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ `pip install --upgrade accelerate` - ๆœ€ๆ–ฐใฎ `transformers` ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ `pip install --upgrade transformers` #### Tips and best practices - **้ซ˜ๅบฆใชไฝฟ็”จๆณ•:** ๅฏ่ƒฝใชใ™ในใฆใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ไฝฟ็”จใ—ใŸ 4 ใƒ“ใƒƒใƒˆ้‡ๅญๅŒ–ใฎ้ซ˜ๅบฆใชไฝฟ็”จๆณ•ใซใคใ„ใฆใฏใ€[ใ“ใฎ Google Colab ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏ](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ - **`batch_size=1` ใซใ‚ˆใ‚‹้ซ˜้€ŸๆŽจ่ซ– :** bitsandbytes ใฎ `0.40.0` ใƒชใƒชใƒผใ‚นไปฅ้™ใ€`batch_size=1` ใงใฏ้ซ˜้€ŸๆŽจ่ซ–ใฎๆฉๆตใ‚’ๅ—ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ [ใ“ใ‚Œใ‚‰ใฎใƒชใƒชใƒผใ‚น ใƒŽใƒผใƒˆ](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) ใ‚’็ขบ่ชใ—ใ€ใ“ใฎๆฉŸ่ƒฝใ‚’ๆดป็”จใ™ใ‚‹ใซใฏ`0.40.0`ไปฅ้™ใฎใƒใƒผใ‚ธใƒงใƒณใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚็ฎฑใฎใ€‚ - **ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ:** [QLoRA ่ซ–ๆ–‡](https://arxiv.org/abs/2305.14314) ใซใ‚ˆใ‚‹ใจใ€4 ใƒ“ใƒƒใƒˆๅŸบๆœฌใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ๅ ดๅˆ (ไพ‹: LoRA ใ‚ขใƒ€ใƒ—ใ‚ฟใƒผใ‚’ไฝฟ็”จ)ใ€`bnb_4bit_quant_type='nf4'` ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ€‚ - **ๆŽจ่ซ–:** ๆŽจ่ซ–ใฎๅ ดๅˆใ€`bnb_4bit_quant_type` ใฏใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใซๅคงใใชๅฝฑ้Ÿฟใ‚’ไธŽใˆใพใ›ใ‚“ใ€‚ใŸใ ใ—ใ€ใƒขใƒ‡ใƒซใฎ้‡ใฟใจใฎไธ€่ฒซๆ€งใ‚’ไฟใคใŸใ‚ใซใ€ๅฟ…ใšๅŒใ˜ `bnb_4bit_compute_dtype` ใŠใ‚ˆใณ `torch_dtype` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ #### Load a large model in 4bit `.from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ™ใจใใซ `load_in_4bit=True` ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ (ใŠใŠใ‚ˆใ) 4 ใงๅ‰ฒใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```python # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "bigscience/bloom-1b7" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True) ``` <Tip warning={true}> ใƒขใƒ‡ใƒซใŒ 4 ใƒ“ใƒƒใƒˆใงใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹ใจใ€็พๆ™‚็‚นใงใฏ้‡ๅญๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใ‚’ใƒใƒ–ใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใฏใงใใชใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ 
ใ•ใ„ใ€‚ 4 ใƒ“ใƒƒใƒˆใฎ้‡ใฟใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„ใŸใ‚ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใชใ„ใ“ใจใซใ‚‚ๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใŸใ ใ—ใ€4 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆ่ฟฝๅŠ ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ“ใ‚Œใซใคใ„ใฆใฏๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ชฌๆ˜Žใ—ใพใ™ใ€‚ </Tip> ### Load a large model in 8bit `.from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ™ใจใใซ `load_in_8bit=True` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒกใƒขใƒช่ฆไปถใ‚’ใŠใ‚ˆใๅŠๅˆ†ใซใ—ใฆใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ```python # pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "bigscience/bloom-1b7" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) ``` ๆฌกใซใ€้€šๅธธ [`PreTrainedModel`] ใ‚’ไฝฟ็”จใ™ใ‚‹ใฎใจๅŒใ˜ใ‚ˆใ†ใซใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ `get_memory_footprint` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒขใƒ‡ใƒซใฎใƒกใƒขใƒช ใƒ•ใƒƒใƒˆใƒ—ใƒชใƒณใƒˆใ‚’็ขบ่ชใงใใพใ™ใ€‚ ```python print(model.get_memory_footprint()) ``` ใ“ใฎ็ตฑๅˆใซใ‚ˆใ‚Šใ€ๅคงใใชใƒขใƒ‡ใƒซใ‚’ๅฐใ•ใชใƒ‡ใƒใ‚คใ‚นใซใƒญใƒผใƒ‰ใ—ใ€ๅ•้กŒใชใๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ <Tip warning={true}> ใƒขใƒ‡ใƒซใŒ 8 ใƒ“ใƒƒใƒˆใงใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹ใจใ€ๆœ€ๆ–ฐใฎ `transformers`ใจ`bitsandbytes`ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใ‚’้™คใใ€้‡ๅญๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใ‚’ใƒใƒ–ใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใฏ็พๅœจไธๅฏ่ƒฝใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ 8 ใƒ“ใƒƒใƒˆใฎ้‡ใฟใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„ใŸใ‚ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใชใ„ใ“ใจใซใ‚‚ๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใŸใ ใ—ใ€8 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆ่ฟฝๅŠ ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ“ใ‚Œใซใคใ„ใฆใฏๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ชฌๆ˜Žใ—ใพใ™ใ€‚ ใพใŸใ€`device_map` ใฏใ‚ชใƒ—ใ‚ทใƒงใƒณใงใ™ใŒใ€ๅˆฉ็”จๅฏ่ƒฝใชใƒชใ‚ฝใƒผใ‚นไธŠใงใƒขใƒ‡ใƒซใ‚’ๅŠน็އ็š„ใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ™ใ‚‹ใŸใ‚ใ€ๆŽจ่ซ–ใซใฏ `device_map = 'auto'` ใ‚’่จญๅฎšใ™ใ‚‹ใ“ใจใŒๆŽจๅฅจใ•ใ‚Œใพใ™ใ€‚ </Tip> #### Advanced use cases ใ“ใ“ใงใฏใ€FP4 ้‡ๅญๅŒ–ใ‚’ไฝฟ็”จใ—ใฆๅฎŸ่กŒใงใใ‚‹ใ„ใใคใ‹ใฎ้ซ˜ๅบฆใชไฝฟ็”จไพ‹ใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ##### Change the compute dtype compute dtype ใฏใ€่จˆ็ฎ—ไธญใซไฝฟ็”จใ•ใ‚Œใ‚‹ dtype ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ใŸใจใˆใฐใ€้š ใ—็Šถๆ…‹ใฏ`float32`ใซใ‚ใ‚Šใพใ™ใŒใ€้ซ˜้€ŸๅŒ–ใฎใŸใ‚ใซ่จˆ็ฎ—ใ‚’ bf16 ใซ่จญๅฎšใงใใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€compute dtype ใฏ `float32` ใซ่จญๅฎšใ•ใ‚Œใพใ™ใ€‚ ```python import torch from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16) ``` ##### Using NF4 (Normal Float 4) data type NF4 ใƒ‡ใƒผใ‚ฟๅž‹ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ“ใ‚Œใฏใ€ๆญฃ่ฆๅˆ†ๅธƒใ‚’ไฝฟ็”จใ—ใฆๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸ้‡ใฟใซ้ฉๅˆใ—ใŸๆ–ฐใ—ใ„ 4 ใƒ“ใƒƒใƒˆ ใƒ‡ใƒผใ‚ฟๅž‹ใงใ™ใ€‚ใใฎๅฎŸ่กŒใฎใŸใ‚ใซ: ```python from transformers import BitsAndBytesConfig nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", ) model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config) ``` ##### Use nested quantization for more memory efficient inference ใพใŸใ€ใƒใ‚นใƒˆใ•ใ‚ŒใŸ้‡ๅญๅŒ–ๆ‰‹ๆณ•ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใชใใ€ใ‚ˆใ‚ŠๅคšใใฎใƒกใƒขใƒชใŒ็ฏ€็ด„ใ•ใ‚Œใพใ™ใ€‚็ตŒ้จ“็š„ใช่ฆณๅฏŸใ‹ใ‚‰ใ€ใ“ใ‚Œใซใ‚ˆใ‚Šใ€NVIDIA-T4 16GB 
ไธŠใงใ‚ทใƒผใ‚ฑใƒณใ‚น้•ท 1024ใ€ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บ 1ใ€ๅ‹พ้…็ดฏ็ฉใ‚นใƒ†ใƒƒใƒ— 4 ใฎ llama-13b ใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใŒๅฏ่ƒฝใซใชใ‚Šใพใ™ใ€‚ ```python from transformers import BitsAndBytesConfig double_quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config) ``` ### Push quantized models on the ๐Ÿค— Hub `push_to_hub`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ˜็ด”ใซไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ€้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ใƒใƒ–ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๆœ€ๅˆใซ้‡ๅญๅŒ–ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใŒใƒ—ใƒƒใ‚ทใƒฅใ•ใ‚Œใ€ๆฌกใซ้‡ๅญๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฎ้‡ใฟใŒใƒ—ใƒƒใ‚ทใƒฅใ•ใ‚Œใพใ™ใ€‚ ใ“ใฎๆฉŸ่ƒฝใ‚’ไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใซใฏใ€ๅฟ…ใš `bitsandbytes>0.37.2` ใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ (ใ“ใฎ่จ˜ไบ‹ใฎๅŸท็ญ†ๆ™‚็‚นใงใฏใ€`bitsandbytes==0.38.0.post1` ใงใƒ†ใ‚นใƒˆใ—ใพใ—ใŸ)ใ€‚ ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") model.push_to_hub("bloom-560m-8bit") ``` <Tip warning={true}> ๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใงใฏใ€ใƒใƒ–ไธŠใง 8 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใ‚’ใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใŒๅผทใๆŽจๅฅจใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใฏใƒกใƒขใƒช ใƒ•ใƒƒใƒˆใƒ—ใƒชใƒณใƒˆใฎๅ‰Šๆธ›ใจใ€ใŸใจใˆใฐ Google Colab ใงใฎๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟใซใ‚ˆใ‚‹ๆฉๆตใ‚’ๅ—ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ </Tip> ### Load a quantized model from the ๐Ÿค— Hub `from_pretrained`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒใƒ–ใ‹ใ‚‰้‡ๅญๅŒ–ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ๅฑžๆ€ง `quantization_config` ใŒใƒขใƒ‡ใƒซ่จญๅฎšใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซๅญ˜ๅœจใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใ€ใƒ—ใƒƒใ‚ทใƒฅใ•ใ‚ŒใŸ้‡ใฟใŒ้‡ๅญๅŒ–ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto") ``` ใ“ใฎๅ ดๅˆใ€ๅผ•ๆ•ฐ `load_in_8bit=True` ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€`bitsandbytes` ใจ `accelerate` ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใพใŸใ€`device_map` ใฏใ‚ชใƒ—ใ‚ทใƒงใƒณใงใ™ใŒใ€ๅˆฉ็”จๅฏ่ƒฝใชใƒชใ‚ฝใƒผใ‚นไธŠใงใƒขใƒ‡ใƒซใ‚’ๅŠน็އ็š„ใซใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ™ใ‚‹ใŸใ‚ใ€ๆŽจ่ซ–ใซใฏ `device_map = 'auto'` ใ‚’่จญๅฎšใ™ใ‚‹ใ“ใจใŒๆŽจๅฅจใ•ใ‚Œใพใ™ใ€‚ ### Advanced use cases ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใฏใ€8 ใƒ“ใƒƒใƒˆ ใƒขใƒ‡ใƒซใฎใƒญใƒผใƒ‰ใจๅฎŸ่กŒไปฅๅค–ใซไฝ•ใŒใงใใ‚‹ใ‹ใ‚’ๆŽขๆฑ‚ใ—ใŸใ„ไธŠ็ดšใƒฆใƒผใ‚ถใƒผใ‚’ๅฏพ่ฑกใจใ—ใฆใ„ใพใ™ใ€‚ #### Offload between `cpu` and `gpu` ใ“ใฎ้ซ˜ๅบฆใชไฝฟ็”จไพ‹ใฎ 1 ใคใฏใ€ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ—ใ€`CPU`ใจ`GPU`ใฎ้–“ใง้‡ใฟใ‚’ใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใงใใ‚‹ใ“ใจใงใ™ใ€‚ CPU ไธŠใงใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ•ใ‚Œใ‚‹้‡ใฟใฏ **8 ใƒ“ใƒƒใƒˆใซๅค‰ๆ›ใ•ใ‚Œใชใ„**ใŸใ‚ใ€`float32`ใซไฟๆŒใ•ใ‚Œใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใฎๆฉŸ่ƒฝใฏใ€้žๅธธใซๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚’้ฉๅˆใ•ใ›ใ€ใใฎใƒขใƒ‡ใƒซใ‚’ GPU ใจ CPU ใฎ้–“ใงใƒ‡ใ‚ฃใ‚นใƒ‘ใƒƒใƒใ—ใŸใ„ใƒฆใƒผใ‚ถใƒผใ‚’ๅฏพ่ฑกใจใ—ใฆใ„ใพใ™ใ€‚ ใพใšใ€`transformers` ใ‹ใ‚‰ [`BitsAndBytesConfig`] ใ‚’ใƒญใƒผใƒ‰ใ—ใ€ๅฑžๆ€ง `llm_int8_enable_fp32_cpu_offload` ใ‚’ `True` ใซ่จญๅฎšใ—ใพใ™ใ€‚ ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = 
Say you want to load `bigscience/bloom-1b7` and you have just enough GPU RAM to fit the entire model except the
`lm_head`. You can then write a custom `device_map` as follows:

```python
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
```

And load your model as follows:

```python
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map=device_map,
    quantization_config=quantization_config,
)
```

And that's it! Enjoy your model!

#### Play with `llm_int8_threshold`

You can play with the `llm_int8_threshold` argument to change the threshold for outliers. An "outlier" is a
hidden-state value greater than a certain threshold. This corresponds to the outlier threshold for outlier detection
described in the `LLM.int8()` paper: any hidden-state value above this threshold is considered an outlier, and the
operations on those values are done in fp16. Values are usually normally distributed, that is, most values are in the
range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large
models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of
magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower
threshold might be needed for more unstable models (small models, fine-tuning).

This argument can impact the inference speed of the model. We suggest playing with this parameter to find the one
that best fits your use case.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_threshold=10,
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

#### Skip the conversion of some modules

Some models have several modules that need to not be converted to 8-bit to ensure stability. For example, the Jukebox
model has several `lm_head` modules that should be skipped. Play with `llm_int8_skip_modules`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

#### Fine-tune a model that has been loaded in 8-bit

With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded
in 8-bit. This enables fine-tuning large models such as `flan-t5-large` or `facebook/opt-6.7b` in a single Google
Colab. Please have a look at the [`peft`](https://github.com/huggingface/peft) library for more details.
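As a minimal, hedged sketch of what such an adapter fine-tune can look like (the checkpoint, target modules and
hyper-parameters are illustrative only, and `prepare_model_for_kbit_training` requires a recent `peft` release):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# checkpoint is just an example; target_modules below match OPT attention layers
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model = prepare_model_for_kbit_training(model)  # freeze base weights, cast norms for stability

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter layers are trainable
```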
ใง`flan-t5-large`ใ‚„`facebook/opt-6.7b`ใชใฉใฎๅคง่ฆๆจกใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€[`peft`](https://github.com/huggingface/peft) ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็”จใฎใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใจใใซ `device_map` ใ‚’ๆธกใ™ๅฟ…่ฆใŒใชใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใƒขใƒ‡ใƒซใŒ GPU ใซ่‡ชๅ‹•็š„ใซใƒญใƒผใƒ‰ใ•ใ‚Œใพใ™ใ€‚ๅฟ…่ฆใซๅฟœใ˜ใฆใ€ใƒ‡ใƒใ‚คใ‚น ใƒžใƒƒใƒ—ใ‚’็‰นๅฎšใฎใƒ‡ใƒใ‚คใ‚นใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ (ไพ‹: `cuda:0`ใ€`0`ใ€`torch.device('cuda:0')`)ใ€‚ `device_map=auto`ใฏๆŽจ่ซ–ใฎใฟใซไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ### BitsAndBytesConfig [[autodoc]] BitsAndBytesConfig ## Quantization with ๐Ÿค— `optimum` `optimum`ใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹้‡ๅญๅŒ–ๆ–นๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Optimum ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://huggingface.co/docs/optimum/index) ใ‚’ๅ‚็…งใ—ใ€ใ“ใ‚Œใ‚‰ใŒ่‡ชๅˆ†ใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใซ้ฉ็”จใงใใ‚‹ใ‹ใฉใ†ใ‹ใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Agents & Tools

<Tip warning={true}>

Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can
vary as the APIs or the underlying models are prone to change.

</Tip>

To learn more about agents and tools, make sure to read the [introductory guide](../transformers_agents). This page
contains the API docs for the underlying classes.

## Agents

We provide three types of agents: [`HfAgent`] uses inference endpoints for open-source models, [`LocalAgent`] uses a
model of your choice locally, and [`OpenAiAgent`] uses OpenAI closed models.

### HfAgent

[[autodoc]] HfAgent

### LocalAgent

[[autodoc]] LocalAgent

### OpenAiAgent

[[autodoc]] OpenAiAgent

### AzureOpenAiAgent

[[autodoc]] AzureOpenAiAgent

### Agent

[[autodoc]] Agent
    - chat
    - run
    - prepare_for_new_chat

## Tools

### load_tool

[[autodoc]] load_tool

### Tool

[[autodoc]] Tool

### PipelineTool

[[autodoc]] PipelineTool

### RemoteTool

[[autodoc]] RemoteTool

### launch_gradio_demo

[[autodoc]] launch_gradio_demo

## Agent Types

Agents can handle any type of object in-between tools. Tools, being completely multimodal, can accept and return
text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to
correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes
around these types.

The wrapped objects should continue behaving as they did initially: a text object should still behave as a string, an
image object should still behave as a `PIL.Image`.

These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object
- Calling `to_string` on the type should return the object as a string: that can be the string itself in the case of
  an `AgentText`, but will be the path of the serialized version of the object in other instances
- Displaying it in an ipython kernel should display the object correctly

### AgentText

[[autodoc]] transformers.tools.agent_types.AgentText
### AgentImage

[[autodoc]] transformers.tools.agent_types.AgentImage

### AgentAudio

[[autodoc]] transformers.tools.agent_types.AgentAudio
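As a hedged illustration of the wrapper contract described above (constructing an agent type by hand like this is
for demonstration only; agents normally create these objects for you):

```python
from transformers.tools.agent_types import AgentText

text = AgentText("an example output")
print(isinstance(text, str))  # True: it still behaves like a plain string
print(text.to_raw())          # "an example output"
print(text.to_string())       # "an example output"
```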
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Trainer

The [`Trainer`] class provides an API for feature-complete training in PyTorch for most standard use cases. It's used
in most of the [example scripts](https://github.com/huggingface/transformers/tree/main/examples).

Before instantiating your [`Trainer`], create a [`TrainingArguments`] to access all the points of customization
during training.

The API supports distributed training on multiple GPUs/TPUs, mixed precision through
[NVIDIA Apex](https://github.com/NVIDIA/apex) and Native AMP for PyTorch.

The [`Trainer`] contains the basic training loop which supports the above features. To inject custom behavior, you
can subclass it and override the following methods:

- **get_train_dataloader** -- Creates the training DataLoader.
- **get_eval_dataloader** -- Creates the evaluation DataLoader.
- **get_test_dataloader** -- Creates the test DataLoader.
- **log** -- Logs information on the various objects watching training.
- **create_optimizer_and_scheduler** -- Sets up the optimizer and learning rate scheduler if they were not passed at
  init. Note that you can also subclass or override the `create_optimizer` and `create_scheduler` methods separately.
- **create_optimizer** -- Sets up the optimizer if it wasn't passed at init.
- **create_scheduler** -- Sets up the learning rate scheduler if it wasn't passed at init.
- **compute_loss** -- Computes the loss on a batch of training inputs.
- **training_step** -- Performs a training step.
- **prediction_step** -- Performs an evaluation/test step.
- **evaluate** -- Runs an evaluation loop and returns metrics.
- **predict** -- Returns predictions (with metrics if labels are available) on a test set.

<Tip warning={true}>

The [`Trainer`] class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it with
other models. When using it on your own model, make sure:

- your model always returns tuples or subclasses of [`~utils.ModelOutput`].
- your model can compute the loss if a `labels` argument is provided and that loss is returned as the first element
  of the tuple (if your model returns tuples)
- your model can accept multiple label arguments (use `label_names` in your [`TrainingArguments`] to indicate their
  name to the [`Trainer`]), but none of them should be named `"label"`.

</Tip>

Here is an example of how to customize [`Trainer`] to use a weighted loss (useful when you have an unbalanced
training set):

```python
import torch
from torch import nn
from transformers import Trainer


class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (suppose one has 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```

Another way to customize the training loop behavior for the PyTorch [`Trainer`] is to use callbacks that can inspect
the training loop state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions
(like early stopping).
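For instance, a minimal callback might look like the sketch below (the callback name and what it prints are
illustrative; `model` and `training_args` are assumed to be defined elsewhere as usual):

```python
from transformers import Trainer, TrainerCallback


class PrintLossCallback(TrainerCallback):
    # called every time the Trainer logs metrics
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and "loss" in logs:
            print(f"step {state.global_step}: loss = {logs['loss']:.4f}")


# `model` and `training_args` are assumed to be created as usual
trainer = Trainer(model=model, args=training_args, callbacks=[PrintLossCallback()])
```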
## Trainer

[[autodoc]] Trainer
    - all

## Seq2SeqTrainer

[[autodoc]] Seq2SeqTrainer
    - evaluate
    - predict

## TrainingArguments

[[autodoc]] TrainingArguments
    - all

## Seq2SeqTrainingArguments

[[autodoc]] Seq2SeqTrainingArguments
    - all

## Checkpoints

By default, [`Trainer`] will save all checkpoints in the `output_dir` you set in the [`TrainingArguments`] you are
using. Those will go in subfolders named `checkpoint-xxx`, with xxx being the step they were at.

Resuming training from a checkpoint can be done when calling [`Trainer.train`] with either:

- `resume_from_checkpoint=True`, which will resume training from the latest checkpoint
- `resume_from_checkpoint=checkpoint_dir`, which will resume training from the specific checkpoint in the directory
  passed.

In addition, you can easily save your checkpoints on the Model Hub when using `push_to_hub=True`. By default, all the
models saved in intermediate checkpoints are saved in different commits, but not the optimizer state. You can adapt
the `hub-strategy` value of your [`TrainingArguments`] to either:

- `"checkpoint"`: the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume
  training easily with `trainer.train(resume_from_checkpoint="output_dir/last-checkpoint")`.
- `"all_checkpoints"`: all checkpoints are pushed like they appear in the output folder (so you will get one
  checkpoint folder per folder in your final repository)
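Putting these options together, a hedged configuration sketch could look like this (the directory name and step count
are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output_dir",
    save_strategy="steps",      # write checkpoint-xxx folders periodically
    save_steps=500,
    push_to_hub=True,           # upload to the Model Hub as training runs
    hub_strategy="checkpoint",  # keep the latest one under last-checkpoint
)
```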
## Logging

By default [`Trainer`] will use `logging.INFO` for the main process and `logging.WARNING` for replicas, if any.

These defaults can be overridden to use any of the 5 `logging` levels with the following [`TrainingArguments`]
arguments:

- `log_level` - for the main process
- `log_level_replica` - for the replicas

Further, if the [`TrainingArguments`]' `log_on_each_node` is set to `False`, only the main node will use the log
level settings for its main process, and all other nodes will use the log level settings for replicas.

Note that [`Trainer`] is going to set the log level of `transformers` separately for each node in its
[`Trainer.__init__`]. So you may want to set this sooner (see the next example) if you tap into other `transformers`
functionality before creating the [`Trainer`] object.

Here is an example of how this can be used in an application:

```python
[...]
logger = logging.getLogger(__name__)

# Setup logging
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],
)

# set the main code and the modules it uses to the same log-level according to the node
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)

trainer = Trainer(...)
```

And then if you only want to see warnings on the main node, and all other nodes to not print any most likely
duplicated warnings, you could run it as:

```bash
my_app.py ... --log_level warning --log_level_replica error
```

In a multi-node environment, if you also don't want the logs to repeat for each node's main process, you will want to
change the above to:

```bash
my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
```

And then only the main process of the first node will log at the "warning" level, and all other processes on the
main node and all processes on the other nodes will log at the "error" level.

If you need your application to be as quiet as possible, you could do:

```bash
my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0
```

(add `--log_on_each_node 0` if on a multi-node environment)

## Randomness

When resuming from a checkpoint generated by [`Trainer`], every effort is made to restore the _python_, _numpy_ and
_pytorch_ RNG states to the same states they were at the moment the checkpoint was saved, which should make the
"stop and resume" style of training as close as possible to non-stop training.

However, due to various default non-deterministic pytorch settings, this might not fully work. If you want full
determinism, please refer to [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness).
As explained in that document, some of the settings that make things deterministic (e.g.
`torch.backends.cudnn.deterministic`) may slow things down, therefore this can't be done by default, but you can
enable those yourself if needed.
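If you do opt into full determinism, a possible starting point (at the cost of speed, as noted above) is:

```python
import torch
from transformers import set_seed

set_seed(42)  # seeds the Python, NumPy and PyTorch RNGs in one call

# opt into slower, deterministic cuDNN kernels for bit-wise reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```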
``` ใ“ใ“ใงใฏใ€็‰ฉ็† GPU 0 ใจ 2 ใŒใใ‚Œใžใ‚Œ`cuda:1`ใจ`cuda:0`ใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ไธŠ่จ˜ใฎไพ‹ใฏใ™ในใฆ `DistributedDataParallel` ไฝฟ็”จใƒ‘ใ‚ฟใƒผใƒณใฎใ‚‚ใฎใงใ™ใŒใ€ๅŒใ˜ๆ–นๆณ•ใŒ [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) ใงใ‚‚ๆฉŸ่ƒฝใ—ใพใ™ใ€‚ ```bash CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py ... ``` GPU ใฎใชใ„็’ฐๅขƒใ‚’ใ‚จใƒŸใƒฅใƒฌใƒผใƒˆใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ‚ˆใ†ใซใ“ใฎ็’ฐๅขƒๅค‰ๆ•ฐใ‚’็ฉบใฎๅ€คใซ่จญๅฎšใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ ```bash CUDA_VISIBLE_DEVICES= python trainer-program.py ... ``` ไป–ใฎ็’ฐๅขƒๅค‰ๆ•ฐใจๅŒๆง˜ใซใ€ใ“ใ‚Œใ‚‰ใ‚’ใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณใซ่ฟฝๅŠ ใ™ใ‚‹ไปฃใ‚ใ‚Šใซใ€ๆฌกใฎใ‚ˆใ†ใซใ‚จใ‚ฏใ‚นใƒใƒผใƒˆใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```bash export CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ... ``` ใŸใ ใ—ใ€ใ“ใฎๆ–นๆณ•ใงใฏใ€ไปฅๅ‰ใซ็’ฐๅขƒๅค‰ๆ•ฐใ‚’่จญๅฎšใ—ใŸใ“ใจใ‚’ๅฟ˜ใ‚Œใฆใ€ใชใœ้–“้•ใฃใŸ GPU ใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใฎใ‹็†่งฃใงใใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใŸใ‚ใ€ๆททไนฑใ‚’ๆ‹›ใๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใฎใปใจใ‚“ใฉใฎไพ‹ใง็คบใ•ใ‚Œใฆใ„ใ‚‹ใ‚ˆใ†ใซใ€ๅŒใ˜ใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณใง็‰นๅฎšใฎๅฎŸ่กŒใซๅฏพใ—ใฆใฎใฟ็’ฐๅขƒๅค‰ๆ•ฐใ‚’่จญๅฎšใ™ใ‚‹ใฎใŒไธ€่ˆฌ็š„ใงใ™ใ€‚ **`CUDA_DEVICE_ORDER`** ็‰ฉ็†ใƒ‡ใƒใ‚คใ‚นใฎ้ †ๅบใ‚’ๅˆถๅพกใ™ใ‚‹่ฟฝๅŠ ใฎ็’ฐๅขƒๅค‰ๆ•ฐ `CUDA_DEVICE_ORDER` ใŒใ‚ใ‚Šใพใ™ใ€‚้ธๆŠž่‚ขใฏๆฌกใฎ 2 ใคใงใ™ใ€‚ 1. PCIe ใƒใ‚น ID ้ † (`nvidia-smi` ใฎ้ †ๅบใจไธ€่‡ด) - ใ“ใ‚ŒใŒใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใ™ใ€‚ ```bash export CUDA_DEVICE_ORDER=PCI_BUS_ID ``` 2. GPU ใ‚ณใƒณใƒ”ใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐ่ƒฝๅŠ›้ †ใซไธฆในใ‚‹ ```bash export CUDA_DEVICE_ORDER=FASTEST_FIRST ``` ใปใจใ‚“ใฉใฎๅ ดๅˆใ€ใ“ใฎ็’ฐๅขƒๅค‰ๆ•ฐใ‚’ๆฐ—ใซใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ๅคใ„ GPU ใจๆ–ฐใ—ใ„ GPU ใŒ็‰ฉ็†็š„ใซๆŒฟๅ…ฅใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€้…ใ„ๅคใ„ใ‚ซใƒผใƒ‰ใŒ้…ใใชใฃใฆใ„ใ‚‹ใ‚ˆใ†ใซ่ฆ‹ใˆใ‚‹ใ‚ˆใ†ใชๅใฃใŸใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ‚’่กŒใฃใฆใ„ใ‚‹ๅ ดๅˆใซใฏใ€้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ๅˆใ‚ใ€‚ใ“ใ‚Œใ‚’่งฃๆฑบใ™ใ‚‹ 1 ใคใฎๆ–นๆณ•ใฏใ€ใ‚ซใƒผใƒ‰ใ‚’ไบคๆ›ใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใŸใ ใ—ใ€ใ‚ซใƒผใƒ‰ใ‚’ไบคๆ›ใงใใชใ„ๅ ดๅˆ (ใƒ‡ใƒใ‚คใ‚นใฎๅ†ทๅดใŒๅฝฑ้Ÿฟใ‚’ๅ—ใ‘ใŸๅ ดๅˆใชใฉ)ใ€`CUDA_DEVICE_ORDER=FASTEST_FIRST`ใ‚’่จญๅฎšใ™ใ‚‹ใจใ€ๅธธใซๆ–ฐใ—ใ„้ซ˜้€Ÿใ‚ซใƒผใƒ‰ใŒๆœ€ๅˆใซ้…็ฝฎใ•ใ‚Œใพใ™ใ€‚ใŸใ ใ—ใ€`nvidia-smi`ใฏไพ็„ถใจใ—ใฆ PCIe ใฎ้ †ๅบใงใƒฌใƒใƒผใƒˆใ™ใ‚‹ใŸใ‚ใ€ๅคšๅฐ‘ๆททไนฑใ™ใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ ้ †ๅบใ‚’ๅ…ฅใ‚Œๆ›ฟใˆใ‚‹ใ‚‚ใ† 1 ใคใฎ่งฃๆฑบ็ญ–ใฏใ€ไปฅไธ‹ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ```bash export CUDA_VISIBLE_DEVICES=1,0 ``` ใ“ใฎไพ‹ใงใฏ 2 ใคใฎ GPU ใ ใ‘ใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใŒใ€ใ‚‚ใกใ‚ใ‚“ใ€ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟใƒผใซๆญ่ผ‰ใ•ใ‚Œใฆใ„ใ‚‹ๆ•ฐใฎ GPU ใซใ‚‚ๅŒใ˜ใ“ใจใŒๅฝ“ใฆใฏใพใ‚Šใพใ™ใ€‚ ใพใŸใ€ใ“ใฎ็’ฐๅขƒๅค‰ๆ•ฐใ‚’่จญๅฎšใ™ใ‚‹ๅ ดๅˆใฏใ€`~/.bashrc` ใƒ•ใ‚กใ‚คใƒซใพใŸใฏใใฎไป–ใฎ่ตทๅ‹•่จญๅฎšใƒ•ใ‚กใ‚คใƒซใซ่จญๅฎšใ—ใฆใ€ๅฟ˜ใ‚Œใ‚‹ใฎใŒๆœ€ๅ–„ใงใ™ใ€‚ ## Trainer Integrations [`Trainer`] ใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅЇ็š„ใซๆ”นๅ–„ใ™ใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚ˆใ†ใซๆ‹กๅผตใ•ใ‚Œใพใ—ใŸใ€‚ ๆ™‚้–“ใจใฏใ‚‹ใ‹ใซๅคงใใชใƒขใƒ‡ใƒซใซ้ฉๅˆใ—ใพใ™ใ€‚ ็พๅœจใ€ใ‚ตใƒผใƒ‰ใƒ‘ใƒผใƒ†ใ‚ฃใฎใ‚ฝใƒชใƒฅใƒผใ‚ทใƒงใƒณ [DeepSpeed](https://github.com/microsoft/DeepSpeed) ใŠใ‚ˆใณ [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html) ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚่ซ–ๆ–‡ [ZeRO: ใƒกใƒขใƒชใฎๆœ€้ฉๅŒ–] ๅ…†ใƒ‘ใƒฉใƒกใƒผใ‚ฟ ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅ‘ใ‘ใฆใ€Samyam Rajbhandariใ€Jeff Rasleyใ€Olatunji Ruwaseใ€Yuxiong He 
่‘—](https://arxiv.org/abs/1910.02054)ใ€‚ ใ“ใฎๆไพ›ใ•ใ‚Œใ‚‹ใ‚ตใƒใƒผใƒˆใฏใ€ใ“ใฎ่จ˜ไบ‹ใฎๅŸท็ญ†ๆ™‚็‚นใงใฏๆ–ฐใ—ใใฆๅฎŸ้จ“็š„ใชใ‚‚ใฎใงใ™ใ€‚ DeepSpeed ใจ PyTorch FSDP ใฎใ‚ตใƒใƒผใƒˆใฏใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ–ใงใ‚ใ‚Šใ€ใใ‚Œใซ้–ขใ™ใ‚‹ๅ•้กŒใฏๆญ“่ฟŽใ—ใพใ™ใŒใ€FairScale ็ตฑๅˆใฏ PyTorch ใƒกใ‚คใƒณใซ็ตฑๅˆใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ใ‚‚ใ†ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ›ใ‚“ ([PyTorch FSDP ็ตฑๅˆ](#pytorch-fully-sharded-data-parallel)) <a id='zero-install-notes'></a> ### CUDA Extension Installation Notes ใ“ใฎ่จ˜ไบ‹ใฎๅŸท็ญ†ๆ™‚็‚นใงใฏใ€Deepspeed ใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€CUDA C++ ใ‚ณใƒผใƒ‰ใ‚’ใ‚ณใƒณใƒ‘ใ‚คใƒซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ™ในใฆใฎใ‚คใƒณใ‚นใƒˆใƒผใƒซใฎๅ•้กŒใฏใ€[Deepspeed](https://github.com/microsoft/DeepSpeed/issues) ใฎๅฏพๅฟœใ™ใ‚‹ GitHub ใฎๅ•้กŒใ‚’้€šใ˜ใฆๅฏพๅ‡ฆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใŒใ€ใƒ“ใƒซใƒ‰ไธญใซ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใฎใ‚ใ‚‹ไธ€่ˆฌ็š„ใชๅ•้กŒใŒใ„ใใคใ‹ใ‚ใ‚Šใพใ™ใ€‚ CUDA ๆ‹กๅผตๆฉŸ่ƒฝใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ PyTorch ๆ‹กๅผตๆฉŸ่ƒฝใ€‚ ใ—ใŸใŒใฃใฆใ€ๆฌกใฎๆ“ไฝœใ‚’ๅฎŸ่กŒไธญใซ CUDA ้–ข้€ฃใฎใƒ“ใƒซใƒ‰ใฎๅ•้กŒใŒ็™บ็”Ÿใ—ใŸๅ ดๅˆใฏใ€ๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ ```bash pip install deepspeed ``` ใพใšๆฌกใฎๆณจๆ„ไบ‹้ …ใ‚’ใŠ่ชญใฟใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒŽใƒผใƒˆใงใฏใ€`pytorch` ใŒ CUDA `10.2` ใงใƒ“ใƒซใƒ‰ใ•ใ‚ŒใŸๅ ดๅˆใซไฝ•ใ‚’ใ™ในใใ‹ใฎไพ‹ใ‚’็คบใ—ใพใ™ใ€‚ใ‚ใชใŸใฎ็ŠถๆณใŒๆฌกใฎใ‚ˆใ†ใชๅ ดๅˆ ็•ฐใชใ‚‹ๅ ดๅˆใฏใ€ใƒใƒผใ‚ธใƒงใƒณ็•ชๅทใ‚’็›ฎ็š„ใฎใƒใƒผใ‚ธใƒงใƒณใซ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„ใ€‚ #### Possible problem #1 Pytorch ใซใฏ็‹ฌ่‡ชใฎ CUDA ใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใŒไป˜ๅฑžใ—ใฆใ„ใพใ™ใŒใ€ใ“ใ‚Œใ‚‰ 2 ใคใฎใƒ—ใƒญใ‚ธใ‚งใ‚ฏใƒˆใ‚’ใƒ“ใƒซใƒ‰ใ™ใ‚‹ใซใฏใ€ๅŒไธ€ใƒใƒผใ‚ธใƒงใƒณใฎ CUDA ใŒๅฟ…่ฆใงใ™ใ€‚ ใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใพใ™ใ€‚ ใŸใจใˆใฐใ€Python ็’ฐๅขƒใซ `cudatoolkit==10.2` ใ‚’ๆŒ‡ๅฎšใ—ใฆ `pytorch` ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใŸๅ ดๅˆใฏใ€ๆฌกใฎใ‚‚ใฎใ‚‚ๅฟ…่ฆใงใ™ใ€‚ CUDA `10.2` ใŒใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใพใ—ใŸใ€‚ ๆญฃ็ขบใชๅ ดๆ‰€ใฏใ‚ทใ‚นใƒ†ใƒ ใซใ‚ˆใฃใฆ็•ฐใชใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใŒใ€ๅคšใใฎใ‚ทใ‚นใƒ†ใƒ ใงใฏ`/usr/local/cuda-10.2`ใŒๆœ€ใ‚‚ไธ€่ˆฌ็š„ใชๅ ดๆ‰€ใงใ™ใ€‚ Unix ใ‚ทใ‚นใƒ†ใƒ ใ€‚ CUDA ใŒๆญฃใ—ใ่จญๅฎšใ•ใ‚Œใ€`PATH`็’ฐๅขƒๅค‰ๆ•ฐใซ่ฟฝๅŠ ใ•ใ‚Œใ‚‹ใจใ€ ๆฌกใฎใ‚ˆใ†ใซใ—ใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซๅ ดๆ‰€ใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ ```bash which nvcc ``` CUDA ใŒใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใชใ„ๅ ดๅˆใฏใ€ๆœ€ๅˆใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใฆใใ ใ•ใ„ใ€‚ใŠๆฐ—ใซๅ…ฅใ‚Šใ‚’ไฝฟ็”จใ—ใฆๆ‰‹้ †ใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ ๆคœ็ดขใ‚จใƒณใ‚ธใƒณใ€‚ใŸใจใˆใฐใ€Ubuntu ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€[ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install) ใ‚’ๆคœ็ดขใ™ใ‚‹ใจใ‚ˆใ„ใงใ—ใ‚‡ใ†ใ€‚ #### Possible problem #2 ใ‚‚ใ† 1 ใคใฎ่€ƒใˆใ‚‰ใ‚Œใ‚‹ไธ€่ˆฌ็š„ใชๅ•้กŒใฏใ€ใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซ่ค‡ๆ•ฐใฎ CUDA ใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ“ใจใงใ™ใ€‚ใŸใจใˆใฐใ‚ใชใŸ ใŒใ‚ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Š๏ผš ```bash /usr/local/cuda-10.2 /usr/local/cuda-11.0 ``` ใ“ใฎ็Šถๆณใงใฏใ€`PATH` ใŠใ‚ˆใณ `LD_LIBRARY_PATH` ็’ฐๅขƒๅค‰ๆ•ฐใซไปฅไธ‹ใŒๅซใพใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ็›ฎ็š„ใฎ CUDA ใƒใƒผใ‚ธใƒงใƒณใธใฎๆญฃใ—ใ„ใƒ‘ใ‚นใ€‚้€šๅธธใ€ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธ ใ‚คใƒณใ‚นใƒˆใƒผใƒฉใƒผใฏใ€ใ“ใ‚Œใ‚‰ใซใ€ ๆœ€ๅพŒใฎใƒใƒผใ‚ธใƒงใƒณใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใพใ—ใŸใ€‚้ฉๅˆ‡ใชใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใŒ่ฆ‹ใคใ‹ใ‚‰ใชใ„ใŸใ‚ใซใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใฎใƒ“ใƒซใƒ‰ใŒๅคฑๆ•—ใ™ใ‚‹ใจใ„ใ†ๅ•้กŒใŒ็™บ็”Ÿใ—ใŸๅ ดๅˆใฏใ€ CUDA 
ใƒใƒผใ‚ธใƒงใƒณใŒใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใซใ‚‚ใ‹ใ‹ใ‚ใ‚‰ใšใ€ๅ‰่ฟฐใฎ 2 ใคใ‚’่ชฟๆ•ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ ็’ฐๅขƒๅค‰ๆ•ฐใ€‚ ใพใšใ€ใใฎๅ†…ๅฎนใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```bash echo $PATH echo $LD_LIBRARY_PATH ``` ใใ‚Œใงใ€ไธญใซไฝ•ใŒๅ…ฅใฃใฆใ„ใ‚‹ใ‹ใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ `LD_LIBRARY_PATH` ใŒ็ฉบใงใ‚ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ `PATH` ใฏๅฎŸ่กŒๅฏ่ƒฝใƒ•ใ‚กใ‚คใƒซใŒๅญ˜ๅœจใ™ใ‚‹ๅ ดๆ‰€ใ‚’ใƒชใ‚นใƒˆใ—ใ€`LD_LIBRARY_PATH` ใฏๅ…ฑๆœ‰ใƒฉใ‚คใƒ–ใƒฉใƒชใฎๅ ดๆ‰€ใ‚’็คบใ—ใพใ™ใ€‚ ๆŽขใ™ใ“ใจใงใ™ใ€‚ใฉใกใ‚‰ใฎๅ ดๅˆใ‚‚ใ€ๅ‰ใฎใ‚จใƒณใƒˆใƒชใŒๅพŒใฎใ‚จใƒณใƒˆใƒชใ‚ˆใ‚Šๅ„ชๅ…ˆใ•ใ‚Œใพใ™ใ€‚ `:` ใฏ่ค‡ๆ•ฐใ‚’ๅŒบๅˆ‡ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ ใ‚จใƒณใƒˆใƒชใ€‚ ใ“ใ“ใงใ€ใƒ“ใƒซใƒ‰ ใƒ—ใƒญใ‚ฐใƒฉใƒ ใซ็‰นๅฎšใฎ CUDA ใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใฎๅ ดๆ‰€ใ‚’ๆŒ‡็คบใ™ใ‚‹ใซใฏใ€ๆœ€ๅˆใซใƒชใ‚นใƒˆใ•ใ‚Œใ‚‹ๅธŒๆœ›ใฎใƒ‘ใ‚นใ‚’ๆŒฟๅ…ฅใ—ใพใ™ใ€‚ ใ‚„ใฃใฆใ„ใ‚‹ใ“ใจ๏ผš ```bash export PATH=/usr/local/cuda-10.2/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH ``` ๆ—ขๅญ˜ใฎๅ€คใ‚’ไธŠๆ›ธใใ™ใ‚‹ใฎใงใฏใชใใ€ๅ…ˆ้ ญใซ่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใ‚‚ใกใ‚ใ‚“ใ€ๅฟ…่ฆใซๅฟœใ˜ใฆใƒใƒผใ‚ธใƒงใƒณ็•ชๅทใ‚„ใƒ•ใƒซใƒ‘ใ‚นใ‚’่ชฟๆ•ดใ—ใพใ™ใ€‚ๅ‰ฒใ‚Šๅฝ“ใฆใŸใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใŒๅฎŸ้š›ใซๆฉŸ่ƒฝใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ ๅญ˜ๅœจใ™ใ‚‹ใ€‚ `lib64` ใ‚ตใƒ–ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใฏใ€`libcudart.so` ใชใฉใฎใ•ใพใ–ใพใช CUDA `.so` ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒๅญ˜ๅœจใ™ใ‚‹ๅ ดๆ‰€ใงใ™ใ€‚ ใ‚ทใ‚นใƒ†ใƒ ใงใฏๅˆฅใฎๅๅ‰ใŒไป˜ใ‘ใ‚‰ใ‚Œใพใ™ใŒใ€็พๅฎŸใ‚’ๅๆ˜ ใ™ใ‚‹ใ‚ˆใ†ใซ่ชฟๆ•ดใ—ใฆใใ ใ•ใ„ใ€‚ #### Possible problem #3 ไธ€้ƒจใฎๅคใ„ CUDA ใƒใƒผใ‚ธใƒงใƒณใฏใ€ๆ–ฐใ—ใ„ใ‚ณใƒณใƒ‘ใ‚คใƒฉใงใฎใƒ“ใƒซใƒ‰ใ‚’ๆ‹’ๅฆใ™ใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€ใ‚ใชใŸใฏ`gcc-9`ใ‚’ๆŒใฃใฆใ„ใพใ™ใŒใ€ใใ‚ŒใŒๅฟ…่ฆใงใ™ `gcc-7`ใ€‚ ใใ‚Œใซใฏใ•ใพใ–ใพใชๆ–นๆณ•ใŒใ‚ใ‚Šใพใ™ใ€‚ ๆœ€ๆ–ฐใฎ CUDA ใƒ„ใƒผใƒซใ‚ญใƒƒใƒˆใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใงใใ‚‹ๅ ดๅˆใฏใ€้€šๅธธใ€ๆ–ฐใ—ใ„ใ‚ณใƒณใƒ‘ใ‚คใƒฉใŒใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ใฏใšใงใ™ใ€‚ ใ‚ใ‚‹ใ„ใฏใ€ๆ—ขใซๆ‰€ๆœ‰ใ—ใฆใ„ใ‚‹ใ‚ณใƒณใƒ‘ใ‚คใƒฉใซๅŠ ใˆใฆใ€ไธ‹ไฝใƒใƒผใ‚ธใƒงใƒณใฎใ‚ณใƒณใƒ‘ใ‚คใƒฉใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ใ™ใงใซๅญ˜ๅœจใ—ใพใ™ใŒใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใชใ„ใŸใ‚ใ€ใƒ“ใƒซใƒ‰ใ‚ทใ‚นใƒ†ใƒ ใฏใใ‚Œใ‚’่ช่ญ˜ใงใใพใ›ใ‚“ใ€‚ ใ€Œgcc-7ใ€ใŒใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใŒใ€ ใƒ“ใƒซใƒ‰ใ‚ทใ‚นใƒ†ใƒ ใŒ่ฆ‹ใคใ‹ใ‚‰ใชใ„ใจใ„ใ†ใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’่กจ็คบใ™ใ‚‹ๅ ดๅˆใฏใ€ๆฌกใฎๆ–นๆณ•ใง่งฃๆฑบใงใใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ```bash sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++ ``` ใ“ใ“ใงใฏใ€`/usr/local/cuda-10.2/bin/gcc` ใ‹ใ‚‰ `gcc-7` ใธใฎใ‚ทใƒณใƒœใƒชใƒƒใ‚ฏใƒชใƒณใ‚ฏใ‚’ไฝœๆˆใ—ใฆใ„ใพใ™ใ€‚ `/usr/local/cuda-10.2/bin/` ใฏ `PATH` ็’ฐๅขƒๅค‰ๆ•ฐๅ†…ใซใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ (ๅ‰ใฎๅ•้กŒใฎ่งฃๆฑบ็ญ–ใ‚’ๅ‚็…ง)ใ€‚ `gcc-7` (ใŠใ‚ˆใณ `g++7`) ใŒ่ฆ‹ใคใ‹ใ‚‹ใฏใšใงใ€ใƒ“ใƒซใƒ‰ใฏๆˆๅŠŸใ—ใพใ™ใ€‚ ใ„ใคใ‚‚ใฎใ‚ˆใ†ใซใ€็Šถๆณใซๅˆใ‚ใ›ใฆไพ‹ใฎใƒ‘ใ‚นใ‚’็ทจ้›†ใ—ใฆใใ ใ•ใ„ใ€‚ ### PyTorch Fully Sharded Data parallel ใ‚ˆใ‚Šๅคงใใชใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใงๅทจๅคงใชใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใซใฏใ€ๅฎŒๅ…จใซใ‚ทใƒฃใƒผใƒ‰ๅŒ–ใ•ใ‚ŒใŸใƒ‡ใƒผใ‚ฟไธฆๅˆ—ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ ใ“ใฎใ‚ฟใ‚คใƒ—ใฎใƒ‡ใƒผใ‚ฟไธฆๅˆ—ใƒ‘ใƒฉใƒ€ใ‚คใƒ ใงใฏใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ใ€ๅ‹พ้…ใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ใ“ใจใงใ€ใ‚ˆใ‚Šๅคšใใฎใƒ‡ใƒผใ‚ฟใจๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใ‚’ใƒ•ใ‚ฃใƒƒใƒ†ใ‚ฃใƒณใ‚ฐใงใใพใ™ใ€‚ 
ใ“ใฎๆฉŸ่ƒฝใจใใฎๅˆฉ็‚นใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ๅฎŒๅ…จใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐ ใƒ‡ใƒผใ‚ฟไธฆๅˆ—ใƒ–ใƒญใ‚ฐ](https://pytorch.org/blog/introducing-pytorch-full-sharded-data-Parallel-api/) ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚ ๆœ€ๆ–ฐใฎ PyTorch ใฎ Fully Sharded Data Parallel (FSDP) ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆฉŸ่ƒฝใ‚’็ตฑๅˆใ—ใพใ—ใŸใ€‚ ๅฟ…่ฆใชใฎใฏใ€่จญๅฎšใ‚’้€šใ˜ใฆๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใ ใ‘ใงใ™ใ€‚ **FSDP ใ‚ตใƒใƒผใƒˆใซๅฟ…่ฆใช PyTorch ใƒใƒผใ‚ธใƒงใƒณ**: PyTorch Nightly (ใƒชใƒชใƒผใ‚นๅพŒใซใ“ใ‚Œใ‚’่ชญใ‚“ใ ๅ ดๅˆใฏ 1.12.0) FSDP ใ‚’ๆœ‰ๅŠนใซใ—ใŸใƒขใƒ‡ใƒซใฎไฟๅญ˜ใฏใ€ๆœ€่ฟ‘ใฎไฟฎๆญฃใงใฎใฟๅˆฉ็”จใงใใ‚‹ใŸใ‚ใงใ™ใ€‚ **ไฝฟ็”จๆณ•**๏ผš - ้…ๅธƒใ•ใ‚ŒใŸใƒฉใƒณใƒใƒฃใƒผใŒ่ฟฝๅŠ ใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ ใพใ ไฝฟ็”จใ—ใฆใ„ใชใ„ๅ ดๅˆใฏใ€`-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE`ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ - **ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐๆˆฆ็•ฅ**: - FULL_SHARD : ใƒ‡ใƒผใ‚ฟไธฆๅˆ—ใƒฏใƒผใ‚ซใƒผ/GPU ใซใ‚ใŸใ‚‹ใ‚ทใƒฃใƒผใƒ‰ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ + ๅ‹พ้… + ใƒขใƒ‡ใƒซ ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ€‚ ใ“ใฎใŸใ‚ใซใฏใ€ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp full_shard`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - SHARD_GRAD_OP : ใ‚ทใƒฃใƒผใƒ‰ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ + ใƒ‡ใƒผใ‚ฟไธฆๅˆ—ใƒฏใƒผใ‚ซใƒผ/GPU ๅ…จไฝ“ใฎๅ‹พ้…ใ€‚ ใ“ใฎใŸใ‚ใซใฏใ€ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp shard_grad_op`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - NO_SHARD : ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใชใ—ใ€‚ใ“ใฎใŸใ‚ใซใฏใ€ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp no_shard`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - ใƒ‘ใƒฉใƒกใƒผใ‚ฟใจๅ‹พ้…ใ‚’ CPU ใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใ™ใ‚‹ใซใฏใ€ ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp "full_shard offload"`ใพใŸใฏ`--fsdp "shard_grad_op offload"`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - `default_auto_wrap_policy` ใ‚’ไฝฟ็”จใ—ใฆ FSDP ใงใƒฌใ‚คใƒคใƒผใ‚’่‡ชๅ‹•็š„ใซๅ†ๅธฐ็š„ใซใƒฉใƒƒใƒ—ใ™ใ‚‹ใซใฏใ€ ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp "full_shard auto_wrap"`ใพใŸใฏ`--fsdp "shard_grad_op auto_wrap"`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - CPU ใ‚ชใƒ•ใƒญใƒผใƒ‰ใจ่‡ชๅ‹•ใƒฉใƒƒใƒ”ใƒณใ‚ฐใฎไธกๆ–นใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€ ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซ`--fsdp "full_shard offload auto_wrap"`ใพใŸใฏ`--fsdp "shard_grad_op offload auto_wrap"`ใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚ - ๆฎ‹ใ‚Šใฎ FSDP ๆง‹ๆˆใฏใ€`--fsdp_config <path_to_fsdp_config.json>`ใ‚’ไป‹ใ—ใฆๆธกใ•ใ‚Œใพใ™ใ€‚ใใ‚Œใฏใ€ๆฌกใฎใ„ใšใ‚Œใ‹ใฎๅ ดๆ‰€ใงใ™ใ€‚ FSDP json ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซ (ไพ‹: `fsdp_config.json`)ใ€ใพใŸใฏใ™ใงใซใƒญใƒผใƒ‰ใ•ใ‚Œใฆใ„ใ‚‹ json ใƒ•ใ‚กใ‚คใƒซใ‚’ `dict` ใจใ—ใฆไฝฟ็”จใ—ใพใ™ใ€‚ - ่‡ชๅ‹•ใƒฉใƒƒใƒ”ใƒณใ‚ฐใŒๆœ‰ๅŠนใชๅ ดๅˆใฏใ€ใƒˆใƒฉใƒณใ‚นใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ— ใƒใƒชใ‚ทใƒผใพใŸใฏใ‚ตใ‚คใ‚บ ใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ— ใƒใƒชใ‚ทใƒผใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ - ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ—ใƒใƒชใ‚ทใƒผใฎๅ ดๅˆใ€ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใง `fsdp_transformer_layer_cls_to_wrap` ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ๆŒ‡ๅฎšใ—ใชใ„ๅ ดๅˆใ€ไฝฟ็”จๅฏ่ƒฝใชๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใฏ `model._no_split_modules` ใซใชใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใ€ใƒฉใƒƒใƒ—ใ™ใ‚‹ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผๅฑคใ‚ฏใƒฉใ‚นๅใฎใƒชใ‚นใƒˆ (ๅคงๆ–‡ๅญ—ใจๅฐๆ–‡ๅญ—ใ‚’ๅŒบๅˆฅ) ใ‚’ๆŒ‡ๅฎšใ—ใพใ™ (ไพ‹: [`BertLayer`]ใ€[`GPTJBlock`]ใ€[`T5Block`] ...)ใ€‚ ้‡ใฟใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ใ‚ตใƒ–ใƒขใ‚ธใƒฅใƒผใƒซ (ๅŸ‹ใ‚่พผใฟๅฑคใชใฉ) ใŒ็•ฐใชใ‚‹ FSDP ใƒฉใƒƒใƒ—ใ•ใ‚ŒใŸใƒฆใƒ‹ใƒƒใƒˆใซใชใ‚‰ใชใ„ใ‚ˆใ†ใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€ใ“ใ‚Œใฏ้‡่ฆใงใ™ใ€‚ ใ“ใฎใƒใƒชใ‚ทใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒžใƒซใƒใƒ˜ใƒƒใƒ‰ ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใจใใ‚Œใซ็ถšใใ„ใใคใ‹ใฎ MLP ใƒฌใ‚คใƒคใƒผใ‚’ๅซใ‚€ใƒ–ใƒญใƒƒใ‚ฏใ”ใจใซใƒฉใƒƒใƒ”ใƒณใ‚ฐใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ 
ๅ…ฑๆœ‰ๅŸ‹ใ‚่พผใฟใ‚’ๅซใ‚€ๆฎ‹ใ‚Šใฎๅฑคใฏใ€ๅŒใ˜ๆœ€ใ‚‚ๅค–ๅดใฎ FSDP ใƒฆใƒ‹ใƒƒใƒˆใซใƒฉใƒƒใƒ—ใ•ใ‚Œใ‚‹ใฎใŒไพฟๅˆฉใงใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใƒˆใƒฉใƒณใ‚นใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใซใฏใ“ใ‚Œใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ - ใ‚ตใ‚คใ‚บใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ—ใƒใƒชใ‚ทใƒผใฎๅ ดๅˆใฏใ€่จญๅฎšใƒ•ใ‚กใ‚คใƒซใซ`fsdp_min_num_params`ใ‚’่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ใ€‚ ่‡ชๅ‹•ใƒฉใƒƒใƒ”ใƒณใ‚ฐใฎใŸใ‚ใฎ FSDP ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆœ€ๅฐๆ•ฐใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ - ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใง `fsdp_backward_prefetch` ใ‚’ๆŒ‡ๅฎšใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ๆฌกใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎใ‚ปใƒƒใƒˆใ‚’ใ„ใคใƒ—ใƒชใƒ•ใ‚งใƒƒใƒใ™ใ‚‹ใ‹ใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ `backward_pre` ใจ `backward_pos` ใŒๅˆฉ็”จๅฏ่ƒฝใชใ‚ชใƒ—ใ‚ทใƒงใƒณใงใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€`torch.distributed.fsdp.full_sharded_data_Parallel.BackwardPrefetch`ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ - ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใง `fsdp_forward_prefetch` ใ‚’ๆŒ‡ๅฎšใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ๆฌกใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎใ‚ปใƒƒใƒˆใ‚’ใ„ใคใƒ—ใƒชใƒ•ใ‚งใƒƒใƒใ™ใ‚‹ใ‹ใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ `True`ใฎๅ ดๅˆใ€FSDP ใฏใƒ•ใ‚ฉใƒฏใƒผใƒ‰ ใƒ‘ใ‚นใงใฎๅฎŸ่กŒไธญใซใ€ๆฌกใซๆฅใ‚‹ใ‚ชใƒผใƒซใ‚ฎใƒฃใ‚ถใƒผใ‚’ๆ˜Ž็คบ็š„ใซใƒ—ใƒชใƒ•ใ‚งใƒƒใƒใ—ใพใ™ใ€‚ - ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใง `limit_all_gathers` ใ‚’ๆŒ‡ๅฎšใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ `True`ใฎๅ ดๅˆใ€FSDP ใฏ CPU ใ‚นใƒฌใƒƒใƒ‰ใ‚’ๆ˜Ž็คบ็š„ใซๅŒๆœŸใ—ใฆใ€ๅฎŸ่กŒไธญใฎใ‚ชใƒผใƒซใ‚ฎใƒฃใ‚ถใŒๅคšใ™ใŽใ‚‹ใฎใ‚’้˜ฒใŽใพใ™ใ€‚ - `activation_checkpointing`ใ‚’่จญๅฎšใƒ•ใ‚กใ‚คใƒซใงๆŒ‡ๅฎšใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ `True`ใฎๅ ดๅˆใ€FSDP ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€FSDP ใฎใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’ใ‚ฏใƒชใ‚ขใ™ใ‚‹ใ“ใจใงใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ๅ‰Šๆธ›ใ™ใ‚‹ๆ‰‹ๆณ•ใงใ™ใ€‚ ็‰นๅฎšใฎใƒฌใ‚คใƒคใƒผใ‚’ๅ‡ฆ็†ใ—ใ€ใƒใƒƒใ‚ฏใƒฏใƒผใƒ‰ ใƒ‘ใ‚นไธญใซใใ‚Œใ‚‰ใ‚’ๅ†่จˆ็ฎ—ใ—ใพใ™ใ€‚ไบ‹ๅฎŸไธŠใ€ใ“ใ‚Œใฏไฝ™ๅˆ†ใช่จˆ็ฎ—ๆ™‚้–“ใ‚’็Š ็‰ฒใซใ—ใพใ™ ใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ๅ‰Šๆธ›ใ—ใพใ™ใ€‚ **ๆณจๆ„ใ™ในใๆณจๆ„็‚นใŒใ„ใใคใ‹ใ‚ใ‚Šใพใ™** - ใ“ใ‚Œใฏ `generate` ใจไบ’ๆ›ๆ€งใŒใชใ„ใŸใ‚ใ€ `--predict_with_generate` ใจใ‚‚ไบ’ๆ›ๆ€งใŒใ‚ใ‚Šใพใ›ใ‚“ ใ™ในใฆใฎ seq2seq/clm ใ‚นใ‚ฏใƒชใƒ—ใƒˆ (็ฟป่จณ/่ฆ็ด„/clm ใชใฉ)ใ€‚ ๅ•้กŒ [#21667](https://github.com/huggingface/transformers/issues/21667) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ### PyTorch/XLA Fully Sharded Data parallel TPU ใƒฆใƒผใ‚ถใƒผใฎ็š†ๆง˜ใซๆœ—ๅ ฑใงใ™ใ€‚ PyTorch/XLA ใฏ FSDP ใ‚’ใ‚ตใƒใƒผใƒˆใ™ใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ ๆœ€ๆ–ฐใฎ Fully Sharded Data Parallel (FSDP) ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒใ™ในใฆใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[FSDP ใ‚’ไฝฟ็”จใ—ใŸ Cloud TPU ใงใฎ PyTorch ใƒขใƒ‡ใƒซใฎใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐ](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) ใŠใ‚ˆใณ [PyTorch/XLA ๅฎŸ่ฃ… ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ FSDP ใฎ](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp) ๅฟ…่ฆใชใฎใฏใ€่จญๅฎšใ‚’้€šใ˜ใฆๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใ ใ‘ใงใ™ใ€‚ **FSDP ใ‚ตใƒใƒผใƒˆใซๅฟ…่ฆใช PyTorch/XLA ใƒใƒผใ‚ธใƒงใƒณ**: >=2.0 **ไฝฟ็”จๆณ•**๏ผš `--fsdp "full shard"` ใ‚’ใ€`--fsdp_config <path_to_fsdp_config.json>` ใซๅŠ ใˆใ‚‰ใ‚Œใ‚‹ๆฌกใฎๅค‰ๆ›ดใจใจใ‚‚ใซๆธกใ—ใพใ™ใ€‚ - PyTorch/XLA FSDP ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใซใฏใ€`xla`ใ‚’`True`ใซ่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - `xla_fsdp_settings` ๅ€คใฏใ€XLA FSDP ใƒฉใƒƒใƒ”ใƒณใ‚ฐ ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆ ผ็ดใ™ใ‚‹่พžๆ›ธใงใ™ใ€‚ ใ‚ชใƒ—ใ‚ทใƒงใƒณใฎๅฎŒๅ…จใชใƒชใ‚นใƒˆใซใคใ„ใฆใฏใ€[ใ“ใกใ‚‰]( https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_full_sharded_data_Parallel.py)ใ€‚ - `xla_fsdp_grad_ckpt`ใ€‚ `True`ใฎๅ ดๅˆใ€ใƒใ‚นใƒˆใ•ใ‚ŒใŸ XLA 
FSDP ใงใƒฉใƒƒใƒ—ใ•ใ‚ŒใŸๅ„ใƒฌใ‚คใƒคใƒผไธŠใงๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ใ“ใฎ่จญๅฎšใฏใ€xla ใƒ•ใƒฉใ‚ฐใŒ true ใซ่จญๅฎšใ•ใ‚ŒใฆใŠใ‚Šใ€่‡ชๅ‹•ใƒฉใƒƒใƒ”ใƒณใ‚ฐ ใƒใƒชใ‚ทใƒผใŒๆŒ‡ๅฎšใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใซใฎใฟไฝฟ็”จใงใใพใ™ใ€‚ `fsdp_min_num_params` ใพใŸใฏ `fsdp_transformer_layer_cls_to_wrap`ใ€‚ - ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผ ใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ— ใƒใƒชใ‚ทใƒผใพใŸใฏใ‚ตใ‚คใ‚บ ใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ— ใƒใƒชใ‚ทใƒผใฎใ„ใšใ‚Œใ‹ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ - ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ—ใƒใƒชใ‚ทใƒผใฎๅ ดๅˆใ€ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใง `fsdp_transformer_layer_cls_to_wrap` ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ๆŒ‡ๅฎšใ—ใชใ„ๅ ดๅˆใ€ไฝฟ็”จๅฏ่ƒฝใชๅ ดๅˆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใฏ `model._no_split_modules` ใซใชใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใ€ใƒฉใƒƒใƒ—ใ™ใ‚‹ใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผๅฑคใ‚ฏใƒฉใ‚นๅใฎใƒชใ‚นใƒˆ (ๅคงๆ–‡ๅญ—ใจๅฐๆ–‡ๅญ—ใ‚’ๅŒบๅˆฅ) ใ‚’ๆŒ‡ๅฎšใ—ใพใ™ (ไพ‹: [`BertLayer`]ใ€[`GPTJBlock`]ใ€[`T5Block`] ...)ใ€‚ ้‡ใฟใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ใ‚ตใƒ–ใƒขใ‚ธใƒฅใƒผใƒซ (ๅŸ‹ใ‚่พผใฟๅฑคใชใฉ) ใŒ็•ฐใชใ‚‹ FSDP ใƒฉใƒƒใƒ—ใ•ใ‚ŒใŸใƒฆใƒ‹ใƒƒใƒˆใซใชใ‚‰ใชใ„ใ‚ˆใ†ใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใ€ใ“ใ‚Œใฏ้‡่ฆใงใ™ใ€‚ ใ“ใฎใƒใƒชใ‚ทใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒžใƒซใƒใƒ˜ใƒƒใƒ‰ ใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใจใใ‚Œใซ็ถšใใ„ใใคใ‹ใฎ MLP ใƒฌใ‚คใƒคใƒผใ‚’ๅซใ‚€ใƒ–ใƒญใƒƒใ‚ฏใ”ใจใซใƒฉใƒƒใƒ”ใƒณใ‚ฐใŒ็™บ็”Ÿใ—ใพใ™ใ€‚ ๅ…ฑๆœ‰ๅŸ‹ใ‚่พผใฟใ‚’ๅซใ‚€ๆฎ‹ใ‚Šใฎๅฑคใฏใ€ๅŒใ˜ๆœ€ใ‚‚ๅค–ๅดใฎ FSDP ใƒฆใƒ‹ใƒƒใƒˆใซใƒฉใƒƒใƒ—ใ•ใ‚Œใ‚‹ใฎใŒไพฟๅˆฉใงใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใƒˆใƒฉใƒณใ‚นใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใซใฏใ“ใ‚Œใ‚’ไฝฟ็”จใ—ใฆใใ ใ•ใ„ใ€‚ - ใ‚ตใ‚คใ‚บใƒ™ใƒผใ‚นใฎ่‡ชๅ‹•ใƒฉใƒƒใƒ—ใƒใƒชใ‚ทใƒผใฎๅ ดๅˆใฏใ€่จญๅฎšใƒ•ใ‚กใ‚คใƒซใซ`fsdp_min_num_params`ใ‚’่ฟฝๅŠ ใ—ใฆใใ ใ•ใ„ใ€‚ ่‡ชๅ‹•ใƒฉใƒƒใƒ”ใƒณใ‚ฐใฎใŸใ‚ใฎ FSDP ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆœ€ๅฐๆ•ฐใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ ### Using Trainer for accelerated PyTorch Training on Mac PyTorch v1.12 ใƒชใƒชใƒผใ‚นใซใ‚ˆใ‚Šใ€้–‹็™บ่€…ใจ็ ”็ฉถ่€…ใฏ Apple ใ‚ทใƒชใ‚ณใƒณ GPU ใ‚’ๅˆฉ็”จใ—ใฆใƒขใƒ‡ใƒซ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅคงๅน…ใซ้ซ˜้€ŸๅŒ–ใงใใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ—ใƒญใƒˆใ‚ฟใ‚คใƒ”ใƒณใ‚ฐใ‚„ๅพฎ่ชฟๆ•ดใชใฉใฎๆฉŸๆขฐๅญฆ็ฟ’ใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใ‚’ Mac ไธŠใงใƒญใƒผใ‚ซใƒซใงๅฎŸ่กŒใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ PyTorch ใฎใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใจใ—ใฆใฎ Apple ใฎ Metal Performance Shaders (MPS) ใฏใ“ใ‚Œใ‚’ๅฏ่ƒฝใซใ—ใ€ๆ–ฐใ—ใ„ `"mps"` ใƒ‡ใƒใ‚คใ‚น็ตŒ็”ฑใงไฝฟ็”จใงใใพใ™ใ€‚ ใ“ใ‚Œใซใ‚ˆใ‚Šใ€่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ใจใƒ—ใƒชใƒŸใƒ†ใ‚ฃใƒ–ใŒ MPS Graph ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใจ MPS ใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใ‚‹่ชฟๆ•ดใ•ใ‚ŒใŸใ‚ซใƒผใƒใƒซใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ๅ…ฌๅผใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ [Mac ใงใฎ Accelerated PyTorch Training ใฎ็ดนไป‹](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใŠใ‚ˆใณ [MPS ใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰](https://pytorch.org/docs/stable/notes/mps.html)ใ€‚ <Tip warning={false}> MacOS ใƒžใ‚ทใƒณใซ PyTorch >= 1.13 (ๅŸท็ญ†ๆ™‚็‚นใงใฏใƒŠใ‚คใƒˆใƒชใƒผ ใƒใƒผใ‚ธใƒงใƒณ) ใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ใƒˆใƒฉใƒณใ‚นใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซใฎใƒขใƒ‡ใƒซใฎๆญฃ็ขบๆ€งใจใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅ‘ไธŠใซ้–ข้€ฃใ™ใ‚‹ไธป่ฆใชไฟฎๆญฃใŒ่กŒใ‚ใ‚Œใฆใ„ใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€https://github.com/pytorch/pytorch/issues/82707 ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> **Apple Silicon ใƒใƒƒใƒ—ใ‚’ไฝฟ็”จใ—ใŸใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๆŽจ่ซ–ใฎๅˆฉ็‚น** 1. ใƒฆใƒผใ‚ถใƒผใŒใƒญใƒผใ‚ซใƒซใงๅคง่ฆๆจกใชใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚„ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ 2. 
ユニファイド メモリ アーキテクチャにより、データ取得の遅延が短縮され、GPU がメモリ ストア全体に直接アクセスできるようになります。したがって、エンドツーエンドのパフォーマンスが向上します。
3. クラウドベースの開発に関連するコストや、追加のローカル GPU の必要性を削減します。

**前提条件**: mps サポートを備えた torch をインストールするには、この素晴らしいメディア記事 [GPU アクセラレーションが M1 Mac の PyTorch に登場](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1) に従ってください。

**使用法**：

`mps` デバイスは、`cuda` デバイスが使用される方法と同様に、利用可能な場合はデフォルトで使用されます。したがって、ユーザーによるアクションは必要ありません。たとえば、以下のコマンドを使用して、Apple Silicon GPU を使用して公式の Glue テキスト分類タスクを (ルート フォルダーから) 実行できます。

```bash
export TASK_NAME=mrpc

python examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ \
  --overwrite_output_dir
```

**注意すべきいくつかの注意事項**

1. 一部の PyTorch 操作は mps に実装されていないため、エラーがスローされます。これを回避する 1 つの方法は、環境変数 `PYTORCH_ENABLE_MPS_FALLBACK=1` を設定することです。これらの操作では CPU にフォールバックします。ただし、それでも UserWarning が表示されます。
2. 分散セットアップ `gloo` および `nccl` は、`mps` デバイスでは動作しません。これは、現在「mps」デバイス タイプの単一 GPU のみを使用できることを意味します。

最後に、覚えておいてください。 🤗 `Trainer` は MPS バックエンドのみを統合するため、MPS バックエンドの使用に関して問題や質問がある場合は、[PyTorch GitHub](https://github.com/pytorch/pytorch/issues) に issue を提出してください。

## Using Accelerate Launcher with Trainer

[`Trainer`] は Accelerate によって強化されました。ユーザーが期待できることは次のとおりです。

- FSDP、DeepSpeed などのトレーナー インテグレーションを、トレーナー引数に対して変更せずに使用し続けることができます。
- トレーナーで Accelerate Launcher を使用できるようになりました (推奨)。

トレーナーで Accelerate Launcher を使用する手順:

1. 🤗 Accelerate がインストールされていることを確認してください。Accelerate がないと `Trainer` を使用することはできません。そうでない場合は、`pip install accelerate` してください。 Accelerate のバージョンを更新する必要がある場合もあります: `pip install accelerate --upgrade`
2. 
`accelerate config`ใ‚’ๅฎŸ่กŒใ—ใ€ใ‚ขใƒณใ‚ฑใƒผใƒˆใซ่จ˜ๅ…ฅใ—ใพใ™ใ€‚ไปฅไธ‹ใฏๅŠ ้€Ÿ่จญๅฎšใฎไพ‹ใงใ™ใ€‚ ๏ฝ๏ผŽ DDP ใƒžใƒซใƒใƒŽใƒผใƒ‰ ใƒžใƒซใƒ GPU ๆง‹ๆˆ: ```yaml compute_environment: LOCAL_MACHINE distributed_type: MULTI_GPU downcast_bf16: 'no' gpu_ids: all machine_rank: 0 #change rank as per the node main_process_ip: 192.168.20.1 main_process_port: 9898 main_training_function: main mixed_precision: fp16 num_machines: 2 num_processes: 8 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` b. FSDP config: ```yaml compute_environment: LOCAL_MACHINE distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: BertLayer fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` c.ใƒ•ใ‚กใ‚คใƒซใ‚’ๆŒ‡ใ™ DeepSpeed ๆง‹ๆˆ: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_config_file: /home/user/configs/ds_zero3_config.json zero3_init_flag: true distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` d.ๅŠ ้€Ÿใƒ—ใƒฉใ‚ฐใ‚คใƒณใ‚’ไฝฟ็”จใ—ใŸ DeepSpeed ๆง‹ๆˆ: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 0.7 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: true zero_stage: 2 distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` 3. ๅŠ ้€Ÿ่จญๅฎšใพใŸใฏใƒฉใƒณใƒใƒฃใƒผๅผ•ๆ•ฐใซใ‚ˆใฃใฆไธŠ่จ˜ใงๅ‡ฆ็†ใ•ใ‚ŒใŸๅผ•ๆ•ฐไปฅๅค–ใฎๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€ใƒˆใƒฌใƒผใƒŠใƒผ ใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ไปฅไธ‹ใฏใ€ไธŠ่จ˜ใฎ FSDP ๆง‹ๆˆใง`accelerate launcher`ใ‚’ไฝฟ็”จใ—ใฆ`run_glue.py`ใ‚’ๅฎŸ่กŒใ™ใ‚‹ไพ‹ใงใ™ใ€‚ ```bash cd transformers accelerate launch \ ./examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir ``` 4. 
`accelerate launch`ใ™ใ‚‹ใŸใ‚ใฎ cmd ๅผ•ๆ•ฐใ‚’็›ดๆŽฅไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ไธŠใฎไพ‹ใฏๆฌกใฎใ‚ˆใ†ใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ•ใ‚Œใพใ™ใ€‚ ```bash cd transformers accelerate launch --num_processes=2 \ --use_fsdp \ --mixed_precision=bf16 \ --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \ --fsdp_transformer_layer_cls_to_wrap="BertLayer" \ --fsdp_sharding_strategy=1 \ --fsdp_state_dict_type=FULL_STATE_DICT \ ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --overwrite_output_dir ``` ่ฉณ็ดฐใซใคใ„ใฆใฏใ€๐Ÿค— Accelerate CLI ใ‚ฌใ‚คใƒ‰ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„: [๐Ÿค— Accelerate ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฎ่ตทๅ‹•](https://huggingface.co/docs/accelerate/basic_tutorials/launch)ใ€‚ ็งปๅ‹•ใ•ใ‚ŒใŸใ‚ปใ‚ฏใ‚ทใƒงใƒณ: [ <a href="./deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed"></a> | <a href="./deepspeed#deepspeed-installation">Installation</a><a id="installation"></a> | <a href="./deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deployment-with-multiple-gpus"></a> | <a href="./deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deployment-with-one-gpu"></a> | <a href="./deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deployment-in-notebooks"></a> | <a href="./deepspeed#deepspeed-config">Configuration</a><a id="configuration"></a> | <a href="./deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="passing-configuration"></a> | <a href="./deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="shared-configuration"></a> | <a href="./deepspeed#deepspeed-zero">ZeRO</a><a id="zero"></a> | <a href="./deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="zero-2-config"></a> | <a href="./deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="zero-3-config"></a> | <a href="./deepspeed#deepspeed-nvme">NVMe Support</a><a id="nvme-support"></a> | <a href="./deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="zero-2-vs-zero-3-performance"></a> | <a href="./deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="zero-2-example"></a> | <a href="./deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="zero-3-example"></a> | <a href="./deepspeed#deepspeed-optimizer">Optimizer</a><a id="optimizer"></a> | <a href="./deepspeed#deepspeed-scheduler">Scheduler</a><a id="scheduler"></a> | <a href="./deepspeed#deepspeed-fp32">fp32 Precision</a><a id="fp32-precision"></a> | <a href="./deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="automatic-mixed-precision"></a> | <a href="./deepspeed#deepspeed-bs">Batch Size</a><a id="batch-size"></a> | <a href="./deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="gradient-accumulation"></a> | <a href="./deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="gradient-clipping"></a> | <a href="./deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="getting-the-model-weights-out"></a> ]
hf_public_repos/transformers/docs/source/ja/main_classes/model.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Models ใƒ™ใƒผใ‚นใ‚ฏใƒฉใ‚นใงใ‚ใ‚‹ [`PreTrainedModel`]ใ€[`TFPreTrainedModel`]ใ€[`FlaxPreTrainedModel`] ใฏใ€ใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟใจไฟๅญ˜ใซ้–ขใ™ใ‚‹ๅ…ฑ้€šใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅฎŸ่ฃ…ใ—ใฆใŠใ‚Šใ€ใ“ใ‚Œใฏใƒญใƒผใ‚ซใƒซใฎใƒ•ใ‚กใ‚คใƒซใ‚„ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‹ใ‚‰ใ€ใพใŸใฏใƒฉใ‚คใƒ–ใƒฉใƒชใŒๆไพ›ใ™ใ‚‹ไบ‹ๅ‰ๅญฆ็ฟ’ใƒขใƒ‡ใƒซๆง‹ๆˆ๏ผˆHuggingFaceใฎAWS S3ใƒชใƒใ‚ธใƒˆใƒชใ‹ใ‚‰ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰๏ผ‰ใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’่ชญใฟ่พผใ‚€ใŸใ‚ใซไฝฟ็”จใงใใพใ™ใ€‚ [`PreTrainedModel`] ใจ [`TFPreTrainedModel`] ใฏใ€ๆฌกใฎๅ…ฑ้€šใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚‚ๅฎŸ่ฃ…ใ—ใฆใ„ใพใ™๏ผš - ่ชžๅฝ™ใซๆ–ฐใ—ใ„ใƒˆใƒผใ‚ฏใƒณใŒ่ฟฝๅŠ ใ•ใ‚ŒใŸๅ ดๅˆใซใ€ๅ…ฅๅŠ›ใƒˆใƒผใ‚ฏใƒณๅŸ‹ใ‚่พผใฟใฎใƒชใ‚ตใ‚คใ‚บใ‚’่กŒใ† - ใƒขใƒ‡ใƒซใฎใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณใƒ˜ใƒƒใƒ‰ใ‚’ๅˆˆใ‚Š่พผใ‚€ ๅ„ใƒขใƒ‡ใƒซใซๅ…ฑ้€šใ™ใ‚‹ใใฎไป–ใฎใƒกใ‚ฝใƒƒใƒ‰ใฏใ€[`~modeling_utils.ModuleUtilsMixin`]๏ผˆPyTorchใƒขใƒ‡ใƒซ็”จ๏ผ‰ใŠใ‚ˆใณ[`~modeling_tf_utils.TFModuleUtilsMixin`]๏ผˆTensorFlowใƒขใƒ‡ใƒซ็”จ๏ผ‰ใงๅฎš็พฉใ•ใ‚ŒใฆใŠใ‚Šใ€ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใฎๅ ดๅˆใ€[`~generation.GenerationMixin`]๏ผˆPyTorchใƒขใƒ‡ใƒซ็”จ๏ผ‰ใ€[`~generation.TFGenerationMixin`]๏ผˆTensorFlowใƒขใƒ‡ใƒซ็”จ๏ผ‰ใ€ใŠใ‚ˆใณ[`~generation.FlaxGenerationMixin`]๏ผˆFlax/JAXใƒขใƒ‡ใƒซ็”จ๏ผ‰ใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ## PreTrainedModel [[autodoc]] PreTrainedModel - push_to_hub - all <a id='from_pretrained-torch-dtype'></a> ### ๅคง่ฆๆจกใƒขใƒ‡ใƒซใฎ่ชญใฟ่พผใฟ Transformers 4.20.0ใงใฏใ€[`~PreTrainedModel.from_pretrained`] ใƒกใ‚ฝใƒƒใƒ‰ใŒๅ†่จญ่จˆใ•ใ‚Œใ€[Accelerate](https://huggingface.co/docs/accelerate/big_modeling) ใ‚’ไฝฟ็”จใ—ใฆๅคง่ฆๆจกใƒขใƒ‡ใƒซใ‚’ๆ‰ฑใ†ใ“ใจใŒๅฏ่ƒฝใซใชใ‚Šใพใ—ใŸใ€‚ใ“ใ‚Œใซใฏ Accelerate >= 0.9.0 ใจ PyTorch >= 1.9.0 ใŒๅฟ…่ฆใงใ™ใ€‚ไปฅๅ‰ใฎๆ–นๆณ•ใงใƒ•ใƒซใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใ€ใใฎๅพŒไบ‹ๅ‰ๅญฆ็ฟ’ใฎ้‡ใฟใ‚’่ชญใฟ่พผใ‚€ไปฃใ‚ใ‚Šใซ๏ผˆใ“ใ‚Œใซใฏใƒกใƒขใƒชๅ†…ใฎใƒขใƒ‡ใƒซใ‚ตใ‚คใ‚บใŒ2ๅ€ๅฟ…่ฆใงใ€ใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซ็”จใจ้‡ใฟ็”จใฎ2ใคใŒๅฟ…่ฆใงใ—ใŸ๏ผ‰ใ€ใƒขใƒ‡ใƒซใ‚’็ฉบใฎๅค–ๆฎปใจใ—ใฆไฝœๆˆใ—ใ€ไบ‹ๅ‰ๅญฆ็ฟ’ใฎ้‡ใฟใŒ่ชญใฟ่พผใพใ‚Œใ‚‹ใจใใซใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใ‚’ๅฎŸไฝ“ๅŒ–ใ™ใ‚‹ใ‚ชใƒ—ใ‚ทใƒงใƒณใŒ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใฏ `low_cpu_mem_usage=True` ใงๆœ‰ๅŠนใซใงใใพใ™ใ€‚ใƒขใƒ‡ใƒซใฏใพใš็ฉบใฎ้‡ใฟใ‚’ๆŒใคใƒกใ‚ฟใƒ‡ใƒใ‚คใ‚นไธŠใซไฝœๆˆใ•ใ‚Œใ€ใใฎๅพŒ็Šถๆ…‹่พžๆ›ธใŒๅ†…้ƒจใซ่ชญใฟ่พผใพใ‚Œใพใ™๏ผˆใ‚ทใƒฃใƒผใƒ‰ใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎๅ ดๅˆใ€ใ‚ทใƒฃใƒผใƒ‰ใ”ใจใซ่ชญใฟ่พผใพใ‚Œใพใ™๏ผ‰ใ€‚ใ“ใฎๆ–นๆณ•ใงไฝฟ็”จใ•ใ‚Œใ‚‹ๆœ€ๅคงRAMใฏใ€ใƒขใƒ‡ใƒซใฎๅฎŒๅ…จใชใ‚ตใ‚คใ‚บใ ใ‘ใงใ™ใ€‚ ```py from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True) ``` ใ•ใ‚‰ใซใ€ใƒขใƒ‡ใƒซใŒๅฎŒๅ…จใซRAMใซๅŽใพใ‚‰ใชใ„ๅ 
ดๅˆ๏ผˆ็พๆ™‚็‚นใงใฏๆŽจ่ซ–ใฎใฟๆœ‰ๅŠน๏ผ‰ใ€็•ฐใชใ‚‹ใƒ‡ใƒใ‚คใ‚นใซใƒขใƒ‡ใƒซใ‚’็›ดๆŽฅ้…็ฝฎใงใใพใ™ใ€‚`device_map="auto"` ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€Accelerateใฏๅ„ใƒฌใ‚คใƒคใƒผใ‚’ใฉใฎใƒ‡ใƒใ‚คใ‚นใซ้…็ฝฎใ™ใ‚‹ใ‹ใ‚’ๆฑบๅฎšใ—ใ€ๆœ€้€Ÿใฎใƒ‡ใƒใ‚คใ‚น๏ผˆGPU๏ผ‰ใ‚’ๆœ€ๅคง้™ใซๆดป็”จใ—ใ€ๆฎ‹ใ‚Šใฎ้ƒจๅˆ†ใ‚’CPUใ€ใ‚ใ‚‹ใ„ใฏGPU RAMใŒไธ่ถณใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใƒใƒผใƒ‰ใƒ‰ใƒฉใ‚คใƒ–ใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใŒ่ค‡ๆ•ฐใฎใƒ‡ใƒใ‚คใ‚นใซๅˆ†ๅ‰ฒใ•ใ‚Œใฆใ„ใฆใ‚‚ใ€้€šๅธธใฉใŠใ‚ŠๅฎŸ่กŒใ•ใ‚Œใพใ™ใ€‚ `device_map` ใ‚’ๆธกใ™้š›ใ€`low_cpu_mem_usage` ใฏ่‡ชๅ‹•็š„ใซ `True` ใซ่จญๅฎšใ•ใ‚Œใ‚‹ใŸใ‚ใ€ใใ‚Œใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ```py from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto") ``` ใƒขใƒ‡ใƒซใŒใƒ‡ใƒใ‚คใ‚น้–“ใงใฉใฎใ‚ˆใ†ใซๅˆ†ๅ‰ฒใ•ใ‚ŒใŸใ‹ใฏใ€ใใฎ `hf_device_map` ๅฑžๆ€งใ‚’่ฆ‹ใ‚‹ใ“ใจใง็ขบ่ชใงใใพใ™: ```py t0pp.hf_device_map ``` ```python out {'shared': 0, 'decoder.embed_tokens': 0, 'encoder': 0, 'decoder.block.0': 0, 'decoder.block.1': 1, 'decoder.block.2': 1, 'decoder.block.3': 1, 'decoder.block.4': 1, 'decoder.block.5': 1, 'decoder.block.6': 1, 'decoder.block.7': 1, 'decoder.block.8': 1, 'decoder.block.9': 1, 'decoder.block.10': 1, 'decoder.block.11': 1, 'decoder.block.12': 1, 'decoder.block.13': 1, 'decoder.block.14': 1, 'decoder.block.15': 1, 'decoder.block.16': 1, 'decoder.block.17': 1, 'decoder.block.18': 1, 'decoder.block.19': 1, 'decoder.block.20': 1, 'decoder.block.21': 1, 'decoder.block.22': 'cpu', 'decoder.block.23': 'cpu', 'decoder.final_layer_norm': 'cpu', 'decoder.dropout': 'cpu', 'lm_head': 'cpu'} ``` ๅŒใ˜ใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใซๅพ“ใฃใฆใ€็‹ฌ่‡ชใฎใƒ‡ใƒใ‚คใ‚นใƒžใƒƒใƒ—ใ‚’ไฝœๆˆใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™๏ผˆใƒฌใ‚คใƒคใƒผๅใ‹ใ‚‰ใƒ‡ใƒใ‚คใ‚นใธใฎ่พžๆ›ธใงใ™๏ผ‰ใ€‚ใƒขใƒ‡ใƒซใฎใ™ในใฆใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๆŒ‡ๅฎšใ•ใ‚ŒใŸใƒ‡ใƒใ‚คใ‚นใซใƒžใƒƒใƒ—ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใŒใ€1ใคใฎใƒฌใ‚คใƒคใƒผใŒๅฎŒๅ…จใซๅŒใ˜ใƒ‡ใƒใ‚คใ‚นใซใ‚ใ‚‹ๅ ดๅˆใ€ใใฎใƒฌใ‚คใƒคใƒผใฎใ‚ตใƒ–ใƒขใ‚ธใƒฅใƒผใƒซใฎใ™ในใฆใŒใฉใ“ใซ่กŒใใ‹ใฎ่ฉณ็ดฐใ‚’็คบใ™ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ไพ‹ใˆใฐใ€ๆฌกใฎใƒ‡ใƒใ‚คใ‚นใƒžใƒƒใƒ—ใฏT0ppใซ้ฉใ—ใฆใ„ใพใ™๏ผˆGPUใƒกใƒขใƒชใŒใ‚ใ‚‹ๅ ดๅˆ๏ผ‰: ```python device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1} ``` ใƒขใƒ‡ใƒซใฎใƒกใƒขใƒชใธใฎๅฝฑ้Ÿฟใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‹ใ‚‚ใ† 1 ใคใฎๆ–นๆณ•ใฏใ€ไฝŽ็ฒพๅบฆใฎ dtype (`torch.float16` ใชใฉ) ใงใƒขใƒ‡ใƒซใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ใ‹ใ€ไปฅไธ‹ใง่ชฌๆ˜Žใ™ใ‚‹็›ดๆŽฅ้‡ๅญๅŒ–ๆ‰‹ๆณ•ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ### Model Instantiation dtype Pytorch ใงใฏใ€ใƒขใƒ‡ใƒซใฏ้€šๅธธ `torch.float32` ๅฝขๅผใงใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ—ใ‚ˆใ†ใจใ™ใ‚‹ใจๅ•้กŒใซใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ ้‡ใฟใŒ fp16 ใซใ‚ใ‚‹ใƒขใƒ‡ใƒซใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใจใ€2 ๅ€ใฎใƒกใƒขใƒชใŒๅฟ…่ฆใซใชใ‚‹ใŸใ‚ใงใ™ใ€‚ใ“ใฎๅˆถ้™ใ‚’ๅ…‹ๆœใ™ใ‚‹ใซใฏใ€ๆฌกใฎใ“ใจใŒใงใใพใ™ใ€‚ `torch_dtype` ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ€็›ฎ็š„ใฎ `dtype` ใ‚’ๆ˜Ž็คบ็š„ใซๆธกใ—ใพใ™ใ€‚ ```python model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16) ``` ใพใŸใฏใ€ใƒขใƒ‡ใƒซใ‚’ๅธธใซๆœ€้ฉใชใƒกใƒขใƒช ใƒ‘ใ‚ฟใƒผใƒณใงใƒญใƒผใƒ‰ใ—ใŸใ„ๅ ดๅˆใฏใ€็‰นๅˆฅใชๅ€ค `"auto"` ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ ใใ—ใฆใ€`dtype` ใฏใƒขใƒ‡ใƒซใฎ้‡ใฟใ‹ใ‚‰่‡ชๅ‹•็š„ใซๅฐŽๅ‡บใ•ใ‚Œใพใ™ใ€‚ ```python model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto") ``` ใ‚นใ‚ฏใƒฉใƒƒใƒใ‹ใ‚‰ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใซใฏใ€ใฉใฎ `dtype` 
ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹ใ‚’ๆŒ‡็คบใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```python config = T5Config.from_pretrained("t5") model = AutoModel.from_config(config) ``` Pytorch ใฎ่จญ่จˆใซใ‚ˆใ‚Šใ€ใ“ใฎๆฉŸ่ƒฝใฏๆตฎๅ‹•ๅฐๆ•ฐ็‚น dtype ใงใฎใฟไฝฟ็”จใงใใพใ™ใ€‚ ## ModuleUtilsMixin [[autodoc]] modeling_utils.ModuleUtilsMixin ## TFPreTrainedModel [[autodoc]] TFPreTrainedModel - push_to_hub - all ## TFModelUtilsMixin [[autodoc]] modeling_tf_utils.TFModelUtilsMixin ## FlaxPreTrainedModel [[autodoc]] FlaxPreTrainedModel - push_to_hub - all ## Pushing to the Hub [[autodoc]] utils.PushToHubMixin ## Sharded checkpoints [[autodoc]] modeling_utils.load_sharded_checkpoint
hf_public_repos/transformers/docs/source/ja/main_classes/tokenizer.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Tokenizer

トークナイザーは、モデルの入力の準備を担当します。ライブラリには、すべてのモデルのトークナイザーが含まれています。ほとんどのトークナイザーには、完全な Python 実装と、Rust ライブラリ [🤗 Tokenizers](https://github.com/huggingface/tokenizers) に基づく「高速」実装の 2 つのバージョンがあります。「高速」実装では次のことが可能になります。

1. 特にバッチトークン化を行う場合の大幅なスピードアップ
2. 元の文字列 (文字と単語) とトークン空間の間でマッピングする追加のメソッド (例: 特定の文字を含むトークンのインデックス、または特定のトークンに対応する文字の範囲)。

基本クラスである [`PreTrainedTokenizer`] および [`PreTrainedTokenizerFast`] は、モデル入力となる文字列入力をエンコードし (以下を参照)、Python トークナイザーと「高速」トークナイザーをローカル ファイルまたはディレクトリ、あるいはライブラリによって提供される事前トレーニング済みトークナイザー (HuggingFace の AWS S3 リポジトリからダウンロード) からインスタンス化/保存するための共通メソッドを実装します。両方とも、共通メソッドを含む [`~tokenization_utils_base.PreTrainedTokenizerBase`] と [`~tokenization_utils_base.SpecialTokensMixin`] に依存しています。

したがって、[`PreTrainedTokenizer`] と [`PreTrainedTokenizerFast`] は、すべてのトークナイザーを使用するための主要なメソッドを実装しています。

- トークン化 (文字列をサブワード トークン文字列に分割)、トークン文字列から ID への変換とその逆の変換、およびエンコード/デコード (つまり、トークン化と整数への変換)。
- 基礎となる構造 (BPE、SentencePiece...) から独立した方法で、語彙に新しいトークンを追加します。
- 特別なトークン (マスク、文の始まりなど) の管理: トークンの追加、および簡単にアクセスでき、トークン化中に分割されないようにするためのトークナイザーの属性への割り当て。

[`BatchEncoding`] は、[`~tokenization_utils_base.PreTrainedTokenizerBase`] のエンコード メソッド (`__call__`、`encode_plus` および `batch_encode_plus`) の出力を保持するクラスであり、Python 辞書から派生しています。トークナイザーが純粋な Python のトークナイザーの場合、このクラスは標準の Python 辞書と同じように動作し、これらのメソッドによって計算されたさまざまなモデル入力 (`input_ids`、`attention_mask`...) を保持します。トークナイザーが「高速」トークナイザーである場合 (つまり、HuggingFace [トークナイザー ライブラリ](https://github.com/huggingface/tokenizers) に基づく場合)、このクラスはさらに、元の文字列 (文字と単語) とトークン空間の間でマッピングするためのいくつかの高度なアライメント メソッドを提供します (例: 指定された文字を構成するトークンのインデックス、または指定されたトークンに対応する文字の範囲の取得)。

## PreTrainedTokenizer

[[autodoc]] PreTrainedTokenizer
    - __call__
    - apply_chat_template
    - batch_decode
    - decode
    - encode
    - push_to_hub
    - all

## PreTrainedTokenizerFast

[`PreTrainedTokenizerFast`] は [tokenizers](https://huggingface.co/docs/tokenizers) ライブラリに依存します。🤗 Tokenizers ライブラリから取得したトークナイザーは、🤗 Transformers に非常に簡単にロードできます。これがどのように行われるかを理解するには、[🤗 tokenizers からの tokenizers を使用する](../fast_tokenizers) ページを参照してください。

[[autodoc]] PreTrainedTokenizerFast
    - __call__
    - apply_chat_template
    - batch_decode
    - decode
    - encode
    - push_to_hub
    - all

## BatchEncoding

[[autodoc]] BatchEncoding
hf_public_repos/transformers/docs/source/ja/main_classes/deepspeed.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DeepSpeed Integration [DeepSpeed](https://github.com/microsoft/DeepSpeed) ใฏใ€[ZeRO ่ซ–ๆ–‡](https://arxiv.org/abs/1910.02054) ใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹ใ™ในใฆใ‚’ๅฎŸ่ฃ…ใ—ใพใ™ใ€‚็พๅœจใ€ๆฌกใฎใ‚‚ใฎใ‚’ๅฎŒๅ…จใซใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ 1. ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ๅˆ†ๅ‰ฒ (ZeRO ใ‚นใƒ†ใƒผใ‚ธ 1) 2. ๅ‹พ้…ๅˆ†ๅ‰ฒ (ZeRO ใ‚นใƒ†ใƒผใ‚ธ 2) 3. ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใฎๅˆ†ๅ‰ฒ (ZeRO ใ‚นใƒ†ใƒผใ‚ธ 3) 4. ใ‚ซใ‚นใ‚ฟใƒ ๆททๅˆ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅ‡ฆ็† 5. ไธ€้€ฃใฎ้ซ˜้€Ÿ CUDA ๆ‹กๅผตใƒ™ใƒผใ‚นใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ 6. CPU ใŠใ‚ˆใณ NVMe ใธใฎ ZeRO ใ‚ชใƒ•ใƒญใƒผใƒ‰ ZeRO-Offload ใซใฏ็‹ฌ่‡ชใฎๅฐ‚็”จใƒšใƒผใƒ‘ใƒผใŒใ‚ใ‚Šใพใ™: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)ใ€‚ NVMe ใ‚ตใƒใƒผใƒˆใซใคใ„ใฆใฏใ€่ซ–ๆ–‡ [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)ใ€‚ DeepSpeed ZeRO-2 ใฏใ€ใใฎๆฉŸ่ƒฝใŒๆŽจ่ซ–ใซใฏๅฝนใซ็ซ‹ใŸใชใ„ใŸใ‚ใ€ไธปใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎใฟใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ DeepSpeed ZeRO-3 ใฏใ€ๅทจๅคงใชใƒขใƒ‡ใƒซใ‚’่ค‡ๆ•ฐใฎ GPU ใซใƒญใƒผใƒ‰ใงใใ‚‹ใŸใ‚ใ€ๆŽจ่ซ–ใซใ‚‚ไฝฟ็”จใงใใพใ™ใ€‚ ๅ˜ไธ€ใฎ GPU ใงใฏไธๅฏ่ƒฝใงใ™ใ€‚ ๐Ÿค— Transformers ใฏใ€2 ใคใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ไป‹ใ—ใฆ [DeepSpeed](https://github.com/microsoft/DeepSpeed) ใ‚’็ตฑๅˆใ—ใพใ™ใ€‚ 1. [`Trainer`] ใซใ‚ˆใ‚‹ใ‚ณใ‚ข DeepSpeed ๆฉŸ่ƒฝใฎ็ตฑๅˆใ€‚ไฝ•ใงใ‚‚ใ‚„ใฃใฆใใ‚Œใ‚‹ใ‚ฟใ‚คใƒ—ใงใ™ ็ตฑๅˆใฎๅ ดๅˆ - ใ‚ซใ‚นใ‚ฟใƒ ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใ‹ใ€ใƒ†ใƒณใƒ—ใƒฌใƒผใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใ ใ‘ใงใ€ไป–ใซไฝ•ใ‚‚ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใŸใ„ใฆใ„ใฎ ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ“ใฎๆฉŸ่ƒฝใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใ„ใพใ™ใ€‚ 2. [`Trainer`] ใ‚’ไฝฟ็”จใ›ใšใ€DeepSpeed ใ‚’็ตฑๅˆใ—ใŸ็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒŠใƒผใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆ `from_pretrained` ใ‚„ `from_config` ใชใฉใฎใ‚ณใ‚ขๆฉŸ่ƒฝใซใฏใ€้‡่ฆใชๆฉŸ่ƒฝใฎ็ตฑๅˆใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ ZeRO ใ‚นใƒ†ใƒผใ‚ธ 3 ไปฅ้™ใฎ `zero.Init`ใชใฉใฎ DeepSpeed ใฎ้ƒจๅˆ†ใ€‚ใ“ใฎๆฉŸ่ƒฝใ‚’ๆดป็”จใ™ใ‚‹ใซใฏใ€ๆฌกใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ใŠ่ชญใฟใใ ใ•ใ„ใ€‚ [้žใƒˆใƒฌใƒผใƒŠใƒผ DeepSpeed ็ตฑๅˆ](#nontrainer-deepspeed-integration)ใ€‚ ็ตฑๅˆใ•ใ‚Œใฆใ„ใ‚‹ใ‚‚ใฎ: ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ๏ผš 1. DeepSpeed ZeRO ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ€ZeRO-Infinity (CPU ใŠใ‚ˆใณ NVME ใ‚ชใƒ•ใƒญใƒผใƒ‰) ใ‚’ไฝฟ็”จใ—ใฆๅฎŒๅ…จใช ZeRO ใ‚นใƒ†ใƒผใ‚ธ 1ใ€2ใ€ใŠใ‚ˆใณ 3 ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใพใ™ใ€‚ ๆŽจ่ซ–๏ผš 1. 
DeepSpeed ZeRO Inference は、ZeRO-Infinity による ZeRO ステージ 3 をサポートします。トレーニングと同じ ZeRO プロトコルを使用しますが、オプティマイザと lr スケジューラは使用せず、ステージ 3 のみが推論に関連します。詳細については、[ゼロ推論](#zero-inference) を参照してください。

DeepSpeed Inference もあります。これは、ZeRO の代わりに Tensor Parallelism を使用する、まったく異なるテクノロジーです (近日公開)。

<a id='deepspeed-trainer-integration'></a>

## Trainer Deepspeed Integration

<a id='deepspeed-installation'></a>

### Installation

pypi 経由でライブラリをインストールします。

```bash
pip install deepspeed
```

または `transformers` の `extras` 経由:

```bash
pip install transformers[deepspeed]
```

または、[DeepSpeed の GitHub ページ](https://github.com/microsoft/deepspeed#installation) と [高度なインストール](https://www.deepspeed.ai/tutorials/advanced-install/) で詳細を確認してください。

それでもビルドに苦労する場合は、まず [CUDA 拡張機能のインストール ノート](trainer#cuda-extension-installation-notes) を必ず読んでください。

拡張機能を事前ビルドせず、実行時に拡張機能がビルドされることに依存していて、上記の解決策をすべて試しても役に立たなかった場合、次に試すべきことは、インストールの前にモジュールを事前にビルドすることです。

DeepSpeed のローカル ビルドを作成するには:

```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . 
\ --global-option="build_ext" --global-option="-j8" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log ``` NVMe ใ‚ชใƒ•ใƒญใƒผใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ไธŠ่จ˜ใฎๆ‰‹้ †ใซ`DS_BUILD_AIO=1`ใ‚’ๅซใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ (ใพใŸใ€ *libaio-dev* ใ‚ทใ‚นใƒ†ใƒ ๅ…จไฝ“ใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใพใ™)ใ€‚ `TORCH_CUDA_ARCH_LIST` ใ‚’็ทจ้›†ใ—ใฆใ€ไฝฟ็”จใ™ใ‚‹ GPU ใ‚ซใƒผใƒ‰ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฎใ‚ณใƒผใƒ‰ใ‚’ๆŒฟๅ…ฅใ—ใพใ™ใ€‚ใ™ในใฆใ‚’ไปฎๅฎšใ™ใ‚‹ใจ ใ‚ใชใŸใฎใ‚ซใƒผใƒ‰ใฏๅŒใ˜ใงใ€ๆฌกใฎๆ–นๆณ•ใงใ‚ขใƒผใƒใ‚’ๅ–ๅพ—ใงใใพใ™ใ€‚ ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())" ``` ใ—ใŸใŒใฃใฆใ€`8, 6`ใ‚’ๅ–ๅพ—ใ—ใŸๅ ดๅˆใฏใ€`TORCH_CUDA_ARCH_LIST="8.6"`ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚่ค‡ๆ•ฐใฎ็•ฐใชใ‚‹ใ‚ซใƒผใƒ‰ใ‚’ใŠๆŒใกใฎๅ ดๅˆใฏใ€ใ™ในใฆใ‚’ใƒชใ‚นใƒˆใ™ใ‚‹ใ“ใจใŒใงใใพใ™ ใใ‚Œใ‚‰ใฎใ†ใกใ€`TORCH_CUDA_ARCH_LIST="6.1;8.6"`ใŒๅฅฝใใงใ™ ่ค‡ๆ•ฐใฎใƒžใ‚ทใƒณใงๅŒใ˜ใ‚ปใƒƒใƒˆใ‚ขใƒƒใƒ—ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€ใƒใ‚คใƒŠใƒช ใƒ›ใ‚คใƒผใƒซใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel ``` `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`ใฎใ‚ˆใ†ใชใ‚‚ใฎใŒ็”Ÿๆˆใ•ใ‚Œใ‚‹ใฎใงใ€ใ“ใ‚Œใ‚’ใ‚คใƒณใ‚นใƒˆใƒผใƒซใงใใพใ™ `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`ใจใ—ใฆใƒญใƒผใ‚ซใƒซใพใŸใฏไป–ใฎใƒžใ‚ทใƒณใซใ‚คใƒณใ‚นใƒˆใƒผใƒซใ—ใพใ™ใ€‚ ็นฐใ‚Š่ฟ”ใ—ใพใ™ใŒใ€`TORCH_CUDA_ARCH_LIST`ใ‚’ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅˆใ‚ใ›ใฆ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„ใ€‚ NVIDIA GPU ใฎๅฎŒๅ…จใชใƒชใ‚นใƒˆใจใ€ใใ‚Œใซๅฏพๅฟœใ™ใ‚‹ **ใ‚ณใƒณใƒ”ใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐๆฉŸ่ƒฝ** (ใ“ใฎ่จ˜ไบ‹ใฎ Arch ใจๅŒใ˜) ใ‚’่ฆ‹ใคใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆ) [ใ“ใ“](https://developer.nvidia.com/cuda-gpus)ใ€‚ ไปฅไธ‹ใ‚’ไฝฟ็”จใ—ใฆใ€pytorch ใŒๆง‹็ฏ‰ใ•ใ‚ŒใŸใ‚ขใƒผใƒใ‚’็ขบ่ชใงใใพใ™ใ€‚ ```bash python -c "import torch; print(torch.cuda.get_arch_list())" ``` ใ“ใ“ใงใฏใ€ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ GPU ใฎ 1 ใคใฎใ‚ขใƒผใƒใ‚’่ฆ‹ใคใ‘ใ‚‹ๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€GPU 0 ใฎๅ ดๅˆ: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda')))" ``` ๅ‡บๅŠ›ใŒๆฌกใฎๅ ดๅˆ: ```bash _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82) ``` ใใ†ใ™ใ‚Œใฐใ€ใ“ใฎใ‚ซใƒผใƒ‰ใฎใ‚ขใƒผใƒใŒ`8.6`ใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ `TORCH_CUDA_ARCH_LIST` ใ‚’ๅฎŒๅ…จใซ็œ็•ฅใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใใ†ใ™ใ‚Œใฐใ€ใƒ“ใƒซใƒ‰ ใƒ—ใƒญใ‚ฐใƒฉใƒ ใŒ่‡ชๅ‹•็š„ใซใ‚ฏใ‚จใƒชใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ ใƒ“ใƒซใƒ‰ใŒ่กŒใ‚ใ‚Œใ‚‹ GPU ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ€‚ใ“ใ‚Œใฏใ€ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ ใƒžใ‚ทใƒณใฎ GPU ใจไธ€่‡ดใ™ใ‚‹ๅ ดๅˆใ‚‚ใ‚ใ‚Œใฐใ€ไธ€่‡ดใ—ใชใ„ๅ ดๅˆใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ็›ฎ็š„ใฎใ‚ขใƒผใƒใ‚’ๆ˜Ž็คบ็š„ใซๆŒ‡ๅฎšใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ๆๆกˆใ•ใ‚ŒใŸใ“ใจใ‚’ใ™ในใฆ่ฉฆใ—ใฆใ‚‚ใพใ ใƒ“ใƒซใƒ‰ใฎๅ•้กŒใŒ็™บ็”Ÿใ™ใ‚‹ๅ ดๅˆใฏใ€GitHub ใฎๅ•้กŒใซ้€ฒใ‚“ใงใใ ใ•ใ„ใ€‚ [ใƒ‡ใ‚ฃใƒผใƒ—ใ‚นใƒ”ใƒผใƒ‰](https://github.com/microsoft/DeepSpeed/issues)ใ€ <a id='deepspeed-multi-gpu'></a> ### Deployment with multiple GPUs DeepSpeed ็ตฑๅˆใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ™ใ‚‹ใซใฏใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณๅผ•ๆ•ฐใ‚’่ชฟๆ•ดใ—ใฆๆ–ฐใ—ใ„ๅผ•ๆ•ฐ `--deepspeed ds_config.json` ใ‚’ๅซใ‚ใพใ™ใ€‚ใ“ใ“ใงใ€`ds_config.json` ใฏ DeepSpeed ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใงใ™ใ€‚ 
[ใ“ใกใ‚‰](https://www.deepspeed.ai/docs/config-json/)ใซ่จ˜่ผ‰ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใƒ•ใ‚กใ‚คใƒซๅใฏใ‚ใชใŸๆฌก็ฌฌใงใ™ใ€‚ DeepSpeed ใฎ`add_config_arguments`ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใ‚’ไฝฟ็”จใ—ใฆใ€ๅฟ…่ฆใชใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณๅผ•ๆ•ฐใ‚’ใ‚ณใƒผใƒ‰ใซ่ฟฝๅŠ ใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[DeepSpeed ใฎๅผ•ๆ•ฐ่งฃๆž](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใ“ใง้ธๆŠžใ—ใŸใƒฉใƒณใƒใƒฃใƒผใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ pytorch ใƒฉใƒณใƒใƒฃใƒผใ‚’ๅผ•ใ็ถšใไฝฟ็”จใงใใพใ™ใ€‚ ```bash torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json ``` ใพใŸใฏใ€`deepspeed`ใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใ‚‹ใƒฉใƒณใƒใƒฃใƒผใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ```bash deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json ``` ใ”่ฆงใฎใจใŠใ‚Šใ€ๅผ•ๆ•ฐใฏๅŒใ˜ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€ใปใจใ‚“ใฉใฎใƒ‹ใƒผใ‚บใงใฏใฉใกใ‚‰ใงใ‚‚ๆฉŸ่ƒฝใ—ใพใ™ใ€‚ใฎ ใ•ใพใ–ใพใชใƒŽใƒผใƒ‰ใจ GPU ใ‚’ๆง‹ๆˆใ™ใ‚‹ๆ–นๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[ใ“ใกใ‚‰](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ `deepspeed`ใƒฉใƒณใƒใƒฃใƒผใ‚’ไฝฟ็”จใ—ใ€ๅˆฉ็”จๅฏ่ƒฝใชใ™ในใฆใฎ GPU ใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€`--num_gpus`ใƒ•ใƒฉใ‚ฐใ‚’็œ็•ฅใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ ไปฅไธ‹ใฏใ€ๅˆฉ็”จๅฏ่ƒฝใชใ™ในใฆใฎ GPU ใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ™ใ‚‹ DeepSpeed ใง`run_translation.py`ใ‚’ๅฎŸ่กŒใ™ใ‚‹ไพ‹ใงใ™ใ€‚ ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro ``` DeepSpeed ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใซใฏใ€`--deepspeed --deepspeed_config ds_config.json`ใŒ่กจ็คบใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒ้ซ˜ใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ DeepSpeed ้–ข้€ฃใฎๅผ•ๆ•ฐใŒ 2 ใคใ‚ใ‚Šใพใ™ใŒใ€็ฐกๅ˜ใซใ™ใ‚‹ใŸใ‚ใงใ‚ใ‚Šใ€ๅ‡ฆ็†ใ™ในใๅผ•ๆ•ฐใŒใ™ใงใซ้žๅธธใซๅคšใ„ใŸใ‚ใงใ™ใ€‚ ใ“ใฎ 2 ใคใ‚’ 1 ใคใฎๅผ•ๆ•ฐใซ็ตๅˆใ—ใพใ—ใŸใ€‚ ๅฎŸ้š›ใฎไฝฟ็”จไพ‹ใซใคใ„ใฆใฏใ€ใ“ใฎ [ๆŠ•็จฟ](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ <a id='deepspeed-one-gpu'></a> ### Deployment with one GPU 1 ใคใฎ GPU ใง DeepSpeed ใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ™ใ‚‹ใซใฏใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณๅผ•ๆ•ฐใ‚’ๆฌกใฎใ‚ˆใ†ใซ่ชฟๆ•ดใ—ใพใ™ใ€‚ ```bash deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero2.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro ``` ใ“ใ‚Œใฏ่ค‡ๆ•ฐใฎ GPU ใฎๅ ดๅˆใจใปใผๅŒใ˜ใงใ™ใŒใ€ใ“ใ“ใงใฏใ€DeepSpeed ใซ 1 ใคใฎ GPU ใ ใ‘ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซๆ˜Ž็คบ็š„ใซๆŒ‡็คบใ—ใพใ™ใ€‚ `--num_gpus=1`ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€DeepSpeed ใฏๆŒ‡ๅฎšใ•ใ‚ŒใŸใƒŽใƒผใƒ‰ไธŠใง่ช่ญ˜ใงใใ‚‹ใ™ในใฆใฎ GPU ใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ—ใพใ™ใ€‚่ตทๅ‹•ใ™ใ‚‹ GPU ใŒ 1 ใคใ ใ‘ใฎๅ ดๅˆ ใฎๅ ดๅˆใ€ใ“ใฎๅผ•ๆ•ฐใฏๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ๆฌกใฎ [ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ใงใฏใ€ใƒฉใƒณใƒใƒฃใƒผ 
ใ‚ชใƒ—ใ‚ทใƒงใƒณใซใคใ„ใฆ่ชฌๆ˜Žใ—ใฆใ„ใพใ™ใ€‚ 1 ใคใฎ GPU ใ ใ‘ใง DeepSpeed ใ‚’ไฝฟ็”จใ—ใŸใ„ใฎใฏใชใœใงใ™ใ‹? 1. ไธ€้ƒจใฎ่จˆ็ฎ—ใจใƒกใƒขใƒชใ‚’ใƒ›ใ‚นใƒˆใฎ CPU ใจ RAM ใซๅง”ไปปใงใใ‚‹ ZeRO ใ‚ชใƒ•ใƒญใƒผใƒ‰ๆฉŸ่ƒฝใ‚’ๅ‚™ใˆใฆใ„ใ‚‹ใŸใ‚ใ€ ใƒขใƒ‡ใƒซใฎใƒ‹ใƒผใ‚บใซๅˆใ‚ใ›ใฆใ‚ˆใ‚Šๅคšใใฎ GPU ใƒชใ‚ฝใƒผใ‚นใ‚’ๆฎ‹ใ—ใฆใŠใใพใ™ใ€‚ใ‚ˆใ‚Šๅคงใใชใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ€ใพใŸใฏ้žๅธธใซๅคงใใชใƒขใƒ‡ใƒซใฎใƒ•ใ‚ฃใƒƒใƒ†ใ‚ฃใƒณใ‚ฐใ‚’ๅฏ่ƒฝใซใ™ใ‚‹ ๆ™ฎ้€šใฏๅˆใ‚ใชใ„ใงใ—ใ‚‡ใ†ใ€‚ 2. ใ‚นใƒžใƒผใƒˆใช GPU ใƒกใƒขใƒช็ฎก็†ใ‚ทใ‚นใƒ†ใƒ ใ‚’ๆไพ›ใ—ใ€ใƒกใƒขใƒชใฎๆ–ญ็‰‡ๅŒ–ใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใพใ™ใ€‚ ใ‚ˆใ‚Šๅคงใใชใƒขใƒ‡ใƒซใจใƒ‡ใƒผใ‚ฟ ใƒใƒƒใƒใ€‚ ๆฌกใซๆง‹ๆˆใซใคใ„ใฆ่ฉณใ—ใ่ชฌๆ˜Žใ—ใพใ™ใŒใ€ๅ˜ไธ€ใฎ GPU ใงๅคงๅน…ใชๆ”นๅ–„ใ‚’ๅฎŸ็พใ™ใ‚‹ใŸใ‚ใฎ้ตใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ DeepSpeed ใ‚’ไฝฟ็”จใ™ใ‚‹ใซใฏใ€ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใซๅฐ‘ใชใใจใ‚‚ๆฌกใฎๆง‹ๆˆใŒๅฟ…่ฆใงใ™ใ€‚ ```json { "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "reduce_scatter": true, "reduce_bucket_size": 2e8, "overlap_comm": true, "contiguous_gradients": true } } ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎใ‚ชใƒ•ใƒญใƒผใƒ‰ใ‚„ใใฎไป–ใฎ้‡่ฆใชๆฉŸ่ƒฝใŒๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ใƒใƒƒใƒ•ใ‚ก ใ‚ตใ‚คใ‚บใ‚’่ฉฆใ—ใฆใฟใ‚‹ใจใ‚ˆใ„ใงใ—ใ‚‡ใ†ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ไปฅไธ‹ใฎใƒ‡ใ‚ฃใ‚นใ‚ซใƒƒใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎใ‚ฟใ‚คใƒ—ใฎใƒ‡ใƒ—ใƒญใ‚คใƒกใƒณใƒˆใฎๅฎŸ้š›็š„ใชไฝฟ็”จไพ‹ใซใคใ„ใฆใฏใ€ใ“ใฎ [ๆŠ•็จฟ](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใง่ฉณใ—ใ่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹ใ‚ˆใ†ใซใ€CPU ใŠใ‚ˆใณ NVMe ใ‚ชใƒ•ใƒญใƒผใƒ‰ใ‚’ๅ‚™ใˆใŸ ZeRO-3 ใ‚’่ฉฆใ™ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ใƒŽใƒผใƒˆ๏ผš - GPU 0 ใจใฏ็•ฐใชใ‚‹็‰นๅฎšใฎ GPU ใงๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใ€`CUDA_VISIBLE_DEVICES` ใ‚’ไฝฟ็”จใ—ใฆๅˆถ้™ใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ ๅˆฉ็”จๅฏ่ƒฝใช GPU ใฎ่กจ็คบ็ฏ„ๅ›ฒใ€‚ไปฃใ‚ใ‚Šใซใ€ๆฌกใฎๆง‹ๆ–‡ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```bash deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ... 
In this example, we tell DeepSpeed to use GPU 1 (second gpu).

<a id='deepspeed-multi-node'></a>

### Deployment with multiple Nodes

The information in this section isn't specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a `deepspeed` launcher that is easier to use than other launchers unless you are in a SLURM environment.

For the duration of this section let's assume that you have 2 nodes with 8 GPUs each. You can reach the first node with `ssh hostname1` and the second node with `ssh hostname2`, and both must be able to reach each other via ssh locally without a password. Of course, you will need to rename these host (node) names to the actual host names you are working with.

#### The torch.distributed.run launcher

For example, to use `torch.distributed.run`, you could do:

```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```

You have to ssh to each node and run this same command on each one. There is no rush, the launcher will wait until both nodes synchronize.

For more information please see [torchrun](https://pytorch.org/docs/stable/elastic/run.html). Incidentally, this is also the launcher that replaced `torch.distributed.launch` a few pytorch versions back.

#### The deepspeed launcher

To use the `deepspeed` launcher instead, you have to first create a `hostfile` file:

```
hostname1 slots=8
hostname2 slots=8
```

and then you can launch it as:

```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```

Unlike the `torch.distributed.run` launcher, `deepspeed` will automatically launch this command on both nodes!

For more information please see [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).

#### Launching in a SLURM environment

In the SLURM environment the following approach can be used. The following is a slurm script `launch.slurm` which you will need to adapt to your specific SLURM environment.

```bash
#SBATCH --job-name=test-nodes        # name
#SBATCH --nodes=2                    # nodes
#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10           # number of cores per tasks
#SBATCH --gres=gpu:8                 # number of gpus
#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out           # output file name

export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901

srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
 --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
 --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```

All that is left is to schedule it to run:

```bash
sbatch launch.slurm
```

#### Use of Non-shared filesystem

By default DeepSpeed expects that a multi-node environment uses a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a [`checkpoint` section](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) with the following setting:

```json
{
  "checkpoint": {
    "use_node_local_storage": true
  }
}
```

Alternatively, you can also use the [`Trainer`]'s `--save_on_each_node` argument, and the above config will be added automatically for you.

<a id='deepspeed-notebook'></a>

### Deployment in Notebooks

The problem with running notebook cells as a script is that there is no normal `deepspeed` launcher to rely on, so under certain setups we have to emulate it.

If you're using only 1 GPU, here is how you'd have to adjust your training code in the notebook to use DeepSpeed.

```python
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os

os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"  # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"

# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
Note: `...` stands for the normal arguments that you'd pass to the functions.

If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. That is, you have to use the launcher for that purpose, and this cannot be accomplished by emulating the distributed environment presented at the beginning of this section.

If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with:

```python no-style
%%bash
cat <<'EOT' > ds_config_zero3.json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
EOT
```

If the training script is in a normal file and not in the notebook cells, you can launch `deepspeed` normally via shell from a cell. For example, to use `run_translation.py` you would launch it with:

```python no-style
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```

or with `%%bash` magic, where you can write multi-line code for the shell to run:

```python no-style
%%bash

git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```

In such a case you don't need any of the code presented at the beginning of this section.

Note: while `%%bash` magic is neat, currently it buffers the output, so you won't see the logs until the process completes.

<a id='deepspeed-config'></a>

### Configuration

For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the [following documentation](https://www.deepspeed.ai/docs/config-json/).

You can find dozens of DeepSpeed configuration examples that address various practical needs in the [DeepSpeedExamples](https://github.com/microsoft/DeepSpeedExamples) repo:

```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
```
Continuing the code from above, let's say you're looking to configure the Lamb optimizer. You can then search through the example `.json` files with:

```bash
grep -i Lamb $(find . -name '*json')
```

Some more examples are to be found in the [main repo](https://github.com/microsoft/DeepSpeed) as well.

When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide.

To get an idea of what a DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer states cpu offload, uses the `AdamW` optimizer and the `WarmupLR` scheduler, and will enable mixed precision training if `--fp16` is passed:

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

When you execute the program, DeepSpeed will log the configuration it received from the [`Trainer`] to the console, so you can see exactly what the final configuration passed to it was.

<a id='deepspeed-config-passing'></a>

### Passing Configuration

As discussed in this document, normally the DeepSpeed configuration is passed as a path to a json file, but if you're not using the command line interface to configure the training, and instead instantiate the [`Trainer`] via [`TrainingArguments`], then for the `deepspeed` argument you can pass a nested `dict`. This allows you to create the configuration on the fly and doesn't require you to write it to the file system before passing it to [`TrainingArguments`].

To summarize, you can do:

```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```

or:

```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```
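For instance, here is a minimal sketch of the second approach, assembling a small ZeRO stage 2 configuration as a nested `dict` - the `output_dir` value and the exact set of entries are placeholders you would adapt to your own run:

```python
from transformers import TrainingArguments

# configuration assembled on the fly; the "auto" entries get filled in by the
# Trainer as described in the Shared Configuration section below
ds_config_dict = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "fp16": {"enabled": "auto"},
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(output_dir="output_dir", deepspeed=ds_config_dict)
```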
ใ•ใ‚‰ใซใ€ไธ€้ƒจใฎๆง‹ๆˆๅ€คใฏใƒขใƒ‡ใƒซใฎๆง‹ๆˆใซๅŸบใฅใ„ใฆ่‡ชๅ‹•็š„ใซๅฐŽๅ‡บใ•ใ‚Œใพใ™ใ€‚ ่ค‡ๆ•ฐใฎๅ€คใ‚’ๆ‰‹ๅ‹•ใง่ชฟๆ•ดใ™ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„ใ€‚[`Trainer`] ใซๅคง้ƒจๅˆ†ใ‚’ไปปใ›ใ‚‹ใฎใŒๆœ€ๅ–„ใงใ™ ใฎ่จญๅฎšใ‚’่กŒใ„ใพใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ“ใฎใ‚ฌใ‚คใƒ‰ใฎๆฎ‹ใ‚Šใฎ้ƒจๅˆ†ใงใฏใ€็‰นๅˆฅใช่จญๅฎšๅ€ค `auto` ใŒ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใ‚’่จญๅฎšใ™ใ‚‹ใจใ€ ๆญฃใ—ใ„ๅ€คใพใŸใฏๆœ€ใ‚‚ๅŠน็އ็š„ใชๅ€คใซ่‡ชๅ‹•็š„ใซ็ฝฎใๆ›ใˆใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใ‚Œใ‚’็„ก่ฆ–ใ™ใ‚‹ใ“ใจใ‚’่‡ช็”ฑใซ้ธๆŠžใ—ใฆใใ ใ•ใ„ ๆŽจๅฅจไบ‹้ …ใ‚’ๅ‚็…งใ—ใ€ๅ€คใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ—ใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ๆฌกใฎ็‚นใซๅๅˆ†ๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ [`Trainer`] ๅผ•ๆ•ฐใจ DeepSpeed ่จญๅฎšใฏไธ€่‡ดใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€ๅŒใ˜ใ‚‚ใฎใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ‹ ๅญฆ็ฟ’็އใ€ใƒใƒƒใƒใ‚ตใ‚คใ‚บใ€ใพใŸใฏๅ‹พ้…็ดฏ็ฉ่จญๅฎš?ใ“ใ‚Œใ‚‰ใŒไธ€่‡ดใ—ใชใ„ๅ ดๅˆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏ้žๅธธใซๅคฑๆ•—ใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ ๆ–นๆณ•ใ‚’ๆคœๅ‡บใ™ใ‚‹ใฎใŒ้›ฃใ—ใ„ใ€‚ใ‚ใชใŸใฏ่ญฆๅ‘Šใ‚’ๅ—ใ‘ใพใ—ใŸใ€‚ DeepSpeed ใฎใฟใซๅ›บๆœ‰ใฎๅ€คใ‚„ใ€ใใ‚Œใซๅˆใ‚ใ›ใฆๆ‰‹ๅ‹•ใง่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ€คใŒไป–ใซใ‚‚่ค‡ๆ•ฐใ‚ใ‚Šใพใ™ใ€‚ ใ‚ใชใŸใฎ่ฆๆœ›ใ€‚ ็‹ฌ่‡ชใฎใƒ—ใƒญใ‚ฐใƒฉใƒ ใงใ€DeepSpeed ๆง‹ๆˆใ‚’ใƒžใ‚นใ‚ฟใƒผใจใ—ใฆๅค‰ๆ›ดใ—ใŸใ„ๅ ดๅˆใฏใ€ๆฌกใฎใ‚ขใƒ—ใƒญใƒผใƒใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ใใ‚ŒใซๅŸบใฅใ„ใฆ [`TrainingArguments`] ใ‚’่จญๅฎšใ—ใพใ™ใ€‚ๆ‰‹้ †ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ 1. ใƒžใ‚นใ‚ฟใƒผๆง‹ๆˆใจใ—ใฆไฝฟ็”จใ™ใ‚‹ DeepSpeed ๆง‹ๆˆใ‚’ไฝœๆˆใพใŸใฏใƒญใƒผใƒ‰ใ—ใพใ™ 2. ใ“ใ‚Œใ‚‰ใฎๅ€คใซๅŸบใฅใ„ใฆ [`TrainingArguments`] ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ไฝœๆˆใ—ใพใ™ `scheduler.params.total_num_steps`ใชใฉใฎไธ€้ƒจใฎๅ€คใฏๆฌกใฎใ‚ˆใ†ใซ่จˆ็ฎ—ใ•ใ‚Œใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ `train` ไธญใซ [`Trainer`] ใ‚’ๅฎŸ่กŒใ—ใพใ™ใŒใ€ใ‚‚ใกใ‚ใ‚“่‡ชๅˆ†ใง่จˆ็ฎ—ใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ <a id='deepspeed-zero'></a> ### ZeRO [Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) ใฏใ€DeepSpeed ใฎไธปๅŠ›่ฃฝๅ“ใงใ™ใ€‚ใใ‚Œ 3 ใคใฎ็•ฐใชใ‚‹ใƒฌใƒ™ใƒซ (ๆฎต้šŽ) ใฎๆœ€้ฉๅŒ–ใ‚’ใ‚ตใƒใƒผใƒˆใ—ใพใ™ใ€‚ๆœ€ๅˆใฎใ‚‚ใฎใฏใ€ใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃใฎ่ฆณ็‚นใ‹ใ‚‰ใฏใ‚ใพใ‚Š่ˆˆๅ‘ณๆทฑใ„ใ‚‚ใฎใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ“ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใงใฏใ‚นใƒ†ใƒผใ‚ธ 2 ใจ 3 ใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใพใ™ใ€‚ใ‚นใƒ†ใƒผใ‚ธ 3 ใฏใ€ๆœ€ๆ–ฐใฎ ZeRO-Infinity ใฎ่ฟฝๅŠ ใซใ‚ˆใฃใฆใ•ใ‚‰ใซๆ”นๅ–„ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ่ฉณ็ดฐใซใคใ„ใฆใฏใ€DeepSpeed ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใฎ `zero_optimization` ใ‚ปใ‚ฏใ‚ทใƒงใƒณใฏๆœ€ใ‚‚้‡่ฆใช้ƒจๅˆ†ใงใ™ ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training))ใ€‚ใ“ใ“ใงๅฎš็พฉใ—ใพใ™ ใฉใฎ ZeRO ใ‚นใƒ†ใƒผใ‚ธใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ‹ใ€ใใ—ใฆใใ‚Œใ‚‰ใ‚’ใฉใฎใ‚ˆใ†ใซๆง‹ๆˆใ™ใ‚‹ใ‹ใ€‚ๅ„ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ่ชฌๆ˜Žใฏใ€ DeepSpeed ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใ€‚ ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใฏใ€DeepSpeed ่จญๅฎšใ‚’ไป‹ใ—ใฆใฎใฟ่จญๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ - [`Trainer`] ใŒๆไพ›ใ—ใพใ™ ๅŒ็ญ‰ใฎใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๆณจ: ็พๅœจใ€DeepSpeed ใฏใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผๅใ‚’ๆคœ่จผใ—ใชใ„ใŸใ‚ใ€ใ‚นใƒšใƒซใ‚’้–“้•ใˆใ‚‹ใจใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆ่จญๅฎšใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ใ‚นใƒšใƒซใŒ้–“้•ใฃใฆใ„ใ‚‹ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ€‚ DeepSpeed ใ‚จใƒณใ‚ธใƒณใฎ่ตทๅ‹•ใƒญใ‚ฐ ใƒกใƒƒใ‚ปใƒผใ‚ธใ‚’่ฆ‹ใฆใ€ใใฎๅ€คใ‚’็ขบ่ชใงใใพใ™ใ€‚ ไฝฟ็”จใ™ใ‚‹ใคใ‚‚ใ‚Šใงใ™ใ€‚ <a id='deepspeed-zero2-config'></a> #### ZeRO-2 Config ไปฅไธ‹ใฏใ€ZeRO ใ‚นใƒ†ใƒผใ‚ธ 2 ใฎๆง‹ๆˆไพ‹ใงใ™ใ€‚ ```json { "zero_optimization": { "stage": 2, 
"offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 5e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 5e8, "contiguous_gradients": true } } ``` **ๆ€ง่ƒฝ่ชฟๆ•ด๏ผš** - `offload_optimizer` ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใจใ€GPU RAM ใฎไฝฟ็”จ้‡ใŒๅ‰Šๆธ›ใ•ใ‚Œใพใ™ (`"stage": 2` ใŒๅฟ…่ฆใงใ™) - `"overlap_comm": true` ใฏใ€GPU RAM ไฝฟ็”จ้‡ใฎๅข—ๅŠ ใจใƒˆใƒฌใƒผใƒ‰ใ‚ชใƒ•ใ—ใฆใ€้…ๅปถใ‚’ใ™ในใฆๅ‰Šๆธ›ใ—ใพใ™ใ€‚ `overlap_comm`ใฏ 4.5x ใ‚’ไฝฟ็”จใ—ใพใ™ `allgather_bucket_size`ใจ`reduce_bucket_size`ใฎๅ€คใ€‚ใ—ใŸใŒใฃใฆใ€5e8 ใซ่จญๅฎšใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€9GB ใŒๅฟ…่ฆใซใชใ‚Šใพใ™ใ€‚ ใƒ•ใƒƒใƒˆใƒ—ใƒชใƒณใƒˆ (`5e8 x 2Bytes x 2 x 4.5`)ใ€‚ใ—ใŸใŒใฃใฆใ€8GB ไปฅไธ‹ใฎ RAM ใ‚’ๆญ่ผ‰ใ—ใŸ GPU ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ OOM ใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ—ใŸๅ ดๅˆใฏใ€ใ“ใ‚Œใ‚‰ใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’`2e8`็จ‹ๅบฆใซๆธ›ใ‚‰ใ™ๅฟ…่ฆใŒใ‚ใ‚Šใ€ใใ‚Œใซใฏ 3.6GB ใŒๅฟ…่ฆใซใชใ‚Šใพใ™ใ€‚ใ‚„ใ‚ŠใŸใใชใ‚‹ใงใ—ใ‚‡ใ† OOM ใซ้”ใ—ๅง‹ใ‚ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€ใ‚ˆใ‚Šๅคงๅฎน้‡ใฎ GPU ใงใ‚‚ๅŒๆง˜ใงใ™ใ€‚ - ใ“ใ‚Œใ‚‰ใฎใƒใƒƒใƒ•ใ‚กใ‚’ๆธ›ใ‚‰ใ™ใจใ€ใ‚ˆใ‚Šๅคšใใฎ GPU RAM ใ‚’ๅˆฉ็”จใ™ใ‚‹ใŸใ‚ใซ้€šไฟก้€Ÿๅบฆใ‚’็Š ็‰ฒใซใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ใƒใƒƒใƒ•ใ‚กใ‚ตใ‚คใ‚บใŒๅฐใ•ใ„ใปใฉใ€ ้€šไฟกใŒ้…ใใชใ‚Šใ€ไป–ใฎใ‚ฟใ‚นใ‚ฏใงไฝฟ็”จใงใใ‚‹ GPU RAM ใŒๅข—ใˆใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒใƒƒใƒใ‚ตใ‚คใ‚บใŒๅคงใใ„ๅ ดๅˆใฏใ€ ้‡่ฆใชใฎใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ™‚้–“ใ‚’ๅฐ‘ใ—้…ใ‚‰ใ›ใ‚‹ใ“ใจใฏ่‰ฏใ„ใƒˆใƒฌใƒผใƒ‰ใซใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ใ•ใ‚‰ใซใ€`deepspeed==0.4.4`ใซใฏใ€ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใงๆœ‰ๅŠนใซใงใใ‚‹ๆ–ฐใ—ใ„ใ‚ชใƒ—ใ‚ทใƒงใƒณ`round_robin_gradients`ใŒ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸใ€‚ ```json { "zero_optimization": { "round_robin_gradients": true } } ``` ใ“ใ‚Œใฏใ€ใใ‚็ดฐใ‹ใ„ๅ‹พ้…ใƒ‘ใƒผใƒ†ใ‚ฃใ‚ทใƒงใƒ‹ใƒณใ‚ฐใซใ‚ˆใฃใฆใƒฉใƒณใ‚ฏ้–“ใฎ CPU ใƒกใƒขใƒชใธใฎๅ‹พ้…ใ‚ณใƒ”ใƒผใ‚’ไธฆๅˆ—ๅŒ–ใ™ใ‚‹ใ€CPU ใ‚ชใƒ•ใƒญใƒผใƒ‰ใฎใ‚นใƒ†ใƒผใ‚ธ 2 ๆœ€้ฉๅŒ–ใงใ™ใ€‚ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅˆฉ็‚นใฏใ€ๅ‹พ้…็ดฏ็ฉใ‚นใƒ†ใƒƒใƒ— (ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ ใ‚นใƒ†ใƒƒใƒ—้–“ใฎใ‚ณใƒ”ใƒผใฎๅข—ๅŠ ) ใพใŸใฏ GPU ๆ•ฐ (ไธฆๅˆ—ๅ‡ฆ็†ใฎๅข—ๅŠ ) ใซๅฟœใ˜ใฆๅข—ๅŠ ใ—ใพใ™ใ€‚ <a id='deepspeed-zero3-config'></a> #### ZeRO-3 Config ไปฅไธ‹ใฏใ€ZeRO ใ‚นใƒ†ใƒผใ‚ธ 3 ใฎๆง‹ๆˆไพ‹ใงใ™ใ€‚ ```json { "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true } } ``` ใƒขใƒ‡ใƒซใพใŸใฏใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใŒ GPU ใƒกใƒขใƒชใซ้ฉๅˆใ›ใšใ€CPU ใŒๆœชไฝฟ็”จใงใ‚ใ‚‹ใŸใ‚ใซ OOM ใŒ็™บ็”Ÿใ—ใฆใ„ใ‚‹ๅ ดๅˆ `"device": "cpu"` ใ‚’ไฝฟ็”จใ—ใฆใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใจใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ CPU ใƒกใƒขใƒชใซใƒกใƒขใƒชใ‚ชใƒ•ใƒญใƒผใƒ‰ใ™ใ‚‹ใจใ€ใ“ใฎๅˆถ้™ใŒ่งฃๆฑบใ•ใ‚Œใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ CPU ใƒกใƒขใƒชใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใ—ใŸใใชใ„ๅ ดๅˆใฏใ€`device`ใ‚จใƒณใƒˆใƒชใซ`cpu`ใฎไปฃใ‚ใ‚Šใซ`none`ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ‚ชใƒ•ใƒญใƒผใƒ‰ๅ…ˆ NVMe ใซใคใ„ใฆใฏๅพŒใปใฉ่ชฌๆ˜Žใ—ใพใ™ใ€‚ ๅ›บๅฎšใƒกใƒขใƒชใฏใ€`pin_memory`ใ‚’`true`ใซ่จญๅฎšใ™ใ‚‹ใจๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ใ“ใฎๆฉŸ่ƒฝใซใ‚ˆใ‚Šใ€ๆฌกใฎใ‚ˆใ†ใชใ‚ณใ‚นใƒˆใ‚’ใ‹ใ‘ใฆใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ 
ไป–ใฎใƒ—ใƒญใ‚ปใ‚นใŒไฝฟ็”จใงใใ‚‹ใƒกใƒขใƒชใŒๅฐ‘ใชใใชใ‚Šใพใ™ใ€‚ใƒ”ใƒณ็•™ใ‚ใ•ใ‚ŒใŸใƒกใƒขใƒชใฏใ€ใใ‚Œใ‚’่ฆๆฑ‚ใ—ใŸ็‰นๅฎšใฎใƒ—ใƒญใ‚ปใ‚นใฎใŸใ‚ใซ็ขบไฟใ•ใ‚Œใพใ™ใ€‚ ้€šๅธธใ€้€šๅธธใฎ CPU ใƒกใƒขใƒชใ‚ˆใ‚Šใ‚‚ใฏใ‚‹ใ‹ใซ้ซ˜้€Ÿใซใ‚ขใ‚ฏใ‚ปใ‚นใ•ใ‚Œใพใ™ใ€‚ **ๆ€ง่ƒฝ่ชฟๆ•ด๏ผš** - `stage3_max_live_parameters`: `1e9` - `stage3_max_reuse_distance`: `1e9` OOM ใซ้”ใ—ใŸๅ ดๅˆใฏใ€ใ€Œstage3_max_live_parametersใ€ใจใ€Œstage3_max_reuse_ distanceใ€ใ‚’ๆธ›ใ‚‰ใ—ใพใ™ใ€‚ๅฝฑ้Ÿฟใฏๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‰ใ‚Œใ‚‹ใฏใšใงใ™ ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ–ๅŒ–ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๅฎŸ่กŒใ—ใชใ„้™ใ‚Šใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใซๅฝฑ้Ÿฟใ—ใพใ™ใ€‚ `1e9`ใฏ็ด„ 2GB ใ‚’ๆถˆ่ฒปใ—ใพใ™ใ€‚่จ˜ๆ†ถใ‚’ๅ…ฑๆœ‰ใ—ใฆใ„ใ‚‹ใฎใฏใ€ `stage3_max_live_parameters` ใจ `stage3_max_reuse_distance` ใชใฎใงใ€ๅŠ ็ฎ—ใ•ใ‚Œใ‚‹ใ‚‚ใฎใงใฏใชใใ€ๅˆ่จˆใง 2GB ใซใชใ‚Šใพใ™ใ€‚ `stage3_max_live_parameters` ใฏใ€็‰นๅฎšใฎๆ™‚็‚นใง GPU ไธŠใซไฟๆŒใ™ใ‚‹ๅฎŒๅ…จใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆ•ฐใฎไธŠ้™ใงใ™ใ€‚ ๆ™‚้–“ใ€‚ ใ€Œๅ†ๅˆฉ็”จ่ท้›ขใ€ใฏใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒๅฐ†ๆฅใ„ใคๅ†ใณไฝฟ็”จใ•ใ‚Œใ‚‹ใ‹ใ‚’ๅˆคๆ–ญใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ™ใ‚‹ๆŒ‡ๆจ™ใงใ™ใ€‚ `stage3_max_reuse_ distance`ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’็ ดๆฃ„ใ™ใ‚‹ใ‹ไฟๆŒใ™ใ‚‹ใ‹ใ‚’ๆฑบๅฎšใ—ใพใ™ใ€‚ใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒ ่ฟ‘ใ„ๅฐ†ๆฅใซๅ†ใณไฝฟ็”จใ•ใ‚Œใ‚‹ไบˆๅฎš (`stage3_max_reuse_distance`ๆœชๆบ€) ใชใฎใงใ€้€šไฟกใ‚’ๆธ›ใ‚‰ใ™ใŸใ‚ใซไฟๆŒใ—ใพใ™ใ€‚ ใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใ€‚ใ“ใ‚Œใฏใ€ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๆœ‰ๅŠนใซใ—ใฆใ„ใ‚‹ๅ ดๅˆใซ้žๅธธใซๅฝน็ซ‹ใกใพใ™ใ€‚ใƒ•ใ‚ฉใƒฏใƒผใƒ‰ๅ†่จˆ็ฎ—ใŒ่กŒใ‚ใ‚Œใ€ backward ใฏๅ˜ไธ€ใƒฌใ‚คใƒคใƒผ็ฒ’ๅบฆใ‚’ๆธกใ—ใ€ๅพŒๆ–นๅ†่จˆ็ฎ—ใพใงใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’ๅ‰ๆ–นๅ†่จˆ็ฎ—ใซไฟๆŒใ—ใŸใ„ใจ่€ƒใˆใฆใ„ใพใ™ใ€‚ ๆฌกใฎๆง‹ๆˆๅ€คใฏใ€ใƒขใƒ‡ใƒซใฎ้ž่กจ็คบใ‚ตใ‚คใ‚บใซใ‚ˆใฃใฆ็•ฐใชใ‚Šใพใ™ใ€‚ - `reduce_bucket_size`: `hidden_size*hidden_size` - `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size` - `stage3_param_persistence_threshold`: `10 * hidden_size` ใ—ใŸใŒใฃใฆใ€ใ“ใ‚Œใ‚‰ใฎๅ€คใ‚’ `auto` ใซ่จญๅฎšใ™ใ‚‹ใจใ€[`Trainer`] ใŒๆŽจๅฅจใ•ใ‚Œใ‚‹ๅ€คใ‚’่‡ชๅ‹•็š„ใซๅ‰ฒใ‚Šๅฝ“ใฆใพใ™ใ€‚ ไพกๅ€ค่ฆณใ€‚ใŸใ ใ—ใ€ใ‚‚ใกใ‚ใ‚“ใ€ใ“ใ‚Œใ‚‰ใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ `stage3_gather_16bit_weights_on_model_save` ใฏใ€ใƒขใƒ‡ใƒซใฎไฟๅญ˜ๆ™‚ใซใƒขใƒ‡ใƒซ fp16 ใฎ้‡ใฟ็ตฑๅˆใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚ๅคงใใ„ ใƒขใƒ‡ใƒซใจ่ค‡ๆ•ฐใฎ GPU ใฎๅ ดๅˆใ€ใ“ใ‚Œใฏใƒกใƒขใƒชใจ้€Ÿๅบฆใฎไธกๆ–นใฎ็‚นใง้ซ˜ไพกใชๆ“ไฝœใงใ™ใ€‚็พๅœจๅฟ…้ ˆใจใชใฃใฆใ„ใ‚‹ใฎใฏใ€ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ™ใ‚‹ไบˆๅฎšใงใ™ใ€‚ใ“ใฎๅˆถ้™ใ‚’ๅ–ใ‚Š้™คใใ€ใ‚ˆใ‚Šไพฟๅˆฉใซใ™ใ‚‹ไปŠๅพŒใฎใ‚ขใƒƒใƒ—ใƒ‡ใƒผใƒˆใซๆณจ็›ฎใ—ใฆใใ ใ•ใ„ใ€‚ ใƒ•ใƒฌใ‚ญใ‚ทใƒ–ใƒซใ€‚ ZeRO-2 ๆง‹ๆˆใ‹ใ‚‰็งป่กŒใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€`allgather_partitions`ใ€`allgather_bucket_size`ใ€ใŠใ‚ˆใณ `reduce_scatter`่จญๅฎšใƒ‘ใƒฉใƒกใƒผใ‚ฟใฏ ZeRO-3 ใงใฏไฝฟ็”จใ•ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚Œใ‚‰ใ‚’่จญๅฎšใƒ•ใ‚กใ‚คใƒซใซไฟๅญ˜ใ—ใฆใŠใใจใ€ ็„ก่ฆ–ใ•ใ‚Œใ‚‹ใ€‚ - `sub_group_size`: `1e9` `sub_group_size` ใฏใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎใ‚นใƒ†ใƒƒใƒ—ไธญใซใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใŒๆ›ดๆ–ฐใ•ใ‚Œใ‚‹็ฒ’ๅบฆใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ `sub_group_size` ใฎใƒใ‚ฑใƒƒใƒˆใซใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ•ใ‚Œใ€ๅ„ใƒใ‚ฑใƒƒใƒˆใฏไธ€ๅบฆใซ 1 ใคใšใคๆ›ดๆ–ฐใ•ใ‚Œใพใ™ใ€‚ NVMeใ‚ชใƒ•ใƒญใƒผใƒ‰ใงไฝฟ็”จใ™ใ‚‹ๅ ดๅˆ ใ—ใŸใŒใฃใฆใ€ZeRO-Infinity ใฎ `sub_group_size`ใฏใ€ใƒขใƒ‡ใƒซใฎ็Šถๆ…‹ใŒ CPU ใซๅ‡บๅ…ฅใ‚Šใ™ใ‚‹็ฒ’ๅบฆใ‚’ๅˆถๅพกใ—ใพใ™ใ€‚ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚นใƒ†ใƒƒใƒ—ไธญใซ NVMe 
ใ‹ใ‚‰ใƒกใƒขใƒชใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€้žๅธธใซๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎ CPU ใƒกใƒขใƒชไธ่ถณใŒ้˜ฒๆญขใ•ใ‚Œใพใ™ใ€‚ NVMe ใ‚ชใƒ•ใƒญใƒผใƒ‰ใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆใฏใ€`sub_group_size`ใ‚’ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใฎ *1e9* ใฎใพใพใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๅค‰ๆ›ดใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ ๆฌกใฎๅ ดๅˆใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€ค: 1. ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ ใ‚นใƒ†ใƒƒใƒ—ไธญใซ OOM ใŒ็™บ็”Ÿใ™ใ‚‹: `sub_group_size` ใ‚’ๆธ›ใ‚‰ใ—ใฆใ€ไธ€ๆ™‚ใƒใƒƒใƒ•ใ‚กใƒผใฎใƒกใƒขใƒชไฝฟ็”จ้‡ใ‚’ๅ‰Šๆธ›ใ—ใพใ™ใ€‚ 2. ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ ใ‚นใƒ†ใƒƒใƒ—ใซๆ™‚้–“ใŒใ‹ใ‹ใ‚Šใพใ™ใ€‚`sub_group_size`ใ‚’ๅข—ใ‚„ใ—ใฆใ€ๅธฏๅŸŸๅน…ใฎไฝฟ็”จ็އใ‚’ๅ‘ไธŠใ•ใ›ใพใ™ใ€‚ ใƒ‡ใƒผใ‚ฟใƒใƒƒใƒ•ใ‚กใฎๅข—ๅŠ ใ€‚ #### ZeRO-0 Config ใ‚นใƒ†ใƒผใ‚ธ 0 ใจ 1 ใฏใ‚ใฃใŸใซไฝฟ็”จใ•ใ‚Œใชใ„ใŸใ‚ใ€ๆœ€ๅพŒใซใƒชใ‚นใƒˆใ—ใฆใ„ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใ‚นใƒ†ใƒผใ‚ธ 0 ใงใฏใ€ใ™ในใฆใฎใ‚ฟใ‚คใƒ—ใฎใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’็„กๅŠนใซใ—ใ€DDP ใจใ—ใฆ DeepSpeed ใฎใฟใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใงใ‚ชใƒณใซใงใใพใ™ใ€‚ ```json { "zero_optimization": { "stage": 0 } } ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ไป–ใซไฝ•ใ‚‚ๅค‰ๆ›ดใ™ใ‚‹ๅฟ…่ฆใŒใชใใ€ๅŸบๆœฌ็š„ใซ ZeRO ใŒ็„กๅŠนใซใชใ‚Šใพใ™ใ€‚ #### ZeRO-1 Config ใ‚นใƒ†ใƒผใ‚ธ 1 ใฏใ€ใ‚นใƒ†ใƒผใ‚ธ 2 ใ‹ใ‚‰ใ‚ฐใƒฉใƒ‡ใƒผใ‚ทใƒงใƒณ ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’้™คใ„ใŸใ‚‚ใฎใงใ™ใ€‚ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ใ‚’ใ‚ทใƒฃใƒผใƒ‰ๅŒ–ใ™ใ‚‹ใ ใ‘ใงใ€ๅ‡ฆ็†ใ‚’ๅฐ‘ใ—้ซ˜้€ŸๅŒ–ใ™ใ‚‹ใŸใ‚ใซใ„ใคใงใ‚‚่ฉฆใ™ใ“ใจใŒใงใใพใ™ใ€‚ ```json { "zero_optimization": { "stage": 1 } } ``` <a id='deepspeed-nvme'></a> ### NVMe Support ZeRO-Infinity ใฏใ€GPU ใจ CPU ใƒกใƒขใƒชใ‚’ NVMe ใƒกใƒขใƒชใงๆ‹กๅผตใ™ใ‚‹ใ“ใจใงใ€้žๅธธใซๅคง่ฆๆจกใชใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅฏ่ƒฝใซใ—ใพใ™ใ€‚ใŠใ‹ใ’ใง ใ‚นใƒžใƒผใƒˆ ใƒ‘ใƒผใƒ†ใ‚ฃใ‚ทใƒงใƒ‹ใƒณใ‚ฐใŠใ‚ˆใณใ‚ฟใ‚คใƒชใƒณใ‚ฐ ใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ใงใฏใ€ๅ„ GPU ใŒ้žๅธธใซๅฐ‘้‡ใฎใƒ‡ใƒผใ‚ฟใ‚’้€ๅ—ไฟกใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ‚ชใƒ•ใƒญใƒผใƒ‰ใซใ‚ˆใ‚Šใ€ๆœ€ๆ–ฐใฎ NVMe ใŒใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅˆฉ็”จใงใใ‚‹ๅˆ่จˆใƒกใƒขใƒช ใƒ—ใƒผใƒซใ‚’ใ•ใ‚‰ใซๅคงใใใ™ใ‚‹ใฎใซ้ฉใ—ใฆใ„ใ‚‹ใ“ใจใŒๅˆคๆ˜Žใ—ใพใ—ใŸใ€‚ ใƒ—ใƒญใ‚ปใ‚นใ€‚ ZeRO-Infinity ใซใฏใ€ZeRO-3 ใŒๆœ‰ๅŠนใซใชใฃใฆใ„ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใฎ่จญๅฎšไพ‹ใงใฏใ€NVMe ใŒใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใจใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎไธกๆ–นใ‚’ใ‚ชใƒ•ใƒญใƒผใƒ‰ใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ```json { "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 4, "fast_init": false }, "offload_param": { "device": "nvme", "nvme_path": "/local_nvme", "pin_memory": true, "buffer_count": 5, "buffer_size": 1e8, "max_in_cpu": 1e9 }, "aio": { "block_size": 262144, "queue_depth": 32, "thread_count": 1, "single_submit": false, "overlap_events": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, } ``` ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใฎ็Šถๆ…‹ใจใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎไธกๆ–นใ‚’ NVMe ใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใ™ใ‚‹ใ‹ใ€ใฉใกใ‚‰ใ‹ 1 ใคใ ใ‘ใ‚’ใ‚ชใƒ•ใƒญใƒผใƒ‰ใ™ใ‚‹ใ‹ใ€ใพใฃใŸใใ‚ชใƒ•ใƒญใƒผใƒ‰ใ—ใชใ„ใ‹ใ‚’้ธๆŠžใงใใพใ™ใ€‚ใŸใจใˆใฐใ€ๆฌกใฎๅ ดๅˆ ๅˆฉ็”จๅฏ่ƒฝใช CPU ใƒกใƒขใƒชใŒๅคง้‡ใซใ‚ใ‚‹ๅ ดๅˆใฏใ€้ซ˜้€Ÿใซใชใ‚‹ใŸใ‚ใ€ๅฟ…ใš CPU ใƒกใƒขใƒชใฎใฟใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใ—ใฆใใ ใ•ใ„ (ใƒ’ใƒณใƒˆ: *"device": "CPU"*)ใ€‚ 
[ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) ใจ [ใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผ](https://www.deepspeed.ai/docs/config-json/#parameter-offloading)ใ€‚ `nvme_path`ใŒๅฎŸ้š›ใซ NVMe ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚NVMe ใฏ้€šๅธธใฎใƒใƒผใƒ‰ใƒ‰ใƒฉใ‚คใƒ–ใพใŸใฏ SSD ใงๅ‹•ไฝœใ—ใพใ™ใŒใ€ ใฏใ‚‹ใ‹ใซ้…ใใชใ‚Šใพใ™ใ€‚้ซ˜้€Ÿใ‚นใ‚ฑใƒผใƒฉใƒ–ใƒซใชใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ€ๆœ€ๆ–ฐใฎ NVMe ่ปข้€้€Ÿๅบฆใ‚’ๅฟต้ ญใซ็ฝฎใ„ใฆ่จญ่จˆใ•ใ‚Œใพใ—ใŸ (ใ“ใฎๆ™‚็‚นใงใฏ ๆ›ธใ่พผใฟใงใฏใ€่ชญใฟๅ–ใ‚Šๆœ€ๅคง 3.5 GB/็ง’ใ€ๆ›ธใ่พผใฟๆœ€ๅคง 3 GB/็ง’ใฎใƒ”ใƒผใ‚ฏ้€ŸๅบฆใŒๅพ—ใ‚‰ใ‚Œใพใ™)ใ€‚ ๆœ€้ฉใช`aio`ๆง‹ๆˆใƒ–ใƒญใƒƒใ‚ฏใ‚’่ฆ‹ใคใ‘ใ‚‹ใซใฏใ€ใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จญๅฎšใงใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚’ๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ [ใ“ใ“ใง่ชฌๆ˜Ž](https://github.com/microsoft/DeepSpeed/issues/998)ใ€‚ <a id='deepspeed-zero2-zero3-performance'></a> #### ZeRO-2 vs ZeRO-3 Performance ZeRO-3 ใฏใ€ไป–ใฎใ™ในใฆใŒๅŒใ˜ใ‚ˆใ†ใซๆง‹ๆˆใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใ€ZeRO-2 ใ‚ˆใ‚Šใ‚‚้…ใใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ๅ‰่€…ใฏๅŽ้›†ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ ZeRO-2 ใฎๆฉŸ่ƒฝใซๅŠ ใˆใฆใƒขใƒ‡ใƒซใฎ้‡ใฟไป˜ใ‘ใ‚’่กŒใ„ใพใ™ใ€‚ ZeRO-2 ใŒใƒ‹ใƒผใ‚บใ‚’ๆบ€ใŸใ—ใ€ๆ•ฐๅ€‹ใฎ GPU ใ‚’่ถ…ใˆใฆๆ‹กๅผตใ™ใ‚‹ๅฟ…่ฆใŒใชใ„ๅ ดๅˆ ใใ†ใ™ใ‚Œใฐใ€ใใ‚Œใซๅ›บๅŸทใ™ใ‚‹ใ“ใจใ‚’้ธๆŠžใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ZeRO-3 ใซใ‚ˆใ‚Šใ€ใฏใ‚‹ใ‹ใซ้ซ˜ใ„ใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃๅฎน้‡ใŒๅฏ่ƒฝใซใชใ‚‹ใ“ใจใ‚’็†่งฃใ™ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ ใ‚นใƒ”ใƒผใƒ‰ใ‚’็Š ็‰ฒใซใ—ใฆใ€‚ ZeRO-3 ใฎๆง‹ๆˆใ‚’่ชฟๆ•ดใ—ใฆใ€ZeRO-2 ใซ่ฟ‘ใฅใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ - `stage3_param_persistence_threshold` ใ‚’้žๅธธใซๅคงใใชๆ•ฐๅ€คใซ่จญๅฎšใ—ใพใ™ใ€‚ใŸใจใˆใฐใ€`6 * hidden_โ€‹โ€‹size * hidden_โ€‹โ€‹size` ใฎใ‚ˆใ†ใซใ€ๆœ€ๅคงโ€‹โ€‹ใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚ˆใ‚Šใ‚‚ๅคงใใใชใ‚Šใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒ‘ใƒฉใƒกใƒผใ‚ฟใŒ GPU ใซไฟๆŒใ•ใ‚Œใพใ™ใ€‚ - ZeRO-2 ใซใฏใใฎใ‚ชใƒ—ใ‚ทใƒงใƒณใŒใชใ„ใŸใ‚ใ€`offload_params` ใ‚’ใ‚ชใƒ•ใซใ—ใพใ™ใ€‚ ๅค‰ๆ›ดใ—ใชใใฆใ‚‚ใ€`offload_params`ใ‚’ใ‚ชใƒ•ใซใ™ใ‚‹ใ ใ‘ใงใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅคงๅน…ใซๅ‘ไธŠใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ `stage3_param_persistence_threshold`ใ€‚ใ‚‚ใกใ‚ใ‚“ใ€ใ“ใ‚Œใ‚‰ใฎๅค‰ๆ›ดใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใใ‚‹ใƒขใƒ‡ใƒซใฎใ‚ตใ‚คใ‚บใซๅฝฑ้Ÿฟใ—ใพใ™ใ€‚ใใ‚Œใง ใ“ใ‚Œใ‚‰ใฏใ€ใƒ‹ใƒผใ‚บใซๅฟœใ˜ใฆใ€ใ‚นใ‚ฑใƒผใƒฉใƒ“ใƒชใƒ†ใ‚ฃใจๅผ•ใๆ›ใˆใซ้€Ÿๅบฆใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใฎใซๅฝน็ซ‹ใกใพใ™ใ€‚ <a id='deepspeed-zero2-example'></a> #### ZeRO-2 Example ไปฅไธ‹ใฏใ€ๅฎŒๅ…จใช ZeRO-2 ่‡ชๅ‹•ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซ `ds_config_zero2.json` ใงใ™ใ€‚ ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` ไปฅไธ‹ใฏใ€ๆ‰‹ๅ‹•ใง่จญๅฎšใ•ใ‚ŒใŸๅฎŒๅ…จใช ZeRO-2 
ใฎใ™ในใฆใŒๆœ‰ๅŠนใชๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใงใ™ใ€‚ใ“ใ“ใงใฏไธปใซใ€ๅ…ธๅž‹็š„ใชใ‚‚ใฎใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใฎใ‚‚ใฎใงใ™ใ€‚ ๅ€คใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใŒใ€่ค‡ๆ•ฐใฎ`auto`่จญๅฎšใŒๅซใพใ‚Œใ‚‹ๅ€คใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` <a id='deepspeed-zero3-example'></a> #### ZeRO-3 Example ไปฅไธ‹ใฏใ€ๅฎŒๅ…จใช ZeRO-3 ่‡ชๅ‹•ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซ`ds_config_zero3.json`ใงใ™ใ€‚ ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` ไปฅไธ‹ใฏใ€ๆ‰‹ๅ‹•ใง่จญๅฎšใ•ใ‚ŒใŸๅฎŒๅ…จใช ZeRO-3 ใฎใ™ในใฆใŒๆœ‰ๅŠนใชๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใงใ™ใ€‚ใ“ใ“ใงใฏไธปใซใ€ๅ…ธๅž‹็š„ใชใ‚‚ใฎใ‚’็ขบ่ชใ™ใ‚‹ใŸใ‚ใฎใ‚‚ใฎใงใ™ใ€‚ ๅ€คใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใŒใ€่ค‡ๆ•ฐใฎ`auto`่จญๅฎšใŒๅซใพใ‚Œใ‚‹ๅ€คใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": 1e6, "stage3_prefetch_bucket_size": 0.94e6, "stage3_param_persistence_threshold": 1e4, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` #### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance 
ใ“ใ‚Œใงใ€ใ•ใพใ–ใพใชๆฎต้šŽใŒใ‚ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ—ใŸใ€‚ใฉใกใ‚‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‹ใ‚’ใฉใฎใ‚ˆใ†ใซๆฑบๅฎšใ™ใ‚Œใฐใ‚ˆใ„ใงใ—ใ‚‡ใ†ใ‹?ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ใ“ใฎ่ณชๅ•ใซ็ญ”ใˆใฆใ„ใใพใ™ใ€‚ ไธ€่ˆฌใซใ€ๆฌกใฎใ“ใจใŒๅฝ“ใฆใฏใพใ‚Šใพใ™ใ€‚ - ้€Ÿๅบฆใฎ็‚น๏ผˆๅทฆใฎๆ–นใŒๅณใ‚ˆใ‚Š้€Ÿใ„๏ผ‰ ใ‚นใƒ†ใƒผใ‚ธ 0 (DDP) > ใ‚นใƒ†ใƒผใ‚ธ 1 > ใ‚นใƒ†ใƒผใ‚ธ 2 > ใ‚นใƒ†ใƒผใ‚ธ 2 + ใ‚ชใƒ•ใƒญใƒผใƒ‰ > ใ‚นใƒ†ใƒผใ‚ธ 3 > ใ‚นใƒ†ใƒผใ‚ธ 3 + ใ‚ชใƒ•ใƒญใƒผใƒ‰ - GPU ใƒกใƒขใƒชใฎไฝฟ็”จ็Šถๆณ (ๅณใฏๅทฆใ‚ˆใ‚Šใ‚‚ GPU ใƒกใƒขใƒชๅŠน็އใŒ้ซ˜ใ„) ใ‚นใƒ†ใƒผใ‚ธ 0 (DDP) < ใ‚นใƒ†ใƒผใ‚ธ 1 < ใ‚นใƒ†ใƒผใ‚ธ 2 < ใ‚นใƒ†ใƒผใ‚ธ 2 + ใ‚ชใƒ•ใƒญใƒผใƒ‰ < ใ‚นใƒ†ใƒผใ‚ธ 3 < ใ‚นใƒ†ใƒผใ‚ธ 3 + ใ‚ชใƒ•ใƒญใƒผใƒ‰ ใ—ใŸใŒใฃใฆใ€ๆœ€ๅฐ้™ใฎๆ•ฐใฎ GPU ใซๅŽใพใ‚ŠใชใŒใ‚‰ๆœ€้€ŸใฎๅฎŸ่กŒใ‚’ๅฎŸ็พใ—ใŸใ„ๅ ดๅˆใฏใ€ๆฌกใฎใƒ—ใƒญใ‚ปใ‚นใซๅพ“ใ†ใ“ใจใŒใงใใพใ™ใ€‚ๆœ€ใ‚‚้€Ÿใ„ใ‚ขใƒ—ใƒญใƒผใƒใ‹ใ‚‰้–‹ๅง‹ใ—ใ€GPU OOM ใซ้™ฅใฃใŸๅ ดๅˆใฏใ€ๆฌกใซ้…ใ„ใ‚ขใƒ—ใƒญใƒผใƒใซ้€ฒใฟใพใ™ใŒใ€ใ“ใ‚Œใซใ‚ˆใ‚Šไฝฟ็”จใ•ใ‚Œใ‚‹ GPU ใƒกใƒขใƒชใŒๅฐ‘ใชใใชใ‚Šใพใ™ใ€‚ใชใฉใชใฉใ€‚ ใพใšใ€ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ‚’ 1 ใซ่จญๅฎšใ—ใพใ™ (ๅฟ…่ฆใชๆœ‰ๅŠนใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใซๅฏพใ—ใฆใ€ใ„ใคใงใ‚‚ๅ‹พ้…็ดฏ็ฉใ‚’ไฝฟ็”จใงใใพใ™)ใ€‚ 1. `--gradient_checkpointing 1` (HF Trainer) ใพใŸใฏ็›ดๆŽฅ `model.gradient_checkpointing_enable()` ใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ - OOM ใฎๅ ดๅˆ 2. ๆœ€ๅˆใซ ZeRO ใ‚นใƒ†ใƒผใ‚ธ 2 ใ‚’่ฉฆใ—ใฆใใ ใ•ใ„ใ€‚ OOMใฎๅ ดๅˆ 3. ZeRO ใ‚นใƒ†ใƒผใ‚ธ 2 + `offload_optimizer` ใ‚’่ฉฆใ—ใพใ™ - OOM ใฎๅ ดๅˆ 4. ZeRO ใ‚นใƒ†ใƒผใ‚ธ 3 ใซๅˆ‡ใ‚Šๆ›ฟใˆใ‚‹ - OOM ใฎๅ ดๅˆ 5. `cpu` ใซๅฏพใ—ใฆ `offload_param` ใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ - OOM ใฎๅ ดๅˆ 6. OOM ใฎๅ ดๅˆใฏใ€`cpu`ใซๅฏพใ—ใฆ`offload_optimizer`ใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚ 7. ใใ‚Œใงใ‚‚ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บ 1 ใซ้ฉๅˆใ—ใชใ„ๅ ดๅˆใฏใ€ใพใšใ•ใพใ–ใพใชใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใ‚’็ขบ่ชใ—ใ€ๅฏ่ƒฝใงใ‚ใ‚Œใฐๅ€คใ‚’ไธ‹ใ’ใพใ™ใ€‚ใŸใจใˆใฐใ€`generate`ใ‚’ไฝฟ็”จใ—ใ€ๅบƒใ„ๆคœ็ดขใƒ“ใƒผใƒ ใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆใฏใ€ๅคง้‡ใฎใƒกใƒขใƒชใ‚’ๆถˆ่ฒปใ™ใ‚‹ใŸใ‚ใ€ๆคœ็ดขใƒ“ใƒผใƒ ใ‚’็‹ญใใ—ใพใ™ใ€‚ 8. fp32 ใงใฏๅฟ…ใšๆททๅˆๅŠ็ฒพๅบฆใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใคใพใ‚Šใ€Ampere ไปฅไธŠใฎ GPU ใงใฏ bf16ใ€ๅคใ„ GPU ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใฏ fp16 ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ 9. 
ใใ‚Œใงใ‚‚ OOM ใ‚’่กŒใ†ๅ ดๅˆใฏใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ‹ใ€ZeRO-Infinity ใ‚’ๆœ‰ๅŠนใซใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ใคใพใ‚Šใ€ใ‚ชใƒ•ใƒญใƒผใƒ‰ `offload_param` ใจ `offload_optimizer` ใ‚’ `nvme` ใซๅˆ‡ใ‚Šๆ›ฟใˆใพใ™ใ€‚้žๅธธใซ้ซ˜้€Ÿใช nvme ใงใ‚ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚้€ธ่ฉฑใจใ—ใฆใ€ZeRO-Infinity ใ‚’ไฝฟ็”จใ—ใฆๅฐใ•ใช GPU ใง BLOOM-176B ใ‚’ๆŽจ่ซ–ใ™ใ‚‹ใ“ใจใŒใงใใพใ—ใŸใŒใ€้žๅธธใซ้…ใ‹ใฃใŸใงใ™ใ€‚ใงใ‚‚ใ€ใ†ใพใใ„ใใพใ—ใŸ๏ผ ใ‚‚ใกใ‚ใ‚“ใ€ๆœ€ใ‚‚ GPU ใƒกใƒขใƒชๅŠน็އใฎ้ซ˜ใ„ๆง‹ๆˆใ‹ใ‚‰ๅง‹ใ‚ใฆใ€ๅพŒใ‹ใ‚‰้€†ใซ้€ฒใ‚€ใ“ใจใงใ€ใ“ใ‚Œใ‚‰ใฎๆ‰‹้ †ใ‚’้€†ใซๅฎŸ่กŒใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใ‚ใ‚‹ใ„ใฏไบŒ็ญ‰ๅˆ†ใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ OOM ใ‚’ๅผ•ใ่ตทใ“ใ•ใชใ„ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บ 1 ใ‚’ๅ–ๅพ—ใ—ใŸใ‚‰ใ€ๅฎŸๅŠนใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ๆธฌๅฎšใ—ใพใ™ใ€‚ ๆฌกใซใ€ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ‚’ใงใใ‚‹ใ ใ‘ๅคงใใใ—ใฆใฟใพใ™ใ€‚ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใŒๅคงใใ„ใปใฉใ€ไน—็ฎ—ใ™ใ‚‹่กŒๅˆ—ใŒๅทจๅคงใชๅ ดๅˆใซ GPU ใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๆœ€้ซ˜ใซใชใ‚‹ใŸใ‚ใ€GPU ใฎๅŠน็އใŒๅ‘ไธŠใ—ใพใ™ใ€‚ ใ“ใ“ใงใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นๆœ€้ฉๅŒ–ใ‚ฒใƒผใƒ ใŒๅง‹ใพใ‚Šใพใ™ใ€‚ไธ€้ƒจใฎใ‚ชใƒ•ใƒญใƒผใƒ‰ๆฉŸ่ƒฝใ‚’ใ‚ชใƒ•ใซใ™ใ‚‹ใ‹ใ€ZeRO ๆฎต้šŽใงใ‚นใƒ†ใƒƒใƒ—ใƒ€ใ‚ฆใƒณใ—ใฆใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ‚’ๅข—ๆธ›ใ—ใฆใ€ๅฎŸๅŠนใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ๅ†ๅบฆๆธฌๅฎšใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๆบ€่ถณใ™ใ‚‹ใพใงๆด—ใ„ๆตใ—ใ€็นฐใ‚Š่ฟ”ใ—ใพใ™ใ€‚ ๆฐธ้ ใซใ“ใ‚Œใซ่ฒปใ‚„ใ™ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€3 ใ‹ๆœˆใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้–‹ๅง‹ใ—ใ‚ˆใ†ใจใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใซ้–ขใ—ใฆๆœ€ใ‚‚ๅŠนๆžœ็š„ใช่จญๅฎšใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซๆ•ฐๆ—ฅใ‹ใ‘ใฆใใ ใ•ใ„ใ€‚ใใฎใŸใ‚ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎใ‚ณใ‚นใƒˆใŒๆœ€ๅฐ้™ใซใชใ‚Šใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ใ‚ˆใ‚Šๆ—ฉใๅฎŒไบ†ใงใใพใ™ใ€‚็พๅœจใฎ็›ฎใพใใ‚‹ใ—ใๅค‰ๅŒ–ใ™ใ‚‹ ML ใฎไธ–็•Œใงใฏใ€ไฝ•ใ‹ใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใฎใซใ•ใ‚‰ใซ 1 ใ‹ๆœˆใ‹ใ‹ใ‚‹ๅ ดๅˆใ€็ตถๅฅฝใฎๆฉŸไผšใ‚’้€ƒใ™ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚‚ใกใ‚ใ‚“ใ€ใ“ใ‚Œใฏ็งใŒๆ„่ฆ‹ใ‚’ๅ…ฑๆœ‰ใ—ใฆใ„ใ‚‹ใ ใ‘ใงใ‚ใ‚Šใ€ๆฑบใ—ใฆใ‚ใชใŸใ‚’ๆ€ฅใ‹ใใ†ใจใ—ใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ BLOOM-176B ใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’้–‹ๅง‹ใ™ใ‚‹ๅ‰ใซใ€ใ“ใฎใƒ—ใƒญใ‚ปใ‚นใซ 2 ๆ—ฅ้–“่ฒปใ‚„ใ—ใ€ใ‚นใƒซใƒผใƒ—ใƒƒใƒˆใ‚’ 90 TFLOP ใ‹ใ‚‰ 150 TFLOP ใซๅ‘ไธŠใ•ใ›ใ‚‹ใ“ใจใŒใงใใพใ—ใŸใ€‚ใ“ใฎๅ–ใ‚Š็ต„ใฟใซใ‚ˆใ‚Šใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆ™‚้–“ใ‚’ 1 ใ‹ๆœˆไปฅไธŠ็ฏ€็ด„ใงใใพใ—ใŸใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒกใƒขใฏไธปใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ ใƒขใƒผใƒ‰็”จใซๆ›ธใ‹ใ‚ŒใŸใ‚‚ใฎใงใ™ใŒใ€ใปใจใ‚“ใฉใฎๅ ดๅˆใฏๆŽจ่ซ–ใซใ‚‚้ฉ็”จใ•ใ‚Œใ‚‹ใฏใšใงใ™ใ€‚ใŸใจใˆใฐใ€ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซใฎใฟๅฝน็ซ‹ใคใŸใ‚ใ€ๆŽจ่ซ–ไธญใฏไฝ•ใ‚‚่กŒใ‚ใ‚Œใพใ›ใ‚“ใ€‚ใ•ใ‚‰ใซใ€ใƒžใƒซใƒ GPU ๆŽจ่ซ–ใ‚’ๅฎŸ่กŒใ—ใฆใ„ใฆใ€[DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/)ใ€[Accelerate](https://ใƒใ‚ฐใƒ•ใ‚งใ‚คใ‚น.co/blog/bloom-inference-pytorch-scripts) ใฏๅ„ชใ‚ŒใŸใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๆไพ›ใ™ใ‚‹ใฏใšใงใ™ใ€‚ ใใฎไป–ใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚น้–ข้€ฃใฎ็ฐกๅ˜ใชใƒกใƒข: - ไฝ•ใ‹ใ‚’ๆœ€ๅˆใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ—ใฆใ„ใ‚‹ๅ ดๅˆใฏใ€ๅธธใซ 16 ใงๅ‰ฒใ‚Šๅˆ‡ใ‚Œใ‚‹ๅฝข็Šถใฎใƒ†ใƒณใ‚ฝใƒซ (้š ใ‚ŒใŸใ‚ตใ‚คใ‚บใชใฉ) ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ˆใ†ใซใ—ใฆใใ ใ•ใ„ใ€‚ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใซใคใ„ใฆใฏใ€ๅฐ‘ใชใใจใ‚‚ 2 ใงๅ‰ฒใ‚Šๅˆ‡ใ‚Œใ‚‹ใ‚ˆใ†ใซใ—ใฆใใ ใ•ใ„ใ€‚ GPU ใ‹ใ‚‰ใ•ใ‚‰ใซ้ซ˜ใ„ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’ๅผ•ใๅ‡บใ—ใŸใ„ๅ ดๅˆใฏใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขๅ›บๆœ‰ใฎ [ๆณขใจใ‚ฟใ‚คใƒซใฎ้‡ๅญๅŒ–](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) 
ใฎๅฏๅˆ†ๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ### Activation Checkpointing or Gradient Checkpointing ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€ๅŒใ˜ๆ–นๆณ•่ซ–ใ‚’ๆŒ‡ใ™ 2 ใคใฎ็•ฐใชใ‚‹็”จ่ชžใงใ™ใ€‚ใจใฆใ‚‚ใ‚„ใ‚„ใ“ใ—ใ„ใงใ™ใŒใ€ใ“ใ‚“ใชๆ„Ÿใ˜ใงใ™ใ€‚ ๅ‹พ้…ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€้€Ÿๅบฆใ‚’ GPU ใƒกใƒขใƒชใจๅผ•ใๆ›ใˆใซใงใใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€GPU OOM ใ‚’ๅ…‹ๆœใ—ใŸใ‚Šใ€ใƒใƒƒใƒ ใ‚ตใ‚คใ‚บใ‚’ๅข—ใ‚„ใ™ใ“ใจใŒใงใใ€ๅคšใใฎๅ ดๅˆใ€ใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใฎๅ‘ไธŠใซใคใชใŒใ‚Šใพใ™ใ€‚ HF Transformers ใƒขใƒ‡ใƒซใฏใ€DeepSpeed ใฎใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใซใคใ„ใฆไฝ•ใ‚‚็Ÿฅใ‚‰ใชใ„ใŸใ‚ใ€DeepSpeed ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใงใใฎๆฉŸ่ƒฝใ‚’ๆœ‰ๅŠนใซใ—ใ‚ˆใ†ใจใ—ใฆใ‚‚ใ€ไฝ•ใ‚‚่ตทใ“ใ‚Šใพใ›ใ‚“ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ“ใฎ้žๅธธใซๆœ‰็›ŠใชๆฉŸ่ƒฝใ‚’ๆดป็”จใ™ใ‚‹ใซใฏ 2 ใคใฎๆ–นๆณ•ใŒใ‚ใ‚Šใพใ™ใ€‚ 1. HF Transformers ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€`model.gradient_checkpointing_enable()` ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ‹ใ€HF ใƒˆใƒฌใƒผใƒŠใƒผใง `--gradient_checkpointing` ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใ“ใ‚ŒใŒ่‡ชๅ‹•็š„ใซๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ใใ“ใงไฝฟใ‚ใ‚Œใ‚‹ใฎใŒ `torch.utils.checkpoint` ใงใ™ใ€‚ 2. ็‹ฌ่‡ชใฎใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใ€DeepSpeed ใฎใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€[ใใ“ใง่ฆๅฎšใ•ใ‚Œใฆใ„ใ‚‹ API](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html) ใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ HF Transformers ใƒขใƒ‡ใƒชใƒณใ‚ฐ ใ‚ณใƒผใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€`torch.utils.checkpoint` ใ‚’ DeepSpeed ใฎ API ใซ็ฝฎใๆ›ใˆใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ๅพŒ่€…ใฏใ€้ †ๆ–นๅ‘ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ™ใƒผใ‚ทใƒงใƒณใ‚’ๅ†่จˆ็ฎ—ใ™ใ‚‹ไปฃใ‚ใ‚Šใซ CPU ใƒกใƒขใƒชใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใงใใ‚‹ใŸใ‚ใ€ใ‚ˆใ‚ŠๆŸ”่ปŸใงใ™ใ€‚ ### Optimizer and Scheduler `offload_optimizer`ใ‚’ๆœ‰ๅŠนใซใ—ใชใ„้™ใ‚Šใ€DeepSpeed ใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใจ HuggingFace ใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใ‚’็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จโ€‹โ€‹ใงใใพใ™ใ€‚ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ (HuggingFace ใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใจ DeepSpeed ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็ต„ใฟๅˆใ‚ใ›ใ‚’้™คใ): | Combos | HF Scheduler | DS Scheduler | |:-------------|:-------------|:-------------| | HF Optimizer | Yes | Yes | | DS Optimizer | No | Yes | `offload_optimizer`ใŒๆœ‰ๅŠนใชๅ ดๅˆใ€CPU ใจ GPU ๅฎŸ่ฃ… (LAMB ใ‚’้™คใ)ใ€‚ <a id='deepspeed-optimizer'></a> #### Optimizer DeepSpeed ใฎไธปใชใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฏใ€Adamใ€AdamWใ€OneBitAdamใ€Lamb ใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏ ZeRO ใงๅพนๅบ•็š„ใซใƒ†ใ‚นใƒˆใ•ใ‚ŒใฆใŠใ‚Šใ€ ใ—ใŸใŒใฃใฆใ€ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ไป–ใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใ‚’ใ€Œtorchใ€ใ‹ใ‚‰ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ใ“ใจใฏใงใใพใ™ใ€‚ๅฎŒๅ…จใชใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฏ [ใ“ใกใ‚‰](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters) ใซใ‚ใ‚Šใพใ™ใ€‚ ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใง `optimizer` ใ‚จใƒณใƒˆใƒชใ‚’่จญๅฎšใ—ใชใ„ๅ ดๅˆใ€[`Trainer`] ใฏ ่‡ชๅ‹•็š„ใซ`AdamW`ใซ่จญๅฎšใ•ใ‚Œใ€ๆŒ‡ๅฎšใ•ใ‚ŒใŸๅ€คใพใŸใฏๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ๅผ•ๆ•ฐ: `--learning_rate`ใ€`--adam_beta1`ใ€`--adam_beta2`ใ€`--adam_epsilon`ใ€ใŠใ‚ˆใณ `--weight_decay`ใ€‚ ไปฅไธ‹ใฏใ€`AdamW`ใฎ่‡ชๅ‹•ๆง‹ๆˆใ•ใ‚ŒใŸ`optimizer`ใ‚จใƒณใƒˆใƒชใฎไพ‹ใงใ™ใ€‚ ```json { "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } } } ``` ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใซใ‚ˆใฃใฆๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซๅ†…ใฎๅ€คใŒ่จญๅฎšใ•ใ‚Œใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ“ใ‚Œใฏ 1 ใคใ‚ใ‚‹ใŸใ‚ใงใ™ 
ๅ€คใฎๆฑบๅฎš็š„ใชใ‚ฝใƒผใ‚นใ‚’ๆไพ›ใ—ใ€ใŸใจใˆใฐๅญฆ็ฟ’็އใŒๆฌกใฎใ‚ˆใ†ใซ่จญๅฎšใ•ใ‚Œใฆใ„ใ‚‹ๅ ดๅˆใซใ€่ฆ‹ใคใ‘ใซใใ„ใ‚จใƒฉใƒผใ‚’ๅ›ž้ฟใ—ใพใ™ใ€‚ ใ•ใพใ–ใพใชๅ ดๆ‰€ใงใ•ใพใ–ใพใชไพกๅ€ค่ฆณใ€‚ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใฎใƒซใƒผใƒซใ€‚ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ•ใ‚Œใ‚‹ๅ€คใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ - `lr` ใจ `--learning_rate` ใฎๅ€ค - `betas` ใจ `--adam_beta1 --adam_beta2` ใฎๅ€ค - `eps` ใจ `--adam_epsilon` ใฎๅ€ค - `weight_decay` ใจ `--weight_decay` ใฎๅ€ค ใ—ใŸใŒใฃใฆใ€ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใงๅ…ฑๆœ‰ใƒใ‚คใƒ‘ใƒผใƒ‘ใƒฉใƒกใƒผใ‚ฟใ‚’่ชฟๆ•ดใ™ใ‚‹ใ“ใจใ‚’ๅฟ˜ใ‚Œใชใ„ใงใใ ใ•ใ„ใ€‚ ๅ€คใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "optimizer": { "type": "AdamW", "params": { "lr": 0.001, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } } } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ ไธŠ่จ˜ใซใƒชใ‚นใƒˆใ•ใ‚Œใฆใ„ใชใ„ๅˆฅใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ใƒˆใƒƒใƒ—ใƒฌใƒ™ใƒซใฎๆง‹ๆˆใซ่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```json { "zero_allow_untested_optimizer": true } ``` `AdamW`ใจๅŒๆง˜ใซใ€ๅ…ฌๅผใซใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใ‚‹ไป–ใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใ‚’ๆง‹ๆˆใงใใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฏ็•ฐใชใ‚‹่จญๅฎšๅ€คใ‚’ๆŒใคๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ไพ‹ใˆใฐAdam ใฎๅ ดๅˆใฏใ€`weight_decay`ใ‚’`0.01`ไป˜่ฟ‘ใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ•ใ‚‰ใซใ€ใ‚ชใƒ•ใƒญใƒผใƒ‰ใฏใ€Deepspeed ใฎ CPU Adam ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใจไฝต็”จใ™ใ‚‹ใจๆœ€ใ‚‚ๅŠนๆžœ็š„ใซๆฉŸ่ƒฝใ—ใพใ™ใ€‚ `deepspeed==0.8.3` ใชใฎใงใ€ใ‚ชใƒ•ใƒญใƒผใƒ‰ใงๅˆฅใฎใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ไปฅไธ‹ใ‚‚่ฟฝๅŠ ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```json { "zero_force_ds_cpu_optimizer": false } ``` ๆœ€ไธŠไฝใฎๆง‹ๆˆใซ็งป่กŒใ—ใพใ™ใ€‚ <a id='deepspeed-scheduler'></a> #### Scheduler DeepSpeed ใฏใ€`LRRangeTest`ใ€`OneCycle`ใ€`WarmupLR`ใ€ใŠใ‚ˆใณ`WarmupDecayLR`ๅญฆ็ฟ’็އใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใ€‚ๅฎŒๅ…จใช ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆใฏ[ใ“ใ“](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters)ใงใ™ใ€‚ ใ“ใ“ใงใฏใ€๐Ÿค— Transformers ใจ DeepSpeed ใฎ้–“ใงใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใƒผใŒ้‡่ค‡ใ™ใ‚‹ๅ ดๆ‰€ใ‚’็คบใ—ใพใ™ใ€‚ - `--lr_scheduler_type constant_with_warmup` ็ตŒ็”ฑใฎ `WarmupLR` - `--lr_scheduler_type Linear` ใ‚’ไป‹ใ—ใŸ `WarmupDecayLR`ใ€‚ใ“ใ‚Œใฏ `--lr_scheduler_type` ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆๅ€คใงใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใ‚’่จญๅฎšใ—ใชใ„ๅ ดๅˆใ€ใ“ใ‚ŒใŒใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง่จญๅฎšใ•ใ‚Œใ‚‹ใ‚นใ‚ฑใ‚ธใƒฅใƒผใƒฉใซใชใ‚Šใพใ™ใ€‚ ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใง `scheduler` ใ‚จใƒณใƒˆใƒชใ‚’่จญๅฎšใ—ใชใ„ๅ ดๅˆใ€[`Trainer`] ใฏ `--lr_scheduler_type`ใ€`--learning_rate`ใ€ใŠใ‚ˆใณ `--warmup_steps` ใพใŸใฏ `--warmup_ratio` ใฎๅ€คใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ๐Ÿค— ใใ‚Œใฎใƒˆใƒฉใƒณใ‚นใƒ•ใ‚ฉใƒผใƒžใƒผใƒใƒผใ‚ธใƒงใƒณใ€‚ ไปฅไธ‹ใฏใ€`WarmupLR`ใฎ่‡ชๅ‹•ๆง‹ๆˆใ•ใ‚ŒใŸ`scheduler`ใ‚จใƒณใƒˆใƒชใฎไพ‹ใงใ™ใ€‚ ```json { "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } } } ``` *"auto"* ใŒไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€[`Trainer`] ๅผ•ๆ•ฐใฏ่จญๅฎšใซๆญฃใ—ใ„ๅ€คใ‚’่จญๅฎšใ—ใพใ™ใ€‚ ใƒ•ใ‚กใ‚คใƒซใ€‚ใ“ใ‚Œใฏใ€ๅ€คใฎๆฑบๅฎš็š„ใชใ‚ฝใƒผใ‚นใŒ 1 ใคใ‚ใ‚‹ใ“ใจใจใ€ใŸใจใˆใฐๆฌกใฎใ‚ˆใ†ใชๅ ดๅˆใซ่ฆ‹ใคใ‘ใซใใ„ใ‚จใƒฉใƒผใ‚’้ฟใ‘ใ‚‹ใŸใ‚ใงใ™ใ€‚ ๅญฆ็ฟ’็އใฏใ€ๅ ดๆ‰€ใ”ใจใซ็•ฐใชใ‚‹ๅ€คใซ่จญๅฎšใ•ใ‚Œใพใ™ใ€‚ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณใฎใƒซใƒผใƒซใ€‚่จญๅฎšใ•ใ‚Œใ‚‹ๅ€คใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ - `warmup_min_lr` ใฎๅ€คใฏ `0` ใงใ™ใ€‚ - 
- `warmup_max_lr` with the value of `--learning_rate`.
- `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise it will use `--warmup_ratio` multiplied by the number of training steps and rounded up.
- `total_num_steps` with either the value of `--max_steps` or, if it is not provided, derived automatically at run time based on the environment, the size of the dataset and other command line arguments (needed for `WarmupDecayLR`).

You can, of course, take over any or all of the configuration values and set them yourself:

```json
{
   "scheduler": {
         "type": "WarmupLR",
         "params": {
             "warmup_min_lr": 0,
             "warmup_max_lr": 0.001,
             "warmup_num_steps": 1000
         }
     }
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

For example, for `WarmupDecayLR`, you can use the following entry:

```json
{
   "scheduler": {
         "type": "WarmupDecayLR",
         "params": {
             "last_batch_iteration": -1,
             "total_num_steps": "auto",
             "warmup_min_lr": "auto",
             "warmup_max_lr": "auto",
             "warmup_num_steps": "auto"
         }
     }
}
```

and `total_num_steps`, `warmup_max_lr` and `warmup_num_steps` will be set at loading time.

<a id='deepspeed-fp32'></a>

### fp32 Precision

DeepSpeed supports the full fp32 and the fp16 mixed precision.

Because of the much reduced memory needs and the faster speed one gets with fp16 mixed precision, the only time you will want to not use it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in fp16 mixed precision (e.g. this often happens with bf16-pretrained models). Such models may overflow or underflow, leading to `NaN` loss. If this is your case then you will want to use the full fp32 mode, by explicitly disabling the otherwise default fp16 mixed precision mode like so:

```json
{
    "fp16": {
        "enabled": false
    }
}
```

If you're using an Ampere-architecture based GPU, pytorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks please see [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it.

With the 🤗 Trainer you can use `--tf32` to enable it, or disable it with `--tf32 0` or `--no_tf32`. By default the PyTorch default is used.

<a id='deepspeed-amp'></a>

### Automatic Mixed Precision

You can use automatic mixed precision with either the pytorch-like AMP way or the apex-like way.

### fp16

To configure pytorch-AMP-like mode with fp16 (float16) set:
"enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } ``` [`Trainer`] ใฏใ€ใฎๅ€คใซๅŸบใฅใ„ใฆใใ‚Œใ‚’่‡ชๅ‹•็š„ใซๆœ‰ๅŠนใพใŸใฏ็„กๅŠนใซใ—ใพใ™ใ€‚ `args.fp16_backend`ใ€‚ๆฎ‹ใ‚Šใฎ่จญๅฎšๅ€คใฏใ‚ใชใŸๆฌก็ฌฌใงใ™ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใฏใ€`--fp16 --fp16_backend amp`ใพใŸใฏ`--fp16_full_eval`ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใŒๆธกใ•ใ‚Œใ‚‹ใจๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใ‚’ๆ˜Ž็คบ็š„ใซๆœ‰ๅŠน/็„กๅŠนใซใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ ใ“ใ‚ŒใŒ[ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://www.deepspeed.ai/docs/config-json/#fp16-training-options)ใงใ™ใ€‚ ### BF16 fp16 ใฎไปฃใ‚ใ‚Šใซ bf16 (bfloat16) ใŒๅฟ…่ฆใชๅ ดๅˆใฏใ€ๆฌกใฎๆง‹ๆˆใ‚ปใ‚ฏใ‚ทใƒงใƒณใŒไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ```json { "bf16": { "enabled": "auto" } } ``` bf16 ใฏ fp32 ใจๅŒใ˜ใƒ€ใ‚คใƒŠใƒŸใƒƒใ‚ฏ ใƒฌใƒณใ‚ธใ‚’ๅ‚™ใˆใฆใ„ใ‚‹ใŸใ‚ใ€ๆๅคฑใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐใฏๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใฏใ€`--bf16` ใพใŸใฏ `--bf16_full_eval` ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใŒๆธกใ•ใ‚Œใ‚‹ใจๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใ‚’ๆ˜Ž็คบ็š„ใซๆœ‰ๅŠน/็„กๅŠนใซใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "bf16": { "enabled": true } } ``` <Tip> `deepspeed==0.6.0`ใฎๆ™‚็‚นใงใฏใ€bf16 ใ‚ตใƒใƒผใƒˆใฏๆ–ฐใ—ใๅฎŸ้จ“็š„ใชใ‚‚ใฎใงใ™ใ€‚ bf16 ใŒๆœ‰ๅŠนใช็Šถๆ…‹ใง [ๅ‹พ้…็ดฏ็ฉ](#gradient-accumulation) ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€bf16 ใงๅ‹พ้…ใŒ็ดฏ็ฉใ•ใ‚Œใ‚‹ใ“ใจใซๆณจๆ„ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎๅฝขๅผใฎ็ฒพๅบฆใŒไฝŽใ„ใŸใ‚ใ€ใ“ใ‚ŒใฏๅธŒๆœ›ใฉใŠใ‚Šใงใฏใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ๆๅคฑใฎใ‚ใ‚‹่“„็ฉใซใคใชใŒใ‚Šใพใ™ใ€‚ ใ“ใฎๅ•้กŒใ‚’ไฟฎๆญฃใ—ใ€ใ‚ˆใ‚Š้ซ˜็ฒพๅบฆใฎ `dtype` (fp16 ใพใŸใฏ fp32) ใ‚’ไฝฟ็”จใ™ใ‚‹ใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ๆไพ›ใ™ใ‚‹ใŸใ‚ใฎไฝœๆฅญใŒ่กŒใ‚ใ‚Œใฆใ„ใพใ™ใ€‚ </Tip> ### NCCL Collectives ่จ“็ทดไฝ“ๅˆถใฎ`dtype`ใŒใ‚ใ‚Šใ€ใ•ใพใ–ใพใชๅ‰Šๆธ›ใ‚„ๅŽ้›†/ๅˆ†ๆ•ฃๆ“ไฝœใชใฉใฎใ‚ณใƒŸใƒฅใƒ‹ใ‚ฑใƒผใ‚ทใƒงใƒณ้›†ๅˆไฝ“ใซไฝฟ็”จใ•ใ‚Œใ‚‹ๅˆฅใฎ`dtype`ใŒใ‚ใ‚Šใพใ™ใ€‚ ใ™ในใฆใฎๅŽ้›†/ๅˆ†ๆ•ฃๆ“ไฝœใฏใ€ใƒ‡ใƒผใ‚ฟใŒๅซใพใ‚Œใฆใ„ใ‚‹ใฎใจๅŒใ˜ `dtype` ใงๅฎŸ่กŒใ•ใ‚Œใ‚‹ใŸใ‚ใ€bf16 ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไฝ“ๅˆถใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ใƒ‡ใƒผใ‚ฟใฏ bf16 ใงๅŽ้›†ใ•ใ‚Œใพใ™ใ€‚ๅŽ้›†ใฏๆๅคฑใฎใชใ„ๆ“ไฝœใงใ™ใ€‚ ใ•ใพใ–ใพใชใƒชใƒ‡ใƒฅใƒผใ‚นๆ“ไฝœใฏ้žๅธธใซๆๅคฑใŒๅคงใใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€่ค‡ๆ•ฐใฎ GPU ้–“ใงๅ‹พ้…ใŒๅนณๅ‡ๅŒ–ใ•ใ‚Œใ‚‹ๅ ดๅˆใ€้€šไฟกใŒ fp16 ใพใŸใฏ bf16 ใง่กŒใ‚ใ‚Œใ‚‹ๅ ดๅˆใ€็ตๆžœใฏๆๅคฑใŒๅคšใใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚่ค‡ๆ•ฐใฎๆ•ฐๅ€คใ‚’ไฝŽ็ฒพๅบฆใงใ‚ขใƒ‰ใƒใ‚ฟใ‚คใ‚บใ™ใ‚‹ใจ็ตๆžœใฏๆญฃ็ขบใงใฏใชใ„ใŸใ‚ใงใ™ใ€‚ ใ€‚ bf16 ใงใฏ fp16 ใ‚ˆใ‚Šใ‚‚็ฒพๅบฆใŒไฝŽใ„ใŸใ‚ใ€ใ•ใ‚‰ใซใใ†ใงใ™ใ€‚้€šๅธธใฏ้žๅธธใซๅฐใ•ใ„ grad ใ‚’ๅนณๅ‡ใ™ใ‚‹้š›ใฎๆๅคฑใŒๆœ€ๅฐ้™ใซๆŠ‘ใˆใ‚‰ใ‚Œใ‚‹ใŸใ‚ใ€fp16 ใงๅๅˆ†ใงใ‚ใ‚‹ใ“ใจใŒใ‚ˆใใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏใ€ๅŠ็ฒพๅบฆใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใงใฏ fp16 ใŒใƒชใƒ€ใ‚ฏใ‚ทใƒงใƒณๆผ”็ฎ—ใฎใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใจใ—ใฆไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ใŸใ ใ—ใ€ใ“ใฎๆฉŸ่ƒฝใ‚’ๅฎŒๅ…จใซๅˆถๅพกใงใใ€ๅฟ…่ฆใซๅฟœใ˜ใฆๅฐใ•ใชใ‚ชใƒผใƒใƒผใƒ˜ใƒƒใƒ‰ใ‚’่ฟฝๅŠ ใ—ใฆใ€ใƒชใƒ€ใ‚ฏใ‚ทใƒงใƒณใŒ็ดฏ็ฉ dtype ใจใ—ใฆ fp32 ใ‚’ไฝฟ็”จใ—ใ€็ตๆžœใฎๆบ–ๅ‚™ใŒใงใใŸๅ ดๅˆใซใฎใฟๅŠ็ฒพๅบฆ `dtype` 
ใซใƒ€ใ‚ฆใƒณใ‚ญใƒฃใ‚นใƒˆใ™ใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใงใ™ใ€‚ ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใ‚’ใ‚ชใƒผใƒใƒผใƒฉใ‚คใƒ‰ใ™ใ‚‹ใซใฏใ€ๆ–ฐใ—ใ„ๆง‹ๆˆใ‚จใƒณใƒˆใƒชใ‚’่ฟฝๅŠ ใ™ใ‚‹ใ ใ‘ใงใ™ใ€‚ ```json { "communication_data_type": "fp32" } ``` ใ“ใฎ่จ˜ไบ‹ใฎๅŸท็ญ†ๆ™‚็‚นใงใฎๆœ‰ๅŠนใชๅ€คใฏใ€"fp16"ใ€"bfp16"ใ€"fp32"ใงใ™ใ€‚ ๆณจ: ใ‚นใƒ†ใƒผใ‚ธ ใ‚ผใƒญ 3 ใซใฏใ€bf16 ้€šไฟกใ‚ฟใ‚คใƒ—ใซ้–ขใ™ใ‚‹ใƒใ‚ฐใŒใ‚ใ‚Šใ€`deepspeed==0.8.1`ใงไฟฎๆญฃใ•ใ‚Œใพใ—ใŸใ€‚ ### apex apex AMP ใฎใ‚ˆใ†ใชใƒขใƒผใƒ‰ ใ‚ปใƒƒใƒˆใ‚’่จญๅฎšใ™ใ‚‹ใซใฏ: ```json "amp": { "enabled": "auto", "opt_level": "auto" } ``` [`Trainer`] ใฏ `args.fp16_backend` ใฎๅ€คใซๅŸบใฅใ„ใฆ่‡ชๅ‹•็š„ใซ่จญๅฎšใ—ใพใ™ใ€‚ `args.fp16_opt_level`ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใฏใ€`--fp16 --fp16_backend apex --fp16_opt_level 01`ใ‚ณใƒžใƒณใƒ‰ ใƒฉใ‚คใƒณๅผ•ๆ•ฐใŒๆธกใ•ใ‚Œใ‚‹ใจๆœ‰ๅŠนใซใชใ‚Šใพใ™ใ€‚ ใ“ใฎใƒขใƒผใƒ‰ใ‚’ๆ˜Ž็คบ็š„ใซๆง‹ๆˆใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "amp": { "enabled": true, "opt_level": "O1" } } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ ใ“ใ‚Œใฏ[ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options)ใงใ™ใ€‚ <a id='deepspeed-bs'></a> ### Batch Size ใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’่จญๅฎšใ™ใ‚‹ใซใฏใ€ๆฌกใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ```json { "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto" } ``` [`Trainer`] ใฏ่‡ชๅ‹•็š„ใซ `train_micro_batch_size_per_gpu` ใ‚’ๆฌกใฎๅ€คใซ่จญๅฎšใ—ใพใ™ใ€‚ `args.per_device_train_batch_size`ใจ`train_batch_size`ใ‚’`args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`ใซๅค‰ๆ›ดใ—ใพใ™ใ€‚ ๅ€คใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "train_batch_size": 12, "train_micro_batch_size_per_gpu": 4 } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ <a id='deepspeed-grad-acc'></a> ### Gradient Accumulation ๅ‹พ้…็ดฏ็ฉใ‚ปใƒƒใƒˆใ‚’ๆง‹ๆˆใ™ใ‚‹ใซใฏ: ```json { "gradient_accumulation_steps": "auto" } ``` [`Trainer`] ใฏ่‡ชๅ‹•็š„ใซใใ‚Œใ‚’ `args.gradient_accumulation_steps` ใฎๅ€คใซ่จญๅฎšใ—ใพใ™ใ€‚ ๅ€คใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "gradient_accumulation_steps": 3 } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ <a id='deepspeed-grad-clip'></a> ### Gradient Clipping ใ‚ฐใƒฉใƒ‡ใƒผใ‚ทใƒงใƒณ ใ‚ฐใƒฉใƒ‡ใƒผใ‚ทใƒงใƒณ ใ‚ฏใƒชใƒƒใƒ”ใƒณใ‚ฐ ใ‚ปใƒƒใƒˆใ‚’ๆง‹ๆˆใ™ใ‚‹ใซใฏ: ```json { "gradient_clipping": "auto" } ``` [`Trainer`] ใฏ่‡ชๅ‹•็š„ใซใใ‚Œใ‚’ `args.max_grad_norm` ใฎๅ€คใซ่จญๅฎšใ—ใพใ™ใ€‚ ๅ€คใ‚’ๆ˜Ž็คบ็š„ใซ่จญๅฎšใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```json { "gradient_clipping": 1.0 } ``` ใŸใ ใ—ใ€[`Trainer`] ใ‚ณใƒžใƒณใƒ‰ใƒฉใ‚คใƒณๅผ•ๆ•ฐใจ DeepSpeed ใ‚’่‡ชๅˆ†ใงๅŒๆœŸใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ๆง‹ๆˆใ€‚ <a id='deepspeed-weight-extraction'></a> ### Getting The Model Weights Out ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’็ถ™็ถšใ—ใ€DeepSpeed ใฎไฝฟ็”จใ‚’ๅ†้–‹ใ™ใ‚‹้™ใ‚Šใ€ไฝ•ใ‚‚ๅฟƒ้…ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ DeepSpeed ใ‚นใƒˆใ‚ข fp32 ใฎใ‚ซใ‚นใ‚ฟใƒ  ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผ ใƒ•ใ‚กใ‚คใƒซๅ†…ใฎใƒžใ‚นใ‚ฟใƒผใฎ้‡ใฟใ€‚ใ“ใ‚Œใฏ `global_step*/*optim_states.pt` (ใ“ใ‚Œใฏ glob ใƒ‘ใ‚ฟใƒผใƒณ)ใ€้€šๅธธใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎไธ‹ใซไฟๅญ˜ใ•ใ‚Œใพใ™ใ€‚ **FP16 ใ‚ฆใ‚งใ‚คใƒˆ:** ใƒขใƒ‡ใƒซใ‚’ ZeRO-2 ใงไฟๅญ˜ใ™ใ‚‹ใจใ€ใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚’ๅซใ‚€้€šๅธธใฎ `pytorch_model.bin` 
ใƒ•ใ‚กใ‚คใƒซใŒไฝœๆˆใ•ใ‚Œใพใ™ใŒใ€ ใ“ใ‚Œใ‚‰ใฏ้‡ใฟใฎ fp16 ใƒใƒผใ‚ธใƒงใƒณใซใ™ใŽใพใ›ใ‚“ใ€‚ ZeRO-3 ใงใฏใ€ใƒขใƒ‡ใƒซใฎ้‡ใฟใŒ่ค‡ๆ•ฐใฎ GPU ใซๅˆ†ๅ‰ฒใ•ใ‚Œใ‚‹ใŸใ‚ใ€็Šถๆณใฏใ•ใ‚‰ใซ่ค‡้›‘ใซใชใ‚Šใพใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€fp16 ใ‚’ไฟๅญ˜ใ™ใ‚‹ใŸใ‚ใฎ `Trainer` ใ‚’ๅ–ๅพ—ใ™ใ‚‹ใซใฏใ€`"stage3_gather_16bit_weights_on_model_save": true` ใŒๅฟ…่ฆใงใ™ใ€‚ ้‡ใฟใฎใƒใƒผใ‚ธใƒงใƒณใ€‚ใ“ใฎ่จญๅฎšใŒ`False`ใฎๅ ดๅˆใ€`pytorch_model.bin`ใฏไฝœๆˆใ•ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใง DeepSpeed ใฎ `state_dict` ใซๅฎŸ้š›ใฎ้‡ใฟใงใฏใชใใƒ—ใƒฌใƒผใ‚นใƒ›ใƒซใƒ€ใƒผใŒๅซใพใ‚Œใ‚‹ใŸใ‚ใงใ™ใ€‚ใ“ใฎ `state_dict` ใ‚’ไฟๅญ˜ใ—ใŸๅ ดๅˆใ€ใƒญใƒผใƒ‰ใ—็›ดใ™ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ ```json { "zero_optimization": { "stage3_gather_16bit_weights_on_model_save": true } } ``` **FP32 ้‡้‡:** fp16 ใ‚ฆใ‚งใ‚คใƒˆใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅ†้–‹ใ™ใ‚‹ใฎใซ้ฉใ—ใฆใ„ใพใ™ใŒใ€ใƒขใƒ‡ใƒซใฎๅพฎ่ชฟๆ•ดใŒๅฎŒไบ†ใ—ใ€ใใ‚Œใ‚’ [ใƒขใƒ‡ใƒซ ใƒใƒ–](https://huggingface.co/models) ใซใ‚ขใ‚ฏใ‚ปใ‚นใ™ใ‚‹ใ‹ใ€fp32 ใ‚’ๅ…ฅๆ‰‹ใ—ใŸใ„ใจๆ€ใ‚ใ‚Œใ‚‹ไป–ใฎไบบใซๆธกใ—ใพใ™ใ€‚ ้‡ใฟใ€‚ใ“ใ‚Œใฏๅคง้‡ใฎใƒกใƒขใƒชใ‚’ๅฟ…่ฆใจใ™ใ‚‹ใƒ—ใƒญใ‚ปใ‚นใงใ‚ใ‚‹ใŸใ‚ใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐไธญใซ่กŒใ†ในใใงใฏใชใ„ใฎใŒ็†ๆƒณ็š„ใงใ™ใ€‚ ใ—ใŸใŒใฃใฆใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅฎŒไบ†ๅพŒใซใ‚ชใƒ•ใƒฉใ‚คใƒณใงๅฎŸ่กŒใ™ใ‚‹ใฎใŒๆœ€้ฉใงใ™ใ€‚ใŸใ ใ—ใ€ๅฟ…่ฆใซๅฟœใ˜ใฆใ€็ฉบใ CPU ใŒๅๅˆ†ใซใ‚ใ‚‹ๅ ดๅˆใฏใ€ ๅŒใ˜ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ ใ‚นใ‚ฏใƒชใƒ—ใƒˆใงๅฎŸ่กŒใงใใ‚‹ใ“ใจใ‚’ๆ€ใ„ๅ‡บใ—ใฆใใ ใ•ใ„ใ€‚ๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ไธกๆ–นใฎใ‚ขใƒ—ใƒญใƒผใƒใซใคใ„ใฆ่ชฌๆ˜Žใ—ใพใ™ใ€‚ **ใƒฉใ‚คใƒ– FP32 ใ‚ฆใ‚งใ‚คใƒˆ ใƒชใ‚ซใƒใƒช:** ใƒขใƒ‡ใƒซใŒๅคงใใใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎ็ต‚ไบ†ๆ™‚ใซ็ฉบใ CPU ใƒกใƒขใƒชใŒใปใจใ‚“ใฉๆฎ‹ใฃใฆใ„ใชใ„ๅ ดๅˆใ€ใ“ใฎใ‚ขใƒ—ใƒญใƒผใƒใฏๆฉŸ่ƒฝใ—ใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚ ๅฐ‘ใชใใจใ‚‚ 1 ใคใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฟๅญ˜ใ—ใฆใ„ใฆใ€ๆœ€ๆ–ฐใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ๆฌกใฎๆ‰‹้ †ใ‚’ๅฎŸ่กŒใงใใพใ™ใ€‚ ```python from transformers.trainer_utils import get_last_checkpoint from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = get_last_checkpoint(trainer.args.output_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) ``` `--load_best_model_at_end` class:*~transformers.TrainingArguments* ๅผ•ๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆ (ๆœ€้ฉใชใƒขใƒ‡ใƒซใ‚’่ฟฝ่ทกใ™ใ‚‹ใŸใ‚) ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ)ใ€ๆœ€ๅˆใซๆœ€็ต‚ใƒขใƒ‡ใƒซใ‚’ๆ˜Ž็คบ็š„ใซไฟๅญ˜ใ—ใฆใ‹ใ‚‰ใ€ไธŠ่จ˜ใจๅŒใ˜ใ“ใจใ‚’่กŒใ†ใ“ใจใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’็ต‚ไบ†ใงใใพใ™ใ€‚ ```python from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final") trainer.deepspeed.save_checkpoint(checkpoint_dir) fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) ``` <Tip> `load_state_dict_from_zero_checkpoint` ใŒๅฎŸ่กŒใ•ใ‚Œใ‚‹ใจใ€`model` ใฏใ‚‚ใฏใ‚„ไฝฟ็”จใงใใชใใชใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ๅŒใ˜ใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใฎ DeepSpeed ใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใ€‚ใคใพใ‚Šใ€deepspeed ใ‚จใƒณใ‚ธใƒณใ‚’ๅ†ๅˆๆœŸๅŒ–ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ `model.load_state_dict(state_dict)` ใฏใใ“ใ‹ใ‚‰ใ™ในใฆใฎ DeepSpeed ใƒžใ‚ธใƒƒใ‚ฏใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€ใ“ใ‚Œใฏๆœ€ๅพŒใซใฎใฟๅฎŸ่กŒใ—ใฆใใ ใ•ใ„ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๆง˜ๅญใ€‚ </Tip> ใ‚‚ใกใ‚ใ‚“ใ€class:*~transformers.Trainer* 
ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใฏใชใใ€ไธŠ่จ˜ใฎไพ‹ใ‚’็‹ฌ่‡ชใฎใ‚‚ใฎใซ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ใƒˆใƒฌใƒผใƒŠใƒผใ€‚ ไฝ•ใ‚‰ใ‹ใฎ็†็”ฑใงใ•ใ‚‰ใซๆ”น่‰ฏใ—ใŸใ„ๅ ดๅˆใฏใ€้‡ใฟใฎ fp32 `state_dict` ใ‚’ๆŠฝๅ‡บใ—ใฆ้ฉ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ๆฌกใฎไพ‹ใซ็คบใ™ใ‚ˆใ†ใซใ€ใ“ใ‚Œใ‚‰ใฏ่‡ชๅˆ†ใงไฝœๆˆใ—ใพใ™ใ€‚ ```python from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu model = model.cpu() model.load_state_dict(state_dict) ``` **ใ‚ชใƒ•ใƒฉใ‚คใƒณ FP32 ใ‚ฆใ‚งใ‚คใƒˆ ใƒชใ‚ซใƒใƒช:** DeepSpeed ใฏ็‰นๅˆฅใชๅค‰ๆ›ใ‚นใ‚ฏใƒชใƒ—ใƒˆ`zero_to_fp32.py`ใ‚’ไฝœๆˆใ—ใ€ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎๆœ€ไธŠไฝใซ้…็ฝฎใ—ใพใ™ใ€‚ ใƒ•ใ‚ฉใƒซใƒ€ใ€‚ใ“ใฎใ‚นใ‚ฏใƒชใƒ—ใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใ„ใคใงใ‚‚้‡ใฟใ‚’ๆŠฝๅ‡บใงใใพใ™ใ€‚ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ‚นใ‚ฟใƒณใƒ‰ใ‚ขใƒญใƒณใชใฎใงใ€ใ‚‚ใ†ๅฟ…่ฆใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๆŠฝๅ‡บใ‚’่กŒใ†ใŸใ‚ใฎ่จญๅฎšใƒ•ใ‚กใ‚คใƒซใพใŸใฏ `Trainer` ใŒๅฟ…่ฆใงใ™ใ€‚ ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ ใƒ•ใ‚ฉใƒซใƒ€ใƒผใŒๆฌกใฎใ‚ˆใ†ใซใชใฃใฆใ„ใ‚‹ใจใ—ใพใ™ใ€‚ ```bash $ ls -l output_dir/checkpoint-1/ -rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/ -rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest -rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt -rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin -rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt -rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json -rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model -rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json -rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json -rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin -rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py* ``` ใ“ใฎไพ‹ใงใฏใ€DeepSpeed ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ ใ‚ตใƒ–ใƒ•ใ‚ฉใƒซใƒ€ใƒผ *global_step1* ใŒ 1 ใคใ ใ‘ใ‚ใ‚Šใพใ™ใ€‚ใ—ใŸใŒใฃใฆใ€FP32ใ‚’ๅ†ๆง‹็ฏ‰ใ™ใ‚‹ใซใฏ ้‡ใฟใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ ใ‘ใงใ™: ```bash python zero_to_fp32.py . 
### ZeRO-3 and Infinity Nuances

ZeRO-3 is quite different from ZeRO-2 because of its parameter sharding feature.

ZeRO-Infinity further extends ZeRO-3 to support NVMe memory and multiple other speed and scalability improvements.

While every effort has been made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information useful.

#### Constructing Massive Models

DeepSpeed/ZeRO-3 can handle models with trillions of parameters, which may not fit into the existing RAM. In such cases, and also if you want the initialization to happen much faster, initialize the model using the *deepspeed.zero.Init()* context manager (which is also a function decorator), like so:

```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed

with deepspeed.zero.Init():
    config = T5Config.from_pretrained("t5-small")
    model = T5ForConditionalGeneration(config)
```

As you can see, this gives you a randomly initialized model.

If you want to use a pretrained model, `model_class.from_pretrained` will activate this feature as long as `is_deepspeed_zero3_enabled()` returns `True`, which is currently set up by the [`TrainingArguments`] object if the passed DeepSpeed configuration file contains a ZeRO-3 config section. Thus you must create the [`TrainingArguments`] object **before** calling `from_pretrained`. Here is an example of a possible sequence:

```python
from transformers import AutoModel, Trainer, TrainingArguments

training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
If you're using the official example scripts and your command line arguments include `--deepspeed ds_config.json` with a ZeRO-3 config enabled, then everything is already done for you, since this is how the example scripts are written.

Note: if the fp16 weights of the model can't fit onto the memory of a single GPU, this feature must be used.

For full details on this method and other related features, please refer to [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models).

Also, when loading fp16-pretrained models, you will want to tell `from_pretrained` to use `torch_dtype=torch.float16`. For details, see [from_pretrained-torch-dtype](#from_pretrained-torch-dtype).

#### Gathering Parameters

Under ZeRO-3 on multiple GPUs, no single GPU has all the parameters unless they are the parameters of the currently executing layer. So if you need to access all the parameters of all the layers at once, there is a specific method to do so. Most likely you won't need it, but if you do, please refer to [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination).

We do, however, use it internally in several places. One such example is when loading pretrained model weights in `from_pretrained`. We load one layer at a time and immediately partition it to all participating GPUs, because for very large models it won't be possible to load the model on one GPU and then spread it out to multiple GPUs, due to memory limitations.

Also, under ZeRO-3, if you write your own code and run into a model parameter weight that looks like:

```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```

don't stress out. Seeing `tensor([1.])`, or an error saying the parameter is of size `1` instead of some much larger multi-dimensional shape, means that the parameter is partitioned and what you see is a ZeRO-3 placeholder.
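If you do want to look at the full tensor of such a partitioned parameter, you can temporarily gather it. A minimal sketch using DeepSpeed's `deepspeed.zero.GatheredParameters` context manager, where `model` stands in for any ZeRO-3-deployed model of yours:

```python
import deepspeed

# Sketch: temporarily gather a ZeRO-3-sharded parameter so its full tensor is
# visible on this rank; it is re-partitioned when the context exits.
param = next(model.parameters())
with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
    # inside the context, param.data is the full, un-sharded tensor
    print(param.data.shape)
```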
ใฏๅ‹พ้…ใจใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ใ‚’ใ‚ทใƒฃใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใ€ๆŽจ่ซ–ใซๅฝน็ซ‹ใกใพใ™ใ€‚ ไปฅไธ‹ใฏใ€ๅˆฉ็”จๅฏ่ƒฝใชใ™ในใฆใฎ GPU ใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ™ใ‚‹ DeepSpeed ใง`run_translation.py`ใ‚’ๅฎŸ่กŒใ™ใ‚‹ไพ‹ใงใ™ใ€‚ ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --output_dir output_dir \ --do_eval --max_eval_samples 50 --warmup_steps 50 \ --max_source_length 128 --val_max_target_length 128 \ --overwrite_output_dir --per_device_eval_batch_size 4 \ --predict_with_generate --dataset_config "ro-en" --fp16 \ --source_lang en --target_lang ro --dataset_name wmt16 \ --source_prefix "translate English to Romanian: " ``` ๆŽจ่ซ–ใฎใŸใ‚ใซใ€ใ‚ชใƒ—ใƒ†ใ‚ฃใƒžใ‚คใ‚ถใƒผใฎ็Šถๆ…‹ใจๅ‹พ้…ใซใ‚ˆใฃใฆไฝฟ็”จใ•ใ‚Œใ‚‹่ฟฝๅŠ ใฎๅคงใใชใƒกใƒขใƒชใฏๅฟ…่ฆใชใ„ใŸใ‚ใ€ ใฏใ‚‹ใ‹ใซๅคงใใชใƒใƒƒใƒใ‚„ใ‚ทใƒผใ‚ฑใƒณใ‚น้•ทใ‚’ๅŒใ˜ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซ้ฉๅˆใงใใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ•ใ‚‰ใซใ€DeepSpeed ใฏ็พๅœจใ€Deepspeed-Inference ใจๅ‘ผใฐใ‚Œใ‚‹้–ข้€ฃ่ฃฝๅ“ใ‚’้–‹็™บใ—ใฆใ„ใพใ™ใŒใ€ใ“ใ‚Œใจใฏไฝ•ใฎ้–ขไฟ‚ใ‚‚ใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ZeRO ใƒ†ใ‚ฏใƒŽใƒญใ‚ธใƒผใซๆบ–ๆ‹ ใ—ใฆใ„ใพใ™ใŒใ€ไปฃใ‚ใ‚Šใซใƒ†ใƒณใ‚ฝใƒซไธฆๅˆ—ๅ‡ฆ็†ใ‚’ไฝฟ็”จใ—ใฆใ€ๅ˜ไธ€ใฎ GPU ใซๅŽใพใ‚‰ใชใ„ใƒขใƒ‡ใƒซใ‚’ใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚ใ“ใ‚Œใฏ ็พๅœจ้–‹็™บไธญใงใ™ใ€‚่ฃฝๅ“ใŒๅฎŒๆˆใ—ใŸใ‚‰็ตฑๅˆใ‚’ๆไพ›ใ™ใ‚‹ไบˆๅฎšใงใ™ใ€‚ ### Memory Requirements Deepspeed ZeRO ใฏใƒกใƒขใƒชใ‚’ CPU (ใŠใ‚ˆใณ NVMe) ใซใ‚ชใƒ•ใƒญใƒผใƒ‰ใงใใ‚‹ใŸใ‚ใ€ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใฏใ€ไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ GPU ใฎๆ•ฐใซๅฟœใ˜ใฆๅฟ…่ฆใช CPU ใŠใ‚ˆใณ GPU ใƒกใƒขใƒชใฎ้‡ใ‚’็Ÿฅใ‚‹ใ“ใจใŒใงใใ‚‹ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใ‚’ๆไพ›ใ—ใพใ™ใ€‚ ๅ˜ไธ€ใฎ GPU ใง `bigscience/T0_3B`ใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใƒกใƒขใƒชใฎ้‡ใ‚’่ฆ‹็ฉใ‚‚ใฃใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ```bash $ python -c 'from transformers import AutoModel; \ from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \ model = AutoModel.from_pretrained("bigscience/T0_3B"); \ estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)' [...] Estimated memory needed for params, optim states and gradients for a: HW: Setup with 1 node, 1 GPU per node. SW: Model with 2783M total params, 65M largest layer params. 
### Memory Requirements

Since DeepSpeed ZeRO can offload memory to CPU (and NVMe), the framework provides utilities that return the amount of CPU and GPU memory required, depending on the number of GPUs being used.

Let's estimate how much memory is needed to finetune `bigscience/T0_3B` on a single GPU:

```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
  per CPU  |  per GPU |   Options
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
   62.23GB |   5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
   62.23GB |   5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
    0.37GB |  46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
   15.56GB |  46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```

So you can fit it on a single 80 GB GPU with no CPU offload, or on a tiny 8 GB GPU but then you need up to ~60 GB of CPU memory. (Remember, this is just the memory for the params, optimizer states and gradients; you will need a bit more memory for the cuda kernels, activations and temporaries.)

Then it's a trade-off of cost vs speed. It'll be cheaper to buy or rent a smaller GPU (or fewer GPUs, since with DeepSpeed ZeRO you can use multiple GPUs). But then it'll be slower, and since the slowdown has a direct impact on the duration of GPU usage and thus on cost, even if you don't care how fast something gets done, experiment and compare which setup works out best.

If you have enough GPU memory, make sure to disable the CPU/NVMe offload, as it will make everything faster.

For example, let's repeat the same for 2 GPUs:

```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
  per CPU  |  per GPU |   Options
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
   62.23GB |   2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
   62.23GB |   2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
    0.74GB |  23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
   31.11GB |  23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```

So here you'd want 2x 32 GB GPUs or higher, without offloading to CPU.

For full details, see the [memory estimator tools](https://deepspeed.readthedocs.io/en/latest/memory.html).
### Filing Issues

Here is how to file an issue so that we can quickly get to the bottom of the problem and help you unblock your work.

In your report, please always include:

1. the full DeepSpeed config file

2. either the command line arguments if you were using the [`Trainer`], or the [`TrainingArguments`] arguments if you were scripting the Trainer setup yourself. Please do not dump the whole [`TrainingArguments`], as it contains dozens of irrelevant entries.

3. the output of:

```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```

4. If possible, include a link to a Google Colab notebook that we can reproduce the problem with. You can use this [notebook](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) as a starting point.

5. Unless it's impossible, please always use a standard dataset that we can use, and not a custom one.

6. If possible, try to reproduce the problem with one of the existing [examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch).

Things to consider:

- DeepSpeed is often not the cause of the problem.

  Some of the filed issues proved to be DeepSpeed-unrelated; that is, once DeepSpeed was removed from the setup, the problem was still there.

  Therefore, if it's not absolutely obvious that it's a DeepSpeed-related problem, as in you can see that there is an exception and DeepSpeed modules are involved, first re-test your setup without DeepSpeed. Only if the problem persists should you mention DeepSpeed and provide all the required details.

- If it's clear to you that the problem is in the DeepSpeed core and not in the integration part, please file the issue directly with [DeepSpeed](https://github.com/microsoft/DeepSpeed/). If you aren't sure, don't worry: either issue tracker is fine. We will figure it out once you've posted it and redirect you to the other issue tracker if need be.

### Troubleshooting

#### the `deepspeed` process gets killed at startup without a traceback

If the `deepspeed` process gets killed at launch time without a traceback, that usually means the program tried to allocate more CPU memory than your system has, or more than the process is allowed to allocate, so the OS kernel killed the process. This is likely because your configuration file has `offload_optimizer` or `offload_param` (or both) configured to offload to `cpu`. If you have NVMe, try offloading to NVMe instead if you're running under ZeRO-3. Here is how to [estimate how much memory is needed for a specific model](https://deepspeed.readthedocs.io/en/latest/memory.html).
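A quick way to check whether your config is what triggers the CPU offload, as a small sketch (the config path is a placeholder):

```python
import json

# Sketch: print where the optimizer states and parameters are offloaded to.
with open("ds_config.json") as f:
    zero = json.load(f).get("zero_optimization", {})
for key in ("offload_optimizer", "offload_param"):
    print(key, "->", zero.get(key, {}).get("device", "none"))
```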
(ๆททๅˆ็ฒพๅบฆใฎๆœ‰็„กใซใ‹ใ‹ใ‚ใ‚‰ใš) ใงไฝฟ็”จใ—ใ‚ˆใ†ใจใ—ใŸๅ ดๅˆใซใ‚ˆใ็™บ็”Ÿใ—ใพใ™ใ€‚ TPU ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใปใจใ‚“ใฉใฎใƒขใƒ‡ใƒซใ€ใŠใ‚ˆใณๅคšใใฎๅ ดๅˆใ€Google ใซใ‚ˆใฃใฆใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใฏใ€ใ“ใฎใ‚ซใƒ†ใ‚ดใƒชใซๅˆ†้กžใ•ใ‚Œใพใ™ (ใŸใจใˆใฐใ€ใปใผใ™ในใฆใฎ t5 ใƒ™ใƒผใ‚นใฎใƒขใƒ‡ใƒซ)ใ€‚ใ“ใ“ใงใฎ่งฃๆฑบ็ญ–ใฏใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใŒใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใ‚‹ๅ ดๅˆ (TPUใ€Ampere GPU ไปฅ้™)ใ€fp32 ใพใŸใฏ bf16 ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } ``` ใƒญใ‚ฐใซใฏใ€Deepspeed ใŒๆฌกใฎใ‚ˆใ†ใซ`OVERFLOW!`ใ‚’ๅ ฑๅ‘Šใ—ใฆใ„ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ ``` 0%| | 0/189 [00:00<?, ?it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144 1%|โ–Œ | 1/189 [00:00<01:26, 2.17it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0 1%|โ–ˆโ– [...] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 14%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Œ | 27/189 [00:14<01:13, 2.21it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ– | 28/189 [00:14<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Š | 29/189 [00:15<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 [...] ``` ใ“ใ‚Œใฏใ€Deepspeed ๆๅคฑใ‚นใ‚ฑใƒผใƒฉใƒผใŒๆๅคฑใ‚ชใƒผใƒใƒผใƒ•ใƒญใƒผใ‚’ๅ…‹ๆœใ™ใ‚‹ใ‚นใ‚ฑใƒผใƒชใƒณใ‚ฐไฟ‚ๆ•ฐใ‚’่ฆ‹ใคใ‘ใ‚‰ใ‚Œใชใ„ใ“ใจใ‚’ๆ„ๅ‘ณใ—ใพใ™ใ€‚ (ใƒญใ‚ฐใฏใ“ใ“ใง่ชญใฟใ‚„ใ™ใใ™ใ‚‹ใŸใ‚ใซใƒžใƒƒใ‚ตใƒผใ‚ธใ•ใ‚Œใฆใ„ใพใ™ใ€‚) ใ“ใฎๅ ดๅˆใ€้€šๅธธใฏ `initial_scale_power` ใฎๅ€คใ‚’ไธŠใ’ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚้€šๅธธใ€`initial_scale_power: 32` ใซ่จญๅฎšใ™ใ‚‹ใจๅ•้กŒใŒ่งฃๆฑบใ—ใพใ™ใ€‚ ### Notes - DeepSpeed ใซใฏ pip ใงใ‚คใƒณใ‚นใƒˆใƒผใƒซๅฏ่ƒฝใช PyPI ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใŒใ‚ใ‚Šใพใ™ใŒใ€ใƒใƒผใƒ‰ใ‚ฆใ‚งใ‚ขใซๆœ€ใ‚‚้ฉๅˆใ™ใ‚‹ใ‚ˆใ†ใซใ€ใพใŸๆœ‰ๅŠนใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€[ใ‚ฝใƒผใ‚น](https://github.com/microsoft/deepspeed#installation) ใ‹ใ‚‰ใ‚คใƒณใ‚นใƒˆใƒผใƒซใ™ใ‚‹ใ“ใจใ‚’ๅผทใใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ 1 ใƒ“ใƒƒใƒˆ Adam ใชใฉใฎ็‰นๅฎšใฎๆฉŸ่ƒฝใฏใ€pypi ใƒ‡ใ‚ฃใ‚นใƒˆใƒชใƒ“ใƒฅใƒผใ‚ทใƒงใƒณใงใฏๅˆฉ็”จใงใใพใ›ใ‚“ใ€‚ - ๐Ÿค— Transformers ใง DeepSpeed ใ‚’ไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซ [`Trainer`] ใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ - ไปปๆ„ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™ ๅพŒ่€…ใฏ [DeepSpeed ็ตฑๅˆๆ‰‹้ †](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models) ใซๅพ“ใฃใฆ่ชฟๆ•ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ## Non-Trainer Deepspeed Integration [`~integrations.HfDeepSpeedConfig`] ใฏใ€Deepspeed ใ‚’ ๐Ÿค— Transformers ใ‚ณใ‚ขใซ็ตฑๅˆใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใพใ™ [`Trainer`] ใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆใฎๆฉŸ่ƒฝใ€‚ๅฎŸ่กŒใ™ใ‚‹ๅ”ฏไธ€ใฎใ“ใจใฏใ€Deepspeed ZeRO-3 ใƒ‘ใƒฉใƒกใƒผใ‚ฟๅŽ้›†ใ‚’ๅ‡ฆ็†ใ—ใ€`from_pretrained`ๅ‘ผใณๅ‡บใ—ไธญใซใƒขใƒ‡ใƒซใ‚’่ค‡ๆ•ฐใฎ GPU ใซ่‡ชๅ‹•็š„ใซๅˆ†ๅ‰ฒใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใใ‚Œไปฅๅค–ใฏใ™ในใฆ่‡ชๅˆ†ใง่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ [`Trainer`] ใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใ™ในใฆใŒ่‡ชๅ‹•็š„ใซๅ‡ฆ็†ใ•ใ‚Œใพใ™ใ€‚ [`Trainer`] ใ‚’ไฝฟ็”จใ—ใชใ„ๅ ดๅˆใ€DeepSpeed ZeRO-3 ใ‚’ๅŠน็އ็š„ใซๅฐŽๅ…ฅใ™ใ‚‹ใซใฏใ€ ใƒขใƒ‡ใƒซใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ๅ‰ใซ [`~integrations.HfDeepSpeedConfig`] 
If you're using DeepSpeed ZeRO-1 or ZeRO-2, you don't need to use `HfDeepSpeedConfig` at all.

For example, for a pretrained model:

```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive
model = AutoModel.from_pretrained("gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```

or for a non-pretrained model:

```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed

ds_config = {...}  # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```

Please note that if you're not using the [`Trainer`] integration, you're completely on your own. Basically, follow the documentation on the [Deepspeed](https://www.deepspeed.ai/) website. Also, you have to configure the config file explicitly: you can't use `"auto"` values, and you will have to put real values instead.

## HfDeepSpeedConfig

[[autodoc]] integrations.HfDeepSpeedConfig
    - all

### Custom DeepSpeed ZeRO Inference

Here is an example of how to run DeepSpeed ZeRO inference without using [`Trainer`] when the model can't fit onto a single GPU. The solution includes using additional GPUs and/or offloading GPU memory to CPU memory.

The important nuance to understand here is that the way ZeRO is designed, you can process different inputs on different GPUs in parallel.

The example has copious notes and is self-documenting.

Make sure to:

1. disable CPU offload if you have enough GPU memory (since it slows things down)
2. enable bf16 if you own an Ampere or newer GPU, to make things faster. If you don't have that hardware, you may enable fp16, as long as you don't use a model that was pretrained in bf16 mixed precision (such as most t5 models). These usually overflow in fp16 and you will see garbage as output.

```python
#!/usr/bin/env python

# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
# # To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU - # you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to # process multiple inputs at once. # # The provided deepspeed config also activates CPU memory offloading, so chances are that if you # have a lot of available CPU memory and you don't mind a slowdown you should be able to load a # model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will # run faster if you don't want offload to CPU - so disable that section then. # # To deploy on 1 gpu: # # deepspeed --num_gpus 1 t0.py # or: # python -m torch.distributed.run --nproc_per_node=1 t0.py # # To deploy on 2 gpus: # # deepspeed --num_gpus 2 t0.py # or: # python -m torch.distributed.run --nproc_per_node=2 t0.py from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM from transformers.integrations import HfDeepSpeedConfig import deepspeed import os import torch os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers # distributed setup local_rank = int(os.getenv("LOCAL_RANK", "0")) world_size = int(os.getenv("WORLD_SIZE", "1")) torch.cuda.set_device(local_rank) deepspeed.init_distributed() model_name = "bigscience/T0_3B" config = AutoConfig.from_pretrained(model_name) model_hidden_size = config.d_model # batch size has to be divisible by world_size, but can be bigger than world_size train_batch_size = 1 * world_size # ds_config notes # # - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be # faster. # # - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g. # all official t5 models are bf16-pretrained # # - set offload_param.device to "none" or completely remove the `offload_param` section if you don't # - want CPU offload # # - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control # - which params should remain on gpus - the larger the value the smaller the offload size # # For indepth info on Deepspeed config see # https://huggingface.co/docs/transformers/main/main_classes/deepspeed # keeping the same format as json for consistency, except it uses lower case for true/false # fmt: off ds_config = { "fp16": { "enabled": False }, "bf16": { "enabled": False }, "zero_optimization": { "stage": 3, "offload_param": { "device": "cpu", "pin_memory": True }, "overlap_comm": True, "contiguous_gradients": True, "reduce_bucket_size": model_hidden_size * model_hidden_size, "stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size, "stage3_param_persistence_threshold": 10 * model_hidden_size }, "steps_per_print": 2000, "train_batch_size": train_batch_size, "train_micro_batch_size_per_gpu": 1, "wall_clock_breakdown": False } # fmt: on # next line instructs transformers to partition the model directly over multiple gpus using # deepspeed.zero.Init when model's `from_pretrained` method is called. # # **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)** # # otherwise the model will first be loaded normally and only partitioned at forward time which is # less efficient and when there is little CPU RAM may fail dschf = HfDeepSpeedConfig(ds_config) # keep this object alive # now a model can be loaded. 
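# (Added commentary, not part of the original script: because `dschf` above is
# kept alive and the config uses ZeRO stage 3, the `from_pretrained` call below
# loads each layer and immediately partitions it across the participating GPUs,
# so the full model is never materialized on a single device.)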
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()  # inference

# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
    text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
    text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
    outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n   in={text_in}\n  out={text_out}")
```

Let's save it as `t0.py` and run it:

```
$ deepspeed --num_gpus 2 t0.py
rank0:
   in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
  out=Positive
rank1:
   in=Is this review positive or negative? Review: this is the worst restaurant ever
  out=negative
```

This was a very basic example, and you will want to adapt it to your needs.

### `generate` nuances

When using multiple GPUs with ZeRO Stage-3, you have to synchronize the GPUs by calling `generate(..., synced_gpus=True)`. If this is not done and one GPU finishes generating before the others, the whole system will hang, since the remaining GPUs will not be able to receive the shard of weights from the GPU that stopped generating.

Starting from `transformers>=4.28`, if `synced_gpus` isn't explicitly specified, it will be set to `True` automatically when these conditions are detected. You can still override the value of `synced_gpus` if need be.
ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅ‘ใ‘ใŸใƒกใƒขใƒชใฎๆœ€้ฉๅŒ–](https://arxiv.org/abs/1910.02054) - [ZeRO-Offload: 10 ๅ„„่ฆๆจกใฎใƒขใƒ‡ใƒซ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๆฐ‘ไธปๅŒ–](https://arxiv.org/abs/2101.06840) - [ZeRO-Infinity: ๆฅต้™ใ‚นใ‚ฑใƒผใƒซใฎๆทฑๅฑคๅญฆ็ฟ’ใฎใŸใ‚ใฎ GPU ใƒกใƒขใƒชใฎๅฃใ‚’ๆ‰“ใก็ ดใ‚‹](https://arxiv.org/abs/2104.07857) ๆœ€ๅพŒใซใ€HuggingFace [`Trainer`] ใฏ DeepSpeed ใฎใฟใ‚’็ตฑๅˆใ—ใฆใ„ใ‚‹ใ“ใจใ‚’่ฆšใˆใฆใŠใ„ใฆใใ ใ•ใ„ใ€‚ DeepSpeed ใฎไฝฟ็”จใซ้–ขใ—ใฆๅ•้กŒใ‚„่ณชๅ•ใŒใ‚ใ‚‹ๅ ดๅˆใฏใ€[DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues) ใซๅ•้กŒใ‚’ๆๅ‡บใ—ใฆใใ ใ•ใ„ใ€‚