hf_public_repos/transformers/docs/source/ko/tasks/audio_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Audio classification[[audio_classification]]
[[open-in-colab]]
<Youtube id="KWwzcmG98Ds"/>
Audio classification, like text classification, assigns a class label to the input data. The only difference is that instead of text, you have raw audio waveforms. Some practical applications of audio classification include identifying speaker intent, language classification, and even animal species by their sounds.
This guide will show you how to:
1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to classify speaker intent.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset[[load_minds_14_dataset]]
Start by loading the MInDS-14 dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Split the dataset's `train` split into a smaller train and test set with the [`~datasets.Dataset.train_test_split`] method. This gives you a chance to experiment and make sure everything works before spending more time on the full dataset.
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
Then take a look at the dataset:
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 450
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 113
})
})
```
While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, this guide focuses on the `audio` and `intent_class` columns. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:
```py
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```
Take a look at an example now:
```py
>>> minds["train"][0]
{'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828,
-0.00024414, -0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 8000},
'intent_class': 2}
```
There are two fields:
- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `intent_class`: represents the class id of the speaker's intent.
To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:
```py
>>> labels = minds["train"].features["intent_class"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
... label2id[label] = str(i)
... id2label[str(i)] = label
```
Now you can convert the label id to a label name:
```py
>>> id2label[str(2)]
'app_error'
```
## Preprocess[[preprocess]]
The next step is to load a Wav2Vec2 feature extractor to process the audio signal:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample it to 16kHz to use the pretrained Wav2Vec2 model:
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ...,
-2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 16000},
'intent_class': 2}
```
Now create a preprocessing function that:
1. Calls the `audio` column to load, and if necessary, resample the audio file.
2. Checks if the sampling rate of the audio file matches the sampling rate of the audio data the model was pretrained with. You can find this information in the Wav2Vec2 [model card](https://huggingface.co/facebook/wav2vec2-base).
3. Sets a maximum input length so longer inputs are batched without being truncated.
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
... )
... return inputs
```
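To picture what `truncation=True` with `max_length=16000` does, here is a minimal NumPy-only sketch; the arrays below are hypothetical stand-ins for real waveforms, not the feature extractor's actual implementation:

```python
import numpy as np

max_length = 16000  # one second of audio at 16kHz
# Hypothetical waveforms: one shorter and one longer than max_length
audio_arrays = [np.zeros(8000, dtype=np.float32), np.zeros(20000, dtype=np.float32)]

# Anything longer than max_length is cut, mirroring truncation=True
truncated = [a[:max_length] for a in audio_arrays]
print([len(a) for a in truncated])  # [8000, 16000]
```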
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need, and rename `intent_class` to `label`, because that's the name the model expects:
```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```
## Evaluate[[evaluate]]
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions = np.argmax(eval_pred.predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```
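Before wiring this into the trainer, you can sanity-check the argmax step with plain NumPy; the logits and labels below are made up, and the accuracy is computed by hand instead of calling `accuracy.compute`:

```python
import numpy as np

# Hypothetical logits for 4 examples over 3 classes (stand-in for eval_pred.predictions)
logits = np.array([[0.1, 0.8, 0.1],
                   [0.9, 0.05, 0.05],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.4, 0.3]])
label_ids = np.array([1, 0, 2, 0])  # stand-in for eval_pred.label_ids

predictions = np.argmax(logits, axis=1)
accuracy_value = float((predictions == label_ids).mean())  # what accuracy.compute reports
print(predictions.tolist(), accuracy_value)  # [1, 0, 2, 1] 0.75
```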
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train[[train]]
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForAudioClassification`] along with the number of expected labels, and the label mappings:
```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. Push the model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and the `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_mind_model",
... evaluation_strategy="epoch",
... save_strategy="epoch",
... learning_rate=3e-5,
... per_device_train_batch_size=32,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=32,
... num_train_epochs=10,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=feature_extractor,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use it:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
</Tip>
## Inference[[inference]]
Great, now that you've finetuned a model, you can use it for inference!
Load an audio file to run inference on. Remember to resample the sampling rate of the audio file to match the model's sampling rate, if necessary!
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
{'score': 0.09766869246959686, 'label': 'cash_deposit'},
{'score': 0.07998877018690109, 'label': 'app_error'},
{'score': 0.0781070664525032, 'label': 'joint_account'},
{'score': 0.07667109370231628, 'label': 'pay_bill'},
{'score': 0.0755252093076706, 'label': 'balance'}
]
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Load a feature extractor to preprocess the audio file and return the `input` as PyTorch tensors:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
Pass your inputs to the model and return the logits:
```py
>>> import torch
>>> from transformers import AutoModelForAudioClassification
>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:
```py
>>> import torch
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
</pt>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ko/tasks/token_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Token classification[[token-classification]]
[[open-in-colab]]
<Youtube id="wVHdVlPScxA"/>
Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.
This guide will show you how to:
1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate seqeval
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load WNUT 17 dataset[[load-wnut-17-dataset]]
Start by loading the WNUT 17 dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset
>>> wnut = load_dataset("wnut_17")
```
Then take a look at an example:
```py
>>> wnut["train"][0]
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```
Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:
```py
>>> label_list = wnut["train"].features[f"ner_tags"].feature.names
>>> label_list
[
"O",
"B-corporation",
"I-corporation",
"B-creative-work",
"I-creative-work",
"B-group",
"I-group",
"B-location",
"I-location",
"B-person",
"I-person",
"B-product",
"I-product",
]
```
The letter that prefixes each `ner_tag` indicates the token position of the entity:
- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is part of an entity like `Empire State Building`).
- `O` indicates the token doesn't correspond to any entity.
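A short, self-contained sketch (with made-up tokens and tags) shows how the `B-`/`I-`/`O` scheme groups tokens into entities:

```python
# Hypothetical tokens and BIO tags illustrating the scheme
tokens = ["Empire", "State", "Building", "is", "in", "New", "York"]
tags = ["B-location", "I-location", "I-location", "O", "O", "B-location", "I-location"]

entities, current = [], None
for token, tag in zip(tokens, tags):
    if tag.startswith("B-"):          # B- starts a new entity
        current = (tag[2:], [token])
        entities.append(current)
    elif tag.startswith("I-") and current is not None:
        current[1].append(token)      # I- continues the current entity
    else:                             # O closes any open entity
        current = None

print([(label, " ".join(words)) for label, words in entities])
# [('location', 'Empire State Building'), ('location', 'New York')]
```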
## Preprocess[[preprocess]]
<Youtube id="iY2AZYdZAr0"/>
The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet, and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:
```py
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```
However, this adds the special tokens `[CLS]` and `[SEP]`, and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:
1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method.
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function.
3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.
Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:
```py
>>> def tokenize_and_align_labels(examples):
... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
... labels = []
... for i, label in enumerate(examples[f"ner_tags"]):
... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word.
... previous_word_idx = None
... label_ids = []
... for word_idx in word_ids: # Set the special tokens to -100.
... if word_idx is None:
... label_ids.append(-100)
... elif word_idx != previous_word_idx: # Only label the first token of a given word.
... label_ids.append(label[word_idx])
... else:
... label_ids.append(-100)
... previous_word_idx = word_idx
... labels.append(label_ids)
... tokenized_inputs["labels"] = labels
... return tokenized_inputs
```
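To see the alignment logic above in action without running a tokenizer, here is a hand-traced example; the `word_ids` list is a hypothetical stand-in for what `tokenized_inputs.word_ids()` would return:

```python
# Stand-in word ids for: [CLS] em ##pire state building [SEP]
word_ids = [None, 0, 0, 1, 2, None]
ner_tags = [7, 8, 8]  # B-location, I-location, I-location

label_ids, previous_word_idx = [], None
for word_idx in word_ids:
    if word_idx is None:
        label_ids.append(-100)                # special tokens are ignored by the loss
    elif word_idx != previous_word_idx:
        label_ids.append(ner_tags[word_idx])  # first sub-token keeps the word's label
    else:
        label_ids.append(-100)                # later sub-tokens are masked
    previous_word_idx = word_idx

print(label_ids)  # [-100, 7, -100, 8, 8, -100]
```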
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
```
Now create a batch of examples with [`DataCollatorForTokenClassification`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch, instead of padding the whole dataset to the maximum length.
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForTokenClassification
>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForTokenClassification
>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Evaluate[[evaluation]]
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval actually produces several scores: precision, recall, F1, and accuracy.
```py
>>> import evaluate
>>> seqeval = evaluate.load("seqeval")
```
Get the NER labels first, and then create a function that passes your true predictions and true labels to [`~evaluate.EvaluationModule.compute`] to calculate the scores:
```py
>>> import numpy as np
>>> labels = [label_list[i] for i in example[f"ner_tags"]]
>>> def compute_metrics(p):
... predictions, labels = p
... predictions = np.argmax(predictions, axis=2)
... true_predictions = [
... [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
... for prediction, label in zip(predictions, labels)
... ]
... true_labels = [
... [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
... for prediction, label in zip(predictions, labels)
... ]
... results = seqeval.compute(predictions=true_predictions, references=true_labels)
... return {
... "precision": results["overall_precision"],
... "recall": results["overall_recall"],
... "f1": results["overall_f1"],
... "accuracy": results["overall_accuracy"],
... }
```
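The `-100` filtering inside `compute_metrics` can be checked with a tiny synthetic batch; the logits below are invented, and a three-label list stands in for the full WNUT tag set:

```python
import numpy as np

label_list = ["O", "B-location", "I-location"]
# Hypothetical logits for one sequence of 4 tokens over 3 labels
predictions = np.array([[[2.0, 0.1, 0.1],
                         [0.1, 3.0, 0.2],
                         [0.1, 0.2, 2.5],
                         [1.5, 0.3, 0.3]]])
labels = np.array([[-100, 1, 2, -100]])  # special tokens carry -100

preds = np.argmax(predictions, axis=2)
true_predictions = [
    [label_list[p] for p, l in zip(pred, lab) if l != -100]
    for pred, lab in zip(preds, labels)
]
print(true_predictions)  # [['B-location', 'I-location']]
```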
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train[[train]]
Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:
```py
>>> id2label = {
... 0: "O",
... 1: "B-corporation",
... 2: "I-corporation",
... 3: "B-creative-work",
... 4: "I-creative-work",
... 5: "B-group",
... 6: "I-group",
... 7: "B-location",
... 8: "I-location",
... 9: "B-person",
... 10: "I-person",
... 11: "B-product",
... 12: "I-product",
... }
>>> label2id = {
... "O": 0,
... "B-corporation": 1,
... "I-corporation": 2,
... "B-creative-work": 3,
... "I-creative-work": 4,
... "B-group": 5,
... "I-group": 6,
... "B-location": 7,
... "I-location": 8,
... "B-person": 9,
... "I-person": 10,
... "B-product": 11,
... "I-product": 12,
... }
```
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load DistilBERT with [`AutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:
```py
>>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
>>> model = AutoModelForTokenClassification.from_pretrained(
... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. Push the model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the seqeval scores and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and the `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_wnut_model",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=2,
... weight_decay=0.01,
... evaluation_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_wnut["train"],
... eval_dataset=tokenized_wnut["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use it:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=2e-5,
... num_train_steps=num_train_steps,
... weight_decay_rate=0.01,
... num_warmup_steps=0,
... )
```
Then load DistilBERT with [`TFAutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained(
... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_wnut["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_wnut["validation"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer)
```
The last two things to set up before you start training are to compute the seqeval scores from the predictions, and to provide a way to push your model to the Hub. Both are done with [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_wnut_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
</Tip>
## Inference[[inference]]
Great, now that you've finetuned a model, you can use it for inference!
Grab some text you'd like to run inference on:
```py
>>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for NER with your model, and pass your text to it:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
>>> classifier(text)
[{'entity': 'B-location',
'score': 0.42658573,
'index': 2,
'word': 'golden',
'start': 4,
'end': 10},
{'entity': 'I-location',
'score': 0.35856336,
'index': 3,
'word': 'state',
'start': 11,
'end': 16},
{'entity': 'B-group',
'score': 0.3064001,
'index': 4,
'word': 'warriors',
'start': 17,
'end': 25},
{'entity': 'B-location',
'score': 0.65523505,
'index': 13,
'word': 'san',
'start': 80,
'end': 83},
{'entity': 'B-location',
'score': 0.4668663,
'index': 14,
'word': 'francisco',
'start': 84,
'end': 93}]
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import torch
>>> from transformers import AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:
```py
>>> predictions = torch.argmax(logits, dim=2)
>>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]
>>> predicted_token_class
['O',
'O',
'B-location',
'I-location',
'B-group',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'B-location',
'B-location',
'O',
'O']
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> logits = model(**inputs).logits
```
Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:
```py
>>> import tensorflow as tf
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_token_class
['O',
'O',
'B-location',
'I-location',
'B-group',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'O',
'B-location',
'B-location',
'O',
'O']
```
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ko/tasks/asr.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Automatic speech recognition[[automatic-speech-recognition]]
[[open-in-colab]]
<Youtube id="TksaY_FDgnk"/>
Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users every day, and there are many other useful user-facing applications like live captioning and note-taking during meetings.
This guide will show you how to:
1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text.
2. Use your finetuned model for inference.
<Tip>
The task illustrated in this tutorial is supported by the following model architectures:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm)
<!--End of the generated tip-->
</Tip>
์์ํ๊ธฐ ์ ์ ํ์ํ ๋ชจ๋ ๋ผ์ด๋ธ๋ฌ๋ฆฌ๊ฐ ์ค์น๋์ด ์๋์ง ํ์ธํ์ธ์:
```bash
pip install transformers datasets evaluate jiwer
```
Hugging Face ๊ณ์ ์ ๋ก๊ทธ์ธํ๋ฉด ๋ชจ๋ธ์ ์
๋ก๋ํ๊ณ  ์ปค๋ฎค๋ํฐ์ ๊ณต์ ํ  ์ ์์ต๋๋ค. ํ ํฐ์ ์
๋ ฅํ์ฌ ๋ก๊ทธ์ธํ์ธ์.
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## MInDS-14 ๋ฐ์ดํฐ ์ธํธ ๊ฐ์ ธ์ค๊ธฐ[[load-minds-14-dataset]]
๋จผ์ , ๐ค Datasets ๋ผ์ด๋ธ๋ฌ๋ฆฌ์์ [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํฐ ์ธํธ์ ์ผ๋ถ๋ถ์ ๊ฐ์ ธ์ค์ธ์.
์ด๋ ๊ฒ ํ๋ฉด ์ ์ฒด ๋ฐ์ดํฐ ์ธํธ์ ๋ํ ํ๋ จ์ ์๊ฐ์ ๋ค์ด๊ธฐ ์ ์ ๋ชจ๋ ๊ฒ์ด ์๋ํ๋์ง ์คํํ๊ณ ๊ฒ์ฆํ ์ ์์ต๋๋ค.
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```
[`~Dataset.train_test_split`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ๋ฐ์ดํฐ ์ธํธ์ `train`์ ํ๋ จ ์ธํธ์ ํ
์คํธ ์ธํธ๋ก ๋๋์ธ์:
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
๊ทธ๋ฆฌ๊ณ ๋ฐ์ดํฐ ์ธํธ๋ฅผ ํ์ธํ์ธ์:
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 16
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 4
})
})
```
๋ฐ์ดํฐ ์ธํธ์๋ `lang_id`์ `english_transcription`๊ณผ ๊ฐ์ ์ ์ฉํ ์ ๋ณด๊ฐ ๋ง์ด ํฌํจ๋์ด ์์ง๋ง, ์ด ๊ฐ์ด๋์์๋ `audio`์ `transcription`์ ์ด์ ์ ๋ง์ถ ๊ฒ์
๋๋ค. ๋ค๋ฅธ ์ด์ [`~datasets.Dataset.remove_columns`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ์ ๊ฑฐํ์ธ์:
```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```
์์๋ฅผ ๋ค์ ํ๋ฒ ํ์ธํด๋ณด์ธ์:
```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414,
0.00024414, 0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 8000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
๋ ๊ฐ์ ํ๋๊ฐ ์์ต๋๋ค:
- `audio`: ์ค๋์ค ํ์ผ์ ๊ฐ์ ธ์ค๊ณ  ๋ฆฌ์ํ๋งํ๊ธฐ ์ํด ํธ์ถํด์ผ ํ๋ ์์ฑ ์ ํธ์ 1์ฐจ์ `array(๋ฐฐ์ด)`
- `transcription`: ๋ชฉํ ํ
์คํธ
## ์ ์ฒ๋ฆฌ[[preprocess]]
๋ค์์ผ๋ก ์ค๋์ค ์ ํธ๋ฅผ ์ฒ๋ฆฌํ๊ธฐ ์ํ Wav2Vec2 ํ๋ก์ธ์๋ฅผ ๊ฐ์ ธ์ต๋๋ค:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```
MInDS-14 ๋ฐ์ดํฐ ์ธํธ์ ์ํ๋ง ๋ ์ดํธ๋ 8000Hz์ด๋ฏ๋ก([๋ฐ์ดํฐ ์ธํธ ์นด๋](https://huggingface.co/datasets/PolyAI/minds14)์์ ํ์ธ), ์ฌ์ ํ๋ จ๋ Wav2Vec2 ๋ชจ๋ธ์ ์ฌ์ฉํ๋ ค๋ฉด ๋ฐ์ดํฐ ์ธํธ๋ฅผ 16000Hz๋ก ๋ฆฌ์ํ๋งํด์ผ ํฉ๋๋ค:
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 16000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
์์ `transcription`์์ ๋ณผ ์ ์๋ฏ์ด ํ
์คํธ๋ ๋๋ฌธ์์ ์๋ฌธ์๊ฐ ์์ฌ ์์ต๋๋ค. Wav2Vec2 ํ ํฌ๋์ด์ ๋ ๋๋ฌธ์ ๋ฌธ์์ ๋ํด์๋ง ํ๋ จ๋์ด ์์ผ๋ฏ๋ก ํ
์คํธ๊ฐ ํ ํฌ๋์ด์ ์ ์ดํ์ ์ผ์นํ๋์ง ํ์ธํด์ผ ํฉ๋๋ค:
```py
>>> def uppercase(example):
... return {"transcription": example["transcription"].upper()}
>>> minds = minds.map(uppercase)
```
์ด์  ๋ค์ ์์
์ ์ํํ  ์ ์ฒ๋ฆฌ ํจ์๋ฅผ ๋ง๋ค์ด๋ณด๊ฒ ์ต๋๋ค:
1. `audio` ์ด์ ํธ์ถํ์ฌ ์ค๋์ค ํ์ผ์ ๊ฐ์ ธ์ค๊ณ ๋ฆฌ์ํ๋งํฉ๋๋ค.
2. ์ค๋์ค ํ์ผ์์ `input_values`๋ฅผ ์ถ์ถํ๊ณ ํ๋ก์ธ์๋ก `transcription` ์ด์ ํ ํฐํํฉ๋๋ค.
```py
>>> def prepare_dataset(batch):
... audio = batch["audio"]
... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
... batch["input_length"] = len(batch["input_values"][0])
... return batch
```
์ ์ฒด ๋ฐ์ดํฐ ์ธํธ์ ์ ์ฒ๋ฆฌ ํจ์๋ฅผ ์ ์ฉํ๋ ค๋ฉด ๐ค Datasets [`~datasets.Dataset.map`] ํจ์๋ฅผ ์ฌ์ฉํ์ธ์. `num_proc` ๋งค๊ฐ๋ณ์๋ฅผ ์ฌ์ฉํ์ฌ ํ๋ก์ธ์ค ์๋ฅผ ๋๋ฆฌ๋ฉด `map`์ ์๋๋ฅผ ๋์ผ ์ ์์ต๋๋ค. [`~datasets.Dataset.remove_columns`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ํ์ํ์ง ์์ ์ด์ ์ ๊ฑฐํ์ธ์:
```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```
๐ค Transformers์๋ ์๋ ์์ฑ ์ธ์์ฉ ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ๊ฐ ์์ผ๋ฏ๋ก ์์  ๋ฐฐ์น๋ฅผ ์์ฑํ๋ ค๋ฉด [`DataCollatorWithPadding`]์ ์กฐ์ ํด์ผ ํฉ๋๋ค. ์ด๋ ๊ฒ ํ๋ฉด ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ๋ ํ
์คํธ์ ๋ ์ด๋ธ์ ๋ฐฐ์น์์ ๊ฐ์ฅ ๊ธด ์์์ ๊ธธ์ด์ ๋์ ์ผ๋ก ํจ๋ฉํ์ฌ ๊ธธ์ด๋ฅผ ๊ท ์ผํ๊ฒ ํฉ๋๋ค. `tokenizer` ํจ์์์ `padding=True`๋ฅผ ์ค์ ํ์ฌ ํ
์คํธ๋ฅผ ํจ๋ฉํ  ์ ์์ง๋ง, ๋์  ํจ๋ฉ์ด ๋ ํจ์จ์ ์
๋๋ค.
๋ค๋ฅธ ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ์ ๋ฌ๋ฆฌ ์ด ํน์ ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ๋ `input_values`์ `labels`์ ๋ํด ๋ค๋ฅธ ํจ๋ฉ ๋ฐฉ๋ฒ์ ์ ์ฉํด์ผ ํฉ๋๋ค.
```py
>>> import torch
>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union
>>> @dataclass
... class DataCollatorCTCWithPadding:
... processor: AutoProcessor
... padding: Union[bool, str] = "longest"
... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         # ์
๋ ฅ๊ณผ ๋ ์ด๋ธ์ ๋ถํ ํฉ๋๋ค
...         # ๊ธธ์ด๊ฐ ๋ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋ค๋ฅธ ํจ๋ฉ ๋ฐฉ๋ฒ์ ์ฌ์ฉํด์ผ ํ๊ธฐ ๋๋ฌธ์
๋๋ค
... input_features = [{"input_values": feature["input_values"][0]} for feature in features]
... label_features = [{"input_ids": feature["labels"]} for feature in features]
... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
... # ํจ๋ฉ์ ๋ํด ์์ค์ ์ ์ฉํ์ง ์๋๋ก -100์ผ๋ก ๋์ฒดํฉ๋๋ค
... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
... batch["labels"] = labels
... return batch
```
์ด์  `DataCollatorCTCWithPadding`์ ์ธ์คํด์คํํฉ๋๋ค:
```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
## ํ๊ฐํ๊ธฐ[[evaluate]]
ํ๋ จ ์ค์ ํ๊ฐ ์งํ๋ฅผ ํฌํจํ๋ฉด ๋ชจ๋ธ์ ์ฑ๋ฅ์ ํ๊ฐํ๋ ๋ฐ ๋์์ด ๋๋ ๊ฒฝ์ฐ๊ฐ ๋ง์ต๋๋ค. ๐ค [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธ๋ฌ๋ฆฌ๋ฅผ ์ฌ์ฉํ๋ฉด ํ๊ฐ ๋ฐฉ๋ฒ์ ๋น ๋ฅด๊ฒ ๋ถ๋ฌ์ฌ ์ ์์ต๋๋ค.
์ด ์์
์์๋ [๋จ์ด ์ค๋ฅ์จ(Word Error Rate, WER)](https://huggingface.co/spaces/evaluate-metric/wer) ํ๊ฐ ์งํ๋ฅผ ๊ฐ์ ธ์ต๋๋ค.
(ํ๊ฐ ์งํ๋ฅผ ๋ถ๋ฌ์ค๊ณ ๊ณ์ฐํ๋ ๋ฐฉ๋ฒ์ ๐ค Evaluate [๋๋ฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ์ธ์):
```py
>>> import evaluate
>>> wer = evaluate.load("wer")
```
๊ทธ๋ฐ ๋ค์ ์์ธก๊ฐ๊ณผ ๋ ์ด๋ธ์ [`~evaluate.EvaluationModule.compute`]์ ์ ๋ฌํ์ฌ WER์ ๊ณ์ฐํ๋ ํจ์๋ฅผ ๋ง๋ญ๋๋ค:
```py
>>> import numpy as np
>>> def compute_metrics(pred):
... pred_logits = pred.predictions
... pred_ids = np.argmax(pred_logits, axis=-1)
... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
... pred_str = processor.batch_decode(pred_ids)
... label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
...     wer_score = wer.compute(predictions=pred_str, references=label_str)
...     return {"wer": wer_score}
```
์ด์  `compute_metrics` ํจ์๋ฅผ ์ฌ์ฉํ  ์ค๋น๊ฐ ๋์์ผ๋ฉฐ, ํ๋ จ์ ์ค์ ํ  ๋ ์ด ํจ์๋ก ๋๋์์ฌ ๊ฒ์
๋๋ค.
## ํ๋ จํ๊ธฐ[[train]]
<frameworkcontent>
<pt>
<Tip>
[`Trainer`]๋ก ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๊ฒ์ด ์ต์ํ์ง ์๋ค๋ฉด, [์ฌ๊ธฐ](../training#train-with-pytorch-trainer)์์ ๊ธฐ๋ณธ ํํ ๋ฆฌ์ผ์ ํ์ธํด๋ณด์ธ์!
</Tip>
์ด์ ๋ชจ๋ธ ํ๋ จ์ ์์ํ ์ค๋น๊ฐ ๋์์ต๋๋ค! [`AutoModelForCTC`]๋ก Wav2Vec2๋ฅผ ๊ฐ์ ธ์ค์ธ์. `ctc_loss_reduction` ๋งค๊ฐ๋ณ์๋ก CTC ์์ค์ ์ ์ฉํ ์ถ์(reduction) ๋ฐฉ๋ฒ์ ์ง์ ํ์ธ์. ๊ธฐ๋ณธ๊ฐ์ธ ํฉ๊ณ ๋์ ํ๊ท ์ ์ฌ์ฉํ๋ ๊ฒ์ด ๋ ์ข์ ๊ฒฝ์ฐ๊ฐ ๋ง์ต๋๋ค:
```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer
>>> model = AutoModelForCTC.from_pretrained(
... "facebook/wav2vec2-base",
... ctc_loss_reduction="mean",
... pad_token_id=processor.tokenizer.pad_token_id,
... )
```
์ด์ ์ธ ๋จ๊ณ๋ง ๋จ์์ต๋๋ค:
1. [`TrainingArguments`]์์ ํ๋ จ ํ์ดํผํ๋ผ๋ฏธํฐ๋ฅผ ์ ์ํ์ธ์. `output_dir`์ ๋ชจ๋ธ์ ์ ์ฅํ  ๊ฒฝ๋ก๋ฅผ ์ง์ ํ๋ ์ ์ผํ ํ์ ๋งค๊ฐ๋ณ์์
๋๋ค. `push_to_hub=True`๋ฅผ ์ค์ ํ์ฌ ๋ชจ๋ธ์ Hub์ ์
๋ก๋ ํ  ์ ์์ต๋๋ค(๋ชจ๋ธ์ ์
๋ก๋ํ๋ ค๋ฉด Hugging Face์ ๋ก๊ทธ์ธํด์ผ ํฉ๋๋ค). [`Trainer`]๋ ๊ฐ ์ํญ๋ง๋ค WER์ ํ๊ฐํ๊ณ  ํ๋ จ ์ฒดํฌํฌ์ธํธ๋ฅผ ์ ์ฅํฉ๋๋ค.
2. ๋ชจ๋ธ, ๋ฐ์ดํฐ ์ธํธ, ํ ํฌ๋์ด์ , ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ, `compute_metrics` ํจ์์ ํจ๊ป [`Trainer`]์ ํ๋ จ ์ธ์๋ฅผ ์ ๋ฌํ์ธ์.
3. [`~Trainer.train`]์ ํธ์ถํ์ฌ ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ์ธ์.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_asr_mind_model",
... per_device_train_batch_size=8,
... gradient_accumulation_steps=2,
... learning_rate=1e-5,
... warmup_steps=500,
... max_steps=2000,
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
... evaluation_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
... logging_steps=25,
... load_best_model_at_end=True,
... metric_for_best_model="wer",
... greater_is_better=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=processor.feature_extractor,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ํ๋ จ์ด ์๋ฃ๋๋ฉด ๋ชจ๋๊ฐ ๋ชจ๋ธ์ ์ฌ์ฉํ ์ ์๋๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ์ Hub์ ๊ณต์ ํ์ธ์:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
์๋ ์์ฑ ์ธ์์ ์ํด ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๋ ์์ธํ ์์ ๋ ์์ด ์๋ ์์ฑ ์ธ์์ ์ํ [๋ธ๋ก๊ทธ ํฌ์คํธ](https://huggingface.co/blog/fine-tune-wav2vec2-english)์ ๋ค๊ตญ์ด ์๋ ์์ฑ ์ธ์์ ์ํ [ํฌ์คํธ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)๋ฅผ ์ฐธ์กฐํ์ธ์.
</Tip>
## ์ถ๋ก ํ๊ธฐ[[inference]]
์ข์์, ์ด์ ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ์ผ๋ ์ถ๋ก ์ ์ฌ์ฉํ ์ ์์ต๋๋ค!
์ถ๋ก ์ ์ฌ์ฉํ ์ค๋์ค ํ์ผ์ ๊ฐ์ ธ์ค์ธ์. ํ์ํ ๊ฒฝ์ฐ ์ค๋์ค ํ์ผ์ ์ํ๋ง ๋น์จ์ ๋ชจ๋ธ์ ์ํ๋ง ๋ ์ดํธ์ ๋ง๊ฒ ๋ฆฌ์ํ๋งํ๋ ๊ฒ์ ์์ง ๋ง์ธ์!
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
์ถ๋ก ์ ์ํด ๋ฏธ์ธ ์กฐ์ ๋ ๋ชจ๋ธ์ ์ํํด๋ณด๋ ๊ฐ์ฅ ๊ฐ๋จํ ๋ฐฉ๋ฒ์ [`pipeline`]์ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค. ๋ชจ๋ธ์ ์ฌ์ฉํ์ฌ ์๋ ์์ฑ ์ธ์์ ์ํ `pipeline`์ ์ธ์คํด์คํํ๊ณ  ์ค๋์ค ํ์ผ์ ์ ๋ฌํ์ธ์:
```py
>>> from transformers import pipeline
>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```
<Tip>
ํ
์คํธ๋ก ๋ณํ๋ ๊ฒฐ๊ณผ๊ฐ ๊ฝค ๊ด์ฐฎ์ง๋ง ๋ ์ข์ ์๋ ์์ต๋๋ค! ๋ ๋์ ๊ฒฐ๊ณผ๋ฅผ ์ป์ผ๋ ค๋ฉด ๋ ๋ง์ ์์ ๋ก ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ์ธ์!
</Tip>
`pipeline`์ ๊ฒฐ๊ณผ๋ฅผ ์๋์ผ๋ก ์ฌํํ ์๋ ์์ต๋๋ค:
<frameworkcontent>
<pt>
์ค๋์ค ํ์ผ๊ณผ ํ
์คํธ๋ฅผ ์ ์ฒ๋ฆฌํ๊ณ  PyTorch ํ
์๋ก `input`์ ๋ฐํํ  ํ๋ก์ธ์๋ฅผ ๊ฐ์ ธ์ค์ธ์:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
์
๋ ฅ์ ๋ชจ๋ธ์ ์ ๋ฌํ๊ณ  ๋ก์ง์ ๋ฐํํ์ธ์:
```py
>>> from transformers import AutoModelForCTC
>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
๊ฐ์ฅ ๋์ ํ๋ฅ ์ `input_ids`๋ฅผ ์์ธกํ๊ณ , ํ๋ก์ธ์๋ฅผ ์ฌ์ฉํ์ฌ ์์ธก๋ `input_ids`๋ฅผ ๋ค์ ํ
์คํธ๋ก ๋์ฝ๋ฉํ์ธ์:
```py
>>> import torch
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
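`batch_decode`๊ฐ ๋ด๋ถ์ ์ผ๋ก ์ํํ๋ CTC ๊ทธ๋ฆฌ๋ ๋์ฝ๋ฉ์ ํต์ฌ์ "์ฐ์ ๋ฐ๋ณต ๋ณํฉ + ๋ธ๋ญํฌ ํ ํฐ ์ ๊ฑฐ"์
๋๋ค. ์๋๋ ์ด ๋จ๊ณ๋ง ๋ผ์ด๋ธ ์ฅ๋๊ฐ ์ค์ผ์น๋ก, ์ดํ(`vocab`)์ ํ๋ ์๋ณ argmax ๊ฒฐ๊ณผ๋ ์ค๋ช
์ ์ํ ๊ฐ์ ์
๋๋ค:

```python
# CTC ๊ทธ๋ฆฌ๋ ๋์ฝ๋ฉ: ์ฐ์ ๋ฐ๋ณต๋ id๋ฅผ ํ๋๋ก ๋ณํฉํ๊ณ  ๋ธ๋ญํฌ id๋ ๋ฒ๋ฆฝ๋๋ค
def ctc_greedy_decode(frame_ids, id_to_char, blank_id=0):
    chars = []
    prev = None
    for i in frame_ids:
        if i != prev and i != blank_id:   # ์ง์  ํ๋ ์๊ณผ ๊ฐ๊ฑฐ๋ ๋ธ๋ญํฌ๋ฉด ๊ฑด๋๋๋๋ค
            chars.append(id_to_char[i])
        prev = i
    return "".join(chars)

vocab = {1: "H", 2: "I"}
# ํ๋ ์๋ณ argmax ๊ฒฐ๊ณผ: H H (blank) I I
decoded = ctc_greedy_decode([1, 1, 0, 2, 2], vocab, blank_id=0)
print(decoded)  # "HI"
```

๋ธ๋ญํฌ๊ฐ ์๊ธฐ ๋๋ฌธ์ `[1, 0, 1]`์ฒ๋ผ ๊ฐ์ ๋ฌธ์๊ฐ ์ค์ ๋ก ๋ ๋ฒ ๋ฐ์ํ ๊ฒฝ์ฐ(`"HH"`)์, ํ ๋ฌธ์๊ฐ ์ฌ๋ฌ ํ๋ ์์ ๊ฑธ์ณ ๋ฐ๋ณต๋ ๊ฒฝ์ฐ๋ฅผ ๊ตฌ๋ถํ  ์ ์์ต๋๋ค.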
</pt>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ์ ๋ก์ท(zero-shot) ๊ฐ์ฒด ํ์ง[[zeroshot-object-detection]]
[[open-in-colab]]
์ผ๋ฐ์ ์ผ๋ก [๊ฐ์ฒด ํ์ง](object_detection)์ ์ฌ์ฉ๋๋ ๋ชจ๋ธ์ ํ์ตํ๊ธฐ ์ํด์๋ ๋ ์ด๋ธ์ด ์ง์ ๋ ์ด๋ฏธ์ง ๋ฐ์ดํฐ ์ธํธ๊ฐ ํ์ํฉ๋๋ค.
๊ทธ๋ฆฌ๊ณ ํ์ต ๋ฐ์ดํฐ์ ์กด์ฌํ๋ ํด๋์ค(๋ ์ด๋ธ)๋ง ํ์งํ ์ ์๋ค๋ ํ๊ณ์ ์ด ์์ต๋๋ค.
๋ค๋ฅธ ๋ฐฉ์์ ์ฌ์ฉํ๋ [OWL-ViT](../model_doc/owlvit) ๋ชจ๋ธ๋ก ์ ๋ก์ท ๊ฐ์ฒด ํ์ง๊ฐ ๊ฐ๋ฅํฉ๋๋ค.
OWL-ViT๋ ๊ฐ๋ฐฉํ ์ดํ(open-vocabulary) ๊ฐ์ฒด ํ์ง๊ธฐ์
๋๋ค.
์ฆ, ๋ ์ด๋ธ์ด ์ง์ ๋ ๋ฐ์ดํฐ ์ธํธ์ ๋ฏธ์ธ ์กฐ์ ํ์ง ์๊ณ  ์์  ํ
์คํธ ์ฟผ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ์ด๋ฏธ์ง์์ ๊ฐ์ฒด๋ฅผ ํ์งํ  ์ ์์ต๋๋ค.
OWL-ViT ๋ชจ๋ธ์ ๋ฉํฐ ๋ชจ๋ฌ ํํ์ ํ์ฉํด ๊ฐ๋ฐฉํ ์ดํ ํ์ง(open-vocabulary detection)๋ฅผ ์ํํฉ๋๋ค.
[CLIP](../model_doc/clip) ๋ชจ๋ธ์ ๊ฒฝ๋ํ(lightweight)๋ ๊ฐ์ฒด ๋ถ๋ฅ์ ์ง์ญํ(localization) ํค๋๋ฅผ ๊ฒฐํฉํฉ๋๋ค.
๊ฐ๋ฐฉํ ์ดํ ํ์ง๋ CLIP์ ํ
์คํธ ์ธ์ฝ๋๋ก ์์  ํ์(free-text) ์ฟผ๋ฆฌ๋ฅผ ์๋ฒ ๋ฉํ๊ณ , ๊ฐ์ฒด ๋ถ๋ฅ์ ์ง์ญํ ํค๋์ ์
๋ ฅ์ผ๋ก ์ฌ์ฉํฉ๋๋ค.
์ด๋ฏธ์ง์ ํด๋น ํ
์คํธ ์ค๋ช
์ ์ฐ๊ฒฐํ๋ฉด ViT๊ฐ ์ด๋ฏธ์ง ํจ์น(image patches)๋ฅผ ์
๋ ฅ์ผ๋ก ์ฒ๋ฆฌํฉ๋๋ค.
OWL-ViT ๋ชจ๋ธ์ ์ ์๋ค์ CLIP ๋ชจ๋ธ์ ์ฒ์๋ถํฐ ํ์ต(scratch learning)ํ ํ์, ์ด๋ถ ๋งค์นญ ์์ค(bipartite matching loss)์ ์ฌ์ฉํ์ฌ ํ์ค ๊ฐ์ฒด ์ธ์ ๋ฐ์ดํฐ์
์ผ๋ก OWL-ViT ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ์ต๋๋ค.
์ด ์ ๊ทผ ๋ฐฉ์์ ์ฌ์ฉํ๋ฉด ๋ชจ๋ธ์ ๋ ์ด๋ธ์ด ์ง์ ๋ ๋ฐ์ดํฐ ์ธํธ์ ๋ํ ์ฌ์  ํ์ต ์์ด๋ ํ
์คํธ ์ค๋ช
์ ๊ธฐ๋ฐ์ผ๋ก ๊ฐ์ฒด๋ฅผ ํ์งํ  ์ ์์ต๋๋ค.
์ด๋ฒ ๊ฐ์ด๋์์๋ OWL-ViT ๋ชจ๋ธ์ ์ฌ์ฉ๋ฒ์ ๋ค๋ฃฐ ๊ฒ์
๋๋ค:
- ํ
์คํธ ํ๋กฌํํธ ๊ธฐ๋ฐ ๊ฐ์ฒด ํ์ง
- ์ผ๊ด ๊ฐ์ฒด ํ์ง
- ์ด๋ฏธ์ง ๊ฐ์ด๋ ๊ฐ์ฒด ํ์ง
์์ํ๊ธฐ ์ ์ ํ์ํ ๋ผ์ด๋ธ๋ฌ๋ฆฌ๊ฐ ๋ชจ๋ ์ค์น๋์ด ์๋์ง ํ์ธํ์ธ์:
```bash
pip install -q transformers
```
## ์ ๋ก์ท(zero-shot) ๊ฐ์ฒด ํ์ง ํ์ดํ๋ผ์ธ[[zeroshot-object-detection-pipeline]]
[`pipeline`]์ ํ์ฉํ๋ฉด ๊ฐ์ฅ ๊ฐ๋จํ๊ฒ OWL-ViT ๋ชจ๋ธ์ ์ถ๋ก ํด๋ณผ ์ ์์ต๋๋ค.
[Hugging Face Hub์ ์
๋ก๋๋ ์ฒดํฌํฌ์ธํธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์์ ์ ๋ก์ท(zero-shot) ๊ฐ์ฒด ํ์ง์ฉ ํ์ดํ๋ผ์ธ์ ์ธ์คํด์คํํฉ๋๋ค:
```python
>>> from transformers import pipeline
>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
๋ค์์ผ๋ก, ๊ฐ์ฒด๋ฅผ ํ์งํ๊ณ ์ถ์ ์ด๋ฏธ์ง๋ฅผ ์ ํํ์ธ์.
์ฌ๊ธฐ์๋ [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images ๋ฐ์ดํฐ ์ธํธ์ ์ผ๋ถ์ธ ์ฐ์ฃผ๋นํ์ฌ ์์ผ๋ฆฐ ์ฝ๋ฆฐ์ค(Eileen Collins) ์ฌ์ง์ ์ฌ์ฉํ๊ฒ ์ต๋๋ค.
```py
>>> import skimage
>>> import numpy as np
>>> from PIL import Image
>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/>
</div>
์ด๋ฏธ์ง์ ํด๋น ์ด๋ฏธ์ง์ ํ๋ณด ๋ ์ด๋ธ์ ํ์ดํ๋ผ์ธ์ผ๋ก ์ ๋ฌํฉ๋๋ค.
์ฌ๊ธฐ์๋ ์ด๋ฏธ์ง๋ฅผ ์ง์ ์ ๋ฌํ์ง๋ง, ์ปดํจํฐ์ ์ ์ฅ๋ ์ด๋ฏธ์ง์ ๊ฒฝ๋ก๋ url๋ก ์ ๋ฌํ ์๋ ์์ต๋๋ค.
`candidate_labels`๋ ์ด ์์์ฒ๋ผ ๊ฐ๋จํ ๋จ์ด์ผ ์๋ ์๊ณ  ์ข€ ๋ ์ค๋ช
์ ์ธ ๋จ์ด์ผ ์๋ ์์ต๋๋ค.
๋ํ, ์ด๋ฏธ์ง๋ฅผ ๊ฒ์(query)ํ๋ ค๋ ๋ชจ๋  ํญ๋ชฉ์ ๋ํ ํ
์คํธ ์ค๋ช
๋ ์ ๋ฌํฉ๋๋ค.
```py
>>> predictions = detector(
... image,
... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408,
'label': 'human face',
'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
{'score': 0.28099656105041504,
'label': 'nasa badge',
'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
{'score': 0.2110239565372467,
'label': 'rocket',
'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
{'score': 0.13790413737297058,
'label': 'star-spangled banner',
'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
{'score': 0.11950037628412247,
'label': 'nasa badge',
'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
{'score': 0.10649408400058746,
'label': 'rocket',
'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
์ด์  ์์ธก๊ฐ์ ์๊ฐํํด๋ด
์๋ค:
```py
>>> from PIL import ImageDraw
>>> draw = ImageDraw.Draw(image)
>>> for prediction in predictions:
... box = prediction["box"]
... label = prediction["label"]
... score = prediction["score"]
... xmin, ymin, xmax, ymax = box.values()
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/>
</div>
## ํ
์คํธ ํ๋กฌํํธ ๊ธฐ๋ฐ ๊ฐ์ฒด ํ์ง[[textprompted-zeroshot-object-detection-by-hand]]
์ ๋ก์ท ๊ฐ์ฒด ํ์ง ํ์ดํ๋ผ์ธ ์ฌ์ฉ๋ฒ์ ๋ํด ์ดํด๋ณด์์ผ๋, ์ด์ ๋์ผํ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ ํด๋ณด๊ฒ ์ต๋๋ค.
[Hugging Face Hub์ ์
๋ก๋๋ ์ฒดํฌํฌ์ธํธ](https://huggingface.co/models?other=owlvit)์์ ๊ด๋ จ ๋ชจ๋ธ๊ณผ ํ๋ก์ธ์๋ฅผ ๊ฐ์ ธ์ค๋ ๊ฒ์ผ๋ก ์์ํฉ๋๋ค.
์ฌ๊ธฐ์๋ ์ด์ ๊ณผ ๋์ผํ ์ฒดํฌํฌ์ธํธ๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค:
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
๋ค๋ฅธ ์ด๋ฏธ์ง๋ฅผ ์ฌ์ฉํด ๋ณด๊ฒ ์ต๋๋ค:
```py
>>> import requests
>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/>
</div>
ํ๋ก์ธ์๋ฅผ ์ฌ์ฉํด ๋ชจ๋ธ์ ์
๋ ฅ์ ์ค€๋นํฉ๋๋ค.
ํ๋ก์ธ์๋ ๋ชจ๋ธ์ ์
๋ ฅ์ผ๋ก ์ฌ์ฉํ๊ธฐ ์ํด ์ด๋ฏธ์ง ํฌ๊ธฐ๋ฅผ ๋ณํํ๊ณ  ์ ๊ทํํ๋ ์ด๋ฏธ์ง ํ๋ก์ธ์์, ํ
์คํธ ์
๋ ฅ์ ์ฒ๋ฆฌํ๋ [`CLIPTokenizer`]๋ก ๊ตฌ์ฑ๋ฉ๋๋ค.
```py
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
๋ชจ๋ธ์ ์
๋ ฅ์ ์ ๋ฌํ๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ์ฒ๋ฆฌ ๋ฐ ์๊ฐํํฉ๋๋ค.
์ด๋ฏธ์ง ํ๋ก์ธ์๊ฐ ๋ชจ๋ธ์ ์ด๋ฏธ์ง๋ฅผ ์
๋ ฅํ๊ธฐ ์ ์ ์ด๋ฏธ์ง ํฌ๊ธฐ๋ฅผ ์กฐ์ ํ๊ธฐ ๋๋ฌธ์, [`~OwlViTImageProcessor.post_process_object_detection`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํด ์์ธก๋ ๋ฐ์ด๋ฉ ๋ฐ์ค(bounding box)๋ฅผ ์๋ณธ ์ด๋ฏธ์ง์ ์ขํ๊ณ„๋ก ๋ณต์ํด์ผ ํฉ๋๋ค.
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = torch.tensor([im.size[::-1]])
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(im)
>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
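ํ์ฒ˜๋ฆฌ ๋จ๊ณ„๊ฐ€ ํ•˜๋Š” ์ขŒํ‘œ ๋ณต์›์„ ๋‹จ์ˆœํ™”ํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ OWL-ViT ํ›„์ฒ˜๋ฆฌ๋Š” ์ค‘์‹ฌ์ ยทํญยท๋†’์ด ํ˜•์‹ ๋ณ€ํ™˜๊ณผ ์ ์ˆ˜ ์ž„๊ณ„๊ฐ’ ์ฒ˜๋ฆฌ ๋“ฑ์„ ํ•จ๊ป˜ ์ˆ˜ํ–‰ํ•˜์ง€๋งŒ, "์ •๊ทœํ™”๋œ ์ขŒํ‘œ๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋กœ ๋˜๋Œ๋ฆฐ๋‹ค"๋Š” ํ•ต์‹ฌ๋งŒ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค(`rescale_box` ํ•จ์ˆ˜์™€ ์ž…๋ ฅ๊ฐ’์€ ์„ค๋ช…์šฉ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค):

```python
# [0, 1] ์ •๊ทœํ™” ์ขŒํ‘œ (xmin, ymin, xmax, ymax) -> ์›๋ณธ ์ด๋ฏธ์ง€์˜ ํ”ฝ์…€ ์ขŒํ‘œ
def rescale_box(normalized_box, image_width, image_height):
    xmin, ymin, xmax, ymax = normalized_box
    return (
        round(xmin * image_width),
        round(ymin * image_height),
        round(xmax * image_width),
        round(ymax * image_height),
    )

# 640x480 ์ด๋ฏธ์ง€ ๊ธฐ์ค€์œผ๋กœ ๋ณต์›ํ•œ ์˜ˆ์‹œ
box = rescale_box((0.25, 0.5, 0.75, 1.0), image_width=640, image_height=480)
print(box)  # (160, 240, 480, 480)
```

`target_sizes`์— `im.size[::-1]`์ฒ˜๋Ÿผ (๋†’์ด, ๋„ˆ๋น„) ์ˆœ์„œ๋ฅผ ์ „๋‹ฌํ•˜๋Š” ์ด์œ ๋„ ์ด ๋ณต์› ๊ณ„์‚ฐ์— ์›๋ณธ ํฌ๊ธฐ๊ฐ€ ํ•„์š”ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค.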
## ์ผ๊ด ์ฒ๋ฆฌ[[batch-processing]]
์ฌ๋ฌ ์ด๋ฏธ์ง์ ํ
์คํธ ์ฟผ๋ฆฌ๋ฅผ ์ ๋ฌํ์ฌ ์ฌ๋ฌ ์ด๋ฏธ์ง์์ ์๋ก ๋ค๋ฅธ(๋๋ ๋์ผํ) ๊ฐ์ฒด๋ฅผ ๊ฒ์ํ ์ ์์ต๋๋ค.
์ผ๊ด ์ฒ๋ฆฌ๋ฅผ ์ํด์ ํ
์คํธ ์ฟผ๋ฆฌ๋ ์ด์ค ๋ฆฌ์คํธ๋ก, ์ด๋ฏธ์ง๋ PIL ์ด๋ฏธ์ง, PyTorch ํ
์, ๋๋ NumPy ๋ฐฐ์ด๋ก ์ด๋ฃจ์ด์ง ๋ฆฌ์คํธ๋ก ํ๋ก์ธ์์ ์ ๋ฌํด์ผ ํฉ๋๋ค.
```py
>>> images = [image, im]
>>> text_queries = [
... ["human face", "rocket", "nasa badge", "star-spangled banner"],
... ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```
์ด์ ์๋ ํ์ฒ๋ฆฌ๋ฅผ ์ํด ๋จ์ผ ์ด๋ฏธ์ง์ ํฌ๊ธฐ๋ฅผ ํ
์๋ก ์ ๋ฌํ์ง๋ง, ํํ์ ์ ๋ฌํ ์ ์๊ณ , ์ฌ๋ฌ ์ด๋ฏธ์ง๋ฅผ ์ฒ๋ฆฌํ๋ ๊ฒฝ์ฐ์๋ ํํ๋ก ์ด๋ฃจ์ด์ง ๋ฆฌ์คํธ๋ฅผ ์ ๋ฌํ ์๋ ์์ต๋๋ค.
์๋ ๋ ์์ ์ ๋ํ ์์ธก์ ์์ฑํ๊ณ , ๋ ๋ฒ์งธ ์ด๋ฏธ์ง(`image_idx = 1`)๋ฅผ ์๊ฐํํด ๋ณด๊ฒ ์ต๋๋ค.
```py
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = [x.size[::-1] for x in images]
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])
>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")
>>> images[image_idx]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
## ์ด๋ฏธ์ง ๊ฐ์ด๋ ๊ฐ์ฒด ํ์ง[[imageguided-object-detection]]
ํ
์คํธ ์ฟผ๋ฆฌ๋ฅผ ์ด์ฉํ ์ ๋ก์ท ๊ฐ์ฒด ํ์ง ์ธ์๋ OWL-ViT ๋ชจ๋ธ์ ์ด๋ฏธ์ง ๊ฐ์ด๋ ๊ฐ์ฒด ํ์ง ๊ธฐ๋ฅ์ ์ ๊ณตํฉ๋๋ค.
์ด๋ฏธ์ง๋ฅผ ์ฟผ๋ฆฌ๋ก ์ฌ์ฉํด ๋์ ์ด๋ฏธ์ง์์ ์ ์ฌํ ๊ฐ์ฒด๋ฅผ ์ฐพ์ ์ ์๋ค๋ ์๋ฏธ์
๋๋ค.
ํ
์คํธ ์ฟผ๋ฆฌ์ ๋ฌ๋ฆฌ ํ๋์ ์์ ์ด๋ฏธ์ง์์๋ง ๊ฐ๋ฅํฉ๋๋ค.
์ํ์ ๊ณ ์์ด ๋ ๋ง๋ฆฌ๊ฐ ์๋ ์ด๋ฏธ์ง๋ฅผ ๋์ ์ด๋ฏธ์ง(target image)๋ก, ๊ณ ์์ด ํ ๋ง๋ฆฌ๊ฐ ์๋ ์ด๋ฏธ์ง๋ฅผ ์ฟผ๋ฆฌ๋ก ์ฌ์ฉํด๋ณด๊ฒ ์ต๋๋ค:
```py
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)
>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```
๋ค์ ์ด๋ฏธ์ง๋ฅผ ์ดํด๋ณด๊ฒ ์ต๋๋ค:
```py
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/>
</div>
์ ์ฒ๋ฆฌ ๋จ๊ณ์์ ํ
์คํธ ์ฟผ๋ฆฌ ๋์ ์ `query_images`๋ฅผ ์ฌ์ฉํฉ๋๋ค:
```py
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
์์ธก์ ๊ฒฝ์ฐ, ๋ชจ๋ธ์ ์
๋ ฅ์ ์ ๋ฌํ๋ ๋์ [`~OwlViTForObjectDetection.image_guided_detection`]์ ์ ๋ฌํฉ๋๋ค.
๋ ์ด๋ธ์ด ์๋ค๋ ์ ์ ์ ์ธํ๋ฉด ์ด์ ๊ณผ ๋์ผํฉ๋๋ค.
์ด์ ๊ณผ ๋์ผํ๊ฒ ์ด๋ฏธ์ง๋ฅผ ์๊ฐํํฉ๋๋ค.
```py
>>> with torch.no_grad():
... outputs = model.image_guided_detection(**inputs)
... target_sizes = torch.tensor([image_target.size[::-1]])
... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(image_target)
>>> scores = results["scores"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score in zip(boxes, scores):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)
>>> image_target
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/>
</div>
OWL-ViT ๋ชจ๋ธ์ ์ถ๋ก ํ๊ณ ์ถ๋ค๋ฉด ์๋ ๋ฐ๋ชจ๋ฅผ ํ์ธํ์ธ์:
<iframe
src="https://adirik-owl-vit.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ์ ๋ก์ท(zero-shot) ์ด๋ฏธ์ง ๋ถ๋ฅ[[zeroshot-image-classification]]
[[open-in-colab]]
์ ๋ก์ท(zero-shot) ์ด๋ฏธ์ง ๋ถ๋ฅ๋ ํน์ ์นดํ
๊ณ ๋ฆฌ์ ์์๊ฐ ํฌํจ๋ ๋ฐ์ดํฐ๋ฅผ ํ์ต๋์ง ์์ ๋ชจ๋ธ์ ์ฌ์ฉํด ์ด๋ฏธ์ง ๋ถ๋ฅ๋ฅผ ์ํํ๋ ์์
์
๋๋ค.
์ผ๋ฐ์ ์ผ๋ก ์ด๋ฏธ์ง ๋ถ๋ฅ๋ฅผ ์ํด์๋ ๋ ์ด๋ธ์ด ๋ฌ๋ฆฐ ํน์ ์ด๋ฏธ์ง ๋ฐ์ดํฐ๋ก ๋ชจ๋ธ ํ์ต์ด ํ์ํ๋ฉฐ, ์ด ๋ชจ๋ธ์ ํน์ ์ด๋ฏธ์ง์ ํน์ง์ ๋ ์ด๋ธ์ "๋งคํ"ํ๋ ๋ฐฉ๋ฒ์ ํ์ตํฉ๋๋ค.
์๋ก์ด ๋ ์ด๋ธ์ด ์๋ ๋ถ๋ฅ ์์
์ ์ด๋ฌํ ๋ชจ๋ธ์ ์ฌ์ฉํด์ผ ํ๋ ๊ฒฝ์ฐ์๋, ๋ชจ๋ธ์ "์ฌ๋ณด์ "ํ๊ธฐ ์ํด ๋ฏธ์ธ ์กฐ์ ์ด ํ์ํฉ๋๋ค.
์ด์ ๋์กฐ์ ์ผ๋ก, ์ ๋ก์ท ๋๋ ๊ฐ๋ฐฉํ ์ดํ(open vocabulary) ์ด๋ฏธ์ง ๋ถ๋ฅ ๋ชจ๋ธ์ ์ผ๋ฐ์ ์ผ๋ก ๋๊ท๋ชจ ์ด๋ฏธ์ง ๋ฐ์ดํฐ์ ํด๋น ์ค๋ช
์ ๋ํด ํ์ต๋ ๋ฉํฐ๋ชจ๋ฌ(multimodal) ๋ชจ๋ธ์
๋๋ค.
์ด๋ฌํ ๋ชจ๋ธ์ ์ ๋ก์ท ์ด๋ฏธ์ง ๋ถ๋ฅ๋ฅผ ํฌํจํ ๋ง์ ๋ค์ด์คํธ๋ฆผ ์์
์ ์ฌ์ฉํ ์ ์๋ ์ ๋ ฌ๋(aligned) ๋น์ ์ธ์ด ํํ์ ํ์ตํฉ๋๋ค.
์ด๋ ์ด๋ฏธ์ง ๋ถ๋ฅ์ ๋ํ ๋ณด๋ค ์ ์ฐํ ์ ๊ทผ ๋ฐฉ์์ผ๋ก, ์ถ๊ฐ ํ์ต ๋ฐ์ดํฐ ์์ด ์๋ก์ด ๋ ์ด๋ธ์ด๋ ํ์ตํ์ง ๋ชปํ ์นดํ
๊ณ ๋ฆฌ์ ๋ํด ๋ชจ๋ธ์ ์ผ๋ฐํํ ์ ์์ต๋๋ค.
๋ํ, ์ฌ์ฉ์๊ฐ ๋์ ๊ฐ์ฒด์ ๋ํ ์์ ํ์์ ํ
์คํธ ์ค๋ช
์ผ๋ก ์ด๋ฏธ์ง๋ฅผ ๊ฒ์ํ ์ ์์ต๋๋ค.
์ด๋ฒ ๊ฐ์ด๋์์ ๋ฐฐ์ธ ๋ด์ฉ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
* ์ ๋ก์ท ์ด๋ฏธ์ง ๋ถ๋ฅ ํ์ดํ๋ผ์ธ ๋ง๋ค๊ธฐ
* ์ง์ ์ ๋ก์ท ์ด๋ฏธ์ง ๋ถ๋ฅ ๋ชจ๋ธ ์ถ๋ก ์คํํ๊ธฐ
์์ํ๊ธฐ ์ ์ ํ์ํ ๋ผ์ด๋ธ๋ฌ๋ฆฌ๊ฐ ๋ชจ๋ ์ค์น๋์ด ์๋์ง ํ์ธํ์ธ์:
```bash
pip install -q transformers
```
## ์ ๋ก์ท(zero-shot) ์ด๋ฏธ์ง ๋ถ๋ฅ ํ์ดํ๋ผ์ธ[[zeroshot-image-classification-pipeline]]
[`pipeline`]์ ํ์ฉํ๋ฉด ๊ฐ์ฅ ๊ฐ๋จํ๊ฒ ์ ๋ก์ท ์ด๋ฏธ์ง ๋ถ๋ฅ๋ฅผ ์ง์ํ๋ ๋ชจ๋ธ๋ก ์ถ๋ก ํด๋ณผ ์ ์์ต๋๋ค.
[Hugging Face Hub์ ์
๋ก๋๋ ์ฒดํฌํฌ์ธํธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์์ ํ์ดํ๋ผ์ธ์ ์ธ์คํด์คํํฉ๋๋ค.
```python
>>> from transformers import pipeline
>>> checkpoint = "openai/clip-vit-large-patch14"
>>> classifier = pipeline(model=checkpoint, task="zero-shot-image-classification")
```
๋ค์์ผ๋ก, ๋ถ๋ฅํ๊ณ ์ถ์ ์ด๋ฏธ์ง๋ฅผ ์ ํํ์ธ์.
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/>
</div>
์ด๋ฏธ์ง์ ํด๋น ์ด๋ฏธ์ง์ ํ๋ณด ๋ ์ด๋ธ์ธ `candidate_labels`๋ฅผ ํ์ดํ๋ผ์ธ์ผ๋ก ์ ๋ฌํฉ๋๋ค.
์ฌ๊ธฐ์๋ ์ด๋ฏธ์ง๋ฅผ ์ง์ ์ ๋ฌํ์ง๋ง, ์ปดํจํฐ์ ์ ์ฅ๋ ์ด๋ฏธ์ง์ ๊ฒฝ๋ก๋ url๋ก ์ ๋ฌํ ์๋ ์์ต๋๋ค.
`candidate_labels`๋ ์ด ์์์ฒ๋ผ ๊ฐ๋จํ ๋จ์ด์ผ ์๋ ์๊ณ ์ข ๋ ์ค๋ช
์ ์ธ ๋จ์ด์ผ ์๋ ์์ต๋๋ค.
```py
>>> predictions = classifier(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
{'score': 0.000199399160919711, 'label': 'seagull'},
{'score': 7.392891711788252e-05, 'label': 'fox'},
{'score': 5.96074532950297e-05, 'label': 'bear'}]
```
## ์ง์ ์ ๋ก์ท(zero-shot) ์ด๋ฏธ์ง ๋ถ๋ฅํ๊ธฐ[[zeroshot-image-classification-by-hand]]
์ด์ ์ ๋ก์ท ์ด๋ฏธ์ง ๋ถ๋ฅ ํ์ดํ๋ผ์ธ ์ฌ์ฉ ๋ฐฉ๋ฒ์ ์ดํด๋ณด์์ผ๋, ์คํํ๋ ๋ฐฉ๋ฒ์ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
[Hugging Face Hub์ ์
๋ก๋๋ ์ฒดํฌํฌ์ธํธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์์ ๋ชจ๋ธ๊ณผ ํ๋ก์ธ์๋ฅผ ๊ฐ์ ธ์ค๋ ๊ฒ์ผ๋ก ์์ํฉ๋๋ค.
์ฌ๊ธฐ์๋ ์ด์ ๊ณผ ๋์ผํ ์ฒดํฌํฌ์ธํธ๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค:
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
๋ค๋ฅธ ์ด๋ฏธ์ง๋ฅผ ์ฌ์ฉํด ๋ณด๊ฒ ์ต๋๋ค.
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/>
</div>
ํ๋ก์ธ์๋ฅผ ์ฌ์ฉํด ๋ชจ๋ธ์ ์
๋ ฅ์ ์ค๋นํฉ๋๋ค.
ํ๋ก์ธ์๋ ๋ชจ๋ธ์ ์
๋ ฅ์ผ๋ก ์ฌ์ฉํ๊ธฐ ์ํด ์ด๋ฏธ์ง ํฌ๊ธฐ๋ฅผ ๋ณํํ๊ณ ์ ๊ทํํ๋ ์ด๋ฏธ์ง ํ๋ก์ธ์์ ํ
์คํธ ์
๋ ฅ์ ์ฒ๋ฆฌํ๋ ํ ํฌ๋์ด์ ๋ก ๊ตฌ์ฑ๋ฉ๋๋ค.
```py
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```
๋ชจ๋ธ์ ์
๋ ฅ์ ์ ๋ฌํ๊ณ , ๊ฒฐ๊ณผ๋ฅผ ํ์ฒ๋ฆฌํฉ๋๋ค:
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()
>>> result = [
... {"score": score, "label": candidate_label}
... for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0])
... ]
>>> result
[{'score': 0.998572, 'label': 'car'},
{'score': 0.0010570387, 'label': 'bike'},
{'score': 0.0003393686, 'label': 'tree'},
{'score': 3.1572064e-05, 'label': 'cat'}]
```
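์œ„ ํ›„์ฒ˜๋ฆฌ, ์ฆ‰ `logits_per_image`๋ฅผ ์†Œํ”„ํŠธ๋งฅ์Šค๋กœ ํ™•๋ฅ ๋กœ ๋ฐ”๊พธ๊ณ  ์ ์ˆ˜ ์ˆœ์œผ๋กœ ์ •๋ ฌํ•˜๋Š” ๊ณผ์ •์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์ถ•์•ฝํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค(๋กœ์ง“ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ƒ์˜ ์ˆซ์ž์ž…๋‹ˆ๋‹ค):

```python
import math

# ์†Œํ”„ํŠธ๋งฅ์Šค: ๋กœ์ง“์„ ํ•ฉ์ด 1์ธ ํ™•๋ฅ  ๋ถ„ํฌ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค
def softmax(logits):
    m = max(logits)                       # ์ˆ˜์น˜ ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ตœ๋Œ€๊ฐ’์„ ๋บ๋‹ˆ๋‹ค
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidate_labels = ["tree", "car", "bike", "cat"]
logits = [1.0, 8.0, 2.0, 0.5]             # ๊ฐ€์ƒ์˜ logits_per_image ๊ฐ’

probs = softmax(logits)
result = sorted(zip(probs, candidate_labels), reverse=True)  # ํ™•๋ฅ  ๋‚ด๋ฆผ์ฐจ์ˆœ ์ •๋ ฌ
print(result[0][1])  # "car"
```

๋ชจ๋ธ์ด ๋‚ด๋†“๋Š” ๊ฒƒ์€ ํ›„๋ณด ๋ ˆ์ด๋ธ”๋ณ„ ์œ ์‚ฌ๋„ ์ ์ˆ˜(๋กœ์ง“)์ผ ๋ฟ์ด๊ณ , ์ด๋ฅผ ํ™•๋ฅ ์ฒ˜๋Ÿผ ํ•ด์„ํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์†Œํ”„ํŠธ๋งฅ์Šค ํ›„์ฒ˜๋ฆฌ์˜ ์—ญํ• ์ž…๋‹ˆ๋‹ค.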
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง(Masked language modeling)[[masked-language-modeling]]
[[open-in-colab]]
<Youtube id="mqElG5QJWUg"/>
๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง์ ์ํ์ค์์ ๋ง์คํน๋ ํ ํฐ์ ์์ธกํ๋ฉฐ, ๋ชจ๋ธ์ ์๋ฐฉํฅ์ผ๋ก ํ ํฐ์ ์ก์ธ์คํ ์ ์์ต๋๋ค.
์ฆ, ๋ชจ๋ธ์ ํ ํฐ์ ์ผ์ชฝ๊ณผ ์ค๋ฅธ์ชฝ ์์ชฝ์์ ์ ๊ทผํ ์ ์์ต๋๋ค.
๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง์ ์ ์ฒด ์ํ์ค์ ๋ํ ๋ฌธ๋งฅ์ ์ดํด๊ฐ ํ์ํ ์์
์ ์ ํฉํ๋ฉฐ, BERT๊ฐ ๊ทธ ์์ ํด๋นํฉ๋๋ค.
์ด๋ฒ ๊ฐ์ด๋์์ ๋ค๋ฃฐ ๋ด์ฉ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
1. [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํฐ ์ธํธ์์ [r/askscience](https://www.reddit.com/r/askscience/) ๋ถ๋ถ์ ์ฌ์ฉํด [DistilRoBERTa](https://huggingface.co/distilroberta-base) ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํฉ๋๋ค.
2. ์ถ๋ก ์์ ์ง์ ๋ฏธ์ธ ์กฐ์ ํ ๋ชจ๋ธ์ ์ฌ์ฉํฉ๋๋ค.
<Tip>
์ด๋ฒ ๊ฐ์ด๋์์์ฒ๋ผ ๋ค๋ฅธ ์ํคํ
์ฒ๋ฅผ ๋ฏธ์ธ ์กฐ์ ํด ๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง์ ํ ์ ์์ต๋๋ค.
๋ค์ ์ํคํ
์ณ ์ค ํ๋๋ฅผ ์ ํํ์ธ์:
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
์์ํ๊ธฐ ์ ์ ํ์ํ ๋ผ์ด๋ธ๋ฌ๋ฆฌ๊ฐ ๋ชจ๋ ์ค์น๋์ด ์๋์ง ํ์ธํ์ธ์:
```bash
pip install transformers datasets evaluate
```
Hugging Face ๊ณ์ ์ ๋ก๊ทธ์ธํ์ฌ ๋ชจ๋ธ์ ์
๋ก๋ํ๊ณ ์ปค๋ฎค๋ํฐ์์ ๊ณต์ ๋ฅผ ๊ถ์ฅํฉ๋๋ค. ๋ฉ์์ง๊ฐ ํ์๋๋ฉด(When prompted) ํ ํฐ์ ์
๋ ฅํ์ฌ ๋ก๊ทธ์ธํฉ๋๋ค:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load ELI5 dataset[[load-eli5-dataset]]
Start by loading a smaller subset of the r/askscience portion of the ELI5 dataset from the 🤗 Datasets library.
This gives you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset
>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```
๋ฐ์ดํฐ ์ธํธ์ `train_asks`๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํด ํ๋ จ ๋ฐ์ดํฐ์ ํ
์คํธ ๋ฐ์ดํฐ๋ก ๋ถํ ํฉ๋๋ค:
```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```
๊ทธ๋ฆฌ๊ณ ์๋ ์์๋ฅผ ์ดํด๋ณด์ธ์:
```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
'score': [6, 3],
'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
'answers_urls': {'url': []},
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls': {'url': []}}
```
๋ง์ ๋ณด์ผ ์ ์์ง๋ง ์ค์ ๋ก๋ `text` ํ๋์๋ง ์ง์คํ๋ฉด ๋ฉ๋๋ค.
์ธ์ด ๋ชจ๋ธ๋ง ์์
์ ๋ฉ์ง ์ ์ (๋น์ง๋ ํ์ต์ผ๋ก) *๋ค์ ๋จ์ด๊ฐ ๋ ์ด๋ธ*์ด๊ธฐ ๋๋ฌธ์ ๋ ์ด๋ธ์ด ๋ฐ๋ก ํ์ํ์ง ์์ต๋๋ค.
## ์ ์ฒ๋ฆฌ[[preprocess]]
<Youtube id="8PmhEIXhBvI"/>
๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง์ ์ํด, ๋ค์ ๋จ๊ณ๋ก DistilRoBERTa ํ ํฌ๋์ด์ ๋ฅผ ๊ฐ์ ธ์์ `text` ํ์ ํ๋๋ฅผ ์ฒ๋ฆฌํฉ๋๋ค:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```
์์ ์์ ์์์ ๋ง์ฐฌ๊ฐ์ง๋ก, `text` ํ๋๋ `answers` ์์ ์ค์ฒฉ๋์ด ์์ต๋๋ค.
๋ฐ๋ผ์ ์ค์ฒฉ๋ ๊ตฌ์กฐ์์ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ `text` ํ์ ํ๋๋ฅผ ์ถ์ถํฉ๋๋ค:
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
'answers.score': [6, 3],
'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
'answers_urls.url': [],
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls.url': []}
```
์ด์ ๊ฐ ํ์ ํ๋๋ `answers` ์ ๋์ฌ(prefix)๋ก ํ์๋ ๋๋ก ๋ณ๋์ ์ด์ด ๋๊ณ , `text` ํ๋๋ ์ด์ ๋ฆฌ์คํธ๊ฐ ๋์์ต๋๋ค.
๊ฐ ๋ฌธ์ฅ์ ๊ฐ๋ณ์ ์ผ๋ก ํ ํฐํํ๋ ๋์ ๋ฆฌ์คํธ๋ฅผ ๋ฌธ์์ด๋ก ๋ณํํ์ฌ ํ๋ฒ์ ํ ํฐํํ ์ ์์ต๋๋ค.
๋ค์์ ๊ฐ ์์ ์ ๋ํด ๋ฌธ์์ด๋ก ์ด๋ฃจ์ด์ง ๋ฆฌ์คํธ๋ฅผ `join`ํ๊ณ ๊ฒฐ๊ณผ๋ฅผ ํ ํฐํํ๋ ์ฒซ ๋ฒ์งธ ์ ์ฒ๋ฆฌ ํจ์์
๋๋ค:
```py
>>> def preprocess_function(examples):
... return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
์ด ์ ์ฒ๋ฆฌ ํจ์๋ฅผ ์ ์ฒด ๋ฐ์ดํฐ ์ธํธ์ ์ ์ฉํ๊ธฐ ์ํด ๐ค Datasets [`~datasets.Dataset.map`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํฉ๋๋ค.
๋ฐ์ดํฐ ์ธํธ์ ์ฌ๋ฌ ์์๋ฅผ ํ ๋ฒ์ ์ฒ๋ฆฌํ๋๋ก `batched=True`๋ฅผ ์ค์ ํ๊ณ `num_proc`๋ก ์ฒ๋ฆฌ ํ์๋ฅผ ๋๋ฆฌ๋ฉด `map` ํจ์์ ์๋๋ฅผ ๋์ผ ์ ์์ต๋๋ค.
ํ์ํ์ง ์์ ์ด์ ์ ๊ฑฐํฉ๋๋ค:
```py
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )
```
์ด ๋ฐ์ดํฐ ์ธํธ์๋ ํ ํฐ ์ํ์ค๊ฐ ํฌํจ๋์ด ์์ง๋ง ์ด ์ค ์ผ๋ถ๋ ๋ชจ๋ธ์ ์ต๋ ์
๋ ฅ ๊ธธ์ด๋ณด๋ค ๊น๋๋ค.
์ด์ ๋ ๋ฒ์งธ ์ ์ฒ๋ฆฌ ํจ์๋ฅผ ์ฌ์ฉํด
- ๋ชจ๋ ์ํ์ค๋ฅผ ์ฐ๊ฒฐํ๊ณ
- ์ฐ๊ฒฐ๋ ์ํ์ค๋ฅผ ์ ์ํ `block_size` ๋ณด๋ค ๋ ์งง์ ๋ฉ์ด๋ฆฌ๋ก ๋ถํ ํ๋๋ฐ, ์ด ๋ฉ์ด๋ฆฌ๋ ๋ชจ๋ธ์ ์ต๋ ์
๋ ฅ ๊ธธ์ด๋ณด๋ค ์งง๊ณ GPU RAM์ด ์์ฉํ ์ ์๋ ๊ธธ์ด์ฌ์ผ ํฉ๋๋ค.
```py
>>> block_size = 128
>>> def group_texts(examples):
... # Concatenate all texts.
... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
... total_length = len(concatenated_examples[list(examples.keys())[0]])
... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
... # customize this part to your needs.
... if total_length >= block_size:
... total_length = (total_length // block_size) * block_size
... # Split by chunks of block_size.
... result = {
... k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
... for k, t in concatenated_examples.items()
... }
... result["labels"] = result["input_ids"].copy()
... return result
```
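To make the chunking behavior concrete, here is the same concatenate-then-split logic applied to a toy batch of made-up token ids (a standalone sketch; the tokenized dataset above works identically, just with real ids):

```python
# Toy demonstration of group_texts with block_size=4.
block_size = 4

def group_texts(examples):
    # Concatenate all lists in the batch into one long sequence per key.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Drop the small remainder that doesn't fill a whole block.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]}
out = group_texts(batch)
print(out["input_ids"])  # [[1, 2, 3, 4], [5, 6, 7, 8]]: 10 ids make two blocks of 4; the last 2 are dropped
```

Note that the labels are simply a copy of the inputs; the data collator below takes care of masking.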
์ ์ฒด ๋ฐ์ดํฐ ์ธํธ์ `group_texts` ํจ์๋ฅผ ์ ์ฉํฉ๋๋ค:
```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
์ด์ [`DataCollatorForLanguageModeling`]์ ์ฌ์ฉํ์ฌ ๋ฐ์ดํฐ ์์ ์ ๋ฐฐ์น๋ฅผ ์์ฑํฉ๋๋ค.
๋ฐ์ดํฐ ์ธํธ ์ ์ฒด๋ฅผ ์ต๋ ๊ธธ์ด๋ก ํจ๋ฉํ๋ ๊ฒ๋ณด๋ค collation ๋จ๊ณ์์ ๋งค ๋ฐฐ์น์์์์ ์ต๋ ๊ธธ์ด๋ก ๋ฌธ์ฅ์ *๋์ ์ผ๋ก ํจ๋ฉ*ํ๋ ๊ฒ์ด ๋ ํจ์จ์ ์
๋๋ค.
<frameworkcontent>
<pt>
์ํ์ค ๋ ํ ํฐ์ ํจ๋ฉ ํ ํฐ์ผ๋ก ์ฌ์ฉํ๊ณ ๋ฐ์ดํฐ๋ฅผ ๋ฐ๋ณตํ ๋๋ง๋ค ํ ํฐ์ ๋ฌด์์๋ก ๋ง์คํนํ๋๋ก `mlm_-probability`๋ฅผ ์ง์ ํฉ๋๋ค:
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
</pt>
<tf>
์ํ์ค ๋ ํ ํฐ์ ํจ๋ฉ ํ ํฐ์ผ๋ก ์ฌ์ฉํ๊ณ ๋ฐ์ดํฐ๋ฅผ ๋ฐ๋ณตํ ๋๋ง๋ค ํ ํฐ์ ๋ฌด์์๋ก ๋ง์คํนํ๋๋ก `mlm_-probability`๋ฅผ ์ง์ ํฉ๋๋ค:
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```
</tf>
</frameworkcontent>
## ํ๋ จ[[train]]
<frameworkcontent>
<pt>
<Tip>
[`Trainer`]๋ก ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๋ฐ ์ต์ํ์ง ์๋ค๋ฉด ๊ธฐ๋ณธ ํํ ๋ฆฌ์ผ [์ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์ดํด๋ณด์ธ์!
</Tip>
์ด์ ๋ชจ๋ธ ํ๋ จ์ ์์ํ ์ค๋น๊ฐ ๋์์ต๋๋ค! [`AutoModelForMaskedLM`]๋ฅผ ์ฌ์ฉํด DistilRoBERTa ๋ชจ๋ธ์ ๊ฐ์ ธ์ต๋๋ค:
```py
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```
์ด์ ์ธ ๋จ๊ณ๊ฐ ๋จ์์ต๋๋ค:
1. [`TrainingArguments`]์ ํ๋ จ ํ์ดํผํ๋ผ๋ฏธํฐ๋ฅผ ์ ์ํฉ๋๋ค. ๋ชจ๋ธ ์ ์ฅ ์์น๋ฅผ ์ง์ ํ๋ `output_dir`์ ์ ์ผํ ํ์ ํ๋ผ๋ฏธํฐ์
๋๋ค. `push_to_hub=True`๋ฅผ ์ค์ ํ์ฌ ์ด ๋ชจ๋ธ์ Hub์ ์
๋ก๋ํฉ๋๋ค (๋ชจ๋ธ์ ์
๋ก๋ํ๋ ค๋ฉด Hugging Face์ ๋ก๊ทธ์ธํด์ผ ํฉ๋๋ค).
2. ๋ชจ๋ธ, ๋ฐ์ดํฐ ์ธํธ ๋ฐ ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ(collator)์ ํจ๊ป ํ๋ จ ์ธ์๋ฅผ [`Trainer`]์ ์ ๋ฌํฉ๋๋ค.
3. [`~Trainer.train`]์ ํธ์ถํ์ฌ ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํฉ๋๋ค.
```py
>>> from transformers import TrainingArguments, Trainer
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_mlm_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=lm_dataset["train"],
... eval_dataset=lm_dataset["test"],
... data_collator=data_collator,
... )
>>> trainer.train()
```
ํ๋ จ์ด ์๋ฃ๋๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ํํ๋ ์ํฐ(perplexity)๋ฅผ ๊ณ์ฐํ๊ณ ๋ชจ๋ธ์ ํ๊ฐํฉ๋๋ค:
```py
>>> import math
>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```
๊ทธ๋ฆฌ๊ณ [`~transformers.Trainer.push_to_hub`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํด ๋ค๋ฅธ ์ฌ๋๋ค์ด ์ฌ์ฉํ ์ ์๋๋ก, Hub๋ก ๋ชจ๋ธ์ ์
๋ก๋ํฉ๋๋ค.
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
Keras๋ก ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๋ฐ ์ต์ํ์ง ์๋ค๋ฉด ๊ธฐ๋ณธ ํํ ๋ฆฌ์ผ [์ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์ดํด๋ณด์ธ์!
</Tip>
TensorFlow๋ก ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๊ธฐ ์ํด์๋ ์ตํฐ๋ง์ด์ (optimizer) ํจ์ ์ค์ , ํ์ต๋ฅ (learning rate) ์ค์ผ์ฅด๋ง, ํ๋ จ ํ์ดํผํ๋ผ๋ฏธํฐ ์ค์ ๋ถํฐ ์์ํ์ธ์:
```py
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
๋ค์์ผ๋ก [`TFAutoModelForMaskedLM`]๋ฅผ ์ฌ์ฉํด DistilRoBERTa ๋ชจ๋ธ์ ๊ฐ์ ธ์ต๋๋ค:
```py
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
```
Convert your datasets to the `tf.data.Dataset` format with the [`~transformers.TFPreTrainedModel.prepare_tf_dataset`] method:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... lm_dataset["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... lm_dataset["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer)
```
์ด๋ ์
๋ก๋ํ ๋ชจ๋ธ๊ณผ ํ ํฌ๋์ด์ ์ ์์น๋ฅผ [`~transformers.PushToHubCallback`]์ ์ง์ ํ์ฌ ์ํํ ์ ์์ต๋๋ค:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_eli5_mlm_model",
... tokenizer=tokenizer,
... )
```
๋๋์ด ๋ชจ๋ธ์ ํ๋ จํ ์ค๋น๊ฐ ๋์์ต๋๋ค!
๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ ๋ ํ๋ จ ๋ฐ ๊ฒ์ฆ ๋ฐ์ดํฐ ์ธํธ, ์ํฌํฌ ์, ์ฝ๋ฐฑ์ด ํฌํจ๋ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์ ํธ์ถํฉ๋๋ค:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```
ํ๋ จ์ด ์๋ฃ๋๋ฉด, ์๋์ผ๋ก Hub๋ก ์
๋ก๋๋์ด ๋๊ตฌ๋ ์ฌ์ฉํ ์ ์์ต๋๋ค!
</tf>
</frameworkcontent>
<Tip>
๋ง์คํน๋ ์ธ์ด ๋ชจ๋ธ๋ง์ ์ํด ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๋ฐฉ๋ฒ์ ๋ํ ๋ณด๋ค ์ฌ์ธต์ ์ธ ์์ ๋
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
๋๋ [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์ ์ฐธ์กฐํ์ธ์.
</Tip>
## ์ถ๋ก [[inference]]
์ง๊ธ๊น์ง ๋ชจ๋ธ ๋ฏธ์ธ ์กฐ์ ์ ์ ํ์ผ๋, ์ถ๋ก ์ ์ฌ์ฉํ ์ ์์ต๋๋ค!
๋ชจ๋ธ์ด ๋น์นธ์ ์ฑ์ธ ํ
์คํธ๋ฅผ ์คํ์
ํ ํฐ(special token)์ธ `<mask>` ํ ํฐ์ผ๋ก ํ์ํฉ๋๋ค:
```py
>>> text = "The Milky Way is a <mask> galaxy."
```
์ถ๋ก ์ ์ํด ๋ฏธ์ธ ์กฐ์ ํ ๋ชจ๋ธ์ ํ
์คํธํ๋ ๊ฐ์ฅ ๊ฐ๋จํ ๋ฐฉ๋ฒ์ [`pipeline`]์์ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค.
`fill-mask`ํ์คํฌ๋ก `pipeline`์ ์ธ์คํด์คํํ๊ณ ํ
์คํธ๋ฅผ ์ ๋ฌํฉ๋๋ค.
`top_k` ๋งค๊ฐ๋ณ์๋ฅผ ์ฌ์ฉํ์ฌ ๋ฐํํ๋ ์์ธก์ ์๋ฅผ ์ง์ ํ ์ ์์ต๋๋ค:
```py
>>> from transformers import pipeline
>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
'token': 21300,
'token_str': ' spiral',
'sequence': 'The Milky Way is a spiral galaxy.'},
{'score': 0.07087188959121704,
'token': 2232,
'token_str': ' massive',
'sequence': 'The Milky Way is a massive galaxy.'},
{'score': 0.06434620916843414,
'token': 650,
'token_str': ' small',
'sequence': 'The Milky Way is a small galaxy.'}]
```
<frameworkcontent>
<pt>
ํ
์คํธ๋ฅผ ํ ํฐํํ๊ณ `input_ids`๋ฅผ PyTorch ํ
์ ํํ๋ก ๋ฐํํฉ๋๋ค.
๋ํ, `<mask>` ํ ํฐ์ ์์น๋ฅผ ์ง์ ํด์ผ ํฉ๋๋ค:
```py
>>> import torch
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```
๋ชจ๋ธ์ `inputs`๋ฅผ ์
๋ ฅํ๊ณ , ๋ง์คํน๋ ํ ํฐ์ `logits`๋ฅผ ๋ฐํํฉ๋๋ค:
```py
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```
๊ทธ๋ฐ ๋ค์ ๊ฐ์ฅ ๋์ ํ๋ฅ ์ ๊ฐ์ง ๋ง์คํฌ ํ ํฐ 3๊ฐ๋ฅผ ๋ฐํํ๊ณ , ์ถ๋ ฅํฉ๋๋ค:
```py
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</pt>
<tf>
ํ
์คํธ๋ฅผ ํ ํฐํํ๊ณ `input_ids`๋ฅผ TensorFlow ํ
์ ํํ๋ก ๋ฐํํฉ๋๋ค.
๋ํ, `<mask>` ํ ํฐ์ ์์น๋ฅผ ์ง์ ํด์ผ ํฉ๋๋ค:
```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```
๋ชจ๋ธ์ `inputs`๋ฅผ ์
๋ ฅํ๊ณ , ๋ง์คํน๋ ํ ํฐ์ `logits`๋ฅผ ๋ฐํํฉ๋๋ค:
```py
>>> from transformers import TFAutoModelForMaskedLM
>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```
๊ทธ๋ฐ ๋ค์ ๊ฐ์ฅ ๋์ ํ๋ฅ ์ ๊ฐ์ง ๋ง์คํฌ ํ ํฐ 3๊ฐ๋ฅผ ๋ฐํํ๊ณ , ์ถ๋ ฅํฉ๋๋ค:
```py
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()
>>> for token in top_3_tokens:
... print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</tf>
</frameworkcontent>
# ์๊ฐ์ ์ง์์๋ต (Visual Question Answering)
[[open-in-colab]]
์๊ฐ์ ์ง์์๋ต(VQA)์ ์ด๋ฏธ์ง๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ๊ฐ๋ฐฉํ ์ง๋ฌธ์ ๋์ํ๋ ์์
์
๋๋ค. ์ด ์์
์ ์ง์ํ๋ ๋ชจ๋ธ์ ์
๋ ฅ์ ๋๋ถ๋ถ ์ด๋ฏธ์ง์ ์ง๋ฌธ์ ์กฐํฉ์ด๋ฉฐ, ์ถ๋ ฅ์ ์์ฐ์ด๋ก ๋ ๋ต๋ณ์
๋๋ค.
VQA์ ์ฃผ์ ์ฌ์ฉ ์ฌ๋ก๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
* ์๊ฐ ์ฅ์ ์ธ์ ์ํ ์ ๊ทผ์ฑ ์ ํ๋ฆฌ์ผ์ด์
์ ๊ตฌ์ถํ ์ ์์ต๋๋ค.
* ๊ต์ก: ๊ฐ์๋ ๊ต๊ณผ์์ ๋์จ ์๊ฐ ์๋ฃ์ ๋ํ ์ง๋ฌธ์ ๋ตํ ์ ์์ต๋๋ค. ๋ํ ์ฒดํํ ์ ์์ ์ ์ ๋ฑ์์๋ VQA๋ฅผ ํ์ฉํ ์ ์์ต๋๋ค.
* ๊ณ ๊ฐ ์๋น์ค ๋ฐ ์ ์์๊ฑฐ๋: VQA๋ ์ฌ์ฉ์๊ฐ ์ ํ์ ๋ํด ์ง๋ฌธํ ์ ์๊ฒ ํจ์ผ๋ก์จ ์ฌ์ฉ์ ๊ฒฝํ์ ํฅ์์ํฌ ์ ์์ต๋๋ค.
* ์ด๋ฏธ์ง ๊ฒ์: VQA ๋ชจ๋ธ์ ์ฌ์ฉํ์ฌ ์ํ๋ ํน์ฑ์ ๊ฐ์ง ์ด๋ฏธ์ง๋ฅผ ๊ฒ์ํ ์ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ์ฌ์ฉ์๋ "๊ฐ์์ง๊ฐ ์์ด?"๋ผ๊ณ ๋ฌผ์ด๋ด์ ์ฃผ์ด์ง ์ด๋ฏธ์ง ๋ฌถ์์์ ๊ฐ์์ง๊ฐ ์๋ ๋ชจ๋ ์ด๋ฏธ์ง๋ฅผ ๋ฐ์๋ณผ ์ ์์ต๋๋ค.
์ด ๊ฐ์ด๋์์ ํ์ตํ ๋ด์ฉ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
- VQA ๋ชจ๋ธ ์ค ํ๋์ธ [ViLT](../../en/model_doc/vilt)๋ฅผ [`Graphcore/vqa` ๋ฐ์ดํฐ์
](https://huggingface.co/datasets/Graphcore/vqa) ์์ ๋ฏธ์ธ์กฐ์ ํ๋ ๋ฐฉ๋ฒ
- ๋ฏธ์ธ์กฐ์ ๋ ViLT ๋ชจ๋ธ๋ก ์ถ๋ก ํ๋ ๋ฐฉ๋ฒ
- BLIP-2 ๊ฐ์ ์์ฑ ๋ชจ๋ธ๋ก ์ ๋ก์ท VQA ์ถ๋ก ์ ์คํํ๋ ๋ฐฉ๋ฒ
## ViLT ๋ฏธ์ธ ์กฐ์ [[finetuning-vilt]]
ViLT๋ Vision Transformer (ViT) ๋ด์ ํ
์คํธ ์๋ฒ ๋ฉ์ ํฌํจํ์ฌ ๋น์ /์์ฐ์ด ์ฌ์ ํ๋ จ(VLP; Vision-and-Language Pretraining)์ ์ํ ๊ธฐ๋ณธ ๋์์ธ์ ์ ๊ณตํฉ๋๋ค.
ViLT ๋ชจ๋ธ์ ๋น์ ํธ๋์คํฌ๋จธ(ViT)์ ํ
์คํธ ์๋ฒ ๋ฉ์ ๋ฃ์ด ๋น์ /์ธ์ด ์ฌ์ ํ๋ จ(VLP; Vision-and-Language Pre-training)์ ์ํ ๊ธฐ๋ณธ์ ์ธ ๋์์ธ์ ๊ฐ์ท์ต๋๋ค. ์ด ๋ชจ๋ธ์ ์ฌ๋ฌ ๋ค์ด์คํธ๋ฆผ ์์
์ ์ฌ์ฉํ ์ ์์ต๋๋ค. VQA ํ์คํฌ์์๋ (`[CLS]` ํ ํฐ์ ์ต์ข
์๋ ์ํ ์์ ์ ํ ๋ ์ด์ด์ธ) ๋ถ๋ฅ ํค๋๊ฐ ์์ผ๋ฉฐ ๋ฌด์์๋ก ์ด๊ธฐํ๋ฉ๋๋ค.
๋ฐ๋ผ์ ์ฌ๊ธฐ์์ ์๊ฐ์ ์ง์์๋ต์ **๋ถ๋ฅ ๋ฌธ์ **๋ก ์ทจ๊ธ๋ฉ๋๋ค.
์ต๊ทผ์ BLIP, BLIP-2, InstructBLIP์ ๊ฐ์ ๋ชจ๋ธ๋ค์ VQA๋ฅผ ์์ฑํ ์์
์ผ๋ก ๊ฐ์ฃผํฉ๋๋ค. ๊ฐ์ด๋์ ํ๋ฐ๋ถ์์๋ ์ด๋ฐ ๋ชจ๋ธ๋ค์ ์ฌ์ฉํ์ฌ ์ ๋ก์ท VQA ์ถ๋ก ์ ํ๋ ๋ฐฉ๋ฒ์ ๋ํด ์ค๋ช
ํ๊ฒ ์ต๋๋ค.
์์ํ๊ธฐ ์ ํ์ํ ๋ชจ๋ ๋ผ์ด๋ธ๋ฌ๋ฆฌ๋ฅผ ์ค์นํ๋์ง ํ์ธํ์ธ์.
```bash
pip install -q transformers datasets
```
์ปค๋ฎค๋ํฐ์ ๋ชจ๋ธ์ ๊ณต์ ํ๋ ๊ฒ์ ๊ถ์ฅ ๋๋ฆฝ๋๋ค. Hugging Face ๊ณ์ ์ ๋ก๊ทธ์ธํ์ฌ ๐ค Hub์ ์
๋ก๋ํ ์ ์์ต๋๋ค.
๋ฉ์์ง๊ฐ ๋ํ๋๋ฉด ๋ก๊ทธ์ธํ ํ ํฐ์ ์
๋ ฅํ์ธ์:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
๋ชจ๋ธ ์ฒดํฌํฌ์ธํธ๋ฅผ ์ ์ญ ๋ณ์๋ก ์ ์ธํ์ธ์.
```py
>>> model_checkpoint = "dandelin/vilt-b32-mlm"
```
## ๋ฐ์ดํฐ ๊ฐ์ ธ์ค๊ธฐ [[load-the-data]]
์ด ๊ฐ์ด๋์์๋ `Graphcore/vqa` ๋ฐ์ดํฐ์ธํธ์ ์์ ์ํ์ ์ฌ์ฉํฉ๋๋ค. ์ ์ฒด ๋ฐ์ดํฐ์ธํธ๋ [๐ค Hub](https://huggingface.co/datasets/Graphcore/vqa) ์์ ํ์ธํ ์ ์์ต๋๋ค.
[`Graphcore/vqa` ๋ฐ์ดํฐ์ธํธ](https://huggingface.co/datasets/Graphcore/vqa) ์ ๋์์ผ๋ก ๊ณต์ [VQA ๋ฐ์ดํฐ์ธํธ ํ์ด์ง](https://visualqa.org/download.html) ์์ ๋์ผํ ๋ฐ์ดํฐ๋ฅผ ์๋์ผ๋ก ๋ค์ด๋ก๋ํ ์ ์์ต๋๋ค. ์ง์ ๊ณต์ํ ๋ฐ์ดํฐ๋ก ํํ ๋ฆฌ์ผ์ ๋ฐ๋ฅด๊ณ ์ถ๋ค๋ฉด [์ด๋ฏธ์ง ๋ฐ์ดํฐ์ธํธ ๋ง๋ค๊ธฐ](https://huggingface.co/docs/datasets/image_dataset#loading-script) ๋ผ๋
๐ค Datasets ๋ฌธ์๋ฅผ ์ฐธ์กฐํ์ธ์.
๊ฒ์ฆ ๋ฐ์ดํฐ์ ์ฒซ 200๊ฐ ํญ๋ชฉ์ ๋ถ๋ฌ์ ๋ฐ์ดํฐ์ธํธ์ ํน์ฑ์ ํ์ธํด ๋ณด๊ฒ ์ต๋๋ค:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
>>> dataset
Dataset({
features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
num_rows: 200
})
```
์์ ๋ฅผ ํ๋ ๋ฝ์ ๋ฐ์ดํฐ์ธํธ์ ํน์ฑ์ ์ดํดํด ๋ณด๊ฒ ์ต๋๋ค.
```py
>>> dataset[0]
{'question': 'Where is he looking?',
'question_type': 'none of the above',
'question_id': 262148000,
'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
'answer_type': 'other',
'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
'weights': [0.30000001192092896,
1.0,
0.30000001192092896,
0.30000001192092896]}}
```
๋ฐ์ดํฐ์ธํธ์๋ ๋ค์๊ณผ ๊ฐ์ ํน์ฑ์ด ํฌํจ๋์ด ์์ต๋๋ค:
* `question`: ์ด๋ฏธ์ง์ ๋ํ ์ง๋ฌธ
* `image_id`: ์ง๋ฌธ๊ณผ ๊ด๋ จ๋ ์ด๋ฏธ์ง์ ๊ฒฝ๋ก
* `label`: ๋ฐ์ดํฐ์ ๋ ์ด๋ธ (annotations)
๋๋จธ์ง ํน์ฑ๋ค์ ํ์ํ์ง ์๊ธฐ ๋๋ฌธ์ ์ญ์ ํด๋ ๋ฉ๋๋ค:
```py
>>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```
๋ณด์๋ค์ํผ `label` ํน์ฑ์ ๊ฐ์ ์ง๋ฌธ๋ง๋ค ๋ต๋ณ์ด ์ฌ๋ฌ ๊ฐ ์์ ์ ์์ต๋๋ค. ๋ชจ๋ ๋ค๋ฅธ ๋ฐ์ดํฐ ๋ผ๋ฒจ๋ฌ๋ค๋ก๋ถํฐ ์์ง๋์๊ธฐ ๋๋ฌธ์ธ๋ฐ์. ์ง๋ฌธ์ ๋ต๋ณ์ ์ฃผ๊ด์ ์ผ ์ ์์ต๋๋ค. ์ด ๊ฒฝ์ฐ ์ง๋ฌธ์ "๊ทธ๋ ์ด๋๋ฅผ ๋ณด๊ณ ์๋์?" ์์ง๋ง, ์ด๋ค ์ฌ๋๋ค์ "์๋"๋ก ๋ ์ด๋ธ์ ๋ฌ์๊ณ , ๋ค๋ฅธ ์ฌ๋๋ค์ "ํ
์ด๋ธ" ๋๋ "์ค์ผ์ดํธ๋ณด๋" ๋ฑ์ผ๋ก ์ฃผ์์ ๋ฌ์์ต๋๋ค.
์๋์ ์ด๋ฏธ์ง๋ฅผ ๋ณด๊ณ ์ด๋ค ๋ต๋ณ์ ์ ํํ ๊ฒ์ธ์ง ์๊ฐํด ๋ณด์ธ์:
```python
>>> from PIL import Image
>>> image = Image.open(dataset[0]['image_id'])
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/>
</div>
์ง๋ฌธ๊ณผ ๋ต๋ณ์ ๋ชจํธ์ฑ์ผ๋ก ์ธํด ์ด๋ฌํ ๋ฐ์ดํฐ์ธํธ๋ ์ฌ๋ฌ ๊ฐ์ ๋ต๋ณ์ด ๊ฐ๋ฅํ๋ฏ๋ก ๋ค์ค ๋ ์ด๋ธ ๋ถ๋ฅ ๋ฌธ์ ๋ก ์ฒ๋ฆฌ๋ฉ๋๋ค. ๊ฒ๋ค๊ฐ, ์ํซ(one-hot) ์ธ์ฝ๋ฉ ๋ฒกํฐ๋ฅผ ์์ฑํ๊ธฐ๋ณด๋ค๋ ๋ ์ด๋ธ์์ ํน์ ๋ต๋ณ์ด ๋ํ๋๋ ํ์๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ์ํํธ ์ธ์ฝ๋ฉ์ ์์ฑํฉ๋๋ค.
์์ ์์์์ "์๋"๋ผ๋ ๋ต๋ณ์ด ๋ค๋ฅธ ๋ต๋ณ๋ณด๋ค ํจ์ฌ ๋ ์์ฃผ ์ ํ๋์๊ธฐ ๋๋ฌธ์ ๋ฐ์ดํฐ์ธํธ์์ `weight`๋ผ๊ณ ๋ถ๋ฆฌ๋ ์ ์๋ก 1.0์ ๊ฐ์ง๋ฉฐ, ๋๋จธ์ง ๋ต๋ณ๋ค์ 1.0 ๋ฏธ๋ง์ ์ ์๋ฅผ ๊ฐ์ง๋๋ค.
์ ์ ํ ๋ถ๋ฅ ํค๋๋ก ๋ชจ๋ธ์ ๋์ค์ ์ธ์คํด์คํํ๊ธฐ ์ํด ๋ ์ด๋ธ์ ์ ์๋ก ๋งคํํ ๋์
๋๋ฆฌ ํ๋, ๋ฐ๋๋ก ์ ์๋ฅผ ๋ ์ด๋ธ๋ก ๋งคํํ ๋์
๋๋ฆฌ ํ๋ ์ด 2๊ฐ์ ๋์
๋๋ฆฌ๋ฅผ ์์ฑํ์ธ์:
```py
>>> import itertools
>>> labels = [item['ids'] for item in dataset['label']]
>>> flattened_labels = list(itertools.chain(*labels))
>>> unique_labels = list(set(flattened_labels))
>>> label2id = {label: idx for idx, label in enumerate(unique_labels)}
>>> id2label = {idx: label for label, idx in label2id.items()}
```
์ด์ ๋งคํ์ด ์๋ฃ๋์์ผ๋ฏ๋ก ๋ฌธ์์ด ๋ต๋ณ์ ํด๋น id๋ก ๊ต์ฒดํ๊ณ , ๋ฐ์ดํฐ์ธํธ์ ๋ ํธ๋ฆฌํ ํ์ฒ๋ฆฌ๋ฅผ ์ํด ํธํํ ํ ์ ์์ต๋๋ค.
```python
>>> def replace_ids(inputs):
... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
... return inputs
>>> dataset = dataset.map(replace_ids)
>>> flat_dataset = dataset.flatten()
>>> flat_dataset.features
{'question': Value(dtype='string', id=None),
'image_id': Value(dtype='string', id=None),
'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
```
## ๋ฐ์ดํฐ ์ ์ฒ๋ฆฌ [[preprocessing-data]]
๋ค์ ๋จ๊ณ๋ ๋ชจ๋ธ์ ์ํด ์ด๋ฏธ์ง์ ํ
์คํธ ๋ฐ์ดํฐ๋ฅผ ์ค๋นํ๊ธฐ ์ํด ViLT ํ๋ก์ธ์๋ฅผ ๊ฐ์ ธ์ค๋ ๊ฒ์
๋๋ค.
[`ViltProcessor`]๋ BERT ํ ํฌ๋์ด์ ์ ViLT ์ด๋ฏธ์ง ํ๋ก์ธ์๋ฅผ ํธ๋ฆฌํ๊ฒ ํ๋์ ํ๋ก์ธ์๋ก ๋ฌถ์ต๋๋ค:
```py
>>> from transformers import ViltProcessor
>>> processor = ViltProcessor.from_pretrained(model_checkpoint)
```
๋ฐ์ดํฐ๋ฅผ ์ ์ฒ๋ฆฌํ๋ ค๋ฉด ์ด๋ฏธ์ง์ ์ง๋ฌธ์ [`ViltProcessor`]๋ก ์ธ์ฝ๋ฉํด์ผ ํฉ๋๋ค. ํ๋ก์ธ์๋ [`BertTokenizerFast`]๋ก ํ
์คํธ๋ฅผ ํ ํฌ๋์ด์ฆํ๊ณ ํ
์คํธ ๋ฐ์ดํฐ๋ฅผ ์ํด `input_ids`, `attention_mask` ๋ฐ `token_type_ids`๋ฅผ ์์ฑํฉ๋๋ค.
์ด๋ฏธ์ง๋ [`ViltImageProcessor`]๋ก ์ด๋ฏธ์ง๋ฅผ ํฌ๊ธฐ ์กฐ์ ํ๊ณ ์ ๊ทํํ๋ฉฐ, `pixel_values`์ `pixel_mask`๋ฅผ ์์ฑํฉ๋๋ค.
์ด๋ฐ ์ ์ฒ๋ฆฌ ๋จ๊ณ๋ ๋ชจ๋ ๋ด๋ถ์์ ์ด๋ฃจ์ด์ง๋ฏ๋ก, `processor`๋ฅผ ํธ์ถํ๊ธฐ๋ง ํ๋ฉด ๋ฉ๋๋ค. ํ์ง๋ง ์์ง ํ๊ฒ ๋ ์ด๋ธ์ด ์์ฑ๋์ง ์์์ต๋๋ค. ํ๊ฒ์ ํํ์์ ๊ฐ ์์๋ ๊ฐ๋ฅํ ๋ต๋ณ(๋ ์ด๋ธ)์ ํด๋นํฉ๋๋ค. ์ ํํ ๋ต๋ณ์ ์์๋ ํด๋น ์ ์(weight)๋ฅผ ์ ์ง์ํค๊ณ ๋๋จธ์ง ์์๋ 0์ผ๋ก ์ค์ ํด์ผ ํฉ๋๋ค.
์๋ ํจ์๊ฐ ์์์ ์ค๋ช
ํ๋๋ก ์ด๋ฏธ์ง์ ์ง๋ฌธ์ `processor`๋ฅผ ์ ์ฉํ๊ณ ๋ ์ด๋ธ์ ํ์์ ๋ง์ถฅ๋๋ค:
```py
>>> import torch
>>> def preprocess_data(examples):
... image_paths = examples['image_id']
... images = [Image.open(image_path) for image_path in image_paths]
... texts = examples['question']
... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")
... for k, v in encoding.items():
... encoding[k] = v.squeeze()
... targets = []
... for labels, scores in zip(examples['label.ids'], examples['label.weights']):
... target = torch.zeros(len(id2label))
... for label, score in zip(labels, scores):
... target[label] = score
... targets.append(target)
... encoding["labels"] = targets
... return encoding
```
์ ์ฒด ๋ฐ์ดํฐ์ธํธ์ ์ ์ฒ๋ฆฌ ํจ์๋ฅผ ์ ์ฉํ๋ ค๋ฉด ๐ค Datasets์ [`~datasets.map`] ํจ์๋ฅผ ์ฌ์ฉํ์ญ์์ค. `batched=True`๋ฅผ ์ค์ ํ์ฌ ๋ฐ์ดํฐ์ธํธ์ ์ฌ๋ฌ ์์๋ฅผ ํ ๋ฒ์ ์ฒ๋ฆฌํจ์ผ๋ก์จ `map`์ ๋ ๋น ๋ฅด๊ฒ ํ ์ ์์ต๋๋ค. ์ด ์์ ์์ ํ์ํ์ง ์์ ์ด์ ์ ๊ฑฐํ์ธ์.
```py
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
num_rows: 200
})
```
๋ง์ง๋ง ๋จ๊ณ๋ก, [`DefaultDataCollator`]๋ฅผ ์ฌ์ฉํ์ฌ ์์ ๋ก ์ธ ๋ฐฐ์น๋ฅผ ์์ฑํ์ธ์:
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
## ๋ชจ๋ธ ํ๋ จ [[train-the-model]]
์ด์ ๋ชจ๋ธ์ ํ๋ จํ๊ธฐ ์ํด ์ค๋น๋์์ต๋๋ค! [`ViltForQuestionAnswering`]์ผ๋ก ViLT๋ฅผ ๊ฐ์ ธ์ฌ ์ฐจ๋ก์
๋๋ค. ๋ ์ด๋ธ์ ์์ ๋ ์ด๋ธ ๋งคํ์ ์ง์ ํ์ธ์:
```py
>>> from transformers import ViltForQuestionAnswering
>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
```
์ด ์์ ์์๋ ๋ค์ ์ธ ๋จ๊ณ๋ง ๋จ์์ต๋๋ค:
1. [`TrainingArguments`]์์ ํ๋ จ ํ์ดํผํ๋ผ๋ฏธํฐ๋ฅผ ์ ์ํ์ธ์:
```py
>>> from transformers import TrainingArguments
>>> repo_id = "MariaK/vilt_finetuned_200"
>>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
2. ๋ชจ๋ธ, ๋ฐ์ดํฐ์ธํธ, ํ๋ก์ธ์, ๋ฐ์ดํฐ ์ฝ๋ ์ดํฐ์ ํจ๊ป ํ๋ จ ์ธ์๋ฅผ [`Trainer`]์ ์ ๋ฌํ์ธ์:
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=processed_dataset,
... tokenizer=processor,
... )
```
3. [`~Trainer.train`]์ ํธ์ถํ์ฌ ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ์ธ์:
```py
>>> trainer.train()
```
ํ๋ จ์ด ์๋ฃ๋๋ฉด, [`~Trainer.push_to_hub`] ๋ฉ์๋๋ฅผ ์ฌ์ฉํ์ฌ ๐ค Hub์ ๋ชจ๋ธ์ ๊ณต์ ํ์ธ์:
```py
>>> trainer.push_to_hub()
```
## ์ถ๋ก [[inference]]
ViLT ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๊ณ ๐ค Hub์ ์
๋ก๋ํ๋ค๋ฉด ์ถ๋ก ์ ์ฌ์ฉํ ์ ์์ต๋๋ค. ๋ฏธ์ธ ์กฐ์ ๋ ๋ชจ๋ธ์ ์ถ๋ก ์ ์ฌ์ฉํด๋ณด๋ ๊ฐ์ฅ ๊ฐ๋จํ ๋ฐฉ๋ฒ์ [`Pipeline`]์์ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค.
```py
>>> from transformers import pipeline
>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```
์ด ๊ฐ์ด๋์ ๋ชจ๋ธ์ 200๊ฐ์ ์์ ์์๋ง ํ๋ จ๋์์ผ๋ฏ๋ก ๊ทธ๋ค์ง ๋ง์ ๊ฒ์ ๊ธฐ๋ํ ์๋ ์์ต๋๋ค. ๋ฐ์ดํฐ์ธํธ์ ์ฒซ ๋ฒ์งธ ์์ ๋ฅผ ์ฌ์ฉํ์ฌ ์ถ๋ก ๊ฒฐ๊ณผ๋ฅผ ์ค๋ช
ํด๋ณด๊ฒ ์ต๋๋ค:
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> print(question)
>>> pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
```
๋น๋ก ํ์ ์ ๋ณ๋ก ์์ง๋ง, ๋ชจ๋ธ์ ์ค์ ๋ก ๋ฌด์ธ๊ฐ๋ฅผ ๋ฐฐ์ ์ต๋๋ค. ๋ ๋ง์ ์์ ์ ๋ ๊ธด ํ๋ จ ๊ธฐ๊ฐ์ด ์ฃผ์ด์ง๋ค๋ฉด ๋ถ๋ช
๋ ๋์ ๊ฒฐ๊ณผ๋ฅผ ์ป์ ์ ์์ ๊ฒ์
๋๋ค!
์ํ๋ค๋ฉด ํ์ดํ๋ผ์ธ์ ๊ฒฐ๊ณผ๋ฅผ ์๋์ผ๋ก ๋ณต์ ํ ์๋ ์์ต๋๋ค:
1. ์ด๋ฏธ์ง์ ์ง๋ฌธ์ ๊ฐ์ ธ์์ ํ๋ก์ธ์๋ฅผ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ์ ์ค๋นํฉ๋๋ค.
2. ์ ์ฒ๋ฆฌ๋ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ๋ธ์ ์ ๋ฌํฉ๋๋ค.
3. ๋ก์ง์์ ๊ฐ์ฅ ๊ฐ๋ฅ์ฑ ์๋ ๋ต๋ณ์ id๋ฅผ ๊ฐ์ ธ์์ `id2label`์์ ์ค์ ๋ต๋ณ์ ์ฐพ์ต๋๋ค.
```py
>>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> # prepare inputs
>>> inputs = processor(image, question, return_tensors="pt")
>>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")
>>> # forward pass
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits
>>> idx = logits.argmax(-1).item()
>>> print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```
## ์ ๋ก์ท VQA [[zeroshot-vqa]]
์ด์ ๋ชจ๋ธ์ VQA๋ฅผ ๋ถ๋ฅ ๋ฌธ์ ๋ก ์ฒ๋ฆฌํ์ต๋๋ค. BLIP, BLIP-2 ๋ฐ InstructBLIP์ ๊ฐ์ ์ต๊ทผ์ ๋ชจ๋ธ์ VQA๋ฅผ ์์ฑ ์์
์ผ๋ก ์ ๊ทผํฉ๋๋ค. [BLIP-2](../../en/model_doc/blip-2)๋ฅผ ์๋ก ๋ค์ด ๋ณด๊ฒ ์ต๋๋ค. ์ด ๋ชจ๋ธ์ ์ฌ์ ํ๋ จ๋ ๋น์ ์ธ์ฝ๋์ LLM์ ๋ชจ๋ ์กฐํฉ์ ์ฌ์ฉํ ์ ์๋ ์๋ก์ด ๋น์ -์์ฐ์ด ์ฌ์ ํ์ต ํจ๋ฌ๋ค์์ ๋์
ํ์ต๋๋ค. ([BLIP-2 ๋ธ๋ก๊ทธ ํฌ์คํธ](https://huggingface.co/blog/blip-2)๋ฅผ ํตํด ๋ ์์ธํ ์์๋ณผ ์ ์์ด์)
์ด๋ฅผ ํตํด ์๊ฐ์ ์ง์์๋ต์ ํฌํจํ ์ฌ๋ฌ ๋น์ -์์ฐ์ด ์์
์์ SOTA๋ฅผ ๋ฌ์ฑํ ์ ์์์ต๋๋ค.
์ด ๋ชจ๋ธ์ ์ด๋ป๊ฒ VQA์ ์ฌ์ฉํ ์ ์๋์ง ์ค๋ช
ํด ๋ณด๊ฒ ์ต๋๋ค. ๋จผ์ ๋ชจ๋ธ์ ๊ฐ์ ธ์ ๋ณด๊ฒ ์ต๋๋ค. ์ฌ๊ธฐ์ GPU๊ฐ ์ฌ์ฉ ๊ฐ๋ฅํ ๊ฒฝ์ฐ ๋ชจ๋ธ์ ๋ช
์์ ์ผ๋ก GPU๋ก ์ ์กํ ๊ฒ์
๋๋ค. ์ด์ ์๋ ํ๋ จํ ๋ ์ฐ์ง ์์ ์ด์ ๋ [`Trainer`]๊ฐ ์ด ๋ถ๋ถ์ ์๋์ผ๋ก ์ฒ๋ฆฌํ๊ธฐ ๋๋ฌธ์
๋๋ค:
```py
>>> from transformers import AutoProcessor, Blip2ForConditionalGeneration
>>> import torch
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
```
๋ชจ๋ธ์ ์ด๋ฏธ์ง์ ํ
์คํธ๋ฅผ ์
๋ ฅ์ผ๋ก ๋ฐ์ผ๋ฏ๋ก, VQA ๋ฐ์ดํฐ์ธํธ์ ์ฒซ ๋ฒ์งธ ์์ ์์์ ๋์ผํ ์ด๋ฏธ์ง/์ง๋ฌธ ์์ ์ฌ์ฉํด ๋ณด๊ฒ ์ต๋๋ค:
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
```
BLIP-2๋ฅผ ์๊ฐ์ ์ง์์๋ต ์์
์ ์ฌ์ฉํ๋ ค๋ฉด ํ
์คํธ ํ๋กฌํํธ๊ฐ `Question: {} Answer:` ํ์์ ๋ฐ๋ผ์ผ ํฉ๋๋ค.
```py
>>> prompt = f"Question: {question} Answer:"
```
์ด์ ๋ชจ๋ธ์ ํ๋ก์ธ์๋ก ์ด๋ฏธ์ง/ํ๋กฌํํธ๋ฅผ ์ ์ฒ๋ฆฌํ๊ณ , ์ฒ๋ฆฌ๋ ์
๋ ฅ์ ๋ชจ๋ธ์ ํตํด ์ ๋ฌํ๊ณ , ์ถ๋ ฅ์ ๋์ฝ๋ํด์ผ ํฉ๋๋ค:
```py
>>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
>>> print(generated_text)
"He is looking at the crowd"
```
๋ณด์๋ค์ํผ ๋ชจ๋ธ์ ๊ตฐ์ค์ ์ธ์ํ๊ณ , ์ผ๊ตด์ ๋ฐฉํฅ(์๋์ชฝ์ ๋ณด๊ณ ์์)์ ์ธ์ํ์ง๋ง, ๊ตฐ์ค์ด ์ค์ผ์ดํฐ ๋ค์ ์๋ค๋ ์ฌ์ค์ ๋์ณค์ต๋๋ค. ๊ทธ๋ฌ๋ ์ฌ๋์ด ์ง์ ๋ผ๋ฒจ๋งํ ๋ฐ์ดํฐ์
์ ์ป์ ์ ์๋ ๊ฒฝ์ฐ์, ์ด ์ ๊ทผ๋ฒ์ ๋น ๋ฅด๊ฒ ์ ์ฉํ ๊ฒฐ๊ณผ๋ฅผ ์์ฑํ ์ ์์ต๋๋ค.
# ๐ค Transformers
State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).
๐ค Transformers, gรผncel รถnceden eฤitilmiล (pretrained) modelleri indirmenizi ve eฤitmenizi kolaylaลtฤฑran API'ler ve araรงlar sunar. รnceden eฤitilmiล modeller kullanarak, hesaplama maliyetlerinizi ve karbon ayak izinizi azaltabilir, ve sฤฑfฤฑrdan bir modeli eฤitmek iรงin gereken zaman ve kaynaklardan tasarruf edebilirsiniz. Bu modeller farklฤฑ modalitelerde ortak gรถrevleri destekler. รrneฤin:
๐ **Doฤal Dil ฤฐลleme**: metin sฤฑnฤฑflandฤฑrma, adlandฤฑrฤฑlmฤฑล varlฤฑk tanฤฑma, soru cevaplama, dil modelleme, รถzetleme, รงeviri, รงoktan seรงmeli ve metin oluลturma.<br>
๐ผ๏ธ **Bilgisayarlฤฑ Gรถrรผ**: gรถrรผntรผ sฤฑnฤฑflandฤฑrma, nesne tespiti ve bรถlรผmleme (segmentation).<br>
๐ฃ๏ธ **Ses**: otomatik konuลma tanฤฑma ve ses sฤฑnฤฑflandฤฑrma.<br>
๐ **รoklu Model**: tablo soru cevaplama, optik karakter tanฤฑma, taranmฤฑล belgelerden bilgi รงฤฑkarma, video sฤฑnฤฑflandฤฑrma ve gรถrsel soru cevaplama.
๐ค Transformers, PyTorch, TensorFlow ve JAX arasฤฑnda รงerรงeve (framework) uyumluluฤu saฤlar. Bu, bir modelin yaลam dรถngรผsรผnรผn her aลamasฤฑnda farklฤฑ bir รงerรงeve kullanma esnekliฤi sunar; bir รงerรงevede รผรง satฤฑr kodla bir modeli eฤitebilir ve baลka bir รงerรงevede tahminleme iรงin kullanabilirsiniz. Modeller ayrฤฑca รผretim ortamlarฤฑnda kullanฤฑlmak รผzere ONNX ve TorchScript gibi bir formata aktarฤฑlabilir.
Bรผyรผyen topluluฤa [Hub](https://huggingface.co/models), [Forum](https://discuss.huggingface.co/) veya [Discord](https://discord.com/invite/JfAtkvEtRb) รผzerinden katฤฑlabilirsiniz!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>
## ฤฐรงindekiler
Dokรผmantasyon, beล bรถlรผme ayrฤฑlmฤฑลtฤฑr:
- **BAลLARKEN**, kรผtรผphanenin hฤฑzlฤฑ bir turunu ve รงalฤฑลmaya baลlamak iรงin kurulum talimatlarฤฑnฤฑ saฤlar.
- **รฤRETฤฐCฤฐLER**, baลlangฤฑรง yapmak iรงin harika bir yerdir. Bu bรถlรผm, kรผtรผphane kullanmaya baลlamak iรงin ihtiyacฤฑnฤฑz olan temel becerileri kazanmanฤฑza yardฤฑmcฤฑ olacaktฤฑr.
- **NASIL YAPILIR KILAVUZLARI**, รถnceden eฤitilmiล bir modele dil modellemesi iรงin ince ayar (fine-tuning) yapmak veya รถzel bir model yazmak, ve paylaลmak gibi belirli bir hedefe nasฤฑl ulaลฤฑlacaฤฤฑnฤฑ gรถsterir.
- **KAVRAMSAL REHBERLER**, modellerin, gรถrevlerin ve ๐ค Transformers tasarฤฑm felsefesinin temel kavramlarฤฑ ve fikirleri hakkฤฑnda daha fazla tartฤฑลma ve aรงฤฑklama sunar.
- **API** tรผm sฤฑnฤฑflarฤฑ (class) ve fonksiyonlarฤฑ (functions) aรงฤฑklar:
- **ANA SINIFLAR**, yapฤฑlandฤฑrma, model, tokenizer ve pipeline gibi en รถnemli sฤฑnฤฑflarฤฑ (classes) ayrฤฑntฤฑlandฤฑrฤฑr.
- **MODELLER**, kรผtรผphanede kullanฤฑlan her modelle ilgili sฤฑnฤฑflarฤฑ ve fonksiyonlarฤฑ detaylฤฑ olarak inceler.
- **DAHฤฐLฤฐ YARDIMCILAR**, kullanฤฑlan yardฤฑmcฤฑ sฤฑnฤฑflarฤฑ ve fonksiyonlarฤฑ detaylฤฑ olarak inceler.
## Desteklenen Modeller ve รerรงeveler
Aลaฤฤฑdaki tablo, her bir model iรงin kรผtรผphanede yer alan mevcut desteฤi temsil etmektedir. Her bir model iรงin bir Python tokenizer'ฤฑna ("slow" olarak adlandฤฑrฤฑlฤฑr) sahip olup olmadฤฑklarฤฑ, ๐ค Tokenizers kรผtรผphanesi tarafฤฑndan desteklenen hฤฑzlฤฑ bir tokenizer'a sahip olup olmadฤฑklarฤฑ, Jax (Flax aracฤฑlฤฑฤฤฑyla), PyTorch ve/veya TensorFlow'da destek olup olmadฤฑklarฤฑnฤฑ gรถstermektedir.
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
| Model | PyTorch support | TensorFlow support | Flax Support |
|:------------------------------------------------------------------------:|:---------------:|:------------------:|:------------:|
| [ALBERT](model_doc/albert) | โ
| โ
| โ
|
| [ALIGN](model_doc/align) | โ
| โ | โ |
| [AltCLIP](model_doc/altclip) | โ
| โ | โ |
| [Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer) | โ
| โ | โ |
| [Autoformer](model_doc/autoformer) | โ
| โ | โ |
| [Bark](model_doc/bark) | โ
| โ | โ |
| [BART](model_doc/bart) | โ
| โ
| โ
|
| [BARThez](model_doc/barthez) | โ
| โ
| โ
|
| [BARTpho](model_doc/bartpho) | โ
| โ
| โ
|
| [BEiT](model_doc/beit) | โ
| โ | โ
|
| [BERT](model_doc/bert) | โ
| โ
| โ
|
| [Bert Generation](model_doc/bert-generation) | โ
| โ | โ |
| [BertJapanese](model_doc/bert-japanese) | โ
| โ
| โ
|
| [BERTweet](model_doc/bertweet) | โ
| โ
| โ
|
| [BigBird](model_doc/big_bird) | โ
| โ | โ
|
| [BigBird-Pegasus](model_doc/bigbird_pegasus) | โ
| โ | โ |
| [BioGpt](model_doc/biogpt) | โ
| โ | โ |
| [BiT](model_doc/bit) | โ
| โ | โ |
| [Blenderbot](model_doc/blenderbot) | โ
| โ
| โ
|
| [BlenderbotSmall](model_doc/blenderbot-small) | โ
| โ
| โ
|
| [BLIP](model_doc/blip) | โ
| โ
| โ |
| [BLIP-2](model_doc/blip-2) | โ
| โ | โ |
| [BLOOM](model_doc/bloom) | โ
| โ | โ
|
| [BORT](model_doc/bort) | โ
| โ
| โ
|
| [BridgeTower](model_doc/bridgetower) | โ
| โ | โ |
| [BROS](model_doc/bros) | โ
| โ | โ |
| [ByT5](model_doc/byt5) | โ
| โ
| โ
|
| [CamemBERT](model_doc/camembert) | โ
| โ
| โ |
| [CANINE](model_doc/canine) | โ
| โ | โ |
| [Chinese-CLIP](model_doc/chinese_clip) | โ
| โ | โ |
| [CLAP](model_doc/clap) | โ
| โ | โ |
| [CLIP](model_doc/clip) | โ
| โ
| โ
|
| [CLIPSeg](model_doc/clipseg) | โ
| โ | โ |
| [CodeGen](model_doc/codegen) | โ
| โ | โ |
| [CodeLlama](model_doc/code_llama) | โ
| โ | โ |
| [Conditional DETR](model_doc/conditional_detr) | โ
| โ | โ |
| [ConvBERT](model_doc/convbert) | โ
| โ
| โ |
| [ConvNeXT](model_doc/convnext) | โ
| โ
| โ |
| [ConvNeXTV2](model_doc/convnextv2) | โ
| โ | โ |
| [CPM](model_doc/cpm) | โ
| โ
| โ
|
| [CPM-Ant](model_doc/cpmant) | โ
| โ | โ |
| [CTRL](model_doc/ctrl) | โ
| โ
| โ |
| [CvT](model_doc/cvt) | โ
| โ
| โ |
| [Data2VecAudio](model_doc/data2vec) | โ
| โ | โ |
| [Data2VecText](model_doc/data2vec) | โ
| โ | โ |
| [Data2VecVision](model_doc/data2vec) | โ
| โ
| โ |
| [DeBERTa](model_doc/deberta) | โ
| โ
| โ |
| [DeBERTa-v2](model_doc/deberta-v2) | โ
| โ
| โ |
| [Decision Transformer](model_doc/decision_transformer) | โ
| โ | โ |
| [Deformable DETR](model_doc/deformable_detr) | โ
| โ | โ |
| [DeiT](model_doc/deit) | โ
| โ
| โ |
| [DePlot](model_doc/deplot) | โ
| โ | โ |
| [DETA](model_doc/deta) | โ
| โ | โ |
| [DETR](model_doc/detr) | โ
| โ | โ |
| [DialoGPT](model_doc/dialogpt) | โ
| โ
| โ
|
| [DiNAT](model_doc/dinat) | โ
| โ | โ |
| [DINOv2](model_doc/dinov2) | โ
| โ | โ |
| [DistilBERT](model_doc/distilbert) | โ
| โ
| โ
|
| [DiT](model_doc/dit) | โ
| โ | โ
|
| [DonutSwin](model_doc/donut) | โ
| โ | โ |
| [DPR](model_doc/dpr) | โ
| โ
| โ |
| [DPT](model_doc/dpt) | โ
| โ | โ |
| [EfficientFormer](model_doc/efficientformer) | โ
| โ
| โ |
| [EfficientNet](model_doc/efficientnet) | โ
| โ | โ |
| [ELECTRA](model_doc/electra) | โ
| โ
| โ
|
| [EnCodec](model_doc/encodec) | โ
| โ | โ |
| [Encoder decoder](model_doc/encoder-decoder) | โ
| โ
| โ
|
| [ERNIE](model_doc/ernie) | โ
| โ | โ |
| [ErnieM](model_doc/ernie_m) | โ
| โ | โ |
| [ESM](model_doc/esm) | โ
| โ
| โ |
| [FairSeq Machine-Translation](model_doc/fsmt) | โ
| โ | โ |
| [Falcon](model_doc/falcon) | โ
| โ | โ |
| [FLAN-T5](model_doc/flan-t5) | โ
| โ
| โ
|
| [FLAN-UL2](model_doc/flan-ul2) | โ
| โ
| โ
|
| [FlauBERT](model_doc/flaubert) | โ
| โ
| โ |
| [FLAVA](model_doc/flava) | โ
| โ | โ |
| [FNet](model_doc/fnet) | โ
| โ | โ |
| [FocalNet](model_doc/focalnet) | โ
| โ | โ |
| [Funnel Transformer](model_doc/funnel) | โ
| โ
| โ |
| [Fuyu](model_doc/fuyu) | โ
| โ | โ |
| [GIT](model_doc/git) | โ
| โ | โ |
| [GLPN](model_doc/glpn) | โ
| โ | โ |
| [GPT Neo](model_doc/gpt_neo) | โ
| โ | โ
|
| [GPT NeoX](model_doc/gpt_neox) | โ
| โ | โ |
| [GPT NeoX Japanese](model_doc/gpt_neox_japanese) | โ
| โ | โ |
| [GPT-J](model_doc/gptj) | โ
| โ
| โ
|
| [GPT-Sw3](model_doc/gpt-sw3) | โ
| โ
| โ
|
| [GPTBigCode](model_doc/gpt_bigcode) | โ
| โ | โ |
| [GPTSAN-japanese](model_doc/gptsan-japanese) | โ
| โ | โ |
| [Graphormer](model_doc/graphormer) | โ
| โ | โ |
| [GroupViT](model_doc/groupvit) | โ
| โ
| โ |
| [HerBERT](model_doc/herbert) | โ
| โ
| โ
|
| [Hubert](model_doc/hubert) | โ
| โ
| โ |
| [I-BERT](model_doc/ibert) | โ
| โ | โ |
| [IDEFICS](model_doc/idefics) | โ
| โ | โ |
| [ImageGPT](model_doc/imagegpt) | โ
| โ | โ |
| [Informer](model_doc/informer) | โ
| โ | โ |
| [InstructBLIP](model_doc/instructblip) | โ
| โ | โ |
| [Jukebox](model_doc/jukebox) | โ
| โ | โ |
| [LayoutLM](model_doc/layoutlm) | โ
| โ
| โ |
| [LayoutLMv2](model_doc/layoutlmv2) | โ
| โ | โ |
| [LayoutLMv3](model_doc/layoutlmv3) | โ
| โ
| โ |
| [LayoutXLM](model_doc/layoutxlm) | โ
| โ | โ |
| [LED](model_doc/led) | โ
| โ
| โ |
| [LeViT](model_doc/levit) | โ
| โ | โ |
| [LiLT](model_doc/lilt) | โ
| โ | โ |
| [LLaMA](model_doc/llama) | โ
| โ | โ |
| [Llama2](model_doc/llama2) | โ
| โ | โ |
| [Longformer](model_doc/longformer) | โ
| โ
| โ |
| [LongT5](model_doc/longt5) | โ
| โ | โ
|
| [LUKE](model_doc/luke) | โ
| โ | โ |
| [LXMERT](model_doc/lxmert) | โ
| โ
| โ |
| [M-CTC-T](model_doc/mctct) | โ
| โ | โ |
| [M2M100](model_doc/m2m_100) | โ
| โ | โ |
| [Marian](model_doc/marian) | โ
| โ
| โ
|
| [MarkupLM](model_doc/markuplm) | โ
| โ | โ |
| [Mask2Former](model_doc/mask2former) | โ
| โ | โ |
| [MaskFormer](model_doc/maskformer) | โ
| โ | โ |
| [MatCha](model_doc/matcha) | โ
| โ | โ |
| [mBART](model_doc/mbart) | โ
| โ
| โ
|
| [mBART-50](model_doc/mbart50) | โ
| โ
| โ
|
| [MEGA](model_doc/mega) | โ
| โ | โ |
| [Megatron-BERT](model_doc/megatron-bert) | โ
| โ | โ |
| [Megatron-GPT2](model_doc/megatron_gpt2) | โ
| โ
| โ
|
| [MGP-STR](model_doc/mgp-str) | โ
| โ | โ |
| [Mistral](model_doc/mistral) | โ
| โ | โ |
| [mLUKE](model_doc/mluke) | โ
| โ | โ |
| [MMS](model_doc/mms) | โ
| โ
| โ
|
| [MobileBERT](model_doc/mobilebert) | โ
| โ
| โ |
| [MobileNetV1](model_doc/mobilenet_v1) | โ
| โ | โ |
| [MobileNetV2](model_doc/mobilenet_v2) | โ
| โ | โ |
| [MobileViT](model_doc/mobilevit) | โ
| โ
| โ |
| [MobileViTV2](model_doc/mobilevitv2) | โ
| โ | โ |
| [MPNet](model_doc/mpnet) | โ
| โ
| โ |
| [MPT](model_doc/mpt) | โ
| โ | โ |
| [MRA](model_doc/mra) | โ
| โ | โ |
| [MT5](model_doc/mt5) | โ
| โ
| โ
|
| [MusicGen](model_doc/musicgen) | โ
| โ | โ |
| [MVP](model_doc/mvp) | โ
| โ | โ |
| [NAT](model_doc/nat) | โ
| โ | โ |
| [Nezha](model_doc/nezha) | โ
| โ | โ |
| [NLLB](model_doc/nllb) | โ
| โ | โ |
| [NLLB-MOE](model_doc/nllb-moe) | โ
| โ | โ |
| [Nougat](model_doc/nougat) | โ
| โ
| โ
|
| [Nystrรถmformer](model_doc/nystromformer) | โ
| โ | โ |
| [OneFormer](model_doc/oneformer) | โ
| โ | โ |
| [OpenAI GPT](model_doc/openai-gpt) | โ
| โ
| โ |
| [OpenAI GPT-2](model_doc/gpt2) | โ
| โ
| โ
|
| [OpenLlama](model_doc/open-llama) | โ
| โ | โ |
| [OPT](model_doc/opt) | โ
| โ
| โ
|
| [OWL-ViT](model_doc/owlvit) | โ
| โ | โ |
| [OWLv2](model_doc/owlv2) | โ
| โ | โ |
| [Pegasus](model_doc/pegasus) | โ
| โ
| โ
|
| [PEGASUS-X](model_doc/pegasus_x) | โ
| โ | โ |
| [Perceiver](model_doc/perceiver) | โ
| โ | โ |
| [Persimmon](model_doc/persimmon) | โ
| โ | โ |
| [PhoBERT](model_doc/phobert) | โ
| โ
| โ
|
| [Pix2Struct](model_doc/pix2struct) | โ
| โ | โ |
| [PLBart](model_doc/plbart) | โ
| โ | โ |
| [PoolFormer](model_doc/poolformer) | โ
| โ | โ |
| [Pop2Piano](model_doc/pop2piano) | โ
| โ | โ |
| [ProphetNet](model_doc/prophetnet) | โ
| โ | โ |
| [PVT](model_doc/pvt) | โ
| โ | โ |
| [QDQBert](model_doc/qdqbert) | โ
| โ | โ |
| [RAG](model_doc/rag) | โ
| โ
| โ |
| [REALM](model_doc/realm) | โ
| โ | โ |
| [Reformer](model_doc/reformer) | โ
| โ | โ |
| [RegNet](model_doc/regnet) | โ
| โ
| โ
|
| [RemBERT](model_doc/rembert) | โ
| โ
| โ |
| [ResNet](model_doc/resnet) | โ
| โ
| โ
|
| [RetriBERT](model_doc/retribert) | โ
| โ | โ |
| [RoBERTa](model_doc/roberta) | โ
| โ
| โ
|
| [RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm) | โ
| โ
| โ
|
| [RoCBert](model_doc/roc_bert) | โ
| โ | โ |
| [RoFormer](model_doc/roformer) | โ
| โ
| โ
|
| [RWKV](model_doc/rwkv) | โ
| โ | โ |
| [SAM](model_doc/sam) | โ
| โ
| โ |
| [SeamlessM4T](model_doc/seamless_m4t) | โ
| โ | โ |
| [SegFormer](model_doc/segformer) | โ
| โ
| โ |
| [SEW](model_doc/sew) | โ
| โ | โ |
| [SEW-D](model_doc/sew-d) | โ
| โ | โ |
| [Speech Encoder decoder](model_doc/speech-encoder-decoder) | โ
| โ | โ
|
| [Speech2Text](model_doc/speech_to_text) | โ
| โ
| โ |
| [SpeechT5](model_doc/speecht5) | โ
| โ | โ |
| [Splinter](model_doc/splinter) | โ
| โ | โ |
| [SqueezeBERT](model_doc/squeezebert) | โ
| โ | โ |
| [SwiftFormer](model_doc/swiftformer) | โ
| โ | โ |
| [Swin Transformer](model_doc/swin) | โ
| โ
| โ |
| [Swin Transformer V2](model_doc/swinv2) | โ
| โ | โ |
| [Swin2SR](model_doc/swin2sr) | โ
| โ | โ |
| [SwitchTransformers](model_doc/switch_transformers) | โ
| โ | โ |
| [T5](model_doc/t5) | โ
| โ
| โ
|
| [T5v1.1](model_doc/t5v1.1) | โ
| โ
| โ
|
| [Table Transformer](model_doc/table-transformer) | โ
| โ | โ |
| [TAPAS](model_doc/tapas) | โ
| โ
| โ |
| [TAPEX](model_doc/tapex) | โ
| โ
| โ
|
| [Time Series Transformer](model_doc/time_series_transformer) | โ
| โ | โ |
| [TimeSformer](model_doc/timesformer) | โ
| โ | โ |
| [Trajectory Transformer](model_doc/trajectory_transformer) | โ
| โ | โ |
| [Transformer-XL](model_doc/transfo-xl) | โ
| โ
| โ |
| [TrOCR](model_doc/trocr) | โ
| โ | โ |
| [TVLT](model_doc/tvlt) | โ
| โ | โ |
| [UL2](model_doc/ul2) | โ
| โ
| โ
|
| [UMT5](model_doc/umt5) | โ
| โ | โ |
| [UniSpeech](model_doc/unispeech) | โ
| โ | โ |
| [UniSpeechSat](model_doc/unispeech-sat) | โ
| โ | โ |
| [UPerNet](model_doc/upernet) | โ
| โ | โ |
| [VAN](model_doc/van) | โ
| โ | โ |
| [VideoMAE](model_doc/videomae) | โ
| โ | โ |
| [ViLT](model_doc/vilt) | โ
| โ | โ |
| [Vision Encoder decoder](model_doc/vision-encoder-decoder) | โ
| โ
| โ
|
| [VisionTextDualEncoder](model_doc/vision-text-dual-encoder) | โ
| โ
| โ
|
| [VisualBERT](model_doc/visual_bert) | โ
| โ | โ |
| [ViT](model_doc/vit) | โ
| โ
| โ
|
| [ViT Hybrid](model_doc/vit_hybrid) | โ
| โ | โ |
| [VitDet](model_doc/vitdet) | โ
| โ | โ |
| [ViTMAE](model_doc/vit_mae) | โ
| โ
| โ |
| [ViTMatte](model_doc/vitmatte) | โ
| โ | โ |
| [ViTMSN](model_doc/vit_msn) | โ
| โ | โ |
| [VITS](model_doc/vits) | โ
| โ | โ |
| [ViViT](model_doc/vivit) | โ
| โ | โ |
| [Wav2Vec2](model_doc/wav2vec2) | โ
| โ
| โ
|
| [Wav2Vec2-Conformer](model_doc/wav2vec2-conformer) | โ
| โ | โ |
| [Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme) | โ
| โ
| โ
|
| [WavLM](model_doc/wavlm) | โ
| โ | โ |
| [Whisper](model_doc/whisper) | โ
| โ
| โ
|
| [X-CLIP](model_doc/xclip) | โ
| โ | โ |
| [X-MOD](model_doc/xmod) | โ
| โ | โ |
| [XGLM](model_doc/xglm) | โ
| โ
| โ
|
| [XLM](model_doc/xlm) | โ
| โ
| โ |
| [XLM-ProphetNet](model_doc/xlm-prophetnet) | โ
| โ | โ |
| [XLM-RoBERTa](model_doc/xlm-roberta) | โ
| โ
| โ
|
| [XLM-RoBERTa-XL](model_doc/xlm-roberta-xl) | โ
| โ | โ |
| [XLM-V](model_doc/xlm-v) | โ
| โ
| โ
|
| [XLNet](model_doc/xlnet) | โ
| โ
| โ |
| [XLS-R](model_doc/xls_r) | โ
| โ
| โ
|
| [XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2) | โ
| โ
| โ
|
| [YOLOS](model_doc/yolos) | โ
| โ | โ |
| [YOSO](model_doc/yoso) | โ
| โ | โ |
<!-- End table-->
hf_public_repos/transformers/docs/source/tr/_toctree.yml
- sections:
- local: index
title: ๐ค Transformers
title: Get started
hf_public_repos/transformers/docs/source/fr/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ๐ค Transformers
State-of-the-art machine learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).
🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:
📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: question answering over tables or images, optical character recognition, information extraction from scanned documents, and video classification.
🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This makes it possible to use a different framework at each stage of a model's life, for example training a model in three lines of code with one framework, and loading it for inference with another. Models can also be exported to a format like ONNX or TorchScript for deployment in production environments.
Join the growing community on the [Hub](https://huggingface.co/models), the [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>
## Contents
La documentation est organisรฉe en 5 parties:
- **DEMARRER** propose une visite rapide de la bibliothรจque et des instructions d'installation pour รชtre opรฉrationnel.
- **TUTORIELS** excellent point de dรฉpart pour les dรฉbutants. Cette section vous aidera ร acquรฉrir les compรฉtences de base dont vous avez besoin pour commencer ร utiliser la bibliothรจque.
- **GUIDES D'UTILISATION** pour diffรฉrentes tรขches comme par exemple le finetuning d'un modรจle prรฉ-entraรฎnรฉ pour la classification de texte ou comment crรฉer et partager votre propre modรจle.
- **GUIDES CONCEPTUELS** pour plus de discussions et d'explications sur les concepts et les idรฉes sous-jacentes aux modรจles, aux tรขches et ร la philosophie de conception de ๐ค Transformers.
- **API** dรฉcrit toutes les classes et fonctions :
- **CLASSES PRINCIPALES** dรฉtaille les classes les plus importantes comme la configuration, le modรจle, le tokenizer et le pipeline..
- **MODELES** dรฉtaille les classes et les fonctions propres ร chaque modรจle de la bibliothรจque.
- **UTILITAIRES INTERNES** dรฉtaille les classes et fonctions utilitaires utilisรฉes en interne.
### Modรจles supportรฉs
<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->
1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](model_doc/barthez)** (from รcole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, รric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot.
1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker.
1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou.
1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krรคhenbรผhl.
1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oฤuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[FastSpeech2Conformer](model_doc/fastspeech2_conformer)** (from ESPnet) released with the paper [Recent Developments On Espnet Toolkit Boosted By Conformer](https://arxiv.org/abs/2010.13956) by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shinji Watanabe, Kun Wei, Wangyou Zhang, and Yuekai Zhang.
1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab.
1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey รhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze.
1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](model_doc/nezha)** (from Huawei Noah's Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira.
1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan.
1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya.
1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr.
1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[Swin2SR](model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
1. **[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace).
1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
1. **[Trajectory Transformer](model_doc/trajectory_transformer)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick.
1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
1. **[XGLM](model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
### Supported frameworks
The table below shows the current support in the library for each of these models: whether they have a Python ("slow") tokenizer, a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they are supported in Jax (via Flax), PyTorch, and/or TensorFlow.
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
| Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
| BridgeTower | ❌ | ❌ | ✅ | ❌ | ❌ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETA | ❌ | ❌ | ✅ | ❌ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| EfficientFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FastSpeech2Conformer | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Graphormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OneFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ❌ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| SpeechT5 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| Whisper | ✅ | ❌ | ✅ | ✅ | ❌ |
| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |
<!-- End table-->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quick tour
[[open-in-colab]]
Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.
Before you begin, make sure you have all the necessary libraries installed:
```bash
!pip install transformers datasets
```
You'll also need to install your preferred deep learning framework:
<frameworkcontent>
<pt>
```bash
pip install torch
```
</pt>
<tf>
```bash
pip install tensorflow
```
</tf>
</frameworkcontent>
## Pipeline
<Youtube id="tiZFewofSLM"/>
The [`pipeline`] is the easiest way to use a pretrained model for inference. You can use the [`pipeline`] out-of-the-box for many tasks across different modalities. Check out the table below for the supported tasks:
| **Task**                      | **Description**                                                                                               | **Modality**         | **Pipeline identifier**                       |
|------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------|-----------------------------------------------|
| Text classification           | assign a label to a given sequence of text                                                                    | Text                 | pipeline(task="sentiment-analysis")           |
| Text generation               | generate text given a prompt                                                                                  | Text                 | pipeline(task="text-generation")              |
| Named entity recognition      | assign a label to each token in a sequence (people, organization, location, etc.)                             | Text                 | pipeline(task="ner")                          |
| Question answering            | extract the answer from the text given a context and a question                                               | Text                 | pipeline(task="question-answering")           |
| Fill-mask                     | predict the correct masked token in a sequence                                                                | Text                 | pipeline(task="fill-mask")                    |
| Summarization                 | generate a summary of a sequence of text or a document                                                        | Text                 | pipeline(task="summarization")                |
| Translation                   | translate text from one language into another                                                                 | Text                 | pipeline(task="translation")                  |
| Image classification          | assign a label to an image                                                                                    | Image                | pipeline(task="image-classification")         |
| Image segmentation            | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation)  | Image                | pipeline(task="image-segmentation")           |
| Object detection              | predict the bounding boxes and classes of objects in an image                                                 | Image                | pipeline(task="object-detection")             |
| Audio classification          | assign a label to an audio file                                                                               | Audio                | pipeline(task="audio-classification")         |
| Automatic speech recognition  | transcribe speech from an audio file into text                                                                | Audio                | pipeline(task="automatic-speech-recognition") |
| Visual question answering     | given an image and a question, correctly answer the question about the image                                  | Multimodal           | pipeline(task="vqa")                          |
Start by creating an instance of [`pipeline`] and specifying the task you want to use it for. You can use the [`pipeline`] for any of the tasks mentioned in the table above. For a complete list of supported tasks, check out the [pipeline API](./main_classes/pipelines) documentation. In this guide, we'll use the [`pipeline`] for sentiment analysis as an example:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis")
```
The [`pipeline`] downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on the text of your choice:
```py
>>> classifier("We are very happy to show you the ๐ค Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```
If you have more than one input, pass your inputs as a list to the [`pipeline`] to return a list of dictionaries:
```py
>>> results = classifier(["We are very happy to show you the ๐ค Transformers library.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```
The [`pipeline`] can also iterate over an entire dataset for any task you like. Let's take automatic speech recognition as an example:
```py
>>> import torch
>>> from transformers import pipeline
>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```
Load an audio dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details) you'd like to iterate over. For this example, we load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT
```
You need to make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on:
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```
The audio files are automatically loaded and resampled when the `"audio"` column is accessed.
Extract the raw waveform arrays from the first four samples and pass them as a list to the pipeline:
```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT']
```
For larger datasets where the inputs are big (as in speech or vision), you'll want to pass a generator instead of a list so that you don't load all the inputs into memory at once. See the [pipeline API](./main_classes/pipelines) documentation for more information.
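The generator pattern above can be sketched as follows — a minimal, illustrative example (the `audio_inputs` helper is ours, not a library API):

```python
# Illustrative sketch: stream pipeline inputs with a generator so the
# whole dataset is never materialized in memory at once.
def audio_inputs(dataset):
    for sample in dataset:
        yield sample["audio"]["array"]  # yield one waveform at a time

# Hypothetical usage with the `speech_recognizer` pipeline from above:
# for prediction in speech_recognizer(audio_inputs(dataset)):
#     print(prediction["text"])
```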
### Use another model and tokenizer in the pipeline
The [`pipeline`] can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the filters on the Hub to find an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis that you can use for French text:
```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```
<frameworkcontent>
<pt>
Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` in the next section):
```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` in the next section):
```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>
Specify the model and tokenizer in the [`pipeline`], and now you can apply the `classifier` on French text:
```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐ค Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
If you can't find a model for your use-case, you'll need to finetune a pretrained model on your data. Take a look at our [finetuning tutorial](./training) to learn how. Finally, after you've finetuned your pretrained model, please consider [sharing](./model_sharing) it with the community on the Hub to democratize machine learning for everyone! 🤗
## AutoClass
<Youtube id="AhChOFRegn4"/>
Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`] you used above. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and its associated preprocessing class.
Let's return to the example from the previous section and see how you can use the `AutoClass` to replicate the results of the [`pipeline`].
### AutoTokenizer
A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the [tokenizer summary](./tokenizer_summary)). The most important thing to remember is that you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules the model was pretrained with.
Load a tokenizer with [`AutoTokenizer`]:
```py
>>> from transformers import AutoTokenizer
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Pass your text to the tokenizer:
```py
>>> encoding = tokenizer("We are very happy to show you the ๐ค Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary containing:
* [input_ids](./glossary#input-ids): the numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to (in particular, padding tokens are masked out).
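As an aside, the relationship between padding and `attention_mask` can be illustrated with a small standalone sketch (the `pad_and_mask` helper is purely illustrative, not a transformers API):

```python
# Hedged illustration: how an attention mask flags padding tokens.
# Sequences padded to the same length get 0s in the mask where pad ids sit.
def pad_and_mask(sequences, pad_id=0):
    longest = max(len(s) for s in sequences)
    input_ids, attention_mask = [], []
    for s in sequences:
        pad = longest - len(s)
        input_ids.append(s + [pad_id] * pad)          # pad to uniform length
        attention_mask.append([1] * len(s) + [0] * pad)  # 0 = ignore this token
    return input_ids, attention_mask

ids, mask = pad_and_mask([[101, 7592, 102], [101, 102]])
# mask -> [[1, 1, 1], [1, 1, 0]]
```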
A tokenizer can also accept a list of texts, and pad and truncate them to return a batch with uniform length:
<frameworkcontent>
<pt>
```py
>>> pt_batch = tokenizer(
... ["We are very happy to show you the ๐ค Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="pt",
... )
```
</pt>
<tf>
```py
>>> tf_batch = tokenizer(
... ["We are very happy to show you the ๐ค Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="tf",
... )
```
</tf>
</frameworkcontent>
<Tip>
Check out the [preprocess](./preprocessing) tutorial for more details about tokenization, and how to use an [`AutoImageProcessor`], an [`AutoFeatureExtractor`] and an [`AutoProcessor`] to preprocess image, audio, and multimodal inputs.
</Tip>
### AutoModel
<frameworkcontent>
<pt>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. For text (or sequence) classification, you should load [`AutoModelForSequenceClassification`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) to check whether a task is supported by an [`AutoModel`] class.
</Tip>
Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:
```py
>>> pt_outputs = pt_model(**pt_batch)
```
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```py
>>> from torch import nn
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
[0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. For text (or sequence) classification, you should load [`TFAutoModelForSequenceClassification`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) to check whether a task is supported by an [`AutoModel`] class.
</Tip>
Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:
```py
>>> tf_outputs = tf_model(tf_batch)
```
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```py
>>> import tensorflow as tf
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>
<Tip>
All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax) because the final activation function is often fused with the loss computation. Model outputs are special dataclasses, so their attributes are autocompleted in an IDE. They behave like a tuple or a dictionary (you can index them with an integer, a slice or a string), in which case attributes that are None are ignored.
</Tip>
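The tuple/dictionary behavior described in the tip can be mimicked with a minimal standalone class — an illustrative sketch only (`MiniOutput` is not the real transformers `ModelOutput` implementation):

```python
# Illustrative sketch of a ModelOutput-like container: None fields are
# skipped when indexing by integer or slice; string keys look up by name.
class MiniOutput:
    def __init__(self, logits=None, hidden_states=None):
        self.logits = logits
        self.hidden_states = hidden_states

    def _present(self):
        # only the fields that are not None take part in indexing
        return [(k, v) for k, v in vars(self).items() if v is not None]

    def __getitem__(self, key):
        items = self._present()
        if isinstance(key, str):
            return dict(items)[key]          # dictionary-style access
        return [v for _, v in items][key]    # tuple-style (int or slice) access

out = MiniOutput(logits=[0.1, 0.9])  # hidden_states is None, so it is ignored
print(out["logits"], out[0])         # both reach the same field
```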
### Save a model
<frameworkcontent>
<pt>
Once your model is finetuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:
```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```
When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:
```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is finetuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:
```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```
When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:
```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>
One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or a TensorFlow model. The `from_pt` or `from_tf` parameter converts the model from one framework to the other:
<frameworkcontent>
<pt>
```py
>>> from transformers import AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>
```py
>>> from transformers import TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
## Custom model builds
You can modify the model's configuration to change how a model is built. The configuration specifies a model's attributes, such as the number of layers or attention heads. You start from scratch when you initialize a model from a custom configuration: the model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results.
Start by importing [`AutoConfig`], and then load the pretrained model you want to modify. Within [`AutoConfig.from_pretrained`], you can specify the attribute you want to change, such as the number of attention heads:
```py
>>> from transformers import AutoConfig
>>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12)
```
<frameworkcontent>
<pt>
Create a custom model from your configuration with [`AutoModel.from_config`]:
```py
>>> from transformers import AutoModel
>>> my_model = AutoModel.from_config(my_config)
```
</pt>
<tf>
Create a custom model from your configuration with [`TFAutoModel.from_config`]:
```py
>>> from transformers import TFAutoModel
>>> my_model = TFAutoModel.from_config(my_config)
```
</tf>
</frameworkcontent>
Take a look at the [Create a custom architecture](./create_a_model) guide for more information about building custom configurations.
## Trainer - a PyTorch optimized training loop
All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module), so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [`Trainer`] class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.
Depending on your task, you'll typically pass the following parameters to [`Trainer`]:
1. A [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille de l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage :
```py
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(
... output_dir="path/to/save/folder/",
... learning_rate=2e-5,
... per_device_train_batch_size=8,
... per_device_eval_batch_size=8,
... num_train_epochs=2,
... )
```
3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques :
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
4. Chargez un jeu de donnรฉes :
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT
```
5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token :
```py
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
```
Puis appliquez-la ร l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]:
```py
>>> dataset = dataset.map(tokenize_dataset, batched=True)
```
6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร partir de votre jeu de donnรฉes :
```py
>>> from transformers import DataCollatorWithPadding
>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```
Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] :
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=dataset["train"],
... eval_dataset=dataset["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... ) # doctest: +SKIP
```
Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement :
```py
>>> trainer.train() # doctest: +SKIP
```
<Tip>
Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`].
</Tip>
Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies.
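À titre d'illustration uniquement, voici une esquisse minimale d'un [`Trainer`] personnalisé qui redéfinit `compute_loss` pour appliquer une perte pondérée par classe. Le nom de la classe et les poids choisis ici sont hypothétiques et à adapter à votre cas :

```python
import torch
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    """Esquisse hypothétique : Trainer qui pondère la perte par classe."""

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Poids de classes arbitraires (exemple pour 2 labels), à adapter
        loss_fct = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0]))
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```

Cette sous-classe s'utilise ensuite exactement comme un [`Trainer`] classique, avec les mêmes arguments.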
L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร la place.
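Par exemple, voici une esquisse purement illustrative d'un callback qui affiche un message à la fin de chaque époque, sans rien modifier dans la boucle d'apprentissage elle-même (le nom de la classe est hypothétique) :

```python
from transformers import TrainerCallback


class EpochPrinterCallback(TrainerCallback):
    """Callback illustratif : affiche l'époque terminée sans modifier l'entraînement."""

    def on_epoch_end(self, args, state, control, **kwargs):
        # `state` expose notamment l'époque courante et le nombre de pas effectués
        print(f"Époque {int(state.epoch)} terminée après {state.global_step} pas.")
        return control
```

Il suffit ensuite de le passer au [`Trainer`] via l'argument `callbacks=[EpochPrinterCallback()]`.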
## Entraรฎnement avec TensorFlow
Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐ค Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras.
1. Vous commencez avec un modรจle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) :
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
2. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques :
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
3. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token :
```py
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"]) # doctest: +SKIP
```
4. Appliquez le tokenizer ร l'ensemble du jeu de donnรฉes avec [`~datasets.Dataset.map`] et passez ensuite le jeu de donnรฉes et le tokenizer ร [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez รฉgalement modifier la taille de l'รฉchantillon et mรฉlanger le jeu de donnรฉes ici si vous le souhaitez :
```py
>>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP
>>> tf_dataset = model.prepare_tf_dataset(
... dataset, batch_size=16, shuffle=True, tokenizer=tokenizer
... ) # doctest: +SKIP
```
5. Une fois que vous รชtes prรชt, appelez les fonctions `compile` et `fit` pour commencer l'entraรฎnement :
```py
>>> from tensorflow.keras.optimizers import Adam
>>> model.compile(optimizer=Adam(3e-5))
>>> model.fit(tf_dataset)  # doctest: +SKIP
```
## Et aprรจs ?
Maintenant que vous avez terminรฉ la visite rapide de ๐ค Transformers, consultez nos guides et apprenez ร faire des choses plus spรฉcifiques comme crรฉer un modรจle personnalisรฉ, finetuner un modรจle pour une tรขche, et comment entraรฎner un modรจle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de ๐ค Transformers, jetez un ลil ร nos guides conceptuels !
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Installation
Installez ๐ค Transformers pour n'importe quelle librairie d'apprentissage profond avec laquelle vous avez l'habitude de travailler, configurez votre cache et configurez ๐ค Transformers pour un usage hors ligne (facultatif).
๐ค Transformers est testรฉ avec Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ et Flax.
Consultez les instructions d'installation ci-dessous pour la librairie d'apprentissage profond que vous utilisez :
* Instructions d'installation pour [PyTorch](https://pytorch.org/get-started/locally/).
* Instructions d'installation pour [TensorFlow 2.0](https://www.tensorflow.org/install/pip).
* Instructions d'installation pour [Flax](https://flax.readthedocs.io/en/latest/).
## Installation avec pip
Vous devriez installer ๐ค Transformers dans un [environnement virtuel](https://docs.python.org/3/library/venv.html).
Si vous n'รชtes pas ร l'aise avec les environnements virtuels, consultez ce [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
Utiliser un environnement virtuel permet de facilement gรฉrer diffรฉrents projets et d'รฉviter des erreurs de compatibilitรฉ entre les diffรฉrentes dรฉpendances.
Commencez par crรฉer un environnement virtuel dans l'espace de travail de votre projet :
```bash
python -m venv .env
```
Activez l'environnement virtuel. Sur Linux ou MacOs :
```bash
source .env/bin/activate
```
Activez l'environnement virtuel sur Windows :
```bash
.env/Scripts/activate
```
Maintenant, ๐ค Transformers peut รชtre installรฉ avec la commande suivante :
```bash
pip install transformers
```
Pour une utilisation avec CPU seulement, ๐ค Transformers et la librairie d'apprentissage profond de votre choix peuvent รชtre installรฉs en une seule ligne.
Par exemple, installez ๐ค Transformers et PyTorch avec la commande suivante :
```bash
pip install 'transformers[torch]'
```
๐ค Transformers et TensorFlow 2.0 :
```bash
pip install 'transformers[tf-cpu]'
```
<Tip warning={true}>
Pour les architectures mac M1 / ARM
Vous devez installer les outils suivants avant d'installer TensorFlow 2.0 :
```
brew install cmake
brew install pkg-config
```
</Tip>
๐ค Transformers et Flax :
```bash
pip install 'transformers[flax]'
```
Vรฉrifiez que ๐ค Transformers a bien รฉtรฉ installรฉ avec la commande suivante. La commande va tรฉlรฉcharger un modรจle prรฉ-entraรฎnรฉ :
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
Le label et score sont ensuite affichรฉs :
```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
## Installation depuis le code source
Installez ๐ค Transformers depuis le code source avec la commande suivante :
```bash
pip install git+https://github.com/huggingface/transformers
```
Cette commande installe la version depuis la branche `main` au lieu de la derniรจre version stable. La version de la branche `main` est utile pour avoir les derniers dรฉveloppements. Par exemple, si un bug a รฉtรฉ rรฉsolu depuis la derniรจre version stable mais n'a pas encore รฉtรฉ publiรฉ officiellement. Cependant, cela veut aussi dire que la version de la branche `main` n'est pas toujours stable. Nous nous efforรงons de maintenir la version de la branche `main` opรฉrationnelle, et la plupart des problรจmes sont gรฉnรฉralement rรฉsolus en l'espace de quelques heures ou d'un jour. Si vous rencontrez un problรจme, n'hรฉsitez pas ร  crรฉer une [Issue](https://github.com/huggingface/transformers/issues) pour que l'on puisse trouver une solution au plus vite !
Vรฉrifiez que ๐ค Transformers a bien รฉtรฉ installรฉ avec la commande suivante :
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```
## Installation modifiable
Vous aurez besoin d'une installation modifiable si vous souhaitez :
* Utiliser la version de la branche `main` du code source.
* Contribuer ร  ๐ค Transformers et tester vos modifications du code source.
Clonez le projet et installez ๐ค Transformers avec les commandes suivantes :
```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```
Ces commandes crรฉent des liens entre le dossier oรน le projet a รฉtรฉ clonรฉ et les chemins de vos librairies Python. Python regardera maintenant dans le dossier que vous avez clonรฉ en plus des dossiers oรน sont installรฉes vos autres librairies. Par exemple, si vos librairies Python sont installรฉes dans `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cherchera aussi dans le dossier oรน vous avez clonรฉ : `~/transformers/`.
<Tip warning={true}>
Vous devez garder le dossier `transformers` si vous voulez continuer d'utiliser la librairie.
</Tip>
Maintenant, vous pouvez facilement mettre ร jour votre clone avec la derniรจre version de ๐ค Transformers en utilisant la commande suivante :
```bash
cd ~/transformers/
git pull
```
Votre environnement Python utilisera la version de la branche `main` lors de la prochaine exรฉcution.
## Installation avec conda
Installation via le canal `conda-forge` de conda :
```bash
conda install conda-forge::transformers
```
## Configuration du cache
Les modรจles prรฉ-entraรฎnรฉs sont tรฉlรฉchargรฉs et mis en cache localement dans le dossier suivant : `~/.cache/huggingface/hub`. C'est le dossier par dรฉfaut donnรฉ par la variable d'environnement `TRANSFORMERS_CACHE`. Sur Windows, le dossier par dรฉfaut est `C:\Users\nom_utilisateur\.cache\huggingface\hub`. Vous pouvez modifier les variables d'environnement indiquรฉes ci-dessous - par ordre de prioritรฉ - pour spรฉcifier un dossier de cache diffรฉrent :
1. Variable d'environnement (par dรฉfaut) : `HUGGINGFACE_HUB_CACHE` ou `TRANSFORMERS_CACHE`.
2. Variable d'environnement : `HF_HOME`.
3. Variable d'environnement : `XDG_CACHE_HOME` + `/huggingface`.
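ร titre d'illustration uniquement, voici comment cet ordre de prioritรฉ pourrait รชtre rรฉsolu (esquisse simplifiรฉe ; ce n'est pas le code rรฉel de la librairie, et le nom de la fonction est hypothรฉtique) :

```python
import os


def resoudre_dossier_cache():
    """Esquisse simplifiรฉe de l'ordre de prioritรฉ dรฉcrit ci-dessus (code illustratif)."""
    # 1. TRANSFORMERS_CACHE / HUGGINGFACE_HUB_CACHE ont la prioritรฉ la plus haute
    for var in ("TRANSFORMERS_CACHE", "HUGGINGFACE_HUB_CACHE"):
        if os.environ.get(var):
            return os.environ[var]
    # 2. Puis HF_HOME
    if os.environ.get("HF_HOME"):
        return os.path.join(os.environ["HF_HOME"], "hub")
    # 3. Puis XDG_CACHE_HOME + /huggingface
    if os.environ.get("XDG_CACHE_HOME"):
        return os.path.join(os.environ["XDG_CACHE_HOME"], "huggingface", "hub")
    # Valeur par dรฉfaut
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")
```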
<Tip>
๐ค Transformers utilisera les variables d'environnement `PYTORCH_TRANSFORMERS_CACHE` ou `PYTORCH_PRETRAINED_BERT_CACHE` si vous utilisez une version prรฉcรฉdente de cette librairie et avez dรฉfini ces variables d'environnement, sauf si vous spรฉcifiez la variable d'environnement `TRANSFORMERS_CACHE`.
</Tip>
## Mode hors ligne
๐ค Transformers peut fonctionner dans un environnement cloisonnรฉ ou hors ligne en n'utilisant que des fichiers locaux. Dรฉfinissez la variable d'environnement `TRANSFORMERS_OFFLINE=1` pour activer ce mode.
<Tip>
Ajoutez [๐ค Datasets](https://huggingface.co/docs/datasets/) ร votre processus d'entraรฎnement hors ligne en dรฉfinissant la variable d'environnement `HF_DATASETS_OFFLINE=1`.
</Tip>
```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
Le script devrait maintenant s'exรฉcuter sans rester en attente ou attendre une expiration, car il n'essaiera pas de tรฉlรฉcharger des modรจles sur le Hub.
Vous pouvez aussi รฉviter de tรฉlรฉcharger un modรจle ร  chaque appel de la fonction [`~PreTrainedModel.from_pretrained`] en utilisant le paramรจtre `local_files_only`. Seuls les fichiers locaux sont chargรฉs lorsque ce paramรจtre est activรฉ (c.-ร -d. `local_files_only=True`) :
```py
from transformers import T5Model
model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True)
```
### Rรฉcupรฉrer des modรจles et des tokenizers pour une utilisation hors ligne
Une autre option pour utiliser ๐ค Transformers hors ligne est de tรฉlรฉcharger les fichiers ร l'avance, puis d'utiliser les chemins locaux lorsque vous en avez besoin en mode hors ligne. Il existe trois faรงons de faire cela :
* Tรฉlรฉchargez un fichier via l'interface utilisateur sur le [Model Hub](https://huggingface.co/models) en cliquant sur l'icรดne โ.

* Utilisez les fonctions [`PreTrainedModel.from_pretrained`] et [`PreTrainedModel.save_pretrained`] :
1. Tรฉlรฉchargez vos fichiers ร l'avance avec [`PreTrainedModel.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
```
2. Sauvegardez les fichiers dans un dossier de votre choix avec [`PreTrainedModel.save_pretrained`]:
```py
>>> tokenizer.save_pretrained("./your/path/bigscience_t0")
>>> model.save_pretrained("./your/path/bigscience_t0")
```
3. Maintenant, lorsque vous รชtes hors ligne, rechargez vos fichiers avec [`PreTrainedModel.from_pretrained`] depuis le dossier oรน vous les avez sauvegardรฉs :
```py
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
```
* Tรฉlรฉchargez des fichiers de maniรจre automatique avec la librairie [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) :
1. Installez la librairie `huggingface_hub` dans votre environnement virtuel :
```bash
python -m pip install huggingface_hub
```
2. Utilisez la fonction [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) pour tรฉlรฉcharger un fichier vers un chemin de votre choix. Par exemple, la commande suivante tรฉlรฉcharge le fichier `config.json` du modรจle [T0](https://huggingface.co/bigscience/T0_3B) vers le chemin de votre choix :
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
```
Une fois que votre fichier est tรฉlรฉchargรฉ et cachรฉ localement, spรฉcifiez son chemin local pour le charger et l'utiliser :
```py
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```
<Tip>
Consultez la section [How to download files from the Hub (Comment tรฉlรฉcharger des fichiers depuis le Hub)](https://huggingface.co/docs/hub/how-to-downstream) pour plus de dรฉtails sur le tรฉlรฉchargement de fichiers stockรฉs sur le Hub.
</Tip>
<!--โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Traduction en cours.
# docstyle-ignore
INSTALL_CONTENT = """
# Installation de Transformers
! pip install transformers datasets
# Pour installer ร partir du code source au lieu de la derniรจre version, commentez la commande ci-dessus et dรฉcommentez la suivante.
# ! pip install git+https://github.com/huggingface/transformers.git
"""
notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
"{processor_class}": "FakeProcessorClass",
"{model_class}": "FakeModelClass",
"{object_class}": "FakeObjectClass",
}
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Chargement d'instances prรฉ-entraรฎnรฉes avec une AutoClass
Avec autant d'architectures Transformer diffรฉrentes, il peut รชtre difficile d'en crรฉer une pour votre ensemble de poids (aussi appelรฉs "weights" ou "checkpoint" en anglais). Dans l'idรฉe de crรฉer une librairie facile, simple et flexible ร utiliser, ๐ค Transformers fournit une `AutoClass` qui infรจre et charge automatiquement l'architecture correcte ร partir d'un ensemble de poids donnรฉ. La fonction `from_pretrained()` vous permet de charger rapidement un modรจle prรฉ-entraรฎnรฉ pour n'importe quelle architecture afin que vous n'ayez pas ร consacrer du temps et des ressources ร l'entraรฎnement d'un modรจle ร partir de zรฉro. Produire un tel code indรฉpendant d'un ensemble de poids signifie que si votre code fonctionne pour un ensemble de poids, il fonctionnera avec un autre ensemble - tant qu'il a รฉtรฉ entraรฎnรฉ pour une tรขche similaire - mรชme si l'architecture est diffรฉrente.
<Tip>
Rappel, l'architecture fait rรฉfรฉrence au squelette du modรจle et l'ensemble de poids contient les poids pour une architecture donnรฉe. Par exemple, [BERT](https://huggingface.co/bert-base-uncased) est une architecture, tandis que `bert-base-uncased` est un ensemble de poids. Le terme modรจle est gรฉnรฉral et peut signifier soit architecture soit ensemble de poids.
</Tip>
Dans ce tutoriel, vous apprendrez ร :
* Charger un tokenizer prรฉ-entraรฎnรฉ.
* Charger un processeur d'image prรฉ-entraรฎnรฉ.
* Charger un extracteur de caractรฉristiques prรฉ-entraรฎnรฉ.
* Charger un processeur prรฉ-entraรฎnรฉ.
* Charger un modรจle prรฉ-entraรฎnรฉ.
## AutoTokenizer
Quasiment toutes les tรขches de traitement du langage (NLP) commencent avec un tokenizer. Un tokenizer convertit votre texte initial dans un format qui peut รชtre traitรฉ par le modรจle.
Chargez un tokenizer avec [`AutoTokenizer.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```
Puis, transformez votre texte initial comme montrรฉ ci-dessous:
```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
## AutoImageProcessor
Pour les tรขches de vision, un processeur d'image traite l'image pour la formater correctement.
Chargez un processeur d'image avec [`AutoImageProcessor.from_pretrained`] :
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```
## AutoFeatureExtractor
Pour les tรขches audio, un extracteur de caractรฉristiques (aussi appelรฉs "features" en anglais) traite le signal audio pour le formater correctement.
Chargez un extracteur de caractรฉristiques avec [`AutoFeatureExtractor.from_pretrained`]:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```
## AutoProcessor
Les tรขches multimodales nรฉcessitent un processeur qui combine deux types d'outils de prรฉtraitement. Par exemple, le modรจle [LayoutLMV2](model_doc/layoutlmv2) nรฉcessite un processeur d'image pour traiter les images et un tokenizer pour traiter le texte ; un processeur combine les deux.
Chargez un processeur avec [`AutoProcessor.from_pretrained`]:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```
## AutoModel
<frameworkcontent>
<pt>
Enfin, les classes `AutoModelFor` vous permettent de charger un modรจle prรฉ-entraรฎnรฉ pour une tรขche donnรฉe (voir [ici](model_doc/auto) pour une liste complรจte des tรขches disponibles). Par exemple, chargez un modรจle pour la classification de sรฉquence avec [`AutoModelForSequenceClassification.from_pretrained`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
Rรฉutilisez facilement le mรชme ensemble de poids pour charger une architecture pour une tรขche diffรฉrente :
```py
>>> from transformers import AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```
<Tip warning={true}>
Pour les modรจles PyTorch, la fonction `from_pretrained()` utilise `torch.load()` qui utilise `pickle` en interne et est connu pour รชtre non sรฉcurisรฉ. En gรฉnรฉral, ne chargez jamais un modรจle qui pourrait provenir d'une source non fiable, ou qui pourrait avoir รฉtรฉ altรฉrรฉ. Ce risque de sรฉcuritรฉ est partiellement attรฉnuรฉ pour les modรจles hรฉbergรฉs publiquement sur le Hugging Face Hub, qui sont [scannรฉs pour les logiciels malveillants](https://huggingface.co/docs/hub/security-malware) ร chaque modification. Consultez la [documentation du Hub](https://huggingface.co/docs/hub/security) pour connaรฎtre les meilleures pratiques comme la [vรฉrification des modifications signรฉes](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) avec GPG.
Les points de contrรดle TensorFlow et Flax ne sont pas concernรฉs, et peuvent รชtre chargรฉs dans des architectures PyTorch en utilisant les arguments `from_tf` et `from_flax` de la fonction `from_pretrained` pour contourner ce problรจme.
</Tip>
En gรฉnรฉral, nous recommandons d'utiliser les classes `AutoTokenizer` et `AutoModelFor` pour charger des instances prรฉ-entraรฎnรฉes de tokenizers et modรจles respectivement. Cela vous permettra de charger la bonne architecture ร chaque fois. Dans le prochain [tutoriel](preprocessing), vous apprenez ร utiliser un tokenizer, processeur d'image, extracteur de caractรฉristiques et processeur pour prรฉ-traiter un jeu de donnรฉes pour le fine-tuning.
</pt>
<tf>
Enfin, les classes `TFAutoModelFor` vous permettent de charger un modรจle prรฉ-entraรฎnรฉ pour une tรขche donnรฉe (voir [ici](model_doc/auto) pour une liste complรจte des tรขches disponibles). Par exemple, chargez un modรจle pour la classification de sรฉquence avec [`TFAutoModelForSequenceClassification.from_pretrained`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
Rรฉutilisez facilement le mรชme ensemble de poids pour charger une architecture pour une tรขche diffรฉrente :
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```
En gรฉnรฉral, nous recommandons d'utiliser les classes `AutoTokenizer` et `TFAutoModelFor` pour charger des instances prรฉ-entraรฎnรฉes de tokenizers et modรจles respectivement. Cela vous permettra de charger la bonne architecture ร chaque fois. Dans le prochain [tutoriel](preprocessing), vous apprenez ร utiliser un tokenizer, processeur d'image, extracteur de caractรฉristiques et processeur pour prรฉ-traiter un jeu de donnรฉes pour le fine-tuning.
</tf>
</frameworkcontent>
- sections:
- local: index
title: ๐ค Transformers
- local: quicktour
title: Visite rapide
- local: installation
title: Installation
title: Dรฉmarrer
- sections:
- local: in_translation
title: Pipelines pour l'infรฉrence
- local: autoclass_tutorial
title: Chargement d'instances prรฉ-entraรฎnรฉes avec une AutoClass
- local: in_translation
title: Prรฉparation des donnรฉes
- local: in_translation
title: Fine-tune un modรจle prรฉ-entraรฎnรฉ
- local: in_translation
title: Entraรฎnement avec un script
- local: in_translation
title: Entraรฎnement distribuรฉ avec ๐ค Accelerate
- local: in_translation
title: Chargement et entraรฎnement des adaptateurs avec ๐ค PEFT
- local: in_translation
title: Partager un modรจle
- local: in_translation
title: Agents
- local: in_translation
title: Gรฉnรฉration avec LLMs
title: Tutoriels
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Testing
๐ค Transformersใขใใซใใฉใฎใใใซใในใใใใๆฐใใใในใใๆธใใฆๆขๅญใฎใในใใๆนๅใงใใใใ่ฆใฆใฟใพใใใใ
ใใฎใชใใธใใชใซใฏ2ใคใฎใในใในใคใผใใใใใพใ๏ผ
1. `tests` -- ไธ่ฌ็ใชAPI็จใฎใในใ
2. `examples` -- APIใฎไธ้จใงใฏใชใใใพใใพใชใขใใชใฑใผใทใงใณ็จใฎใในใ
## How transformers are tested
1. PRใๆๅบใใใใจใ9ใคใฎCircleCiใธใงใใงใในใใใใพใใPRใธใฎๆฐใใใณใใใใใจใซๅใในใใใใพใใใใใใฎใธใงใใฏใ[ใใฎ่จญๅฎใใกใคใซ](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml)ใงๅฎ็พฉใใใฆใใใๅฟ
่ฆใชๅ ดๅใฏๅใ็ฐๅขใ่ชๅใฎใใทใณใงๅ็พใงใใพใใ
ใใใใฎCIใธใงใใฏ `@slow` ใในใใๅฎ่กใใพใใใ
2. [GitHub Actions](https://github.com/huggingface/transformers/actions)ใซใใฃใฆๅฎ่กใใใ3ใคใฎใธใงใใใใใพใ๏ผ
- [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): torch hubใฎ็ตฑๅใๅไฝใใใใฉใใใ็ขบ่ชใใพใใ
- [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): `main` ใซใณใใใใ่กใใใๅ ดๅใซใGPUใง้ซ้ใในใใๅฎ่กใใพใใใใฎใธใงใใฏใ`main` ใงใฎใณใใใใไปฅไธใฎใใฉใซใใผใฎใณใผใใๆดๆฐใใๅ ดๅใซใฎใฟๅฎ่กใใใพใ๏ผ`src`ใ`tests`ใ`.github`๏ผ่ฟฝๅ ใใใใขใใซใซใผใใใใผใใใใฏใชใฉใฎๅฎ่กใ้ฒใใใ๏ผใ
   - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): GPUใง `tests` ใจ `examples` ใฎ้ๅธธใฎใในใใจ้
ใใในใใๅฎ่กใใพใใ
```bash
RUN_SLOW=1 pytest tests/
RUN_SLOW=1 pytest examples/
```
็ตๆใฏ[here](https://github.com/huggingface/transformers/actions)ใง่ฆณๅฏใงใใพใใ
## Running tests
### Choosing which tests to run
ใใฎใใญใฅใกใณใใฏใใในใใๅฎ่กใใๆนๆณใฎๅคใใฎ่ฉณ็ดฐใซใคใใฆ่ชฌๆใใฆใใพใใใในใฆใ่ชญใใ ๅพใงใใใใใซ่ฉณ็ดฐใๅฟ
่ฆใชๅ ดๅใฏใ[ใใกใ](https://docs.pytest.org/en/latest/usage.html)ใง่ฆใคใใใใจใใงใใพใใ
ไปฅไธใฏใใในใใๅฎ่กใใใใใฎใใใคใใฎๆใไพฟๅฉใชๆนๆณใงใใ
ใในใฆๅฎ่กใใพใ:
```console
pytest
```
ใพใใฏ๏ผ
```bash
make test
```
ๅพ่
ใฏๆฌกใฎใใใซๅฎ็พฉใใใใใจใซๆณจๆใใฆใใ ใใใ
```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/
```
ไปฅไธใฏใpytestใซๆธกใ่จญๅฎๆ
ๅ ฑใงใใ
- ใในใใใญใปในใCPUใณใขใฎๆฐใจๅใใ ใๅฎ่กใใใใใซๆ็คบใใพใใใใ ใใRAMใๅๅใงใชใๅ ดๅใฏๆณจๆใๅฟ
่ฆใงใใ
- ๅใใใกใคใซใใใฎใในใฆใฎใในใใฏใๅใใในใใใญใปในใงๅฎ่กใใใใใใซใใพใใ
- ๅบๅใฎใญใฃใใใฃใ่กใใพใใใ
- ๅ้ทใขใผใใงๅฎ่กใใพใใ
### Getting the list of all tests
ใในใในใคใผใใฎใในใฆใฎใในใ๏ผ
```bash
pytest --collect-only -q
```
ๆๅฎใใใใในใ ใใกใคใซใฎใในใฆใฎใในใ:
```bash
pytest tests/test_optimization.py --collect-only -q
```
### Run a specific test module
ๅๅฅใฎใในใ ใขใธใฅใผใซใๅฎ่กใใใซใฏ:
```bash
pytest tests/utils/test_logging.py
```
### Run specific tests
ใปใจใใฉใฎใในใใงunittestใไฝฟ็จใใใฆใใใใใ็นๅฎใฎใตใใในใใๅฎ่กใใใซใฏใใใใใฎใในใใๅซใunittestใฏใฉในใฎๅๅใ็ฅใฃใฆใใๅฟ
่ฆใใใใพใใไพใใฐใใใใฏๆฌกใฎใใใซใชใใใใใใพใใ๏ผ
```bash
pytest tests/test_optimization.py::OptimizationTest::test_adam_w
```
ใในใใฎๅฎ่กๆนๆณ:
ใในใใใกใคใซ: `tests/test_optimization.py`
ใฏใฉในๅ: `OptimizationTest`
ใในใ้ขๆฐใฎๅๅ: `test_adam_w`
ใใกใคใซใซ่คๆฐใฎใฏใฉในใๅซใพใใฆใใๅ ดๅใฏใ็นๅฎใฎใฏใฉในใฎใในใใฎใฟใๅฎ่กใใใใจใ้ธๆใงใใพใใไพใใฐ๏ผ
```bash
pytest tests/test_optimization.py::OptimizationTest
```
ใในใใฏใฉในๅ
ใฎใในใฆใฎใในใใๅฎ่กใใพใใ
ๅ่ฟฐใฎ้ใใ`OptimizationTest` ใฏใฉในใซๅซใพใใใในใใฎไธ่ฆงใฏใๆฌกใฎใณใใณใใง็ขบ่ชใงใใพใ๏ผ
```bash
pytest tests/test_optimization.py::OptimizationTest --collect-only -q
```
ใญใผใฏใผใๅผใไฝฟ็จใใฆใในใใๅฎ่กใงใใพใใ
ๅๅใซ `adam` ใๅซใพใใใในใใฎใฟใๅฎ่กใใใซใฏ๏ผ
```bash
pytest -k adam tests/test_optimization.py
```
`and`ใใใณ`or`ใฏใใในใฆใฎใญใผใฏใผใใไธ่ดใใใใใใใใใ็คบใใใใซไฝฟ็จใงใใพใใ`not`ใฏๅฆๅฎใใใใใซไฝฟ็จใงใใพใใ
`adam`ใจใใๅๅใๅซใใในใใ้คใใฆใในใฆใฎใในใใๅฎ่กใใใซใฏ๏ผ
```bash
pytest -k "not adam" tests/test_optimization.py
```
ๅๅใซ `ada` ใๅซใฟใใใค `adam` ใๅซใพใชใใในใใฎใฟใๅฎ่กใใใซใฏ๏ผ
```bash
pytest -k "ada and not adam" tests/test_optimization.py
```
ใใจใใฐใ`test_adafactor`ใจ`test_adam_w`ใฎไธกๆนใๅฎ่กใใใซใฏใไปฅไธใฎใณใใณใใไฝฟ็จใงใใพใ:
```bash
pytest -k "test_adafactor or test_adam_w" tests/test_optimization.py
```
ๆณจๆ: ใใใงใฏใ`or` ใไฝฟ็จใใฆใใพใใใญใผใฏใผใใฎใใใใไธใคใไธ่ดใใใฐใไธกๆนใๅซใใใใใงใใ
ไธกๆนใฎใใฟใผใณใๅซใใในใใฎใฟใๅซใใใๅ ดๅใฏใ`and` ใไฝฟ็จใใฆใใ ใใใ
```bash
pytest -k "test and ada" tests/test_optimization.py
```
### Run `accelerate` tests
ๆใ
ใใขใใซใซๅฏพใใฆ `accelerate` ใในใใๅฎ่กใใๅฟ
่ฆใใใใพใใใใจใใฐใ`OPT` ๅฎ่กใซๅฏพใใฆใใใใฎใในใใๅฎ่กใใใๅ ดๅใใณใใณใใซ `-m accelerate_tests` ใ่ฟฝๅ ใใใ ใใงๆธใฟใพใ๏ผ
```bash
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
```
### Run documentation tests
To test whether the documentation examples are correct, you should make sure that the `doctests` are passing.

As an example, let's use [`WhisperModel.forward`'s docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035):
```python
r"""
Returns:
Example:
```python
>>> import torch
>>> from transformers import WhisperModel, WhisperFeatureExtractor
>>> from datasets import load_dataset
>>> model = WhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")
>>> input_features = inputs.input_features
>>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
>>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
>>> list(last_hidden_state.shape)
[1, 2, 512]
```"""
```
Then you can run all the docstring examples in the desired file automatically with the following line:
```bash
pytest --doctest-modules <path_to_file_or_dir>
```
If the file has a markdown extension, you should add the `--doctest-glob="*.md"` argument.
### Run only modified tests
You can use [pytest-picked](https://github.com/anapaulagomes/pytest-picked) to run the tests related to the unstaged files or the current branch (according to Git). This is a great way of quickly testing that your changes didn't break anything, since it won't run the tests related to files you didn't touch.
```bash
pip install pytest-picked
```
```bash
pytest --picked
```
All tests will be run from files and folders which are modified, but not yet committed.
### Automatically rerun failed tests on source modification
[pytest-xdist](https://github.com/pytest-dev/pytest-xdist) provides a very useful feature of detecting all failed tests and then waiting for you to modify files and continuously re-running those failing tests until they pass while you fix them, so you don't need to re-start pytest after you make the fix. This is repeated until all tests pass, after which a full run is performed again.
```bash
pip install pytest-xdist
```
To enter the mode: `pytest -f` or `pytest --looponfail`

File changes are detected by looking at the `looponfailroots` root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option in `setup.cfg`:
```ini
[tool:pytest]
looponfailroots = transformers tests
```
or `pytest.ini`/`tox.ini` files:
```ini
[pytest]
looponfailroots = transformers tests
```
This would lead to only looking for file changes in the respective directories, specified relative to the ini-file's directory.

[pytest-watch](https://github.com/joeyespo/pytest-watch) is an alternative implementation of this functionality.
### Skip a test module
If you want to run all test modules, except a few, you can exclude them by giving an explicit list of tests to run. For example, to run all except the `test_modeling_*.py` tests:
```bash
pytest $(ls -1 tests/*py | grep -v test_modeling)
```
### Clearing state
On CI builds, and when isolation is important (against speed), the cache should be cleared:
```bash
pytest --cache-clear tests
```
### Running tests in parallel
As mentioned earlier, `make test` runs tests in parallel via the `pytest-xdist` plugin (`-n X` argument, e.g. `-n 2` to run 2 parallel jobs).

`pytest-xdist`'s `--dist=` option allows one to control how the tests are grouped. `--dist=loadfile` puts the tests located in one file onto the same process.

Since the order of executed tests is different and unpredictable, if running the test suite with `pytest-xdist` produces failures (meaning we have some undetected coupled tests), use [pytest-replay](https://github.com/ESSS/pytest-replay) to replay the tests in the same order, which should then help with reducing that failing sequence to a minimum.
### Test order and repetition
Repeating the tests several times in sequence, randomly, or in sets is useful for detecting potential inter-dependency and state-related bugs (tear down). And straightforward multiple repetition is just good for detecting some problems that get uncovered by the randomness of DL.
#### Repeat tests
- [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder):
```bash
pip install pytest-flakefinder
```
And then run every test multiple times (50 by default):
```bash
pytest --flake-finder --flake-runs=5 tests/test_failing_test.py
```
<Tip>
This plugin doesn't work with the `-n` flag from `pytest-xdist`.
</Tip>
<Tip>
There is another plugin `pytest-repeat`, but it doesn't work with `unittest`.
</Tip>
#### Run tests in a random order
```bash
pip install pytest-random-order
```
Important: the presence of `pytest-random-order` will automatically randomize tests; no configuration changes or command line options are required.

As explained earlier, this allows detection of coupled tests, where one test's state affects the state of another. When `pytest-random-order` is installed it will print the random seed it used for that session, e.g:
```bash
pytest tests
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
```
So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.:
```bash
pytest --random-order-seed=573663
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
```
It will only reproduce the exact order if you use the exact same list of tests (or no list at all). Once you start manually narrowing down the list, you can no longer rely on the seed, but have to list the tests manually in the exact order they failed and tell pytest not to randomize them instead, using `--random-order-bucket=none`, e.g.:
```bash
pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py
```
To disable the shuffling for all tests:
```bash
pytest --random-order-bucket=none
```
By default `--random-order-bucket=module` is implied, which will shuffle the files on the module level. It can also shuffle on `class`, `package`, `global` and `none` levels. For the complete details please see its [documentation](https://github.com/jbasko/pytest-random-order).

Another randomization alternative is [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly). This module has a very similar functionality/interface, but it doesn't have the bucket modes available in `pytest-random-order`. It has the same problem of imposing itself once installed.
### Look and feel variations
#### pytest-sugar
[pytest-sugar](https://github.com/Frozenball/pytest-sugar) is a plugin that improves the look-n-feel, adds a progress bar, and shows tests that fail and the assert instantly. It gets activated automatically upon installation.
```bash
pip install pytest-sugar
```
To run tests without it, run:
```bash
pytest -p no:sugar
```
or uninstall it.
#### Report each sub-test name and its progress
For a single or a group of tests via `pytest` (after `pip install pytest-pspec`):
```bash
pytest --pspec tests/test_optimization.py
```
#### Instantly shows failed tests
[pytest-instafail](https://github.com/pytest-dev/pytest-instafail) shows failures and errors instantly instead of waiting until the end of the test session.
```bash
pip install pytest-instafail
```
```bash
pytest --instafail
```
### To GPU or not to GPU
On a GPU-enabled setup, to test in CPU-only mode add `CUDA_VISIBLE_DEVICES=""`:
```bash
CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py
```
or if you have multiple GPUs, you can specify which one is to be used by `pytest`. For example, to use only the second GPU if you have GPUs `0` and `1`, you can run:
```bash
CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
```
This is handy when you want to run different tasks on different GPUs.

Some tests must be run on CPU-only, others on either CPU or GPU or TPU, yet others on multiple GPUs. The following skip decorators are used to set the requirements of tests CPU/GPU/TPU-wise:
- `require_torch` - this test will run only under torch
- `require_torch_gpu` - as `require_torch` plus requires at least 1 GPU
- `require_torch_multi_gpu` - as `require_torch` plus requires at least 2 GPUs
- `require_torch_non_multi_gpu` - as `require_torch` plus requires 0 or 1 GPUs
- `require_torch_up_to_2_gpus` - as `require_torch` plus requires 0 or 1 or 2 GPUs
- `require_torch_tpu` - as `require_torch` plus requires at least 1 TPU
Let's depict the GPU requirements in the following table:
| n gpus | decorator |
|--------+--------------------------------|
| `>= 0` | `@require_torch` |
| `>= 1` | `@require_torch_gpu` |
| `>= 2` | `@require_torch_multi_gpu` |
| `< 2` | `@require_torch_non_multi_gpu` |
| `< 3` | `@require_torch_up_to_2_gpus` |
For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed:
```python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
```
If a test requires `tensorflow`, use the `require_tf` decorator. For example:
```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```
These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is how to set it up:
```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```
Some decorators like `@parametrized` rewrite test names, therefore `@require_*` skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage:
```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```
This order problem doesn't exist with `@pytest.mark.parametrize`: you can put it first or last and it will still work. But it only works with non-unittests.

Inside tests:

- How many GPUs are available:
```python
from transformers.testing_utils import get_gpu_count
n_gpu = get_gpu_count() # works with torch and tf
```
### Testing with a specific PyTorch backend or device
To run the test suite on a specific torch device, add `TRANSFORMERS_TEST_DEVICE="$device"` where `$device` is the target backend. For example, to test on CPU only:
```bash
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
```
This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.
Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`:
```bash
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
```
### Distributed training
`pytest` is not able to deal with distributed training directly. If this is attempted, the sub-processes don't do the right thing and end up thinking they are `pytest` and start running the test suite in loops. It works, however, if one spawns a normal process that then spawns off multiple workers and manages the IO pipes.

Here are some tests that use it:
- [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py)
- [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py)
To jump right into the execution point, search for the `execute_subprocess_async` call in those tests.

You will need at least 2 GPUs to see these tests in action:
```bash
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
```
### Output capture
During test execution any output sent to `stdout` and `stderr` is captured. If a test or a setup method fails, its corresponding captured output will usually be shown along with the failure traceback.

To disable output capturing and to get the `stdout` and `stderr` normally, use `-s` or `--capture=no`:
```bash
pytest -s tests/utils/test_logging.py
```
To send test results to JUnit format output:
```bash
py.test tests --junitxml=result.xml
```
### Color control
To have no color (e.g., yellow text on a white background is not readable):
```bash
pytest --color=no tests/utils/test_logging.py
```
### Sending test report to online pastebin service
Creating a URL for each test failure:
```bash
pytest --pastebin=failed tests/utils/test_logging.py
```
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example `-x` if you only want to send one particular failure.

Creating a URL for the whole test session log:
```bash
pytest --pastebin=all tests/utils/test_logging.py
```
## Writing tests
🤗 transformers tests are based on `unittest`, but run by `pytest`, so most of the time features from both systems can be used.

You can read [here](https://docs.pytest.org/en/stable/unittest.html) which features are supported, but the important thing to remember is that most `pytest` fixtures don't work. Neither does parametrization, but we use the module `parameterized` that works in a similar way.
### Parametrization
Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within the test, but then there is no way of running that test for just one set of arguments.
```python
# test_this1.py
import math
import unittest

from parameterized import parameterized


class TestMathUnitTest(unittest.TestCase):
    @parameterized.expand(
        [
            ("negative", -1.5, -2.0),
            ("integer", 1, 1.0),
            ("large fraction", 1.6, 1),
        ]
    )
    def test_floor(self, name, input, expected):
        self.assertEqual(math.floor(input), expected)
```
By default this test will be run 3 times, each time with the last 3 arguments of `test_floor` being assigned the corresponding arguments in the parameter list.

And you could run just the `negative` and `integer` sets of params with:
```bash
pytest -k "negative and integer" tests/test_mytest.py
```
Or all but the `negative` sub-tests, with:
```bash
pytest -k "not negative" tests/test_mytest.py
```
Besides using the `-k` filter just mentioned, you can find out the exact name of each sub-test and run any or all of them using their exact names.
```bash
pytest test_this1.py --collect-only -q
```
and it will list:
```bash
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
```
So now you can run just the 2 specific sub-tests:
```bash
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
```
The module [parameterized](https://pypi.org/project/parameterized/), which is already in the developer dependencies of `transformers`, works for both `unittests` and `pytest` tests.

If, however, the test is not a `unittest`, you may use `pytest.mark.parametrize` (or you may see it being used in some existing tests, mostly under `examples`).

Here is the same example, this time using `pytest`'s `parametrize` marker:
```python
# test_this2.py
import math

import pytest


@pytest.mark.parametrize(
    "name, input, expected",
    [
        ("negative", -1.5, -2.0),
        ("integer", 1, 1.0),
        ("large fraction", 1.6, 1),
    ],
)
def test_floor(name, input, expected):
    assert math.floor(input) == expected
```
Same as with `parameterized`, with `pytest.mark.parametrize` you can have a fine control over which sub-tests are run, if the `-k` filter doesn't do the job. Except, this parametrization function creates a slightly different set of names for the sub-tests. Here is what they look like:
```bash
pytest test_this2.py --collect-only -q
```
and it will list:
```bash
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
```
So now you can run just the specific tests:
```bash
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
```
as in the previous example.
### Files and directories
Often it is needed to know where the current test file is located relative to other files. And it's not trivial, since the test could be invoked from more than one directory or could reside in sub-directories with different depths. A helper class `transformers.test_utils.TestCasePlus` solves this problem by sorting out all the basic paths and providing easy accessors to them:
- `pathlib` objects (all fully resolved):
  - `test_file_path` - the current test file path, i.e. `__file__`
  - `test_file_dir` - the directory containing the current test file
  - `tests_dir` - the directory of the `tests` test suite
  - `examples_dir` - the directory of the `examples` test suite
  - `repo_root_dir` - the directory of the repository
  - `src_dir` - the directory where the `transformers` sub-dir resides
- stringified paths - same as above, but these return paths as strings, rather than `pathlib` objects:
  - `test_file_path_str`
  - `test_file_dir_str`
  - `tests_dir_str`
  - `examples_dir_str`
  - `repo_root_dir_str`
  - `src_dir_str`
To start using those, all you need is to make sure that the test resides in a subclass of `transformers.test_utils.TestCasePlus`. For example:
```python
from transformers.testing_utils import TestCasePlus


class PathExampleTest(TestCasePlus):
    def test_something_involving_local_locations(self):
        data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro"
```
If you don't need to manipulate paths via `pathlib`, or you just need a path as a string, you can always invoke `str()` on the `pathlib` object or use the accessors ending with `_str`. For example:
```python
from transformers.testing_utils import TestCasePlus


class PathExampleTest(TestCasePlus):
    def test_something_involving_stringified_locations(self):
        examples_dir = self.examples_dir_str
```
### Temporary files and directories
Using unique temporary files and directories is essential for parallel test running, so that the tests won't overwrite each other's data. Also we want the temporary files and directories removed at the end of each test that created them. Therefore, using packages like `tempfile`, which address these needs, is essential.

However, when debugging tests, you need to be able to see what goes into the temporary file or directory, and you want to know its exact path and not have it randomized on every test re-run.

A helper class `transformers.test_utils.TestCasePlus` is best used for such purposes. It's a sub-class of `unittest.TestCase`, so we can easily inherit from it in the test modules.

Here is an example of its usage:
```python
from transformers.testing_utils import TestCasePlus


class ExamplesTests(TestCasePlus):
    def test_whatever(self):
        tmp_dir = self.get_auto_remove_tmp_dir()
```
This code creates a unique temporary directory, and sets `tmp_dir` to its location.

- Create a unique temporary dir:
```python
def test_whatever(self):
    tmp_dir = self.get_auto_remove_tmp_dir()
```
`tmp_dir` will contain the path to the created temporary dir. It will be automatically removed at the end of the test.

- Create a temporary dir of my choice, ensure it's empty before the test starts, and don't empty it after the test:
```python
def test_whatever(self):
    tmp_dir = self.get_auto_remove_tmp_dir("./xxx")
```
This is useful for debugging when you want to monitor a specific directory and want to make sure the previous tests didn't leave any data in there.

- You can override the default behavior by directly overriding the `before` and `after` args, leading to one of the following behaviors:

  - `before=True`: the temporary dir will always be cleared at the beginning of the test.
  - `before=False`: if the temporary dir already existed, any existing files will remain there.
  - `after=True`: the temporary dir will always be deleted at the end of the test.
  - `after=False`: the temporary dir will always be left intact at the end of the test.
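Outside of `TestCasePlus`, the auto-removal part of this behavior can be sketched with the standard library alone (the helper name below is made up for illustration; this is not the real implementation):

```python
import os
import shutil
import tempfile
import unittest


class TmpDirExample(unittest.TestCase):
    def get_auto_remove_tmp_dir(self):
        # create a unique dir and register its removal for test teardown
        tmp_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, tmp_dir, ignore_errors=True)
        return tmp_dir

    def test_whatever(self):
        tmp_dir = self.get_auto_remove_tmp_dir()
        self.assertTrue(os.path.isdir(tmp_dir))
```

Here `addCleanup` plays the role of the automatic removal at the end of the test.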
<Tip>
In order to run the equivalent of `rm -r` safely, only subdirs of the project repository checkout are allowed if an explicit `tmp_dir` is used, so that by mistake no `/tmp` or similar important part of the filesystem will get nuked. i.e. please always pass paths that start with `./`.
</Tip>
<Tip>
Each test can register multiple temporary directories, and they all will get auto-removed, unless requested otherwise.
</Tip>
### Temporary sys.path override
If you need to temporarily override `sys.path` to import from another test, for example, you can use the `ExtendSysPath` context manager. Example:
```python
import os
from transformers.testing_utils import ExtendSysPath

bindir = os.path.abspath(os.path.dirname(__file__))
with ExtendSysPath(f"{bindir}/.."):
    from test_trainer import TrainerIntegrationCommon  # noqa
```
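As a rough sketch of what such a context manager does (assuming a simple prepend-then-remove strategy, not necessarily `ExtendSysPath`'s exact implementation):

```python
import sys
from contextlib import contextmanager


@contextmanager
def extend_sys_path(path):
    # prepend the path, and drop it again on exit
    sys.path.insert(0, path)
    try:
        yield
    finally:
        sys.path.remove(path)


with extend_sys_path("/tmp/example_dir"):
    assert sys.path[0] == "/tmp/example_dir"
assert "/tmp/example_dir" not in sys.path
```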
### Skipping tests
This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to commit it to the main repository we need to make sure it's skipped during `make test`.

Methods:

- A **skip** means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example, a database).

- A **xfail** means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with `pytest.mark.xfail`), it's an xpass and will be reported in the test summary.

One of the important differences between the two is that `skip` doesn't run the test, and `xfail` does. So if the code that's buggy causes some bad state that will affect other tests, do not use `xfail`.
#### Implementation
- Here is how to skip a whole test unconditionally:
```python no-style
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
```
or via pytest:
```python no-style
@pytest.mark.skip(reason="this bug needs to be fixed")
```
or the `xfail` way:
```python no-style
@pytest.mark.xfail
def test_feature_x():
```
- Here is how to skip a test based on some internal check inside the test:
```python
def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
```
or the whole module:
```python
import pytest

if not pytest.config.getoption("--custom-flag"):
    pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True)
```
or the `xfail` way:
```python
def test_feature_x():
    pytest.xfail("expected to fail until bug XYZ is fixed")
```
- Here is how to skip all tests in a module if some import is missing:
```python
docutils = pytest.importorskip("docutils", minversion="0.3")
```
- Skip a test based on a condition:
```python no-style
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
```
or:
```python no-style
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
```
or skip the whole module:
```python no-style
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
    def test_feature_x(self):
```
More details, examples and ways are [here](https://docs.pytest.org/en/latest/skipping.html).
### Slow tests
The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can't afford waiting for an hour for the test suite to complete on CI. Therefore, with some exceptions for essential tests, slow tests should be marked as in the example below:
```python no-style
from transformers.testing_utils import slow
@slow
def test_integration_foo():
```
Once a test is marked as `@slow`, to run such tests set the `RUN_SLOW=1` environment variable, e.g.:
```bash
RUN_SLOW=1 pytest tests
```
Some decorators like `@parameterized` rewrite test names, therefore `@slow` and the rest of the skip decorators `@require_*` have to be listed last for them to work correctly. Here is an example of the correct usage:
```python no-style
@parameterized.expand(...)
@slow
def test_integration_foo():
```
As explained at the beginning of this document, slow tests get to run on a scheduled basis, rather than in PR CI checks. So it's possible that some problems will be missed during a PR submission and get merged. Such problems will get caught during the next scheduled CI job. But it also means that it's important to run the slow tests on your machine before submitting the PR.

Here is a rough decision making mechanism for choosing which tests should be marked as slow:
If the test is focused on one of the library's internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it's focused on another aspect of the library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine this approach, we should have exceptions:

- All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow. If you're adding a new model, you should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is discussed in the following paragraphs.
- All tests that need to do a training not specifically optimized to be fast should be set to slow.
- We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to `@slow`. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked as `@slow`.
- If a test completes under 1 second on CI (including downloads, if any) then it should be a normal test regardless.
Collectively, all the non-slow tests need to cover the various internals entirely, while remaining fast. For example, significant coverage can be achieved by testing with specially created tiny models with random weights. Such models have the very minimal number of layers (e.g., 2), a small vocab size, etc. Then the `@slow` tests can use large slow models to do qualitative testing. To see the use of these simply look for *tiny* models with:
```bash
grep tiny tests examples
```
Here is an [example of a script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) that created the tiny model tiny-wmt19-en-de. You can easily adjust it to your specific model's architecture.
It's easy to measure the run-time incorrectly if, for example, there is an overhead of downloading a huge model, but when you test it locally the downloaded files get cached and thus the download time isn't measured. Hence check the execution speed report in the CI logs instead (the output of `pytest --durations=0 tests`).

That report is also useful to find slow outliers that aren't marked as such, or which need to be re-written to be fast. If you notice that the test suite starts getting slow on CI, the top listing of this report will show the slowest tests.
### Testing the stdout/stderr output
In order to test functions that write to `stdout` and/or `stderr`, the test can access those streams using the `pytest` [capsys system](https://docs.pytest.org/en/latest/capture.html). Here is how this is accomplished:
```python
import sys


def print_to_stdout(s):
    print(s)


def print_to_stderr(s):
    sys.stderr.write(s)


def test_result_and_stdout(capsys):
    msg = "Hello"
    print_to_stdout(msg)
    print_to_stderr(msg)
    out, err = capsys.readouterr()  # consume the captured output streams
    # optional: if you want to replay the consumed streams:
    sys.stdout.write(out)
    sys.stderr.write(err)
    # test:
    assert msg in out
    assert msg in err
```
ใใใฆใใกใใใใปใจใใฉใฎๅ ดๅใ`stderr`ใฏไพๅคใฎไธ้จใจใใฆๆไพใใใใใใใใฎใใใชๅ ดๅใซใฏ try/excel ใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
ใฑใผใน๏ผ
```python
def raise_exception(msg):
    raise ValueError(msg)


def test_something_exception():
    msg = "Not a good value"
    error = ""
    try:
        raise_exception(msg)
    except Exception as e:
        error = str(e)
    assert msg in error, f"{msg} is in the exception:\n{error}"
```
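With `unittest`-style tests, the same check can also be written with `assertRaises`, which avoids the manual try/except bookkeeping (a sketch, not taken from the test suite):

```python
import unittest


def raise_exception(msg):
    raise ValueError(msg)


class ExceptionTest(unittest.TestCase):
    def test_something_exception(self):
        # assertRaises captures the exception object for further checks
        with self.assertRaises(ValueError) as ctx:
            raise_exception("Not a good value")
        self.assertIn("Not a good value", str(ctx.exception))
```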
stdout ใใญใฃใใใฃใใใใ 1 ใคใฎใขใใญใผใใฏใ`contextlib.redirect_stdout`ใไฝฟ็จใใใใจใงใใ
```python
import sys
from io import StringIO
from contextlib import redirect_stdout


def print_to_stdout(s):
    print(s)


def test_result_and_stdout():
    msg = "Hello"
    buffer = StringIO()
    with redirect_stdout(buffer):
        print_to_stdout(msg)
    out = buffer.getvalue()
    # optional: if you want to replay the consumed streams:
    sys.stdout.write(out)
    # test:
    assert msg in out
```
An important potential issue with capturing stdout is that it may contain `\r` characters that in normal `print` reset everything that has been printed so far. There is no problem with `pytest`, but with `pytest -s` these characters get included in the buffer, so to be able to run the test with and without `-s`, you have to make an extra cleanup of the captured output, using `re.sub(r'~.*\r', '', buf, 0, re.M)`.
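That cleanup step can be sketched as follows (using a plain `.*\r` pattern for illustration; treat the exact regex as an assumption):

```python
import re

# simulated captured output: the first progress line was "erased" by \r
buf = "progress 10%\rprogress 100%\ndone\n"

# drop everything up to and including each \r, line by line
clean = re.sub(r".*\r", "", buf, 0, re.M)
assert clean == "progress 100%\ndone\n"
```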
But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has some `\r`'s in it or not, so it's a simple:
```python
from transformers.testing_utils import CaptureStdout

with CaptureStdout() as cs:
    function_that_writes_to_stdout()
print(cs.out)
```
Here is a full test example:
```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```
If you'd like to capture `stderr`, use the `CaptureStderr` class instead:
```python
from transformers.testing_utils import CaptureStderr

with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```
If you need to capture both streams at once, use the parent `CaptureStd` class:
```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```
Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context.
### Capturing logger stream
If you need to validate the output of a logger, you can use `CaptureLogger`:
```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out == msg + "\n"
```
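Under the hood this kind of capture is close to attaching a `StreamHandler` to the logger, which the standard library can do directly (a self-contained sketch with a made-up logger name, not `CaptureLogger`'s actual implementation):

```python
import logging
from io import StringIO

logger = logging.getLogger("demo.tokenization")
logger.setLevel(logging.INFO)

# route this logger's records into an in-memory buffer
stream = StringIO()
handler = logging.StreamHandler(stream)
logger.addHandler(handler)
try:
    logger.info("Testing 1, 2, 3")
finally:
    logger.removeHandler(handler)

assert stream.getvalue() == "Testing 1, 2, 3\n"
```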
### Testing with environment variables
If you want to test the impact of environment variables for a specific test, you can use the helper decorator `transformers.testing_utils.mockenv`:
```python
from transformers.testing_utils import mockenv


class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```
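`mockenv` behaves much like the standard library's `unittest.mock.patch.dict`, which can serve as a rough stand-in (the `DEMO_TRANSFORMERS_VERBOSITY` variable below is made up):

```python
import os
from unittest import mock

assert os.environ.get("DEMO_TRANSFORMERS_VERBOSITY") is None  # not set beforehand

with mock.patch.dict(os.environ, {"DEMO_TRANSFORMERS_VERBOSITY": "error"}):
    inside = os.getenv("DEMO_TRANSFORMERS_VERBOSITY")

assert inside == "error"
assert os.environ.get("DEMO_TRANSFORMERS_VERBOSITY") is None  # restored on exit
```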
At times an external program needs to be called, which requires setting `PYTHONPATH` in `os.environ` to include multiple local paths. A helper class `transformers.test_utils.TestCasePlus` comes to help:
```python
from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # now call the external program, passing `env` to it
```
Depending on whether the test file was under the `tests` test suite or `examples`, it will correctly set up `env[PYTHONPATH]` to include one of these two directories, and also the `src` directory to ensure the testing is done against the current repo, and finally with whatever `env[PYTHONPATH]` was already set to before the test was called, if anything.

This helper method creates a copy of the `os.environ` object, so the original remains intact.
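The copy semantics can be sketched with plain `os.environ` (the `get_env_sketch` helper and the paths below are hypothetical, not the real `get_env` implementation):

```python
import os


def get_env_sketch(extra_paths):
    # work on a copy, never mutate os.environ itself
    env = os.environ.copy()
    paths = list(extra_paths)
    if env.get("PYTHONPATH"):
        paths.append(env["PYTHONPATH"])  # keep whatever was already set
    env["PYTHONPATH"] = os.pathsep.join(paths)
    return env


before = os.environ.get("PYTHONPATH")
env = get_env_sketch(["/repo/tests", "/repo/src"])
assert "/repo/src" in env["PYTHONPATH"]
assert os.environ.get("PYTHONPATH") == before  # the original is untouched
```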
### Getting reproducible results
In some situations you may want to remove randomness for your tests. To get identical reproducible results, you will need to fix the seed:
```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
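If this boilerplate is needed in several tests, it can be wrapped in a small helper. Here is a sketch that treats the heavy libraries as optional (the helper name is made up for illustration):

```python
import random


def set_reproducible_seed(seed: int = 42) -> None:
    # seed every RNG we can reach; the heavy libraries stay optional
    random.seed(seed)
    try:
        import numpy as np

        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch

        torch.manual_seed(seed)
        torch.backends.cudnn.deterministic = True
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass


set_reproducible_seed(1234)
first = [random.random() for _ in range(3)]
set_reproducible_seed(1234)
second = [random.random() for _ in range(3)]
assert first == second  # identical sequences after re-seeding
```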
### Debugging tests
To start a debugger at the point of the warning, do this:
```bash
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
```
## Working with github actions workflows
To trigger a self-push workflow CI job, you must:

1. Create a new branch on the `transformers` origin repo (not a fork!).
2. The branch name has to start with either `ci_` or `ci-` (`main` triggers it too, but we can't do PRs on `main`). It also gets triggered only for specific paths - you can find the up-to-date definition, in case it changed since this document was written, [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml) under *push:*.
3. Create a PR from this branch.
4. Then you can see the job appear [here](https://github.com/huggingface/transformers/actions/workflows/self-push.yml). It may not run right away if there is a backlog.
## Testing Experimental CI Features
Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore, if a new CI feature is to be added, it should be done as follows:

1. Create a new dedicated job that tests what needs to be tested.
2. The new job must always succeed so that it gives us a green ✓ (details below).
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from github.com UI direct file edit, various forced pushes, etc.), while monitoring the experimental job's logs (not the overall job green, as it's purposefully always green).
4. When it's clear that everything is solid, then merge the new changes into existing jobs.

That way experiments on CI functionality itself won't interfere with the normal workflow.

Now how can we make the job always succeed while the new CI feature is being developed?

Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and Github Actions as of this writing don't support that.

So the following workaround can be used:
1. `set +euo pipefail` at the beginning of the run command to suppress most potential failures in the bash script.
2. The last command must be a success: `echo "done"` or just `true` will do.

Here is an example:
```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
```
For simple commands you could also do:
```bash
cmd_that_may_fail || true
```
Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing `set +euo pipefail` or any other things you may have added to ensure that the experimental job doesn't interfere with the normal CI functioning.
This whole process would have been much easier if we only could set something like `allow-failure` for the experimental step, and let it fail without impacting the overall status of PRs. But as mentioned earlier, CircleCI and Github Actions don't support it at the moment.
You can vote for this feature and see where it is at these CI-specific threads:
- [Github Actions:](https://github.com/actions/toolkit/issues/399)
- [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Summary of the tokenizers
[[open-in-colab]]
On this page, we will take a closer look at tokenization.
<Youtube id="VFp38yj8h3A"/>
As we saw in [the preprocessing tutorial](preprocessing), tokenizing a text is splitting it into words or subwords, which are then converted to ids through a look-up table. Converting words or subwords to ids is straightforward, so in this summary we will focus on splitting a text into words or subwords (i.e. tokenizing a text). More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers: [Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), and [SentencePiece](#sentencepiece), and show examples of which tokenizer type is used by which model.

Note that on each model page, you can check the documentation of the associated tokenizer to know which tokenizer type the pretrained model uses. For instance, if we look at [`BertTokenizer`], we can see that the model uses [WordPiece](#wordpiece).
## Introduction
Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing it. For instance, consider the sentence: `"Don't you love 🤗 Transformers? We sure do."`
<Youtube id="nhJxYji1aho"/>
A simple way of tokenizing this text is to split it by spaces, which would give:
```
["Don't", "you", "love", "๐ค", "Transformers?", "We", "sure", "do."]
```
This is a sensible first step, but if we look at the tokens `"Transformers?"` and `"do."`, we notice that the punctuation is attached to the words `"Transformers"` and `"do"`, which is suboptimal. We should take the punctuation into account so that a model does not have to learn a different representation of a word and every possible punctuation symbol that could follow it, which would explode the number of representations the model has to learn. Taking punctuation into account, tokenizing our exemplary text would give:
```
["Don", "'", "t", "you", "love", "๐ค", "Transformers", "?", "We", "sure", "do", "."]
```
Better. However, there is a downside to how the tokenization dealt with the word `"Don't"`. `"Don't"` stands for `"do not"`, so it would be better tokenized as `["Do", "n't"]`. This is where things start getting complicated, and part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a different tokenized output is generated for the same text. A pretrained model only performs properly if you feed it an input that was tokenized with the same rules that were used to tokenize its training data.
[spaCy](https://spacy.io/) and [Moses](http://www.statmt.org/moses/?n=Development.GetStarted) are two popular rule-based tokenizers. Applying them to our example, *spaCy* and *Moses* would output something like:
```
["Do", "n't", "you", "love", "๐ค", "Transformers", "?", "We", "sure", "do", "."]
```
As can be seen, space and punctuation tokenization as well as rule-based tokenization are used here. Space and punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined as splitting sentences into words. While it's the most intuitive way to split texts into smaller chunks, this tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization usually generates a very big vocabulary (the set of all unique words and tokens used). For instance, [Transformer XL](model_doc/transformerxl) uses space and punctuation tokenization, resulting in a vocabulary size of 267,735!

Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which causes increased memory and time complexity. In general, transformer models rarely have a vocabulary size greater than 50,000, especially if they are pretrained only on a single language.
So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters?
<Youtube id="ssLq_EK2jLE"/>
While character tokenization is very simple and would greatly reduce memory and time complexity, it makes it much harder for the model to learn meaningful input representations. E.g., learning a meaningful context-independent representation for the letter `"t"` is much harder than learning a context-independent representation for the word `"today"`. Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of both worlds, transformer models use a hybrid between word-level and character-level tokenization called **subword** tokenization.
## Subword tokenization
<Youtube id="zHvTiHr506c"/>
Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords. For instance, `"annoyingly"` might be considered a rare word and could be decomposed into `"annoying"` and `"ly"`. Both `"annoying"` and `"ly"` as stand-alone subwords would appear more frequently, while at the same time the meaning of `"annoyingly"` is kept by the composite meaning of `"annoying"` and `"ly"`. This is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing subwords together.

Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful context-independent representations. In addition, subword tokenization enables the model to process words it has never seen before, by decomposing them into known subwords. For instance, [`~transformers.BertTokenizer`] tokenizes `"I have a new GPU!"` as follows:
```py
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> tokenizer.tokenize("I have a new GPU!")
["i", "have", "a", "new", "gp", "##u", "!"]
```
Because we are considering the uncased model, the sentence was lowercased first. We can see that the words `["i", "have", "a", "new"]` are present in the tokenizer's vocabulary, but the word `"gpu"` is not. Consequently, the tokenizer splits `"gpu"` into known subwords: `["gp", "##u"]`. The `"##"` means that the rest of the token should be attached to the previous one, without space (for decoding or reversal of the tokenization).
As another example, [`~transformers.XLNetTokenizer`] tokenizes our previous sample text as follows:
```py
>>> from transformers import XLNetTokenizer
>>> tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
>>> tokenizer.tokenize("Don't you love ๐ค Transformers? We sure do.")
["โDon", "'", "t", "โyou", "โlove", "โ", "๐ค", "โ", "Transform", "ers", "?", "โWe", "โsure", "โdo", "."]
```
We will get back to the meaning of those `"▁"` when we look at [SentencePiece](#sentencepiece). As one can see, the rare word `"Transformers"` has been split into the more frequently appearing subwords `"Transform"` and `"ers"`.

Let's now look at how the different subword tokenization algorithms work. Note that all of those tokenization algorithms rely on some form of training, which is usually done on the corpus the corresponding model will be trained on.
<a id='byte-pair-encoding'></a>
### Byte-Pair Encoding (BPE)
Byte-Pair Encoding (BPE) was introduced in [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909). BPE relies on a pre-tokenizer that splits the training data into words. Pre-tokenization can be as simple as space tokenization, e.g. [GPT-2](model_doc/gpt2) and [RoBERTa](model_doc/roberta). More advanced pre-tokenization includes rule-based tokenization, e.g. [XLM](model_doc/xlm) and [FlauBERT](model_doc/flaubert), which use Moses for most languages, or [GPT](model_doc/gpt), which uses Spacy and ftfy, to count the frequency of each word in the training corpus.

After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the training data has been determined. Next, BPE creates a base vocabulary and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to define before training the tokenizer.

As an example, let's assume that after pre-tokenization, the following set of words including their frequency has been determined:
```
("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)
```
Consequently, the base vocabulary is `["b", "g", "h", "n", "p", "s", "u"]`. Splitting all words into symbols of the base vocabulary, we obtain:
```
("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5)
```
BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently. In the example above, `"h"` followed by `"u"` is present 10 + 5 = 15 times (10 times in the 10 occurrences of `"hug"`, 5 times in the 5 occurrences of `"hugs"`). However, the most frequent symbol pair is `"u"` followed by `"g"`, occurring 10 + 5 + 5 = 20 times in total. Thus, the first merge rule the tokenizer learns is to group all `"u"` symbols followed by a `"g"` symbol together. Next, `"ug"` is added to the vocabulary. The set of words then becomes:
```
("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5)
```
BPE then identifies the next most common symbol pair. It's `"u"` followed by `"n"`, which occurs 12 + 4 = 16 times. `"u"`, `"n"` is merged to `"un"` and added to the vocabulary. The next most frequent symbol pair is `"h"` followed by `"ug"`, occurring 10 + 5 = 15 times. Again the pair is merged and `"hug"` can be added to the vocabulary.

At this stage, the vocabulary is `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]` and our set of unique words is represented as:
```
("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5)
```
Assuming that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance, the word `"bug"` would be tokenized to `["b", "ug"]`, but `"mug"` would be tokenized as `["<unk>", "ug"]` since the symbol `"m"` is not in the base vocabulary. In general, single letters such as `"m"` are not replaced by the `"<unk>"` symbol because the training data usually includes at least one occurrence of each letter, but it is likely to happen for very special characters like emojis.

As mentioned earlier, the vocabulary size, i.e. the base vocabulary size + the number of merges, is a hyperparameter to choose. For instance, [GPT](model_doc/gpt) has a vocabulary size of 40,478 since it has 478 base characters and chose to stop training after 40,000 merges.
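The merge-learning loop described above can be sketched in a few lines of Python. This is a toy illustration on the corpus from this section, not the actual 🤗 Tokenizers implementation:

```python
from collections import Counter

def learn_bpe_merges(word_freqs, num_merges):
    # Start by splitting each word into base-vocabulary symbols (characters)
    splits = {word: list(word) for word in word_freqs}
    merges = []
    for _ in range(num_merges):
        # Count the frequency of every adjacent symbol pair across the corpus
        pair_freqs = Counter()
        for word, freq in word_freqs.items():
            symbols = splits[word]
            for a, b in zip(symbols, symbols[1:]):
                pair_freqs[(a, b)] += freq
        if not pair_freqs:
            break
        best = max(pair_freqs, key=pair_freqs.get)
        merges.append(best)
        # Apply the new merge rule to every word
        for word, symbols in splits.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            splits[word] = merged
    return merges, splits

corpus = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}
merges, splits = learn_bpe_merges(corpus, 3)
print(merges)          # [('u', 'g'), ('u', 'n'), ('h', 'ug')]
print(splits["hugs"])  # ['hug', 's']
```

Running this reproduces the three merges derived by hand above: `("u", "g")`, then `("u", "n")`, then `("h", "ug")`.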
#### Byte-level BPE
A base vocabulary that includes all possible base characters can be quite large if e.g. all unicode characters are considered base characters. To have a better base vocabulary, [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. With some additional rules to deal with punctuation, GPT2's tokenizer can tokenize every text without the need for the `<unk>` symbol. [GPT-2](model_doc/gpt) has a vocabulary size of 50,257, which corresponds to the 256 byte base tokens, a special end-of-text token, and the symbols learned with 50,000 merges.
### WordPiece
WordPiece is the subword tokenization algorithm used for [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), and [Electra](model_doc/electra). The algorithm was outlined in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and is very similar to BPE. Rather than choosing the most frequent symbol pair, WordPiece chooses the symbol pair that maximizes the likelihood of the training data once added to the vocabulary.

So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair whose probability divided by the probabilities of its first symbol followed by its second symbol is the greatest among all symbol pairs. E.g. `"u"` followed by `"g"` would only have been merged if the probability of `"ug"` divided by `"u"` and `"g"` had been greater than for any other symbol pair. Intuitively, WordPiece differs slightly from BPE in that it evaluates what it *loses* by merging two symbols to make sure it's *worth it*.
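To make the difference concrete, here is a small sketch of the WordPiece score, frequency(pair) / (frequency(first) × frequency(second)), computed on the toy corpus from the BPE section. Note that the winning pair differs from BPE's most frequent pair:

```python
from collections import Counter

corpus = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}

# Count individual symbol and adjacent-pair frequencies over the corpus
symbol_freqs = Counter()
pair_freqs = Counter()
for word, freq in corpus.items():
    for ch in word:
        symbol_freqs[ch] += freq
    for a, b in zip(word, word[1:]):
        pair_freqs[(a, b)] += freq

def wordpiece_score(pair):
    # freq(pair) / (freq(first) * freq(second)): rewards pairs whose parts
    # rarely occur outside the pair
    a, b = pair
    return pair_freqs[pair] / (symbol_freqs[a] * symbol_freqs[b])

best = max(pair_freqs, key=wordpiece_score)
print(best)  # ('g', 's')
```

Here `"s"` appears only ever after `"g"`, so `("g", "s")` scores highest even though it is far less frequent than `("u", "g")`, the pair BPE would merge first.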
### Unigram
Unigram is a subword tokenization algorithm introduced in [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf). In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims each symbol down to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in transformers, but it's used in conjunction with [SentencePiece](#sentencepiece).

At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol were removed from the vocabulary. Unigram then removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, i.e. those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters, so that any word can be tokenized.
Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary:
```
["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],
```
`"hugs"`ใฏใ`["hug", "s"]`ใ`["h", "ug", "s"]`ใใพใใฏ`["h", "u", "g", "s"]`ใฎใใใซใใผใฏใณๅใงใใพใใใงใฏใใฉใใ้ธๆใในใใงใใใใ๏ผ Unigramใฏใใใฌใผใใณใฐใณใผใในๅ
ใฎๅใใผใฏใณใฎ็ขบ็ใไฟๅญใใใใฌใผใใณใฐๅพใซๅๅฏ่ฝใชใใผใฏใณๅใฎ็ขบ็ใ่จ็ฎใงใใใใใซใใพใใใใฎใขใซใดใชใบใ ใฏๅฎ้ใซใฏๆใๅฏ่ฝๆงใฎ้ซใใใผใฏใณๅใ้ธๆใใพใใใ็ขบ็ใซๅพใฃใฆๅฏ่ฝใชใใผใฏใณๅใใตใณใใชใณใฐใใใชใใทใงใณใๆไพใใพใใ
ใใใใฎ็ขบ็ใฏใใใผใฏใใคใถใผใใใฌใผใใณใฐใซไฝฟ็จใใๆๅคฑใซใใฃใฆๅฎ็พฉใใใพใใใใฌใผใใณใฐใใผใฟใๅ่ช \\(x_{1}, \dots, x_{N}\\) ใงๆงๆใใใๅ่ช \\(x_{i}\\) ใฎใในใฆใฎๅฏ่ฝใชใใผใฏใณๅใฎใปใใใ \\(S(x_{i})\\) ใจๅฎ็พฉใใใๅ ดๅใๅ
จไฝใฎๆๅคฑใฏๆฌกใฎใใใซๅฎ็พฉใใใพใใ
$$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$
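Finding the most likely tokenization of a word under a unigram model can be sketched with a small Viterbi search over split positions. The token probabilities below are made-up values for illustration; a real tokenizer learns them during training:

```python
import math

# Hypothetical token probabilities for a toy unigram vocabulary (assumed values)
probs = {"h": 0.05, "u": 0.05, "g": 0.05, "s": 0.1, "ug": 0.15, "hug": 0.2}

def best_segmentation(word):
    # Viterbi over split positions: best[i] holds the highest log-probability
    # and the corresponding token list for the prefix word[:i]
    best = [(0.0, [])] + [(-math.inf, None)] * len(word)
    for end in range(1, len(word) + 1):
        for start in range(end):
            piece = word[start:end]
            if piece in probs and best[start][1] is not None:
                score = best[start][0] + math.log(probs[piece])
                if score > best[end][0]:
                    best[end] = (score, best[start][1] + [piece])
    return best[len(word)]

log_prob, tokens = best_segmentation("hugs")
print(tokens)  # ['hug', 's']
```

Under these assumed probabilities, `["hug", "s"]` (probability 0.2 × 0.1) beats the character-by-character segmentation, which matches the intuition that the model prefers fewer, higher-probability tokens.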
<a id='sentencepiece'></a>
### SentencePiece
All tokenization algorithms described so far have the same problem: it is assumed that the input text uses spaces to separate words. However, not all languages use spaces to separate words. One possible solution is to use language-specific pre-tokenizers, e.g. [XLM](model_doc/xlm) uses a specific Chinese, Japanese, and Thai pre-tokenizer. To solve this problem more generally, [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf) treats the input as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram algorithm to construct the appropriate vocabulary.
The [`XLNetTokenizer`] uses SentencePiece, for example, which is also why the `"▁"` character was included in the vocabulary in the earlier example. Decoding with SentencePiece is very easy, since all tokens can simply be concatenated and `"▁"` is replaced by a space.
All transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models using SentencePiece are [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), and [T5](model_doc/t5).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# What ๐ค Transformers can do
🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also includes non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind it. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!).
This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code!
## Audio
Audio and speech processing tasks are a little different from the other modalities, mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher and the audio more closely resembles the original audio source.

Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features.
### Audio classification
Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include:

- acoustic scene classification: label audio with a scene label ("office", "beach", "stadium")
- acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking")
- tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
- music classification: label music with a genre label ("metal", "hip-hop", "country")
```py
>>> from transformers import pipeline
>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
>>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4532, 'label': 'hap'},
{'score': 0.3622, 'label': 'sad'},
{'score': 0.0943, 'label': 'neu'},
{'score': 0.0903, 'label': 'ang'}]
```
### Automatic speech recognition
Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks, in part because speech is such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather.

But one of the key challenges Transformer architectures have helped with is low-resource languages. By pretraining on large amounts of speech data and then fine-tuning the model on only one hour of labeled speech data in a low-resource language, it is still possible to produce high-quality results compared to previous ASR systems trained on 100x more labeled data.
```py
>>> from transformers import pipeline
>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
## Computer vision
One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers using a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image.
Two general ways computer vision tasks can be solved are:

1. Use convolutions to learn the hierarchical features of an image, from low-level features up to high-level abstract things.
2. Split an image into patches and use a Transformer to gradually learn how each image patch is related to the others to form an image. Unlike the bottom-up approach favored by CNNs, this is kind of like starting out with a blurry image and then gradually bringing it into focus.
### Image classification
Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include:

* healthcare: label medical images to detect disease or monitor patient health
* environment: label satellite images to monitor deforestation, inform wildland management, or detect wildfires
* agriculture: label images of crops to monitor plant health, or satellite images for land use monitoring
* ecology: label images of animal or plant species to monitor wildlife populations or track endangered species
```py
>>> from transformers import pipeline
>>> classifier = pipeline(task="image-classification")
>>> preds = classifier(
... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.4335, 'label': 'lynx, catamount'}
{'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
{'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}
{'score': 0.0239, 'label': 'Egyptian cat'}
{'score': 0.0229, 'label': 'tiger cat'}
```
### Object detection
Unlike image classification, object detection identifies multiple objects within an image as well as the objects' positions in the image (defined by bounding boxes). Some example applications of object detection include:

* self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights
* remote sensing: disaster monitoring, urban planning, and weather forecasting
* defect detection: detect cracks or structural damage in buildings, and manufacturing defects
```py
>>> from transformers import pipeline
>>> detector = pipeline(task="object-detection")
>>> preds = detector(
... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
>>> preds
[{'score': 0.9865,
'label': 'cat',
'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}]
```
### Image segmentation
Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image, because segmentation is more granular: segmentation can detect objects at a pixel level. There are several types of image segmentation:

* instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2")
* panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class **and** each distinct instance of an object

Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. They are also useful in medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes, or to create augmented reality experiences by overlaying objects on the real world through your camera.
```py
>>> from transformers import pipeline
>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(
... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.9879, 'label': 'LABEL_184'}
{'score': 0.9973, 'label': 'snow'}
{'score': 0.9972, 'label': 'cat'}
```
### Depth estimation
Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far away objects like pedestrians, traffic signs, and other vehicles are in order to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images, and can be used to create high-quality 3D representations of biological structures or buildings.

There are two approaches to depth estimation:

* stereo: depths are estimated by comparing two images of the same scene from slightly different angles
* monocular: depths are estimated from a single image
```py
>>> from transformers import pipeline
>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```
## Natural language processing
NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens), and then converting these tokens into numbers. As a result, a sequence of text can be represented as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!
### Text classification
Like classification tasks in any modality, text classification labels a sequence of text (which can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include:

* sentiment analysis: label text according to some polarity like `positive` or `negative`, which can inform and support decision-making in fields like politics, finance, and marketing
* content classification: label text according to some topic to help organize and filter information in news and social media feeds (`weather`, `sports`, `finance`, etc.)
```py
>>> from transformers import pipeline
>>> classifier = pipeline(task="sentiment-analysis")
>>> preds = classifier("Hugging Face is the best thing since sliced bread!")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```
### Token classification
In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](/glossary#token). Token classification assigns each token a label from a predefined set of classes.

Two common types of token classification are:

* named entity recognition (NER): label a token according to an entity category like organization, person, location, or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
* part-of-speech tagging (POS): label a token according to its part of speech, such as noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different ("bank" as a noun versus "bank" as a verb).
```py
>>> from transformers import pipeline
>>> classifier = pipeline(task="ner")
>>> preds = classifier("Hugging Face is a French company based in New York City.")
>>> preds = [
... {
... "entity": pred["entity"],
... "score": round(pred["score"], 4),
... "index": pred["index"],
... "word": pred["word"],
... "start": pred["start"],
... "end": pred["end"],
... }
... for pred in preds
... ]
>>> print(*preds, sep="\n")
{'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2}
{'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7}
{'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12}
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```
### Question answering
Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something, like whether a restaurant is open. It can also provide customer or technical support, and help search engines retrieve the relevant information you're asking for.

There are two common types of question answering:

* extractive: given a question and some context, the answer is a span of text from the context that the model must extract
* abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`]
```py
>>> from transformers import pipeline
>>> question_answerer = pipeline(task="question-answering")
>>> preds = question_answerer(
... question="What is the name of the repository?",
... context="The name of the repository is huggingface/transformers",
... )
>>> print(
... f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
... )
score: 0.9327, start: 30, end: 54, answer: huggingface/transformers
```
### Summarization
Summarization creates a shorter version of a longer text while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task: it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers' time and serve as a reading aid.

Like question answering, there are two types of summarization:

* extractive: identify and extract the most important sentences from the original text
* abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [`SummarizationPipeline`] uses the abstractive approach
```py
>>> from transformers import pipeline
>>> summarizer = pipeline(task="summarization")
>>> summarizer(
... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
```
### Translation
Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, helping translate content to reach wider audiences, and can even serve as a learning tool to help people learn a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence.

In the early days, translation models were mostly monolingual, but recently there has been increasing interest in multilingual models that can translate between many pairs of languages.
```py
>>> from transformers import pipeline
>>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning."
>>> translator = pipeline(task="translation", model="t5-small")
>>> translator(text)
[{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}]
```
### Language modeling
Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs), which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn't explicitly trained to solve! Language models can be used to generate fluent and convincing text, though you need to be careful, since the text may not always be accurate.
There are two types of language modeling:

* causal: the model's objective is to predict the next token in a sequence, and future tokens are masked
```py
>>> from transformers import pipeline
>>> prompt = "Hugging Face is a community-based open-source platform for machine learning."
>>> generator = pipeline(task="text-generation")
>>> generator(prompt) # doctest: +SKIP
```
* masked: the model's objective is to predict a masked token in a sequence with full access to the tokens in the sequence
```py
>>> text = "Hugging Face is a community-based open-source <mask> for machine learning."
>>> fill_mask = pipeline(task="fill-mask")
>>> preds = fill_mask(text, top_k=1)
>>> preds = [
... {
... "score": round(pred["score"], 4),
... "token": pred["token"],
... "token_str": pred["token_str"],
... "sequence": pred["sequence"],
... }
... for pred in preds
... ]
>>> preds
[{'score': 0.2236,
'token': 1761,
'token_str': ' platform',
'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}]
```
## Multimodal
Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task, where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image.

Although multimodal models work with different data types or modalities, internally the preprocessing steps help the model convert all the data types into embeddings (vectors or lists of numbers that hold meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings.
### Document question answering
Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task that takes text as input, document question answering takes an image of a document as input along with a question about the document, and returns an answer. Document question answering can be used to parse structured documents and extract key information from them. In the example below, the total amount and change due can be extracted from a receipt.
```py
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests
>>> url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices")
>>> preds = doc_question_answerer(
... question="What is the total amount?",
... image=image,
... )
>>> preds
[{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}]
```
このページが各モダリティのタスクの種類とそれぞれの重要性についての追加の背景情報を提供できることを願っています。次の [セクション](tasks_explained) では、🤗 トランスフォーマーがこれらのタスクを解決するために **どのように** 動作するかを学びます。
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Custom Tools and Prompts
<Tip>
ใใฉใณในใใฉใผใใผใฎใณใณใใญในใใงใใผใซใจใจใผใธใงใณใใไฝใงใใใใ็ฅใใชใๅ ดๅใ
ใพใ[Transformers Agents](transformers_agents)ใใผใธใใ่ชญใฟใใใ ใใใจใใๅงใใใพใใ
</Tip>
<Tip warning={true}>
Transformers Agentsใฏๅฎ้จ็ใชAPIใงใใใใใคใงใๅคๆดใใใๅฏ่ฝๆงใใใใพใใ
ใจใผใธใงใณใใซใใฃใฆ่ฟใใใ็ตๆใฏใAPIใๅบ็คใจใชใใขใใซใๅคๆดใใใๅฏ่ฝๆงใใใใใใๅคๅใใใใจใใใใพใใ
</Tip>
ใซในใฟใ ใใผใซใจใใญใณใใใไฝๆใใไฝฟ็จใใใใจใฏใใจใผใธใงใณใใๅผทๅใใๆฐใใใฟในใฏใๅฎ่กใใใใใใซ้ๅธธใซ้่ฆใงใใ
このガイドでは、以下の内容を説明します:
- ใใญใณใใใฎใซในใฟใใคใบๆนๆณ
- ใซในใฟใ ใใผใซใฎไฝฟ็จๆนๆณ
- ใซในใฟใ ใใผใซใฎไฝๆๆนๆณ
## Customizing the prompt
[Transformers Agents](transformers_agents)ใง่ชฌๆใใใฆใใใใใซใใจใผใธใงใณใใฏ[`~Agent.run`]ใใใณ[`~Agent.chat`]ใขใผใใงๅฎ่กใงใใพใใ
`run`ใขใผใใจ`chat`ใขใผใใฎไธกๆนใฏๅใใญใธใใฏใซๅบใฅใใฆใใพใใ
ใจใผใธใงใณใใ้งๅใใ่จ่ชใขใใซใฏใ้ทใใใญใณใใใซๅบใฅใใฆๆกไปถไปใใใใ
ๆฌกใฎใใผใฏใณใ็ๆใใฆๅๆญขใใผใฏใณใซ้ใใใพใงใใญใณใใใๅฎไบใใพใใ
両者の唯一の違いは、`chat`モードの間にプロンプトが前のユーザーの入力とモデルの生成と共に拡張されることです。
これにより、エージェントは過去の対話にアクセスでき、エージェントにあたかもメモリがあるかのように見えます。
### Structure of the prompt
ใใญใณใใใใฉใฎใใใซๆง็ฏใใใใฉใฎใใใซๆ้ฉๅใงใใใใ็่งฃใใใใใซใใใญใณใใใฏๅคงใพใใซ4ใคใฎ้จๅใซๅใใใฆใใพใใ
1. ใคใณใใญใใฏใทใงใณ๏ผใจใผใธใงใณใใฎๆฏใ่ใใใใผใซใฎๆฆๅฟตใฎ่ชฌๆใ
2. ใในใฆใฎใใผใซใฎ่ชฌๆใใใใฏใฆใผใถใผใซใใฃใฆๅฎ็พฉ/้ธๆใใใใใผใซใงใฉใณใฟใคใ ๆใซๅ็ใซ็ฝฎๆใใใ`<<all_tools>>`ใใผใฏใณใซใใฃใฆๅฎ็พฉใใใพใใ
3. ใฟในใฏใจใใฎ่งฃๆฑบ็ญใฎไธ้ฃใฎไพใ
4. ็พๅจใฎไพใจ่งฃๆฑบ็ญใฎ่ฆๆฑใ
ๅ้จๅใใใใใ็่งฃใใใใใซใ`run`ใใญใณใใใใฉใฎใใใซ่ฆใใใใฎ็ฐก็ฅ็ใ่ฆใฆใฟใพใใใ๏ผ
````text
ใฟในใฏใๅฎ่กใใใใใซใPythonใฎใทใณใใซใชใณใใณใใฎใทใชใผใบใ่ใใฆใใใใจใใใใงใใใใ
[...]
ๆๅณใใใๅ ดๅใฏใไธญ้็ตๆใ่กจ็คบใใใใจใใงใใพใใ
ใใผใซ๏ผ
- document_qa:これはドキュメント(pdf)に関する質問に答えるツールです。情報を含むドキュメントである `document` と、ドキュメントに関する質問である `question` を受け取り、質問に対する回答を含むテキストを返します。
- image_captioner๏ผใใใฏ็ปๅใฎ่ชฌๆใ็ๆใใใใผใซใงใใใญใฃใใทใงใณใซใใ็ปๅใงใใ `image` ใจใ่ชฌๆใๅซใ่ฑ่ชใฎใใญในใใ่ฟใใใญในใใๅใๅใใพใใ
[...]
ใฟในใฏ: "ๅคๆฐ `question` ใซ้ขใใ่ณชๅใซ็ญใใใใใฎ็ปๅใซใคใใฆๅ็ญใใฆใใ ใใใ่ณชๅใฏใใฉใณใน่ชใงใใ"
次のツールを使用します:質問を英語に翻訳するための `translator`、そして入力画像に関する質問に答えるための `image_qa`。
ๅ็ญ๏ผ
```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(image=image, question=translated_question)
print(f"The answer is {answer}")
```
タスク:「`document`内で最年長の人物を特定し、その結果をバナーとして表示する。」
以下のツールを使用します:`document_qa`を使用してドキュメント内で最年長の人物を見つけ、その回答に従って`image_generator`を使用して画像を生成します。
ๅ็ญ๏ผ
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
[...]
ใฟในใฏ: "ๅทใจๆนใฎ็ตตใๆใใฆใใ ใใ"
ไปฅไธใฎใใฎใไฝฟ็จใใพใ
````
導入部分("Tools:"の前のテキスト)は、モデルの振る舞いと実行すべきタスクを正確に説明しています。
この部分はおそらくエージェントが常に同じ方法で振る舞う必要があるため、カスタマイズする必要はありません。
2番目の部分("Tools"の下の箇条書き)は、`run`または`chat`を呼び出すたびに動的に追加されます。
`agent.toolbox`内のツールの数と同じ数の箇条書きがあり、それぞれの箇条書きにはツールの名前と説明が含まれています:
```text
- <tool.name>: <tool.description>
```
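この箇条書きがツール名と説明からどのように組み立てられるかは、次のような小さなスケッチで表せます(`render_tool_lines` とツールの辞書は説明用の仮の例であり、ライブラリの実際の内部実装そのものではありません):

```python
def render_tool_lines(tools: dict) -> str:
    # 各ツールを「- <tool.name>: <tool.description>」形式の1行に変換し、改行で連結する
    return "\n".join(f"- {name}: {description}" for name, description in tools.items())


tools = {
    "document_qa": "answers a question about a document (pdf)",
    "image_captioner": "generates a description of an image",
}
print(render_tool_lines(tools))
```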
ใใใใ็ขบ่ชใใพใใใใ `document_qa` ใใผใซใ่ชญใฟ่พผใใงๅๅใจ่ชฌๆใๅบๅใใพใใ
```py
from transformers import load_tool
document_qa = load_tool("document-question-answering")
print(f"- {document_qa.name}: {document_qa.description}")
```
which gives:
```text
- document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question.
```
ツール説明:
このツールは、2つのパートから成り立っています。最初のパートでは、ツールが何を行うかを説明し、2番目のパートでは入力引数と戻り値がどのように期待されるかを述べています。

良いツール名とツールの説明は、エージェントが正しく使用するために非常に重要です。エージェントがツールについて持っている唯一の情報は、その名前と説明です。したがって、ツール名と説明の両方が正確に記述され、ツールボックス内の既存のツールのスタイルに合致することを確認する必要があります。特に、説明にはコードスタイルで名前によって期待されるすべての引数が言及され、期待される型とそれらが何であるかの説明も含めるべきです。
<Tip>
ใญใฅใฌใผใใใใTransformersใใผใซใฎๅฝๅใจ่ชฌๆใ็ขบ่ชใใฆใใใผใซใใฉใฎใใใชๅๅใจ่ชฌๆใๆใคในใใใ็่งฃใใใฎใซๅฝน็ซใกใพใใ
ใในใฆใฎใใผใซใฏ[`Agent.toolbox`]ใใญใใใฃใง็ขบ่ชใงใใพใใ
</Tip>
ใซในใฟใใคใบใใใไพ๏ผ
ツールの使い方をエージェントに正確に示す一連の例が含まれています。これらの例は、エージェントが実際に正確で実行可能なコードを生成する可能性を最大化するように書かれているため、非常に重要です。大規模な言語モデルは、プロンプト内のパターンを認識し、新しいデータを使用してそのパターンを繰り返すことに非常に優れています。したがって、実践で正しく実行可能なコードを生成するエージェントの可能性を最大化するように、これらの例は書かれている必要があります。
ไปฅไธใฏใไธใคใฎไพใงใ๏ผ
````text
Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."
I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
````
ใใฟใผใณ๏ผใขใใซใ็นฐใ่ฟใใ่กใใใใซๆ็คบใใใใใฟใผใณใซใฏใ3ใคใฎ้จๅใใใใพใใ
ใฟในใฏใฎๅฃฐๆใใจใผใธใงใณใใฎๆๅณใใๅไฝใฎ่ชฌๆใใใใฆๆๅพใซ็ๆใใใใณใผใใงใใ
ใใญใณใใใฎไธ้จใงใใใในใฆใฎไพใซใฏใใใฎๆญฃ็ขบใชใใฟใผใณใใใใใจใผใธใงใณใใๆฐใใใใผใฏใณใ็ๆใใ้ใซใ
ๅใใใฟใผใณใๅ็พใใใใจใ็ขบ่ชใใฆใใพใใ
プロンプトの例はTransformersチームによって厳選され、[こちら](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py)の一連の問題ステートメントで厳密に評価されます。これにより、エージェントのプロンプトがエージェントの実際の使用ケースを解決するためにできるだけ優れたものになります。
プロンプトの最後の部分は以下に対応しています:
```text
Task: "Draw me a picture of rivers and lakes"
I will use the following
```
これはエージェントに完成させるための最終的で未完成の例です。未完成の例は、実際のユーザー入力に基づいて動的に作成されます。上記の例では、ユーザーが次のように実行しました:
```py
agent.run("Draw me a picture of rivers and lakes")
```
ユーザーの入力 - つまり、タスク:「川と湖の絵を描いてください」は、以下のようなプロンプトテンプレートに変換されます:「タスク:<task> \n\n 次に私は以下を使用します」。
ใใฎๆใฏใใจใผใธใงใณใใๆกไปถไปใใใใใใญใณใใใฎๆ็ต่กใๆงๆใใใใใใฃใฆใจใผใธใงใณใใซๅฏพใใฆๅใฎไพใจใพใฃใใๅใๆนๆณใงไพใ็ตไบใใใใๅผทใๅฝฑ้ฟใใพใใ
詳細には立ち入りませんが、チャットテンプレートは同じプロンプト構造を持ち、例はわずかに異なるスタイルを持っています。例:
````text
[...]
=====
Human: Answer the question in the variable `question` about the image stored in the variable `image`.
Assistant: I will use the tool `image_qa` to answer the question on the input image.
```py
answer = image_qa(text=question, image=image)
print(f"The answer is {answer}")
```
Human: I tried this code, it worked but didn't give me a good result. The question is in French
Assistant: In this case, the question needs to be translated first. I will use the tool `translator` to do this.
```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(text=translated_question, image=image)
print(f"The answer is {answer}")
```
=====
[...]
````
*Human:* `run`プロンプトの例とは対照的に、各`chat`プロンプトの例には*Human*と*Assistant*の間で1つ以上のやりとりがあります。各やりとりは、`run`プロンプトの例と同様の構造になっています。ユーザーの入力は*Human:*の後ろに追加され、エージェントにはコードを生成する前に何を行う必要があるかを最初に生成するように指示されます。やりとりは以前のやりとりに基づいて行われることがあり、ユーザーが「I tried **this** code」と入力したように、以前に生成されたエージェントのコードを参照できます。
*Assistant:* `.chat`を実行すると、ユーザーの入力または*タスク*が未完了の形式に変換されます:
```text
Human: <user-input>\n\nAssistant:
```
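この変換は、概念的には次のような処理に相当します(`format_chat_turn` は説明用の仮の関数名であり、ライブラリの実際の内部実装そのものではありません):

```python
def format_chat_turn(user_input: str) -> str:
    # ユーザー入力を Human/Assistant 形式の未完了プロンプトに変換する
    return f"Human: {user_input}\n\nAssistant:"


print(format_chat_turn("Draw me a picture of rivers and lakes"))
```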
これをエージェントが完了させます。`run` コマンドとは対照的に、`chat` コマンドは完了した例をプロンプトに追加します。そのため、次の `chat` ターンのためにエージェントにより多くの文脈を提供します。
ใใฆใใใญใณใใใฎๆง้ ใใใใฃใใจใใใงใใฉใฎใใใซใซในใฟใใคใบใงใใใใ่ฆใฆใฟใพใใใ๏ผ
### Writing good user inputs
ๅคง่ฆๆจกใช่จ่ชใขใใซใฏใฆใผใถใผใฎๆๅณใ็่งฃใใ่ฝๅใใพใใพใๅไธใใฆใใพใใใใจใผใธใงใณใใๆญฃใใใฟในใฏใ้ธๆใใใฎใๅฉใใใใใซใใงใใใ ใๆญฃ็ขบใซ่จ่ฟฐใใใใจใ้ๅธธใซๅฝน็ซใกใพใใใงใใใ ใๆญฃ็ขบใงใใใจใฏไฝใๆๅณใใใฎใงใใใใ๏ผ
エージェントは、プロンプトでツール名とその説明のリストを見ています。ツールが追加されるほど、エージェントが正しいツールを選択するのが難しくなり、正しいツールの連続を選択するのはさらに難しくなります。共通の失敗例を見てみましょう。ここではコードのみを返すことにします。
```py
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Show me a tree", return_code=True)
```
gives:
```text
==Explanation from the agent==
I will use the following tool: `image_segmenter` to create a segmentation mask for the image.
==Code generated by the agent==
mask = image_segmenter(image, prompt="tree")
```
ใใใฏใใใใ็งใใกใๆใใงใใใใฎใงใฏใชใใงใใใใไปฃใใใซใๆจใฎ็ปๅใ็ๆใใใใใจใใใๅฏ่ฝๆงใ้ซใใงใใ
็นๅฎใฎใใผใซใไฝฟ็จใใใใใจใผใธใงใณใใ่ชๅฐใใใใใซใใใผใซใฎๅๅใ่ชฌๆใซๅซใพใใฆใใ้่ฆใชใญใผใฏใผใใไฝฟ็จใใใใจใฏ้ๅธธใซๅฝน็ซใกใพใใใใฆใ่ฉณใใ่ฆใฆใฟใพใใใใ
```py
agent.toolbox["image_generator"].description
```
```text
'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image.
```
ๅๅใจ่ชฌๆๆใซใฏใใญใผใฏใผใใ็ปๅใใใใใญใณใใใใใไฝๆใใใใใณใ็ๆใใไฝฟ็จใใใฆใใพใใใใใใฎ่จ่ใไฝฟ็จใใใใจใงใใใใงใฎๅไฝใใใๅนๆ็ใซใชใๅฏ่ฝๆงใ้ซใใงใใใใญใณใใใๅฐใ่ฉณ็ดฐใซ่ชฟๆดใใพใใใใ
```py
agent.run("Create an image of a tree", return_code=True)
```
gives:
```text
==Explanation from the agent==
I will use the following tool `image_generator` to generate an image of a tree.
==Code generated by the agent==
image = image_generator(prompt="tree")
```
็ฐกๅใซ่จใใจใใจใผใธใงใณใใใฟในใฏใๆญฃ็ขบใซ้ฉๅใชใใผใซใซใใใใณใฐใงใใชใๅ ดๅใฏใใใผใซใฎๅๅใ่ชฌๆใฎๆใ้ข้ฃๆงใฎใใใญใผใฏใผใใ่ชฟในใฆใใฟในใฏใชใฏใจในใใใใใซๅใใใฆๆด็ทดใใใฆใฟใฆใใ ใใใ
### Customizing the tool descriptions
ไปฅๅใซใ่ฆใใใใซใใจใผใธใงใณใใฏๅใใผใซใฎๅๅใจ่ชฌๆใซใขใฏใปในใงใใพใใใใผในใฎใใผใซใฏ้ๅธธใซๆญฃ็ขบใชๅๅใจ่ชฌๆใๆใฃใฆใใใฏใใงใใใ็นๅฎใฎใฆใผในใฑใผในใซๅใใใฆใใผใซใฎ่ชฌๆใๅๅใๅคๆดใใใใจใๅฝน็ซใคใใใใใพใใใใใใฏใ้ๅธธใซ้กไผผใใ่คๆฐใฎใใผใซใ่ฟฝๅ ใใๅ ดๅใใ็นๅฎใฎใใกใคใณ๏ผใใจใใฐใ็ปๅ็ๆใๅคๆใชใฉ๏ผใงใจใผใธใงใณใใไฝฟ็จใใๅ ดๅใซ็นใซ้่ฆใซใชใใใใใใพใใใ
ใใใใๅ้กใฏใใจใผใธใงใณใใ็ปๅ็ๆใฟในใฏใซ้ ป็นใซไฝฟ็จใใใๅ ดๅใ็ปๅ็ๆใจ็ปๅๅคๆ/ไฟฎๆญฃใๆททๅใใใใจใงใใ
ไพ๏ผ
```py
agent.run("Make an image of a house and a car", return_code=True)
```
returns
```text
==Explanation from the agent==
I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house.
==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
house_car_image = image_transformer(image=car_image, prompt="A house")
```
ใใใฏใใใใ็งใใกใใใใงๆใใงใใๆญฃ็ขบใชใใฎใงใฏใชใใใใงใใใจใผใธใงใณใใฏใimage_generatorใใจใimage_transformerใใฎ้ใใ็่งฃใใใฎใ้ฃใใใใใงใใใฐใใฐไธกๆนใไธ็ทใซไฝฟ็จใใพใใ
ใใใงใจใผใธใงใณใใใตใใผใใใใใใซใ"image_transformer"ใฎใใผใซๅใจ่ชฌๆใๅคๆดใใฆใๅฐใ"image"ใ"prompt"ใใๅใ้ขใใฆใฟใพใใใใไปฃใใใซใใใใmodifierใใจๅผใณใพใใใ๏ผ
```py
agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer")
agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace(
    "transforms an image according to a prompt", "modifies an image"
)
```
「modifier」(変更)という名前は、上記のプロンプトに対して新しい画像プロセッサを使わせる強い手がかりです。それでは、もう一度実行してみましょう。
```py
agent.run("Make an image of a house and a car", return_code=True)
```
Now we're getting:
```text
==Explanation from the agent==
I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car.
==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
```
ใใใฏใ็งใใกใ่ใใฆใใใใฎใซ็ขบๅฎใซ่ฟใฅใใฆใใพใ๏ผใใ ใใๅฎถใจ่ปใๅใ็ปๅใซๅซใใใใจ่ใใฆใใพใใใฟในใฏใๅไธใฎ็ปๅ็ๆใซๅใใใใจใงใใใ้ฉๅใชๆนๅใซ้ฒใใใฏใใงใ๏ผ
```py
agent.run("Create image: 'A house and car'", return_code=True)
```
```text
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
image = image_generator(prompt="A house and car")
```
<Tip warning={true}>
ใจใผใธใงใณใใฏใ็นใซ่คๆฐใฎใชใใธใงใฏใใฎ็ปๅใ็ๆใใใชใฉใใใ่ค้ใชใฆใผในใฑใผในใซ้ขใใฆใฏใใพใ ๅคใใฎใฆใผในใฑใผในใซๅฏพใใฆ่ๅผฑใงใใ
エージェント自体とその基礎となるプロンプトは、今後数ヶ月でさらに改善され、さまざまなユーザーの入力に対してエージェントがより頑健になるようになります。
</Tip>
### Customizing the whole project
ユーザーに最大限の柔軟性を提供するために、[上記](#structure-of-the-prompt)で説明されたプロンプトテンプレート全体をユーザーが上書きできます。この場合、カスタムプロンプトには導入セクション、ツールセクション、例セクション、未完了の例セクションが含まれていることを確認してください。`run` プロンプトテンプレートを上書きしたい場合、以下のように行うことができます:
```py
template = """ [...] """
agent = HfAgent(your_endpoint, run_prompt_template=template)
```
<Tip warning={true}>
`<<all_tools>>` 文字列と `<<prompt>>` は、エージェントが使用できるツールを認識し、ユーザーのプロンプトを正しく挿入できるように、`template` のどこかに定義されていることを確認してください。
</Tip>
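プレースホルダーの置き換えは、概念的には次のような単純な文字列置換としてスケッチできます(`build_run_prompt` は説明用の仮の関数名であり、ライブラリの実際の内部実装そのものではありません):

```python
def build_run_prompt(template: str, tool_descriptions: str, task: str) -> str:
    # テンプレート内のプレースホルダーを、ツール一覧とユーザーのタスクで置き換える
    prompt = template.replace("<<all_tools>>", tool_descriptions)
    return prompt.replace("<<prompt>>", task)


template = 'Tools:\n<<all_tools>>\n\nTask: "<<prompt>>"\n\nAnswer:'
print(build_run_prompt(template, "- image_generator: creates an image from a prompt", "Draw a tree"))
```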
ๅๆงใซใ`chat` ใใญใณใใใใณใใฌใผใใไธๆธใใใใใจใใงใใพใใใชใใ`chat` ใขใผใใงใฏๅธธใซไปฅไธใฎๅฝขๅผใงไบคๆใ่กใใใพใ๏ผ
```text
Human: <<task>>
Assistant:
```
ใใใใฃใฆใใซในใฟใ `chat`ใใญใณใใใใณใใฌใผใใฎไพใใใฎใใฉใผใใใใไฝฟ็จใใใใจใ้่ฆใงใใไปฅไธใฎใใใซใใคใณในใฟใณในๅๆใซ`chat`ใใณใใฌใผใใไธๆธใใงใใพใใ
```py
template = """ [...] """
agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
```
<Tip warning={true}>
`<<all_tools>>` という文字列が `template` 内で定義されていることを確認してください。これにより、エージェントは使用可能なツールを把握できます。
</Tip>
ไธกๆนใฎๅ ดๅใใใญใณใใใใณใใฌใผใใฎไปฃใใใซใใณใใฅใใใฃใฎ่ชฐใใใในใใใใใณใใฌใผใใไฝฟ็จใใใๅ ดๅใฏใใชใใธใใชIDใๆธกใใใจใใงใใพใใใใใฉใซใใฎใใญใณใใใฏใ[ใใฎใชใใธใใช](https://huggingface.co/datasets/huggingface-tools/default-prompts) ใซใใใพใใฎใงใๅ่ใซใชใใพใใ
カスタムプロンプトをHubのリポジトリにアップロードしてコミュニティと共有する場合は、次のことを確認してください:
- データセットリポジトリを使用すること
- `run` コマンド用のプロンプトテンプレートを `run_prompt_template.txt` という名前のファイルに配置すること
- `chat` コマンド用のプロンプトテンプレートを `chat_prompt_template.txt` という名前のファイルに配置すること
## Using custom tools
ใใฎใปใฏใทใงใณใงใฏใ็ปๅ็ๆใซ็นๅใใ2ใคใฎๆขๅญใฎใซในใฟใ ใใผใซใๅฉ็จใใพใ๏ผ
- [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation) ใใใๅคใใฎ็ปๅๅคๆดใๅฏ่ฝใซใใใใใซ [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool) ใซ็ฝฎใๆใใพใใ
- ็ปๅใฎใขใใในใฑใผใชใณใฐ็จใฎๆฐใใใใผใซใใใใฉใซใใฎใใผใซใใใฏในใซ่ฟฝๅ ใใพใ๏ผ[diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool) ใฏๆขๅญใฎ็ปๅๅคๆใใผใซใ็ฝฎใๆใใพใใ
ไพฟๅฉใช [`load_tool`] ้ขๆฐใไฝฟ็จใใฆใซในใฟใ ใใผใซใใญใผใใใพใ๏ผ
```py
from transformers import load_tool
controlnet_transformer = load_tool("diffusers/controlnet-canny-tool")
upscaler = load_tool("diffusers/latent-upscaler-tool")
```
エージェントにカスタムツールを追加すると、ツールの説明と名前がエージェントのプロンプトに自動的に含まれます。したがって、エージェントがカスタムツールの使用方法を理解できるように、カスタムツールには適切に記述された説明と名前が必要です。
`controlnet_transformer`の説明と名前を見てみましょう。
```py
print(f"Description: '{controlnet_transformer.description}'")
print(f"Name: '{controlnet_transformer.name}'")
```
gives
```text
Description: 'This is a tool that transforms an image with ControlNet according to a prompt.
It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.'
Name: 'image_transformer'
```
ๅๅใจ่ชฌๆใฏๆญฃ็ขบใงใใใ[ๅณ้ธใใใใใผใซ](./transformers_agents#a-curated-set-of-tools)ใฎในใฟใคใซใซๅใฃใฆใใพใใ
ๆฌกใซใ`controlnet_transformer`ใจ`upscaler`ใไฝฟใฃใฆใจใผใธใงใณใใใคใณในใฟใณในๅใใพใใ
```py
tools = [controlnet_transformer, upscaler]
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools)
```
以下のコマンドは、以下の情報を提供します:
```text
image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0
8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools`
```
ไธ้ฃใฎๅณ้ธใใใใใผใซใซใฏใใงใซ `image_transformer` ใใผใซใใใใใใใใซในใฟใ ใใผใซใง็ฝฎใๆใใพใใ
<Tip>
既存のツールを上書きすることは、特定のタスクに既存のツールをまったく同じ目的で使用したい場合に有益であることがあります。なぜなら、エージェントはその特定のタスクの使用方法に精通しているからです。この場合、カスタムツールは既存のツールとまったく同じAPIに従うか、そのツールを使用するすべての例が更新されるようにプロンプトテンプレートを適応させる必要があります。
</Tip>
ใขใใในใฑใผใฉใผใใผใซใซใฏ `image_upscaler` ใจใใๅๅใไปใใใใใใใฏใใใฉใซใใฎใใผใซใใใฏในใซใฏใพใ ๅญๅจใใชใใใใๅใซใใผใซใฎใชในใใซ่ฟฝๅ ใใใพใใ
ใจใผใธใงใณใใ็พๅจไฝฟ็จๅฏ่ฝใชใใผใซใใใฏในใ็ขบ่ชใใใซใฏใ`agent.toolbox` ๅฑๆงใไฝฟ็จใงใใพใใ
```py
print("\n".join([f"- {a}" for a in agent.toolbox.keys()]))
```
```text
- document_qa
- image_captioner
- image_qa
- image_segmenter
- transcriber
- summarizer
- text_classifier
- text_qa
- text_reader
- translator
- image_transformer
- text_downloader
- image_generator
- video_generator
- image_upscaler
```
ๆณจๆ: `image_upscaler` ใใจใผใธใงใณใใฎใใผใซใใใฏในใฎไธ้จใจใชใฃใใใจใซๆณจ็ฎใใฆใใ ใใใ
ใใใงใฏใๆฐใใใใผใซใ่ฉฆใใฆใฟใพใใใ๏ผ[Transformers Agents Quickstart](./transformers_agents#single-execution-run) ใง็ๆใใ็ปๅใๅๅฉ็จใใพใใ
```py
from diffusers.utils import load_image
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png"
)
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>
็พใใๅฌใฎ้ขจๆฏใซใใฎ็ปๅใๅค่บซใใใพใใใ๏ผ
```py
image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image)
```
```text
==Explanation from the agent==
I will use the following tool: `image_transformer` to transform the image.
==Code generated by the agent==
image = image_transformer(image, prompt="A frozen lake and snowy forest")
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter.png" width=200>
ๆฐใใ็ปๅๅฆ็ใใผใซใฏใ้ๅธธใซๅผทๅใช็ปๅใฎๅคๆดใ่กใใใจใใงใใControlNetใซๅบใฅใใฆใใพใใ
ใใใฉใซใใงใฏใ็ปๅๅฆ็ใใผใซใฏใตใคใบใ512x512ใใฏใปใซใฎ็ปๅใ่ฟใใพใใใใใๆกๅคงใงใใใ่ฆใฆใฟใพใใใใ
```py
image = agent.run("Upscale the image", image=image)
```
```text
==Explanation from the agent==
I will use the following tool: `image_upscaler` to upscale the image.
==Code generated by the agent==
upscaled_image = image_upscaler(image)
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter_upscale.png" width=400>
ใจใผใธใงใณใใฏใใใญใณใใใ็ปๅใฎๆกๅคงใใใใใฎ่ชฌๆใจใใผใซใฎๅๅใ ใใๅบใซใๆฐใใซ่ฟฝๅ ใใใใขใใในใฑใผใชใณใฐใใผใซใซ่ชๅ็ใซใใใใณใฐใใๆญฃใใๅฎ่กใงใใพใใใ
ๆฌกใซใๆฐใใใซในใฟใ ใใผใซใไฝๆใใๆนๆณใ่ฆใฆใฟใพใใใใ
### Adding new tools
ใใฎใปใฏใทใงใณใงใฏใใจใผใธใงใณใใซ่ฟฝๅ ใงใใๆฐใใใใผใซใฎไฝๆๆนๆณใ็คบใใพใใ
#### Creating a new tool
まず、ツールの作成から始めましょう。あまり役立たないけれど楽しいタスクとして、特定のタスクに関してHugging Face Hubで最もダウンロードされたモデルを取得する機能を追加します。
以下のコードでそれを行うことができます:
```python
from huggingface_hub import list_models
task = "text-classification"
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(model.id)
```
ใฟในใฏ `text-classification` ใฎๅ ดๅใใใใฏ `'facebook/bart-large-mnli'` ใ่ฟใใพใใ`translation` ใฎๅ ดๅใ`'t5-base'` ใ่ฟใใพใใ
ใใใใจใผใธใงใณใใๅฉ็จใงใใใใผใซใซๅคๆใใๆนๆณใฏไฝใงใใใใ๏ผใในใฆใฎใใผใซใฏใไธป่ฆใชๅฑๆงใไฟๆใใในใผใใผใฏใฉใน `Tool` ใซไพๅญใใฆใใพใใ็งใใกใฏใใใใ็ถๆฟใใใฏใฉในใไฝๆใใพใ:
```python
from transformers import Tool
class HFModelDownloadsTool(Tool):
    pass
```
このクラスにはいくつかの必要な要素があります:
- `name` ๅฑๆง๏ผใใใฏใใผใซ่ชไฝใฎๅๅใซๅฏพๅฟใใไปใฎใใผใซใจ่ชฟๅใใใใใซ `model_download_counter` ใจๅไปใใพใใ
- `description` ๅฑๆง๏ผใใใฏใจใผใธใงใณใใฎใใญใณใใใๅใใใใใซไฝฟ็จใใใพใใ
- `inputs` ใจ `outputs` ๅฑๆง๏ผใใใใๅฎ็พฉใใใใจใงใPython ใคใณใฟใผใใชใฟใผใๅใซ้ขใใ่ณขๆใช้ธๆใ่กใใฎใซๅฝน็ซใกใใใผใซใHubใซใใใทใฅใใ้ใซgradio-demoใ็ๆใงใใใใใซใชใใพใใใใใใฏใไบๆณใใใๅคใฎใชในใใงใใใ`text`ใ`image`ใใพใใฏ`audio`ใซใชใใใจใใใใพใใ
- `__call__` ใกใฝใใ๏ผใใใซใฏๆจ่ซใณใผใใๅซใพใใฆใใพใใใใใฏไธ่จใง่ฉฆใใใณใผใใงใ๏ผ
ใใกใใ็พๅจใฎใฏใฉในใฎๅค่ฆณใงใ๏ผ
```python
from transformers import Tool
from huggingface_hub import list_models
class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = (
        "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. "
        "It takes the name of the category (such as text-classification, depth-estimation, etc), and "
        "returns the name of the checkpoint."
    )

    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, task: str):
        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id
```
ใใฆใไปๅบฆใฏใใผใซใไฝฟใใใใใซใชใใพใใใใใฎใใผใซใใใกใคใซใซไฟๅญใใใกใคใณในใฏใชใใใใใคใณใใผใใใพใใใใใใฎใใกใคใซใ `model_downloads.py` ใจใใๅๅใซใใ็ตๆใฎใคใณใใผใใณใผใใฏๆฌกใฎใใใซใชใใพใ๏ผ
```python
from model_downloads import HFModelDownloadsTool
tool = HFModelDownloadsTool()
```
他の人々に利益をもたらし、より簡単な初期化のために、それをHubにあなたの名前空間でプッシュすることをお勧めします。これを行うには、`tool` 変数で `push_to_hub` を呼び出すだけです:
```python
tool.push_to_hub("hf-model-downloads")
```
ใจใผใธใงใณใใใใผใซใไฝฟ็จใใๆนๆณใซใคใใฆใๆ็ตในใใใใ่ฆใฆใฟใพใใใใ
#### Having the agent use the tool
Hubใซใใใใผใซใใใใพใใใใใฏๆฌกใฎใใใซใคใณในใฟใณในๅใงใใพใ๏ผใฆใผใถใผๅใใใผใซใซๅใใใฆๅคๆดใใฆใใ ใใ๏ผ:
```python
from transformers import load_tool
tool = load_tool("lysandre/hf-model-downloads")
```
ใจใผใธใงใณใใงไฝฟ็จใใใใใซใฏใใจใผใธใงใณใใฎๅๆๅใกใฝใใใฎ `additional_tools` ใใฉใกใผใฟใซใใใๆธกใใ ใใงใ๏ผ
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```
which outputs the following:
```text
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)
==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
```
ไปฅไธใฎใใญในใใฏใๆฌกใฎใชใผใใฃใชใ็ๆใใพใใ
| **Audio** |
|------------------------------------------------------------------------------------------------------------------------------------------------------|
| <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/> |
<Tip>
特定のLLMに依存することがあり、うまく機能させるためには非常に正確なプロンプトが必要なものもあります。ツールの名前と説明を明確に定義することは、エージェントによって活用されるために非常に重要です。
</Tip>
### Replacing existing tools
ๆขๅญใฎใใผใซใ็ฝฎใๆใใใซใฏใๆฐใใใขใคใใ ใใจใผใธใงใณใใฎใใผใซใใใฏในใซๅฒใๅฝใฆใใ ใใง่กใใใจใใงใใพใใไปฅไธใฏใใฎๆนๆณใงใ:
```python
from transformers import HfAgent, load_tool
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool")
```
<Tip>
他のツールでツールを置き換える際には注意が必要です!これにより、エージェントのプロンプトも調整されます。これは、タスクに適したより良いプロンプトを持っている場合には良いことですが、他のツールが選択される確率が高くなり、定義したツールの代わりに他のツールが選択されることもあるかもしれません。
</Tip>
## Leveraging gradio-tools
[gradio-tools](https://github.com/freddyaboulton/gradio-tools)ใฏใHugging Face Spacesใใใผใซใจใใฆไฝฟ็จใใใใจใๅฏ่ฝใซใใๅผทๅใชใฉใคใใฉใชใงใใๆขๅญใฎๅคใใฎSpacesใใใณใซในใฟใ Spacesใ่จญ่จใใใใจใใตใใผใใใฆใใพใใ
我々は、`gradio_tools`を使用して`StableDiffusionPromptGeneratorTool`ツールを活用したいと考えています。このツールは`gradio-tools`ツールキットで提供されており、プロンプトを改善し、より良い画像を生成するために使用します。
ใพใใ`gradio_tools`ใใใใผใซใใคใณใใผใใใใใใใคใณในใฟใณในๅใใพใ:
```python
from gradio_tools import StableDiffusionPromptGeneratorTool
gradio_tool = StableDiffusionPromptGeneratorTool()
```
ใใฎใคใณในใฟใณในใ `Tool.from_gradio` ใกใฝใใใซๆธกใใพใ๏ผ
```python
from transformers import Tool
tool = Tool.from_gradio(gradio_tool)
```
これからは、通常のカスタムツールと同じようにそれを管理できます。私たちはプロンプト「a rabbit wearing a space suit」を改善するためにそれを活用します:
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit")
```
The model adequately leverages the tool:
```text
==Explanation from the agent==
I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt.
==Code generated by the agent==
improved_prompt = StableDiffusionPromptGenerator(prompt)
print(f"The improved prompt is {improved_prompt}.")
image = image_generator(improved_prompt)
```
ๆ็ต็ใซ็ปๅใ็ๆใใๅใซ๏ผ

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png">
<Tip warning={true}>
gradio-toolsは、さまざまなモダリティを使用する場合でも、*テキスト*の入力と出力が必要です。この実装は画像と音声オブジェクトと連携します。現時点ではこれら2つは互換性がありませんが、サポートを向上させるために取り組んでおり、迅速に互換性が向上するでしょう。
</Tip>
## Future compatibility with Langchain
私たちはLangchainを愛しており、非常に魅力的なツールのスイートを持っていると考えています。これらのツールを扱うために、Langchainはさまざまなモダリティで作業する場合でも、*テキスト*の入出力が必要であり、これはオブジェクトのシリアル化バージョン(つまり、ディスクに保存されたバージョン)であることが多いです。
ใใฎ้ใใซใใใtransformers-agentsใจlangchain้ใงใฏใใซใใขใใชใใฃใๅฆ็ใใใฆใใพใใใ
ใใฎๅถ้ใฏๅฐๆฅใฎใใผใธใงใณใง่งฃๆฑบใใใใใจใ็ฎๆใใฆใใใ็ฑๅฟใชlangchainใฆใผใถใผใใใฎไปปๆใฎๆฏๆดใๆญ่ฟใใพใใ
私たちはより良いサポートを提供したいと考えています。お手伝いいただける場合は、ぜひ[問題を開いて](https://github.com/huggingface/transformers/issues/new)、お考えのことを共有してください。
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Load adapters with ๐ค PEFT
[[open-in-colab]]
[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) メソッドは、事前学習済みモデルのパラメータをファインチューニング中に凍結し、その上にわずかな訓練可能なパラメータ(アダプター)を追加するアプローチです。アダプターは、タスク固有の情報を学習するために訓練されます。このアプローチは、メモリ使用量が少なく、完全にファインチューニングされたモデルと比較して計算リソースを低く抑えつつ、同等の結果を生成することが示されています。

PEFTで訓練されたアダプターは通常、完全なモデルのサイズよりも1桁小さく、共有、保存、読み込みが便利です。
<div class="flex flex-col justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
   <figcaption class="text-center">Hubに格納されているOPTForCausalLMモデルのアダプター重みは、モデル全体のサイズの約6MBで、モデル重みの全サイズは約700MBです。</figcaption>
</div>
๐ค PEFTใฉใคใใฉใชใซใคใใฆ่ฉณใใ็ฅใใใๅ ดๅใฏใ[ใใญใฅใกใณใใผใทใงใณ](https://huggingface.co/docs/peft/index)ใใ่ฆงใใ ใใใ
## Setup
๐ค PEFTใใคใณในใใผใซใใฆๅงใใพใใใ๏ผ
```bash
pip install peft
```
ๆฐๆฉ่ฝใ่ฉฆใใฆใฟใใๅ ดๅใใฝใผในใใใฉใคใใฉใชใใคใณในใใผใซใใใใจใซ่ๅณใใใใใใใใพใใ๏ผ
```bash
pip install git+https://github.com/huggingface/peft.git
```
## Supported PEFT models
๐ค TransformersใฏใใใใคใใฎPEFT๏ผParameter Efficient Fine-Tuning๏ผใกใฝใใใใใคใใฃใใซใตใใผใใใฆใใใใญใผใซใซใพใใฏHubใซๆ ผ็ดใใใใขใใใฟใผใฆใงใคใใ็ฐกๅใซ่ชญใฟ่พผใใงๅฎ่กใพใใฏใใฌใผใใณใฐใงใใพใใไปฅไธใฎใกใฝใใใใตใใผใใใใฆใใพใ๏ผ
- [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora)
- [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3)
- [AdaLoRA](https://arxiv.org/abs/2303.10512)
他のPEFTメソッドを使用したい場合、プロンプト学習やプロンプト調整などについて詳しく知りたい場合、または🤗 PEFTライブラリ全般については、[ドキュメンテーション](https://huggingface.co/docs/peft/index)を参照してください。
## Load a PEFT adapter
๐ค TransformersใใPEFTใขใใใฟใผใขใใซใ่ชญใฟ่พผใใงไฝฟ็จใใใซใฏใHubใชใใธใใชใพใใฏใญใผใซใซใใฃใฌใฏใใชใซ `adapter_config.json` ใใกใคใซใจใขใใใฟใผใฆใงใคใใๅซใพใใฆใใใใจใ็ขบ่ชใใฆใใ ใใใๆฌกใซใ`AutoModelFor` ใฏใฉในใไฝฟ็จใใฆPEFTใขใใใฟใผใขใใซใ่ชญใฟ่พผใใใจใใงใใพใใใใจใใฐใๅ ๆ่จ่ชใขใใชใณใฐ็จใฎPEFTใขใใใฟใผใขใใซใ่ชญใฟ่พผใใซใฏ๏ผ
1. PEFTใขใใซใฎIDใๆๅฎใใพใใ
2. ใใใ[`AutoModelForCausalLM`] ใฏใฉในใซๆธกใใพใใ
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
```
<Tip>
PEFTใขใใใฟใผใ`AutoModelFor`ใฏใฉในใพใใฏๅบๆฌใขใใซใฏใฉใน๏ผ`OPTForCausalLM`ใพใใฏ`LlamaForCausalLM`ใชใฉ๏ผใง่ชญใฟ่พผใใใจใใงใใพใใ
</Tip>
ใพใใ`load_adapter`ใกใฝใใใๅผใณๅบใใใจใงใPEFTใขใใใฟใผใ่ชญใฟ่พผใใใจใใงใใพใ๏ผ
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
## Load in 8bit or 4bit
`bitsandbytes` 統合は、8ビットおよび4ビットの精度データ型をサポートしており、大規模なモデルを読み込む際にメモリを節約するのに役立ちます(詳細については `bitsandbytes` 統合の[ガイド](./quantization#bitsandbytes-integration)を参照してください)。[`~PreTrainedModel.from_pretrained`] に `load_in_8bit` または `load_in_4bit` パラメータを追加し、`device_map="auto"` を設定してモデルを効果的にハードウェアに分散配置できます:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```
## Add a new adapter
You can use [`~peft.PeftModel.add_adapter`] to add a new adapter to a model with an existing adapter, as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:
```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig
model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
init_lora_weights=False
)
model.add_adapter(lora_config, adapter_name="adapter_1")
```
To add a new adapter:
```py
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
```
[`~peft.PeftModel.set_adapter`] lets you set which adapter to use:
```py
# use adapter_1
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# use adapter_2
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
```
## Enable and disable adapters
Once you've added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:
```py
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)
# to initiate with random weights
peft_config.init_lora_weights = False
model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
```
To disable the adapter module:
```py
model.disable_adapters()
output = model.generate(**inputs)
```
## Train a PEFT adapter
PEFT adapters are supported by the [`Trainer`] class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:
<Tip>

If you aren't familiar with fine-tuning a model with [`Trainer`], take a look at the [Fine-tune a pretrained model](training) tutorial.

</Tip>
1. Define your adapter configuration with the task type and hyperparameters (see [`~peft.LoraConfig`] for more details about the hyperparameters).
```py
from peft import LoraConfig
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
)
```
2. Add the adapter to the model.
```py
model.add_adapter(peft_config)
```
3. Now you can pass the model to [`Trainer`]!
```py
trainer = Trainer(model=model, ...)
trainer.train()
```
To save your trained adapter and load it back:
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Use tokenizers from 🤗 Tokenizers
[`PreTrainedTokenizerFast`] depends on the [🤗 Tokenizers](https://huggingface.co/docs/tokenizers) library. Tokenizers obtained from the 🤗 Tokenizers library can be loaded into 🤗 Transformers very simply.

Before getting into the specifics, let's first start by creating a dummy tokenizer in a few lines:
```python
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace
>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]
>>> tokenizer.train(files, trainer)
```
We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future reuse.
## Loading directly from the tokenizer object
Let's see how we can leverage this tokenizer object in the 🤗 Transformers library. The [`PreTrainedTokenizerFast`] class allows for easy instantiation by accepting the instantiated *tokenizer* object as an argument:
```python
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
```
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the [tokenizer page](main_classes/tokenizer) for more information.
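As a self-contained sketch of the whole flow (the tiny in-memory corpus here is purely illustrative), a tokenizer can be trained with 🤗 Tokenizers, wrapped, and called like any other 🤗 Transformers tokenizer:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast

# train a small BPE tokenizer on an in-memory corpus
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
corpus = ["hello world", "hello tokenizers", "fast tokenizers are fast"]
tokenizer.train_from_iterator(corpus, trainer)

# wrap it and call it like any other Transformers tokenizer
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer, unk_token="[UNK]", pad_token="[PAD]"
)
encoding = fast_tokenizer("hello world")
print(encoding["input_ids"])
```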
## Loading from a JSON file
In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:
```python
>>> tokenizer.save("tokenizer.json")
```
The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization method using the `tokenizer_file` parameter:
```python
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
```
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the [tokenizer page](main_classes/tokenizer) for more information.
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# Benchmarks
<Tip warning={true}>
Hugging Face's benchmarking tools are deprecated and it is advised to use external benchmarking libraries to measure the speed and memory complexity of Transformer models.
</Tip>
[[open-in-colab]]
Let's take a look at how 🤗 Transformers models can be benchmarked, best practices, and already available benchmarks.

A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/benchmark.ipynb).
## How to benchmark 🤗 Transformers models
The classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] allow to flexibly benchmark 🤗 Transformers models. The benchmark classes allow us to measure the _peak memory usage_ and _required time_ for both _inference_ and _training_.

<Tip>

Here, _inference_ is defined by a single forward pass, and _training_ by a single forward pass and backward pass.

</Tip>

The benchmark classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] expect an object of type [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`], respectively, for instantiation. [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] are data classes and contain all relevant configurations for their corresponding benchmark class. In the following example, it is shown how a BERT model of type _bert-base-cased_ can be benchmarked.
<frameworkcontent>
<pt>
```py
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
>>> args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
>>> benchmark = PyTorchBenchmark(args)
```
</pt>
<tf>
```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments
>>> args = TensorFlowBenchmarkArguments(
... models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> benchmark = TensorFlowBenchmark(args)
```
</tf>
</frameworkcontent>
Here, three arguments are given to the benchmark argument data classes, namely `models`, `batch_sizes`, and `sequence_lengths`. The argument `models` is required and expects a `list` of model identifiers from the [model hub](https://huggingface.co/models). The `list` arguments `batch_sizes` and `sequence_lengths` define the size of the `input_ids` on which the model is benchmarked. There are many more parameters that can be configured via the benchmark argument data classes. For more detail on these, one can either directly consult the files `src/transformers/benchmark/benchmark_args_utils.py`, `src/transformers/benchmark/benchmark_args.py` (for PyTorch), and `src/transformers/benchmark/benchmark_args_tf.py` (for Tensorflow), or run the following shell commands from the root directory to print out a descriptive list of all configurable parameters for PyTorch and Tensorflow, respectively.
<frameworkcontent>
<pt>
```bash
python examples/pytorch/benchmarking/run_benchmark.py --help
```
An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 8 8 0.006
bert-base-uncased 8 32 0.006
bert-base-uncased 8 128 0.018
bert-base-uncased 8 512 0.088
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased 8 8 1227
bert-base-uncased 8 32 1281
bert-base-uncased 8 128 1307
bert-base-uncased 8 512 1539
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 08:58:43.371351
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```bash
python examples/tensorflow/benchmarking/run_benchmark_tf.py --help
```
An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 8 8 0.005
bert-base-uncased 8 32 0.008
bert-base-uncased 8 128 0.022
bert-base-uncased 8 512 0.105
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased 8 8 1330
bert-base-uncased 8 32 1330
bert-base-uncased 8 128 1330
bert-base-uncased 8 512 1770
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:26:35.617317
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent>
By default, the _time_ and the _required memory_ for _inference_ are benchmarked. In the example output above, the first two sections show the results corresponding to _inference time_ and _inference memory_. In addition, all relevant information about the computing environment, e.g. the GPU type, the system, the library versions, etc., are printed out in the third section under _ENVIRONMENT INFORMATION_. This information can optionally be saved in a _.csv_ file by adding the argument `save_to_csv=True` to [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`], respectively. In this case, every section is saved in a separate _.csv_ file. The path to each _.csv_ file can optionally be defined via the argument data classes.
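For example, one way to set this up might look like the following sketch. The `*_csv_file` argument names are taken from the benchmark args data classes mentioned above; verify them against your installed version, and treat the file paths as illustrative:

```python
from transformers import PyTorchBenchmarkArguments

# configure the benchmark to write each result section to its own CSV file
args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[8, 32],
    save_to_csv=True,
    inference_time_csv_file="inference_time.csv",
    inference_memory_csv_file="inference_memory.csv",
    env_info_csv_file="env_info.csv",
)
# passing `args` to PyTorchBenchmark and calling `run()` then writes the
# CSV files in addition to printing the results
```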
Instead of benchmarking pre-trained models via their model identifier, e.g. `bert-base-uncased`, the user can alternatively benchmark an arbitrary configuration of any available model class. In this case, a `list` of configurations must be inserted with the benchmark args as follows.
<frameworkcontent>
<pt>
```py
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
>>> args = PyTorchBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)
>>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 128 0.006
bert-base 8 512 0.006
bert-base 8 128 0.018
bert-base 8 512 0.088
bert-384-hid 8 8 0.006
bert-384-hid 8 32 0.006
bert-384-hid 8 128 0.011
bert-384-hid 8 512 0.054
bert-6-lay 8 8 0.003
bert-6-lay 8 32 0.004
bert-6-lay 8 128 0.009
bert-6-lay 8 512 0.044
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1277
bert-base 8 32 1281
bert-base 8 128 1307
bert-base 8 512 1539
bert-384-hid 8 8 1005
bert-384-hid 8 32 1027
bert-384-hid 8 128 1035
bert-384-hid 8 512 1255
bert-6-lay 8 8 1097
bert-6-lay 8 32 1101
bert-6-lay 8 128 1127
bert-6-lay 8 512 1359
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:35:25.143267
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</pt>
<tf>
```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig
>>> args = TensorFlowBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)
>>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 8 0.005
bert-base 8 32 0.008
bert-base 8 128 0.022
bert-base 8 512 0.106
bert-384-hid 8 8 0.005
bert-384-hid 8 32 0.007
bert-384-hid 8 128 0.018
bert-384-hid 8 512 0.064
bert-6-lay 8 8 0.002
bert-6-lay 8 32 0.003
bert-6-lay 8 128 0.0011
bert-6-lay 8 512 0.074
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1330
bert-base 8 32 1330
bert-base 8 128 1330
bert-base 8 512 1770
bert-384-hid 8 8 1330
bert-384-hid 8 32 1330
bert-384-hid 8 128 1330
bert-384-hid 8 512 1540
bert-6-lay 8 8 1330
bert-6-lay 8 32 1330
bert-6-lay 8 128 1330
bert-6-lay 8 512 1540
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:38:15.487125
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
</tf>
</frameworkcontent>
Again, _inference time_ and _required memory_ for _inference_ are measured, but this time for customized configurations of the `BertModel` class. This feature can especially be helpful when deciding for which configuration the model should be trained.
## Benchmark best practices
This section lists a couple of best practices one should be aware of when benchmarking a model.
- Currently, only single-device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run by setting the `CUDA_VISIBLE_DEVICES` environment variable in the shell, e.g. run `export CUDA_VISIBLE_DEVICES=0` before running the code.
- The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate memory measurement, it is recommended to run each memory benchmark in a separate process, which ensures `no_multi_processing` is set to `True`.
- One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc., so benchmark results on their own are not very useful for the community.
## Sharing your benchmark
Previously, all available core models (10 at the time) were benchmarked for _inference time_, across many different settings: using PyTorch, with and without TorchScript, and using TensorFlow, with and without XLA. All of those tests were done on CPUs (except for TensorFlow XLA).

The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2), and the results are available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing).

With the new benchmark tools, it is easier than ever to share your benchmark results with the community:

- [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md).
- [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# 🤗 Transformers
State-of-the-art machine learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🐙 **Multimodal**: table question answering, optical character recognition (OCR), information extraction from scanned documents, video classification, and visual question answering.

🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life; train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX or TorchScript for deployment in production environments.

Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>
## Contents

The documentation is organized into five sections:

- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or how to create and share a custom model.
- **CONCEPTUAL GUIDES** offers more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:
  - **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
  - **MODELS** details the classes and functions related to each model implemented in the library.
  - **INTERNAL HELPERS** details utility classes and functions used internally.
### Supported models
<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->
1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago ใใ) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942)
1. **[AltCLIP](https://huggingface.co/docs/transformers/main/model_doc/altclip)** (BAAI ใใ) Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679)
1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (MIT ใใ) Yuan Gong, Yu-An Chung, James Glass ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778)
1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (Facebook ใใ) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461)
1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (รcole polytechnique ใใ) Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research ใใ) Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)
1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft ใใ) Hangbo Bao, Li Dong, Furu Wei ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254)
1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (Google ใใ) Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805)
1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (Google ใใ) Sascha Rothe, Shashi Narayan, Aliaksei Severyn ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461)
1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research ใใ) Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/)
1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (Google Research ใใ) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062)
1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (Google Research ใใ) Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062)
1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (Microsoft Research AI4Science ใใ) Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9)
1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (Google AI ใใ) Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Big Transfer (BiT)](https://arxiv.org/abs/1912.11370)Houlsby.
1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (Facebook ใใ) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637)
1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (Facebook ใใ) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637)
1. **[BLIP](https://huggingface.co/docs/transformers/main/model_doc/blip)** (Salesforce ใใ) Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086)
1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (BigScience workshop ใใ) [BigScience Workshop](https://bigscience.huggingface.co/) ใใๅ
ฌ้ใใใพใใ.
1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (Alexa ใใ) Adrian de Wynter and Daniel J. Perry ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499)
1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google Research ใใ) Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (Inria/Facebook/Sorbonne ใใ) Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, รric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894)
1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google Research ใใ) Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874)
1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2, RoBERTa and Multilingual BERT; the compressed models are named [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), respectively.
1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le.
1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** and **ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GIT](https://huggingface.co/docs/transformers/main/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/main/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](https://huggingface.co/docs/transformers/main/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah's Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](https://huggingface.co/docs/transformers/main/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/main/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](https://huggingface.co/docs/transformers/main/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (Berkeley ใใ) Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316)
1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (Microsoft ใใ) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft ใใ) Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883)
1. **[Swin2SR](https://huggingface.co/docs/transformers/main/model_doc/swin2sr)** (University of Wรผrzburg ใใ) Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
1. **[SwitchTransformers](https://huggingface.co/docs/transformers/main/model_doc/switch_transformers)** (Google ใใ) William Fedus, Barret Zoph, Noam Shazeer ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961)
1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (Google AI ใใ) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683)
1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (Google AI ใใ) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu ใใๅ
ฌ้ใใใใฌใใธใใชใผ [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511)
1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (Microsoft Research ใใ) Brandon Smock, Rohith Pesala, Robin Abraham ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061)
1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI ใใ) Jonathan Herzig, Paweล Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (Microsoft Research ใใ) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653)
1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (HuggingFace ใใ).
1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (Facebook ใใ) Gedas Bertasius, Heng Wang, Lorenzo Torresani ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095)
1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (the University of California at Berkeley ใใ) Michael Janner, Qiyang Li, Sergey Levine ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039)
1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU ใใ) Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (Microsoft ใใ), Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282)
1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (Google Research ใใ) Yi Tay, Mostafa Dehghani, Vinh Q ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (Microsoft Research ใใ) Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (Microsoft Research ใใ) Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
1. **[UPerNet](https://huggingface.co/docs/transformers/main/model_doc/upernet)** (Peking University ใใ) Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221)
1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (Tsinghua University and Nankai University ใใ) Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Visual Attention Network](https://arxiv.org/abs/2202.09741)
1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (Multimedia Computing Group, Nanjing University ใใ) Zhan Tong, Yibing Song, Jue Wang, Limin Wang ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602)
1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain ใใ) Wonjae Kim, Bokyung Son, Ildoo Kim ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334)
1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (Google AI ใใ) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP ใใ) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557)
1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (Google AI ใใ) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (Meta AI ใใ) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)
1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (Meta AI ใใ) Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141)
1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (Facebook AI ใใ) Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477)
1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI ใใ) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171)
1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI ใใ) Qiantong Xu, Alexei Baevski, Michael Auli ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680)
1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (Microsoft Research ใใ) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI ใใ) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (Microsoft Research ใใ) Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816)
1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668)
1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (Facebook ใใ) Guillaume Lample and Alexis Conneau ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291)
1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (Microsoft Research ใใ) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063)
1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (Facebook AI ใใ), Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116)
1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI ใใ), Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572)
1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU ใใ) Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [โXLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237)
1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI ใใ) Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296)
1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (Facebook AI ใใ) Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979)
1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (Huazhong University of Science & Technology ใใ) Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666)
1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (the University of Wisconsin - Madison ใใ) Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ: [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714)
### Supported frameworks

The table below represents the current support in the library for each of those models, whether they have a Python tokenizer (called "slow"), a "fast" tokenizer backed by the 🤗 Tokenizers library, and whether they have support in PyTorch, TensorFlow and/or Flax.
<!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!-->
| Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support |
|:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:|
| ALBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| AltCLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| Audio Spectrogram Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
| BEiT | ❌ | ❌ | ✅ | ❌ | ✅ |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bert Generation | ✅ | ❌ | ✅ | ❌ | ❌ |
| BigBird | ✅ | ✅ | ✅ | ❌ | ✅ |
| BigBird-Pegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
| BioGpt | ✅ | ❌ | ✅ | ❌ | ❌ |
| BiT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Blenderbot | ✅ | ✅ | ✅ | ✅ | ✅ |
| BlenderbotSmall | ✅ | ✅ | ✅ | ✅ | ✅ |
| BLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| BLOOM | ❌ | ✅ | ✅ | ❌ | ❌ |
| CamemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| CANINE | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chinese-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIPSeg | ❌ | ❌ | ✅ | ❌ | ❌ |
| CodeGen | ✅ | ✅ | ✅ | ❌ | ❌ |
| Conditional DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| ConvBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ConvNeXT | ❌ | ❌ | ✅ | ✅ | ❌ |
| CTRL | ✅ | ❌ | ✅ | ✅ | ❌ |
| CvT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Data2VecAudio | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecText | ❌ | ❌ | ✅ | ❌ | ❌ |
| Data2VecVision | ❌ | ❌ | ✅ | ✅ | ❌ |
| DeBERTa | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeBERTa-v2 | ✅ | ✅ | ✅ | ✅ | ❌ |
| Decision Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deformable DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DeiT | ❌ | ❌ | ✅ | ✅ | ❌ |
| DETR | ❌ | ❌ | ✅ | ❌ | ❌ |
| DiNAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| DistilBERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| DonutSwin | ❌ | ❌ | ✅ | ❌ | ❌ |
| DPR | ✅ | ✅ | ✅ | ✅ | ❌ |
| DPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ELECTRA | ✅ | ✅ | ✅ | ✅ | ✅ |
| Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| ERNIE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ESM | ✅ | ❌ | ✅ | ✅ | ❌ |
| FairSeq Machine-Translation | ✅ | ❌ | ✅ | ❌ | ❌ |
| FlauBERT | ✅ | ❌ | ✅ | ✅ | ❌ |
| FLAVA | ❌ | ❌ | ✅ | ❌ | ❌ |
| FNet | ✅ | ✅ | ✅ | ❌ | ❌ |
| Funnel Transformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| GIT | ❌ | ❌ | ✅ | ❌ | ❌ |
| GLPN | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT Neo | ❌ | ❌ | ✅ | ❌ | ✅ |
| GPT NeoX | ❌ | ✅ | ✅ | ❌ | ❌ |
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
| I-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ImageGPT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Jukebox | ✅ | ❌ | ✅ | ❌ | ❌ |
| LayoutLM | ✅ | ✅ | ✅ | ✅ | ❌ |
| LayoutLMv2 | ✅ | ✅ | ✅ | ❌ | ❌ |
| LayoutLMv3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| LED | ✅ | ✅ | ✅ | ✅ | ❌ |
| LeViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| LiLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Longformer | ✅ | ✅ | ✅ | ✅ | ❌ |
| LongT5 | ❌ | ❌ | ✅ | ❌ | ✅ |
| LUKE | ✅ | ❌ | ✅ | ❌ | ❌ |
| LXMERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| M-CTC-T | ❌ | ❌ | ✅ | ❌ | ❌ |
| M2M100 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Marian | ✅ | ❌ | ✅ | ✅ | ✅ |
| MarkupLM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Mask2Former | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
| Megatron-BERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| MobileNetV1 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileNetV2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| MobileViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| MPNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| MT5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| MVP | ✅ | ✅ | ✅ | ❌ | ❌ |
| NAT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nezha | ❌ | ❌ | ✅ | ❌ | ❌ |
| Nyströmformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| OpenAI GPT | ✅ | ✅ | ✅ | ✅ | ❌ |
| OpenAI GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ❌ | ❌ | ✅ | ✅ | ✅ |
| OWL-ViT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pegasus | ✅ | ✅ | ✅ | ✅ | ✅ |
| PEGASUS-X | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perceiver | ✅ | ❌ | ✅ | ❌ | ❌ |
| PLBart | ✅ | ❌ | ✅ | ❌ | ❌ |
| PoolFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| QDQBert | ❌ | ❌ | ✅ | ❌ | ❌ |
| RAG | ✅ | ❌ | ✅ | ✅ | ❌ |
| REALM | ✅ | ✅ | ✅ | ❌ | ❌ |
| Reformer | ✅ | ✅ | ✅ | ❌ | ❌ |
| RegNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RemBERT | ✅ | ✅ | ✅ | ✅ | ❌ |
| ResNet | ❌ | ❌ | ✅ | ✅ | ✅ |
| RetriBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa-PreLayerNorm | ❌ | ❌ | ✅ | ✅ | ✅ |
| RoCBert | ✅ | ❌ | ✅ | ❌ | ❌ |
| RoFormer | ✅ | ✅ | ✅ | ✅ | ✅ |
| SegFormer | ❌ | ❌ | ✅ | ✅ | ❌ |
| SEW | ❌ | ❌ | ✅ | ❌ | ❌ |
| SEW-D | ❌ | ❌ | ✅ | ❌ | ❌ |
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
| Swin Transformer V2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Swin2SR | ❌ | ❌ | ✅ | ❌ | ❌ |
| SwitchTransformers | ❌ | ❌ | ✅ | ❌ | ❌ |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Table Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TAPAS | ✅ | ❌ | ✅ | ✅ | ❌ |
| Time Series Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| TimeSformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Trajectory Transformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| Transformer-XL | ✅ | ❌ | ✅ | ✅ | ❌ |
| TrOCR | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeech | ❌ | ❌ | ✅ | ❌ | ❌ |
| UniSpeechSat | ❌ | ❌ | ✅ | ❌ | ❌ |
| UPerNet | ❌ | ❌ | ✅ | ❌ | ❌ |
| VAN | ❌ | ❌ | ✅ | ❌ | ❌ |
| VideoMAE | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViLT | ❌ | ❌ | ✅ | ❌ | ❌ |
| Vision Encoder decoder | ❌ | ❌ | ✅ | ✅ | ✅ |
| VisionTextDualEncoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| VisualBERT | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViT | ❌ | ❌ | ✅ | ✅ | ✅ |
| ViT Hybrid | ❌ | ❌ | ✅ | ❌ | ❌ |
| ViTMAE | ❌ | ❌ | ✅ | ✅ | ❌ |
| ViTMSN | ❌ | ❌ | ✅ | ❌ | ❌ |
| Wav2Vec2 | ✅ | ❌ | ✅ | ✅ | ✅ |
| Wav2Vec2-Conformer | ❌ | ❌ | ✅ | ❌ | ❌ |
| WavLM | ❌ | ❌ | ✅ | ❌ | ❌ |
| Whisper | ✅ | ❌ | ✅ | ✅ | ❌ |
| X-CLIP | ❌ | ❌ | ✅ | ❌ | ❌ |
| XGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM | ✅ | ❌ | ✅ | ✅ | ❌ |
| XLM-ProphetNet | ✅ | ❌ | ✅ | ❌ | ❌ |
| XLM-RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| XLM-RoBERTa-XL | ❌ | ❌ | ✅ | ❌ | ❌ |
| XLNet | ✅ | ✅ | ✅ | ✅ | ❌ |
| YOLOS | ❌ | ❌ | ✅ | ❌ | ❌ |
| YOSO | ❌ | ❌ | ✅ | ❌ | ❌ |
<!-- End table-->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Export to TFLite
[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited compute power, memory, and power consumption.

A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension.
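As an aside, the `.tflite` container can be recognized without TensorFlow installed: a TFLite file is a FlatBuffer whose file identifier, the four bytes `TFL3`, sits at byte offset 4, right after the 4-byte offset to the root table. A minimal sketch (the helper name `is_tflite_flatbuffer` is our own, not part of any library):

```python
def is_tflite_flatbuffer(header: bytes) -> bool:
    """Heuristic check for a TFLite model file: FlatBuffers store the
    file identifier b"TFL3" at byte offset 4, immediately after the
    4-byte offset to the root table."""
    return len(header) >= 8 and header[4:8] == b"TFL3"


# First 8 bytes of a typical model.tflite (the root offset varies per file):
print(is_tflite_flatbuffer(b"\x1c\x00\x00\x00TFL3"))        # True
# A zip archive (e.g. a Keras .keras file) is not a TFLite FlatBuffer:
print(is_tflite_flatbuffer(b"PK\x03\x04\x00\x00\x00\x00"))  # False
```

In practice you would pass the first 8 bytes read from the exported file, e.g. `open("bert_tflite/model.tflite", "rb").read(8)`.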
🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through its `exporters.tflite` module. For the list of supported model architectures, please refer to the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).

To export a model to TFLite, install the required dependencies:
```bash
pip install optimum[exporters-tf]
```
To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view the help in the command line:
```bash
optimum-cli export tflite --help
```
To export a model's checkpoint from the 🤗 Hub, for example, `bert-base-uncased`, run the following command:
```bash
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
```
You should see logs indicating progress and showing where the resulting `model.tflite` is saved, like this:
```bash
Validating TFLite model...
-[โ] TFLite model output names match reference model (logits)
- Validating TFLite Model output "logits":
-[โ] (1, 128, 30522) matches (1, 128, 30522)
-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
The exported model was saved at: bert_tflite
```
The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using the CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on the 🤗 Hub.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# How to convert a ๐ค Transformers model to TensorFlow?
Having multiple frameworks available to use with 🤗 Transformers gives you flexibility to play to their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than [adding a new model from scratch](add_new_model)! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you.

This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, so we highly encourage you to suggest improvements to this guide!

Before you dive deeper, it is recommended that you check the following resources if you're new to 🤗 Transformers:
- [General overview of 🤗 Transformers](add_new_model#general-overview-of-transformers)
- [Hugging Face's TensorFlow Philosophy](https://huggingface.co/blog/tensorflow-philosophy)

In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let's get started!

<Tip>

Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?

Check the `model_type` field of the `config.json` of your model of choice ([example](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in 🤗 Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)).

</Tip>
## Step-by-step guide to add TensorFlow model architecture code
There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our [general overview of 🤗 Transformers](add_new_model#general-overview-of-transformers) that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models:

- Don't reinvent the wheel! More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems.
- Great model implementations survive the test of time. This doesn't happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long lived.
- Ask for help when you're stuck! The 🤗 Transformers team is here to help, and we've probably found solutions to the same problems you're facing.

Here's an overview of the steps needed to add a TensorFlow model architecture:
1. Select the model you wish to convert
2. Prepare the transformers dev environment
3. (Optional) Understand the theoretical aspects and the existing implementation
4. Implement the model architecture
5. Implement model tests
6. Submit the pull request
7. (Optional) Build demos and share with the world
### 1.-3. Prepare your model contribution
**1. Select the model you wish to convert**

Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you don't have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the [weight conversion section](#adding-tensorflow-weights-to-hub) of this page.

For simplicity, the remainder of this guide assumes you've decided to contribute the TensorFlow version of *BrandNewBert* (the same example as in the [guide to add a new model](add_new_model)).

<Tip>

Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for `BrandNewBert` on the [pull request GitHub page](https://github.com/huggingface/transformers/pulls?q=is%3Apr) to confirm that there is no TensorFlow-related pull request.

</Tip>
**2. Prepare the transformers dev environment**

Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR.

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. Set up a development environment, for instance by running the following commands:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install TensorFlow and then do:

```bash
pip install -e ".[quality]"
```

**Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

4. Create a branch with a descriptive name from your main branch:

```bash
git checkout -b add_tf_brand_new_bert
```

5. Fetch and rebase to the current main branch:

```bash
git fetch upstream
git rebase upstream/main
```

6. Add an empty `.py` file in `transformers/src/models/brandnewbert/` named `modeling_tf_brandnewbert.py`. This will be your TensorFlow model file.

7. Push the changes to your account using:

```bash
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
```

8. Once you are satisfied, go to the webpage of your fork on GitHub and click on "Pull request". Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified of future changes.

9. Change the PR into a draft by clicking on "Convert to draft" on the right side of the GitHub pull request web page.

Now you have set up a development environment to port *BrandNewBert* to TensorFlow in 🤗 Transformers.
**3. (Optional) Understand the theoretical aspects and the existing implementation**

You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, that's fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the information necessary to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don't have to spend too much time on the theoretical aspects; instead, focus on the practical ones, namely the existing model documentation page (e.g. the [model docs for BERT](model_doc/bert)).

After you've grasped the basics of the model you are about to implement, it's important to understand the existing implementation. This is a great opportunity to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side.

It's perfectly natural to feel overwhelmed by the amount of information you've just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions on our [forum](https://discuss.huggingface.co/).
### 4. Model implementation

It's now time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of `modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into `modeling_tf_brand_new_bert.py`. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import `TFBrandNewBert` and `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model.

Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible:

- Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`).
- Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to `tf.keras.layers.Dense`, and `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`. If you're not sure about a specific operation, you can refer to the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the [PyTorch documentation](https://pytorch.org/docs/stable/).
- Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn't have a direct replacement, the odds are that someone else already had the same problem.
- By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line.
- Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (`1e-5` in PyTorch, `1e-3` [in TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)). Double-check the documentation!
- PyTorch's `nn.Parameter` variables typically need to be initialized within a TF Layer's `build()`. See the following example: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220)
- If the PyTorch model has `#copied from ...` on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
- Assigning the `name` attribute correctly in TensorFlow functions is critical to do the `from_pt=True` weight cross-loading. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not properly set, you will see it in the error message when loading the model weights.
- The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)). `TFBrandNewBertModel` will simply be a wrapper around this layer.
- Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel` will need to hold an example of inputs to the model, the `dummy_inputs` ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)).
- If you get stuck, ask for help. We're here to help you! 🤗

In addition to the model file itself, you will also need to add pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs ([example](https://github.com/huggingface/transformers/pull/18020/files)). Here's a list of the needed manual changes:

- Include all the public classes of *BrandNewBert* in `src/transformers/__init__.py`
- Add the *BrandNewBert* classes to the corresponding Auto classes in `src/transformers/models/auto/modeling_tf_auto.py`
- Add the modeling file to the documentation test file list in `utils/documentation_tests.txt`
- Add the lazy loading classes related to *BrandNewBert* in `src/transformers/utils/dummy_tf_objects.py`
- Update the import structures for the public classes in `src/transformers/models/brand_new_bert/__init__.py`
- Add the documentation pointers to the public methods of *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md`
- Add yourself to the list of contributors of *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md`
- Finally, add a green tick ✅ to the TensorFlow column of *BrandNewBert* in `docs/source/en/index.md`

When you're happy with your implementation, run the following checklist to confirm that your model architecture is ready:

1. All layers that behave differently at train time (e.g. Dropout) are called with a `training` argument, which is propagated all the way from the top-level classes
2. You have used `#copied from ...` whenever possible
3. The `call` function of `TFBrandNewBertMainLayer` and all classes that use it is decorated with `@unpack_inputs`
4. `TFBrandNewBertMainLayer` is decorated with `@keras_serializable`
5. A TensorFlow model can be loaded from PyTorch weights using `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)`
6. You can call the TensorFlow model using the expected input format
### 5. Add model tests

Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in `tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary TensorFlow replacements.

For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights.

After you're done, it's time for the moment of truth: run the tests! 😬
```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```
The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and `breakpoint()`). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck.

When all tests pass, congratulations, your model is nearly ready to be added to the 🤗 Transformers library! 🎉
**6. Submit the pull request**

Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, `make fixup` 🪄. This will automatically fix any formatting issues that would otherwise cause our automatic checks to fail.

It's now time to convert your draft pull request into a real pull request. To do so, click on the "Ready for review" button and add Joao (`@gante`) and Matt (`@Rocketknight1`) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.

After all reviewers are happy with the state of your PR, the final action point is to remove the `from_pt=True` flag in the `.from_pretrained()` calls. Since there are no TensorFlow weights yet, you will have to add them! Check the section below for instructions on how to do that.

Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time:
```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```
and we will merge your PR! Congratulations on the milestone 🎉

**7. (Optional) Build demos and share with the world**

One of the hardest parts about open-source is discovery. How can other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣

There are two main ways to share your model with the community:
- Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our [community-driven demos](https://huggingface.co/docs/transformers/community).
- Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community. Your model can now be used by thousands of engineers and researchers around the world 🌍! We will be happy to retweet your posts and help you share your work with the community.
## Adding TensorFlow weights to 🤗 Hub

Assuming that the TensorFlow model architecture is available in 🤗 Transformers, converting PyTorch weights into TensorFlow weights is a breeze!

Here's how to do it:
1. Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command `huggingface-cli login` (you can find your access tokens [here](https://huggingface.co/settings/tokens)).
2. Run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the name of the model repository containing the PyTorch weights you want to convert.
3. Tag `@joaogante` and `@Rocketknight1` in the 🤗 Hub PR the command above has just created.

That's it! 🎉
## Debugging mismatches across ML frameworks 🐛

At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What's going on? 🤔

First of all, let's talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of them. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than `1e-5` at all stages of the model.

As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issue:

1. Locate the source of the mismatches. The model you're converting probably has near identical inner variables up to a certain point. Place `breakpoint()` statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problems.
2. Now that you've pinpointed the source of the issue, get in touch with the 🤗 Transformers team. It is possible that we've seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues.
3. If there is no solution in sight, it means you'll have to go deeper. The good news is that you've located the issue, so you can focus on the problematic instruction, abstracting away the rest of the model! The bad news is that you'll have to venture into the source implementation of said instruction. In some cases, you might find an issue with a reference implementation. Don't refrain from opening an issue in the upstream repository.

In some cases, in discussion with the 🤗 Transformers team, we might find that fixing the mismatch is infeasible. When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The `pt-to-tf` CLI mentioned above has a `--max-error` flag to override the error message at weight conversion time.
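As a rough illustration of step 1 of the workflow above, the numerical comparison used to hunt down a framework mismatch can be sketched in plain NumPy. The variable names here are illustrative only, not part of any Transformers API:

```python
import numpy as np

def max_abs_diff(a, b):
    """Maximum element-wise absolute difference between two hidden states."""
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    assert a.shape == b.shape, f"shape mismatch: {a.shape} vs {b.shape}"
    return float(np.max(np.abs(a - b)))

# Pretend these are the same intermediate activation captured from the
# PyTorch and TensorFlow implementations at the same breakpoint.
pt_hidden = np.array([[0.10000, -0.25000], [1.30000, 0.70000]])
tf_hidden = np.array([[0.10000, -0.25000], [1.30001, 0.70000]])

diff = max_abs_diff(pt_hidden, tf_hidden)
print(f"max abs diff: {diff:.2e}")
# A difference above 1e-5 at this layer means the divergence starts here (or earlier).
print("within tolerance" if diff < 1e-5 else "mismatch found")
```

Repeating this comparison layer by layer, top-down, isolates the first operation whose outputs diverge between the two frameworks.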
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, which enables distributed CPU training efficiently.

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and the [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It currently only works on Linux platforms.

Check out more detailed information for [torch-ccl](https://github.com/intel/torch-ccl).
### Intelยฎ oneCCL Bindings for PyTorch installation:
Wheel files are available for the following Python versions:
| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0 | | โ | โ | โ | โ |
| 1.12.100 | | โ | โ | โ | โ |
| 1.12.0 | | โ | โ | โ | โ |
| 1.11.0 | | โ | โ | โ | โ |
| 1.10.0 | โ | โ | โ | โ | |
```
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```
where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0.
Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl).
Versions of oneCCL and PyTorch must match.
<Tip warning={true}>
oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0)
PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100
</Tip>
## Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

`oneccl_bindings_for_pytorch` is installed along with the MPI tool set. You need to source the environment before using it.
for Intelยฎ oneCCL >= 1.12.0
```
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```
for Intelยฎ oneCCL whose version < 1.12.0
```
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```
#### IPEX installation:

IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. For details, refer to the [single-CPU section](./perf_train_cpu).

The following "Usage in Trainer" takes `mpirun` from the Intel® MPI library as an example.
## Usage in Trainer
To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** to the command arguments.

Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).

The following command enables training with 2 processes on one Xeon node, with one process running per socket. The OMP_NUM_THREADS/CCL_WORKER_COUNT variables can be tuned for optimal performance.
```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex
```
The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process). ppn (processes per node) is set to 2, with one process running per socket. The OMP_NUM_THREADS/CCL_WORKER_COUNT variables can be tuned for optimal performance.

On node0, you need to create a configuration file which contains the IP addresses of each node, and pass that configuration file path as an argument.
```shell script
cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
```
Now, run the following command on node0 and **4DDP** will be enabled on node0 and node1 with BF16 auto mixed precision:
```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
-genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex \
--bf16
```
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Glossary
This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation.

## A

### attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:
```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```
The encoded versions have different lengths:
```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```
Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this:
```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```
We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:
```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```
This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":
```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
### autoencoding models

See [encoder models](#encoder-models) and [masked language modeling](#masked-language-modeling-mlm)

### autoregressive models

See [causal language modeling](#causal-language-modeling) and [decoder models](#decoder-models)

## B

### backbone

The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a [head](#head), which accepts the features as its input to make a prediction. For example, [`ViTModel`] is a backbone without a specific head on top. Other models can also use [`VitModel`] as a backbone, such as [DPT](model_doc/dpt).

## C

### causal language modeling

A pretraining task where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence, but using a mask inside the model to hide the future tokens at a certain timestep.

### channel

Color images are made up of some combination of values in three channels: red, green, and blue (RGB), while grayscale images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image's tensor: [`n_channels`, `height`, `width`] or [`height`, `width`, `n_channels`].

### connectionist temporal classification (CTC)

An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript, for a variety of reasons such as a speaker's different speech rates.

### convolution

A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation, which is repeated over the entire input matrix, with each operation applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision.
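As a minimal sketch of the operation just described (a 2D convolution with no padding, written with NumPy only; this is illustrative and not how 🤗 Transformers implements convolutions):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide `kernel` over `image`, multiplying element-wise and summing."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride : i * stride + kh, j * stride : j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise multiply, then sum
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))                # a simple summing filter
print(conv2d(image, kernel))            # 3x3 output with stride 1
print(conv2d(image, kernel, stride=2))  # 2x2 output with stride 2
```

The `stride` parameter here is the same notion defined in the "stride" entry below: how far the kernel moves between two applications.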
## D

### decoder input IDs

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence-to-sequence tasks, such as translation or summarization, and are usually built in a way specific to each model.

Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. In such models, passing the `labels` is the preferred way to handle training.

Please check each model's docs to see how they handle these input IDs for sequence-to-sequence training.

### decoder models

Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep.

<Youtube id="d_ixlCubqQw"/>

### deep learning (DL)

Machine learning algorithms which use neural networks with several layers.

## E

### encoder models

Also known as autoencoding models, encoder models take an input (such as text or images) and transform it into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like [masked language modeling](#masked-language-modeling-mlm), which masks parts of the input sequence and forces the model to create more meaningful representations.

<Youtube id="H39Z_720T5s"/>

## F

### feature extraction

The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data.

### feed forward chunking

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., for `bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that, since the computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n` individually and concatenate them afterward to `[batch_size, sequence_length, config.hidden_size]`. This trades increased computation time against reduced memory use, but yields a mathematically equivalent result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If `chunk_size` is set to 0, no feed forward chunking is done.
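The equivalence exploited by feed forward chunking can be illustrated with a toy NumPy sketch: applying a position-wise feed forward layer chunk by chunk along the sequence dimension gives the same result as applying it in one go. This ignores the actual `apply_chunking_to_forward` API and only demonstrates the math:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, hidden, intermediate = 2, 8, 4, 16
x = rng.normal(size=(batch, seq_len, hidden))
w1 = rng.normal(size=(hidden, intermediate))
w2 = rng.normal(size=(intermediate, hidden))

def feed_forward(h):
    # Position-wise FFN: expand to the intermediate size, ReLU, project back.
    return np.maximum(h @ w1, 0) @ w2

# Full computation: the intermediate tensor has shape (batch, seq_len, intermediate).
full = feed_forward(x)

# Chunked computation: process `chunk_size` positions at a time, so the
# intermediate tensor only ever has shape (batch, chunk_size, intermediate).
chunk_size = 2
chunks = [feed_forward(x[:, i : i + chunk_size]) for i in range(0, seq_len, chunk_size)]
chunked = np.concatenate(chunks, axis=1)

assert np.allclose(full, chunked)  # mathematically equivalent
```

Smaller chunks mean a smaller peak intermediate tensor at the cost of more (sequential) matrix multiplications, which is exactly the memory/time trade-off `chunk_size` controls.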
### finetuned models

Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added [model head](#head). The model head is then trained on your target dataset.

See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) tutorial for more details, and learn how to fine-tune models with 🤗 Transformers.

## H

### head

The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example:

* [`GPT2ForSequenceClassification`] is a sequence classification head (a linear layer) on top of the base [`GPT2Model`].
* [`ViTForImageClassification`] is an image classification head (a linear layer on top of the final hidden state of the `CLS` token) on top of the base [`ViTModel`].
* [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-(CTC)) on top of the base [`Wav2Vec2Model`].

## I

### image patch

Vision-based Transformer models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the patch size of the model in its configuration.

### inference

Inference is the process of evaluating a model on new data after training is complete. See the [Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) tutorial to learn how to perform inference with 🤗 Transformers.

### input IDs

The input IDs are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:
```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence = "A Titan RTX has 24GB of VRAM"
```
The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary:
```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```
The tokens are either words or subwords. Here, for instance, "VRAM" wasn't in the model vocabulary, so it's been split into "V", "RA" and "M". To indicate that those tokens are not separate words but parts of the same word, a double-hash prefix is added for "RA" and "M":
```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```
These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of [🤗 Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.
```python
>>> inputs = tokenizer(sequence)
```
The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key `input_ids`:
```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```
Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them), which are special IDs the model sometimes uses.

If we decode the previous sequence of IDs,
```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```
we will see
```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```
because this is the way a [`BertModel`] is going to expect its inputs.

## L

### Labels
The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models ([`BertForSequenceClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence.
- For token classification models ([`BertForTokenClassification`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token.
- For masked language modeling ([`BertForMaskedLM`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the labels being the token ID of the masked token, with values to be ignored for the rest (usually -100).
- For sequence-to-sequence tasks ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequence associated with each input sequence. During training, both BART and T5 will make the appropriate `decoder_input_ids` and decoder attention masks internally; they usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework.
- For image classification models ([`ViTForImageClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of each individual image.
- For semantic segmentation models ([`SegformerForSemanticSegmentation`]), the model expects a tensor of dimension `(batch_size, height, width)` with each value of the batch corresponding to the expected label of each individual pixel.
- For object detection models ([`DetrForObjectDetection`]), the model expects a list of dictionaries with `class_labels` and `boxes` keys, where each value of the batch corresponds to the expected labels and number of bounding boxes of each individual image.
- For automatic speech recognition models ([`Wav2Vec2ForCTC`]), the model expects a tensor of dimension `(batch_size, target_length)` with each value corresponding to the expected label of each individual token.
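For instance, the masked language modeling labels described above can be sketched in plain Python. The token IDs here are illustrative, and a real pipeline would use a tokenizer and data collator instead:

```python
import random

MASK_ID = 103      # e.g. BERT's [MASK] token id (illustrative)
IGNORE = -100      # positions the loss function should ignore

def make_mlm_labels(input_ids, mask_prob=0.3, seed=0):
    """Mask some tokens; labels keep the original id there and -100 elsewhere."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in input_ids:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)
            labels.append(tok)      # the model must recover this id
        else:
            masked.append(tok)
            labels.append(IGNORE)   # no loss is computed for unmasked tokens
    return masked, labels

ids = [101, 7592, 2088, 2003, 4965, 102]
masked, labels = make_mlm_labels(ids)
print(masked)
print(labels)
```

Every position is either masked (its label is the original token ID) or untouched (its label is -100, so the loss skips it).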
<Tip>

Each model's labels may be different, so be sure to always check the documentation of each model for more information about its specific labels!

</Tip>

The base models ([`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting features.

### large language models (LLM)

A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3).
## M

### masked language modeling (MLM)

A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.

### multimodal

A task that combines texts with another kind of input (for instance, images).

## N

### Natural language generation (NLG)

All tasks related to generating text (for instance, [Write With Transformers](https://transformer.huggingface.co/), translation).

### Natural language processing (NLP)

A generic way to say "deal with texts".

### Natural language understanding (NLU)

All tasks related to understanding what is in a text (for instance, classifying the whole text or individual words).

## P

### pipeline

A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization.

For more details, see [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).

### pixel values

A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [`batch_size`, `num_channels`, `height`, `width`], and are generated from an image processor.

### pooling

An operation that reduces a matrix into a smaller matrix, either by taking the maximum or the average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation.
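A minimal NumPy sketch of 2x2 max pooling with stride 2, the most common down-sampling variant of the operation just described (illustrative only):

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Take the maximum of each `size` x `size` window, moving by `stride`."""
    h, w = x.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride : i * stride + size,
                          j * stride : j * stride + size].max()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 0],
              [1, 8, 3, 4]], dtype=float)
print(max_pool2d(x))  # each output value is the maximum of one 2x2 window
```

Replacing `.max()` with `.mean()` would give average pooling instead.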
### position IDs

Contrary to RNNs, which have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in the list of tokens.

They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as absolute positional embeddings.

Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
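As a sketch of the default behavior described above, the automatically created absolute position IDs are simply `0 .. seq_length - 1` for each sequence in the batch. This is illustrative, not the actual Transformers implementation:

```python
def default_position_ids(batch_size, seq_length, max_position_embeddings=512):
    """Build absolute position ids in [0, max_position_embeddings - 1]."""
    assert seq_length <= max_position_embeddings, "sequence too long for this model"
    return [list(range(seq_length)) for _ in range(batch_size)]

print(default_position_ids(2, 5))
# each row of the batch gets positions 0..4
```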
### preprocessing
็ใใผใฟใๆฉๆขฐๅญฆ็ฟใขใใซใง็ฐกๅใซๅฆ็ใงใใๅฝขๅผใซๆบๅใใใฟในใฏใงใใไพใใฐใใใญในใใฏ้ๅธธใใใผใฏใณๅใซใใฃใฆๅๅฆ็ใใใพใใไปใฎๅ
ฅๅใฟใคใใซๅฏพใใๅๅฆ็ใฎๅ
ทไฝ็ใชๆนๆณใ็ฅใใใๅ ดๅใฏใ[Preprocess](https://huggingface.co/docs/transformers/preprocessing) ใใฅใผใใชใขใซใใ่ฆงใใ ใใใ
### pretrained model
ใใใใผใฟ๏ผใใจใใฐใWikipediaๅ
จไฝใชใฉ๏ผใงไบๅใซๅญฆ็ฟใใใใขใใซใงใใไบๅๅญฆ็ฟใฎๆนๆณใซใฏใ่ชๅทฑๆๅธซใใใฎ็ฎ็ใๅซใพใใใใญในใใ่ชญใฟๅใใๆฌกใฎๅ่ชใไบๆธฌใใใใจใใใใฎ๏ผ[ๅ ๆ่จ่ชใขใใชใณใฐ](#ๅ ๆ่จ่ชใขใใชใณใฐ)ใๅ็
ง๏ผใใไธ้จใฎๅ่ชใใในใฏใใใใใใไบๆธฌใใใใจใใใใฎ๏ผ[ใในใฏ่จ่ชใขใใชใณใฐ](#ใในใฏ่จ่ชใขใใชใณใฐ-mlm)ใๅ็
ง๏ผใใใใพใใ
้ณๅฃฐใจใใธใงใณใขใใซใซใฏ็ฌ่ชใฎไบๅๅญฆ็ฟใฎ็ฎ็ใใใใพใใใใจใใฐใWav2Vec2ใฏ้ณๅฃฐใขใใซใงใใขใใซใซๅฏพใใฆใ็ใฎใ้ณๅฃฐ่กจ็พใๅฝใฎ้ณๅฃฐ่กจ็พใฎใปใใใใ่ญๅฅใใๅฟ
่ฆใใใๅฏพๆฏ็ใชใฟในใฏใงไบๅๅญฆ็ฟใใใฆใใพใใไธๆนใBEiTใฏใใธใงใณใขใใซใงใไธ้จใฎ็ปๅใใใใใในใฏใใใขใใซใซใในใฏใใใใใใใไบๆธฌใใใใฟในใฏ๏ผใในใฏ่จ่ชใขใใชใณใฐใฎ็ฎ็ใจไผผใฆใใพใ๏ผใงไบๅๅญฆ็ฟใใใฆใใพใใ
## R
### recurrent neural network (RNN)
ใใญในใใๅฆ็ใใใใใซๅฑคใใซใผใใใใใขใใซใฎไธ็จฎใงใใ
### representation learning
็ใใผใฟใฎๆๅณใฎใใ่กจ็พใๅญฆ็ฟใใๆฉๆขฐๅญฆ็ฟใฎใตใใใฃใผใซใใงใใ่กจ็พๅญฆ็ฟใฎๆ่กใฎไธ้จใซใฏๅ่ชๅใ่พผใฟใใชใผใใจใณใณใผใใผใGenerative Adversarial Networks๏ผGANs๏ผใชใฉใใใใพใใ
## S
### sampling rate
็งใใจใซๅใใใใตใณใใซ๏ผใชใผใใฃใชไฟกๅทใชใฉ๏ผใฎๆฐใใใซใๅไฝใงๆธฌๅฎใใใใฎใงใใใตใณใใชใณใฐใฌใผใใฏ้ณๅฃฐใชใฉใฎ้ฃ็ถไฟกๅทใ้ขๆฃๅใใ็ตๆใงใใ
### self-attention
ๅ
ฅๅใฎๅ่ฆ็ด ใฏใใฉใฎไปใฎ่ฆ็ด ใซๆณจๆใๆใในใใใๆคๅบใใพใใ
### self-supervised learning
ใขใใซใใฉใใซใฎใชใใใผใฟใใ่ชๅ่ช่บซใฎๅญฆ็ฟ็ฎๆจใไฝๆใใๆฉๆขฐๅญฆ็ฟๆ่กใฎใซใใดใชใงใใใใใฏ[ๆๅธซใชใๅญฆ็ฟ](#ๆๅธซใชใๅญฆ็ฟ)ใ[ๆๅธซใใๅญฆ็ฟ](#ๆๅธซใใๅญฆ็ฟ)ใจใฏ็ฐใชใใๅญฆ็ฟใใญใปในใฏใฆใผใถใผใใใฏๆ็คบ็ใซใฏ็ฃ็ฃใใใฆใใชใ็นใ็ฐใชใใพใใ
่ชๅทฑๆๅธซใใๅญฆ็ฟใฎ1ใคใฎไพใฏ[ใในใฏ่จ่ชใขใใชใณใฐ](#ใในใฏ่จ่ชใขใใชใณใฐ-mlm)ใงใใขใใซใซใฏไธ้จใฎใใผใฏใณใๅ้คใใใๆใไธใใใใๆฌ ่ฝใใใใผใฏใณใไบๆธฌใใใใใซๅญฆ็ฟใใพใใ
### semi-supervised learning
ใฉใใซไปใใใผใฟใฎๅฐ้ใจใฉใใซใฎใชใใใผใฟใฎๅคง้ใ็ตใฟๅใใใฆใขใใซใฎ็ฒพๅบฆใๅไธใใใๅบ็ฏใชๆฉๆขฐๅญฆ็ฟใใฌใผใใณใฐๆ่กใฎใซใใดใชใงใใ[ๆๅธซใใๅญฆ็ฟ](#ๆๅธซใใๅญฆ็ฟ)ใ[ๆๅธซใชใๅญฆ็ฟ](#ๆๅธซใชใๅญฆ็ฟ)ใจใฏ็ฐใชใใๅๆๅธซใใๅญฆ็ฟใฎใขใใญใผใใฎ1ใคใฏใใปใซใใใฌใผใใณใฐใใงใใใใขใใซใฏใฉใใซไปใใใผใฟใงใใฌใผใใณใฐใใใๆฌกใซใฉใใซใฎใชใใใผใฟใงไบๆธฌใ่กใใพใใใขใใซใๆใ่ชไฟกใๆใฃใฆไบๆธฌใใ้จๅใใฉใใซไปใใใผใฟใปใใใซ่ฟฝๅ ใใใใขใใซใฎๅใใฌใผใใณใฐใซไฝฟ็จใใใพใใ
### sequence-to-sequence (seq2seq)
ๅ
ฅๅใใๆฐใใใทใผใฑใณในใ็ๆใใใขใใซใงใใ็ฟป่จณใขใใซใ่ฆ็ดใขใใซ๏ผ[Bart](model_doc/bart)ใ[T5](model_doc/t5)ใชใฉ๏ผใชใฉใใใใซ่ฉฒๅฝใใพใใ
### stride
[็ณใฟ่พผใฟ](#็ณใฟ่พผใฟ)ใพใใฏ[ใใผใชใณใฐ](#ใใผใชใณใฐ)ใซใใใฆใในใใฉใคใใฏใซใผใใซใ่กๅไธใง็งปๅใใ่ท้ขใๆใใพใใในใใฉใคใใ1ใฎๅ ดๅใใซใผใใซใฏ1ใใฏใปใซใใค็งปๅใใในใใฉใคใใ2ใฎๅ ดๅใใซใผใใซใฏ2ใใฏใปใซใใค็งปๅใใพใใ
### supervised learning
ใขใใซใฎใใฌใผใใณใฐๆนๆณใฎไธใคใงใ็ดๆฅใฉใใซไปใใใผใฟใไฝฟ็จใใฆใขใใซใฎๆง่ฝใไฟฎๆญฃใๆๅฐใใพใใใใผใฟใใใฌใผใใณใฐใใใฆใใใขใใซใซไพ็ตฆใใใใใฎไบๆธฌใๆข็ฅใฎใฉใใซใจๆฏ่ผใใใพใใใขใใซใฏไบๆธฌใใฉใใ ใ่ชคใฃใฆใใใใซๅบใฅใใฆ้ใฟใๆดๆฐใใใใญใปในใฏใขใใซใฎๆง่ฝใๆ้ฉๅใใใใใซ็นฐใ่ฟใใใพใใ
## T
### token
A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords) or a punctuation symbol.
### token Type IDs
Some models' purpose is to do classification on pairs of sentences or question answering.
<Youtube id="0u3ioSwev3s"/>
These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model builds its two sequence input as such:
```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```
We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two arguments (and not a list, like before) like this:
```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"
>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```
which will return:
```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```
This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:
```python
>>> encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second sequence, corresponding to the question, has all its tokens represented by a `1`.

Some models, like [`XLNetModel`], use an additional token represented by a `2`.
### transfer learning
A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed.
### transformer
Self-attention based deep learning model architecture.
## U
### unsupervised learning
A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand.
hf_public_repos/transformers/docs/source/ja/perf_hardware.md
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Custom hardware for training
The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs, make sure to check out Tim Dettmer's excellent [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/).
Let's have a look at some practical advice for GPU setups.
## GPU
When you train bigger models you have essentially three options:

- bigger GPUs
- more GPUs
- more CPU and NVMe (offloaded to by [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support))

Let's start with the case where you have a single GPU.
### Power and Cooling
If you bought an expensive high-end GPU, make sure you give it the correct power supply and sufficient cooling.

**Power**:

Some high-end consumer GPU cards have 2 and sometimes 3 PCI-E 8-pin power sockets. Make sure you have as many independent 12V PCI-E 8-pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as a pigtail cable). That is, if you have 2 sockets on the GPU, you want 2 PCI-E 8-pin cables going from your PSU to the card and not one that has 2 PCI-E 8-pin connectors at the end! You won't get the full performance out of your card otherwise.
Each PCI-E 8-pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power.

Some other cards may use PCI-E 12-pin connectors, and these can deliver up to 500-600W of power.

Low-end cards may use 6-pin connectors, which supply up to 75W of power.

Additionally, you need a high-quality power supply unit (PSU) that provides the stable voltage the card requires.

And of course, the PSU needs to have enough unused wattage to power the card.
**Cooling**:

When a GPU gets overheated it will start throttling down and will not deliver full performance, and it can even shut down if it gets too hot.

It's hard to say exactly what temperature to strive for when a GPU is heavily loaded, but probably anything under +80C is good, and lower is better - perhaps 70-75C is an excellent range to be in. The throttling down is likely to start at around 84-90C. But other than throttling performance, a prolonged very high temperature is likely to reduce the lifespan of a GPU.
Next, let's have a look at one of the most important aspects when having multiple GPUs: connectivity.
### Multi-GPU Connectivity
If you use multiple GPUs, the way the cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:
```
nvidia-smi topo -m
```
and it will tell you how the GPUs are inter-connected. On a machine with dual GPUs which are connected with NVLink, you will most likely see something like:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
```
On a different machine without NVLink, we may see:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X PHB 0-11 N/A
GPU1 PHB X 0-11 N/A
```
The report includes this legend:
```
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
So the first report, `NV2`, tells us the GPUs are interconnected with 2 NVLinks, and in the second report, `PHB`, we have a typical consumer-level PCIe+Bridge setup.
Check what type of connectivity you have on your setup. Some of these connection methods will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).

Depending on the type of scalability solution used, the connectivity speed can have a major or a minor impact. If the GPUs rarely need to sync, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes very important to achieve faster training.
#### NVlink
[NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.

Each new generation provides faster bandwidth. For example, here is a quote from the [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):
> Third-Generation NVLinkยฎ
> GA102 GPUs utilize NVIDIAโs third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.)
So the higher the `X` you get in the `NVX` report in the output of `nvidia-smi topo -m`, the better. The generation will depend on your GPU architecture.
Let's compare the execution of a gpt2 language model training over a small sample of wikitext.

The results are:
| NVlink | Time |
| ----- | ---: |
| Y | 101s |
| N | 131s |
You can see that NVLink completes the training ~23% faster. In the second benchmark, we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.
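The ~23% figure follows directly from the two runtimes in the table; as a quick check:

```python
nvlink_s = 101.0     # train_runtime with NVLink
no_nvlink_s = 131.0  # train_runtime with NCCL_P2P_DISABLE=1
speedup = (no_nvlink_s - nvlink_s) / no_nvlink_s
print(f"{speedup:.0%}")  # 23%
```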
Here is the full benchmark code and outputs:
```bash
# DDP w/ NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
hf_public_repos/transformers/docs/source/ja/hpo_train.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Hyperparameter Search using Trainer API
🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] provides an API for hyperparameter search. This doc shows how to enable it in an example.
## Hyperparameter Search backend
[`Trainer`] currently supports four hyperparameter search backends:
[optuna](https://optuna.org/), [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html) and [wandb](https://wandb.ai/site/sweeps).

You should install them before using them as the hyperparameter search backend:
```bash
pip install optuna/sigopt/wandb/ray[tune]
```
## How to enable Hyperparameter search in example
Define the hyperparameter search space; different backends need different formats.

For sigopt, see sigopt [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter). It looks like the following:
```py
>>> def sigopt_hp_space(trial):
... return [
... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
... {
... "categorical_values": ["16", "32", "64", "128"],
... "name": "per_device_train_batch_size",
... "type": "categorical",
... },
... ]
```
For optuna, see optuna [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py). It looks like the following:
```py
>>> def optuna_hp_space(trial):
... return {
... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
... }
```
Optuna provides multi-objective hyperparameter optimization (HPO). You can pass `direction` in `hyperparameter_search` and define your own `compute_objective` to return multiple objective values. The Pareto Front (`List[BestRun]`) will be returned by `hyperparameter_search`; you should refer to the test case `TrainerHyperParameterMultiObjectOptunaIntegrationTest` in [test_trainer](https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py). It looks like the following:
```py
>>> best_trials = trainer.hyperparameter_search(
... direction=["minimize", "maximize"],
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
```
For Ray Tune, see [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html). It looks like the following:
```py
>>> def ray_hp_space(trial):
... return {
... "learning_rate": tune.loguniform(1e-6, 1e-4),
... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
... }
```
For wandb, see [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration). It looks like the following:
```py
>>> def wandb_hp_space(trial):
... return {
... "method": "random",
... "metric": {"name": "objective", "goal": "minimize"},
... "parameters": {
... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
... "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
... },
... }
```
Define a `model_init` function and pass it to the [`Trainer`]. As an example:
```py
>>> def model_init(trial):
... return AutoModelForSequenceClassification.from_pretrained(
... model_args.model_name_or_path,
... from_tf=bool(".ckpt" in model_args.model_name_or_path),
... config=config,
... cache_dir=model_args.cache_dir,
... revision=model_args.model_revision,
... token=True if model_args.use_auth_token else None,
... )
```
Create a [`Trainer`] with your `model_init` function, training arguments, training and test datasets, and evaluation function:
```py
>>> trainer = Trainer(
... model=None,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... tokenizer=tokenizer,
... model_init=model_init,
... data_collator=data_collator,
... )
```
Call hyperparameter search to get the best trial parameters. The backend can be `"optuna"` / `"sigopt"` / `"wandb"` / `"ray"`. The direction can be `"minimize"` or `"maximize"`, which indicates whether to optimize for a smaller or a greater objective.

You can define your own `compute_objective` function. If it is not defined, the default `compute_objective` is called, and the sum of evaluation metrics such as F1 is returned as the objective value.
```py
>>> best_trial = trainer.hyperparameter_search(
... direction="maximize",
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
```
## Hyperparameter search For DDP finetune
Currently, hyperparameter search for DDP (Distributed Data Parallel) is enabled for optuna and sigopt. Only the rank-zero process generates the search trials and passes the arguments to the other ranks.
hf_public_repos/transformers/docs/source/ja/run_scripts.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Train with a script
Along with the 🤗 Transformers [notebooks](./notebooks/README), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).

You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy). These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.
This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified.
## Setup
To successfully run the latest version of the example scripts, you have to install 🤗 Transformers from source in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
For older versions of the example scripts, click on the toggle below:
<details>
<summary>Examples for older versions of 🤗 Transformers</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
</ul>
</details>
Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example:
```bash
git checkout tags/v3.5.1
```
After you've set up the correct library version, navigate to the example folder of your choice and install the example-specific requirements:
```bash
pip install -r requirements.txt
```
## Run a script
<frameworkcontent>
<pt>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
</pt>
<tf>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset using Keras on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.
```bash
python examples/tensorflow/summarization/run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
```
</tf>
</frameworkcontent>
## Distributed training and mixed precision
The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use those features in a script. To enable them:

- Add the `fp16` argument to enable mixed precision.
- Set the number of GPUs to use with the `nproc_per_node` argument.
```bash
torchrun \
--nproc_per_node 8 pytorch/summarization/run_summarization.py \
--fp16 \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
TensorFlow scripts utilize a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.
## Run a script on a TPU
<frameworkcontent>
<pt>
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use.
```bash
python xla_spawn.py --num_cores 8 \
summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
</pt>
<tf>
Of course, Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument.
```bash
python run_summarization.py \
--tpu name_of_tpu_resource \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
```
</tf>
</frameworkcontent>
## Run a script with ๐ค Accelerate
🤗 [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it:

> Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts:
```bash
pip install git+https://github.com/huggingface/accelerate
```
Instead, you need to use the `run_summarization_no_trainer.py` script. 🤗 Accelerate supported scripts have a `task_no_trainer.py` file in the folder. Begin by running the following command to create and save a configuration file:
```bash
accelerate config
```
Test your setup to make sure it is configured correctly:
```bash
accelerate test
```
Now you are ready to launch the training:
```bash
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
```
## Use a custom dataset
The summarization script supports custom datasets as long as they are a CSV or JSON Lines file. When you use your own dataset, you need to specify several additional arguments:

- `train_file` and `validation_file` specify the path to your training and validation files.
- `text_column` is the input text to summarize.
- `summary_column` is the target text to output.

A summarization script using a custom dataset would look like this:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--text_column text_column_name \
--summary_column summary_column_name \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate
```
## Test a script
It is often a good idea to run your script on a smaller number of dataset examples to make sure everything works as expected before committing to the entire dataset. Use the following arguments to truncate the dataset to a maximum number of samples:

- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check:
```bash
examples/pytorch/summarization/run_summarization.py -h
```
## Resume training from checkpoint
Another helpful option is to resume training from a previous checkpoint. This ensures you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.

The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--output_dir previous_output_dir \
--predict_with_generate
```
The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--resume_from_checkpoint path_to_specific_checkpoint \
--predict_with_generate
```
## Share your model
All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin:
```bash
huggingface-cli login
```
Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`.

To give your repository a specific name, use the `push_to_hub_model_id` argument. The repository will be automatically listed under your namespace.

The following example shows how to upload a model with a specific repository name:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--push_to_hub \
--push_to_hub_model_id finetuned-t5-cnn_dailymail \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
hf_public_repos/transformers/docs/source/ja/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Multilingual models for inference
[[open-in-colab]]
There are several multilingual models in 🤗 Transformers, and their inference usage differs from monolingual models. Not *all* multilingual model usage is different though. Some models, like [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference.
## XLM
XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't.
### XLM with language embeddings
The following XLM models use language embeddings to specify the language used at inference:

- `xlm-mlm-ende-1024` (Masked language modeling, English-German)
- `xlm-mlm-enfr-1024` (Masked language modeling, English-French)
- `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian)
- `xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages)
- `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages)
- `xlm-clm-enfr-1024` (Causal language modeling, English-French)
- `xlm-clm-ende-1024` (Causal language modeling, English-German)
Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes.
In this example, load the `xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French):
```py
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel
>>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
```
ใใผใฏใใคใถใผใฎ `lang2id` ๅฑๆงใฏใใใฎใขใใซใฎ่จ่ชใจใใฎ ID ใ่กจ็คบใใพใใ
```py
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
```
ๆฌกใซใๅ
ฅๅไพใไฝๆใใพใใ
```py
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
```
่จ่ช ID ใ `en` ใซ่จญๅฎใใใใใไฝฟ็จใใฆ่จ่ชใฎๅใ่พผใฟใๅฎ็พฉใใพใใ่จ่ชใฎๅใ่พผใฟใฏใ่ฑ่ชใฎ่จ่ช ID ใงใใ `0` ใงๅใใใใใใณใฝใซใงใใใใฎใใณใฝใซใฏ `input_ids` ใจๅใใตใคใบใซใใๅฟ
่ฆใใใใพใใ
```py
>>> language_id = tokenizer.lang2id["en"] # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
>>> # We reshape it to be of size (batch_size, sequence_length)
>>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)
```
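่จ่ชๅใ่พผใฟใใ`input_ids` ใจๅใๅฝข็ถใง่จ่ช ID ใ็นฐใ่ฟใใ ใใใจใใ็นใฏใtorch ใไฝฟใใใ็ด็ฒใช Python ใงใๆฆๅฟต็ใซ็ขบ่ชใงใใพใใไปฅไธใฏ่ชฌๆ็จใฎ็ฐกๅใชในใฑใใใงใ๏ผ`make_langs` ใฏๆฌ็จฟใไปฎใซๅฎ็พฉใใ้ขๆฐๅใงใๅฎ้ใฎ API ใงใฏใใใพใใ๏ผใ

```python
def make_langs(input_ids, language_id):
    """input_ids: (batch, seq_len) ๅฝข็ถใฎใในใใชในใใ
    ๅใใผใฏใณไฝ็ฝฎใซๅใ่จ่ช ID ใไธฆในใใๅใๅฝข็ถใฎใชในใใ่ฟใใ"""
    return [[language_id] * len(row) for row in input_ids]


# ่ฑ่ช (ID 0) ใฎไพ: input_ids ใจๅใๅฝข็ถใซใชใ
langs = make_langs([[35378, 759, 10269, 21]], language_id=0)
print(langs)  # [[0, 0, 0, 0]]
```

ๅฎ้ใฎใณใผใใงใฏใใใ `torch.tensor` ใงใฉใใใใฆ `model(input_ids, langs=langs)` ใฎใใใซๆธกใๅฝขใซใชใใพใใ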
ใใใงใ`input_ids` ใจ่จ่ชใฎๅใ่พผใฟใใขใใซใซๆธกใใใจใใงใใพใใ
```py
>>> outputs = model(input_ids, langs=langs)
```
[run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) ในใฏใชใใใฏใ`xlm-clm` ใใงใใฏใใคใณใใไฝฟ็จใใฆใ่จ่ชใๅใ่พผใพใใใใญในใใ็ๆใงใใพใใ
### ่จ่ชใฎๅใ่พผใฟใใชใXLM
ๆฌกใฎ XLM ใขใใซใฏใๆจ่ซไธญใซ่จ่ชใฎๅใ่พผใฟใๅฟ
่ฆใจใใพใใใ
- `xlm-mlm-17-1280` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐใ17ใฎ่จ่ช)
- `xlm-mlm-100-1280` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐใ100ใฎ่จ่ช)
ใใใใฎใขใใซใฏใไปฅๅใฎ XLM ใใงใใฏใใคใณใใจใฏ็ฐใชใใไธ่ฌ็ใชๆใฎ่กจ็พใซไฝฟ็จใใใพใใ
## BERT
ไปฅไธใฎ BERT ใขใใซใฏใๅค่จ่ชใฟในใฏใซไฝฟ็จใงใใพใใ
- `bert-base-multilingual-uncased` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐ + ๆฌกใฎๆใฎไบๆธฌใ102ใฎ่จ่ช)
- `bert-base-multilingual-cased` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐ + ๆฌกใฎๆใฎไบๆธฌใ104ใฎ่จ่ช)
ใใใใฎใขใใซใฏๆจ่ซไธญใซ่จ่ชใฎๅใ่พผใฟใๅฟ
่ฆใจใใใๆ่ใใ่จ่ชใ่ญๅฅใใฆใใใซๅฟใใฆๆจ่ซใ่กใใพใใ
## XLM-RoBERTa
ๆฌกใฎ XLM-RoBERTa ใขใใซใฏใๅค่จ่ชใฟในใฏใซไฝฟ็จใงใใพใใ
- `xlm-roberta-base` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐใ100ใฎ่จ่ช)
- `xlm-roberta-large` (ใในใฏๅใใใ่จ่ชใขใใชใณใฐใ100ใฎ่จ่ช)
XLM-RoBERTa ใฏใ100ใฎ่จ่ชใงๆฐใใซไฝๆใใใณใฏใชใผใใณใฐใใใ2.5 TB ใฎ CommonCrawl ใใผใฟใงใใฌใผใใณใฐใใใพใใใใใใฏใๅ้กใใทใผใฑใณในใฎใฉใใซไปใใ่ณชๅๅฟ็ญใชใฉใฎใใฆใณในใใชใผใ ใฟในใฏใซใใใฆใmBERT ใ XLM ใชใฉไปฅๅใซใชใชใผในใใใๅค่จ่ชใขใใซใๅคงๅน
ใซไธๅใใพใใ
## M2M100
ๆฌกใฎ M2M100 ใขใใซใฏใๅค่จ่ช็ฟป่จณใซไฝฟ็จใงใใพใใ
- `facebook/m2m100_418M` (็ฟป่จณ)
- `facebook/m2m100_1.2B` (็ฟป่จณ)
ใใฎไพใงใฏใ`facebook/m2m100_418M` ใใงใใฏใใคใณใใใญใผใใใฆใไธญๅฝ่ชใใ่ฑ่ชใซ็ฟป่จณใใพใใ ใใผใฏใใคใถใผใงใฝใผใน่จ่ชใ่จญๅฎใงใใพใใ
```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "ไธ่ฆๆๆๅทซๅธซ็ไบๅ, ๅ ็บไปๅๆฏๅพฎๅฆ็, ๅพๅฟซๅฐฑๆ็ผๆ."
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```
ใใญในใใใใผใฏใณๅใใพใใ
```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```
M2M100 ใฏใๆๅใซ็ๆใใใใใผใฏใณใจใใฆใฟใผใฒใใ่จ่ช ID ใๅผทๅถ็ใซใฟใผใฒใใ่จ่ชใซ็ฟป่จณใใพใใ ่ฑ่ชใซ็ฟป่จณใใใซใฏใ`generate` ใกใฝใใใง `forced_bos_token_id` ใ `en` ใซ่จญๅฎใใพใใ
```py
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
```
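`forced_bos_token_id` ใฎๅฝนๅฒ๏ผๆๅใซ็ๆใใใใใผใฏใณใใฟใผใฒใใ่จ่ชใฎ ID ใซๅบๅฎใใ๏ผใฏใๆฌกใฎใใใชใใผใขใใซใง่ชฌๆใงใใพใใ`next_token` ใฏ่ชฌๆ็จใฎไปฎใฎ้ขๆฐใงใใใๅฎ้ใฎ `generate` ใฎๅฎ่ฃ
ใใฎใใฎใงใฏใใใพใใใ

```python
def greedy_generate(next_token, forced_bos_id, max_new_tokens):
    # ๆๅใฎ็ๆใใผใฏใณใฏใฟใผใฒใใ่จ่ชใฎ ID ใซๅผทๅถใใใ
    generated = [forced_bos_id]
    for _ in range(max_new_tokens - 1):
        generated.append(next_token(generated))
    return generated


# ใใใผใฎ next_token: ใใใพใงใฎ็ณปๅ้ทใ่ฟใใ ใ๏ผID 7 ใไปฎใฎ่จ่ช ID๏ผ
tokens = greedy_generate(lambda toks: len(toks), forced_bos_id=7, max_new_tokens=3)
print(tokens)  # [7, 1, 2]
```

ใใฎใใใซใๅ
้ ญใ ใใๅผทๅถใใใใใจใงใไปฅ้ใฎใใผใฏใณใฏใฟใผใฒใใ่จ่ชใฎๆ่ใงใใณใผใใใใพใใ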
## MBart
ๅค่จ่ช็ฟป่จณใซใฏใๆฌกใฎ MBart ใขใใซใไฝฟ็จใงใใพใใ
- `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages)
- `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages)
- `facebook/mbart-large-50` (Multilingual translation, 50 languages)
- `facebook/mbart-large-cc25`
ใใฎไพใงใฏใ`facebook/mbart-large-50-many-to-many-mmt` ใใงใใฏใใคใณใใใญใผใใใฆใใใฃใณใฉใณใ่ชใ่ฑ่ชใซ็ฟป่จณใใพใใใใผใฏใใคใถใผใงใฝใผใน่จ่ชใ่จญๅฎใงใใพใใ
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "รlรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia."
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```
ใใญในใใใใผใฏใณๅใใพใใ
```py
>>> encoded_en = tokenizer(en_text, return_tensors="pt")
```
MBart ใฏใๆๅใซ็ๆใใใใใผใฏใณใจใใฆใฟใผใฒใใ่จ่ช ID ใๅผทๅถ็ใซใฟใผใฒใใ่จ่ชใซ็ฟป่จณใใพใใ ่ฑ่ชใซ็ฟป่จณใใใซใฏใ`generate` ใกใฝใใใง `forced_bos_token_id` ใ `en` ใซ่จญๅฎใใพใใ
```py
>>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```
`facebook/mbart-large-50-many-to-one-mmt` ใใงใใฏใใคใณใใไฝฟ็จใใฆใใๅ ดๅใๆๅใซ็ๆใใใใใผใฏใณใจใใฆใฟใผใฒใใ่จ่ช ID ใๅผทๅถใใๅฟ
่ฆใฏใใใพใใใใใไปฅๅคใฎๅ ดๅใไฝฟ็จๆนๆณใฏๅใใงใใ
# Templates for Chat Models
## Introduction
LLM๏ผๅคง่ฆๆจก่จ่ชใขใใซ๏ผใฎใพใใพใไธ่ฌ็ใชไฝฟ็จไบไพใฎ1ใคใฏใใใฃใใใงใใ
ใใฃใใใฎใณใณใใญในใใงใฏใ้ๅธธใฎ่จ่ชใขใใซใฎใใใซๅไธใฎใใญในใในใใชใณใฐใ็ถ็ถใใใฎใงใฏใชใใใขใใซใฏ1ใคไปฅไธใฎใใกใใปใผใธใใใใชใไผ่ฉฑใ็ถ็ถใใพใใ
ๅใกใใปใผใธใซใฏใใญใผใซใใจใกใใปใผใธใใญในใใๅซใพใใพใใ
ๆใไธ่ฌ็ใซใใใใใฎใญใผใซใฏใฆใผใถใผใใใฎใกใใปใผใธใซใฏใใฆใผใถใผใใใขใใซใใใฎใกใใปใผใธใซใฏใใขใทในใฟใณใใใๅฒใๅฝใฆใใใพใใ
ไธ้จใฎใขใใซใฏใใทในใใ ใใญใผใซใใตใใผใใใฆใใพใใ
ใทในใใ ใกใใปใผใธใฏ้ๅธธไผ่ฉฑใฎ้ๅงๆใซ้ไฟกใใใใขใใซใฎๅไฝๆนๆณใซ้ขใใๆ็คบใๅซใพใใพใใ
ใในใฆใฎ่จ่ชใขใใซใใใฃใใ็จใซๅพฎ่ชฟๆดใใใใขใใซใๅซใใในใฆใฎใขใใซใฏใใใผใฏใณใฎใชใใขใทใผใฑใณในใงๅไฝใใใญใผใซใซ็นๆใฎ็นๅฅใชๅฆ็ใๆใกใพใใใใคใพใใใญใผใซๆ
ๅ ฑใฏ้ๅธธใใกใใปใผใธ้ใซๅถๅพกใใผใฏใณใ่ฟฝๅ ใใฆๆณจๅ
ฅใใใใกใใปใผใธใฎๅข็ใจ้ข้ฃใใใญใผใซใ็คบใใใจใงๆไพใใใพใใ
ๆฎๅฟตใชใใใใใใใฎใใผใฏใณใฎไฝฟใๆนใซใฏ๏ผใพใ ๏ผ๏ผๆจๆบใๅญๅจใใใ็ฐใชใใขใใซใฏใใฃใใ็จใฎใใฉใผใใใใๅถๅพกใใผใฏใณใๅคงใใ็ฐใชใๅฝขๅผใงใใฌใผใใณใฐใใใฆใใพใใใใใฏใฆใผใถใผใซใจใฃใฆๅฎ้ใฎๅ้กใซใชใๅฏ่ฝๆงใใใใพใใๆญฃใใใใฉใผใใใใไฝฟ็จใใชใใจใใขใใซใฏๅ
ฅๅใซๆททไนฑใใใใใฉใผใใณในใๆฌๆฅใใใ้ฅใใซไฝไธใใพใใ
ใใใใใใฃใใใใณใใฌใผใใใ่งฃๆฑบใใใใจใใๅ้กใงใใ
ใใฃใใไผ่ฉฑใฏ้ๅธธใๅ่พๆธใใใญใผใซใใจใใณใณใใณใใใฎใญใผใๅซใฟใๅไธใฎใใฃใใใกใใปใผใธใ่กจใใชในใใจใใฆ่กจ็พใใใพใใ
ใใฃใใใใณใใฌใผใใฏใๆๅฎใใใใขใใซใฎไผ่ฉฑใๅไธใฎใใผใฏใณๅๅฏ่ฝใชใทใผใฑใณในใซใฉใฎใใใซใใฉใผใใใใใใใๆๅฎใใJinjaใใณใใฌใผใใๅซใๆๅญๅใงใใ
ใใผใฏใใคใถใจใใฎๆ
ๅ ฑใไฟๅญใใใใจใซใใใใขใใซใๆๅพ
ใใๅฝขๅผใฎๅ
ฅๅใใผใฟใๅๅพใงใใใใใซใชใใพใใ
ใใฃใใใ`BlenderBot` ใขใใซใไฝฟ็จใใไพใ็คบใใฆๅ
ทไฝ็ใซ่ฆใฆใฟใพใใใใ`BlenderBot` ใฎใใใฉใซใใใณใใฌใผใใฏ้ๅธธใซใทใณใใซใงใใปใจใใฉๅฏพ่ฉฑใฎใฉใฆใณใ้ใซ็ฉบ็ฝใ่ฟฝๅ ใใใ ใใงใใ
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
" Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>"
```
ๆๅฎใใใ้ใใใใฃใใๅ
จไฝใๅไธใฎๆๅญๅใซใพใจใใใใฆใใพใใใใใฉใซใใฎ่จญๅฎใงใใ `tokenize=True` ใไฝฟ็จใใใจใใใฎๆๅญๅใใใผใฏใณๅใใใพใใใใใใใ่ค้ใชใใณใใฌใผใใๅฎ้ใซใฉใฎใใใซๆฉ่ฝใใใใ็ขบ่ชใใใใใซใ`meta-llama/Llama-2-7b-chat-hf` ใขใใซใไฝฟ็จใใฆใฟใพใใใใใใ ใใใใฎใขใใซใฏใฒใผใไปใใขใฏใปในใจใชใฃใฆใใใใใฎใณใผใใๅฎ่กใใๅ ดๅใฏ[ใชใใธใใชใงใขใฏใปในใใชใฏใจในใ](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)ใใๅฟ
่ฆใใใใพใใ
```python
>> from transformers import AutoTokenizer
>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>> tokenizer.use_default_system_prompt = False
>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]"
```
ไปๅใใใผใฏใใคใถใฏๅถๅพกใใผใฏใณ `[INST]` ใจ `[/INST]` ใ่ฟฝๅ ใใพใใใใใใใฏใฆใผใถใผใกใใปใผใธใฎ้ๅงใจ็ตไบใ็คบใใใฎใงใ๏ผใใ ใใใขใทในใฟใณใใกใใปใผใธใซใฏ้ฉ็จใใใพใใ๏ผใ
## How do chat templates work?
ใขใใซใฎใใฃใใใใณใใฌใผใใฏใ`tokenizer.chat_template`ๅฑๆงใซๆ ผ็ดใใใฆใใพใใใใฃใใใใณใใฌใผใใ่จญๅฎใใใฆใใชใๅ ดๅใใใฎใขใใซใฏใฉในใฎใใใฉใซใใใณใใฌใผใใไปฃใใใซไฝฟ็จใใใพใใ`BlenderBot`ใฎใใณใใฌใผใใ่ฆใฆใฟใพใใใ:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer.default_chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
ใใใฏๅฐใๆๅง็ใงใใญใๅฏ่ชญๆงใ้ซใใใใใซใๆฐใใ่กใจใคใณใใณใใ่ฟฝๅ ใใพใใใใ
ๅใใญใใฏใฎ็ดๅใฎ็ฉบ็ฝใจใใใญใใฏใฎ็ดๅพใฎๆๅใฎๆน่กใฏใใใใฉใซใใงJinjaใฎ `trim_blocks` ใใใณ `lstrip_blocks` ใใฉใฐใไฝฟ็จใใฆๅ้คใใพใใ
ใใใซใใใใคใณใใณใใจๆน่กใๅซใใใณใใฌใผใใๆธใใฆใๆญฃๅธธใซๆฉ่ฝใใใใจใใงใใพใใ
```
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ ' ' }}
{% endif %}
{{ message['content'] }}
{% if not loop.last %}
{{ ' ' }}
{% endif %}
{% endfor %}
{{ eos_token }}
```
ใใใๅใใฆ่ฆใๆนใธใใใใฏ[Jinjaใใณใใฌใผใ](https://jinja.palletsprojects.com/en/3.1.x/templates/)ใงใใ
Jinjaใฏใใญในใใ็ๆใใใใใฎใทใณใใซใชใณใผใใ่จ่ฟฐใงใใใใณใใฌใผใ่จ่ชใงใใๅคใใฎ็นใงใใณใผใใจ
ๆงๆใฏPythonใซไผผใฆใใพใใ็ด็ฒใชPythonใงใฏใใใฎใใณใใฌใผใใฏๆฌกใฎใใใซใชใใงใใใ๏ผ
```python
for idx, message in enumerate(messages):
if message['role'] == 'user':
print(' ')
print(message['content'])
if not idx == len(messages) - 1: # Check for the last message in the conversation
print(' ')
print(eos_token)
```
ๅฎ้ใซใใใฎใใณใใฌใผใใฏๆฌกใฎ3ใคใฎใใจใ่กใใพใ๏ผ
1. ๅใกใใปใผใธใซๅฏพใใฆใใกใใปใผใธใใฆใผใถใผใกใใปใผใธใงใใๅ ดๅใใใใฎๅใซ็ฉบ็ฝใ่ฟฝๅ ใใใใไปฅๅคใฎๅ ดๅใฏไฝใ่กจ็คบใใพใใใ
2. ใกใใปใผใธใฎๅ
ๅฎนใ่ฟฝๅ ใใพใใ
3. ใกใใปใผใธใๆๅพใฎใกใใปใผใธใงใชใๅ ดๅใใใฎๅพใซ2ใคใฎในใใผในใ่ฟฝๅ ใใพใใๆๅพใฎใกใใปใผใธใฎๅพใซใฏEOSใใผใฏใณใ่กจ็คบใใพใใ
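ใใฎ3ในใใใใฏใJinja ใไฝฟใใใซ็ด็ฒใช Python ใฎๆๅญๅ็ตๅใจใใฆใๆฌกใฎใใใซๆธใ็ดใใพใใใใใฏ่ชฌๆ็จใฎในใฑใใใงใใใ็ฉบ็ฝใฎๆฑใใพใงๅซใใฆใใใฉใซใใใณใใฌใผใใๅณๅฏใซๅ็พใใใใจใไฟ่จผใใใใฎใงใฏใใใพใใใ

```python
def render_blenderbot(messages, eos_token="</s>"):
    out = []
    for i, m in enumerate(messages):
        if m["role"] == "user":
            out.append(" ")          # 1. ใฆใผใถใผใกใใปใผใธใฎๅใซ็ฉบ็ฝ
        out.append(m["content"])      # 2. ใกใใปใผใธๆฌๆ
        if i != len(messages) - 1:
            out.append("  ")         # 3. ๆๅพไปฅๅคใฎใกใใปใผใธใฎๅพใซ2ใคใฎ็ฉบ็ฝ
    out.append(eos_token)             # ๆซๅฐพใซ EOS ใใผใฏใณ
    return "".join(out)


chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great."},
]
print(render_blenderbot(chat))
```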
ใใใฏ้ๅธธใซใทใณใใซใชใใณใใฌใผใใงใใๅถๅพกใใผใฏใณใ่ฟฝๅ ใใชใใใใขใใซใซๅฏพใใๆ็คบใไผใใไธ่ฌ็ใชๆนๆณใงใใใใทในใใ ใใกใใปใผใธใใตใใผใใใฆใใพใใใ
ใใ ใใJinjaใฏใใใใฎใใจใ่กใใใใฎๅคใใฎๆ่ปๆงใๆไพใใฆใใพใ๏ผ
LLaMAใใใฉใผใใใใใๆนๆณใซ้กไผผใใๅ
ฅๅใใใฉใผใใใใใใใใฎJinjaใใณใใฌใผใใ่ฆใฆใฟใพใใใ๏ผๅฎ้ใฎLLaMAใใณใใฌใผใใฏใใใใฉใซใใฎใทในใใ ใกใใปใผใธใฎๅฆ็ใไธ่ฌ็ใชใทในใใ ใกใใปใผใธใฎๅฆ็ใ่ฅๅนฒ็ฐใชใใใใๅฎ้ใฎใณใผใใงใฏใใฎใใณใใฌใผใใไฝฟ็จใใชใใงใใ ใใ๏ผ๏ผ
```
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }}
{% elif message['role'] == 'system' %}
{{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }}
{% elif message['role'] == 'assistant' %}
{{ ' ' + message['content'] + ' ' + eos_token }}
{% endif %}
{% endfor %}
```
้กใใใฐใๅฐใ่ฆใคใใฆใใใ ใใใฐใใใฎใใณใใฌใผใใไฝใ่กใฃใฆใใใใใใใใใใใใพใใใ
ใใฎใใณใใฌใผใใฏใๅใกใใปใผใธใฎใๅฝนๅฒใใซๅบใฅใใฆ็นๅฎใฎใใผใฏใณใ่ฟฝๅ ใใพใใใใใใฎใใผใฏใณใฏใใกใใปใผใธใ้ไฟกใใไบบใ่กจใใใฎใงใใ
ใฆใผใถใผใใขใทในใฟใณใใใใใณใทในใใ ใกใใปใผใธใฏใใใใใๅซใพใใใใผใฏใณใซใใฃใฆใขใใซใซใใฃใฆๆ็ขบใซๅบๅฅใใใพใใ
## How do I create a chat template?
็ฐกๅใงใใๅ็ดใซJinjaใใณใใฌใผใใๆธใใฆใ`tokenizer.chat_template`ใ่จญๅฎใใพใใ
ไปใฎใขใใซใใๆขๅญใฎใใณใใฌใผใใๅง็นใซใใฆใๅฟ
่ฆใซๅฟใใฆ็ทจ้ใใใจไพฟๅฉใใใใใพใใ๏ผไพใใฐใไธ่จใฎLLaMAใใณใใฌใผใใๅใฃใฆใใขใทในใฟใณใใกใใปใผใธใซ "[ASST]" ใจ "[/ASST]" ใ่ฟฝๅ ใงใใพใใ
```
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}
{% elif message['role'] == 'system' %}
{{ '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }}
{% elif message['role'] == 'assistant' %}
{{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }}
{% endif %}
{% endfor %}
```
ๆฌกใซใๅใซ`tokenizer.chat_template`ๅฑๆงใ่จญๅฎใใฆใใ ใใใ
ๆฌกๅใ[`~PreTrainedTokenizer.apply_chat_template`]ใไฝฟ็จใใ้ใซใๆฐใใใใณใใฌใผใใไฝฟ็จใใใพใ๏ผ
ใใฎๅฑๆงใฏ`tokenizer_config.json`ใใกใคใซใซไฟๅญใใใใใใ[`~utils.PushToHubMixin.push_to_hub`]ใไฝฟ็จใใฆ
ๆฐใใใใณใใฌใผใใHubใซใขใใใญใผใใใใฟใใชใๆญฃใใใใณใใฌใผใใไฝฟ็จใใฆใใใใจใ็ขบ่ชใงใใพใ๏ผ
```python
template = tokenizer.chat_template
template = template.replace("SYS", "SYSTEM") # Change the system token
tokenizer.chat_template = template # Set the new template
tokenizer.push_to_hub("model_name") # Upload your new template to the Hub!
```
[`~PreTrainedTokenizer.apply_chat_template`] ใกใฝใใใฏใใใชใใฎใใฃใใใใณใใฌใผใใไฝฟ็จใใใใใซ [`ConversationalPipeline`] ใฏใฉในใซใใฃใฆๅผใณๅบใใใพใใ
ใใใใฃใฆใๆญฃใใใใฃใใใใณใใฌใผใใ่จญๅฎใใใจใใใชใใฎใขใใซใฏ่ชๅ็ใซ [`ConversationalPipeline`] ใจไบๆๆงใใใใใใซใชใใพใใ
## What are "default" templates?
ใใฃใใใใณใใฌใผใใๅฐๅ
ฅใใใๅใฏใใใฃใใใฎๅฆ็ใฏใขใใซใฏใฉในใฌใใซใงใใผใใณใผใใใใฆใใพใใใ
ๅพๆนไบๆๆงใฎใใใซใใใฎใฏใฉในๅบๆใฎๅฆ็ใใใใฉใซใใใณใใฌใผใใจใใฆไฟๆใใใฏใฉในใฌใใซใง่จญๅฎใใใฆใใพใใ
ใขใใซใซใใฃใใใใณใใฌใผใใ่จญๅฎใใใฆใใชใๅ ดๅใใใ ใใขใใซใฏใฉในใฎใใใฉใซใใใณใใฌใผใใใใๅ ดๅใ
`ConversationalPipeline`ใฏใฉในใ`apply_chat_template`ใชใฉใฎใกใฝใใใฏใฏใฉในใใณใใฌใผใใไฝฟ็จใใพใใ
ใใผใฏใใคใถใฎใใใฉใซใใฎใใฃใใใใณใใฌใผใใ็ขบ่ชใใใซใฏใ`tokenizer.default_chat_template`ๅฑๆงใใใงใใฏใใฆใใ ใใใ
ใใใฏใๅพๆนไบๆๆงใฎใใใซ็ด็ฒใซ่กใฃใฆใใใใจใงใๆขๅญใฎใฏใผใฏใใญใผใๅฃใใชใใใใซใใฆใใพใใ
ใขใใซใซใจใฃใฆใฏใฉในใใณใใฌใผใใ้ฉๅใงใใๅ ดๅใงใใใใใฉใซใใใณใใฌใผใใใชใผใใผใฉใคใใใฆ
`chat_template`ๅฑๆงใๆ็คบ็ใซ่จญๅฎใใใใจใๅผทใใๅงใใใพใใใใใซใใใใฆใผใถใผใซใจใฃใฆ
ใขใใซใใใฃใใ็จใซๆญฃใใๆงๆใใใฆใใใใจใๆ็ขบใซใชใใใใใฉใซใใใณใใฌใผใใๅคๆดใใใใๅปๆญขใใใๅ ดๅใซๅใใใใจใใงใใพใใ
## What template should I use?
ใใงใซใใฃใใใฎใใฌใผใใณใฐใๅใใใขใใซใฎใใณใใฌใผใใ่จญๅฎใใๅ ดๅใใใณใใฌใผใใใใฌใผใใณใฐไธญใซใขใใซใ่ฆใใกใใปใผใธใฎใใฉใผใใใใจๅฎๅ
จใซไธ่ดใใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใ
ใใใงใชใๅ ดๅใๆง่ฝใฎไฝไธใ็ต้จใใๅฏ่ฝๆงใ้ซใใงใใใใใฏใขใใซใใใใซใใฌใผใใณใฐใใฆใใๅ ดๅใงใๅๆงใงใ - ใใฃใใใใผใฏใณใไธๅฎใซไฟใคใจใใใใใๆ้ซใฎๆง่ฝใๅพใใใพใใ
ใใใฏใใผใฏใณๅใจ้ๅธธใซ้กไผผใใฆใใใ้ๅธธใฏใใฌใผใใณใฐไธญใซไฝฟ็จใใใใใผใฏใณๅใจๆญฃ็ขบใซไธ่ดใใๅ ดๅใซใๆจ่ซใพใใฏใใกใคใณใใฅใผใใณใฐใฎ้ใซๆ่ฏใฎๆง่ฝใๅพใใใพใใ
ไธๆนใใผใญใใใขใใซใใใฌใผใใณใฐใใใใใใฃใใใฎใใใซใใผใน่จ่ชใขใใซใใใกใคใณใใฅใผใใณใฐใใๅ ดๅใ้ฉๅใชใใณใใฌใผใใ้ธๆใใ่ช็ฑๅบฆใใใใพใใ
LLM๏ผๅคง่ฆๆจก่จ่ชใขใใซ๏ผใฏใใพใใพใชๅ
ฅๅๅฝขๅผใๅฆ็ใงใใใปใฉในใใผใใงใใใฏใฉในๅบๆใฎใใณใใฌใผใใใชใใขใใซ็จใฎใใใฉใซใใใณใใฌใผใใฏใไธ่ฌ็ใชใฆใผในใฑใผในใซๅฏพใใฆ่ฏใๆ่ปใช้ธๆ่ขใงใใ
ใใใฏใ[ChatMLใใฉใผใใใ](https://github.com/openai/openai-python/blob/main/chatml.md)ใซๅพใฃใใใฎใงใๅคใใฎใฆใผในใฑใผในใซ้ฉใใฆใใพใใๆฌกใฎใใใซใชใใพใ๏ผ
```
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}
{% endfor %}
```
ใใฎใใณใใฌใผใใๆฐใซๅ
ฅใฃใๅ ดๅใฏใใใฎใพใพใณใผใใซใณใใผใงใใ1่ก็ใไปฅไธใซ็คบใใพใ:
```
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
```
ใใฎใใณใใฌใผใใฏใๅใกใใปใผใธใ `<|im_start|>` ใจ `<|im_end|>` ใใผใฏใณใงๅฒใฟใๅฝนๅฒใๅ็ดใซๆๅญๅใจใใฆ่จ่ฟฐใใพใใ
ใใใซใใใใใฌใผใใณใฐใงไฝฟ็จใใๅฝนๅฒใซๅฏพใใๆ่ปๆงใๅพใใใพใใๅบๅใฏไปฅไธใฎใใใซใชใใพใ๏ผ
```
<|im_start|>system
You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I'm doing great!<|im_end|>
```
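ChatML ๅฝขๅผใฎใฌใณใใชใณใฐใฏใJinja ใ็ดๆฅไฝฟใใใซใๆฌกใฎใใใช็ด็ฒใช Python ใงใๅ็พใงใใพใ๏ผใใใฏ่ชฌๆ็จใฎในใฑใใใงใใใผใฏใใคใถใฎๅฎ่ฃ
ใใฎใใฎใงใฏใใใพใใ๏ผใ

```python
def render_chatml(messages):
    # ๅใกใใปใผใธใ <|im_start|>role\ncontent<|im_end|>\n ใงๅฒใ
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )


chat = [
    {"role": "system", "content": "You are a helpful chatbot."},
    {"role": "user", "content": "How are you?"},
]
print(render_chatml(chat))
```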
ใใฆใผใถใผใใใใทในใใ ใใใใใณใใขใทในใฟใณใใใฎๅฝนๅฒใฏใใใฃใใใฎๆจๆบใงใใ
็นใซใ[`ConversationalPipeline`]ใจใฎ้ฃๆบใในใ ใผใบใซ่กใๅ ดๅใซใฏใใใใใฎๅฝนๅฒใไฝฟ็จใใใใจใใๅงใใใพใใใใ ใใใใใใฎๅฝนๅฒใซๅถ็ดใฏใใใพใใใใใณใใฌใผใใฏ้ๅธธใซๆ่ปใงใไปปๆใฎๆๅญๅใๅฝนๅฒใจใใฆไฝฟ็จใงใใพใใ
## I want to use chat templates! How should I get started?
ใใฃใใใขใใซใๆใฃใฆใใๅ ดๅใใใฎใขใใซใฎ `tokenizer.chat_template` ๅฑๆงใ่จญๅฎใใ[`~PreTrainedTokenizer.apply_chat_template`] ใไฝฟ็จใใฆใในใใใๅฟ
่ฆใใใใพใใ
ใใใฏใขใใซใฎๆๆ่
ใงใชใๅ ดๅใซใๅฝใฆใฏใพใใพใใใขใใซใฎใชใใธใใชใ็ฉบใฎใใฃใใใใณใใฌใผใใไฝฟ็จใใฆใใๅ ดๅใใพใใฏใใใฉใซใใฎใฏใฉในใใณใใฌใผใใไฝฟ็จใใฆใใๅ ดๅใงใใ
ใใฎๅฑๆงใ้ฉๅใซ่จญๅฎใงใใใใใซ[ใใซใชใฏใจในใ](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)ใ้ใใฆใใ ใใใ
ไธๅบฆๅฑๆงใ่จญๅฎใใใใฐใใใใงๅฎไบใงใ๏ผ `tokenizer.apply_chat_template`ใฏใใใฎใขใใซใซๅฏพใใฆๆญฃใใๅไฝใใใใใซใชใใพใใใใใฏใ
`ConversationalPipeline`ใชใฉใฎๅ ดๆใงใ่ชๅ็ใซใตใใผใใใใพใใ
ใขใใซใใใฎๅฑๆงใๆใคใใจใ็ขบ่ชใใใใจใงใใชใผใใณใฝใผในใขใใซใฎๅ
จใณใใฅใใใฃใใใฎใใซใใฏใผใไฝฟ็จใงใใใใใซใชใใพใใ
ใใฉใผใใใใฎไธไธ่ดใฏใใฎๅ้ใซๆฉใฟ็ถใใใใใฉใผใใณในใซ้ปใฃใฆๅฝฑ้ฟใไธใใฆใใพใใใใใใ็ตใใใใๆใๆฅใพใใ๏ผ
# Instantiating a big model
้ๅธธใซๅคง่ฆๆจกใชไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใๅ ดๅใRAMใฎไฝฟ็จ้ใๆๅฐ้ใซๆใใใใจใฏ่ชฒ้กใฎ1ใคใงใใ้ๅธธใฎPyTorchใฎใฏใผใฏใใญใผใฏๆฌกใฎใจใใใงใ๏ผ
1. ใฉใณใใ ใช้ใฟใๆใคใขใใซใไฝๆใใพใใ
2. ไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใใญใผใใใพใใ
3. ใใใใฎไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใใฉใณใใ ใชใขใใซใซ้
็ฝฎใใพใใ
ในใใใ1ใจ2ใฎไธกๆนใใกใขใชใซใขใใซใฎๅฎๅ
จใชใใผใธใงใณใๅฟ
่ฆใจใใใปใจใใฉใฎๅ ดๅใฏๅ้กใใใพใใใใใขใใซใฎใตใคใบใๆฐใฎใฌใใคใใซใชใใจใใใใใฎ2ใคใฎใณใใผใRAMใใๆ้คใใใใจใใงใใชใใชใๅฏ่ฝๆงใใใใพใใใใใซๆชใใใจใซใๅๆฃใใฌใผใใณใฐใๅฎ่กใใใใใซ`torch.distributed`ใไฝฟ็จใใฆใใๅ ดๅใๅใใญใปในใฏไบๅๅญฆ็ฟๆธใฟใขใใซใใญใผใใใใใใใฎ2ใคใฎใณใใผใRAMใซไฟๅญใใพใใ
<Tip>
ใฉใณใใ ใซไฝๆใใใใขใใซใฏใใกใขใชๅ
ใซใ็ฉบใฎใใใณใฝใซใงๅๆๅใใใพใใใใใใฎใฉใณใใ ใชๅคใฏใใใฎๆ็นใงๅฝ่ฉฒใกใขใชใใฃใณใฏใซใใฃใๅคใใใฎใพใพไฝฟใใพใใใขใใซ/ใใฉใกใผใฟใฎ็จฎ้กใซ้ฉใใๅๅธ๏ผใใจใใฐๆญฃ่ฆๅๅธ๏ผใซๅพใใฉใณใใ ๅๆๅใฏใในใใใ3ใงๅๆๅใใใชใ้ใฟใซๅฏพใใฆใฎใฟใใงใใใ ใ้ใๅฎ่กใใใพใ๏ผ
</Tip>
ใใฎใฌใคใใงใฏใTransformersใใใฎๅ้กใซๅฏพๅฆใใใใใซๆไพใใใฝใชใฅใผใทใงใณใๆขใใพใใใชใใใใใฏ็พๅจใ้็บใ้ฒ่กไธญใฎๅ้ใงใใใๅฐๆฅใใใใง่ชฌๆใใใฆใใAPIใใใใใซๅคๆดใใใๅฏ่ฝๆงใใใใใจใซๆณจๆใใฆใใ ใใใ
## Sharded checkpoints
ใใผใธใงใณ4.18.0ใใใ10GBใ่ถ
ใใใตใคใบใฎใขใใซใใงใใฏใใคใณใใฏ่ชๅ็ใซ่คๆฐใฎๅฐใใช้จๅใซๅๅฒใใใพใใ`model.save_pretrained(save_dir)` ใๅฎ่กใใ้ใซๅไธใฎใใงใใฏใใคใณใใๆใคไปฃใใใซใใใใคใใฎ้จๅ็ใชใใงใใฏใใคใณใ๏ผใใใใใฎใตใคใบใฏ10GBๆชๆบ๏ผใจใๅใใฉใกใผใฟๅใใใฎๆ ผ็ดๅ
ใใกใคใซใซๅฏพๅฟไปใใใคใณใใใฏในใ็ๆใใใพใใ
`max_shard_size`ใใฉใกใผใฟใงใทใฃใผใใฃใณใฐๅใฎๆๅคงใตใคใบใๅถๅพกใงใใใใใไพใจใใฆ้ๅธธใตใคใบใฎใขใใซใจๅฐใใชใทใฃใผใใตใคใบใไฝฟ็จใใพใใๅพๆฅใฎBERTใขใใซใไฝฟ็จใใฆใฟใพใใใใ
```py
from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-cased")
```
ใใ [`~PreTrainedModel.save_pretrained`] ใไฝฟ็จใใฆไฟๅญใใใจใๆฐใใใใฉใซใใซ2ใคใฎใใกใคใซใไฝๆใใใพใ: ใขใใซใฎ่จญๅฎๆ
ๅ ฑใจใใฎ้ใฟใงใใ
```py
>>> import os
>>> import tempfile
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir)
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model.bin']
```
ๆๅคงใทใฃใผใใตใคใบใ200MBใซ่จญๅฎใใพใ๏ผ
```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json']
```
ใขใใซใฎ่จญๅฎใฎไธใซใ3ใคใฎ็ฐใชใ้ใฟใใกใคใซใจใ`index.json`ใใกใคใซใ่ฆใใใพใใใใใฏ็งใใกใฎใคใณใใใฏในใงใใ
ใใฎใใใชใใงใใฏใใคใณใใฏใ[`~PreTrainedModel.from_pretrained`] ใกใฝใใใไฝฟ็จใใฆๅฎๅ
จใซๅใญใผใใงใใพใ๏ผ
```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... new_model = AutoModel.from_pretrained(tmp_dir)
```
ไธป่ฆใชๅฉ็นใฏใๅคง่ฆๆจกใชใขใใซใฎๅ ดๅใไธ่จใฎใฏใผใฏใใญใผใฎในใใใ2ใซใใใฆใๅใใงใใฏใใคใณใใฎใทใฃใผใใๅใฎใทใฃใผใใฎๅพใซใญใผใใใใRAMใฎใกใขใชไฝฟ็จ้ใใขใใซใฎใตใคใบใจๆๅคงใฎใทใฃใผใใฎใตใคใบใๅใใใใใฎใซๅถ้ใงใใใใจใงใใ
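ใทใฃใผใๅๅฒใฎ่ใๆน่ชไฝใฏๅ็ดใงใใใฉใกใผใฟใ้ ใซ่ฉฐใใ`max_shard_size` ใ่ถ
ใใใๆฐใใใทใฃใผใใ้ใใ ใใงใใไปฅไธใฏใใฎ่ฒชๆฌฒๆณใๆฆๅฟต็ใซ็คบใ่ชฌๆ็จใฎในใฑใใใงใใใๅฎ้ใฎ `save_pretrained` ใฎๅฎ่ฃ
ใใฎใใฎใงใฏใใใพใใใ

```python
def shard_parameters(param_sizes, max_shard_size):
    """param_sizes: {ใใฉใกใผใฟๅ: ใใคใๆฐ} ใฎ่พๆธใ
    ใทใฃใผใใใจใฎใใฉใกใผใฟๅใชในใใ่ฟใใ"""
    shards, current, current_size = [], [], 0
    for name, size in param_sizes.items():
        # ็พๅจใฎใทใฃใผใใซๅใพใใชใใใฐๆฐใใใทใฃใผใใ้ใ
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        shards.append(current)
    return shards


sizes = {"embeddings": 100, "layer.0": 150, "layer.1": 100}
print(shard_parameters(sizes, max_shard_size=200))
# [['embeddings'], ['layer.0'], ['layer.1']]
```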
ๅ
้จใงใฏใคใณใใใฏในใใกใคใซใไฝฟ็จใใใฉใฎใญใผใใใงใใฏใใคใณใใซๅญๅจใใๅฏพๅฟใใ้ใฟใใฉใใซๆ ผ็ดใใใฆใใใใๅคๆญใใพใใใใฎใคใณใใใฏในใฏ้ๅธธใฎJSONใใกใคใซใฎใใใซ่ชญใฟ่พผใใใจใใงใใ่พๆธใจใใฆๅๅพใงใใพใใ
```py
>>> import json
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f:
... index = json.load(f)
>>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
```
ใกใฟใใผใฟใซใฏ็พๆ็นใงใฏใขใใซใฎ็ทใตใคใบใฎใฟใๅซใพใใฆใใพใใๅฐๆฅ็ใซใฏไปใฎๆ
ๅ ฑใ่ฟฝๅ ใใไบๅฎใงใ๏ผ
```py
>>> index["metadata"]
{'total_size': 433245184}
```
้ใฟใใใใฏใใฎใคใณใใใฏในใฎไธป่ฆใช้จๅใงใใใๅใใฉใกใผใฟๅ๏ผ้ๅธธใฏPyTorchใขใใซใฎ`state_dict`ใง่ฆใคใใใใฎ๏ผใใใฎๆ ผ็ดใใใฆใใใใกใคใซใซใใใใใพใ๏ผ
```py
>>> index["weight_map"]
{'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin',
'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin',
...
```
ใขใใซๅ
ใง็ดๆฅ [`~PreTrainedModel.from_pretrained`] ใไฝฟ็จใใใซใทใฃใผใใฃใณใฐใใใใใงใใฏใใคใณใใใญใผใใใๅ ดๅ๏ผใใซใใงใใฏใใคใณใใซๅฏพใใฆ `model.load_state_dict()` ใไฝฟใใฎใจๅๆงใฎๆไฝ๏ผใฏใ[`~modeling_utils.load_sharded_checkpoint`] ใไฝฟ็จใใๅฟ
่ฆใใใใพใ๏ผ
```py
>>> from transformers.modeling_utils import load_sharded_checkpoint
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... load_sharded_checkpoint(model, tmp_dir)
```
## Low memory loading
ใทใฃใผใใใใใใงใใฏใใคใณใใฏใไธ่จใฎใฏใผใฏใใญใผใฎในใใใ2ใซใใใใกใขใชไฝฟ็จ้ใๅๆธใใพใใใ
ไฝใกใขใชใฎ็ฐๅขใงใใฎใขใใซใไฝฟ็จใใใใใซใAccelerateใฉใคใใฉใชใซๅบใฅใใๅฝ็คพใฎใใผใซใๆดป็จใใใใจใใๅงใใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใไปฅไธใฎใฌใคใใใ่ฆงใใ ใใ๏ผ[Accelerateใไฝฟ็จใใๅคง่ฆๆจกใขใใซใฎ่ชญใฟ่พผใฟ](./main_classes/model#large-model-loading)
# Transformers Agents
<Tip warning={true}>
Transformers Agentsใฏใใใคใงใๅคๆดใใใๅฏ่ฝๆงใฎใใๅฎ้จ็ใชAPIใงใใใจใผใธใงใณใใ่ฟใ็ตๆใฏใAPIใพใใฏๅบ็คใจใชใใขใใซใๅคๆดใใใๅฏ่ฝๆงใใใใใใ็ฐใชใใใจใใใใพใใ
</Tip>
Transformersใใผใธใงใณv4.29.0ใฏใ*ใใผใซ*ใจ*ใจใผใธใงใณใ*ใฎใณใณใปใใใๅบใซๆง็ฏใใใฆใใพใใใใฎ[colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)ใง่ฉฆใใใจใใงใใพใใ
่ฆใใใซใใใใฏtransformersใฎไธใซ่ช็ถ่จ่ชAPIใๆไพใใใใฎใงใ๏ผ็งใใกใฏไธ้ฃใฎๅณ้ธใใใใใผใซใๅฎ็พฉใใ่ช็ถ่จ่ชใ่งฃ้ใใใใใใฎใใผใซใไฝฟ็จใใใจใผใธใงใณใใ่จญ่จใใพใใใใใฏ่จญ่จไธๆกๅผตๅฏ่ฝใงใใ็งใใกใฏใใใคใใฎ้ข้ฃใใใใผใซใๅณ้ธใใพใใใใใณใใฅใใใฃใซใใฃใฆ้็บใใใไปปๆใฎใใผใซใไฝฟ็จใใใใใซใทในใใ ใ็ฐกๅใซๆกๅผตใงใใๆนๆณใ็คบใใพใใ
ใใฎๆฐใใAPIใงไฝใใงใใใใฎใใใคใใฎไพใใๅงใใพใใใใ็นใซๅคใขใผใใซใชใฟในใฏใซ้ขใใฆๅผทๅใงใใฎใงใ็ปๅใ็ๆใใใใใญในใใ่ชญใฟไธใใใใใใฎใซๆ้ฉใงใใ
```py
agent.run("Caption the following image", image=image)
```
| **Input** | **Output** |
|-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water |
---
```py
agent.run("Read the following text out loud", text=text)
```
| **Input** | **Output** |
|-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|
| A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. </audio>
---
```py
agent.run(
"In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?",
document=document,
)
```
| **Input** | **Output** |
|-----------------------------------------------------------------------------------------------------------------------------|----------------|
| <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer |
## Quickstart
`agent.run` ใไฝฟ็จใใๅใซใใจใผใธใงใณใใใคใณในใฟใณในๅใใๅฟ
่ฆใใใใพใใใจใผใธใงใณใใฎไธญๆ ธใฏๅคง่ฆๆจกใช่จ่ชใขใใซ๏ผLLM๏ผใงใใOpenAIใขใใซใจใBigCodeใOpenAssistantใฎใชใผใใณใฝใผในไปฃๆฟใขใใซใใตใใผใใใใฆใใพใใOpenAIใขใใซใฏใใใฉใผใใณในใซๅชใใพใใใOpenAIใฎAPIใญใผใๅฟ
่ฆใงใ็กๆใงใฏไฝฟ็จใงใใพใใใไธๆนใHugging FaceใฏBigCodeใจOpenAssistantใขใใซใฎใจใณใใใคใณใใธใฎ็กๆใขใฏใปในใๆไพใใฆใใพใใ
ใพใใใใใฉใซใใฎไพๅญ้ขไฟใใในใฆใคใณในใใผใซใใใใใซ`agents`ใฎใจใฏในใใฉใใคใณในใใผใซใใฆใใ ใใใ
```bash
pip install transformers[agents]
```
OpenAIใขใใซใไฝฟ็จใใใซใฏใ`openai`ใฎไพๅญ้ขไฟใใคใณในใใผใซใใๅพใ`OpenAiAgent`ใใคใณในใฟใณในๅใใพใใ
```bash
pip install openai
```
```py
from transformers import OpenAiAgent
agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>")
```
BigCodeใพใใฏOpenAssistantใไฝฟ็จใใใซใฏใใพใใญใฐใคใณใใฆInference APIใซใขใฏใปในใใฆใใ ใใใ
```py
from huggingface_hub import login
login("<YOUR_TOKEN>")
```
ๆฌกใซใใจใผใธใงใณใใใคใณในใฟใณในๅใใฆใใ ใใใ
```py
from transformers import HfAgent
# Starcoder
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
# StarcoderBase
# agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase")
# OpenAssistant
# agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
```
ใใใฏใHugging Faceใ็พๅจ็กๆใงๆไพใใฆใใๆจ่ซAPIใไฝฟ็จใใฆใใพใใใใฎใขใใซ๏ผใพใใฏๅฅใฎใขใใซ๏ผใฎ็ฌ่ชใฎๆจ่ซใจใณใใใคใณใใใๆใกใฎๅ ดๅใฏใไธ่จใฎURLใจใณใใใคใณใใใ่ชๅใฎURLใจใณใใใคใณใใง็ฝฎใๆใใใใจใใงใใพใใ
<Tip>
StarCoderใจOpenAssistantใฏ็กๆใงๅฉ็จใงใใใทใณใใซใชใฟในใฏใซใฏ้ๅธธใซๅชใใๆง่ฝใ็บๆฎใใพใใใใ ใใใใ่ค้ใชใใญใณใใใๅฆ็ใใ้ใซใฏใใใงใใฏใใคใณใใๅๅใงใชใใใจใใใใพใใใใฎใใใชๅ ดๅใซใฏใ็พๆ็นใงใฏใชใผใใณใฝใผในใงใฏใชใใใฎใฎใใใใฉใผใใณในใๅไธใใๅฏ่ฝๆงใฎใใOpenAIใขใใซใ่ฉฆใใฆใฟใใใจใใๅงใใใพใใ
</Tip>
ใใใงๆบๅใๆดใใพใใ๏ผใใใใใใใชใใๅฉ็จใงใใ2ใคใฎAPIใซใคใใฆ่ฉณใใ่ชฌๆใใพใใ
### Single execution (run)
ๅไธๅฎ่กใกใฝใใใฏใใจใผใธใงใณใใฎ [`~Agent.run`] ใกใฝใใใไฝฟ็จใใๅ ดๅใงใใ
```py
agent.run("Draw me a picture of rivers and lakes.")
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>
ใใใฏใๅฎ่กใใใใฟในใฏใซ้ฉใใใใผใซ๏ผใพใใฏใใผใซ๏ผใ่ชๅ็ใซ้ธๆใใ้ฉๅใซๅฎ่กใใพใใ1ใคใพใใฏ่คๆฐใฎใฟในใฏใๅใๅฝไปคใงๅฎ่กใใใใจใใงใใพใ๏ผใใ ใใๅฝไปคใ่ค้ใงใใใปใฉใใจใผใธใงใณใใๅคฑๆใใๅฏ่ฝๆงใ้ซใใชใใพใ๏ผใ
```py
agent.run("Draw me a picture of the sea then transform the picture to add an island")
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200>
<br/>
[`~Agent.run`] ๆไฝใฏ็ฌ็ซใใฆๅฎ่กใงใใพใใฎใงใ็ฐใชใใฟในใฏใงไฝๅบฆใๅฎ่กใใใใจใใงใใพใใ
ๆณจๆ็นใจใใฆใใใชใใฎ `agent` ใฏๅใชใๅคง่ฆๆจก่จ่ชใขใใซใงใใใใใใใญใณใใใฎใใใใชๅคๆดใงๅฎๅ
จใซ็ฐใชใ็ตๆใๅพใใใๅฏ่ฝๆงใใใใพใใใใใใฃใฆใๅฎ่กใใใใฟในใฏใใงใใใ ใๆ็ขบใซ่ชฌๆใใใใจใ้่ฆใงใใ่ฏใใใญใณใใใฎๆธใๆนใซใคใใฆใฏใ[ใใกใ](custom_tools#writing-good-user-inputs) ใง่ฉณใใ่ชฌๆใใฆใใพใใ
ๅฎ่กใใจใซ็ถๆ
ใไฟๆใใใใใใญในใไปฅๅคใฎใชใใธใงใฏใใใจใผใธใงใณใใซๆธกใใใใใๅ ดๅใฏใใจใผใธใงใณใใไฝฟ็จใใๅคๆฐใๆๅฎใงใใพใใไพใใฐใๆๅใซๅทใๆนใฎ็ปๅใ็ๆใใใใฎ็ปๅใซๅณถใ่ฟฝๅ ใใใใใขใใซใซๆ็คบใใใซใฏใๆฌกใฎใใใซใใพใ๏ผ
```python
picture = agent.run("Generate a picture of rivers and lakes.")
updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture)
```
<Tip>
ใใใฏใใขใใซใใใชใใฎใชใฏใจในใใ็่งฃใงใใชใๅ ดๅใใใใผใซใๆททๅใใๅ ดๅใซๅฝน็ซใคใใจใใใใพใใไพใใฐ๏ผ
```py
agent.run("Draw me the picture of a capybara swimming in the sea")
```
ใใใงใฏใใขใใซใฏ2ใคใฎๆนๆณใง่งฃ้ใงใใพใ๏ผ
- `text-to-image`ใซๆตทใงๆณณใใซใใใฉใ็ๆใใใ
- ใพใใฏใ`text-to-image`ใงใซใใใฉใ็ๆใใใใใๆตทใงๆณณใใใใใใซ`image-transformation`ใใผใซใไฝฟ็จใใ
ๆๅใฎใทใใชใชใๅผทๅถใใใๅ ดๅใฏใใใญใณใใใๅผๆฐใจใใฆๆธกใใใจใใงใใพใ๏ผ
```py
agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea")
```
</Tip>
### Chat-based execution (ใใฃใใ)
ใจใผใธใงใณใใฏใ[`~Agent.chat`] ใกใฝใใใไฝฟ็จใใใใจใงใใใฃใใใใผในใฎใขใใญใผใใๅฏ่ฝใงใใ
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>
```py
agent.chat("Transform the picture so that there is a rock in there")
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200>
<br/>
ใใใฏใๆ็คบใใพใใใง็ถๆ
ใไฟๆใใใๅ ดๅใซไพฟๅฉใชใขใใญใผใใงใใๅไธใฎๆ็คบใซๆฏในใฆ่ค้ใชๆ็คบใฎๅฆ็ใฏ่ฆๆใงใ๏ผใใฎๅ ดๅใฏ [`~Agent.run`] ใกใฝใใใฎๆนใ้ฉใใฆใใพใ๏ผใใใฎใกใฝใใใฏใ้ใใญในใๅใฎๅผๆฐใ็นๅฎใฎใใญใณใใใๆธกใใใๅ ดๅใซใไฝฟ็จใงใใพใใ
### โ ๏ธ Remote execution
ใใขใณในใใฌใผใทใงใณใฎ็ฎ็ใใในใฆใฎใปใใใขใใใงไฝฟ็จใงใใใใใซใใชใชใผในใฎใใใซใใใคใใฎใใใฉใซใใใผใซ็จใฎใชใขใผใๅฎ่กใใผใซใไฝๆใใพใใใใใใใฏ [ๆจ่ซใจใณใใใคใณใ](https://huggingface.co/inference-endpoints) ใไฝฟ็จใใฆไฝๆใใใพใใ
ใใใใฏ็พๅจใชใใซใชใฃใฆใใพใใใใชใขใผใๅฎ่กใใผใซใ่ชๅใง่จญๅฎใใๆนๆณใซใคใใฆใฏใ[ใซในใฟใ ใใผใซใฌใคใ](./custom_tools) ใ่ชญใใใจใใๅงใใใพใใ
### What's happening here? What are tools, and what are agents?

#### Agents
ใใใงใฎใใจใผใธใงใณใใใจใฏใๅคง่ฆๆจกใช่จ่ชใขใใซใฎใใจใงใใใ็นๅฎใฎไธ้ฃใฎใใผใซใซใขใฏใปในใงใใใใใซใใญใณใใใ่จญๅฎใใฆใใพใใ
LLM๏ผๅคง่ฆๆจก่จ่ชใขใใซ๏ผใฏใใณใผใใฎๅฐใใชใตใณใใซใ็ๆใใใฎใซใใชใๅชใใฆใใใใใฎAPIใฏใใจใผใธใงใณใใซ็นๅฎใฎใใผใซใปใใใไฝฟ็จใใฆใฟในใฏใๅฎ่กใใใณใผใใฎๅฐใใชใตใณใใซใ็ๆใใใใใจใซๅฉ็จใใฆใใพใใใใฎใใญใณใใใฏใใจใผใธใงใณใใซใฟในใฏใจใใผใซใฎ่ชฌๆใๆไพใใใใจใงใใจใผใธใงใณใใไฝฟ็จใใฆใใใใผใซใฎใใญใฅใกใณใใซใขใฏใปในใใ้ข้ฃใใใณใผใใ็ๆใงใใใใใซใชใใพใใ
#### Tools
ใใผใซใฏ้ๅธธใซๅ็ดใงใๅๅใจ่ชฌๆใใใชใๅไธใฎ้ขๆฐใงใใใใใใใใใใฎใใผใซใฎ่ชฌๆใไฝฟ็จใใฆใจใผใธใงใณใใซใใญใณใใใไธใใพใใใใญใณใใใ้ใใฆใใจใผใธใงใณใใซใใใผใซใไฝฟ็จใใฆใฏใจใชใง่ฆๆฑใใใใฟในใฏใใฉใฎใใใซๅฎ่กใใใใ็คบใใพใใ็นใซใใใผใซใฎๆๅพ
ใใใๅ
ฅๅใจๅบๅใ็คบใใพใใ
ใใใงใฏใใคใใฉใคใณใงใฏใชใๆฐ่ฆใฎใใผใซใไฝฟ็จใใฆใใพใใใจใผใธใงใณใใฏ้ๅธธใซๅๅญ็ใชใใผใซใฎๆนใ่ฏใใณใผใใๆธใใใใงใใใใคใใฉใคใณใฏใชใใกใฏใฟใชใณใฐใใใใใฐใใฐ่คๆฐใฎใฟในใฏใ1ใคใซใพใจใใฆใใพใใใใผใซใฏ้ๅธธใซๅ็ดใชใฟในใฏ1ใคใ ใใซ็ฆ็นใๅฝใฆใใใใซ่จญ่จใใใฆใใพใใ
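ใๅๅใจ่ชฌๆใไปใใๅไธ้ขๆฐใใจใใใใผใซใฎๆง้ ใจใใใใใใใญใณใใใซๅฑ้ใใๆตใใฏใๆฌกใฎๆๅฐ้ใฎในใฑใใใงใคใกใผใธใงใใพใ๏ผๅฎ้ใฎ `Tool` ใฏใฉในใฎAPIใจใฏ็ฐใชใใ่ชฌๆ็จใฎไปฎๅฎใงใ๏ผใ

```python
class Tool:
    """ๅๅใป่ชฌๆใปๆฌไฝ้ขๆฐใ ใใๆใคๆๅฐ้ใฎใใผใซใ"""

    def __init__(self, name, description, fn):
        self.name = name
        self.description = description
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)


def build_tool_prompt(tools, task):
    # ๅใใผใซใฎ่ชฌๆใ LLM ใธใฎใใญใณใใใซๅฑ้ใใ
    lines = ["You have access to the following tools:"]
    lines += [f"- {t.name}: {t.description}" for t in tools]
    lines.append(f"Task: {task}")
    return "\n".join(lines)


summarize = Tool("summarizer", "Summarizes a long text.", lambda text: text[:20])
print(build_tool_prompt([summarize], "Summarize this article."))
```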
#### Code-execution?!
ใใฎใณใผใใฏใใใผใซใจใใใซๆธกใใใๅ
ฅๅใฎใปใใใจใจใใซใๅฝ็คพใฎๅฐ่ฆๆจกใชPythonใคใณใฟใผใใชใฟใงๅฎ่กใใใพใใๆไพใใใใใผใซใจ `print` ้ขๆฐใใๅผใณๅบใใชใใใใๅฎ่กใงใใๆไฝใฏใใงใซๅถ้ใใใฆใใพใใHugging Faceใฎใใผใซใซ้ๅฎใใใฆใใใใใๅฎๅ
จใ ใจ่ใใฆๅ้กใใใพใใใใใใซใๅฑๆงใฎๅ็
งใใคใณใใผใใฏ่จฑๅฏใใใฆใใใ๏ผใใใใฏๆธกใใใๅ
ฅๅ/ๅบๅใฎๅฆ็ใซใฏไธ่ฆใชใฏใใงใ๏ผใๆใๆใใใชๆปๆใฏๅ้กใซใชใใพใใ๏ผใจใผใธใงใณใใซใใใใๅบๅใใใใใใซใใญใณใใใใๅฟ
่ฆใใใใพใ๏ผใใใๅฎๅ
จๅดใซๅใใใๅ ดๅใฏใ่ฟฝๅ ใฎๅผๆฐ `return_code=True` ใๆๅฎใใฆ `run()` ใกใฝใใใๅฎ่กใงใใพใใใใฎๅ ดๅใใจใผใธใงใณใใฏๅฎ่กใใใณใผใใ่ฟใใ ใใงใๅฎ่กใใใใฉใใใฏใใชใๆฌก็ฌฌใงใใ
ๅฎ่กใฏใ้ๆณใชๆไฝใ่ฉฆใฟใใใๅ ดๅใใใจใผใธใงใณใใ็ๆใใใณใผใใซ้ๅธธใฎPythonใจใฉใผใใใฃใๅ ดๅใซๅๆญขใใพใใ
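ใ่จฑๅฏใใใๅๅใ ใใ่ฆใใๅถ้ใใใๅฎ่ก็ฐๅขใใฎ้ฐๅฒๆฐใฏใ`exec` ใฎใฐใญใผใใซๅๅ็ฉบ้ใ็ตใใใจใงใคใกใผใธใงใใพใใไปฅไธใฏใใใพใง่ชฌๆ็จใฎใใใผใงใใใๆฌ็ฉใฎใตใณใใใใฏในใงใ transformers ใฎๅฎ้ใฎใคใณใฟใผใใชใฟๅฎ่ฃ
ใงใใใใพใใใ

```python
def restricted_exec(code, tools):
    # ็ๆใณใผใใใ่ฆใใใฎใฏ print ใจๆๅฎใใใใใผใซใ ใ
    scope = {"__builtins__": {"print": print}}
    scope.update(tools)
    exec(code, scope)
    return scope


scope = restricted_exec("result = double(21)", {"double": lambda x: x * 2})
print(scope["result"])  # 42

try:
    restricted_exec("open('secrets.txt')", {})
except NameError as e:
    print("blocked:", e)  # open ใฏๅๅ็ฉบ้ใซใชใใใๅคฑๆใใ
```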
### A curated set of tools
We identify a set of tools that can empower such agents. Here is an updated list of the tools we have integrated in `transformers`:
- **Document question answering**: given a document (such as a PDF) in image format, answer a question about the document ([Donut](./model_doc/donut))
- **Text question answering**: given a long text and a question, answer the question in the text ([Flan-T5](./model_doc/flan-t5))
- **Unconditional image captioning**: caption the image! ([BLIP](./model_doc/blip))
- **Image question answering**: given an image, answer a question about this image ([VILT](./model_doc/vilt))
- **Image segmentation**: given an image and a prompt, output the segmentation mask of that prompt ([CLIPSeg](./model_doc/clipseg))
- **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper))
- **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5))
- **Zero-shot text classification**: given a text and a list of labels, identify to which label the text corresponds the most ([BART](./model_doc/bart))
- **Text summarization**: summarize a long text in one or a few sentences ([BART](./model_doc/bart))
- **Translation**: translate the text into a given language ([NLLB](./model_doc/nllb))
These tools have an integration in transformers, and can be used manually as well, for example:
```py
from transformers import load_tool
tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
```
### Custom tools
While we identify a curated set of tools, we strongly believe that the main value of this implementation is the ability to quickly create and share custom tools.
By pushing the code of a tool to a Hugging Face Space or a model repository, you can leverage the tool directly with the agent. A few **transformers-agnostic** tools have been added to the [`huggingface-tools` organization](https://huggingface.co/huggingface-tools):
- **Text downloader**: a tool to download a text from a web URL
- **Text to image**: a tool to generate an image according to a prompt, leveraging Stable Diffusion
- **Image transformation**: a tool to transform an image given an initial image and a prompt, leveraging instruct pix2pix Stable Diffusion
- **Text to video**: a tool to generate a small video according to a prompt, leveraging damo-vilab
The text-to-image tool we have been using since the beginning is a remote tool that lives in [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)! We will keep releasing such tools on this and other organizations, to further supercharge this implementation.
The agents have access by default to the tools that live on [`huggingface-tools`](https://huggingface.co/huggingface-tools).
We explain how to write and share your own tools, as well as how to leverage any custom tool that lives on the Hub, in [the following guide](custom_tools).
### Code generation
We've so far shown how to use the agents to perform actions for you. However, the agent is only generating code that we then execute using a very restricted Python interpreter. In case you would like to use the generated code in a different setting, the agent can be prompted to return the code, along with the tool definitions and accurate imports.
For example, the following instruction
```python
agent.run("Draw me a picture of rivers and lakes", return_code=True)
```
returns the following code
```python
from transformers import load_tool
image_generator = load_tool("huggingface-tools/text-to-image")
image = image_generator(prompt="rivers and lakes")
```
that you can then modify and execute yourself.
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# Pipelines for inference
The [`pipeline`] makes it simple to use any model from the [Hub](https://huggingface.co/models) for inference on any language, computer vision, speech, and multimodal tasks. Even if you don't have experience with a specific modality or aren't familiar with the code behind the models, you can still use them for inference with the [`pipeline`]!
This tutorial will teach you to:
- Use a [`pipeline`] for inference.
- Use a specific tokenizer or model.
- Use a [`pipeline`] for audio, vision, and multimodal tasks.
<Tip>
Take a look at the [`pipeline`] documentation for a complete list of supported tasks and available parameters.
</Tip>
## Pipeline usage
While each task has an associated [`pipeline`], it is simpler to use the general [`pipeline`] abstraction, which contains all the task-specific pipelines. The [`pipeline`] automatically loads a default model and a preprocessing class capable of inference for your task.
1. Start by creating a [`pipeline`] and specify the inference task:
```py
>>> from transformers import pipeline
>>> generator = pipeline(task="automatic-speech-recognition")
```
2. Pass your input text to the [`pipeline`]:
```python
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'}
```
Not the result you had in mind? Check out some of the [most downloaded automatic speech recognition models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) on the Hub to see if you can get a better transcription.
Let's try [openai/whisper-large](https://huggingface.co/openai/whisper-large):
```python
>>> generator = pipeline(model="openai/whisper-large")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
Now this result looks more accurate!
We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more.
You can check out and compare model results directly from your browser on the Hub to see if a model fits or handles corner cases better than other ones.
And if you don't find a model for your use case, you can always start [training](training) your own!
If you have several inputs, you can pass your input as a list:
```py
generator(
[
"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
]
)
```
If you want to iterate over a whole dataset, or want to use it for inference in a webserver, check out the dedicated parts:
[Using pipelines on a dataset](#using-pipelines-on-a-dataset)
[Using pipelines for a webserver](./pipeline_webserver)
## Parameters
[`pipeline`] supports many parameters; some are task specific, and some are general to all pipelines.
In general you can specify parameters anywhere you want:
```py
generator = pipeline(model="openai/whisper-large", my_parameter=1)
out = generator(...)  # This will use `my_parameter=1`.
out = generator(..., my_parameter=2)  # This will override and use `my_parameter=2`.
out = generator(...)  # This will go back to using `my_parameter=1`.
```
Let's check out three important ones:
### Device
If you use `device=n`, the pipeline automatically puts the model on the specified device.
This will work regardless of whether you are using PyTorch or Tensorflow.
```py
generator = pipeline(model="openai/whisper-large", device=0)
```
If the model is too large for a single GPU, you can set `device_map="auto"` to allow 🤗 [Accelerate](https://huggingface.co/docs/accelerate) to automatically determine how to load and store the model weights.
```python
#!pip install accelerate
generator = pipeline(model="openai/whisper-large", device_map="auto")
```
Note that if `device_map="auto"` is passed, there is no need to add the argument `device=device` when instantiating your `pipeline`, as you may otherwise encounter some unexpected behavior!
### Batch size
By default, pipelines will not batch inference, for reasons explained in detail [here](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). The reason is that batching is not necessarily faster, and can actually be quite slower in some cases.
But if it works in your use case, you can use:
```py
generator = pipeline(model="openai/whisper-large", device=0, batch_size=2)
audio_filenames = [f"audio_{i}.flac" for i in range(10)]
texts = generator(audio_filenames)
```
This runs the pipeline on the 10 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you.
The output should always match what you would have received without batching; it is only meant as a way to help you get more speed out of a pipeline.
Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching) for you.
### Task specific parameters
All tasks provide task-specific parameters which allow for additional flexibility and options to help you get your job done.
For instance, the [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] method has a `return_timestamps` parameter which sounds promising for subtitling videos:
```py
>>> # Not using whisper, as it cannot provide timestamps.
>>> generator = pipeline(model="facebook/wav2vec2-large-960h-lv60-self", return_timestamps="word")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP AND LIVE OUT THE TRUE MEANING OF ITS CREED', 'chunks': [{'text': 'I', 'timestamp': (1.22, 1.24)}, {'text': 'HAVE', 'timestamp': (1.42, 1.58)}, {'text': 'A', 'timestamp': (1.66, 1.68)}, {'text': 'DREAM', 'timestamp': (1.76, 2.14)}, {'text': 'BUT', 'timestamp': (3.68, 3.8)}, {'text': 'ONE', 'timestamp': (3.94, 4.06)}, {'text': 'DAY', 'timestamp': (4.16, 4.3)}, {'text': 'THIS', 'timestamp': (6.36, 6.54)}, {'text': 'NATION', 'timestamp': (6.68, 7.1)}, {'text': 'WILL', 'timestamp': (7.32, 7.56)}, {'text': 'RISE', 'timestamp': (7.8, 8.26)}, {'text': 'UP', 'timestamp': (8.38, 8.48)}, {'text': 'AND', 'timestamp': (10.08, 10.18)}, {'text': 'LIVE', 'timestamp': (10.26, 10.48)}, {'text': 'OUT', 'timestamp': (10.58, 10.7)}, {'text': 'THE', 'timestamp': (10.82, 10.9)}, {'text': 'TRUE', 'timestamp': (10.98, 11.18)}, {'text': 'MEANING', 'timestamp': (11.26, 11.58)}, {'text': 'OF', 'timestamp': (11.66, 11.7)}, {'text': 'ITS', 'timestamp': (11.76, 11.88)}, {'text': 'CREED', 'timestamp': (12.0, 12.38)}]}
```
As you can see, the model inferred the text and also output **when** the various words were pronounced in the sentence.
There are many parameters available for each task, so check out each task's API reference to see what you can tinker with!
For instance, the [`~transformers.AutomaticSpeechRecognitionPipeline`] has a `chunk_length_s` parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own.
If you can't find a parameter that would really help you out, feel free to [request it](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)!
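To build intuition for what `chunk_length_s` enables, here is a rough plain-Python sketch of splitting a long signal into overlapping chunks. The actual chunk-batching logic inside the pipeline is more involved; `chunk_len` and `stride` below are illustrative values in samples:

```python
def chunk_signal(signal, chunk_len, stride):
    """Split a long 1-D signal into overlapping chunks so each fits the model."""
    chunks = []
    step = chunk_len - 2 * stride  # neighbouring chunks overlap by `stride` on each side
    for start in range(0, len(signal), step):
        chunks.append(signal[start:start + chunk_len])
        if start + chunk_len >= len(signal):
            break
    return chunks

chunks = chunk_signal(list(range(100)), chunk_len=40, stride=10)
print(len(chunks))     # 4
print(chunks[-1][-1])  # 99 - the last chunk reaches the end of the signal
```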
## Using pipelines on a dataset
Pipelines can also run inference on a large dataset. The easiest way we recommend doing this is by using an iterator:
```py
def data():
for i in range(1000):
yield f"My example {i}"
pipe = pipeline(model="gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
generated_characters += len(out[0]["generated_text"])
```
The iterator `data()` yields each result, and the pipeline automatically recognizes the input is iterable and will start fetching the data while it continues processing it on the GPU (this uses [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) under the hood).
This is important because you don't have to allocate memory for the whole dataset and you can feed the GPU as fast as possible.
Since batching could speed things up, it may be useful to try tuning the `batch_size` parameter here.
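What batching over a stream amounts to can be sketched in plain Python: group an iterator into fixed-size lists before handing each group to the model. The pipeline does this for you internally via a DataLoader; `batch_size=4` below is an illustrative value:

```python
from itertools import islice

def batched(iterable, batch_size):
    """Group a (possibly unbounded) iterator into lists of at most batch_size items."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def data():
    for i in range(10):
        yield f"My example {i}"

batches = list(batched(data(), batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```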
The simplest way to iterate over a dataset is to just load one from 🤗 [Datasets](https://github.com/huggingface/datasets/):
```py
# KeyDataset is a util that will just output the item we're interested in.
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
for out in pipe(KeyDataset(dataset, "audio")):
print(out)
```
## Using pipelines for a webserver
<Tip>
Creating an inference engine is a complex topic which deserves its own page.
</Tip>
[Using pipelines for a webserver](./pipeline_webserver)
## Vision pipeline
Using a [`pipeline`] for vision tasks is practically identical.
Specify your task and pass your image to the classifier. The image can be a link, a local path, or a base64-encoded image. For example, what species of cat is shown below?

```py
>>> from transformers import pipeline
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```
## Text pipeline
Using a [`pipeline`] for NLP tasks is practically identical.
```py
>>> from transformers import pipeline
>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```
## Multimodal pipeline
The [`pipeline`] supports more than one modality. For example, a visual question answering (VQA) task combines text and image.
Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.
For example, if you use this [invoice image](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png):
```py
>>> from transformers import pipeline
>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
... question="What is the invoice number?",
... )
[{'score': 0.42515, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
<Tip>
To run the example above, you need to have [`pytesseract`](https://pypi.org/project/pytesseract/) installed in addition to 🤗 Transformers:
```bash
sudo apt install -y tesseract-ocr
pip install pytesseract
```
</Tip>
## Using `pipeline` on large models with ๐ค `accelerate`:
First, make sure you have installed `accelerate` with `pip install accelerate`.
Then, load your model using `device_map="auto"`. We will use the `facebook/opt-1.3b` model for this example.
```python
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```
If you install `bitsandbytes` and add the argument `load_in_8bit=True`, you can pass models loaded in 8-bit:
```py
# pip install accelerate bitsandbytes
import torch
from transformers import pipeline
pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```
Note that you can replace the checkpoint with any Hugging Face model that supports large model loading, such as BLOOM!
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# Quick tour
[[open-in-colab]]
Get up and running with 🤗 Transformers! Whether you're a developer or an everyday user, this quick tour will help you get started and show you how to use the [`pipeline`] for inference, load a pretrained model and preprocessor with an [AutoClass](./model_doc/auto), and quickly train a model with PyTorch or TensorFlow. If you're a beginner, we recommend checking out our tutorials or [course](https://huggingface.co/course/chapter1/1) next for more in-depth explanations of the concepts introduced here.
Before you begin, make sure you have all the necessary libraries installed:
```bash
!pip install transformers datasets
```
You'll also need to install your preferred machine learning framework:
<frameworkcontent>
<pt>
```bash
pip install torch
```
</pt>
<tf>
```bash
pip install tensorflow
```
</tf>
</frameworkcontent>
## Pipeline
<Youtube id="tiZFewofSLM"/>
The [`pipeline`] is the easiest and fastest way to use a pretrained model for inference.
You can use the [`pipeline`] out-of-the-box for many tasks across different modalities, some of which are shown in the table below:
<Tip>
For a complete list of available tasks, check out the [pipeline API reference](./main_classes/pipelines).
</Tip>
| **Task** | **Description** | **Modality** | **Pipeline identifier** |
|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|-----------------------------------------------|
| Text classification | assign a label to a given sequence of text | NLP | pipeline(task="sentiment-analysis") |
| Text generation | generate text given a prompt | NLP | pipeline(task="text-generation") |
| Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task="summarization") |
| Image classification | assign a label to an image | Computer vision | pipeline(task="image-classification") |
| Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation") |
| Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task="object-detection") |
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
| Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |
Start by creating an instance of [`pipeline`] and specifying a task you want to use it for.
In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis")
```
The [`pipeline`] downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis.
Now you can use the `classifier` on your target text:
```python
>>> classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```
If you have more than one input, pass your inputs as a list to the [`pipeline`] to return a list of dictionaries:
```py
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```
The [`pipeline`] can also iterate over an entire dataset for any task you like. For this example, let's choose automatic speech recognition as our task:
```python
>>> import torch
>>> from transformers import pipeline
>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```
Load an audio dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) for more details) you'd like to iterate over.
For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:
```python
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT
```
Make sure the sampling rate of the dataset matches the sampling rate [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) was trained on:
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```
The audio files are automatically loaded and resampled when calling the `"audio"` column.
Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:
```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT']
```
For larger datasets where the inputs are big (like in speech or vision), you'll want to pass a generator instead of a list so that you don't load all the inputs in memory. Take a look at the [pipeline API reference](./main_classes/pipelines) for more information.
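A rough sketch of what feeding a generator looks like, with a hypothetical `transcribe` function standing in for the pipeline so nothing needs to be downloaded here — the point is that inputs are yielded and consumed one at a time instead of being materialized as a list:

```python
def audio_paths():
    # Yield inputs lazily instead of building a 1000-element list in memory.
    for i in range(1000):
        yield f"audio_{i}.flac"

def transcribe(inputs):
    # Hypothetical stand-in for a speech-recognition pipeline: it consumes
    # the stream lazily, one input at a time.
    for path in inputs:
        yield {"text": f"transcript of {path}"}

first = next(transcribe(audio_paths()))
print(first)  # {'text': 'transcript of audio_0.flac'}
```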
### Use another model and tokenizer in the pipeline
The [`pipeline`] can accommodate any model from the [Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use cases. For example, if you'd like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) finetuned for sentiment analysis that you can use for French text:
```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```
<frameworkcontent>
<pt>
Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` in the next section):
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Use [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` in the next section):
```python
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>
Specify the model and tokenizer in the [`pipeline`], and now you can apply the `classifier` on French text:
```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐ค Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
If you can't find a model for your use case, you'll need to finetune a pretrained model on your data.
Take a look at our [finetuning tutorial](./training) to learn how.
Finally, after you've finetuned your pretrained model, please consider sharing the model with the community on the Hub to democratize machine learning for everyone! 🤗
## AutoClass
<Youtube id="AhChOFRegn4"/>
Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`] you used above.
An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path.
You only need to select the appropriate `AutoClass` for your task and its associated preprocessing class.
Let's return to the example from the previous section and see how you can use an `AutoClass` to replicate the results of the [`pipeline`].
### AutoTokenizer
A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model.
There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split
(learn more about tokenization in the [tokenizer summary](./tokenizer_summary)).
The most important thing to remember is that you need to instantiate a tokenizer with the same model name to ensure you're using the same tokenization rules the model was pretrained with.
Load a tokenizer with [`AutoTokenizer`]:
```python
>>> from transformers import AutoTokenizer
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Pass your text to the tokenizer:
```python
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary containing:
- [input_ids](./glossary#input-ids): numerical representations of your tokens.
- [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.
A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:
<frameworkcontent>
<pt>
```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="pt",
... )
```
</pt>
<tf>
```py
>>> tf_batch = tokenizer(
... ["We are very happy to show you the ๐ค Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="tf",
... )
```
</tf>
</frameworkcontent>
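The padding behavior above can be sketched in plain Python (the token ids below are hypothetical, not produced by a real tokenizer):

```python
def pad_batch(sequences, pad_id=0):
    """Pad token-id sequences to the longest one and build the matching attention mask."""
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * pad)
        attention_mask.append([1] * len(seq) + [0] * pad)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = pad_batch([[101, 7592, 102], [101, 7592, 2088, 999, 102]])
print(batch["input_ids"][0])       # [101, 7592, 102, 0, 0]
print(batch["attention_mask"][0])  # [1, 1, 1, 0, 0]
```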
<Tip>
Check out the [preprocess](./preprocessing) tutorial for more details about tokenization, and how to use an [`AutoImageProcessor`], [`AutoFeatureExtractor`] and [`AutoProcessor`] to preprocess image, audio, and multimodal inputs.
</Tip>
### AutoModel
<frameworkcontent>
<pt>
🤗 Transformers provides a simple and unified way to load pretrained instances.
This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`].
The only difference is selecting the correct [`AutoModel`] for the task.
For text (or sequence) classification, you should load [`AutoModelForSequenceClassification`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class.
</Tip>
Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:
```python
>>> pt_outputs = pt_model(**pt_batch)
```
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```py
>>> from torch import nn
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
[0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers provides a simple and unified way to load pretrained instances.
This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`].
The only difference is selecting the correct [`TFAutoModel`] for the task.
For text (or sequence) classification, you should load [`TFAutoModelForSequenceClassification`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] class.
</Tip>
Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:
```python
>>> tf_outputs = tf_model(tf_batch)
```
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```python
>>> import tensorflow as tf
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>
<Tip>
All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation function (like softmax), because the final activation function is often fused with the loss.
Model outputs are special dataclasses, so their attributes are autocompleted in an IDE.
The model outputs behave like a tuple or a dictionary (you can index with an integer, a slice, or a string), in which case attributes that are None are ignored.
</Tip>
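As a rough illustration of that tuple/dict behavior, here is a hypothetical stand-in class (not the actual `ModelOutput` implementation) showing attribute, string, and integer indexing, with `None` entries skipped for positional access:

```python
from collections import OrderedDict

class FakeOutput(OrderedDict):
    """Hypothetical sketch of a model output: dict-like access, attribute access,
    and integer/slice indexing that skips entries whose value is None."""
    def __init__(self, **kwargs):
        super().__init__(kwargs)
        for key, value in kwargs.items():
            setattr(self, key, value)

    def to_tuple(self):
        return tuple(v for v in self.values() if v is not None)

    def __getitem__(self, key):
        if isinstance(key, str):
            return OrderedDict.__getitem__(self, key)
        return self.to_tuple()[key]  # ints and slices skip None entries

out = FakeOutput(loss=None, logits=[0.1, 0.9])
print(out["logits"])  # [0.1, 0.9]
print(out[0])         # [0.1, 0.9] - index 0 skips the None loss
print(out.logits)     # [0.1, 0.9]
```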
### Save a Model
<frameworkcontent>
<pt>
Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:
```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```
When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:
```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:
```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```
When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:
```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>
One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other:
<frameworkcontent>
<pt>
```py
>>> from transformers import AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>
```py
>>> from transformers import TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
## Custom model builds
You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class: the model attributes are randomly initialized, and you'll need to train the model before you can use it to get meaningful results.
Start by importing [`AutoConfig`], and then load the pretrained model you want to modify. Within [`AutoConfig.from_pretrained`], you can specify the attribute you want to change, such as the number of attention heads:
```python
>>> from transformers import AutoConfig
>>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12)
```
<frameworkcontent>
<pt>
Create a model from your custom configuration with [`AutoModel.from_config`]:
```python
>>> from transformers import AutoModel
>>> my_model = AutoModel.from_config(my_config)
```
</pt>
<tf>
Create a model from your custom configuration with [`TFAutoModel.from_config`]:
```py
>>> from transformers import TFAutoModel
>>> my_model = TFAutoModel.from_config(my_config)
```
</tf>
</frameworkcontent>
Take a look at the [Create a custom architecture](./create_a_model) guide for more information about building custom configurations.
## Trainer - a PyTorch optimized training loop
All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a [`Trainer`] class for PyTorch that contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.

Depending on your task, you'll typically pass the following parameters to [`Trainer`]:

1. Start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
2. [`TrainingArguments`] contains the model hyperparameters you can change, such as the learning rate, batch size, and the number of epochs to train for. The default values are used if you don't specify any training arguments:
```py
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(
... output_dir="path/to/save/folder/",
... learning_rate=2e-5,
... per_device_train_batch_size=8,
... per_device_eval_batch_size=8,
... num_train_epochs=2,
... )
```
3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
4. Load a dataset:
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT
```
5. Create a function to tokenize the dataset:
```python
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
```
   Then apply it over the entire dataset with [`~datasets.Dataset.map`]:
```python
>>> dataset = dataset.map(tokenize_dataset, batched=True)
```
6. A [`DataCollatorWithPadding`] to create a batch of examples from your dataset:
```py
>>> from transformers import DataCollatorWithPadding
>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```
Now gather all these classes in [`Trainer`]:
```python
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=dataset["train"],
... eval_dataset=dataset["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... ) # doctest: +SKIP
```
When you're ready, call [`~Trainer.train`] to start training:
```py
>>> trainer.train() # doctest: +SKIP
```
<Tip>
For tasks - like translation or summarization - that use a sequence-to-sequence model, use the [`Seq2SeqTrainer`] and [`Seq2SeqTrainingArguments`] classes instead.
</Tip>
You can customize the training loop behavior by subclassing the methods inside [`Trainer`]. This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the [`Trainer`] reference for which methods can be subclassed.

The other way to customize the training loop is by using [Callbacks](./main_classes/callbacks). You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass [`Trainer`] instead.
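For instance, overriding `compute_loss` in a [`Trainer`] subclass might look like the following sketch. The two-class weighting here is purely a hypothetical example, not part of the quick tour:

```python
import torch
from torch.nn.functional import cross_entropy
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    """Trainer variant that re-weights the loss for an imbalanced two-class task."""

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # Up-weight class 1; these weights are illustrative assumptions.
        weights = torch.tensor([1.0, 2.0], device=outputs.logits.device)
        loss = cross_entropy(outputs.logits, labels, weight=weights)
        return (loss, outputs) if return_outputs else loss
```

A `WeightedLossTrainer` can then be used exactly like the `Trainer` in the example above; everything else in the training loop stays the same.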
## Train with TensorFlow
All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so they can be trained in TensorFlow with the [Keras](https://keras.io/) API. 🤗 Transformers provides the [`~TFPreTrainedModel.prepare_tf_dataset`] method to easily load your dataset as a `tf.data.Dataset` so you can start training right away with Keras' [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) methods.

1. Start with a [`TFPreTrainedModel`] or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model):
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```
3. Create a function to tokenize the dataset:
```python
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"]) # doctest: +SKIP
```
4. Apply the tokenizer over the entire dataset with [`~datasets.Dataset.map`], and then pass the dataset and tokenizer to [`~TFPreTrainedModel.prepare_tf_dataset`]. You can also change the batch size and shuffle the dataset here if you'd like:
```python
>>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP
>>> tf_dataset = model.prepare_tf_dataset(
... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
... ) # doctest: +SKIP
```
5. When you're ready, you can call `compile` and `fit` to start training. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```python
>>> from tensorflow.keras.optimizers import Adam

>>> model.compile(optimizer=Adam(3e-5))  # No loss argument!
>>> model.fit(tf_dataset)  # doctest: +SKIP
```
## What's next?
๐ค Transformersใฎใฏใคใใฏใใขใผใๅฎไบใใใใใฌใคใใใใงใใฏใใฆใใซในใฟใ ใขใใซใฎไฝๆใใฟในใฏใฎใใใฎใใกใคใณใใฅใผใใณใฐใในใฏใชใใใไฝฟ็จใใใขใใซใฎใใฌใผใใณใฐใชใฉใใใๅ
ทไฝ็ใชใใจใๅญฆใถใใจใใงใใพใใ๐ค Transformersใฎใณใขใณใณใปใใใซใคใใฆใใฃใจ่ฉณใใ็ฅใใใๅ ดๅใฏใใณใณใปใใใฅใขใซใฌใคใใ่ชญใใงใฟใฆใใ ใใ๏ผ
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Performance and Scalability
Training larger and larger transformer models and deploying them to production comes with a range of challenges. During training, your model may require more GPU memory than is available, or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment.

This documentation aims to assist you in overcoming these challenges and finding the optimal setting for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training, or CPU vs. GPU for inference.

Use this document as your starting point to navigate further to the methods that match your scenario.
## Training
Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. To learn about common approaches for optimizing training efficiency on a single GPU, start here:

* [Methods and tools for efficient training on a single GPU](perf_train_gpu_one): start here to learn common approaches that help make effective use of GPU memory and speed up training.
* [Multi-GPU training section](perf_train_gpu_many): learn in detail about further optimization methods that apply to a multi-GPU setting, such as data, tensor, and pipeline parallelism.
* [CPU training section](perf_train_cpu): learn about mixed precision training on CPU.
* [Efficient training on multiple CPUs](perf_train_cpu_many): learn about distributed CPU training.
* [Training on TPU with TensorFlow](perf_train_tpu_tf): if you are new to TPUs, refer to this section for an introduction to training on TPU and using XLA.
* [Custom hardware for training](perf_hardware): find tips and tricks when building your own deep learning rig.
* [Hyperparameter search using the Trainer API](hpo_train)
## Inference
Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups.

* [Inference on a single CPU](perf_infer_cpu)
* [Inference on a single GPU](perf_infer_gpu_one)
* [Multi-GPU inference](perf_infer_gpu_many)
* [XLA integration for TensorFlow models](tf_xla)
## Training and inference
Here you'll find techniques, tips, and tricks that apply whether you are training a model or running inference with it.

* [Instantiating a big model](big_models)
* [Troubleshooting performance issues](debugging)
## Contribute
This document is far from being complete, and a lot more things need to be added, so if you have additions or corrections to make, please don't hesitate to open a PR, or, if you aren't sure, start an Issue so we can discuss the details.

When making contributions stating that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Inference on CPU
This guide focuses on inferencing large models efficiently on CPU.
## `BetterTransformer` for faster inference
We recently integrated `BetterTransformer` for faster inference on CPU for text, image, and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.
## PyTorch JIT-mode (TorchScript)
TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. Compared to the default eager mode, jit mode in PyTorch typically yields better performance for model inference through optimization techniques like operator fusion.

For a gentle introduction to TorchScript, see the [PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules).
### IPEX Graph Optimization with JIT-mode
Intelยฎ Extension for PyTorchใฏใTransformersใทใชใผใบใขใใซใฎjitใขใผใใซใใใชใๆ้ฉๅใๆไพใใพใใIntelยฎ Extension for PyTorchใjitใขใผใใงไฝฟ็จใใใใจใๅผทใใๅงใใใพใใTransformersใขใใซใใใใไฝฟ็จใใใใชใใฌใผใฟใผใใฟใผใณใฎใใใคใใฏใๆขใซIntelยฎ Extension for PyTorchใงjitใขใผใใฎใใฅใผใธใงใณใซๅฏพๅฟใใฆใใพใใใใใใฎใใฅใผใธใงใณใใฟใผใณ๏ผMulti-head-attentionใใฅใผใธใงใณใConcat LinearใLinear+AddใLinear+GeluใAdd+LayerNormใใฅใผใธใงใณใชใฉ๏ผใฏๆๅนใงใใใฉใผใใณในใ่ฏใใงใใใใฅใผใธใงใณใฎๅฉ็นใฏใใฆใผใถใผใซ้้็ใซๆไพใใใพใใๅๆใซใใใฐใๆใไบบๆฐใฎใใ่ณชๅๅฟ็ญใใใญในใๅ้กใใใผใฏใณๅ้กใฎNLPใฟในใฏใฎ็ด70๏ผ
ใใใใใใฎใใฅใผใธใงใณใใฟใผใณใไฝฟ็จใใฆFloat32็ฒพๅบฆใจBFloat16ๆททๅ็ฒพๅบฆใฎไธกๆนใงใใใฉใผใใณในใฎๅฉ็นใๅพใใใจใใงใใพใใ
[IPEXใฐใฉใๆ้ฉๅใฎ่ฉณ็ดฐๆ
ๅ ฑ](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html)ใ็ขบ่ชใใฆใใ ใใใ
#### IPEX installation:
IPEX releases follow PyTorch releases. Check the [IPEX installation approaches](https://intel.github.io/intel-extension-for-pytorch/).
### Usage of JIT-mode
To enable JIT mode in Trainer for evaluation or prediction, users should add `jit_mode_eval` to the Trainer command arguments.
<Tip warning={true}>
For PyTorch >= 1.14.0, JIT-mode can benefit any model for prediction and evaluation, since dict inputs are supported in jit.trace.

For PyTorch < 1.14.0, JIT-mode can benefit a model whose forward parameter order matches the tuple input order in jit.trace, such as question-answering models. In case the forward parameter order does not match the tuple input order in jit.trace, like text-classification models, jit.trace will fail, and we capture this with an exception to make it fall back. Logging is used to notify users.
</Tip>
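The fallback behavior described in the tip above can be sketched as follows. This is an illustration of the pattern, not the actual Trainer code:

```python
import logging
import torch

logger = logging.getLogger(__name__)


def maybe_trace(model, example_inputs):
    """Try to jit.trace the model; fall back to the eager model on failure."""
    try:
        return torch.jit.trace(model, example_inputs, strict=False)
    except Exception:
        # e.g. the forward parameter order does not match the traced input order
        logger.warning("jit.trace failed, falling back to the eager model")
        return model


model = torch.nn.Linear(2, 2).eval()
result = maybe_trace(model, torch.randn(1, 2))
```

With this pattern, evaluation always proceeds: either with the traced module when tracing succeeds, or with the original eager model otherwise.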
Take the [Transformers question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) for reference.
- Inference using jit mode on CPU:
<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--jit_mode_eval </b></pre>
- Inference with IPEX using jit mode on CPU:
<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--use_ipex \</b>
<b>--jit_mode_eval</b></pre>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Sharing custom models
๐ค Transformersใฉใคใใฉใชใฏใ็ฐกๅใซๆกๅผตใงใใใใใซ่จญ่จใใใฆใใพใใใในใฆใฎใขใใซใฏใชใใธใใชใฎ็นๅฎใฎใตใใใฉใซใใซๅฎๅ
จใซใณใผใๅใใใฆใใใๆฝ่ฑกๅใฏใใใพใใใใใใใฃใฆใใขใใชใณใฐใใกใคใซใใณใใผใใฆ่ชฟๆดใใใใจใ็ฐกๅใงใใ
ๆฐใใใขใใซใๆธใใฆใใๅ ดๅใใผใญใใๅงใใๆนใ็ฐกๅใใใใใพใใใใใฎใใฅใผใใชใขใซใงใฏใใซในใฟใ ใขใใซใจใใฎ่จญๅฎใใฉใฎใใใซๆธใใTransformersๅ
ใงไฝฟ็จใงใใใใใซใใใณใผใใซไพๅญใใๅ
ฑๅไฝใจๅ
ฑๆใใๆนๆณใ่ชฌๆใใพใใใฉใคใใฉใชใซๅญๅจใใชใๅ ดๅใงใใ่ชฐใงใไฝฟ็จใงใใใใใซใใพใใ
ใใใๅฎ่จผใใใใใซใ[timmใฉใคใใฉใช](https://github.com/rwightman/pytorch-image-models)ใฎResNetใฏใฉในใ[`PreTrainedModel`]ใซใฉใใใใใใจใซใใฃใฆใResNetใขใใซใไฝฟ็จใใพใใ
## Writing a custom configuration
Before we dive into the model, let's first write its configuration. The configuration of a model is an object that will contain all the necessary information to build the model. As we will see in the next section, the model can only take a `config` to be initialized, so we really need that object to be as complete as possible.

In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking the validity of a few of them.
```python
from transformers import PretrainedConfig
from typing import List
class ResnetConfig(PretrainedConfig):
model_type = "resnet"
def __init__(
self,
block_type="bottleneck",
layers: List[int] = [3, 4, 6, 3],
num_classes: int = 1000,
input_channels: int = 3,
cardinality: int = 1,
base_width: int = 64,
stem_width: int = 64,
stem_type: str = "",
avg_down: bool = False,
**kwargs,
):
if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
if stem_type not in ["", "deep", "deep-tiered"]:
raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")
self.block_type = block_type
self.layers = layers
self.num_classes = num_classes
self.input_channels = input_channels
self.cardinality = cardinality
self.base_width = base_width
self.stem_width = stem_width
self.stem_type = stem_type
self.avg_down = avg_down
super().__init__(**kwargs)
```
The three important things to remember when writing your own configuration are:

- you have to inherit from `PretrainedConfig`,
- the `__init__` of your `PretrainedConfig` must accept any kwargs,
- those `kwargs` need to be passed along to the superclass `__init__`.

The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other constraints come from the fact that a `PretrainedConfig` has more fields than the ones you are setting. When reloading a config with the `from_pretrained` method, those fields need to be accepted by your config and then sent to the superclass.
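To see why the kwargs pass-through matters, here is a minimal pure-Python illustration (a stand-in, not the real `PretrainedConfig`): extra fields loaded by `from_pretrained` arrive as keyword arguments and must reach the base class to be stored:

```python
class BaseConfig:
    """Stand-in for PretrainedConfig: it stores fields your subclass never declares."""

    def __init__(self, name_or_path: str = "", **kwargs):
        self.name_or_path = name_or_path


class ToyConfig(BaseConfig):
    def __init__(self, layers: int = 4, **kwargs):
        self.layers = layers
        # Forward everything we did not consume, so base-class fields survive.
        super().__init__(**kwargs)


config = ToyConfig(layers=6, name_or_path="my-checkpoint")
```

Here `layers` is handled by the subclass, while `name_or_path` is only stored because the leftover kwargs were forwarded to the superclass `__init__`.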
Defining a `model_type` for your configuration (here `model_type="resnet"`) is not mandatory, unless you want to register your model with the auto classes (see the last section).
With this done, you can easily create and save your configuration like you would with any other model config of the library. Here is how we can create a resnet50d config and save it:
```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d_config.save_pretrained("custom-resnet")
```
This will save a file named `config.json` inside the folder `custom-resnet`. You can then reload your config with the `from_pretrained` method:
```py
resnet50d_config = ResnetConfig.from_pretrained("custom-resnet")
```
You can also use any other method of the [`PretrainedConfig`] class, like [`~PretrainedConfig.push_to_hub`] to directly upload your config to the Hub.
## Writing a custom model
Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like [`BertModel`]), and one that is suitable for image classification (like [`BertForSequenceClassification`]).

As mentioned before, we'll only write a loose wrapper of the model to keep this example simple. The only thing we need to do before writing this class is a map between the block types and the actual block classes. Then the model is defined from the configuration by passing everything to the `ResNet` class:
```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig
BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}
class ResnetModel(PreTrainedModel):
config_class = ResnetConfig
def __init__(self, config):
super().__init__(config)
block_layer = BLOCK_MAPPING[config.block_type]
self.model = ResNet(
block_layer,
config.layers,
num_classes=config.num_classes,
in_chans=config.input_channels,
cardinality=config.cardinality,
base_width=config.base_width,
stem_width=config.stem_width,
stem_type=config.stem_type,
avg_down=config.avg_down,
)
def forward(self, tensor):
return self.model.forward_features(tensor)
```
For the model that will classify images, we just change the forward method:
```py
import torch
class ResnetModelForImageClassification(PreTrainedModel):
config_class = ResnetConfig
def __init__(self, config):
super().__init__(config)
block_layer = BLOCK_MAPPING[config.block_type]
self.model = ResNet(
block_layer,
config.layers,
num_classes=config.num_classes,
in_chans=config.input_channels,
cardinality=config.cardinality,
base_width=config.base_width,
stem_width=config.stem_width,
stem_type=config.stem_type,
avg_down=config.avg_down,
)
def forward(self, tensor, labels=None):
logits = self.model(tensor)
if labels is not None:
            loss = torch.nn.functional.cross_entropy(logits, labels)
return {"loss": loss, "logits": logits}
return {"logits": logits}
```
In both cases, notice how we inherit from `PreTrainedModel` and call the superclass initialization with the `config` (a bit like when you write a regular `torch.nn.Module`). The line that sets the `config_class` is not mandatory, unless you want to register your model with the auto classes (see the last section).

<Tip>

If your model is very similar to a model inside the library, you can re-use the same configuration as this model.

</Tip>

You can have your model return anything you want, but returning a dictionary like we did for `ResnetModelForImageClassification`, with the loss included when labels are passed, will make your model directly usable inside the [`Trainer`] class. Using another output format is fine as long as you are planning on using your own training loop or another library for training.
Now that we have our model class, let's create one:
```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```
Again, you can use any of the methods of [`PreTrainedModel`], like [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`]. We will use the second in the next section, and see how to push the model weights along with the code of our model. But first, let's load some pretrained weights inside our model.

In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it's going to be easy to transfer those weights:
```py
import timm
pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```
Now let's see how to make sure that when we do [`~PreTrainedModel.save_pretrained`] or [`~PreTrainedModel.push_to_hub`], the code of the model is saved.
## Sending the code to the Hub
<Tip warning={true}>
This API is experimental and may have some slight breaking changes in the next releases.
</Tip>
First, make sure your model is fully defined in a `.py` file. It can rely on relative imports to some other files, as long as all the files are in the same directory (we don't support submodules for this feature yet). For our example, we'll define a `modeling_resnet.py` file and a `configuration_resnet.py` file in a folder of the current working directory named `resnet_model`. The configuration file contains the code for `ResnetConfig` and the modeling file contains the code of `ResnetModel` and `ResnetModelForImageClassification`.
```
.
โโโ resnet_model
โโโ __init__.py
โโโ configuration_resnet.py
โโโ modeling_resnet.py
```
The `__init__.py` can be empty; it's just there so that Python detects `resnet_model` can be used as a module.
<Tip warning={true}>
If copying modeling files from the library, you will need to replace all the relative imports at the top of the file to import from the `transformers` package.
</Tip>
Note that you can re-use (or subclass) an existing configuration/model.

To share your model with the community, follow these steps: first import the ResNet model and config from the newly created files:
```py
from resnet_model.configuration_resnet import ResnetConfig
from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification
```
Then you have to tell the library you want to copy the code files of those objects when using the `save_pretrained` method, and properly register them with a given Auto class (especially for models). Just run:
```py
ResnetConfig.register_for_auto_class()
ResnetModel.register_for_auto_class("AutoModel")
ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification")
```
Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, [`AutoConfig`]), but it's different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model.

Next, let's create the config and models as we did before:
ๆฌกใซใๅ่ฟฐใฎใใใซ่จญๅฎใจใขใใซใไฝๆใใพใใใ๏ผ
```py
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d = ResnetModelForImageClassification(resnet50d_config)
pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
```
Now to send the model to the Hub, make sure you are logged in. Either run in your terminal:
```bash
huggingface-cli login
```
or from a notebook:
```py
from huggingface_hub import notebook_login
notebook_login()
```
You can then push to your own namespace (or an organization you are a member of) like this:
```py
resnet50d.push_to_hub("custom-resnet50d")
```
On top of the modeling weights and the configuration in JSON format, this also copied the modeling and configuration `.py` files into the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d).

See the [sharing tutorial](model_sharing) for more information on the push-to-Hub method.
## Using a model with custom code
You can use any configuration, model, or tokenizer with custom code files in its repository with the auto classes and the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still review the model code and its author to avoid executing malicious code on your machine.
Set `trust_remote_code=True` to use a model with custom code:
```py
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True)
```
It is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the model did not update the code with some malicious new lines (unless you fully trust the authors of the model):
```py
commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
model = AutoModelForImageClassification.from_pretrained(
"sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash
)
```
Note that when browsing the commit history of the model repo, there is a button to easily copy the commit hash of any commit.
## Registering a model with custom code to the auto classes
๐ค Transformersใๆกๅผตใใใฉใคใใฉใชใไฝๆใใฆใใๅ ดๅใ็ฌ่ชใฎใขใใซใๅซใใใใใซ่ชๅใฏใฉในใๆกๅผตใใใๅ ดๅใใใใพใใ
ใใใฏใณใผใใHubใซใใใทใฅใใใใจใจใฏ็ฐใชใใใฆใผใถใผใฏใซในใฟใ ใขใใซใๅๅพใใใใใซใใชใใฎใฉใคใใฉใชใใคใณใใผใใใๅฟ
่ฆใใใใพใ
๏ผHubใใใขใใซใณใผใใ่ชๅ็ใซใใฆใณใญใผใใใใฎใจใฏๅฏพ็
ง็ใงใ๏ผใ
ๆงๆใซๆขๅญใฎใขใใซใฟใคใใจ็ฐใชใ `model_type` ๅฑๆงใใใ้ใใใพใใใชใใฎใขใใซใฏใฉในใ้ฉๅใช `config_class` ๅฑๆงใๆใฃใฆใใ้ใใ
ๆฌกใฎใใใซใใใใ่ชๅใฏใฉในใซ่ฟฝๅ ใงใใพใ๏ผ
```py
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification
AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
```
Note that the first argument used when registering your custom config to [`AutoConfig`] needs to match the `model_type` of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the `config_class` of those models.
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation
Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline.

🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

* [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
* [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.
## Install with pip
๐ค Transformersใ[ไปฎๆณ็ฐๅข](https://docs.python.org/3/library/venv.html)ใซใคใณในใใผใซใใๅฟ
่ฆใใใใพใใ ใใใPythonใฎไปฎๆณ็ฐๅขใซ้ฆดๆใฟใใชใๅ ดๅใฏใใใฎ[ใฌใคใ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใใ่ฆงใใ ใใใไปฎๆณ็ฐๅขใซใใฃใฆ็ฐใชใใใญใธใงใฏใใฎ็ฎก็ใใใ็ฐกๅใซใชใใไพๅญ้ขไฟ้ใฎไบๆๆงใฎๅ้กใๅ้ฟใงใใพใใ
ใพใใใใญใธใงใฏใใใฃใฌใฏใใชใซไปฎๆณ็ฐๅขใไฝๆใใใใจใใๅงใใพใใใ:
```bash
python -m venv .env
```
Activate the virtual environment. On Linux and macOS:
```bash
source .env/bin/activate
```
Activate the virtual environment on Windows:
```bash
.env/Scripts/activate
```
Now you're ready to install 🤗 Transformers with the following command:
```bash
pip install transformers
```
For CPU-support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:
```bash
pip install transformers[torch]
```
๐ค TransformersใจTensorFlow 2.0:
```bash
pip install transformers[tf-cpu]
```
๐ค TransformersใจFlax:
```bash
pip install transformers[flax]
```
Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
Then the label and score are printed:
```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
## Install from source
Install 🤗 Transformers from source with the following command:
```bash
pip install git+https://github.com/huggingface/transformers
```
This command installs the bleeding edge `main` version rather than the latest stable version. The `main` version is useful for staying up-to-date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!
Check if 🤗 Transformers has been properly installed by running the following command:
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```
## Editable install
You will need an editable install if you'd like to:

* use the `main` version of the source code.
* contribute to 🤗 Transformers and need to test changes in the code.

Clone the repository and install 🤗 Transformers with the following commands:
```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```
These commands will link the folder you cloned the repository to and your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`.
<Tip warning={true}>
You must keep the `transformers` folder if you want to keep using the library.
</Tip>
Now you can easily update your clone to the latest version of 🤗 Transformers with the following command:
```bash
cd ~/transformers/
git pull
```
Your Python environment will find the `main` version of 🤗 Transformers on the next run.
## Install with conda
Install from the conda channel `conda-forge`:
```bash
conda install conda-forge::transformers
```
## Cache setup
Pretrained models are downloaded and locally cached at `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory:

1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Shell environment variable: `HF_HOME`.
3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`.
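The priority order above can be sketched as the following resolution function. This is an illustration of the documented order, not the actual transformers implementation:

```python
import os


def resolve_cache_dir(env: dict) -> str:
    """Return the cache directory implied by the documented priority order."""
    # 1. HUGGINGFACE_HUB_CACHE or TRANSFORMERS_CACHE win outright.
    for var in ("HUGGINGFACE_HUB_CACHE", "TRANSFORMERS_CACHE"):
        if env.get(var):
            return env[var]
    # 2. HF_HOME acts as a base directory for the hub cache.
    if env.get("HF_HOME"):
        return os.path.join(env["HF_HOME"], "hub")
    # 3. XDG_CACHE_HOME + /huggingface.
    if env.get("XDG_CACHE_HOME"):
        return os.path.join(env["XDG_CACHE_HOME"], "huggingface", "hub")
    # Default location.
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")
```

For example, with only `HF_HOME=/data/hf` set, the resolved cache directory would be `/data/hf/hub` on a POSIX system.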
<Tip>
🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`.
</Tip>
## Offline mode
๐ค Transformersใฏใญใผใซใซใใกใคใซใฎใฟใไฝฟ็จใใใใจใงใใกใคใขใฆใฉใผใซใใชใใฉใคใณใฎ็ฐๅขใงใๅไฝใใใใใจใใงใใพใใใใฎๅไฝใๆๅนใซใใใใใซใฏใ็ฐๅขๅคๆฐ`TRANSFORMERS_OFFLINE=1`ใ่จญๅฎใใพใใ
<Tip>
Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow by setting the environment variable `HF_DATASETS_OFFLINE=1`.
</Tip>
For example, you would typically run a program on a normal network firewalled to external instances with the following command:
```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
Run this same program in an offline instance with:
```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
The script should now run without hanging or waiting to timeout, because it knows it should only look for local files.
### Fetch models and tokenizers to use offline
Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this:
* Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon.

* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:
  1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
```
    2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:
```py
>>> tokenizer.save_pretrained("./your/path/bigscience_t0")
>>> model.save_pretrained("./your/path/bigscience_t0")
```
    3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory:
```py
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
>>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
```
* Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:

    1. Install the `huggingface_hub` library in your virtual environment:
```bash
python -m pip install huggingface_hub
```
    2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path:
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
```
Once your file is downloaded and locally cached, specify its local path to load and use it:
```py
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```
<Tip>

See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub.

</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Training on TPU with TensorFlow
<Tip>

If you don't need long explanations and just want TPU code samples to start training with, [check out our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)

</Tip>
### What is a TPU?
A TPU is a **Tensor Processing Unit**. TPUs are hardware designed by Google and are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google's cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.

Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the Hugging Face ecosystem (hug-o-system!) of Transformers and Datasets, and we'll make sure to flag them up when we get to them.
### What kinds of TPU are available?
New users are often very confused by the range of TPUs available and the different ways to access them. The first key distinction to understand is the difference between **TPU Nodes** and **TPU VMs**.

When you use a **TPU Node**, you are effectively indirectly accessing a remote TPU. You need a separate VM, which initializes your network and data pipeline and then forwards them to the remote node. When you use a TPU in Google Colab, you are accessing it in the **TPU Node** style.

Using TPU Nodes can lead to some quite unexpected behaviour for people who aren't used to them! In particular, because the TPU is located on a physically different system from the machine you're running your Python code on, your data cannot be local to your machine: any data pipeline that loads from your machine's local storage will totally fail! Instead, data must be stored in Google Cloud Storage, where your data pipeline can still access it even when the pipeline is running on the remote TPU node.
<Tip>

If you can fit all your data in memory as `np.ndarray` or `tf.Tensor`, then you can `fit()` on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.

</Tip>
<Tip>

**🤗 Specific Hugging Face Tip 🤗:** The methods `Dataset.to_tf_dataset()` and its higher-level wrapper `model.prepare_tf_dataset()`, which you will see throughout our TF code examples, will both fail on a TPU Node. The reason is that even though they create a `tf.data.Dataset`, it is not a "pure" `tf.data` pipeline: it uses `tf.numpy_function` or `Dataset.from_generator()` to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disc, which the remote TPU Node will not be able to read.

</Tip>
The second way to access a TPU is via a **TPU VM**. When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs!

This is an opinionated document, so here's our opinion: **avoid using TPU Node if possible.** It is more confusing and more difficult to debug than a TPU VM, and is likely to be unsupported in the future - Google's latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes will increasingly become a "legacy" access method. However, the only free TPU access is on Colab and Kaggle Kernels, which use TPU Nodes - so we'll try to explain how to handle it if you have to! Check the [TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) for code samples that explain this in more detail.
### What sizes of TPU are available?
A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice**.

When you access a free TPU via Colab, you generally get a single v2-8 TPU.
### I keep hearing about this XLA thing. What's XLA, and how does it relate to TPUs?

XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (and mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`. If you don't get any errors and performance is good, that's a great sign that you're ready to move to TPU!
Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running with XLA on CPU/GPU first before trying it on TPU. You don't have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working as you expect.
<Tip>

XLA-compiled code is usually faster - so even if you're not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though!

</Tip>
<Tip warning={true}>

**Tip born of bitter experience:** Although using `jit_compile=True` is a good way to get a speed boost and test whether your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation happens implicitly on TPU, so remember to remove that line before actually running your code on a TPU!

</Tip>
### How do I make my model XLA compatible?
In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow but don't work in XLA. We've distilled them into the three core rules below:
<Tip>

**🤗 Specific HuggingFace Tip 🤗:** We've put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rules #1 and #2 by default, so you can skip over them if you're using `transformers` models. Don't forget about these rules when writing your own models and loss functions, though!

</Tip>
#### XLA Rule #1: Your code cannot have โdata-dependent conditionalsโ
This means that any `if` statement cannot depend on values inside a `tf.Tensor`. For example, this code block cannot be compiled with XLA!
```python
if tf.reduce_sum(tensor) > 10:
    tensor = tensor / 2.0
```
This might seem very restrictive at first, but most neural net code doesn't need to do this. You can often get around the restriction by using `tf.cond` (see the documentation), or by removing the conditional and finding a clever math trick with indicator variables instead, like so:
```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
```
This code has exactly the same effect as the code above, but by avoiding the conditional, we ensure it will compile with XLA without problems!
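As an illustration only (using NumPy rather than TensorFlow, so it runs anywhere), here is a quick sketch showing that the conditional version and the indicator-variable version agree:

```python
import numpy as np

def halve_if_large_conditional(tensor):
    # Data-dependent conditional: fine in eager NumPy/TF, but not XLA-compilable.
    if tensor.sum() > 10:
        tensor = tensor / 2.0
    return tensor

def halve_if_large_indicator(tensor):
    # Indicator-variable rewrite: no branch, same result, XLA-friendly in TF.
    sum_over_10 = float(tensor.sum() > 10)
    return tensor / (1.0 + sum_over_10)

big = np.array([6.0, 6.0])    # sum = 12 > 10 -> halved
small = np.array([1.0, 2.0])  # sum = 3 <= 10 -> unchanged
print(halve_if_large_indicator(big))    # [3. 3.]
print(halve_if_large_indicator(small))  # [1. 2.]
```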
#### XLA Rule #2: Your code cannot have โdata-dependent shapesโ
This means that the shape of all of the `tf.Tensor` objects in your code cannot depend on their values. For example, the function `tf.unique` cannot be compiled with XLA, because it returns a `tensor` containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input `Tensor` was, so XLA refuses to handle it!

In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is **label masking**: setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean indexing](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing):
```python
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```
This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of `masked_outputs` and `masked_labels` depends on how many positions are masked - that makes it a **data-dependent shape**. However, just as for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes:
```python
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask # Set negative label positions to 0
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
```
Here, we avoid data-dependent shapes by computing the loss at every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first approach while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a `tf.bool` to `tf.float32` and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!
#### XLA Rule #3: XLA will need to recompile your model for every different input shape it sees
This is the big one. What it means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.
How can you get around rule #3? The key is **padding** - if you pad all your inputs to the same length, and then use an `attention_mask`, you can get the same results as you'd get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you will waste a lot of compute and memory!
There isn't a perfect solution to this problem, but you can try some tricks. One very useful trick is to **pad batches of samples up to a multiple of 32 or 64 tokens**. This often only increases the number of tokens by a small amount, but because every input shape now has to be a multiple of 32 or 64, it hugely reduces the number of unique input shapes. Fewer unique input shapes means fewer XLA recompilations!
<Tip>
**🤗 Specific HuggingFace Tip 🤗:** Our tokenizers and data collators have methods that can help you here. You can use `padding="max_length"` or `padding="longest"` when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a `pad_to_multiple_of` argument that you can use to reduce the number of unique input shapes you see!
</Tip>
### How do I actually train my model on TPU?
Once your training is XLA-compatible and (if you're using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a `TPUStrategy` scope. Take a look at [our TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) to see this in action!
### Summary
There was a lot in here, so let's summarize with a quick checklist you can follow when you want to get your model ready for TPU training:

- Make sure your code follows the three rules of XLA
- Compile your model with `jit_compile=True` on CPU/GPU and confirm that you can train it with XLA
- Either load your dataset into memory or use a TPU-compatible dataset loading approach (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Migrate your code either to Colab (with accelerator set to "TPU") or a TPU VM on Google Cloud
- Add TPU initializer code (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Create your `TPUStrategy` and make sure dataset loading and model creation are inside the `strategy.scope()` (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Don't forget to take `jit_compile=True` out again when you move to TPU!
- 🙏🙏🙏🥺🥺🥺
- Call `model.fit()`
- You did it!
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Training on TPUs
<Tip>

Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and the [multi-GPU section](perf_train_gpu_many) apply to training models in general, so make sure to take a look at them before diving into this section.

</Tip>

This document will be completed soon with information on how to train on TPUs.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Export to TorchScript
<Tip>

This is the very beginning of our experiments with TorchScript and we are still exploring its capabilities with variable-input-size models. It is a focus of interest to us and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based code with compiled TorchScript.

</Tip>
According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html):

> TorchScript is a way to create serializable and optimizable models from PyTorch code.

TorchScript allows you to reuse models in other programs, such as efficiency-oriented C++ programs. We provide an interface that lets you export 🤗 Transformers models to TorchScript so they can be reused in an environment other than PyTorch-based Python programs. Here, we explain how to export and use our models with TorchScript.
Exporting a model requires two things:

- model instantiation with the `torchscript` flag
- a forward pass with dummy inputs

These necessities imply several things developers should be careful about, as detailed below.
## TorchScript flag and tied weights
The `torchscript` flag is necessary because most of the 🤗 Transformers language models have tied weights between their `Embedding` layer and their `Decoding` layer.
TorchScript does not allow you to export models that have tied weights, so it is necessary to untie and clone the weights beforehand.

Models instantiated with the `torchscript` flag have their `Embedding` layer and `Decoding` layer separated, which means that they should not be trained down the line.
Training would de-synchronize the two layers, leading to unexpected results.

This is not the case for models that do not have a language model head, as those do not have tied weights. These models can be safely exported without the `torchscript` flag.
## Dummy inputs and standard lengths
The dummy inputs are used for a model's forward pass. While the inputs' values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the *trace* of the model.

The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised:
```
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
```
We recommend you trace the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill the missing values. However, since the model is traced with a larger input size, the dimensions of the matrices will also be large, resulting in more calculations.

Be careful of the total number of operations done on each input and follow the performance closely when exporting varying sequence-length models.
## Using TorchScript in Python
This section demonstrates how to save and load models as well as how to use the trace for inference.
### Saving a model
To export a `BertModel` with TorchScript, instantiate `BertModel` from the `BertConfig` class and then save it to disk under the filename `traced_bert.pt`:
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
enc = BertTokenizer.from_pretrained("bert-base-uncased")
# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
# Initializing the model with the torchscript flag
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)
# Instantiating the model
model = BertModel(config)
# The model needs to be in evaluation mode
model.eval()
# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```
### Loading a model
Now the previously saved `BertModel`, `traced_bert.pt`, can be loaded from disk and used with the previously initialised `dummy_input`:
```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()
all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```
### Using a traced model for inference
Use the traced model for inference by using its `__call__` dunder method:
```python
traced_model(tokens_tensor, segments_tensors)
```
## Deploy Hugging Face TorchScript models to AWS with the Neuron SDK
AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) instance family for low cost, high performance machine learning inference in the cloud. The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator specializing in deep learning inferencing workloads. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1.
The Neuron SDK provides:

1. An easy-to-use API with one line of code change to trace and optimize a TorchScript model for inference in the cloud.
2. Out-of-the-box performance optimizations for [improved cost-performance](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Support for Hugging Face transformers models built with either [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) or [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).
### Implications
Transformers models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, or its variants such as [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still be adapted to run on Inf1 according to this [AWS Neuron MarianMT tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). More information about models that can be converted out of the box on Inferentia can be found in the [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) section of the Neuron documentation.
### Dependencies
Converting a model for AWS Neuron requires a [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), which comes preconfigured on the [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).
### Converting a model for AWS Neuron
Convert the model for AWS Neuron with the same code used in [Using TorchScript in Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the `torch.neuron` framework extension to access the components of the Neuron SDK through a Python API:
```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```
You only need to modify the following line:
```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [token_tensor, segments_tensors])
```
This enables the Neuron SDK to trace the model and optimize it for Inf1 instances.

To learn more about AWS Neuron SDK features, tools, example tutorials, and the latest updates, please see the [AWS NeuronSDK documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Text generation strategies
Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output, like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, and Whisper.

Check out a few examples that use the [`~transformers.generation_utils.GenerationMixin.generate`] method to produce text outputs for different tasks:
* [Text summarization](./tasks/summarization#inference)
* [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example)
* [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example)
Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all of the inputs to generate(). You can learn more about an individual model's preprocessor in the corresponding model's documentation.

The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output: it can help reduce repetition in the text and make it more coherent.
This guide describes:

* the default text generation configuration
* common decoding strategies and their main parameters
* saving and sharing custom generation configurations with your fine-tuned model on the 🤗 Hub
## Default text generation configuration
A decoding strategy for a model is defined in its generation configuration. When using a pretrained model for inference within a [`pipeline`], the model calls the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model.

When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`:
```python
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> model.generation_config
GenerationConfig {
"bos_token_id": 50256,
"eos_token_id": 50256,
}
```
Printing out `model.generation_config` reveals only the values that differ from the default generation configuration; the default values are not listed.

The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, the simplest decoding strategy, which picks the token with the highest probability as the next token. For many tasks and small output sizes this works well. However, when used to generate longer outputs, greedy search can start producing highly repetitive results.
## Customize text generation
You can override any `generation_config` value by passing the parameters and their values directly to the `generate` method:
```python
>>> my_model.generate(**inputs, num_beams=4, do_sample=True) # doctest: +SKIP
```
Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some commonly adjusted parameters include:

- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt.
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would be ignored by greedy search.
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling, and Top-p sampling. All of these strategies select the next token from the probability distribution over the entire vocabulary, with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence.
## Save a custom decoding strategy with your model
If you would like to share your fine-tuned model with a specific generation configuration, you can:

* Create a [`GenerationConfig`] class instance
* Specify the decoding strategy parameters
* Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty
* Set `push_to_hub` to `True` to upload your configuration to the model's repository
```python
>>> from transformers import AutoModelForCausalLM, GenerationConfig
>>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP
>>> generation_config = GenerationConfig(
... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id
... )
>>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP
```
You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> translation_generation_config = GenerationConfig(
... num_beams=4,
... early_stopping=True,
... decoder_start_token_id=0,
... eos_token_id=model.config.eos_token_id,
... pad_token=model.config.pad_token_id,
... )
>>> # Tip: add `push_to_hub=True` to push to the Hub
>>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json")
>>> # You could then use the named generation config file to parameterize generation
>>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json")
>>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt")
>>> outputs = model.generate(**inputs, generation_config=generation_config)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Les fichiers de configuration sont faciles ร utiliser!']
```
## Streaming
`generate()` supports streaming, through its `streamer` input. The `streamer` input is compatible with any instance of a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation.
<Tip warning={true}>
The API for the streamer classes is still under development and may change in the future.
</Tip>
In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into your screen, one word at a time:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)
>>> # Despite returning the usual output, the streamer will also print the generated text to stdout.
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```
## Decoding strategies
Certain combinations of the `generate()` parameters, and ultimately the `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate).

Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them.
### Greedy Search
[`generate`] uses greedy search decoding by default, so you don't have to pass any parameters to enable it. This means the parameter `num_beams` is set to 1 and `do_sample=False`.
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> prompt = "I look forward to"
>>> checkpoint = "distilgpt2"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n']
```
### Contrastive search
The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417).
It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch).

The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> prompt = "Hugging Face Company is"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best
in the business and our customer service is second to none.\n\nIf you have any questions about our
products or services, feel free to contact us at any time. We look forward to hearing from you!']
```
### Multinomial sampling
ๅธธใซๆ้ซ็ขบ็ใฎใใผใฏใณใๆฌกใฎใใผใฏใณใจใใฆ้ธๆใใ่ฒชๆฌฒๆค็ดขใจใฏ็ฐใชใใๅค้
ๅๅธใตใณใใชใณใฐ๏ผใพใใฏ็ฅๅ
ใตใณใใชใณใฐใจใๅผใฐใใพใ๏ผใฏใขใใซใซใใฃใฆๆไพใใใ่ชๅฝๅ
จไฝใฎ็ขบ็ๅๅธใซๅบใฅใใฆๆฌกใฎใใผใฏใณใใฉใณใใ ใซ้ธๆใใพใใใผใญไปฅๅคใฎ็ขบ็ใๆใคใในใฆใฎใใผใฏใณใซใฏ้ธๆใใใๅฏ่ฝๆงใใใใใใใซใใ็นฐใ่ฟใใฎใชในใฏใๆธๅฐใใพใใ
ๅค้
ๅๅธใตใณใใชใณใฐใๆๅนใซใใใซใฏใ`do_sample=True` ใใใณ `num_beams=1` ใ่จญๅฎใใพใใ
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0) # For reproducibility
>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> prompt = "Today was an amazing day because"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited,
that\'s a terrible feeling."']
```
### Beam-search decoding
่ฒชๆฌฒๆข็ดขใจใฏ็ฐใชใใใใผใ ใตใผใใใณใผใใฃใณใฐใฏๅๆ้ในใใใใงใใใคใใฎไปฎ่ชฌใไฟๆใใๆ็ต็ใซใทใผใฑใณในๅ
จไฝใงๆใ็ขบ็ใ้ซใไปฎ่ชฌใ้ธๆใใพใใใใใซใใใ่ฒชๆฌฒๆข็ดขใงใฏ็ก่ฆใใใฆใใพใๅๆใใผใฏใณใฎ็ขบ็ใไฝใ้ซ็ขบ็ใฎใทใผใฑใณในใ็นๅฎใใๅฉ็นใใใใพใใ
ใใฎใใณใผใใฃใณใฐๆฆ็ฅใๆๅนใซใใใซใฏใ`num_beams`๏ผ่ฟฝ่ทกใใไปฎ่ชฌใฎๆฐ๏ผใ1ใใใๅคงใใชๅคใซๆๅฎใใพใใ
ๅธๆใใใใใญในใใฎ็ฟป่จณใใๆไผใใงใใฆๅฌใใใงใ๏ผใใใใใชใ่ณชๅใใตใใผใใๅฟ
่ฆใชๅ ดๅใฏใใๆฐ่ปฝใซใ็ฅใใใใ ใใใ
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> prompt = "It is astonishing how one can"
>>> checkpoint = "gpt2-medium"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of
time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have']
```
### Beam-search multinomial sampling
ใใฎๅๅใใใใใใใใใซใใใฎใใณใผใใฃใณใฐๆฆ็ฅใฏใใผใ ใตใผใใจๅค้
ใตใณใใชใณใฐใ็ตใฟๅใใใฆใใพใใใใฎใใณใผใใฃใณใฐๆฆ็ฅใไฝฟ็จใใใซใฏใ`num_beams` ใ1ใใๅคงใใชๅคใซ่จญๅฎใใ`do_sample=True` ใ่จญๅฎใใๅฟ
่ฆใใใใพใใ
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed
>>> set_seed(0) # For reproducibility
>>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, do_sample=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Das Haus ist wunderbar.'
```
### Diverse beam search decoding
ๅคๆงใชใใผใ ใตใผใใใณใผใใฃใณใฐๆฆ็ฅใฏใใใผใ ใตใผใๆฆ็ฅใฎๆกๅผตใงใใใ้ธๆ่ขใใใใๅคๆงใชใใผใ ใทใผใฑใณในใ็ๆใงใใใใใซใใพใใใใฎไป็ตใฟใฎ่ฉณ็ดฐใซใคใใฆใฏใ[Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf) ใใๅ็
งใใ ใใใใใฎใขใใญใผใใซใฏใ`num_beams`ใ`num_beam_groups`ใใใใณ `diversity_penalty` ใจใใ3ใคใฎไธป่ฆใชใใฉใกใผใฟใใใใพใใๅคๆงๆงใใใซใใฃใฏใๅบๅใใฐใซใผใใใจใซ็ฐใชใใใจใไฟ่จผใใใใผใ ใตใผใใฏๅใฐใซใผใๅ
ใงไฝฟ็จใใใพใใ
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> checkpoint = "google/pegasus-xsum"
>>> prompt = (
... "The Permaculture Design Principles are a set of universal design principles "
... "that can be applied to any location, climate and culture, and they allow us to design "
... "the most efficient and sustainable human habitation and food production systems. "
... "Permaculture is a design system that encompasses a wide variety of disciplines, such "
... "as ecology, landscape design, environmental science and energy conservation, and the "
... "Permaculture design principles are drawn from these various disciplines. Each individual "
... "design principle itself embodies a complete conceptual framework based on sound "
... "scientific principles. When we bring all these separate principles together, we can "
... "create a design system that both looks at whole systems, the parts that these systems "
... "consist of, and how those parts interact with each other to create a complex, dynamic, "
... "living system. Each design principle serves as a tool that allows us to integrate all "
... "the separate parts of a design, referred to as elements, into a functional, synergistic, "
... "whole system, where the elements harmoniously interact and work together in the most "
... "efficient way possible."
... )
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'The Design Principles are a set of universal design principles that can be applied to any location, climate and
culture, and they allow us to design the'
```
### Assisted Decoding
ใขใทในใใใณใผใใฃใณใฐใฏใไธ่จใฎใใณใผใใฃใณใฐๆฆ็ฅใๅคๆดใใใใฎใงใๅใใใผใฏใใคใถใผ๏ผ็ๆณ็ใซใฏใฏใใใซๅฐใใชใขใใซ๏ผใไฝฟ็จใใฆใใใใคใใฎๅ่ฃใใผใฏใณใ่ฒชๆฌฒใซ็ๆใใใขใทในใฟใณใใขใใซใไฝฟ็จใใพใใใใฎๅพใไธป่ฆใชใขใใซใฏๅ่ฃใใผใฏใณใ1ใคใฎๅๅใใในใงๆค่จผใใใใณใผใใฃใณใฐใใญใปในใ้ซ้ๅใใพใใ็พๅจใใขใทในใใใณใผใใฃใณใฐใงใฏ่ฒชๆฌฒๆค็ดขใจใตใณใใชใณใฐใฎใฟใใตใใผใใใใฆใใใใใใๅ
ฅๅใฏใตใใผใใใใฆใใพใใใใขใทในใใใณใผใใฃใณใฐใฎ่ฉณ็ดฐใซใคใใฆใฏใ[ใใฎใใญใฐ่จไบ](https://huggingface.co/blog/assisted-generation) ใใ่ฆงใใ ใใใ
ใขใทในใใใณใผใใฃใณใฐใๆๅนใซใใใซใฏใ`assistant_model` ๅผๆฐใใขใใซใง่จญๅฎใใพใใ
ใใฎใฌใคใใฏใใใพใใพใชใใณใผใใฃใณใฐๆฆ็ฅใๅฏ่ฝใซใใไธป่ฆใชใใฉใกใผใฟใผใ่ชฌๆใใฆใใพใใใใใซ้ซๅบฆใชใใฉใกใผใฟใผใฏ [`generate`] ใกใฝใใใซๅญๅจใใ[`generate`] ใกใฝใใใฎๅไฝใใใใซๅถๅพกใงใใพใใไฝฟ็จๅฏ่ฝใชใใฉใกใผใฟใผใฎๅฎๅ
จใชใชในใใซใคใใฆใฏใ[APIใใญใฅใกใณใ](./main_classes/text_generation.md) ใๅ็
งใใฆใใ ใใใ
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```
ใตใณใใชใณใฐๆนๆณใไฝฟ็จใใๅ ดๅใใขใทในใใใณใผใใฃใณใฐใงใฏ `temperature` ๅผๆฐใไฝฟ็จใใฆใๅค้
ใตใณใใชใณใฐใจๅๆงใซใฉใณใใ ๆงใๅถๅพกใงใใพใใใใ ใใใขใทในใใใณใผใใฃใณใฐใงใฏใๆธฉๅบฆใไฝใใใใใจใง้
ๅปถใฎๆนๅใซๅฝน็ซใกใพใใ
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> set_seed(42) # For reproducibility
>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are going to the same party. It is a small party, in a small']
```
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ ใใฎใใกใคใซใฏMarkdownๅฝขๅผใงใใใ็นๅฎใฎMDXใซ้กไผผใใใใญใฅใกใณใใใซใใผใฎๆงๆใๅซใใงใใใ
Markdownใใฅใผใขใผใงๆญฃใใ่กจ็คบใใใชใใใจใใใใพใใ
-->
# Preprocess
[[open-in-colab]]
ใใผใฟใปใใใงใขใใซใใใฌใผใใณใฐใใๅใซใใใใใขใใซใฎๆๅพ
ใใๅ
ฅๅๅฝขๅผใซๅๅฆ็ใใๅฟ
่ฆใใใใพใใ
ใใผใฟใใใญในใใ็ปๅใใพใใฏใชใผใใฃใชใงใใใใฉใใใซใใใใใใใใใใฏใใณใฝใซใฎใใใใซๅคๆใใฆ็ตใฟ็ซใฆใๅฟ
่ฆใใใใพใใ
๐ค Transformersใฏใใใผใฟใใขใใซ็จใซๆบๅใใใฎใซๅฝน็ซใคๅๅฆ็ใฏใฉในใฎใปใใใๆไพใใฆใใพใใ
ใใฎใใฅใผใใชใขใซใงใฏใๆฌกใฎใใจใๅญฆใณใพใ๏ผ
* ใใญในใใฎๅ ดๅใ[Tokenizer](./main_classes/tokenizer)ใไฝฟ็จใใฆใใญในใใใใผใฏใณใฎใทใผใฑใณในใซๅคๆใใใใผใฏใณใฎๆฐๅค่กจ็พใไฝๆใใใใใใใใณใฝใซใซ็ตใฟ็ซใฆใๆนๆณใ
* ้ณๅฃฐใจใชใผใใฃใชใฎๅ ดๅใ[Feature extractor](./main_classes/feature_extractor)ใไฝฟ็จใใฆใชใผใใฃใชๆณขๅฝขใใ้ฃ็ถ็ใช็นๅพดใๆฝๅบใใใใใใใใณใฝใซใซๅคๆใใๆนๆณใ
* ็ปๅๅ
ฅๅใฎๅ ดๅใ[ImageProcessor](./main_classes/image)ใไฝฟ็จใใฆ็ปๅใใใณใฝใซใซๅคๆใใๆนๆณใ
* ใใซใใขใผใใซๅ
ฅๅใฎๅ ดๅใ[Processor](./main_classes/processors)ใไฝฟ็จใใฆใใผใฏใใคใถใจ็นๅพดๆฝๅบๅจใพใใฏ็ปๅใใญใปใใตใ็ตใฟๅใใใๆนๆณใ
<Tip>
`AutoProcessor` always works and automatically chooses the correct class for the model you're using,
whether you're using a tokenizer, image processor, feature extractor, or processor.
</Tip>
ๅงใใๅใซใ๐ค Datasetsใใคใณในใใผใซใใฆใใใใคใใฎใใผใฟใปใใใ่ฉฆใใใจใใงใใใใใซใใฆใใ ใใ๏ผ
```bash
pip install datasets
```
## Natural Language Processing
<Youtube id="Yffk5aydLzg"/>
ใใญในใใใผใฟใฎๅๅฆ็ใซไฝฟ็จใใไธป่ฆใชใใผใซใฏใ[ใใผใฏใใคใถ](main_classes/tokenizer)ใงใใใใผใฏใใคใถใฏใไธ้ฃใฎใซใผใซใซๅพใฃใฆใใญในใใ*ใใผใฏใณ*ใซๅๅฒใใพใใใใผใฏใณใฏๆฐๅคใซๅคๆใใใใใฎๅพใใณใฝใซใซๅคๆใใใใขใใซใฎๅ
ฅๅใจใชใใพใใใขใใซใๅฟ
่ฆใจใใ่ฟฝๅ ใฎๅ
ฅๅใฏใใใผใฏใใคใถใซใใฃใฆ่ฟฝๅ ใใใพใใ
<Tip>
ไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใไบๅฎใฎๅ ดๅใ้ข้ฃใใไบๅๅญฆ็ฟๆธใฟใใผใฏใใคใถใไฝฟ็จใใใใจใ้่ฆใงใใใใใซใใใใใญในใใไบๅๅญฆ็ฟใณใผใในใจๅใๆนๆณใงๅๅฒใใใไบๅๅญฆ็ฟไธญใซ้ๅธธ*ใใญใฃใ*ใจใใฆๅ็
งใใใๅฏพๅฟใใใใผใฏใณใคใณใใใฏในใไฝฟ็จใใพใใ
</Tip>
Get started by loading a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method. This downloads the *vocab* a model was pretrained with:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```
ๆฌกใซใใใญในใใใใผใฏใใคใถใซๆธกใใพใ๏ผ
```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
ใใผใฏใใคใถใฏใ้่ฆใช3ใคใฎ้
็ฎใๆใค่พๆธใ่ฟใใพใ๏ผ
* [input_ids](glossary#input-ids) ใฏๆไธญใฎๅใใผใฏใณใซๅฏพๅฟใใใคใณใใใฏในใงใใ
* [attention_mask](glossary#attention-mask) ใฏใใผใฏใณใใขใใณใทใงใณใๅใใๅฟ
่ฆใใใใใฉใใใ็คบใใพใใ
* [token_type_ids](glossary#token-type-ids) ใฏ่คๆฐใฎใทใผใฑใณในใใใๅ ดๅใใใผใฏใณใใฉใฎใทใผใฑใณในใซๅฑใใฆใใใใ่ญๅฅใใพใใ
`input_ids` ใใใณใผใใใฆๅ
ฅๅใ่ฟใใพใ๏ผ
```python
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] ้ญๆณไฝฟใใฎไบใซๅนฒๆธใใใชใๅฝผใใฏๅพฎๅฆใงๆใใฃใฝใใ [SEP]'
```
ๅฆไฝใซใๅใใใใใ ใใใใจๆใใพใใใใใผใฏใใคใถใฏใใฎๆ็ซ ใซ2ใคใฎ็นๅฅใชใใผใฏใณใ`CLS`๏ผใฏใฉใทใใกใคใข๏ผใจ`SEP`๏ผใปใใฌใผใฟ๏ผใ่ฟฝๅ ใใพใใใ
ใในใฆใฎใขใใซใ็นๅฅใชใใผใฏใณใๅฟ
่ฆใจใใใใใงใฏใใใพใใใใๅฟ
่ฆใชๅ ดๅใใใผใฏใใคใถใฏ่ชๅ็ใซใใใใ่ฟฝๅ ใใพใใ
่คๆฐใฎๆ็ซ ใๅๅฆ็ใใๅ ดๅใใใผใฏใใคใถใซใชในใใจใใฆๆธกใใฆใใ ใใ๏ผ
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
```
### Pad
ๆ็ซ ใฏๅธธใซๅใ้ทใใงใฏใชใใใจใใใใใใใฏใใณใฝใซ๏ผใขใใซใฎๅ
ฅๅ๏ผใๅไธใชๅฝข็ถใๆใคๅฟ
่ฆใใใใใๅ้กใจใชใใพใใ
ใใใฃใณใฐใฏใ็ญใๆใซ็นๅฅใชใใใใฃใณใฐใใผใฏใณใใ่ฟฝๅ ใใฆใใใณใฝใซใ้ทใใทใผใฑใณในใซๅใใใใใใฎๆฆ็ฅใงใใ
ใใใๅ
ใฎ็ญใใทใผใฑใณในใๆ้ทใฎใทใผใฑใณในใซๅใใใใใใซใ`padding`ใใฉใกใผใฟใ`True`ใซ่จญๅฎใใพใ๏ผ
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
1็ช็ฎใจ3็ช็ฎใฎๆใฏใ็ญใใใใซ`0`ใงใใใฃใณใฐใใใฆใใพใใ
### Truncation
้ใฎในใใฏใใซใงใฏใๆๆใใขใใซใๅฆ็ใใใฎใซ้ทใใใใทใผใฑใณในใใใใใใใใพใใใใใฎๅ ดๅใใทใผใฑใณในใ็ญ็ธฎใใๅฟ
่ฆใใใใพใใ
ใขใใซใๅใๅ
ฅใใๆๅคงใฎ้ทใใซใทใผใฑใณในใๅใ่ฉฐใใใซใฏใ`truncation`ใใฉใกใผใฟใ`True`ใซ่จญๅฎใใพใ๏ผ
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
<Tip>
็ฐใชใใใใฃใณใฐใจๅใ่ฉฐใใฎๅผๆฐใซใคใใฆ่ฉณใใใฏใ[ใใใฃใณใฐใจๅใ่ฉฐใ](./pad_truncation)ใฎใณใณใปใใใฌใคใใใ่ฆงใใ ใใใ
</Tip>
### Build tensors
ๆๅพใซใใใผใฏใใคใถใใขใใซใซไพ็ตฆใใใๅฎ้ใฎใใณใฝใซใ่ฟใใใใซ่จญๅฎใใพใใ
`return_tensors`ใใฉใกใผใฟใ`pt`๏ผPyTorch็จ๏ผใพใใฏ`tf`๏ผTensorFlow็จ๏ผใซ่จญๅฎใใพใ๏ผ
<frameworkcontent>
<pt>
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
</pt>
<tf>
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```
</tf>
</frameworkcontent>
## Audio
ใชใผใใฃใชใฟในใฏใฎๅ ดๅใใใผใฟใปใใใใขใใซ็จใซๆบๅใใใใใซ[็นๅพดๆฝๅบๅจ](main_classes/feature_extractor)ใๅฟ
่ฆใงใใ
็นๅพดๆฝๅบๅจใฏ็ใฎใชใผใใฃใชใใผใฟใใ็นๅพดใๆฝๅบใใใใใใใใณใฝใซใซๅคๆใใใใใซ่จญ่จใใใฆใใพใใ
[PolyAI/minds14](https://huggingface.co/datasets/PolyAI/minds14)ใใผใฟใปใใใใญใผใใใฆ๏ผใใผใฟใปใใใฎใญใผใๆนๆณใฎ่ฉณ็ดฐใซใคใใฆใฏ๐ค [Datasetsใใฅใผใใชใขใซ](https://huggingface.co/docs/datasets/load_hub)ใๅ็
ง๏ผใ
ใชใผใใฃใชใใผใฟใปใใใง็นๅพดๆฝๅบๅจใใฉใฎใใใซไฝฟ็จใงใใใใ็ขบ่ชใใฆใฟใพใใใ๏ผ
```python
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
ใขใฏใปในใใฆ`audio`ๅใฎๆๅใฎ่ฆ็ด ใ็ขบ่ชใใพใใ`audio`ๅใๅผใณๅบใใจใ่ชๅ็ใซใชใผใใฃใชใใกใคใซใ่ชญใฟ่พผใพใใใชใตใณใใชใณใฐใใใพใ๏ผ
```py
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
```
ใใใซใใใ3ใคใฎใขใคใใ ใ่ฟใใใพใ๏ผ
* `array` ใฏ่ชญใฟ่พผใพใใ้ณๅฃฐไฟกๅทใงใ1Dใฎ้
ๅใจใใฆ่ชญใฟ่พผใพใใพใใๅฟ
่ฆใซๅฟใใฆใชใตใณใใชใณใฐใใใใใจใใใใพใใ
* `path` ใฏ้ณๅฃฐใใกใคใซใฎๅ ดๆใๆใใพใใ
* `sampling_rate` ใฏ้ณๅฃฐไฟกๅทๅ
ใฎใใผใฟใใคใณใใ1็ง้ใซใใใคๆธฌๅฎใใใใใ็คบใใพใใ
ใใฎใใฅใผใใชใขใซใงใฏใ[Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)ใขใใซใไฝฟ็จใใพใใ
ใขใใซใซใผใใ็ขบ่ชใใใจใWav2Vec2ใ16kHzใฎใตใณใใชใณใฐใใใ้ณๅฃฐใชใผใใฃใชใงไบๅๅญฆ็ฟใใใฆใใใใจใใใใใพใใ
ใขใใซใฎไบๅๅญฆ็ฟใซไฝฟ็จใใใใใผใฟใปใใใฎใตใณใใชใณใฐใฌใผใใจใใใชใใฎใชใผใใฃใชใใผใฟใฎใตใณใใชใณใฐใฌใผใใไธ่ดใใใใจใ้่ฆใงใใ
ใใผใฟใฎใตใณใใชใณใฐใฌใผใใ็ฐใชใๅ ดๅใใใผใฟใใชใตใณใใชใณใฐใใๅฟ
่ฆใใใใพใใ
1. ๐ค Datasetsใฎ [`~datasets.Dataset.cast_column`] ใกใฝใใใไฝฟ็จใใฆใใตใณใใชใณใฐใฌใผใใ16kHzใซใขใใใตใณใใชใณใฐใใพใ๏ผ
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```
2. ๅใณ `audio` ๅใๅผใณๅบใใฆใชใผใใฃใชใใกใคใซใใชใตใณใใซใใพใ๏ผ
```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
```
ๆฌกใซใๅ
ฅๅใๆญฃ่ฆๅใใใใฃใณใฐใใใใใซ็นๅพดๆฝๅบๅจใใญใผใใใพใใใใญในใใใผใฟใใใใฃใณใฐใใๅ ดๅใ็ญใใทใผใฑใณในใซใฏ `0` ใ่ฟฝๅ ใใใพใใๅใ่ใๆนใใชใผใใฃใชใใผใฟใซใ้ฉ็จใใใพใใ็นๅพดๆฝๅบๅจใฏ `array` ใซ `0` ใ่ฟฝๅ ใใพใ๏ผใใใฏ็ก้ณใจใใฆ่งฃ้ใใใพใ๏ผใ
[`AutoFeatureExtractor.from_pretrained`]ใไฝฟ็จใใฆ็นๅพดๆฝๅบๅจใใญใผใใใพใ๏ผ
```python
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
ใชใผใใฃใช `array` ใ็นๅพดๆฝๅบๅจใซๆธกใใพใใ็นๅพดๆฝๅบๅจใง็บ็ใใๅฏ่ฝๆงใฎใใ็ก้ณใจใฉใผใใใ่ฏใใใใใฐใใใใใซใ็นๅพดๆฝๅบๅจใซ `sampling_rate` ๅผๆฐใ่ฟฝๅ ใใใใจใใๅงใใใพใใ
```python
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```
ๅๆงใซใใใผใฏใใคใถใจๅๆงใซใใใใๅ
ใฎๅฏๅคใทใผใฑใณในใๅฆ็ใใใใใซใใใฃใณใฐใพใใฏๅใ่ฉฐใใ้ฉ็จใงใใพใใๆฌกใซใใใใใฎ2ใคใฎใชใผใใฃใชใตใณใใซใฎใทใผใฑใณใน้ทใ็ขบ่ชใใฆใฟใพใใใ๏ผ
```python
>>> dataset[0]["audio"]["array"].shape
(173398,)
>>> dataset[1]["audio"]["array"].shape
(106496,)
```
ใใฎ้ขๆฐใฏใใใผใฟใปใใใๅๅฆ็ใใฆใชใผใใฃใชใตใณใใซใฎ้ทใใๅใใซใใใใใฎใใฎใงใใๆๅคงใตใณใใซ้ทใๆๅฎใใ็นๅพดๆฝๅบๅจใฏใทใผใฑใณในใใใใซๅใใใฆใใใฃใณใฐใพใใฏๅใ่ฉฐใใพใใ
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays,
... sampling_rate=16000,
... padding=True,
... max_length=100000,
... truncation=True,
... )
... return inputs
```
Apply the `preprocess_function` to the first few examples in the dataset:
```python
>>> processed_dataset = preprocess_function(dataset[:5])
```
ใตใณใใซใฎ้ทใใฏ็พๅจๅใใงใๆๅฎใใใๆๅคง้ทใจไธ่ดใใฆใใพใใใใใงๅฆ็ใใใใใผใฟใปใใใใขใใซใซๆธกใใใจใใงใใพใ๏ผ
```py
>>> processed_dataset["input_values"][0].shape
(100000,)
>>> processed_dataset["input_values"][1].shape
(100000,)
```
## Computer Vision
ใณใณใใฅใผใฟใใธใงใณใฟในใฏใงใฏใใขใใซ็จใซใใผใฟใปใใใๆบๅใใใใใฎ[็ปๅใใญใปใใต](main_classes/image_processor)ใๅฟ
่ฆใงใใ
็ปๅใฎๅๅฆ็ใซใฏใ็ปๅใใขใใซใๆๅพ
ใใๅ
ฅๅๅฝขๅผใซๅคๆใใใใใฎใใใคใใฎในใใใใๅซใพใใฆใใพใใใใใใฎในใใใใซใฏใใชใตใคใบใๆญฃ่ฆๅใใซใฉใผใใฃใใซใฎ่ฃๆญฃใใใใณ็ปๅใใใณใฝใซใซๅคๆใใใชใฉใๅซใพใใพใใ
<Tip>
็ปๅใฎๅๅฆ็ใฏใ้ๅธธใ็ปๅใฎๅขๅผทใฎๅฝขๅผใซๅพใใพใใ็ปๅใฎๅๅฆ็ใจ็ปๅใฎๅขๅผทใฎไธกๆนใฏ็ปๅใใผใฟใๅคๆใใพใใใ็ฐใชใ็ฎ็ใใใใพใ๏ผ
* ็ปๅใฎๅขๅผทใฏใ้ๅญฆ็ฟใ้ฒใใใขใใซใฎๅ
็ขๆงใๅไธใใใใฎใซๅฝน็ซใคๆนๆณใง็ปๅใๅคๆดใใพใใใใผใฟใๅขๅผทใใๆนๆณใฏ็ก้ใงใๆใใใ่ฒใฎ่ชฟๆดใใฏใญใใใๅ่ปขใใชใตใคใบใใบใผใ ใชใฉใๆงใ
ใชๆนๆณใใใใพใใใใ ใใๅขๅผทๆไฝใซใใฃใฆ็ปๅใฎๆๅณใๅคใใใชใใใใซๆณจๆใใๅฟ
่ฆใใใใพใใ
* ็ปๅใฎๅๅฆ็ใฏใ็ปๅใใขใใซใฎๆๅพ
ใใๅ
ฅๅๅฝขๅผใจไธ่ดใใใใจใไฟ่จผใใพใใใณใณใใฅใผใฟใใธใงใณใขใใซใใใกใคใณใใฅใผใใณใฐใใๅ ดๅใ็ปๅใฏใขใใซใๆๅใซใใฌใผใใณใฐใใใใจใใจใพใฃใใๅใๆนๆณใงๅๅฆ็ใใๅฟ
่ฆใใใใพใใ
็ปๅใฎๅขๅผทใซใฏไปปๆใฎใฉใคใใฉใชใไฝฟ็จใงใใพใใ็ปๅใฎๅๅฆ็ใซใฏใใขใใซใซ้ข้ฃไปใใใใ`ImageProcessor`ใไฝฟ็จใใพใใ
</Tip>
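Normalization, one of the preprocessing steps mentioned above, can be sketched in plain Python (toy values; a real image processor applies per-channel means and stds taken from the model config):

```python
def normalize(pixels, mean, std):
    """Shift and scale pixel values so the data has roughly zero mean and unit variance."""
    return [(p - mean) / std for p in pixels]


# Pixels scaled to [0, 1] map to [-1, 1] with mean=0.5 and std=0.5.
normalized = normalize([0.0, 0.5, 1.0], mean=0.5, std=0.5)
```

Fine-tuning must reuse the exact `mean` and `std` from pretraining, which is why the statistics live in the image processor's configuration rather than being recomputed per dataset.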
ใณใณใใฅใผใฟใใธใงใณใฎใใผใฟใปใใใง็ปๅใใญใปใใตใไฝฟ็จใใๆนๆณใ็คบใใใใซใ[food101](https://huggingface.co/datasets/food101)ใใผใฟใปใใใใญใผใใใพใ๏ผใใผใฟใปใใใฎใญใผใๆนๆณใฎ่ฉณ็ดฐใซใคใใฆใฏ๐ค[Datasetsใใฅใผใใชใขใซ](https://huggingface.co/docs/datasets/load_hub)ใๅ็
ง๏ผ๏ผ
<Tip>
ใใผใฟใปใใใใใชใๅคงใใใใใ๐ค Datasetsใฎ`split`ใใฉใกใผใฟใไฝฟ็จใใฆใใฌใผใใณใฐใใผใฟใฎๅฐใใชใตใณใใซใฎใฟใใญใผใใใพใ๏ผ
</Tip>
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("food101", split="train[:100]")
```
ๆฌกใซใ๐ค Datasetsใฎ [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) ๆฉ่ฝใง็ปๅใ่ฆใฆใฟใพใใใ๏ผ
```python
>>> dataset[0]["image"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/>
</div>
Load the image processor with [`AutoImageProcessor.from_pretrained`]:
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```
1. ใพใใ็ปๅใฎๆกๅผตใ่ฟฝๅ ใใพใใใใๅฅฝใใชใฉใคใใฉใชใไฝฟ็จใงใใพใใใใใฎใใฅใผใใชใขใซใงใฏtorchvisionใฎ[`transforms`](https://pytorch.org/vision/stable/transforms.html)ใขใธใฅใผใซใไฝฟ็จใใพใใๅฅใฎใใผใฟๆกๅผตใฉใคใใฉใชใไฝฟ็จใใใๅ ดๅใฏใ[Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)ใพใใฏ[Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)ใง่ฉณ็ดฐใๅญฆใถใใจใใงใใพใใ
ใใใงใฏใ[`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)ใไฝฟ็จใใฆใใใคใใฎๅคๆใ้ฃ้ใใใพใ - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)ใจ[`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html)ใ
ใตใคใบใฎๅคๆดใซ้ขใใฆใฏใ`image_processor`ใใ็ปๅใตใคใบใฎ่ฆไปถใๅๅพใงใใพใใ
ไธ้จใฎใขใใซใงใฏใๆญฃ็ขบใช้ซใใจๅน
ใๅฟ
่ฆใงใใใไปใฎใขใใซใงใฏ`shortest_edge`ใฎใฟใๅฎ็พฉใใใฆใใพใใ
```py
>>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
>>> size = (
... image_processor.size["shortest_edge"]
... if "shortest_edge" in image_processor.size
... else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
```
2. ใขใใซใฏ[`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)ใๅ
ฅๅใจใใฆๅใๅใใพใใ
`ImageProcessor`ใฏ็ปๅใฎๆญฃ่ฆๅใจ้ฉๅใชใใณใฝใซใฎ็ๆใๅฆ็ใงใใพใใ
ไธ้ฃใฎ็ปๅใซๅฏพใใ็ปๅๆกๅผตใจ็ปๅๅๅฆ็ใ็ตใฟๅใใใ`pixel_values`ใ็ๆใใ้ขๆฐใไฝๆใใพใ๏ผ
```python
>>> def transforms(examples):
... images = [_transforms(img.convert("RGB")) for img in examples["image"]]
... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
... return examples
```
<Tip>
ไธ่จใฎไพใงใฏใ็ปๅใฎใตใคใบๅคๆดใๆขใซ็ปๅๅขๅผทๅคๆใง่กใฃใฆใใใใใ`do_resize=False`ใ่จญๅฎใใพใใใ
้ฉๅใช `image_processor` ใใใฎ `size` ๅฑๆงใๆดป็จใใฆใใพใใ็ปๅๅขๅผทไธญใซ็ปๅใฎใตใคใบๅคๆดใ่กใใชใๅ ดๅใฏใใใฎใใฉใกใผใฟใ็็ฅใใฆใใ ใใใ
ใใใฉใซใใงใฏใ`ImageProcessor` ใใตใคใบๅคๆดใๅฆ็ใใพใใ
็ปๅใๅขๅผทๅคๆใฎไธ้จใจใใฆๆญฃ่ฆๅใใใๅ ดๅใฏใ`image_processor.image_mean` ใจ `image_processor.image_std` ใฎๅคใไฝฟ็จใใฆใใ ใใใ
</Tip>
3. ๆฌกใซใ๐ค Datasetsใฎ[`set_transform`](https://huggingface.co/docs/datasets/process#format-transform)ใไฝฟ็จใใฆใๅคๆใใชใขใซใฟใคใ ใง้ฉ็จใใพใ๏ผ
```python
>>> dataset.set_transform(transforms)
```
4. ็ปๅใซใขใฏใปในใใใจใ็ปๅใใญใปใใตใ `pixel_values` ใ่ฟฝๅ ใใใใจใใใใใพใใใใใงๅฆ็ๆธใฟใฎใใผใฟใปใใใใขใใซใซๆธกใใใจใใงใใพใ๏ผ
```python
>>> dataset[0].keys()
```
ไปฅไธใฏใๅคๆใ้ฉ็จใใใๅพใฎ็ปๅใฎๅค่ฆณใงใใ ็ปๅใฏใฉใณใใ ใซๅใๆใใใใใฎ่ฒใฎ็นๆงใ็ฐใชใใพใใ
```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/>
</div>
<Tip>
ใชใใธใงใฏใๆคๅบใๆๅณใปใฐใกใณใใผใทใงใณใใคใณในใฟใณในใปใฐใกใณใใผใทใงใณใใใใณใใใใใฃใใฏใปใฐใกใณใใผใทใงใณใชใฉใฎใฟในใฏใฎๅ ดๅใ`ImageProcessor`ใฏ
ใในใๅฆ็ใกใฝใใใๆไพใใพใใใใใใฎใกใฝใใใฏใใขใใซใฎ็ใฎๅบๅใๅข็ใใใฏในใใปใฐใกใณใใผใทใงใณใใใใชใฉใฎๆๅณใฎใใไบๆธฌใซๅคๆใใพใใ
</Tip>
### Pad
ไธ้จใฎๅ ดๅใใใจใใฐใ[DETR](./model_doc/detr)ใใใกใคใณใใฅใผใใณใฐใใๅ ดๅใใขใใซใฏใใฌใผใใณใฐๆใซในใฑใผใซใฎๅคๆดใ้ฉ็จใใพใใ
ใใใซใใใใใใๅ
ใฎ็ปๅใฎใตใคใบใ็ฐใชใๅ ดๅใใใใพใใ[`DetrImageProcessor`]ใใ[`DetrImageProcessor.pad`]ใไฝฟ็จใใ
ใซในใฟใ ใฎ`collate_fn`ใๅฎ็พฉใใฆ็ปๅใไธ็ทใซใใใๅฆ็ใงใใพใใ
```py
>>> def collate_fn(batch):
... pixel_values = [item["pixel_values"] for item in batch]
... encoding = image_processor.pad(pixel_values, return_tensors="pt")
... labels = [item["labels"] for item in batch]
... batch = {}
... batch["pixel_values"] = encoding["pixel_values"]
... batch["pixel_mask"] = encoding["pixel_mask"]
... batch["labels"] = labels
... return batch
```
## Multi Modal
ใใซใใขใผใใซๅ
ฅๅใไฝฟ็จใใใฟในใฏใฎๅ ดๅใใขใใซ็จใซใใผใฟใปใใใๆบๅใใใใใฎ[ใใญใปใใต](main_classes/processors)ใๅฟ
่ฆใงใใใใญใปใใตใฏใใใผใฏใใคใถใ็นๅพด้ๆฝๅบๅจใชใฉใฎ2ใคใฎๅฆ็ใชใใธใงใฏใใ็ตๅใใพใใ
่ชๅ้ณๅฃฐ่ช่ญ๏ผASR๏ผใฎใใใฎใใญใปใใตใฎไฝฟ็จๆนๆณใ็คบใใใใซใ[LJ Speech](https://huggingface.co/datasets/lj_speech)ใใผใฟใปใใใใญใผใใใพใ๏ผใใผใฟใปใใใฎใญใผใๆนๆณใฎ่ฉณ็ดฐใซใคใใฆใฏ๐ค [Datasets ใใฅใผใใชใขใซ](https://huggingface.co/docs/datasets/load_hub)ใๅ็
ง๏ผ๏ผ
```python
>>> from datasets import load_dataset
>>> lj_speech = load_dataset("lj_speech", split="train")
```
ASR๏ผ่ชๅ้ณๅฃฐ่ช่ญ๏ผใฎๅ ดๅใไธปใซ `audio` ใจ `text` ใซ็ฆ็นใๅฝใฆใฆใใใใใไปใฎๅใๅ้คใงใใพใ๏ผ
```python
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```
ๆฌกใซใ`audio`ใจ`text`ใฎๅใ่ฆใฆใฟใพใใใ๏ผ
```python
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```
ๅธธใซใใชใผใใฃใชใใผใฟใปใใใฎใตใณใใชใณใฐใฌใผใใใใขใใซใฎไบๅๅญฆ็ฟใซไฝฟ็จใใใใใผใฟใปใใใฎใตใณใใชใณใฐใฌใผใใจไธ่ดใใใใใใซ[ใชใตใณใใซ](preprocessing#audio)ใใๅฟ
่ฆใใใใพใ๏ผ
```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```
ใใญใปใใตใ [`AutoProcessor.from_pretrained`] ใไฝฟ็จใใฆใญใผใใใพใ๏ผ
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```
1. `array`ๅ
ใซๅซใพใใใชใผใใฃใชใใผใฟใ`input_values`ใซๅฆ็ใใ`text`ใ`labels`ใซใใผใฏใณๅใใ้ขๆฐใไฝๆใใพใ๏ผ
```py
>>> def prepare_dataset(example):
... audio = example["audio"]
... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
... return example
```
2. ใตใณใใซใซ`prepare_dataset`้ขๆฐใ้ฉ็จใใพใ๏ผ
```py
>>> prepare_dataset(lj_speech[0])
```
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Model training anatomy
ใขใใซใใฌใผใใณใฐใฎๅน็ใๅไธใใใใใใซ้ฉ็จใงใใใใใฉใผใใณในๆ้ฉๅใใฏใใใฏใ็่งฃใใใซใฏใใใฌใผใใณใฐไธญใซGPUใใฉใฎใใใซๅฉ็จใใใใใใใใณๅฎ่กใใใๆไฝใซๅฟใใฆ่จ็ฎๅผทๅบฆใใฉใฎใใใซๅคๅใใใใ็่งฃใใใใจใๅฝน็ซใกใพใใ
ใพใใฏใGPUใฎๅฉ็จไพใจใขใใซใฎใใฌใผใใณใฐๅฎ่กใซ้ขใใ็คบๅใซๅฏใไพใๆขๆฑใใใใจใใๅงใใพใใใใใใขใณในใใฌใผใทใงใณใฎใใใซใใใใคใใฎใฉใคใใฉใชใใคใณในใใผใซใใๅฟ
่ฆใใใใพใ:
```bash
pip install transformers datasets accelerate nvidia-ml-py3
```
The `nvidia-ml-py3` library allows us to monitor the memory usage of the models from within Python. You might be familiar with the `nvidia-smi` command in the terminal; this library allows us to access the same information in Python directly.

Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a [`~datasets.Dataset`] with PyTorch format.
```py
>>> import numpy as np
>>> from datasets import Dataset
>>> seq_len, dataset_size = 512, 512
>>> dummy_data = {
... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)),
... "labels": np.random.randint(0, 1, (dataset_size)),
... }
>>> ds = Dataset.from_dict(dummy_data)
>>> ds.set_format("pt")
```
[`Trainer`]ใไฝฟ็จใใฆGPUๅฉ็จ็ใจใใฌใผใใณใฐๅฎ่กใฎ่ฆ็ด็ตฑ่จๆ
ๅ ฑใ่กจ็คบใใใใใซใ2ใคใฎใใซใใผ้ขๆฐใๅฎ็พฉใใพใใ
```py
>>> from pynvml import *
>>> def print_gpu_utilization():
... nvmlInit()
... handle = nvmlDeviceGetHandleByIndex(0)
... info = nvmlDeviceGetMemoryInfo(handle)
... print(f"GPU memory occupied: {info.used//1024**2} MB.")
>>> def print_summary(result):
... print(f"Time: {result.metrics['train_runtime']:.2f}")
... print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
... print_gpu_utilization()
```
ไปฅไธใฏใ็กๆใฎGPUใกใขใชใใ้ๅงใใฆใใใใจใ็ขบ่ชใใพใใใ๏ผ
```py
>>> print_gpu_utilization()
GPU memory occupied: 0 MB.
```
GPUใกใขใชใใขใใซใ่ชญใฟ่พผใๅใฎใใใซๅ ๆใใใฆใใชใใใใซ่ฆใใพใใใใใใไฝฟใใฎใใทใณใงใฎ็ถๆณใงใชใๅ ดๅใฏใGPUใกใขใชใไฝฟ็จใใฆใใใในใฆใฎใใญใปในใๅๆญขใใฆใใ ใใใใใ ใใใในใฆใฎ็ฉบใGPUใกใขใชใใฆใผใถใผใไฝฟ็จใงใใใใใงใฏใใใพใใใใขใใซใGPUใซ่ชญใฟ่พผใพใใใจใใซใผใใซใ่ชญใฟ่พผใพใใ1ใ2GBใฎใกใขใชใไฝฟ็จใใใใจใใใใพใใใใใใฉใใใใใใ็ขบ่ชใใใใใซใGPUใซๅฐใใชใใณใฝใซใ่ชญใฟ่พผใใจใใซใผใใซใ่ชญใฟ่พผใพใใพใใ
```py
>>> import torch
>>> torch.ones((1, 1)).to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 1343 MB.
```
ใซใผใใซใ ใใง1.3GBใฎGPUใกใขใชใไฝฟ็จใใฆใใใใจใใใใใพใใๆฌกใซใใขใใซใใฉใใ ใใฎในใใผในใไฝฟ็จใใฆใใใใ่ฆใฆใฟใพใใใใ
## Load Model
ใพใใ`bert-large-uncased` ใขใใซใ่ชญใฟ่พผใฟใพใใใขใใซใฎ้ใฟใ็ดๆฅGPUใซ่ชญใฟ่พผใใใจใงใ้ใฟใ ใใใฉใใ ใใฎในใใผในใไฝฟ็จใใฆใใใใ็ขบ่ชใงใใพใใ
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased").to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 2631 MB.
```
ใขใใซใฎ้ใฟใ ใใงใGPUใกใขใชใ1.3 GBไฝฟ็จใใฆใใใใจใใใใใพใใๆญฃ็ขบใชๆฐๅคใฏใไฝฟ็จใใฆใใๅ
ทไฝ็ใชGPUใซไพๅญใใพใใๆฐใใGPUใงใฏใใขใใซใฎ้ใฟใๆ้ฉๅใใใๆนๆณใง่ชญใฟ่พผใพใใใใใใขใใซใฎไฝฟ็จใ้ซ้ๅใใใใจใใใใใใใขใใซใใใๅคใใฎในใใผในใๅ ๆใใใใจใใใใพใใใใฆใ`nvidia-smi` CLIใจๅใ็ตๆใๅพใใใใใ็ฐกๅใซ็ขบ่ชใใใใจใใงใใพใใ
```bash
nvidia-smi
```
```bash
Tue Jan 11 08:58:05 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 |
| N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB |
+-----------------------------------------------------------------------------+
```
ๅๅใจๅใๆฐๅคใๅๅพใใ16GBใฎใกใขใชใๆญ่ผใใV100 GPUใไฝฟ็จใใฆใใใใจใใใใใพใใใใฆใใขใใซใฎใใฌใผใใณใฐใ้ๅงใใGPUใกใขใชใฎๆถ่ฒปใใฉใฎใใใซๅคๅใใใใ็ขบ่ชใใฆใฟใพใใใใใพใใใใใคใใฎๆจๆบ็ใชใใฌใผใใณใฐๅผๆฐใ่จญๅฎใใพใ:
```py
default_args = {
"output_dir": "tmp",
"evaluation_strategy": "steps",
"num_train_epochs": 1,
"log_level": "error",
"report_to": "none",
}
```
<Tip>
่คๆฐใฎๅฎ้จใๅฎ่กใใไบๅฎใใใๅ ดๅใๅฎ้จ้ใงใกใขใชใ้ฉๅใซใฏใชใขใใใใใซใๅฎ้จใฎ้ใซ Python ใซใผใใซใๅ่ตทๅใใฆใใ ใใใ
</Tip>
## Memory utilization at vanilla training
[`Trainer`] ใไฝฟ็จใใฆใGPU ใใใฉใผใใณในใฎๆ้ฉๅใใฏใใใฏใไฝฟ็จใใใซใใใใตใคใบ 4 ใงใขใใซใใใฌใผใใณใฐใใพใใใ๏ผ
```py
>>> from transformers import TrainingArguments, Trainer, logging
>>> logging.set_verbosity_error()
>>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args)
>>> trainer = Trainer(model=model, args=training_args, train_dataset=ds)
>>> result = trainer.train()
>>> print_summary(result)
```
```
Time: 57.82
Samples/second: 8.86
GPU memory occupied: 14949 MB.
```
ๆขใซใๆฏ่ผ็ๅฐใใใใใใตใคใบใงใใGPUใฎใปใจใใฉใฎใกใขใชใใใงใซไฝฟ็จใใใฆใใใใจใใใใใพใใใใใใใใๅคงใใชใใใใตใคใบใไฝฟ็จใใใใจใฏใใใฐใใฐใขใใซใฎๅๆใ้ใใชใฃใใใๆ็ต็ใชๆง่ฝใๅไธใใใใใใใจใใใใพใใใใใใฃใฆใ็ๆณ็ใซใฏใใใใใตใคใบใใขใใซใฎ่ฆไปถใซๅใใใฆ่ชฟๆดใใใใฎใงใใใGPUใฎๅถ้ใซๅใใใฆ่ชฟๆดใใๅฟ
่ฆใฏใใใพใใใ่ๅณๆทฑใใใจใซใใขใใซใฎใตใคใบใใใใฏใใใซๅคใใฎใกใขใชใไฝฟ็จใใฆใใพใใใชใใใใชใใฎใใๅฐใ็่งฃใใใใใซใใขใใซใฎๆไฝใจใกใขใชใฎๅฟ
่ฆๆงใ่ฆใฆใฟใพใใใใ
## Anatomy of Model's Operations
Transformerใขใผใญใใฏใใฃใซใฏใ่จ็ฎๅผทๅบฆใซใใฃใฆไปฅไธใฎ3ใคใฎไธป่ฆใชๆไฝใฐใซใผใใๅซใพใใฆใใพใใ
1. **ใใณใฝใซใฎๅ็ธฎ**
็ทๅฝขๅฑคใจMulti-Head Attentionใฎใณใณใใผใใณใใฏใใในใฆใใใๅฆ็ใใใ **่กๅ-่กๅใฎไน็ฎ** ใ่กใใพใใใใใใฎๆไฝใฏใTransformerใฎใใฌใผใใณใฐใซใใใฆๆใ่จ็ฎ้็ด็ใช้จๅใงใใ
2. **็ตฑ่จ็ๆญฃ่ฆๅ**
Softmaxใจๅฑคๆญฃ่ฆๅใฏใใใณใฝใซใฎๅ็ธฎใใใ่จ็ฎ่ฒ ่ทใๅฐใชใใ1ใคใพใใฏ่คๆฐใฎ **็ธฎ็ดๆไฝ** ใๅซใฟใใใฎ็ตๆใใใใใไปใใฆ้ฉ็จใใใพใใ
3. **่ฆ็ด ใใจใฎๆผ็ฎๅญ**
ใใใใฏๆฎใใฎๆผ็ฎๅญใงใ๏ผ**ใใคใขในใใใญใใใขใฆใใๆดปๆงๅใใใใณๆฎๅทฎๆฅ็ถ** ใงใใใใใใฏๆใ่จ็ฎ้็ด็ใชๆไฝใงใฏใใใพใใใ
ใใใฉใผใใณในใฎใใใซใใใฏใๅๆใใ้ใซใใใฎ็ฅ่ญใฏๅฝน็ซใคใใจใใใใพใใ
ใใฎ่ฆ็ดใฏใ[Data Movement Is All You Need: Optimizing Transformers 2020ใซ้ขใใใฑใผในในใฟใใฃ](https://arxiv.org/abs/2007.00072)ใใๆดพ็ใใฆใใพใใ
## Anatomy of Model's Memory
ใขใใซใฎใใฌใผใใณใฐใGPUใซ้
็ฝฎใใใใขใใซใใใใฏใใใซๅคใใฎใกใขใชใไฝฟ็จใใใใจใ่ฆใฆใใพใใใใใใฏใใใฌใผใใณใฐไธญใซGPUใกใขใชใไฝฟ็จใใๅคใใฎใณใณใใผใใณใใๅญๅจใใใใใงใใGPUใกใขใชไธใฎใณใณใใผใใณใใฏไปฅไธใฎ้ใใงใ๏ผ
1. ใขใใซใฎ้ใฟ
2. ใชใใใฃใใคใถใฎ็ถๆ
3. ๅพ้
4. ๅพ้
่จ็ฎใฎใใใซไฟๅญใใใๅๅใๆดปๆงๅ
5. ไธๆใใใใก
6. ๆฉ่ฝๅบๆใฎใกใขใช
้ๅธธใAdamWใไฝฟ็จใใฆๆททๅ็ฒพๅบฆใงใใฌใผใใณใฐใใใใขใใซใฏใใขใใซใใฉใกใผใฟใใจใซ18ใใคใใจใขใฏใใฃใใผใทใงใณใกใขใชใๅฟ
่ฆใงใใๆจ่ซใงใฏใชใใใฃใใคใถใฎ็ถๆ
ใจๅพ้
ใฏไธ่ฆใงใใฎใงใใใใใๅทฎใๅผใใใจใใงใใพใใใใใใฃใฆใๆททๅ็ฒพๅบฆใฎๆจ่ซใซใใใฆใฏใใขใใซใใฉใกใผใฟใใจใซ6ใใคใใจใขใฏใใฃใใผใทใงใณใกใขใชใๅฟ
่ฆใงใใ
่ฉณ็ดฐใ่ฆใฆใฟใพใใใใ
**ใขใใซใฎ้ใฟ:**
- fp32ใใฌใผใใณใฐใฎใใฉใกใผใฟใผๆฐ * 4ใใคใ
- ใใใฏในใใฌใทใธใงใณใใฌใผใใณใฐใฎใใฉใกใผใฟใผๆฐ * 6ใใคใ๏ผใกใขใชๅ
ใซfp32ใจfp16ใฎใขใใซใ็ถญๆ๏ผ
**ใชใใใฃใใคใถใฎ็ถๆ
:**
- ้ๅธธใฎAdamWใฎใใฉใกใผใฟใผๆฐ * 8ใใคใ๏ผ2ใคใฎ็ถๆ
ใ็ถญๆ๏ผ
- 8-bit AdamWใชใใใฃใใคใถใฎใใฉใกใผใฟใผๆฐ * 2ใใคใ๏ผ[bitsandbytes](https://github.com/TimDettmers/bitsandbytes)ใฎใใใชใชใใใฃใใคใถ๏ผ
- ใขใผใกใณใฟใ ใๆใคSGDใฎใใใชใชใใใฃใใคใถใฎใใฉใกใผใฟใผๆฐ * 4ใใคใ๏ผ1ใคใฎ็ถๆ
ใ็ถญๆ๏ผ
**ๅพ้
**
- fp32ใพใใฏใใใฏในใใฌใทใธใงใณใใฌใผใใณใฐใฎใใฉใกใผใฟใผๆฐ * 4ใใคใ๏ผๅพ้
ใฏๅธธใซfp32ใงไฟๆ๏ผ
**ใใฉใฏใผใใขใฏใใฃใใผใทใงใณ**
- ใตใคใบใฏๅคใใฎ่ฆๅ ใซไพๅญใใไธป่ฆใช่ฆๅ ใฏใทใผใฑใณในใฎ้ทใใ้ ใๅฑคใฎใตใคใบใใใใณใใใใตใคใบใงใใ
ใใฉใฏใผใใจใใใฏใฏใผใใฎ้ขๆฐใซใใฃใฆๆธกใใใ่ฟใใใๅ
ฅๅใจๅบๅใใใใณๅพ้
่จ็ฎใฎใใใซไฟๅญใใใใใฉใฏใผใใขใฏใใฃใใผใทใงใณใใใใพใใ
**ไธๆ็ใชใกใขใช**
ใใใซใ่จ็ฎใๅฎไบใใๅพใซ่งฃๆพใใใใใพใใพใชไธๆๅคๆฐใใใใพใใใใใใใฏไธๆ็ใซ่ฟฝๅ ใฎใกใขใชใๅฟ
่ฆใจใใOOMใซ้ใใๅฏ่ฝๆงใใใใพใใใใใใฃใฆใใณใผใใฃใณใฐๆใซใฏใใฎใใใชไธๆๅคๆฐใซๆฆ็ฅ็ใซ่ใใๅฟ
่ฆใชใใชใฃใใๆ็คบ็ใซ่งฃๆพใใใใจใ้ๅธธใซ้่ฆใงใใ
**ๆฉ่ฝๅบๆใฎใกใขใช**
ๆฌกใซใใฝใใใฆใงใขใซใฏ็นๅฅใชใกใขใช่ฆไปถใใใๅ ดๅใใใใพใใใใจใใฐใใใผใ ใตใผใใไฝฟ็จใใฆใใญในใใ็ๆใใๅ ดๅใใฝใใใฆใงใขใฏ่คๆฐใฎๅ
ฅๅใจๅบๅใฎใณใใผใ็ถญๆใใๅฟ
่ฆใใใใพใใ
**`forward`ใจ`backward`ใฎๅฎ่ก้ๅบฆ**
็ณใฟ่พผใฟๅฑคใจ็ทๅฝขๅฑคใงใฏใใใใฏใฏใผใใซใใฉใฏใผใใจๆฏในใฆ2ๅใฎFLOPSใใใใไธ่ฌ็ใซใฏ็ด2ๅ้
ใใชใใพใ๏ผใใใฏใฏใผใใฎใตใคใบใไธไพฟใงใใใใจใใใใใใใใไปฅไธใซใชใใใจใใใใพใ๏ผใ ใขใฏใใฃใใผใทใงใณใฏ้ๅธธใใใณใๅน
ๅถ้ใใใฆใใใใใใฏใฏใผใใงใขใฏใใฃใใผใทใงใณใใใฉใฏใผใใใใๅคใใฎใใผใฟใ่ชญใใใจใไธ่ฌ็ใงใ๏ผใใจใใฐใใขใฏใใฃใใผใทใงใณใใฉใฏใผใใฏ1ๅ่ชญใฟๅใใ1ๅๆธใ่พผใฟใใขใฏใใฃใใผใทใงใณใใใฏใฏใผใใฏใใฉใฏใผใใฎgradOutputใใใณๅบๅใ2ๅ่ชญใฟๅใใ1ๅๆธใ่พผใฟใพใ๏ผใ
ใ่ฆงใฎ้ใใGPUใกใขใชใ็ฏ็ดใใใๆไฝใ้ซ้ๅใงใใๅฏ่ฝๆงใฎใใใใใคใใฎๅ ดๆใใใใพใใ GPUใฎๅฉ็จใจ่จ็ฎ้ๅบฆใซๅฝฑ้ฟใไธใใ่ฆๅ ใ็่งฃใใใฎใงใใใใฉใผใใณในๆ้ฉๅใฎๆ่กใซใคใใฆใฏใ[ๅไธGPUใงใฎๅน็็ใชใใฌใผใใณใฐใฎใใใฎๆนๆณใจใใผใซ](perf_train_gpu_one)ใฎใใญใฅใกใณใใผใทใงใณใใผใธใๅ็
งใใฆใใ ใใใ
่ฉณ็ดฐใ่ฆใฆใฟใพใใใใ
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Create a custom architecture
An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training, or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an `AutoClass`. Learn how to:

- Load and customize a model configuration.
- Create a model architecture.
- Create a slow and fast tokenizer for text.
- Create an image processor for vision tasks.
- Create a feature extractor for audio tasks.
- Create a processor for multimodal tasks.
## Configuration
[่จญๅฎ](main_classes/configuration)ใฏใใขใใซใฎ็นๅฎใฎๅฑๆงใๆใใพใใๅใขใใซใฎ่จญๅฎใซใฏ็ฐใชใๅฑๆงใใใใพใใใใจใใฐใใในใฆใฎNLPใขใใซใซใฏใ`hidden_size`ใ`num_attention_heads`ใ`num_hidden_layers`ใใใใณ`vocab_size`ๅฑๆงใๅ
ฑ้ใใฆใใใพใใใใใใฎๅฑๆงใฏใใขใใซใๆง็ฏใใใใใฎๆณจๆใใใใฎๆฐใ้ ใๅฑคใฎๆฐใๆๅฎใใพใใ
[DistilBERT](model_doc/distilbert)ใใใ่ฉณใใ่ชฟในใใใใซใ[`DistilBertConfig`]ใซใขใฏใปในใใฆใใฎๅฑๆงใ่ชฟในใฆใฟใพใใใ๏ผ
```py
>>> from transformers import DistilBertConfig
>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
```
[`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All attributes are customizable, creating space for experimentation. For example, you can customize a default model to:

- Try a different activation function with the `activation` parameter.
- Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter.
```py
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> print(my_config)
DistilBertConfig {
"activation": "relu",
"attention_dropout": 0.4,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
```
ไบๅๅญฆ็ฟๆธใฟใขใใซใฎๅฑๆงใฏใ[`~PretrainedConfig.from_pretrained`] ้ขๆฐใงๅคๆดใงใใพใ๏ผ
```py
>>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4)
```
Once you are satisfied with your model configuration, you can save it with [`PretrainedConfig.save_pretrained`]. Your configuration file is stored as a JSON file in the specified save directory.
```py
>>> my_config.save_pretrained(save_directory="./your_model_save_path")
```
่จญๅฎใใกใคใซใๅๅฉ็จใใใซใฏใ[`~PretrainedConfig.from_pretrained`]ใไฝฟ็จใใฆใใใใญใผใใใพใ๏ผ
```py
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
```
<Tip>
ใซในใฟใ ๆงๆใใกใคใซใ่พๆธใจใใฆไฟๅญใใใใจใใใซในใฟใ ๆงๆๅฑๆงใจใใใฉใซใใฎๆงๆๅฑๆงใฎ้ใใ ใใไฟๅญใใใใจใใงใใพใ๏ผ่ฉณ็ดฐใซใคใใฆใฏ[configuration](main_classes/configuration)ใฎใใญใฅใกใณใใผใทใงใณใใ่ฆงใใ ใใใ
</Tip>
## Model
ๆฌกใฎในใใใใฏใ[ใขใใซ](main_classes/models)ใไฝๆใใใใจใงใใใขใใซ๏ผใขใผใญใใฏใใฃใจใ็ทฉใ่จใใใใใจใใใใพใ๏ผใฏใๅใฌใคใคใผใไฝใใใฆใใใใใฉใฎๆไฝใ่กใใใฆใใใใๅฎ็พฉใใพใใๆงๆใใใฎ `num_hidden_layers` ใฎใใใชๅฑๆงใฏใขใผใญใใฏใใฃใๅฎ็พฉใใใใใซไฝฟ็จใใใพใใ
ใในใฆใฎใขใใซใฏ [`PreTrainedModel`] ใใใผในใฏใฉในใจใใๅ
ฅๅๅใ่พผใฟใฎใชใตใคใบใใปใซใใขใใณใทใงใณใใใใฎใใซใผใใณใฐใชใฉใๅ
ฑ้ใฎใกใฝใใใใใใคใใใใพใใ
ใใใซใใในใฆใฎใขใใซใฏ [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html)ใ[`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)ใใพใใฏ [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) ใฎใใใใใฎใตใใฏใฉในใงใใใใพใใใคใพใใใขใใซใฏใใใใใฎใใฌใผใ ใฏใผใฏใฎไฝฟ็จๆณใจไบๆๆงใใใใพใใ
<frameworkcontent>
<pt>
ใขใใซใซใซในใฟใ ๆงๆๅฑๆงใใญใผใใใพใ๏ผ
```py
>>> from transformers import DistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
>>> model = DistilBertModel(my_config)
```
ใใใซใใใไบๅใใฌใผใใณใฐๆธใฟใฎ้ใฟใงใฏใชใใฉใณใใ ใชๅคใๆใคใขใใซใไฝๆใใใพใใ
ใใใฏใใใฌใผใใณใฐใ่กใใใใพใงใใพใ ๆ็จใชใใฎใจใใฆไฝฟ็จใใใใจใฏใงใใพใใใ
ใใฌใผใใณใฐใฏใณในใใจๆ้ใใใใใใญใปในใงใใ
้ๅธธใใใฌใผใใณใฐใซๅฟ
่ฆใชใชใฝใผในใฎไธ้จใใไฝฟ็จใใใใใ้ใใใ่ฏใ็ตๆใๅพใใใใซไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใใใจใ่ฏใใงใใใใ
[`~PreTrainedModel.from_pretrained`]ใไฝฟ็จใใฆไบๅๅญฆ็ฟๆธใฟใขใใซใไฝๆใใพใ๏ผ
```py
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased")
```
ไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใใญใผใใใ้ใใขใใซใ๐ค Transformersใซใใฃใฆๆไพใใใฆใใๅ ดๅใใใใฉใซใใฎใขใใซ่จญๅฎใ่ชๅ็ใซใญใผใใใใพใใใใ ใใๅฟ
่ฆใซๅฟใใฆใใใฉใซใใฎใขใใซ่จญๅฎๅฑๆงใฎไธ้จใพใใฏใในใฆใ็ฌ่ชใฎใใฎใง็ฝฎใๆใใใใจใใงใใพใใ
```py
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
```
</pt>
<tf>
ใขใใซใซใซในใฟใ ่จญๅฎๅฑๆงใใญใผใใใฆใใ ใใ๏ผ
```py
>>> from transformers import TFDistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```
ใใใซใใใไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใงใฏใชใใฉใณใใ ใชๅคใๆใคใขใใซใไฝๆใใใพใใ
ใใฎใขใใซใๆ็จใช็ฎ็ใซใฏใพใ ไฝฟ็จใใใใจใฏใงใใพใใใใใฌใผใใณใฐใฏใณในใใใใใใๆ้ใใใใใใญใปในใงใใ
ไธ่ฌ็ใซใฏใใใฌใผใใณใฐใซๅฟ
่ฆใชใชใฝใผในใฎไธ้จใใไฝฟ็จใใใซใใใ้ใๅชใใ็ตๆใๅพใใใใซไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใใใจใ่ฏใใงใใใใ
[`~TFPreTrainedModel.from_pretrained`]ใไฝฟ็จใใฆไบๅๅญฆ็ฟๆธใฟใขใใซใไฝๆใใพใ๏ผ
```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
```
ไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใใญใผใใใ้ใใขใใซใ๐ค Transformersใซใใฃใฆๆไพใใใฆใใๅ ดๅใใใใฉใซใใฎใขใใซๆงๆใ่ชๅ็ใซใญใผใใใใพใใใใ ใใๅฟ
่ฆใงใใใฐใใใฉใซใใฎใขใใซๆงๆๅฑๆงใฎไธ้จใพใใฏใในใฆใ็ฌ่ชใฎใใฎใง็ฝฎใๆใใใใจใใงใใพใ๏ผ
```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
```
</tf>
</frameworkcontent>
### Model heads
ใใฎๆ็นใงใใใผในใฎDistilBERTใขใใซใใใใใใใฏ้ ใใ็ถๆ
ใๅบๅใใพใใ้ ใใ็ถๆ
ใฏใขใใซใฎใใใใธใฎๅ
ฅๅใจใใฆๆธกใใใๆ็ต็ใชๅบๅใ็ๆใใพใใ๐ค Transformersใฏใใขใใซใใใฎใฟในใฏใใตใใผใใใฆใใ้ใใๅใฟในใฏใซๅฏพๅฟใใ็ฐใชใใขใใซใใใใๆไพใใพใ๏ผใคใพใใDistilBERTใ็ฟป่จณใฎใใใชใทใผใฑใณในๅฏพใทใผใฑใณในใฟในใฏใซไฝฟ็จใใใใจใฏใงใใพใใ๏ผใ
<frameworkcontent>
<pt>
ใใจใใฐใ[`DistilBertForSequenceClassification`]ใฏใใทใผใฑใณในๅ้กใใใใๆใคใใผในใฎDistilBERTใขใใซใงใใใทใผใฑใณในๅ้กใใใใฏใใใผใซใใใๅบๅใฎไธใซใใ็ทๅฝขๅฑคใงใใ
```py
>>> from transformers import DistilBertForSequenceClassification
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
ๆฐใใใฟในใฏใซใใฎใใงใใฏใใคใณใใ็ฐกๅใซๅๅฉ็จใใใซใฏใ็ฐใชใใขใใซใใใใซๅใๆฟใใพใใ
่ณชๅๅฟ็ญใฟในใฏใฎๅ ดๅใ[`DistilBertForQuestionAnswering`] ใขใใซใใใใไฝฟ็จใใพใใ
่ณชๅๅฟ็ญใใใใฏใทใผใฑใณในๅ้กใใใใจ้กไผผใใฆใใพใใใ้ ใ็ถๆ
ใฎๅบๅใฎไธใซ็ทๅฝขๅฑคใใใใพใใ
```py
>>> from transformers import DistilBertForQuestionAnswering
>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
</pt>
<tf>
ไพใใฐใ[`TFDistilBertForSequenceClassification`]ใฏใใทใผใฑใณในๅ้กใใใใๆใคใใผในใฎDistilBERTใขใใซใงใใใทใผใฑใณในๅ้กใใใใฏใใใผใซใใใๅบๅใฎไธใซใใ็ทๅฝขๅฑคใงใใ
```py
>>> from transformers import TFDistilBertForSequenceClassification
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
ๅฅใฎใฟในใฏใซใใฎใใงใใฏใใคใณใใ็ฐกๅใซๅๅฉ็จใใใใจใใงใใ็ฐใชใใขใใซใใใใซๅใๆฟใใใ ใใงใใ
่ณชๅๅฟ็ญใฟในใฏใฎๅ ดๅใ[`TFDistilBertForQuestionAnswering`]ใขใใซใใใใไฝฟ็จใใพใใ
่ณชๅๅฟ็ญใใใใฏใทใผใฑใณในๅ้กใใใใจไผผใฆใใพใใใ้ ใ็ถๆ
ใฎๅบๅใฎไธใซ็ทๅฝขๅฑคใใใใ ใใงใใ
```py
>>> from transformers import TFDistilBertForQuestionAnswering
>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
</tf>
</frameworkcontent>
## Tokenizer
ใใญในใใใผใฟใใขใใซใงไฝฟ็จใใๅใซๅฟ
่ฆใชๆๅพใฎใใผในใฏใฉในใฏใ็ใฎใใญในใใใใณใฝใซใซๅคๆใใใใใฎ[ใใผใฏใใคใถ](main_classes/tokenizer)ใงใใ
๐ค Transformersใงไฝฟ็จใงใใ2ใคใฎใฟใคใใฎใใผใฏใใคใถใใใใพใ๏ผ
- [`PreTrainedTokenizer`]: ใใผใฏใใคใถใฎPythonๅฎ่ฃ
ใงใใ
- [`PreTrainedTokenizerFast`]: Rustใใผในใฎ[๐ค Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/)ใฉใคใใฉใชใใใฎใใผใฏใใคใถใงใใ
ใใฎใใผใฏใใคใถใฎใฟใคใใฏใใใฎRustๅฎ่ฃ
ใซใใใ็นใซใใใใใผใฏใใคใผใผใทใงใณไธญใซ้ซ้ใงใใ
้ซ้ใชใใผใฏใใคใถใฏใใใผใฏใณใๅ
ใฎๅ่ชใพใใฏๆๅญใซใใใใณใฐใใ*ใชใใปใใใใใใณใฐ*ใชใฉใฎ่ฟฝๅ ใกใฝใใใๆไพใใพใใ
ไธกๆนใฎใใผใฏใใคใถใฏใใจใณใณใผใใจใใณใผใใๆฐใใใใผใฏใณใฎ่ฟฝๅ ใ็นๅฅใชใใผใฏใณใฎ็ฎก็ใชใฉใๅ
ฑ้ใฎใกใฝใใใใตใใผใใใฆใใพใใ
<Tip warning={true}>
ใในใฆใฎใขใใซใ้ซ้ใชใใผใฏใใคใถใใตใใผใใใฆใใใใใงใฏใใใพใใใ
ใขใใซใ้ซ้ใชใใผใฏใใคใถใใตใใผใใใฆใใใใฉใใใ็ขบ่ชใใใซใฏใใใฎ[่กจ](index#supported-frameworks)ใใ่ฆงใใ ใใใ
</Tip>
็ฌ่ชใฎใใผใฏใใคใถใใใฌใผใใณใฐใใๅ ดๅใ*ใใญใฃใใฉใชใผ*ใใกใคใซใใใใผใฏใใคใถใไฝๆใงใใพใใ
```py
>>> from transformers import DistilBertTokenizer
>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
```
ใซในใฟใ ใใผใฏใใคใถใผใใ็ๆใใใ่ชๅฝใฏใไบๅๅญฆ็ฟๆธใฟใขใใซใฎใใผใฏใใคใถใผใ็ๆใใ่ชๅฝใจใฏ็ฐใชใใใจใ่ฆใใฆใใใใจใฏ้่ฆใงใใ
ไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใๅ ดๅใฏใไบๅๅญฆ็ฟๆธใฟใขใใซใฎ่ชๅฝใไฝฟ็จใใๅฟ
่ฆใใใใพใใใใใใชใใจใๅ
ฅๅใๆๅณใใชใใชใใชใใพใใ
[`DistilBertTokenizer`]ใฏใฉในใไฝฟ็จใใฆใไบๅๅญฆ็ฟๆธใฟใขใใซใฎ่ชๅฝใๆใคใใผใฏใใคใถใผใไฝๆใใพใ:
```py
>>> from transformers import DistilBertTokenizer
>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
```
Create a fast tokenizer with the [`DistilBertTokenizerFast`] class:
```py
>>> from transformers import DistilBertTokenizerFast
>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
```
<Tip>
ใใใฉใซใใงใฏใ[`AutoTokenizer`]ใฏ้ซ้ใชใใผใฏใใคใถใ่ชญใฟ่พผใใใจใใพใใ`from_pretrained`ๅ
ใง`use_fast=False`ใ่จญๅฎใใใใจใงใใใฎๅไฝใ็กๅนใซใใใใจใใงใใพใใ
</Tip>
## Image Processor
็ปๅใใญใปใใตใฏใใธใงใณๅ
ฅๅใๅฆ็ใใพใใใใใฏๅบๆฌใฏใฉใน [`~image_processing_utils.ImageProcessingMixin`] ใ็ถๆฟใใฆใใพใใ
ไฝฟ็จใใใซใฏใไฝฟ็จใใฆใใใขใใซใซ้ข้ฃไปใใใใ็ปๅใใญใปใใตใไฝๆใใพใใ
ใใจใใฐใ็ปๅๅ้กใซ[ViT](model_doc/vit)ใไฝฟ็จใใๅ ดๅใใใใฉใซใใฎ [`ViTImageProcessor`] ใไฝๆใใพใใ
```py
>>> from transformers import ViTImageProcessor
>>> vit_extractor = ViTImageProcessor()
>>> print(vit_extractor)
ViTImageProcessor {
"do_normalize": true,
"do_resize": true,
"image_processor_type": "ViTImageProcessor",
"image_mean": [
0.5,
0.5,
0.5
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": 2,
"size": 224
}
```
<Tip>
ใซในใฟใใคใบใๅฟ
่ฆใจใใชใๅ ดๅใใขใใซใฎใใใฉใซใใฎ็ปๅใใญใปใใตใใฉใกใผใฟใใญใผใใใใซใฏใๅ็ดใซ`from_pretrained`ใกใฝใใใไฝฟ็จใใฆใใ ใใใ
</Tip>
[`ViTImageProcessor`]ใฎใใฉใกใผใฟใๅคๆดใใฆใใซในใฟใ ใฎ็ปๅใใญใปใใตใไฝๆใงใใพใ๏ผ
```py
>>> from transformers import ViTImageProcessor
>>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTImageProcessor {
"do_normalize": false,
"do_resize": true,
"image_processor_type": "ViTImageProcessor",
"image_mean": [
0.3,
0.3,
0.3
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": "PIL.Image.BOX",
"size": 224
}
```
## Feature Extractor
ใใฃใผใใฃใผๆฝๅบๅจใฏ้ณๅฃฐๅ
ฅๅใๅฆ็ใใพใใใใใฏๅบๆฌ็ใช [`~feature_extraction_utils.FeatureExtractionMixin`] ใฏใฉในใใ็ถๆฟใใใ้ณๅฃฐๅ
ฅๅใๅฆ็ใใใใใฎ [`SequenceFeatureExtractor`] ใฏใฉในใใใ็ถๆฟใใใใใจใใใใพใใ
ไฝฟ็จใใใซใฏใใขใใซใซ้ข้ฃไปใใใใใใฃใผใใฃใผๆฝๅบๅจใไฝๆใใพใใใใจใใฐใ้ณๅฃฐๅ้กใซ [Wav2Vec2](model_doc/wav2vec2) ใไฝฟ็จใใๅ ดๅใใใใฉใซใใฎ [`Wav2Vec2FeatureExtractor`] ใไฝๆใใพใใ
```py
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": false,
"sampling_rate": 16000
}
```
<Tip>
ใซในใฟใใคใบใ่กใใชใๅ ดๅใใขใใซใฎใใใฉใซใใฎ็นๅพดๆฝๅบๅจใใฉใกใผใฟใผใใญใผใใใใซใฏใๅใซ `from_pretrained` ใกใฝใใใไฝฟ็จใใฆใใ ใใใ
</Tip>
Modify any of the [`Wav2Vec2FeatureExtractor`] parameters to create your custom feature extractor:
```py
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False)
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
"do_normalize": false,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": false,
"sampling_rate": 8000
}
```
## Processor
ใใซใใขใผใใซใฟในใฏใใตใใผใใใใขใใซใซๅฏพใใฆใ๐ค Transformersใฏไพฟๅฉใชใใญใปใใตใฏใฉในใๆไพใใฆใใพใใ
ใใฎใใญใปใใตใฏใฉในใฏใ็นๅพด้ๆฝๅบๅจใใใผใฏใใคใถใชใฉใฎๅฆ็ใฏใฉในใไพฟๅฉใซใฉใใใใๅไธใฎใชใใธใงใฏใใซ็ตๅใใพใใ
ใใจใใฐใ่ชๅ้ณๅฃฐ่ช่ญใฟในใฏ๏ผASR๏ผ็จใซ[`Wav2Vec2Processor`]ใไฝฟ็จใใฆใฟใพใใใใ
ASRใฏ้ณๅฃฐใใใญในใใซ่ปขๅใใใฟในใฏใงใใใ้ณๅฃฐๅ
ฅๅใๅฆ็ใใใใใซ็นๅพด้ๆฝๅบๅจใจใใผใฏใใคใถใๅฟ
่ฆใงใใ
้ณๅฃฐๅ
ฅๅใๅฆ็ใใ็นๅพด้ๆฝๅบๅจใไฝๆใใพใ๏ผ
```py
>>> from transformers import Wav2Vec2FeatureExtractor
>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```
ใใญในใๅ
ฅๅใๅฆ็ใใใใผใฏใใคใถใไฝๆใใพใ:
```py
>>> from transformers import Wav2Vec2CTCTokenizer
>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```
[`Wav2Vec2Processor`]ใง็นๅพด้ๆฝๅบๅจใจใใผใฏใใคใถใ็ตใฟๅใใใพใ๏ผ
```py
>>> from transformers import Wav2Vec2Processor
>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```
ไบใคใฎๅบๆฌใฏใฉใน - ่จญๅฎใจใขใใซ - ใใใณ่ฟฝๅ ใฎๅๅฆ็ใฏใฉใน๏ผใใผใฏใใคใถใ็ปๅใใญใปใใตใ็นๅพดๆฝๅบๅจใใพใใฏใใญใปใใต๏ผใไฝฟ็จใใใใจใงใ๐ค Transformers ใใตใใผใใใใขใใซใฎใใใใใไฝๆใงใใพใใใใใใฎๅบๆฌใฏใฉในใฏ่จญๅฎๅฏ่ฝใงใๅฟ
่ฆใช็นๆงใไฝฟ็จใงใใพใใใขใใซใใใฌใผใใณใฐ็จใซ็ฐกๅใซใปใใใขใใใใใใๆขๅญใฎไบๅๅญฆ็ฟๆธใฟใขใใซใๅพฎ่ชฟๆดใใใใจใใงใใพใใ
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# How ๐ค Transformers solve tasks
[๐ค Transformersใงใงใใใใจ](task_summary)ใงใ่ช็ถ่จ่ชๅฆ็๏ผNLP๏ผใ้ณๅฃฐใจใชใผใใฃใชใใณใณใใฅใผใฟใใธใงใณใฎใฟในใฏใใใใใฎ้่ฆใชใขใใชใฑใผใทใงใณใซใคใใฆๅญฆใณใพใใใใใฎใใผใธใงใฏใใขใใซใใใใใฎใฟในใฏใใฉใฎใใใซ่งฃๆฑบใใใใ่ฉณใใ่ฆใฆใใขใใซใฎๅ
้จใงไฝใ่ตทใใฃใฆใใใใ่ชฌๆใใพใใ็นๅฎใฎใฟในใฏใ่งฃๆฑบใใใใใซใฏๅคใใฎๆนๆณใใใใไธ้จใฎใขใใซใฏ็นๅฎใฎใใฏใใใฏใๅฎ่ฃ
ใใใใใพใใฏๆฐใใ่ฆณ็นใใใฟในใฏใซๅใ็ตใใใใใใพใใใใTransformerใขใใซใซใจใฃใฆใไธ่ฌ็ใชใขใคใใขใฏๅใใงใใๆ่ปใชใขใผใญใใฏใใฃใฎใใใใงใใปใจใใฉใฎใขใใซใฏใจใณใณใผใใใใณใผใใใพใใฏใจใณใณใผใ-ใใณใผใๆง้ ใฎๅค็จฎใงใใTransformerใขใใซไปฅๅคใซใใๅฝ็คพใฎใฉใคใใฉใชใซใฏใณใณใใฅใผใฟใใธใงใณใฟในใฏใซไปใงใไฝฟ็จใใใฆใใใใใคใใฎ็ณใฟ่พผใฟใใฅใผใฉใซใใใใฏใผใฏ๏ผCNN๏ผใใใใพใใใพใใ็พไปฃใฎCNNใใฉใฎใใใซๆฉ่ฝใใใใ่ชฌๆใใพใใ
ใฟในใฏใใฉใฎใใใซ่งฃๆฑบใใใใใ่ชฌๆใใใใใซใใขใใซๅ
้จใงๆ็จใชไบๆธฌใๅบๅใใใใใซไฝใ่ตทใใใใซใคใใฆ่ชฌๆใใพใใ
- [Wav2Vec2](model_doc/wav2vec2) for audio classification and automatic speech recognition (ASR)
- [Vision Transformer (ViT)](model_doc/vit) and [ConvNeXT](model_doc/convnext) for image classification
- [DETR](model_doc/detr) for object detection
- [Mask2Former](model_doc/mask2former) for image segmentation
- [GLPN](model_doc/glpn) for depth estimation
- [BERT](model_doc/bert) for NLP tasks like text classification, token classification, and question answering that use an encoder
- [GPT2](model_doc/gpt2) for NLP tasks like text generation that use a decoder
- [BART](model_doc/bart) for NLP tasks like summarization and translation that use an encoder-decoder
<Tip>
ใใใซ้ฒใๅใซใๅ
ใฎTransformerใขใผใญใใฏใใฃใฎๅบๆฌ็ใช็ฅ่ญใๆใคใจ่ฏใใงใใใจใณใณใผใใใใณใผใใใใใณๆณจๆๅใใฉใฎใใใซๅไฝใใใใ็ฅใฃใฆใใใจใ็ฐใชใTransformerใขใใซใใฉใฎใใใซๅไฝใใใใ็่งฃใใใฎใซๅฝน็ซใกใพใใๅงใใฆใใใใใชใใฌใใทใฅใๅฟ
่ฆใชๅ ดๅใฏใ่ฉณ็ดฐใชๆ
ๅ ฑใซใคใใฆใฏๅฝ็คพใฎ[ใณใผใน](https://huggingface.co/course/chapter1/4?fw=pt)ใใใงใใฏใใฆใใ ใใ๏ผ
</Tip>
## Speech and audio
[Wav2Vec2](model_doc/wav2vec2) is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/>
</div>
ใใฎใขใใซใซใฏไธปใซๆฌกใฎ4ใคใฎใณใณใใผใใณใใใใใพใใ
1. *็นๅพดใจใณใณใผใ*๏ผ็ใฎ้ณๅฃฐๆณขๅฝขใๅใๅใใๅนณๅๅคใใผใญใซๆญฃ่ฆๅใใๅไฝๅๆฃใซๅคๆใใใใใ20msใใจใฎ็นๅพดใใฏใใซใฎใทใผใฑใณในใซๅคๆใใพใใ
2. ๆณขๅฝขใฏ่ช็ถใซ้ฃ็ถใใฆใใใใใใใญในใใฎใทใผใฑใณในใๅ่ชใซๅๅฒใงใใใใใซใงใใใใใซใ็นๅพดใใฏใใซใฏ*้ๅญๅใขใธใฅใผใซ*ใซๆธกใใใ้ขๆฃ้ณๅฃฐใฆใใใใๅญฆ็ฟใใใใจใใพใใ้ณๅฃฐใฆใใใใฏ*ใณใผใใใใฏ*๏ผ่ชๅฝใจ่ใใใใจใใงใใพใ๏ผใจใใฆ็ฅใใใใณใผใใฏใผใใฎใณใฌใฏใทใงใณใใ้ธๆใใใพใใใณใผใใใใฏใใใ้ฃ็ถใใใชใผใใฃใชๅ
ฅๅใๆใใใ่กจใใใฏใใซใพใใฏ้ณๅฃฐใฆใใใ๏ผใฟใผใฒใใใฉใใซใจ่ใใใใจใใงใใพใ๏ผใ้ธๆใใใใขใใซใไปใใฆ่ปข้ใใใพใใ
3. ็นๅพดใใฏใใซใฎ็ดๅๅใฏใฉใณใใ ใซใในใฏใใใใในใฏใใใ็นๅพดใใฏใใซใฏ*ใณใณใใญในใใใใใฏใผใฏ*ใซไพ็ตฆใใใพใใใใใฏใ็ธๅฏพ็ใชไฝ็ฝฎใจใณใใใใฃใณใฐใ่ฟฝๅ ใใTransformerใจใณใณใผใใงใใ
4. ใณใณใใญในใใใใใฏใผใฏใฎไบๅใใฌใผใใณใฐใฎ็ฎ็ใฏ*ใณใณใใฉในใใฃใใฟในใฏ*ใงใใใขใใซใฏใในใฏใใใไบๆธฌใฎ็ใฎ้ๅญๅ้ณๅฃฐ่กจ็พใใๅฝใฎไบๆธฌใฎใปใใใใไบๆธฌใใชใใใฐใชใใใใขใใซใฏๆใไผผใใณใณใใญในใใใฏใใซใจ้ๅญๅ้ณๅฃฐใฆใใใ๏ผใฟใผใฒใใใฉใใซ๏ผใ่ฆใคใใใใใซไฟใใใพใใ
ไปใWav2Vec2ใฏไบๅใใฌใผใใณใฐใใใฆใใใฎใงใใชใผใใฃใชๅ้กใพใใฏ่ชๅ้ณๅฃฐ่ช่ญใฎใใใซใใผใฟใใใกใคใณใใฅใผใณใงใใพใ๏ผ
### Audio classification
ไบๅใใฌใผใใณใฐใใใใขใใซใใชใผใใฃใชๅ้กใซไฝฟ็จใใใซใฏใๅบๆฌ็ใชWav2Vec2ใขใใซใฎไธใซใทใผใฑใณในๅ้กใใใใ่ฟฝๅ ใใพใใๅ้กใใใใฏใจใณใณใผใใฎ้ ใใ็ถๆ
ใๅใๅ
ฅใใ็ทๅฝขๅฑคใงใๅใชใผใใฃใชใใฌใผใ ใใๅญฆ็ฟใใใ็นๅพดใ่กจใใพใใใใใใฎ้ ใใ็ถๆ
ใฏ้ทใใ็ฐใชใๅฏ่ฝๆงใใใใใใๆๅใซ้ ใใ็ถๆ
ใใใผใซใใใๆฌกใซใฏใฉในใฉใใซใซๅฏพใใใญใธใใใซๅคๆใใใพใใใญใธใใใจใฟใผใฒใใ้ใฎใฏใญในใจใณใใญใใผๆๅคฑใ่จ็ฎใใใๆใๅฏ่ฝๆงใฎ้ซใใฏใฉในใ่ฆใคใใใใใซไฝฟ็จใใใพใใ
ใชใผใใฃใชๅ้กใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผWav2Vec2ใใใกใคใณใใฅใผใณใใฆๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใฎๅฎๅ
จใช[ใชใผใใฃใชๅ้กใฌใคใ](tasks/audio_classification)ใใใงใใฏใใฆใใ ใใ๏ผ
### Automatic speech recognition
ไบๅใใฌใผใใณใฐใใใใขใใซใ่ชๅ้ณๅฃฐ่ช่ญใซไฝฟ็จใใใซใฏใ[connectionist temporal classification๏ผCTC๏ผ](glossary#connectionist-temporal-classification-ctc)ใฎใใใฎๅบๆฌ็ใชWav2Vec2ใขใใซใฎไธใซ่จ่ชใขใใชใณใฐใใใใ่ฟฝๅ ใใพใใ่จ่ชใขใใชใณใฐใใใใฏใจใณใณใผใใฎ้ ใใ็ถๆ
ใๅใๅ
ฅใใใใใใใญใธใใใซๅคๆใใพใใๅใญใธใใใฏใใผใฏใณใฏใฉในใ่กจใ๏ผใใผใฏใณๆฐใฏใฟในใฏใฎ่ชๅฝใใๆฅใพใ๏ผใใญใธใใใจใฟใผใฒใใ้ใฎCTCๆๅคฑใ่จ็ฎใใใๆฌกใซ่ปขๅใซๅคๆใใใพใใ
่ชๅ้ณๅฃฐ่ช่ญใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผWav2Vec2ใใใกใคใณใใฅใผใณใใฆๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใฎๅฎๅ
จใช[่ชๅ้ณๅฃฐ่ช่ญใฌใคใ](tasks/asr)ใใใงใใฏใใฆใใ ใใ๏ผ
## Computer vision
ใณใณใใฅใผใฟใใธใงใณใฎใฟในใฏใใขใใญใผใใใๆนๆณใฏ2ใคใใใพใใ
1. ็ปๅใใใใใฎใทใผใฑใณในใซๅๅฒใใTransformerใไฝฟ็จใใฆไธฆๅใซๅฆ็ใใพใใ
2. [ConvNeXT](model_doc/convnext)ใชใฉใฎใขใใณใชCNNใไฝฟ็จใใพใใใใใใฏ็ณใฟ่พผใฟๅฑคใไฝฟ็จใใพใใใใขใใณใชใใใใฏใผใฏ่จญ่จใๆก็จใใฆใใพใใ
<Tip>
ใตใผใใขใใญใผใใงใฏใTransformerใจ็ณใฟ่พผใฟใ็ตใฟๅใใใใใฎใใใใพใ๏ผไพ๏ผ[Convolutional Vision Transformer](model_doc/cvt)ใพใใฏ[LeViT](model_doc/levit)๏ผใใใใใซใคใใฆใฏ่ญฐ่ซใใพใใใใใใใใฏใใใง่ชฟในใ2ใคใฎใขใใญใผใใ็ตใฟๅใใใฆใใพใใ
</Tip>
ViTใจConvNeXTใฏ็ปๅๅ้กใซใใไฝฟ็จใใใพใใใใชใใธใงใฏใๆคๅบใใปใฐใกใณใใผใทใงใณใๆทฑๅบฆๆจๅฎใชใฉใฎไปใฎใใธใงใณใฟในใฏใซๅฏพใใฆใฏใDETRใMask2FormerใGLPNใชใฉใ้ฉใใฆใใพใใ
### Image classification
ViTใจConvNeXTใฎไธกๆนใ็ปๅๅ้กใซไฝฟ็จใงใใพใใไธปใช้ใใฏใViTใๆณจๆใกใซใใบใ ใไฝฟ็จใใConvNeXTใ็ณใฟ่พผใฟใไฝฟ็จใใใใจใงใใ
#### Transformer
[ViT](model_doc/vit) replaces convolutions entirely with a pure Transformer architecture. If you're familiar with the original Transformer, then you're already most of the way toward understanding ViT.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/>
</div>
ViTใๅฐๅ
ฅใใไธปใชๅคๆด็นใฏใ็ปๅใTransformerใซไพ็ตฆใใๆนๆณใงใใ
1. ็ปๅใฏๆญฃๆนๅฝขใง้ใชใใชใใใใใฎใทใผใฑใณในใซๅๅฒใใใๅใใใใฏใใฏใใซใพใใฏ*ใใใๅใ่พผใฟ*ใซๅคๆใใใพใใใใใๅใ่พผใฟใฏใ้ฉๅใชๅ
ฅๅๆฌกๅ
ใไฝๆใใใใใซ2D็ณใฟ่พผใฟๅฑคใใ็ๆใใใพใ๏ผๅบๆฌใฎTransformerใฎๅ ดๅใๅใใใๅใ่พผใฟใซ768ใฎๅคใใใใพใ๏ผใ224x224ใใฏใปใซใฎ็ปๅใใใๅ ดๅใใใใ16x16ใฎ็ปๅใใใใซๅๅฒใงใใพใใใใญในใใๅ่ชใซใใผใฏใณๅใใใใใใซใ็ปๅใฏใใใใฎใทใผใฑใณในใซใใใผใฏใณๅใใใใพใใ
2. *ๅญฆ็ฟๅใ่พผใฟ*ใใคใพใ็นๅฅใช `[CLS]` ใใผใฏใณใใBERTใฎใใใซใใใๅใ่พผใฟใฎๅ
้ ญใซ่ฟฝๅ ใใใพใใ `[CLS]` ใใผใฏใณใฎๆ็ต็ใช้ ใใ็ถๆ
ใฏใไปๅฑใฎๅ้กใใใใฎๅ
ฅๅใจใใฆไฝฟ็จใใใพใใไปใฎๅบๅใฏ็ก่ฆใใใพใใใใฎใใผใฏใณใฏใใขใใซใ็ปๅใฎ่กจ็พใใจใณใณใผใใใๆนๆณใๅญฆใถใฎใซๅฝน็ซใกใพใใ
3. ใใใใจๅญฆ็ฟๅใ่พผใฟใซ่ฟฝๅ ใใๆๅพใฎ่ฆ็ด ใฏ*ไฝ็ฝฎๅใ่พผใฟ*ใงใใใขใใซใฏ็ปๅใใใใใฉใฎใใใซไธฆในใใใฆใใใใ็ฅใใพใใใฎใงใไฝ็ฝฎๅใ่พผใฟใๅญฆ็ฟๅฏ่ฝใงใใใใๅใ่พผใฟใจๅใใตใคใบใๆใกใพใใๆๅพใซใใในใฆใฎๅใ่พผใฟใTransformerใจใณใณใผใใซๆธกใใใพใใ
4. ๅบๅใๅ
ทไฝ็ใซใฏ `[CLS]` ใใผใฏใณใฎๅบๅใ ใใใๅคๅฑคใใผใปใใใญใณใใใ๏ผMLP๏ผใซๆธกใใใพใใViTใฎไบๅใใฌใผใใณใฐใฎ็ฎ็ใฏๅ็ดใซๅ้กใงใใไปใฎๅ้กใใใใจๅๆงใซใMLPใใใใฏๅบๅใใฏใฉในใฉใใซใซๅฏพใใใญใธใใใซๅคๆใใใฏใญในใจใณใใญใใผๆๅคฑใ่จ็ฎใใฆๆใๅฏ่ฝๆงใฎ้ซใใฏใฉในใ่ฆใคใใพใใ
็ปๅๅ้กใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผViTใใใกใคใณใใฅใผใณใใฆๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใฎๅฎๅ
จใช[็ปๅๅ้กใฌใคใ](tasks/image_classification)ใใใงใใฏใใฆใใ ใใ๏ผ
#### CNN
<Tip>
ใใฎใปใฏใทใงใณใงใฏ็ณใฟ่พผใฟใซใคใใฆ็ฐกๅใซ่ชฌๆใใฆใใพใใใ็ปๅใฎๅฝข็ถใจใตใคใบใใฉใฎใใใซๅคๅใใใใไบๅใซ็่งฃใใฆใใใจๅฝน็ซใกใพใใ็ณใฟ่พผใฟใซๆ
ฃใใฆใใชใๅ ดๅใฏใfastaiใฎๆธ็ฑใใ[Convolution Neural Networks chapter](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
</Tip>
[ConvNeXT](model_doc/convnext) is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a [convolution](glossary#convolution) is an operation where a smaller matrix (*kernel*) is multiplied by a small window of the image's pixels. It computes features from it, such as a particular texture or the curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the *stride*.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/>
</div>
<small>A basic convolution without padding or stride, taken from [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285).</small>
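The sliding-window operation described above can be written directly as a minimal valid convolution (no padding, stride 1) on a single channel. The image and kernel values are hypothetical.

```python
def conv2d(image, kernel, stride=1):
    """Valid 2D convolution: slide the kernel over the image, no padding."""
    k = len(kernel)
    out = []
    for i in range(0, len(image) - k + 1, stride):
        row = []
        for j in range(0, len(image[0]) - k + 1, stride):
            # Element-wise multiply the kernel against the current window, then sum
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(k) for dj in range(k)))
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]  # sums each window's main diagonal
print(conv2d(image, kernel))  # [[6, 8], [12, 14]]
```

Note how the 3x3 input shrinks to a 2x2 output; a larger stride would shrink it further, which is why the output-size formula `(in - kernel) // stride + 1` matters when stacking layers.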
ใใฎๅบๅใๅฅใฎ็ณใฟ่พผใฟๅฑคใซไพ็ตฆใใๅ้ฃ็ถใใๅฑคใใจใซใใใใใฏใผใฏใฏใใใใใใฐใใญใฑใใใฎใใใชใใ่ค้ใงๆฝ่ฑก็ใชใใฎใๅญฆ็ฟใใพใใ็ณใฟ่พผใฟๅฑคใฎ้ใซใฏใ็นๅพดใฎๆฌกๅ
ใๅๆธใใ็นๅพดใฎไฝ็ฝฎใฎๅคๅใซๅฏพใใฆใขใใซใใใๅ
็ขใซใใใใใซใใผใชใณใฐๅฑคใ่ฟฝๅ ใใใฎใไธ่ฌ็ใงใใ
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/>
</div>
ConvNeXTใฏใไปฅไธใฎ5ใคใฎๆนๆณใงCNNใใขใใณๅใใฆใใพใใ
1. ๅในใใผใธใฎใใญใใฏๆฐใๅคๆดใใ็ปๅใใใๅคงใใชในใใฉใคใใจๅฏพๅฟใใใซใผใใซใตใคใบใง*ใใใๅ*ใใพใใ้ใชใใชใในใฉใคใใฃใณใฐใฆใฃใณใใฆใฏใใใใซใใ็ปๅใใใใใซๅๅฒใใViTใฎๆฆ็ฅใจไผผใฆใใพใใ
2. *ใใใซใใใฏ* ใฌใคใคใผใฏใใฃใใซๆฐใ็ธฎๅฐใใใใใๅพฉๅ
ใใพใใ1x1ใฎ็ณใฟ่พผใฟใๅฎ่กใใใฎใฏ้ใใๆทฑใใๅขใใใใจใใงใใพใใ้ใใใซใใใฏใฏ้ใฎใใจใ่กใใใใฃใใซๆฐใๆกๅผตใใใใใ็ธฎๅฐใใพใใใใใฏใกใขใชๅน็ใ้ซใใงใใ
3. ใใใซใใใฏใฌใคใคใผๅ
ใฎ้ๅธธใฎ3x3ใฎ็ณใฟ่พผใฟๅฑคใใ*ๆทฑๅบฆๆนๅใฎ็ณใฟ่พผใฟ*ใง็ฝฎใๆใใพใใใใใฏๅๅ
ฅๅใใฃใใซใซๅๅฅใซ็ณใฟ่พผใฟใ้ฉ็จใใๆๅพใซใใใใ็ฉใฟ้ใญใ็ณใฟ่พผใฟใงใใใใใซใใใๆง่ฝๅไธใฎใใใซใใใใฏใผใฏๅน
ใๅบใใใพใใ
4. ViTใฏใฐใญใผใใซๅๅฎน้ใๆใฃใฆใใใใใใใฎๆณจๆใกใซใใบใ ใฎใใใใงไธๅบฆใซ็ปๅใฎๅคใใ่ฆใใใจใใงใใพใใConvNeXTใฏใใฎๅนๆใๅ็พใใใใจใใใซใผใใซใตใคใบใ7x7ใซๅขใใใพใใ
5. ConvNeXTใฏใพใใTransformerใขใใซใๆจกๅฃใใใใใคใใฎใฌใคใคใผใใถใคใณๅคๆดใ่กใฃใฆใใพใใใขใฏใใฃใใผใทใงใณใจๆญฃ่ฆๅใฌใคใคใผใๅฐใชใใๆดปๆงๅ้ขๆฐใฏReLUใฎไปฃใใใซGELUใซๅใๆฟใใBatchNormใฎไปฃใใใซLayerNormใไฝฟ็จใใฆใใพใใ
็ณใฟ่พผใฟใใญใใฏใใใฎๅบๅใฏใๅ้กใใใใซๆธกใใใๅบๅใใญใธใใใซๅคๆใใๆใๅฏ่ฝๆงใฎ้ซใใฉใใซใ่ฆใคใใใใใซใฏใญในใจใณใใญใใผๆๅคฑใ่จ็ฎใใใพใใ
### Object detection
[DETR](model_doc/detr), *DEtection TRansformer*, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/>
</div>
1. ไบๅใใฌใผใใณใฐใใใCNN *ใใใฏใใผใณ* ใฏใใใฏใปใซๅคใง่กจใใใ็ปๅใๅใๅใใใใใฎไฝ่งฃๅๅบฆใฎ็นๅพดใใใใไฝๆใใพใใ็นๅพดใใใใซใฏๆฌกๅ
ๅๆธใฎใใใซ1x1ใฎ็ณใฟ่พผใฟใ้ฉ็จใใใ้ซใฌใใซใฎ็ปๅ่กจ็พใๆใคๆฐใใ็นๅพดใใใใไฝๆใใใพใใTransformerใฏ้ฃ็ถใขใใซใงใใใใใ็นๅพดใใใใฏ็นๅพดใใฏใใซใฎใทใผใฑใณในใซๅนณๅฆๅใใใไฝ็ฝฎใจใณใใใฃใณใฐใจ็ตใฟๅใใใใใพใใ
2. ็นๅพดใใฏใใซใฏใจใณใณใผใใผใซๆธกใใใใใฎๆณจๆใฌใคใคใผใไฝฟ็จใใฆ็ปๅ่กจ็พใๅญฆ็ฟใใพใใๆฌกใซใใจใณใณใผใใผใฎ้ ใ็ถๆ
ใฏใใณใผใใผใฎ*ใชใใธใงใฏใใฏใจใช*ใจ็ตใฟๅใใใใพใใใชใใธใงใฏใใฏใจใชใฏใ็ปๅใฎ็ฐใชใ้ ๅใซ็ฆ็นใๅฝใฆใๅญฆ็ฟๅใ่พผใฟใงใๅๆณจๆใฌใคใคใผใ้ฒ่กใใใซใคใใฆๆดๆฐใใใพใใใใณใผใใผใฎ้ ใ็ถๆ
ใฏใๅใชใใธใงใฏใใฏใจใชใซๅฏพใใฆใใฆใณใใฃใณใฐใใใฏในใฎๅบงๆจใจใฏใฉในใฉใใซใไบๆธฌใใใใฃใผใใใฉใฏใผใใใใใฏใผใฏใซๆธกใใใพใใใพใใฏใๅญๅจใใชใๅ ดๅใฏ `no object` ใๆธกใใใพใใ
DETRใฏๅใชใใธใงใฏใใฏใจใชใไธฆ่กใใฆใใณใผใใใฆใ*N*ใฎๆ็ต็ใชไบๆธฌ๏ผ*N*ใฏใฏใจใชใฎๆฐ๏ผใๅบๅใใพใใๅ
ธๅ็ใช่ชๅทฑๅๅธฐใขใใซใ1ใคใฎ่ฆ็ด ใ1ๅใใคไบๆธฌใใใฎใจใฏ็ฐใชใใใชใใธใงใฏใๆคๅบใฏใปใใไบๆธฌใฟในใฏ๏ผ`ใใฆใณใใฃใณใฐใใใฏใน`ใ`ใฏใฉในใฉใใซ`๏ผใงใใใ1ๅใฎใในใง*N*ใฎไบๆธฌใ่กใใพใใ
3. ่จ็ทดไธญใDETRใฏ*ไบ้จใใใใณใฐๆๅคฑ*ใไฝฟ็จใใฆใๅบๅฎใใใๆฐใฎไบๆธฌใจๅบๅฎใใใไธ้ฃใฎๆญฃ่งฃใฉใใซใๆฏ่ผใใพใใ *N*ใฎใฉใใซใปใใใซๆญฃ่งฃใฉใใซใๅฐใชใๅ ดๅใ `no object` ใฏใฉในใงใใใฃใณใฐใใใพใใใใฎๆๅคฑ้ขๆฐใฏใDETRใซไบๆธฌใจๆญฃ่งฃใฉใใซใจใฎ้ใง1ๅฏพ1ใฎๅฒใๅฝใฆใ่ฆใคใใใใใซไฟใใพใใใใฆใณใใฃใณใฐใใใฏในใพใใฏใฏใฉในใฉใใซใฎใฉใกใใใๆญฃใใใชใๅ ดๅใๆๅคฑใ็บ็ใใพใใๅๆงใซใDETRใๅญๅจใใชใใชใใธใงใฏใใไบๆธฌใใๅ ดๅใ็ฝฐ้ใ็งใใใใพใใใใใซใใใDETRใฏ1ใคใฎ้ๅธธใซ้ก่ใชใชใใธใงใฏใใซ็ฆ็นใๅฝใฆใใฎใงใฏใชใใ็ปๅๅ
ใฎไปใฎใชใใธใงใฏใใ่ฆใคใใใใใซไฟใใใพใใ
DETRใฎไธใซใชใใธใงใฏใๆคๅบใใใใ่ฟฝๅ ใใฆใใฏใฉในใฉใใซใจใใฆใณใใฃใณใฐใใใฏในใฎๅบงๆจใ่ฆใคใใพใใใชใใธใงใฏใๆคๅบใใใใซใฏ2ใคใฎใณใณใใผใใณใใใใใพใ๏ผใใณใผใใผใฎ้ ใ็ถๆ
ใใฏใฉในใฉใใซใฎใญใธใใใซๅคๆใใใใใฎ็ทๅฝขๅฑคใใใใณใใฆใณใใฃใณใฐใใใฏในใไบๆธฌใใใใใฎMLPใงใใ
ใชใใธใงใฏใๆคๅบใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผDETRใฎๅฎๅ
จใช[ใชใใธใงใฏใๆคๅบใฌใคใ](tasks/object_detection)ใใใงใใฏใใฆใDETRใฎใใกใคใณใใฅใผใใณใฐๆนๆณใจๆจ่ซๆนๆณใๅญฆใใงใใ ใใ๏ผ
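ไธ่จใฎไบ้จใใใใณใฐ๏ผไบๆธฌใจๆญฃ่งฃใฎ1ๅฏพ1ๅฒใๅฝใฆ๏ผใฎ่ใๆนใฏใๅฐใใชไพใงๆฌกใฎใใใซในใฑใใใงใใพใใไปฅไธใฏ็ท็ถฒ็พ
ๆข็ดขใซใใ่ชฌๆ็จใฎ็ฐก็ฅ็ใงใ`cost` ใฎๅคใฏไปฎใฎใใฎใงใใๅฎ้ใฎDETRใฏใฏใฉในใณในใใจใใใฏในใณในใใ็ตใฟๅใใใใใณในใ่กๅใซๅฏพใใใใๅน็็ใชใใณใฌใชใขใณๆณใ้ฉ็จใใพใใ

```python
from itertools import permutations

def bipartite_match(cost):
    """ใณในใ่กๅ cost[i][j]๏ผไบๆธฌiใจๆญฃ่งฃjใฎใใใ้ๅบฆ๏ผใซๅฏพใใฆใ
    ๅ่จใณในใๆๅฐใฎ1ๅฏพ1ๅฒใๅฝใฆใ็ท็ถฒ็พ
ใงๆขใ็ฐก็ฅ็ใ"""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

cost = [
    [0.1, 0.9, 0.8],  # ไบๆธฌ0ใฏๆญฃ่งฃ0ใซ่ฟใ๏ผไปฎใฎๅค๏ผ
    [0.7, 0.2, 0.6],  # ไบๆธฌ1ใฏๆญฃ่งฃ1ใซ่ฟใ
    [0.9, 0.8, 0.3],  # ไบๆธฌ2ใฏๆญฃ่งฃ2ใซ่ฟใ
]
total, assignment = bipartite_match(cost)
```

ๅฎ็จไธใฏใไพใใฐ `scipy.optimize.linear_sum_assignment` ใฎใใใชใใณใฌใชใขใณๆณใฎๅฎ่ฃ
ใๅฉ็จใใใพใใ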
### Image segmentation
[Mask2Former](model_doc/mask2former)ใฏใใในใฆใฎ็จฎ้กใฎ็ปๅใปใฐใกใณใใผใทใงใณใฟในใฏใ่งฃๆฑบใใใใใฎใฆใใใผใตใซใขใผใญใใฏใใฃใงใใๅพๆฅใฎใปใฐใกใณใใผใทใงใณใขใใซใฏ้ๅธธใใคใณในใฟใณในใใปใใณใใฃใใฏใใพใใฏใใใใใฃใใฏใปใฐใกใณใใผใทใงใณใฎ็นๅฎใฎใตใใฟในใฏใซๅใใใฆ่จญ่จใใใฆใใพใใMask2Formerใฏใใใใใฎใฟในใฏใฎใใใใใ*ใในใฏๅ้ก*ใฎๅ้กใจใใฆๆใใพใใใในใฏๅ้กใฏใใฏใปใซใ*N*ใฎใปใฐใกใณใใซใฐใซใผใๅใใไธใใใใ็ปๅใซๅฏพใใฆ*N*ใฎใในใฏใจใใใซๅฏพๅฟใใใฏใฉในใฉใใซใไบๆธฌใใพใใใใฎใปใฏใทใงใณใงใฏใMask2Formerใฎๅไฝๆนๆณใ่ชฌๆใใๆๅพใซSegFormerใฎใใกใคใณใใฅใผใใณใฐใ่ฉฆใใใจใใงใใพใใ
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/>
</div>
Mask2Formerใฎไธป่ฆใชใณใณใใผใใณใใฏๆฌกใฎ3ใคใงใใ
1. [Swin](model_doc/swin)ใใใฏใใผใณใฏ็ปๅใๅใๅ
ฅใใ3ใคใฎ้ฃ็ถใใ3x3ใฎ็ณใฟ่พผใฟใใไฝ่งฃๅๅบฆใฎ็ปๅ็นๅพดใใใใไฝๆใใพใใ
2. ็นๅพดใใใใฏ*ใใฏใปใซใใณใผใใผ*ใซๆธกใใใไฝ่งฃๅๅบฆใฎ็นๅพดใ้ซ่งฃๅๅบฆใฎใใฏใปใซๅใ่พผใฟใซๅพใ
ใซใขใใใตใณใใชใณใฐใใพใใใใฏใปใซใใณใผใใผใฏๅฎ้ใซใฏ่งฃๅๅบฆ1/32ใ1/16ใใใใณ1/8ใฎใชใชใธใใซ็ปๅใฎใใซใในใฑใผใซ็นๅพด๏ผไฝ่งฃๅๅบฆใจ้ซ่งฃๅๅบฆใฎ็นๅพดใๅซใ๏ผใ็ๆใใพใใ
3. ใใใใฎ็ฐใชใในใฑใผใซใฎ็นๅพดใใใใฎใใใใใฏใ้ซ่งฃๅๅบฆใฎ็นๅพดใใๅฐใใใชใใธใงใฏใใใญใฃใใใฃใใใใใซ1ๅใใคใใฉใณในใใฉใผใใผใใณใผใใผใฌใคใคใผใซๆธกใใใพใใMask2Formerใฎ่ฆ็นใฏใใใณใผใใผใฎ*ใในใฏใขใใณใทใงใณ*ใกใซใใบใ ใงใใใฏใญในใขใใณใทใงใณใ็ปๅๅ
จไฝใซๆณจๆใๅใใใใจใใงใใใฎใซๅฏพใใใในใฏใขใใณใทใงใณใฏ็ปๅใฎ็นๅฎใฎ้ ๅใซใฎใฟ็ฆ็นใๅฝใฆใพใใใใใฏ้ใใใญใผใซใซใช็ปๅ็นๅพดใ ใใงใใขใใซใๅญฆ็ฟใงใใใใใใใใฉใผใใณในใๅไธใใพใใ
4. [DETR](tasks_explained#object-detection)ใจๅๆงใซใMask2Formerใๅญฆ็ฟใใใใชใใธใงใฏใใฏใจใชใไฝฟ็จใใ็ปๅใฎ็นๅพดใจ็ตใฟๅใใใฆใปใใใฎไบๆธฌ๏ผ`ใฏใฉในใฉใใซ`ใ`ใในใฏไบๆธฌ`๏ผใ่กใใพใใใใณใผใใผใฎ้ ใ็ถๆ
ใฏ็ทๅฝขๅฑคใซๆธกใใใใฏใฉในใฉใใซใซๅฏพใใใญใธใใใซๅคๆใใใพใใใญใธใใใจๆญฃ่งฃใฉใใซ้ใฎใฏใญในใจใณใใญใใผๆๅคฑใๆใๅฏ่ฝๆงใฎ้ซใใใฎใ่ฆใคใใพใใ
ใในใฏไบๆธฌใฏใใใฏใปใซๅใ่พผใฟใจๆ็ต็ใชใใณใผใใผใฎ้ ใ็ถๆ
ใ็ตใฟๅใใใฆ็ๆใใใพใใใทใฐใขใคใใฏใญในใจใณใใญใใผใใใคในๆๅคฑใใญใธใใใจๆญฃ่งฃใในใฏใฎ้ใงๆใๅฏ่ฝๆงใฎ้ซใใในใฏใ่ฆใคใใพใใ
ใปใฐใกใณใใผใทใงใณใฟในใฏใซๅใ็ตใๆบๅใใงใใพใใใ๏ผSegFormerใฎใใกใคใณใใฅใผใใณใฐๆนๆณใจๆจ่ซๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[็ปๅใปใฐใกใณใใผใทใงใณใฌใคใ](tasks/semantic_segmentation)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
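ใในใฏๅ้ก๏ผ*N*ๅใฎใในใฏใจใใใใใฎใฏใฉในใญใธใใ๏ผใใ็ปๅ็ดใฎใปใฐใกใณใใผใทใงใณ็ตๆใ็ตใฟ็ซใฆใๆตใใฏใๆฌกใฎใใใชๅ็ดๅใใในใฑใใใงใคใกใผใธใงใใพใใๅ
ฅๅๅคใฏ่ชฌๆ็จใฎไปฎใฎใใฎใงใๅฎ้ใฎMask2Formerใฏใฝใใใชใในใฏ็ขบ็ใจใฏใฉใน็ขบ็ใๆใๅใใใฆใใฏใปใซใใจใฎใฉใใซใๆฑบใใพใใ

```python
def combine_masks(masks, class_logits, h, w):
    # ๅใในใฏใฎใฏใฉใน๏ผใญใธใใๆๅคง๏ผใจในใณใขใๆฑใใ
    labels = [max(range(len(lg)), key=lambda c: lg[c]) for lg in class_logits]
    scores = [max(lg) for lg in class_logits]
    seg = [[-1] * w for _ in range(h)]  # -1 ใฏ่ๆฏ
    for y in range(h):
        for x in range(w):
            best = float("-inf")
            for m, lab, s in zip(masks, labels, scores):
                if m[y][x] and s > best:
                    best, seg[y][x] = s, lab
    return seg

mask_a = [[1, 0], [1, 0]]  # ๅทฆๅๅใ่ฆใใในใฏ๏ผไปฎใฎๅค๏ผ
mask_b = [[0, 1], [0, 1]]  # ๅณๅๅใ่ฆใใในใฏ
seg = combine_masks([mask_a, mask_b], [[2.0, 0.5], [0.1, 1.5]], 2, 2)
```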
### Depth estimation
[GLPN](model_doc/glpn)ใ*Global-Local Path Network*ใใฏใปใฐใกใณใใผใทใงใณใพใใฏๆทฑๅบฆๆจๅฎใชใฉใฎๅฏใชไบๆธฌใฟในใฏใซ้ฉใใฆใใพใใ[SegFormer](model_doc/segformer)ใจใณใณใผใใผใ่ปฝ้ใใณใผใใผใจ็ตใฟๅใใใTransformerใใผในใฎๆทฑๅบฆๆจๅฎใขใใซใงใใ
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/>
</div>
1. ViTใฎใใใซใ็ปๅใฏใใใใฎใทใผใฑใณในใซๅๅฒใใใพใใใใใใใฎ็ปๅใใใใฏๅฐใใใงใใใใใฏใปใฐใกใณใใผใทใงใณใๆทฑๅบฆๆจๅฎใชใฉใฎๅฏใชไบๆธฌใฟในใฏใซ้ฉใใฆใใพใใ็ปๅใใใใฏใใใๅใ่พผใฟใซๅคๆใใใพใ๏ผใใใๅใ่พผใฟใฎไฝๆๆนๆณใฎ่ฉณ็ดฐใซใคใใฆใฏใ[็ปๅๅ้ก](#image-classification)ใปใฏใทใงใณใๅ็
งใใฆใใ ใใ๏ผใใใใใฎใใใๅใ่พผใฟใฏใจใณใณใผใใผใซๆธกใใใพใใ
2. ใจใณใณใผใใผใฏใใใๅใ่พผใฟใๅใๅ
ฅใใ่คๆฐใฎใจใณใณใผใใผใใญใใฏใ้ใใฆใใใใๆธกใใพใใๅใใญใใฏใซใฏใขใใณใทใงใณใจMix-FFNใฌใคใคใผใๅซใพใใฆใใพใใๅพ่
ใฎๅฝนๅฒใฏไฝ็ฝฎๆ
ๅ ฑใๆไพใใใใจใงใใๅใจใณใณใผใใผใใญใใฏใฎๆๅพใซใฏใ้ๅฑค็่กจ็พใไฝๆใใใใใฎ*ใใใใใผใธใณใฐ*ใฌใคใคใผใใใใพใใ้ฃๆฅใใใใใใฎใฐใซใผใใใจใฎ็นๅพดใ้ฃ็ตใใใ้ฃ็ตใใใ็นๅพดใซๅฏพใใฆ็ทๅฝขๅฑคใ้ฉ็จใใใใใใใฎๆฐใ1/4ใฎ่งฃๅๅบฆใซๅๆธใใพใใใใใๆฌกใฎใจใณใณใผใใผใใญใใฏใธใฎๅ
ฅๅใจใชใใใใใงใฏใใฎใใญใปในๅ
จไฝใ็นฐใ่ฟใใใๅ
ใฎ็ปๅใฎ1/8ใ1/16ใใใใณ1/32ใฎ่งฃๅๅบฆใฎ็ปๅ็นๅพดใๅพใใใพใใ
3. ่ปฝ้ใใณใผใใผใฏใใจใณใณใผใใผใใใฎๆๅพใฎ็นๅพดใใใ๏ผ1/32ในใฑใผใซ๏ผใๅใๅใใใใใ1/16ในใฑใผใซใซใขใใใตใณใใชใณใฐใใพใใใใฎๅพใ็นๅพดใฏๅ็นๅพดใซๅฏพใใใขใใณใทใงใณใใใใใใญใผใซใซใจใฐใญใผใใซใช็นๅพดใ้ธๆใใฆ็ตใฟๅใใใ*ใปใฌใฏใใฃใใใฃใผใใฃใผใใฅใผใธใงใณ๏ผSFF๏ผ*ใขใธใฅใผใซใซๆธกใใใ1/8ใซใขใใใตใณใใชใณใฐใใใพใใใใฎใใญใปในใฏใใณใผใใใใ็นๅพดใๅ
ใฎ็ปๅใจๅใใตใคใบใซใชใใพใง็นฐใ่ฟใใใพใใ
4. ใใณใผใใใใ็นๅพดใฏใๆ็ต็ใชไบๆธฌใ่กใใใใซใปใใณใใฃใใฏใปใฐใกใณใใผใทใงใณใๆทฑๅบฆๆจๅฎใใพใใฏใใฎไปใฎๅฏใชไบๆธฌใฟในใฏใซไพ็ตฆใใใพใใใปใใณใใฃใใฏใปใฐใกใณใใผใทใงใณใฎๅ ดๅใ็นๅพดใฏใฏใฉในๆฐใซๅฏพใใใญใธใใใซๅคๆใใใใฏใญในใจใณใใญใใผๆๅคฑใไฝฟ็จใใฆๆ้ฉๅใใใพใใๆทฑๅบฆๆจๅฎใฎๅ ดๅใ็นๅพดใฏๆทฑๅบฆใใใใซๅคๆใใใๅนณๅ็ตถๅฏพ่ชคๅทฎ๏ผMAE๏ผใพใใฏๅนณๅไบไน่ชคๅทฎ๏ผMSE๏ผๆๅคฑใไฝฟ็จใใใพใใ
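ๆทฑๅบฆๆจๅฎใงไฝฟใใใๅนณๅ็ตถๅฏพ่ชคๅทฎ๏ผMAE๏ผใจๅนณๅไบไน่ชคๅทฎ๏ผMSE๏ผใฏใๆฌกใฎใใใซ่จ็ฎใงใใพใใไปฅไธใฏใใฏใปใซใใจใฎๆทฑๅบฆๅคใ1ๆฌกๅ
ใฎใชในใใจใฟใชใใ่ชฌๆ็จใฎ็ฐก็ฅในใฑใใใงใๆฐๅคใฏไปฎใฎใใฎใงใใ

```python
def mae(pred, target):
    # ๅนณๅ็ตถๅฏพ่ชคๅทฎ๏ผL1ๆๅคฑ๏ผ
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse(pred, target):
    # ๅนณๅไบไน่ชคๅทฎ๏ผL2ๆๅคฑ๏ผ
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred = [1.0, 2.0, 3.0]    # ไบๆธฌๆทฑๅบฆ๏ผไปฎใฎๅค๏ผ
target = [1.5, 2.0, 2.0]  # ๆญฃ่งฃๆทฑๅบฆ๏ผไปฎใฎๅค๏ผ
```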
## Natural language processing
Transformerใฏๆๅใซๆฉๆขฐ็ฟป่จณใฎใใใซ่จญ่จใใใใใไปฅ้ใใปใจใใฉใฎNLPใฟในใฏใ่งฃๆฑบใใใใใฎใใใฉใซใใฎใขใผใญใใฏใใฃใจใชใฃใฆใใพใใไธ้จใฎใฟในใฏใฏTransformerใฎใจใณใณใผใใผๆง้ ใซ้ฉใใฆใใใไปใฎใฟในใฏใฏใใณใผใใผใซ้ฉใใฆใใพใใใใใซใไธ้จใฎใฟในใฏใงใฏTransformerใฎใจใณใณใผใใผ-ใใณใผใใผๆง้ ใไฝฟ็จใใพใใ
### Text classification
[BERT](model_doc/bert)ใฏใจใณใณใผใใผใฎใฟใฎใขใใซใงใใใใใญในใใฎ่ฑใใช่กจ็พใๅญฆ็ฟใใใใใซไธกๅดใฎๅ่ชใซๆณจๆใๆใใใจใงใๆทฑใๅๆนๅๆงใๅนๆ็ใซๅฎ่ฃ
ใใๆๅใฎใขใใซใงใใ
1. BERTใฏ[WordPiece](tokenizer_summary#wordpiece)ใใผใฏใใคใผใผใทใงใณใไฝฟ็จใใฆใใญในใใฎใใผใฏใณๅใ่พผใฟใ็ๆใใพใใๅไธใฎๆใจๆใฎใใขใๅบๅฅใใใใใซใ็นๅฅใช `[SEP]` ใใผใฏใณใ่ฟฝๅ ใใใพใใ `[CLS]` ใใผใฏใณใฏใในใฆใฎใใญในใใทใผใฑใณในใฎๅ
้ ญใซ่ฟฝๅ ใใใพใใ `[CLS]` ใใผใฏใณใจใจใใซๆ็ตๅบๅใฏใๅ้กใฟในใฏใฎใใใฎๅ
ฅๅใจใใฆไฝฟ็จใใใพใใBERTใฏใพใใใใผใฏใณใๆใฎใใขใฎๆๅใพใใฏ2็ช็ฎใฎๆใซๅฑใใใใฉใใใ็คบใใปใฐใกใณใๅใ่พผใฟใ่ฟฝๅ ใใพใใ
2. BERTใฏใไบๅใใฌใผใใณใฐใง2ใคใฎ็ฎๆจใไฝฟ็จใใพใ๏ผใในใฏใใใ่จ่ชใขใใชใณใฐใจๆฌกใฎๆใฎไบๆธฌใงใใใในใฏใใใ่จ่ชใขใใชใณใฐใงใฏใๅ
ฅๅใใผใฏใณใฎไธ้จใใฉใณใใ ใซใในใฏใใใใขใใซใฏใใใใไบๆธฌใใๅฟ
่ฆใใใใพใใใใใซใใใใขใใซใๅ
จใฆใฎๅ่ชใ่ฆใฆใๆฌกใฎๅ่ชใใไบๆธฌใใใใจใใงใใๅๆนๅๆงใฎๅ้กใ่งฃๆฑบใใใพใใไบๆธฌใใใใในใฏใใผใฏใณใฎๆ็ต็ใช้ ใใ็ถๆ
ใฏใใฝใใใใใฏในใไฝฟ็จใใๅ่ชใฎใในใฏใไบๆธฌใใใใใฎใใฃใผใใใฉใฏใผใใใใใฏใผใฏใซๆธกใใใพใใ
2็ช็ฎใฎไบๅใใฌใผใใณใฐใชใใธใงใฏใใฏๆฌกใฎๆใฎไบๆธฌใงใใใขใใซใฏๆAใฎๅพใซๆBใ็ถใใใฉใใใไบๆธฌใใๅฟ
่ฆใใใใพใใๅๅใฎๅ ดๅใๆBใฏๆฌกใฎๆใงใใใๆฎใใฎๅๅใฎๅ ดๅใๆBใฏใฉใณใใ ใชๆใงใใไบๆธฌ๏ผๆฌกใฎๆใใฉใใ๏ผใฏใ2ใคใฎใฏใฉใน๏ผ`IsNext`ใใใณ`NotNext`๏ผใซๅฏพใใใฝใใใใใฏในใๆใคใใฃใผใใใฉใฏใผใใใใใฏใผใฏใซๆธกใใใพใใ
3. ๅ
ฅๅๅใ่พผใฟใฏใๆ็ต็ใช้ ใใ็ถๆ
ใๅบๅใใใใใซ่คๆฐใฎใจใณใณใผใใผใฌใคใคใผใไปใใฆๆธกใใใพใใ
ไบๅ่จ็ทดๆธใฟใขใใซใใใญในใๅ้กใซไฝฟ็จใใใซใฏใใใผในใฎBERTใขใใซใฎไธใซใทใผใฑใณในๅ้กใใใใ่ฟฝๅ ใใพใใใทใผใฑใณในๅ้กใใใใฏๆ็ต็ใช้ ใใ็ถๆ
ใๅใๅ
ฅใใใใใใใญใธใใใซๅคๆใใใใใฎ็ทๅฝขๅฑคใงใใใฏใญในใจใณใใญใใผๆๅคฑใฏใใญใธใใใจใฟใผใฒใใ้ใงๆใๅฏ่ฝๆงใฎ้ซใใฉใใซใ่ฆใคใใใใใซ่จ็ฎใใใพใใ
ใใญในใๅ้กใ่ฉฆใใฆใฟใๆบๅใฏใงใใพใใใ๏ผDistilBERTใๅพฎ่ชฟๆดใใๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[ใใญในใๅ้กใฌใคใ](tasks/sequence_classification)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
### Token classification
BERTใๅๅใจใณใใฃใใฃ่ช่ญ๏ผNER๏ผใชใฉใฎใใผใฏใณๅ้กใฟในใฏใซไฝฟ็จใใใซใฏใใใผในใฎBERTใขใใซใฎไธใซใใผใฏใณๅ้กใใใใ่ฟฝๅ ใใพใใใใผใฏใณๅ้กใใใใฏๆ็ต็ใช้ ใใ็ถๆ
ใๅใๅ
ฅใใใใใใใญใธใใใซๅคๆใใใใใฎ็ทๅฝขๅฑคใงใใใฏใญในใจใณใใญใใผๆๅคฑใฏใใญใธใใใจๅใใผใฏใณ้ใงๆใๅฏ่ฝๆงใฎ้ซใใฉใใซใ่ฆใคใใใใใซ่จ็ฎใใใพใใ
ใใผใฏใณๅ้กใ่ฉฆใใฆใฟใๆบๅใฏใงใใพใใใ๏ผDistilBERTใๅพฎ่ชฟๆดใใๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[ใใผใฏใณๅ้กใฌใคใ](tasks/token_classification)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
### Question answering
BERTใ่ณชๅๅฟ็ญใซไฝฟ็จใใใซใฏใใใผในใฎBERTใขใใซใฎไธใซในใใณๅ้กใใใใ่ฟฝๅ ใใพใใใใฎ็ทๅฝขๅฑคใฏๆ็ต็ใช้ ใใ็ถๆ
ใๅใๅ
ฅใใๅ็ญใซๅฏพๅฟใใใใญในใใฎใในใใณใ้ๅงใจ็ตไบใฎใญใธใใใ่จ็ฎใใพใใใฏใญในใจใณใใญใใผๆๅคฑใฏใใญใธใใใจใฉใใซไฝ็ฝฎใจใฎ้ใงๆใๅฏ่ฝๆงใฎ้ซใใใญในใในใใณใ่ฆใคใใใใใซ่จ็ฎใใใพใใ
่ณชๅๅฟ็ญใ่ฉฆใใฆใฟใๆบๅใฏใงใใพใใใ๏ผDistilBERTใๅพฎ่ชฟๆดใใๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[่ณชๅๅฟ็ญใฌใคใ](tasks/question_answering)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
<Tip>
๐ก ไธๅบฆไบๅใใฌใผใใณใฐใๅฎไบใใBERTใฏใใใพใใพใชใฟในใฏใซ็ฐกๅใซ้ฉ็จใงใใใใจใซๆณจ็ฎใใฆใใ ใใใๅฟ
่ฆใชใฎใฏใไบๅใใฌใผใใณใฐๆธใฟใขใใซใซ็นๅฎใฎใใใใ่ฟฝๅ ใใฆใ้ ใใ็ถๆ
ใๆๆใฎๅบๅใซๅคๆใใใใจใ ใใงใ๏ผ
</Tip>
### Text generation
[GPT-2](model_doc/gpt2)ใฏๅคง้ใฎใใญในใใงไบๅใใฌใผใใณใฐใใใใใณใผใใผๅฐ็จใขใใซใงใใใใญใณใใใไธใใใจ่ชฌๅพๅใฎใใใใญในใใ็ๆใใๆ็คบ็ใซใใฌใผใใณใฐใใใฆใใชใใซใใใใใใใ่ณชๅๅฟ็ญใชใฉใฎไปใฎNLPใฟในใฏใๅฎไบใงใใพใใ
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/>
</div>
1. GPT-2ใฏ[ใใคใใใขใจใณใณใผใใฃใณใฐ๏ผBPE๏ผ](tokenizer_summary#bytepair-encoding-bpe)ใไฝฟ็จใใฆๅ่ชใใใผใฏใใคใบใใใใผใฏใณๅใ่พผใฟใ็ๆใใพใใไฝ็ฝฎใจใณใณใผใใฃใณใฐใใใผใฏใณๅใ่พผใฟใซ่ฟฝๅ ใใใๅใใผใฏใณใฎไฝ็ฝฎใ็คบใใพใใๅ
ฅๅๅใ่พผใฟใฏ่คๆฐใฎใใณใผใใผใใญใใฏใไปใใฆๆ็ต็ใช้ ใใ็ถๆ
ใๅบๅใใใใใซๆธกใใใพใใๅใใณใผใใผใใญใใฏๅ
ใงใGPT-2ใฏใใในใฏใใใ่ชๅทฑๆณจๆใใฌใคใคใผใไฝฟ็จใใพใใใใใฏใGPT-2ใๆชๆฅใฎใใผใฏใณใซๆณจๆใๆใใใจใฏใงใใชใใใจใๆๅณใใพใใGPT-2ใฏๅทฆๅดใฎใใผใฏใณใซใฎใฟๆณจๆใๆใใใจใ่จฑๅฏใใใฆใใพใใใใใฏBERTใฎ[`mask`]ใใผใฏใณใจใฏ็ฐใชใใใในใฏใใใ่ชๅทฑๆณจๆใงใฏๆชๆฅใฎใใผใฏใณใซๅฏพใใฆในใณใขใ`0`ใซ่จญๅฎใใใใใฎๆณจๆใในใฏใไฝฟ็จใใใพใใ
2. ใใณใผใใผใใใฎๅบๅใฏใ่จ่ชใขใใชใณใฐใใใใซๆธกใใใๆ็ต็ใช้ ใใ็ถๆ
ใใญใธใใใซๅคๆใใใใใฎ็ทๅฝขๅคๆใๅฎ่กใใพใใใฉใใซใฏใทใผใฑใณในๅ
ใฎๆฌกใฎใใผใฏใณใงใใใใใใฏใญใธใใใๅณใซ1ใคใใใใฆ็ๆใใใพใใใฏใญในใจใณใใญใใผๆๅคฑใฏใใทใใใใใใญใธใใใจใฉใใซ้ใง่จ็ฎใใใๆฌกใซๆใๅฏ่ฝๆงใฎ้ซใใใผใฏใณใๅบๅใใพใใ
GPT-2ใฎไบๅใใฌใผใใณใฐใฎ็ฎๆจใฏๅฎๅ
จใซ[ๅ ๆ่จ่ชใขใใชใณใฐ](glossary#causal-language-modeling)ใซๅบใฅใใฆใใใใทใผใฑใณในๅ
ใฎๆฌกใฎๅ่ชใไบๆธฌใใพใใใใใซใใใGPT-2ใฏใใญในใ็ๆใๅซใใฟในใฏใง็นใซๅชใใๆง่ฝใ็บๆฎใใพใใ
ใใญในใ็ๆใ่ฉฆใใฆใฟใๆบๅใฏใงใใพใใใ๏ผDistilGPT-2ใๅพฎ่ชฟๆดใใๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[ๅ ๆ่จ่ชใขใใชใณใฐใฌใคใ](tasks/language_modeling#causal-language-modeling)ใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
<Tip>
ใใญในใ็ๆใซ้ขใใ่ฉณ็ดฐใฏใ[ใใญในใ็ๆๆฆ็ฅ](generation_strategies)ใฌใคใใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
</Tip>
### Summarization
[BART](model_doc/bart) ใ [T5](model_doc/t5) ใฎใใใชใจใณใณใผใใผใใณใผใใผใขใใซใฏใ่ฆ็ดใฟในใฏใฎใทใผใฑใณในใปใใฅใปใทใผใฑใณในใปใใฟใผใณใซ่จญ่จใใใฆใใพใใใใฎใปใฏใทใงใณใงใฏใBARTใฎๅไฝๆนๆณใ่ชฌๆใใๆๅพใซT5ใฎๅพฎ่ชฟๆดใ่ฉฆใใใจใใงใใพใใ
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/>
</div>
1. BARTใฎใจใณใณใผใใผใขใผใญใใฏใใฃใฏใBERTใจ้ๅธธใซไผผใฆใใใใใญในใใฎใใผใฏใณใจไฝ็ฝฎใจใณใใใฃใณใฐใๅใๅ
ฅใใพใใBARTใฏใๅ
ฅๅใ็ ดๅฃใใฆใใใใณใผใใผใงๅๆง็ฏใใใใจใซใใฃใฆไบๅใใฌใผใใณใฐใใใพใใ็นๅฎใฎ็ ดๅฃๆฆ็ฅใๆใคไปใฎใจใณใณใผใใผใจใฏ็ฐใชใใBARTใฏไปปๆใฎ็จฎ้กใฎ็ ดๅฃใ้ฉ็จใงใใพใใใใ ใใ*ใใญในใใคใณใใฃใชใณใฐ*็ ดๅฃๆฆ็ฅใๆ้ฉใงใใใใญในใใคใณใใฃใชใณใฐใงใฏใใใใคใใฎใใญในใในใใณใ**ๅไธใฎ** [`mask`] ใใผใฏใณใง็ฝฎใๆใใใใพใใใใใฏ้่ฆใงใใใชใใชใใขใใซใฏใในใฏใใใใใผใฏใณใไบๆธฌใใชใใใฐใชใใใใขใใซใซๆฌ ่ฝใใผใฏใณใฎๆฐใไบๆธฌใใใใใใงใใๅ
ฅๅๅใ่พผใฟใจใในใฏใใใในใใณใฏใจใณใณใผใใผใไปใใฆๆ็ต็ใช้ ใใ็ถๆ
ใๅบๅใใพใใใBERTใจใฏ็ฐใชใใBARTใฏๅ่ชใไบๆธฌใใใใใฎๆ็ต็ใชใใฃใผใใใฉใฏใผใใใใใฏใผใฏใๆๅพใซ่ฟฝๅ ใใพใใใ
2. ใจใณใณใผใใผใฎๅบๅใฏใใณใผใใผใซๆธกใใใใใณใผใใผใฏใจใณใณใผใใผใฎๅบๅใใใในใฏใใใใใผใฏใณใจ้็ ดๅฃใใผใฏใณใไบๆธฌใใๅฟ
่ฆใใใใพใใใใใซใใใใใณใผใใผใฏๅ
ใฎใใญในใใๅพฉๅ
ใใใฎใซๅฝน็ซใค่ฟฝๅ ใฎใณใณใใญในใใๆไพใใใพใใใใณใผใใผใใใฎๅบๅใฏ่จ่ชใขใใชใณใฐใใใใซๆธกใใใ้ ใใ็ถๆ
ใใญใธใใใซๅคๆใใใใใฎ็ทๅฝขๅคๆใๅฎ่กใใพใใใฏใญในใจใณใใญใใผๆๅคฑใฏใใญใธใใใจใฉใใซใฎ้ใง่จ็ฎใใใใฉใใซใฏๅใซๅณใซใทใใใใใใใผใฏใณใงใใ
่ฆ็ดใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผT5ใๅพฎ่ชฟๆดใใฆๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[่ฆ็ดใฌใคใ](tasks/summarization)ใใ่ฆงใใ ใใ๏ผ
<Tip>
ใใญในใ็ๆใซ้ขใใ่ฉณ็ดฐใฏใ[ใใญในใ็ๆๆฆ็ฅ](generation_strategies)ใฌใคใใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
</Tip>
### Translation
็ฟป่จณใฏใใใไธใคใฎใทใผใฑใณในใปใใฅใปใทใผใฑใณในใปใฟในใฏใฎไพใงใใใ[BART](model_doc/bart) ใ [T5](model_doc/t5) ใฎใใใชใจใณใณใผใใผใใณใผใใผใขใใซใไฝฟ็จใใฆๅฎ่กใงใใพใใใใฎใปใฏใทใงใณใงใฏใBARTใฎๅไฝๆนๆณใ่ชฌๆใใๆๅพใซT5ใฎๅพฎ่ชฟๆดใ่ฉฆใใใจใใงใใพใใ
BARTใฏใใฝใผใน่จ่ชใใฟใผใฒใใ่จ่ชใซใใณใผใใงใใใใใซใใใใใซใๅฅๅใซใฉใณใใ ใซๅๆๅใใใใจใณใณใผใใผใ่ฟฝๅ ใใใใจใง็ฟป่จณใซ้ฉๅฟใใพใใใใฎๆฐใใใจใณใณใผใใผใฎๅใ่พผใฟใฏใๅ
ใฎๅ่ชๅใ่พผใฟใฎไปฃใใใซไบๅใใฌใผใใณใฐๆธใฟใฎใจใณใณใผใใผใซๆธกใใใพใใใฝใผในใจใณใณใผใใผใฏใใขใใซใฎๅบๅใใใฎใฏใญในใจใณใใญใใผๆๅคฑใ็จใใฆใฝใผในใจใณใณใผใใผใไฝ็ฝฎใจใณใใใฃใณใฐใใใใณๅ
ฅๅใจใณใใใฃใณใฐใๆดๆฐใใใใจใซใใฃใฆ่จ็ทดใใใพใใใใฎๆๅใฎในใใใใงใฏใขใใซใใฉใกใผใฟใๅบๅฎใใใใในใฆใฎใขใใซใใฉใกใผใฟใ2็ช็ฎใฎในใใใใงไธ็ทใซ่จ็ทดใใใพใใ
ใใฎๅพใ็ฟป่จณใฎใใใซๅค่จ่ช็ใฎmBARTใ็ปๅ ดใใๅค่จ่ชใงไบๅใใฌใผใใณใฐใใใใขใใซใจใใฆๅฉ็จๅฏ่ฝใงใใ
็ฟป่จณใ่ฉฆใๆบๅใฏใงใใพใใใ๏ผT5ใๅพฎ่ชฟๆดใใฆๆจ่ซใซไฝฟ็จใใๆนๆณใๅญฆใถใใใซใๅฎๅ
จใช[็ฟป่จณใฌใคใ](tasks/translation)ใใ่ฆงใใ ใใ๏ผ
<Tip>
ใใญในใ็ๆใซ้ขใใ่ฉณ็ดฐใฏใ[ใใญในใ็ๆๆฆ็ฅ](generation_strategies)ใฌใคใใใใงใใฏใใฆใฟใฆใใ ใใ๏ผ
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Philosophy
๐ค Transformersใฏใๆฌกใฎใใใช็ฎ็ใงๆง็ฏใใใๆ่ฆใๆใคใฉใคใใฉใชใงใ๏ผ
- ๅคง่ฆๆจกใชTransformersใขใใซใไฝฟ็จใ็ ็ฉถใใพใใฏๆกๅผตใใใๆฉๆขฐๅญฆ็ฟ็ ็ฉถ่
ใใใณๆ่ฒ่
ใ
- ใใใใฎใขใใซใๅพฎ่ชฟๆดใใใใๆฌ็ช็ฐๅขใงๆไพใใใใใพใใฏใใฎไธกๆนใ่กใใใๅฎๅๅฎถใ
- ไธใใใใๆฉๆขฐๅญฆ็ฟใฟในใฏใ่งฃๆฑบใใใใใซใไบๅใใฌใผใใณใฐใใใใขใใซใใใฆใณใญใผใใใฆไฝฟ็จใใใใจใณใธใใขใ
ใใฎใฉใคใใฉใชใฏใ2ใคใฎๅผทๅใช็ฎๆจใๆใฃใฆ่จญ่จใใใพใใ๏ผ
1. ใงใใใ ใ็ฐกๅใใค้ซ้ใซไฝฟ็จใงใใใใใซใใใใจ๏ผ
- ใฆใผใถใผๅใใฎๆฝ่ฑกๅใ้ใใชใๅฐใชใใใๅฎ้ใใปใจใใฉใฎๅ ดๅใๆฝ่ฑกๅใฏใใใพใใใๅใขใใซใไฝฟ็จใใใใใซๅฟ
่ฆใช3ใคใฎๆจๆบใฏใฉในใ ใใๅญๅจใใพใ๏ผ[ๆงๆ](main_classes/configuration)ใ[ใขใใซ](main_classes/model)ใใใใณๅๅฆ็ใฏใฉใน๏ผNLP็จใฎ[ใใผใฏใใคใถ](main_classes/tokenizer)ใใใธใงใณ็จใฎ[ใคใกใผใธใใญใปใใต](main_classes/image_processor)ใใชใผใใฃใช็จใฎ[็นๅพดๆฝๅบๅจ](main_classes/feature_extractor)ใใใใณใใซใใขใผใใซๅ
ฅๅ็จใฎ[ใใญใปใใต](main_classes/processors)๏ผใ
- ใใใใฎใฏใฉในใฏใๅ
ฑ้ใฎ`from_pretrained()`ใกใฝใใใไฝฟ็จใใฆใไบๅใใฌใผใใณใฐๆธใฟใฎใคใณในใฟใณในใใ็ฐกๅใใค็ตฑไธใใใๆนๆณใงๅๆๅใงใใพใใใใฎใกใฝใใใฏใไบๅใใฌใผใใณใฐๆธใฟใฎใใงใใฏใใคใณใใใ้ข้ฃใใใฏใฉในใฎใคใณในใฟใณในใจ้ข้ฃใใผใฟ๏ผๆงๆใฎใใคใใผใใฉใกใผใฟใใใผใฏใใคใถใฎ่ชๅฝใใขใใซใฎ้ใฟ๏ผใใใฆใณใญใผใ๏ผๅฟ
่ฆใชๅ ดๅใฏใญใฃใใทใฅ๏ผใใฆ่ชญใฟ่พผใฟใพใใใใใใฎๅบๆฌใฏใฉในใฎไธใซใใฉใคใใฉใชใฏ2ใคใฎAPIใๆไพใใฆใใพใ๏ผ[ใใคใใฉใคใณ]ใฏใ็นๅฎใฎใฟในใฏใงใขใใซใใใฐใใๆจ่ซใซไฝฟ็จใใใใใฎใใฎใงใใใ[`Trainer`]ใฏPyTorchใขใใซใ่ฟ
้ใซใใฌใผใใณใฐใพใใฏๅพฎ่ชฟๆดใใใใใฎใใฎใงใ๏ผใในใฆใฎTensorFlowใขใใซใฏ`Keras.fit`ใจไบๆๆงใใใใพใ๏ผใ
- ใใฎ็ตๆใใใฎใฉใคใใฉใชใฏใใฅใผใฉใซใใใใฏใผใฏใฎใขใธใฅใฉใผใใผใซใใใฏในใงใฏใใใพใใใใฉใคใใฉใชใๆกๅผตใพใใฏๆง็ฏใใใๅ ดๅใฏใ้ๅธธใฎPythonใPyTorchใTensorFlowใKerasใขใธใฅใผใซใไฝฟ็จใใใฉใคใใฉใชใฎๅบๆฌใฏใฉในใใ็ถๆฟใใฆใขใใซใฎ่ชญใฟ่พผใฟใจไฟๅญใชใฉใฎๆฉ่ฝใๅๅฉ็จใใใ ใใงใใใขใใซใฎใณใผใใฃใณใฐๅฒๅญฆใซใคใใฆ่ฉณใใ็ฅใใใๅ ดๅใฏใ[Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy)ใใญใฐๆ็จฟใใใงใใฏใใฆใฟใฆใใ ใใใ
2. ใชใชใธใใซใฎใขใใซใซใงใใใ ใ่ฟใๆง่ฝใๆใคๆๆฐใฎใขใใซใๆไพใใใใจ๏ผ
- ๅใขใผใญใใฏใใฃใซๅฏพใใฆใๅ
ฌๅผใช่่
ใใๆไพใใใ็ตๆใๅ็พใใๅฐใชใใจใ1ใคใฎไพใๆไพใใพใใ
- ใณใผใใฏ้ๅธธใๅฏ่ฝใช้ใๅ
ใฎใณใผใใใผในใซ่ฟใใใฎใงใใใใใใฏPyTorchใณใผใใTensorFlowใณใผใใซๅคๆใใใใใจใใ็ใใ้ใใพใ็ถใใงใใ
ใใฎไปใฎใใใคใใฎ็ฎๆจ๏ผ
- ใขใใซใฎๅ
้จใใงใใใ ใไธ่ฒซใใฆๅ
ฌ้ใใใใจ๏ผ
- ใใซใช้ ใ็ถๆ
ใจๆณจๆใฎ้ใฟใซใขใฏใปในใงใใๅไธใฎAPIใๆไพใใพใใ
- ๅๅฆ็ใฏใฉในใจๅบๆฌใขใใซใฎAPIใฏๆจๆบๅใใใ็ฐกๅใซใขใใซ้ใๅใๆฟใใใใจใใงใใพใใ
- ใใใใฎใขใใซใฎๅพฎ่ชฟๆดใจ่ชฟๆปใฎใใใฎๆๆใชใใผใซใไธป่ฆณ็ใซ้ธๅฎใใใใจ๏ผ
- ่ชๅฝใจๅใ่พผใฟใซๆฐใใใใผใฏใณใ่ฟฝๅ ใใใใใฎ็ฐกๅใงไธ่ฒซใใๆนๆณใ
- Transformerใใใใใในใฏใใใณใใซใผใณใใใใใฎ็ฐกๅใชๆนๆณใ
- PyTorchใTensorFlow 2.0ใใใใณFlaxใฎ้ใ็ฐกๅใซๅใๆฟใใฆใ1ใคใฎใใฌใผใ ใฏใผใฏใงใใฌใผใใณใฐใใๅฅใฎใใฌใผใ ใฏใผใฏใงๆจ่ซใ่กใใใจใๅฏ่ฝใซใใใใจใ
## Main concepts
ใใฎใฉใคใใฉใชใฏใๅใขใใซใซใคใใฆๆฌกใฎ3ใคใฎใฟใคใใฎใฏใฉในใไธญๅฟใซๆง็ฏใใใฆใใพใ๏ผ
- **ใขใใซใฏใฉใน**ใฏใใฉใคใใฉใชใงๆไพใใใไบๅใใฌใผใใณใฐๆธใฟใฎ้ใฟใจไบๆๆงใฎใใPyTorchใขใใซ๏ผ[torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)๏ผใKerasใขใใซ๏ผ[tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผใพใใฏJAX/Flaxใขใใซ๏ผ[flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)๏ผใไฝฟ็จใงใใพใใ
- **ๆงๆใฏใฉใน**ใฏใใขใใซใๆง็ฏใใใใใซๅฟ
่ฆใชใใคใใผใใฉใกใผใฟใๆ ผ็ดใใพใ๏ผๅฑคใฎๆฐใ้ ใๅฑคใฎใตใคใบใชใฉ๏ผใใใใใ่ชๅใงใคใณในใฟใณในๅใใๅฟ
่ฆใฏใใใพใใใ็นใซใๅคๆดใๅ ใใใซไบๅใใฌใผใใณใฐๆธใฟใขใใซใไฝฟ็จใใฆใใๅ ดๅใใขใใซใไฝๆใใใจ่ชๅ็ใซๆงๆใใคใณในใฟใณในๅใใใใใใซใชใใพใ๏ผใใใฏใขใใซใฎไธ้จใงใ๏ผใ
- **ๅๅฆ็ใฏใฉใน**ใฏใ็ใใผใฟใใขใใซใๅใๅ
ฅใใๅฝขๅผใซๅคๆใใพใใ[ใใผใฏใใคใถ](main_classes/tokenizer)ใฏๅใขใใซใฎ่ชๅฝใไฟๅญใใๆๅญๅใใใผใฏใณๅใ่พผใฟใฎใคใณใใใฏในใฎใชในใใซใจใณใณใผใใใใณใใณใผใใใใใใฎใกใฝใใใๆไพใใพใใ[ใคใกใผใธใใญใปใใต](main_classes/image_processor)ใฏใใธใงใณๅ
ฅๅใๅๅฆ็ใใ[็นๅพดๆฝๅบๅจ](main_classes/feature_extractor)ใฏใชใผใใฃใชๅ
ฅๅใๅๅฆ็ใใ[ใใญใปใใต](main_classes/processors)ใฏใใซใใขใผใใซๅ
ฅๅใๅฆ็ใใพใใ
ใใใใฎใในใฆใฎใฏใฉในใฏใไบๅใใฌใผใใณใฐๆธใฟใฎใคใณในใฟใณในใใใคใณในใฟใณในๅใใใญใผใซใซใซไฟๅญใใHubใงๅ
ฑๆใใใใจใใงใใ3ใคใฎใกใฝใใใไฝฟ็จใใฆใใพใ๏ผ
- `from_pretrained()`ใฏใใฉใคใใฉใช่ชไฝใซใใฃใฆๆไพใใใ๏ผ[ใขใใซใใ](https://huggingface.co/models)ใงใตใใผใใใใฆใใใขใใซใใใใพใ๏ผใใใฆใผใถใผใซใใฃใฆใญใผใซใซใซไฟๅญใใใ๏ผใพใใฏใตใผใใผใซไฟๅญใใใ๏ผไบๅใใฌใผใใณใฐๆธใฟใใผใธใงใณใใใขใใซใๆงๆใๅๅฆ็ใฏใฉในใใคใณในใฟใณในๅใใใใใฎใกใฝใใใงใใ
- `save_pretrained()`ใฏใใขใใซใๆงๆใๅๅฆ็ใฏใฉในใใญใผใซใซใซไฟๅญใใ`from_pretrained()`ใไฝฟ็จใใฆๅ่ชญใฟ่พผใฟใงใใใใใซใใพใใ
- `push_to_hub()`ใฏใใขใใซใๆงๆใๅๅฆ็ใฏใฉในใHubใซๅ
ฑๆใใ่ชฐใงใ็ฐกๅใซใขใฏใปในใงใใใใใซใใพใใ
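ใใฎ `save_pretrained()` / `from_pretrained()` ใจใใใใฟใผใณใใฎใใฎใฏใๆฌกใฎใใใชๆๅฐ้ใฎ่ชไฝใฏใฉในใงในใฑใใใงใใพใใไปฅไธใฏ่ชฌๆ็จใฎไปฎใฎใฏใฉใน๏ผ`TinyConfig`๏ผใงใใใๅฎ้ใฎใฉใคใใฉใชใฎๅฎ่ฃ
ใงใฏใใใพใใใ่จญ่จใฎ้ฐๅฒๆฐใไผใใใใใฎใใฎใงใใ

```python
import json
import os
import tempfile

class TinyConfig:
    """save_pretrained() / from_pretrained() ใฎๆตใใๆจกใใๆๅฐไพ๏ผไปฎใฎใฏใฉใน๏ผใ"""
    def __init__(self, hidden_size=4, num_layers=2):
        self.hidden_size = hidden_size
        self.num_layers = num_layers

    def save_pretrained(self, save_directory):
        # ใใคใใผใใฉใกใผใฟใ config.json ใจใใฆไฟๅญใใ
        os.makedirs(save_directory, exist_ok=True)
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self.__dict__, f)

    @classmethod
    def from_pretrained(cls, directory):
        # ไฟๅญใใใ config.json ใใใคใณในใฟใณในใๅๆง็ฏใใ
        with open(os.path.join(directory, "config.json")) as f:
            return cls(**json.load(f))

with tempfile.TemporaryDirectory() as d:
    TinyConfig(hidden_size=8).save_pretrained(d)
    cfg = TinyConfig.from_pretrained(d)
```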
# The Transformer model family
2017ๅนดใซๅฐๅ
ฅใใใฆไปฅๆฅใ[ๅ
ใฎTransformer](https://arxiv.org/abs/1706.03762)ใขใใซใฏใ่ช็ถ่จ่ชๅฆ็๏ผNLP๏ผใฎใฟในใฏใ่ถ
ใใๅคใใฎๆฐใใใจใญใตใคใใฃใณใฐใชใขใใซใใคใณในใใคใขใใพใใใ[ใฟใณใใฏ่ณชใฎๆใใใใพใใๆง้ ใไบๆธฌ](https://huggingface.co/blog/deep-learning-with-proteins)ใใใขใใซใ[ใใผใฟใผใ่ตฐใใใใใใฎใใฌใผใใณใฐ](https://huggingface.co/blog/train-decision-transformers)ใใใขใใซใใใใฆ[ๆ็ณปๅไบๆธฌ](https://huggingface.co/blog/time-series-transformers)ใฎใใใฎใขใใซใชใฉใใใใพใใTransformerใฎใใพใใพใชใใชใขใณใใๅฉ็จๅฏ่ฝใงใใใๅคงๅฑใ่ฆ่ฝใจใใใจใใใใพใใใใใใฎใในใฆใฎใขใใซใซๅ
ฑ้ใใใฎใฏใๅ
ใฎTransformerใขใผใญใใฏใใฃใซๅบใฅใใฆใใใใจใงใใไธ้จใฎใขใใซใฏใจใณใณใผใใพใใฏใใณใผใใฎใฟใไฝฟ็จใใไปใฎใขใใซใฏไธกๆนใไฝฟ็จใใพใใใใใฏใTransformerใใกใใชใผๅ
ใฎใขใใซใฎ้ซใฌใใซใฎ้ใใใซใใดใฉใคใบใใ่ชฟๆปใใใใใฎๆ็จใชๅ้กๆณใๆไพใใไปฅๅใซๅบไผใฃใใใจใฎใชใTransformerใ็่งฃใใใฎใซๅฝน็ซใกใพใใ

ๅ
ใฎTransformerใขใใซใซๆ
ฃใใฆใใชใใใใชใใฌใใทใฅใๅฟ
่ฆใชๅ ดๅใฏใHugging Faceใณใผในใฎ[Transformerใฎๅไฝๅ็](https://huggingface.co/course/chapter1/4?fw=pt)็ซ ใใใงใใฏใใฆใใ ใใใ
<div align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/H39Z_720T5s" title="YouTubeใใใชใใฌใผใคใผ"
frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
picture-in-picture" allowfullscreen></iframe>
</div>
## Computer vision
<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FacQBpeFBVvrDUlzFlkejoz%2FModelscape-timeline%3Fnode-id%3D0%253A1%26t%3Dm0zJ7m2BQ9oe0WtO-1" allowfullscreen></iframe>
### Convolutional network
้ทใ้ใ็ณใฟ่พผใฟใใใใฏใผใฏ๏ผCNN๏ผใฏใณใณใใฅใผใฟใใธใงใณใฎใฟในใฏใซใใใฆๆฏ้
็ใชใใฉใใคใ ใงใใใใ[ใใธใงใณTransformer](https://arxiv.org/abs/2010.11929)ใฏใใฎในใฑใผใฉใใชใใฃใจๅน็ๆงใ็คบใใพใใใใใใงใใไธ้จใฎCNNใฎๆ้ซใฎ็นๆงใ็นใซ็นๅฎใฎใฟในใฏใซใจใฃใฆใฏ้ๅธธใซๅผทๅใชๅนณ่ก็งปๅไธๅคๆงใชใฉใไธ้จใฎTransformerใฏใขใผใญใใฏใใฃใซ็ณใฟ่พผใฟใ็ตใฟ่พผใใงใใพใใ[ConvNeXt](model_doc/convnext)ใฏใ็ณใฟ่พผใฟใ็พไปฃๅใใใใใซTransformerใใ่จญ่จใฎ้ธๆ่ขใๅใๅ
ฅใใไพใใฐใConvNeXtใฏ็ปๅใใใใใซๅๅฒใใใใใซ้ใชใๅใใชใในใฉใคใใฃใณใฐใฆใฃใณใใฆใจใใฐใญใผใใซๅๅฎน้ใๅขๅ ใใใใใใฎๅคงใใชใซใผใใซใไฝฟ็จใใพใใConvNeXtใฏใใกใขใชๅน็ใๅไธใใใใใใฉใผใใณในใๅไธใใใใใใซใใใคใใฎใฌใคใคใผใใถใคใณใฎ้ธๆ่ขใๆไพใใTransformerใจ็ซถๅ็ใซใชใใพใ๏ผ
### Encoder[[cv-encoder]]
[ใใธใงใณ ใใฉใณในใใฉใผใใผ๏ผViT๏ผ](model_doc/vit) ใฏใ็ณใฟ่พผใฟใไฝฟ็จใใชใใณใณใใฅใผใฟใใธใงใณใฟในใฏใฎๆใ้ใใพใใใViT ใฏๆจๆบใฎใใฉใณในใใฉใผใใผใจใณใณใผใใผใไฝฟ็จใใพใใใ็ปๅใๆฑใๆนๆณใไธป่ฆใชใใฌใผใฏในใซใผใงใใใ็ปๅใๅบๅฎใตใคใบใฎใใใใซๅๅฒใใใใใใใใผใฏใณใฎใใใซไฝฟ็จใใฆๅใ่พผใฟใไฝๆใใพใใViT ใฏใๅฝๆใฎCNNใจ็ซถไบๅใฎใใ็ตๆใ็คบใใใใซใใฉใณในใใฉใผใใผใฎๅน็็ใชใขใผใญใใฏใใฃใๆดป็จใใพใใใใใใฌใผใใณใฐใซๅฟ
่ฆใชใชใฝใผในใๅฐใชใใฆๆธใฟใพใใใViT ใซ็ถใใฆใใปใฐใกใณใใผใทใงใณใๆคๅบใชใฉใฎๅฏใชใใธใงใณใฟในใฏใๅฆ็ใงใใไปใฎใใธใงใณใขใใซใ็ปๅ ดใใพใใใ
ใใใใฎใขใใซใฎ1ใคใ[Swin](model_doc/swin) ใใฉใณในใใฉใผใใผใงใใSwin ใใฉใณในใใฉใผใใผใฏใใใๅฐใใชใตใคใบใฎใใใใใ้ๅฑค็ใช็นๅพดใใใ๏ผCNNใฎใใใง ViT ใจใฏ็ฐใชใใพใ๏ผใๆง็ฏใใๆทฑๅฑคใฎใใใใจ้ฃๆฅใใใใใใจใใผใธใใพใใๆณจๆใฏใญใผใซใซใฆใฃใณใใฆๅ
ใงใฎใฟ่จ็ฎใใใใฆใฃใณใใฆใฏๆณจๆใฎใฌใคใคใผ้ใงใทใใใใใใขใใซใใใ่ฏใๅญฆ็ฟใใใฎใใตใใผใใใๆฅ็ถใไฝๆใใพใใSwin ใใฉใณในใใฉใผใใผใฏ้ๅฑค็ใช็นๅพดใใใใ็ๆใงใใใใใใปใฐใกใณใใผใทใงใณใๆคๅบใชใฉใฎๅฏใชไบๆธฌใฟในใฏใซ้ฉใใฆใใพใใ[SegFormer](model_doc/segformer) ใ้ๅฑค็ใช็นๅพดใใใใๆง็ฏใใใใใซใใฉใณในใใฉใผใใผใจใณใณใผใใผใไฝฟ็จใใพใใใใในใฆใฎ็นๅพดใใใใ็ตใฟๅใใใฆไบๆธฌใใใใใซใทใณใใซใชใใซใใฌใคใคใผใใผใปใใใญใณ๏ผMLP๏ผใใณใผใใผใ่ฟฝๅ ใใพใใ
BeIT ใใใณ ViTMAE ใชใฉใฎไปใฎใใธใงใณใขใใซใฏใBERTใฎไบๅใใฌใผใใณใฐ็ฎๆจใใใคใณในใใฌใผใทใงใณใๅพใพใใใ[BeIT](model_doc/beit) ใฏ *masked image modeling (MIM)* ใซใใฃใฆไบๅใใฌใผใใณใฐใใใฆใใพใใ็ปๅใใใใฏใฉใณใใ ใซใในใฏใใใ็ปๅใ่ฆ่ฆใใผใฏใณใซใใผใฏใณๅใใใพใใBeIT ใฏใในใฏใใใใใใใซๅฏพๅฟใใ่ฆ่ฆใใผใฏใณใไบๆธฌใใใใใซใใฌใผใใณใฐใใใพใใ[ViTMAE](model_doc/vitmae) ใไผผใใใใชไบๅใใฌใผใใณใฐ็ฎๆจใๆใฃใฆใใใ่ฆ่ฆใใผใฏใณใฎไปฃใใใซใใฏใปใซใไบๆธฌใใๅฟ
่ฆใใใใพใใ็ฐไพใชใฎใฏ็ปๅใใใใฎ75%ใใในใฏใใใฆใใใใจใงใ๏ผใใณใผใใผใฏใในใฏใใใใใผใฏใณใจใจใณใณใผใใใใใใใใใใใฏใปใซใๅๆง็ฏใใพใใไบๅใใฌใผใใณใฐใฎๅพใใใณใผใใผใฏๆจใฆใใใใจใณใณใผใใผใฏใใฆใณในใใชใผใ ใฎใฟในใฏใงไฝฟ็จใงใใ็ถๆ
ใงใใ
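ViTใ่กใใ็ปๅใๅบๅฎใตใคใบใฎใใใใซๅๅฒใใฆใใผใฏใณๅใใใๅฆ็ใฏใๆฌกใฎใใใซในใฑใใใงใใพใใไปฅไธใฏ1ใใฃใใซใฎๅฐใใช็ปๅใใในใใชในใใง่กจใใ่ชฌๆ็จใฎ็ฐก็ฅ็ใงใใๅฎ้ใฏใใใๅๅฒใฏในใใฉใคใไปใ็ณใฟ่พผใฟใงๅฎ่ฃ
ใใใใใจใๅคใใใใฎๅพใซ็ทๅฝขๅฐๅฝฑใง埋ใ่พผใฟใๅพใพใใ

```python
def patchify(image, patch_size):
    # H×W ใฎ็ปๅ๏ผใในใใชในใ๏ผใ patch_size×patch_size ใฎ้ใชใใชใใใใใซๅๅฒใใ
    # ๅใใใใๅนณๅฆๅใใฆใใผใฏใณๅใจใใฆไธฆในใใ
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patches.append([image[y + dy][x + dx]
                            for dy in range(patch_size)
                            for dx in range(patch_size)])
    return patches

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]  # ไปฎใฎใใฏใปใซๅค
patches = patchify(image, 2)
```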
### Decoder[[cv-decoder]]
ใใณใผใใผใฎใฟใฎใใธใงใณใขใใซใฏ็ใใใงใใใชใใชใใใปใจใใฉใฎใใธใงใณใขใใซใฏ็ปๅ่กจ็พใๅญฆใถใใใซใจใณใณใผใใผใไฝฟ็จใใใใใงใใใใใใ็ปๅ็ๆใชใฉใฎใฆใผในใฑใผในใงใฏใใใณใผใใผใฏ่ช็ถใช้ฉๅฟใงใใใใใฏใGPT-2ใชใฉใฎใใญในใ็ๆใขใใซใใ่ฆใฆใใใใใซใ[ImageGPT](model_doc/imagegpt) ใงใๅๆงใฎใขใผใญใใฏใใฃใไฝฟ็จใใพใใใใทใผใฑใณในๅ
ใฎๆฌกใฎใใผใฏใณใไบๆธฌใใไปฃใใใซใ็ปๅๅ
ใฎๆฌกใฎใใฏใปใซใไบๆธฌใใพใใ็ปๅ็ๆใซๅ ใใฆใImageGPT ใฏ็ปๅๅ้กใฎใใใซใใใกใคใณใใฅใผใใณใฐใงใใพใใ
### Encoder-decoder[[cv-encoder-decoder]]
ใใธใงใณใขใใซใฏไธ่ฌ็ใซใจใณใณใผใใผ๏ผใใใฏใใผใณใจใๅผใฐใใพใ๏ผใไฝฟ็จใใฆ้่ฆใช็ปๅ็นๅพดใๆฝๅบใใใใใใใฉใณในใใฉใผใใผใใณใผใใผใซๆธกใใใใซไฝฟ็จใใพใใ[DETR](model_doc/detr) ใฏไบๅใใฌใผใใณใฐๆธใฟใฎใใใฏใใผใณใๆใฃใฆใใพใใใใชใใธใงใฏใๆคๅบใฎใใใซๅฎๅ
จใชใใฉใณในใใฉใผใใผใจใณใณใผใใผใใณใผใใผใขใผใญใใฏใใฃใไฝฟ็จใใฆใใพใใใจใณใณใผใใผใฏ็ปๅ่กจ็พใๅญฆใณใใใณใผใใผๅ
ใฎใชใใธใงใฏใใฏใจใช๏ผๅใชใใธใงใฏใใฏใจใชใฏ็ปๅๅ
ใฎ้ ๅใพใใฏใชใใธใงใฏใใซ็ฆ็นใๅฝใฆใๅญฆ็ฟใใใๅใ่พผใฟใงใ๏ผใจ็ตใฟๅใใใพใใDETR ใฏๅใชใใธใงใฏใใฏใจใชใซๅฏพใใๅข็ใใใฏในใฎๅบงๆจใจใฏใฉในใฉใใซใไบๆธฌใใพใใ
## Natural language processing
<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FUhbQAZDlpYW5XEpdFy6GoG%2Fnlp-model-timeline%3Fnode-id%3D0%253A1%26t%3D4mZMr4r1vDEYGJ50-1" allowfullscreen></iframe>
### Encoder[[nlp-encoder]]
[BERT](model_doc/bert) ใฏใจใณใณใผใใผๅฐ็จใฎTransformerใงใๅ
ฅๅใฎไธ้จใฎใใผใฏใณใใฉใณใใ ใซใในใฏใใฆไปใฎใใผใฏใณใ่ฆใชใใใใซใใฆใใพใใใใใซใใใใใผใฏใณใใในใฏใใๆ่ใซๅบใฅใใฆใในใฏใใใใใผใฏใณใไบๆธฌใใใใจใไบๅใใฌใผใใณใฐใฎ็ฎๆจใงใใใใใซใใใBERTใฏๅ
ฅๅใฎใใๆทฑใใใค่ฑใใช่กจ็พใๅญฆ็ฟใใใฎใซๅทฆๅณใฎๆ่ใๅฎๅ
จใซๆดป็จใงใใพใใใใใใBERTใฎไบๅใใฌใผใใณใฐๆฆ็ฅใซใฏใพใ ๆนๅใฎไฝๅฐใใใใพใใใ[RoBERTa](model_doc/roberta) ใฏใใใฌใผใใณใฐใ้ทๆ้่กใใใใๅคงใใชใใใใงใใฌใผใใณใฐใใไบๅๅฆ็ไธญใซไธๅบฆใ ใใงใชใๅใจใใใฏใงใใผใฏใณใใฉใณใใ ใซใในใฏใใๆฌกๆไบๆธฌใฎ็ฎๆจใๅ้คใใๆฐใใไบๅใใฌใผใใณใฐใฌใทใใๅฐๅ
ฅใใใใจใงใใใๆนๅใใพใใใ
ๆง่ฝใๅไธใใใไธป่ฆใชๆฆ็ฅใฏใขใใซใฎใตใคใบใๅขใใใใจใงใใใๅคง่ฆๆจกใชใขใใซใฎใใฌใผใใณใฐใฏ่จ็ฎใณในใใใใใใพใใ่จ็ฎใณในใใๅๆธใใๆนๆณใฎ1ใคใฏใ[DistilBERT](model_doc/distilbert) ใฎใใใชๅฐใใชใขใใซใไฝฟ็จใใใใจใงใใDistilBERTใฏ[็ฅ่ญ่ธ็](https://arxiv.org/abs/1503.02531) - ๅง็ธฎๆ่ก - ใไฝฟ็จใใฆใBERTใฎใปใผใในใฆใฎ่จ่ช็่งฃๆฉ่ฝใไฟๆใใชใใใใใๅฐใใชใใผใธใงใณใไฝๆใใพใใ
ใใใใใปใจใใฉใฎTransformerใขใใซใฏๅผใ็ถใใใๅคใใฎใใฉใกใผใฟใซ็ฆ็นใๅฝใฆใใใฌใผใใณใฐๅน็ใๅไธใใใๆฐใใใขใใซใ็ปๅ ดใใฆใใพใใ[ALBERT](model_doc/albert) ใฏใ2ใคใฎๆนๆณใงใใฉใกใผใฟใฎๆฐใๆธใใใใจใซใใฃใฆใกใขใชๆถ่ฒป้ใๅๆธใใพใใๅคงใใช่ชๅฝๅใ่พผใฟใ2ใคใฎๅฐใใช่กๅใซๅๅฒใใใฌใคใคใผใใใฉใกใผใฟใๅ
ฑๆใงใใใใใซใใพใใ[DeBERTa](model_doc/deberta) ใฏใๅ่ชใจใใฎไฝ็ฝฎใ2ใคใฎใใฏใใซใงๅฅใ
ใซใจใณใณใผใใใ่งฃใใใๆณจๆๆฉๆงใ่ฟฝๅ ใใพใใใๆณจๆใฏใๅ่ชใจไฝ็ฝฎใฎๅใ่พผใฟใๅซใๅไธใฎใใฏใใซใใใงใฏใชใใใใใใฎๅฅใ
ใฎใใฏใใซใใ่จ็ฎใใใพใใ[Longformer](model_doc/longformer) ใฏใ็นใซ้ทใใทใผใฑใณใน้ทใฎใใญใฅใกใณใใๅฆ็ใใใใใซๆณจๆใใใๅน็็ใซใใใใจใซ็ฆ็นใๅฝใฆใพใใใๅบๅฎใใใใฆใฃใณใใฆใตใคใบใฎๅจใใฎๅใใผใฏใณใใ่จ็ฎใใใใญใผใซใซใฆใฃใณใใฆไปใๆณจๆ๏ผ็นๅฎใฎใฟในใฏใใผใฏใณ๏ผๅ้กใฎใใใฎ `[CLS]` ใชใฉ๏ผใฎใฟใฎใใใฎใฐใญใผใใซใชๆณจๆใๅซใ๏ผใฎ็ตใฟๅใใใไฝฟ็จใใฆใๅฎๅ
จใชๆณจๆ่กๅใงใฏใชใ็ใชๆณจๆ่กๅใไฝๆใใพใใ
### Decoder[[nlp-decoder]]
[GPT-2](model_doc/gpt2)ใฏใใทใผใฑใณในๅ
ใฎๆฌกใฎๅ่ชใไบๆธฌใใใใณใผใใผๅฐ็จใฎTransformerใงใใใขใใซใฏๅ
ใ่ฆใใใจใใงใใชใใใใซใใผใฏใณใๅณใซใในใฏใใ"ใฎใใ่ฆ"ใ้ฒใใพใใๅคง้ใฎใใญในใใไบๅใใฌใผใใณใฐใใใใจใซใใใGPT-2ใฏใใญในใ็ๆใ้ๅธธใซๅพๆใงใใใญในใใๆญฃ็ขบใงใใใใจใใใใซใใฆใใๆๆๆญฃ็ขบใงใฏใชใใใจใใใใพใใใใใใGPT-2ใซใฏBERTใฎไบๅใใฌใผใใณใฐใใใฎๅๆนๅใณใณใใญในใใไธ่ถณใใฆใใใ็นๅฎใฎใฟในใฏใซใฏ้ฉใใฆใใพใใใงใใใ[XLNET](model_doc/xlnet)ใฏใๅๆนๅใซๅญฆ็ฟใงใใ้ ๅ่จ่ชใขใใชใณใฐ็ฎๆจ๏ผPLM๏ผใไฝฟ็จใใใใจใงใBERTใจGPT-2ใฎไบๅใใฌใผใใณใฐ็ฎๆจใฎใในใใ็ตใฟๅใใใฆใใพใใ
GPT-2ใฎๅพใ่จ่ชใขใใซใฏใใใซๅคงใใๆ้ทใใไปใงใฏ*ๅคง่ฆๆจก่จ่ชใขใใซ๏ผLLM๏ผ*ใจใใฆ็ฅใใใฆใใพใใๅคง่ฆๆจกใชใใผใฟใปใใใงไบๅใใฌใผใใณใฐใใใใฐใLLMใฏใปใผใผใญใทใงใใๅญฆ็ฟใ็คบใใใจใใใใพใใ[GPT-J](model_doc/gptj)ใฏใ6BใฎใใฉใกใผใฟใๆใคLLMใงใ400BใฎใใผใฏใณใงใใฌใผใใณใฐใใใฆใใพใใGPT-Jใซใฏ[OPT](model_doc/opt)ใ็ถใใใใฎใใกๆๅคงใฎใขใใซใฏ175Bใงใ180Bใฎใใผใฏใณใงใใฌใผใใณใฐใใใฆใใพใใๅใๆๆใซ[BLOOM](model_doc/bloom)ใใชใชใผในใใใใใฎใใกใใชใผใฎๆๅคงใฎใขใใซใฏ176Bใฎใใฉใกใผใฟใๆใกใ46ใฎ่จ่ชใจ13ใฎใใญใฐใฉใใณใฐ่จ่ชใง366Bใฎใใผใฏใณใงใใฌใผใใณใฐใใใฆใใพใใ
### Encoder-decoder[[nlp-encoder-decoder]]
[BART](model_doc/bart)ใฏใๅ
ใฎTransformerใขใผใญใใฏใใฃใไฟๆใใฆใใพใใใไบๅใใฌใผใใณใฐ็ฎๆจใ*ใใญในใ่ฃๅฎ*ใฎ็ ดๆใซๅคๆดใใฆใใพใใไธ้จใฎใใญในใในใใณใฏๅไธใฎ`mask`ใใผใฏใณใง็ฝฎๆใใใพใใใใณใผใใผใฏ็ ดๆใใฆใใชใใใผใฏใณใไบๆธฌใ๏ผๆชๆฅใฎใใผใฏใณใฏใในใฏใใใพใ๏ผใใจใณใณใผใใผใฎ้ ใใ็ถๆ
ใไฝฟ็จใใฆไบๆธฌใ่ฃๅฉใใพใใ[Pegasus](model_doc/pegasus)ใฏBARTใซไผผใฆใใพใใใPegasusใฏใใญในใในใใณใฎไปฃใใใซๆๅ
จไฝใใในใฏใใพใใใในใฏใใใ่จ่ชใขใใชใณใฐใซๅ ใใฆใPegasusใฏใฎใฃใใๆ็ๆ๏ผGSG๏ผใซใใฃใฆไบๅใใฌใผใใณใฐใใใฆใใพใใGSGใฎ็ฎๆจใฏใๆๆธใซ้่ฆใชๆใใในใฏใใใใใใ`mask`ใใผใฏใณใง็ฝฎๆใใใใจใงใใใใณใผใใผใฏๆฎใใฎๆใใๅบๅใ็ๆใใชใใใฐใชใใพใใใ[T5](model_doc/t5)ใฏใใในใฆใฎNLPใฟในใฏใ็นๅฎใฎใใฌใใฃใใฏในใไฝฟ็จใใฆใใญในใๅฏพใใญในใใฎๅ้กใซๅคๆใใใใใฆใใผใฏใชใขใใซใงใใใใจใใฐใใใฌใใฃใใฏใน`Summarize:`ใฏ่ฆ็ดใฟในใฏใ็คบใใพใใT5ใฏๆๅธซใใใใฌใผใใณใฐ๏ผGLUEใจSuperGLUE๏ผใจ่ชๅทฑๆๅธซใใใใฌใผใใณใฐ๏ผใใผใฏใณใฎ15%ใใฉใณใใ ใซใตใณใใซใใใญใใใขใฆใ๏ผใซใใฃใฆไบๅใใฌใผใใณใฐใใใฆใใพใใ
## Audio
<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2Fvrchl8jDV9YwNVPWu2W0kK%2Fspeech-and-audio-model-timeline%3Fnode-id%3D0%253A1%26t%3DmM4H8pPMuK23rClL-1" allowfullscreen></iframe>
### Encoder[[audio-encoder]]
[Wav2Vec2](model_doc/wav2vec2) ใฏใ็ใฎใชใผใใฃใชๆณขๅฝขใใ็ดๆฅ้ณๅฃฐ่กจ็พใๅญฆ็ฟใใใใใฎTransformerใจใณใณใผใใผใไฝฟ็จใใพใใใใใฏใๅฏพ็
ง็ใชใฟในใฏใงไบๅๅญฆ็ฟใใใไธ้ฃใฎๅฝใฎ่กจ็พใใ็ใฎ้ณๅฃฐ่กจ็พใ็นๅฎใใพใใ[HuBERT](model_doc/hubert) ใฏWav2Vec2ใซไผผใฆใใพใใใ็ฐใชใใใฌใผใใณใฐใใญใปในใๆใฃใฆใใพใใใฟใผใฒใใใฉใใซใฏใ้กไผผใใใชใผใใฃใชใปใฐใกใณใใใฏใฉในใฟใซๅฒใๅฝใฆใใใใใใ้ ใใฆใใใใซใชใใฏใฉในใฟใชใณใฐในใใใใซใใฃใฆไฝๆใใใพใใ้ ใใฆใใใใฏๅใ่พผใฟใซใใใใใใไบๆธฌใ่กใใพใใ
### Encoder-decoder[[audio-encoder-decoder]]
[Speech2Text](model_doc/speech_to_text) ใฏใ่ชๅ้ณๅฃฐ่ช่ญ๏ผASR๏ผใใใณ้ณๅฃฐ็ฟป่จณใฎใใใซ่จญ่จใใใ้ณๅฃฐใขใใซใงใใใใฎใขใใซใฏใใชใผใใฃใชๆณขๅฝขใใๆฝๅบใใใใญใฐใกใซใใฃใซใฟใผใใณใฏใใฃใผใใฃใผใๅใๅ
ฅใใไบๅใใฌใผใใณใฐใใใ่ชๅทฑๅๅธฐ็ใซใใฉใณในใฏใชใใใพใใฏ็ฟป่จณใ็ๆใใพใใ[Whisper](model_doc/whisper) ใASRใขใใซใงใใใไปใฎๅคใใฎ้ณๅฃฐใขใใซใจใฏ็ฐใชใใโจ ใฉใใซไปใ โจ ใชใผใใฃใชใใฉใณในใฏใชใใทใงใณใใผใฟใๅคง้ใซไบๅใซๅญฆ็ฟใใฆใผใญใทใงใใใใใฉใผใใณในใๅฎ็พใใพใใใใผใฟใปใใใฎๅคง้จๅใซใฏ้่ฑ่ชใฎ่จ่ชใๅซใพใใฆใใใWhisperใฏไฝใชใฝใผใน่จ่ชใซใไฝฟ็จใงใใพใใๆง้ ็ใซใฏใWhisperใฏSpeech2Textใซไผผใฆใใพใใใชใผใใฃใชไฟกๅทใฏใจใณใณใผใใผใซใใฃใฆใจใณใณใผใใใใใญใฐใกใซในใใฏใใญใฐใฉใ ใซๅคๆใใใพใใใใณใผใใผใฏใจใณใณใผใใผใฎ้ ใ็ถๆ
ใจๅใฎใใผใฏใณใใใใฉใณในใฏใชใใใ่ชๅทฑๅๅธฐ็ใซ็ๆใใพใใ
## Multimodal
<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FcX125FQHXJS2gxeICiY93p%2Fmultimodal%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe>
### Encoder[[mm-encoder]]
[VisualBERT](model_doc/visual_bert) ใฏใBERTใฎๅพใซใชใชใผในใใใใใธใงใณ่จ่ชใฟในใฏๅใใฎใใซใใขใผใใซใขใใซใงใใใใใฏBERTใจไบๅใใฌใผใใณใฐใใใ็ฉไฝๆคๅบใทในใใ ใ็ตใฟๅใใใ็ปๅ็นๅพดใใใธใฅใขใซๅใ่พผใฟใซๆฝๅบใใใใญในใๅใ่พผใฟใจไธ็ทใซBERTใซๆธกใใพใใVisualBERTใฏ้ใในใฏใใญในใใๅบใซใใใในใฏใใญในใใไบๆธฌใใใใญในใใ็ปๅใจๆดๅใใฆใใใใฉใใใไบๆธฌใใๅฟ
่ฆใใใใพใใViTใใชใชใผในใใใ้ใ[ViLT](model_doc/vilt) ใฏ็ปๅๅใ่พผใฟใๅๅพใใใใใซใใฎๆนๆณใๆก็จใใพใใใ็ปๅๅใ่พผใฟใฏใใญในใๅใ่พผใฟใจๅ
ฑใซๅ
ฑๅใงๅฆ็ใใใพใใใใใใใViLTใฏ็ปๅใใญในใใใใใณใฐใใในใฏ่จ่ชใขใใชใณใฐใใใใณๅ
จๅ่ชใในใญใณใฐใซใใไบๅใใฌใผใใณใฐใ่กใใใพใใ
[CLIP](model_doc/clip) ใฏ็ฐใชใใขใใญใผใใๅใใ(`็ปๅ`ใ`ใใญในใ`) ใฎใใขไบๆธฌใ่กใใพใใ็ปๅใจใณใณใผใใผ๏ผViT๏ผใจใใญในใใจใณใณใผใใผ๏ผTransformer๏ผใฏใ(`็ปๅ`ใ`ใใญในใ`) ใใขใใผใฟใปใใไธใงๅ
ฑๅใใฌใผใใณใฐใใใ(`็ปๅ`ใ`ใใญในใ`) ใใขใฎ็ปๅใจใใญในใใฎๅใ่พผใฟใฎ้กไผผๆงใๆๅคงๅใใพใใไบๅใใฌใผใใณใฐๅพใCLIPใไฝฟ็จใใฆ็ปๅใใใใญในใใไบๆธฌใใใใใใฎ้ใ่กใใใจใใงใใพใใ[OWL-ViT](model_doc/owlvit) ใฏใผใญใทใงใใ็ฉไฝๆคๅบใฎใใใฏใใผใณใจใใฆCLIPใไฝฟ็จใใฆใใพใใไบๅใใฌใผใใณใฐๅพใ็ฉไฝๆคๅบใใใใ่ฟฝๅ ใใใ(`ใฏใฉใน`ใ`ใใฆใณใใฃใณใฐใใใฏใน`) ใใขใซๅฏพใใใปใใไบๆธฌใ่กใใใพใใ
### Encoder-decoder[[mm-encoder-decoder]]
ๅ
ๅญฆๆๅญ่ช่ญ๏ผOCR๏ผใฏใ้ๅธธใ็ปๅใ็่งฃใใใญในใใ็ๆใใใใใซ่คๆฐใฎใณใณใใผใใณใใ้ขไธใใใใญในใ่ช่ญใฟในใฏใงใใ[TrOCR](model_doc/trocr) ใฏใใจใณใใใผใจใณใใฎTransformerใไฝฟ็จใใฆใใฎใใญใปในใ็ฐก็ฅๅใใพใใใจใณใณใผใใผใฏ็ปๅใๅบๅฎใตใคใบใฎใใใใจใใฆๅฆ็ใใใใใฎViTในใฟใคใซใฎใขใใซใงใใใใใณใผใใผใฏใจใณใณใผใใผใฎ้ ใ็ถๆ
ใๅใๅ
ฅใใใใญในใใ่ชๅทฑๅๅธฐ็ใซ็ๆใใพใใ[Donut](model_doc/donut) ใฏOCRใใผในใฎใขใใญใผใใซไพๅญใใชใใใไธ่ฌ็ใชใใธใฅใขใซใใญใฅใกใณใ็่งฃใขใใซใงใใจใณใณใผใใผใจใใฆSwin Transformerใใใณใผใใผใจใใฆๅค่จ่ชBARTใไฝฟ็จใใพใใDonutใฏ็ปๅใจใใญในใใฎๆณจ้ใซๅบใฅใใฆๆฌกใฎๅ่ชใไบๆธฌใใใใจใซใใใใใญในใใ่ชญใใใใซไบๅใใฌใผใใณใฐใใใพใใใใณใผใใผใฏใใญใณใใใไธใใใใใใผใฏใณใทใผใฑใณในใ็ๆใใพใใใใญใณใใใฏๅใใฆใณในใใชใผใ ใฟในใฏใใจใซ็นๅฅใชใใผใฏใณใไฝฟ็จใใฆ่กจ็พใใใพใใไพใใฐใใใญใฅใกใณใใฎ่งฃๆใซใฏ`่งฃๆ`ใใผใฏใณใใใใใจใณใณใผใใผใฎ้ ใ็ถๆ
ใจ็ตใฟๅใใใใฆใใญใฅใกใณใใๆง้ ๅใใใๅบๅใใฉใผใใใ๏ผJSON๏ผใซ่งฃๆใใพใใ
## Reinforcement learning
<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FiB3Y6RvWYki7ZuKO6tNgZq%2Freinforcement-learning%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe>
### Decoder[[rl-decoder]]
ๆๆๆฑบๅฎใจ่ป่ทกใใฉใณในใใฉใผใใผใฏใ็ถๆ
ใใขใฏใทใงใณใๅ ฑ้
ฌใใทใผใฑใณในใขใใชใณใฐใฎๅ้กใจใใฆๆใใพใใ[Decision Transformer](model_doc/decision_transformer) ใฏใใชใฟใผใณใปใใฅใปใดใผใ้ๅปใฎ็ถๆ
ใใใใณใขใฏใทใงใณใซๅบใฅใใฆๅฐๆฅใฎๅธๆใชใฟใผใณใซใคใชใใใขใฏใทใงใณใฎ็ณปๅใ็ๆใใพใใๆๅพใฎ *K* ใฟใคใ ในใใใใงใฏใ3ใคใฎใขใใชใใฃใใใใใใใผใฏใณๅใ่พผใฟใซๅคๆใใใๅฐๆฅใฎใขใฏใทใงใณใใผใฏใณใไบๆธฌใใใใใซGPTใฎใใใชใขใใซใซใใฃใฆๅฆ็ใใใพใใ[Trajectory Transformer](model_doc/trajectory_transformer) ใ็ถๆ
ใใขใฏใทใงใณใๅ ฑ้
ฌใใใผใฏใณๅใใGPTใขใผใญใใฏใใฃใงๅฆ็ใใพใใๅ ฑ้
ฌ่ชฟๆดใซ็ฆ็นใๅฝใฆใDecision Transformerใจใฏ็ฐใชใใTrajectory Transformerใฏใใผใ ใตใผใใไฝฟ็จใใฆๅฐๆฅใฎใขใฏใทใงใณใ็ๆใใพใใ
# Attention mechanism
ใปใจใใฉใฎTransformerใขใใซใฏใใขใใณใทใงใณ่กๅใๆญฃๆนๅฝขใงใใใจใใๆๅณใงๅฎๅ
จใชใขใใณใทใงใณใไฝฟ็จใใพใใใใใฏใ้ทใใใญในใใๆฑใๅ ดๅใซ่จ็ฎไธใฎๅคงใใชใใใซใใใฏใจใชใๅพใพใใLongformerใReformerใฏใใใๅน็็ใงใใฌใผใใณใฐใ้ซ้ๅใใใใใซใขใใณใทใงใณ่กๅใฎในใใผในใใผใธใงใณใไฝฟ็จใใใใจใใใขใใซใงใใ
## LSH attention
[Reformer](#reformer) uses LSH (locality-sensitive hashing) attention.
In the softmax(QK^t), only the biggest elements (in the softmax dimension) of the matrix QK^t are going to give useful contributions.
So for each query q, we can consider only the keys k that are close to q.
A hash function is used to determine whether q and k are close.
The attention mask is modified to mask the current token (except at the first position),
because it would give a query and a key that are equal (and therefore very similar to each other).
Since the hash can be a bit random, several hash functions are used in practice (determined by an n_rounds parameter), and then averaged together.
## Local attention
[Longformer](#longformer) uses local attention:
often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token.
Also, by stacking attention layers that have a small window, the last layer will have a receptive field of more than just the tokens in the window, allowing it to build a representation of the whole sentence.
Some preselected input tokens are also given global attention:
for those few tokens, the attention matrix can access all tokens, and this process is symmetric:
all other tokens have access to those specific tokens (on top of the ones in their local window).
This is shown in Figure 2d of the paper; see below for a sample attention mask:
<div class="flex justify-center">
<img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/>
</div>
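A boolean version of such a mask can be sketched as follows. This is a simplified illustration, not Longformer's implementation; `window` and `global_idx` are hypothetical parameters:

```python
import numpy as np

def local_attention_mask(seq_len, window, global_idx=()):
    """True where token i may attend to token j."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = np.abs(i - j) <= window       # local sliding window
    for g in global_idx:                 # global tokens attend everywhere,
        mask[g, :] = True                # and every token attends to them,
        mask[:, g] = True                # keeping the process symmetric
    return mask

mask = local_attention_mask(8, window=2, global_idx=(0,))
```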
## Other tricks
### Axial positional encodings
[Reformer](#reformer) uses axial positional encodings: in traditional transformer models, the positional encoding E is a matrix of size \\(l\\) by \\(d\\), where \\(l\\) is the sequence length and \\(d\\) the dimension of the hidden state. For very long texts, this matrix can be huge and take up way too much space on the GPU. To alleviate that, axial positional encodings consist of factorizing that big matrix E into two smaller matrices E1 and E2, with dimensions \\(l_{1} \times d_{1}\\) and \\(l_{2} \times d_{2}\\), such that \\(l_{1} \times l_{2} = l\\) and \\(d_{1} + d_{2} = d\\) (with the product for the lengths, this ends up being much smaller). The embedding for time step \\(j\\) in E is obtained by concatenating the embeddings for time step \\(j \% l1\\) in E1 and \\(j // l1\\) in E2.
<!-- hf_public_repos/transformers/docs/source/ja/bertology.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERTology
ๅคง่ฆๆจกใชใใฉใณในใใฉใผใใผใไพใใฐBERTใฎๅ
้จๅไฝใ่ชฟๆปใใ็ ็ฉถ้ ๅใๆฅๆ้ทใใฆใใพใ๏ผใใใใBERTologyใใจใๅผใณใพใ๏ผใใใฎๅ้ใฎ่ฏใไพใฏไปฅไธใงใ๏ผ
- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick:
[่ซๆใชใณใฏ](https://arxiv.org/abs/1905.05950)
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: [่ซๆใชใณใฏ](https://arxiv.org/abs/1905.10650)
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: [่ซๆใชใณใฏ](https://arxiv.org/abs/1906.04341)
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: [่ซๆใชใณใฏ](https://arxiv.org/abs/2210.04633)
ใใฎๆฐใใๅ้ใฎ็บๅฑใๆฏๆดใใใใใซใBERT/GPT/GPT-2ใขใใซใซใใใคใใฎ่ฟฝๅ ๆฉ่ฝใ็ตใฟ่พผใฟใไบบใ
ใๅ
้จ่กจ็พใซใขใฏใปในใงใใใใใซใใพใใใใใใใฎๆฉ่ฝใฏใไธปใซPaul Michelๆฐใฎๅชใใ็ ็ฉถ๏ผ[่ซๆใชใณใฏ](https://arxiv.org/abs/1905.10650)๏ผใซๅบใฅใใฆใใพใใๅ
ทไฝ็ใซใฏใไปฅไธใฎๆฉ่ฝใๅซใพใใฆใใพใ๏ผ
- BERT/GPT/GPT-2ใฎใในใฆใฎ้ ใ็ถๆ
ใซใขใฏใปในใใใใจใใงใใพใใ
- BERT/GPT/GPT-2ใฎๅใใใใฎๆณจๆ้ใฟใซใขใฏใปในใงใใพใใ
- ใใใใฎๅบๅๅคใจๅพ้
ใๅๅพใใใใใใฎ้่ฆๆงในใณใขใ่จ็ฎใใ[่ซๆใชใณใฏ](https://arxiv.org/abs/1905.10650)ใง่ชฌๆใใใฆใใใใใซใใใใๅๆธใงใใพใใ
ใใใใฎๆฉ่ฝใ็่งฃใใไฝฟ็จใใใฎใๆฏๆดใใใใใซใ็นๅฎใฎใตใณใใซในใฏใชใใใ[bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py)ใใ่ฟฝๅ ใใพใใใใใฎในใฏใชใใใฏใGLUEใงไบๅใใฌใผใใณใฐใใใใขใใซใใๆ
ๅ ฑใๆฝๅบใใใใใใๅๆธใใๅฝนๅฒใๆใใใพใใ
<!-- hf_public_repos/transformers/docs/source/ja/perplexity.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Perplexity of fixed-length models
[[open-in-colab]]
ใใผใใฌใญใทใใฃ๏ผPPL๏ผใฏ่จ่ชใขใใซใฎ่ฉไพกใซๆใไธ่ฌ็ใชๆๆจใฎ1ใคใงใใๆทฑๅ
ฅใใใๅใซใใใฎๆๆจใฏ็นใซๅคๅ
ธ็ใช่จ่ชใขใใซ๏ผๆใซใฏใชใผใใฌใฐใฌใใทใใพใใฏๅ ๆ่จ่ชใขใใซใจใๅผใฐใใ๏ผใซ้ฉ็จใใใBERTใชใฉใฎใในใฏใใใ่จ่ชใขใใซใซใฏ้ฉใใฆใใชใใใจใซๆณจๆใในใใงใ๏ผใขใใซใฎๆฆ่ฆใๅ็
งใใฆใใ ใใ[ใขใใซใฎๆฆ่ฆ](model_summary)๏ผใ
ใใผใใฌใญใทใใฃใฏใใทใผใฑใณในใฎๆๆฐๅนณๅ่ฒ ใฎๅฏพๆฐๅฐคๅบฆใจใใฆๅฎ็พฉใใใพใใใใผใฏใณๅใใใใทใผใฑใณใน \\(X = (x_0, x_1, \dots, x_t)\\) ใใใๅ ดๅใ\\(X\\) ใฎใใผใใฌใญใทใใฃใฏๆฌกใฎใใใซ่กจใใใพใใ
$$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$
ใใใงใ\\(\log p_\theta (x_i|x_{<i})\\) ใฏใขใใซใซใใๅใฎใใผใฏใณ \\(x_{<i}\\) ใซๅฏพใใ็ฌฌiใใผใฏใณใฎๅฏพๆฐๅฐคๅบฆใงใใ็ดๆ็ใซใฏใใใใฏใขใใซใใณใผใในๅ
ใฎๆๅฎใใใใใผใฏใณใฎ้ๅใซๅฏพใใฆไธๆงใซไบๆธฌใใ่ฝๅใฎ่ฉไพกใจ่ใใใใจใใงใใพใใ้่ฆใชใฎใฏใใใใซใใฃใฆใใผใฏใณๅๆๆณใใขใใซใฎใใผใใฌใญใทใใฃใซ็ดๆฅๅฝฑ้ฟใไธใใใใใ็ฐใชใใขใใซใๆฏ่ผใใ้ใซใฏๅธธใซ่ๆ
ฎใในใใงใใใจใใใใจใงใใ
ใใใฏใพใใใใผใฟใจใขใใซใฎไบๆธฌใจใฎ้ใฎไบคๅทฎใจใณใใญใใผใฎๆๆฐๅใจๅ็ญใงใใใใผใใฌใญใทใใฃใใใณใใใใปใใผใปใญใฃใฉใฏใฟใผ๏ผBPC๏ผใจใใผใฟๅง็ธฎใจใฎ้ขไฟใซใคใใฆใฎ่ฉณ็ดฐใชๆ
ๅ ฑใซใคใใฆใฏใใใฎ[็ด ๆดใใใ The Gradient ใฎใใญใฐ่จไบ](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/)ใๅ็
งใใฆใใ ใใใ
## Calculating PPL with fixed-length models
ใขใใซใฎใณใณใใญในใใตใคใบใซๅถ็ดใใชใๅ ดๅใใขใใซใฎใใผใใฌใญใทใใฃใ่ฉไพกใใใใใซใฏใใทใผใฑใณในใ่ชๅทฑๅๅธฐ็ใซๅ ๅญๅ่งฃใใๅในใใใใงๅใฎใตใใทใผใฑใณในใซๆกไปถใไปใใใใจใง่จ็ฎใใพใใไปฅไธใซ็คบใใใใซใ
<img width="600" alt="ๅฎๅ
จใชใณใณใใญในใ้ทใฎใทใผใฑใณในใฎๅ่งฃ" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/>
ใใใใ้ๅธธใ่ฟไผผใขใใซใไฝฟ็จใใๅ ดๅใใขใใซใๅฆ็ใงใใใใผใฏใณๆฐใซๅถ็ดใใใใพใใไพใใฐใๆๅคงใฎ[GPT-2](model_doc/gpt2)ใฎใใผใธใงใณใฏ1024ใใผใฏใณใฎๅบๅฎ้ทใๆใฃใฆใใใใใ1024ใใใๅคงใใ \\(t\\) ใซๅฏพใใฆ \\(p_\theta(x_t|x_{<t})\\) ใ็ดๆฅ่จ็ฎใใใใจใฏใงใใพใใใ
ไปฃใใใซใ้ๅธธใใทใผใฑใณในใฏใขใใซใฎๆๅคงๅ
ฅๅใตใคใบใซ็ญใใใตใใทใผใฑใณในใซๅๅฒใใใพใใใขใใซใฎๆๅคงๅ
ฅๅใตใคใบใ \\(k\\) ใฎๅ ดๅใใใผใฏใณ \\(x_t\\) ใฎๅฐคๅบฆใ่ฟไผผใใใซใฏใๅฎๅ
จใชใณใณใใญในใใงใฏใชใใใใใๅ
่กใใ \\(k-1\\) ใใผใฏใณใซใฎใฟๆกไปถใไปใใใใจใใใใพใใใทใผใฑใณในใฎใขใใซใฎใใผใใฌใญใทใใฃใ่ฉไพกใใ้ใ่ชๆ็ใงใใ้ๅน็ใชๆนๆณใฏใใทใผใฑใณในใๅๅฒใใๅใปใฐใกใณใใฎๅ่งฃๅฏพๆฐๅฐคๅบฆใ็ฌ็ซใซๅ็ฎใใใใจใงใใ
<img width="600" alt="ๅฉ็จๅฏ่ฝใชๅฎๅ
จใชใณใณใใญในใใๆดป็จใใชใ้ๆ้ฉใชPPL" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/>
ใใใฏๅใปใฐใกใณใใฎใใผใใฌใญใทใใฃใ1ๅใฎใใฉใฏใผใใในใง่จ็ฎใงใใใใใ่จ็ฎใ่ฟ
้ใงใใใ้ๅธธใใขใใซใฏใปใจใใฉใฎไบๆธฌในใใใใงใณใณใใญในใใๅฐใชใใใใๅฎๅ
จใซๅ ๅญๅ่งฃใใใใใผใใฌใญใทใใฃใฎๆชใ่ฟไผผใจใชใใ้ๅธธใใใ้ซใ๏ผๆชใ๏ผPPLใ่ฟใใพใใ
ไปฃใใใซใๅบๅฎ้ทใขใใซใฎPPLใฏในใฉใคใใฃใณใฐใฆใฃใณใใฆๆฆ็ฅใ็จใใฆ่ฉไพกใใในใใงใใใใใซใฏใใขใใซใๅไบๆธฌในใใใใงใใๅคใใฎใณใณใใญในใใๆใคใใใซใใณใณใใญในใใฆใฃใณใใฆใ็นฐใ่ฟใในใฉใคใใใใใจใใๆนๆณใๅซใพใใพใใ
<img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/>
ใใใฏใทใผใฑใณในใฎ็ขบ็ใฎใใๆญฃ็ขบใชๅ่งฃใซ่ฟใใใฎใงใใใ้ๅธธใฏใใๆๅฉใชในใณใขใ็ๆใใพใใๆฌ ็นใฏใใณใผใในๅ
ใฎๅใใผใฏใณใซๅฏพใใฆๅฅๅใฎๅๆนใในใๅฟ
่ฆใงใใๅฎ็จ็ใชๅฆฅๅๆกใฏใ1ใใผใฏใณใใคในใฉใคใใใไปฃใใใซใใใๅคงใใชในใใฉใคใใงใณใณใใญในใใ็งปๅใใในใใฉใคใๅใฎในใฉใคใใฃใณใฐใฆใฃใณใใฆใไฝฟ็จใใใใจใงใใใใใซใใใ่จ็ฎใใฏใใใซ้ซ้ใซ้ฒ่กใงใใไธๆนใงใใขใใซใซใฏๅในใใใใงไบๆธฌใ่กใใใใฎๅคงใใชใณใณใใญในใใๆไพใใใพใใ
## Example: Calculating perplexity with GPT-2 in ๐ค Transformers
GPT-2ใไฝฟ็จใใฆใใฎใใญใปในใใใขใณในใใฌใผใทใงใณใใฆใฟใพใใใใ
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
device = "cuda"
model_id = "gpt2-large"
model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
```
We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can load and encode the entire dataset in memory.
```python
from datasets import load_dataset
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")
```
๐ค Transformersใไฝฟ็จใใใจใๅ็ดใซ`input_ids`ใใขใใซใฎ`labels`ใจใใฆๆธกใใใจใงใๅใใผใฏใณใฎๅนณๅ่ฒ ใฎๅฏพๆฐๅฐคๅบฆใๆๅคฑใจใใฆ่ฟใใใพใใใใใใในใฉใคใใฃใณใฐใฆใฃใณใใฆใฎใขใใญใผใใงใฏใๅใคใใฌใผใทใงใณใงใขใใซใซๆธกใใใผใฏใณใซใชใผใใผใฉใใใใใใพใใ็งใใกใฏใใณใณใใญในใใจใใฆๆฑใฃใฆใใใใผใฏใณใฎๅฏพๆฐๅฐคๅบฆใๆๅคฑใซๅซใใใใใใพใใใใใฎใใใใใใใฎๅฏพ่ฑกใ `-100` ใซ่จญๅฎใใฆ็ก่ฆใใใใใใซใใพใใไปฅไธใฏใในใใฉใคใใ `512` ใจใใๅ ดๅใฎไพใงใใใใใซใใใใขใใซใฏไปปๆใฎใใผใฏใณใฎๆกไปถไปใใฎๅฐคๅบฆใ่จ็ฎใใ้ใซใๅฐใชใใจใใณใณใใญในใใจใใฆ 512 ใใผใฏใณใๆใคใใจใซใชใใพใ๏ผ512 ๅใฎๅใฎใใผใฏใณใๅฉ็จๅฏ่ฝใงใใๅ ดๅ๏ผใ
```python
import torch
from tqdm import tqdm
max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)
nlls = []
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
end_loc = min(begin_loc + max_length, seq_len)
trg_len = end_loc - prev_end_loc # may be different from stride on last loop
input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
target_ids = input_ids.clone()
target_ids[:, :-trg_len] = -100
with torch.no_grad():
outputs = model(input_ids, labels=target_ids)
# loss is calculated using CrossEntropyLoss which averages over valid labels
# N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
# to the left by 1.
neg_log_likelihood = outputs.loss
nlls.append(neg_log_likelihood)
prev_end_loc = end_loc
if end_loc == seq_len:
break
ppl = torch.exp(torch.stack(nlls).mean())
```
ในใใฉใคใ้ทใๆๅคงๅ
ฅๅ้ทใจๅใๅ ดๅใไธ่ฟฐใฎๆ้ฉใงใชใในใฉใคใใฃใณใฐใฆใฃใณใใฆๆฆ็ฅใจๅ็ญใงใใในใใฉใคใใๅฐใใใปใฉใใขใใซใฏๅไบๆธฌใ่กใ้ใซใใๅคใใฎใณใณใใญในใใๆใคใใใ้ๅธธใๅ ฑๅใใใๅฐ้ฃๅบฆ๏ผperplexity๏ผใๅไธใใพใใ
ไธ่จใฎใณใผใใ `stride = 1024` ใงๅฎ่กใใใจใใชใผใใผใฉใใใใชใ็ถๆ
ใงใ็ตๆใฎๅฐ้ฃๅบฆ๏ผperplexity๏ผใฏ `19.44` ใซใชใใพใใใใใฏ GPT-2 ใฎ่ซๆใซๅ ฑๅใใใ `19.93` ใจใปใผๅ็ญใงใใไธๆนใ`stride = 512` ใไฝฟ็จใใใใฎใใใซในใใฉใคใใฃใณใฐใฆใฃใณใใฆๆฆ็ฅใๆก็จใใใจใๅฐ้ฃๅบฆ๏ผperplexity๏ผใ `16.45` ใซๅไธใใพใใใใใฏใใๅฅฝๆ็ใชในใณใขใ ใใงใชใใใทใผใฑใณในใฎๅฐคๅบฆใฎ็ใฎ่ชๅทฑๅๅธฐๅ่งฃใซใใ่ฟใๆนๆณใง่จ็ฎใใใฆใใพใใ
<!-- hf_public_repos/transformers/docs/source/ja/serialization.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Export to ONNX
๐ค Transformersใขใใซใๆฌ็ช็ฐๅขใซๅฑ้ใใ้ใซใฏใใขใใซใ็นๆฎใชใฉใณใฟใคใ ใใใณใใผใใฆใงใขใง่ชญใฟ่พผใฟใๅฎ่กใงใใใใใซใใขใใซใใทใชใขใฉใคใบใใใๅฝขๅผใซใจใฏในใใผใใใใใจใๅฟ
่ฆใงใใใใใใฎๆฉๆตใๅใใใใจใใงใใใใจใใใใพใใ
๐ค OptimumใฏใTransformersใฎๆกๅผตๆฉ่ฝใงใใใPyTorchใพใใฏTensorFlowใใใขใใซใONNXใTFLiteใชใฉใฎใทใชใขใฉใคใบใใใๅฝขๅผใซใจใฏในใใผใใใใใจใๅฏ่ฝใซใใใexportersใใขใธใฅใผใซใๆไพใใฆใใพใใใพใใ๐ค Optimumใฏใๆๅคงใฎๅน็ใงใฟใผใฒใใใใผใใฆใงใขใงใขใใซใใใฌใผใใณใฐใใใณๅฎ่กใใใใใฎใใใฉใผใใณในๆ้ฉๅใใผใซใๆไพใใฆใใพใใ
ใใฎใฌใคใใงใฏใ๐ค Transformersใขใใซใ๐ค Optimumใไฝฟ็จใใฆONNXใซใจใฏในใใผใใใๆนๆณใ็คบใใฆใใใใขใใซใTFLiteใซใจใฏในใใผใใใๆนๆณใซใคใใฆใฏ[Export to TFLiteใใผใธ](tflite)ใๅ็
งใใฆใใ ใใใ
## Export to ONNX
[ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an intermediate representation) which represents the flow of data through the neural network.
By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).
ONNXๅฝขๅผใซใจใฏในใใผใใใใใขใใซใฏใไปฅไธใฎใใใซไฝฟ็จใงใใพใ๏ผ
- [ใฐใฉใๆ้ฉๅ](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization)ใ[้ๅญๅ](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)ใชใฉใฎใใฏใใใฏใไฝฟ็จใใฆๆจ่ซใฎใใใซๆ้ฉๅใ
- [`ORTModelForXXX`ใฏใฉใน](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort)ใไปใใฆONNX Runtimeใงๅฎ่กใใ๐ค Transformersใงใใชใใฟใฎ`AutoModel` APIใซๅพใใพใใ
- [ๆ้ฉๅใใใๆจ่ซใใคใใฉใคใณ](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)ใไปใใฆๅฎ่กใใ๐ค Transformersใฎ[`pipeline`]้ขๆฐใจๅใAPIใๆใฃใฆใใพใใ
๐ค Optimumใฏใ่จญๅฎใชใใธใงใฏใใๆดป็จใใฆONNXใจใฏในใใผใใใตใใผใใใฆใใใใใใใฎ่จญๅฎใชใใธใงใฏใใฏๅคใใฎใขใใซใขใผใญใใฏใใฃ็จใซไบๅใซไฝๆใใใฆใใใไปใฎใขใผใญใใฏใใฃใซใ็ฐกๅใซๆกๅผตใงใใใใใซ่จญ่จใใใฆใใพใใ
ไบๅใซไฝๆใใใ่จญๅฎใฎใชในใใซใคใใฆใฏใ[๐ค Optimumใใญใฅใกใณใ](https://huggingface.co/docs/optimum/exporters/onnx/overview)ใๅ็
งใใฆใใ ใใใ
๐ค TransformersใขใใซใONNXใซใจใฏในใใผใใใๆนๆณใฏ2ใคใใใพใใไปฅไธใงใฏไธกๆนใฎๆนๆณใ็คบใใพใ๏ผ
- export with ๐ค Optimum via CLI.
- export with ๐ค Optimum with `optimum.onnxruntime`.
### Exporting a ๐ค Transformers model to ONNX with CLI
๐ค TransformersใขใใซใONNXใซใจใฏในใใผใใใใซใฏใใพใ่ฟฝๅ ใฎไพๅญ้ขไฟใใคใณในใใผใซใใฆใใ ใใ๏ผ
```bash
pip install optimum[exporters]
```
ใในใฆใฎๅฉ็จๅฏ่ฝใชๅผๆฐใ็ขบ่ชใใใซใฏใ[๐ค Optimumใใญใฅใกใณใ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)ใๅ็
งใใฆใใ ใใใใพใใฏใใณใใณใใฉใคใณใงใใซใใ่กจ็คบใใใใจใใงใใพใ๏ผ
```bash
optimum-cli export onnx --help
```
๐ค Hubใใใขใใซใฎใใงใใฏใใคใณใใใจใฏในใใผใใใใซใฏใไพใใฐ `distilbert-base-uncased-distilled-squad` ใไฝฟใใใๅ ดๅใไปฅไธใฎใณใใณใใๅฎ่กใใฆใใ ใใ๏ผ
```bash
optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
```
้ฒ่ก็ถๆณใ็คบใใ็ตๆใฎ `model.onnx` ใไฟๅญใใใๅ ดๆใ่กจ็คบใใใญใฐใฏใไปฅไธใฎใใใซ่กจ็คบใใใใฏใใงใ๏ผ
```bash
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
-[โ] ONNX model output names match reference model (start_logits, end_logits)
- Validating ONNX Model output "start_logits":
-[โ] (2, 16) matches (2, 16)
-[โ] all values close (atol: 0.0001)
- Validating ONNX Model output "end_logits":
-[โ] (2, 16) matches (2, 16)
-[โ] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
```
ไธ่จใฎไพใฏ๐ค Hubใใใฎใใงใใฏใใคใณใใฎใจใฏในใใผใใ็คบใใฆใใพใใใญใผใซใซใขใใซใใจใฏในใใผใใใๅ ดๅใใพใใขใใซใฎ้ใฟใจใใผใฏใใคใถใฎใใกใคใซใๅใใใฃใฌใฏใใช๏ผ`local_path`๏ผใซไฟๅญใใฆใใ ใใใCLIใไฝฟ็จใใๅ ดๅใ๐ค Hubใฎใใงใใฏใใคใณใๅใฎไปฃใใใซ`model`ๅผๆฐใซ`local_path`ใๆธกใใ`--task`ๅผๆฐใๆๅฎใใฆใใ ใใใ[๐ค Optimumใใญใฅใกใณใ](https://huggingface.co/docs/optimum/exporters/task_manager)ใงใตใใผใใใใฆใใใฟในใฏใฎใชในใใ็ขบ่ชใงใใพใใ`task`ๅผๆฐใๆๅฎใใใฆใใชใๅ ดๅใใฟในใฏๅบๆใฎใใใใๆใใชใใขใใซใขใผใญใใฏใใฃใใใใฉใซใใง้ธๆใใใพใใ
```bash
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
```
ใจใฏในใใผใใใใ `model.onnx` ใใกใคใซใฏใONNXๆจๆบใใตใใผใใใ[ๅคใใฎใขใฏใปใฉใฌใผใฟ](https://onnx.ai/supported-tools.html#deployModel)ใฎ1ใคใงๅฎ่กใงใใพใใใใจใใฐใ[ONNX Runtime](https://onnxruntime.ai/)ใไฝฟ็จใใฆใขใใซใ่ชญใฟ่พผใฟใๅฎ่กใใๆนๆณใฏไปฅไธใฎ้ใใงใ๏ผ
```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```
๐ค HubใใTensorFlowใฎใใงใใฏใใคใณใใใจใฏในใใผใใใใใญใปในใฏใๅๆงใงใใไพใใฐใ[Keras organization](https://huggingface.co/keras-io)ใใ็ด็ฒใชTensorFlowใฎใใงใใฏใใคใณใใใจใฏในใใผใใใๆนๆณใฏไปฅไธใฎ้ใใงใ๏ผ
```bash
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
```
### Exporting a ๐ค Transformers model to ONNX with `optimum.onnxruntime`
CLIใฎไปฃใใใซใ๐ค TransformersใขใใซใONNXใซใใญใฐใฉใ ็ใซใจใฏในใใผใใใใใจใใงใใพใใไปฅไธใฎใใใซ่กใใพใ๏ผ
```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer
>>> model_checkpoint = "distilbert_base_uncased_squad"
>>> save_directory = "onnx/"
>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```
### Exporting a model for an unsupported architecture
็พๅจใจใฏในใใผใใงใใชใใขใใซใใตใใผใใใใใใซ่ฒข็ฎใใใๅ ดๅใใพใ[`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview)ใงใตใใผใใใใฆใใใใฉใใใ็ขบ่ชใใใตใใผใใใใฆใใชใๅ ดๅใฏ[๐ค Optimumใซ่ฒข็ฎ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute)ใใฆใใ ใใใ
### Exporting a model with `transformers.onnx`
<Tip warning={true}>
`transformers.onnx`ใฏใใฏใใกใณใใใณในใใใฆใใชใใใใใขใใซใไธ่จใง่ชฌๆใใใใใซ๐ค Optimumใงใจใฏในใใผใใใฆใใ ใใใใใฎใปใฏใทใงใณใฏๅฐๆฅใฎใใผใธใงใณใงๅ้คใใใพใใ
</Tip>
๐ค TransformersใขใใซใONNXใซใจใฏในใใผใใใใซใฏใ่ฟฝๅ ใฎไพๅญ้ขไฟใใคใณในใใผใซใใฆใใ ใใ๏ผ
```bash
pip install transformers[onnx]
```
`transformers.onnx`ใใใฑใผใธใPythonใขใธใฅใผใซใจใใฆไฝฟ็จใใฆใไบๅใซ็จๆใใใ่จญๅฎใไฝฟ็จใใฆใใงใใฏใใคใณใใใจใฏในใใผใใใๆนๆณใฏไปฅไธใฎ้ใใงใ๏ผ
```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```
ใใฎๆนๆณใฏใ`--model`ๅผๆฐใงๅฎ็พฉใใใใใงใใฏใใคใณใใฎONNXใฐใฉใใใจใฏในใใผใใใพใใ๐ค Hubใฎใใใใใฎใใงใใฏใใคใณใใพใใฏใญใผใซใซใซไฟๅญใใใใใงใใฏใใคใณใใๆธกใใใจใใงใใพใใใจใฏในใใผใใใใ`model.onnx`ใใกใคใซใฏใONNXๆจๆบใใตใใผใใใๅคใใฎใขใฏใปใฉใฌใผใฟใงๅฎ่กใงใใพใใไพใใฐใONNX Runtimeใไฝฟ็จใใฆใขใใซใ่ชญใฟ่พผใใงๅฎ่กใใๆนๆณใฏไปฅไธใฎ้ใใงใ๏ผ
```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
ๅฟ
่ฆใชๅบๅๅ๏ผไพ: `["last_hidden_state"]`๏ผใฏใๅใขใใซใฎONNXๆงๆใ็ขบ่ชใใใใจใงๅๅพใงใใพใใไพใใฐใDistilBERTใฎๅ ดๅใๆฌกใฎใใใซใชใใพใ๏ผ
```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig
>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
```
ใใใใ็ด็ฒใชTensorFlowใฎใใงใใฏใใคใณใใใใญใฐใฉใ ็ใซใจใฏในใใผใใใใใญใปในใฏใไปฅไธใฎใใใซๅๆงใงใ๏ผ
```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
ใญใผใซใซใซไฟๅญใใใใขใใซใใจใฏในใใผใใใๅ ดๅใใขใใซใฎ้ใฟใจใใผใฏใใคใถใฎใใกใคใซใๅใใใฃใฌใฏใใชใซไฟๅญใใฆใใ ใใ๏ผไพ๏ผ `local-pt-checkpoint`๏ผใใใฎๅพใ`transformers.onnx`ใใใฑใผใธใฎ `--model`ๅผๆฐใๅธๆใใใใฃใฌใฏใใชใซๅใใฆ่จญๅฎใใฆใONNXใซใจใฏในใใผใใใพใ๏ผ
```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
<!-- hf_public_repos/transformers/docs/source/ja/perf_train_gpu_one.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Methods and tools for efficient training on a single GPU
ใใฎใฌใคใใงใฏใใกใขใชใฎๅฉ็จๅน็ใๆ้ฉๅใใใใฌใผใใณใฐใ้ซ้ๅใใใใจใงใใขใใซใฎใใฌใผใใณใฐๅน็ใๅไธใใใใใใซไฝฟ็จใงใใๅฎ็จ็ใชใใฏใใใฏใ็ดนไปใใพใใใใฌใผใใณใฐไธญใซGPUใใฉใฎใใใซๅฉ็จใใใใใ็่งฃใใใๅ ดๅใฏใๆๅใซใ[ใขใใซใใฌใผใใณใฐใฎ่งฃๅๅญฆ](model_memory_anatomy)ใใฎใณใณใปใใใฌใคใใๅ็
งใใฆใใ ใใใใใฎใฌใคใใฏๅฎ็จ็ใชใใฏใใใฏใซ็ฆ็นใๅฝใฆใฆใใพใใ
<Tip>
่คๆฐใฎGPUใๆญ่ผใใใใทใณใซใขใฏใปในใงใใๅ ดๅใใใใใฎใขใใญใผใใฏไพ็ถใจใใฆๆๅนใงใใใใใซใ[ใใซใGPUใปใฏใทใงใณ](perf_train_gpu_many)ใง่ชฌๆใใใฆใใ่ฟฝๅ ใฎๆนๆณใๆดป็จใงใใพใใ
</Tip>
ๅคง่ฆๆจกใชใขใใซใใใฌใผใใณใฐใใ้ใๅๆใซ่ๆ
ฎใในใ2ใคใฎๅด้ขใใใใพใ๏ผ
* ใใผใฟใฎในใซใผใใใ/ใใฌใผใใณใฐๆ้
* ใขใใซใฎใใใฉใผใใณใน
ในใซใผใใใ๏ผใตใณใใซ/็ง๏ผใๆๅคงๅใใใใจใฏใใใฌใผใใณใฐใณในใใไฝๆธใใใพใใใใใฏไธ่ฌ็ใซใGPUใใงใใใ ใๅนๆ็ใซๆดป็จใใGPUใกใขใชใ้็ใพใงๅใใใใจใซใใฃใฆ้ๆใใใพใใๅธๆใใใใใใตใคใบใGPUใกใขใชใฎๅถ้ใ่ถ
ใใๅ ดๅใๅพ้
่็ฉใชใฉใฎใกใขใชๆ้ฉๅใใฏใใใฏใๅฝน็ซใกใพใใ
ใใใใๅฅฝใฟใฎใใใใตใคใบใใกใขใชใซๅใพใๅ ดๅใใกใขใชใๆ้ฉๅใใใใฏใใใฏใ้ฉ็จใใ็็ฑใฏใใใพใใใๅคงใใชใใใใตใคใบใไฝฟ็จใงใใใใใจใใฃใฆใใใใๅฟ
ใใใไฝฟ็จใในใใงใฏใใใพใใใใใคใใผใใฉใกใผใฟใฎ่ชฟๆดใฎไธ็ฐใจใใฆใใฉใฎใใใใตใคใบใๆ่ฏใฎ็ตๆใ็ใฟๅบใใใๆฑบๅฎใใใชใฝใผในใ้ฉๅใซๆ้ฉๅใใๅฟ
่ฆใใใใพใใ
ใใฎใฌใคใใงใซใใผใใใฆใใๆนๆณใจใใผใซใฏใใใฌใผใใณใฐใใญใปในใซไธใใๅฝฑ้ฟใซๅบใฅใใฆๅ้กใงใใพใ๏ผ
| Method/tool | Improves training speed | Optimizes memory utilization |
|:-----------------------------------------------------------|:------------------------|:-----------------------------|
| [Batch size choice](#batch-size-choice) | Yes | Yes |
| [Gradient accumulation](#gradient-accumulation) | No | Yes |
| [Gradient checkpointing](#gradient-checkpointing) | No | Yes |
| [Mixed precision training](#mixed-precision-training) | Yes | (No) |
| [Optimizer choice](#optimizer-choice) | Yes | Yes |
| [Data preloading](#data-preloading) | Yes | No |
| [DeepSpeed Zero](#deepspeed-zero) | No | Yes |
| [torch.compile](#using-torchcompile) | Yes | No |
<Tip>
**ๆณจๆ**: ๅฐใใชใขใใซใจๅคงใใชใใใใตใคใบใไฝฟ็จใใๅ ดๅใใกใขใชใฎ็ฏ็ดใ่กใใใพใใใๅคงใใชใขใใซใจๅฐใใชใใใใตใคใบใไฝฟ็จใใๅ ดๅใใกใขใชใฎไฝฟ็จ้ใๅขๅ ใใพใใ
</Tip>
ใใใใฎใใฏใใใฏใฏใ[`Trainer`]ใงใขใใซใใใฌใผใใณใฐใใฆใใๅ ดๅใใ็ด็ฒใชPyTorchใซใผใใ่จ่ฟฐใใฆใใๅ ดๅใฎไธกๆนใงๅฉ็จใงใใพใใ่ฉณ็ดฐใชๆ้ฉๅใฎ่จญๅฎใซใคใใฆใฏใ๐ค Accelerateใไฝฟ็จใใฆ[ใใใใฎๆ้ฉๅใ่จญๅฎใงใใพใ](#using-accelerate)ใ
ใใใใฎๆนๆณใๅๅใชๅฉ็ใใใใใใชใๅ ดๅใไปฅไธใฎใชใใทใงใณใๆค่จใงใใพใ๏ผ
* [ๅน็็ใชใฝใใใฆใงใขใใชใใซใใๅใใใซในใฟใ Dockerใณใณใใใฎไฝๆ](#efficient-software-prebuilds)
* [Mixture of Experts๏ผMoE๏ผใไฝฟ็จใใใขใใซใๆค่จ](#mixture-of-experts)
* [ใขใใซใBetterTransformerใซๅคๆใใฆใPyTorchใใคใใฃใใฎใขใใณใทใงใณใๆดป็จ](#using-pytorch-native-attention)
ๆๅพใซใใใใใฎๆนๆณใใพใ ๅๅใงใชใๅ ดๅใA100ใชใฉใฎใตใผใใผใฐใฌใผใGPUใซๅใๆฟใใฆใใใใใชใๆนๅใๅฟ
่ฆใใใใใพใใใใใใใฎใขใใญใผใใฏใใใซใGPUใปใใใขใใใงใๆๅนใงใใใ[ใใซใGPUใปใฏใทใงใณ](perf_train_gpu_many)ใง่ชฌๆใใใฆใใ่ฟฝๅ ใฎไธฆๅๅๆ่กใๆดป็จใงใใพใใ
## Batch size choice
ๆ้ฉใชใใใฉใผใใณในใๅฎ็พใใใใใซใ้ฉๅใชใใใใตใคใบใ็นๅฎใใใใจใใๅงใใพใใใใ2^Nใฎใตใคใบใฎใใใใตใคใบใจๅ
ฅๅ/ๅบๅใใฅใผใญใณๆฐใไฝฟ็จใใใใจใๆจๅฅจใใใฆใใพใใ้ๅธธใใใใฏ8ใฎๅๆฐใงใใใไฝฟ็จใใใใผใใฆใงใขใจใขใใซใฎใใผใฟๅใซไพๅญใใใใจใใใใพใใ
ๅ่ใพใงใซใNVIDIAใฎ[ๅ
ฅๅ/ๅบๅใใฅใผใญใณๆฐใฎๆจๅฅจไบ้
](https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#input-features)ใจ[ใใใใตใคใบ](https://docs.nvidia.com/deeplearning/performance/dl-performance-fully-connected/index.html#batch-size)ใ็ขบ่ชใใฆใใ ใใ๏ผใใใใฏGEMM๏ผไธ่ฌ็ใช่กๅไน็ฎ๏ผใซ้ขไธใใพใ๏ผใ
[Tensor Core่ฆไปถ](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc)ใงใฏใใใผใฟๅใจใใผใใฆใงใขใซๅบใฅใใฆไนๆฐใๅฎ็พฉใใใฆใใพใใใใจใใฐใfp16ใใผใฟๅใฎๅ ดๅใ64ใฎๅๆฐใไฝฟ็จใใใใจใๆจๅฅจใใใพใ๏ผA100 GPUใฎๅ ดๅใ้คใ๏ผใ
ๅฐใใชใใฉใกใผใฟใฎๅ ดๅใ[ๆฌกๅ
้ๅญๅๅนๆ](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#dim-quantization)ใ่ๆ
ฎใใฆใใ ใใใใใใฏใฟใคใชใณใฐใ่กใใใ้ฉๅใชไนๆฐใๅคงๅน
ใช้ซ้ๅใใใใใๅ ดๅใใใใพใใ
## Gradient Accumulation
**ๅพ้
่็ฉ**ใกใฝใใใฏใGPUใฎใกใขใชๅฎน้ใฎๅถ็ดใซใใฃใฆ่ชฒใใใใๅถ้ใ่ถ
ใใๅนๆ็ใชใใใใตใคใบใๅฎ็พใใใใใซใๅพ้
ใๅฐใใชๅขๅใง่จ็ฎใใใใจใ็ฎ็ใจใใฆใใพใใใใฎใขใใญใผใใงใฏใใขใใซใ้ ๆนๅใใใณ้ๆนๅใซๅฐใใชใใใใงๅๅพฉ็ใซ่จ็ฎใใใใฎ้็จใงๅพ้
ใ่็ฉใใพใใๅๅใชๆฐใฎๅพ้
ใ่็ฉใใใใใใขใใซใฎๆ้ฉๅในใใใใๅฎ่กใใพใใๅพ้
่็ฉใไฝฟ็จใใใใจใงใGPUใฎใกใขใชๅฎน้ใซใใๅถ็ดใ่ถ
ใใฆ**ๅนๆ็ใชใใใใตใคใบ**ใๅขใใใใจใใงใใพใใใๅพ้
่็ฉใซใใฃใฆๅฐๅ
ฅใใใ่ฟฝๅ ใฎ้ ๆนๅใใใณ้ๆนๅใฎ่จ็ฎใฏใใฌใผใใณใฐใใญใปในใ้
ใใใๅฏ่ฝๆงใใใใใจใซๆณจๆใๅฟ
่ฆใงใใ
`TrainingArguments`ใซ`gradient_accumulation_steps`ๅผๆฐใ่ฟฝๅ ใใใใจใงใๅพ้
่็ฉใๆๅนใซใใใใจใใงใใพใ๏ผ
```py
training_args = TrainingArguments(per_device_train_batch_size=1, gradient_accumulation_steps=4, **default_args)
```
ไธ่จใฎไพใงใฏใๅนๆ็ใชใใใใตใคใบใฏ4ใซใชใใพใใ
ใพใใใใฌใผใใณใฐใซใผใใๅฎๅ
จใซๅถๅพกใใใใใซ๐ค Accelerateใไฝฟ็จใใใใจใใงใใพใใ๐ค Accelerateใฎไพใฏใ[ใใฎใฌใคใใฎๅพๅใซใใ](#using-accelerate)ใง่ฆใคใใใใจใใงใใพใใ
ใงใใใ ใGPUใฎไฝฟ็จ็ใๆๅคง้ใซใใใใจใๆจๅฅจใใใฆใใพใใใ้ซใๅพ้
่็ฉในใใใๆฐใฏใใฌใผใใณใฐใฎ้
ๅปถใใใ้ก่ใซใใใใจใใใใพใใไปฅไธใฎไพใ่ใใฆใฟใพใใใใ`per_device_train_batch_size=4`ใฎๅ ดๅใๅพ้
่็ฉใไฝฟ็จใใชใใจGPUใฎๅถ้ใซ้ใใพใใใใใใตใคใบ64ใงใใฌใผใใณใฐใใใๅ ดๅใ`per_device_train_batch_size`ใ1ใซ่จญๅฎใใ`gradient_accumulation_steps`ใ64ใซ่จญๅฎใใชใใงใใ ใใใไปฃใใใซใ`per_device_train_batch_size=4`ใไฟๆใใ`gradient_accumulation_steps=16`ใ่จญๅฎใใพใใใใใซใใใๅใๅนๆ็ใชใใใใตใคใบใๅพใใใๅฉ็จๅฏ่ฝใชGPUใชใฝใผในใๅนๆ็ใซๆดป็จใใใพใใ
่ฉณ็ดฐใชๆ
ๅ ฑใซใคใใฆใฏใ[RTX-3090็จใฎใใใใตใคใบใจๅพ้
่็ฉใฎใใณใใใผใฏ](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537)ใใใณ[A100็จใฎใใใใตใคใบใจๅพ้
่็ฉใฎใใณใใใผใฏ](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957)ใๅ็
งใใฆใใ ใใใ
## Gradient Checkpointing
ไธ้จใฎๅคงใใชใขใใซใฏใใใใใตใคใบใ1ใซ่จญๅฎใใๅพ้
่็ฉใไฝฟ็จใใฆใใๅ ดๅใงใใกใขใชใฎๅ้กใซ็ด้ขใใใใจใใใใพใใใใใฏใใกใขใชในใใฌใผใธใๅฟ
่ฆใชไปใฎใณใณใใผใใณใใๅญๅจใใใใใงใใ
ๅๅใใในใใใฎใในใฆใฎใขใฏใใฃใใผใทใงใณใไฟๅญใใฆใ้ๅใใในใงๅพ้
ใ่จ็ฎใใใจใใใชใใฎใกใขใชใชใผใใผใใใใ็บ็ใใพใใ้ๅใใในใงๅฟ
่ฆใชใจใใซใขใฏใใฃใใผใทใงใณใ็ ดๆฃใใฆๅ่จ็ฎใใไปฃๆฟใขใใญใผใใฏใ่จ็ฎใชใผใใผใใใใๅคงๅน
ใซๅขๅ ใใใใฌใผใใณใฐใใญใปในใ้
ใใชใใพใใ
**ๅพ้
ใใงใใฏใใคใณใ**ใฏใใใใใฎ2ใคใฎใขใใญใผใใฎๆ่กทๆกใๆไพใใ่จ็ฎใฐใฉใๅ
จไฝใงๆฆ็ฅ็ใซ้ธๆใใใใขใฏใใฃใใผใทใงใณใฎใฟใไฟๅญใใใใใๅพ้
ใๅ่จ็ฎใใๅฟ
่ฆใใใใขใฏใใฃใใผใทใงใณใฎไธ้จใ ใใ็ฏ็ดใใพใใๅพ้
ใใงใใฏใใคใณใใฎ่ฉณ็ดฐใซใคใใฆใฏใ[ใใฎ็ด ๆดใใใ่จไบ](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9)ใๅ็
งใใฆใใ ใใใ
[`Trainer`]ใงๅพ้
ใใงใใฏใใคใณใใๆๅนใซใใใซใฏใ[`TrainingArguments`]ใซๅฏพๅฟใใใใฉใฐใๆธกใใพใ๏ผ
```py
training_args = TrainingArguments(
per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, **default_args
)
```
ไปฃๆฟๆๆฎตใจใใฆใ๐ค Accelerateใไฝฟ็จใใใใจใใงใใพใ - ๐ค Accelerateใฎไพใฏ[ใใฎใฌใคใใฎใใใซๅพใใซใใใพใ](#using-accelerate)ใ
<Tip>
ๅพ้
ใใงใใฏใใคใณใใไฝฟ็จใใใใจใงใกใขใชๅน็ใๅไธใใๅ ดๅใใใใพใใใใใฌใผใใณใฐ้ๅบฆใฏ็ด20%้
ใใชใใใจใซๆณจๆใใฆใใ ใใใ
</Tip>
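In plain PyTorch, the same idea can be sketched with `torch.utils.checkpoint` (an illustrative toy stack of layers, not a real model):

```python
import torch
from torch.utils.checkpoint import checkpoint

layers = torch.nn.ModuleList(torch.nn.Linear(16, 16) for _ in range(4))
x = torch.randn(2, 16, requires_grad=True)

h = x
for layer in layers:
    # intermediate activations are recomputed during backward instead of stored
    h = checkpoint(layer, h, use_reentrant=False)
h.sum().backward()
```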
## Mixed precision training
**ๆททๅ็ฒพๅบฆใใฌใผใใณใฐ**ใฏใใขใใซใฎใใฌใผใใณใฐใฎ่จ็ฎๅน็ใๆ้ฉๅใใๆ่กใงใ็นๅฎใฎๅคๆฐใซๅฏพใใฆไฝ็ฒพๅบฆใฎๆฐๅคใใฉใผใใใใๅฉ็จใใพใใๅพๆฅใใปใจใใฉใฎใขใใซใฏๅคๆฐใ่กจ็พใๅฆ็ใใใใใซ32ใใใๆตฎๅๅฐๆฐ็น็ฒพๅบฆ๏ผfp32ใพใใฏfloat32๏ผใไฝฟ็จใใฆใใพใใใใใใใในใฆใฎๅคๆฐใๆญฃ็ขบใช็ตๆใๅพใใใใซใใฎ้ซ็ฒพๅบฆใฎใฌใใซใๅฟ
่ฆใจใใชใๅ ดๅใใใใพใใไธ้จใฎๅคๆฐใฎ็ฒพๅบฆใ16ใใใๆตฎๅๅฐๆฐ็น๏ผfp16ใพใใฏfloat16๏ผใชใฉใฎใใไฝใๆฐๅคใใฉใผใใใใซๅคๆดใใใใจใงใ่จ็ฎใ้ซ้ๅใงใใพใใใใฎใขใใญใผใใงใฏใไธ้จใฎ่จ็ฎใฏๅ็ฒพๅบฆใง่กใใใไธ้จใฏใพใ ๅฎๅ
จใช็ฒพๅบฆใง่กใใใใใใใใฎใขใใญใผใใฏๆททๅ็ฒพๅบฆใใฌใผใใณใฐใจๅผใฐใใฆใใพใใ
ๆใไธ่ฌ็ใซๆททๅ็ฒพๅบฆใใฌใผใใณใฐใฏใfp16๏ผfloat16๏ผใใผใฟๅใไฝฟ็จใใฆๅฎ็พใใใพใใใไธ้จใฎGPUใขใผใญใใฏใใฃ๏ผใขใณใใขใขใผใญใใฏใใฃใชใฉ๏ผใงใฏbf16ใใใณtf32๏ผCUDAๅ
้จใใผใฟๅ๏ผใใผใฟๅใๆไพใใใฆใใพใใใใใใฎใใผใฟๅใฎ้ใใซใคใใฆ่ฉณใใ็ฅใใใๅ ดๅใฏใ[NVIDIAใฎใใญใฐ](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)ใ็ขบ่ชใใฆใใ ใใใ
### fp16
ๆททๅ็ฒพๅบฆใใฌใผใใณใฐใฎไธปใชๅฉ็นใฏใๅ็ฒพๅบฆ๏ผfp16๏ผใงใขใฏใใฃใใผใทใงใณใไฟๅญใใใใจใใๅพใใใพใใ
ๅพ้
ใๅ็ฒพๅบฆใง่จ็ฎใใใพใใใๆ้ฉๅในใใใใงใฏๅใณๅฎๅ
จ็ฒพๅบฆใซๅคๆใใใใใใใใใงใฏใกใขใชใฏไฟๅญใใใพใใใ
ๆททๅ็ฒพๅบฆใใฌใผใใณใฐใฏ่จ็ฎ้ๅบฆใๅไธใใใไธๆนใ็นใซๅฐใใชใใใใตใคใบใฎๅ ดๅใใใๅคใใฎGPUใกใขใชใไฝฟ็จใใใใจใใใใพใใ
ใใใฏใใขใใซใGPUไธใซ16ใใใใใใณ32ใใใ็ฒพๅบฆใฎไธกๆนใงๅญๅจใใใใใงใ๏ผGPUไธใฎๅ
ใฎใขใใซใฎ1.5ๅ๏ผใ
ๆททๅ็ฒพๅบฆใใฌใผใใณใฐใๆๅนใซใใใซใฏใ`fp16`ใใฉใฐใ`True`ใซ่จญๅฎใใพใ๏ผ
```py
training_args = TrainingArguments(per_device_train_batch_size=4, fp16=True, **default_args)
```
๐ค Accelerateใไฝฟ็จใใๅ ดๅใ๐ค Accelerateใฎไพใฏ[ใใฎใฌใคใใฎใใใซๅพใใซใใใพใ](#using-accelerate)ใ
### BF16
Ampereใพใใฏใใไปฅ้ใฎใใผใใฆใงใขใซใขใฏใปในใงใใๅ ดๅใๆททๅ็ฒพๅบฆใใฌใผใใณใฐใจ่ฉไพกใซbf16ใไฝฟ็จใงใใพใใbf16ใฏfp16ใใใ็ฒพๅบฆใๅฃใใพใใใใฏใใใซๅคงใใชๅ็็ฏๅฒใๆใฃใฆใใพใใfp16ใงใฏใๆใคใใจใใงใใๆๅคงใฎๆฐใฏ `65535` ใงใใใใใใ่ถ
ใใๆฐๅคใฏใชใผใใผใใญใผใๅผใ่ตทใใใพใใไธๆนใbf16ใฎๆฐๅคใฏ `3.39e+38` ใฎใใใซๅคงใใใใใใฏfp32ใจใปใผๅใใงใ - ใฉใกใใๆฐๅค็ฏๅฒใซ8ใใใใไฝฟ็จใใฆใใใใใงใใ
BF16ใๆๅนใซใใใซใฏใ๐ค Trainerใงไปฅไธใฎใใใซ่จญๅฎใใพใ๏ผ
```python
training_args = TrainingArguments(bf16=True, **default_args)
```
### TF32
ใขใณใใขใใผใใฆใงใขใฏใtf32ใจใใ็นๅฅใชใใผใฟๅใไฝฟ็จใใพใใใใใฏใfp32ใจๅใๆฐๅค็ฏๅฒ๏ผ8ใใใ๏ผใๆใฃใฆใใพใใใ23ใใใใฎ็ฒพๅบฆใงใฏใชใใ10ใใใใฎ็ฒพๅบฆ๏ผfp16ใจๅใ๏ผใๆใกใๅ่จใง19ใใใใใไฝฟ็จใใพใใใใใใฏ้ๅธธใฎfp32ใใฌใผใใณใฐใใใณๆจ่ซใณใผใใไฝฟ็จใใtf32ใตใใผใใๆๅนใซใใใใจใงใๆๅคง3ๅใฎในใซใผใใใใฎๅไธใๅพใใใ็นใงใ้ญๆณใฎใใใใงใใ่กใๅฟ
่ฆใใใใฎใฏใๆฌกใฎใณใผใใ่ฟฝๅ ใใใ ใใงใ๏ผ
```
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```
ไฝฟ็จใใใฆใใGPUใใขใณใใขใทใชใผใบใงใใใจไปฎๅฎใใCUDAใฏๅฏ่ฝใช้ใtf32ใไฝฟ็จใใใใใซ่ชๅ็ใซๅใๆฟใใพใใ
[NVIDIAใฎ็ ็ฉถใซใใใฐ](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)ใใปใจใใฉใฎๆฉๆขฐๅญฆ็ฟใใฌใผใใณใฐใฏใผใฏใญใผใใฏtf32ใใฌใผใใณใฐใจfp32ใใฌใผใใณใฐใงๅใ้ฃ่งฃๅบฆใจๅๆใ็คบใใพใใใใงใซfp16ใพใใฏbf16ๆททๅ็ฒพๅบฆใไฝฟ็จใใฆใใๅ ดๅใในใซใผใใใใฎๅไธใซๅฝน็ซใคใใจใใใใพใใ
๐ค Trainerใงใใฎใขใผใใๆๅนใซใใใใจใใงใใพใ๏ผ
```python
TrainingArguments(tf32=True, **default_args)
```
<Tip>
tf32ใฏ`tensor.to(dtype=torch.tf32)`ใไปใใฆ็ดๆฅใขใฏใปในใงใใพใใใใใใฏๅ
้จใฎCUDAใใผใฟๅใงใใtf32ใใผใฟๅใไฝฟ็จใใใซใฏใ`torch>=1.7`ใๅฟ
่ฆใงใใ
</Tip>
tf32ใจไปใฎ็ฒพๅบฆใซ้ขใใ่ฉณ็ดฐใชๆ
ๅ ฑใซใคใใฆใฏใไปฅไธใฎใใณใใใผใฏใๅ็
งใใฆใใ ใใ๏ผ
[RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004390803)ใใใณ
[A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189)ใ
## Flash Attention 2
transformersใงFlash Attention 2็ตฑๅใไฝฟ็จใใใใจใงใใใฌใผใใณใฐใฎในใซใผใใใใๅไธใใใใใจใใงใใพใใFlash Attention 2ใขใธใฅใผใซใๅซใใขใใซใฎ่ชญใฟ่พผใฟๆนๆณใซใคใใฆใฏใ[single GPU section](./perf_infer_gpu_one#Flash-Attention-2)ใฎ้ฉๅใชใปใฏใทใงใณใ็ขบ่ชใใฆ่ฉณ็ดฐใๅญฆใณใพใใใใ
## Choosing an optimizer

The most common optimizer used to train Transformer models is Adam or AdamW (Adam with weight decay). Adam achieves good convergence by storing the rolling averages of previous gradients; however, this adds an additional memory footprint on the order of the number of model parameters. To remedy this, you can use an alternative optimizer. For example, if you have [NVIDIA/apex](https://github.com/NVIDIA/apex) installed, `adamw_apex_fused` gives you the fastest training experience among all supported AdamW optimizers.

[`Trainer`] integrates a variety of optimizers that can be used out of the box: `adamw_hf`, `adamw_torch`, `adamw_torch_fused`, `adamw_apex_fused`, `adamw_anyprecision`, `adafactor`, and `adamw_bnb_8bit`. More optimizers can be plugged in via third-party implementations.
Let's take a closer look at two alternatives to the AdamW optimizer:

1. `adafactor`, which is available in [`Trainer`]
2. `adamw_bnb_8bit`, which is also available in Trainer, but with a third-party integration shown below for demonstration.
For comparison, for a 3B-parameter model (e.g. "t5-3b"):

* A standard AdamW optimizer needs 24GB of GPU memory because it uses 8 bytes per parameter (8 * 3 => 24GB).
* The Adafactor optimizer needs a bit more than 12GB. It uses slightly more than 4 bytes per parameter, so 4 * 3 plus a little extra.
* The 8-bit BNB quantized optimizer uses only 6GB (2 * 3) if all optimizer states are quantized.
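The arithmetic behind those numbers is simple bytes-per-parameter bookkeeping. A quick sketch for a 3B-parameter model (optimizer state only; the model weights and gradients are extra):

```python
params = 3_000_000_000  # a 3B-parameter model such as "t5-3b"

# Approximate bytes of optimizer state per parameter:
bytes_per_param = {
    "AdamW (two fp32 moments)": 8,        # 4 bytes each for momentum and variance
    "Adafactor (factored 2nd moment)": 4,  # slightly more than 4 in practice
    "8-bit Adam (quantized moments)": 2,   # 1 byte each for the two moments
}

for name, b in bytes_per_param.items():
    print(f"{name}: {params * b / 1e9:.0f} GB")
# AdamW -> 24 GB, Adafactor -> 12 GB, 8-bit Adam -> 6 GB
```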
### Adafactor

Adafactor doesn't store rolling averages for each element of the weight matrices. Instead, it keeps aggregated information (sums of the rolling averages row-wise and column-wise), which significantly reduces its footprint. However, compared to Adam, Adafactor may converge more slowly in certain cases. You can switch to Adafactor by setting `optim="adafactor"` in [`TrainingArguments`]:
```py
training_args = TrainingArguments(per_device_train_batch_size=4, optim="adafactor", **default_args)
```
Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training), you can see up to a 3x improvement while maintaining throughput! However, as mentioned before, Adafactor's convergence can be worse than Adam's.
### 8-bit Adam

Instead of aggregating optimizer states like Adafactor, 8-bit Adam keeps the full state and quantizes it. Quantization means storing the state at lower precision and dequantizing it only for the optimization step. This is similar to the idea behind mixed precision training.
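The core idea — keep the state in 8 bits and dequantize only around the update — can be illustrated with a toy absmax quantization round trip (a simplified sketch; bitsandbytes actually uses a more sophisticated block-wise scheme):

```python
def quantize_absmax(values):
    """Map floats to int8 codes in [-127, 127] plus one float scale per tensor."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    """Recover approximate floats from the int8 codes."""
    return [q * scale for q in qvalues]

state = [0.02, -0.5, 0.31, -0.007]  # e.g. one slice of an optimizer moment
q, scale = quantize_absmax(state)   # stored as 1 byte per value + one scale
restored = dequantize(q, scale)     # dequantized just for the update step

# Each restored value is within half a quantization step of the original
assert all(abs(a - b) <= scale / 2 for a, b in zip(state, restored))
```

The memory win comes from storing `q` (1 byte per element) instead of the fp32 state (4 bytes per element), at the cost of a small, bounded rounding error.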
To use `adamw_bnb_8bit`, you simply need to set `optim="adamw_bnb_8bit"` in [`TrainingArguments`]:
```py
training_args = TrainingArguments(per_device_train_batch_size=4, optim="adamw_bnb_8bit", **default_args)
```
However, for demonstration purposes, we can also use a third-party implementation of the 8-bit optimizer to see how it can be integrated.

First, follow the installation guide in the GitHub [repo](https://github.com/TimDettmers/bitsandbytes) to install the `bitsandbytes` library that implements the 8-bit Adam optimizer.

Next, you need to initialize the optimizer. This involves two steps:

* First, group the model's parameters into two groups - one to which weight decay should be applied, and one to which it should not. Usually, biases and layer-norm parameters are not weight-decayed.
* Then do some argument housekeeping to use the same parameters as the previously used AdamW optimizer.
```py
import bitsandbytes as bnb
from torch import nn
from transformers.trainer_pt_utils import get_parameter_names
training_args = TrainingArguments(per_device_train_batch_size=4, **default_args)
decay_parameters = get_parameter_names(model, [nn.LayerNorm])
decay_parameters = [name for name in decay_parameters if "bias" not in name]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if n in decay_parameters],
"weight_decay": training_args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if n not in decay_parameters],
"weight_decay": 0.0,
},
]
adam_bnb_optim = bnb.optim.Adam8bit(
optimizer_grouped_parameters,
betas=(training_args.adam_beta1, training_args.adam_beta2),
eps=training_args.adam_epsilon,
lr=training_args.learning_rate,
)
```
Finally, pass the custom optimizer as an argument to the `Trainer`:
```py
trainer = Trainer(model=model, args=training_args, train_dataset=ds, optimizers=(adam_bnb_optim, None))
```
Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training), you can expect a memory improvement of about 3x, on par with using Adafactor, and even slightly higher throughput.
### multi_tensor

pytorch-nightly introduced `torch.optim._multi_tensor`, which should significantly speed up optimizers in situations with lots of small feature tensors. It should eventually become the default, but if you want to experiment with it sooner, take a look at this GitHub [issue](https://github.com/huggingface/transformers/issues/9965).
## Data preloading

One of the important requirements to reach great training speed is the ability to feed the GPU at the maximum speed it can handle. By default, everything happens in the main process, and it might not be able to read the data from disk fast enough, creating a bottleneck that leads to GPU under-utilization. Configure the following arguments to reduce the bottleneck:

- `DataLoader(pin_memory=True, ...)` - preloads the data into pinned memory on the CPU, which typically leads to much faster transfers from CPU to GPU memory.
- `DataLoader(num_workers=4, ...)` - spawns several workers to preload data faster. During training, watch the GPU utilization statistics; if it is far from 100%, experiment with increasing the number of workers. Of course, the problem could be elsewhere, so more workers won't necessarily lead to better performance.

When using [`Trainer`], the corresponding [`TrainingArguments`] are `dataloader_pin_memory` (`True` by default) and `dataloader_num_workers` (defaults to `0`).
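The effect of `num_workers` can be pictured as a background pool that keeps the next batches ready while the main loop consumes the current one. A stdlib-only sketch of that prefetching idea (the real `DataLoader` uses separate worker processes, not threads, but the overlap principle is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def load_batch(i):
    # Stand-in for reading and decoding one batch from disk
    return [i * 4 + j for j in range(4)]

with ThreadPoolExecutor(max_workers=2) as pool:  # analogous to num_workers=2
    # map() submits work eagerly, so batch i+1 is loading while batch i is consumed
    for batch in pool.map(load_batch, range(3)):
        print(batch)
```

Because loading overlaps with consumption, the consumer (the GPU, in real training) spends less time waiting on I/O.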
## DeepSpeed ZeRO

DeepSpeed is an open-source deep learning optimization library that is integrated with 🤗 Transformers and 🤗 Accelerate.
It provides a wide range of features and optimizations designed to improve the efficiency and scalability of large-scale deep learning training.

If your model fits onto a single GPU and you have enough space to fit a small batch size, you don't need to use DeepSpeed, as it will only slow things down. However, if the model doesn't fit onto a single GPU, or you can't fit a small batch, you can leverage DeepSpeed ZeRO + CPU Offload or NVMe Offload. In this case, you need to [install the library separately](main_classes/deepspeed#installation), then follow one of the guides to create a configuration file and launch DeepSpeed:

* For an in-depth guide on the DeepSpeed integration with [`Trainer`], review the [corresponding documentation](main_classes/deepspeed), in particular the section on [deployment with a single GPU](main_classes/deepspeed#deployment-with-one-gpu). Some adjustments are required to use DeepSpeed in a notebook, so please also take a look at the [corresponding guide](main_classes/deepspeed#deployment-in-notebooks).
* If you prefer to use 🤗 Accelerate, refer to the [🤗 Accelerate DeepSpeed guide](https://huggingface.co/docs/accelerate/en/usage_guides/deepspeed).
## Using torch.compile

PyTorch 2.0 introduced a new compile function. It doesn't require any modification to existing PyTorch code; you can optimize your code by adding a single line: `model = torch.compile(model)`.

If you use [`Trainer`], you only need to pass the `torch_compile` option in [`TrainingArguments`]:
```python
training_args = TrainingArguments(torch_compile=True, **default_args)
```
`torch.compile` uses Python's frame evaluation API to automatically create a graph from existing PyTorch programs. After capturing the graph, different backends can be deployed to lower the graph to an optimized engine.
You can find more details and benchmarks in the [PyTorch documentation](https://pytorch.org/get-started/pytorch-2.0/).

`torch.compile` has a growing list of backends with optional dependencies, which you can check by calling `torchdynamo.list_backends()`. Some of the most commonly used backends are:
**Debugging backends**:
* `dynamo.optimize("eager")` - Uses PyTorch to run the extracted GraphModule. This is quite useful for debugging TorchDynamo issues.
* `dynamo.optimize("aot_eager")` - Uses AotAutograd with no compiler, i.e. just running PyTorch eager on AotAutograd's extracted forward and backward graphs. This is useful for debugging, but speedups should not be expected.
**Training & inference backends**:
* `dynamo.optimize("inductor")` - Uses the TorchInductor backend with AotAutograd and cudagraphs, leveraging code-generated Triton kernels. [Read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)
* `dynamo.optimize("nvfuser")` - Uses nvFuser with TorchScript. [Read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
* `dynamo.optimize("aot_nvfuser")` - Uses nvFuser with AotAutograd. [Read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-1-nvfuser-and-its-primitives/593)
* `dynamo.optimize("aot_cudagraphs")` - Uses cudagraphs with AotAutograd. [Read more](https://github.com/pytorch/torchdynamo/pull/757)
**Inference-only backends**:
* `dynamo.optimize("ofi")` - Uses Torchscript's `optimize_for_inference`. [Read more](https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html)
* `dynamo.optimize("fx2trt")` - Uses Nvidia TensorRT for inference optimizations. [Read more](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html)
* `dynamo.optimize("onnxrt")` - Uses ONNX Runtime for inference on CPU/GPU. [Read more](https://onnxruntime.ai/)
* `dynamo.optimize("ipex")` - Uses IPEX for inference on CPU. [Read more](https://github.com/intel/intel-extension-for-pytorch)

For an example of using `torch.compile` with 🤗 Transformers, check out this [blog post](https://www.philschmid.de/getting-started-pytorch-2-0-transformers).
## Using 🤗 Accelerate

With [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) you can use the above methods while gaining full control over the training loop, and can essentially write the loop in pure PyTorch.

Suppose you have combined the methods in [`TrainingArguments`] like so:
```py
training_args = TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
fp16=True,
**default_args,
)
```
The full example training loop with 🤗 Accelerate is only a handful of lines of code long:
```py
from accelerate import Accelerator
from torch.utils.data.dataloader import DataLoader
dataloader = DataLoader(ds, batch_size=training_args.per_device_train_batch_size)
if training_args.gradient_checkpointing:
model.gradient_checkpointing_enable()
accelerator = Accelerator(fp16=training_args.fp16)
model, optimizer, dataloader = accelerator.prepare(model, adam_bnb_optim, dataloader)
model.train()
for step, batch in enumerate(dataloader, start=1):
loss = model(**batch).loss
loss = loss / training_args.gradient_accumulation_steps
accelerator.backward(loss)
if step % training_args.gradient_accumulation_steps == 0:
optimizer.step()
optimizer.zero_grad()
```
First we wrap the dataset in a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).
Then we can enable gradient checkpointing by calling the model's [`~PreTrainedModel.gradient_checkpointing_enable`] method.
When we initialize the [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator), we can specify whether we want to use mixed precision training, and it will take care of it for us in the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) call. During the `prepare` call, the dataloader will also be distributed across workers should we use multiple GPUs. We use the same [8-bit optimizer](#8-bit-adam) from the earlier example.

Finally, we can add the main training loop. Note that the `backward` call is handled by 🤗 Accelerate. We can also see how gradient accumulation works: we normalize the loss so that we get the average at the end of accumulation, and once we have enough steps we run the optimization.
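The loss normalization is what makes accumulated micro-batches equivalent to one big batch: dividing each micro-batch loss by the number of accumulation steps means the summed contributions equal the mean loss over all the samples. In plain numbers:

```python
micro_losses = [2.0, 1.0, 4.0, 3.0]  # losses from four micro-batches
accum_steps = len(micro_losses)

# What the loop does: scale each loss, then sum (backward() sums gradients)
accumulated = sum(loss / accum_steps for loss in micro_losses)

# Equivalent big-batch objective: the mean over the same samples
big_batch = sum(micro_losses) / accum_steps

assert accumulated == big_batch == 2.5
```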
Implementing these optimization techniques with 🤗 Accelerate only takes a handful of lines of code and comes with the benefit of more flexibility in the training loop. For full documentation of all the features, have a look at the [Accelerate documentation](https://huggingface.co/docs/accelerate/index).
## Efficient Software Prebuilds

PyTorch's [pip and conda builds](https://pytorch.org/get-started/locally/#start-locally) come prebuilt with the cuda toolkit, which is enough to run PyTorch, but it is insufficient if you need to build cuda extensions.

At times, additional effort may be required to prebuild some components. For instance, this is the case if you are using libraries like `apex` that don't come precompiled. In other situations, figuring out how to install the right cuda toolkit system-wide can be complicated.
To address these scenarios, PyTorch and NVIDIA released a new version of the NGC docker container which already comes with everything prebuilt. You just need to install your programs on it, and it will run out of the box.

This approach is also useful if you want to tweak the PyTorch source and/or make a new customized build.
To find the docker image version you want, start with the [PyTorch release notes](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/) and choose one of the latest monthly releases. Go into the release notes for the desired release, check that the environment's components match your needs (including the NVIDIA Driver requirements!), and then at the very top of that document go to the corresponding NGC page. If for some reason you get lost, here is [the index of all PyTorch NGC images](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch).

Next, follow the instructions to download and deploy the docker image.
## Mixture of Experts

Some recent papers have reported a 4-5x training speedup and faster inference by integrating Mixture of Experts (MoE) into Transformer models.

Since it has been discovered that more parameters lead to better performance, this technique allows increasing the number of parameters by an order of magnitude without increasing training costs.

In this approach, every other FFN layer is replaced with a MoE layer, which consists of many experts with a gated function that trains each expert in a balanced way depending on the input token's position in the sequence.



(source: [GLAM](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html))

The main drawback of this approach is that it requires staggering amounts of GPU memory - almost an order of magnitude more than its dense equivalent - and this is directly reflected in the much higher memory requirements. Various distillation techniques and approaches have been proposed to overcome them.

There is a direct trade-off, though: instead of dozens or hundreds of experts, you can use just a few experts with a 2-3x smaller base model, resulting in a 5x smaller model, moderately increasing the training speed while moderately increasing the memory requirements.

Most related papers and implementations are built around Tensorflow/TPUs:

- [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668)
- [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961)
- [GLaM: Generalist Language Model (GLaM)](https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html)

For Pytorch, DeepSpeed has built one as well: [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https://arxiv.org/abs/2201.05596), [Mixture of Experts](https://www.deepspeed.ai/tutorials/mixture-of-experts/) - blog posts: [1](https://www.microsoft.com/en-us/research/blog/deepspeed-powers-8x-larger-moe-model-training-with-high-performance/), [2](https://www.microsoft.com/en-us/research/publication/scalable-and-efficient-moe-training-for-multitask-multilingual-models/). For a concrete deployment with large Transformer-based natural language generation models, see this [blog post](https://www.deepspeed.ai/2021/12/09/deepspeed-moe-nlg.html) and the [Megatron-Deepspeed branch](https://github.com/microsoft/Megatron-DeepSpeed/tree/moe-training).
## Using PyTorch native attention and Flash Attention

PyTorch 2.0 released the native [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA), which enables using fused GPU kernels such as [memory-efficient attention](https://arxiv.org/abs/2112.05682) and [flash attention](https://arxiv.org/abs/2205.14135).

After installing the [`optimum`](https://github.com/huggingface/optimum) package, the relevant internal modules can be replaced so that PyTorch's native attention is used, by setting:

```python
model = model.to_bettertransformer()
```

Once converted, train the model as usual.

<Tip warning={true}>

The PyTorch-native `scaled_dot_product_attention` operator can only dispatch to Flash Attention if no `attention_mask` is provided.

By default, in training mode, the BetterTransformer integration drops the mask support and can only be used for training that does not require a padding mask for batched training. This is the case, for example, for masked language modeling or causal language modeling. BetterTransformer is not suited for fine-tuning models on tasks that require a padding mask.

</Tip>

Check out this [blog post](https://pytorch.org/blog/out-of-the-box-acceleration/) to learn more about acceleration and memory savings with SDPA.
<!--โ ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Community
This page regroups resources around 🤗 Transformers developed by the community.
## Community resources:
| Resource | Description | Author |
|:----------|:-------------|------:|
| [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | A set of flashcards based on the [Transformers Docs Glossary](glossary) that has been put into a form which can be easily learned/revised using [Anki](https://apps.ankiweb.net/), an open-source, cross-platform app specifically designed for long-term knowledge retention. See this [introductory video on how to use the flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) |
## Community notebooks:
| Notebook | Description | Author | |
|:----------|:-------------|:-------------|------:|
| [Fine-tune a pre-trained Transformer to generate lyrics](https://github.com/AlekseyKorshuk/huggingartists) | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) |
| [Train T5 in Tensorflow 2](https://github.com/snapthat/TF-T5-text-to-text) | How to train T5 for any task using Tensorflow 2. This notebook demonstrates a Question & Answer task implemented on SQUAD using Tensorflow 2. | [Muhammad Harris](https://github.com/HarrisDePerceptron) | [](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) |
| [Train T5 on TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | How to train T5 on SQUAD with Transformers and Nlp | [Suraj Patil](https://github.com/patil-suraj) | [](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) |
| [Fine-tune T5 for Classification and Multiple Choice](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | How to fine-tune T5 for classification and multiple-choice tasks using a text-to-text format with PyTorch Lightning | [Suraj Patil](https://github.com/patil-suraj) | [](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) |
| [Fine-tune DialoGPT on new datasets and languages](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | [Nathan Cooper](https://github.com/ncoop57) | [](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) |
| [Long Sequence Modeling with Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | How to train on sequences as long as 500,000 tokens with Reformer | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) |
| [Fine-tune BART for Summarization](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | How to fine-tune BART for summarization with Blurr | [Wayde Gilliam](https://ohmeow.com/) | [](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) |
| [Fine-tune a pre-trained Transformer on anyone's tweets](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | [Boris Dayma](https://github.com/borisdayma) | [](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) |
| [Optimize 🤗 Hugging Face models with Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | A complete tutorial showcasing the Weights & Biases integration with Hugging Face | [Boris Dayma](https://github.com/borisdayma) | [](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) |
| [Pretrain Longformer](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | How to build a "long" version of existing pretrained models | [Iz Beltagy](https://beltagy.net) | [](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) |
| [Fine-tune Longformer for QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | How to fine-tune a Longformer model for a QA task | [Suraj Patil](https://github.com/patil-suraj) | [](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) |
| [Evaluate a model with 🤗nlp](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | How to evaluate Longformer on TriviaQA with `nlp` | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) |
| [Fine-tune T5 for Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | [Lorenzo Ampil](https://github.com/enzoampil) | [](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) |
| [Fine-tune DistilBert for multiclass classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | How to fine-tune DistilBert for multiclass classification with PyTorch | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)|
|[Fine-tune BERT for multi-label classification](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|How to fine-tune BERT for multi-label classification with PyTorch|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|
|[Fine-tune T5 for summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|How to fine-tune T5 for summarization in PyTorch and track experiments with WandB|[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)|
|[Speed up fine-tuning in Transformers with dynamic padding / bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)|How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing|[Michael Benesty](https://github.com/pommedeterresautee) |[](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)|
|[Pretrain Reformer for masked language modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)|How to train a Reformer model with bi-directional self-attention layers|[Patrick von Platen](https://github.com/patrickvonplaten) |[](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)|
|[Expand and fine-tune Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)|How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it|[Tanmay Thakur](https://github.com/lordtt13) |[](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)|
|[Fine-tune BlenderBotSmall for summarization using the Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)|How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API|[Tanmay Thakur](https://github.com/lordtt13) |[](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)|
|[Fine-tune Electra and interpret with Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) |How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients|[Eliza Szczechla](https://elsanns.github.io) |[](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)|
|[Fine-tune a non-English GPT-2 model with the Trainer class](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) |How to fine-tune a non-English GPT-2 model with the Trainer class|[Philipp Schmid](https://www.philschmid.de) |[](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)|
|[Fine-tune a DistilBERT model for the multi-label classification task](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) |How to fine-tune a DistilBERT model for the multi-label classification task|[Dhaval Taunk](https://github.com/DhavalTaunk08) |[](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)|
|[Fine-tune ALBERT for sentence-pair classification](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) |How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task|[Nadir El Manouzi](https://github.com/NadirEM) |[](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)|
|[Fine-tune RoBERTa for sentiment analysis](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) |How to fine-tune a RoBERTa model for sentiment analysis|[Dhaval Taunk](https://github.com/DhavalTaunk08) |[](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)|
|[Evaluating question generation models](https://github.com/flexudy-pipe/qugeev) | How accurate are the answers to questions generated by your seq2seq transformer model? | [Pascal Zoleko](https://github.com/zolekode) | [](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)|
|[Classify text with DistilBERT and Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | How to fine-tune DistilBERT for text classification in TensorFlow | [Peter Bayerle](https://github.com/peterbayerle) | [](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)|
|[Leverage BERT for encoder-decoder summarization on CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | How to warm-start an *EncoderDecoderModel* with a *bert-base-uncased* checkpoint for summarization on CNN/Dailymail | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)|
|[Leverage RoBERTa for encoder-decoder summarization on BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | How to warm-start a shared *EncoderDecoderModel* with a *roberta-base* checkpoint for summarization on BBC/XSum | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)|
|[Fine-tune TAPAS on Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | How to fine-tune *TapasForQuestionAnswering* with a *tapas-base* checkpoint on the Sequential Question Answering (SQA) dataset | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)|
|[Evaluate TAPAS on TabFact](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | How to evaluate a fine-tuned *TapasForSequenceClassification* with a *tapas-base-finetuned-tabfact* checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)|
|[Fine-tune mBART for translation](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | How to fine-tune mBART using Seq2SeqTrainer for Hindi-to-English translation | [Vasudev Gupta](https://github.com/vasudevgupta7) | [](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)|
|[Fine-tune LayoutLM on FUNSD (a form understanding dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | How to fine-tune *LayoutLMForTokenClassification* on the FUNSD dataset for information extraction from scanned documents | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)|
| [Fine-tune DistilGPT2 and generate text](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | How to fine-tune DistilGPT2 and generate text | [Aakash Tripathi](https://github.com/tripathiaakash) | [](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)|
| [Fine-tune LED on up to 8K tokens](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | How to fine-tune LED on pubmed for long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)|
| [Evaluate LED on Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | How to effectively evaluate LED for long-range summarization | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)|
| [Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | How to fine-tune *LayoutLMForSequenceClassification* on the RVL-CDIP dataset for classification of scanned documents | [Niels Rogge](https://github.com/nielsrogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)|
| [Wav2Vec2 CTC decoding with GPT2 adjustment](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | How to decode CTC sequences with language model adjustment | [Eric Lam](https://github.com/voidful) | [](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)|
| [Fine-tune BART for summarization in two languages with the Trainer class](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | How to fine-tune BART for summarization in two languages with the Trainer class | [Eliza Szczechla](https://github.com/elsanns) | [](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)|
| [Evaluate Big Bird on Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | How to evaluate BigBird on long-document question answering on Trivia QA | [Patrick von Platen](https://github.com/patrickvonplaten) | [](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)|
| [Create video captions using Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | [Niklas Muennighoff](https://github.com/Muennighoff) |[](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) |
| [Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets, and PyTorch Lightning | [Niels Rogge](https://github.com/nielsrogge) |[](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) |
| [Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets, and the 🤗 Trainer | [Niels Rogge](https://github.com/nielsrogge) |[](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) |
| [Evaluate LUKE on Open Entity, an entity typing dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | How to evaluate *LukeForEntityClassification* on the Open Entity dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) |
| [Evaluate LUKE on TACRED, a relation extraction dataset](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | How to evaluate *LukeForEntityPairClassification* on the TACRED dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) |
| [Evaluate LUKE on CoNLL-2003, an important NER benchmark](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | How to evaluate *LukeForEntitySpanClassification* on the CoNLL-2003 dataset | [Ikuya Yamada](https://github.com/ikuyamada) |[](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) |
| [PubMedใใผใฟใปใใใงBigBird-Pegasusใฎ่ฉไพก](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | PubMedใใผใฟใปใใใง*BigBirdPegasusForConditionalGeneration*ใฎ่ฉไพกๆนๆณ | [Vasudev Gupta](https://github.com/vasudevgupta7) | [](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) |
| [Wav2Vec2ใไฝฟ็จใใในใใผใใจใขใผใทใงใณๅ้ก](https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | MEGAใใผใฟใปใใใงใฎๆๆ
ๅ้กใฎใใใฎไบๅๅญฆ็ฟๆธใฟWav2Vec2ใขใใซใฎๅฉ็จๆนๆณ | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) |
| [DETRใไฝฟ็จใใฆ็ปๅๅ
ใฎใชใใธใงใฏใใๆคๅบใใ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | ใใฌใผใใณใฐๆธใฟ*DetrForObjectDetection*ใขใใซใไฝฟ็จใใฆ็ปๅๅ
ใฎใชใใธใงใฏใใๆคๅบใใๆณจๆใๅฏ่ฆๅใใๆนๆณ | [Niels Rogge](https://github.com/NielsRogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) |
| [ใซในใฟใ ใชใใธใงใฏใๆคๅบใใผใฟใปใใใงDETRใใใกใคใณใใฅใผใใณใฐใใ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | ใซในใฟใ ใชใใธใงใฏใๆคๅบใใผใฟใปใใใง*DetrForObjectDetection*ใใใกใคใณใใฅใผใใณใฐใใๆนๆณ | [Niels Rogge](https://github.com/NielsRogge) | [](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) |
| [Named Entity RecognitionใฎใใใซT5ใใใกใคใณใใฅใผใใณใฐ](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Named Entity RecognitionใฟในใฏใงT5ใใใกใคใณใใฅใผใใณใฐใใๆนๆณ | [Ogundepo Odunayo](https://github.com/ToluClassics) | [](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
hf_public_repos/transformers/docs/source/ja/training.md
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Fine-tune a pretrained model
[[open-in-colab]]
ไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใใจใ่จ็ฎใณในใใๅๆธใใ็ญ็ด ๆๅบ้ใๆธๅฐใใใใผใญใใใขใใซใใใฌใผใใณใฐใใๅฟ
่ฆใชใใซๆๆฐใฎใขใใซใไฝฟ็จใงใใๅฉ็นใใใใพใใ
๐ค Transformersใฏใใใพใใพใชใฟในใฏใซๅฏพๅฟใใๆฐๅใใฎไบๅๅญฆ็ฟๆธใฟใขใใซใธใฎใขใฏใปในใๆไพใใพใใ
ไบๅๅญฆ็ฟๆธใฟใขใใซใไฝฟ็จใใๅ ดๅใใใใ็นๅฎใฎใฟในใฏใซๅใใใใใผใฟใปใใใงใใฌใผใใณใฐใใพใใใใใฏใใกใคใณใใฅใผใใณใฐใจใใฆ็ฅใใใ้ๅธธใซๅผทๅใชใใฌใผใใณใฐๆ่กใงใใ
ใใฎใใฅใผใใชใขใซใงใฏใไบๅๅญฆ็ฟๆธใฟใขใใซใ้ธๆใใใใฃใผใใฉใผใใณใฐใใฌใผใ ใฏใผใฏใงใใกใคใณใใฅใผใใณใฐใใๆนๆณใซใคใใฆ่ชฌๆใใพใ๏ผ
* ๐ค Transformersใฎ[`Trainer`]ใไฝฟ็จใใฆไบๅๅญฆ็ฟๆธใฟใขใใซใใใกใคใณใใฅใผใใณใฐใใใ
* TensorFlowใจKerasใไฝฟ็จใใฆไบๅๅญฆ็ฟๆธใฟใขใใซใใใกใคใณใใฅใผใใณใฐใใใ
* ใใคใใฃใใฎPyTorchใไฝฟ็จใใฆไบๅๅญฆ็ฟๆธใฟใขใใซใใใกใคใณใใฅใผใใณใฐใใใ
<a id='data-processing'></a>
## Prepare a dataset
<Youtube id="_BZearw7f0w"/>
ไบๅๅญฆ็ฟๆธใฟใขใใซใใใกใคใณใใฅใผใใณใฐใใๅใซใใใผใฟใปใใใใใฆใณใญใผใใใฆใใฌใผใใณใฐ็จใซๆบๅใใๅฟ
่ฆใใใใพใใ
ๅใฎใใฅใผใใชใขใซใงใฏใใใฌใผใใณใฐใใผใฟใฎๅฆ็ๆนๆณใ่ชฌๆใใพใใใใใใใใใฏใใใใฎในใญใซใๆดปใใๆฉไผใใใใพใ๏ผ
ใพใใ[Yelp Reviews](https://huggingface.co/datasets/yelp_review_full)ใใผใฟใปใใใ่ชญใฟ่พผใใงใฟใพใใใ๏ผ
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("yelp_review_full")
>>> dataset["train"][100]
{'label': 0,
'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```
ใใผใฏใใคใถใใใญในใใๅฆ็ใใๅฏๅคใฎใทใผใฑใณใน้ทใๅฆ็ใใใใใฎใใใฃใณใฐใจๅใๆจใฆๆฆ็ฅใๅซใใๅฟ
่ฆใใใใใจใใๅญ็ฅใฎ้ใใ
ใใผใฟใปใใใ1ใคใฎในใใใใงๅฆ็ใใใซใฏใ๐ค Datasets ใฎ [`map`](https://huggingface.co/docs/datasets/process#map) ใกใฝใใใไฝฟ็จใใฆใใใผใฟใปใใๅ
จไฝใซๅๅฆ็้ขๆฐใ้ฉ็จใใพใ๏ผ
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> def tokenize_function(examples):
... return tokenizer(examples["text"], padding="max_length", truncation=True)
>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
```
ใๅฅฝใฟใงใๅฎ่กๆ้ใ็ญ็ธฎใใใใใซใใซใใผใฟใปใใใฎๅฐใใชใตใใปใใใไฝๆใใใใจใใงใใพใ๏ผ
```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
<a id='trainer'></a>
## Train
ใใฎๆ็นใงใไฝฟ็จใใใใใฌใผใ ใฏใผใฏใซๅฏพๅฟใใใปใฏใทใงใณใซๅพใๅฟ
่ฆใใใใพใใๅณๅดใฎใตใคใใใผใฎใชใณใฏใไฝฟ็จใใฆใใธใฃใณใใใใใใฌใผใ ใฏใผใฏใซ็งปๅใงใใพใใ
ใใใฆใ็นๅฎใฎใใฌใผใ ใฏใผใฏใฎใในใฆใฎใณใณใใณใใ้่กจ็คบใซใใใๅ ดๅใฏใใใฎใใฌใผใ ใฏใผใฏใฎใใญใใฏๅณไธใซใใใใฟใณใไฝฟ็จใใฆใใ ใใ๏ผ
<frameworkcontent>
<pt>
<Youtube id="nvBXf7s7vTI"/>
## Train with PyTorch Trainer
๐ค Transformersใฏใ๐ค Transformersใขใใซใฎใใฌใผใใณใฐใๆ้ฉๅใใ[`Trainer`]ใฏใฉในใๆไพใใ็ฌ่ชใฎใใฌใผใใณใฐใซใผใใๆๅใง่จ่ฟฐใใใซใใฌใผใใณใฐใ้ๅงใใใใใใฆใใพใใ
[`Trainer`] APIใฏใใญใฐ่จ้ฒใๅพ้
็ดฏ็ฉใๆททๅ็ฒพๅบฆใชใฉใใใพใใพใชใใฌใผใใณใฐใชใใทใงใณใจๆฉ่ฝใใตใใผใใใฆใใพใใ
ใพใใใขใใซใใญใผใใใไบๆณใใใใฉใใซใฎๆฐใๆๅฎใใพใใYelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields)ใใใ5ใคใฎใฉใใซใใใใใจใใใใใพใ๏ผ
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
<Tip>
ไธ้จใฎไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใไฝฟ็จใใใใไธ้จใฎ้ใฟใใฉใณใใ ใซๅๆๅใใใ่ญฆๅใ่กจ็คบใใใใใจใใใใพใใๅฟ้
ใใชใใงใใ ใใใใใใฏๅฎๅ
จใซๆญฃๅธธใงใ๏ผ
BERTใขใใซใฎไบๅๅญฆ็ฟๆธใฟใฎใใใใฏ็ ดๆฃใใใใฉใณใใ ใซๅๆๅใใใๅ้กใใใใง็ฝฎใๆใใใใพใใใใฎๆฐใใใขใใซใใใใใทใผใฑใณในๅ้กใฟในใฏใงใใกใคใณใใฅใผใใณใฐใใไบๅๅญฆ็ฟใขใใซใฎ็ฅ่ญใใใใซ่ปข้ใใพใใ
</Tip>
### Training Hyperparameters
ๆฌกใซใใใฌใผใใณใฐใชใใทใงใณใใขใฏใใฃใใผใใใใใใฎใในใฆใฎใใคใใผใใฉใกใผใฟใจใ่ชฟๆดใงใใใใคใใผใใฉใกใผใฟใๅซใ[`TrainingArguments`]ใฏใฉในใไฝๆใใพใใ
ใใฎใใฅใผใใชใขใซใงใฏใใใใฉใซใใฎใใฌใผใใณใฐ[ใใคใใผใใฉใกใผใฟ](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)ใไฝฟ็จใใฆ้ๅงใงใใพใใใๆ้ฉใช่จญๅฎใ่ฆใคใใใใใซใใใใๅฎ้จใใฆใๆงใใพใใใ
ใใฌใผใใณใฐใฎใใงใใฏใใคใณใใไฟๅญใใๅ ดๆใๆๅฎใใพใ๏ผ
```python
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(output_dir="test_trainer")
```
### Evaluate
[`Trainer`]ใฏใใฌใผใใณใฐไธญใซ่ชๅ็ใซใขใใซใฎใใใฉใผใใณในใ่ฉไพกใใพใใใใกใใชใฏในใ่จ็ฎใใฆๅ ฑๅใใ้ขๆฐใ[`Trainer`]ใซๆธกใๅฟ
่ฆใใใใพใใ
[๐ค Evaluate](https://huggingface.co/docs/evaluate/index)ใฉใคใใฉใชใงใฏใ[`evaluate.load`]้ขๆฐใไฝฟ็จใใฆ่ชญใฟ่พผใใใจใใงใใใทใณใใซใช[`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy)้ขๆฐใๆไพใใใฆใใพใ๏ผ่ฉณ็ดฐใซใคใใฆใฏ[ใใกใใฎใฏใคใใฏใใขใผ](https://huggingface.co/docs/evaluate/a_quick_tour)ใๅ็
งใใฆใใ ใใ๏ผ๏ผ
```python
>>> import numpy as np
>>> import evaluate
>>> metric = evaluate.load("accuracy")
```
`metric`ใฎ`~evaluate.compute`ใๅผใณๅบใใฆใไบๆธฌใฎๆญฃ็ขบๅบฆใ่จ็ฎใใพใใ `compute`ใซไบๆธฌใๆธกใๅใซใไบๆธฌใใญใธใใใซๅคๆใใๅฟ
่ฆใใใใพใ๏ผใในใฆใฎ๐ค Transformersใขใใซใฏใญใธใใใ่ฟใใใจใ่ฆใใฆใใใฆใใ ใใ๏ผ๏ผ
```py
>>> def compute_metrics(eval_pred):
... logits, labels = eval_pred
... predictions = np.argmax(logits, axis=-1)
... return metric.compute(predictions=predictions, references=labels)
```
่ฉไพกใกใใชใฏในใใใกใคใณใใฅใผใใณใฐไธญใซ็ฃ่ฆใใใๅ ดๅใใใฌใผใใณใฐๅผๆฐใง `evaluation_strategy` ใใฉใกใผใฟใๆๅฎใใฆใๅใจใใใฏใฎ็ตไบๆใซ่ฉไพกใกใใชใฏในใๅ ฑๅใใพใ๏ผ
```python
>>> from transformers import TrainingArguments, Trainer
>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```
### Trainer
ใขใใซใใใฌใผใใณใฐๅผๆฐใใใฌใผใใณใฐใใใณใในใใใผใฟใปใใใ่ฉไพก้ขๆฐใไฝฟ็จใใฆ[`Trainer`]ใชใใธใงใฏใใไฝๆใใพใ๏ผ
```py
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
```
ใใฎๅพใ[`~transformers.Trainer.train`]ใๅผใณๅบใใฆใขใใซใๅพฎ่ชฟๆดใใพใ๏ผ
```python
>>> trainer.train()
```
</pt>
<tf>
<a id='keras'></a>
<Youtube id="rnTGBy2ax1c"/>
## Kerasใไฝฟ็จใใฆTensorFlowใขใใซใใใฌใผใใณใฐใใ
Keras APIใไฝฟ็จใใฆ๐ค TransformersใขใใซใTensorFlowใงใใฌใผใใณใฐใใใใจใใงใใพใ๏ผ
### Loading Data from Keras
๐ค TransformersใขใใซใKeras APIใงใใฌใผใใณใฐใใๅ ดๅใใใผใฟใปใใใKerasใ็่งฃใงใใๅฝขๅผใซๅคๆใใๅฟ
่ฆใใใใพใใ
ใใผใฟใปใใใๅฐใใๅ ดๅใใใผใฟใปใใๅ
จไฝใNumPy้
ๅใซๅคๆใใฆKerasใซๆธกใใใจใใงใใพใใ
่ค้ใชใใจใใใๅใซใใพใใใใ่ฉฆใใฆใฟใพใใใใ
ใพใใใใผใฟใปใใใ่ชญใฟ่พผใฟใพใใGLUEใใณใใใผใฏใใCoLAใใผใฟใปใใใไฝฟ็จใใพใ
([GLUE Benchmark](https://huggingface.co/datasets/glue))ใใใใฏๅ็ดใชใใคใใชใใญในใๅ้กใฟในใฏใงใใไปใฎใจใใใใฌใผใใณใฐๅๅฒใฎใฟใไฝฟ็จใใพใใ
```py
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # ไปใฎใจใใใใฌใผใใณใฐๅๅฒใฎใฟใไฝฟ็จใใพใ
```
ๆฌกใซใใใผใฏใใคใถใใญใผใใใใใผใฟใNumPy้
ๅใจใใฆใใผใฏใณๅใใพใใใฉใใซใฏๆขใซ`0`ใจ`1`ใฎใชในใใงใใใใใใใผใฏใณๅใใใซ็ดๆฅNumPy้
ๅใซๅคๆใงใใพใ๏ผ
```python
from transformers import AutoTokenizer
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# ใใผใฏใใคใถใฏBatchEncodingใ่ฟใใพใใใใใใKeras็จใซ่พๆธใซๅคๆใใพใ
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # ใฉใใซใฏใใงใซ0ใจ1ใฎ้
ๅใงใ
```
ๆๅพใซใใขใใซใใญใผใใใ[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ใจ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ใกใฝใใใๅฎ่กใใพใใ
ๆณจๆ็นใจใใฆใTransformersใขใใซใฏใในใฆใใใฉใซใใงใฟในใฏใซ้ข้ฃใใๆๅคฑ้ขๆฐใๆใฃใฆใใใใใๆๅฎใใชใใฆใๆงใใพใใ๏ผๆๅฎใใๅ ดๅใ้คใ๏ผ๏ผ
```python
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
# ใขใใซใใญใผใใใฆใณใณใใคใซใใ
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# ใใกใคใณใใฅใผใใณใฐใซใฏ้ๅธธใๅญฆ็ฟ็ใไธใใใจ่ฏใใงใ
model.compile(optimizer=Adam(3e-5)) # ๆๅคฑ้ขๆฐใฎๆๅฎใฏไธ่ฆใงใ๏ผ
model.fit(tokenized_data, labels)
```
<Tip>
ใขใใซใ`compile()`ใใ้ใซ`loss`ๅผๆฐใๆธกใๅฟ
่ฆใฏใใใพใใ๏ผHugging Faceใขใใซใฏใใใฎๅผๆฐใ็ฉบ็ฝใฎใพใพใซใใฆใใใจใใฟในใฏใจใขใใซใขใผใญใใฏใใฃใซ้ฉใใๆๅคฑใ่ชๅ็ใซ้ธๆใใพใใ
ๅฟ
่ฆใซๅฟใใฆ่ชๅใงๆๅคฑใๆๅฎใใฆใชใผใใผใฉใคใใใใใจใใงใใพใ๏ผ
</Tip>
ใใฎใขใใญใผใใฏใๅฐ่ฆๆจกใชใใผใฟใปใใใซใฏ้ฉใใฆใใพใใใๅคง่ฆๆจกใชใใผใฟใปใใใซๅฏพใใฆใฏๅ้กใซใชใใใจใใใใพใใใชใใชใใใใผใฏใใคใบใใใ้
ๅใจใฉใใซใฏใกใขใชใซๅฎๅ
จใซ่ชญใฟ่พผใพใใๅฟ
่ฆใใใใใพใNumPyใฏใใธใฃใฎใผใใช้
ๅใๅฆ็ใใชใใใใใใผใฏใใคใบใใใๅใตใณใใซใๅ
จไฝใฎใใผใฟใปใใๅ
ใงๆใ้ทใใตใณใใซใฎ้ทใใซใใใฃใณใฐใใๅฟ
่ฆใใใใพใใ
ใใใซใใใ้
ๅใใใใซๅคงใใใชใใใในใฆใฎใใใฃใณใฐใใผใฏใณใใใฌใผใใณใฐใ้
ใใใๅๅ ใซใชใใพใ๏ผ
### Loading data as a tf.data.Dataset
ใใฌใผใใณใฐใ้
ใใใใซใใผใฟใ่ชญใฟ่พผใใซใฏใใใผใฟใ`tf.data.Dataset`ใจใใฆ่ชญใฟ่พผใใใจใใงใใพใใ็ฌ่ชใฎ`tf.data`ใใคใใฉใคใณใไฝๆใใใใจใใงใใพใใใใใใ่กใใใใฎไพฟๅฉใชๆนๆณใ2ใคใใใพใ๏ผ
- [`~TFPreTrainedModel.prepare_tf_dataset`]: ใใใฏใปใจใใฉใฎๅ ดๅใงๆจๅฅจใใๆนๆณใงใใใขใใซไธใฎใกใฝใใใชใฎใงใใขใใซใๆคๆปใใฆใขใใซๅ
ฅๅใจใใฆไฝฟ็จๅฏ่ฝใชๅใ่ชๅ็ใซๆๆกใใไปใฎๅใ็ ดๆฃใใฆใใๅ็ดใง้ซๆง่ฝใชใใผใฟใปใใใไฝๆใงใใพใใ
- [`~datasets.Dataset.to_tf_dataset`]: ใใฎใกใฝใใใฏใใไฝใฌใใซใงใใใผใฟใปใใใใฉใฎใใใซไฝๆใใใใใๆญฃ็ขบใซๅถๅพกใใๅ ดๅใซไพฟๅฉใงใใ`columns`ใจ`label_cols`ใๆๅฎใใฆใใใผใฟใปใใใซๅซใใๅใๆญฃ็ขบใซๆๅฎใงใใพใใ
[`~TFPreTrainedModel.prepare_tf_dataset`]ใไฝฟ็จใใๅใซใๆฌกใฎใณใผใใตใณใใซใซ็คบใใใใซใใใผใฏใใคใถใฎๅบๅใใใผใฟใปใใใซๅใจใใฆ่ฟฝๅ ใใๅฟ
่ฆใใใใพใ๏ผ
```py
def tokenize_dataset(data):
# ่ฟใใใ่พๆธใฎใญใผใฏใใผใฟใปใใใซๅใจใใฆ่ฟฝๅ ใใใพใ
return tokenizer(data["text"])
dataset = dataset.map(tokenize_dataset)
```
Hugging Faceใฎใใผใฟใปใใใฏใใใฉใซใใงใใฃในใฏใซไฟๅญใใใใใใใใใซใใใกใขใชใฎไฝฟ็จ้ใๅขใใใใจใฏใใใพใใ๏ผ
ๅใ่ฟฝๅ ใใใใใใใผใฟใปใใใใใใใใในใใชใผใ ใใๅใใใใซใใใฃใณใฐใ่ฟฝๅ ใงใใพใใใใใซใใใใใผใฟใปใใๅ
จไฝใซใใใฃใณใฐใ่ฟฝๅ ใใๅ ดๅใจๆฏในใฆใใใใฃใณใฐใใผใฏใณใฎๆฐใๅคงๅน
ใซๅๆธใใใพใใ
```python
>>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
```
ไธ่จใฎใณใผใใตใณใใซใงใฏใใใผใฏใใคใถใ`prepare_tf_dataset`ใซๆธกใใฆใใใใใๆญฃใใ่ชญใฟ่พผใ้ใซๆญฃใใใใใฃใณใฐใงใใใใใซใใๅฟ
่ฆใใใใพใใ
ใใผใฟใปใใใฎใในใฆใฎใตใณใใซใๅใ้ทใใงใใใใใใฃใณใฐใไธ่ฆใชๅ ดๅใฏใใใฎๅผๆฐใในใญใใใงใใพใใ
ใใใฃใณใฐไปฅๅคใฎ่ค้ใชๅฆ็ใ่กใๅฟ
่ฆใใใๅ ดๅ๏ผไพ๏ผใในใฏ่จ่ชใขใใชใณใฐใฎใใใฎใใผใฏใณใฎ็ ดๆใชใฉ๏ผใไปฃใใใซ`collate_fn`ๅผๆฐใไฝฟ็จใใฆใใตใณใใซใฎใชในใใใใใใซๅคๆใใๅฟ
่ฆใชๅๅฆ็ใ้ฉ็จใใ้ขๆฐใๆธกใใใจใใงใใพใใ
ใใฎใขใใญใผใใๅฎ้ใซไฝฟ็จใใไพใซใคใใฆใฏใ
[examples](https://github.com/huggingface/transformers/tree/main/examples)ใ
[notebooks](https://huggingface.co/docs/transformers/notebooks)ใใ่ฆงใใ ใใใ
`tf.data.Dataset`ใไฝๆใใใใไปฅๅใจๅๆงใซใขใใซใใณใณใใคใซใใ้ฉๅใใใใใจใใงใใพใ๏ผ
```python
model.compile(optimizer=Adam(3e-5)) # ๆๅคฑๅผๆฐใฏไธ่ฆใงใ๏ผ
model.fit(tf_dataset)
```
</tf>
</frameworkcontent>
<a id='pytorch_native'></a>
## Train in native PyTorch
<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>
[`Trainer`]ใฏใใฌใผใใณใฐใซใผใใๅฆ็ใใ1่กใฎใณใผใใงใขใใซใใใกใคใณใใฅใผใใณใฐใงใใใใใซใใพใใ
ใใฌใผใใณใฐใซใผใใ็ฌ่ชใซ่จ่ฟฐใใใใฆใผใถใผใฎใใใซใ๐ค TransformersใขใใซใใใคใใฃใใฎPyTorchใงใใกใคใณใใฅใผใใณใฐใใใใจใใงใใพใใ
ใใฎๆ็นใงใใใผใใใใฏใๅ่ตทๅใใใใไปฅไธใฎใณใผใใๅฎ่กใใฆใกใขใชใ่งฃๆพใใๅฟ
่ฆใใใใใใใใพใใ๏ผ
```py
del model
del trainer
torch.cuda.empty_cache()
```
1. ใขใใซใฏ็ใฎใใญในใใๅ
ฅๅใจใใฆๅใๅใใชใใใใ`text` ๅใๅ้คใใพใ๏ผ
```py
>>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
```
2. `label`ๅใ`labels`ใซๅๅใๅคๆดใใพใใใขใใซใฏๅผๆฐใฎๅๅใ`labels`ใจๆๅพ
ใใฆใใพใ๏ผ
```py
>>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```
3. ใใผใฟใปใใใฎๅฝขๅผใใชในใใงใฏใชใPyTorchใใณใฝใซใ่ฟใใใใซ่จญๅฎใใพใ๏ผ
```py
>>> tokenized_datasets.set_format("torch")
```
ไปฅๅใซ็คบใใใใใซใใใกใคใณใใฅใผใใณใฐใ้ซ้ๅใใใใใซใใผใฟใปใใใฎๅฐใใชใตใใปใใใไฝๆใใพใ๏ผ
```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
### DataLoader
ใใฌใผใใณใฐใใผใฟใปใใใจใในใใใผใฟใปใใ็จใฎ`DataLoader`ใไฝๆใใฆใใใผใฟใฎใใใใใคใใฌใผใใงใใใใใซใใพใ๏ผ
```py
>>> from torch.utils.data import DataLoader
>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```
ใญใผใใใใขใใซใจๆๅพ
ใใใใฉใใซใฎๆฐใๆๅฎใใฆใใ ใใ๏ผ
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```
### Optimizer and learning rate scheduler
ใขใใซใใใกใคใณใใฅใผใใณใฐใใใใใฎใชใใใฃใใคใถใจๅญฆ็ฟ็ในใฑใธใฅใผใฉใผใไฝๆใใพใใใใ
PyTorchใใ[`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)ใชใใใฃใใคใถใไฝฟ็จใใพใ๏ผ
```python
>>> from torch.optim import AdamW
>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```
ใใใฉใซใใฎๅญฆ็ฟ็ในใฑใธใฅใผใฉใ[`Trainer`]ใใไฝๆใใ๏ผ
```py
>>> from transformers import get_scheduler
>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```
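`name="linear"`ใฎในใฑใธใฅใผใฉใงๅญฆ็ฟ็ใใฉใฎใใใซๅคๅใใใใฏใๆฌกใฎใใใชๅ็ดใชPythonใฎไพใง็ขบ่ชใงใใพใใใใใฏ`get_scheduler`ใฎๅฎ้ใฎๅฎ่ฃ
ใงใฏใชใใ`num_training_steps=375`๏ผไธ่จใฎ3ใจใใใฏ x 125ใใใ๏ผใชใฉใฎๆฐๅคใ่ชฌๆ็จใฎไพใงใ๏ผ

```python
def linear_lr(step, base_lr=5e-5, num_warmup_steps=0, num_training_steps=375):
    if step < num_warmup_steps:
        # num_warmup_steps ใพใงใฏ 0 ใใ base_lr ใธ็ทๅฝขใซๅขใใใพใ
        return base_lr * step / max(1, num_warmup_steps)
    # step ใ้ฒใใปใฉๅญฆ็ฟ็ใๅฐใใใชใใๆๅพใฎในใใใใง 0 ใซใชใใพใ
    remaining = num_training_steps - step
    return base_lr * max(0.0, remaining / max(1, num_training_steps - num_warmup_steps))


print(linear_lr(0))    # 5e-05
print(linear_lr(375))  # 0.0
```

ๅฎ้ใฎในใฑใธใฅใผใฉใฏใ`lr_scheduler.step()`ใๅผใณๅบใใใณใซใใใฎใใใชๅคใงใชใใใฃใใคใถใฎๅญฆ็ฟ็ใ่ชฟๆดใใพใใ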
ๆๅพใซใGPUใๅฉ็จใงใใๅ ดๅใฏ `device` ใๆๅฎใใฆใใ ใใใGPUใงใใใฐใใฌใผใใณใฐใฏๆฐๅใงๅฎไบใใพใใใCPUใงใฏๆฐๆ้ใใใๅฏ่ฝๆงใใใใพใใ
```py
>>> import torch
>>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
>>> model.to(device)
```
<Tip>
ใฏใฉใฆใGPUใๅฉ็จใงใใชใๅ ดๅใ[Colaboratory](https://colab.research.google.com/)ใ[SageMaker StudioLab](https://studiolab.sagemaker.aws/)ใชใฉใฎใในใใใใใใผใใใใฏใไฝฟ็จใใฆ็กๆใงGPUใซใขใฏใปในใงใใพใใ
</Tip>
ใใฆใใใฌใผใใณใฐใฎๆบๅใๆดใใพใใ๏ผ ๐ฅณ
### Training loop
ใใฌใผใใณใฐใฎ้ฒๆใ่ฟฝ่ทกใใใใใซใ[tqdm](https://tqdm.github.io/)ใฉใคใใฉใชใไฝฟ็จใใฆใใฌใผใใณใฐในใใใใฎๆฐใซๅฏพใใฆ้ฒ่ก็ถๆณใใผใ่ฟฝๅ ใใพใ๏ผ
```py
>>> from tqdm.auto import tqdm
>>> progress_bar = tqdm(range(num_training_steps))
>>> model.train()
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... batch = {k: v.to(device) for k, v in batch.items()}
... outputs = model(**batch)
... loss = outputs.loss
... loss.backward()
... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
### Evaluate
[`Trainer`]ใซ่ฉไพก้ขๆฐใ่ฟฝๅ ใใใฎใจๅๆงใซใ็ฌ่ชใฎใใฌใผใใณใฐใซใผใใไฝๆใใ้ใซใๅๆงใฎๆไฝใ่กใๅฟ
่ฆใใใใพใใ
ใใ ใใๅใจใใใฏใฎๆๅพใซใกใใชใใฏใ่จ็ฎใใใณๅ ฑๅใใไปฃใใใซใไปๅใฏ[`~evaluate.add_batch`]ใไฝฟ็จใใฆใในใฆใฎใใใใ่็ฉใใๆๅพใซใกใใชใใฏใ่จ็ฎใใพใใ
```python
>>> import evaluate
>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
... batch = {k: v.to(device) for k, v in batch.items()}
... with torch.no_grad():
... outputs = model(**batch)
... logits = outputs.logits
... predictions = torch.argmax(logits, dim=-1)
... metric.add_batch(predictions=predictions, references=batch["labels"])
>>> metric.compute()
```
</pt>
</frameworkcontent>
<a id='additional-resources'></a>
## ่ฟฝๅ ใชใฝใผใน
ใใใชใใใกใคใณใใฅใผใใณใฐใฎไพใซใคใใฆใฏใไปฅไธใๅ็
งใใฆใใ ใใ๏ผ
- [๐ค Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) ใซใฏใPyTorchใจTensorFlowใงไธ่ฌ็ใชNLPใฟในใฏใใใฌใผใใณใฐใใในใฏใชใใใๅซใพใใฆใใพใใ
- [๐ค Transformers Notebooks](notebooks) ใซใฏใ็นๅฎใฎใฟในใฏใซใขใใซใใใกใคใณใใฅใผใใณใฐใใๆนๆณใซ้ขใใใใพใใพใชใใผใใใใฏใๅซใพใใฆใใพใใ
hf_public_repos/transformers/docs/source/ja/pad_truncation.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Padding and truncation
ใใใๅ
ฅๅใฏใใฐใใฐ็ฐใชใ้ทใใงใใใๅบๅฎใตใคใบใฎใใณใฝใซใซๅคๆใงใใชใใใใๅคๅใใ้ทใใฎใใใใใ้ทๆนๅฝขใฎใใณใฝใซใไฝๆใใใใใฎๆฆ็ฅใจใใฆใใใใฃใณใฐใจๅใ่ฉฐใใใใใพใใใใใฃใณใฐใฏใ็ญใใทใผใฑใณในใใใใๅ
ใฎๆ้ทใทใผใฑใณในใพใใฏใขใใซใๅใๅ
ฅใใๆๅคง้ทใจๅใ้ทใใซใชใใใใซใ็นๅฅใช**ใใใฃใณใฐใใผใฏใณ**ใ่ฟฝๅ ใใพใใๅใ่ฉฐใใฏใ้ทใใทใผใฑใณในใๅใ่ฉฐใใใใจใง้ๆนๅใซๆฉ่ฝใใพใใ
ใปใจใใฉใฎๅ ดๅใใใใใๆ้ทใทใผใฑใณในใฎ้ทใใซใใใฃใณใฐใใใขใใซใๅใๅ
ฅใใๆๅคง้ทใซๅใ่ฉฐใใใใจใงใใใพใๅไฝใใพใใใใ ใใAPIใฏใใใซๅคใใฎๆฆ็ฅใใตใใผใใใฆใใพใใๅฟ
่ฆใช3ใคใฎๅผๆฐใฏๆฌกใฎใจใใใงใ๏ผ`padding`ใ`truncation`ใใใใณ`max_length`ใ
`padding`ๅผๆฐใฏใใใฃใณใฐใๅถๅพกใใพใใใใผใซๅคใพใใฏๆๅญๅใงใใใใจใใงใใพใ๏ผ
- `True`ใพใใฏ`'longest'`๏ผใใใๅ
ใฎๆ้ทใทใผใฑใณในใซใใใฃใณใฐใ่ฟฝๅ ใใพใ๏ผใทใผใฑใณในใ1ใคใใๆไพใใใชใๅ ดๅใใใใฃใณใฐใฏ้ฉ็จใใใพใใ๏ผใ
- `'max_length'`๏ผ`max_length`ๅผๆฐใงๆๅฎใใใ้ทใใใพใใฏ`max_length`ใๆไพใใใฆใใชใๅ ดๅใฏใขใใซใๅใๅ
ฅใใๆๅคง้ท๏ผ`max_length=None`๏ผใพใงใใใฃใณใฐใ่ฟฝๅ ใใพใใใทใผใฑใณในใ1ใคใใๆไพใใใฆใใๅ ดๅใงใใใใใฃใณใฐใฏ้ฉ็จใใใพใใ
- `False`ใพใใฏ`'do_not_pad'`๏ผใใใฃใณใฐใฏ้ฉ็จใใใพใใใใใใใใใฉใซใใฎๅไฝใงใใ
`truncation`ๅผๆฐใฏๅใ่ฉฐใใๅถๅพกใใพใใใใผใซๅคใพใใฏๆๅญๅใงใใใใจใใงใใพใ๏ผ
- `True`ใพใใฏ`'longest_first'`๏ผๆๅคง้ทใ`max_length`ๅผๆฐใงๆๅฎใใใใใขใใซใๅใๅ
ฅใใๆๅคง้ท๏ผ`max_length=None`๏ผใพใงๅใ่ฉฐใใพใใใใใฏใใผใฏใณใใจใซๅใ่ฉฐใใ้ฉๅใช้ทใใซ้ใใใพใงใใขๅ
ใฎๆ้ทใทใผใฑใณในใใใใผใฏใณใๅ้คใใพใใ
- `'only_second'`๏ผๆๅคง้ทใ`max_length`ๅผๆฐใงๆๅฎใใใใใขใใซใๅใๅ
ฅใใๆๅคง้ท๏ผ`max_length=None`๏ผใพใงๅใ่ฉฐใใพใใใใใฏใใขใฎ2็ช็ฎใฎๆใ ใใๅใ่ฉฐใใพใ๏ผใทใผใฑใณในใฎใใขใพใใฏใทใผใฑใณในใฎใใใใฎใใขใๆไพใใใๅ ดๅ๏ผใ
- `'only_first'`๏ผๆๅคง้ทใ`max_length`ๅผๆฐใงๆๅฎใใใใใขใใซใๅใๅ
ฅใใๆๅคง้ท๏ผ`max_length=None`๏ผใพใงๅใ่ฉฐใใพใใใใใฏใใขใฎๆๅใฎๆใ ใใๅใ่ฉฐใใพใ๏ผใทใผใฑใณในใฎใใขใพใใฏใทใผใฑใณในใฎใใใใฎใใขใๆไพใใใๅ ดๅ๏ผใ
- `False`ใพใใฏ`'do_not_truncate'`๏ผๅใ่ฉฐใใฏ้ฉ็จใใใพใใใใใใใใใฉใซใใฎๅไฝใงใใ
`max_length`ๅผๆฐใฏใใใฃใณใฐใจๅใ่ฉฐใใฎ้ทใใๅถๅพกใใพใใๆดๆฐใพใใฏ`None`ใงใใใใใฎๅ ดๅใใขใใซใๅใๅ
ฅใใๆๅคงๅ
ฅๅ้ทใซใใใฉใซใใง่จญๅฎใใใพใใใขใใซใซ็นๅฎใฎๆๅคงๅ
ฅๅ้ทใใชใๅ ดๅใ`max_length`ใธใฎๅใ่ฉฐใใพใใฏใใใฃใณใฐใฏ็กๅนใซใชใใพใใ
ไปฅไธใฎ่กจใฏใใใใฃใณใฐใจๅใ่ฉฐใใ่จญๅฎใใๆจๅฅจๆนๆณใ่ฆ็ดใใฆใใพใใไปฅไธใฎไพใฎใใใใใงๅ
ฅๅใทใผใฑใณในใฎใใขใไฝฟ็จใใๅ ดๅใ`truncation=True`ใ`['only_first', 'only_second', 'longest_first']`ใง้ธๆใใ`STRATEGY`ใซ็ฝฎใๆใใใใจใใงใใพใใใคใพใใ`truncation='only_second'`ใพใใฏ`truncation='longest_first'`ใไฝฟ็จใใฆใใใขๅ
ใฎไธกๆนใฎใทใผใฑใณในใๅ่ฟฐใฎใใใซๅใ่ฉฐใใๆนๆณใๅถๅพกใงใใพใใ
| Truncation | Padding | Instruction |
|--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------|
| no truncation | no padding | `tokenizer(batch_sentences)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or |
| | | `tokenizer(batch_sentences, padding='longest')` |
| | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` |
| | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` |
| | padding to a multiple of a value | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` |
| truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or |
| | | `tokenizer(batch_sentences, truncation=STRATEGY)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or |
| | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` |
| | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or |
| | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` |
| | padding to specific length | Not possible |
| truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` |
| | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` |
| | padding to max model input length | Not possible |
| | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or |
| | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
hf_public_repos/transformers/docs/source/ja/perf_infer_gpu_many.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Inference on Multiple GPUs
ใใฎๆๆธใซใฏใ่คๆฐใฎGPUใงๅน็็ใซๆจ่ซใ่กใๆนๆณใซ้ขใใๆ
ๅ ฑใๅซใพใใฆใใพใใ
<Tip>
ๆณจๆ: ่คๆฐใฎGPUใปใใใขใใใฏใ[ๅไธใฎGPUใปใฏใทใงใณ](./perf_infer_gpu_one)ใง่ชฌๆใใใฆใใใปใจใใฉใฎๆฆ็ฅใไฝฟ็จใงใใพใใใใ ใใใใใใๆๅคง้ใซๆดป็จใใใใใฎ็ฐกๅใชใใฏใใใฏใใใใใจใ่ช่ญใใฆใใๅฟ
่ฆใใใใพใใ
</Tip>
## Flash Attention 2
Flash Attention 2ใฎ็ตฑๅใฏใ่คๆฐใฎGPUใปใใใขใใใงใๆฉ่ฝใใพใใ่ฉณ็ดฐใซใคใใฆใฏใ[ๅไธใฎGPUใปใฏใทใงใณ](./perf_infer_gpu_one#Flash-Attention-2)ใฎ้ฉๅใชใปใฏใทใงใณใใ่ฆงใใ ใใใ
## BetterTransformer
[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview)ใฏใ๐ค TransformersใขใใซใPyTorchใใคใใฃใใฎ้ซ้ๅฎ่กใในใไฝฟ็จใใใใใซๅคๆใใใใฎไธใงFlash Attentionใชใฉใฎๆ้ฉๅใใใใซใผใใซใๅผใณๅบใใพใใ
BetterTransformerใฏใใใญในใใ็ปๅใ้ณๅฃฐใขใใซใฎๅไธGPUใใใณ่คๆฐGPUใงใฎ้ซ้ๆจ่ซใใตใใผใใใฆใใพใใ
<Tip>
Flash Attentionใฏใfp16ใพใใฏbf16 dtypeใไฝฟ็จใใฆใใใขใใซใซใฎใฟไฝฟ็จใงใใพใใBetterTransformerใไฝฟ็จใใๅใซใใขใใซใ้ฉๅใชdtypeใซใญใฃในใใใฆใใ ใใใ
</Tip>
### Decoder models
ใใญในใใขใใซใ็นใซใใณใผใใผใใผในใฎใขใใซ๏ผGPTใT5ใLlamaใชใฉ๏ผใฎๅ ดๅใBetterTransformer APIใฏใในใฆใฎๆณจๆๆไฝใ[`torch.nn.functional.scaled_dot_product_attention`ใชใใฌใผใฟใผ](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention)๏ผSDPA๏ผใไฝฟ็จใใใใใซๅคๆใใพใใใใใฏPyTorch 2.0ไปฅ้ใงใฎใฟไฝฟ็จๅฏ่ฝใงใใ
ใขใใซใBetterTransformerใซๅคๆใใใซใฏ๏ผ
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# convert the model to BetterTransformer
model.to_bettertransformer()
# Use it for training or inference
```
SDPAใฏใใใผใใฆใงใขใๅ้กใฎใตใคใบใชใฉใฎ็นๅฎใฎ่จญๅฎใง[Flash Attention](https://arxiv.org/abs/2205.14135)ใซใผใใซใๅผใณๅบใใใจใใงใใพใใFlash Attentionใๆๅนใซใใใใ็นๅฎใฎ่จญๅฎ๏ผใใผใใฆใงใขใๅ้กใฎใตใคใบ๏ผใงๅฉ็จๅฏ่ฝใใ็ขบ่ชใใใซใฏใ[`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel)ใใณใณใใญในใใใใผใธใฃใจใใฆไฝฟ็จใใพใใ
```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m").to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
ใใใใฌใผในใใใฏใงๆฌกใฎใใใชใจใฉใผใกใใปใผใธใ่กจ็คบใใใๅ ดๅ๏ผ
```bash
RuntimeError: No available kernel. Aborting execution.
```
ใใฎๅ ดๅใFlash Attentionใฎใซใใฌใใธใใใๅบ็ฏๅฒใงใใๅฏ่ฝๆงใฎใใPyTorch Nightlyใใผใธใงใณใ่ฉฆใใใใซใๅงใใใพใใ
```bash
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```
[ใใฎใใญใฐๆ็จฟ](https://pytorch.org/blog/out-of-the-box-acceleration/)ใใใงใใฏใใฆใBetterTransformer + SDPA APIใงๅฏ่ฝใชใใจใซใคใใฆ่ฉณใใๅญฆใณใพใใใใ
### Encoder Models
ๆจ่ซไธญใฎใจใณใณใผใใผใขใใซใงใฏใBetterTransformerใฏใจใณใณใผใใผใฌใคใคใผใฎforwardๅผใณๅบใใใใจใณใณใผใใผใฌใคใคใผใฎ[`torch.nn.TransformerEncoderLayer`](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html)ใฎ็ธๅฝใใใใฎใซใใฃในใใใใใพใใใใใซใใใใจใณใณใผใใผใฌใคใคใผใฎ้ซ้ๅฎ่ฃ
ใๅฎ่กใใใพใใ
`torch.nn.TransformerEncoderLayer`ใฎ้ซ้ๅฎ่ฃ
ใฏใใฌใผใใณใฐใใตใใผใใใฆใใชใใใใไปฃใใใซ`torch.nn.functional.scaled_dot_product_attention`ใซใใฃในใใใใใพใใใใใซใใใใในใใใใใใณใฝใซใๆดป็จใใชใFlash AttentionใพใใฏMemory-Efficient Attentionใฎ่ๅใซใผใใซใไฝฟ็จใงใใพใใ
BetterTransformerใฎใใใฉใผใใณในใฎ่ฉณ็ดฐใซใคใใฆใฏใใใฎ[ใใญใฐๆ็จฟ](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)ใใ่ฆงใใใ ใใพใใใพใใใจใณใณใผใใผใขใใซ็จใฎBetterTransformerใซใคใใฆใฏใใใฎ[ใใญใฐ](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/)ใง่ฉณใใๅญฆใถใใจใใงใใพใใ
## Advanced usage: mixing FP4 (or Int8) and BetterTransformer
ใขใใซใฎๆ่ฏใฎใใใฉใผใใณในใๅพใใใใซใไธ่จใง่ชฌๆใใ็ฐใชใๆนๆณใ็ตใฟๅใใใใใจใใงใใพใใไพใใฐใFP4ใใใฏในใใฌใทใธใงใณๆจ่ซ+Flash Attentionใไฝฟ็จใใBetterTransformerใ็ตใฟๅใใใใใจใใงใใพใใ
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
hf_public_repos/transformers/docs/source/ja/troubleshooting.md
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Troubleshoot
ๆใซใฏใจใฉใผใ็บ็ใใใใจใใใใพใใใ็งใใกใฏใใใซใใพใ๏ผใใฎใฌใคใใงใฏใ็งใใกใใใ่ฆใๆใไธ่ฌ็ใชๅ้กใจใใใใใ่งฃๆฑบใใๆนๆณใซใคใใฆ่ชฌๆใใพใใใใ ใใใใฎใฌใคใใฏใในใฆใฎ ๐ค Transformers ใฎๅ้กใฎๅ
ๆฌ็ใชใณใฌใฏใทใงใณใงใฏใใใพใใใๅ้กใใใฉใใซใทใฅใผใใฃใณใฐใใใใใฎ่ฉณ็ดฐใชใใซใใๅฟ
่ฆใชๅ ดๅใฏใไปฅไธใฎๆนๆณใ่ฉฆใใฆใฟใฆใใ ใใ๏ผ
<Youtube id="S2EEG3JIt2A"/>
1. Asking for help on the [forums](https://discuss.huggingface.co/). There are specific categories you can post your question to, like [Beginners](https://discuss.huggingface.co/c/beginners/5) or [🤗 Transformers](https://discuss.huggingface.co/c/transformers/9). Make sure you write a good descriptive forum post with some reproducible code to maximize the likelihood that your problem is solved!
<Youtube id="_PAli-V4wj0"/>
2. Creating an [Issue](https://github.com/huggingface/transformers/issues/new/choose) on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what's wrong and how we can fix it.
3. Checking the [Migration](migration) guide if you use an older version of 🤗 Transformers, since some important changes have been introduced between versions.
For more details about troubleshooting and getting help, take a look at [Chapter 8](https://huggingface.co/course/chapter8/1?fw=pt) of the Hugging Face course.
## Firewalled environments
Some GPU instances on the cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then time out with the following message:
```
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
```
In this case, try running 🤗 Transformers in [offline mode](installation#offline-mode) to avoid the connection error.
## CUDA out of memory
Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is a `CUDA out of memory` error. Here are some potential solutions you can try to reduce memory use:

- Reduce the [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) value in [`TrainingArguments`].
- Try using [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) in [`TrainingArguments`] to effectively increase the overall batch size.
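The trade-off in the second bullet can be sketched numerically. The helper below is illustrative only (not a Transformers API): the effective batch size the optimizer sees is the product of the per-device batch size, the accumulation steps, and the device count, so you can shrink the per-device batch size (less memory per step) while keeping the effective batch size constant:

```python
# Sketch: gradient accumulation trades per-step GPU memory for more steps.
def effective_batch_size(per_device_train_batch_size, gradient_accumulation_steps, num_devices=1):
    # The optimizer updates once per accumulated batch across all devices.
    return per_device_train_batch_size * gradient_accumulation_steps * num_devices

# Halving the per-device batch size while doubling the accumulation steps
# keeps the effective batch size constant but uses less GPU memory:
print(effective_batch_size(8, 1))  # 8
print(effective_batch_size(4, 2))  # 8
print(effective_batch_size(2, 4))  # 8
```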
<Tip>
Refer to the Performance [guide](performance) for more details about memory-saving techniques.
</Tip>
## Unable to load a saved TensorFlow model
TensorFlow's [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) method saves the entire model - architecture, weights, training configuration - in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all the TensorFlow-related objects in the model file. To avoid issues with saving and loading TensorFlow models, we recommend you:

- Save the model weights with the `h5` file extension and then reload the model with [`~TFPreTrainedModel.from_pretrained`]:
```py
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras
>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
```
- Save the model with [`~TFPreTrainedModel.save_pretrained`] and load it again with [`~TFPreTrainedModel.from_pretrained`]:
```py
>>> from transformers import TFPreTrainedModel
>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
```
## ImportError
Another common error you may encounter, especially if it is a newly released model, is `ImportError`:
```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```
For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access the most recent models:
```bash
pip install transformers --upgrade
```
## CUDA error: device-side assert triggered
Sometimes you may run into a generic CUDA error about an error in the device code:
```
RuntimeError: CUDA error: device-side assert triggered
```
You should try to get a more descriptive error message first, or try running the code on the CPU instead. Add the following environment variable to the beginning of your code to switch to the CPU:
```py
>>> import os
>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
Another option to get a better traceback from the GPU is to add the following environment variable to the beginning of your code, which makes the traceback point to the source of the error:
```py
>>> import os
>>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```
## Incorrect output when padding tokens aren't masked
In some cases, the output `hidden_state` may be incorrect if the `input_ids` include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's `pad_token_id` to see its value. The `pad_token_id` may be `None` for some models, but you can always manually set it.
```py
>>> from transformers import AutoModelForSequenceClassification
>>> import torch
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
>>> model.config.pad_token_id
0
```
The following example shows the output without masking the padding tokens:
```py
>>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
[ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>)
```
Here is the actual output of the second sequence:
```py
>>> input_ids = torch.tensor([[7592]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```
Most of the time, you should provide an `attention_mask` to your model to ignore the padding tokens and avoid this kind of silent error. Now the output of the second sequence matches its actual output:
<Tip>
By default, the tokenizer creates an `attention_mask` for you based on your specific tokenizer's defaults.
</Tip>
```py
>>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]])
>>> output = model(input_ids, attention_mask=attention_mask)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```
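The mask used above can also be derived mechanically from the padded IDs. A minimal sketch with plain Python lists (the helper name is illustrative, not a Transformers API):

```python
# Sketch: mark real tokens with 1 and padding tokens with 0,
# mirroring what a tokenizer produces as its attention_mask.
def build_attention_mask(input_ids, pad_token_id=0):
    return [[0 if token == pad_token_id else 1 for token in sequence]
            for sequence in input_ids]

batch = [[7592, 2057, 2097, 2393, 9611, 2115],
         [7592, 0, 0, 0, 0, 0]]
print(build_attention_mask(batch))
# [[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]
```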
🤗 Transformers doesn't automatically create an `attention_mask` to mask a padding token if it is provided because:

- Some models don't have a padding token.
- For some use cases, users want a model to attend to a padding token.
## ValueError: Unrecognized configuration class XYZ for this kind of AutoModel
In general, we recommend using the [`AutoModel`] class to load pretrained instances of models. This class can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see this `ValueError` when loading a model from a checkpoint, it means the Auto class couldn't find a mapping from the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a checkpoint doesn't support a given task.
For instance, you'll see this error in the following example because there is no GPT2 for question answering:
```py
>>> from transformers import AutoProcessor, AutoModelForQuestionAnswering
>>> processor = AutoProcessor.from_pretrained("gpt2-medium")
>>> model = AutoModelForQuestionAnswering.from_pretrained("gpt2-medium")
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ...
```
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Share a Model
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to make artificial intelligence accessible to everyone. We encourage you to consider sharing your model with the community to help others save time and resources.

In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):

- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.
<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player"
frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
picture-in-picture" allowfullscreen></iframe>
<Tip>
To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.
</Tip>
## Repository Features
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.

The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag, or branch.

As a result, you can load a specific model version with the `revision` parameter:
```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, or branch name, or commit hash
... )
```
Files are also easily edited in a repository, and you can view the commit history as well as the differences:

## Set Up
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-cli login
```
If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub.
```bash
pip install huggingface_hub
```
Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to login with:
```python
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Convert a Model for all frameworks
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users can still load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.

Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.
<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:
```python
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:
```python
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
```
Then you can save your new TensorFlow model with its new checkpoint:
```python
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:
```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
... "path/to/awesome-name-you-picked", from_pt=True
... )
```
</jax>
</frameworkcontent>
## Push a model during training
<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>
Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:
```py
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
```
Pass your training arguments as usual to [`Trainer`]:
```py
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
```
After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results, and framework versions to your model card!
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:

- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.
```python
>>> from transformers import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
```
Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub:
```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>
## Use the `push_to_hub` function

You can also call `push_to_hub` directly on your model to upload it to the Hub.

Specify your model name in `push_to_hub`:
```py
>>> pt_model.push_to_hub("my-awesome-model")
```
This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function:
```py
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
```
If you belong to an organization and want to push your model under the organization name instead, add it to the `repo_id`:
```python
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
```
The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:
```py
>>> tokenizer.push_to_hub("my-awesome-model")
```
Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:
```python
>>> tf_model.push_to_hub("my-awesome-model")
```
Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.

For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).
## Upload with the web interface
Users who prefer to upload a model without writing code can upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

From here, add some information about your model:

- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.
Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

## Add a model card
To make sure users understand your model's capabilities, limitations, potential biases, and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:

* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.

Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file, such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# How to add a model to ๐ค Transformers?
The 🤗 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the 🤗 Transformers library and of the model to implement.
At Hugging Face, we're trying to empower more of the community to actively add models, and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)).
<Tip>
If you're interested in implementing a TensorFlow model, take a look at the [How to convert a 🤗 Transformers model to TensorFlow](add_tensorflow_model) guide!
</Tip>
Along the way, you'll:

- get insights into open-source best practices
- understand the design principles behind one of the most popular deep learning libraries
- learn how to efficiently test large models
- learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code

A Hugging Face team member will be available to help you along the way so you'll never be alone. 🤗 ❤️

To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in 🤗 Transformers.
If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on it.
Once you've opened a new model request, the first step is to get familiar with 🤗 Transformers if you aren't already!
## General overview of ๐ค Transformers
First, you should get a general overview of 🤗 Transformers. 🤗 Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophy or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scale 🤗 Transformers while keeping maintenance costs at a reasonable level.

A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). As a result of our way of working, there are some choices that we try to apply to all models:

- Composition is generally favored over abstraction
- Duplicating code is not always bad if it strongly improves the readability or accessibility of a model
- Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_....py` file.

In our opinion, the library's code is not just a means to provide a product, *e.g.*, the ability to use BERT for inference, but also the very product itself that we want to improve.
### Overview of models
To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we'll call the model to be added to 🤗 Transformers "BrandNewBert".
Let's take a look:
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>
As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum.
There are never more than two levels of abstraction for any model in the library.
`BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel`, which in turn inherits from [`PreTrainedModel`],
and that's it.
As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`].
The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and
[`~PreTrainedModel.save_pretrained`],
which are used for serialization and deserialization.
All of the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script.
Next, we want to make sure that a model with a specific head layer (such as `BrandNewBertForMaskedLM`) does not inherit from `BrandNewBertModel`,
but rather uses `BrandNewBertModel` as a component that can be called in its forward pass to keep the level of abstraction low.
Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`],
and thus can be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:
```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config # model has access to its config
```
Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both the model and the configuration are saved.
### Code style
When coding a new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-)

1. The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [here](pr_checks#check-copies) for more documentation on copies).
2. The code should be fully understandable. This means you should pick descriptive variable names and avoid abbreviations. For example, `activation` is preferred to `act`. One-letter variable names are strongly discouraged unless they are indices in a for loop.
3. More generally, we prefer longer, explicit code to short, magical one.
4. Avoid subclassing `nn.Sequential` in PyTorch. Instead, subclass `nn.Module` and write out the forward pass so that anyone using your code can quickly debug it by adding print statements or breakpoints.
5. Your function signatures should be type-annotated. For the rest, good variable names are more readable and understandable than type annotations.
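The rules above can be sketched in a toy module. This is an illustrative example only, not a real Transformers model: it subclasses `nn.Module` with an explicit forward pass, uses descriptive names (`activation`, not `act`), and type-annotates the signature.

```python
import torch
from torch import nn


class BrandNewBertIntermediate(nn.Module):
    """Toy feed-forward block written in the style described above."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.dense = nn.Linear(hidden_size, intermediate_size)
        self.activation = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Written out step by step so a print statement or breakpoint can
        # be inserted anywhere, rather than hidden in an nn.Sequential.
        hidden_states = self.dense(hidden_states)
        hidden_states = self.activation(hidden_states)
        return hidden_states
```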
### Overview of tokenizers
Not quite ready yet :-( This section will be added soon!
## Step-by-step recipe to add a model to ๐ค Transformers
The way to port a model differs from person to person, so it can be very helpful for you to take a look at summaries of how other contributors ported models to 🤗 Transformers. Here is a list of community blog posts on how to port a model:

1. [Porting a GPT2 model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting a WMT19 MT model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)
From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for a new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that your model's tokenizer may be based on one model implementation while your model's modeling code is based on another one. *E.g.*, FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all the theoretical aspects of the model in the paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step of adding your model. Don't hesitate to ask if you notice you are not making progress.
In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers.

The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do list:
- ☐ (Optional) Understood the model's theoretical aspects
- ☐ Prepared the 🤗 Transformers dev environment
- ☐ Set up a debugging environment of the original repository
- ☐ Created a script that successfully runs the `forward()` pass using the original repository and checkpoint
- ☐ Successfully added the model skeleton to 🤗 Transformers
- ☐ Successfully converted the original checkpoint to a 🤗 Transformers checkpoint
- ☐ Successfully ran the `forward()` pass in 🤗 Transformers that gives identical output to the original checkpoint
- ☐ Finished the model tests in 🤗 Transformers
- ☐ Successfully added the tokenizer in 🤗 Transformers
- ☐ Ran end-to-end integration tests
- ☐ Finished the docs
- ☐ Uploaded the model weights to the Hub
- ☐ Submitted the pull request
- ☐ (Optional) Added a demo notebook
To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`.
However, if you prefer to understand the theoretical aspects of the model *on-the-job*, it is totally fine to directly dive into `BrandNewBert`'s code-base.
This option might suit you better if your engineering skills are better than your theoretical skills,
if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers.
### 1. (Optional) Theoretical aspects of BrandNewBert
You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand.
If this is the case, that's fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the
information necessary to effectively re-implement the model in 🤗 Transformers.
That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely:
- What type of model is *brand_new_bert*? A BERT-like encoder-only model? A GPT2-like decoder-only model? A BART-like encoder-decoder model?
  Look at the [model_summary](model_summary) if you're not familiar with the differences between those.
- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.*, summarization?
- What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
- Which of the already existing [🤗 Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*?
- What type of tokenizer is used? A SentencePiece tokenizer? A WordPiece tokenizer? Is it the same tokenizer used for BERT or BART?
After you feel like you have a good overview of the model's architecture, you might want to write to the Hugging Face team with any questions you might have.
This might include questions regarding the model's architecture, its attention layer, etc.
We will be more than happy to help you.
### 2. Next prepare your environment
1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page.
   This creates a copy of the code under your GitHub user account.
2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:
```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```
3. Set up a development environment, for instance by running the following commands:
```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure you install the Deep Learning framework you are working with (PyTorch, TensorFlow, and/or Flax) and then do:
```bash
pip install -e ".[quality]"
```
which should be enough for most use cases. You can then return to the parent directory:
```bash
cd ..
```
4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions at
   https://pytorch.org/get-started/locally/.

   **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.
5. To port *brand_new_bert*, you will also need access to its original repository:
```bash
git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
cd brand_new_bert
pip install -e .
```
You have now set up a development environment to port *brand_new_bert* to 🤗 Transformers.
### 3.-4. Run a pretrained checkpoint using the original repository
At first, you'll work on the original *brand_new_bert* repository. Often, the original implementation is very "researchy", meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to take a working model and rewrite it to make it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into 🤗 Transformers - trying to make complex new NLP technology accessible to **everybody**.
You should start by diving into the original repository.

Successfully running the official pretrained model in the original repository is often **the most difficult** step.
From our experience, it is very important to spend some time getting familiar with the original code-base. You need to figure out the following:

- Where to find the pretrained weights?
- How to load the pretrained weights into the corresponding model?
- How to run the tokenizer independently from the model?
- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.*, EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.*, *self-attention*, *cross-attention*, etc.?
- How can you debug the model in the original environment of the repository? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE like PyCharm to debug the model?
It is very important that before you start the porting process, you can **efficiently** debug code in the original repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request, in the original repository. The maintainers of the repository are most likely very happy about someone looking into their code!
At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. It is very important to be able to debug code in the original repository first. We also strongly advise against setting up a GPU environment at first. Start by working on the CPU and verify that the model loads correctly into 🤗 Transformers. Only at the very end should you verify that the model also works as expected on GPU.
In general, there are two possible debugging environments for running the original model:

- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)
- Local Python scripts.
Jupyter notebooks have the advantage that they allow for cell-by-cell execution, which can be helpful to better split logical components from one another and to have faster debugging cycles, as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which can be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend working with them.

Depending on the original repository, you may end up with a script like the following to load a pretrained checkpoint and run a forward pass:
```python
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids
original_output = model.predict(input_ids)
```
Regarding the debugging strategy, there are usually several options to choose from:

- Decompose the original model into many small testable components, and run a forward pass on each of them for verification
- Decompose the original model only into the original tokenizer and the original model, run a forward pass on those, and use intermediate print statements or breakpoints for verification
Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code-base.

If the original code-base allows you to decompose the model into smaller sub-components, *e.g.*, if the original code-base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages to choosing the more difficult road in the beginning:
- at a later stage, when comparing the original model to the 🤗 Transformers implementation, you can verify automatically for each component individually that the corresponding component of the 🤗 Transformers implementation matches, instead of relying on visual comparison
- it helps you decompose the big problem of porting a model into the smaller problems of porting individual components only, and thus structure your work better
- separating the model into logically meaningful components helps you get a better overview of the model's design and thus better understand the model
- at a later stage, those component-by-component tests help you ensure that no regression occurs as you continue changing your code
[Lysandreใฎ](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) ELECTRAใฎ็ตฑๅใใงใใฏใฏใใใใใฉใฎใใใซ่กใใใใใฎ่ฏใไพใงใใ
ใใ ใใๅ
ใฎใณใผใใใผในใ้ๅธธใซ่ค้ใงใไธญ้ใณใณใใผใใณใใใณใณใใคใซใขใผใใงๅฎ่กใใใใจใใ่จฑๅฏใใชใๅ ดๅใใขใใซใๅฐใใชใในใๅฏ่ฝใชใตใใณใณใใผใใณใใซๅ่งฃใใใใจใๆ้ใใใใใใใใใไธๅฏ่ฝใงใใใใจใใใใพใใ
่ฏใไพใฏ[T5ใฎMeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow)ใฉใคใใฉใชใงใใใ้ๅธธใซ่ค้ใงใขใใซใใตใใณใณใใผใใณใใซๅ่งฃใใ็ฐกๅใชๆนๆณใๆไพใใชใใใจใใใใพใใใใฎใใใชใฉใคใใฉใชใงใฏใ้ๅธธใใใชใณใในใใผใใกใณใใๆค่จผใใใใจใซไพๅญใใพใใ
ใฉใฎๆฆ็ฅใ้ธๆใใฆใใๆจๅฅจใใใๆ้ ใฏ้ๅธธๅใใงใๆๅใฎใฌใคใคใผใใใใใใฐใ้ๅงใใๆๅพใฎใฌใคใคใผใใใใใใฐใ่กใในใใงใใ
้ๅธธใไปฅไธใฎ้ ๅบใงๆฌกใฎใฌใคใคใผใใใฎๅบๅใๅๅพใใใใจใใๅงใใใพใ๏ผ
1. ใขใใซใซๆธกใใใๅ
ฅๅIDใๅๅพใใ
2. ๅ่ชใฎๅใ่พผใฟใๅๅพใใ
3. ๆๅใฎTransformerใฌใคใคใผใฎๅ
ฅๅใๅๅพใใ
4. ๆๅใฎTransformerใฌใคใคใผใฎๅบๅใๅๅพใใ
5. ๆฌกใฎn - 1ใคใฎTransformerใฌใคใคใผใฎๅบๅใๅๅพใใ
6. BrandNewBertใขใใซๅ
จไฝใฎๅบๅใๅๅพใใ
ๅ
ฅๅIDใฏๆดๆฐใฎ้
ๅใงใใๅฟ
่ฆใใใใ*ไพ๏ผ* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` ใฎใใใซใชใใพใใ
ไปฅไธใฎใฌใคใคใผใฎๅบๅใฏๅคๆฌกๅ
ใฎๆตฎๅๅฐๆฐ็น้
ๅใงใใใใจใๅคใใๆฌกใฎใใใซใชใใใจใใใใพใ๏ผ
```
[[
[-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024],
[-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132],
[-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648],
...,
[-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288],
[-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191],
[-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]],
```
๐ค Transformersใซ่ฟฝๅ ใใใใในใฆใฎใขใใซใฏใ็ตฑๅใในใใๆฐๅๅๆ ผใใใใจใๆๅพ
ใใใฆใใใๅ
ใฎใขใใซใจ๐ค Transformersใงๅๅฎ่ฃ
ใใใใใผใธใงใณใใ0.001ใฎ็ฒพๅบฆใพใงใพใฃใใๅใๅบๅใๆไพใใๅฟ
่ฆใใใใพใใ
็ฐใชใใฉใคใใฉใชใใฌใผใ ใฏใผใฏใงๅใใขใใซใๆธใใๅ ดๅใใใใใซ็ฐใชใๅบๅใ่ฟใใใจใๆญฃๅธธใงใใใใใ่ชคๅทฎ่จฑๅฎนๅคใจใใฆ1e-3๏ผ0.001๏ผใๅใๅ
ฅใใฆใใพใใใขใใซใใปใผๅใๅบๅใ่ฟใใ ใใงใฏไธๅๅใงใใปใผๅไธใงใใๅฟ
่ฆใใใใพใใใใฎใใใ๐ค Transformersใใผใธใงใณใฎไธญ้ๅบๅใๅ
ใฎ*brand_new_bert*ใฎๅฎ่ฃ
ใฎไธญ้ๅบๅใจ่คๆฐๅใซใใใฃใฆๆฏ่ผใใใใจใซใชใใงใใใใใใฎ้ใๅ
ใฎใชใใธใใชใฎ**ๅน็็ใช**ใใใใฐ็ฐๅขใ้ๅธธใซ้่ฆใงใใไปฅไธใฏใใใใใฐ็ฐๅขใใงใใใ ใๅน็็ใซใใใใใฎใขใใใคในใงใใ
- ไธญ้็ตๆใใใใใฐใใๆ้ฉใชๆนๆณใ่ฆใคใใใๅ
ใฎใชใใธใใชใฏPyTorchใงๆธใใใฆใใพใใ๏ผใใฎๅ ดๅใๅ
ใฎใขใใซใใใๅฐใใชใตใใณใณใใผใใณใใซๅ่งฃใใฆไธญ้ๅคใๅๅพใใ้ทใในใฏใชใใใๆธใใใจใใใใใ้ฉๅใงใใๅ
ใฎใชใใธใใชใTensorflow 1ใงๆธใใใฆใใๅ ดๅใ[tf.print](https://www.tensorflow.org/api_docs/python/tf/print)ใชใฉใฎTensorFlowใฎใใชใณใๆไฝใไฝฟ็จใใฆไธญ้ๅคใๅบๅใใๅฟ
่ฆใใใใใใใใพใใใๅ
ใฎใชใใธใใชใJaxใงๆธใใใฆใใๅ ดๅใใใฉใฏใผใใในใฎๅฎ่กๆใซใขใใซใ**jittedใใใฆใใชใ**ใใจใ็ขบ่ชใใฆใใ ใใใไพ๏ผ[ใใฎใชใณใฏ](https://github.com/google/jax/issues/196)ใใใงใใฏใ
- ไฝฟ็จๅฏ่ฝใชๆๅฐใฎไบๅๅญฆ็ฟๆธใฟใใงใใฏใใคใณใใไฝฟ็จใใพใใใใงใใฏใใคใณใใๅฐใใใปใฉใใใใใฐใตใคใฏใซใ้ใใชใใพใใไบๅๅญฆ็ฟๆธใฟใขใใซใใใฉใฏใผใใในใซ10็งไปฅไธใใใๅ ดๅใๅน็็ใงใฏใใใพใใใ้ๅธธใซๅคงใใชใใงใใฏใใคใณใใใๅฉ็จใงใใชใๅ ดๅใๆฐใใ็ฐๅขใงใฉใณใใ ใซๅๆๅใใใใฆใงใคใใๆใคใใใผใขใใซใไฝๆใใใใใใฎใฆใงใคใใ๐ค Transformersใใผใธใงใณใฎใขใใซใจๆฏ่ผใใๆนใ่ฏใใใใใใพใใใ
- ๅ
ใฎใชใใธใใชใงใใฉใฏใผใใในใๅผใณๅบใๆใ็ฐกๅใชๆนๆณใไฝฟ็จใใฆใใใใจใ็ขบ่ชใใฆใใ ใใใ็ๆณ็ใซใฏใๅ
ใฎใชใใธใใชใง**ๅไธใฎใใฉใฏใผใใใน**ใๅผใณๅบใ้ขๆฐใ่ฆใคใใใใงใใใใใฏ้ๅธธใpredictใใใevaluateใใใforwardใใใ__call__ใใจๅผใฐใใพใใ่คๆฐๅใforwardใใๅผใณๅบใ้ขๆฐใใใใใฐใใใใใใพใใใไพ๏ผใใญในใใ็ๆใใใใใซใautoregressive_sampleใใใgenerateใใจๅผใฐใใ้ขๆฐใ
- ใใผใฏใใคใผใผใทใงใณใจใขใใซใฎใใใฉใฏใผใใใในใๅ้ขใใใใจใใฆใใ ใใใๅ
ใฎใชใใธใใชใๅ
ฅๅๆๅญๅใๅ
ฅๅใใๅฟ
่ฆใใใไพใ็คบใๅ ดๅใใใฉใฏใผใใณใผใซๅ
ใงๆๅญๅๅ
ฅๅใๅ
ฅๅIDใซๅคๆดใใใๅ ดๆใ็นๅฎใใใใฎใใคใณใใใ้ๅงใใพใใใใใฏใในใฏใชใใใ่ชๅใงๆธใใใๅ
ฅๅๆๅญๅใงใฏใชใๅ
ฅๅIDใ็ดๆฅๅ
ฅๅใงใใใใใซๅ
ใฎใณใผใใๅคๆดใใๅฟ
่ฆใใใใใใใใพใใใ
- ใใใใฐใปใใใขใใๅ
ใฎใขใใซใใใฌใผใใณใฐใขใผใใงใฏใชใใใจใ็ขบ่ชใใฆใใ ใใใใใฌใผใใณใฐใขใผใใงใฏใใขใใซๅ
ใฎ่คๆฐใฎใใญใใใขใฆใใฌใคใคใผใฎใใใซใฉใณใใ ใชๅบๅใ็ๆใใใใใจใใใใพใใใใใใฐ็ฐๅขใฎใใฉใฏใผใใในใ**ๆฑบๅฎ่ซ็**ใงใใใใจใ็ขบ่ชใใใใญใใใขใฆใใฌใคใคใผใไฝฟ็จใใใชใใใใซใใพใใใพใใฏใๆฐใใๅฎ่ฃ
ใๅใใใฌใผใ ใฏใผใฏๅ
ใซใใๅ ดๅใ*transformers.utils.set_seed*ใไฝฟ็จใใฆใใ ใใใ
ไปฅไธใฎใปใฏใทใงใณใงใฏใ*brand_new_bert*ใซใคใใฆใใใๅ
ทไฝ็ใซใฉใฎใใใซ่กใใใซใคใใฆใฎ่ฉณ็ดฐ/ใใณใใๆไพใใพใใ
### 5.-14. Port BrandNewBert to ๐ค Transformers
ๆฌกใซใใคใใซๆฐใใใณใผใใ๐ค Transformersใซ่ฟฝๅ ใงใใพใใ๐ค Transformersใฎใใฉใผใฏใฎใฏใญใผใณใซ็งปๅใใฆใใ ใใ๏ผ
```bash
cd transformers
```
็นๅฅใชใฑใผในใจใใฆใๆขๅญใฎใขใใซใจๅฎๅ
จใซไธ่ดใใใขใผใญใใฏใใฃใฎใขใใซใ่ฟฝๅ ใใๅ ดๅใ
[ใใฎใปใฏใทใงใณ](#write-a-conversion-script)ใง่ชฌๆใใใฆใใใใใซใๅคๆในใฏใชใใใ่ฟฝๅ ใใใ ใใงๆธใฟใพใใ
ใใฎๅ ดๅใๆขๅญใฎใขใใซใฎๅฎๅ
จใชใขใใซใขใผใญใใฏใใฃใๅๅฉ็จใงใใพใใ
ใใไปฅๅคใฎๅ ดๅใๆฐใใใขใใซใฎ็ๆใ้ๅงใใพใใใใใง2ใคใฎ้ธๆ่ขใใใใพใ๏ผ
- `transformers-cli add-new-model-like`ใไฝฟ็จใใฆๆขๅญใฎใขใใซใฎใใใชๆฐใใใขใใซใ่ฟฝๅ ใใพใ
- `transformers-cli add-new-model`ใไฝฟ็จใใฆใใใณใใฌใผใใใๆฐใใใขใใซใ่ฟฝๅ ใใพใ๏ผใขใใซใฎใฟใคใใซๅฟใใฆBERTใพใใฏBartใฎใใใซ่ฆใใพใ๏ผ
ใฉใกใใฎๅ ดๅใงใใใขใใซใฎๅบๆฌๆ
ๅ ฑใๅ
ฅๅใใใใใฎ่ณชๅไบ้
ใ่กจ็คบใใใพใใ
2็ช็ฎใฎใณใใณใใๅฎ่กใใใซใฏใ`cookiecutter`ใใคใณในใใผใซใใๅฟ
่ฆใใใใพใใ
่ฉณ็ดฐใซใคใใฆใฏ[ใใกใ](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model)ใใ่ฆงใใ ใใใ
**ไธป่ฆใช huggingface/transformers ใชใใธใใชใงใใซใชใฏใจในใใ้ใ**
่ชๅ็ๆใใใใณใผใใ้ฉๅฟใๅงใใๅใซใ๐ค Transformers ใซใไฝๆฅญไธญ๏ผWIP๏ผใใใซใชใฏใจในใใ้ใใฟใคใใณใฐใงใใ
ไพ๏ผใ[WIP] *brand_new_bert* ใ่ฟฝๅ ใใชใฉใงใใ
ใใใซใใใใฆใผใถใผใจ Hugging Face ใใผใ ใ๐ค Transformers ใซใขใใซใ็ตฑๅใใไฝๆฅญใไธฆ่กใใฆ่กใใใจใใงใใพใใ
ไปฅไธใฎๆ้ ใๅฎ่กใใฆใใ ใใ๏ผ
1. ใกใคใณใใฉใณใใใๅใใใใใๅๅใฎใใฉใณใใไฝๆใใพใใ
```bash
git checkout -b add_brand_new_bert
```
2. ่ชๅ็ๆใใใใณใผใใใณใใใใใฆใใ ใใ:
```bash
git add .
git commit
```
3. ็พๅจใฎ main ใใฉใณใใซใใงใใใใฆใชใใผใน
```bash
git fetch upstream
git rebase upstream/main
```
4. ๅคๆดใใใชใใฎใขใซใฆใณใใซใใใทใฅใใใซใฏใๆฌกใฎใณใใณใใไฝฟ็จใใพใ๏ผ
```bash
git push -u origin a-descriptive-name-for-my-changes
```
5. ๆบ่ถณใใใใGitHubไธใฎใใฉใผใฏใฎใฆใงใใใผใธใซ็งปๅใใพใใ[ใใซใชใฏใจในใ]ใใฏใชใใฏใใพใใๅฐๆฅใฎๅคๆดใซๅใใฆใHugging Face ใใผใ ใฎใกใณใใผใฎGitHubใใณใใซใใฌใใฅใขใผใจใใฆ่ฟฝๅ ใใฆใใ ใใใ
6. GitHubใฎใใซใชใฏใจในใใฆใงใใใผใธใฎๅณๅดใซใใใใใฉใใใซๅคๆใใใฏใชใใฏใใฆใPRใใใฉใใใซๅคๆดใใพใใ
ไปฅไธใงใฏใ้ฒๆใใใฃใๅ ดๅใฏๅธธใซไฝๆฅญใใณใใใใใใใใทใฅใใฆใใซใชใฏใจในใใซ่กจ็คบใใใใใใซใใฆใใ ใใใใใใซใๅฎๆ็ใซใกใคใณใใใฎๆๆฐใฎๅคๆดใๅใ่พผใใใใซใๆฌกใฎใใใซ่กใใใจใๅฟใใชใใงใใ ใใ๏ผ
```bash
git fetch upstream
git merge upstream/main
```
ไธ่ฌ็ใซใใขใใซใฎๅฎ่ฃ
ใซ้ขใใ่ณชๅใฏPull Request (PR) ใง่กใใPRๅ
ใง่ญฐ่ซใใ่งฃๆฑบใใพใใ
ใใใซใใใHugging Face ใใผใ ใฏๆฐใใใณใผใใใณใใใใใ้ใ่ณชๅใใใๅ ดๅใซๅธธใซ้็ฅใๅใใใใจใใงใใพใใ
่ณชๅใๅ้กใ่งฃๆฑบใใใ้ใซใๅ้กใ่ณชๅใ็่งฃใใใใใใใใซใHugging Face ใใผใ ใซใณใผใใๆๆใใใใจใ้ๅธธใซๅฝน็ซใกใพใใ
ใใฎใใใซใฏใใFiles changedใใฟใใซ็งปๅใใฆใในใฆใฎๅคๆดใ่กจ็คบใใ่ณชๅใใใ่กใซ็งปๅใใฆใ+ใใทใณใใซใใฏใชใใฏใใฆใณใกใณใใ่ฟฝๅ ใใพใใ
่ณชๅใๅ้กใ่งฃๆฑบใใใๅ ดๅใฏใไฝๆใใใใณใกใณใใฎใResolveใใใฟใณใใฏใชใใฏใงใใพใใ
ๅๆงใซใHugging Face ใใผใ ใฏใณใผใใใฌใใฅใผใใ้ใซใณใกใณใใ้ใใพใใ
PRไธใงใฎใปใจใใฉใฎ่ณชๅใฏGitHubไธใง่กใใใจใใๅงใใใพใใ
ไธ่ฌ็ใช่ณชๅใใๅ
ฌใซใฏใใพใๅฝน็ซใใชใ่ณชๅใซใคใใฆใฏใSlackใใกใผใซใงHugging Face ใใผใ ใซ้ฃ็ตกใใใใจใใงใใพใใ
**5. ็ๆใใใใขใใซใณใผใใ"brand_new_bert"ใซ้ฉๅฟใใใ**
ๆๅใซใใขใใซ่ชไฝใซ็ฆ็นใๅฝใฆใใใผใฏใใคใถใซใฏๆฐใซใใชใใงใใ ใใใ
้ข้ฃใใใณใผใใฏใ็ๆใใใใใกใคใซ`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใใใณ`src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`ใง่ฆใคใใใฏใใงใใ
ใใฆใใคใใซใณใผใใฃใณใฐใๅงใใใใจใใงใใพใ :smile:ใ
`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใซใใ็ๆใใใใณใผใใฏใใจใณใณใผใใผใฎใฟใฎใขใใซใงใใใฐBERTใจๅใใขใผใญใใฏใใฃใๆใฃใฆใใใใใจใณใณใผใใผ-ใใณใผใใผใขใใซใงใใใฐBARTใจๅใใขใผใญใใฏใใฃใๆใฃใฆใใใฏใใงใใ
ใใฎๆฎต้ใงใฏใใขใใซใฎ็่ซ็ใชๅด้ขใซใคใใฆๅญฆใใ ใใจใๆใๅบใในใใงใใใคใพใใใใใฎใขใใซใฏBERTใพใใฏBARTใจใฉใฎใใใซ็ฐใชใใฎใ๏ผใใจใใใใจใงใใ
ใใใใฎๅคๆดใๅฎ่ฃ
ใใพใใใใใใใฏ้ๅธธใใปใซใใขใใณใทใงใณใฌใคใคใผใๆญฃ่ฆๅใฌใคใคใผใฎ้ ๅบใชใฉใๅคๆดใใใใจใๆๅณใใพใใ
ๅใณใใใชใใฎใขใใซใใฉใฎใใใซๅฎ่ฃ
ใใใในใใใใใ่ฏใ็่งฃใใใใใซใTransformersๅ
ใซๆขๅญใฎใขใใซใฎ้กไผผใขใผใญใใฏใใฃใ่ฆใใใจใๅฝน็ซใคใใจใใใใพใใ
ใใฎๆ็นใงใฏใใณใผใใๅฎๅ
จใซๆญฃ็ขบใพใใฏใฏใชใผใณใงใใๅฟ
่ฆใฏใใใพใใใ
ใใใใใพใใฏๅฟ
่ฆใชใณใผใใฎๆๅใฎ*ใฏใชใผใณใงใชใ*ใณใใผ๏ผใใผในใใใผใธใงใณใ`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใซ่ฟฝๅ ใใๅฟ
่ฆใชใณใผใใใในใฆ่ฟฝๅ ใใใฆใใใจๆใใใพใงๆนๅ/ไฟฎๆญฃใๅๅพฉ็ใซ่กใใใจใใๅงใใงใใ
็งใใกใฎ็ต้จใใใๅฟ
่ฆใชใณใผใใฎๆๅใฎใใผใธใงใณใ่ฟ
้ใซ่ฟฝๅ ใใๆฌกใฎใปใฏใทใงใณใง่ชฌๆใใๅคๆในใฏใชใใใไฝฟ็จใใฆใณใผใใ็นฐใ่ฟใๆนๅ/ไฟฎๆญฃใใๆนใๅน็็ใงใใใใจใๅคใใงใใ
ใใฎๆ็นใงๅไฝใใๅฟ
่ฆใใใใฎใฏใ๐ค Transformersใฎ"brand_new_bert"ใฎๅฎ่ฃ
ใใคใณในใฟใณในๅใงใใใใจใ ใใงใใใคใพใใไปฅไธใฎใณใใณใใๆฉ่ฝใใๅฟ
่ฆใใใใพใ๏ผ
```python
from transformers import BrandNewBertModel, BrandNewBertConfig
model = BrandNewBertModel(BrandNewBertConfig())
```
ไธ่จใฎใณใใณใใฏใ`BrandNewBertConfig()` ใงๅฎ็พฉใใใใใใฉใซใใใฉใกใผใฟใซๅพใฃใฆใขใใซใไฝๆใใ
ใในใฆใฎใณใณใใผใใณใใฎ `init()` ใกใฝใใใๆญฃๅธธใซๅไฝใใใใจใ็ขบ่ชใใพใใ
ใในใฆใฎใฉใณใใ ใชๅๆๅใฏใ`BrandnewBertPreTrainedModel` ใฏใฉในใฎ `_init_weights` ใกใฝใใใง่กใๅฟ
่ฆใใใใพใใ
ใใฎใกใฝใใใฏใ่จญๅฎๅคๆฐใซไพๅญใใใในใฆใฎใชใผใใขใธใฅใผใซใๅๆๅใใๅฟ
่ฆใใใใพใใไปฅไธใฏใBERT ใฎ `_init_weights` ใกใฝใใใฎไพใงใ๏ผ
```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```
็นๅฎใฎใขใธใฅใผใซใซ็นๅฅใชๅๆๅใๅฟ
่ฆใชๅ ดๅใใซในใฟใ ในใญใผใ ใใใใซๆใคใใจใใงใใพใใใใจใใฐใ
`Wav2Vec2ForPreTraining`ใงใฏใๆๅพใฎ2ใคใฎ็ทๅฝขๅฑคใซใฏ้ๅธธใฎPyTorchใฎ`nn.Linear`ใฎๅๆๅใๅฟ
่ฆใงใใใ
ไปใฎใในใฆใฎๅฑคใฏไธ่จใฎใใใชๅๆๅใไฝฟ็จใใๅฟ
่ฆใใใใพใใใใใฏไปฅไธใฎใใใซใณใผใใฃใณใฐใใใฆใใพใ๏ผ
```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
        module.project_q.reset_parameters()
        module.project_hid._is_hf_initialized = True
        module.project_q._is_hf_initialized = True
    elif isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
```
`_is_hf_initialized`ใใฉใฐใฏใใตใใขใธใฅใผใซใไธๅบฆใ ใๅๆๅใใใใจใ็ขบๅฎใซใใใใใซๅ
้จใงไฝฟ็จใใใพใใ
`module.project_q`ใจ`module.project_hid`ใฎใใใซใใใ`True`ใซ่จญๅฎใใใใจใงใ
ใซในใฟใ ๅๆๅใๅพใงไธๆธใใใใชใใใใซใใ`_init_weights`้ขๆฐใใใใใซ้ฉ็จใใใชใใใใซใใพใใ
**6. ๅคๆในใฏใชใใใๆธใ**
ๆฌกใซใ*brand_new_bert* ใฎๅ
ใฎใชใใธใใชใงใใใใฐใซไฝฟ็จใใใใงใใฏใใคใณใใใๆฐใใไฝๆใใ ๐ค Transformers ๅฎ่ฃ
ใฎ *brand_new_bert* ใจไบๆๆงใฎใใใใงใใฏใใคใณใใซๅคๆใงใใๅคๆในใฏใชใใใๆธใๅฟ
่ฆใใใใพใใ
ๅคๆในใฏใชใใใใผใญใใๆธใใใจใฏใๅงใใใใพใใใใไปฃใใใซ ๐ค Transformers ใงๆขใซๅญๅจใใ้กไผผใฎใขใใซใๅใใใฌใผใ ใฏใผใฏใงๅคๆใใในใฏใชใใใ่ชฟในใใใจใ่ฏใใงใใใใ
้ๅธธใๆขๅญใฎๅคๆในใฏใชใใใใณใใผใใฆใ่ชๅใฎใฆใผในใฑใผในใซใใใใซ้ฉๅฟใใใใใจใงๅๅใงใใ
Hugging Face ใใผใ ใซๆขๅญใฎใขใใซใซ้กไผผใใๅคๆในใฏใชใใใๆใใฆใใใใใจใ่บ่บใใชใใงใใ ใใใ
- TensorFlowใใPyTorchใซใขใใซใ็งปๆคใใฆใใๅ ดๅใ่ฏใๅบ็บ็นใฏBERTใฎๅคๆในใฏใชใใใใใใใพใใ [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
- PyTorchใใPyTorchใซใขใใซใ็งปๆคใใฆใใๅ ดๅใ่ฏใๅบ็บ็นใฏBARTใฎๅคๆในใฏใชใใใใใใใพใใ [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)
ไปฅไธใงใฏใPyTorchใขใใซใๅฑคใฎ้ใฟใใฉใฎใใใซไฟๅญใใๅฑคใฎๅๅใๅฎ็พฉใใใใซใคใใฆ็ฐกๅใซ่ชฌๆใใพใใ
PyTorchใงใฏใๅฑคใฎๅๅใฏๅฑคใซไธใใใฏใฉในๅฑๆงใฎๅๅใซใใฃใฆๅฎ็พฉใใใพใใ
PyTorchใง `SimpleModel` ใจใใใใใผใขใใซใๅฎ็พฉใใพใใใ๏ผ
```python
from torch import nn
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
```
ใใใงใใใฎใขใใซๅฎ็พฉใฎใคใณในใฟใณในใไฝๆใใ`dense`ใ`intermediate`ใ`layer_norm`ใฎใในใฆใฎ้ใฟใใฉใณใใ ใช้ใฟใงๅใใใขใใซใไฝๆใงใใพใใใขใใซใฎใขใผใญใใฏใใฃใ็ขบ่ชใใใใใซใใขใใซใๅฐๅทใใฆใฟใพใใใใ
```python
model = SimpleModel()
print(model)
```
ใใใฏไปฅไธใๅบๅใใพใ๏ผ
```
SimpleModel(
(dense): Linear(in_features=10, out_features=10, bias=True)
(intermediate): Linear(in_features=10, out_features=10, bias=True)
(layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
```
ๅฑคใฎๅๅใฏPyTorchใฎใฏใฉในๅฑๆงใฎๅๅใซใใฃใฆๅฎ็พฉใใใฆใใพใใ็นๅฎใฎๅฑคใฎ้ใฟๅคใๅบๅใใใใจใใงใใพใ๏ผ
```python
print(model.dense.weight.data)
```
ใใใซใใใใฉใณใใ ใซๅๆๅใใใ้ใฟใ็ขบ่ชใงใใพใ๏ผ
```
tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212,
-0.2077, 0.2157],
[ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190,
0.2166, -0.0212],
[-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950,
-0.1023, -0.0447],
[-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415,
-0.1876, -0.2467],
[ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,
0.2577, 0.0402],
[ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604,
0.2132, 0.1680],
[ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090,
0.2707, -0.2509],
[-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407,
0.1829, -0.1568],
[-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923,
0.0333, -0.0536],
[-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739,
0.2220, 0.2358]]).
```
ๅคๆในใฏใชใใๅ
ใงใฏใใฉใณใใ ใซๅๆๅใใใ้ใฟใใๅฏพๅฟใใใใงใใฏใใคใณใๅ
ใฎๆญฃ็ขบใช้ใฟใงๅใใๅฟ
่ฆใใใใพใใไพใใฐใไปฅไธใฎใใใซ่จ่ฟฐใใพใ๏ผ
```python
# retrieve matching layer weights, e.g. by
# recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer
model_pointer = getattr(model, "dense")
model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```
PyTorchใขใใซใฎๅใฉใณใใ ๅๆๅใใใ้ใฟใจๅฏพๅฟใใไบๅๅญฆ็ฟๆธใฟใใงใใฏใใคใณใใฎ้ใฟใ**ๅฝข็ถใจๅๅใฎไธกๆน**ใงๆญฃ็ขบใซไธ่ดใใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใ
ใใใ่กใใใใซใๅฝข็ถใซๅฏพใใassertในใใผใใกใณใใ่ฟฝๅ ใใใใงใใฏใใคใณใใฎ้ใฟใฎๅๅใๅบๅใใใใจใ**ๅฟ
่ฆไธๅฏๆฌ **ใงใใไพใใฐใๆฌกใฎใใใชในใใผใใกใณใใ่ฟฝๅ ใใๅฟ
่ฆใใใใพใ๏ผ
```python
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
```
ใพใใไธกๆนใฎ้ใฟใฎๅๅใๅฐๅทใใฆใไธ่ดใใฆใใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใไพใใฐใๆฌกใฎใใใซใใพใ๏ผ
```python
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```
ใใๅฝข็ถใพใใฏๅๅใฎใใใใใไธ่ดใใชใๅ ดๅใใใใใ่ชคใฃใฆ๐ค Transformersใฎๅฎ่ฃ
ใซๅๆๅใใใใฌใคใคใผใซ้้ใฃใใใงใใฏใใคใณใใฎ้ใฟใๅฒใๅฝใฆใฆใใพใฃใๅฏ่ฝๆงใใใใพใใ
่ชคใฃใๅฝข็ถใฏใใใใใ`BrandNewBertConfig()`ใงใฎ่จญๅฎใใฉใกใผใฟใผใใๅคๆใใใใใงใใฏใใคใณใใงไฝฟ็จใใใใใฎใจๆญฃ็ขบใซไธ่ดใใชใใใใงใใใใ ใใPyTorchใฎใฌใคใคใผใฎๅฎ่ฃ
ใซใใฃใฆใฏใ้ใฟใไบๅใซ่ปข็ฝฎใใๅฟ
่ฆใใใๅ ดๅใใใใพใใ
ๆๅพใซใ**ใในใฆ**ใฎๅฟ
่ฆใช้ใฟใๅๆๅใใใฆใใใใจใ็ขบ่ชใใๅๆๅใซไฝฟ็จใใใชใใฃใใในใฆใฎใใงใใฏใใคใณใใฎ้ใฟใ่กจ็คบใใฆใใขใใซใๆญฃใใๅคๆใใใฆใใใใจใ็ขบ่ชใใฆใใ ใใใ
ๅคๆใใฉใคใขใซใ่ชคใฃใๅฝข็ถในใใผใใกใณใใพใใฏ่ชคใฃใๅๅๅฒใๅฝใฆใงๅคฑๆใใใฎใฏๅฎๅ
จใซๆญฃๅธธใงใใ
ใใใฏใใใใใ`BrandNewBertConfig()`ใง่ชคใฃใใใฉใกใผใฟใผใไฝฟ็จใใใใ๐ค Transformersใฎๅฎ่ฃ
ใซ่ชคใฃใใขใผใญใใฏใใฃใใใใใ๐ค Transformersใฎๅฎ่ฃ
ใฎ1ใคใฎใณใณใใผใใณใใฎ`init()`้ขๆฐใซใใฐใใใใใใใงใใฏใใคใณใใฎ้ใฟใฎ1ใคใ่ปข็ฝฎใใๅฟ
่ฆใใใใใใงใใ
ใใฎในใใใใฏใใในใฆใฎใใงใใฏใใคใณใใฎ้ใฟใๆญฃใใ๐ค Transformersใขใใซใซ่ชญใฟ่พผใพใใใพใงใไปฅๅใฎในใใใใจ็นฐใ่ฟใในใใงใใ
๐ค Transformersๅฎ่ฃ
ใซๆญฃใใใใงใใฏใใคใณใใ่ชญใฟ่พผใใ ๅพใ้ธๆใใใใฉใซใ `/path/to/converted/checkpoint/folder` ใซใขใใซใไฟๅญใงใใพใใใใฎใใฉใซใใซใฏ`pytorch_model.bin`ใใกใคใซใจ`config.json`ใใกใคใซใฎไธกๆนใๅซใพใใใฏใใงใใ
```python
model.save_pretrained("/path/to/converted/checkpoint/folder")
```
**7. ้ ไผๆญ๏ผforward pass๏ผใฎๅฎ่ฃ
**

๐ค Transformersๅฎ่ฃ
ใงไบๅๅญฆ็ฟๆธใฟใฎ้ใฟใๆญฃใใ่ชญใฟ่พผใใ ๅพใ้ ไผๆญใๆญฃใใๅฎ่ฃ
ใใใฆใใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใ[ๅ
ใฎใชใใธใใชใ็่งฃใใ](#34-run-a-pretrained-checkpoint-using-the-original-repository)ใงใๅ
ใฎใชใใธใใชใไฝฟ็จใใฆใขใใซใฎ้ ไผๆญใๅฎ่กใใในใฏใชใใใใใงใซไฝๆใใพใใใไปๅบฆใฏใๅ
ใฎใชใใธใใชใฎไปฃใใใซ๐ค Transformersๅฎ่ฃ
ใไฝฟ็จใใฆ้กไผผใฎในใฏใชใใใไฝๆใใๅฟ
่ฆใใใใพใใไปฅไธใฎใใใซใชใใพใ๏ผ
```python
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
output = model(input_ids).last_hidden_states
```
๐ค Transformersใฎๅฎ่ฃ
ใจๅ
ใฎใขใใซใฎๅฎ่ฃ
ใๆๅใฎๅฎ่กใงๅฎๅ
จใซๅใๅบๅใๆไพใใชใใใใใฉใฏใผใใในใงใจใฉใผใ็บ็ใใๅฏ่ฝๆงใ้ๅธธใซ้ซใใงใใๅคฑๆใใชใใงใใ ใใ - ใใใฏไบๆณใใใฆใใใใจใงใ๏ผ
ใพใใใใฉใฏใผใใในใใจใฉใผใในใญใผใใชใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใ
้้ใฃใๆฌกๅ
ใไฝฟ็จใใใ*ๆฌกๅ
ใฎไธไธ่ด*ใจใฉใผใใ่ชคใฃใใใผใฟๅใชใใธใงใฏใใไฝฟ็จใใใใใจใใใใใใพใใ
ไพใใฐใ`torch.long`ใงใฏใชใ`torch.float32`ใไฝฟ็จใใใพใใ็นๅฎใฎใจใฉใผใ่งฃๆฑบใงใใชใๅ ดๅใฏใ
Hugging Faceใใผใ ใซๅฉใใๆฑใใใใจใ่บ่บใใชใใงใใ ใใใ
๐ค Transformersๅฎ่ฃ
ใๆญฃใใๆฉ่ฝใใใใจใ็ขบ่ชใใๆ็ต็ใช้จๅใฏใๅบๅใ`1e-3`ใฎ็ฒพๅบฆใงๅ็ญใงใใใใจใ็ขบ่ชใใใใจใงใใ
ใพใใๅบๅใฎๅฝข็ถใๅไธใงใใใใจใใคใพใในใฏใชใใใฎ๐ค Transformersๅฎ่ฃ
ใจๅ
ใฎๅฎ่ฃ
ใฎไธกๆนใง`outputs.shape`ใๅใๅคใ็ๆใใๅฟ
่ฆใใใใพใใ
ๆฌกใซใๅบๅๅคใๅไธใงใใใใจใ็ขบ่ชใใๅฟ
่ฆใใใใพใใ
ใใใฏๆฐใใใขใใซใ่ฟฝๅ ใใ้ใฎๆใ้ฃใใ้จๅใฎ1ใคใงใใ
ๅบๅใๅไธใงใชใ็็ฑใฎไธ่ฌ็ใช้้ใใฏไปฅไธใฎ้ใใงใใ
- ไธ้จใฎใฌใคใคใผใ่ฟฝๅ ใใใฆใใชใใใคใพใ*ๆดปๆงๅ*ใฌใคใคใผใ่ฟฝๅ ใใใฆใใชใใใใชใถใใซๆฅ็ถใๅฟใใใใฆใใ
- ๅ่ชๅใ่พผใฟ่กๅใ็ตใฐใใฆใใชใ
- ใชใชใธใใซใฎๅฎ่ฃ
ใใชใใปใใใไฝฟ็จใใฆใใใใใ่ชคใฃใไฝ็ฝฎๅใ่พผใฟใไฝฟ็จใใใฆใใ
- ใใฉใฏใผใใในไธญใซใใญใใใขใฆใใ้ฉ็จใใใฆใใพใใใใใไฟฎๆญฃใใใซใฏใ*model.trainingใFalse*ใงใใใใจใ็ขบ่ชใใใใฉใฏใผใใในไธญใซ่ชคใฃใฆใใญใใใขใฆใใฌใคใคใผใใขใฏใใฃใๅใใใชใใใใซใใพใใ
*ใคใพใ* [PyTorchใฎfunctional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)ใซ*model.training*ใๆธกใใพใใ
ๅ้กใไฟฎๆญฃใใๆ่ฏใฎๆนๆณใฏใ้ๅธธใๅ
ใฎๅฎ่ฃ
ใจ๐ค Transformersๅฎ่ฃ
ใฎใใฉใฏใผใใในใไธฆในใฆ่กจ็คบใใ้ใใใใใใฉใใใ็ขบ่ชใใใใจใงใใ
็ๆณ็ใซใฏใใใฉใฏใผใใในใฎไธกๆนใฎๅฎ่ฃ
ใฎไธญ้ๅบๅใใใใใฐ/ใใชใณใใขใฆใใใฆใ๐ค Transformersๅฎ่ฃ
ใๅ
ใฎๅฎ่ฃ
ใจ็ฐใชใๅบๅใ็คบใใใใใฏใผใฏๅ
ใฎๆญฃ็ขบใชไฝ็ฝฎใ่ฆใคใใใใจใใงใใพใใ
ๆๅใซใไธกๆนใฎในใฏใชใใใฎใใผใใณใผใใฃใณใฐใใใ`input_ids`ใๅไธใงใใใใจใ็ขบ่ชใใพใใ
ๆฌกใซใ`input_ids`ใฎๆๅใฎๅคๆ๏ผ้ๅธธใๅ่ชๅใ่พผใฟ๏ผใฎๅบๅใๅไธใงใใใใจใ็ขบ่ชใใพใใ
ใใฎๅพใใใใใฏใผใฏใฎๆๅพใฎใฌใคใคใผใพใงไฝๆฅญใ้ฒใใพใใ
ใใใใใฎๆ็นใงใ2ใคใฎๅฎ่ฃ
้ใง้ใใใใใใจใซๆฐไปใใฏใใงใใใใซใใ๐ค Transformersๅฎ่ฃ
ใฎใใฐใฎๅ ดๆใ็นๅฎใใใพใใ
็ต้จไธใๅ
ใฎๅฎ่ฃ
ใจ๐ค Transformersๅฎ่ฃ
ใฎใใฉใฏใผใใในใฎๅใไฝ็ฝฎใซๅคใใฎใใชใณใในใใผใใกใณใใ่ฟฝๅ ใใไธญ้ใใฌใผใณใใผใทใงใณใงๅใๅคใ็คบใใใชใณใในใใผใใกใณใใๆฎต้็ใซๅ้คใใใฎใใทใณใใซใใคๅนๆ็ใชๆนๆณใงใใ
ไธกๆนใฎๅฎ่ฃ
ใๅใๅบๅใ็ๆใใใใจใซ่ชไฟกใๆใฃใฆใใๅ ดๅใ`torch.allclose(original_output, output, atol=1e-3)`ใไฝฟ็จใใฆๅบๅใ็ขบ่ชใใใจใๆใ้ฃใใ้จๅใๅฎไบใใพใ๏ผ
ใใใงใจใใใใใพใ - ๅฎไบใใไฝๆฅญใฏ็ฐกๅใชใใฎใซใชใใฏใใงใ ๐ใ
**8. ๅฟ
่ฆใชใในใฆใฎใขใใซใในใใ่ฟฝๅ **
ใใฎๆ็นใงใๆฐใใใขใใซใๆญฃๅธธใซ่ฟฝๅ ใใใพใใใ
ใใ ใใใขใใซใใพใ ๅฟ
่ฆใช่จญ่จใซๅฎๅ
จใซๆบๆ ใใฆใใชใๅฏ่ฝๆงใ้ๅธธใซ้ซใใงใใ
๐ค Transformersใจๅฎๅ
จใซไบๆๆงใใใใใจใ็ขบ่ชใใใใใซใใในใฆใฎไธ่ฌ็ใชใในใใใในใใๅฟ
่ฆใใใใพใใ
Cookiecutterใฏใใใใใขใใซ็จใฎใในใใใกใคใซใ่ชๅ็ใซ่ฟฝๅ ใใฆใใใฏใใงใใใใใๅใใใฃใฌใฏใใชใซ`tests/models/brand_new_bert/test_modeling_brand_new_bert.py`ใจใใฆๅญๅจใใพใใ
ใใฎใในใใใกใคใซใๅฎ่กใใฆใใในใฆใฎไธ่ฌ็ใชใในใใใในใใใใจใ็ขบ่ชใใฆใใ ใใ๏ผ
```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```
ใในใฆใฎไธ่ฌ็ใชใในใใไฟฎๆญฃใใใใไปๅบฆใฏๅฎ่กใใใในใฆใฎ็ด ๆดใใใไฝๆฅญใ้ฉๅใซใในใใใใฆใใใใจใ็ขบ่ชใใใใจใ้ๅธธใซ้่ฆใงใใใใใซใใใ
- a) ใณใใฅใใใฃใฏ*brand_new_bert*ใฎ็นๅฎใฎใในใใ่ฆใใใจใงใใใชใใฎไฝๆฅญใ็ฐกๅใซ็่งฃใงใใพใใ
- b) ใขใใซใธใฎๅฐๆฅใฎๅคๆดใใขใใซใฎ้่ฆใชๆฉ่ฝใๅฃใใชใใใใซใใใใจใใงใใพใใ
ใพใใ็ตฑๅใในใใ่ฟฝๅ ใใๅฟ
่ฆใใใใพใใใใใใฎ็ตฑๅใในใใฏใๅบๆฌ็ใซใฏใใใใฐในใฏใชใใใจๅใใใจใ่กใใพใใใใใใฎใขใใซใในใใฎใใณใใฌใผใใฏCookiecutterใซใใฃใฆๆขใซ่ฟฝๅ ใใใฆใใใใBrandNewBertModelIntegrationTestsใใจๅผใฐใใฆใใพใใใใฎใในใใ่จๅ
ฅใใใ ใใงใใใใใใฎใในใใๅๆ ผใใฆใใใใจใ็ขบ่ชใใใซใฏใๆฌกใฎใณใใณใใๅฎ่กใใพใใ
```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```
<Tip>
Windowsใไฝฟ็จใใฆใใๅ ดๅใ`RUN_SLOW=1`ใ`SET RUN_SLOW=1`ใซ็ฝฎใๆใใฆใใ ใใใ
</Tip>
ๆฌกใซใ*brand_new_bert*ใซ็นๆใฎใในใฆใฎ็นๅพดใฏใๅฅๅใฎใในใๅ
ใง่ฟฝๅ ใใใในใใงใใ
`BrandNewBertModelTester`/`BrandNewBertModelTest`ใฎไธใซใใใฎ้จๅใฏใใๅฟใใใใพใใใ2ใคใฎ็นใง้ๅธธใซๅฝน็ซใกใพใ๏ผ
- ใขใใซใฎ่ฟฝๅ ไธญใซ็ฒๅพใใ็ฅ่ญใใณใใฅใใใฃใซไผใใ*brand_new_bert*ใฎ็นๅฅใชๆฉ่ฝใใฉใฎใใใซๅไฝใใใใ็คบใใใจใซใใฃใฆใ็ฅ่ญใฎๅ
ฑๆใๆฏๆดใใพใใ
- ๅฐๆฅใฎ่ฒข็ฎ่
ใฏใใใใใฎ็นๅฅใชใในใใๅฎ่กใใใใจใงใขใใซใธใฎๅคๆดใ่ฟ
้ใซใในใใงใใพใใ
**9. ใใผใฏใใคใถใฎๅฎ่ฃ
**

ๆฌกใซใ*brand_new_bert*ใฎใใผใฏใใคใถใ่ฟฝๅ ใใๅฟ
่ฆใใใใพใใ้ๅธธใใใผใฏใใคใถใฏ๐ค Transformersใฎๆขๅญใฎใใผใฏใใคใถใจๅ็ญใ้ๅธธใซไผผใฆใใพใใ
ใใผใฏใใคใถใๆญฃใใๅไฝใใใใจใ็ขบ่ชใใใใใซใฏใใพใใๅ
ใฎใชใใธใใชๅ
ใงๆๅญๅใๅ
ฅๅใใ`input_ids`ใ่ฟใในใฏใชใใใไฝๆใใใใจใใๅงใใใพใใ
ใใฎในใฏใชใใใฏใๆฌกใฎใใใซ่ฆใใใใใใใพใใ๏ผ็ไผผใณใผใใง็คบใใพใ๏ผ๏ผ
ใใฎในใฏใชใใใฏใๆฌกใฎใใใซ่ฆใใใใใใใพใใ๏ผ็ไผผใณใผใใง็คบใใพใ๏ผ๏ผ
```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```
ใชใชใธใใซใฎใชใใธใใชใ่ฉณใใ่ชฟๆปใใๆญฃใใใใผใฏใใคใถใฎ้ขๆฐใ่ฆใคใใๅฟ
่ฆใใใใใใใใพใใใ
ใพใใฏใใชใชใธใใซใฎใชใใธใใชใฎใฏใญใผใณใๅคๆดใใฆใ`input_ids`ใ ใใๅบๅใใใใใซใใๅฟ
่ฆใใใใใใใใพใใใ
ใชใชใธใใซใฎใชใใธใใชใไฝฟ็จใใๆฉ่ฝ็ใชใใผใฏใใคใผใผใทใงใณในใฏใชใใใไฝๆใใๅพใ
๐ค Transformersๅใใฎ้กไผผใใในใฏใชใใใไฝๆใใๅฟ
่ฆใใใใพใใ
ไปฅไธใฎใใใซ่ฆใใในใใงใ๏ผ
ไปฅไธใฎใใใซ่ฆใใในใใงใ๏ผ
```python
from transformers import BrandNewBertTokenizer
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
input_ids = tokenizer(input_str).input_ids
```
`input_ids`ใๅใๅคใ็ๆใใๅ ดๅใๆ็ตในใใใใจใใฆใใผใฏใใคใถใฎใในใใใกใคใซใ่ฟฝๅ ใใในใใงใใ
*brand_new_bert*ใฎใขใใชใณใฐใในใใใกใคใซใจๅๆงใซใ*brand_new_bert*ใฎใใผใฏใใคใผใผใทใงใณใในใใใกใคใซใซใฏใใใใคใใฎใใผใใณใผใใใใ็ตฑๅใในใใๅซใพใใในใใงใใ
**10. ใจใณใใใผใจใณใ็ตฑๅใในใใฎๅฎ่ก**
ใใผใฏใใคใถใ่ฟฝๅ ใใๅพใ`๐ค Transformers`ๅ
ใฎ`tests/models/brand_new_bert/test_modeling_brand_new_bert.py`ใซใใขใใซใจใใผใฏใใคใถใฎไธกๆนใไฝฟ็จใใใใใคใใฎใจใณใใใผใจใณใ็ตฑๅใในใใ่ฟฝๅ ใใๅฟ
่ฆใใใใพใใ
ใใฎใใใชใในใใฏใ๐ค Transformersใฎๅฎ่ฃ
ใๆๅพ
ใฉใใใซๆฉ่ฝใใใใจใๆๅณใฎใใใใญในใๅฏพใใญในใใฎใตใณใใซใง็คบใในใใงใใๆ็จใชใใญในใๅฏพใใญในใใฎใตใณใใซใซใฏใใฝใผในใใใฟใผใฒใใใธใฎ็ฟป่จณใใขใ่จไบใใ่ฆ็ดใธใฎใใขใ่ณชๅใใๅ็ญใธใฎใใขใชใฉใๅซใพใใพใใ
ๆๅณใฎใใใใญในใๅฏพใใญในใใฎใตใณใใซใๅซใพใใพใใๆ็จใชใใญในใๅฏพใใญในใใฎใตใณใใซใซใฏใใฝใผในใใใฟใผใฒใใใธใฎ็ฟป่จณใใขใ่จไบใใ่ฆ็ดใธใฎใใขใ่ณชๅใใๅ็ญใธใฎใใขใชใฉใๅซใพใใพใใ
ใใผใใใใใใงใใฏใใคใณใใใใฆใณในใใชใผใ ใฟในใฏใงใใกใคใณใใฅใผใใณใฐใใใฆใใชใๅ ดๅใใขใใซใฎใในใใซไพๅญใใใ ใใงๅๅใงใใ
ใขใใซใๅฎๅ
จใซๆฉ่ฝใใฆใใใใจใ็ขบ่ชใใใใใซใใในใฆใฎใในใใGPUไธใงๅฎ่กใใใใจใใๅงใใใพใใ
ใขใใซใฎๅ
้จใใณใฝใซใซ`.to(self.device)`ในใใผใใกใณใใ่ฟฝๅ ใใใฎใๅฟใใๅฏ่ฝๆงใใใใใใใใฎใใใชใในใใงใฏใจใฉใผใ่กจ็คบใใใใใจใใใใพใใ
GPUใซใขใฏใปในใงใใชใๅ ดๅใHugging Faceใใผใ ใไปฃใใใซใใใใฎใในใใๅฎ่กใงใใพใใ
**11. ใใญใฅใกใณใใฎ่ฟฝๅ **
ใใใงใ*brand_new_bert*ใฎๅฟ
่ฆใชใในใฆใฎๆฉ่ฝใ่ฟฝๅ ใใใพใใ - ใปใผๅฎไบใงใ๏ผๆฎใใฎ่ฟฝๅ ใในใใใจใฏใ่ฏใใใญใฅใกใณใใจใใญใฅใกใณใใใผใธใงใใ
Cookiecutterใ`docs/source/model_doc/brand_new_bert.md`ใจใใใใณใใฌใผใใใกใคใซใ่ฟฝๅ ใใฆใใใฏใใงใใใใ่จๅ
ฅใใๅฟ
่ฆใใใใพใใ
ใขใใซใฎใฆใผใถใผใฏ้ๅธธใใขใใซใไฝฟ็จใใๅใซใพใใใฎใใผใธใ่ฆใพใใใใใใฃใฆใใใญใฅใกใณใใผใทใงใณใฏ็่งฃใใใใ็ฐกๆฝใงใใๅฟ
่ฆใใใใพใใ
ใขใใซใฎไฝฟ็จๆนๆณใ็คบใใใใซใใใคใใฎ*Tips*ใ่ฟฝๅ ใใใใจใฏใณใใฅใใใฃใซใจใฃใฆ้ๅธธใซๅฝน็ซใกใพใใใใญใฅใกใณใใผใทใงใณใซ้ขใใฆใฏใHugging Faceใใผใ ใซๅใๅใใใใใจใใใใใใชใใงใใ ใใใ
ๆฌกใซใ`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`ใซ่ฟฝๅ ใใใใใญใฅใกใณใใผใทใงใณๆๅญๅใๆญฃใใใใจใใใใณใในใฆใฎๅฟ
่ฆใชๅ
ฅๅใใใณๅบๅใๅซใใงใใใใจใ็ขบ่ชใใฆใใ ใใใ
ใใญใฅใกใณใใผใทใงใณใฎๆธใๆนใจใใญใฅใกใณใใผใทใงใณๆๅญๅใฎใใฉใผใใใใซใคใใฆ่ฉณ็ดฐใชใฌใคใใ[ใใกใ](writing-documentation)ใซใใใพใใ
ใใญใฅใกใณใใผใทใงใณใฏ้ๅธธใใณใใฅใใใฃใจใขใใซใฎๆๅใฎๆฅ่งฆ็นใงใใใใใใณใผใใจๅใใใใๆณจๆๆทฑใๆฑใในใใงใใใใจใๅธธใซๅฟต้ ญใซ็ฝฎใใฆใใ ใใใ
**ใณใผใใฎใชใใกใฏใฟใชใณใฐ**
็ด ๆดใใใใใใใง*brand_new_bert*ใซๅฟ
่ฆใชใในใฆใฎใณใผใใ่ฟฝๅ ใใใพใใใ
ใใฎๆ็นใงใๆฌกใฎใใใชใใใณใทใฃใซใชใณใผใในใฟใคใซใฎ่ชคใใ่จๆญฃใใใใใซไปฅไธใๅฎ่กใใๅฟ
่ฆใใใใพใ๏ผ
```bash
make style
```
ใใชใใฎใณใผใใฃใณใฐในใฟใคใซใๅ่ณชใใงใใฏใใในใใใใจใ็ขบ่ชใใฆใใ ใใ:
```bash
make quality
```
๐ค Transformersใฎ้ๅธธใซๅณๆ ผใชใใถใคใณใในใใซใฏใใพใ ๅๆ ผใใฆใใชใๅฏ่ฝๆงใใใใใใคใใฎไปใฎใในใใๅญๅจใใใใใใใพใใใ
ใใใฏใใใญใฅใกใณใๆๅญๅใซๆ
ๅ ฑใไธ่ถณใใฆใใใใๅๅใ้้ใฃใฆใใใใจใๅๅ ใงใใใใจใๅคใใงใใHugging Faceใใผใ ใฏใใใใง่ฉฐใพใฃใฆใใๅ ดๅใซใฏๅฟ
ใๅฉใใฆใใใใงใใใใ
ๆๅพใซใใณใผใใๆญฃใใๆฉ่ฝใใใใจใ็ขบ่ชใใๅพใใณใผใใใชใใกใฏใฟใชใณใฐใใใฎใฏๅธธใซ่ฏใใขใคใใขใงใใ
ใในใฆใฎใในใใใในใใไปใ่ฟฝๅ ใใใณใผใใๅๅบฆ็ขบ่ชใใฆใชใใกใฏใฟใชใณใฐใ่กใใฎใฏ่ฏใใฟใคใใณใฐใงใใ
ใใใงใณใผใใฃใณใฐใฎ้จๅใฏๅฎไบใใพใใใใใใงใจใใใใใพใ๏ผ ๐ ใใชใใฏ็ด ๆดใใใใงใ๏ผ ๐
**12. ใขใใซใใขใใซใใใซใขใใใญใผใ**
ๆๅพใฎใใผใใงใฏใใในใฆใฎใใงใใฏใใคใณใใใขใใซใใใซๅคๆใใฆใขใใใญใผใใใๅใขใใใญใผใใใใขใใซใใงใใฏใใคใณใใซใขใใซใซใผใใ่ฟฝๅ ใใๅฟ
่ฆใใใใพใใ
ใขใใซใใใฎๆฉ่ฝใซใคใใฆ่ฉณใใใฏใ[Model sharing and uploading Page](model_sharing)ใ่ชญใใง็่งฃใงใใพใใ
ใใใงใฏใ*brand_new_bert*ใฎ่่
็ต็นใฎไธใซใขใใซใใขใใใญใผใใงใใใใใซๅฟ
่ฆใชใขใฏใปในๆจฉใๅๅพใใใใใซใHugging Faceใใผใ ใจๅๅใใๅฟ
่ฆใใใใพใใ
`transformers`ใฎใในใฆใฎใขใใซใซๅญๅจใใ`push_to_hub`ใกใฝใใใฏใใใงใใฏใใคใณใใใใใซใใใทใฅใใ่ฟ
้ใใคๅน็็ใชๆนๆณใงใใ
ไปฅไธใซใๅฐใใฎใณใผใในใใใใใ็คบใใพใ๏ผ
```python
brand_new_bert.push_to_hub("brand_new_bert")
# Uncomment the following line to push to an organization.
# brand_new_bert.push_to_hub("<organization>/brand_new_bert")
```
ๅใใงใใฏใใคใณใใซ้ฉๅใชใขใใซใซใผใใไฝๆใใไพกๅคใใใใพใใใขใใซใซใผใใฏใใใฎ็นๅฎใฎใใงใใฏใใคใณใใฎ็นๆงใใใคใฉใคใใใในใใงใใไพใใฐใใใฎใใงใใฏใใคใณใใฏใฉใฎใใผใฟใปใใใงไบๅๅญฆ็ฟ/ใใกใคใณใใฅใผใใณใฐใใใใใใฉใฎใใใชไธๆตใฟในใฏใงใขใใซใไฝฟ็จใในใใใ็คบใในใใงใใใพใใใขใใซใฎๆญฃใใไฝฟ็จๆนๆณใซ้ขใใใณใผใใๅซใใในใใงใใ
**13.๏ผใชใใทใงใณ๏ผใใผใใใใฏใฎ่ฟฝๅ **
*brand_new_bert*ใๆจ่ซใพใใฏไธๆตใฟในใฏใฎใใกใคใณใใฅใผใใณใฐใซใฉใฎใใใซ่ฉณ็ดฐใซไฝฟ็จใงใใใใ็คบใใใผใใใใฏใ่ฟฝๅ ใใใใจใฏ้ๅธธใซๅฝน็ซใกใพใใใใใฏใใชใใฎPRใใใผใธใใใใใซๅฟ
้ ใงใฏใใใพใใใใใณใใฅใใใฃใซใจใฃใฆ้ๅธธใซๆ็จใงใใ
**14. ๅฎๆใใPRใฎๆๅบ**
ใใญใฐใฉใใณใฐใๅฎไบใใใใๆๅพใฎในใใใใซ็งปๅใใPRใใกใคใณใใฉใณใใซใใผใธใใพใใใใ้ๅธธใHugging Faceใใผใ ใฏใใฎๆ็นใงๆขใซใใชใใใตใใผใใใฆใใใฏใใงใใใPRใซ่ฏใ่ชฌๆใ่ฟฝๅ ใใใณใผใใซใณใกใณใใ่ฟฝๅ ใใฆใใฌใใฅใขใผใซ็นๅฎใฎ่จญ่จใฎ้ธๆ่ขใๆๆใใใๅ ดๅใฏใณใกใณใใ่ฟฝๅ ใใใใจใไพกๅคใใใใพใใ
### Share your work!!
ใใใใณใใฅใใใฃใใใใชใใฎไฝๆฅญใซๅฏพใใ่ฉไพกใๅพใๆใๆฅใพใใ๏ผใขใใซใฎ่ฟฝๅ ใๅฎไบใใใใจใฏใTransformersใใใณNLPใณใใฅใใใฃใซใจใฃใฆ้่ฆใช่ฒข็ฎใงใใใใชใใฎใณใผใใจใใผใใใใไบๅๅญฆ็ฟๆธใฟใขใใซใฏใไฝ็พไบบใไฝๅไบบใจใใ้็บ่
ใ็ ็ฉถ่
ใซใใฃใฆ็ขบๅฎใซไฝฟ็จใใใใงใใใใใใชใใฎไปไบใซ่ชใใๆใกใใณใใฅใใใฃใจใใชใใฎๆๆใๅ
ฑๆใใพใใใใ
**ใใชใใฏใณใใฅใใใฃใฎ่ชฐใงใ็ฐกๅใซใขใฏใปในใงใใๅฅใฎใขใใซใไฝๆใใพใใ๏ผ ๐คฏ**
hf_public_repos/transformers/docs/source/ja/pipeline_webserver.md
<!--โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Webใตใผใใผ็จใฎใใคใใฉใคใณใฎไฝฟ็จ
<Tip>
ๆจ่ซใจใณใธใณใฎไฝๆใฏ่ค้ใชใใใใฏใงใใใ"ๆ้ฉใช"ใฝใชใฅใผใทใงใณใฏใใใใๅ้กใฎ้ ๅใซไพๅญใใใงใใใใCPUใพใใฏGPUใไฝฟ็จใใฆใใพใใ๏ผๆไฝใฎใฌใคใใณใทใๆ้ซใฎในใซใผใใใใๅคใใฎใขใใซใฎใตใใผใใใพใใฏ็นๅฎใฎใขใใซใฎ้ซๅบฆใชๆ้ฉๅใๆใใงใใพใใ๏ผ
ใใฎใใใใฏใซๅใ็ตใใใใฎๅคใใฎๆนๆณใใใใ็งใใกใ็ดนไปใใใฎใฏใใใใใๆ้ฉใชใฝใชใฅใผใทใงใณใงใฏใชใใใใใใชใใใๅงใใใใใฎ่ฏใใใใฉใซใใงใใ
</Tip>
้่ฆใชใใจใฏใWebใตใผใใผใฏใชใฏใจในใใๅพ
ๆฉใใๅไฟกใใใใใซๆฑใใทในใใ ใงใใใใใ[ใใผใฟใปใใ](pipeline_tutorial#using-pipelines-on-a-dataset)ใฎใใใซใใคใใฌใผใฟใไฝฟ็จใงใใใใจใงใใ
้ๅธธใWebใตใผใใผใฏไธฆๅๅฆ็๏ผใใซใในใฌใใใ้ๅๆใชใฉ๏ผใใใฆใใใพใใพใชใชใฏใจในใใๅๆใซๅฆ็ใใพใใไธๆนใใใคใใฉใคใณ๏ผใใใณไธปใซใใฎๅบ็คใจใชใใขใใซ๏ผใฏไธฆๅๅฆ็ใซใฏใใพใ้ฉใใฆใใพใใใใใใใฏๅคใใฎRAMใไฝฟ็จใใใใใๅฎ่กไธญใซๅฉ็จๅฏ่ฝใชใชใฝใผในใใในใฆๆไพใใใใ่จ็ฎ้็ดๅใฎใธใงใใงใใๅ ดๅใซๆ้ฉใงใใ
Webใตใผใใผใฏๅไฟกใจ้ไฟกใฎ่ปฝใ่ฒ ่ทใๅฆ็ใใๅฎ้ใฎไฝๆฅญใ1ใคใฎในใฌใใใงๅฆ็ใใใใใซใใพใใใใฎไพใงใฏ`starlette`ใไฝฟ็จใใพใใๅฎ้ใฎใใฌใผใ ใฏใผใฏใฏใใพใ้่ฆใงใฏใใใพใใใใๅฅใฎใใฌใผใ ใฏใผใฏใไฝฟ็จใใฆใใๅ ดๅใฏใๅใๅนๆใๅพใใใใซใณใผใใ่ชฟๆดใพใใฏๅคๆดใใๅฟ
่ฆใใใใใใใใพใใใ
`server.py`ใไฝๆใใฆใใ ใใ๏ผ
```py
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from transformers import pipeline
import asyncio
async def homepage(request):
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)


async def server_loop(q):
    pipe = pipeline(model="bert-base-uncased")
    while True:
        (string, response_q) = await q.get()
        out = pipe(string)
        await response_q.put(out)


app = Starlette(
    routes=[
        Route("/", homepage, methods=["POST"]),
    ],
)


@app.on_event("startup")
async def startup_event():
    q = asyncio.Queue()
    app.model_queue = q
    asyncio.create_task(server_loop(q))
```
ใใใใๅงใใใใจใใงใใพใ๏ผ
```bash
uvicorn server:app
```
ใใใฆใๆฌกใฎใใใซใฏใจใชใงใใพใ๏ผ
```bash
curl -X POST -d "test [MASK]" http://localhost:8000/
#[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...]
```
ใใใฆใใใใงใฆใงใใตใผใใผใไฝๆใใๆนๆณใฎ่ฏใใขใคใใขใๆใฃใฆใใพใ๏ผ
ๆฌๅฝใซ้่ฆใชใฎใฏใใขใใซใ**ไธๅบฆใ ใ**ใญใผใใใใใจใงใใใใใซใใใใฆใงใใตใผใใผไธใซใขใใซใฎใณใใผใใชใใใใไธๅฟ
่ฆใชRAMใไฝฟ็จใใใชใใชใใพใใ
ใใฎๅพใใญใฅใผใคใณใฐใกใซใใบใ ใไฝฟ็จใใฆใๅ็ใใใๅฆ็ใ่กใใชใฉใใใใคใใฎใขใคใใ ใ่็ฉใใฆใใๆจ่ซใ่กใใชใฉใ้ซๅบฆใชๅฆ็ใ่กใใใจใใงใใพใ๏ผ
<Tip warning={true}>
ไปฅไธใฎใณใผใใตใณใใซใฏใๅฏ่ชญๆงใฎใใใซๆฌไผผใณใผใใฎใใใซๆธใใใฆใใพใใใทในใใ ใชใฝใผในใซๅ็็ใใฉใใใ็ขบ่ชใใใซๅฎ่กใใชใใงใใ ใใ๏ผ
</Tip>
```py
(string, rq) = await q.get()
strings = []
queues = []
strings.append(string)
queues.append(rq)
while True:
    try:
        (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001)  # 1ms
    except asyncio.exceptions.TimeoutError:
        break
    strings.append(string)
    queues.append(rq)
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
    await rq.put(out)
```
ใพใ็ฌฌไธใซใ้ๅธธใฏใใพใ่ฏใใขใคใใขใงใฏใชใใใใใตใคใบใฎๅถ้ใใใใพใใใๆฌกใซใใฟใคใ ใขใฆใใฏใญใฅใผใฎๅๅพใใจใซใชใปใใใใใใใใๆจ่ซใๅฎ่กใใๅใซ1msไปฅไธๅพ
ใคๅฏ่ฝๆงใใใใพใ๏ผๆๅใฎใชใฏใจในใใฎ้
ๅปถใซ1msๅใฎ้
ใใ็ใใพใ๏ผใ
1msใฎ็ท ใๅใใ1ๅใ ใๆใคใฎใ่ฏใใงใใใใ
ใใใฏใใญใฅใผใซไฝใใชใๅ ดๅใงใๅธธใซ1msๅพ
ๆฉใใพใใใใญใฅใผใซไฝใใชใๅ ดๅใซๆจ่ซใ้ๅงใใใๅ ดๅใฏ้ฉใใฆใใชใใใใใใพใใใใใ ใใใใใๅฆ็ใๆฌๅฝใซ้่ฆใชๅ ดๅใซใฏๆๅณใใใใใใใใพใใใๅๅบฆใ1ใคใฎๆ้ฉใช่งฃๆฑบ็ญใฏๅญๅจใใพใใใ
## Few things you might want to consider
### Error checking
ๆฌ็ช็ฐๅขใงใฏๅคใใฎๅ้กใ็บ็ใใๅฏ่ฝๆงใใใใพใ๏ผใกใขใชไธ่ถณใในใใผในไธ่ถณใใขใใซใฎ่ชญใฟ่พผใฟใๅคฑๆใใใใใใใพใใใใฏใจใชใ่ชคใฃใฆใใใใใใใพใใใใฏใจใชใๆญฃใใๅ ดๅใงใใขใใซใฎๆงๆใจใฉใผใฎใใใซๅฎ่กใซๅคฑๆใใใใใใใพใใใชใฉใ
ไธ่ฌ็ใซใฏใใตใผใใผใใจใฉใผใใฆใผใถใผใซๅบๅใใใจ่ฏใใใใใใใใฎใจใฉใผใ่กจ็คบใใใใใฎๅคใใฎ`try..except`ในใใผใใกใณใใ่ฟฝๅ ใใใใจใฏ่ฏใใขใคใใขใงใใใใ ใใใปใญใฅใชใใฃใณใณใใญในใใซๅฟใใฆใใใใฎใจใฉใผใใในใฆ่กจ็คบใใใใจใฏใปใญใฅใชใใฃใชในใฏใซใชใๅฏ่ฝๆงใใใใใจใซๆณจๆใใฆใใ ใใใ
### Circuit breaking
Webใตใผใใผใฏ้ๅธธใ้่ฒ ่ทๆใซๆญฃใใใจใฉใผใ่ฟใๆนใ่ฏใใงใใใฏใจใชใ็กๆ้ใซๅพ
ใคไปฃใใใซ้ฉๅใชใจใฉใผใ่ฟใใพใใ้ทๆ้ๅพ
ใคไปฃใใใซ503ใจใฉใผใ่ฟใใใ้ทๆ้ๅพ
ใฃใฆใใ504ใจใฉใผใ่ฟใใใงใใ
ๆๆกใใใใณใผใใงใฏๅไธใฎใญใฅใผใใใใใใใญใฅใผใตใคใบใ่ฆใใใจใฏใWebใตใผใใผใ่ฒ ่ทใซ่ใใๅใซใจใฉใผใ่ฟใใใใฎๅบๆฌ็ใชๆนๆณใงใใ
### Blocking the main thread
็พๅจใPyTorchใฏ้ๅๆใ่ช่ญใใฆใใชใใใใ่จ็ฎใฏใกใคใณในใฌใใใใใญใใฏใใพใใใคใพใใPyTorchใ็ฌ่ชใฎในใฌใใ/ใใญใปในใงๅฎ่กใใใใใใซใใใจ่ฏใใงใใใใๆๆกใใใใณใผใใฏใในใฌใใใจ้ๅๆใจใญใฅใผใใใพใ้ฃๆบใใชใใใใใใใฏ่กใใใฆใใพใใใใๆ็ต็ใซใฏๅใใใจใ่กใใพใใ
ใใใฏใๅไธใฎใขใคใใ ใฎๆจ่ซใ้ทใๅ ดๅ๏ผ>1็ง๏ผใซ้่ฆใงใใใใฎๅ ดๅใๆจ่ซไธญใซใในใฆใฎใฏใจใชใ1็งๅพ
ใใชใใใฐใชใใชใใใจใๆๅณใใพใใ
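ใใญใใญใณใฐใใๆจ่ซใๅฅในใฌใใใซ้ใใไธ่ฌ็ใชใใฟใผใณใฎ1ใคใ`loop.run_in_executor`ใงใใไปฅไธใฏ้ใ่จ็ฎใใใใผ้ขๆฐใง็ฝฎใๆใใไปฎใฎในใฑใใใงใใใๅฎ้ใฎใใคใใฉใคใณๅผใณๅบใใงใฏใใใพใใ๏ผ

```python
import asyncio
import time


def blocking_inference(x):
    # ๅฎ้ใซใฏใใใง pipe(x) ใชใฉใฎ้ใ่จ็ฎใ่กใ๏ผใใใงใฏใใใผ๏ผ
    time.sleep(0.01)
    return x.upper()


async def main():
    loop = asyncio.get_running_loop()
    # ใใใฉใซใใฎในใฌใใใใผใซใงๅฎ่กใใใคใใณใใซใผใใใใญใใฏใใชใ
    result = await loop.run_in_executor(None, blocking_inference, "hello")
    print(result)


asyncio.run(main())
```

PyTorchใฎGIL่งฃๆพใฎๆๅใใใญใปใน้ๅๆฃใชใฉ่ฉณ็ดฐใฏใฏใผใฏใญใผใใซไพๅญใใใใใๅฅใใญใปในใงใฎๅฎ่ก๏ผ`ProcessPoolExecutor`ใชใฉ๏ผใๆค่จใซๅคใใพใใ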
### Dynamic batching
ไธ่ฌ็ใซใใใใๅฆ็ใฏ1ๅใฎใขใคใใ ใ1ๅๆธกใใใใๆนๅใใใใใจใฏๅฟ
ใใใใใใพใใ๏ผ่ฉณ็ดฐใฏ[ใใใๅฆ็ใฎ่ฉณ็ดฐ](./main_classes/pipelines#pipeline-batching)ใๅ็
ง๏ผใใใใใๆญฃใใ่จญๅฎใงไฝฟ็จใใใจ้ๅธธใซๅนๆ็ใงใใAPIใงใฏใใใฉใซใใงๅ็ใใใๅฆ็ใฏ่กใใใพใใ๏ผ้
ๅปถใฎๆฉไผใๅคใใใพใ๏ผใใใใใ้ๅธธใซๅคง่ฆๆจกใชใขใใซใงใใBLOOMๆจ่ซใฎๅ ดๅใๅ็ใใใๅฆ็ใฏ**้่ฆ**ใงใใใใใซใใใใในใฆใฎใฆใผใถใผใซใจใฃใฆใพใจใใชใจใฏในใใชใจใณในใๆไพใงใใพใใ
hf_public_repos/transformers/docs/source/ja/perf_train_gpu_many.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Training on Multiple GPUs
ๅไธใฎGPUใงใฎใใฌใผใใณใฐใ้
ใใใๅ ดๅใใใขใใซใฎ้ใฟใๅไธใฎGPUใฎใกใขใชใซๅใพใใชใๅ ดๅใ่คๆฐใฎGPUใไฝฟ็จใใใปใใใขใใใๅฟ
่ฆใจใชใใพใใๅไธใฎGPUใใ่คๆฐใฎGPUใธใฎๅใๆฟใใซใฏใใฏใผใฏใญใผใใๅๆฃใใใใใฎใใ็จฎใฎไธฆๅๅฆ็ใๅฟ
่ฆใงใใใใผใฟใใใณใฝใซใใพใใฏใใคใใฉใคใณใฎไธฆๅๅฆ็ใชใฉใใใพใใพใชไธฆๅๅฆ็ๆ่กใใใใพใใใใ ใใใในใฆใซ้ฉใใไธใคใฎ่งฃๆฑบ็ญใฏๅญๅจใใใๆ้ฉใช่จญๅฎใฏไฝฟ็จใใใใผใใฆใงใขใซไพๅญใใพใใใใฎ่จไบใฏใใใใใไปใฎใใฌใผใ ใฏใผใฏใซใ้ฉ็จใใใไธป่ฆใชๆฆๅฟตใซ็ฆ็นใๅฝใฆใคใคใPyTorchใใผในใฎๅฎ่ฃ
ใซ็ฆ็นใๅฝใฆใฆใใพใใ
<Tip>
**ๆณจๆ**: [ๅไธGPUใปใฏใทใงใณ](perf_train_gpu_one) ใง็ดนไปใใใๅคใใฎๆฆ็ฅ๏ผๆททๅ็ฒพๅบฆใใฌใผใใณใฐใๅพ้
่็ฉใชใฉ๏ผใฏไธ่ฌ็ใงใใใใขใใซใฎใใฌใผใใณใฐใซไธ่ฌ็ใซ้ฉ็จใใใพใใใใใใฃใฆใใใซใGPUใCPUใใฌใผใใณใฐใชใฉใฎๆฌกใฎใปใฏใทใงใณใซๅ
ฅใๅใซใใใใ็ขบ่ชใใฆใใ ใใใ
</Tip>
ใพใใใใพใใพใช1Dไธฆๅๅฆ็ๆ่กใจใใฎๅฉ็นใใใณๆฌ ็นใซใคใใฆ่ฉณใใ่ชฌๆใใใใใใ2Dใใใณ3Dไธฆๅๅฆ็ใซ็ตใฟๅใใใฆใใใซ้ซ้ใชใใฌใผใใณใฐใๅฎ็พใใใใๅคงใใชใขใใซใใตใใผใใใๆนๆณใๆค่จใใพใใใใพใใพใชไปใฎๅผทๅใชไปฃๆฟๆๆณใ็ดนไปใใใพใใ
## Concepts

The following is a brief description of the main concepts that will be described in more detail later in this document.

1. **DataParallel (DP)** - the same setup is replicated multiple times, and each replica is fed a slice of the data. The processing is done in parallel, and all setups are synchronized at the end of each training step.
2. **TensorParallel (TP)** - each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. During processing each shard gets processed separately and in parallel on different GPUs, and the results are synced at the end of the step. This is what one may call horizontal parallelism, as the splitting happens on a horizontal level.
3. **PipelineParallel (PP)** - the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes different stages of the pipeline in parallel, working on a small chunk of the batch.
4. **Zero Redundancy Optimizer (ZeRO)** - also performs sharding of the tensors, somewhat similar to TP, except the whole tensor gets reconstructed in time for a forward or backward computation, so the model doesn't need to be modified. It also supports various offloading techniques to compensate for limited GPU memory.
5. **Sharded DDP** - another name for the foundational ZeRO concept as used by various ZeRO implementations.

Before diving deeper into the specifics of each concept, let's look at the rough decision process when training large models on a large infrastructure.
## Scalability Strategy

**⇨ Single Node / Multi-GPU**

* Model fits onto a single GPU:

    1. DDP - Distributed DataParallel
    2. ZeRO - may or may not be faster depending on the situation and configuration used

* Model doesn't fit onto a single GPU:

    1. PP
    2. ZeRO
    3. TP

    With very fast intra-node connectivity (such as NVLINK or NVSwitch) all three should be mostly on par; without it, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment to find the winner on your particular setup.

    TP is almost always used within a single node. That is, TP size <= GPUs per node.

* Largest layer doesn't fit onto a single GPU:

    1. If not using ZeRO - you must use TP, as PP alone won't be able to fit it.
    2. With ZeRO, see the same "Single GPU" entry above

**⇨ Multi-Node / Multi-GPU**

* When you have fast inter-node connectivity:

    1. ZeRO - as it requires close to no modifications to the model
    2. PP+TP+DP - less communication, but requires massive changes to the model

* When you have slow inter-node connectivity and are still low on GPU memory:

    1. DP+PP+TP+ZeRO-1
## Data Parallelism

Most users with just 2 GPUs already enjoy the increased training speed provided by `DataParallel` (DP) and `DistributedDataParallel` (DDP), which are built-in features of PyTorch and almost trivial to use. In general, it's recommended to use DDP, as it works for all models, while DP may fail for some of them. [The PyTorch documentation](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html) itself recommends the use of DDP.
### DP vs DDP

`DistributedDataParallel` (DDP) is typically faster than `DataParallel` (DP), but that is not always the case:
* While DP is python threads-based, DDP is multiprocess-based - and as such it has no python threads limitations, such as the GIL (Global Interpreter Lock).
* On the other hand, slow inter-connectivity between the GPU cards could lead to an actually slower outcome with DDP.

Here are the main differences in the inter-GPU communication between the two modes:

[DDP](https://pytorch.org/docs/master/notes/ddp.html):

- At start time, the main process replicates the model from GPU 0 to the rest of the GPUs.
- Then for each batch:
   1. Each GPU directly consumes its mini-batch of data.
   2. During `backward`, once the local gradients are ready, they are averaged across all processes.

[DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html):

For each batch:
   1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU.
   2. The up-to-date model is replicated from GPU 0 to each GPU.
   3. `forward` is run, the output from each GPU is sent to GPU 0, and the loss is computed.
   4. The loss is scattered from GPU 0 to all GPUs, and `backward` is run.
   5. The gradients from each GPU are sent to GPU 0 and averaged.

The only communication DDP performs per batch is sending the gradients, whereas DP performs 5 different data exchanges per batch.
DPใฏใใญใปในๅ
ใงใใผใฟใPythonในใฌใใใไปใใฆใณใใผใใพใใใDDPใฏ[torch.distributed](https://pytorch.org/docs/master/distributed.html)ใไปใใฆใใผใฟใใณใใผใใพใใ
DPใงใฏGPU 0ใฏไปใฎGPUใใใใฏใใใซๅคใใฎไฝๆฅญใ่กใใใใGPUใฎๆชไฝฟ็จ็ใ้ซใใชใใพใใ
DDPใฏ่คๆฐใฎใใทใณ้ใงไฝฟ็จใงใใพใใใDPใฎๅ ดๅใฏใใใงใฏใใใพใใใ
DPใจDDPใฎไปใซใ้ใใใใใพใใใใใฎ่ญฐ่ซใซใฏ้ขไฟใใใพใใใ
ใใใ2ใคใฎใขใผใใๆทฑใ็่งฃใใใๅ ดๅใใใฎ[่จไบ](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/)ใๅผทใใๅงใใใพใใ็ด ๆดใใใใใคใขใฐใฉใ ใๅซใฟใใใพใใพใชใใผใใฆใงใขใงใฎ่คๆฐใฎใใณใใใผใฏใจใใญใใกใคใฉใฎๅบๅใ็คบใใ็ฅใฃใฆใใๅฟ
่ฆใใใใในใฆใฎๅพฎๅฆใชใใฅใขใณในใ่ชฌๆใใฆใใพใใ
ๅฎ้ใฎใใณใใใผใฏใ่ฆใฆใฟใพใใใ๏ผ
| Type | NVlink | Time |
| :----- | ----- | ---: |
| 2:DP | Y | 110s |
| 2:DDP | Y | 101s |
| 2:DDP | N | 131s |
่งฃๆ๏ผ
ใใใงใDPใฏNVlinkใไฝฟ็จใใDDPใซๆฏในใฆ็ด10๏ผ
้
ใใNVlinkใไฝฟ็จใใชใDDPใซๆฏในใฆ็ด15๏ผ
้ซ้ใงใใใใจใ็คบใใใฆใใพใใ
ๅฎ้ใฎ้ใใฏใๅGPUใไปใฎGPUใจๅๆใใๅฟ
่ฆใใใใใผใฟใฎ้ใซไพๅญใใพใใๅๆใใใใผใฟใๅคใใปใฉใ้
ใใชใณใฏใๅ่จใฎๅฎ่กๆ้ใ้
ใใใๅฏ่ฝๆงใ้ซใใชใใพใใ
ไปฅไธใฏๅฎๅ
จใชใใณใใใผใฏใณใผใใจๅบๅใงใ๏ผ
`NCCL_P2P_DISABLE=1`ใไฝฟ็จใใฆใๅฏพๅฟใใใใณใใใผใฏใงNVLinkๆฉ่ฝใ็กๅนใซใใพใใใ
```
# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
# DDP w/ NVlink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVlink
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
ใใผใใฆใงใข: 2x TITAN RTXใๅ24GB + 2ใคใฎNVLink๏ผ`nvidia-smi topo -m`ใง `NV2`๏ผ
ใฝใใใฆใงใข: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
## ZeRO Data Parallelism

ZeRO-powered data parallelism (ZeRO-DP) is described in the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/).

ใใใฏ็่งฃใ้ฃใใใใใใใพใใใใๅฎ้ใซใฏใใฎๆฆๅฟตใฏ้ๅธธใซใทใณใใซใงใใใใใฏ้ๅธธใฎ`DataParallel`๏ผDP๏ผใงใใใๅฎๅ
จใชใขใใซใใฉใกใผใฟใๅพ้
ใใใใณใชใใใฃใใคใถใฎ็ถๆ
ใ่ค่ฃฝใใไปฃใใใซใๅGPUใฏใใใใใฎในใฉใคในใฎใฟใไฟๅญใใพใใใใใฆใๅฎ่กๆใซใ็นๅฎใฎใฌใคใคใผใซๅฟ
่ฆใชๅฎๅ
จใชใฌใคใคใผใใฉใกใผใฟใๅฟ
่ฆใชๅ ดๅใใในใฆใฎGPUใๅๆใใฆใใไบใใซไธ่ถณใใฆใใ้จๅใๆไพใใพใใใใใใในใฆใงใใ
3ใคใฎใฌใคใคใผใใใชใๅ็ดใชใขใใซใ่ใใฆใฟใพใใใใๅใฌใคใคใผใซใฏ3ใคใฎใใฉใกใผใฟใใใใพใ๏ผ
```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```
ใฌใคใคใผLaใซใฏใ้ใฟa0ใa1ใใใใณa2ใใใใพใใ
3ใคใฎGPUใใใๅ ดๅใSharded DDP๏ผ= Zero-DP๏ผใฏใขใใซใ3ใคใฎGPUใซๆฌกใฎใใใซๅๅฒใใพใ๏ผ
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0
GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1
GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
ใใใฏใๅ
ธๅ็ใชใใฃใผใใใฅใผใฉใซใใใใฏใผใฏ๏ผDNN๏ผใฎใใคใขใฐใฉใ ใๆณๅใใใจใใใณใฝใซไธฆๅๅฆ็ใจๅๆงใฎๆฐดๅนณในใฉใคในใงใใใใใชใใฎใงใใๅ็ดในใฉใคในใฏใ็ฐใชใGPUใซๅฎๅ
จใชๅฑคใฐใซใผใใ้
็ฝฎใใๆนๆณใงใใใใใใใใใฏๅใชใๅบ็บ็นใซ้ใใพใใใ
ใใใใใๅGPUใฏ้ๅธธใฎใใผใฟไธฆๅๅฆ็๏ผDP๏ผใจๅๆงใซใ้ๅธธใฎใใใใใใๅใๅใใพใ๏ผ
```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```
ๆๅใซใๅ
ฅๅใใผใฟใฏใฌใคใคใผLaใซ้ฉ็จใใใพใใ
GPU0ใซ็ฆ็นใๅฝใฆใพใใใ๏ผx0ใฏใใใฎๅๅใใในใๅฎ่กใใใใใซa0ใa1ใa2ใฎใใฉใกใผใฟใๅฟ
่ฆใงใใใGPU0ใซใฏa0ใใใใใพใใใGPU1ใใa1ใใGPU2ใใa2ใๅใๅใใใขใใซใฎๅ้จๅใใพใจใใพใใ
ๅๆงใซใGPU1ใฏใใใใใx1ใๅใๅใใa1ใใๆใฃใฆใใพใใใใa0ใจa2ใฎใใฉใกใผใฟใๅฟ
่ฆใงใใใใใใฏGPU0ใจGPU2ใใๅๅพใใพใใ
GPU2ใx2ใๅใๅใใพใใa0ใจa1ใฏGPU0ใจGPU1ใใๅใๅใใa2ใจใจใใซๅฎๅ
จใชใใณใฝใซใๅๆง็ฏใใพใใ
3ใคใฎGPUใฏๅฎๅ
จใชใใณใฝใซใๅๆง็ฏใใๅๅใ่จ็ฎใ่กใใใพใใ
่จ็ฎใๅฎไบใใใจใไธ่ฆใซใชใฃใใใผใฟใฏๅ้คใใใพใใ่จ็ฎไธญใ ใไฝฟ็จใใใๅๆง็ฏใฏไบๅใซใใงใใใไฝฟ็จใใฆๅน็็ใซ่กใใใพใใ
ใใใฆใใใฎใใญใปในๅ
จไฝใใฌใคใคใผLbใๆฌกใซๅๅใใงLcใใใใฆ้ๆนๅใงLc -> Lb -> Laใซๅฏพใใฆ็นฐใ่ฟใใใพใใ
็งใซใจใฃใฆใใใใฏๅน็็ใชใฐใซใผใใงใฎ้ใฟใฎๅๆฃๆฆ็ฅใฎใใใซ่ใใใพใ๏ผ
1. ไบบAใฏใใณใใๆใฃใฆใใพใใ
2. ไบบBใฏในใใผใใๆใฃใฆใใพใใ
3. ไบบCใฏๆงใๆใฃใฆใใพใใ
ไปใๅฝผใใฏๆฏๆฉๆใฃใฆใใใใฎใๅ
ฑๆใใไปใฎไบบใใๆใฃใฆใใชใใใฎใใใใใๆใซใฏๅฒใๅฝใฆใใใใฟใคใใฎใฎใขใ่ฉฐใใฆๆ
ใ็ถใใพใใใใใSharded DDP / Zero DPใงใใ
ใใฎๆฆ็ฅใใๅไบบใ็ฌ่ชใฎใใณใใในใใผใใๆงใๆใฃใฆ้ใฐใชใใใฐใชใใชใใทใณใใซใชๆฆ็ฅใจๆฏ่ผใใฆใฟใฆใใ ใใใใใใPyTorchใฎDataParallel๏ผDPใใใณDDP๏ผใงใใ
ใใฎใใใใฏใฎๆ็ฎใ่ชญใ้ใซใไปฅไธใฎ้ก็พฉ่ชใซๅบไผใใใใใใพใใ๏ผShardedใPartitionedใ
ZeROใใขใใซใฎ้ใฟใๅๅฒใใๆนๆณใซๆณจๆใๆใใจใใใใฏใใณใฝใซใใฉใฌใชใบใ ใจ้ๅธธใซไผผใฆใใใใใซ่ฆใใพใใใใใฏๅพใง่ญฐ่ซใใใๅ็ดใขใใซใใฉใฌใชใบใ ใจใฏ็ฐใชใใๅใฌใคใคใผใฎ้ใฟใใใผใใฃใทใงใณ/ใทใฃใผใใฃใณใฐใใพใใ
Implementations:
- [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/) ZeRO-DP stages 1+2+3
- [`transformers` integration](main_classes/trainer#trainer-integrations)
## Naive Model Parallelism (Vertical) and Pipeline Parallelism

Naive Model Parallelism (MP) is where one spreads groups of model layers across multiple GPUs. The mechanism is relatively simple - switch the desired layers `.to()` the desired devices, and now whenever the data goes in and out of those layers, the data is switched to the same device as the layer, while the rest remains unmodified.

We refer to this as Vertical MP, because if you remember how most models are drawn, we slice the layers vertically. For example, the following diagram shows an 8-layer model:
```
=================== ===================
| 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 |
=================== ===================
gpu0 gpu1
```
ๆใ
ใฏใใขใใซใๅ็ดใซ2ใคใซๅๅฒใใใฌใคใคใผ0ใใ3ใGPU0ใซ้
็ฝฎใใใฌใคใคใผ4ใใ7ใGPU1ใซ้
็ฝฎใใพใใใ
ใใผใฟใใฌใคใคใผ0ใใ1ใ1ใใ2ใ2ใใ3ใซ็งปๅใใ้ใฏ้ๅธธใฎใขใใซใจๅใใงใใใใใใใใผใฟใใฌใคใคใผ3ใใใฌใคใคใผ4ใซ็งปๅใใๅฟ
่ฆใใใๅ ดๅใGPU0ใใGPU1ใธใฎ็งปๅใ็บ็ใใ้ไฟกใฎใชใผใใผใใใใ็บ็ใใพใใๅๅ ใใฆใใGPUใๅใใณใณใใฅใผใใใผใ๏ผไพ๏ผๅใ็ฉ็ใใทใณ๏ผใซใใๅ ดๅใใใฎใณใใผใฏ้ๅธธใซ้ซ้ใงใใใ็ฐใชใใณใณใใฅใผใใใผใ๏ผไพ๏ผ่คๆฐใฎใใทใณ๏ผใซใใๅ ดๅใ้ไฟกใฎใชใผใใผใใใใฏๅคงๅน
ใซๅขๅ ใใๅฏ่ฝๆงใใใใพใใ
ใใฎๅพใใฌใคใคใผ4ใใ5ใ6ใใ7ใพใงใฏ้ๅธธใฎใขใใซใจๅๆงใซๅไฝใใ7็ช็ฎใฎใฌใคใคใผใๅฎไบใใใจใใใผใฟใใใฐใใฐใฌใคใคใผ0ใซๆปใๅฟ
่ฆใใใใพใ๏ผใพใใฏใฉใใซใๆๅพใฎใฌใคใคใผใซ้ไฟกใใพใ๏ผใใใใงๆๅคฑใ่จ็ฎใใใชใใใฃใใคใถใไฝๆฅญใ้ๅงใงใใพใใ
ๅ้ก็น๏ผ
- ไธปใชๆฌ ็นใใใใณใชใใใใใๅ็ดใชใMPใจๅผใถใฎใใฏใ1ใคใ้คใใฆใในใฆใฎGPUใใฉใใช็ฌ้ใงใใขใคใใซ็ถๆ
ใงใใใใจใงใใใใใใฃใฆใ4ใคใฎGPUใไฝฟ็จใใๅ ดๅใๅ็ดใชMPใฏใ1ใคใฎGPUใฎใกใขใชๅฎน้ใ4ๅใซใใใฎใจใปใผๅใใงใใใใใผใใฆใงใขใฎๆฎใใ็ก่ฆใใพใใใใใซใใใผใฟใฎใณใใผใฎใชใผใใผใใใใใใใใจใๅฟใใฆใฏใใใพใใใใใใใฃใฆใ4ๆใฎ6GBใฎใซใผใใฏใใใผใฟใฎใณใใผใฎใชใผใใผใใใใใชใ1ๆใฎ24GBใฎใซใผใใจๅใใตใคใบใๅๅฎนใงใใใงใใใใใๅพ่
ใฏใใฌใผใใณใฐใใใ่ฟ
้ใซๅฎไบใใพใใใใ ใใใใจใใฐ40GBใฎใซใผใใใใใ45GBใฎใขใใซใๅใใๅฟ
่ฆใใใๅ ดๅใๅพ้
ใจใชใใใฃใใคใถใฎ็ถๆ
ใฎใใใซใปใจใใฉๅใใใใจใใงใใพใใใ
- ๅ
ฑๆใฎๅใ่พผใฟใฏใGPU้ใงใณใใผใใๅฟ
่ฆใใใใใใใใพใใใ
ใใคใใฉใคใณไธฆๅๅฆ็๏ผPP๏ผใฏใใปใผๅ็ดใชMPใจๅใใงใใใGPUใใขใคใใซ็ถๆ
ใซใชใๅ้กใ่งฃๆฑบใใๅ
ฅๅใใใใใใคใฏใญใใใใซๅๅฒใใใใคใใฉใคใณใไบบๅทฅ็ใซไฝๆใใใใจใซใใใ็ฐใชใGPUใ่จ็ฎใใญใปในใซๅๆใซๅๅ ใงใใใใใซใใพใใ
ไปฅไธใฏใ[GPipe่ซๆ](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)ใใใฎๅณใงใไธ้จใซใฏๅ็ดใชMPใไธ้จใซใฏPPใ็คบใใใฆใใพใ๏ผ

ใใฎๅณใใใPPใGPUใใขใคใใซ็ถๆ
ใฎ้ ๅใงใใใใใใซใใๅฐใชใๆใคใใจใใใใใพใใใขใคใใซ็ถๆ
ใฎ้จๅใฏใใใใซใใจๅผใฐใใพใใ
ๅณใฎไธกๆนใฎ้จๅใฏใ4ใคใฎGPUใใใคใใฉใคใณใซๅๅ ใใฆใใ4ใฎๆฌกๅ
ใฎไธฆๅๆงใ็คบใใฆใใพใใใคใพใใ4ใคใฎใใคใในใใผใธF0ใF1ใF2ใF3ใฎใใฉใฏใผใใในใใใใ้้ ใฎใใใฏใฏใผใใในB3ใB2ใB1ใB0ใใใใพใใ
PPใฏ่ชฟๆดใใๆฐใใใใคใใผใใฉใกใผใฟใๅฐๅ
ฅใใพใใใใใฏ `chunks` ใงใๅใใใคใในใใผใธใ้ใใฆ้ฃ็ถใใฆ้ไฟกใใใใใผใฟใฎใใฃใณใฏใฎๆฐใๅฎ็พฉใใพใใใใจใใฐใไธใฎๅณใงใฏ `chunks=4` ใ่กจ็คบใใใฆใใพใใGPU0ใฏใใฃใณใฏ0ใ1ใ2ใ3๏ผF0,0ใF0,1ใF0,2ใF0,3๏ผใงๅใใใฉใฏใผใใในใๅฎ่กใใไปใฎGPUใไฝๆฅญใ้ๅงใๅงใใใฎใๅพ
ใฃใฆใใใGPU0ใฏใใฃใณใฏ3ใ2ใ1ใ0๏ผB0,3ใB0,2ใB0,1ใB0,0๏ผใง้้ ใในใๅฎ่กใใพใใ
ๆณจๆใในใใฏใๆฆๅฟต็ใซใฏใใใๅพ้
่็ฉในใใใ๏ผGAS๏ผใจๅใใณใณใปใใใงใใใใจใงใใPyTorchใฏ `chunks` ใไฝฟ็จใใDeepSpeedใฏๅใใใคใใผใใฉใกใผใฟใGASใจๅผใณใพใใ
`chunks` ใฎๅฐๅ
ฅใซใใใPPใฏใใคใฏใญใใใ๏ผMBS๏ผใฎๆฆๅฟตใๅฐๅ
ฅใใพใใDPใฏใฐใญใผใใซใใผใฟใใใใตใคใบใใใใใใใซๅๅฒใใพใใใใใใฃใฆใDPใฎๆฌกๆฐใ4ใงใใฐใญใผใใซใใใใตใคใบใ1024ใฎๅ ดๅใ4ใคใฎใใใใใ๏ผใใใใ256๏ผใซๅๅฒใใใพใ๏ผ1024/4๏ผใใใใฆใ`chunks`๏ผใพใใฏGAS๏ผใฎๆฐใ32ใงใใๅ ดๅใใใคใฏใญใใใใตใคใบใฏ8ใซใชใใพใ๏ผ256/32๏ผใๅใใคใใฉใคใณในใใผใธใฏ1ใคใฎใใคใฏใญใใใใงไฝๆฅญใใพใใ
DP + PPใปใใใขใใใฎใฐใญใผใใซใใใใตใคใบใ่จ็ฎใใใซใฏใ`mbs*chunks*dp_degree`๏ผ`8*32*4=1024`๏ผใ่กใใพใใ
ๅณใซๆปใใพใใใใ
`chunks=1` ใงใใใฐใ้ๅน็ใชๅ็ดใชMPใซใชใใพใใ้ๅธธใซๅคงใใช `chunks` ๅคใไฝฟ็จใใใจใ้ๅธธใซๅฐใใชใใคใฏใญใใใใตใคใบใซใชใใๅน็ใใใพใ้ซใใชใใใใใใพใใใใใใใฃใฆใGPUใฎๅน็็ใชๅฉ็จใๆๅคงๅใใๅคใ่ฆใคใใใใใซๅฎ้จใใๅฟ
่ฆใใใใพใใใใใฏใใใใซใฎใตใคใบใๆๅฐ้ใซใใใใจใซๅฏพๅฟใใใใในใฆใฎๅๅ GPUใซใใใ้ซใไธฆ่กGPUๅฉ็จใๅฏ่ฝใซใใใใใงใใ
2ใคใฎใฝใชใฅใผใทใงใณใฐใซใผใใใใใพใใๅพๆฅใฎใใคใใฉใคใณAPIใฝใชใฅใผใทใงใณใจใใฆใผใถใผใฎใขใใซใๅคงๅน
ใซๅคๆดใใๅฟ
่ฆใใใใใ็พไปฃ็ใชใฝใชใฅใผใทใงใณใงใใ
ๅพๆฅใฎใใคใใฉใคใณAPIใฝใชใฅใผใทใงใณ๏ผ
- PyTorch
- DeepSpeed
- Megatron-LM
็พไปฃ็ใชใฝใชใฅใผใทใงใณ๏ผ
- Varuna
- Sagemaker
ๅพๆฅใฎใใคใใฉใคใณAPIใฝใชใฅใผใทใงใณใฎๅ้ก็น๏ผ
- ใขใใซใใใชใๅคๆดใใๅฟ
่ฆใใใใใใPipelineใฏใขใธใฅใผใซใฎ้ๅธธใฎใใญใผใ`nn.Sequential`ใทใผใฑใณในใซๅๆธใ่พผใๅฟ
่ฆใใใใใขใใซใฎ่จญ่จใๅคๆดใใใใจใๅฟ
่ฆใงใใ
- ็พๅจใPipeline APIใฏ้ๅธธใซๅถ้็ใงใใๆๅใฎใใคใใฉใคใณในใใผใธใซๆธกใใใPythonๅคๆฐใฎใปใใใใใๅ ดๅใๅ้ฟ็ญใ่ฆใคใใๅฟ
่ฆใใใใพใใ็พๅจใใใคใใฉใคใณใคใณใฟใผใใงใผในใงใฏใๅฏไธใฎใใณใฝใซใพใใฏใใณใฝใซใฎใฟใใซใๅ
ฅๅใจๅบๅใจใใฆ่ฆๆฑใใฆใใพใใใใใใฎใใณใฝใซใฏใใใใตใคใบใๆๅใฎๆฌกๅ
ใจใใฆๆใฃใฆใใๅฟ
่ฆใใใใพใใใใคใใฉใคใณใฏใใใใใใใใคใฏใญใใใใซๅๅฒใใพใใๅฏ่ฝใชๆนๅ็นใซใคใใฆใฏใใใกใใฎ่ญฐ่ซใ่กใใใฆใใพใ๏ผhttps://github.com/pytorch/pytorch/pull/50693
- ใใคใในใใผใธใฎใฌใใซใงใฎๆกไปถไปใๅถๅพกใใญใผใฏไธๅฏ่ฝใงใใไพใใฐใT5ใฎใใใชใจใณใณใผใใผใใณใผใใผใขใใซใฏใๆกไปถไปใใจใณใณใผใใผในใใผใธใๅฆ็ใใใใใซ็นๅฅใชๅ้ฟ็ญใๅฟ
่ฆใงใใ
- ๅใฌใคใคใผใ้
็ฝฎใใๅฟ
่ฆใใใใใใ1ใคใฎใขใใซใฎๅบๅใไปใฎใขใใซใฎๅ
ฅๅใซใชใใใใซใใพใใ
VarunaใจSageMakerใจใฎๅฎ้จใฏใพใ ่กใฃใฆใใพใใใใๅฝผใใฎ่ซๆใซใใใฐใไธ่จใง่ฟฐในใๅ้กใฎใชในใใๅ
ๆใใใฆใผใถใผใฎใขใใซใซใฏใฏใใใซๅฐใใชๅคๆดใใๅฟ
่ฆใจใใชใใจๅ ฑๅใใใฆใใพใใ
ๅฎ่ฃ
๏ผ
- [Pytorch](https://pytorch.org/docs/stable/pipeline.html) (initial support in pytorch-1.8, and progressively getting improved in 1.9 and more so in 1.10). Some [examples](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)
- [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation - no API.
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972) - this is a proprietary solution that can only be used on AWS.
- [OSLO](https://github.com/tunib-ai/oslo) - ใใฎๅฎ่ฃ
ใฏใHugging Face Transformersใซๅบใฅใใฆใใพใใ
๐ค Transformersใฎในใใผใฟใน: ใใฎๅท็ญๆ็นใงใฏใใใใใฎใขใใซใๅฎๅ
จใชPP๏ผใใคใใฉใคใณไธฆๅๅฆ็๏ผใใตใใผใใใฆใใพใใใGPT2ใขใใซใจT5ใขใใซใฏๅ็ดใชMP๏ผใขใใซไธฆๅๅฆ็๏ผใตใใผใใๆใฃใฆใใพใใไธปใช้ๅฎณใฏใใขใใซใ`nn.Sequential`ใซๅคๆใงใใใใในใฆใฎๅ
ฅๅใใใณใฝใซใงใใๅฟ
่ฆใใใใใจใงใใ็พๅจใฎใขใใซใซใฏใๅคๆใ้ๅธธใซ่ค้ใซใใๅคใใฎๆฉ่ฝใๅซใพใใฆใใใใใใใๅ้คใใๅฟ
่ฆใใใใพใใ
ไปใฎใขใใญใผใ๏ผ
DeepSpeedใVarunaใใใใณSageMakerใฏใ[ไบคไบใซใใคใใฉใคใณใๅฎ่ก](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html)ใใใณใณใปใใใไฝฟ็จใใฆใใพใใใใใงใฏใใใใฏใฏใผใใในใๅชๅ
ใใใฆใใใซ๏ผใขใคใใซๆ้๏ผใใใใซๆๅฐ้ใซๆใใพใใ
Varunaใฏใๆ้ฉใชในใฑใธใฅใผใซใ็บ่ฆใใใใใซใทใใฅใฌใผใทใงใณใไฝฟ็จใใฆในใฑใธใฅใผใซใใใใซๆนๅใใใใจใใพใใ
OSLOใฏใ`nn.Sequential`ใฎๅคๆใชใใงTransformersใซๅบใฅใใใคใใฉใคใณไธฆๅๅฆ็ใๅฎ่ฃ
ใใฆใใพใใ
## Tensor Parallelism

In Tensor Parallelism, each GPU processes only a slice of a tensor and only aggregates the full tensor for operations that require the whole thing.

In this section we use concepts and diagrams from the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) paper: [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473).

The main building block of any transformer is a fully connected `nn.Linear` followed by a nonlinear activation `GeLU`.

Following the Megatron paper's notation, we can write the dot-product part of it as `Y = GeLU(XA)`, where `X` and `Y` are the input and output vectors, and `A` is the weight matrix.

If we look at the computation in matrix form, it's easy to see how the matrix multiplication can be split between multiple GPUs:

้ใฟ่กๅ`A`ใ`N`ๅใฎGPUใซๅฏพใใฆๅใใจใซๅๅฒใใไธฆๅใง่กๅไน็ฎ`XA_1`ใใ`XA_n`ใๅฎ่กใใใจใ`N`ๅใฎๅบๅใใฏใใซ`Y_1ใY_2ใ...ใY_n`ใๅพใใใใใใใ็ฌ็ซใใฆ`GeLU`ใซไพ็ตฆใงใใพใ๏ผ

ใใฎๅ็ใไฝฟ็จใใฆใๆๅพใพใงๅๆใๅฟ
่ฆใชใใพใพใไปปๆใฎๆทฑใใฎMLPใๆดๆฐใงใใพใใMegatron-LMใฎ่่
ใฏใใฎใใใฎๆ็จใชใคใฉในใใๆไพใใฆใใพใ๏ผ

ใใซใใใใใขใใณใทใงใณใฌใคใคใผใไธฆๅๅใใใใจใฏใใใซ็ฐกๅใงใใใใใใฏๆขใซ่คๆฐใฎ็ฌ็ซใใใใใใๆใฃใฆใใใใใๆฌ่ณช็ใซไธฆๅใงใ๏ผ

็นๅฅใช่ๆ
ฎไบ้
๏ผTPใซใฏ้ๅธธใซ้ซ้ใชใใใใฏใผใฏใๅฟ
่ฆใงใใใใใใใฃใฆ1ใคใฎใใผใใ่ถ
ใใฆTPใๅฎ่กใใชใใใจใใๅงใใใใพใใใๅฎ้ใซใฏใ1ใคใฎใใผใใซ4ใคใฎGPUใใใๅ ดๅใๆๅคงใฎTPๅบฆๆฐใฏ4ใงใใTPๅบฆๆฐ8ใๅฟ
่ฆใชๅ ดๅใฏใๅฐใชใใจใ8ใคใฎGPUใๆใคใใผใใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
ใใฎใปใฏใทใงใณใฏใๅ
ใฎใใ่ฉณ็ดฐใช[TPใฎๆฆ่ฆ](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530)ใซๅบใฅใใฆใใพใใ
by [@anton-l](https://github.com/anton-l)ใ
SageMakerใฏใใใๅน็็ใชๅฆ็ใฎใใใซTPใจDPใ็ตใฟๅใใใฆไฝฟ็จใใพใใ
ไปฃๆฟๅ๏ผ
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)ใฏใใใใใใณใฝใซในใฉใคใทใณใฐใใจๅผใณใพใใ่ฉณ็ดฐใฏ[DeepSpeedใฎ็นๅพด](https://www.deepspeed.ai/training/#model-parallelism)ใใ่ฆงใใ ใใใ
ๅฎ่ฃ
ไพ:
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)ใซใฏใใขใใซๅบๆใฎๅ
้จๅฎ่ฃ
ใใใใพใใ
- [parallelformers](https://github.com/tunib-ai/parallelformers)๏ผ็พๆ็นใงใฏๆจ่ซใฎใฟ๏ผใ
- [SageMaker](https://arxiv.org/abs/2111.05972) - ใใใฏAWSใงใฎใฟไฝฟ็จใงใใใใญใใฉใคใจใฟใชใชใฝใชใฅใผใทใงใณใงใใ
- [OSLO](https://github.com/tunib-ai/oslo)ใซใฏใTransformersใซๅบใฅใใใใณใฝใซไธฆๅๅฎ่ฃ
ใใใใพใใ
๐ค Transformersใฎ็ถๆณ:
- ใณใข: ใพใ ใณใขใซใฏๅฎ่ฃ
ใใใฆใใพใใใ
- ใใ ใใๆจ่ซใๅฟ
่ฆใชๅ ดๅใ[parallelformers](https://github.com/tunib-ai/parallelformers)ใฏใปใจใใฉใฎใขใใซใซๅฏพใใฆใตใใผใใๆไพใใพใใใใใใณใขใซๅฎ่ฃ
ใใใใพใงใใใใไฝฟ็จใงใใพใใใใใฆใใใฌใผใใณใฐใขใผใใใตใใผใใใใใใจใๆๅพ
ใใฆใใพใใ
- Deepspeed-InferenceใงใฏใBERTใGPT-2ใใใใณGPT-NeoใขใใซใCUDAใซใผใใซใใผในใฎ้ซ้ๆจ่ซใขใผใใงใตใใผใใใฆใใพใใ่ฉณ็ดฐใฏ[ใใกใ](https://www.deepspeed.ai/tutorials/inference-tutorial/)ใใ่ฆงใใ ใใใ
## DP+PP

The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates how one can combine DP with PP.

ใใใง้่ฆใชใฎใฏใDPใฉใณใฏ0ใGPU2ใ่ฆใใชใใใDPใฉใณใฏ1ใGPU3ใ่ฆใใชใใใใใจใงใใDPใซใจใฃใฆใๅญๅจใใใฎใฏGPU 0 ใจ 1 ใฎใฟใงใใใใใฎ2ใคใฎGPUใฎใใใซใใผใฟใไพ็ตฆใใพใใGPU0ใฏPPใไฝฟ็จใใฆGPU2ใซไธ้จใฎ่ฒ ่ทใใ็งๅฏ่ฃใซใใชใใญใผใใใGPU1ใๅๆงใซGPU3ใๆฏๆดใซๅผใๅ
ฅใใพใใ
ๅๆฌกๅ
ใซใฏๅฐใชใใจใ2ใคใฎGPUใๅฟ
่ฆใงใใฎใงใใใใงใฏๅฐใชใใจใ4ใคใฎGPUใๅฟ
่ฆใงใใ
ๅฎ่ฃ
ไพ:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972)
- [OSLO](https://github.com/tunib-ai/oslo)
๐ค Transformersใฎ็ถๆณ: ใพใ ๅฎ่ฃ
ใใใฆใใพใใ
## DP+PP+TP

To get an even more efficient training, 3D parallelism is used, where PP is combined with TP and DP. This can be seen in the following diagram.

ใใฎๅณใฏ[3Dใใฉใฌใชใบใ ๏ผๅ
ใใฉใกใผใฟใขใใซใธใฎในใฑใผใชใณใฐ](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)ใจใใใใญใฐๆ็จฟใใๅๅพใใใใใฎใงใใใใใใฎ่ชญใฟ็ฉใงใใ
ๅๆฌกๅ
ใซใฏๅฐใชใใจใ2ใคใฎGPUใๅฟ
่ฆใงใใฎใงใใใใงใฏๅฐใชใใจใ8ใคใฎGPUใๅฟ
่ฆใงใใ
ๅฎ่ฃ
ไพ:
- [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeedใซใฏใใใใซๅน็็ใชDPใงใใZeRO-DPใจๅผใฐใใใใฎใๅซใพใใฆใใพใใ
- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
- [Varuna](https://github.com/microsoft/varuna)
- [SageMaker](https://arxiv.org/abs/2111.05972)
- [OSLO](https://github.com/tunib-ai/oslo)
๐ค Transformersใฎ็ถๆณ: ใพใ ๅฎ่ฃ
ใใใฆใใพใใใPPใจTPใใชใใใใ
## ZeRO DP+PP+TP

One of the main features of DeepSpeed is ZeRO, which is a super-scalable extension of DP. It has already been discussed in the ZeRO Data Parallelism section above. Normally it's a standalone feature that doesn't require PP or TP, but it can be combined with them.

When ZeRO-DP is combined with PP (and optionally TP), it typically enables only ZeRO stage 1 (optimizer sharding).

While it's theoretically possible to use ZeRO stage 2 (gradient sharding) with Pipeline Parallelism, it would have bad performance impacts. There would need to be an additional reduce-scatter collective for every micro-batch to aggregate the gradients before sharding, which adds a potentially significant communication overhead. By the nature of Pipeline Parallelism, small micro-batches are used, and instead the focus is on trying to balance arithmetic intensity (micro-batch size) with minimizing the pipeline bubble (number of micro-batches). Therefore those communication costs would hurt.

In addition, there are already fewer layers than normal due to PP, so the memory savings won't be huge. PP already reduces the gradient size by `1/PP`, so the gradient sharding savings on top of that are less significant than with pure DP.

ZeRO stage 3 is not a good choice either, for the same reason - more inter-node communication is required.

And since we have ZeRO, the other benefit is ZeRO-Offload: since this is stage 1, optimizer states can be offloaded to CPU.

Implementations:
- [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) and [Megatron-Deepspeed from BigScience](https://github.com/bigscience-workshop/Megatron-DeepSpeed), which is a fork of the former repo.
- [OSLO](https://github.com/tunib-ai/oslo)

Important papers:
- [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B](https://arxiv.org/abs/2201.11990)

🤗 Transformers status: not yet implemented, since we have no PP and TP.
## FlexFlow

[FlexFlow](https://github.com/flexflow/FlexFlow) solves the parallelization problem in a slightly different approach.

Paper: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358)

It performs a sort of 4D parallelism over Sample-Operator-Attribute-Parameter.

1. Sample = Data Parallelism (sample-wise parallelism)
2. Operator = parallelize a single operation into several sub-operations
3. Attribute = Data Parallelism (length-wise parallelism)
4. Parameter = Model Parallelism (regardless of dimension - horizontal or vertical)

Examples:
* Sample

Let's take 10 batches of sequence length 512. If we parallelize them by the sample dimension into 2 devices, 10 x 512 becomes 5 x 2 x 512.

* Operator

If we perform layer normalization, we compute std first, then mean, and then we can normalize the data. Operator parallelism allows computing std and mean in parallel. So if we parallelize by the operator dimension into 2 devices (cuda:0, cuda:1), first we copy the input data into both devices, then cuda:0 computes std while cuda:1 computes mean at the same time.

* Attribute

We have 10 batches of 512 length. If we parallelize them by the attribute dimension into 2 devices, 10 x 512 becomes 10 x 2 x 256.

* Parameter

This is similar to tensor model parallelism or naive layer-wise model parallelism.
ใใฎใใฌใผใ ใฏใผใฏใฎ้่ฆๆงใฏใ๏ผ1๏ผGPU/TPU/CPUๅฏพ๏ผ2๏ผRAM/DRAMๅฏพ๏ผ3๏ผ้ซ้ๅ
้จๆฅ็ถ/ไฝ้ๅค้จๆฅ็ถใชใฉใฎใชใฝใผในใๅใใใใใใในใฆใใขใซใดใชใบใ ใซใใฃใฆ่ชๅ็ใซๆ้ฉๅใใใใจใงใใใฉใฎไธฆๅๅใใฉใใงไฝฟ็จใใใใใขใซใดใชใบใ ็ใซๆฑบๅฎใใพใใ
้ๅธธใซ้่ฆใชๅด้ขใฎ1ใคใฏใFlexFlowใฏ้็ใงๅบๅฎใฎใฏใผใฏใญใผใใๆใคใขใใซใฎใใใซ่จญ่จใใใฆใใใๅ็ใชๅไฝใๆใคใขใใซใฏใคใใฌใผใทใงใณใใจใซ็ฐใชใไธฆๅๅๆฆ็ฅใๅฅฝใๅ ดๅใใใใใจใงใใ
ใใใใฃใฆใใใฎใใฌใผใ ใฏใผใฏใฎ็ดๆใฏ้ๅธธใซ้ญ
ๅ็ใงใใ้ธๆใใใฏใฉในใฟใง30ๅ้ใฎใทใใฅใฌใผใทใงใณใๅฎ่กใใใใฎ็นๅฎใฎ็ฐๅขใๆ้ฉใซๅฉ็จใใใใใฎๆ่ฏใฎๆฆ็ฅใๆไพใใพใใ้จๅใ่ฟฝๅ /ๅ้ค/็ฝฎๆใใใจใใใใซๅฏพใใฆๅฎ่กใใฆๅๆ้ฉๅใใฉใณใไฝๆใใพใใใใฎๅพใใใฌใผใใณใฐใงใใพใใ็ฐใชใใปใใใขใใใซใฏ็ฌ่ชใฎๆ้ฉๅใใใใพใใ
๐ค Transformersใฎ็พๅจใฎ็ถๆณ: ใพใ ็ตฑๅใใใฆใใพใใใใใงใซ[transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py)ใไฝฟ็จใใฆใขใใซใFXใใฌใผในๅฏ่ฝใงใใใใใFlexFlowใๅไฝใใใใใใซๅฟ
่ฆใชๆ้ ใ่ชฐใใ่ฆใคใใๅฟ
่ฆใใใใพใใ
## Which Strategy To Use When

Here is a very rough outline of which parallelism strategy to use when. The first on each list is typically faster.

**⇨ Single GPU**

* Model fits onto a single GPU:

    1. Normal use

* Model doesn't fit onto a single GPU:

    1. ZeRO + Offload CPU and optionally NVMe
    2. as above, plus enable [Memory Centric Tiling](https://deepspeed.readthedocs.io/en/latest/zero3.html#memory-centric-tiling) (see below for details) if the largest layer can't fit into a single GPU

* Largest layer doesn't fit onto a single GPU:

    1. If not using ZeRO - you must use TP, as PP alone won't be able to fit it.
    2. With ZeRO, see the same "Single GPU" entry above

**⇨ Single Node / Multi-GPU**

* Model fits onto a single GPU:

    1. DDP - Distributed DataParallel
    2. ZeRO - may or may not be faster depending on the situation and configuration used

* Model doesn't fit onto a single GPU:

    1. PP
    2. ZeRO
    3. TP

    With very fast intra-node connectivity of NVLINK or NVSwitch, all three should be mostly on par; without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment to find the winner on your particular setup.

    TP is almost always used within a single node. That is, TP size <= GPUs per node.

* Largest layer doesn't fit onto a single GPU:

    1. If not using ZeRO - you must use TP, as PP alone won't be able to fit it.
    2. With ZeRO, see the same "Single GPU" entry above

**⇨ Multi-Node / Multi-GPU**

* When you have fast inter-node connectivity:

    1. ZeRO - as it requires close to no modifications to the model
    2. PP+TP+DP - less communication, but requires massive changes to the model

* When you have slow inter-node connectivity and are still low on GPU memory:

    1. DP+PP+TP+ZeRO-1
<!-- hf_public_repos/transformers/docs/source/ja/pr_checks.md -->
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Checks on a Pull Request

When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure what you are adding is not breaking anything existing. Those checks are of four types:
- regular tests
- documentation build
- code and documentation style
- general repository consistency

In this document, we will explain what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR.

Note that, ideally, they require you to have a dev install:
```bash
pip install transformers[dev]
```
ใพใใฏ็ทจ้ๅฏ่ฝใชใคใณในใใผใซใฎๅ ดๅ๏ผ
```bash
pip install -e .[dev]
```
ใใฉใณในใใฉใผใใผใบใฎใชใใธใใชๅ
ใงไฝๆฅญใใฆใใพใใใใฉใณในใใฉใผใใผใบใฎใชใใทใงใณใฎไพๅญ้ขไฟใฎๆฐใๅขใใใใใใในใฆใๅๅพใงใใชใๅฏ่ฝๆงใใใใพใใ้็บ็จใคใณในใใผใซใๅคฑๆใใๅ ดๅใไฝๆฅญใใฆใใใใฃใผใใฉใผใใณใฐใใฌใผใ ใฏใผใฏ๏ผPyTorchใTensorFlowใใใใณ/ใพใใฏFlax๏ผใใคใณในใใผใซใใๆฌกใฎๆ้ ใๅฎ่กใใฆใใ ใใใ
```bash
pip install transformers[quality]
```
ใพใใฏ็ทจ้ๅฏ่ฝใชใคใณในใใผใซใฎๅ ดๅ๏ผ
```bash
pip install -e .[quality]
```
## Tests

All jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance `ci/circleci: run_tests_pipelines_tf` runs the pipelines tests in an environment where only TensorFlow is installed.

Note that only part of the test suite is run each time: a utility determines the differences in the library between before and after the PR, and picks the tests impacted by that diff. That utility can be run locally with:
```bash
python utils/tests_fetcher.py
```
1. ใชใใธใใชใฎใซใผใใใในใฏใชใใใๅฎ่กใใพใใใใใฏๆฌกใฎในใใใใๅฎ่กใใพใ๏ผ
1. ๅทฎๅๅ
ใฎๅใใกใคใซใใใงใใฏใใๅคๆดใใณใผใๅ
ใซใใใใใณใกใณใใใใญใฅใกใณใใผใทใงใณๆๅญๅใฎใฟใซใใใใ็ขบ่ชใใพใใๅฎ้ใฎใณใผใๅคๆดใใใใใกใคใซใฎใฟใไฟๆใใพใใ
2. ๅ
้จใฎใใใใๆง็ฏใใพใใใใฎใใใใฏใใฉใคใใฉใชใฎใฝใผในใณใผใใฎๅใใกใคใซใๅๅธฐ็ใซๅฝฑ้ฟใไธใใใในใฆใฎใใกใคใซใๆไพใใพใใใขใธใฅใผใซAใใขใธใฅใผใซBใซๅฝฑ้ฟใไธใใใจใฏใใขใธใฅใผใซBใใขใธใฅใผใซAใใคใณใใผใใใๅ ดๅใๆใใพใใๅๅธฐ็ใชๅฝฑ้ฟใๅพใใซใฏใใขใธใฅใผใซAใใใขใธใฅใผใซBใธใฎใขใธใฅใผใซใฎใใงใผใณใๅฟ
่ฆใงใๅใขใธใฅใผใซใฏๅใฎใขใธใฅใผใซใใคใณใใผใใใๅฟ
่ฆใใใใพใใ
3. ใใฎใใใใในใใใ1ใงๅ้ใใใใกใคใซใซ้ฉ็จใใพใใใใใซใใใPRใซๅฝฑ้ฟใๅใใใขใใซใใกใคใซใฎใชในใใๅพใใใพใใ
4. ใใใใฎใใกใคใซใใใใซๅฏพๅฟใใใในใใใกใคใซใซใใใใใๅฎ่กใใใในใใฎใชในใใๅๅพใใพใใ
2. ในใฏใชใใใใญใผใซใซใงๅฎ่กใใๅ ดๅใในใใใ1ใ3ใใใใณ4ใฎ็ตๆใ่กจ็คบใใใๅฎ่กใใใในใใใใใใพใใในใฏใชใใใฏใพใใ`test_list.txt` ใจใใๅๅใฎใใกใคใซใไฝๆใใพใใใใฎใใกใคใซใซใฏๅฎ่กใใใในใใฎใชในใใๅซใพใใฆใใใๆฌกใฎใณใใณใใงใญใผใซใซใงๅฎ่กใงใใพใ๏ผ
```bash
python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt)
```
## Documentation build

The `build_pr_documentation` job builds the documentation and makes sure everything will look fine once your PR is merged. A bot adds a link to preview the documentation to your PR, and any changes you make to the PR are automatically reflected in that preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the issue is as simple as a missing file in the `toctree`.

If you're interested in building or previewing the documentation locally, take a look at the [`README.md` in the docs folder](https://github.com/huggingface/transformers/tree/main/docs).
## Code and documentation style

Code formatting is applied to all the source files using `black` and `ruff`. We also have custom tools taking care of the formatting of docstrings and `rst` files, as well as of the order of the lazy imports performed in the Transformers `__init__.py` files (`utils/style_doc.py` and `utils/custom_init_isort.py`). All of this can be launched by executing
```bash
make style
```
CIใฏใ`ci/circleci: check_code_quality` ใใงใใฏๅ
ใงใใใใฎใใงใใฏใ้ฉ็จใใใฆใใใใจใ็ขบ่ชใใพใใใพใใ`ruff` ใๅฎ่กใใๆชๅฎ็พฉใฎๅคๆฐใไฝฟ็จใใใฆใใชใๅคๆฐใใใๅ ดๅใซใจใฉใผใๅ ฑๅใใพใใใใฎใใงใใฏใใญใผใซใซใงๅฎ่กใใใซใฏใไปฅไธใฎใณใใณใใไฝฟ็จใใฆใใ ใใใ
```bash
make quality
```
ๆ้ใใใใใใจใใใใพใใใใใใฃใฆใ็พๅจใฎใใฉใณใใงๅคๆดใใใใกใคใซใฎใฟใงๅใใใจใๅฎ่กใใใซใฏใๆฌกใฎใณใใณใใๅฎ่กใใพใใ
```bash
make fixup
```
ใใฎๆๅพใฎใณใใณใใฏใใชใใธใใชใฎๆดๅๆงใฎใใใฎใในใฆใฎ่ฟฝๅ ใฎใใงใใฏใๅฎ่กใใพใใใใใใ่ฉณใใ่ฆใฆใฟใพใใใใ
## Repository consistency

This regroups all the tests to make sure your PR leaves the repository in a good state; it is performed by the `ci/circleci: check_repository_consistency` check. You can run that check locally by executing the following:
```bash
make repo-consistency
```
ใใใ็ขบ่ชใใพใ๏ผ
- `utils/check_repo.py` ใซใใฃใฆๅฎ่กใใใใinit ใซ่ฟฝๅ ใใใใในใฆใฎใชใใธใงใฏใใๆๆธๅใใใฆใใพใใ
- `utils/check_inits.py` ใซใใฃใฆๅฎ่กใใใใใในใฆใฎ `__init__.py` ใใกใคใซใใใฎ2ใคใฎใปใฏใทใงใณใงๅใๅ
ๅฎนใๆใฃใฆใใพใใ
- `utils/check_copies.py` ใซใใฃใฆๅฎ่กใใใใไปใฎใขใธใฅใผใซใใใฎใณใใผใจใใฆ่ญๅฅใใใใในใฆใฎใณใผใใๅ
ใฎใณใผใใจไธ่ดใใฆใใพใใ
- `utils/check_config_docstrings.py` ใซใใฃใฆๅฎ่กใใใใใในใฆใฎ่จญๅฎใฏใฉในใซใฏๅฐใชใใจใ1ใคใฎๆๅนใชใใงใใฏใใคใณใใใใญใฅใกใณใๆๅญๅใซ่จ่ผใใใฆใใพใใ
- `utils/check_config_attributes.py` ใซใใฃใฆๅฎ่กใใใใใในใฆใฎ่จญๅฎใฏใฉในใซใฏใๅฏพๅฟใใใขใใชใณใฐใใกใคใซใงไฝฟ็จใใใฆใใๅฑๆงใฎใฟใๅซใพใใฆใใพใใ
- `utils/check_copies.py` ใซใใฃใฆๅฎ่กใใใใREADME ใจใใญใฅใกใณใใฎใคใณใใใฏในใฎ็ฟป่จณใใใกใคใณใฎREADME ใจๅใใขใใซใชในใใๆใฃใฆใใพใใ
- `utils/check_table.py` ใซใใฃใฆๅฎ่กใใใใใใญใฅใกใณใใผใทใงใณใฎ่ชๅ็ๆใใผใใซใๆๆฐใงใใใใจใ็ขบ่ชใใพใใ
- `utils/check_dummies.py` ใซใใฃใฆๅฎ่กใใใใใในใฆใฎใชใใธใงใฏใใๅฉ็จๅฏ่ฝใงใใใใชใใทใงใณใฎไพๅญ้ขไฟใใในใฆใคใณในใใผใซใใใฆใใชใใฆใๅ้กใใใพใใใ
ใใฎใใงใใฏใๅคฑๆใใๅ ดๅใๆๅใฎ2ใคใฎ้
็ฎใฏๆๅใงไฟฎๆญฃใใๅฟ
่ฆใใใใๆๅพใฎ4ใคใฏใณใใณใใๅฎ่กใใฆ่ชๅ็ใซไฟฎๆญฃใงใใพใใ
```bash
make fix-copies
```
่ฟฝๅ ใฎใใงใใฏใใคใณใใฏใๆฐใใใขใใซใ่ฟฝๅ ใใPull Request๏ผPR๏ผใซ้ข้ฃใใฆใใพใใไธปใซๆฌกใฎ็นใ็ขบ่ชใใพใ๏ผ
- ใในใฆใฎ่ฟฝๅ ใใใใขใใซใฏใAuto-mapping๏ผ`utils/check_repo.py`ใงๅฎ่ก๏ผใซๅซใพใใฆใใพใใ
<!-- TODO Sylvainใๅ
ฑ้ใฎใในใใๅฎ่ฃ
ใใใฆใใใใจใ็ขบ่ชใใใใงใใฏใ่ฟฝๅ ใใฆใใ ใใใ-->
- ใในใฆใฎใขใใซใ้ฉๅใซใในใใใใฆใใพใ๏ผ`utils/check_repo.py`ใงๅฎ่ก๏ผใ
<!-- TODO Sylvainใไปฅไธใ่ฟฝๅ ใใฆใใ ใใ
- ใในใฆใฎใขใใซใใกใคใณใฎREADMEใใใณใกใคใณใฎใใญใฅใกใณใๅ
ใซ่ฟฝๅ ใใใฆใใพใใ
- ไฝฟ็จใใใฆใใใในใฆใฎใใงใใฏใใคใณใใๅฎ้ใซHubใซๅญๅจใใฆใใพใ
-->
### Check copies

Since the Transformers library is very opinionated with respect to model code, and each model should be fully implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a given model stays consistent with the original. This way, when there is a bug fix, we can see all the other impacted models and choose to trickle down the modification or break the copy.

<Tip>

If a file is a full copy of another file, you should register it in the constant `FULL_COPIES` of `utils/check_copies.py`.

</Tip>

This mechanism relies on comments of the form `# Copied from xxx`. The `xxx` should contain the whole path to the class or function that is being copied. For instance, `RobertaSelfOutput` is a direct copy of the `BertSelfOutput` class, so you can see [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289) that it has the comment:
```py
# Copied from transformers.models.bert.modeling_bert.BertSelfOutput
```
ๆณจๆ็นใจใใฆใใใใใฏใฉในๅ
จไฝใซ้ฉ็จใใไปฃใใใซใใณใใผๅ
ใฎ้ข้ฃใกใฝใใใซ้ฉ็จใงใใพใใใใจใใฐใ[ใใกใ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598)ใงใฏใ`RobertaPreTrainedModel._init_weights` ใ `BertPreTrainedModel` ใใใณใใผใใใฆใใใไปฅไธใฎใณใกใณใใใใใพใ๏ผ
```py
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
```
ๆณจ๏ผ็ขๅฐใฎๅจใใซใฏในใใผในใๅซใพใใฆใใฆใฏใใใพใใ๏ผใใกใใใใใฎในใใผในใ็ฝฎๆใใฟใผใณใฎไธ้จใงใใๅ ดๅใ้คใใพใ๏ผใ
ใซใณใใงๅบๅใใใ่คๆฐใฎใใฟใผใณใ่ฟฝๅ ใงใใพใใไพใใฐใใใใงใฏ `CamemberForMaskedLM` ใฏ `RobertaForMaskedLM` ใฎ็ดๆฅใฎใณใใผใงใ2ใคใฎ็ฝฎๆใใใใพใ๏ผ `Roberta` ใใ `Camembert` ใธใใใใฆ `ROBERTA` ใใ `CAMEMBERT` ใธใจ็ฝฎๆใใใพใใ[ใใกใ](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929)ใงใใใฎไฝๆฅญใฏใณใกใณใไปใใง่กใใใฆใใพใใ
```py
# Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT
```
If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right.
<Tip>
If the replacements change the formatting (if you replace a short name by a very long name for instance), the copy is checked after applying the auto-formatter.
</Tip>
Another way, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is just to add the option `all-casing`. [Here](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237) is an example in `MobileBertForSequenceClassification` with the comment:
```py
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing
```
In this case, the code is copied from `BertForSequenceClassification` by replacing:
- `Bert` by `MobileBert` (for instance, when using `MobileBertModel` in the init)
- `bert` by `mobilebert` (for instance, when defining `self.mobilebert`)
- `BERT` by `MOBILEBERT` (in the constant `MOBILEBERT_INPUTS_DOCSTRING`)
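The three casings above can be applied mechanically. Below is a minimal sketch of such an "all-casing" replacement — an illustration only, not the actual implementation in `utils/check_copies.py`:

```python
# Sketch of an "all-casing" replacement: apply the pattern in CamelCase,
# lowercase, and UPPERCASE forms, as the three bullet points describe.
def apply_all_casing(code: str, old: str, new: str) -> str:
    for o, n in ((old, new), (old.lower(), new.lower()), (old.upper(), new.upper())):
        code = code.replace(o, n)
    return code

source = "class BertModel:\n    BERT_INPUTS_DOCSTRING = ''\n    self_attr = 'bert'"
print(apply_all_casing(source, "Bert", "MobileBert"))
```

Running this replaces `BertModel` with `MobileBertModel`, `'bert'` with `'mobilebert'`, and `BERT_INPUTS_DOCSTRING` with `MOBILEBERT_INPUTS_DOCSTRING` in one pass.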
hf_public_repos/transformers/docs/source/ja/autoclass_tutorial.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Load pretrained instances with an AutoClass
With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As part of 🤗 Transformers' core philosophy to make the library easy, simple and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. The `from_pretrained()` method lets you quickly load a pretrained model, so you don't have to devote time and resources to train a model from scratch.
Producing this type of checkpoint-agnostic code means that if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different.
<Tip>
Remember, architecture refers to the skeleton of the model, while a checkpoint contains the weights for a given architecture.
For example, [BERT](https://huggingface.co/bert-base-uncased) is an architecture, while `bert-base-uncased` is a checkpoint.
Model is a general term that can mean either architecture or checkpoint.
</Tip>
In this tutorial, you will learn to:

* Load a pretrained tokenizer.
* Load a pretrained image processor.
* Load a pretrained feature extractor.
* Load a pretrained processor.
* Load a pretrained model.
## AutoTokenizer
Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.
Load a tokenizer with [`AutoTokenizer.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```
Then tokenize your input as shown below:
```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
## AutoImageProcessor
For vision tasks, an image processor processes the image into the correct input format:
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```
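Under the hood, an image processor typically rescales the pixel values and normalizes them with a per-channel mean and standard deviation. The sketch below is conceptual only, using assumed illustrative mean/std values rather than the ones stored in the `google/vit-base-patch16-224` checkpoint:

```python
import numpy as np

# Conceptual sketch of what an image processor does: rescale to [0, 1],
# then normalize with per-channel mean/std (illustrative values).
def preprocess(image: np.ndarray, mean, std) -> np.ndarray:
    pixels = image.astype("float32") / 255.0  # rescale to [0, 1]
    return (pixels - np.array(mean, dtype="float32")) / np.array(std, dtype="float32")

dummy = np.full((224, 224, 3), 128, dtype=np.uint8)  # gray test image
pixel_values = preprocess(dummy, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
print(pixel_values.shape)  # → (224, 224, 3)
```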
## AutoFeatureExtractor
For audio tasks, a feature extractor processes the audio signal into the correct input format.
Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```
## AutoProcessor
Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the [LayoutLMV2](model_doc/layoutlmv2) model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them.
Load a processor with [`AutoProcessor.from_pretrained`]:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```
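A processor is essentially a thin wrapper that routes text to the tokenizer and images to the image processor behind a single call. The class below is a hypothetical illustration of that design, not the actual `AutoProcessor` internals:

```python
# Conceptual sketch: a processor bundles a tokenizer and an image processor.
# The toy tokenizer/image processor passed in below are illustrative stand-ins.
class SimpleProcessor:
    def __init__(self, tokenizer, image_processor):
        self.tokenizer = tokenizer
        self.image_processor = image_processor

    def __call__(self, text=None, images=None):
        encoding = {}
        if text is not None:
            encoding.update(self.tokenizer(text))  # e.g. input_ids
        if images is not None:
            encoding["pixel_values"] = self.image_processor(images)
        return encoding

proc = SimpleProcessor(
    tokenizer=lambda t: {"input_ids": [len(w) for w in t.split()]},
    image_processor=lambda img: [[p / 255 for p in row] for row in img],
)
print(proc(text="hello world", images=[[0, 255]]))
# → {'input_ids': [5, 5], 'pixel_values': [[0.0, 1.0]]}
```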
## AutoModel
<frameworkcontent>
<pt>
Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
Easily reuse the same checkpoint to load an architecture for a different task:
```py
>>> from transformers import AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```
<Tip warning={true}>
For PyTorch models, the `from_pretrained()` method uses `torch.load()`, which internally uses `pickle` and is known to be insecure.
In general, never load a model that could have come from an untrusted source, or that could have been tampered with.
This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are scanned for malware at each commit.
See the Hub documentation for best practices like signed commit verification with GPG.

TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the `from_tf` and `from_flax` arguments of the `from_pretrained` method to circumvent this issue.
</Tip>
Generally, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models.
This will ensure you load the correct architecture every time.
In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</pt>
<tf>
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
Easily reuse the same checkpoint to load an architecture for a different task:
```py
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```
Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models.
This will ensure you load the correct architecture every time.
In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ja/_toctree.yml
- sections:
- local: index
title: ๐ค Transformers
- local: quicktour
title: ใฏใคใใฏใใขใผ
- local: installation
title: ใคใณในใใผใซ
title: Get started
- sections:
- local: pipeline_tutorial
title: ใใคใใฉใคใณใไฝฟ็จใใฆๆจ่ซใๅฎ่กใใ
- local: autoclass_tutorial
title: AutoClass ใไฝฟ็จใใฆ็งปๆคๅฏ่ฝใชใณใผใใไฝๆใใ
- local: preprocessing
title: ใใผใฟใฎๅๅฆ็
- local: training
title: ไบๅใใฌใผใใณใฐใใใใขใใซใๅพฎ่ชฟๆดใใ
- local: run_scripts
title: ในใฏใชใใใไฝฟ็จใใฆใใฌใผใใณใฐใใ
- local: accelerate
title: ๐ค Accelerate ใไฝฟ็จใใฆๅๆฃใใฌใผใใณใฐใใปใใใขใใใใ
- local: peft
title: ๐ค PEFT ใไฝฟ็จใใฆใขใใใฟใผใใญใผใใใฆใใฌใผใใณใฐใใ
- local: model_sharing
title: Share a model
- local: transformers_agents
title: ใจใผใธใงใณใ
- local: llm_tutorial
title: LLM ใไฝฟ็จใใ็ๆ
title: Tutorials
- sections:
- isExpanded: false
sections:
- local: tasks/sequence_classification
title: ใใญในใใฎๅ้ก
- local: tasks/token_classification
title: ใใผใฏใณใฎๅ้ก
- local: tasks/question_answering
title: ่ณช็ๅฟ็ญ
- local: tasks/language_modeling
title: ๅ ๆ่จ่ชใขใใชใณใฐ
- local: tasks/masked_language_modeling
title: ใในใฏใใใ่จ่ชใขใใชใณใฐ
- local: tasks/translation
title: ็ฟป่จณ
- local: tasks/summarization
title: ่ฆ็ด
- local: tasks/multiple_choice
title: ่คๆฐใฎ้ธๆ่ข
title: ่ช็ถ่จ่ชๅฆ็
- isExpanded: false
sections:
- local: tasks/audio_classification
title: ้ณๅฃฐใฎๅ้ก
- local: tasks/asr
title: ่ชๅ้ณๅฃฐ่ช่ญ
title: ใชใผใใฃใช
- isExpanded: false
sections:
- local: tasks/image_classification
title: ็ปๅๅ้ก
- local: tasks/semantic_segmentation
title: ใปใใณใใฃใใฏใปใฐใกใณใใผใทใงใณ
- local: tasks/video_classification
title: ใใใชใฎๅ้ก
- local: tasks/object_detection
title: ็ฉไฝๆคๅบ
- local: tasks/zero_shot_object_detection
title: ใผใญใทใงใใ็ฉไฝๆคๅบ
- local: tasks/zero_shot_image_classification
title: ใผใญใทใงใใ็ปๅๅ้ก
- local: tasks/monocular_depth_estimation
title: ๆทฑใใฎๆจๅฎ
- local: tasks/image_to_image
title: ็ปๅใใ็ปๅใธ
- local: tasks/knowledge_distillation_for_image_classification
title: ใณใณใใฅใผใฟใใธใงใณใฎใใใฎ็ฅ่ญใฎ่ธ็
title: ใณใณใใฅใผใฟใใธใงใณ
- isExpanded: false
sections:
- local: tasks/image_captioning
title: ็ปๅใฎใญใฃใใทใงใณ
- local: tasks/document_question_answering
title: ๆๆธใฎ่ณชๅใธใฎๅ็ญ
- local: tasks/visual_question_answering
title: ่ฆ่ฆ็ใช่ณชๅใธใฎๅ็ญ
- local: tasks/text-to-speech
title: ใใญในใ่ชญใฟไธใ
title: ใใซใใขใผใใซ
- isExpanded: false
sections:
- local: generation_strategies
title: ็ๆๆฆ็ฅใใซในใฟใใคใบใใ
title: ไธไปฃ
- isExpanded: false
sections:
- local: tasks/idefics
title: IDEFICS ใไฝฟ็จใใใคใกใผใธ ใฟในใฏ
- local: tasks/prompting
title: LLM ใใญใณใใ ใฌใคใ
title: ใใญใณใใ
title: Task Guides
- sections:
- local: fast_tokenizers
title: ๐ค ใใผใฏใใคใถใผใฎ้ซ้ใใผใฏใใคใถใผใไฝฟ็จใใ
- local: multilingual
title: ๅค่จ่ชใขใใซใงๆจ่ซใๅฎ่กใใ
- local: create_a_model
title: ใขใใซๅบๆใฎ API ใไฝฟ็จใใ
- local: custom_models
title: Share a custom model
- local: chat_templating
title: ใใฃใใใขใใซใฎใใณใใฌใผใ
- local: serialization
title: ONNX ใธใฎใจใฏในใใผใ
- local: tflite
title: TFLite ใธใฎใจใฏในใใผใ
- local: torchscript
title: ใใผใในใฏใชใใใธใฎใจใฏในใใผใ
- local: benchmarks
title: ใใณใใใผใฏ
- local: community
title: ใณใใฅใใใฃใชใฝใผใน
- local: custom_tools
title: ใซในใฟใ ใใผใซใจใใญใณใใ
- local: troubleshooting
title: ใใฉใใซใทใฅใผใใฃใณใฐ
title: Developer guides
- sections:
- local: performance
title: ๆฆ่ฆ
- sections:
- local: perf_train_gpu_one
title: ๅไธใฎ GPU ใงๅน็็ใซใใฌใผใใณใฐใใใใใฎๆนๆณใจใใผใซ
- local: perf_train_gpu_many
title: ่คๆฐใฎ GPU ใจไธฆๅๅฆ็
- local: perf_train_cpu
title: CPU ใงใฎๅน็็ใชใใฌใผใใณใฐ
- local: perf_train_cpu_many
title: ๅๆฃCPUใใฌใผใใณใฐ
- local: perf_train_tpu
title: TPU ใซ้ขใใใใฌใผใใณใฐ
- local: perf_train_tpu_tf
title: TensorFlow ใไฝฟ็จใใ TPU ใฎใใฌใผใใณใฐ
- local: perf_train_special
title: ็นๆฎใชใใผใใฆใงใขใซ้ขใใใใฌใผใใณใฐ
- local: perf_hardware
title: ใใฌใผใใณใฐ็จใฎใซในใฟใ ใใผใใฆใงใข
- local: hpo_train
title: Trainer API ใไฝฟ็จใใใใคใใผใใฉใกใผใฟๆค็ดข
title: ๅน็็ใชใใฌใผใใณใฐใใฏใใใฏ
- sections:
- local: perf_infer_cpu
title: CPUใงใฎๆจ่ซ
- local: perf_infer_gpu_one
title: 1 ใคใฎ GPU ใงใฎๆจ่ซ
- local: perf_infer_gpu_many
title: ๅคใใฎ GPU ใงใฎๆจ่ซ
- local: perf_infer_special
title: ็นๆฎใชใใผใใฆใงใขใงใฎๆจ่ซ
title: ๆจ่ซใฎๆ้ฉๅ
- local: big_models
title: ๅคงใใชใขใใซใฎใคใณในใฟใณในๅ
- local: tf_xla
title: TensorFlowใขใใซใฎXLA็ตฑๅ
- local: perf_torch_compile
title: torch.compile()ใไฝฟ็จใใๆจ่ซใฎๆ้ฉๅ
title: ใใใฉใผใใณในใจในใฑใผใฉใใชใใฃ
- sections:
- local: add_new_model
title: ๐ค Transformersใซใขใใซใ่ฟฝๅ ใใๆนๆณ
- local: add_tensorflow_model
title: ๐ค TransformersใขใใซใTensorFlowใซๅคๆใใๆนๆณ
- local: testing
title: ใในใ
- local: pr_checks
title: ใใซใชใฏใจในใใฎใใงใใฏ
title: ่ฒข็ฎใใ
- sections:
- local: philosophy
title: ใใฃใญใฝใใฃใผ
- local: glossary
title: ็จ่ช้
- local: task_summary
title: ๐ค Transformersใฎๆฉ่ฝ
- local: tasks_explained
title: ๐ค Transformersใใฟในใฏใ่งฃๆฑบใใๆนๆณ
- local: model_summary
title: Transformerใขใใซใใกใใชใผ
- local: tokenizer_summary
title: ใใผใฏใใคใถใผใฎๆฆ่ฆ
- local: attention
title: ๆณจๆๆฉๆง
- local: pad_truncation
title: ใใใฃใณใฐใจๅใ่ฉฐใ
- local: bertology
title: BERTology
- local: perplexity
title: ๅบๅฎ้ทใขใใซใฎใใผใใฌใญใทใใฃ
- local: pipeline_webserver
title: Webใตใผใใผๆจ่ซ็จใใคใใฉใคใณ
- local: model_memory_anatomy
title: ใขใใซใใฌใผใใณใฐใฎ่งฃๅๅญฆ
title: ใณใณใปใใใฅใขใซใฌใคใ
- sections:
- sections:
- local: main_classes/agent
title: ใจใผใธใงใณใใจใใผใซ
- local: model_doc/auto
title: Auto Classes
- local: main_classes/callback
title: ใณใผใซใใใฏ
- local: main_classes/configuration
title: ๆงๆ
- local: main_classes/data_collator
title: Data Collator
- local: main_classes/keras_callbacks
title: Keras ใณใผใซใใใฏ
- local: main_classes/logging
title: ใญใฎใณใฐ
- local: main_classes/model
title: ใขใใซ
- local: main_classes/text_generation
title: ใใญในใใฎ็ๆ
- local: main_classes/onnx
title: ONNX
- local: main_classes/optimizer_schedules
title: ๆ้ฉๅ
- local: main_classes/output
title: ใขใใซใฎๅบๅ
- local: main_classes/pipelines
title: ใใคใใฉใคใณ
- local: main_classes/processors
title: ใใญใปใใตใผ
- local: main_classes/quantization
title: ้ๅญๅ
- local: main_classes/tokenizer
title: ใใผใฏใใคใถใผ
- local: main_classes/trainer
title: ใใฌใผใใผ
- local: main_classes/deepspeed
title: ใใฃใผใในใใผใใฎ็ตฑๅ
- local: main_classes/feature_extractor
title: ็นๅพดๆฝๅบๅจ
- local: main_classes/image_processor
title: ็ปๅๅฆ็ใใญใปใใต
title: ไธป่ฆใชใฏใฉใน
- sections:
- isExpanded: false
sections:
- local: model_doc/albert
title: ALBERT
- local: model_doc/bart
title: BART
- local: model_doc/barthez
title: BARThez
- local: model_doc/bartpho
title: BARTpho
- local: model_doc/bert
title: BERT
- local: model_doc/bert-generation
title: BertGeneration
- local: model_doc/bert-japanese
title: BertJapanese
- local: model_doc/bertweet
title: Bertweet
- local: model_doc/big_bird
title: BigBird
- local: model_doc/bigbird_pegasus
title: BigBirdPegasus
- local: model_doc/biogpt
title: BioGpt
- local: model_doc/blenderbot
title: Blenderbot
- local: model_doc/blenderbot-small
title: Blenderbot Small
- local: model_doc/bloom
title: BLOOM
- local: model_doc/bort
title: BORT
- local: model_doc/byt5
title: ByT5
- local: model_doc/camembert
title: CamemBERT
- local: model_doc/canine
title: CANINE
- local: model_doc/codegen
title: CodeGen
- local: model_doc/code_llama
title: CodeLlama
- local: model_doc/convbert
title: ConvBERT
- local: model_doc/cpm
title: CPM
- local: model_doc/cpmant
title: CPMANT
- local: model_doc/ctrl
title: CTRL
- local: model_doc/deberta
title: DeBERTa
- local: model_doc/deberta-v2
title: DeBERTa-v2
title: ๆ็ซ ใขใใซ
- isExpanded: false
sections:
- local: model_doc/beit
title: BEiT
- local: model_doc/bit
title: BiT
- local: model_doc/conditional_detr
title: Conditional DETR
- local: model_doc/convnext
title: ConvNeXT
- local: model_doc/convnextv2
title: ConvNeXTV2
- local: model_doc/cvt
title: CvT
- local: model_doc/deformable_detr
title: Deformable DETR
title: ใใธใงใณใขใใซ
- isExpanded: false
sections:
- local: model_doc/audio-spectrogram-transformer
title: Audio Spectrogram Transformer
- local: model_doc/bark
title: Bark
- local: model_doc/clap
title: CLAP
title: ้ณๅฃฐใขใใซ
- isExpanded: false
sections:
- local: model_doc/align
title: ALIGN
- local: model_doc/altclip
title: AltCLIP
- local: model_doc/blip
title: BLIP
- local: model_doc/blip-2
title: BLIP-2
- local: model_doc/bridgetower
title: BridgeTower
- local: model_doc/bros
title: BROS
- local: model_doc/chinese_clip
title: Chinese-CLIP
- local: model_doc/clip
title: CLIP
- local: model_doc/clipseg
title: CLIPSeg
- local: model_doc/clvp
title: CLVP
- local: model_doc/data2vec
title: Data2Vec
title: ใใซใใขใผใใซใขใใซ
- isExpanded: false
sections:
- local: model_doc/decision_transformer
title: Decision Transformer
title: ๅผทๅๅญฆ็ฟใขใใซ
- isExpanded: false
sections:
- local: model_doc/autoformer
title: Autoformer
title: ๆ็ณปๅใขใใซ
title: ใขใใซ
- sections:
- local: internal/modeling_utils
title: ใซในใฟใ ใฌใคใคใผใจใฆใผใใฃใชใใฃ
- local: internal/pipelines_utils
title: ใใคใใฉใคใณ็จใฎใฆใผใใฃใชใใฃ
- local: internal/tokenization_utils
title: Utilities for Tokenizers
- local: internal/trainer_utils
title: ใใฌใผใใผ็จใฆใผใใฃใชใใฃ
- local: internal/generation_utils
title: Utilities for Generation
- local: internal/image_processing_utils
title: ็ปๅใใญใปใใต็จใฆใผใใฃใชใใฃ
- local: internal/audio_utils
title: ใชใผใใฃใชๅฆ็็จใฎใฆใผใใฃใชใใฃ
- local: internal/file_utils
title: General Utilities
- local: internal/time_series_utils
title: ๆ็ณปๅ็จใฎใฆใผใใฃใชใใฃ
title: Internal Helpers
title: API
hf_public_repos/transformers/docs/source/ja/perf_infer_special.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Inference on Specialized Hardware
This document will be completed soon with information on how to run inference on specialized hardware. In the meantime, you can check out [the guide for inference on CPUs](perf_infer_cpu).
hf_public_repos/transformers/docs/source/ja/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Training on CPU
This guide focuses on how to train large models efficiently on CPU.
## Mixed precision with IPEX
IPEX is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. It is therefore expected to bring a performance benefit for Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.

The low-precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with the AVX512 instruction set, and will be supported on the next generation of Intel® Xeon® Scalable Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set, bringing further performance boosts. Auto Mixed Precision for the CPU backend has been enabled since PyTorch 1.10. At the same time, support for Auto Mixed Precision with BFloat16 on CPU and BFloat16 optimization of operators have been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to the PyTorch main branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.

Check more detailed information for [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).
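To build intuition for why BFloat16 keeps float32's dynamic range while reducing precision: it keeps the 8-bit exponent and simply truncates the mantissa to 7 bits, i.e. it drops the low 16 bits of a float32. This is an illustrative sketch of that bit-level relationship, not anything from the IPEX codebase:

```python
import struct

# Illustrative round-to-zero BFloat16 conversion: zero out the low 16 bits
# of the float32 representation, keeping the sign and 8-bit exponent intact.
def to_bfloat16(x: float) -> float:
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    truncated = bits & 0xFFFF0000  # drop the low 16 mantissa bits
    return struct.unpack("<f", struct.pack("<I", truncated))[0]

print(to_bfloat16(3.141592653589793))  # → 3.140625 (less precision, same range)
```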
### IPEX installation:
IPEX releases follow PyTorch; to install via pip:
| PyTorch Version | IPEX version |
| :---------------: | :----------: |
| 1.13 | 1.13.0+cpu |
| 1.12 | 1.12.300+cpu |
| 1.11 | 1.11.200+cpu |
| 1.10 | 1.10.100+cpu |
```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
Check more approaches for [how to install IPEX](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
### Usage in Trainer
To enable auto mixed precision with IPEX in the Trainer, users should add `use_ipex`, `bf16`, and `no_cuda` to the training command arguments.
Take the [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) use case as an example.
- Training with IPEX using BF16 auto mixed precision on CPU:
<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>
### Practice example
Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
hf_public_repos/transformers/docs/source/ja/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Distributed training with 🤗 Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and for accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
## Setup
Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```
Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
```
## Prepare to accelerate
Next, pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader, model, optimizer
... )
```
## Backward
Finally, replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:
```py
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... outputs = model(**batch)
... loss = outputs.loss
... accelerator.backward(loss)
... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)
+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+ train_dataloader, eval_dataloader, model, optimizer
+ )
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
- batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
- loss.backward()
+ accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
```
## Train
Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
### Train with a script
If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```
Then launch your training with:
```bash
accelerate launch train.py
```
### Train with a notebook
🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:
```py
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
```
For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).
hf_public_repos/transformers/docs/source/ja/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# XLA Integration for TensorFlow Models
[[open-in-colab]]
Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow models. From the [official documentation](https://www.tensorflow.org/xla): XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes.
Using XLA in TensorFlow is simple - it comes packaged inside the `tensorflow` library, and it can be triggered with the `jit_compile` argument in any graph-creating function such as [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs). When using Keras methods like `fit()` and `predict()`, you can enable XLA simply by passing the `jit_compile` argument to `model.compile()`. However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary `tf.function`.
Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation models such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5) and [OPT](https://huggingface.co/docs/transformers/model_doc/opt), as well as speech processing models such as [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper).
While the exact amount of speed-up is very much model-dependent, for the TensorFlow text generation models inside 🤗 Transformers we noticed a speed-up of ~100x. This document explains how you can use XLA for these models to get the maximum amount of performance. We'll also provide links to additional resources if you're interested in learning more about the benchmarks and our design philosophy behind the XLA integration.
## Running TF functions with XLA
Let's consider the following model in TensorFlow:
```py
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
```
The above model accepts inputs having a dimension of `(10, )`. We can use the model for running a forward pass like so:
```py
# Generate random inputs for the model.
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim))
# Run a forward pass.
_ = model(random_inputs)
```
In order to run the forward pass with an XLA-compiled function, we'd need to do:
```py
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
```
The default `call()` function of the `model` is used for compiling the XLA graph. But if there's any other model function you want to compile into XLA, that's also possible with:
```py
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
```
## Running a TF text generation model with XLA from ๐ค Transformers
To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of `transformers` installed. You can install it by running:
```bash
pip install transformers --upgrade
```
And then you can run the following code:
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
# Will error if the minimal version of Transformers is not installed.
from transformers.utils import check_min_version
check_min_version("4.21.0")
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]
# One line to create an XLA generation function
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
# Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the
```
Enabling XLA on `generate()` is just a single line of code; the rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of those to realize the speed-ups that XLA can bring in. We discuss these in the following sections.
## Gotchas to be aware of
When you execute an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as ["tracing"](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing).
You might notice that the generation time is not fast at first. Successive calls of `xla_generate()` (or any other XLA-enabled function) won't have to infer the computation graph again, as long as the inputs to the function follow the same shape with which the computation graph was initially built. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with modalities that have variable input shapes (e.g., text).
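The per-shape caching behavior described above can be sketched in plain Python (this is a toy illustration of the idea, not how TensorFlow implements tracing): a compiled graph is cached per input shape, so only unseen shapes pay the expensive trace step.

```python
# Toy model of XLA tracing: compile once per input shape, then reuse.
trace_count = 0
_graph_cache = {}

def run_with_tracing(shape):
    global trace_count
    if shape not in _graph_cache:
        trace_count += 1  # the expensive "tracing" step happens here
        _graph_cache[shape] = f"compiled graph for {shape}"
    return _graph_cache[shape]

run_with_tracing((1, 8))   # traces
run_with_tracing((1, 8))   # cache hit, no tracing
run_with_tracing((1, 16))  # new shape -> traces again
print(trace_count)  # → 2
```

This is exactly why padding all text inputs to a common length, as shown next, keeps generation fast.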
To make sure `xla_generate()` always operates with the same input shapes, you can specify the `padding` arguments when calling the tokenizer:
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]
xla_generate = tf.function(model.generate, jit_compile=True)
# Here, we call the tokenizer with padding options.
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
```
This way, you can ensure that the inputs to `xla_generate()` always have the shape it was traced with, leading to speed-ups in generation time. You can verify this with the code below:
```py
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
xla_generate = tf.function(model.generate, jit_compile=True)
for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
start = time.time_ns()
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
end = time.time_ns()
print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```
On a Tesla T4 GPU, you can expect an output like:
```bash
Execution time -- 30819.6 ms
Execution time -- 79.0 ms
Execution time -- 78.9 ms
```
The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing and thus lead to slow-downs in generation time.
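The shape-bucketing effect of `pad_to_multiple_of` can be sketched in plain Python. Rounding every prompt length up to the nearest multiple collapses many raw lengths onto a few padded shapes, so a shape-keyed trace cache stays small (a toy illustration, not part of the TensorFlow API):

```python
def padded_length(n_tokens: int, multiple: int = 8) -> int:
    """Round a sequence length up to the nearest multiple."""
    return ((n_tokens + multiple - 1) // multiple) * multiple

# Many raw lengths map to the same padded shape, so a shape-keyed
# trace cache only needs to hold a handful of entries.
lengths = [3, 5, 8, 11, 14, 16]
shapes = {padded_length(n) for n in lengths}
print(sorted(shapes))  # → [8, 16]
```

In the tokenizer call above, `pad_to_multiple_of=8` plays exactly this role: every prompt between 1 and 8 tokens produces the same traced shape.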
We didn't cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases.
## Additional Resources
Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general.
* [This Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) provides an interactive demonstration if you'd like to fiddle with the XLA-compatible encoder-decoder (such as [T5](https://huggingface.co/docs/transformers/model_doc/t5)) and decoder-only (such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)) text generation models.
* [This blog post](https://huggingface.co/blog/tf-xla-generate) provides an overview of the comparison benchmarks for XLA-compatible models, along with a friendly introduction to XLA in TensorFlow.
* [This blog post](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
* Recommended posts for learning more about XLA and TensorFlow graphs in general:
    * [XLA: Optimizing Compiler for Machine Learning](https://www.tensorflow.org/xla)
    * [Introduction to graphs and tf.function](https://www.tensorflow.org/guide/intro_to_graphs)
    * [Better performance with tf.function](https://www.tensorflow.org/guide/function)
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on Specialized Hardware
<Tip>
Note: Most of the strategies introduced in the [single GPU section](perf_train_gpu_one) (such as mixed precision training or gradient accumulation) and in the [multi-GPU section](perf_train_gpu_many) are generic and apply to training models in general, so make sure to have a look at them before diving into this section.
</Tip>
This document will be completed soon with information on how to train on specialized hardware.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Generation with LLMs
[[open-in-colab]]
LLMs, or Large Language Models, are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model: you need to do autoregressive generation.
Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the [`~generation.GenerationMixin.generate`] method, which is available to all models with generative capabilities.
This tutorial will show you how to:

* Generate text with an LLM
* Avoid common pitfalls
* Take the next steps to help you get the most out of your LLM

Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers "bitsandbytes>=0.39.0" -q
```
## Generate text
A language model trained for [causal language modeling](tasks/language_modeling) takes a sequence of text tokens as input and returns the probability distribution for the next token.
<!-- [GIF 1 -- FWD PASS] -->
<figure class="image table text-center m-0 w-full">
<video
style="max-width: 90%; margin: auto;"
autoplay loop muted playsinline
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov"
></video>
<figcaption>"Forward pass of an LLM"</figcaption>
</figure>
A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step, as long as you end up with a token for the next iteration: it can be as simple as selecting the most likely token from the probability distribution, or as complex as applying a dozen transformations before sampling from the resulting distribution.
<!-- [GIF 2 -- TEXT GENERATION] -->
<figure class="image table text-center m-0 w-full">
<video
style="max-width: 90%; margin: auto;"
autoplay loop muted playsinline
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov"
></video>
<figcaption>"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"</figcaption>
</figure>
The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (`EOS`) token. If this is not the case, generation stops when some predefined maximum length is reached.
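The loop described above can be sketched in a few lines of plain Python, using a toy lookup table in place of a real model (purely illustrative; in 🤗 Transformers this is what `model.generate` does for you):

```python
# Toy next-token table standing in for a language model: maps the last
# token to the next one. "<eos>" stands for the end-of-sequence token.
TOY_MODEL = {"A": "list", "list": "of", "of": "colors", "colors": "<eos>"}

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):          # stopping condition 2: max length
        next_token = TOY_MODEL.get(tokens[-1], "<eos>")
        if next_token == "<eos>":            # stopping condition 1: EOS token
            break
        tokens.append(next_token)
    return tokens

print(generate(["A"]))  # → ['A', 'list', 'of', 'colors']
```

Both stopping conditions appear in the sketch: the `<eos>` check mirrors the model emitting an end-of-sequence token, and `max_new_tokens` mirrors the predefined maximum length.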
Properly setting up the token selection step and the stopping condition is essential to make your model behave as you'd expect on your task. That is why we have a [`~generation.GenerationConfig`] file associated with each model, which contains a good default generative parameterization and is loaded alongside your model.

Let's talk code!
<Tip>
If you're interested in basic LLM usage, our high-level [`Pipeline`](pipeline_tutorial) interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through [`~generation.GenerationMixin.generate`]. Autoregressive generation with LLMs is also resource-intensive, and should be executed on a GPU for adequate throughput.
</Tip>
<!-- TODO: update the example when llama 2 (or a newer popular baseline) becomes available -->
First, you need to load the model.
```py
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained(
... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
```
You'll notice two flags in the `from_pretrained` call:

 - `device_map` ensures the model is moved to your GPU(s)
 - `load_in_4bit` applies [4-bit dynamic quantization](main_classes/quantization) to massively reduce the resource requirements

There are other ways to initialize a model, but this is a good baseline to begin with an LLM.
Next, you need to preprocess your text input with a [tokenizer](tokenizer_summary).
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
```
The `model_inputs` variable holds the tokenized text input, as well as the attention mask. While [`~generation.GenerationMixin.generate`] does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results.
Finally, call the [`~generation.GenerationMixin.generate`] method to return the generated tokens, which should be converted to text before printing.
```py
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, black, white, and brown'
```
And that's it! In a few lines of code, you can harness the power of an LLM.
## Common pitfalls
There are many [generation strategies](generation_strategies), and sometimes the default values may not be appropriate for your use case. If your outputs aren't aligned with what you're expecting, we've created a list of the most common pitfalls and how to avoid them.
```py
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default
>>> model = AutoModelForCausalLM.from_pretrained(
... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
```
### Generated output is too short/long
If not specified in the [`~generation.GenerationConfig`] file, `generate` returns up to 20 tokens by default. We highly recommend manually setting `max_new_tokens` in your `generate` call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, [decoder-only models](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)) also return the input prompt as part of the output.
```py
>>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda")
>>> # By default, the output will contain up to 20 tokens
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5'
>>> # Setting `max_new_tokens` allows you to control the maximum length
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=50)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
```
### Incorrect generation mode
By default, and unless specified in the [`~generation.GenerationConfig`] file, `generate` selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling, while input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with `do_sample=True`, and you can learn more about this topic in this [blog post](https://huggingface.co/blog/how-to-generate).
```py
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed
>>> set_seed(0)
>>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")
>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'
>>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.\nI just need to be. I am always.\nEvery time'
```
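The contrast between the two modes can also be seen in a plain-Python toy sketch with a made-up next-token distribution (illustrative only, not the 🤗 Transformers API):

```python
import random

# Toy next-token distribution over a tiny vocabulary.
probs = {"cat": 0.5, "dog": 0.3, "bird": 0.2}

# Greedy decoding: always pick the most likely token -- deterministic.
greedy = max(probs, key=probs.get)
print(greedy)  # → cat

# Sampling: draw from the distribution -- different runs can differ.
random.seed(0)
sampled = random.choices(list(probs), weights=probs.values(), k=5)
print(sampled)
```

Greedy decoding always returns the same continuation; sampling trades that determinism for diversity, which is why creative tasks benefit from it.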
### Wrong padding side
LLMs are [decoder-only](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don't forget to pass the attention mask to generate!
```py
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence,
>>> # which is shorter, has padding on the right side. Generation fails.
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0]
''
>>> # With left-padding, it works as expected!
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
```
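The left- vs. right-padding difference can be sketched in plain Python with toy token ids (not the tokenizer's real output):

```python
PAD = 0

def pad_batch(sequences, side="left"):
    """Pad toy token-id lists to a common length, with an attention mask."""
    max_len = max(len(s) for s in sequences)
    ids, mask = [], []
    for s in sequences:
        pads = [PAD] * (max_len - len(s))
        if side == "left":
            ids.append(pads + s)
            mask.append([0] * len(pads) + [1] * len(s))
        else:
            ids.append(s + pads)
            mask.append([1] * len(s) + [0] * len(pads))
    return ids, mask

ids, mask = pad_batch([[5, 6, 7], [5, 6, 7, 8, 9]], side="left")
print(ids)   # → [[0, 0, 5, 6, 7], [5, 6, 7, 8, 9]]
print(mask)  # → [[0, 0, 1, 1, 1], [1, 1, 1, 1, 1]]
# With left-padding, the last position of every row is a real token,
# so the model continues from actual content rather than from PAD.
```

This is why switching to `padding_side="left"` fixes the failing example above.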
## Further resources
While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. For your next steps to help you dive deeper into LLM usage and understanding, see the resources below.

<!-- TODO: complete with new guides -->
### Advanced generate usage
1. [Guide](generation_strategies) on how to control different generation methods, how to set up the generation configuration file, and how to stream the output;
2. API reference on [`~generation.GenerationConfig`], [`~generation.GenerationMixin.generate`], and [generate-related classes](internal/generation_utils).
### LLM leaderboards
1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), which focuses on the quality of the open-source models;
2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard), which focuses on LLM throughput.
### Latency and throughput
1. [Guide](main_classes/quantization) on dynamic quantization, which shows you how to drastically reduce your memory requirements.
### Related libraries
1. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference), a production-ready server for LLMs;
2. [`optimum`](https://github.com/huggingface/optimum), an extension of 🤗 Transformers that optimizes for specific hardware devices.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Optimize inference using torch.compile()
This guide aims to provide a benchmark on the inference speed-ups introduced with [`torch.compile()`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) for [computer vision models in 🤗 Transformers](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers&sort=trending).
## Benefits of torch.compile
Depending on the model and the GPU, `torch.compile()` yields up to 30% speed-up during inference. To use `torch.compile()`, simply install any version of `torch` above 2.0.

Compiling a model takes time, so it's most useful if you compile the model only once instead of on every inference.

To compile any computer vision model of your choice, call `torch.compile()` on the model like so:
```diff
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
```
`compile()` comes with multiple modes for compiling, which essentially differ in compilation time and inference overhead. `max-autotune` takes longer than `reduce-overhead` but results in faster inference. The default mode is fastest for compilation but is not as efficient as `reduce-overhead` for inference time. In this guide, we used the default mode; you can learn more about the modes [here](https://pytorch.org/get-started/pytorch-2.0/#user-experience).

We benchmarked `torch.compile` with different computer vision models, tasks, types of hardware, and batch sizes on `torch` version 2.0.1.
## Benchmarking code
Below you can find the benchmarking code for each task. We warm up the GPU before inference and take the mean time of 300 inferences, using the same image each time.
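The measurement protocol can be sketched as a small plain-Python harness: warm-up calls first, then the mean over repeated timed calls (a generic sketch with a dummy workload; the real benchmarks time the model calls shown below and would also synchronize the GPU before reading the clock):

```python
import time

def benchmark(fn, warmup=5, runs=300):
    """Return the mean wall-clock time of `fn` in milliseconds."""
    for _ in range(warmup):   # warm-up calls are excluded from the measurement
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1e3

# Dummy workload standing in for `model(**processed_input)`.
mean_ms = benchmark(lambda: sum(range(1000)), warmup=2, runs=50)
print(f"{mean_ms:.3f} ms")
```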
### Image Classification with ViT
```python
import torch
from PIL import Image
import requests

from transformers import AutoImageProcessor, AutoModelForImageClassification
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)
processed_input = processor(image, return_tensors='pt').to(device="cuda")
with torch.no_grad():
_ = model(**processed_input)
```
### Object Detection with DETR
```python
import torch
from PIL import Image
import requests

from transformers import AutoImageProcessor, AutoModelForObjectDetection

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)

inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
_ = model(**inputs)
```
### Image Segmentation with Segformer
```python
import torch
from PIL import Image
import requests

from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)
seg_inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
_ = model(**seg_inputs)
```
Below you can find the list of the models we benchmarked.
**Image Classification**
- [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
- [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k)
- [facebook/convnext-large-224](https://huggingface.co/facebook/convnext-large-224)
- [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50)
**Image Segmentation**
- [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [facebook/mask2former-swin-tiny-coco-panoptic](https://huggingface.co/facebook/mask2former-swin-tiny-coco-panoptic)
- [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade)
- [google/deeplabv3_mobilenet_v2_1.0_513](https://huggingface.co/google/deeplabv3_mobilenet_v2_1.0_513)
**Object Detection**
- [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32)
- [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101)
- [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50)
Below you can find visualizations of inference duration for each model with and without `torch.compile()`, and the percentage improvements, across different hardware and batch sizes.
<div class="flex">
<div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/a100_batch_comp.png" />
</div>
<div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_batch_comp.png" />
</div>
<div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/t4_batch_comp.png" />
</div>
</div>
<div class="flex">
<div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_duration.png" />
</div>
<div>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_percentage.png" />
</div>
</div>


Below you can find inference durations in milliseconds for each model with and without `compile()`. Note that OwlViT results in OOM at larger batch sizes.
### A100 (batch size: 1)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 9.325 | 7.584 |
| Image Segmentation/Segformer | 11.759 | 10.500 |
| Object Detection/OwlViT | 24.978 | 18.420 |
| Image Classification/BeiT | 11.282 | 8.448 |
| Object Detection/DETR | 34.619 | 19.040 |
| Image Classification/ConvNeXT | 10.410 | 10.208 |
| Image Classification/ResNet | 6.531 | 4.124 |
| Image Segmentation/Mask2former | 60.188 | 49.117 |
| Image Segmentation/Maskformer | 75.764 | 59.487 |
| Image Segmentation/MobileNet | 8.583 | 3.974 |
| Object Detection/Resnet-101 | 36.276 | 18.197 |
| Object Detection/Conditional-DETR | 31.219 | 17.993 |
### A100 (batch size: 4)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 14.832 | 14.499 |
| Image Segmentation/Segformer | 18.838 | 16.476 |
| Image Classification/BeiT | 13.205 | 13.048 |
| Object Detection/DETR | 48.657 | 32.418|
| Image Classification/ConvNeXT | 22.940 | 21.631 |
| Image Classification/ResNet | 6.657 | 4.268 |
| Image Segmentation/Mask2former | 74.277 | 61.781 |
| Image Segmentation/Maskformer | 180.700 | 159.116 |
| Image Segmentation/MobileNet | 14.174 | 8.515 |
| Object Detection/Resnet-101 | 68.101 | 44.998 |
| Object Detection/Conditional-DETR | 56.470 | 35.552 |
### A100 (batch size: 16)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 40.944 | 40.010 |
| Image Segmentation/Segformer | 37.005 | 31.144 |
| Image Classification/BeiT | 41.854 | 41.048 |
| Object Detection/DETR | 164.382 | 161.902 |
| Image Classification/ConvNeXT | 82.258 | 75.561 |
| Image Classification/ResNet | 7.018 | 5.024 |
| Image Segmentation/Mask2former | 178.945 | 154.814 |
| Image Segmentation/Maskformer | 638.570 | 579.826 |
| Image Segmentation/MobileNet | 51.693 | 30.310 |
| Object Detection/Resnet-101 | 232.887 | 155.021 |
| Object Detection/Conditional-DETR | 180.491 | 124.032 |
### V100 (batch size: 1)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 10.495 | 6.00 |
| Image Segmentation/Segformer | 13.321 | 5.862 |
| Object Detection/OwlViT | 25.769 | 22.395 |
| Image Classification/BeiT | 11.347 | 7.234 |
| Object Detection/DETR | 33.951 | 19.388 |
| Image Classification/ConvNeXT | 11.623 | 10.412 |
| Image Classification/ResNet | 6.484 | 3.820 |
| Image Segmentation/Mask2former | 64.640 | 49.873 |
| Image Segmentation/Maskformer | 95.532 | 72.207 |
| Image Segmentation/MobileNet | 9.217 | 4.753 |
| Object Detection/Resnet-101 | 52.818 | 28.367 |
| Object Detection/Conditional-DETR | 39.512 | 20.816 |
### V100 (batch size: 4)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 15.181 | 14.501 |
| Image Segmentation/Segformer | 16.787 | 16.188 |
| Image Classification/BeiT | 15.171 | 14.753 |
| Object Detection/DETR | 88.529 | 64.195 |
| Image Classification/ConvNeXT | 29.574 | 27.085 |
| Image Classification/ResNet | 6.109 | 4.731 |
| Image Segmentation/Mask2former | 90.402 | 76.926 |
| Image Segmentation/Maskformer | 234.261 | 205.456 |
| Image Segmentation/MobileNet | 24.623 | 14.816 |
| Object Detection/Resnet-101 | 134.672 | 101.304 |
| Object Detection/Conditional-DETR | 97.464 | 69.739 |
### V100 (batch size: 16)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 52.209 | 51.633 |
| Image Segmentation/Segformer | 61.013 | 55.499 |
| Image Classification/BeiT | 53.938 | 53.581 |
| Object Detection/DETR | OOM | OOM |
| Image Classification/ConvNeXT | 109.682 | 100.771 |
| Image Classification/ResNet | 14.857 | 12.089 |
| Image Segmentation/Mask2former | 249.605 | 222.801 |
| Image Segmentation/Maskformer | 831.142 | 743.645 |
| Image Segmentation/MobileNet | 93.129 | 55.365 |
| Object Detection/Resnet-101 | 482.425 | 361.843 |
| Object Detection/Conditional-DETR | 344.661 | 255.298 |
### T4 (batch size: 1)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 16.520 | 15.786 |
| Image Segmentation/Segformer | 16.116 | 14.205 |
| Object Detection/OwlViT | 53.634 | 51.105 |
| Image Classification/BeiT | 16.464 | 15.710 |
| Object Detection/DETR | 73.100 | 53.99 |
| Image Classification/ConvNeXT | 32.932 | 30.845 |
| Image Classification/ResNet | 6.031 | 4.321 |
| Image Segmentation/Mask2former | 79.192 | 66.815 |
| Image Segmentation/Maskformer | 200.026 | 188.268 |
| Image Segmentation/MobileNet | 18.908 | 11.997 |
| Object Detection/Resnet-101 | 106.622 | 82.566 |
| Object Detection/Conditional-DETR | 77.594 | 56.984 |
### T4 (batch size: 4)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 43.653 | 43.626 |
| Image Segmentation/Segformer | 45.327 | 42.445 |
| Image Classification/BeiT | 52.007 | 51.354 |
| Object Detection/DETR | 277.850 | 268.003 |
| Image Classification/ConvNeXT | 119.259 | 105.580 |
| Image Classification/ResNet | 13.039 | 11.388 |
| Image Segmentation/Mask2former | 201.540 | 184.670 |
| Image Segmentation/Maskformer | 764.052 | 711.280 |
| Image Segmentation/MobileNet | 74.289 | 48.677 |
| Object Detection/Resnet-101 | 421.859 | 357.614 |
| Object Detection/Conditional-DETR | 289.002 | 226.945 |
### T4 (batch size: 16)
| **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|
| Image Classification/ViT | 163.914 | 160.907 |
| Image Segmentation/Segformer | 192.412 | 163.620 |
| Image Classification/BeiT | 188.978 | 187.976 |
| Object Detection/DETR | OOM | OOM |
| Image Classification/ConvNeXT | 422.886 | 388.078 |
| Image Classification/ResNet | 44.114 | 37.604 |
| Image Segmentation/Mask2former | 756.337 | 695.291 |
| Image Segmentation/Maskformer | 2842.940 | 2656.88 |
| Image Segmentation/MobileNet | 299.003 | 201.942 |
| Object Detection/Resnet-101 | 1619.505 | 1262.758 |
| Object Detection/Conditional-DETR | 1137.513 | 897.390|
## PyTorch Nightly
We also benchmarked on PyTorch nightly (2.1.0dev) and observed improvements in latency both for uncompiled and compiled models. The wheel is available [here](https://download.pytorch.org/whl/nightly/cu118).
### A100
| **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 -<br> compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 12.462 | 6.954 |
| Image Classification/BeiT | 4 | 14.109 | 12.851 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 30.484 | 15.221 |
| Object Detection/DETR | 4 | 46.816 | 30.942 |
| Object Detection/DETR | 16 | 163.749 | 163.706 |
### T4
| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 14.408 | 14.052 |
| Image Classification/BeiT | 4 | 47.381 | 46.604 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 68.382 | 53.481 |
| Object Detection/DETR | 4 | 269.615 | 204.785 |
| Object Detection/DETR | 16 | OOM | OOM |
### V100
| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 13.477 | 7.926 |
| Image Classification/BeiT | 4 | 15.103 | 14.378 |
| Image Classification/BeiT | 16 | 52.517 | 51.691 |
| Object Detection/DETR | Unbatched | 28.706 | 19.077 |
| Object Detection/DETR | 4 | 88.402 | 62.949|
| Object Detection/DETR | 16 | OOM | OOM |
## Reduce Overhead
We also benchmarked the `reduce-overhead` compilation mode for A100 and T4 on the nightly build.
### A100
| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 11.758 | 7.335 |
| Image Classification/ConvNeXT | 4 | 23.171 | 21.490 |
| Image Classification/ResNet | Unbatched | 7.435 | 3.801 |
| Image Classification/ResNet | 4 | 7.261 | 2.187 |
| Object Detection/Conditional-DETR | Unbatched | 32.823 | 11.627 |
| Object Detection/Conditional-DETR | 4 | 50.622 | 33.831 |
| Image Segmentation/MobileNet | Unbatched | 9.869 | 4.244 |
| Image Segmentation/MobileNet | 4 | 14.385 | 7.946 |
### T4
| **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 32.137 | 31.84 |
| Image Classification/ConvNeXT | 4 | 120.944 | 110.209 |
| Image Classification/ResNet | Unbatched | 9.761 | 7.698 |
| Image Classification/ResNet | 4 | 15.215 | 13.871 |
| Object Detection/Conditional-DETR | Unbatched | 72.150 | 57.660 |
| Object Detection/Conditional-DETR | 4 | 301.494 | 247.543 |
| Image Segmentation/MobileNet | Unbatched | 22.266 | 19.339 |
| Image Segmentation/MobileNet | 4 | 78.311 | 50.983 |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Inference on a Single GPU
In addition to this guide, relevant information can also be found in [the guide for training on a single GPU](perf_train_gpu_one) and [the guide for inference on CPUs](perf_infer_cpu).
## Flash Attention 2
<Tip>
Note that this feature is experimental and might change considerably in future versions. For instance, the Flash Attention 2 API might migrate to the `BetterTransformer` API in the near future.
</Tip>
Flash Attention 2 can considerably speed up the training and inference of transformer-based models. Flash Attention 2 was introduced by Tri Dao in the [official Flash Attention repository](https://github.com/Dao-AILab/flash-attention). The scientific paper on Flash Attention can be found [here](https://arxiv.org/abs/2205.14135).

Make sure to follow the installation guide in the repository mentioned above to properly install Flash Attention 2.

We natively support Flash Attention 2 for the following models:
- Llama
- Falcon
You can request to add Flash Attention 2 support for more models by opening an issue on GitHub, and even open a Pull Request to integrate the changes. The supported models can be used for inference and training, including training with padding tokens (which is currently not supported by the `BetterTransformer` API).
<Tip>
Flash Attention 2 can only be used when the model's dtype is `fp16` or `bf16`, and it runs only on NVIDIA GPU devices. Make sure to cast your model to the appropriate dtype and load it on a supported device before using this feature.
</Tip>
### Quick usage
To enable Flash Attention 2 in your model, add `attn_implementation="flash_attention_2"` to the `from_pretrained` arguments:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
And use it for generation or fine-tuning.
### Expected speedups
You can benefit from considerable speedups for fine-tuning and inference, especially for long sequences. However, since Flash Attention does not support computing attention scores with padding tokens, you must manually pad / unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generation with padding tokens.

To overcome this, you should use Flash Attention without padding tokens in the sequence during training (e.g., by packing a dataset, i.e., concatenating sequences until reaching the maximum sequence length). An example is provided [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516).
Below is the expected speedup you can get for a simple forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes, without padding tokens:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/falcon-7b-inference-large-seqlen.png">
</div>
Below is the expected speedup you can get for a simple forward pass on [`meta-llama/Llama-7b-hf`](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes, without padding tokens:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-7b-inference-large-seqlen.png">
</div>
For sequences with padding tokens (training or generating with padding tokens), the input sequences need to be unpadded/padded in order to compute the attention scores correctly. For relatively small sequence lengths where less than 30% of the input consists of padding tokens, a pure forward pass incurs this overhead but still yields a modest speedup:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-small-seqlen-padding.png">
</div>
But for larger sequence lengths, you can expect interesting speedup benefits for pure inference (and training as well).
Note that Flash Attention also makes the attention computation more memory efficient, allowing you to avoid CUDA OOM issues for large sequence lengths, with memory savings of up to 20x. Check out the [official Flash Attention repository](https://github.com/Dao-AILab/flash-attention) for more details.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-large-seqlen-padding.png">
</div>
### Advanced usage
You can combine this feature with many existing model optimization features. Here are a few examples:
### Combining Flash Attention 2 and 8-bit models
This feature can be combined with 8-bit quantization:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
attn_implementation="flash_attention_2",
)
```
### Combining Flash Attention 2 and 4-bit models
This feature can be combined with 4-bit quantization:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
attn_implementation="flash_attention_2",
)
```
### Combining Flash Attention 2 and PEFT
You can combine this feature with PEFT to train adapters on top of a Flash Attention 2 model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
attn_implementation="flash_attention_2",
)
lora_config = LoraConfig(
r=8,
task_type="CAUSAL_LM"
)
model.add_adapter(lora_config)
... # train your model
```
## BetterTransformer
[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels such as Flash Attention under the hood.
BetterTransformer also supports faster inference on single and multi-GPU for text, image, and audio models.
<Tip>
Flash Attention can only be used for models with the fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer.
</Tip>
### Encoder models
The PyTorch-native [`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) attention fastpath, called BetterTransformer, can be used with Transformers through the integration in the [🤗 Optimum library](https://huggingface.co/docs/optimum/bettertransformer/overview).
PyTorch's attention fastpath speeds up inference through kernel fusions and the use of [nested tensors](https://pytorch.org/docs/stable/nested.html). Detailed benchmarks can be found in [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2).
After installing the [`optimum`](https://github.com/huggingface/optimum) package, to use Better Transformer during inference, replace the relevant internal modules by calling [`~PreTrainedModel.to_bettertransformer`]:
```python
model = model.to_bettertransformer()
```
The method [`~PreTrainedModel.reverse_bettertransformer`] allows going back to the standard transformers modeling and should be used before saving the model:
```python
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")
```
Have a look at [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) to learn more about what is possible with the BetterTransformer API for encoder models.
### Decoder models
For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the [`torch.nn.functional.scaled_dot_product_attention` operator](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA), which is only available in PyTorch 2.0 and onwards.
To convert a model to BetterTransformer:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# convert the model to BetterTransformer
model.to_bettertransformer()
# Use it for training or inference
```
SDPA can also call [Flash Attention](https://arxiv.org/abs/2205.14135) kernels under the hood, depending on the hardware and problem size. To force the usage of Flash Attention, or to check whether it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager:
```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If you see a bug with the following traceback:
```bash
RuntimeError: No available kernel. Aborting execution.
```
try using the PyTorch nightly version, which may have broader coverage for Flash Attention:
```bash
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```
Also make sure your model is correctly cast to float16 or bfloat16.
Have a look at [this detailed blogpost](https://pytorch.org/blog/out-of-the-box-acceleration/) to read more about what is possible with the `BetterTransformer` + SDPA API.
## `bitsandbytes` integration for FP4 mixed-precision inference
You can install `bitsandbytes` and benefit from easy model compression on GPUs. Using FP4 quantization, you can expect up to an 8x reduction in model size compared to the native full-precision version. Check out below how to get started.
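As a back-of-the-envelope check of that claim, the expected weight footprint is simply the parameter count times the bytes per parameter. The helper below is illustrative only, not a Transformers API:

```python
def estimate_model_size_gb(num_params, bits_per_param):
    """Rough model-weight footprint: parameters x bits, converted to GB."""
    return num_params * bits_per_param / 8 / 1024**3

# A 7B-parameter model in full precision (fp32, 32 bits) vs. FP4 (4 bits).
fp32_gb = estimate_model_size_gb(7_000_000_000, 32)
fp4_gb = estimate_model_size_gb(7_000_000_000, 4)
print(round(fp32_gb, 1), round(fp4_gb, 1), round(fp32_gb / fp4_gb))  # 26.1 3.3 8
```

On a loaded model, the actual figure can be read with `model.get_memory_footprint()`, which also accounts for buffers and quantization metadata.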
<Tip>
Note that this feature can also be used in a multi-GPU setup.
</Tip>
### Requirements [[requirements-for-fp4-mixedprecision-inference]]
- Latest `bitsandbytes` library
`pip install bitsandbytes>=0.39.0`
- Install latest `accelerate` from source
`pip install git+https://github.com/huggingface/accelerate.git`
- Install latest `transformers` from source
`pip install git+https://github.com/huggingface/transformers.git`
### Running FP4 models - single GPU setup - Quickstart
You can quickly run an FP4 model on a single GPU with the following code:
```py
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
Note that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference, as it will dispatch the model efficiently on the available resources.
### Running FP4 models - multi GPU setup
The way to load your mixed 4-bit model on multiple GPUs is the same as for a single-GPU setup (same command):
```py
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
```py
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
```
In this example, the first GPU will use 600MB of memory and the second one 1GB.
### Advanced usage
Check out the [quantization](main_classes/quantization) documentation page for more advanced usage of this method.
## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition
<Tip>
Note that this feature can also be used in a multi-GPU setup.
</Tip>
From the paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), we support Hugging Face integration for all models in the Hub with a few lines of code. The method reduces `nn.Linear` size by 2 for half-precision weights (`float16` and `bfloat16`) and by 4 for single-precision weights (`float32`), with almost no impact on predictive quality, by operating on the outliers in half-precision.

Int8 mixed-precision matrix decomposition works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix-multiplied in fp16 (0.01% of the values), and (2) a regular stream of int8 matrix multiplication (99.9% of the values). With this method, int8 inference with no predictive degradation is possible for very large models.
For more details on the method, check out the [paper](https://arxiv.org/abs/2208.07339) or our [blogpost about this integration](https://huggingface.co/blog/hf-bitsandbytes-integration).

Note that a GPU is required to use this feature, and the kernels need to be compiled for GPU. Before using this feature, make sure you have enough GPU memory to store a quarter of the model (or half, for half-precision weights).
Below are some notes to help you use this module; you can also check out the [Google Colab demos](#colab-demos).
### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]
- If you have `bitsandbytes<0.37.0`, make sure you run on an NVIDIA GPU that supports 8-bit tensor cores (Turing, Ampere, or newer architectures, e.g. T4, RTX20s, RTX30s, A40-A100). With `bitsandbytes>=0.37.0`, all GPUs should be supported.
- Install the correct version of `bitsandbytes` by running:
`pip install bitsandbytes>=0.31.5`
- Install `accelerate`:
`pip install accelerate>=0.12.0`
### Running mixed-Int8 models - single GPU setup
After installing the required libraries, the way to load your mixed 8-bit model is as follows:
```py
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
For text generation, we recommend:
* using the model's `generate()` method instead of the `pipeline()` function. Although inference is possible with the `pipeline()` function, it is not optimized for mixed-8bit models and will be slower than using the `generate()` method. Moreover, some sampling strategies (e.g. nucleus sampling) are not supported by the `pipeline()` function for mixed-8bit models.
* placing all inputs on the same device as the model.
Here is a simple example:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
### Running mixed-int8 models - multi GPU setup
The way to load your mixed 8-bit model on multiple GPUs is as follows (same command as for a single-GPU setup):
```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
In this example, the first GPU will use 1GB of memory and the second 2GB.
### Colab demos
With this method, you can run inference on models that could not previously be inferred on Google Colab. Check out the demo for running T5-11b (42GB in fp32) with 8-bit quantization on Google Colab:
[](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)
Or this demo for BLOOM-3B:
[](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
## Advanced usage: mixing FP4 (or Int8) and BetterTransformer
You can combine the different methods described above to get the best performance for your model. For example, you can use BetterTransformer with FP4 mixed-precision inference + flash attention:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
## Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
## Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
## Utilities
[[autodoc]] pipelines.PipelineException
# Utilities for Image Processors
This page lists all the utility functions used by the image processors, mainly the functional transformations used to process images.
Most of those are only useful if you are studying the code of the image processors in the library.
## Image Transformations
[[autodoc]] image_transforms.center_crop
[[autodoc]] image_transforms.center_to_corners_format
[[autodoc]] image_transforms.corners_to_center_format
[[autodoc]] image_transforms.id_to_rgb
[[autodoc]] image_transforms.normalize
[[autodoc]] image_transforms.pad
[[autodoc]] image_transforms.rgb_to_id
[[autodoc]] image_transforms.rescale
[[autodoc]] image_transforms.resize
[[autodoc]] image_transforms.to_pil_image
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
# Time Series Utilities
This page lists all the utility functions and classes that can be used for time series-based models.
Most of those are only useful if you are studying the code of the time series models, or if you wish to add to the collection of distributional output classes.
## Distributional Output
[[autodoc]] time_series_utils.NormalOutput
[[autodoc]] time_series_utils.StudentTOutput
[[autodoc]] time_series_utils.NegativeBinomialOutput
# Utilities for `FeatureExtractor`
This page lists all the utility functions that can be used by the audio [`FeatureExtractor`] in order to compute special features from a raw audio using common algorithms such as the *Short Time Fourier Transform* or the *log mel spectrogram*.
Most of those are only useful if you are studying the code of the audio processors in the library.
## Audio Transformations
[[autodoc]] audio_utils.hertz_to_mel
[[autodoc]] audio_utils.mel_to_hertz
[[autodoc]] audio_utils.mel_filter_bank
[[autodoc]] audio_utils.optimal_fft_length
[[autodoc]] audio_utils.window_function
[[autodoc]] audio_utils.spectrogram
[[autodoc]] audio_utils.power_to_db
[[autodoc]] audio_utils.amplitude_to_db
# General Utilities
This page lists all of Transformers' general utility functions that are found in the file `utils.py`.
Most of those are only useful if you are studying the general code of the library.
## Enums and namedtuples
[[autodoc]] utils.ExplicitEnum
[[autodoc]] utils.PaddingStrategy
[[autodoc]] utils.TensorType
## Special Decorators
[[autodoc]] utils.add_start_docstrings
[[autodoc]] utils.add_start_docstrings_to_model_forward
[[autodoc]] utils.add_end_docstrings
[[autodoc]] utils.add_code_sample_docstrings
[[autodoc]] utils.replace_return_docstrings
## Special Properties
[[autodoc]] utils.cached_property
## Other Utilities
[[autodoc]] utils._LazyModule
# Custom Layers and Utilities
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.
Most of those are only useful if you are studying the code of the models in the library.
## Pytorch custom modules
[[autodoc]] pytorch_utils.Conv1D
[[autodoc]] modeling_utils.PoolerStartLogits
- forward
[[autodoc]] modeling_utils.PoolerEndLogits
- forward
[[autodoc]] modeling_utils.PoolerAnswerClass
- forward
[[autodoc]] modeling_utils.SquadHeadOutput
[[autodoc]] modeling_utils.SQuADHead
- forward
[[autodoc]] modeling_utils.SequenceSummary
- forward
## PyTorch Helper Functions
[[autodoc]] pytorch_utils.apply_chunking_to_forward
[[autodoc]] pytorch_utils.find_pruneable_heads_and_indices
[[autodoc]] pytorch_utils.prune_layer
[[autodoc]] pytorch_utils.prune_conv1d_layer
[[autodoc]] pytorch_utils.prune_linear_layer
## TensorFlow custom layers
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
## TensorFlow loss functions
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
## TensorFlow Helper Functions
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list
# Utilities for Generation
This page lists all the utility functions used by [`~generation.GenerationMixin.generate`],
[`~generation.GenerationMixin.greedy_search`],
[`~generation.GenerationMixin.contrastive_search`],
[`~generation.GenerationMixin.sample`],
[`~generation.GenerationMixin.beam_search`],
[`~generation.GenerationMixin.beam_sample`],
[`~generation.GenerationMixin.group_beam_search`], and
[`~generation.GenerationMixin.constrained_beam_search`].
Most of those are only useful if you are studying the code of the generate methods in the library.
## Generate Outputs
The output of [`~generation.GenerationMixin.generate`] is an instance of a subclass of [`~utils.ModelOutput`]. This output is a data structure containing all the information returned by [`~generation.GenerationMixin.generate`], but it can also be used as a tuple or a dictionary.
Here's an example:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
The `generation_output` object is a [`~generation.GenerateDecoderOnlyOutput`], as we can see in the documentation of that class below. This means it has the following attributes:
- `sequences`: the generated sequences of tokens
- `scores` (optional): the prediction scores of the language modelling head, for each generation step
- `hidden_states` (optional): the hidden states of the model, for each generation step
- `attentions` (optional): the attention weights of the model, for each generation step
Here we have `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here, for instance, `generation_output.scores` are all the generated prediction scores of the language modeling head, and `generation_output.attentions` is `None`.
When using the `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values. Here, for instance, it has two elements, `sequences` then `scores`, so
```python
generation_output[:2]
```
will return the tuple `(generation_output.sequences, generation_output.scores)`.
When using the `generation_output` object as a dictionary, it only keeps the attributes that don't have `None` values. Here, for instance, it has two keys, `sequences` and `scores`.
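This tuple/dict behavior comes from [`~utils.ModelOutput`] itself and can be observed without loading any model; the toy subclass below is illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

import torch
from transformers.utils import ModelOutput

@dataclass
class ToyGenerateOutput(ModelOutput):
    sequences: Optional[torch.Tensor] = None
    scores: Optional[torch.Tensor] = None
    attentions: Optional[torch.Tensor] = None  # left as None below

out = ToyGenerateOutput(
    sequences=torch.tensor([[1, 2, 3]]),
    scores=torch.tensor([[0.1, 0.9]]),
)

# Dict-style access only exposes the non-None attributes...
print(list(out.keys()))  # ['sequences', 'scores']
# ...and slicing returns a tuple of the non-None values, in field order.
assert out[:2][0] is out.sequences
```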
We document here all the output types.
### PyTorch
[[autodoc]] generation.GenerateDecoderOnlyOutput
[[autodoc]] generation.GenerateEncoderDecoderOutput
[[autodoc]] generation.GenerateBeamDecoderOnlyOutput
[[autodoc]] generation.GenerateBeamEncoderDecoderOutput
### TensorFlow
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
### FLAX
[[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput
## LogitsProcessor
A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for generation.
### PyTorch
[[autodoc]] AlternatingCodebooksLogitsProcessor
- __call__
[[autodoc]] ClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] EncoderNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] EncoderRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] EpsilonLogitsWarper
- __call__
[[autodoc]] EtaLogitsWarper
- __call__
[[autodoc]] ExponentialDecayLengthPenalty
- __call__
[[autodoc]] ForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] ForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] ForceTokensLogitsProcessor
- __call__
[[autodoc]] HammingDiversityLogitsProcessor
- __call__
[[autodoc]] InfNanRemoveLogitsProcessor
- __call__
[[autodoc]] LogitNormalization
- __call__
[[autodoc]] LogitsProcessor
- __call__
[[autodoc]] LogitsProcessorList
- __call__
[[autodoc]] LogitsWarper
- __call__
[[autodoc]] MinLengthLogitsProcessor
- __call__
[[autodoc]] MinNewTokensLengthLogitsProcessor
- __call__
[[autodoc]] NoBadWordsLogitsProcessor
- __call__
[[autodoc]] NoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] PrefixConstrainedLogitsProcessor
- __call__
[[autodoc]] RepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] SequenceBiasLogitsProcessor
- __call__
[[autodoc]] SuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] SuppressTokensLogitsProcessor
- __call__
[[autodoc]] TemperatureLogitsWarper
- __call__
[[autodoc]] TopKLogitsWarper
- __call__
[[autodoc]] TopPLogitsWarper
- __call__
[[autodoc]] TypicalLogitsWarper
- __call__
[[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] WhisperTimeStampLogitsProcessor
- __call__
### TensorFlow
[[autodoc]] TFForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForceTokensLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessorList
- __call__
[[autodoc]] TFLogitsWarper
- __call__
[[autodoc]] TFMinLengthLogitsProcessor
- __call__
[[autodoc]] TFNoBadWordsLogitsProcessor
- __call__
[[autodoc]] TFNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] TFRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensLogitsProcessor
- __call__
[[autodoc]] TFTemperatureLogitsWarper
- __call__
[[autodoc]] TFTopKLogitsWarper
- __call__
[[autodoc]] TFTopPLogitsWarper
- __call__
### FLAX
[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForceTokensLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessorList
- __call__
[[autodoc]] FlaxLogitsWarper
- __call__
[[autodoc]] FlaxMinLengthLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensLogitsProcessor
- __call__
[[autodoc]] FlaxTemperatureLogitsWarper
- __call__
[[autodoc]] FlaxTopKLogitsWarper
- __call__
[[autodoc]] FlaxTopPLogitsWarper
- __call__
[[autodoc]] FlaxWhisperTimeStampLogitsProcessor
- __call__
## StoppingCriteria
A [`StoppingCriteria`] can be used to change when to stop generation (other than an EOS token). Please note that this is exclusively available to our PyTorch implementations.
[[autodoc]] StoppingCriteria
- __call__
[[autodoc]] StoppingCriteriaList
- __call__
[[autodoc]] MaxLengthCriteria
- __call__
[[autodoc]] MaxTimeCriteria
- __call__
## Constraints
A [`Constraint`] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
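As a small illustration of the interface (without running generation), a constraint is built from token ids and would then be passed to `generate` via its `constraints` argument. The ids below are hand-written placeholders, not taken from any real tokenizer:

```python
from transformers import PhrasalConstraint

# Token ids for a phrase that must appear in the generated output.
phrase_ids = [1169, 3290]  # placeholder ids for some two-token phrase
constraint = PhrasalConstraint(phrase_ids)

# The constraint tracks progress token by token during constrained beam search;
# `advance()` reports which token it wants to see next.
print(constraint.advance())
```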
[[autodoc]] Constraint
[[autodoc]] PhrasalConstraint
[[autodoc]] DisjunctiveConstraint
[[autodoc]] ConstraintListState
## BeamSearch
[[autodoc]] BeamScorer
- process
- finalize
[[autodoc]] BeamSearchScorer
- process
- finalize
[[autodoc]] ConstrainedBeamSearchScorer
- process
- finalize
## Utilities
[[autodoc]] top_k_top_p_filtering
[[autodoc]] tf_top_k_top_p_filtering
## Streamers
[[autodoc]] TextStreamer
[[autodoc]] TextIteratorStreamer
<!-- hf_public_repos/transformers/docs/source/ja/internal/tokenization_utils.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Tokenizers
This page lists all the utility functions used by the tokenizers, mainly the class [`~tokenization_utils_base.PreTrainedTokenizerBase`] that implements the common methods between [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] and the mixin [`~tokenization_utils_base.SpecialTokensMixin`].

Most of those are only useful if you are studying the code of the tokenizers in the library.
## PreTrainedTokenizerBase
[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
- __call__
- all
## SpecialTokensMixin
[[autodoc]] tokenization_utils_base.SpecialTokensMixin
## Enums and namedtuples
[[autodoc]] tokenization_utils_base.TruncationStrategy
[[autodoc]] tokenization_utils_base.CharSpan
[[autodoc]] tokenization_utils_base.TokenSpan
<!-- hf_public_repos/transformers/docs/source/ja/internal/trainer_utils.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Trainer

This page lists all the utility functions used by [`Trainer`].

Most of those are only useful if you are studying the code of the Trainer in the library.
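For example, [`set_seed`] (and the stricter [`enable_full_determinism`]) seeds Python, NumPy and PyTorch in one call, which makes repeated runs reproducible — a small sketch:

```python
import random

import numpy as np
from transformers import set_seed

# Seeding twice with the same value reproduces the same random draws.
set_seed(7)
first = (random.random(), float(np.random.rand()))
set_seed(7)
second = (random.random(), float(np.random.rand()))
print(first == second)  # True
```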
## Utilities
[[autodoc]] EvalPrediction
[[autodoc]] IntervalStrategy
[[autodoc]] enable_full_determinism
[[autodoc]] set_seed
[[autodoc]] torch_distributed_zero_first
## Callbacks internals
[[autodoc]] trainer_callback.CallbackHandler
## Distributed Evaluation
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
## Argument Parsing
[[autodoc]] HfArgumentParser
## Debug Utilities
[[autodoc]] debug_utils.DebugUnderflowOverflow
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/feature_extractor.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to generate Log-Mel spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
## FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
## SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
## BatchFeature
[[autodoc]] BatchFeature
## ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/output.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Model outputs
All models have outputs that are instances of subclasses of [`~utils.ModelOutput`]. Those are data structures containing all the information returned by the model, but that can also be used as tuples or dictionaries.

Let's see how this looks in an example:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
The `outputs` object is a [`~modeling_outputs.SequenceClassifierOutput`], as we can see in the documentation of that class below. It means it has an optional `loss`, a `logits`, an optional `hidden_states` and an optional `attentions` attribute. Here we have the `loss` since we passed along `labels`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.
<Tip>
When passing `output_hidden_states=True`, you may expect `outputs.hidden_states[-1]` to match `outputs.last_hidden_state` exactly. However, this is not always the case. Some models apply normalization or subsequent processing to the last hidden state when it's returned.
</Tip>
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here, for instance, `outputs.loss` is the loss computed by the model, and `outputs.attentions` is `None`.

When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values. Here, for instance, it has two elements, `loss` then `logits`, so

```python
outputs[:2]
```

will return the tuple `(outputs.loss, outputs.logits)` for instance.

When considering our `outputs` object as a dictionary, it only considers the attributes that don't have `None` values. Here, for instance, it has two keys that are `loss` and `logits`.

We document here the generic model outputs that are used by more than one model type. Specific output types are documented on their corresponding model page.
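This tuple/dict behavior can be seen without loading any model by subclassing [`~utils.ModelOutput`] directly (the `ToyOutput` class below is a toy for illustration, not a library class):

```python
from dataclasses import dataclass
from typing import Optional

import torch
from transformers.utils import ModelOutput


@dataclass
class ToyOutput(ModelOutput):
    """Toy output with the same optional-attribute behavior as real model outputs."""

    loss: Optional[torch.FloatTensor] = None
    logits: torch.FloatTensor = None


out = ToyOutput(logits=torch.ones(2, 3))

print(out.loss)             # None: the attribute exists but was not returned
print(len(out.to_tuple()))  # 1: only non-None attributes make it into the tuple
print(list(out.keys()))     # ['logits']: same for the dictionary view
```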
## ModelOutput
[[autodoc]] utils.ModelOutput
- to_tuple
## BaseModelOutput
[[autodoc]] modeling_outputs.BaseModelOutput
## BaseModelOutputWithPooling
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
## BaseModelOutputWithCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
## BaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
## BaseModelOutputWithPast
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
## BaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
## Seq2SeqModelOutput
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
## CausalLMOutput
[[autodoc]] modeling_outputs.CausalLMOutput
## CausalLMOutputWithCrossAttentions
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
## CausalLMOutputWithPast
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
## MaskedLMOutput
[[autodoc]] modeling_outputs.MaskedLMOutput
## Seq2SeqLMOutput
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
## NextSentencePredictorOutput
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
## SequenceClassifierOutput
[[autodoc]] modeling_outputs.SequenceClassifierOutput
## Seq2SeqSequenceClassifierOutput
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
## MultipleChoiceModelOutput
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
## TokenClassifierOutput
[[autodoc]] modeling_outputs.TokenClassifierOutput
## QuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
## Seq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
## Seq2SeqSpectrogramOutput
[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput
## SemanticSegmenterOutput
[[autodoc]] modeling_outputs.SemanticSegmenterOutput
## ImageClassifierOutput
[[autodoc]] modeling_outputs.ImageClassifierOutput
## ImageClassifierOutputWithNoAttention
[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention
## DepthEstimatorOutput
[[autodoc]] modeling_outputs.DepthEstimatorOutput
## Wav2Vec2BaseModelOutput
[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput
## XVectorOutput
[[autodoc]] modeling_outputs.XVectorOutput
## Seq2SeqTSModelOutput
[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput
## Seq2SeqTSPredictionOutput
[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput
## SampleTSPredictionOutput
[[autodoc]] modeling_outputs.SampleTSPredictionOutput
## TFBaseModelOutput
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
## TFBaseModelOutputWithPooling
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
## TFBaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
## TFBaseModelOutputWithPast
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
## TFBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
## TFSeq2SeqModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
## TFCausalLMOutput
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
## TFCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
## TFCausalLMOutputWithPast
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
## TFMaskedLMOutput
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
## TFSeq2SeqLMOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
## TFNextSentencePredictorOutput
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
## TFSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
## TFSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
## TFMultipleChoiceModelOutput
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
## TFTokenClassifierOutput
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
## TFQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
## TFSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
## FlaxBaseModelOutput
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
## FlaxBaseModelOutputWithPast
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
## FlaxBaseModelOutputWithPooling
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
## FlaxBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
## FlaxSeq2SeqModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
## FlaxCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
## FlaxMaskedLMOutput
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
## FlaxSeq2SeqLMOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
## FlaxNextSentencePredictorOutput
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
## FlaxSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
## FlaxSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
## FlaxMultipleChoiceModelOutput
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
## FlaxTokenClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
## FlaxQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
## FlaxSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/text_generation.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Generation
Each framework has a generate method for text generation implemented in their respective `GenerationMixin` class:

- PyTorch [`~generation.GenerationMixin.generate`] is implemented in [`~generation.GenerationMixin`].
- TensorFlow [`~generation.TFGenerationMixin.generate`] is implemented in [`~generation.TFGenerationMixin`].
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] is implemented in [`~generation.FlaxGenerationMixin`].

Regardless of your framework of choice, you can parameterize the generate method with a [`~generation.GenerationConfig`] class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.

To learn how to inspect a model's generation configuration, what are the defaults, how to modify the parameters ad hoc, and how to create and save a customized generation configuration, refer to the [text generation strategies guide](../generation_strategies). The guide also explains how to use related features, like token streaming.
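For instance, a generation configuration can be created, saved and reloaded independently of any model — a minimal round-trip sketch:

```python
import tempfile

from transformers import GenerationConfig

gen_config = GenerationConfig(max_new_tokens=64, do_sample=True, top_k=50, temperature=0.7)

with tempfile.TemporaryDirectory() as tmp_dir:
    gen_config.save_pretrained(tmp_dir)                   # writes generation_config.json
    reloaded = GenerationConfig.from_pretrained(tmp_dir)  # reads it back

print(reloaded.top_k, reloaded.temperature)  # 50 0.7
```

The same object can then be passed to `generate` via `generation_config=reloaded`, or saved next to a model so it becomes its default.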
## GenerationConfig
[[autodoc]] generation.GenerationConfig
- from_pretrained
- from_model_config
- save_pretrained
## GenerationMixin
[[autodoc]] generation.GenerationMixin
- generate
- compute_transition_scores
- greedy_search
- sample
- beam_search
- beam_sample
- contrastive_search
- group_beam_search
- constrained_beam_search
## TFGenerationMixin
[[autodoc]] generation.TFGenerationMixin
- generate
- compute_transition_scores
## FlaxGenerationMixin
[[autodoc]] generation.FlaxGenerationMixin
- generate
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/logging.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging
๐ค Transformers has a centralized logging system, so that you can setup the verbosity of the library easily.

Currently the default verbosity of the library is `WARNING`.

To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity to the INFO level:
```python
import transformers
transformers.logging.set_verbosity_info()
```
You can also use the environment variable `TRANSFORMERS_VERBOSITY` to override the default verbosity. You can set it to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
```bash
TRANSFORMERS_VERBOSITY=error ./myprogram.py
```
Additionally, some `warnings` can be disabled by setting the environment variable `TRANSFORMERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using [`logger.warning_advice`]. For example:
```bash
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
Here is an example of how to use the same logger as the library in your own module or script:
```python
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
```
All the methods of this logging module are documented below; the main ones are [`logging.get_verbosity`] to get the current level of verbosity in the logger and [`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are:

- `transformers.logging.CRITICAL` or `transformers.logging.FATAL` (int value, 50): only report the most critical errors.
- `transformers.logging.ERROR` (int value, 40): only report errors.
- `transformers.logging.WARNING` or `transformers.logging.WARN` (int value, 30): only report errors and warnings. This is the default level used by the library.
- `transformers.logging.INFO` (int value, 20): reports errors, warnings, and basic information.
- `transformers.logging.DEBUG` (int value, 10): reports all information.

By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.
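A minimal sketch of switching between levels (restoring the previous value afterwards so the change does not leak into other code):

```python
from transformers.utils import logging

previous = logging.get_verbosity()

logging.set_verbosity(logging.ERROR)  # equivalent to logging.set_verbosity_error()
print(logging.get_verbosity())        # 40, the int value of ERROR

logging.set_verbosity(previous)       # restore the original level
```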
## `logging` vs `warnings`
Python has two logging systems that are often used in conjunction: `logging`, which is explained above, and `warnings`, which allows further classification of warnings in specific buckets, e.g., `FutureWarning` for a feature or path that has already been deprecated and `DeprecationWarning` to indicate an upcoming deprecation.

We use both in the `transformers` library. We leverage and adapt `logging`'s `captureWarnings` method to allow management of these warning messages by the verbosity setters above.

What does that mean for developers of the library? We should respect the following heuristics:

- `warnings` should be favored for developers of the library and libraries dependent on `transformers`
- `logging` should be used for end-users of the library using it in every-day projects

See reference of the `captureWarnings` method below.
[[autodoc]] logging.captureWarnings
## Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
## Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/configuration.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Configuration

The base class [`PretrainedConfig`] implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).

Each derived config class implements model specific attributes. Common attributes present in all config classes are: `hidden_size`, `num_attention_heads`, and `num_hidden_layers`. Text models further implement: `vocab_size`.
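A minimal sketch of the save/load round trip, using [`BertConfig`] as the derived class (no model weights are involved):

```python
import tempfile

from transformers import BertConfig

# Derived config classes expose the shared attributes plus model-specific ones.
config = BertConfig(hidden_size=128, num_attention_heads=4, num_hidden_layers=2)

with tempfile.TemporaryDirectory() as tmp_dir:
    config.save_pretrained(tmp_dir)                 # writes config.json
    reloaded = BertConfig.from_pretrained(tmp_dir)  # loads it back

print(reloaded.hidden_size, reloaded.num_hidden_layers)  # 128 2
```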
## PretrainedConfig
[[autodoc]] PretrainedConfig
- push_to_hub
- all
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/processors.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Processors
Processors can mean two different things in the Transformers library:
- the objects that pre-process inputs for multi-modal models such as [Wav2Vec2](../model_doc/wav2vec2) (speech and text) or [CLIP](../model_doc/clip) (text and vision)
- deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD.
## Multi-modal processors
Any multi-modal model will require an object to group together several modalities (among text, vision and audio). This is handled by objects called processors, which group together two or more processing objects such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio).

Those processors inherit from the following base class that implements the saving and loading functionality:
[[autodoc]] ProcessorMixin
## Deprecated processors
All processors follow the same architecture which is that of the [`~data.processors.utils.DataProcessor`]. The processor returns a list of [`~data.processors.utils.InputExample`]. These [`~data.processors.utils.InputExample`] can be converted to [`~data.processors.utils.InputFeatures`] in order to be fed to the model.
[[autodoc]] data.processors.utils.DataProcessor
[[autodoc]] data.processors.utils.InputExample
[[autodoc]] data.processors.utils.InputFeatures
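As a small illustration (the sentences are made up), an [`~data.processors.utils.InputExample`] is a plain container for one sentence pair that serializes to JSON:

```python
from transformers.data.processors.utils import InputExample

example = InputExample(
    guid="train-1",
    text_a="A plane is taking off.",
    text_b="An air plane is taking off.",
    label="1",
)

print(example.to_json_string())
```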
## GLUE
[General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7)

This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI.
Those processors are:
- [`~data.processors.utils.MrpcProcessor`]
- [`~data.processors.utils.MnliProcessor`]
- [`~data.processors.utils.MnliMismatchedProcessor`]
- [`~data.processors.utils.Sst2Processor`]
- [`~data.processors.utils.StsbProcessor`]
- [`~data.processors.utils.QqpProcessor`]
- [`~data.processors.utils.QnliProcessor`]
- [`~data.processors.utils.RteProcessor`]
- [`~data.processors.utils.WnliProcessor`]
Additionally, the following method can be used to load values from a data file and convert them to a list of [`~data.processors.utils.InputExample`].
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
## XNLI
[The Cross-Lingual NLI Corpus (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) is a benchmark that evaluates the quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on [*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource languages such as English and low-resource languages such as Swahili).

It was released together with the paper [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053)

This library hosts the processor to load the XNLI data:

- [`~data.processors.utils.XnliProcessor`]

Please note that since the gold labels are available on the test set, evaluation is performed on the test set.

An example using these processors is given in the [run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) script.
## SQuAD
[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250). The second version (v2.0) was released alongside the paper [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822).

This library hosts a processor for each of the two versions:
### Processors
Those processors are:
- [`~data.processors.utils.SquadV1Processor`]
- [`~data.processors.utils.SquadV2Processor`]
Both inherit from the abstract class [`~data.processors.utils.SquadProcessor`].
[[autodoc]] data.processors.squad.SquadProcessor
- all
Additionally, the following method can be used to convert SQuAD examples into [`~data.processors.utils.SquadFeatures`] that can be used as model inputs.
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
These processors, as well as the aforementioned method, can be used with files containing the data as well as with the *tensorflow_datasets* package. Examples are given below.
### Example usage
Here is an example using the processors, as well as the conversion method, with data files:
```python
# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)
# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
Using *tensorflow_datasets* is as easy as using a data file:
```python
# tensorflow_datasets only handle Squad V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
Another example using these processors is given in the [run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) script.
<!-- hf_public_repos/transformers/docs/source/ja/main_classes/pipelines.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Pipelines
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](../task_summary) for examples of use.

There are two categories of pipeline abstractions to be aware of:

- The [`pipeline`], which is the most powerful object encapsulating all other pipelines.
- Task-specific pipelines, which are available for [audio](#audio), [computer vision](#computer-vision), [natural language processing](#natural-language-processing), and [multimodal](#multimodal) tasks.
## The pipeline abstraction
The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but can provide additional quality of life.

Simple call on one item:
```python
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
If you want to use a specific model from the [hub](https://huggingface.co), you can ignore the task if the model on the hub already defines it:
```python
>>> pipe = pipeline(model="roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
To call a pipeline on many items, you can call it with a *list*:
```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
To iterate over full datasets, it is recommended to use a `Dataset` directly. This means you don't need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn't, don't hesitate to create an issue.
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
For ease of use, a generator is also possible:
```python
from transformers import pipeline
pipe = pipeline("text-classification")
def data():
while True:
# This could come from a dataset, a database, a queue or HTTP request
# in a server
# Caveat: because this is iterative, you cannot use `num_workers > 1` variable
# to use multiple threads to preprocess data. You can still have 1 thread that
# does the preprocessing while the main runs the big inference
yield "This is a test"
for out in pipe(data()):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
[[autodoc]] pipeline
## Pipeline batching
All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing lists or a `Dataset` or a `generator`).
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
<Tip warning={true}>
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending on hardware, data and the actual model being used.

Example where it's mostly a speedup:
</Tip>
```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm
pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
return "This is a test"
dataset = MyDataset()
for batch_size in [1, 8, 64, 256]:
print("-" * 30)
print(f"Streaming batch_size={batch_size}")
for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
pass
```
```
# On GTX 970
------------------------------
Streaming no batching
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```
Example where it's mostly a slowdown:
```python
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
if i % 64 == 0:
n = 100
else:
n = 1
return "This is a test" * n
```
This is an occasional very long sentence compared to the others. In that case, the **whole** batch will need to be 400
tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to a severe slowdown. Even worse, on
bigger batches, the program simply crashes.
```
------------------------------
Streaming no batching
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case. Rule of
thumb:

For users, a rule of thumb is:

- **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
  only way to go.**
- If you are latency constrained (a live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you are optimizing for throughput (you want to run your model on a bunch of static data), on GPU, then:
  - If you have no clue about the size of the sequence_length ("natural" data), by default don't batch, measure and
    try tentatively to add it, and add OOM checks to recover when it fails (and it will at some point if you don't
    control the sequence_length).
  - If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push
    it until you get OOMs.
  - The larger the GPU, the more likely batching is going to be interesting.
- As soon as you enable batching, make sure you can handle OOMs nicely.
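To make that last point concrete, here is a minimal sketch of an OOM-recovery loop. `run_batch` is a hypothetical stand-in for a pipeline call, and the halving strategy and matched error message are assumptions for illustration, not a `transformers` API:

```python
def run_with_backoff(run_batch, items, batch_size):
    """Process `items` in batches, halving the batch size whenever the
    (hypothetical) `run_batch` callable raises an out-of-memory error."""
    results = []
    i = 0
    while i < len(items):
        try:
            results.extend(run_batch(items[i : i + batch_size]))
            i += batch_size  # this batch fit, move on
        except RuntimeError as e:
            if "out of memory" not in str(e) or batch_size == 1:
                raise  # not an OOM, or nothing left to shrink
            batch_size = max(1, batch_size // 2)  # back off and retry
    return results, batch_size
```

In a real setup you would typically also free cached GPU memory (e.g. `torch.cuda.empty_cache()`) before retrying.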
## Pipeline chunk batching
`zero-shot-classification` and `question-answering` are slightly specific in the sense that a single input might yield
multiple forward passes of a model. Under normal circumstances, this would cause issues with the `batch_size` argument.

In order to circumvent this issue, both of these pipelines are a bit specific: they are `ChunkPipeline` instead of
regular `Pipeline`. In short:
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```
now becomes:
```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
model_outputs = pipe.forward(preprocessed)
all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```
Because the pipeline is used in the same way, this should be completely transparent to your code.

The pipeline can take care of batching for you automatically, meaning you don't have to care about how many forward
passes your inputs actually trigger; you can optimize the `batch_size` independently of the inputs. The caveats from
the previous section still apply.
## Pipeline custom code
If you want to override a specific pipeline.

Don't hesitate to create an issue for your task at hand; the goal of the pipeline is to be easy to use and support most
cases, so `transformers` could maybe support your use case.

If you want to try simply you can:

- Subclass your pipeline of choice
```python
from transformers import TextClassificationPipeline


class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = super().postprocess(model_outputs, **kwargs)
        # And here
        return scores

my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
```
That should allow you to do all the custom code you want.
## Implementing a pipeline
[Implementing a new pipeline](../add_new_pipeline)
## Audio
The pipelines available for audio tasks include the following.
### AudioClassificationPipeline
[[autodoc]] AudioClassificationPipeline
- __call__
- all
### AutomaticSpeechRecognitionPipeline
[[autodoc]] AutomaticSpeechRecognitionPipeline
- __call__
- all
### TextToAudioPipeline
[[autodoc]] TextToAudioPipeline
- __call__
- all
### ZeroShotAudioClassificationPipeline
[[autodoc]] ZeroShotAudioClassificationPipeline
- __call__
- all
## Computer vision
The pipelines available for computer vision tasks include the following.
### DepthEstimationPipeline
[[autodoc]] DepthEstimationPipeline
- __call__
- all
### ImageClassificationPipeline
[[autodoc]] ImageClassificationPipeline
- __call__
- all
### ImageSegmentationPipeline
[[autodoc]] ImageSegmentationPipeline
- __call__
- all
### ImageToImagePipeline
[[autodoc]] ImageToImagePipeline
- __call__
- all
### ObjectDetectionPipeline
[[autodoc]] ObjectDetectionPipeline
- __call__
- all
### VideoClassificationPipeline
[[autodoc]] VideoClassificationPipeline
- __call__
- all
### ZeroShotImageClassificationPipeline
[[autodoc]] ZeroShotImageClassificationPipeline
- __call__
- all
### ZeroShotObjectDetectionPipeline
[[autodoc]] ZeroShotObjectDetectionPipeline
- __call__
- all
## Natural Language Processing
The pipelines available for natural language processing tasks include the following.
### ConversationalPipeline
[[autodoc]] Conversation
[[autodoc]] ConversationalPipeline
- __call__
- all
### FillMaskPipeline
[[autodoc]] FillMaskPipeline
- __call__
- all
### NerPipeline
[[autodoc]] NerPipeline
See [`TokenClassificationPipeline`] for all details.
### QuestionAnsweringPipeline
[[autodoc]] QuestionAnsweringPipeline
- __call__
- all
### SummarizationPipeline
[[autodoc]] SummarizationPipeline
- __call__
- all
### TableQuestionAnsweringPipeline
[[autodoc]] TableQuestionAnsweringPipeline
- __call__
### TextClassificationPipeline
[[autodoc]] TextClassificationPipeline
- __call__
- all
### TextGenerationPipeline
[[autodoc]] TextGenerationPipeline
- __call__
- all
### Text2TextGenerationPipeline
[[autodoc]] Text2TextGenerationPipeline
- __call__
- all
### TokenClassificationPipeline
[[autodoc]] TokenClassificationPipeline
- __call__
- all
### TranslationPipeline
[[autodoc]] TranslationPipeline
- __call__
- all
### ZeroShotClassificationPipeline
[[autodoc]] ZeroShotClassificationPipeline
- __call__
- all
## Multimodal
The pipelines available for multimodal tasks include the following.
### DocumentQuestionAnsweringPipeline
[[autodoc]] DocumentQuestionAnsweringPipeline
- __call__
- all
### FeatureExtractionPipeline
[[autodoc]] FeatureExtractionPipeline
- __call__
- all
### ImageToTextPipeline
[[autodoc]] ImageToTextPipeline
- __call__
- all
### VisualQuestionAnsweringPipeline
[[autodoc]] VisualQuestionAnsweringPipeline
- __call__
- all
## Parent class: `Pipeline`
[[autodoc]] Pipeline
<!-- docs/source/ja/main_classes/image_processor.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image Processor
An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and Numpy tensors. It may also include model-specific post-processing, such as converting logits to segmentation masks.
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
- from_pretrained
- save_pretrained
## BatchFeature
[[autodoc]] BatchFeature
## BaseImageProcessor
[[autodoc]] image_processing_utils.BaseImageProcessor
<!-- docs/source/ja/main_classes/optimizer_schedules.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Optimization
The `.optimization` module provides:

- an optimizer with fixed weight decay that can be used to fine-tune models, and
- several schedules in the form of schedule objects that inherit from `_LRSchedule`
- a gradient accumulation class to accumulate the gradients of multiple batches
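For intuition, the multiplier computed by a warmup schedule such as `get_linear_schedule_with_warmup` can be sketched as a plain function of the step count (a simplified re-implementation for illustration, not the library code):

```python
def linear_warmup_decay(step, num_warmup_steps, num_training_steps):
    """Learning-rate multiplier: ramps linearly from 0 to 1 during warmup,
    then decays linearly back to 0 by the end of training."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps),
    )
```

The optimizer's base learning rate is multiplied by this value at every step.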
## AdamW (PyTorch)
[[autodoc]] AdamW
## AdaFactor (PyTorch)
[[autodoc]] Adafactor
## AdamWeightDecay (TensorFlow)
[[autodoc]] AdamWeightDecay
[[autodoc]] create_optimizer
## Schedules
### Learning Rate Schedules (Pytorch)
[[autodoc]] SchedulerType
[[autodoc]] get_scheduler
[[autodoc]] get_constant_schedule
[[autodoc]] get_constant_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>
[[autodoc]] get_cosine_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>
[[autodoc]] get_linear_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>
[[autodoc]] get_polynomial_decay_schedule_with_warmup
[[autodoc]] get_inverse_sqrt_schedule
### Warmup (TensorFlow)
[[autodoc]] WarmUp
## Gradient Strategies
### GradientAccumulator (TensorFlow)
[[autodoc]] GradientAccumulator
<!-- docs/source/ja/main_classes/quantization.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quantize ๐ค Transformers models
## `AutoGPTQ` Integration
๐ค Transformers has integrated the `optimum` API to perform GPTQ quantization on language models. You can load and quantize your model in 8, 4, 3 or even 2 bits without a big drop in performance and with faster inference speed! This is supported by most GPU hardware.

To learn more about the quantization method, check out:
- the [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) paper
- the `optimum` [guide](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization) on GPTQ quantization
- the [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) library used as the backend
### Requirements

You need to have the following requirements installed to run the code below:

- Install the latest `AutoGPTQ` library:
`pip install auto-gptq`

- Install the latest `optimum` from source:
`pip install git+https://github.com/huggingface/optimum.git`

- Install the latest `transformers` from source:
`pip install git+https://github.com/huggingface/transformers.git`

- Install the latest `accelerate` library:
`pip install --upgrade accelerate`

Note that the GPTQ integration supports only text models for now, and you may encounter unexpected behaviour for vision, speech or multi-modal models.
### Load and quantize a model
GPTQ is a quantization method that requires weight calibration before using the quantized model. If you want to quantize a transformers model from scratch, it might take some time before producing the quantized model (~5 min on a Google colab for the `facebook/opt-350m` model).

Hence, there are two different scenarios where you want to use GPTQ-quantized models. The first use case is to load a model that has already been quantized by other users and is available on the Hub; the second use case is to quantize your model from scratch and save it or push it to the Hub so that other users can also use it.
#### GPTQ Configuration
In order to load and quantize a model, you need to create a [`GPTQConfig`]. You need to pass the number of `bits`, a `dataset` used to calibrate the quantization, and the `tokenizer` of the model used to prepare the dataset.
```python
model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer)
```
Note that you can pass your own dataset as a list of strings. However, it is highly recommended to use a dataset from the GPTQ paper.
```python
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
quantization = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```
#### Quantization
You can quantize a model by using `from_pretrained` and setting the `quantization_config`:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
```
Note that you will need a GPU to quantize a model. We will put the model on the CPU and move the modules back and forth to the GPU in order to quantize them.

If you want to maximize your GPU usage while using CPU offload, you can set `device_map = "auto"`:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```
Note that disk offload is not supported. Furthermore, if you run out of memory because of the dataset, you may have to pass `max_memory` in `from_pretrained`. Check out this [guide](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) to learn more about `device_map` and `max_memory`.
<Tip warning={true}>
GPTQ quantization only works for text models for now. Furthermore, the quantization process can take a lot of time depending on your hardware (175B model = 4 GPU hours using an NVIDIA A100). Please check on the Hub whether a GPTQ-quantized version of the model already exists. If not, you can submit a demand on GitHub.
</Tip>
### Push quantized model to ๐ค Hub
You can push a quantized model to the Hub with `push_to_hub`, like any ๐ค model. The quantization config will be saved and pushed along with the model:
```python
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```
If you want to save your quantized model on your local machine, you can also do it with `save_pretrained`:
```python
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```
Note that if you have quantized your model with a `device_map`, make sure to move the entire model to either one of your GPUs or to the `cpu` before saving it:
```python
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```
### Load a quantized model from the ๐ค Hub
You can load a quantized model from the Hub by using `from_pretrained`. Make sure that the pushed weights are quantized by checking that the attribute `quantization_config` is present in the model configuration object:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
```
If you want to load a model faster and without allocating more memory than needed, the `device_map` argument also works with quantized models. Make sure that you have the `accelerate` library installed:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
### Exllama kernels for faster inference
For 4-bit models, you can use the exllama kernels for faster inference. They are activated by default. You can change that behavior by passing `disable_exllama` in [`GPTQConfig`]. This will overwrite the quantization config stored in the model's config. Note that you can only overwrite the attributes related to the kernels. Furthermore, you need to have the entire model on GPUs if you want to use the exllama kernels:
```py
import torch

from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(bits=4, disable_exllama=False)
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config)
```
Note that only 4-bit models are supported for now. Furthermore, it is recommended to deactivate the exllama kernels if you are fine-tuning a quantized model with peft.
#### Fine-tune a quantized model
You can fine-tune quantized models with the official support for adapters in the Hugging Face ecosystem. Please have a look at the [`peft`](https://github.com/huggingface/peft) library for more details.
### Example demo
Check out the Google Colab [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) to learn how to quantize your model with GPTQ and how to fine-tune the quantized model with peft.
### GPTQConfig
[[autodoc]] GPTQConfig
## `bitsandbytes` Integration
๐ค Transformers is closely integrated with the most used modules of `bitsandbytes`. You can load your model in 8-bit precision with a few lines of code. This is supported by most GPU hardware since the `0.37.0` release of `bitsandbytes`.

Learn more about the quantization method in the [LLM.int8()](https://arxiv.org/abs/2208.07339) paper, or the [blogpost](https://huggingface.co/blog/hf-bitsandbytes-integration) about the collaboration.

Since its `0.39.0` release, you can load any model that supports `device_map` using 4-bit quantization, leveraging the FP4 data type.

If you want to quantize your own pytorch model, check out this [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) from the ๐ค Accelerate library.

Here is what you can do using the `bitsandbytes` integration.
### General usage
You can quantize a model by using the `load_in_8bit` or `load_in_4bit` argument when calling the [`~PreTrainedModel.from_pretrained`] method, as long as your model supports loading with ๐ค Accelerate and contains `torch.nn.Linear` layers. This should work the same for any modality.
```python
from transformers import AutoModelForCausalLM
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
```
By default, all the other modules (e.g. `torch.nn.LayerNorm`) will be converted to `torch.float16`. You can change the `dtype` of these modules with the `torch_dtype` argument:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32)
>>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
torch.float32
```
### FP4 quantization
#### Requirements
Before running any of the code below, make sure you have the following requirements installed:

- The latest `bitsandbytes` library:
`pip install bitsandbytes>=0.39.0`

- Install the latest `accelerate`:
`pip install --upgrade accelerate`

- Install the latest `transformers`:
`pip install --upgrade transformers`
#### Tips and best practices
- **Advanced usage:** Refer to [this Google Colab notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) for advanced usage of 4-bit quantization with all the possible options.
- **Faster inference with `batch_size=1`:** Since the `0.40.0` release of bitsandbytes, for `batch_size=1` you can benefit from fast inference. Check out [these release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) and make sure you use a version greater than `0.40.0` to benefit from this feature out of the box.
- **Training:** According to the [QLoRA paper](https://arxiv.org/abs/2305.14314), for training 4-bit base models (e.g. using LoRA adapters) one should use `bnb_4bit_quant_type='nf4'`.
- **Inference:** For inference, `bnb_4bit_quant_type` does not have a huge impact on performance. However, for consistency with the model's weights, make sure you use the same `bnb_4bit_compute_dtype` and `torch_dtype` arguments.
#### Load a large model in 4bit
You can load a model in 4-bit by using `load_in_4bit=True` when calling the `.from_pretrained` method, dividing your memory use by (roughly) 4:
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
```
<Tip warning={true}>
Note that once a model has been loaded in 4-bit, it is currently not possible to push the quantized weights to the Hub. Note also that you cannot train 4-bit weights, as this is not supported yet. However, you can use 4-bit models to train extra parameters; this will be covered in the next section.
</Tip>
### Load a large model in 8bit
You can load a model in 8-bit and halve the memory requirements by using the `load_in_8bit=True` argument when calling the `.from_pretrained` method:
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
```
Then, use your model as you would usually use a [`PreTrainedModel`].

You can check the memory footprint of your model with the `get_memory_footprint` method:
```python
print(model.get_memory_footprint())
```
With this integration we are able to load large models on smaller devices and run them without any issue.
<Tip warning={true}>
Note that once a model has been loaded in 8-bit, it is currently not possible to push the quantized weights to the Hub, except if you use the latest `transformers` and `bitsandbytes`. Note also that you cannot train 8-bit weights, as this is not supported yet. However, you can use 8-bit models to train extra parameters; this will be covered in the next section.

Note also that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference as it will dispatch the model efficiently on the available resources.
</Tip>
#### Advanced use cases
Here we will cover some advanced use cases you can perform with FP4 quantization.
##### Change the compute dtype
The compute dtype is used to change the dtype that will be used during computation. For example, hidden states could be in `float32` but computation can be set to bf16 for speedups. By default, the compute dtype is set to `float32`:
```python
import torch
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```
##### Using NF4 (Normal Float 4) data type
You can also use the NF4 data type, a new 4-bit data type adapted for weights that have been initialized using a normal distribution. To use it, run:
```python
from transformers import BitsAndBytesConfig
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```
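To see what "a 4-bit data type adapted to the weight distribution" means in spirit, here is a toy round-to-nearest codebook quantizer in NumPy. The evenly spaced codebook below is illustrative only; the actual NF4 table uses quantiles of a normal distribution and lives inside bitsandbytes:

```python
import numpy as np

# A 16-entry (4-bit) codebook; evenly spaced for illustration only --
# the real NF4 codebook is based on normal-distribution quantiles.
CODEBOOK = np.linspace(-1.0, 1.0, 16)


def quantize_dequantize(weights):
    """Snap each weight to the nearest codebook entry after absmax scaling."""
    absmax = np.abs(weights).max()
    scaled = weights / absmax                                   # map into [-1, 1]
    idx = np.abs(scaled[:, None] - CODEBOOK[None, :]).argmin(axis=1)
    return CODEBOOK[idx] * absmax                               # reconstruct
```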
##### Use nested quantization for more memory efficient inference
ใพใใใในใใใใ้ๅญๅๆๆณใไฝฟ็จใใใใจใใๅงใใใพใใใใใซใใใใใใฉใผใใณในใ่ฟฝๅ ใใใใจใชใใใใๅคใใฎใกใขใชใ็ฏ็ดใใใพใใ็ต้จ็ใช่ฆณๅฏใใใใใใซใใใNVIDIA-T4 16GB ไธใงใทใผใฑใณใน้ท 1024ใใใใ ใตใคใบ 1ใๅพ้
็ดฏ็ฉในใใใ 4 ใฎ llama-13b ใขใใซใๅพฎ่ชฟๆดใใใใจใๅฏ่ฝใซใชใใพใใ
```python
from transformers import BitsAndBytesConfig
double_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
)
model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config)
```
### Push quantized models on the ๐ค Hub
You can push a quantized model to the Hub by simply using the `push_to_hub` method. This will first push the quantization configuration file, then the quantized model weights. Make sure to use `bitsandbytes>0.37.2` (at the time of writing, we tested it on `bitsandbytes==0.38.0.post1`) to be able to use this feature:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model.push_to_hub("bloom-560m-8bit")
```
<Tip warning={true}>
Pushing 8-bit models to the Hub is strongly encouraged for large models. It will allow the community to benefit from the reduced memory footprint, for example to load large models on a Google Colab.
</Tip>
### Load a quantized model from the ๐ค Hub
You can load a quantized model from the Hub by using the `from_pretrained` method. Make sure that the pushed weights are quantized by checking that the attribute `quantization_config` is present in the model configuration object:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```
Note that in this case you don't need to specify the `load_in_8bit=True` argument, but you do need to make sure that `bitsandbytes` and `accelerate` are installed.

Note also that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference as it will dispatch the model efficiently on the available resources.
### Advanced use cases
This section is intended for advanced users who want to explore what is possible beyond loading and running 8-bit models.
#### Offload between `cpu` and `gpu`
One advanced use case is loading a model and dispatching the weights between `CPU` and `GPU`. Note that the weights that are dispatched on the CPU **will not** be converted to 8-bit, and are thus kept in `float32`. This feature is intended for users who want to fit a very large model and dispatch it between the GPU and the CPU.

First, load a [`BitsAndBytesConfig`] from `transformers` and set the attribute `llm_int8_enable_fp32_cpu_offload` to `True`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
```
Let's say you want to load the `bigscience/bloom-1b7` model, and you have just enough GPU RAM to fit the entire model except the `lm_head`. In that case, write a custom `device_map` as follows:
```python
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": "cpu",
"transformer.h": 0,
"transformer.ln_f": 0,
}
```
And load your model as follows:
```python
model_8bit = AutoModelForCausalLM.from_pretrained(
"bigscience/bloom-1b7",
device_map=device_map,
quantization_config=quantization_config,
)
```
And that's it! Enjoy your model!
#### Play with `llm_int8_threshold`
You can play with the `llm_int8_threshold` argument to change the threshold for outliers. An "outlier" is a hidden-state value that is greater than a certain threshold.

This corresponds to the outlier threshold for outlier detection as described in the `LLM.int8()` paper. Any hidden-state value above this threshold is considered an outlier, and the operations on those values are performed in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).

This argument can impact the inference speed of the model. We suggest playing with this parameter to find the one that best fits your use case:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
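The mixed-precision decomposition that this threshold controls can be sketched in plain NumPy (illustrative only; the real int8 kernels live in bitsandbytes): input columns whose magnitude exceeds the threshold take the full-precision path, everything else goes through symmetric int8 quantization.

```python
import numpy as np


def mixed_precision_matmul(x, w, threshold=6.0):
    """Toy LLM.int8()-style decomposition: outlier columns of `x` are
    multiplied in full precision, the rest via symmetric int8 quantization."""
    outlier_cols = np.abs(x).max(axis=0) > threshold
    y = np.zeros((x.shape[0], w.shape[1]))
    # full-precision path for the few outlier dimensions
    y += x[:, outlier_cols] @ w[outlier_cols, :]
    # int8 path for the regular dimensions
    x_reg, w_reg = x[:, ~outlier_cols], w[~outlier_cols, :]
    if x_reg.size:
        scale = max(np.abs(x_reg).max() / 127, 1e-12)
        x_q = np.round(x_reg / scale)        # values in the int8 range
        y += (x_q * scale) @ w_reg           # dequantize and accumulate
    return y
```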
#### Skip the conversion of some modules
Some models have modules that must not be converted to 8-bit to ensure stability. For example, the Jukebox model has several `lm_head` modules that should be skipped. Play with `llm_int8_skip_modules`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
#### Fine-tune a model that has been loaded in 8-bit
Hugging Face ใจใณใทในใใ ใฎใขใใใฟใผใฎๅ
ฌๅผใตใใผใใซใใใ8 ใใใใงใญใผใใใใใขใใซใๅพฎ่ชฟๆดใงใใพใใ
ใใใซใใใๅไธใฎ Google Colab ใง`flan-t5-large`ใ`facebook/opt-6.7b`ใชใฉใฎๅคง่ฆๆจกใขใใซใๅพฎ่ชฟๆดใใใใจใใงใใพใใ่ฉณ็ดฐใซใคใใฆใฏใ[`peft`](https://github.com/huggingface/peft) ใฉใคใใฉใชใใ่ฆงใใ ใใใ
ใใฌใผใใณใฐ็จใฎใขใใซใใญใผใใใใจใใซ `device_map` ใๆธกใๅฟ
่ฆใใชใใใจใซๆณจๆใใฆใใ ใใใใขใใซใ GPU ใซ่ชๅ็ใซใญใผใใใใพใใๅฟ
่ฆใซๅฟใใฆใใใใคใน ใใใใ็นๅฎใฎใใใคในใซ่จญๅฎใใใใจใใงใใพใ (ไพ: `cuda:0`ใ`0`ใ`torch.device('cuda:0')`)ใ `device_map=auto`ใฏๆจ่ซใฎใฟใซไฝฟ็จใใๅฟ
่ฆใใใใใจใซๆณจๆใใฆใใ ใใใ
### BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
## Quantization with ๐ค `optimum`
Please have a look at the [Optimum documentation](https://huggingface.co/docs/optimum/index) to learn more about the quantization methods supported by `optimum` and to see whether they are applicable to your use case.
<!-- docs/source/ja/main_classes/agent.md -->
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Agents & Tools
<Tip warning={true}>
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or the underlying models are prone to change.
</Tip>
To learn more about agents and tools, make sure to read the [introductory guide](../transformers_agents). This page contains the API docs for the underlying classes.
## Agents
We provide three types of agents: [`HfAgent`] uses inference endpoints for open-source models, [`LocalAgent`] uses a model of your choice locally, and [`OpenAiAgent`] uses OpenAI closed models.
### HfAgent
[[autodoc]] HfAgent
### LocalAgent
[[autodoc]] LocalAgent
### OpenAiAgent
[[autodoc]] OpenAiAgent
### AzureOpenAiAgent
[[autodoc]] AzureOpenAiAgent
### Agent
[[autodoc]] Agent
- chat
- run
- prepare_for_new_chat
## Tools
### load_tool
[[autodoc]] load_tool
### Tool
[[autodoc]] Tool
### PipelineTool
[[autodoc]] PipelineTool
### RemoteTool
[[autodoc]] RemoteTool
### launch_gradio_demo
[[autodoc]] launch_gradio_demo
## ใจใผใธใงใณใใฎ็จฎ้ก
ใจใผใธใงใณใใฏใใผใซ้ใงใใใใ็จฎ้กใฎใชใใธใงใฏใใๅฆ็ใงใใพใใใใผใซใฏๅฎๅ
จใซใใซใใขใผใใซใงใใใใใๅใๅใใจ่ฟๅใๅฏ่ฝใงใ
ใใญในใใ็ปๅใใชใผใใฃใชใใใใชใชใฉใฎใฟใคใใใใผใซ้ใฎไบๆๆงใ้ซใใใใใ ใใงใชใใ
ใใใใฎๆปใๅคใ ipython (jupyterใcolabใipython ใใผใใใใฏใชใฉ) ใงๆญฃใใใฌใณใใชใณใฐใใใซใฏใใฉใใใผ ใฏใฉในใๅฎ่ฃ
ใใพใใ
ใใฎใฟใคใใฎๅจใใ
ใฉใใใใใใชใใธใงใฏใใฏๆๅใจๅใใใใซๅไฝใ็ถใใใฏใใงใใใใญในใใชใใธใงใฏใใฏไพ็ถใจใใฆๆๅญๅใพใใฏ็ปๅใจใใฆๅไฝใใๅฟ
่ฆใใใใพใ
ใชใใธใงใฏใใฏไพ็ถใจใใฆ `PIL.Image` ใจใใฆๅไฝใใใฏใใงใใ
ใใใใฎใฟใคใใซใฏใๆฌกใฎ 3 ใคใฎ็นๅฎใฎ็ฎ็ใใใใพใใ
- ๅใซๅฏพใใฆ `to_raw` ใๅผใณๅบใใจใๅบใซใชใใชใใธใงใฏใใ่ฟใใใใฏใใงใ
- ๅใซๅฏพใใฆ `to_string` ใๅผใณๅบใใจใใชใใธใงใฏใใๆๅญๅใจใใฆ่ฟใๅฟ
่ฆใใใใพใใ`AgentText` ใฎๅ ดๅใฏๆๅญๅใซใชใๅฏ่ฝๆงใใใใพใใ
ใใ ใใไปใฎใคใณในใฟใณในใฎใชใใธใงใฏใใฎใทใชใขใซๅใใใใใผใธใงใณใฎใในใซใชใใพใใ
- ipython ใซใผใใซใง่กจ็คบใใใจใใชใใธใงใฏใใๆญฃใใ่กจ็คบใใใใฏใใงใ
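As a rough illustration of this protocol (a hypothetical minimal sketch in plain Python, not the actual `transformers` implementation), a text wrapper could subclass `str` so it keeps behaving like a string while also exposing `to_raw`/`to_string`:

```python
class TextWrapper(str):
    """Hypothetical sketch of an AgentText-like type: behaves exactly
    like a plain string while also exposing the to_raw/to_string protocol."""

    def to_raw(self):
        # Return the underlying object; for a text type that is the string itself
        return str(self)

    def to_string(self):
        # For text, the string form is the value itself; an image type would
        # instead return the path of a serialized version of the object
        return str(self)


t = TextWrapper("hello")
print(t.upper())   # behaves like a normal string
print(t.to_raw())
```

An image or audio wrapper would follow the same pattern, serializing itself to a temporary file in `to_string`.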
### AgentText
[[autodoc]] transformers.tools.agent_types.AgentText
### AgentImage
[[autodoc]] transformers.tools.agent_types.AgentImage
### AgentAudio
[[autodoc]] transformers.tools.agent_types.AgentAudio
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Trainer

The [`Trainer`] class provides an API for feature-complete training in PyTorch for most standard use cases. It's used in most of the [example scripts](https://github.com/huggingface/transformers/tree/main/examples).

Before instantiating your [`Trainer`], create a [`TrainingArguments`] to access all the points of customization during training.

The API supports distributed training on multiple GPUs/TPUs, and mixed precision through [NVIDIA Apex](https://github.com/NVIDIA/apex) and Native AMP for PyTorch.

The [`Trainer`] contains the basic training loop which supports the above features. To inject custom behavior you can subclass it and override the following methods:

- **get_train_dataloader** -- Creates the training DataLoader.
- **get_eval_dataloader** -- Creates the evaluation DataLoader.
- **get_test_dataloader** -- Creates the test DataLoader.
- **log** -- Logs information on the various objects watching training.
- **create_optimizer_and_scheduler** -- Sets up the optimizer and learning rate scheduler if they were not passed at init. Note that you can also subclass or override the `create_optimizer` and `create_scheduler` methods separately.
- **create_optimizer** -- Sets up the optimizer if it wasn't passed at init.
- **create_scheduler** -- Sets up the learning rate scheduler if it wasn't passed at init.
- **compute_loss** - Computes the loss on a batch of training inputs.
- **training_step** -- Performs a training step.
- **prediction_step** -- Performs an evaluation/test step.
- **evaluate** -- Runs an evaluation loop and returns metrics.
- **predict** -- Returns predictions (with metrics if labels are available) on a test set.

<Tip warning={true}>

The [`Trainer`] class is optimized for 🤗 Transformers models and can have surprising behaviors when used with other models. When using it with your own model, make sure:

- your model always returns tuples or subclasses of [`~utils.ModelOutput`]
- your model can compute the loss if a `labels` argument is provided, and that the loss is returned as the first element of the tuple (if your model returns tuples)
- your model can accept multiple label arguments (use `label_names` in your [`TrainingArguments`] to indicate their names to the [`Trainer`]), but none of them should be named `"label"`

</Tip>

Here is an example of how to customize [`Trainer`] to use a weighted loss (useful when you have an unbalanced training set):
```python
import torch
from torch import nn
from transformers import Trainer


class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (suppose one has 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```
Another way to customize the training loop behavior for the PyTorch [`Trainer`] is to use [callbacks](callback) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions (like early stopping).
## Trainer
[[autodoc]] Trainer
- all
## Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
## TrainingArguments
[[autodoc]] TrainingArguments
- all
## Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all
## Checkpoints
ใใใฉใซใใงใฏใ[`Trainer`] ใฏใในใฆใฎใใงใใฏใใคใณใใใ
[`TrainingArguments`] ใไฝฟ็จใใฆใใพใใใใใใฏใxxx ใๅซใ`checkpoint-xxx`ใจใใๅๅใฎใตใใใฉใซใใผใซไฟๅญใใใพใใ
ใใใฏใใฌใผใใณใฐใฎๆฎต้ใงใใใ
ใใงใใฏใใคใณใใใใใฌใผใใณใฐใๅ้ใใใซใฏใๆฌกใฎใใใใใไฝฟ็จใใฆ [`Trainer.train`] ใๅผใณๅบใใพใใ
- `resume_from_checkpoint=True` ใฏๆๆฐใฎใใงใใฏใใคใณใใใใใฌใผใใณใฐใๅ้ใใพใ
- `resume_from_checkpoint=checkpoint_dir` ใใฃใฌใฏใใชๅ
ใฎ็นๅฎใฎใใงใใฏใใคใณใใใใใฌใผใใณใฐใๅ้ใใพใ
ๅๆ ผใใใ
ใใใซใ`push_to_hub=True` ใไฝฟ็จใใใจใใขใใซ ใใใซใใงใใฏใใคใณใใ็ฐกๅใซไฟๅญใงใใพใใใใใฉใซใใงใฏใใในใฆ
ไธญ้ใใงใใฏใใคใณใใซไฟๅญใใใใขใใซใฏๅฅใฎใณใใใใซไฟๅญใใใพใใใใชใใใฃใใคใถใผใฎ็ถๆ
ใฏไฟๅญใใใพใใใ้ฉๅฟใงใใพใ
[`TrainingArguments`] ใฎ `hub-strategy` ๅคใๆฌกใฎใใใใใซใใพใใ
- `"checkpoint"`: ๆๆฐใฎใใงใใฏใใคใณใใ last-checkpoint ใจใใๅๅใฎใตใใใฉใซใใผใซใใใทใฅใใใพใใ
`trainer.train(resume_from_checkpoint="output_dir/last-checkpoint")` ใไฝฟ็จใใฆใใฌใผใใณใฐใ็ฐกๅใซๅ้ใใพใใ
- `"all_checkpoints"`: ใในใฆใฎใใงใใฏใใคใณใใฏใๅบๅใใฉใซใใผใซ่กจ็คบใใใใใใซใใใทใฅใใใพใ (ใใใใฃใฆใ1 ใคใฎใใงใใฏใใคใณใใๅพใใใพใ)
ๆ็ตใชใใธใใชๅ
ใฎใใฉใซใใผใใจใฎใใงใใฏใใคใณใ ใใฉใซใใผ)
## Logging

By default, [`Trainer`] will use `logging.INFO` for the main process and `logging.WARNING` for the replicas, if any.

These defaults can be overridden to use any of the 5 `logging` levels with [`TrainingArguments`]'s arguments:

- `log_level` - for the main process
- `log_level_replica` - for the replicas

Further, if [`TrainingArguments`]'s `log_on_each_node` is set to `False`, only the main node will use the log level settings for its main process; all other nodes will use the log level settings for replicas.

Note that [`Trainer`] is going to set the `transformers` log level separately for each node in its [`Trainer.__init__`]. So you may want to set this sooner (see the next example) if you tap into other `transformers` functionality before creating the [`Trainer`] object.

Here is an example of how this can be used in an application:
```python
[...]
logger = logging.getLogger(__name__)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
# set the main code and the modules it uses to the same log-level according to the node
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
trainer = Trainer(...)
```
And then if you only want to see warnings on the main node, and to prevent all other nodes from printing most likely duplicated warnings, you could run it as:
```bash
my_app.py ... --log_level warning --log_level_replica error
```
In a multi-node environment, if you also don't want the logs to repeat for each node's main process, you will want to change the above to:
```bash
my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
```
This way, only the main process of the first node will log at the "warning" level, and all other processes on the main node and all processes on the other nodes will log at the "error" level.

If you need your application to be as quiet as possible, you could do:
```bash
my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0
```
(add `--log_on_each_node 0` if on a multi-node environment)
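To make the main-process/replica split concrete, here is a small illustrative sketch (hypothetical, not the actual `TrainingArguments.get_process_log_level` implementation) of how a per-process log level can be derived:

```python
import logging


def process_log_level(is_main_process, log_level="info", log_level_replica="warning"):
    """Sketch of the idea: the main process and the replicas can be
    configured with different verbosity levels."""
    name = log_level if is_main_process else log_level_replica
    return getattr(logging, name.upper())


print(process_log_level(True))                              # logging.INFO
print(process_log_level(False, log_level_replica="error"))  # logging.ERROR
```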
## Randomness

When resuming from a checkpoint generated by [`Trainer`], all efforts are made to restore the _python_, _numpy_ and _pytorch_ RNG states to the same states as they were at the moment of saving that checkpoint, which should make the "stop and resume" style of training as close as possible to non-stop training.

However, due to various default non-deterministic pytorch settings, this might not fully work. If you want full determinism, please refer to [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness). As explained in that document, some of the settings that make things deterministic (e.g. `torch.backends.cudnn.deterministic`) may slow things down, therefore this can't be done by default, but you can enable those yourself if needed.
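The core idea of saving and restoring RNG state can be illustrated with Python's own `random` module (a toy sketch; the Trainer additionally handles the numpy and pytorch generators):

```python
import random

# Capture the RNG state, draw a few numbers, then restore the state:
# after restoring, exactly the same sequence is produced again.
state = random.getstate()
first = [random.random() for _ in range(3)]

random.setstate(state)
second = [random.random() for _ in range(3)]

print(first == second)  # True
```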
## Specific GPUs Selection

Let's discuss how you can tell your program which GPUs are to be used and in what order.

When using [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) to use only a subset of your GPUs, you simply specify the number of GPUs to use. For example, if you have 4 GPUs but you wish to use the first 2, you can do:
```bash
torchrun --nproc_per_node=2 trainer-program.py ...
```
If you have either [`accelerate`](https://github.com/huggingface/accelerate) or [`deepspeed`](https://github.com/microsoft/DeepSpeed) installed, you can also accomplish the same by using one of:
```bash
accelerate launch --num_processes 2 trainer-program.py ...
```
```bash
deepspeed --num_gpus 2 trainer-program.py ...
```
You don't need to use the Accelerate or the [Deepspeed integration](deepspeed) features to use these launchers.

Until now you were able to tell the program how many GPUs to use. Now let's discuss how to select specific GPUs and control their order.

The following environment variables help you control which GPUs to use and their order.
**`CUDA_VISIBLE_DEVICES`**
If you have multiple GPUs and you want to use only 1 or a few of them, set the environment variable `CUDA_VISIBLE_DEVICES` to a list of the GPUs to be used.

For example, let's say you have 4 GPUs: 0, 1, 2 and 3. To run only on the physical GPUs 0 and 2, you can do:
```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
```
So now pytorch will see only 2 GPUs, where your physical GPUs 0 and 2 are mapped to `cuda:0` and `cuda:1` correspondingly.

You can even change their order:
```bash
CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ...
```
Here your physical GPUs 0 and 2 are mapped to `cuda:1` and `cuda:0` correspondingly.

The above examples were all for the `DistributedDataParallel` use pattern, but the same method works for [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) as well:
```bash
CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py ...
```
To emulate an environment without GPUs, simply set this environment variable to an empty value like so:
```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
As with any environment variable, you can of course export it instead of adding it to the command line, as in:
```bash
export CUDA_VISIBLE_DEVICES=0,2
torchrun trainer-program.py ...
```
but this approach can be confusing since you may forget you set the environment variable earlier and not understand why the wrong GPUs are being used. Therefore, it's common practice to set the environment variable just for a specific run on the same command line, as shown in most examples of this section.
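The remapping rule is simple enough to sketch in a few lines of plain Python (illustrative only; CUDA itself performs this mapping):

```python
import os


def visible_physical_gpus(env=None):
    """Illustrate how CUDA_VISIBLE_DEVICES remaps device indices:
    logical device cuda:i corresponds to the i-th entry of the list."""
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # variable unset: all physical GPUs visible, native order
    value = value.strip()
    if not value:
        return []    # empty value: no GPUs visible at all
    return [int(x) for x in value.split(",")]


# CUDA_VISIBLE_DEVICES=2,0 -> cuda:0 is physical GPU 2, cuda:1 is physical GPU 0
print(visible_physical_gpus({"CUDA_VISIBLE_DEVICES": "2,0"}))  # [2, 0]
```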
**`CUDA_DEVICE_ORDER`**
There is an additional environment variable `CUDA_DEVICE_ORDER` that controls how the physical devices are ordered. The two choices are:

1. ordered by PCIe bus IDs (matches `nvidia-smi`'s order) - this is the default:
```bash
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
2. ordered by GPU compute capabilities:
```bash
export CUDA_DEVICE_ORDER=FASTEST_FIRST
```
Most of the time you don't need to care about this environment variable, but it's very helpful if you have a lopsided setup where an old and a new GPU are physically inserted in such a way that the slow older card appears to be first. One way to fix that is to swap the cards. But if you can't swap the cards (e.g., if the cooling of the devices gets impacted), then setting `CUDA_DEVICE_ORDER=FASTEST_FIRST` will always put the newer, faster card first. It'll be somewhat confusing though, since `nvidia-smi` will still report them in the PCIe order.

The other solution to swapping the order is to use:
```bash
export CUDA_VISIBLE_DEVICES=1,0
```
In this example we are working with just 2 GPUs, but of course the same applies to however many GPUs your computer has.

Also, if you do set this environment variable, it's best to set it in your `~/.bashrc` file or some other startup config file and forget about it.
## Trainer Integrations
The [`Trainer`] has been extended to support libraries that may dramatically improve your training time and fit much bigger models.

Currently it supports third-party solutions, [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), which implement parts of the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054).

This provided support is new and experimental as of this writing. While the support for DeepSpeed and PyTorch FSDP is active and we welcome issues around it, we don't support the FairScale integration anymore, since it has been integrated into PyTorch main (see the [PyTorch FSDP integration](#pytorch-fully-sharded-data-parallel)).
<a id='zero-install-notes'></a>
### CUDA Extension Installation Notes
As of this writing, Deepspeed requires compilation of CUDA C++ code before it can be used.

While all installation issues should be dealt with through the corresponding GitHub issues of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), there are a few common issues one may encounter while building any PyTorch extension that needs to build CUDA extensions.

Therefore, if you encounter a CUDA-related build issue while doing the following:
```bash
pip install deepspeed
```
please read the following notes first.

In these notes we give examples of what to do when `pytorch` has been built with CUDA `10.2`. If your situation is different, remember to adjust the version number to the one you are after.
#### Possible problem #1
Pytorch comes with its own CUDA toolkit, but to build these two projects you must have an identical version of CUDA installed system-wide.

For example, if you installed `pytorch` with `cudatoolkit==10.2` in the Python environment, you also need to have CUDA `10.2` installed system-wide.

The exact location may vary from system to system, but `/usr/local/cuda-10.2` is the most common location on many Unix systems. When CUDA is correctly set up and added to the `PATH` environment variable, you can find the installation location by doing:

```bash
which nvcc
```

If you don't have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you're on Ubuntu you may want to search for: [ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install).
#### Possible problem #2
Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example, you may have:
```bash
/usr/local/cuda-10.2
/usr/local/cuda-11.0
```
In this situation you need to make sure that your `PATH` and `LD_LIBRARY_PATH` environment variables contain the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the last version installed was. If you encounter the problem where the package build fails because it can't find the right CUDA version despite it being installed system-wide, it means that you need to adjust the 2 aforementioned environment variables.

First, you may look at their contents:
```bash
echo $PATH
echo $LD_LIBRARY_PATH
```
so you will find out what is inside.

It's possible that `LD_LIBRARY_PATH` is empty.

`PATH` lists the locations where executables can be found, and `LD_LIBRARY_PATH` is where shared libraries are looked for. In both cases, earlier entries have priority over later ones. `:` is used to separate multiple entries.

Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths so they are listed first by doing:
```bash
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
```
Note that we aren't overwriting the existing values, but prepending instead.

Of course, adjust the version number and the full path if need be. Check that the directories you assign actually do exist. The `lib64` sub-directory is where the various CUDA `.so` objects, like `libcudart.so`, reside; it's unlikely that your system will have it named differently, but if it does, adjust it to reflect your reality.
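The prepend-don't-overwrite rule can be expressed as a tiny helper (a sketch for illustration; in practice you would just use the `export` lines above):

```python
def prepend_path(current, new_entry):
    """Prepend an entry to a colon-separated search path so it takes
    priority over existing entries, without dropping any of them."""
    if not current:
        return new_entry
    entries = current.split(":")
    if entries[0] == new_entry:
        return current  # already first, nothing to do
    return new_entry + ":" + current


print(prepend_path("/usr/bin:/bin", "/usr/local/cuda-10.2/bin"))
# /usr/local/cuda-10.2/bin:/usr/bin:/bin
```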
#### Possible problem #3
Some older CUDA versions may refuse to build with newer compilers. For example, you may have `gcc-9` but CUDA wants `gcc-7`.

There are various ways to go about it.

If you can install the latest CUDA toolkit, it typically should support the newer compiler.

Alternatively, you could install a lower version of the compiler in addition to the one you already have, or you may already have it but it's not the default one, so the build system can't see it. If you have `gcc-7` installed but the build system complains it can't find it, the following might do the trick:
```bash
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
```
Here, we are making a symlink to `gcc-7` from `/usr/local/cuda-10.2/bin/gcc`, and since `/usr/local/cuda-10.2/bin/` should be in the `PATH` environment variable (see the previous problem's solution), it should find `gcc-7` (and `g++7`) and the build will succeed.

As always, make sure to edit the paths in the example to match your situation.
### PyTorch Fully Sharded Data parallel

To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model. This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters. To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-full-sharded-data-Parallel-api/). We have integrated the latest PyTorch Fully Sharded Data Parallel (FSDP) training feature. All you need to do is enable it through the config.

**Required PyTorch version for FSDP support**: PyTorch Nightly (or 1.12.0 if you read this after its release), as model saving with FSDP activated is only available with recent fixes.
**Usage**:

- Make sure you have added the distributed launcher `-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE` if you haven't been using it already.
- **Sharding Strategy**:
  - FULL_SHARD : Shards optimizer states + gradients + model parameters across data parallel workers/GPUs. For this, add `--fsdp full_shard` to the command line arguments.
  - SHARD_GRAD_OP : Shards optimizer states + gradients across data parallel workers/GPUs. For this, add `--fsdp shard_grad_op` to the command line arguments.
  - NO_SHARD : No sharding. For this, add `--fsdp no_shard` to the command line arguments.
- To offload the parameters and gradients to the CPU, add `--fsdp "full_shard offload"` or `--fsdp "shard_grad_op offload"` to the command line arguments.
- To automatically recursively wrap layers with FSDP using `default_auto_wrap_policy`, add `--fsdp "full_shard auto_wrap"` or `--fsdp "shard_grad_op auto_wrap"` to the command line arguments.
- To enable both CPU offloading and auto wrapping, add `--fsdp "full_shard offload auto_wrap"` or `--fsdp "shard_grad_op offload auto_wrap"` to the command line arguments.
- The remaining FSDP config is passed via `--fsdp_config <path_to_fsdp_config.json>`. It is either the location of an FSDP json config file (e.g., `fsdp_config.json`) or an already loaded json file as a `dict`.
  - If auto wrapping is enabled, you can either use a transformer based auto wrap policy or a size based auto wrap policy.
    - For the transformer based auto wrap policy, it is recommended to specify `fsdp_transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`] .... This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
    - For the size based auto wrap policy, please add `fsdp_min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping.
  - `fsdp_backward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. `backward_pre` and `backward_post` are the available options. For more information refer to `torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch`.
  - `fsdp_forward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters. If `True`, FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
  - `limit_all_gathers` can be specified in the config file. If `True`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
  - `activation_checkpointing` can be specified in the config file. If `True`, FSDP activation checkpointing is used: a technique that reduces memory usage by clearing the activations of certain layers and recomputing them during the backward pass. Effectively, this trades extra computation time for reduced memory usage.

**A few caveats to be aware of**

- it is incompatible with `generate`, and thus is incompatible with `--predict_with_generate` in all seq2seq/clm scripts (translation/summarization/clm etc.). Please refer to issue [#21667](https://github.com/huggingface/transformers/issues/21667).
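Putting the config-file options above together, a `fsdp_config.json` passed via `--fsdp_config` might look like the following (a sketch; the layer class name is an example and should match your model):

```json
{
  "fsdp_transformer_layer_cls_to_wrap": ["BertLayer"],
  "fsdp_backward_prefetch": "backward_pre",
  "fsdp_forward_prefetch": false,
  "limit_all_gathers": true,
  "activation_checkpointing": false
}
```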
### PyTorch/XLA Fully Sharded Data parallel

For all the TPU users, great news! PyTorch/XLA now supports FSDP. All the latest Fully Sharded Data Parallel (FSDP) training is supported. For more information refer to [Scaling PyTorch models on Cloud TPUs with FSDP](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) and the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp). All you need to do is enable it through the config.

**Required PyTorch/XLA version for FSDP support**: >=2.0

**Usage**:

Pass `--fsdp "full shard"` along with the following changes to be made in `--fsdp_config <path_to_fsdp_config.json>`:
- `xla` should be set to `True` to enable PyTorch/XLA FSDP.
- `xla_fsdp_settings` is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see [here](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py).
- `xla_fsdp_grad_ckpt`. When `True`, uses gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through `fsdp_min_num_params` or `fsdp_transformer_layer_cls_to_wrap`.
- You can either use a transformer based auto wrap policy or a size based auto wrap policy.
  - For the transformer based auto wrap policy, it is recommended to specify `fsdp_transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`] .... This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
  - For the size based auto wrap policy, please add `fsdp_min_num_params` in the config file. It specifies FSDP's minimum number of parameters for auto wrapping.
### Using Trainer for accelerated PyTorch Training on Mac

With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device. This maps computation graphs and primitives onto the MPS Graph framework and tuned kernels provided by MPS. For more information please refer to the official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/) and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).

<Tip warning={false}>

We strongly recommend installing PyTorch >= 1.13 (nightly version at the time of writing) on your MacOS machine. It has major fixes related to model correctness and performance improvements for transformer based models. Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.

</Tip>

**Benefits of Training and Inference using Apple Silicon Chips**

1. Enables users to train larger networks or batch sizes locally
2. Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to the unified memory architecture, thereby improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.

**Pre-requisites**: To install torch with mps support, please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).

**Usage**:
The `mps` device will be used by default if available, similar to the way the `cuda` device is used. Therefore, no action from the user is required. For example, you can run the official Glue text classification task (from the root folder) using an Apple Silicon GPU with the below command:
```bash
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
**A few caveats to be aware of**

1. Some PyTorch operations have not been implemented in mps and will throw an error. One way to get around that is to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`, which will fall back to CPU for these operations. It still throws a UserWarning, however.
2. Distributed setups `gloo` and `nccl` do not work with the `mps` device. This means that currently only a single GPU of `mps` device type can be used.

Finally, please remember that the 🤗 `Trainer` only integrates the MPS backend, therefore if you have any problems or questions regarding MPS backend usage, please file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).
## Using Accelerate Launcher with Trainer

We're now powering the Trainer with Accelerate. What users should expect:
- They can keep using the Trainer integrations such as FSDP, DeepSpeed via trainer arguments without any changes on their part.
- They can now use the Accelerate Launcher with the Trainer (recommended).

Steps to use the Accelerate Launcher with the Trainer:
1. Make sure 🤗 Accelerate is installed; you can't use the `Trainer` without it anyway. If not, `pip install accelerate`. You may also need to update your version of Accelerate: `pip install accelerate --upgrade`
2. Run `accelerate config` and fill the questionnaire. Below are example accelerate configs:
   a. DDP Multi-node Multi-GPU config:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0 #change rank as per the node
main_process_ip: 192.168.20.1
main_process_port: 9898
main_training_function: main
mixed_precision: fp16
num_machines: 2
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
b. FSDP config:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BertLayer
  fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
   c. DeepSpeed config pointing to a file:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/user/configs/ds_zero3_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
   d. DeepSpeed config using the accelerate plugin:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 0.7
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
3. Run the Trainer script with args other than the ones handled above by the accelerate config or launcher args.
Below is an example of running `run_glue.py` using `accelerate launcher` with the FSDP config from above.
```bash
cd transformers
accelerate launch \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
4. You can also directly use the cmd args for `accelerate launch`. The above example would map to:
```bash
cd transformers
accelerate launch --num_processes=2 \
--use_fsdp \
--mixed_precision=bf16 \
--fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \
--fsdp_transformer_layer_cls_to_wrap="BertLayer" \
--fsdp_sharding_strategy=1 \
--fsdp_state_dict_type=FULL_STATE_DICT \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
For more information, please refer to the 🤗 Accelerate CLI guide: [Launching your 🤗 Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch).

Sections that were moved:
[ <a href="./deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed"></a>
| <a href="./deepspeed#deepspeed-installation">Installation</a><a id="installation"></a>
| <a href="./deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deployment-with-multiple-gpus"></a>
| <a href="./deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deployment-with-one-gpu"></a>
| <a href="./deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deployment-in-notebooks"></a>
| <a href="./deepspeed#deepspeed-config">Configuration</a><a id="configuration"></a>
| <a href="./deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="passing-configuration"></a>
| <a href="./deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="shared-configuration"></a>
| <a href="./deepspeed#deepspeed-zero">ZeRO</a><a id="zero"></a>
| <a href="./deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="zero-2-config"></a>
| <a href="./deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="zero-3-config"></a>
| <a href="./deepspeed#deepspeed-nvme">NVMe Support</a><a id="nvme-support"></a>
| <a href="./deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="zero-2-vs-zero-3-performance"></a>
| <a href="./deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="zero-2-example"></a>
| <a href="./deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="zero-3-example"></a>
| <a href="./deepspeed#deepspeed-optimizer">Optimizer</a><a id="optimizer"></a>
| <a href="./deepspeed#deepspeed-scheduler">Scheduler</a><a id="scheduler"></a>
| <a href="./deepspeed#deepspeed-fp32">fp32 Precision</a><a id="fp32-precision"></a>
| <a href="./deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="automatic-mixed-precision"></a>
| <a href="./deepspeed#deepspeed-bs">Batch Size</a><a id="batch-size"></a>
| <a href="./deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="gradient-accumulation"></a>
| <a href="./deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="gradient-clipping"></a>
| <a href="./deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="getting-the-model-weights-out"></a>
]
hf_public_repos/transformers/docs/source/ja/main_classes/model.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Models
ใใผในใฏใฉในใงใใ [`PreTrainedModel`]ใ[`TFPreTrainedModel`]ใ[`FlaxPreTrainedModel`] ใฏใใขใใซใฎ่ชญใฟ่พผใฟใจไฟๅญใซ้ขใใๅ
ฑ้ใฎใกใฝใใใๅฎ่ฃ
ใใฆใใใใใใฏใญใผใซใซใฎใใกใคใซใใใฃใฌใฏใใชใใใใพใใฏใฉใคใใฉใชใๆไพใใไบๅๅญฆ็ฟใขใใซๆงๆ๏ผHuggingFaceใฎAWS S3ใชใใธใใชใใใใฆใณใญใผใ๏ผใใใขใใซใ่ชญใฟ่พผใใใใซไฝฟ็จใงใใพใใ
[`PreTrainedModel`] ใจ [`TFPreTrainedModel`] ใฏใๆฌกใฎๅ
ฑ้ใฎใกใฝใใใๅฎ่ฃ
ใใฆใใพใ๏ผ
- ่ชๅฝใซๆฐใใใใผใฏใณใ่ฟฝๅ ใใใๅ ดๅใซใๅ
ฅๅใใผใฏใณๅใ่พผใฟใฎใชใตใคใบใ่กใ
- ใขใใซใฎใขใใณใทใงใณใใใใๅใ่พผใ
ๅใขใใซใซๅ
ฑ้ใใใใฎไปใฎใกใฝใใใฏใ[`~modeling_utils.ModuleUtilsMixin`]๏ผPyTorchใขใใซ็จ๏ผใใใณ[`~modeling_tf_utils.TFModuleUtilsMixin`]๏ผTensorFlowใขใใซ็จ๏ผใงๅฎ็พฉใใใฆใใใใใญในใ็ๆใฎๅ ดๅใ[`~generation.GenerationMixin`]๏ผPyTorchใขใใซ็จ๏ผใ[`~generation.TFGenerationMixin`]๏ผTensorFlowใขใใซ็จ๏ผใใใใณ[`~generation.FlaxGenerationMixin`]๏ผFlax/JAXใขใใซ็จ๏ผใใใใพใใ
## PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
<a id='from_pretrained-torch-dtype'></a>
### ๅคง่ฆๆจกใขใใซใฎ่ชญใฟ่พผใฟ
Transformers 4.20.0ใงใฏใ[`~PreTrainedModel.from_pretrained`] ใกใฝใใใๅ่จญ่จใใใ[Accelerate](https://huggingface.co/docs/accelerate/big_modeling) ใไฝฟ็จใใฆๅคง่ฆๆจกใขใใซใๆฑใใใจใๅฏ่ฝใซใชใใพใใใใใใซใฏ Accelerate >= 0.9.0 ใจ PyTorch >= 1.9.0 ใๅฟ
่ฆใงใใไปฅๅใฎๆนๆณใงใใซใขใใซใไฝๆใใใใฎๅพไบๅๅญฆ็ฟใฎ้ใฟใ่ชญใฟ่พผใไปฃใใใซ๏ผใใใซใฏใกใขใชๅ
ใฎใขใใซใตใคใบใ2ๅๅฟ
่ฆใงใใฉใณใใ ใซๅๆๅใใใใขใใซ็จใจ้ใฟ็จใฎ2ใคใๅฟ
่ฆใงใใ๏ผใใขใใซใ็ฉบใฎๅคๆฎปใจใใฆไฝๆใใไบๅๅญฆ็ฟใฎ้ใฟใ่ชญใฟ่พผใพใใใจใใซใใฉใกใผใฟใผใๅฎไฝๅใใใชใใทใงใณใ่ฟฝๅ ใใใพใใใ
ใใฎใชใใทใงใณใฏ `low_cpu_mem_usage=True` ใงๆๅนใซใงใใพใใใขใใซใฏใพใ็ฉบใฎ้ใฟใๆใคใกใฟใใใคในไธใซไฝๆใใใใใฎๅพ็ถๆ
่พๆธใๅ
้จใซ่ชญใฟ่พผใพใใพใ๏ผใทใฃใผใใใใใใงใใฏใใคใณใใฎๅ ดๅใใทใฃใผใใใจใซ่ชญใฟ่พผใพใใพใ๏ผใใใฎๆนๆณใงไฝฟ็จใใใๆๅคงRAMใฏใใขใใซใฎๅฎๅ
จใชใตใคใบใ ใใงใใ
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```
ใใใซใใขใใซใๅฎๅ
จใซRAMใซๅใพใใชใๅ ดๅ๏ผ็พๆ็นใงใฏๆจ่ซใฎใฟๆๅน๏ผใ็ฐใชใใใใคในใซใขใใซใ็ดๆฅ้
็ฝฎใงใใพใใ`device_map="auto"` ใไฝฟ็จใใใจใAccelerateใฏๅใฌใคใคใผใใฉใฎใใใคในใซ้
็ฝฎใใใใๆฑบๅฎใใๆ้ใฎใใใคใน๏ผGPU๏ผใๆๅคง้ใซๆดป็จใใๆฎใใฎ้จๅใCPUใใใใใฏGPU RAMใไธ่ถณใใฆใใๅ ดๅใฏใใผใใใฉใคใใซใชใใญใผใใใพใใใขใใซใ่คๆฐใฎใใใคในใซๅๅฒใใใฆใใฆใใ้ๅธธใฉใใๅฎ่กใใใพใใ
`device_map` ใๆธกใ้ใ`low_cpu_mem_usage` ใฏ่ชๅ็ใซ `True` ใซ่จญๅฎใใใใใใใใใๆๅฎใใๅฟ
่ฆใฏใใใพใใใ
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
ใขใใซใใใใคใน้ใงใฉใฎใใใซๅๅฒใใใใใฏใใใฎ `hf_device_map` ๅฑๆงใ่ฆใใใจใง็ขบ่ชใงใใพใ:
```py
t0pp.hf_device_map
```
```python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
```
ๅใใใฉใผใใใใซๅพใฃใฆใ็ฌ่ชใฎใใใคในใใใใไฝๆใใใใจใใงใใพใ๏ผใฌใคใคใผๅใใใใใคในใธใฎ่พๆธใงใ๏ผใใขใใซใฎใในใฆใฎใใฉใกใผใฟใๆๅฎใใใใใใคในใซใใใใใๅฟ
่ฆใใใใพใใใ1ใคใฎใฌใคใคใผใๅฎๅ
จใซๅใใใใคในใซใใๅ ดๅใใใฎใฌใคใคใผใฎใตใใขใธใฅใผใซใฎใในใฆใใฉใใซ่กใใใฎ่ฉณ็ดฐใ็คบใๅฟ
่ฆใฏใใใพใใใไพใใฐใๆฌกใฎใใใคในใใใใฏT0ppใซ้ฉใใฆใใพใ๏ผGPUใกใขใชใใใๅ ดๅ๏ผ:
```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```
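As a rough illustration of what such a map must guarantee, the sketch below (a hypothetical helper written for this guide, not part of Transformers) checks that every module name is assigned to some device by matching it against the map's prefixes:

```python
# Hypothetical sketch: verify that a device_map covers a set of module names.
# A module is covered if the map assigns it directly or assigns one of its
# parent prefixes (e.g. "encoder" covers "encoder.block.0").

def covers(device_map, module_name):
    parts = module_name.split(".")
    # check the full name and every parent prefix against the map
    return any(".".join(parts[:i]) in device_map for i in range(len(parts), 0, -1))

device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}

modules = ["shared", "encoder.block.0", "decoder.block.23", "lm_head"]
assert all(covers(device_map, m) for m in modules)
assert not covers(device_map, "classifier")  # an unmapped module would be an error
```

A map that fails such a check would leave some parameters without a device, which is exactly what the paragraph above warns against.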
ใขใใซใฎใกใขใชใธใฎๅฝฑ้ฟใๆๅฐ้ใซๆใใใใ 1 ใคใฎๆนๆณใฏใไฝ็ฒพๅบฆใฎ dtype (`torch.float16` ใชใฉ) ใงใขใใซใใคใณในใฟใณในๅใใใใไปฅไธใง่ชฌๆใใ็ดๆฅ้ๅญๅๆๆณใไฝฟ็จใใใใจใงใใ
### Model Instantiation dtype
Pytorch ใงใฏใใขใใซใฏ้ๅธธ `torch.float32` ๅฝขๅผใงใคใณในใฟใณในๅใใใพใใใใใฏใใใใใจใใใจๅ้กใซใชใๅฏ่ฝๆงใใใใพใ
้ใฟใ fp16 ใซใใใขใใซใใญใผใใใใจใ2 ๅใฎใกใขใชใๅฟ
่ฆใซใชใใใใงใใใใฎๅถ้ใๅ
ๆใใใซใฏใๆฌกใฎใใจใใงใใพใใ
`torch_dtype` ๅผๆฐใไฝฟ็จใใฆใ็ฎ็ใฎ `dtype` ใๆ็คบ็ใซๆธกใใพใใ
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```
ใพใใฏใใขใใซใๅธธใซๆ้ฉใชใกใขใช ใใฟใผใณใงใญใผใใใใๅ ดๅใฏใ็นๅฅใชๅค `"auto"` ใไฝฟ็จใงใใพใใ
ใใใฆใ`dtype` ใฏใขใใซใฎ้ใฟใใ่ชๅ็ใซๅฐๅบใใใพใใ
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
ในใฏใฉใใใใใคใณในใฟใณในๅใใใใขใใซใซใฏใใฉใฎ `dtype` ใไฝฟ็จใใใใๆ็คบใใใใจใใงใใพใใ
```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```
Pytorch ใฎ่จญ่จใซใใใใใฎๆฉ่ฝใฏๆตฎๅๅฐๆฐ็น dtype ใงใฎใฟไฝฟ็จใงใใพใใ
## ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
## TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
## TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
## FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
## Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
## Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
hf_public_repos/transformers/docs/source/ja/main_classes/tokenizer.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tokenizer
ใใผใฏใใคใถใผใฏใใขใใซใฎๅ
ฅๅใฎๆบๅใๆ
ๅฝใใพใใใฉใคใใฉใชใซใฏใใในใฆใฎใขใใซใฎใใผใฏใใคใถใผใๅซใพใใฆใใพใใใปใจใใฉ
ใใผใฏใใคใถใผใฎไธ้จใฏใๅฎๅ
จใช Python ๅฎ่ฃ
ใจใ
Rust ใฉใคใใฉใช [๐ค Tokenizers](https://github.com/huggingface/tokenizers)ใ ใ้ซ้ใๅฎ่ฃ
ใงใฏๆฌกใฎใใจใๅฏ่ฝใซใชใใพใใ
1. ็นใซใใใใใผใฏใณๅใ่กใๅ ดๅใฎๅคงๅน
ใชในใใผใใขใใใจ
2. ๅ
ใฎๆๅญๅ (ๆๅญใจๅ่ช) ใจใใผใฏใณ็ฉบ้ใฎ้ใงใใใใณใฐใใ่ฟฝๅ ใฎใกใฝใใ (ไพ:
็นๅฎใฎๆๅญใๅซใใใผใฏใณใฎใคใณใใใฏในใใพใใฏ็นๅฎใฎใใผใฏใณใซๅฏพๅฟใใๆๅญใฎ็ฏๅฒ๏ผใ
ๅบๆฌใฏใฉใน [`PreTrainedTokenizer`] ใใใณ [`PreTrainedTokenizerFast`]
ใขใใซๅ
ฅๅใฎๆๅญๅๅ
ฅๅใใจใณใณใผใใ (ไปฅไธใๅ็
ง)ใPython ใใคใณในใฟใณในๅ/ไฟๅญใใใใใฎไธ่ฌ็ใชใกใฝใใใๅฎ่ฃ
ใใพใใ
ใญใผใซใซ ใใกใคใซใพใใฏใใฃใฌใฏใใชใใพใใฏใฉใคใใฉใชใซใใฃใฆๆไพใใใไบๅใใฌใผใใณใฐๆธใฟใใผใฏใใคใถใผใใใฎใ้ซ้ใใใผใฏใใคใถใผ
(HuggingFace ใฎ AWS S3 ใชใใธใใชใใใใฆใณใญใผใ)ใไบไบบใจใ้ ผใใซใใฆใใใฎใฏใ
ๅ
ฑ้ใกใฝใใใๅซใ [`~tokenization_utils_base.PreTrainedTokenizerBase`]
[`~tokenization_utils_base.SpecialTokensMixin`]ใ
ใใใใฃใฆใ[`PreTrainedTokenizer`] ใจ [`PreTrainedTokenizerFast`] ใฏใกใคใณใๅฎ่ฃ
ใใพใใ
ใในใฆใฎใใผใฏใใคใถใผใไฝฟ็จใใใใใฎใกใฝใใ:
- ใใผใฏใณๅ (ๆๅญๅใใตใใฏใผใ ใใผใฏใณๆๅญๅใซๅๅฒ)ใใใผใฏใณๆๅญๅใ ID ใซๅคๆใใใใใใฎ้ใฎๅคๆใ่กใฃใใใใพใใ
ใจใณใณใผใ/ใใณใผใ (ใคใพใใใใผใฏใณๅใจๆดๆฐใธใฎๅคๆ)ใ
- ๅบ็คใจใชใๆง้ (BPEใSentencePiece...) ใใ็ฌ็ซใใๆนๆณใงใ่ชๅฝใซๆฐใใใใผใฏใณใ่ฟฝๅ ใใพใใ
- ็นๅฅใชใใผใฏใณ (ใในใฏใๆใฎๅงใพใใชใฉ) ใฎ็ฎก็: ใใผใฏใณใฎ่ฟฝๅ ใๅฑๆงใธใฎๅฒใๅฝใฆใ
ใใผใฏใใคใถใผใซใใใ็ฐกๅใซใขใฏใปในใงใใใใผใฏใณๅไธญใซๅๅฒใใใชใใใใซใใใใจใใงใใพใใ
[`BatchEncoding`] ใฏใ
[`~tokenization_utils_base.PreTrainedTokenizerBase`] ใฎใจใณใณใผใ ใกใฝใใ (`__call__`ใ
`encode_plus` ใใใณ `batch_encode_plus`) ใงใใใPython ่พๆธใใๆดพ็ใใฆใใพใใใใผใฏใใคใถใผใ็ด็ฒใช Python ใฎๅ ดๅ
tokenizer ใฎๅ ดๅใใใฎใฏใฉในใฏๆจๆบใฎ Python ่พๆธใจๅใใใใซๅไฝใใใซใใฃใฆ่จ็ฎใใใใใพใใพใชใขใใซๅ
ฅๅใไฟๆใใพใใ
ใใใใฎใกใฝใใ (`input_ids`ใ`attention_mask`...)ใใใผใฏใใคใถใผใใ้ซ้ใใใผใฏใใคใถใผใงใใๅ ดๅ (ใคใพใใ
HuggingFace [ใใผใฏใใคใถใผ ใฉใคใใฉใช](https://github.com/huggingface/tokenizers))ใใใฎใฏใฉในใฏใใใซๆไพใใพใ
ๅ
ใฎๆๅญๅ (ๆๅญใจๅ่ช) ใจ
ใใผใฏใณในใใผใน (ไพ: ๆๅฎใใใๆๅญใพใใฏๅฏพๅฟใใๆๅญใฎ็ฏๅฒใๆงๆใใใใผใฏใณใฎใคใณใใใฏในใฎๅๅพ)
ไธใใใใใใผใฏใณใซ๏ผใ
## PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## PreTrainedTokenizerFast
[`PreTrainedTokenizerFast`] ใฏ [tokenizers](https://huggingface.co/docs/tokenizers) ใฉใคใใฉใชใซไพๅญใใพใใ ๐ค ใใผใฏใใคใถใผ ใฉใคใใฉใชใใๅๅพใใใใผใฏใใคใถใผใฏใ
๐ค ใใฉใณในใซ้ๅธธใซ็ฐกๅใซใญใผใใใใพใใใใใใฉใฎใใใซ่กใใใใใ็่งฃใใใซใฏใ[๐ค tokenizers ใใใฎ tokenizers ใไฝฟ็จใใ](../fast_tokenizers) ใใผใธใๅ็
งใใฆใใ ใใใ
[[autodoc]] PreTrainedTokenizerFast
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## BatchEncoding
[[autodoc]] BatchEncoding
hf_public_repos/transformers/docs/source/ja/main_classes/deepspeed.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeepSpeed Integration
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
3. Parameter partitioning (ZeRO stage 3)
4. Custom mixed precision training handling
5. A range of fast CUDA-extension-based optimizers
6. ZeRO-Offload to CPU and NVMe
ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe support is described in the paper [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857).
DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won't be possible on a single GPU.
🤗 Transformers integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via two options:
1. Integration of the core DeepSpeed features via [`Trainer`]. It's an everything-done-for-you type of integration: just supply your custom config file or use our template and you have nothing else to do. Most of this document is focused on this feature.
2. If you don't use [`Trainer`] and want to use your own Trainer where you integrated DeepSpeed yourself, core functions like `from_pretrained` and `from_config` include integration of essential parts of DeepSpeed, such as `zero.Init` for ZeRO stage 3 and higher. To tap into this feature read the docs on [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration).
What is integrated:
Training:
1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVMe offload).
Inference:
1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn't use an optimizer or a lr scheduler, and only stage 3 is relevant. For more details see:
[ZeRO Inference](#zero-inference).
There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon).
<a id='deepspeed-trainer-integration'></a>
## Trainer Deepspeed Integration
<a id='deepspeed-installation'></a>
### Installation
pypi ็ต็ฑใงใฉใคใใฉใชใใคใณในใใผใซใใพใใ
```bash
pip install deepspeed
```
ใพใใฏ`tansformers`, `extras`็ต็ฑ:
```bash
pip install transformers[deepspeed]
```
ใพใใฏใ[DeepSpeed ใฎ GitHub ใใผใธ](https://github.com/microsoft/deepspeed#installation) ใง่ฉณ็ดฐใ็ขบ่ชใใฆใใ ใใใ
[้ซๅบฆใชใคใณในใใผใซ](https://www.deepspeed.ai/tutorials/advanced-install/)ใ
ใใใงใใใซใใซ่ฆๅดใใๅ ดๅใฏใใพใ [CUDA ๆกๅผตๆฉ่ฝใฎใคใณในใใผใซ ใใผใ](trainer#cuda-extension-installation-notes) ใๅฟ
ใ่ชญใใงใใ ใใใ
ๆกๅผตๆฉ่ฝใไบๅใใซใใใใๅฎ่กๆใซๆกๅผตๆฉ่ฝใใใซใใใใใใจใซไพๅญใใฆใใใไธ่จใฎ่งฃๆฑบ็ญใใในใฆ่ฉฆใใๅ ดๅ
ใใใๅฝนใซ็ซใใชใใฃใๅ ดๅใๆฌกใซ่ฉฆใในใใใจใฏใใขใธใฅใผใซใใคใณในใใผใซใใๅใซใขใธใฅใผใซใไบๅใซใใซใใใใใจใงใใ
DeepSpeed ใฎใญใผใซใซ ใใซใใไฝๆใใใซใฏ:
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \
--global-option="build_ext" --global-option="-j8" --no-cache -v \
--disable-pip-version-check 2>&1 | tee build.log
```
NVMe ใชใใญใผใใไฝฟ็จใใๅ ดๅใฏใไธ่จใฎๆ้ ใซ`DS_BUILD_AIO=1`ใๅซใใๅฟ
่ฆใใใใพใ (ใพใใ
*libaio-dev* ใทในใใ ๅ
จไฝใซใคใณในใใผใซใใพใ)ใ
`TORCH_CUDA_ARCH_LIST` ใ็ทจ้ใใฆใไฝฟ็จใใ GPU ใซใผใใฎใขใผใญใใฏใใฃใฎใณใผใใๆฟๅ
ฅใใพใใใในใฆใไปฎๅฎใใใจ
ใใชใใฎใซใผใใฏๅใใงใๆฌกใฎๆนๆณใงใขใผใใๅๅพใงใใพใใ
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
```
ใใใใฃใฆใ`8, 6`ใๅๅพใใๅ ดๅใฏใ`TORCH_CUDA_ARCH_LIST="8.6"`ใไฝฟ็จใใพใใ่คๆฐใฎ็ฐใชใใซใผใใใๆใกใฎๅ ดๅใฏใใในใฆใใชในใใใใใจใใงใใพใ
ใใใใฎใใกใ`TORCH_CUDA_ARCH_LIST="6.1;8.6"`ใๅฅฝใใงใ
่คๆฐใฎใใทใณใงๅใใปใใใขใใใไฝฟ็จใใๅฟ
่ฆใใใๅ ดๅใฏใใใคใใช ใใคใผใซใไฝๆใใพใใ
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \
python setup.py build_ext -j8 bdist_wheel
```
It will generate something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` which now you can install as `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` locally or on any other machine.
Again, remember to ensure you adjust `TORCH_CUDA_ARCH_LIST` to the target architectures.
You can find the complete list of NVIDIA GPUs and their corresponding **Compute Capabilities** (same as arch in this context) [here](https://developer.nvidia.com/cuda-gpus).
ไปฅไธใไฝฟ็จใใฆใpytorch ใๆง็ฏใใใใขใผใใ็ขบ่ชใงใใพใใ
```bash
python -c "import torch; print(torch.cuda.get_arch_list())"
```
ใใใงใฏใใคใณในใใผใซใใใฆใใ GPU ใฎ 1 ใคใฎใขใผใใ่ฆใคใใๆนๆณใ่ชฌๆใใพใใใใจใใฐใGPU 0 ใฎๅ ดๅ:
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; \
print(torch.cuda.get_device_properties(torch.device('cuda')))"
```
ๅบๅใๆฌกใฎๅ ดๅ:
```bash
_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)
```
ใใใใใฐใใใฎใซใผใใฎใขใผใใ`8.6`ใงใใใใจใใใใใพใใ
`TORCH_CUDA_ARCH_LIST` ใๅฎๅ
จใซ็็ฅใใใใจใใงใใพใใใใใใใฐใใใซใ ใใญใฐใฉใ ใ่ชๅ็ใซใฏใจใชใๅฎ่กใใพใใ
ใใซใใ่กใใใ GPU ใฎใขใผใญใใฏใใฃใใใใฏใใฟใผใฒใใ ใใทใณใฎ GPU ใจไธ่ดใใๅ ดๅใใใใฐใไธ่ดใใชใๅ ดๅใใใใพใใ
็ฎ็ใฎใขใผใใๆ็คบ็ใซๆๅฎใใใใจใใๅงใใใพใใ
ๆๆกใใใใใจใใในใฆ่ฉฆใใฆใใพใ ใใซใใฎๅ้กใ็บ็ใใๅ ดๅใฏใGitHub ใฎๅ้กใซ้ฒใใงใใ ใใใ
[ใใฃใผใในใใผใ](https://github.com/microsoft/DeepSpeed/issues)ใ
<a id='deepspeed-multi-gpu'></a>
### Deployment with multiple GPUs
DeepSpeed ็ตฑๅใใใใญใคใใใซใฏใ[`Trainer`] ใณใใณใ ใฉใคใณๅผๆฐใ่ชฟๆดใใฆๆฐใใๅผๆฐ `--deepspeed ds_config.json` ใๅซใใพใใใใใงใ`ds_config.json` ใฏ DeepSpeed ๆงๆใใกใคใซใงใใ
[ใใกใ](https://www.deepspeed.ai/docs/config-json/)ใซ่จ่ผใใใฆใใพใใใใกใคใซๅใฏใใชใๆฌก็ฌฌใงใใ
DeepSpeed ใฎ`add_config_arguments`ใฆใผใใฃใชใใฃใไฝฟ็จใใฆใๅฟ
่ฆใชใณใใณใ ใฉใคใณๅผๆฐใใณใผใใซ่ฟฝๅ ใใใใจใใๅงใใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใ[DeepSpeed ใฎๅผๆฐ่งฃๆ](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) ใใญใฅใกใณใใๅ็
งใใฆใใ ใใใ
ใใใง้ธๆใใใฉใณใใฃใผใไฝฟ็จใงใใพใใ pytorch ใฉใณใใฃใผใๅผใ็ถใไฝฟ็จใงใใพใใ
```bash
torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ใพใใฏใ`deepspeed`ใซใใฃใฆๆไพใใใใฉใณใใฃใผใไฝฟ็จใใพใใ
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ใ่ฆงใฎใจใใใๅผๆฐใฏๅใใงใฏใใใพใใใใใปใจใใฉใฎใใผใบใงใฏใฉใกใใงใๆฉ่ฝใใพใใใฎ
ใใพใใพใชใใผใใจ GPU ใๆงๆใใๆนๆณใฎ่ฉณ็ดฐใซใคใใฆใฏใ[ใใกใ](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ใๅ็
งใใฆใใ ใใใ
`deepspeed`ใฉใณใใฃใผใไฝฟ็จใใๅฉ็จๅฏ่ฝใชใในใฆใฎ GPU ใไฝฟ็จใใใๅ ดๅใฏใ`--num_gpus`ใใฉใฐใ็็ฅใใใ ใใงใใ
ไปฅไธใฏใๅฉ็จๅฏ่ฝใชใในใฆใฎ GPU ใใใใญใคใใ DeepSpeed ใง`run_translation.py`ใๅฎ่กใใไพใงใใ
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
DeepSpeed ใฎใใญใฅใกใณใใซใฏใ`--deepspeed --deepspeed_config ds_config.json`ใ่กจ็คบใใใๅฏ่ฝๆงใ้ซใใใจใซๆณจๆใใฆใใ ใใใ
DeepSpeed ้ข้ฃใฎๅผๆฐใ 2 ใคใใใพใใใ็ฐกๅใซใใใใใงใใใๅฆ็ใในใๅผๆฐใใใงใซ้ๅธธใซๅคใใใใงใใ
ใใฎ 2 ใคใ 1 ใคใฎๅผๆฐใซ็ตๅใใพใใใ
ๅฎ้ใฎไฝฟ็จไพใซใคใใฆใฏใใใฎ [ๆ็จฟ](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) ใๅ็
งใใฆใใ ใใใ
<a id='deepspeed-one-gpu'></a>
### Deployment with one GPU
1 ใคใฎ GPU ใง DeepSpeed ใใใใญใคใใใซใฏใ[`Trainer`] ใณใใณใ ใฉใคใณๅผๆฐใๆฌกใฎใใใซ่ชฟๆดใใพใใ
```bash
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
ใใใฏ่คๆฐใฎ GPU ใฎๅ ดๅใจใปใผๅใใงใใใใใใงใฏใDeepSpeed ใซ 1 ใคใฎ GPU ใ ใใไฝฟ็จใใใใใซๆ็คบ็ใซๆ็คบใใพใใ
`--num_gpus=1`ใใใใฉใซใใงใฏใDeepSpeed ใฏๆๅฎใใใใใผใไธใง่ช่ญใงใใใในใฆใฎ GPU ใใใใญใคใใพใใ่ตทๅใใ GPU ใ 1 ใคใ ใใฎๅ ดๅ
ใฎๅ ดๅใใใฎๅผๆฐใฏๅฟ
่ฆใใใพใใใๆฌกใฎ [ใใญใฅใกใณใ](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ใงใฏใใฉใณใใฃใผ ใชใใทใงใณใซใคใใฆ่ชฌๆใใฆใใพใใ
1 ใคใฎ GPU ใ ใใง DeepSpeed ใไฝฟ็จใใใใฎใฏใชใใงใใ?
1. ไธ้จใฎ่จ็ฎใจใกใขใชใใในใใฎ CPU ใจ RAM ใซๅงไปปใงใใ ZeRO ใชใใญใผใๆฉ่ฝใๅใใฆใใใใใ
ใขใใซใฎใใผใบใซๅใใใฆใใๅคใใฎ GPU ใชใฝใผในใๆฎใใฆใใใพใใใใๅคงใใชใใใ ใตใคใบใใพใใฏ้ๅธธใซๅคงใใชใขใใซใฎใใฃใใใฃใณใฐใๅฏ่ฝใซใใ
ๆฎ้ใฏๅใใชใใงใใใใ
2. ในใใผใใช GPU ใกใขใช็ฎก็ใทในใใ ใๆไพใใใกใขใชใฎๆญ็ๅใๆๅฐ้ใซๆใใพใใ
ใใๅคงใใชใขใใซใจใใผใฟ ใใใใ
ๆฌกใซๆงๆใซใคใใฆ่ฉณใใ่ชฌๆใใพใใใๅไธใฎ GPU ใงๅคงๅน
ใชๆนๅใๅฎ็พใใใใใฎ้ตใฏๆฌกใฎใจใใใงใใ
DeepSpeed ใไฝฟ็จใใใซใฏใๆงๆใใกใคใซใซๅฐใชใใจใๆฌกใฎๆงๆใๅฟ
่ฆใงใใ
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true
}
}
```
ใใใซใใใใชใใใฃใใคใถใผใฎใชใใญใผใใใใฎไปใฎ้่ฆใชๆฉ่ฝใๆๅนใซใชใใพใใใใใใก ใตใคใบใ่ฉฆใใฆใฟใใจใใใงใใใใ
่ฉณ็ดฐใซใคใใฆใฏใไปฅไธใฎใใฃในใซใใทใงใณใๅ็
งใใฆใใ ใใใ
ใใฎใฟใคใใฎใใใญใคใกใณใใฎๅฎ้็ใชไฝฟ็จไพใซใคใใฆใฏใใใฎ [ๆ็จฟ](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685) ใๅ็
งใใฆใใ ใใใ
ใใฎใใญใฅใกใณใใง่ฉณใใ่ชฌๆใใใฆใใใใใซใCPU ใใใณ NVMe ใชใใญใผใใๅใใ ZeRO-3 ใ่ฉฆใใใจใใงใใพใใ
ใใผใ๏ผ
- GPU 0 ใจใฏ็ฐใชใ็นๅฎใฎ GPU ใงๅฎ่กใใๅฟ
่ฆใใใๅ ดๅใ`CUDA_VISIBLE_DEVICES` ใไฝฟ็จใใฆๅถ้ใใใใจใฏใงใใพใใใ
ๅฉ็จๅฏ่ฝใช GPU ใฎ่กจ็คบ็ฏๅฒใไปฃใใใซใๆฌกใฎๆงๆใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
```bash
deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ...
```
ใใฎไพใงใฏใDeepSpeed ใซ GPU 1 (2 ็ช็ฎใฎ GPU) ใไฝฟ็จใใใใใซๆ็คบใใพใใ
<a id='deepspeed-multi-node'></a>
### ่คๆฐใฎใใผใใไฝฟ็จใใใใใญใคใกใณใ
ใใฎใปใฏใทใงใณใฎๆ
ๅ ฑใฏ DeepSpeed ็ตฑๅใซๅบๆใฎใใฎใงใฏใชใใใใใใใใซใใใผใ ใใญใฐใฉใ ใซ้ฉ็จใงใใพใใใใ ใใDeepSpeed ใฏใSLURM ็ฐๅขใงใชใ้ใใไปใฎใฉใณใใฃใผใใใไฝฟใใใใ`deepspeed`ใฉใณใใฃใผใๆไพใใพใใ
ใใฎใปใฏใทใงใณใงใฏใใใใใ 8 GPU ใๅใใ 2 ใคใฎใใผใใใใใจไปฎๅฎใใพใใใพใใๆๅใฎใใผใใซใฏ `ssh hostname1` ใไฝฟ็จใใฆใ2 ็ช็ฎใฎใใผใใซใฏ `ssh hostname2` ใไฝฟ็จใใฆๆฅ็ถใงใใพใใไธกๆนใจใใในใฏใผใใชใใงใญใผใซใซใฎ ssh ็ต็ฑใง็ธไบใซๆฅ็ถใงใใๅฟ
่ฆใใใใพใใใใกใใใใใใใฎใในใ (ใใผใ) ๅใใไฝๆฅญใใฆใใๅฎ้ใฎใในใๅใซๅคๆดใใๅฟ
่ฆใใใใพใใ
#### The torch.distributed.run launcher
ใใจใใฐใ`torch.distributed.run` ใไฝฟ็จใใใซใฏใๆฌกใฎใใใซใใพใใ
```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```
ๅใใผใใซ SSH ใงๆฅ็ถใใใใใใใฎใใผใใงๅใใณใใณใใๅฎ่กใใๅฟ
่ฆใใใใพใใๆฅใๅฟ
่ฆใฏใใใพใใใใฉใณใใฃใผใฏไธกๆนใฎใใผใใๅๆใใใพใงๅพ
ๆฉใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใ[torchrun](https://pytorch.org/docs/stable/elastic/run.html) ใๅ็
งใใฆใใ ใใใใกใชใฟใซใใใใฏ pytorch ใฎๆฐใใผใธใงใณๅใฎ`torch.distributed.launch`ใ็ฝฎใๆใใใฉใณใใฃใผใงใใใใพใใ
#### ใใฃใผใในใใผใ ใฉใณใใฃใผ
ไปฃใใใซ`deepspeed`ใฉใณใใฃใผใไฝฟ็จใใใซใฏใใพใ`hostfile`ใใกใคใซใไฝๆใใๅฟ
่ฆใใใใพใใ
```
hostname1 slots=8
hostname2 slots=8
```
ใใใฆใๆฌกใฎใใใซ่ตทๅใงใใพใใ
```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```
`torch.distributed.run`ใฉใณใใฃใผใจใฏ็ฐใชใใ`deepspeed`ใฏไธกๆนใฎใใผใใงใใฎใณใใณใใ่ชๅ็ใซ่ตทๅใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใ[ใชใฝใผในๆงๆ (ใใซใใใผใ)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ใๅ็
งใใฆใใ ใใใ
#### Launching in a SLURM environment
SLURM ็ฐๅขใงใฏใๆฌกใฎใขใใญใผใใไฝฟ็จใงใใพใใไปฅไธใฏใ็นๅฎใฎ SLURM ็ฐๅขใซ้ฉๅใใใใใใซๅฟ
่ฆใช slurm ในใฏใชใใ `launch.slurm` ใงใใ
```bash
#SBATCH --job-name=test-nodes # name
#SBATCH --nodes=2 # nodes
#SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10 # number of cores per tasks
#SBATCH --gres=gpu:8 # number of gpus
#SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out # output file name
export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901
srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
--nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
--master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```
ใใจใฏๅฎ่กใในใฑใธใฅใผใซใใใ ใใงใใ
```bash
sbatch launch.slurm
```
#### Use of Non-shared filesystem
ใใใฉใซใใงใฏใDeepSpeed ใฏใใซใใใผใ็ฐๅขใๅ
ฑๆในใใฌใผใธใไฝฟ็จใใใใจใๆณๅฎใใฆใใพใใใใใๅฝใฆใฏใพใใใๅใใผใใใญใผใซใซ ใใกใคใซใทในใใ ใใๅ็
งใงใใชใๅ ดๅใฏใ่จญๅฎใใกใคใซใ่ชฟๆดใใฆ [`checkpoint`_section](https://www.deepspeed.ai/docs/config-json/#) ใๅซใใๅฟ
่ฆใใใใพใใใใงใใฏใใคใณใ ใชใใทใงใณ) ใๆฌกใฎ่จญๅฎใงๆๅฎใใพใใ
```json
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
ใใใใฏใ[`Trainer`] ใฎ `--save_on_each_node` ๅผๆฐใไฝฟ็จใใใใจใใงใใไธ่จใฎ่จญๅฎใฏ่ชๅ็ใซ่ฟฝๅ ใใใพใใ
<a id='deepspeed-notebook'></a>
### Deployment in Notebooks
ใใผใใใใฏใฎใปใซใในใฏใชใใใจใใฆๅฎ่กใใๅ ดๅใฎๅ้กใฏใไพๅญใใ้ๅธธใฎ`deepspeed`ใฉใณใใฃใผใใชใใใจใงใใ
็นๅฎใฎ่จญๅฎใงใฏใใใใใจใใฅใฌใผใใใๅฟ
่ฆใใใใพใใ
GPU ใ 1 ใคใ ใไฝฟ็จใใฆใใๅ ดๅใDeepSpeed ใไฝฟ็จใใใใใซใใผใใใใฏๅ
ใฎใใฌใผใใณใฐ ใณใผใใ่ชฟๆดใใๅฟ
่ฆใใใๆนๆณใฏๆฌกใฎใจใใใงใใ
```python
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
ๆณจ: `...` ใฏใ้ขๆฐใซๆธกใ้ๅธธใฎๅผๆฐใ่กจใใพใใ
่คๆฐใฎ GPU ใไฝฟ็จใใๅ ดๅใDeepSpeed ใๅไฝใใใซใฏใใซใใใญใปใน็ฐๅขใไฝฟ็จใใๅฟ
่ฆใใใใพใใใคใพใใใใชใใฏๆใฃใฆใใพใ
ใใฎ็ฎ็ใงใฉใณใใฃใผใไฝฟ็จใใใใจใฏใงใใพใใใใใใใฏใๆ็คบใใใๅๆฃ็ฐๅขใใจใใฅใฌใผใใใใใจใซใใฃใฆใฏๅฎ็พใงใใพใใใ
ใใฎใปใฏใทใงใณใฎๅ้ ญใงใ
็พๅจใฎใใฃใฌใฏใใชใฎใใผใใใใฏใซใใฎๅ ดใงๆงๆใใกใคใซใไฝๆใใใๅ ดๅใฏใๅฐ็จใฎ
ใปใซใฎๅ
ๅฎน:
```python no-style
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
```
ใใฌใผใใณใฐ ในใฏใชใใใใใผใใใใฏใฎใปใซใงใฏใชใ้ๅธธใฎใใกใคใซใซใใๅ ดๅใฏใๆฌกใฎใใใซใใฆ`deepspeed`ใ้ๅธธใฉใใ่ตทๅใงใใพใใ
็ดฐ่ใใใฎใทใงใซใใใจใใฐใ`run_translation.py` ใไฝฟ็จใใใซใฏใๆฌกใฎใใใซ่ตทๅใใพใใ
```python no-style
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```
ใพใใฏใ`%%bash` ใใธใใฏใไฝฟ็จใใใจใใทใงใซ ใใญใฐใฉใ ใๅฎ่กใใใใใฎ่คๆฐ่กใฎใณใผใใ่จ่ฟฐใใใใจใใงใใพใใ
```python no-style
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```
ใใฎใใใชๅ ดๅใใใฎใปใฏใทใงใณใฎๆๅใซ็คบใใใณใผใใฏๅฟ
่ฆใใใพใใใ
ๆณจ: `%%bash` ใใธใใฏใฏๅชใใฆใใพใใใ็พๆ็นใงใฏๅบๅใใใใใกใชใณใฐใใใใใใใญใปในใ็ตไบใใใพใงใญใฐใฏ่กจ็คบใใใพใใใ
ๅฎไบใใพใใ
<a id='deepspeed-config'></a>
### Configuration
่จญๅฎใใกใคใซใงไฝฟ็จใงใใ DeepSpeed ่จญๅฎใชใใทใงใณใฎๅฎๅ
จใชใฌใคใใซใคใใฆใฏใๆฌกใๅ็
งใใฆใใ ใใใ
[ๆฌกใฎใใญใฅใกใณใ](https://www.deepspeed.ai/docs/config-json/) ใซใขใฏใปในใใฆใใ ใใใ
ใใพใใพใชๅฎ้ใฎใใผใบใซๅฏพๅฟใใๆฐๅใฎ DeepSpeed ๆงๆไพใ [DeepSpeedExamples] (https://github.com/microsoft/DeepSpeedExamples)ใง่ฆใคใใใใจใใงใใพใใ
ใชใใธใใช:
```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
```
ไธ่จใฎใณใผใใ็ถใใฆใLamb ใชใใใฃใใคใถใผใๆงๆใใใใจใใฆใใใจใใพใใใใใใฃใฆใๆฌกใฎไธญใใๆค็ดขใงใใพใ
`.json` ใใกใคใซใฎไพ:
```bash
grep -i Lamb $(find . -name '*json')
```
ใใใซใใใคใใฎไพใ [ใกใคใณ ใชใใธใใช](https://github.com/microsoft/DeepSpeed) ใซใใใใพใใ
DeepSpeed ใไฝฟ็จใใๅ ดๅใฏใๅธธใซ DeepSpeed ๆงๆใใกใคใซใๆๅฎใใๅฟ
่ฆใใใใพใใใไธ้จใฎๆงๆใใฉใกใผใฟใซใฏ
ใณใใณใใฉใคใณ็ต็ฑใง่จญๅฎใใพใใๅพฎๅฆใช้ใใซใคใใฆใฏใใใฎใฌใคใใฎๆฎใใฎ้จๅใง่ชฌๆใใพใใ
DeepSpeed ๆงๆใใกใคใซใใฉใฎใใใชใใฎใใ็่งฃใใใใใซใZeRO ในใใผใธ 2 ๆฉ่ฝใๆๅนใซใใๆงๆใใกใคใซใๆฌกใซ็คบใใพใใ
ใชใใใฃใใคใถใผ็ถๆ
ใฎ CPU ใชใใญใผใใๅซใฟใ`AdamW`ใชใใใฃใใคใถใผใจ`WarmupLR`ในใฑใธใฅใผใฉใผใไฝฟ็จใใๆททๅใๆๅนใซใใพใใ
`--fp16` ใๆธกใใใๅ ดๅใฎ็ฒพๅบฆใใฌใผใใณใฐ:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
}
```
ใใญใฐใฉใ ใๅฎ่กใใใจใDeepSpeed ใฏ [`Trainer`] ใใๅใๅใฃใ่จญๅฎใใญใฐใซ่จ้ฒใใพใใ
ใณใณใฝใผใซใซๆธกใใใใใใๆ็ต็ใซใฉใฎใใใช่จญๅฎใๆธกใใใใฎใใๆญฃ็ขบใซ็ขบ่ชใงใใพใใ
<a id='deepspeed-config-passing'></a>
### Passing Configuration
ใใฎใใญใฅใกใณใใง่ชฌๆใใใใใซใ้ๅธธใDeepSpeed ่จญๅฎใฏ json ใใกใคใซใธใฎใในใจใใฆๆธกใใใพใใใ
ใใฌใผใใณใฐใฎ่จญๅฎใซใณใใณใ ใฉใคใณ ใคใณใฟใผใใงใคในใไฝฟ็จใใใไปฃใใใซใคใณในใฟใณในใไฝๆใใพใใ
[`Trainer`] via [`TrainingArguments`] ใใฎๅพใ`deepspeed` ๅผๆฐใซใคใใฆใฏๆฌกใฎใใจใใงใใพใ
ใในใใใใ `dict` ใๆธกใใพใใใใใซใใใใใฎๅ ดใงๆงๆใไฝๆใงใใใใใๆธใ่พผใๅฟ
่ฆใใใใพใใใ
[`TrainingArguments`] ใซๆธกใๅใซใใกใคใซ ใทในใใ ใๅคๆดใใพใใ
่ฆ็ดใใใจใๆฌกใฎใใจใใงใใพใใ
```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```
ใพใใฏ๏ผ
```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```
<a id='deepspeed-config-shared'></a>
### Shared Configuration
<Tip warning={true}>
ใใฎใปใฏใทใงใณใฏๅฟ
่ชญใงใ
</Tip>
[`Trainer`] ใจ DeepSpeed ใฎไธกๆนใๆญฃใใๆฉ่ฝใใใซใฏใใใใคใใฎ่จญๅฎๅคใๅฟ
่ฆใงใใ
ใใใใฃใฆใๆคๅบใๅฐ้ฃใชใจใฉใผใซใคใชใใๅฏ่ฝๆงใฎใใๅฎ็พฉใฎ็ซถๅใ้ฒใใใใซใใใใใๆงๆใใใใจใซใใพใใใ
[`Trainer`] ใณใใณใใฉใคใณๅผๆฐ็ต็ฑใ
ใใใซใไธ้จใฎๆงๆๅคใฏใขใใซใฎๆงๆใซๅบใฅใใฆ่ชๅ็ใซๅฐๅบใใใพใใ
่คๆฐใฎๅคใๆๅใง่ชฟๆดใใใใจใๅฟใใชใใงใใ ใใใ[`Trainer`] ใซๅคง้จๅใไปปใใใฎใๆๅใงใ
ใฎ่จญๅฎใ่กใใพใใ
ใใใใฃใฆใใใฎใฌใคใใฎๆฎใใฎ้จๅใงใฏใ็นๅฅใช่จญๅฎๅค `auto` ใ่กจ็คบใใใพใใใใใ่จญๅฎใใใจใ
ๆญฃใใๅคใพใใฏๆใๅน็็ใชๅคใซ่ชๅ็ใซ็ฝฎใๆใใใใพใใใใใ็ก่ฆใใใใจใ่ช็ฑใซ้ธๆใใฆใใ ใใ
ๆจๅฅจไบ้
ใๅ็
งใใๅคใๆ็คบ็ใซ่จญๅฎใใพใใใใฎๅ ดๅใๆฌกใฎ็นใซๅๅๆณจๆใใฆใใ ใใใ
[`Trainer`] ๅผๆฐใจ DeepSpeed ่จญๅฎใฏไธ่ดใใพใใใใจใใฐใๅใใใฎใไฝฟ็จใใฆใใพใใ
ๅญฆ็ฟ็ใใใใใตใคใบใใพใใฏๅพ้
็ดฏ็ฉ่จญๅฎ?ใใใใไธ่ดใใชใๅ ดๅใใใฌใผใใณใฐใฏ้ๅธธใซๅคฑๆใใๅฏ่ฝๆงใใใใพใ
ๆนๆณใๆคๅบใใใฎใ้ฃใใใใใชใใฏ่ญฆๅใๅใใพใใใ
DeepSpeed ใฎใฟใซๅบๆใฎๅคใใใใใซๅใใใฆๆๅใง่จญๅฎใใๅฟ
่ฆใใใๅคใไปใซใ่คๆฐใใใพใใ
ใใชใใฎ่ฆๆใ
็ฌ่ชใฎใใญใฐใฉใ ใงใDeepSpeed ๆงๆใใในใฟใผใจใใฆๅคๆดใใใๅ ดๅใฏใๆฌกใฎใขใใญใผใใไฝฟ็จใใใใจใใงใใพใใ
ใใใซๅบใฅใใฆ [`TrainingArguments`] ใ่จญๅฎใใพใใๆ้ ใฏๆฌกใฎใจใใใงใใ
1. ใในใฟใผๆงๆใจใใฆไฝฟ็จใใ DeepSpeed ๆงๆใไฝๆใพใใฏใญใผใใใพใ
2. ใใใใฎๅคใซๅบใฅใใฆ [`TrainingArguments`] ใชใใธใงใฏใใไฝๆใใพใ
`scheduler.params.total_num_steps`ใชใฉใฎไธ้จใฎๅคใฏๆฌกใฎใใใซ่จ็ฎใใใใใจใซๆณจๆใใฆใใ ใใใ
`train` ไธญใซ [`Trainer`] ใๅฎ่กใใพใใใใใกใใ่ชๅใง่จ็ฎใใใใจใใงใใพใใ
<a id='deepspeed-zero'></a>
### ZeRO
[Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) is the workhorse of DeepSpeed. It
supports 3 different levels (stages) of optimization. The first one is not quite interesting for scalability purposes,
therefore this document focuses on stages 2 and 3. Stage 3 is further improved by the latest addition of ZeRO-Infinity.
You will find more in-depth information in the DeepSpeed documentation.
The `zero_optimization` section of the configuration file is the most important part ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training)), since that is where you define
which ZeRO stages you want to enable and how to configure them. You will find the explanation for each parameter in the
DeepSpeed docs.
This section has to be configured exclusively via the DeepSpeed configuration - the [`Trainer`] provides
no equivalent command line arguments.
Note: currently DeepSpeed doesn't validate parameter names, so if you misspell any, it'll use the default setting for
the parameter that got misspelled. You can watch the DeepSpeed engine startup log messages to see what values it is
going to use.
<a id='deepspeed-zero2-config'></a>
#### ZeRO-2 Config
ไปฅไธใฏใZeRO ในใใผใธ 2 ใฎๆงๆไพใงใใ
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
}
}
```
**ๆง่ฝ่ชฟๆด๏ผ**
- `offload_optimizer` ใๆๅนใซใใใจใGPU RAM ใฎไฝฟ็จ้ใๅๆธใใใพใ (`"stage": 2` ใๅฟ
่ฆใงใ)
- `"overlap_comm": true` ใฏใGPU RAM ไฝฟ็จ้ใฎๅขๅ ใจใใฌใผใใชใใใฆใ้
ๅปถใใในใฆๅๆธใใพใใ `overlap_comm`ใฏ 4.5x ใไฝฟ็จใใพใ
`allgather_bucket_size`ใจ`reduce_bucket_size`ใฎๅคใใใใใฃใฆใ5e8 ใซ่จญๅฎใใใฆใใๅ ดๅใ9GB ใๅฟ
่ฆใซใชใใพใใ
ใใใใใชใณใ (`5e8 x 2Bytes x 2 x 4.5`)ใใใใใฃใฆใ8GB ไปฅไธใฎ RAM ใๆญ่ผใใ GPU ใไฝฟ็จใใฆใใๅ ดๅใ
OOM ใจใฉใผใ็บ็ใใๅ ดๅใฏใใใใใฎใใฉใกใผใฟใ`2e8`็จๅบฆใซๆธใใๅฟ
่ฆใใใใใใใซใฏ 3.6GB ใๅฟ
่ฆใซใชใใพใใใใใใใชใใงใใใ
OOM ใซ้ใๅงใใฆใใๅ ดๅใฏใใใๅคงๅฎน้ใฎ GPU ใงใๅๆงใงใใ
- ใใใใฎใใใใกใๆธใใใจใใใๅคใใฎ GPU RAM ใๅฉ็จใใใใใซ้ไฟก้ๅบฆใ็ ็ฒใซใใใใจใซใชใใพใใใใใใกใตใคใบใๅฐใใใปใฉใ
้ไฟกใ้
ใใชใใไปใฎใฟในใฏใงไฝฟ็จใงใใ GPU RAM ใๅขใใพใใใใใใฃใฆใใใใใตใคใบใๅคงใใๅ ดๅใฏใ
้่ฆใชใฎใฏใใใฌใผใใณใฐๆ้ใๅฐใ้
ใใใใใจใฏ่ฏใใใฌใผใใซใชใๅฏ่ฝๆงใใใใพใใ
ใใใซใ`deepspeed==0.4.4`ใซใฏใๆฌกใฎใณใใณใใงๆๅนใซใงใใๆฐใใใชใใทใงใณ`round_robin_gradients`ใ่ฟฝๅ ใใใพใใใ
```json
{
"zero_optimization": {
"round_robin_gradients": true
}
}
```
This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. The performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism).
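The 9GB figure above is just arithmetic on the two bucket sizes, so you can estimate the footprint of any candidate values before editing the config. The helper below only illustrates the `5e8 x 2Bytes x 2 x 4.5` formula from this section; it is not a DeepSpeed API:

```python
def overlap_comm_buffer_gb(allgather_bucket_size: float, reduce_bucket_size: float) -> float:
    """Approximate GPU RAM footprint (in decimal GB) of the overlap_comm buffers.

    Each bucket holds fp16 elements (2 bytes each) and overlap_comm uses
    roughly 4.5x the combined bucket sizes.
    """
    return (allgather_bucket_size + reduce_bucket_size) * 2 * 4.5 / 1e9

print(overlap_comm_buffer_gb(5e8, 5e8))  # 9.0 GB -> too much for an 8GB GPU
print(overlap_comm_buffer_gb(2e8, 2e8))  # 3.6 GB -> the reduced setting suggested above
```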
<a id='deepspeed-zero3-config'></a>
#### ZeRO-3 Config
The following is an example of a configuration for ZeRO stage 3:
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
If the model or activations don't fit into GPU memory and you have unused CPU memory, offloading the optimizer states and parameters to CPU memory with `"device": "cpu"` may solve this limitation. If you don't want to offload to CPU memory, use `none` instead of `cpu` for the `device` entry. Offloading to NVMe is discussed further down.

Pinned memory is enabled with `pin_memory` set to `true`. This feature can improve throughput at the cost of making less memory available to other processes. Pinned memory is set aside for the specific process that requested it, and it is typically accessed much faster than normal CPU memory.
**Performance tuning:**

- `stage3_max_live_parameters`: `1e9`
- `stage3_max_reuse_distance`: `1e9`

If hitting OOM, reduce `stage3_max_live_parameters` and `stage3_max_reuse_distance`. They should have minimal impact on performance unless you are doing activation checkpointing. `1e9` would consume ~2GB. The memory is shared by `stage3_max_live_parameters` and `stage3_max_reuse_distance`, so it's not additive - it's just 2GB total.

`stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. "Reuse distance" is a metric used to figure out when a parameter will be used again in the future, and `stage3_max_reuse_distance` is used to decide whether to throw away the parameter or to keep it. If a parameter is going to be used again in the near future (in less than `stage3_max_reuse_distance`), then it is kept to reduce communication overhead. This is super helpful when you have activation checkpointing enabled, where the forward recompute and backward passes happen at a single-layer granularity and you want to keep the parameter in GPU memory from the forward recompute until the backward pass.

The following configuration values depend on the model's hidden size:

- `reduce_bucket_size`: `hidden_size*hidden_size`
- `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
- `stage3_param_persistence_threshold`: `10 * hidden_size`

Therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended values. But, of course, feel free to set these explicitly as well.

`stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when the model gets saved. With large models and multiple GPUs this is an expensive operation in terms of both memory and speed. It's currently required if you plan to resume the training. Watch out for future updates that will remove this limitation and make things more flexible.

If you're migrating from a ZeRO-2 configuration, note that the `allgather_partitions`, `allgather_bucket_size` and `reduce_scatter` configuration parameters are not used in ZeRO-3. If you keep them in the config file they will just be ignored.

- `sub_group_size`: `1e9`

`sub_group_size` controls the granularity in which parameters are updated during optimizer steps. Parameters are grouped into buckets of `sub_group_size`, and each bucket is updated one at a time. When used with NVMe offload, `sub_group_size` in ZeRO-Infinity therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models.

You can leave `sub_group_size` at its default value of *1e9* when not using NVMe offload. You may want to change its default value in the following cases:

1. Running into OOM during the optimizer step: reduce `sub_group_size` to reduce the memory utilization of temporary buffers.
2. The optimizer step is taking a long time: increase `sub_group_size` to improve bandwidth utilization as a result of the increased data buffers.
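As a rough check of the numbers in this section: `1e9` parameters held live in fp16 is about 2GB, and the hidden-size-derived `auto` values are simple products. The sketch below only mirrors the formulas listed above; it is not what [`Trainer`] actually runs:

```python
def stage3_live_params_gb(max_live_parameters: float) -> float:
    """fp16 parameters take 2 bytes each, so 1e9 live params is ~2GB."""
    return max_live_parameters * 2 / 1e9

def zero3_auto_values(hidden_size: int) -> dict:
    """What `auto` resolves to for the hidden-size-dependent knobs above."""
    return {
        "reduce_bucket_size": hidden_size * hidden_size,
        "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
        "stage3_param_persistence_threshold": 10 * hidden_size,
    }

print(stage3_live_params_gb(1e9))  # 2.0 (GB)
print(zero3_auto_values(1024))     # e.g. reduce_bucket_size = 1048576
```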
#### ZeRO-0 Config
Note that stage 0 and 1 are listed last because they are rarely used.

Stage 0 disables all types of sharding and just uses DeepSpeed as DDP. You can turn it on with:
```json
{
"zero_optimization": {
"stage": 0
}
}
```
This will essentially disable ZeRO, without you needing to change anything else.
#### ZeRO-1 Config
Stage 1 is stage 2 minus gradient sharding. You can always try it to speed things up a tiny bit by only sharding the optimizer states:
```json
{
"zero_optimization": {
"stage": 1
}
}
```
<a id='deepspeed-nvme'></a>
### NVMe Support
ZeRO-Infinity allows for training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms, each GPU needs to send and receive only very small amounts of data during offloading, so modern NVMe turned out to be fit for making an even larger total memory pool available to your training process. ZeRO-Infinity requires ZeRO-3 to be enabled.

The following configuration example enables NVMe to offload both optimizer states and parameters:
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": false,
"overlap_events": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
    }
}
```
You can choose to offload both the optimizer states and the parameters to NVMe, just one of them, or neither. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only, as it'd be faster (hint: *"device": "cpu"*).

Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).

Make sure that your `nvme_path` is actually an NVMe, since it will work with a normal hard drive or SSD, but it'll be much, much slower. Fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can get ~3.5GB/s read and ~3GB/s write peak speeds).

In order to figure out the optimal `aio` configuration block, you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998).
<a id='deepspeed-zero2-zero3-performance'></a>
#### ZeRO-2 vs ZeRO-3 Performance
ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same, because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs, then you may choose to stick with it. It's important to understand that ZeRO-3 enables much higher scalability at a cost of speed.

It's possible to adjust the ZeRO-3 configuration to make it perform closer to ZeRO-2:

- set `stage3_param_persistence_threshold` to a very large number - larger than the largest parameter, e.g., `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs.
- turn off `offload_params`, since ZeRO-2 doesn't have that option.

The performance will likely improve significantly with just `offload_params` turned off, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you can train. So these help you trade scalability for speed, depending on your needs.
<a id='deepspeed-zero2-example'></a>
#### ZeRO-2 Example
The following is a full ZeRO-2 auto-configuration file `ds_config_zero2.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
The following is a full ZeRO-2 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
<a id='deepspeed-zero3-example'></a>
#### ZeRO-3 Example
The following is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
The following is a full ZeRO-3 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
#### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance
So now you know there are all these different stages. How do you decide which of them to use? This section attempts to answer that question.

In general the following applies:

- Speed-wise (left is faster than right)

Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads

- GPU-memory-wise (right is more GPU memory efficient than left)

Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads

So when you want to get the fastest execution while fitting into the minimal number of GPUs, you can follow this process: start with the fastest approach, and if you run into a GPU OOM, move to the next slower approach, which will use less GPU memory. And so on and so forth.

First set the batch size to 1 (you can always use gradient accumulation for any desired effective batch size).

1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then
2. Try ZeRO stage 2 first - if OOM then
3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then
4. Switch to ZeRO stage 3 - if OOM then
5. Enable `offload_param` to `cpu` - if OOM then
6. Enable `offload_optimizer` to `cpu` - if OOM then
7. If you still can't fit a batch size of 1, first check various default values and lower them if you can. For example, if you use `generate` and you don't use a wide search beam, make it narrower, as it takes a lot of memory.
8. Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs and fp16 on older GPU architectures.
9. If you still OOM, you could add more hardware or enable ZeRO-Infinity - that is, switch the `offload_param` and `offload_optimizer` offloads to `nvme`. You need to make sure it's a very fast NVMe. As an anecdote, it was possible to infer BLOOM-176B on a tiny GPU using ZeRO-Infinity, except it was extremely slow. But it worked!

You can, of course, work through these steps in reverse by starting with the most GPU-memory-efficient config and then going backwards. Or try bisecting.

Once you have a batch size of 1 that doesn't lead to OOM, measure your effective throughput.

Next try to increase the batch size to as large as you can, since the higher the batch size, the more efficient the GPUs are, as they perform best when the matrices they multiply are huge.

Now the performance optimization game starts. You can turn off some offload features, or step down in ZeRO stages, increase/decrease the batch size, and again measure your effective throughput. Rinse and repeat until satisfied.

Don't spend forever on it, but if you're about to start a 3-month training, do spend a few days on it to find the most effective throughput-wise setup. That way your training cost will be the lowest and you will finish training faster. In the current fast-paced ML world, if it takes you an extra month to train something, you are likely to miss a golden opportunity. Of course, this is only an observation being shared, and in no way is it meant to rush you. Before beginning to train BLOOM-176B, 2 days were spent on this process, which increased throughput from 90 to 150 TFLOPs! This effort saved more than one month of training time.

These notes were written primarily for training mode, but they should mostly apply to inference as well. For example, gradient checkpointing is only useful during training and does nothing during inference. Additionally, if you are doing multi-GPU inference and not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should provide superior performance.

Other quick performance-related notes:
- if you are training something from scratch, always try to have tensors whose shapes (e.g. hidden size) are divisible by 16. For batch size try to make it divisible by at least 2. There are hardware-specific [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility recommendations for squeezing even higher performance out of your GPUs.
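"Effective throughput" above is usually reported in model TFLOPs per GPU rather than samples/second. A common rule of thumb (an approximation, not something DeepSpeed reports for you) is ~6 FLOPs per parameter per token for a forward+backward pass, rising to ~8 when activation checkpointing recomputes the forward pass:

```python
def model_tflops_per_gpu(n_params: float, tokens_per_sec_per_gpu: float,
                         activation_checkpointing: bool = False) -> float:
    """Estimate achieved model TFLOPs per GPU.

    Uses the ~6 FLOPs/param/token rule of thumb for forward+backward,
    plus ~2 more per param/token for the recomputed forward pass when
    activation (gradient) checkpointing is on.
    """
    flops_per_token = (8 if activation_checkpointing else 6) * n_params
    return flops_per_token * tokens_per_sec_per_gpu / 1e12

# e.g. a 1B-parameter model processing 10k tokens/s on one GPU
print(model_tflops_per_gpu(1e9, 10_000))        # 60.0
print(model_tflops_per_gpu(1e9, 10_000, True))  # 80.0
```

Tracking this number while you toggle offloads and ZeRO stages makes the "rinse and repeat" loop above concrete.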
### Activation Checkpointing or Gradient Checkpointing
Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It's very confusing, but this is how it is.

Gradient checkpointing allows you to trade speed for GPU memory, which either lets you overcome a GPU OOM or increase your batch size, which often leads to better performance.

HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.

Therefore you have two ways to take advantage of this very beneficial feature:

1. If you want to use an HF Transformers model, you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` in the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there.
2. If you write your own model and want to use DeepSpeed's activation checkpointing, you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with DeepSpeed's API. The latter is more flexible, since it allows you to offload the forward activations to CPU memory instead of recalculating them.
### Optimizer and Scheduler
As long as you don't enable `offload_optimizer`, you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of a HuggingFace scheduler and a DeepSpeed optimizer:
| Combos | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |
When `offload_optimizer` is enabled, it is possible to use a non-DeepSpeed optimizer, as long as it has both a CPU and a GPU implementation (except LAMB).
<a id='deepspeed-optimizer'></a>
#### Optimizer
DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended. It can, however, import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters).

If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values or the defaults for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`.

Here is an example of the auto-configured `optimizer` entry for `AdamW`:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
}
}
```
Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values, and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get overridden are:

- `lr` with the value of `--learning_rate`
- `betas` with the value of `--adam_beta1 --adam_beta2`
- `eps` with the value of `--adam_epsilon`
- `weight_decay` with the value of `--weight_decay`
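Conceptually, this works like filling a template: each `auto` in the `optimizer` entry is replaced with the matching command-line value. The snippet below is a hypothetical illustration of the mapping listed above, not the actual [`Trainer`] implementation:

```python
def fill_optimizer_auto(optimizer_params: dict, args: dict) -> dict:
    """Resolve "auto" entries using the command-line mapping above.

    `args` keys mirror the CLI flags (learning_rate, adam_beta1, ...);
    explicitly set values are left untouched.
    """
    mapping = {
        "lr": args["learning_rate"],
        "betas": [args["adam_beta1"], args["adam_beta2"]],
        "eps": args["adam_epsilon"],
        "weight_decay": args["weight_decay"],
    }
    return {k: mapping[k] if v == "auto" else v for k, v in optimizer_params.items()}

cli = {"learning_rate": 3e-5, "adam_beta1": 0.9, "adam_beta2": 0.999,
       "adam_epsilon": 1e-8, "weight_decay": 0.01}
print(fill_optimizer_auto(
    {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"}, cli))
```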
Therefore please remember to tune the shared hyperparameters on the command line.

You can also set the values explicitly:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

If you want to use another optimizer which is not listed above, you will have to add it to the top-level configuration:
```json
{
"zero_allow_untested_optimizer": true
}
```
Similarly to `AdamW`, you can configure other officially supported optimizers. Just remember that those may have different config values - e.g. for Adam you will want `weight_decay` around `0.01`.

Additionally, offload works best when used with DeepSpeed's CPU Adam optimizer. If you want to use a different optimizer with offload, then since `deepspeed==0.8.3` you also need to add:
```json
{
"zero_force_ds_cpu_optimizer": false
}
```
to the top-level configuration.
<a id='deepspeed-scheduler'></a>
#### Scheduler
DeepSpeed supports the `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters).

Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed:

- `WarmupLR` via `--lr_scheduler_type constant_with_warmup`
- `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value of `--lr_scheduler_type`; therefore, if you don't configure the scheduler, this is the scheduler that will get configured by default.

If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure a 🤗 Transformers version of it.

Here is an example of the auto-configured `scheduler` entry for `WarmupLR`:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
Since *"auto"* is used, the [`Trainer`] arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values, and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get set are:

- `warmup_min_lr` with the value of `0`
- `warmup_max_lr` with the value of `--learning_rate`
- `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise it will use `--warmup_ratio` multiplied by the number of training steps and rounded up.
- `total_num_steps` with either the value of `--max_steps` or, if it is not provided, derived automatically at run time based on the environment, the size of the dataset and other command line arguments (needed for `WarmupDecayLR`).
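In the simple case where `--max_steps` is given, the derivation above reduces to a little arithmetic (the dataset-based derivation is more involved); a hedged sketch:

```python
import math

def scheduler_auto_steps(max_steps: int, warmup_steps: int = 0,
                         warmup_ratio: float = 0.0) -> dict:
    """warmup_num_steps uses --warmup_steps if provided, otherwise
    --warmup_ratio times the number of training steps, rounded up."""
    warmup = warmup_steps if warmup_steps > 0 else math.ceil(warmup_ratio * max_steps)
    return {"total_num_steps": max_steps, "warmup_num_steps": warmup}

print(scheduler_auto_steps(10_000, warmup_ratio=0.06))
# {'total_num_steps': 10000, 'warmup_num_steps': 600}
```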
You can, of course, take over any or all of the configuration values and set them yourself:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

For example, for `WarmupDecayLR`, you can use the following entry:
```json
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
and `total_num_steps`, `warmup_max_lr` and `warmup_num_steps` will be set at loading time.
<a id='deepspeed-fp32'></a>
### fp32 Precision
DeepSpeed supports full fp32 and fp16 mixed precision.

Because of the much reduced memory needs and the faster speed one gets with fp16 mixed precision, the only time you will want to avoid it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in fp16 mixed precision (e.g. this often happens with bf16-pretrained models). Such models may overflow or underflow, leading to `NaN` loss. If this is your case, then you will want to use full fp32 mode, while disabling the default fp16 mixed-precision mode explicitly:
```json
{
"fp16": {
"enabled": false,
}
}
```
If you're using an Ampere-architecture-based GPU, PyTorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please see [TensorFloat-32 (TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). That document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it.

With the 🤗 Trainer you can use `--tf32` to enable it, or `--tf32 0` or `--no_tf32` to disable it. By default the PyTorch default is used.
<a id='deepspeed-amp'></a>
### Automatic Mixed Precision
You can use automatic mixed precision with either the PyTorch-AMP-like way or the apex-like way:
### fp16
To configure the PyTorch-AMP-like mode with fp16 (float16), set:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of the config values are up to you.

This mode gets enabled when the `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options).
### BF16
If bf16 (bfloat16) is desired instead of fp16, then the following configuration section is to be used:
```json
{
"bf16": {
"enabled": "auto"
}
}
```
bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling.

This mode gets enabled when the `--bf16` or `--bf16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:
```json
{
"bf16": {
"enabled": true
}
}
```
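The dynamic-range difference can be demonstrated with just the standard library: fp16 tops out around 65504, so magnitudes that are harmless in bf16/fp32 overflow in fp16 (one common source of the overflow/`NaN` issues discussed in the fp32 section above):

```python
import struct

def fits_in_fp16(x: float) -> bool:
    """True if x can be packed as IEEE half precision without overflowing.

    struct's "e" format is binary16; packing an out-of-range finite value
    raises OverflowError.
    """
    try:
        struct.pack("e", x)
        return True
    except OverflowError:
        return False

print(fits_in_fp16(65504.0))  # True  - largest finite fp16 value
print(fits_in_fp16(70000.0))  # False - overflows in fp16, fine in bf16/fp32
```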
<Tip>
As of `deepspeed==0.6.0`, the bf16 support is new and experimental.

If you use [gradient accumulation](#gradient-accumulation) with bf16 enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation.

Work is being done to fix that and to provide an option to use a higher-precision `dtype` (fp16 or fp32).
</Tip>
### NCCL Collectives
There is the `dtype` of the training regime, and there is a separate `dtype` that is used for communication collectives like various reduction and gathering/scattering operations.

All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using the bf16 training regime, the data gets gathered in bf16 - gathering is a non-lossy operation.

Various reduce operations can be quite lossy. For example, when gradients are averaged across multiple GPUs, if the communications are done in fp16 or bf16 the outcome is likely to be lossy, since adding multiple numbers in low precision is not exact. More so with bf16, as it has a lower precision than fp16. Often fp16 is good enough, as the loss is minimal when averaging grads, which are typically very small. Therefore, by default, fp16 is used as the default for reduction operations in half-precision training. But you have full control over this functionality, and if you choose, you can add a small overhead and ensure that reductions use fp32 as the accumulation dtype, downcasting to the half-precision `dtype` you're training in only when the result is ready.

In order to override the default you simply add a new configuration entry:
```json
{
"communication_data_type": "fp32"
}
```
The valid values as of this writing are "fp16", "bfp16" and "fp32".

Note: stage zero 3 had a bug with regard to the bf16 comm dtype that was fixed in `deepspeed==0.8.1`.
### apex
To configure the apex-AMP-like mode, set:
```json
"amp": {
"enabled": "auto",
"opt_level": "auto"
}
```
and the [`Trainer`] will automatically configure it based on the values of `args.fp16_backend` and `args.fp16_opt_level`.

This mode gets enabled when the `--fp16 --fp16_backend apex --fp16_opt_level 01` command line args are passed.

You can also configure this mode explicitly:
```json
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options).
<a id='deepspeed-bs'></a>
### Batch Size
To configure the batch size, use:
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
and the [`Trainer`] will automatically set `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`.

You can also set the values explicitly:
```json
{
"train_batch_size": 12,
"train_micro_batch_size_per_gpu": 4
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
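The `train_batch_size` derivation above is a plain product; this tiny helper mirrors it (the parameter names follow the `args.*` attributes mentioned, but it is only an illustration):

```python
def derive_train_batch_size(world_size: int, per_device_train_batch_size: int,
                            gradient_accumulation_steps: int) -> int:
    """train_batch_size = world_size * per-GPU micro-batch * grad-accum steps."""
    return world_size * per_device_train_batch_size * gradient_accumulation_steps

# matches the explicit config above: 3 GPUs x micro-batch 4 x 1 accumulation step
print(derive_train_batch_size(3, 4, 1))  # 12
```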
<a id='deepspeed-grad-acc'></a>
### Gradient Accumulation
To configure gradient accumulation, set:
```json
{
"gradient_accumulation_steps": "auto"
}
```
and the [`Trainer`] will automatically set it to the value of `args.gradient_accumulation_steps`.

You can also set the value explicitly:
```json
{
"gradient_accumulation_steps": 3
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
<a id='deepspeed-grad-clip'></a>
### Gradient Clipping
To configure gradient clipping, set:
```json
{
"gradient_clipping": "auto"
}
```
and the [`Trainer`] will automatically set it to the value of `args.max_grad_norm`.

You can also set the value explicitly:
```json
{
"gradient_clipping": 1.0
}
```
But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
<a id='deepspeed-weight-extraction'></a>
### Getting The Model Weights Out
As long as you continue training and resume using DeepSpeed, you don't need to worry about anything. DeepSpeed stores fp32 master weights in its custom checkpoint optimizer files, which are `global_step*/*optim_states.pt` (this is a glob pattern), and are saved under the normal checkpoint.

**FP16 Weights:**

When a model is saved under ZeRO-2, you end up with the normal `pytorch_model.bin` file containing the model weights, but they are only the fp16 version of the weights.

Under ZeRO-3 things are much more complicated, since the model weights are partitioned out over multiple GPUs. Therefore, `"stage3_gather_16bit_weights_on_model_save": true` is required to get the `Trainer` to save the fp16 version of the weights. If this setting is `False`, `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If you were to save this `state_dict`, it wouldn't be possible to load it back.
```json
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
**FP32 ้้:**
fp16 ใฆใงใคใใฏใใฌใผใใณใฐใๅ้ใใใฎใซ้ฉใใฆใใพใใใใขใใซใฎๅพฎ่ชฟๆดใๅฎไบใใใใใ
[ใขใใซ ใใ](https://huggingface.co/models) ใซใขใฏใปในใใใใfp32 ใๅ
ฅๆใใใใจๆใใใไปใฎไบบใซๆธกใใพใใ
้ใฟใใใใฏๅคง้ใฎใกใขใชใๅฟ
่ฆใจใใใใญใปในใงใใใใใใใฌใผใใณใฐไธญใซ่กใในใใงใฏใชใใฎใ็ๆณ็ใงใใ
ใใใใฃใฆใใใฌใผใใณใฐใฎๅฎไบๅพใซใชใใฉใคใณใงๅฎ่กใใใฎใๆ้ฉใงใใใใ ใใๅฟ
่ฆใซๅฟใใฆใ็ฉบใ CPU ใๅๅใซใใๅ ดๅใฏใ
ๅใใใฌใผใใณใฐ ในใฏใชใใใงๅฎ่กใงใใใใจใๆใๅบใใฆใใ ใใใๆฌกใฎใปใฏใทใงใณใงใฏใไธกๆนใฎใขใใญใผใใซใคใใฆ่ชฌๆใใพใใ
**ใฉใคใ FP32 ใฆใงใคใ ใชใซใใช:**
ใขใใซใๅคงใใใใใฌใผใใณใฐใฎ็ตไบๆใซ็ฉบใ CPU ใกใขใชใใปใจใใฉๆฎใฃใฆใใชใๅ ดๅใใใฎใขใใญใผใใฏๆฉ่ฝใใชใๅฏ่ฝๆงใใใใพใใ
ๅฐใชใใจใ 1 ใคใฎใใงใใฏใใคใณใใไฟๅญใใฆใใฆใๆๆฐใฎใใงใใฏใใคใณใใไฝฟ็จใใใๅ ดๅใฏใๆฌกใฎๆ้ ใๅฎ่กใงใใพใใ
```python
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
If you're using the `--load_best_model_at_end` class:*~transformers.TrainingArguments* argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then doing the same as above:
```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
<Tip>
Note that once `load_state_dict_from_zero_checkpoint` has been run, the `model` will no longer be usable in the DeepSpeed context of the same application, i.e. you will need to re-initialize the DeepSpeed engine, since `model.load_state_dict(state_dict)` will remove all the DeepSpeed magic from it. So do this only at the very end of training.
</Tip>
Of course, you don't have to use class:*~transformers.Trainer*, and you can adjust the examples above to your own trainer.

If for some reason you want more refinement, you can also extract the fp32 `state_dict` of the weights and apply it yourself, as shown in the following example:
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```
**Offline FP32 Weights Recovery:**

DeepSpeed creates a special conversion script, `zero_to_fp32.py`, which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need the configuration file or a `Trainer` to do the extraction.

Let's say your checkpoint folder looks like this:
```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```
In this example there is just one DeepSpeed checkpoint sub-folder, *global_step1*. Therefore, to reconstruct the fp32 weights, just run:
```bash
python zero_to_fp32.py . pytorch_model.bin
```
That's it. `pytorch_model.bin` will now contain the full fp32 model weights consolidated from multiple GPUs.

The script will automatically handle either a ZeRO-2 or a ZeRO-3 checkpoint.

`python zero_to_fp32.py -h` will give you usage details.

The script will auto-discover the deepspeed sub-folder using the contents of the file `latest`, which in the current example will contain `global_step1`.

Note: currently the script requires 2x the general RAM of the final fp32 model weights.
### ZeRO-3 ใจ Infinity Nuances
ZeRO-3 ใฏใใใฉใกใผใฟ ใทใฃใผใใฃใณใฐๆฉ่ฝใฎ็นใง ZeRO-2 ใจใฏๅคงใใ็ฐใชใใพใใ
ZeRO-Infinity ใฏ ZeRO-3 ใใใใซๆกๅผตใใNVMe ใกใขใชใใใฎไปใฎ่คๆฐใฎ้ๅบฆใจในใฑใผใฉใใชใใฃใฎๅไธใใตใใผใใใพใใ
ใขใใซใซ็นๅฅใชๅคๆดใๅ ใใๅฟ
่ฆใใชใใฆใๆญฃๅธธใซๅไฝใใใใใซใใใใๅชๅใๆใใใฆใใพใใใใ็นๅฎใฎ็นใงใฏ
็ถๆณใซใใฃใฆใฏใๆฌกใฎๆ
ๅ ฑใๅฟ
่ฆใซใชใๅ ดๅใใใใพใใ
#### Constructing Massive Models
DeepSpeed/ZeRO-3 ใฏใๆขๅญใฎ RAM ใซๅใพใใชใๅฏ่ฝๆงใฎใใๆฐๅ
ใฎใใฉใกใผใฟใๆใคใขใใซใๅฆ็ใงใใพใใใใฎใใใชๅ ดๅใ
ใพใใๅๆๅใใใ้ซ้ใซๅฎ่กใใใๅ ดๅใฏใ*deepspeed.zero.Init()* ใไฝฟ็จใใฆใขใใซใๅๆๅใใพใใ
ใณใณใใญในใ ใใใผใธใฃใผ (้ขๆฐใใณใฌใผใฟใผใงใใใใพใ)ใๆฌกใฎใใใซใชใใพใใ
```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed
with deepspeed.zero.Init():
config = T5Config.from_pretrained("t5-small")
model = T5ForConditionalGeneration(config)
```
ใ่ฆงใฎใจใใใใใใซใใใฉใณใใ ใซๅๆๅใใใใขใใซใๅพใใใพใใ
ไบๅใใฌใผใใณใฐใใใใขใใซใไฝฟ็จใใใๅ ดๅใ`model_class.from_pretrained` ใฏๆฌกใฎๆกไปถใๆบใใ้ใใใฎๆฉ่ฝใๆๅนใซใใพใใ
`is_deepspeed_zero3_enabled()` ใฏ `True` ใ่ฟใใพใใใใใฏ็พๅจใ
[`TrainingArguments`] ใชใใธใงใฏใ (ๆธกใใใ DeepSpeed ๆงๆใใกใคใซใซ ZeRO-3 ๆงๆใๅซใพใใฆใใๅ ดๅ)
ใปใฏใทใงใณใใใใใฃใฆใๅผใณๅบใใฎๅใซ** [`TrainingArguments`] ใชใใธใงใฏใใไฝๆใใๅฟ
่ฆใใใใพใใ
`from_pretrained`ใ่ใใใใใทใผใฑใณในใฎไพใๆฌกใซ็คบใใพใใ
```python
from transformers import AutoModel, Trainer, TrainingArguments
training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
ๅ
ฌๅผใฎใตใณใใซ ในใฏใชใใใไฝฟ็จใใฆใใฆใใณใใณใ ใฉใคใณๅผๆฐใซ `--deepspeed ds_config.json` ใๅซใพใใฆใใๅ ดๅ
ZeRO-3 ่จญๅฎใๆๅนใซใใใจใใใใใตใณใใซ ในใฏใชใใใฎ่จ่ฟฐๆนๆณใงใใใใใใในใฆใใใงใซๅฎไบใใฆใใพใใ
ๆณจ: ใขใใซใฎ fp16 ้ใฟใๅไธใฎ GPU ใฎใกใขใชใซๅใพใใชใๅ ดๅใฏใใใฎๆฉ่ฝใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
ใใฎๆนๆณใจใใฎไปใฎ้ข้ฃๆฉ่ฝใฎ่ฉณ็ดฐใซใคใใฆใฏใ[ๅคง่ฆๆจกใขใใซใฎๆง็ฏ](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models) ใๅ็
งใใฆใใ ใใใ
ใพใใfp16 ใงไบๅ่จ็ทดใใใใขใใซใใญใผใใใใจใใฏใ`from_pretrained` ใซไฝฟ็จใใใใใซๆ็คบใใๅฟ
่ฆใใใใพใใ
`torch_dtype=torch.float16`ใ่ฉณ็ดฐใซใคใใฆใฏใ[from_pretrained-torch-dtype](#from_pretrained-torch-dtype) ใๅ็
งใใฆใใ ใใใ
#### Gathering Parameters
่คๆฐใฎ GPU ไธใฎ ZeRO-3 ใงใฏใ็พๅจใฎ GPU ใฎใใฉใกใผใฟใงใชใ้ใใๅไธใฎ GPU ใใในใฆใฎใใฉใกใผใฟใๆใคใใจใฏใใใพใใใ
ๅฎ่กๅฑคใใใใใฃใฆใใในใฆใฎใฌใคใคใผใฎใในใฆใฎใใฉใกใผใฟใผใซไธๅบฆใซใขใฏใปในใใๅฟ
่ฆใใใๅ ดๅใฏใใใใ่กใใใใฎ็นๅฎใฎๆนๆณใใใใพใใ
ใปใจใใฉใฎๅ ดๅใฏๅฟ
่ฆใใใพใใใใๅฟ
่ฆใชๅ ดๅใฏใ[ใใฉใกใผใฟใฎๅ้](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination) ใๅ็
งใใฆใใ ใใใ
ใใ ใใใใใคใใฎๅ ดๆใงๅ
้จ็ใซไฝฟ็จใใฆใใพใใใใฎไพใฎ 1 ใคใฏใไบๅใใฌใผใใณใฐใใใใขใใซใฎ้ใฟใใญใผใใใใจใใงใใ
`from_pretrained`ใไธๅบฆใซ 1 ใคใฎใฌใคใคใผใใญใผใใใๅๅ ใใฆใใใในใฆใฎ GPU ใซๅณๅบงใซๅๅฒใใพใใ
ๅคง่ฆๆจกใชใขใใซใงใฏใใกใขใชใฎ้ขไฟใงใ1 ใคใฎ GPU ใซใญใผใใใฆใใ่คๆฐใฎ GPU ใซๅๆฃใใใใจใฏใงใใพใใใ
ๅถ้ใ
ใพใใZeRO-3 ใงใฏใ็ฌ่ชใฎใณใผใใไฝๆใใๆฌกใฎใใใชใขใใซ ใใฉใกใผใฟใผใฎ้ใฟใ็บ็ใใใจใใพใใ
```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```
`tensor([1.])` ใซในใใฌในใๆใใๅ ดๅใใพใใฏใใฉใกใผใฟใฎใตใคใบใ `1` ใงใใใจใใใจใฉใผใ็บ็ใใๅ ดๅ
ใใๅคงใใชๅคๆฌกๅ
ๅฝข็ถใใใใฏใใใฉใกใผใฟใผใๅๅฒใใใฆใใใ่กจ็คบใใใใฎใฏ ZeRO-3 ใใฌใผในใใซใใผใงใใใใจใๆๅณใใพใใ
<a id='deepspeed-zero-inference'></a>
### ZeRO Inference
ZeRO Inference ใฏใZeRO-3 Training ใจๅใๆงๆใไฝฟ็จใใพใใใชใใใฃใใคใถใผใจในใฑใธใฅใผใฉใผใฎใปใฏใทใงใณใฏๅฟ
่ฆใใใพใใใใง
ๅฎ้ใๅใใใฎใใใฌใผใใณใฐใจๅ
ฑๆใใใๅ ดๅใฏใใใใใ่จญๅฎใใกใคใซใซๆฎใใใจใใงใใพใใๅฝผใใฏใใ ใใใชใใ ใใ
็ก่ฆใใใพใใใ
ใใไปฅๅคใฎๅ ดๅใฏใ้ๅธธใฎ [`TrainingArguments`] ๅผๆฐใๆธกใใ ใใงใใไพใใฐ๏ผ
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```
ๅฏไธ้่ฆใชใใจใฏใZeRO-2 ใซใฏไฝใฎๅฉ็นใใชใใใใZeRO-3 ๆงๆใไฝฟ็จใใๅฟ
่ฆใใใใจใใใใจใงใใ
ZeRO-3 ใฎใฟใใใฉใกใผใฟใผใฎใทใฃใผใใฃใณใฐใๅฎ่กใใใฎใซๅฏพใใZeRO-1 ใฏๅพ้
ใจใชใใใฃใใคใถใผใฎ็ถๆ
ใใทใฃใผใใฃใณใฐใใใใใๆจ่ซใซๅฝน็ซใกใพใใ
ไปฅไธใฏใๅฉ็จๅฏ่ฝใชใในใฆใฎ GPU ใใใใญใคใใ DeepSpeed ใง`run_translation.py`ใๅฎ่กใใไพใงใใ
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50 \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
```
ๆจ่ซใฎใใใซใใชใใใฃใใคใถใผใฎ็ถๆ
ใจๅพ้
ใซใใฃใฆไฝฟ็จใใใ่ฟฝๅ ใฎๅคงใใชใกใขใชใฏๅฟ
่ฆใชใใใใ
ใฏใใใซๅคงใใชใใใใใทใผใฑใณใน้ทใๅใใใผใใฆใงใขใซ้ฉๅใงใใๅฟ
่ฆใใใใพใใ
ใใใซใDeepSpeed ใฏ็พๅจใDeepspeed-Inference ใจๅผใฐใใ้ข้ฃ่ฃฝๅใ้็บใใฆใใพใใใใใใจใฏไฝใฎ้ขไฟใใใใพใใใ
ZeRO ใใฏใใญใธใผใซๆบๆ ใใฆใใพใใใไปฃใใใซใใณใฝใซไธฆๅๅฆ็ใไฝฟ็จใใฆใๅไธใฎ GPU ใซๅใพใใชใใขใใซใในใฑใผใชใณใฐใใพใใใใใฏ
็พๅจ้็บไธญใงใใ่ฃฝๅใๅฎๆใใใ็ตฑๅใๆไพใใไบๅฎใงใใ
### Memory Requirements
Deepspeed ZeRO ใฏใกใขใชใ CPU (ใใใณ NVMe) ใซใชใใญใผใใงใใใใใใใฌใผใ ใฏใผใฏใฏใไฝฟ็จใใใฆใใ GPU ใฎๆฐใซๅฟใใฆๅฟ
่ฆใช CPU ใใใณ GPU ใกใขใชใฎ้ใ็ฅใใใจใใงใใใฆใผใใฃใชใใฃใๆไพใใพใใ
ๅไธใฎ GPU ใง `bigscience/T0_3B`ใๅพฎ่ชฟๆดใใใใใซๅฟ
่ฆใชใกใขใชใฎ้ใ่ฆ็ฉใใฃใฆใฟใพใใใใ
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ใใใใฃใฆใๅไธใฎ 80 GB GPU ใง CPU ใชใใญใผใใชใใงๆญ่ผใใใใจใใๅฐใใช 8 GB GPU ใงใๆๅคง 60 GB ใฎ CPU ใกใขใชใๅฟ
่ฆใซใชใใใจใๅฏ่ฝใงใใ (ใใใฏใใฉใกใผใฟใใชใใใฃใใคใถใฎ็ถๆ
ใใใใณๅพ้
ใฎใใใฎใกใขใชใงใใใใจใซๆณจๆใใฆใใ ใใใcuda ใซใผใใซใใขใฏใใฃใใผใทใงใณใใใใณไธๆใกใขใชใซใฏใใๅฐใๅคใใฎใกใขใชใๅฟ
่ฆใงใใ)
ๆฌกใซใใณในใใจ้ๅบฆใฎใใฌใผใใชใใซใชใใพใใใใๅฐใใ GPU ใ่ณผๅ
ฅใพใใฏใฌใณใฟใซใใๆนใๅฎใใชใใพใ (Deepspeed ZeRO ใงใฏ่คๆฐใฎ GPU ใไฝฟ็จใงใใใใใGPU ใฎๆฐใๆธใใใใจใใงใใพใ)ใใใใใใใฎๅ ดๅใฏ้
ใใชใใพใใใใฎใใใไฝใใๅฎ่กใใ้ๅบฆใๆฐใซใใชใใฆใใ้ๅบฆใฎไฝไธใฏ GPU ใฎไฝฟ็จๆ้ใซ็ดๆฅๅฝฑ้ฟใใใณในใใๅขๅคงใใใใใใฉใใๆใๅนๆ็ใใๅฎ้จใใฆๆฏ่ผใใฆใใ ใใใ
ๅๅใช GPU ใกใขใชใใใๅ ดๅใฏใใในใฆใ้ซ้ใซใชใใใใCPU/NVMe ใชใใญใผใใๅฟ
ใ็กๅนใซใใฆใใ ใใใ
ใใจใใฐใ2 ใคใฎ GPU ใซๅฏพใใฆๅใใใจใ็นฐใ่ฟใใฆใฟใพใใใใ
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ใใใใฃใฆใใใใงใฏใCPU ใซใชใใญใผใใใใซ 2x 32GB ไปฅไธใฎ GPU ใๅฟ
่ฆใซใชใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใ[ใกใขใชๆจๅฎใใผใซ](https://deepspeed.readthedocs.io/en/latest/memory.html) ใๅ็
งใใฆใใ ใใใ
### Filing Issues
ใใใงใฏใๅ้กใฎ็็ธใใใใซ่งฃๆใใไฝๆฅญใฎใใญใใฏใ่งฃ้คใงใใใใใๅ้กใๅ ฑๅใใๆนๆณใ่ชฌๆใใพใใ
ใฌใใผใใซใฏๅฟ
ใๆฌกใฎๅ
ๅฎนใๅซใใฆใใ ใใใ
1. ใฌใใผใๅ
ใฎๅฎๅ
จใช Deepspeed ๆงๆใใกใคใซ
2. [`Trainer`] ใไฝฟ็จใใฆใใๅ ดๅใฏใณใใณใใฉใคใณๅผๆฐใใพใใฏ
ใใฌใผใใผใฎใปใใใขใใใ่ชๅใงในใฏใชใใไฝๆใใฆใใๅ ดๅใฏใ[`TrainingArguments`] ๅผๆฐใใใชใใงใใ ใใ
[`TrainingArguments`] ใซใฏ็ก้ขไฟใชใจใณใใชใๅคๆฐๅซใพใใฆใใใใใใใณใใใพใใ
3. ๆฌกใฎๅบๅ:
```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```
4. ๅฏ่ฝใงใใใฐใๅ้กใๅ็พใงใใ Google Colab ใใผใใใใฏใธใฎใชใณใฏใๅซใใฆใใ ใใใใใใไฝฟใใพใ
[ใใผใใใใฏ](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) ใจใใฆ
ๅบ็บ็นใ
5. ไธๅฏ่ฝใงใชใ้ใใใซในใฟใ ใใผใฟใปใใใงใฏใชใใๅธธใซไฝฟ็จใงใใๆจๆบใใผใฟใปใใใไฝฟ็จใใฆใใ ใใใ
6. ๅฏ่ฝใงใใใฐใๆขๅญใฎ [ใตใณใใซ](https://github.com/huggingface/transformers/tree/main/examples/pytorch) ใฎใใใใใไฝฟ็จใใฆๅ้กใๅ็พใใฆใฟใฆใใ ใใใ
- Deepspeed ใๅ้กใฎๅๅ ใงใฏใชใใใจใใใใใใพใใ
ๆๅบใใใๅ้กใฎไธ้จใฏใDeepspeed ใจใฏ็ก้ขไฟใงใใใใจใๅคๆใใพใใใใใใฏใDeepspeed ใใปใใใขใใใใๅ้คใใใๅพใงใใ
ๅ้กใฏใพใ ๆฎใฃใฆใใใ
ใใใใฃใฆใๅฎๅ
จใซๆ็ฝใงใชใๅ ดๅใฏใDeepSpeed ้ข้ฃใฎๅ้กใงใใ
ไพๅคใ็บ็ใใDeepSpeed ใขใธใฅใผใซใ้ขไฟใใฆใใใใจใใใใใพใใใพใใDeepSpeed ใๅซใพใชใใปใใใขใใใๅใในใใใฆใใ ใใใ
ๅ้กใ่งฃๆฑบใใชใๅ ดๅใซใฎใฟใDeepspeed ใซใคใใฆ่จๅใใๅฟ
่ฆใช่ฉณ็ดฐใใในใฆๆไพใใฆใใ ใใใ
- ๅ้กใ็ตฑๅ้จๅใงใฏใชใ DeepSpeed ใณใขใซใใใใจใๆใใใชๅ ดๅใฏใๅ้กใๆๅบใใฆใใ ใใใ
[Deepspeed](https://github.com/microsoft/DeepSpeed/) ใ็ดๆฅไฝฟ็จใใพใใใใใใใใชใๅ ดๅใงใใใๅฎๅฟใใ ใใใ
ใฉใกใใฎๅ้กใใฉใใซใผใงใๅ้กใใใพใใใๆ็จฟใใใใใใใๅคๆญใใๆฌกใฎๅ ดๅใฏๅฅใฎๅ้กใใฉใใซใผใซใชใใคใฌใฏใใใพใใ
ใใใงใใๅฟ
่ฆใใใใ
### Troubleshooting
#### the `deepspeed` process gets killed at startup without a traceback
`deepspeed`ใใญใปในใ่ตทๅๆใซใใฌใผในใใใฏใชใใงๅผทๅถ็ตไบใใใๅ ดๅใใใใฏ้ๅธธใใใญใฐใฉใ ใ่ฉฆ่กใใใใจใๆๅณใใพใใ
ใทในใใ ใๆใฃใฆใใใใใๅคใใฎ CPU ใกใขใชใๅฒใๅฝใฆใใใใใญใปในใๅฒใๅฝใฆใ่จฑๅฏใใใฆใใใใใOS ใซใผใใซใใใใๅผทๅถ็ตไบใใพใใ
ใใญใปในใใใใฏใ่จญๅฎใใกใคใซใซ `offload_optimizer` ใพใใฏ `offload_param` ใๅซใพใใฆใใๅฏ่ฝๆงใ้ซใใใใงใใ
ใฉใกใใ`cpu`ใซใชใใญใผใใใใใใซ่จญๅฎใใใฆใใพใใ NVMe ใไฝฟ็จใใฆใใๅ ดๅใฏใๆฌกใฎ็ฐๅขใงๅฎ่กใใฆใใๅ ดๅใฏ NVMe ใธใฎใชใใญใผใใ่ฉฆใใฆใใ ใใใ
ใผใญ-3ใ [็นๅฎใฎใขใใซใซๅฟ
่ฆใชใกใขใช้ใ่ฆ็ฉใใ]ๆนๆณใฏๆฌกใฎใจใใใงใ(https://deepspeed.readthedocs.io/en/latest/memory.html)ใ
#### training and/or eval/predict loss is `NaN`
ใใใฏใbf16 ๆททๅ็ฒพๅบฆใขใผใใงไบๅใใฌใผใใณใฐใใใใขใใซใๅๅพใใใใใ fp16 (ๆททๅ็ฒพๅบฆใฎๆ็กใซใใใใใ) ใงไฝฟ็จใใใใจใใๅ ดๅใซใใ็บ็ใใพใใ TPU ใงใใฌใผใใณใฐใใใใปใจใใฉใฎใขใใซใใใใณๅคใใฎๅ ดๅใGoogle ใซใใฃใฆใชใชใผในใใใใขใใซใฏใใใฎใซใใดใชใซๅ้กใใใพใ (ใใจใใฐใใปใผใในใฆใฎ t5 ใใผในใฎใขใใซ)ใใใใงใฎ่งฃๆฑบ็ญใฏใใใผใใฆใงใขใใตใใผใใใฆใใๅ ดๅ (TPUใAmpere GPU ไปฅ้)ใfp32 ใพใใฏ bf16 ใไฝฟ็จใใใใจใงใใ
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
ใญใฐใซใฏใDeepspeed ใๆฌกใฎใใใซ`OVERFLOW!`ใๅ ฑๅใใฆใใใใจใใใใใพใใ
```
0%| | 0/189 [00:00<?, ?it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
1%|โ | 1/189 [00:00<01:26, 2.17it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
1%|โโ
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
14%|โโโโโโโโโโโโโโโโโ | 27/189 [00:14<01:13, 2.21it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|โโโโโโโโโโโโโโโโโโ | 28/189 [00:14<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|โโโโโโโโโโโโโโโโโโ | 29/189 [00:15<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```
ใใใฏใDeepspeed ๆๅคฑในใฑใผใฉใผใๆๅคฑใชใผใใผใใญใผใๅ
ๆใใในใฑใผใชใณใฐไฟๆฐใ่ฆใคใใใใชใใใจใๆๅณใใพใใ
(ใญใฐใฏใใใง่ชญใฟใใใใใใใใซใใใตใผใธใใใฆใใพใใ)
ใใฎๅ ดๅใ้ๅธธใฏ `initial_scale_power` ใฎๅคใไธใใๅฟ
่ฆใใใใพใใ้ๅธธใ`initial_scale_power: 32` ใซ่จญๅฎใใใจๅ้กใ่งฃๆฑบใใพใใ
### Notes
- DeepSpeed ใซใฏ pip ใงใคใณในใใผใซๅฏ่ฝใช PyPI ใใใฑใผใธใใใใพใใใใใผใใฆใงใขใซๆใ้ฉๅใใใใใซใใพใๆๅนใซใใๅฟ
่ฆใใใๅ ดๅใฏใ[ใฝใผใน](https://github.com/microsoft/deepspeed#installation) ใใใคใณในใใผใซใใใใจใๅผทใใๅงใใใพใใ
1 ใใใ Adam ใชใฉใฎ็นๅฎใฎๆฉ่ฝใฏใpypi ใใฃในใใชใใฅใผใทใงใณใงใฏๅฉ็จใงใใพใใใ
- ๐ค Transformers ใง DeepSpeed ใไฝฟ็จใใใใใซ [`Trainer`] ใไฝฟ็จใใๅฟ
่ฆใฏใใใพใใ - ไปปๆใฎใขใใซใไฝฟ็จใงใใพใ
ๅพ่
ใฏ [DeepSpeed ็ตฑๅๆ้ ](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models) ใซๅพใฃใฆ่ชฟๆดใใๅฟ
่ฆใใใใพใใ
## Non-Trainer Deepspeed Integration
[`~integrations.HfDeepSpeedConfig`] ใฏใDeepspeed ใ ๐ค Transformers ใณใขใซ็ตฑๅใใใใใซไฝฟ็จใใใพใ
[`Trainer`] ใไฝฟ็จใใชใๅ ดๅใฎๆฉ่ฝใๅฎ่กใใๅฏไธใฎใใจใฏใDeepspeed ZeRO-3 ใใฉใกใผใฟๅ้ใๅฆ็ใใ`from_pretrained`ๅผใณๅบใไธญใซใขใใซใ่คๆฐใฎ GPU ใซ่ชๅ็ใซๅๅฒใใใใจใงใใใใไปฅๅคใฏใในใฆ่ชๅใง่กใๅฟ
่ฆใใใใพใใ
[`Trainer`] ใไฝฟ็จใใใจใใในใฆใ่ชๅ็ใซๅฆ็ใใใพใใ
[`Trainer`] ใไฝฟ็จใใชใๅ ดๅใDeepSpeed ZeRO-3 ใๅน็็ใซๅฐๅ
ฅใใใซใฏใ
ใขใใซใใคใณในใฟใณในๅใใๅใซ [`~integrations.HfDeepSpeedConfig`] ใชใใธใงใฏใใๅ้คใใใใฎใชใใธใงใฏใใ็ใใใพใพใซใใพใใ
Deepspeed ZeRO-1 ใพใใฏ ZeRO-2 ใไฝฟ็จใใฆใใๅ ดๅใฏใ`HfDeepSpeedConfig`ใไฝฟ็จใใๅฟ
่ฆใฏใพใฃใใใใใพใใใ
ใใจใใฐใไบๅใใฌใผใใณใฐใใใใขใใซใฎๅ ดๅใฏๆฌกใฎใใใซใชใใพใใ
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = AutoModel.from_pretrained("gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
ใพใใฏใไบๅใใฌใผใใณใฐใใใฆใใชใใขใใซใฎๅ ดๅ:
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
[`Trainer`] ็ตฑๅใไฝฟ็จใใฆใใชใๅ ดๅใฏใๅฎๅ
จใซ็ฌๅใง่กใใใจใซใชใใใจใซๆณจๆใใฆใใ ใใใๅบๆฌ็ใซใฏใ[Deepspeed](https://www.deepspeed.ai/) Web ใตใคใใฎใใญใฅใกใณใใซๅพใฃใฆใใ ใใใใพใใ่จญๅฎใใกใคใซใๆ็คบ็ใซ่จญๅฎใใๅฟ
่ฆใใใใพใใ`"auto"`ๅคใฏไฝฟ็จใงใใใไปฃใใใซๅฎ้ใฎๅคใๅ
ฅๅใใๅฟ
่ฆใใใใพใใ
## HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all
### Custom DeepSpeed ZeRO Inference
ไปฅไธใฏใๅไธใฎ GPU ใซใขใใซใ้ฉๅใงใใชใๅ ดๅใซใ[`Trainer`] ใไฝฟ็จใใใซ DeepSpeed ZeRO ๆจ่ซใๅฎ่กใใๆนๆณใฎไพใงใใ่งฃๆฑบ็ญใซใฏใ่ฟฝๅ ใฎ GPU ใฎไฝฟ็จใใพใใฏ GPU ใกใขใชใ CPU ใกใขใชใซใชใใญใผใใใใใจใๅซใพใใพใใ
ใใใง็่งฃใในใ้่ฆใชใใฅใขใณในใฏใZeRO ใฎ่จญ่จๆนๆณใซใใใ็ฐใชใ GPU ใง็ฐใชใๅ
ฅๅใไธฆ่กใใฆๅฆ็ใงใใใจใใใใจใงใใ
ใใฎไพใซใฏๅคง้ใฎใกใขใใใใ่ชๅทฑๆๆธๅใใใฆใใพใใ
ๅฟ
ใๆฌกใฎใใจใ่กใฃใฆใใ ใใใ
1. ๅๅใช GPU ใกใขใชใใใๅ ดๅใฏใCPU ใชใใญใผใใ็กๅนใซใใพใ (้ๅบฆใไฝไธใใใใ)ใ
2. Ampere ใพใใฏๆฐใใ GPU ใๆๆใใฆใใๅ ดๅใฏใๅฆ็ใ้ซ้ๅใใใใใซ bf16 ใๆๅนใซใใพใใใใฎใใผใใฆใงใขใใชใๅ ดๅใฏใbf16 ๆททๅ็ฒพๅบฆใงไบๅใใฌใผใใณใฐใใใใขใใซ (ใปใจใใฉใฎ t5 ใขใใซใชใฉ) ใไฝฟ็จใใชใ้ใใfp16 ใๆๅนใซใใใใจใใงใใพใใใใใใฏ้ๅธธใfp16 ใงใชใผใใผใใญใผใใๅบๅใจใใฆใฌใใผใธใ่กจ็คบใใใพใใ
```python
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For indepth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed
# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
ใใใ`t0.py`ใจใใฆไฟๅญใใฆๅฎ่กใใพใใใใ
```
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
ใใใฏ้ๅธธใซๅบๆฌ็ใชไพใงใใใใใผใบใซๅใใใฆ่ชฟๆดใใฆใใ ใใใ
### `generate` nuances
ZeRO Stage-3 ใง่คๆฐใฎ GPU ใไฝฟ็จใใๅ ดๅใ`generate(..., synced_gpus=True)`ใๅผใณๅบใใฆ GPU ใๅๆใใๅฟ
่ฆใใใใพใใใใใ่กใใชใใจใ1 ใคใฎ GPU ใไปใฎ GPU ใใๅ
ใซ็ๆใ็ตไบใใๅ ดๅใๆฎใใฎ GPU ใ็ๆใๅๆญขใใ GPU ใใใฆใงใคใใฎใทใฃใผใใๅไฟกใงใใชใใชใใใใใทในใใ ๅ
จไฝใใใณใฐใใพใใ
`transformers>=4.28` ไปฅ้ใ`synced_gpus` ใๆ็คบ็ใซๆๅฎใใใฆใใชใๅ ดๅใใใใใฎๆกไปถใๆคๅบใใใใจ่ชๅ็ใซ `True` ใซ่จญๅฎใใใพใใใใ ใใๅฟ
่ฆใซๅฟใใฆ `synced_gpus` ใฎๅคใใชใผใใผใฉใคใใใใใจใใงใใพใใ
## Deepspeed ็ตฑๅใฎใในใ
DeepSpeed ็ตฑๅใๅซใ PR ใ้ไฟกใใๅ ดๅใฏใCircleCI PR CI ใปใใใขใใใซใฏ GPU ใใชใใใจใซๆณจๆใใฆใใ ใใใใใฎใใใGPU ใๅฟ
่ฆใจใใใในใใฏๅฅใฎ CI ใงๆฏๆฉใฎใฟๅฎ่กใใใพใใใใใใฃใฆใPR ใง็ท่ฒใฎ CI ใฌใใผใใ่กจ็คบใใใฆใใDeepSpeed ใในใใๅๆ ผใใใใจใๆๅณใใใใใงใฏใใใพใใใ
DeepSpeed ใในใใๅฎ่กใใใซใฏใๅฐใชใใจใไปฅไธใๅฎ่กใใฆใใ ใใใ
```
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```
ใขใใชใณใฐใพใใฏ pytorch ใตใณใใซ ใณใผใใฎใใใใใๅคๆดใใๅ ดๅใฏใModel Zoo ใในใใๅฎ่กใใพใใไปฅไธใฏใในใฆใฎ DeepSpeed ใในใใๅฎ่กใใพใใ
```
RUN_SLOW=1 pytest tests/deepspeed
```
## Main DeepSpeed Resources
- [ใใญใธใงใฏใใฎ github](https://github.com/microsoft/deepspeed)
- [ไฝฟ็จๆนๆณใใญใฅใกใณใ](https://www.deepspeed.ai/getting-started/)
- [API ใใญใฅใกใณใ](https://deepspeed.readthedocs.io/en/latest/index.html)
- [ใใญใฐๆ็จฟ](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
่ซๆ:
- [ZeRO: ๅ
ใใฉใกใผใฟ ใขใใซใฎใใฌใผใใณใฐใซๅใใใกใขใชใฎๆ้ฉๅ](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: 10 ๅ่ฆๆจกใฎใขใใซ ใใฌใผใใณใฐใฎๆฐไธปๅ](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: ๆฅต้ในใฑใผใซใฎๆทฑๅฑคๅญฆ็ฟใฎใใใฎ GPU ใกใขใชใฎๅฃใๆใก็ ดใ](https://arxiv.org/abs/2104.07857)
ๆๅพใซใHuggingFace [`Trainer`] ใฏ DeepSpeed ใฎใฟใ็ตฑๅใใฆใใใใจใ่ฆใใฆใใใฆใใ ใใใ
DeepSpeed ใฎไฝฟ็จใซ้ขใใฆๅ้กใ่ณชๅใใใๅ ดๅใฏใ[DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues) ใซๅ้กใๆๅบใใฆใใ ใใใ