hf_public_repos/transformers/docs/source/ja/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Distributed training with 🤗 Accelerate

As models get larger, parallelism has emerged as a strategy for training bigger models on limited hardware and for accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any kind of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, you will learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

Get started by installing 🤗 Accelerate:

```bash
pip install accelerate
```

Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] automatically detects your type of distributed setup and initializes all the components necessary for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```

## Prepare to accelerate

The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model, and an optimizer:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```

## Backward

The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!

```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## Train

Once you've added the relevant lines of code, launch your training from a script or a notebook such as Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```

### Train with a notebook

🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).
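As a closing illustration of how the pieces above fit together, here is a minimal sketch of a `training_function` that `notebook_launcher` can spawn per process. It is not part of the original tutorial: a toy linear model and random tensors stand in for a real 🤗 Transformers model and DataLoader, and only the Accelerate calls mirror the steps described above.

```py
# Minimal sketch (assumptions: toy model and synthetic data replace a real
# Transformers model and dataset; the Accelerate usage follows the tutorial).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator, notebook_launcher


def training_function():
    accelerator = Accelerator()

    # Toy data and model so the example is self-contained.
    features = torch.randn(64, 10)
    labels = torch.randint(0, 2, (64,))
    train_dataloader = DataLoader(TensorDataset(features, labels), batch_size=8)

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    # Same pattern as above: prepare everything, then call accelerator.backward.
    train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

    model.train()
    for epoch in range(3):
        for batch_features, batch_labels in train_dataloader:
            logits = model(batch_features)
            loss = torch.nn.functional.cross_entropy(logits, batch_labels)
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()


# In a notebook, this launches one process per available device (e.g. TPU cores or GPUs).
notebook_launcher(training_function)
```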
hf_public_repos/transformers/docs/source/ko/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Load pretrained instances with an AutoClass[[load-pretrained-instances-with-an-autoclass]]

With so many different Transformer architectures, it can be challenging to create the right one for your checkpoint. As part of the 🤗 Transformers core philosophy of making the library easy, simple, and flexible to use, an `AutoClass` automatically infers and loads the correct architecture from a given checkpoint. The `from_pretrained()` method lets you quickly load a pretrained model for any architecture, so you don't have to spend time and resources training a model from scratch. Producing checkpoint-agnostic code means that if your code works for one checkpoint, it will work for another checkpoint, as long as it was trained for a similar task, even if the architecture is different.

<Tip>

Remember, the architecture refers to the skeleton of the model and a checkpoint is the weights for a given architecture. For example, [BERT](https://huggingface.co/bert-base-uncased) is an architecture, while `bert-base-uncased` is a checkpoint. "Model" is a general term that can mean either architecture or checkpoint.

</Tip>

In this tutorial, you will learn to:

* Load a pretrained tokenizer.
* Load a pretrained image processor.
* Load a pretrained feature extractor.
* Load a pretrained processor.
* Load a pretrained model.

## AutoTokenizer[[autotokenizer]]

Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.

Load a tokenizer with [`AutoTokenizer.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

Then tokenize your input as shown below:

```py
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

## AutoImageProcessor[[autoimageprocessor]]

For vision tasks, an image processor processes the image into the correct input format.

```py
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

## AutoFeatureExtractor[[autofeatureextractor]]

For audio tasks, a feature extractor processes the audio signal into the correct input format.

Load a feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
...     "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
```

## AutoProcessor[[autoprocessor]]

Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the LayoutLMV2 model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them.

Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
```

## AutoModel[[automodel]]

<frameworkcontent>
<pt>
Finally, the `AutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`AutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

<Tip warning={true}>

For PyTorch models, the `from_pretrained()` method uses `torch.load()`, which internally uses pickle and is known to be insecure. In general, never load a model that comes from an untrusted source or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are [scanned for malware](https://huggingface.co/docs/hub/security-malware) at each commit. See the [Hub documentation](https://huggingface.co/docs/hub/security) for best practices like [signed commit verification](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) with GPG.

TensorFlow and Flax checkpoints are not affected, and you can work around the issue by loading them with the `from_tf` and `from_flax` keyword arguments of the `from_pretrained` method.

</Tip>

In general, we recommend using the `AutoTokenizer` class and the `AutoModelFor` class to load pretrained instances of models. This ensures you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use your newly loaded tokenizer, image processor, and feature extractor to preprocess a dataset for fine-tuning.
</pt>
<tf>
Finally, the `TFAutoModelFor` classes let you load a pretrained model for a given task (see [here](model_doc/auto) for a complete list of available tasks). For example, load a model for sequence classification with [`TFAutoModelForSequenceClassification.from_pretrained`]:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

Easily reuse the same checkpoint to load an architecture for a different task:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
```

In general, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models. This ensures you load the correct architecture every time. In the next [tutorial](preprocessing), you will learn how to use your newly loaded tokenizer, image processor, and feature extractor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>
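To make the recommendation above concrete, here is a small sketch, not part of the original tutorial, that pairs `AutoTokenizer` with `AutoModelForSequenceClassification` for a single forward pass. It reuses the `distilbert-base-uncased` checkpoint from the examples above; since that checkpoint has no fine-tuned classification head, the prediction itself is only a placeholder.

```py
# Sketch: combine the tokenizer and task model loaded above for one prediction.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is freshly initialized until you fine-tune the model,
# so the predicted class id is not meaningful yet.
predicted_class_id = logits.argmax(dim=-1).item()
print(predicted_class_id)
```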
hf_public_repos/transformers/docs/source/ko/torchscript.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Export to TorchScript[[export-to-torchscript]]

<Tip>

This is the very beginning of our experiments with TorchScript, and we are still exploring its capabilities with variable-input-size models. It is a focus of interest for us, and we will deepen our analysis in upcoming releases with more code examples, a more flexible implementation, and benchmarks comparing Python-based code with compiled TorchScript.

</Tip>

According to the [TorchScript documentation](https://pytorch.org/docs/stable/jit.html):

> TorchScript is a way to create serializable and optimizable models from PyTorch code.

[JIT and TRACE](https://pytorch.org/docs/stable/jit.html) are PyTorch modules that allow developers to export their models to be reused in other programs, such as efficiency-oriented C++ programs.

We provide an interface for exporting 🤗 Transformers models to TorchScript so they can be reused in an environment other than PyTorch-based Python programs. This document explains how to export and use models with TorchScript.

Exporting a model requires two things:

- model instantiation with the `torchscript` flag
- a forward pass with dummy inputs

These requirements imply several things developers should be careful about, as detailed below.

## TorchScript flag and tied weights[[torchscript-flag-and-tied-weights]]

The `torchscript` flag is necessary because most 🤗 Transformers language models have tied weights between their `Embedding` layer and their `Decoding` layer. TorchScript does not allow you to export models with tied weights, so the weights must be untied and cloned beforehand.

Models instantiated with the `torchscript` flag have their `Embedding` layer and `Decoding` layer separated, which means they should not be trained afterwards. Training would desynchronize the two layers, leading to unexpected results.

Models that do not have a language model head are not affected, because those models do not have tied weights. They can be safely exported without the `torchscript` flag.

## Dummy inputs and standard lengths[[dummy-inputs-and-standard-lengths]]

The dummy inputs are used for the model's forward pass. While the input values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the *trace* of the model.

The trace is created relative to the inputs' dimensions. It is therefore constrained by the dimensions of the dummy input and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised:

```
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
```

We recommend tracing the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill in the missing values. However, because the model is traced with a larger input size, the dimensions of the matrices will also be larger, resulting in more computation.

Be careful of the total number of operations done on each input, and follow the performance closely when exporting models with varying sequence lengths.

## Using TorchScript in Python[[using-torchscript-in-python]]

This section demonstrates how to save and load models, and how to use the trace for inference.

### Saving a model[[saving-a-model]]

To export a `BertModel` with TorchScript, instantiate `BertModel` from the `BertConfig` class and then save it to disk under the filename `traced_bert.pt`:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenize the input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Mask one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Create the dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initialize the model with the torchscript flag
# The flag is set to True even though it is not necessary here, as this model does not have an LM head.
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)

# Instantiate the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with *from_pretrained*, you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)

# Create the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
```

### Loading a model[[loading-a-model]]

Now you can load the previously saved `BertModel`, `traced_bert.pt`, from disk and use it with the previously initialized `dummy_input`:

```python
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
```

### Using a traced model for inference[[using-a-traced-model-for-inference]]

Use the traced model for inference through its `__call__` dunder method:

```python
traced_model(tokens_tensor, segments_tensors)
```

## Deploy Hugging Face TorchScript models to AWS with the Neuron SDK[[deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk]]

AWS introduced the [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) instance family for low-cost, high-performance machine learning inference in the cloud. Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator specializing in deep learning inference workloads. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1. The Neuron SDK provides:

1. An easy-to-use API that requires only one line of code change to trace and optimize a TorchScript model for inference in the cloud.
2. Out-of-the-box performance optimizations for [improved cost efficiency](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/).
3. Support for Hugging Face transformers models built with either [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) or [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html).

### Implications[[implications]]

Transformers models based on the [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert) architecture, or its variants such as [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) and [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta), run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still be adapted to run on Inf1 by following the [AWS Neuron MarianMT tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). More information about the models that can be converted out of the box on Inferentia can be found in the [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) section of the Neuron documentation.

### Dependencies[[dependencies]]

Converting models with AWS Neuron requires a [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), which comes preconfigured on the [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html).

### Converting a model for AWS Neuron[[converting-a-model-for-aws-neuron]]

To trace a `BertModel` for AWS NEURON, use the same code as in [Using TorchScript in Python](torchscript#using-torchscript-in-python). Import the `torch.neuron` framework extension to access the components of the Neuron SDK through a Python API:

```python
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
```

You only need to modify the following line:

```diff
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
```

This enables the Neuron SDK to trace the model and optimize it for Inf1 instances.

To learn more about AWS Neuron SDK features, tools, example tutorials, and the latest updates, see the [AWS NeuronSDK documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
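As a small illustration of the fixed-shape constraint discussed in the dummy-inputs section, the sketch below pads every input to the same length used at tracing time so later batches match the traced shapes. It is not from the original guide: the 32-token `max_length` is an arbitrary assumption for the example, and the tokenizer-based inputs replace the hand-built tensors used above.

```py
# Sketch: trace with padded, fixed-length inputs so later inputs of the same
# padded length reuse the trace (assumption: 32 tokens is the largest input).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

# Pad the tracing input to the fixed length.
encoded = tokenizer(
    "Who was Jim Henson?",
    padding="max_length",
    max_length=32,
    return_tensors="pt",
)
traced_model = torch.jit.trace(model, (encoded["input_ids"], encoded["attention_mask"]))

# Any later input padded to the same 32-token length matches the traced shapes.
other = tokenizer(
    "Jim Henson was a puppeteer.",
    padding="max_length",
    max_length=32,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = traced_model(other["input_ids"], other["attention_mask"])
```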
hf_public_repos/transformers/docs/source/ko/task_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# What 🤗 Transformers can do[[what__transformers_can_do]]

🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. The library contains not only Transformer models but also non-Transformer models, such as modern convolutional networks for computer vision tasks. If you look at the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind them. Want to remove a background object from a picture taken with your smartphone? That is an example of a panoptic segmentation task (don't worry if you don't know what this means yet; it is described in the following sections!).

This page provides short examples, in three lines of code each, of how the 🤗 Transformers library can be used for a variety of speech and audio, computer vision, and NLP tasks.

## Audio[[audio]]

Audio and speech processing tasks are a little different from the other modalities, mainly because audio is input as a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is sampled at regular intervals. Taking more samples within an interval results in a higher sampling rate, and the audio more closely resembles the original audio source.

Previous approaches preprocessed the audio to extract useful features from it. It is now more common to feed the raw audio waveform directly into a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features.

### Audio classification[[audio_classification]]

Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include:

* acoustic scene classification: label audio with a scene label ("office", "beach", "stadium")
* acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking")
* tagging: label audio containing multiple sounds (birdsong, speaker identification in a meeting)
* music classification: label music with a genre label ("metal", "hip-hop", "country")

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
>>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4532, 'label': 'hap'},
 {'score': 0.3622, 'label': 'sad'},
 {'score': 0.0943, 'label': 'neu'},
 {'score': 0.0903, 'label': 'ang'}]
```

### Automatic speech recognition[[automatic_speech_recognition]]

Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks because speech is such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask virtual assistants to play music, set reminders, and tell us the weather.

One of the key challenges that Transformer architectures have helped with is low-resource languages. After pretraining on large amounts of speech data, fine-tuning the model on only one hour of labeled speech data in a low-resource language can produce results of far higher quality than an ASR system trained on 100x more labeled data.

```py
>>> from transformers import pipeline

>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```

## Computer vision[[computer_vision]]

One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers with a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. A particular combination of pixel values describes the colors of an image.

Computer vision tasks can generally be approached in two ways:

1. Use convolutions to learn the hierarchical features of an image, from low-level features up to high-level abstract elements.
2. Split the image into patches and use a Transformer to gradually learn how each image patch relates to the others to form the image. Unlike the bottom-up approach favored by a `CNN`, this is somewhat like starting with a blurry image and gradually bringing it into focus.

### Image classification[[image_classification]]

Image classification labels an entire image from a predefined set of classes. Like most classification tasks, image classification has many practical use cases, some of which include:

* healthcare: label medical images to detect disease or monitor patient health
* environment: classify satellite images to monitor deforestation, inform wildland management, or detect wildfires
* agriculture: classify images of crops to monitor plant health, or classify satellite images for land use monitoring
* ecology: classify images of animal or plant species to monitor wildlife populations or track endangered species

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="image-classification")
>>> preds = classifier(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.4335, 'label': 'lynx, catamount'}
{'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
{'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}
{'score': 0.0239, 'label': 'Egyptian cat'}
{'score': 0.0229, 'label': 'tiger cat'}
```

### Object detection[[object_detection]]

Unlike image classification, object detection identifies multiple objects within an image and their positions, defined by bounding boxes. Some example applications of object detection include:

* self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights
* remote sensing: disaster monitoring, urban planning, and weather forecasting
* defect detection: detect cracks or structural damage in buildings, and manufacturing defects

```py
>>> from transformers import pipeline

>>> detector = pipeline(task="object-detection")
>>> preds = detector(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
>>> preds
[{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}]
```

### Image segmentation[[image_segmentation]]

Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which labels and predicts objects in an image with bounding boxes, because segmentation is more granular: it can detect objects at the pixel level. There are several types of image segmentation:

* instance segmentation: in addition to labeling an object's class, it also labels each distinct instance of an object ("dog-1", "dog-2", and so on)
* panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class **and** with a distinct instance of an object

Segmentation tasks are helpful in self-driving vehicles, which create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. They are also useful for medical imaging, where the finer granularity of the task can help identify abnormal cells or organ features.

Image segmentation can also be used in ecommerce, for example to virtually try on clothes or to create augmented reality experiences by overlaying virtual objects on the real world through a camera.

```py
>>> from transformers import pipeline

>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.9879, 'label': 'LABEL_184'}
{'score': 0.9973, 'label': 'snow'}
{'score': 0.9972, 'label': 'cat'}
```

### Depth estimation[[depth_estimation]]

Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, self-driving cars need to understand how far away objects like pedestrians, traffic signs, and other vehicles are in order to avoid obstacles and collisions. Depth information also helps construct 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings.

There are two approaches to depth estimation:

* stereo: depth is estimated by comparing two images of the same scene taken from slightly different angles
* monocular: depth is estimated from a single image

```py
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```

## Natural language processing[[natural_language_processing]]

NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format a model recognizes, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, a sequence of text can be represented as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!

### Text classification[[text_classification]]

Like classification tasks in other modalities, text classification labels a sequence of text (at the sentence, paragraph, or document level) from a predefined set of classes. Text classification has many practical applications, some of which include:

* sentiment analysis: label text according to a polarity like `positive` or `negative`, which can inform and support decision-making in fields like politics, finance, and marketing
* content classification: label text by topic (weather, sports, finance, etc.) to help organize and filter information in news and social media feeds

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="sentiment-analysis")
>>> preds = classifier("Hugging Face is the best thing since sliced bread!")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```

### Token classification[[token_classification]]

In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords, known as [tokens](/glossary#token). Token classification assigns each token a label from a predefined set of classes.

Two common types of token classification are:

* named entity recognition (NER): label a token according to an entity category such as organization, person, location, or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
* part-of-speech tagging (POS): label a token according to its part of speech, such as noun, verb, or adjective. POS helps translation systems understand how identical words differ grammatically ("bank" as a noun versus "bank" as a verb).

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="ner")
>>> preds = classifier("Hugging Face is a French company based in New York City.")
>>> preds = [
...     {
...         "entity": pred["entity"],
...         "score": round(pred["score"], 4),
...         "index": pred["index"],
...         "word": pred["word"],
...         "start": pred["start"],
...         "end": pred["end"],
...     }
...     for pred in preds
... ]
>>> print(*preds, sep="\n")
{'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2}
{'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7}
{'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12}
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```

### Question answering[[question_answering]]

Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and sometimes without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support, and help search engines retrieve the information you are asking for.

There are two common types of question answering:

* extractive: given a question and some context, the answer is a span of text extracted from the given context
* abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`] shown below

```py
>>> from transformers import pipeline

>>> question_answerer = pipeline(task="question-answering")
>>> preds = question_answerer(
...     question="What is the name of the repository?",
...     context="The name of the repository is huggingface/transformers",
... )
>>> print(
...     f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
... )
score: 0.9327, start: 30, end: 54, answer: huggingface/transformers
```

### Summarization[[summarization]]

Summarization creates a shorter version of a longer document while trying to preserve as much of its meaning as possible. Summarization is a `sequence-to-sequence` task: it outputs a text sequence that is shorter than the input. Summarization can help readers quickly grasp the main points of long documents. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples where summarization can save readers time and serve as a reading aid.

Like question answering, there are two types of summarization:

* extractive: identify and extract the most important sentences from the original text
* abstractive: generate the target summary from the original text, which may include new words not present in the input document; the [`SummarizationPipeline`] uses the abstractive approach

```py
>>> from transformers import pipeline

>>> summarizer = pipeline(task="summarization")
>>> summarizer(
...     "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
```

### Translation[[translation]]

Translation converts a sequence of text in one language to another. It plays an important role in helping people from different backgrounds communicate with each other, in translating content to reach wider audiences, and can even serve as a learning tool for people learning a new language.

Along with summarization, translation is a `sequence-to-sequence` task: the model receives an input sequence and returns a target output sequence.

Early translation models were mostly monolingual, but recently there has been growing interest in multilingual models that can translate between many pairs of languages.

```py
>>> from transformers import pipeline

>>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning."
>>> translator = pipeline(task="translation", model="t5-small")
>>> translator(text)
[{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}]
```

### Language modeling[[language_modeling]]

Language modeling is the task of predicting a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be fine-tuned for many other downstream tasks. Lately there has been a lot of interest in large language models (LLMs), which are capable of zero-shot or few-shot learning. This means the model can solve tasks it was not explicitly trained to solve! Language models can generate fluent and convincing text, but be careful: the text is not always accurate.

There are two types of language modeling:

* causal language modeling: the model's objective is to predict the next token in a sequence, and future tokens are masked.

```py
>>> from transformers import pipeline

>>> prompt = "Hugging Face is a community-based open-source platform for machine learning."
>>> generator = pipeline(task="text-generation")
>>> generator(prompt)  # doctest: +SKIP
```

* masked language modeling: the model's objective is to predict a masked token in a sequence with full access to all the tokens in the sequence.

```py
>>> text = "Hugging Face is a community-based open-source <mask> for machine learning."
>>> fill_mask = pipeline(task="fill-mask")
>>> preds = fill_mask(text, top_k=1)
>>> preds = [
...     {
...         "score": round(pred["score"], 4),
...         "token": pred["token"],
...         "token_str": pred["token_str"],
...         "sequence": pred["sequence"],
...     }
...     for pred in preds
... ]
>>> preds
[{'score': 0.2236,
  'token': 1761,
  'token_str': ' platform',
  'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}]
```

Hopefully this page has given you some background on the types of tasks in each modality and the practical importance of each one. In the next [section](tasks_explained), you will learn **how** 🤗 Transformers solves these tasks.
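All of the examples above rely on the default checkpoint for each task. As a final sketch that is not part of the original summary, the snippet below shows that the same `pipeline` API also accepts an explicit model name and a list of inputs; the checkpoint and the CPU-only `device=-1` setting are assumptions chosen for illustration.

```py
from transformers import pipeline

# Explicit checkpoint instead of the task default; device=-1 keeps inference on CPU
# (pass a GPU index such as device=0 if a CUDA device is available).
classifier = pipeline(
    task="sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=-1,
)

# Pipelines accept a list of inputs and return one prediction per item.
texts = [
    "Hugging Face is the best thing since sliced bread!",
    "I am not sure how I feel about this.",
]
for text, pred in zip(texts, classifier(texts)):
    print(f"{text!r} -> {pred['label']} ({round(pred['score'], 4)})")
```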
hf_public_repos/transformers/docs/source/ko/custom_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[sharing-custom-models]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ์ถ”์ƒํ™” ์—†์ด ์ €์žฅ์†Œ์˜ ์ง€์ •๋œ ํ•˜์œ„ ํด๋”์— ์™„์ „ํžˆ ์ฝ”๋”ฉ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์†์‰ฝ๊ฒŒ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ  ํ•„์š”์— ๋”ฐ๋ผ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๊ฒฝ์šฐ์—๋Š” ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์ด ๋” ์‰ฌ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” Transformers ๋‚ด์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์—†๋Š” ๊ฒฝ์šฐ์—๋„ ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก (์˜์กด์„ฑ๊ณผ ํ•จ๊ป˜) ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [timm ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://github.com/rwightman/pytorch-image-models)์˜ ResNet ํด๋ž˜์Šค๋ฅผ [`PreTrainedModel`]๋กœ ๋ž˜ํ•‘ํ•œ ResNet ๋ชจ๋ธ์„ ์˜ˆ๋กœ ๋ชจ๋“  ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-configuration]] ๋ชจ๋ธ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์— ๋จผ์ € ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•ด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ `configuration`์€ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ๋ชจ๋“  ์ค‘์š”ํ•œ ๊ฒƒ๋“ค์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ๋ชจ๋ธ์€ `config`๋ฅผ ์‚ฌ์šฉํ•ด์„œ๋งŒ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์™„๋ฒฝํ•œ ๊ตฌ์„ฑ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ์‹œ์—์„œ๋Š” ResNet ํด๋ž˜์Šค์˜ ์ธ์ˆ˜(argument)๋ฅผ ์กฐ์ •ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๊ตฌ์„ฑ์€ ๊ฐ€๋Šฅํ•œ ResNet ์ค‘ ๋‹ค๋ฅธ ์œ ํ˜•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ช‡ ๊ฐ€์ง€ ์œ ํšจ์„ฑ์„ ํ™•์ธํ•œ ํ›„ ํ•ด๋‹น ์ธ์ˆ˜๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` ์‚ฌ์šฉ์ž ์ •์˜ `configuration`์„ ์ž‘์„ฑํ•  ๋•Œ ๊ธฐ์–ตํ•ด์•ผ ํ•  ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `PretrainedConfig`์„ ์ƒ์†ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
- `PretrainedConfig`์˜ `__init__`์€ ๋ชจ๋“  kwargs๋ฅผ ํ—ˆ์šฉํ•ด์•ผ ํ•˜๊ณ , - ์ด๋Ÿฌํ•œ `kwargs`๋Š” ์ƒ์œ„ ํด๋ž˜์Šค `__init__`์— ์ „๋‹ฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒ์†์€ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋“  ๊ธฐ๋Šฅ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ ์œผ๋กœ๋ถ€ํ„ฐ ๋น„๋กฏ๋˜๋Š” ๋‘ ๊ฐ€์ง€ ์ œ์•ฝ ์กฐ๊ฑด์€ `PretrainedConfig`์— ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `from_pretrained` ๋ฉ”์„œ๋“œ๋กœ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ๋•Œ ํ•ด๋‹น ํ•„๋“œ๋Š” ๊ตฌ์„ฑ์—์„œ ์ˆ˜๋ฝํ•œ ํ›„ ์ƒ์œ„ ํด๋ž˜์Šค๋กœ ๋ณด๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜์ง€ ์•Š๋Š” ํ•œ, `configuration`์—์„œ `model_type`์„ ์ •์˜(์—ฌ๊ธฐ์„œ `model_type="resnet"`)ํ•˜๋Š” ๊ฒƒ์€ ํ•„์ˆ˜ ์‚ฌํ•ญ์ด ์•„๋‹™๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ๊ตฌ์„ฑ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์„ ์‰ฝ๊ฒŒ ๋งŒ๋“ค๊ณ  ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ resnet50d ๊ตฌ์„ฑ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `custom-resnet` ํด๋” ์•ˆ์— `config.json`์ด๋ผ๋Š” ํŒŒ์ผ์ด ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `from_pretrained` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` ๊ตฌ์„ฑ์„ Hub์— ์ง์ ‘ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด [`PretrainedConfig`] ํด๋ž˜์Šค์˜ [`~PretrainedConfig.push_to_hub`]์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-model]] ์ด์ œ ResNet ๊ตฌ์„ฑ์ด ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ๋Š” ๋‘ ๊ฐœ๋ฅผ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์—์„œ hidden features๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ([`BertModel`]๊ณผ ๊ฐ™์ด), ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ ํ•ฉํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค([`BertForSequenceClassification`]๊ณผ ๊ฐ™์ด). ์ด์ „์— ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ์ด ์˜ˆ์ œ์—์„œ๋Š” ๋‹จ์ˆœํ•˜๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์˜ ๋Š์Šจํ•œ ๋ž˜ํผ(loose wrapper)๋งŒ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ์ „์— ๋ธ”๋ก ์œ ํ˜•๊ณผ ์‹ค์ œ ๋ธ”๋ก ํด๋ž˜์Šค ๊ฐ„์˜ ๋งคํ•‘ ์ž‘์—…๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ `ResNet` ํด๋ž˜์Šค๋กœ ์ „๋‹ฌ๋˜์–ด `configuration`์„ ํ†ตํ•ด ๋ชจ๋ธ์ด ์„ ์–ธ๋ฉ๋‹ˆ๋‹ค: ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด์„œ๋Š” forward ๋ฉ”์†Œ๋“œ๋งŒ ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ `PreTrainedModel`๋ฅผ ์ƒ์†๋ฐ›๊ณ , `config`๋ฅผ ํ†ตํ•ด ์ƒ์œ„ ํด๋ž˜์Šค ์ดˆ๊ธฐํ™”๋ฅผ ํ˜ธ์ถœํ•˜๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š” (์ผ๋ฐ˜์ ์ธ `torch.nn.Module`์„ ์ž‘์„ฑํ•  ๋•Œ์™€ ๋น„์Šทํ•จ). ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ์—๋Š” `config_class`๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ถ€๋ถ„์ด ํ•„์ˆ˜์ž…๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). <Tip> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ๊ณผ ๊ต‰์žฅํžˆ ์œ ์‚ฌํ•˜๋‹ค๋ฉด, ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ๋•Œ ๊ตฌ์„ฑ์„ ์ฐธ์กฐํ•ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์›ํ•˜๋Š” ๊ฒƒ์„ ๋ชจ๋ธ์ด ๋ฐ˜ํ™˜ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, `ResnetModelForImageClassification`์—์„œ ํ–ˆ๋˜ ๊ฒƒ ์ฒ˜๋Ÿผ ๋ ˆ์ด๋ธ”์„ ํ†ต๊ณผ์‹œ์ผฐ์„ ๋•Œ ์†์‹ค๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒƒ์ด [`Trainer`] ํด๋ž˜์Šค ๋‚ด์—์„œ ์ง์ ‘ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ž์‹ ๋งŒ์˜ ํ•™์Šต ๋ฃจํ”„ ๋˜๋Š” ๋‹ค๋ฅธ ํ•™์Šต ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ๊ณ„ํš์ด๋ผ๋ฉด ๋‹ค๋ฅธ ์ถœ๋ ฅ ํ˜•์‹์„ ์‚ฌ์šฉํ•ด๋„ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ชจ๋ธ ํด๋ž˜์Šค๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ํ•˜๋‚˜ ์ƒ์„ฑํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, [`~PreTrainedModel.save_pretrained`]๋˜๋Š” [`~PreTrainedModel.push_to_hub`]์ฒ˜๋Ÿผ [`PreTrainedModel`]์— ์†ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋‘ ๋ฒˆ์งธ ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ ์ฝ”๋“œ์™€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, ๋ชจ๋ธ ๋‚ด๋ถ€์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋กœ๋“œํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋ฅผ ํ™œ์šฉํ•  ๋•Œ๋Š”, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์ž์‹ ๋งŒ์˜ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต์‹œํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ resnet50d๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋ชจ๋ธ์€ resnet50d์˜ ๋ž˜ํผ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๊ฐ€์ค‘์น˜๋ฅผ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ [`~PreTrainedModel.save_pretrained`] ๋˜๋Š” [`~PreTrainedModel.push_to_hub`]๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๋ชจ๋ธ ์ฝ”๋“œ๊ฐ€ ์ €์žฅ๋˜๋Š”์ง€ ํ™•์ธํ•ด๋ด…์‹œ๋‹ค. ## Hub๋กœ ์ฝ”๋“œ ์—…๋กœ๋“œํ•˜๊ธฐ[[sending-the-code-to-the-hub]] <Tip warning={true}> ์ด API๋Š” ์‹คํ—˜์ ์ด๋ฉฐ ๋‹ค์Œ ๋ฆด๋ฆฌ์Šค์—์„œ ์•ฝ๊ฐ„์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ๋จผ์ € ๋ชจ๋ธ์ด `.py` ํŒŒ์ผ์— ์™„์ „ํžˆ ์ •์˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋ชจ๋“  ํŒŒ์ผ์ด ๋™์ผํ•œ ์ž‘์—… ๊ฒฝ๋กœ์— ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ƒ๋Œ€๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import)์— ์˜์กดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (transformers์—์„œ๋Š” ์ด ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ํ•˜์œ„ ๋ชจ๋“ˆ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค). ์ด ์˜ˆ์‹œ์—์„œ๋Š” ์ž‘์—… ๊ฒฝ๋กœ ์•ˆ์˜ `resnet_model`์—์„œ `modeling_resnet.py` ํŒŒ์ผ๊ณผ `configuration_resnet.py` ํŒŒ์ผ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ํŒŒ์ผ์—๋Š” `ResnetConfig`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ๊ณ  ๋ชจ๋ธ๋ง ํŒŒ์ผ์—๋Š” `ResnetModel` ๋ฐ `ResnetModelForImageClassification`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` Python์ด `resnet_model`์„ ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ฐ์ง€ํ•˜๋Š” ๋ชฉ์ ์ด๊ธฐ ๋•Œ๋ฌธ์— `__init__.py`๋Š” ๋น„์–ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  ํŒŒ์ผ ์ƒ๋‹จ์— ์žˆ๋Š” ์ƒ๋Œ€ ๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import) ๋ถ€๋ถ„์„ `transformers` ํŒจํ‚ค์ง€์—์„œ ์ž„ํฌํŠธ ํ•˜๋„๋ก ๋ณ€๊ฒฝํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๊ธฐ์กด ๊ตฌ์„ฑ์ด๋‚˜ ๋ชจ๋ธ์„ ์žฌ์‚ฌ์šฉ(๋˜๋Š” ์„œ๋ธŒ ํด๋ž˜์Šคํ™”)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๋จผ์ €, ์ƒˆ๋กœ ๋งŒ๋“  ํŒŒ์ผ์— ResNet ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž„ํฌํŠธํ•ฉ๋‹ˆ๋‹ค: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` ๋‹ค์Œ์œผ๋กœ `save_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ•ด๋‹น ๊ฐ์ฒด์˜ ์ฝ”๋“œ ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ , ๋ณต์‚ฌํ•œ ํŒŒ์ผ์„ Auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ณ (๋ชจ๋ธ์ธ ๊ฒฝ์šฐ) ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` `configuration`์— ๋Œ€ํ•œ auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ(`configuration` ๊ด€๋ จ auto ํด๋ž˜์Šค๋Š” AutoConfig ํด๋ž˜์Šค ํ•˜๋‚˜๋งŒ ์žˆ์Œ), ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ์—๋Š” ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ง€์ • ๋ชจ๋ธ์€ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋ธ์— ๋งž๋Š” auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ด์ „์— ์ž‘์—…ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ ๋ชจ๋ธ์„ Hub๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด ๋กœ๊ทธ์ธ ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜์„ธ์š”. 
ํ„ฐ๋ฏธ๋„์—์„œ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` ์ฃผํ”ผํ„ฐ ๋…ธํŠธ๋ถ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py from huggingface_hub import notebook_login notebook_login() ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋ ‡๊ฒŒ ์ž์‹ ์˜ ๋„ค์ž„์ŠคํŽ˜์ด์Šค(๋˜๋Š” ์ž์‹ ์ด ์†ํ•œ ์กฐ์ง)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py resnet50d.push_to_hub("custom-resnet50d") ``` On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration `.py` files in the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). json ํ˜•์‹์˜ ๋ชจ๋ธ๋ง ๊ฐ€์ค‘์น˜์™€ ๊ตฌ์„ฑ ์™ธ์—๋„ `custom-resnet50d` ํด๋” ์•ˆ์˜ ๋ชจ๋ธ๋ง๊ณผ ๊ตฌ์„ฑ `.py` ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜ํ•ด Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [๋ชจ๋ธ ์ €์žฅ์†Œ](https://huggingface.co/sgugger/custom-resnet50d)์—์„œ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [sharing tutorial](model_sharing) ๋ฌธ์„œ์˜ `push_to_hub` ๋ฉ”์†Œ๋“œ์—์„œ ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using-a-model-with-custom-code]] auto ํด๋ž˜์Šค์™€ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ง€์ • ์ฝ”๋“œ ํŒŒ์ผ๊ณผ ํ•จ๊ป˜ ๋ชจ๋“  ๊ตฌ์„ฑ, ๋ชจ๋ธ, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์— ์—…๋กœ๋“œ๋œ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ์ฝ”๋“œ๋Š” ๋ฉœ์›จ์–ด๊ฐ€ ์žˆ๋Š”์ง€ ๊ฒ€์‚ฌ๋˜์ง€๋งŒ (์ž์„ธํ•œ ๋‚ด์šฉ์€ [Hub ๋ณด์•ˆ](https://huggingface.co/docs/hub/security#malware-scanning) ์„ค๋ช… ์ฐธ์กฐ), ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋ชจ๋ธ ์ฝ”๋“œ์™€ ์ž‘์„ฑ์ž๊ฐ€ ์•…์„ฑ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `trust_remote_code=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` ๋ชจ๋ธ ์ž‘์„ฑ์ž๊ฐ€ ์•…์˜์ ์œผ๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋ฐ์ดํŠธํ•˜์ง€ ์•Š์•˜๋‹ค๋Š” ์ ์„ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด, ์ปค๋ฐ‹ ํ•ด์‹œ(commit hash)๋ฅผ `revision`์œผ๋กœ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ๋„ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ ์ž‘์„ฑ์ž๋ฅผ ์™„์ „ํžˆ ์‹ ๋ขฐํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Hub์—์„œ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ์ปค๋ฐ‹ ๊ธฐ๋ก์„ ์ฐพ์•„๋ณผ ๋•Œ, ๋ชจ๋“  ์ปค๋ฐ‹์˜ ์ปค๋ฐ‹ ํ•ด์‹œ๋ฅผ ์‰ฝ๊ฒŒ ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ฒ„ํŠผ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋งŒ๋“  ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ธฐ[[registering-a-model-with-custom-code-to-the-auto-classes]] ๐Ÿค— Transformers๋ฅผ ์ƒ์†ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž„ํฌํŠธํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋Š” Hub๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค (Hub์—์„œ ์ž๋™์ ์œผ๋กœ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ•˜๋Š” ๊ฒƒ๊ณผ ๋ฐ˜๋Œ€). 
๊ตฌ์„ฑ์— ๊ธฐ์กด ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋‹ค๋ฅธ `model_type` ์†์„ฑ์ด ์žˆ๊ณ  ๋ชจ๋ธ ํด๋ž˜์Šค์— ์˜ฌ๋ฐ”๋ฅธ `config_class` ์†์„ฑ์ด ์žˆ๋Š” ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์„ [`AutoConfig`]์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์˜ `model_type`๊ณผ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ํ•ด๋‹น ๋ชจ๋ธ์˜ `config_class`์™€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/perf_hardware.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด [[custom-hardware-for-training]] ๋ชจ๋ธ ํ›ˆ๋ จ๊ณผ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ํ•˜๋“œ์›จ์–ด๋Š” ์„ฑ๋Šฅ์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด, Tim Dettmer์˜ ํ›Œ๋ฅญํ•œ ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”. [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ ๋งํฌ](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/) (์˜์–ด๋กœ ์ž‘์„ฑ๋จ). GPU ์„ค์ •์— ๋Œ€ํ•œ ์‹ค์šฉ์ ์ธ ์กฐ์–ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## GPU [[gpu]] ๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋•Œ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์„ธ ๊ฐ€์ง€ ์˜ต์…˜์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋” ํฐ GPU - ๋” ๋งŽ์€ GPU - ๋” ๋งŽ์€ CPU ๋ฐ NVMe ([DeepSpeed-Infinity](../en/main_classes/deepspeed#nvme-support)๋ฅผ ํ†ตํ•œ ์˜คํ”„๋กœ๋“œ(offload)) ์šฐ์„ , ํ•˜๋‚˜์˜ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด๋ด…์‹œ๋‹ค. ### ์ „์› ๊ณต๊ธ‰๊ณผ ๋ƒ‰๊ฐ [[power-and-cooling]] ๋น„์‹ผ ๊ณ ์„ฑ๋Šฅ GPU๋ฅผ ๊ตฌ๋งคํ•œ ๊ฒฝ์šฐ, ์˜ฌ๋ฐ”๋ฅธ ์ „์› ๊ณต๊ธ‰๊ณผ ์ถฉ๋ถ„ํ•œ ๋ƒ‰๊ฐ์„ ์ œ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **์ „์› ๊ณต๊ธ‰**: ์ผ๋ถ€ ๊ณ ์„ฑ๋Šฅ ์†Œ๋น„์ž์šฉ GPU๋Š” 2๊ฐœ ํ˜น์€ ๊ฐ€๋”๊ฐ€๋‹ค 3๊ฐœ์˜ PCI-E 8ํ•€ ์ „์› ์†Œ์ผ“์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์นด๋“œ์— ์žˆ๋Š” ์†Œ์ผ“ ์ˆ˜๋งŒํผ ๋…๋ฆฝ์ ์ธ 12V PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๊ฐ™์€ ์ผ€์ด๋ธ”์˜ ํ•œ์ชฝ ๋์— ์žˆ๋Š” 2๊ฐœ์˜ ์Šคํ”Œ๋ฆฟ(๋˜๋Š” ํ”ผ๊ทธํ…Œ์ผ(pigtail) ์ผ€์ด๋ธ”)์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. ์ฆ‰, GPU์— 2๊ฐœ์˜ ์†Œ์ผ“์ด ์žˆ๋‹ค๋ฉด, PSU(์ „์› ๊ณต๊ธ‰ ์žฅ์น˜)์—์„œ ์นด๋“œ๋กœ ์—ฐ๊ฒฐ๋˜๋Š” 2๊ฐœ์˜ PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜๋ฉฐ, ๋์— 2๊ฐœ์˜ PCI-E 8ํ•€ ์ปค๋„ฅํ„ฐ๊ฐ€ ์žˆ๋Š” ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์นด๋“œ์˜ ์ „์ฒด ์„ฑ๋Šฅ์„ ์ œ๋Œ€๋กœ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ๊ฐ์˜ PCI-E 8ํ•€ ์ „์› ์ผ€์ด๋ธ”์€ PSU ์ชฝ์˜ 12V ๋ ˆ์ผ์— ์—ฐ๊ฒฐ๋˜์–ด์•ผ ํ•˜๋ฉฐ ์ตœ๋Œ€ 150W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋‹ค๋ฅธ GPU๋Š” PCI-E 12ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ปค๋„ฅํ„ฐ๋Š” ์ตœ๋Œ€ 500W-600W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €๊ฐ€ํ˜• GPU๋Š” 6ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ตœ๋Œ€ 75W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ GPU๊ฐ€ ์•ˆ์ •์ ์ธ ์ „์••์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ๊ณ ๊ธ‰ PSU๋ฅผ ์„ ํƒํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ถ€ ์ €ํ’ˆ์งˆ์˜ PSU๋Š” GPU๊ฐ€ ์ตœ๊ณ  ์„ฑ๋Šฅ์œผ๋กœ ๋™์ž‘ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ „์••์„ ์•ˆ์ •์ ์œผ๋กœ ๊ณต๊ธ‰ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌผ๋ก , PSU๋Š” GPU์— ์ „์›์„ ๊ณต๊ธ‰ํ•˜๊ธฐ์— ์ถฉ๋ถ„ํ•œ ์—ฌ๋ถ„์˜ ์ „๋ ฅ ์šฉ๋Ÿ‰์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋ƒ‰๊ฐ**: GPU๊ฐ€ ๊ณผ์—ด๋˜๋ฉด ์„ฑ๋Šฅ์ด ์ €ํ•˜๋˜๊ณ  ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋„ˆ๋ฌด ๋œจ๊ฑฐ์›Œ์ง€๋ฉด ์ค‘์ง€๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU๊ฐ€ ๊ณผ์—ด๋  ๋•Œ ์ •ํ™•ํ•œ ์ ์ • ์˜จ๋„๋ฅผ ์•Œ๊ธฐ ์–ด๋ ค์šฐ๋‚˜, ์•„๋งˆ๋„ +80โ„ƒ ๋ฏธ๋งŒ์ด๋ฉด ์ข‹์ง€๋งŒ ๋” ๋‚ฎ์„์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค. 70โ„ƒ-75โ„ƒ ์ •๋„๊ฐ€ ํ›Œ๋ฅญํ•œ ์˜จ๋„ ๋ฒ”์œ„์ž…๋‹ˆ๋‹ค. 
์„ฑ๋Šฅ ์ €ํ•˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋Š” ์˜จ๋„๋Š” ๋Œ€๋žต 84โ„ƒ-90โ„ƒ ์ •๋„์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์„ฑ๋Šฅ ์ €ํ•˜ ์ด์™ธ์—๋„ ์ง€์†์ ์œผ๋กœ ๋งค์šฐ ๋†’์€ ์˜จ๋„๋Š” GPU ์ˆ˜๋ช…์„ ๋‹จ์ถ•์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์„œ, ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ธก๋ฉด ์ค‘ ํ•˜๋‚˜์ธ GPU ๊ฐ„ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋‹ค์ค‘ GPU ์—ฐ๊ฒฐ ๋ฐฉ์‹ [[multigpu-connectivity]] ๋‹ค์ค‘ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ GPU ๊ฐ„์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ์ „์ฒด ํ›ˆ๋ จ ์‹œ๊ฐ„์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ GPU๊ฐ€ ๋™์ผํ•œ ๋ฌผ๋ฆฌ์  ๋…ธ๋“œ์— ์žˆ์„ ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` nvidia-smi topo -m ``` ๋งŒ์•ฝ NVLink๋กœ ์—ฐ๊ฒฐ๋œ ๋“€์–ผ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` NVLink๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š๋Š” ๋‹ค๋ฅธ ํ™˜๊ฒฝ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` ์ด ๊ฒฐ๊ณผ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฒ”๋ก€๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` ๋”ฐ๋ผ์„œ ์ฒซ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `NV2`๋Š” GPU๊ฐ€ 2๊ฐœ์˜ NVLink๋กœ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋‚ด๊ณ , ๋‘ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `PHB`๋Š” ์ผ๋ฐ˜์ ์ธ ์†Œ๋น„์ž์šฉ PCIe+๋ธŒ๋ฆฟ์ง€ ์„ค์ •์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์„ค์ •์—์„œ ์–ด๋–ค ์œ ํ˜•์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ผ๋ถ€ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ GPU ๊ฐ„ ํ†ต์‹ ์„ ๋” ๋น ๋ฅด๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์œผ๋ฉฐ(NVLink์™€ ๊ฐ™์ด), ์–ด๋–ค ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ๋” ๋Š๋ฆฌ๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(PHB์™€ ๊ฐ™์ด). ์‚ฌ์šฉํ•˜๋Š” ํ™•์žฅ์„ฑ ์†”๋ฃจ์…˜์˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ฃผ์š”ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ๊ณ  ๋ฏธ๋ฏธํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. DDP์™€ ๊ฐ™์ด GPU๊ฐ€ ๊ฑฐ์˜ ๋™๊ธฐํ™”ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๊ฒฝ์šฐ, ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋Š๋ ค๋„ ํฐ ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด ZeRO-DP์™€ ๊ฐ™์ด GPU๊ฐ„ ํ†ต์‹ ์ด ๋งŽ์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋น ๋ฅธ ํ›ˆ๋ จ์„ ์œ„ํ•ด์„œ๋Š” ๋” ๋น ๋ฅธ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. #### NVLink [[nvlink]] [NVLink](https://en.wikipedia.org/wiki/NVLink)๋Š” Nvidia์—์„œ ๊ฐœ๋ฐœํ•œ ์œ ์„  ๊ธฐ๋ฐ˜์˜ ์ง๋ ฌ ๋‹ค์ค‘ ๋ ˆ์ธ ๊ทผ๊ฑฐ๋ฆฌ ํ†ต์‹  ๋งํฌ์ž…๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์„ธ๋Œ€์˜ NVLink๋Š” ๋” ๋น ๋ฅธ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf)์—์„œ ์•„๋ž˜์™€ ๊ฐ™์€ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: > 3์„ธ๋Œ€ NVLinkยฎ > GA102 GPU๋Š” 4๊ฐœ์˜ x4 ๋งํฌ๋ฅผ ํฌํ•จํ•˜๋Š” NVIDIA์˜ 3์„ธ๋Œ€ NVLink ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, > ๊ฐ ๋งํฌ๋Š” ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์— ๊ฐ ๋ฐฉํ–ฅ์œผ๋กœ ์ดˆ๋‹น 14.0625GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > 4๊ฐœ์˜ ๋งํฌ๋Š” ๊ฐ ๋ฐฉํ–ฅ์— ์ดˆ๋‹น 56.25GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•˜๋ฉฐ, ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์—๋Š” ์ดˆ๋‹น 112.5GB์˜ ์ด ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > ๋‘ ๊ฐœ์˜ RTX 3090 GPU๋ฅผ NVLink๋ฅผ ์‚ฌ์šฉํ•ด SLI๋กœ ์—ฐ๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
> (3-Way ๋ฐ 4-Way SLI ๊ตฌ์„ฑ์€ ์ง€์›๋˜์ง€ ์•Š์Œ์— ์œ ์˜ํ•˜์„ธ์š”.) ๋”ฐ๋ผ์„œ `nvidia-smi topo -m`์˜ ๊ฒฐ๊ณผ์—์„œ `NVX`์˜ ๊ฐ’์ด ๋†’์„์ˆ˜๋ก ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์„ธ๋Œ€๋Š” GPU ์•„ํ‚คํ…์ฒ˜์— ๋”ฐ๋ผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๋ฉด, gpt2๋ฅผ ์ž‘์€ wikitext ์ƒ˜ํ”Œ๋กœ ํ•™์Šต์‹œํ‚ค๋Š” ์˜ˆ์ œ๋ฅผ ํ†ตํ•ด, NVLink๊ฐ€ ํ›ˆ๋ จ์— ์–ด๋–ค ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | NVLink ์‚ฌ์šฉ ์‹œ ํ›ˆ๋ จ์ด ์•ฝ 23% ๋” ๋น ๋ฅด๊ฒŒ ์™„๋ฃŒ๋จ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ฒค์น˜๋งˆํฌ์—์„œ๋Š” `NCCL_P2P_DISABLE=1`์„ ์‚ฌ์šฉํ•˜์—ฌ NVLink๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋„๋ก ์„ค์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฒค์น˜๋งˆํฌ ์ฝ”๋“œ์™€ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` ํ•˜๋“œ์›จ์–ด: ๊ฐ๊ฐ 2๊ฐœ์˜ TITAN RTX 24GB + 2๊ฐœ์˜ NVLink (`NV2` in `nvidia-smi topo -m`) ์†Œํ”„ํŠธ์›จ์–ด: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
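์œ„ ๋ฒค์น˜๋งˆํฌ๋ฅผ ์ง์ ‘ ์žฌํ˜„ํ•˜๊ธฐ ์ „์—, PyTorch์—์„œ GPU ๊ฐ„ P2P(peer-to-peer) ์ ‘๊ทผ์ด ๊ฐ€๋Šฅํ•œ์ง€ ๊ฐ„๋‹จํžˆ ํ™•์ธํ•ด ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ํ•œ ๋…ธ๋“œ์— GPU๊ฐ€ 2๊ฐœ ์ด์ƒ ์žˆ๋Š” ํ™˜๊ฒฝ์„ ๊ฐ€์ •ํ•œ ์งง์€ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
import torch

# GPU 0๊ณผ GPU 1์ด NVLink ๋˜๋Š” PCIe๋ฅผ ํ†ตํ•ด ์„œ๋กœ์˜ ๋ฉ”๋ชจ๋ฆฌ์— P2P๋กœ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค
if torch.cuda.device_count() >= 2:
    print("GPU 0:", torch.cuda.get_device_name(0))
    print("GPU 1:", torch.cuda.get_device_name(1))
    print("P2P 0 <-> 1:", torch.cuda.can_device_access_peer(0, 1))
```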
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/pad_truncation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ[[padding-and-truncation]] ๋ฐฐ์น˜ ์ž…๋ ฅ์€ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์•„์„œ ๊ณ ์ • ํฌ๊ธฐ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๋‹ค์–‘ํ•œ ๊ธธ์ด์˜ ๋ฐฐ์น˜์—์„œ ์ง์‚ฌ๊ฐํ˜• ํ…์„œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ํŠน์ˆ˜ํ•œ **ํŒจ๋”ฉ ํ† ํฐ**์„ ์ถ”๊ฐ€ํ•˜์—ฌ ์งง์€ ์‹œํ€€์Šค๊ฐ€ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค ๋˜๋Š” ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉํ•˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด์™€ ๋™์ผํ•œ ๊ธธ์ด๋ฅผ ๊ฐ–๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ผ๋‚ด๊ธฐ๋Š” ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด์–ด ํŒจ๋”ฉ๊ณผ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ์‹œํ€€์Šค์˜ ๊ธธ์ด๋ฅผ ๋™์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ฐฐ์น˜์— ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ณ  ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๋Š” ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•„์š”ํ•˜๋‹ค๋ฉด API๊ฐ€ ์ง€์›ํ•˜๋Š” ๋” ๋งŽ์€ ์ „๋žต์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ธ์ˆ˜๋Š” `padding`, `truncation`, `max_length` ์„ธ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. `padding` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ์„ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `True` ๋˜๋Š” `'longest'`: ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค(๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค). - `'max_length'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ ์‹œํ€€์Šค๋งŒ ์ œ๊ณตํ•˜๋Š” ๊ฒฝ์šฐ์—๋„ ํŒจ๋”ฉ์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. - `False` ๋˜๋Š” `'do_not_pad'`: ํŒจ๋”ฉ์ด ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค. `truncation` ์ธ์ˆ˜๋Š” ์ž˜๋ผ๋‚ผ ๋ฐฉ๋ฒ•์„ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ถˆ๋ฆฌ์–ธ ๋˜๋Š” ๋ฌธ์ž์—ด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `True` ๋˜๋Š” `longest_first`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์˜ ํ† ํฐ์„ ์ ์ ˆํ•œ ๊ธธ์ด์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ํ•˜๋‚˜์”ฉ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. - `'only_second'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. - `'only_first'`: `max_length` ์ธ์ˆ˜๊ฐ€ ์ง€์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ฑฐ๋‚˜, `max_length`๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ(`max_length=None`) ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉ๋˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ์Œ(๋˜๋Š” ์‹œํ€€์Šค ์Œ์˜ ๋ฐฐ์น˜)๊ฐ€ ์ œ๊ณต๋œ ๊ฒฝ์šฐ ์Œ์˜ ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ๋งŒ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. - `False` ๋˜๋Š” `'do_not_truncate'`: ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด ๊ธฐ๋ณธ ๋™์ž‘์ž…๋‹ˆ๋‹ค. 
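์‹œํ€€์Šค ์Œ์„ ์ž…๋ ฅํ•  ๋•Œ ์ž˜๋ผ๋‚ด๊ธฐ ์ „๋žต์— ๋”ฐ๋ผ ๊ฒฐ๊ณผ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋‹ฌ๋ผ์ง€๋Š”์ง€๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์ง์ ‘ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (`bert-base-uncased` ํ† ํฌ๋‚˜์ด์ €์™€ ์ž„์˜๋กœ ๊ณ ๋ฅธ ์˜ˆ์‹œ ๋ฌธ์žฅ์„ ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

question = "What is padding?"
context = "Padding adds special padding tokens so that every sequence in a batch has the same length."

# ๋‘ ์ „๋žต ๋ชจ๋‘ ์ „์ฒด ๊ธธ์ด๋ฅผ max_length=8๋กœ ๋งž์ถ”์ง€๋งŒ, ์ž˜๋ฆฌ๋Š” ์ชฝ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค
only_second = tokenizer(question, context, truncation="only_second", max_length=8)
longest_first = tokenizer(question, context, truncation="longest_first", max_length=8)

print(tokenizer.decode(only_second["input_ids"]))    # ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ(context)๋งŒ ์ž˜๋ฆฝ๋‹ˆ๋‹ค
print(tokenizer.decode(longest_first["input_ids"]))  # ๊ธด ์ชฝ๋ถ€ํ„ฐ ํ† ํฐ์„ ํ•˜๋‚˜์”ฉ ์ œ๊ฑฐํ•˜๋ฏ€๋กœ ์งˆ๋ฌธ๋„ ์งง์•„์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค
```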
`max_length` ์ธ์ˆ˜๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ๊ธธ์ด๋ฅผ ์ œ์–ดํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” ์ •์ˆ˜ ๋˜๋Š” `None`์ผ ์ˆ˜ ์žˆ์œผ๋ฉฐ, `None`์ผ ๊ฒฝ์šฐ ๋ชจ๋ธ์ด ํ—ˆ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๊ธฐ๋ณธ๊ฐ’์ด ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ํŠน์ •ํ•œ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `max_length`์— ๋Œ€ํ•œ ์ž˜๋ผ๋‚ด๊ธฐ ๋˜๋Š” ํŒจ๋”ฉ์ด ๋น„ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ‘œ์—๋Š” ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ถŒ์žฅ ๋ฐฉ๋ฒ•์ด ์š”์•ฝ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ ฅ์œผ๋กœ ์‹œํ€€์Šค ์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ `truncation=True`๋ฅผ `['only_first', 'only_second', 'longest_first']`์—์„œ ์„ ํƒํ•œ `STRATEGY`, ์ฆ‰ `truncation='only_second'` ๋˜๋Š” `truncation='longest_first'`๋กœ ๋ฐ”๊พธ๋ฉด ์•ž์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ์Œ์˜ ๋‘ ์‹œํ€€์Šค๊ฐ€ ์ž˜๋ฆฌ๋Š” ๋ฐฉ์‹์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. | ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ฐฉ๋ฒ• | |--------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------| | ์ž˜๋ผ๋‚ด๊ธฐ ์—†์Œ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='longest')` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length')` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | ๋‹ค์–‘ํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | ํŠน์ • ๊ธธ์ด๋กœ ์ž˜๋ผ๋‚ด๊ธฐ | ํŒจ๋”ฉ ์—†์Œ | `tokenizer(batch_sentences, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋กœ ํŒจ๋”ฉ | ์‚ฌ์šฉ ๋ถˆ๊ฐ€ | | | ํŠน์ • ๊ธธ์ด๋กœ ํŒจ๋”ฉ | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` ๋˜๋Š” | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/attention.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜[[attention_mechanisms]] ๋Œ€๋ถ€๋ถ„์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ์ •๋ฐฉํ–‰๋ ฌ์ธ ์ „์ฒด ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๊ธด ํ…์ŠคํŠธ๋ฅผ ๋‹ค๋ฃฐ ๋•Œ๋Š” ํฐ ๊ณ„์‚ฐ ๋ณ‘๋ชฉ ํ˜„์ƒ์„ ์œ ๋ฐœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `Longformer`์™€ `Reformer`๋Š” ํ›ˆ๋ จ ์†๋„๋ฅผ ๋†’์ด๊ธฐ ์œ„ํ•ด ์–ดํ…์…˜ ํ–‰๋ ฌ์˜ ํฌ์†Œ ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜์—ฌ ํšจ์œจ์„ ๋†’์ด๋ ค๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ## LSH ์–ดํ…์…˜[[lsh_attention]] [Reformer](#reformer)๋Š” LSH(Locality Sensitive Hashing) ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. softmax(QK^t)์—์„œ๋Š” ํ–‰๋ ฌ QK^t์˜ (softmax ์ฐจ์›์—์„œ) ๊ฐ€์žฅ ํฐ ์š”์†Œ๋“ค๋งŒ ์œ ์šฉํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ๊ฐ์˜ ์ฟผ๋ฆฌ q์— ๋Œ€ํ•ด, q์™€ ๊ฐ€๊นŒ์šด ํ‚ค k๋งŒ ๊ณ ๋ คํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•ด์‹œ ํ•จ์ˆ˜๋Š” q์™€ k๊ฐ€ ๊ฐ€๊นŒ์šด์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋Š” ํ˜„์žฌ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์—ฌ ๋ณ€๊ฒฝ๋ฉ๋‹ˆ๋‹ค. ์ด ๋•Œ ์ฒซ ๋ฒˆ์งธ ์œ„์น˜์˜ ํ† ํฐ์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ฟผ๋ฆฌ์™€ ํ‚ค๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๊ฐ–๊ฒŒ ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(์„œ๋กœ ๋งค์šฐ ์œ ์‚ฌํ•จ). ํ•ด์‹œ๋Š” ์•ฝ๊ฐ„์˜ ๋ฌด์ž‘์œ„์„ฑ์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์‹ค์ œ๋กœ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ•ด์‹œ ํ•จ์ˆ˜๊ฐ€ ์‚ฌ์šฉ๋˜๊ณ  (`n_rounds` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์˜ํ•ด ๊ฒฐ์ •๋จ) ๊ทธ ํ›„์— ํ‰๊ท ๊ฐ’์„ ์ทจํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ์ง€์—ญ ์–ดํ…์…˜[[local_attention]] [Longformer](#longformer)๋Š” ์ง€์—ญ ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ํŠน์ • ํ† ํฐ์— ๋Œ€ํ•ด ์ง€์—ญ ์ปจํ…์ŠคํŠธ(์˜ˆ: ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” ๋‘ ๊ฐœ์˜ ํ† ํฐ์€ ๋ฌด์—‡์ธ๊ฐ€์š”?)๋งŒ์œผ๋กœ๋„ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š”๋ฐ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์ž‘์€ ์ฐฝ(window)์„ ๊ฐ€์ง„ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์Œ“์Œ์œผ๋กœ์จ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋Š” ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋” ๋งŽ์€ ์ˆ˜์˜ ํ† ํฐ์— ๋Œ€ํ•œ ์ˆ˜์šฉ ์˜์—ญ(receptive field)์„ ๊ฐ–๊ฒŒ ๋˜์–ด ์ „์ฒด ๋ฌธ์žฅ์˜ ํ‘œํ˜„์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „์— ์„ ํƒ๋œ ์ผ๋ถ€ ์ž…๋ ฅ ํ† ํฐ๋“ค์€ ์ „์—ญ ์–ดํ…์…˜์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด ๋ช‡ ๊ฐœ์˜ ํ† ํฐ์— ๋Œ€ํ•ด์„œ๋Š” ์–ดํ…์…˜ ํ–‰๋ ฌ์ด ๋ชจ๋“  ํ† ํฐ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ณผ์ •์€ ๋Œ€์นญ์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ชจ๋“  ํ† ํฐ๋“ค์€ ๋กœ์ปฌ ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋“ค์— ๋”ํ•ด ํ•ด๋‹น ํŠน์ • ํ† ํฐ๋“ค์—๋„ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฌธ์˜ Figure 2d์—์„œ ๋‚˜ํƒ€๋‚˜๋ฉฐ, ์•„๋ž˜์— ์ƒ˜ํ”Œ ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ์ œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: <div class="flex justify-center"> <img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/> </div> ์ ์€ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์–ดํ…์…˜ ํ–‰๋ ฌ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ๋” ํฐ ์‹œํ€€์Šค ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•๋“ค[[other_tricks]] ### ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ[[axial_positional_encodings]] [Reformer](#reformer)๋Š” ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ(axial positional encodings)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
๊ธฐ์กด์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์—์„œ๋Š” ์œ„์น˜ ์ธ์ฝ”๋”ฉ ํ–‰๋ ฌ E๋Š” ํฌ๊ธฐ๊ฐ€ \\(l \times d\\)์ธ ํ–‰๋ ฌ์ด๋ฉฐ, ์—ฌ๊ธฐ์„œ \\(l\\)์€ ์‹œํ€€์Šค ๊ธธ์ด(sequence length)์ด๊ณ  \\(d\\)๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ(hidden state)์˜ ์ฐจ์›์ž…๋‹ˆ๋‹ค. ๋งค์šฐ ๊ธด ํ…์ŠคํŠธ์˜ ๊ฒฝ์šฐ, ์ด ํ–‰๋ ฌ์€ ๋งค์šฐ ํฌ๋ฉฐ GPU ์ƒ์—์„œ ๊ณต๊ฐ„์„ ๋งŽ์ด ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์™„ํ™”ํ•˜๊ธฐ ์œ„ํ•ด, ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ํฐ ํ–‰๋ ฌ E๋ฅผ ๋‘ ๊ฐœ์˜ ์ž‘์€ ํ–‰๋ ฌ E1๊ณผ E2๋กœ ๋ถ„ํ•ดํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ E1์˜ ํฌ๊ธฐ๋Š” \\(l_{1} \times d_{1}\\)์ด๊ณ , E2์˜ ํฌ๊ธฐ๋Š” \\(l_{2} \times d_{2}\\)์ž…๋‹ˆ๋‹ค. ์ด๋•Œ \\(l_{1} \times l_{2} = l\\)์ด๊ณ  \\(d_{1} + d_{2} = d\\)(๊ธธ์ด์— ๋Œ€ํ•œ ๊ณฑ์…ˆ ์—ฐ์‚ฐ์„ ์‚ฌ์šฉํ•˜๋ฉด ํ›จ์”ฌ ์ž‘์•„์ง‘๋‹ˆ๋‹ค). E์˜ ์‹œ๊ฐ„ ๋‹จ๊ณ„ j์— ๋Œ€ํ•œ ์ž„๋ฒ ๋”ฉ์€ E1์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j \% l1\\)์˜ ์ž„๋ฒ ๋”ฉ๊ณผ E2์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j // l1\\)์˜ ์ž„๋ฒ ๋”ฉ์„ ์—ฐ๊ฒฐํ•˜์—ฌ ์–ป์Šต๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‘˜๋Ÿฌ๋ณด๊ธฐ [[quick-tour]] [[open-in-colab]] ๐Ÿค— Transformers๋ฅผ ์‹œ์ž‘ํ•ด๋ณด์„ธ์š”! ๊ฐœ๋ฐœํ•ด๋ณธ ์ ์ด ์—†๋”๋ผ๋„ ์‰ฝ๊ฒŒ ์ฝ์„ ์ˆ˜ ์žˆ๋„๋ก ์“ฐ์ธ ์ด ๊ธ€์€ [`pipeline`](./main_classes/pipelines)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๊ณ , ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ๊ณผ ์ „์ฒ˜๋ฆฌ๊ธฐ๋ฅผ [AutoClass](./model_doc/auto)๋กœ ๋กœ๋“œํ•˜๊ณ , PyTorch ๋˜๋Š” TensorFlow๋กœ ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ํ•™์Šต์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•์„ ์†Œ๊ฐœํ•ด ๋“œ๋ฆด ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ณธ ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœ๋˜๋Š” ๊ฐœ๋…์„ (ํŠนํžˆ ์ดˆ๋ณด์ž์˜ ๊ด€์ ์œผ๋กœ) ๋” ์นœ์ ˆํ•˜๊ฒŒ ์ ‘ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ํŠœํ† ๋ฆฌ์–ผ์ด๋‚˜ [์ฝ”์Šค](https://huggingface.co/course/chapter1/1)๋ฅผ ์ฐธ์กฐํ•˜๊ธฐ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash !pip install transformers datasets ``` ๋˜ํ•œ ์„ ํ˜ธํ•˜๋Š” ๋จธ์‹  ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## ํŒŒ์ดํ”„๋ผ์ธ [[pipeline]] <Youtube id="tiZFewofSLM"/> [`pipeline`](./main_classes/pipelines)์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๊ธฐ์— ๊ฐ€์žฅ ์‰ฝ๊ณ  ๋น ๋ฅธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. [`pipeline`]์€ ์—ฌ๋Ÿฌ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ ๋‹ค์–‘ํ•œ ๊ณผ์—…์„ ์‰ฝ๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์•„๋ž˜ ํ‘œ์— ํ‘œ์‹œ๋œ ๋ช‡ ๊ฐ€์ง€ ๊ณผ์—…์„ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
</Tip> | **ํƒœ์Šคํฌ** | **์„ค๋ช…** | **๋ชจ๋‹ฌ๋ฆฌํ‹ฐ** | **ํŒŒ์ดํ”„๋ผ์ธ ID** | |-----------------|----------------------------------------------------------------------|------------------|-----------------------------------------------| | ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ | ํ…์ŠคํŠธ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="sentiment-analysis") | | ํ…์ŠคํŠธ ์ƒ์„ฑ | ์ฃผ์–ด์ง„ ๋ฌธ์ž์—ด ์ž…๋ ฅ๊ณผ ์ด์–ด์ง€๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="text-generation") | | ๊ฐœ์ฒด๋ช… ์ธ์‹ | ๋ฌธ์ž์—ด์˜ ๊ฐ ํ† ํฐ๋งˆ๋‹ค ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ (์ธ๋ฌผ, ์กฐ์ง, ์žฅ์†Œ ๋“ฑ๋“ฑ) | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="ner") | | ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ๊ณผ ์งˆ๋ฌธ์— ๋”ฐ๋ผ ์˜ฌ๋ฐ”๋ฅธ ๋Œ€๋‹ตํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="question-answering") | | ๋นˆ์นธ ์ฑ„์šฐ๊ธฐ | ๋ฌธ์ž์—ด์˜ ๋นˆ์นธ์— ์•Œ๋งž์€ ํ† ํฐ ๋งž์ถ”๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="fill-mask") | | ์š”์•ฝ | ํ…์ŠคํŠธ๋‚˜ ๋ฌธ์„œ๋ฅผ ์š”์•ฝํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="summarization") | | ๋ฒˆ์—ญ | ํ…์ŠคํŠธ๋ฅผ ํ•œ ์–ธ์–ด์—์„œ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="translation") | | ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ | ์ด๋ฏธ์ง€์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-classification") | | ์ด๋ฏธ์ง€ ๋ถ„ํ•  | ์ด๋ฏธ์ง€์˜ ํ”ฝ์…€๋งˆ๋‹ค ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ(์‹œ๋งจํ‹ฑ, ํŒŒ๋†‰ํ‹ฑ ๋ฐ ์ธ์Šคํ„ด์Šค ๋ถ„ํ•  ํฌํ•จ) | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-segmentation") | | ๊ฐ์ฒด ํƒ์ง€ | ์ด๋ฏธ์ง€ ์† ๊ฐ์ฒด์˜ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ทธ๋ฆฌ๊ณ  ํด๋ž˜์Šค๋ฅผ ์˜ˆ์ธกํ•˜๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="object-detection") | | ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ | ์˜ค๋””์˜ค ํŒŒ์ผ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="audio-classification") | | ์ž๋™ ์Œ์„ฑ ์ธ์‹ | ์˜ค๋””์˜ค ํŒŒ์ผ ์† ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ฐ”๊พธ๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="automatic-speech-recognition") | | ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="vqa") | | ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ์„œ์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="document-question-answering") | | ์ด๋ฏธ์ง€ ์บก์…˜ ๋‹ฌ๊ธฐ | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์˜ ์บก์…˜ ์ƒ์„ฑํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="image-to-text") | ๋จผ์ € [`pipeline`]์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์‚ฌ์šฉํ•  ์ž‘์—…์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์ œ๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` [`pipeline`]์€ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ [์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ž๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์บ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ `classifier`๋ฅผ ๋Œ€์ƒ ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` ๋งŒ์•ฝ ์ž…๋ ฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ๋กœ [`pipeline`]์— ์ „๋‹ฌํ•˜์—ฌ, ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... 
print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` [`pipeline`]์€ ์ฃผ์–ด์ง„ ๊ณผ์—…์— ๊ด€๊ณ„์—†์ด ๋ฐ์ดํ„ฐ์…‹ ์ „๋ถ€๋ฅผ ์ˆœํšŒํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ๊ณผ์—…์œผ๋กœ ์„ ํƒํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. (์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— Datasets [์‹œ์ž‘ํ•˜๊ธฐ](https://huggingface.co/docs/datasets/quickstart#audio)์„ ์ฐธ์กฐํ•˜์„ธ์š”) ์—ฌ๊ธฐ์—์„œ๋Š” [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` ๋ฐ์ดํ„ฐ์…‹์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ๊ธฐ์กด ๋ชจ๋ธ์ธ [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h)์˜ ํ›ˆ๋ จ ๋‹น์‹œ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` `"audio"` ์—ด์„ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์™€์„œ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์ฒซ 4๊ฐœ ์ƒ˜ํ”Œ์—์„œ ์›์‹œ ์›จ์ด๋ธŒํผ ๋ฐฐ์—ด์„ ์ถ”์ถœํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ์— ๋ฆฌ์ŠคํŠธ๋กœ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT'] ``` ์Œ์„ฑ์ด๋‚˜ ๋น„์ „๊ณผ ๊ฐ™์ด ์ž…๋ ฅ์ด ํฐ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์…‹์˜ ๊ฒฝ์šฐ, ๋ชจ๋“  ์ž…๋ ฅ์„ ๋ฉ”๋ชจ๋ฆฌ์— ๋กœ๋“œํ•˜๋ ค๋ฉด ๋ฆฌ์ŠคํŠธ ๋Œ€์‹  ์ œ๋„ˆ๋ ˆ์ดํ„ฐ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ [[use-another-model-and-tokenizer-in-the-pipeline]] [`pipeline`]์€ [Hub](https://huggingface.co/models)์˜ ๋ชจ๋“  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, [`pipeline`]์„ ๋‹ค๋ฅธ ์šฉ๋„์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„  Hub์˜ ํƒœ๊ทธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์ ˆํ•œ ๋ชจ๋ธ์„ ํ•„ํ„ฐ๋งํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
ํ•„ํ„ฐ๋ง๋œ ๊ฒฐ๊ณผ์˜ ์ƒ์œ„ ํ•ญ๋ชฉ์œผ๋กœ๋Š” ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๊ตญ์–ด [BERT ๋ชจ๋ธ](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment)์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค: ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`AutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> [`TFAutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`TFAutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> [`pipeline`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์ •ํ•˜๋ฉด, ์ด์ œ `classifier`๋ฅผ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` ๋งˆ๋•…ํ•œ ๋ชจ๋ธ์„ ์ฐพ์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฏธ์„ธ์กฐ์ • ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋ฏธ์„ธ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](./training)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ์„ Hub์˜ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜์—ฌ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ฏผ์ฃผํ™”์— ๊ธฐ์—ฌํ•ด์ฃผ์„ธ์š”! ๐Ÿค— ## AutoClass [[autoclass]] <Youtube id="AhChOFRegn4"/> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`] ํด๋ž˜์Šค๋Š” ์œ„์—์„œ ๋‹ค๋ฃฌ [`pipeline`]์˜ ๊ธฐ๋Šฅ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. [AutoClass](./model_doc/auto)๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ด๋ฆ„์ด๋‚˜ ๊ฒฝ๋กœ์—์„œ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๋Š” '๋ฐ”๋กœ๊ฐ€๊ธฐ'์ž…๋‹ˆ๋‹ค. ๊ณผ์—…์— ์ ํ•ฉํ•œ `AutoClass`๋ฅผ ์„ ํƒํ•˜๊ณ  ํ•ด๋‹น ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์„ ํƒํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด์ „ ์„น์…˜์˜ ์˜ˆ์ œ๋กœ ๋Œ์•„๊ฐ€์„œ [`pipeline`]์˜ ๊ฒฐ๊ณผ๋ฅผ `AutoClass`๋ฅผ ํ™œ์šฉํ•ด ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### AutoTokenizer [[autotokenizer]] ํ† ํฌ๋‚˜์ด์ €๋Š” ํ…์ŠคํŠธ๋ฅผ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ˆซ์ž ๋ฐฐ์—ด ํ˜•ํƒœ๋กœ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ์—ญํ• ์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™” ๊ณผ์ •์—๋Š” ๋‹จ์–ด๋ฅผ ์–ด๋””์—์„œ ๋Š์„์ง€, ์–ด๋Š ์ˆ˜์ค€๊นŒ์ง€ ๋‚˜๋ˆŒ์ง€์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ๊ทœ์น™๋“ค์ด ์žˆ์Šต๋‹ˆ๋‹ค (ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ](./tokenizer_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”). ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ํ† ํฐํ™” ๊ทœ์น™์„ ์‚ฌ์šฉํ•˜๋„๋ก ๋™์ผํ•œ ๋ชจ๋ธ ์ด๋ฆ„์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด์•ผ ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
[`AutoTokenizer`]๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](./glossary#input-ids): ํ† ํฐ์˜ ์ˆซ์ž ํ‘œํ˜„. * [attention_mask](.glossary#attention-mask): ์–ด๋–ค ํ† ํฐ์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ๋„ ๋ฐ›์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•˜๊ณ  ์ž˜๋ผ๋‚ด์–ด ์ผ์ •ํ•œ ๊ธธ์ด์˜ ๋ฌถ์Œ์„ ๋ฐ˜ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> [์ „์ฒ˜๋ฆฌ](./preprocessing) ํŠœํ† ๋ฆฌ์–ผ์„ ์ฐธ์กฐํ•˜์‹œ๋ฉด ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…๊ณผ ํ•จ๊ป˜ ์ด๋ฏธ์ง€, ์˜ค๋””์˜ค์™€ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ [`AutoImageProcessor`]์™€ [`AutoFeatureExtractor`], [`AutoProcessor`]์˜ ์‚ฌ์šฉ๋ฐฉ๋ฒ•๋„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ### AutoModel [[automodel]] <frameworkcontent> <pt> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`AutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`AutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`AutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ `**`๋ฅผ ์•ž์— ๋ถ™์—ฌ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ํ’€์–ด์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> pt_outputs = pt_model(**pt_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`TFAutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`TFAutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`TFAutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ ๊ทธ๋Œ€๋กœ ํ…์„œ๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> tf_outputs = tf_model(tf_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> ๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ(PyTorch ๋˜๋Š” TensorFlow)์€ (softmax์™€ ๊ฐ™์€) ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ *์ด์ „์—* ํ…์„œ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ์†์‹ค ํ•จ์ˆ˜ ์ถœ๋ ฅ๊ณผ ๊ฒฐํ•ฉ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠน์ˆ˜ํ•œ ๋ฐ์ดํ„ฐ ํด๋ž˜์Šค์ด๋ฏ€๋กœ IDE์—์„œ ์ž๋™ ์™„์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠœํ”Œ์ด๋‚˜ ๋”•์…”๋„ˆ๋ฆฌ์ฒ˜๋Ÿผ ๋™์ž‘ํ•˜๋ฉฐ (์ •์ˆ˜, ์Šฌ๋ผ์ด์Šค ๋˜๋Š” ๋ฌธ์ž์—ด๋กœ ์ธ๋ฑ์‹ฑ ๊ฐ€๋Šฅ), None์ธ ์†์„ฑ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. </Tip> ### ๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ [[save-a-model]] <frameworkcontent> <pt> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`PreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`PreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`TFPreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> ๐Ÿค— Transformers์˜ ๋ฉ‹์ง„ ๊ธฐ๋Šฅ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ PyTorch ๋˜๋Š” TensorFlow ๋ชจ๋ธ๋กœ ์ €์žฅํ•ด๋’€๋‹ค๊ฐ€ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ์ ์ž…๋‹ˆ๋‹ค. 
`from_pt` ๋˜๋Š” `from_tf` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๊ธฐ [[custom-model-builds]] ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ตฌ์กฐ๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์€๋‹‰์ธต์ด๋‚˜ ์–ดํ…์…˜ ํ—ค๋“œ์˜ ์ˆ˜์™€ ๊ฐ™์€) ๋ชจ๋ธ์˜ ์†์„ฑ์€ ๊ตฌ์„ฑ์—์„œ ์ง€์ •๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ปค์Šคํ…€ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋กœ ๋ชจ๋ธ์„ ๋งŒ๋“ค๋ฉด ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์†์„ฑ์€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜๋ฏ€๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋จผ์ € ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € [`AutoConfig`]๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ˆ˜์ •ํ•˜๊ณ  ์‹ถ์€ ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜์„ธ์š”. [`AutoConfig.from_pretrained`] ๋‚ด๋ถ€์—์„œ (์–ดํ…์…˜ ํ—ค๋“œ ์ˆ˜์™€ ๊ฐ™์ด) ๋ณ€๊ฒฝํ•˜๋ ค๋Š” ์†์„ฑ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> [`AutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> [`TFAutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> ์ปค์Šคํ…€ ๊ตฌ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ปค์Šคํ…€ ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ](./create_a_model) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ## Trainer - PyTorch์— ์ตœ์ ํ™”๋œ ํ›ˆ๋ จ ๋ฃจํ”„ [[trainer-a-pytorch-optimized-training-loop]] ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)์ด๋ฏ€๋กœ ์ผ๋ฐ˜์ ์ธ ํ›ˆ๋ จ ๋ฃจํ”„์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๋Š” PyTorch๋ฅผ ์œ„ํ•œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค์—๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ ๋ฃจํ”„๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฉฐ ๋ถ„์‚ฐ ํ›ˆ๋ จ, ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋“ฑ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€๋กœ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ณผ์—…์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ [`Trainer`]์— ๋‹ค์Œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: 1. [`PreTrainedModel`] ๋˜๋Š” [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`]๋Š” ํ•™์Šต๋ฅ , ๋ฐฐ์น˜ ํฌ๊ธฐ, ํ›ˆ๋ จํ•  ์—ํฌํฌ ์ˆ˜์™€ ๊ฐ™์€ ๋ชจ๋ธ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ธ์ž๋ฅผ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด ๊ธฐ๋ณธ๊ฐ’์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. 
ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. ๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` ๊ทธ๋ฆฌ๊ณ  [`~datasets.Dataset.map`]๋กœ ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. [`DataCollatorWithPadding`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ํ‘œ๋ณธ ๋ฌถ์Œ์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` ์ด์ œ ์œ„์˜ ๋ชจ๋“  ํด๋ž˜์Šค๋ฅผ [`Trainer`]๋กœ ๋ชจ์œผ์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉด [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ๊ณผ ๊ฐ™์ด ์‹œํ€€์Šค-์‹œํ€€์Šค ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ณผ์—…์—๋Š” [`Seq2SeqTrainer`] ๋ฐ [`Seq2SeqTrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. </Tip> [`Trainer`] ๋‚ด์˜ ๋ฉ”์„œ๋“œ๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌ๋ฉด ์†์‹ค ํ•จ์ˆ˜, ์˜ตํ‹ฐ๋งˆ์ด์ €, ์Šค์ผ€์ค„๋Ÿฌ์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ ๋˜ํ•œ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ๊ฐ€๋Šฅํ•œ ๋ฉ”์†Œ๋“œ์— ๋Œ€ํ•ด์„œ๋Š” [`Trainer`] ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ˆ˜์ •ํ•˜๋Š” ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ [Callbacks](./main_classes/callbacks)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. Callbacks๋กœ ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ†ตํ•ฉํ•˜๊ณ , ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒดํฌํ•˜์—ฌ ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋ณด๊ณ ๋ฐ›๊ฑฐ๋‚˜, ํ›ˆ๋ จ์„ ์กฐ๊ธฐ์— ์ค‘๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Callbacks์€ ํ›ˆ๋ จ ๋ฃจํ”„ ์ž์ฒด๋ฅผ ๋ฐ”๊พธ์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. ์†์‹ค ํ•จ์ˆ˜์™€ ๊ฐ™์€ ๊ฒƒ์„ ๋ฐ”๊พธ๋ ค๋ฉด [`Trainer`]๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ## TensorFlow๋กœ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ [[train-with-tensorflow]] ๋ชจ๋“  ๋ชจ๋ธ์€ [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)์ด๋ฏ€๋กœ [Keras](https://keras.io/) API๋ฅผ ํ†ตํ•ด TensorFlow์—์„œ ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ฐ์ดํ„ฐ์…‹์„ ์‰ฝ๊ฒŒ `tf.data.Dataset` ํ˜•ํƒœ๋กœ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” [`~TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์—, Keras์˜ [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฐ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ”๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. [`TFPreTrainedModel`] ๋˜๋Š” [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ๊ฐ™์€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 3. 
๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. [`~datasets.Dataset.map`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํ† ํฐํ™” ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๊ณ , ๋ฐ์ดํ„ฐ์…‹๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ [`~TFPreTrainedModel.prepare_tf_dataset`]์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๋ณ€๊ฒฝํ•˜๊ฑฐ๋‚˜ ๋ฐ์ดํ„ฐ์…‹์„ ์„ž์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. ์ค€๋น„๋˜์—ˆ์œผ๋ฉด `compile` ๋ฐ `fit`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”. ๐Ÿค— Transformers์˜ ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ณผ์—…๊ณผ ๊ด€๋ จ๋œ ๊ธฐ๋ณธ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ``` ## ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? [[whats-next]] ๐Ÿค— Transformers ๋‘˜๋Ÿฌ๋ณด๊ธฐ๋ฅผ ๋ชจ๋‘ ์ฝ์œผ์…จ๋‹ค๋ฉด, ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด๊ณ  ๋” ๊ตฌ์ฒด์ ์ธ ๊ฒƒ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด์„ธ์š”. ์ด๋ฅผํ…Œ๋ฉด ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ๋ฒ•, ๊ณผ์—…์— ์•Œ๋งž๊ฒŒ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•, ์Šคํฌ๋ฆฝํŠธ๋กœ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ•ต์‹ฌ ๊ฐœ๋…์— ๋Œ€ํ•ด ๋” ์•Œ์•„๋ณด๋ ค๋ฉด ์ปคํ”ผ ํ•œ ์ž” ๋“ค๊ณ  ๊ฐœ๋… ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์Šคํฌ๋ฆฝํŠธ๋กœ ์‹คํ–‰ํ•˜๊ธฐ[[train-with-a-script]] ๐Ÿค— Transformers ๋…ธํŠธ๋ถ๊ณผ ํ•จ๊ป˜ [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), ๋˜๋Š” [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax)๋ฅผ ์‚ฌ์šฉํ•ด ํŠน์ • ํƒœ์Šคํฌ์— ๋Œ€ํ•œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ [์—ฐ๊ตฌ ํ”„๋กœ์ ํŠธ](https://github.com/huggingface/transformers/tree/main/examples/research_projects) ๋ฐ [๋ ˆ๊ฑฐ์‹œ ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples/legacy)์—์„œ ๋Œ€๋ถ€๋ถ„ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ œ๊ณตํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์ ๊ทน์ ์œผ๋กœ ์œ ์ง€ ๊ด€๋ฆฌ๋˜์ง€ ์•Š์œผ๋ฉฐ ์ตœ์‹  ๋ฒ„์ „์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ˜ธํ™˜๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํŠน์ • ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋“  ๋ฌธ์ œ์—์„œ ๋ฐ”๋กœ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋ฉฐ, ํ•ด๊ฒฐํ•˜๋ ค๋Š” ๋ฌธ์ œ์— ๋งž๊ฒŒ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋Œ€๋ถ€๋ถ„์˜ ์Šคํฌ๋ฆฝํŠธ์—๋Š” ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์ด ๋‚˜์™€์žˆ์–ด ํ•„์š”์— ๋”ฐ๋ผ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์— ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ์€ ๊ธฐ๋Šฅ์ด ์žˆ์œผ๋ฉด pull request๋ฅผ ์ œ์ถœํ•˜๊ธฐ ์ „์— [ํฌ๋Ÿผ](https://discuss.huggingface.co/) ๋˜๋Š” [์ด์Šˆ](https://github.com/huggingface/transformers/issues)์—์„œ ๋…ผ์˜ํ•ด ์ฃผ์„ธ์š”. ๋ฒ„๊ทธ ์ˆ˜์ •์€ ํ™˜์˜ํ•˜์ง€๋งŒ ๊ฐ€๋…์„ฑ์„ ํฌ์ƒํ•˜๋ฉด์„œ๊นŒ์ง€ ๋” ๋งŽ์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” pull request๋Š” ๋ณ‘ํ•ฉ(merge)ํ•˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) ๋ฐ [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization)์—์„œ ์š”์•ฝ ํ›ˆ๋ จํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์˜ˆ์ œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ํŠน๋ณ„ํ•œ ์„ค๋ช…์ด ์—†๋Š” ํ•œ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ## ์„ค์ •ํ•˜๊ธฐ[[setup]] ์ตœ์‹  ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ƒˆ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ **์†Œ์Šค๋กœ๋ถ€ํ„ฐ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜**ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` ์ด์ „ ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณด๋ ค๋ฉด ์•„๋ž˜ ํ† ๊ธ€์„ ํด๋ฆญํ•˜์„ธ์š”: <details> <summary>์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers ์˜ˆ์ œ</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณต์ œ(clone)ํ•ด์˜จ ๐Ÿค— Transformers ๋ฒ„์ „์„ ํŠน์ • ๋ฒ„์ „(์˜ˆ: v3.5.1)์œผ๋กœ ์ „ํ™˜ํ•˜์„ธ์š”: ```bash git checkout tags/v3.5.1 ``` ์˜ฌ๋ฐ”๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฒ„์ „์„ ์„ค์ •ํ•œ ํ›„ ์›ํ•˜๋Š” ์˜ˆ์ œ ํด๋”๋กœ ์ด๋™ํ•˜์—ฌ ์˜ˆ์ œ๋ณ„๋กœ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•œ ์š”๊ตฌ ์‚ฌํ•ญ(requirements)์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -r requirements.txt ``` ## ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script]] <frameworkcontent> <pt> ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋กœ ๋ถ„์‚ฐ ํ›ˆ๋ จํ•˜๊ธฐ[[distributed-training-and-mixed-precision]] [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) ํด๋ž˜์Šค๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ๊ณผ ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ์ง€์›ํ•˜๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ๋ชจ๋‘ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‘ ๊ฐ€์ง€๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - `fp16` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. - `nproc_per_node` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ์‚ฌ์šฉํ•  GPU ๊ฐœ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ```bash python -m torch.distributed.launch \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์œ„ํ•ด [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy)๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, ํ›ˆ๋ จ ์Šคํฌ๋ฆฝํŠธ์— ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ## TPU ์œ„์—์„œ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-on-a-tpu]] <frameworkcontent> <pt> Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 
PyTorch๋Š” [XLA](https://www.tensorflow.org/xla) ๋”ฅ๋Ÿฌ๋‹ ์ปดํŒŒ์ผ๋Ÿฌ์™€ ํ•จ๊ป˜ TPU๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค(์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://github.com/pytorch/xla/blob/master/README.md) ์ฐธ์กฐ).
TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `xla_spawn.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ณ  `num_cores` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉํ•˜๋ ค๋Š” TPU ์ฝ”์–ด ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.

```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” TPU๋ฅผ ํ›ˆ๋ จ์— ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy)๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค.
TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด TPU ๋ฆฌ์†Œ์Šค์˜ ์ด๋ฆ„์„ `tpu` ์ธ์ˆ˜์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.

```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## ๐Ÿค— Accelerate๋กœ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-with-accelerate]]

๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๋Š” PyTorch ํ›ˆ๋ จ ๊ณผ์ •์— ๋Œ€ํ•œ ์™„์ „ํ•œ ๊ฐ€์‹œ์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ์—ฌ๋Ÿฌ ์œ ํ˜•์˜ ์„ค์ •(CPU ์ „์šฉ, ๋‹ค์ค‘ GPU, TPU)์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ํ†ตํ•ฉ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” PyTorch ์ „์šฉ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค.
๐Ÿค— Accelerate๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

> ์ฐธ๊ณ : Accelerate๋Š” ๋น ๋ฅด๊ฒŒ ๊ฐœ๋ฐœ ์ค‘์ด๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด accelerate๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

```bash
pip install git+https://github.com/huggingface/accelerate
```

`run_summarization.py` ์Šคํฌ๋ฆฝํŠธ ๋Œ€์‹  `run_summarization_no_trainer.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
๐Ÿค— Accelerate ํด๋ž˜์Šค๊ฐ€ ์ง€์›๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋Š” ํด๋”์— `task_no_trainer.py` ํŒŒ์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค.
๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate config
```

์„ค์ •์„ ํ…Œ์ŠคํŠธํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌ์„ฑ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate test
```

์ด์ œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค:

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์‚ฌ์šฉํ•˜๊ธฐ[[use-a-custom-dataset]]

์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ CSV ๋˜๋Š” JSON ํŒŒ์ผ์ธ ๊ฒฝ์šฐ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค.
์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

- `train_file`๊ณผ `validation_file`์€ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ํŒŒ์ผ์˜ ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
- `text_column`์€ ์š”์•ฝํ•  ์ž…๋ ฅ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค.
- `summary_column`์€ ์ถœ๋ ฅํ•  ๋Œ€์ƒ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค.
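์˜ˆ๋ฅผ ๋“ค์–ด `text`์™€ `summary`๋ผ๋Š” ์—ด ์ด๋ฆ„์„ ์“ด๋‹ค๊ณ  ํ•˜๋ฉด(ํŒŒ์ผ ์ด๋ฆ„๊ณผ ์—ด ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ์ •ํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค), JSON Lines ํ˜•์‹์˜ ํ›ˆ๋ จ ํŒŒ์ผ์€ ๋Œ€๋žต ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
import json

# ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ์˜ˆ์‹œ ๋ฐ์ดํ„ฐ: ๊ฐ ์ค„์ด ํ•˜๋‚˜์˜ ์ƒ˜ํ”Œ(JSON ๊ฐ์ฒด)์ธ JSON Lines ํŒŒ์ผ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค
examples = [
    {"text": "๊ธด ๊ธฐ์‚ฌ ๋ณธ๋ฌธ...", "summary": "์งง์€ ์š”์•ฝ..."},
    {"text": "๋˜ ๋‹ค๋ฅธ ๊ธฐ์‚ฌ ๋ณธ๋ฌธ...", "summary": "๋˜ ๋‹ค๋ฅธ ์š”์•ฝ..."},
]

with open("train.json", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

์ด๋ ‡๊ฒŒ ๋งŒ๋“  ํŒŒ์ผ์€ `--train_file train.json --text_column text --summary_column summary`์ฒ˜๋Ÿผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.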
์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## ์Šคํฌ๋ฆฝํŠธ ํ…Œ์ŠคํŠธํ•˜๊ธฐ[[test-a-script]] ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋Œ€์ƒ์œผ๋กœ ํ›ˆ๋ จ์„ ์™„๋ฃŒํ•˜๋Š”๋ฐ ๊ฝค ์˜ค๋žœ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ž‘์€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ชจ๋“  ๊ฒƒ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํ–‰๋˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ตœ๋Œ€ ์ƒ˜ํ”Œ ์ˆ˜๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` ๋ชจ๋“  ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ `max_predict_samples` ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ด ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `-h` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ํ™•์ธํ•˜์„ธ์š”: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## ์ฒดํฌํฌ์ธํŠธ(checkpoint)์—์„œ ํ›ˆ๋ จ ์ด์–ด์„œ ํ•˜๊ธฐ[[resume-training-from-checkpoint]] ๋˜ ๋‹ค๋ฅธ ์œ ์šฉํ•œ ์˜ต์…˜์€ ์ด์ „ ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ›ˆ๋ จ์ด ์ค‘๋‹จ๋˜๋”๋ผ๋„ ์ฒ˜์Œ๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜์ง€ ์•Š๊ณ  ์ค‘๋‹จํ•œ ๋ถ€๋ถ„๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ๋‘ ๊ฐ€์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `output_dir previous_output_dir` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `output_dir`์— ์ €์žฅ๋œ ์ตœ์‹  ์ฒดํฌํฌ์ธํŠธ๋ถ€ํ„ฐ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ `overwrite_output_dir`์„ ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` ๋‘ ๋ฒˆ์งธ๋Š” `resume_from_checkpoint path_to_specific_checkpoint` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ์ฒดํฌํฌ์ธํŠธ ํด๋”์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-your-model]] ๋ชจ๋“  ์Šคํฌ๋ฆฝํŠธ๋Š” ์ตœ์ข… ๋ชจ๋ธ์„ [Model Hub](https://huggingface.co/models)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์‹œ์ž‘ํ•˜๊ธฐ ์ „์— Hugging Face์— ๋กœ๊ทธ์ธํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash huggingface-cli login ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ์— `push_to_hub` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” Hugging Face ์‚ฌ์šฉ์ž ์ด๋ฆ„๊ณผ `output_dir`์— ์ง€์ •๋œ ํด๋” ์ด๋ฆ„์œผ๋กœ ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŠน์ • ์ด๋ฆ„์„ ์ง€์ •ํ•˜๋ ค๋ฉด `push_to_hub_model_id` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์— ์ž๋™์œผ๋กœ ๋‚˜์—ด๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” ํŠน์ • ์ €์žฅ์†Œ ์ด๋ฆ„์œผ๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/troubleshooting.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# ๋ฌธ์ œ ํ•ด๊ฒฐ[[troubleshoot]]

๋•Œ๋•Œ๋กœ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ์ €ํฌ๊ฐ€ ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ๋Š” ํ˜„์žฌ๊นŒ์ง€ ํ™•์ธ๋œ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ ๋ช‡ ๊ฐ€์ง€์™€ ๊ทธ๊ฒƒ๋“ค์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๋‹ค๋ฃน๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ๊ฐ€์ด๋“œ๋Š” ๋ชจ๋“  ๐Ÿค— Transformers ๋ฌธ์ œ๋ฅผ ํฌ๊ด„์ ์œผ๋กœ ๋‹ค๋ฃจ๊ณ  ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฌธ์ œ ํ•ด๊ฒฐ์— ๋” ๋งŽ์€ ๋„์›€์„ ๋ฐ›์œผ๋ ค๋ฉด ๋‹ค์Œ์„ ์‹œ๋„ํ•ด๋ณด์„ธ์š”:

<Youtube id="S2EEG3JIt2A"/>

1. [ํฌ๋Ÿผ](https://discuss.huggingface.co/)์—์„œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. [Beginners](https://discuss.huggingface.co/c/beginners/5) ๋˜๋Š” [๐Ÿค— Transformers](https://discuss.huggingface.co/c/transformers/9)์™€ ๊ฐ™์€ ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์— ์งˆ๋ฌธ์„ ๊ฒŒ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ์™€ ํ•จ๊ป˜ ์ž˜ ์„œ์ˆ ๋œ ํฌ๋Ÿผ ๊ฒŒ์‹œ๋ฌผ์„ ์ž‘์„ฑํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜์„ธ์š”!

<Youtube id="_PAli-V4wj0"/>

2. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ด€๋ จ๋œ ๋ฒ„๊ทธ์ด๋ฉด ๐Ÿค— Transformers ์ €์žฅ์†Œ์—์„œ [์ด์Šˆ](https://github.com/huggingface/transformers/issues/new/choose)๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ๋ฒ„๊ทธ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๋Š” ์ •๋ณด๋ฅผ ๊ฐ€๋Šฅํ•œ ๋งŽ์ด ํฌํ•จํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์—ฌ, ๋ฌด์—‡์ด ์ž˜๋ชป ๋˜์—ˆ๋Š”์ง€์™€ ์–ด๋–ป๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๋” ์ž˜ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ์„ธ์š”.

3. ์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์ค‘์š”ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ฒ„์ „ ์‚ฌ์ด์— ๋„์ž…๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— [๋งˆ์ด๊ทธ๋ ˆ์ด์…˜](migration) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”.

๋ฌธ์ œ ํ•ด๊ฒฐ ๋ฐ ๋„์›€ ๋งค๋‰ด์–ผ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ Hugging Face ๊ฐ•์ขŒ์˜ [8์žฅ](https://huggingface.co/course/chapter8/1?fw=pt)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

## ๋ฐฉํ™”๋ฒฝ ํ™˜๊ฒฝ[[firewalled-environments]]

ํด๋ผ์šฐ๋“œ ๋ฐ ๋‚ด๋ถ€๋ง(intranet) ์„ค์ •์˜ ์ผ๋ถ€ GPU ์ธ์Šคํ„ด์Šค๋Š” ์™ธ๋ถ€ ์—ฐ๊ฒฐ์— ๋Œ€ํ•œ ๋ฐฉํ™”๋ฒฝ์œผ๋กœ ์ฐจ๋‹จ๋˜์–ด ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋‚˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๋ ค๊ณ  ํ•  ๋•Œ, ๋‹ค์šด๋กœ๋“œ๊ฐ€ ์ค‘๋‹จ๋˜๊ณ  ๋‹ค์Œ ๋ฉ”์‹œ์ง€์™€ ํ•จ๊ป˜ ์‹œ๊ฐ„ ์ดˆ๊ณผ๋ฉ๋‹ˆ๋‹ค:

```
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
```

์ด ๊ฒฝ์šฐ์—๋Š” ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๋ฅผ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Transformers๋ฅผ [์˜คํ”„๋ผ์ธ ๋ชจ๋“œ](installation#offline-mode)๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

## CUDA ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ(CUDA out of memory)[[cuda-out-of-memory]]

์ˆ˜๋ฐฑ๋งŒ ๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์ ์ ˆํ•œ ํ•˜๋“œ์›จ์–ด ์—†์ด ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```
CUDA out of memory.
Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```

๋‹ค์Œ์€ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ์‹œ๋„ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž ์žฌ์ ์ธ ํ•ด๊ฒฐ์ฑ…์ž…๋‹ˆ๋‹ค:

- [`TrainingArguments`]์˜ [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) ๊ฐ’์„ ์ค„์ด์„ธ์š”.
- [`TrainingArguments`]์˜ [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) ๊ฐ’์„ ๋Š˜๋ ค ์ „์ฒด ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ํšจ๊ณผ์ ์œผ๋กœ ๋Š˜๋ฆฌ์„ธ์š”.

<Tip>

๋ฉ”๋ชจ๋ฆฌ ์ ˆ์•ฝ ๊ธฐ์ˆ ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์„ฑ๋Šฅ [๊ฐ€์ด๋“œ](performance)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ์ €์žฅ๋œ TensorFlow ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค(Unable to load a saved TensorFlow model)[[unable-to-load-a-saved-tensorflow-model]]

TensorFlow์˜ [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) ๋ฉ”์†Œ๋“œ๋Š” ์•„ํ‚คํ…์ฒ˜, ๊ฐ€์ค‘์น˜, ํ›ˆ๋ จ ๊ตฌ์„ฑ ๋“ฑ ์ „์ฒด ๋ชจ๋ธ์„ ๋‹จ์ผ ํŒŒ์ผ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ ํŒŒ์ผ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์žˆ๋Š” ๋ชจ๋“  TensorFlow ๊ด€๋ จ ๊ฐ์ฒด๋ฅผ ๊ฐ€์ ธ์˜ค์ง€ ์•Š์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow ๋ชจ๋ธ ์ €์žฅ ๋ฐ ๊ฐ€์ ธ์˜ค๊ธฐ ๋ฌธ์ œ๋ฅผ ํ”ผํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค:

- ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ `h5` ํŒŒ์ผ ํ™•์žฅ์ž๋กœ [`model.save_weights`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model)๋กœ ์ €์žฅํ•œ ๋‹ค์Œ [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras

>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
```

- ๋ชจ๋ธ์„ [`~TFPreTrainedModel.save_pretrained`]๋กœ ์ €์žฅํ•˜๊ณ  [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFPreTrainedModel

>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
```

## ImportError[[importerror]]

ํŠนํžˆ ์ตœ์‹  ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ ๋งŒ๋‚  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” `ImportError`์ž…๋‹ˆ๋‹ค:

```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```

์ด๋Ÿฌํ•œ ์˜ค๋ฅ˜ ์œ ํ˜•์˜ ๊ฒฝ์šฐ ์ตœ์‹  ๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

```bash
pip install transformers --upgrade
```

## CUDA error: device-side assert triggered[[cuda-error-deviceside-assert-triggered]]

๋•Œ๋•Œ๋กœ ์žฅ์น˜(device) ์ฝ”๋“œ์˜ ์˜ค๋ฅ˜์™€ ๊ด€๋ จ๋œ ์ผ๋ฐ˜์ ์ธ CUDA ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```
RuntimeError: CUDA error: device-side assert triggered
```

๋” ์ž์„ธํ•œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์–ป์œผ๋ ค๋ฉด ์šฐ์„  ์ฝ”๋“œ๋ฅผ CPU์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ CPU๋กœ ์ „ํ™˜ํ•˜์„ธ์š”:

```py
>>> import os

>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

๋˜ ๋‹ค๋ฅธ ์˜ต์…˜์€ GPU์—์„œ ๋” ๋‚˜์€ ์—ญ์ถ”์ (traceback)์„ ์–ป๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—ญ์ถ”์ ์ด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•œ ์†Œ์Šค๋ฅผ ๊ฐ€๋ฆฌํ‚ค๋„๋ก ํ•˜์„ธ์š”: ```py >>> import os >>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1" ``` ## ํŒจ๋”ฉ ํ† ํฐ์ด ๋งˆ์Šคํ‚น๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ž˜๋ชป๋œ ์ถœ๋ ฅ(Incorrect output when padding tokens aren't masked)[[incorrect-output-when-padding-tokens-arent-masked]] ๊ฒฝ์šฐ์— ๋”ฐ๋ผ `input_ids`์— ํŒจ๋”ฉ ํ† ํฐ์ด ํฌํ•จ๋œ ๊ฒฝ์šฐ `hidden_state` ์ถœ๋ ฅ์ด ์˜ฌ๋ฐ”๋ฅด์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ๋ชจ๋ฅผ ์œ„ํ•ด ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ๋ชจ๋ธ์˜ `pad_token_id`์— ์•ก์„ธ์Šคํ•˜์—ฌ ํ•ด๋‹น ๊ฐ’์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ `pad_token_id`๊ฐ€ `None`์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์–ธ์ œ๋“ ์ง€ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForSequenceClassification >>> import torch >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") >>> model.config.pad_token_id 0 ``` ๋‹ค์Œ ์˜ˆ์ œ๋Š” ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์ง€ ์•Š์€ ์ถœ๋ ฅ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>) ``` ๋‹ค์Œ์€ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์‹ค์ œ ์ถœ๋ ฅ์ž…๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ชจ๋ธ์— `attention_mask`๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŒจ๋”ฉ ํ† ํฐ์„ ๋ฌด์‹œํ•ด์•ผ ์ด๋Ÿฌํ•œ ์กฐ์šฉํ•œ ์˜ค๋ฅ˜๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์ถœ๋ ฅ์ด ์‹ค์ œ ์ถœ๋ ฅ๊ณผ ์ผ์น˜ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์ผ๋ฐ˜์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋Š” ํŠน์ • ํ† ํฌ๋‚˜์ด์ €์˜ ๊ธฐ๋ณธ ๊ฐ’์„ ๊ธฐ์ค€์œผ๋กœ ์‚ฌ์šฉ์ž์— ๋Œ€ํ•œ 'attention_mask'๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. </Tip> ```py >>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]) >>> output = model(input_ids, attention_mask=attention_mask) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๐Ÿค— Transformers๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์ œ๊ณต๋œ ๊ฒฝ์šฐ ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜๊ธฐ ์œ„ํ•œ `attention_mask`๋ฅผ ์ž๋™์œผ๋กœ ์ƒ์„ฑํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ ์ด์œ ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ชจ๋ธ์—๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์—†์Šต๋‹ˆ๋‹ค. - ์ผ๋ถ€ ์‚ฌ์šฉ ์‚ฌ๋ก€์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์ด ํŒจ๋”ฉ ํ† ํฐ์„ ๊ด€๋ฆฌํ•˜๊ธฐ๋ฅผ ์›ํ•ฉ๋‹ˆ๋‹ค. ## ValueError: ์ด ์œ ํ˜•์˜ AutoModel์— ๋Œ€ํ•ด ์ธ์‹ํ•  ์ˆ˜ ์—†๋Š” XYZ ๊ตฌ์„ฑ ํด๋ž˜์Šค(ValueError: Unrecognized configuration class XYZ for this kind of AutoModel)[[valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel]] ์ผ๋ฐ˜์ ์œผ๋กœ, ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด [`AutoModel`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ์ด `ValueError`๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด, ์ด๋Š” Auto ํด๋ž˜์Šค๊ฐ€ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์˜ ๊ตฌ์„ฑ์—์„œ ๊ฐ€์ ธ์˜ค๋ ค๋Š” ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋งคํ•‘์„ ์ฐพ์„ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ํ”ํ•˜๊ฒŒ ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ๋Š” ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ฃผ์–ด์ง„ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์„ ๋•Œ์ž…๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ์งˆ์˜์‘๋‹ต์— ๋Œ€ํ•œ GPT2๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForQuestionAnswering >>> processor = AutoProcessor.from_pretrained("gpt2-medium") >>> model = AutoModelForQuestionAnswering.from_pretrained("gpt2-medium") ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ... ```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), [JAX](https://jax.readthedocs.io/en/latest/)๋ฅผ ์œ„ํ•œ ์ตœ์ฒจ๋‹จ ๋จธ์‹ ๋Ÿฌ๋‹ ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ํ•™์Šต๋œ ์ตœ์ฒจ๋‹จ ๋ชจ๋ธ๋“ค์„ ์‰ฝ๊ฒŒ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋Š” API์™€ ๋„๊ตฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์“ฐ๋ฉด ์ปดํ“จํŒ… ๋น„์šฉ๊ณผ ํƒ„์†Œ ๋ฐฐ์ถœ๋Ÿ‰์ด ์ค„๊ณ , ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐ ํ•„์š”ํ•œ ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €ํฌ ๋ชจ๋ธ๋“ค์€ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์˜ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿ“ **์ž์—ฐ์–ด ์ฒ˜๋ฆฌ**: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ๊ฐœ์ฒด๋ช… ์ธ์‹, ์งˆ์˜์‘๋‹ต, ์–ธ์–ด ๋ชจ๋ธ๋ง, ์š”์•ฝ, ๋ฒˆ์—ญ, ๊ฐ๊ด€์‹ ์งˆ์˜์‘๋‹ต, ํ…์ŠคํŠธ ์ƒ์„ฑ<br> ๐Ÿ–ผ๏ธ **์ปดํ“จํ„ฐ ๋น„์ „**: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜, ๊ฐ์ฒด ํƒ์ง€, ๊ฐ์ฒด ๋ถ„ํ• <br> ๐Ÿ—ฃ๏ธ **์˜ค๋””์˜ค**: ์ž๋™์Œ์„ฑ์ธ์‹, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜<br> ๐Ÿ™ **๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ**: ํ‘œ ์งˆ์˜์‘๋‹ต, ๊ด‘ํ•™ ๋ฌธ์ž ์ธ์‹ (OCR), ์Šค์บ”ํ•œ ๋ฌธ์„œ์—์„œ ์ •๋ณด ์ถ”์ถœ, ๋น„๋””์˜ค ๋ถ„๋ฅ˜, ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต ๐Ÿค— Transformers๋Š” PyTorch, TensorFlow์™€ JAX ๊ฐ„์˜ ์ƒํ˜ธ์šด์šฉ์„ฑ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•˜๊ฒŒ ๋ชจ๋ธ์˜ ๊ฐ ๋‹จ๊ณ„๋งˆ๋‹ค ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์ฝ”๋“œ 3์ค„๋งŒ ์จ์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚จ ๋‹ค์Œ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ ์ƒ์—์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์šด์˜ ํ™˜๊ฒฝ์— ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด ONNX๋‚˜ TorchScript ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ์ฐธ์—ฌํ•˜์‹œ๋ ค๋ฉด [Hub](https://huggingface.co/models), [ํฌ๋Ÿผ](https://discuss.huggingface.co/), [๋””์Šค์ฝ”๋“œ](https://discord.com/invite/JfAtkvEtRb)๋ฅผ ๋ฐฉ๋ฌธํ•ด์ฃผ์„ธ์š”! ## Hugging Face ํŒ€๊ณผ ์ง์ ‘ ๋Œ€ํ™”ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”?[[hugging-face-team]] <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## ์ฝ˜ํ…์ธ [[contents]] ์ €ํฌ ๊ธฐ์ˆ ๋ฌธ์„œ๋Š” ํฌ๊ฒŒ 5๊ฐœ ์„น์…˜์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - **์‹œ์ž‘ํ•˜๊ธฐ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ฐ„๋‹จํžˆ ํ›‘์–ด๋ณด๊ณ , ๋ณธ๊ฒฉ์ ์œผ๋กœ ๋›ฐ์–ด๋“ค ์ˆ˜ ์žˆ๊ฒŒ ์„ค์น˜ ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **ํŠœํ† ๋ฆฌ์–ผ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ต์ˆ™ํ•ด์งˆ ์ˆ˜ ์žˆ๋„๋ก ์ž์„ธํ•˜๊ณ ๋„ ์‰ฝ๊ฒŒ ๊ธฐ๋ณธ์ ์ธ ๋ถ€๋ถ„์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **How-to ๊ฐ€์ด๋“œ**์—์„œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‚˜, ์ง์ ‘ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๊ฐ™์ด ํŠน์ • ๋ชฉํ‘œ๋ฅผ ๋‹ฌ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. 
- **๊ฐœ๋… ๊ฐ€์ด๋“œ**์—์„œ ๐Ÿค— Transformers์˜ ์„ค๊ณ„ ์ฒ ํ•™๊ณผ ํ•จ๊ป˜ ๋ชจ๋ธ์ด๋‚˜ ํƒœ์Šคํฌ ๋’ค์— ์ˆจ๊ฒจ์ง„ ๊ฐœ๋…๋“ค๊ณผ ์•„์ด๋””์–ด๋ฅผ ํƒ๊ตฌํ•˜๊ณ  ์„ค๋ช…์„ ๋ง๋ถ™์ž…๋‹ˆ๋‹ค. - **API**์—์„œ ๋ชจ๋“  ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ฉ”์ธ ํด๋ž˜์Šค**์—์„œ configuration, model, tokenizer, pipeline๊ณผ ๊ฐ™์ด ์ œ์ผ ์ค‘์š”ํ•œ ํด๋ž˜์Šค๋“ค์„ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ชจ๋ธ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ตฌํ˜„๋œ ๊ฐ ๋ชจ๋ธ๊ณผ ์—ฐ๊ด€๋œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋‚ด๋ถ€ ์œ ํ‹ธ๋ฆฌํ‹ฐ**์—์„œ ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ์œ ํ‹ธ๋ฆฌํ‹ฐ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ### ์ง€์› ๋ชจ๋ธ[[supported-models]] <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from ร‰cole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. 
**[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. 
**[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. 
**ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. 
**[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. 
**[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. 
**[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. 
**[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. 
**[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. 
**[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. 
**[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. 
**[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. 
**[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### ์ง€์› ํ”„๋ ˆ์ž„์›Œํฌ[[supported-framework]] ์•„๋ž˜ ํ‘œ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ฐ ๋ชจ๋ธ์˜ ์ง€์› ํ˜„ํ™ฉ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฐํ™”๋ฅผ ํŒŒ์ด์ฌ (๋ณ„์นญ "slow") ๋˜๋Š” ๐Ÿค— Tokenizers (๋ณ„์นญ "fast") ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ•˜๋Š”์ง€; (Flax๋ฅผ ํ†ตํ•œ) Jax, PyTorch, TensorFlow ์ค‘ ์–ด๋–ค ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. <!--This table is updated automatically from the auto modules with _make fix-copies_. 
Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CLIPSeg | โŒ | โŒ | โœ… | โŒ | โŒ | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | Conditional DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Deformable DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DonutSwin | โŒ | โŒ | โœ… | โŒ | โŒ | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | ERNIE | โŒ | โŒ | โœ… | โŒ | โŒ | | ESM | โœ… | โŒ | โœ… | โœ… | โŒ | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT NeoX Japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GroupViT | โŒ | โŒ | โœ… | โœ… | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Jukebox | โœ… | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | LiLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MarkupLM | โœ… | โœ… | โœ… | โŒ | โŒ | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileViT | โŒ | โŒ | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… 
| โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | PEGASUS-X | โŒ | โŒ | โœ… | โŒ | โŒ | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoCBert | โœ… | โŒ | โœ… | โŒ | โŒ | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Table Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Time Series Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | ViTMSN | โŒ | โŒ | โœ… | โŒ | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | Whisper | โœ… | โŒ | โœ… | โœ… | โŒ | | X-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ[[export-to-tflite]] [TensorFlow Lite](https://www.tensorflow.org/lite/guide)๋Š” ์ž์›์ด ์ œํ•œ๋œ ํœด๋Œ€ํฐ, ์ž„๋ฒ ๋””๋“œ ์‹œ์Šคํ…œ, ์‚ฌ๋ฌผ์ธํ„ฐ๋„ท(IoT) ๊ธฐ๊ธฐ์—์„œ ๊ธฐ๊ณ„ํ•™์Šต ๋ชจ๋ธ์„ ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•œ ๊ฒฝ๋Ÿ‰ ํ”„๋ ˆ์ž„์›Œํฌ์ž…๋‹ˆ๋‹ค. TFLite๋Š” ์—ฐ์‚ฐ ๋Šฅ๋ ฅ, ๋ฉ”๋ชจ๋ฆฌ, ์ „๋ ฅ ์†Œ๋น„๊ฐ€ ์ œํ•œ๋œ ๊ธฐ๊ธฐ์—์„œ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ตœ์ ํ™”ํ•˜๊ณ  ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. TensorFlow Lite ๋ชจ๋ธ์€ `.tflite` ํŒŒ์ผ ํ™•์žฅ์ž๋กœ ์‹๋ณ„๋˜๋Š” ํŠน์ˆ˜ํ•˜๊ณ  ํšจ์œจ์ ์ธ ํœด๋Œ€์šฉ ํฌ๋งท์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ `exporters.tflite` ๋ชจ๋“ˆ๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ TFLite๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ง€์›๋˜๋Š” ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/tflite/overview)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋ชจ๋ธ์„ TFLite๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด, ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install optimum[exporters-tf] ``` ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ธ์ˆ˜๋ฅผ ํ™•์ธํ•˜๋ ค๋ฉด, [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model)๋ฅผ ์ฐธ๊ณ ํ•˜๊ฑฐ๋‚˜ ํ„ฐ๋ฏธ๋„์—์„œ ๋„์›€๋ง์„ ์‚ดํŽด๋ณด์„ธ์š”: ```bash optimum-cli export tflite --help ``` ์˜ˆ๋ฅผ ๋“ค์–ด ๐Ÿค— Hub์—์„œ์˜ `bert-base-uncased` ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/ ``` ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋กœ๊ทธ์™€ ๊ฒฐ๊ณผ๋ฌผ์ธ `model.tflite`๊ฐ€ ์ €์žฅ๋œ ์œ„์น˜๋ฅผ ๋ณด์—ฌ์ฃผ๋Š” ๋กœ๊ทธ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```bash Validating TFLite model... -[โœ“] TFLite model output names match reference model (logits) - Validating TFLite Model output "logits": -[โœ“] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite ``` ์œ„ ์˜ˆ์ œ๋Š” ๐Ÿค— Hub์—์„œ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ธ๋‹ค๋ฉด, ๋จผ์ € ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์ด ๋ชจ๋‘ ๊ฐ™์€ ๋””๋ ‰ํ„ฐ๋ฆฌ( `local_path` )์— ์ €์žฅ๋๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. CLI๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ, ๐Ÿค— Hub์—์„œ์˜ ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„ ๋Œ€์‹  `model` ์ธ์ˆ˜์— `local_path`๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.
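์˜ˆ๋ฅผ ๋“ค์–ด ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋Š” ๊ฒฝ์šฐ ๋ช…๋ น์€ ๋Œ€๋žต ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•ํƒœ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์˜ `./local_path` ๋””๋ ‰ํ„ฐ๋ฆฌ ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•ด ๊ฐ€์ •ํ•œ ์˜ˆ์‹œ์ผ ๋ฟ์ด๋ฉฐ, ์‹ค์ œ ๋ชจ๋ธ์ด ์ €์žฅ๋œ ๊ฒฝ๋กœ๋กœ ๋ฐ”๊ฟ”์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```bash
# ๊ฐ€์ •: ./local_path ์—๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์ด ํ•จ๊ป˜ ์ €์žฅ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค
optimum-cli export tflite --model ./local_path --sequence_length 128 bert_tflite/
```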
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…Œ์ŠคํŠธ[[testing]] ๋จผ์ € ๐Ÿค— Transformers ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ํ…Œ์ŠคํŠธ๋˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ณ , ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๋ฅผ ์ž‘์„ฑ ๋ฐ ๊ธฐ์กด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค. ์ด ์ €์žฅ์†Œ์—๋Š” 2๊ฐœ์˜ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. `tests` - ์ผ๋ฐ˜ API์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ 2. `examples` - API์˜ ์ผ๋ถ€๊ฐ€ ์•„๋‹Œ ๋‹ค์–‘ํ•œ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ ## Transformers ํ…Œ์ŠคํŠธ ๋ฐฉ๋ฒ•[[how-transformers-are-tested]] 1. PR์ด ์ œ์ถœ๋˜๋ฉด 9๊ฐœ์˜ CircleCi ์ž‘์—…์œผ๋กœ ํ…Œ์ŠคํŠธ๊ฐ€ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น PR์— ๋Œ€ํ•ด ์ƒˆ๋กœ์šด ์ปค๋ฐ‹์ด ์ƒ์„ฑ๋  ๋•Œ๋งˆ๋‹ค ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์‹œ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ์ด ์ž‘์—…๋“ค์€ ์ด [config ํŒŒ์ผ](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml)์— ์ •์˜๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ•„์š”ํ•˜๋‹ค๋ฉด ์‚ฌ์šฉ์ž์˜ ๋กœ์ปฌ ํ™˜๊ฒฝ์—์„œ ๋™์ผํ•˜๊ฒŒ ์žฌํ˜„ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด CI ์ž‘์—…์€ `@slow` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. [github actions](https://github.com/huggingface/transformers/actions)์— ์˜ํ•ด ์‹คํ–‰๋˜๋Š” ์ž‘์—…์€ 3๊ฐœ์ž…๋‹ˆ๋‹ค: - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): torch hub integration์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): `main` ๋ธŒ๋žœ์น˜์—์„œ ์ปค๋ฐ‹์ด ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ GPU๋ฅผ ์ด์šฉํ•œ ๋น ๋ฅธ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” `src`, `tests`, `.github` ํด๋” ์ค‘ ํ•˜๋‚˜์— ์ฝ”๋“œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. (model card, notebook, ๊ธฐํƒ€ ๋“ฑ๋“ฑ์„ ์ถ”๊ฐ€ํ•œ ๊ฒฝ์šฐ ์‹คํ–‰๋˜์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ ์œ„ํ•ด์„œ์ž…๋‹ˆ๋‹ค) - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): `tests` ๋ฐ `examples`์—์„œ GPU๋ฅผ ์ด์šฉํ•œ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ, ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```bash RUN_SLOW=1 pytest tests/ RUN_SLOW=1 pytest examples/ ``` ๊ฒฐ๊ณผ๋Š” [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/actions)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ…Œ์ŠคํŠธ ์‹คํ–‰[[running-tests]] ### ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ์„ ํƒ[[choosing-which-tests-to-run]] ์ด ๋ฌธ์„œ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋‚ด์šฉ์„ ์ฝ์€ ํ›„์—๋„, ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/usage.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ€์žฅ ์œ ์šฉํ•œ ํ…Œ์ŠคํŠธ ์‹คํ–‰ ๋ฐฉ๋ฒ• ๋ช‡ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. 
๋ชจ๋‘ ์‹คํ–‰: ```console pytest ``` ๋˜๋Š”: ```bash make test ``` ํ›„์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋ฉ๋‹ˆ๋‹ค: ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/ ``` ์œ„์˜ ๋ช…๋ น์–ด๋Š” pytest์—๊ฒŒ ์•„๋ž˜์˜ ๋‚ด์šฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ CPU ์ฝ”์–ด ์ˆ˜๋งŒํผ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. (RAM์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค ์ˆ˜๊ฐ€ ๋„ˆ๋ฌด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!) - ๋™์ผํ•œ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋Š” ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค์—์„œ ์‹คํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ์ถœ๋ ฅ์„ ์บก์ฒ˜ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. - ์ž์„ธํ•œ ๋ชจ๋“œ๋กœ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก ๊ฐ€์ ธ์˜ค๊ธฐ[[getting-the-list-of-all-tests]] ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest --collect-only -q ``` ์ง€์ •๋œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest tests/test_optimization.py --collect-only -q ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰[[run-a-specific-test-module]] ๊ฐœ๋ณ„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰ํ•˜๊ธฐ: ```bash pytest tests/test_logging.py ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-specific-tests]] ๋Œ€๋ถ€๋ถ„์˜ ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ๋Š” unittest๊ฐ€ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ํฌํ•จํ•˜๋Š” unittest ํด๋ž˜์Šค์˜ ์ด๋ฆ„์„ ์•Œ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest::test_adam_w ``` ์œ„์˜ ๋ช…๋ น์–ด์˜ ์˜๋ฏธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `tests/test_optimization.py` - ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ๋Š” ํŒŒ์ผ - `OptimizationTest` - ํด๋ž˜์Šค์˜ ์ด๋ฆ„ - `test_adam_w` - ํŠน์ • ํ…Œ์ŠคํŠธ ํ•จ์ˆ˜์˜ ์ด๋ฆ„ ํŒŒ์ผ์— ์—ฌ๋Ÿฌ ํด๋ž˜์Šค๊ฐ€ ํฌํ•จ๋œ ๊ฒฝ์šฐ, ํŠน์ • ํด๋ž˜์Šค์˜ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest ``` ์ด ๋ช…๋ น์–ด๋Š” ํ•ด๋‹น ํด๋ž˜์Šค ๋‚ด๋ถ€์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `OptimizationTest` ํด๋ž˜์Šค์— ํฌํ•จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pytest tests/test_optimization.py::OptimizationTest --collect-only -q ``` ํ‚ค์›Œ๋“œ ํ‘œํ˜„์‹์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k adam tests/test_optimization.py ``` ๋…ผ๋ฆฌ ์—ฐ์‚ฐ์ž `and`์™€ `or`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ํ‚ค์›Œ๋“œ๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋˜๋Š” ์–ด๋Š ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `not`์€ ๋ถ€์ •ํ•  ๋•Œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜์ง€ ์•Š๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k "not adam" tests/test_optimization.py ``` ๋‘ ๊ฐ€์ง€ ํŒจํ„ด์„ ํ•˜๋‚˜๋กœ ๊ฒฐํ•ฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "ada and not adam" tests/test_optimization.py ``` ์˜ˆ๋ฅผ ๋“ค์–ด `test_adafactor`์™€ `test_adam_w`๋ฅผ ๋ชจ๋‘ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py ``` ์—ฌ๊ธฐ์„œ `or`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์— ์œ ์˜ํ•˜์„ธ์š”. ๋‘ ํ‚ค์›Œ๋“œ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•œ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋‘ ํŒจํ„ด์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด, `and`๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pytest -k "test and ada" tests/test_optimization.py ``` ### `accelerate` ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-`accelerate`-tests]] ๋ชจ๋ธ์—์„œ `accelerate` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ด์•ผ ํ•  ๋•Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ช…๋ น์–ด์— `-m accelerate_tests`๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `OPT`์—์„œ ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ``` ### ๋ฌธ์„œ ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-documentation-tests]] ์˜ˆ์‹œ ๋ฌธ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `doctests`๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [`WhisperModel.forward`'s docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035)๋ฅผ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค: ```python r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" ``` ์›ํ•˜๋Š” ํŒŒ์ผ์˜ ๋ชจ๋“  docstring ์˜ˆ์ œ๋ฅผ ์ž๋™์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pytest --doctest-modules <path_to_file_or_dir> ``` ํŒŒ์ผ์˜ ํ™•์žฅ์ž๊ฐ€ markdown์ธ ๊ฒฝ์šฐ `--doctest-glob="*.md"` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ### ์ˆ˜์ •๋œ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰[[run-only-modified-tests]] ์ˆ˜์ •๋œ ํŒŒ์ผ ๋˜๋Š” ํ˜„์žฌ ๋ธŒ๋žœ์น˜ (Git ๊ธฐ์ค€)์™€ ๊ด€๋ จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด [pytest-picked](https://github.com/anapaulagomes/pytest-picked)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ณ€๊ฒฝํ•œ ๋‚ด์šฉ์ด ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์•˜๋Š”์ง€ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash pip install pytest-picked ``` ```bash pytest --picked ``` ์ˆ˜์ •๋˜์—ˆ์ง€๋งŒ, ์•„์ง ์ปค๋ฐ‹๋˜์ง€ ์•Š์€ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ํด๋”์—์„œ ํ…Œ์ŠคํŠธ๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ### ์†Œ์Šค ์ˆ˜์ • ์‹œ ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ž๋™ ์žฌ์‹คํ–‰[[automatically-rerun-failed-tests-on-source-modification]] [pytest-xdist](https://github.com/pytest-dev/pytest-xdist)๋Š” ๋ชจ๋“  ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•˜๊ณ , ํŒŒ์ผ์„ ์ˆ˜์ •ํ•œ ํ›„์— ํŒŒ์ผ์„ ๊ณ„์† ์žฌ์‹คํ–‰ํ•˜์—ฌ ํ…Œ์ŠคํŠธ๊ฐ€ ์„ฑ๊ณตํ•  ๋•Œ๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๋งค์šฐ ์œ ์šฉํ•œ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ˆ˜์ •ํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•œ ํ›„ pytest๋ฅผ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋  ๋•Œ๊นŒ์ง€ ์ด ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•œ ํ›„ ๋‹ค์‹œ ์ „์ฒด ์‹คํ–‰์ด ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ```bash pip install pytest-xdist ``` ์žฌ๊ท€์  ๋ชจ๋“œ์˜ ์‚ฌ์šฉ: `pytest -f` ๋˜๋Š” `pytest --looponfail` ํŒŒ์ผ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ `looponfailroots` ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์™€ ํ•ด๋‹น ๋‚ด์šฉ์„ (์žฌ๊ท€์ ์œผ๋กœ) ํ™•์ธํ•˜์—ฌ ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ’์˜ ๊ธฐ๋ณธ๊ฐ’์ด ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, `setup.cfg`์˜ ์„ค์ • ์˜ต์…˜์„ ๋ณ€๊ฒฝํ•˜์—ฌ ํ”„๋กœ์ ํŠธ์—์„œ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```ini [tool:pytest] looponfailroots = transformers tests ``` ๋˜๋Š” `pytest.ini`/``tox.ini`` ํŒŒ์ผ: ```ini [pytest] looponfailroots = transformers tests ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ini-file์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๊ธฐ์ค€์œผ๋กœ ์ƒ๋Œ€์ ์œผ๋กœ ์ง€์ •๋œ ๊ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํŒŒ์ผ ๋ณ€๊ฒฝ ์‚ฌํ•ญ๋งŒ ์ฐพ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. 
์ด ๊ธฐ๋Šฅ์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋Š” ๊ตฌํ˜„ ๋ฐฉ๋ฒ•์ธ [pytest-watch](https://github.com/joeyespo/pytest-watch)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skip-a-test-module]] ๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์„ ์‹คํ–‰ํ•˜๋˜ ํŠน์ • ๋ชจ๋“ˆ์„ ์ œ์™ธํ•˜๋ ค๋ฉด, ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `test_modeling_*.py` ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest *ls -1 tests/*py | grep -v test_modeling* ``` ### ์ƒํƒœ ์ดˆ๊ธฐํ™”[[clearing state]] CI ๋นŒ๋“œ ๋ฐ (์†๋„์— ๋Œ€ํ•œ) ๊ฒฉ๋ฆฌ๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ, ์บ์‹œ๋ฅผ ์ง€์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pytest --cache-clear tests ``` ### ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰[[running-tests-in-parallel]] ์ด์ „์— ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `make test`๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `pytest-xdist` ํ”Œ๋Ÿฌ๊ทธ์ธ(`-n X` ์ธ์ˆ˜, ์˜ˆ๋ฅผ ๋“ค์–ด `-n 2`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 2๊ฐœ์˜ ๋ณ‘๋ ฌ ์ž‘์—… ์‹คํ–‰)์„ ํ†ตํ•ด ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. `pytest-xdist`์˜ `--dist=` ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์–ด๋–ป๊ฒŒ ๊ทธ๋ฃนํ™”ํ• ์ง€ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `--dist=loadfile`์€ ํ•˜๋‚˜์˜ ํŒŒ์ผ์— ์žˆ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๋กœ ๊ทธ๋ฃนํ™”ํ•ฉ๋‹ˆ๋‹ค. ์‹คํ–‰๋œ ํ…Œ์ŠคํŠธ์˜ ์ˆœ์„œ๊ฐ€ ๋‹ค๋ฅด๊ณ  ์˜ˆ์ธกํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์—, `pytest-xdist`๋กœ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ์‹คํŒจ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (๊ฒ€์ถœ๋˜์ง€ ์•Š์€ ๊ฒฐํ•ฉ๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ). ์ด ๊ฒฝ์šฐ [pytest-replay](https://github.com/ESSS/pytest-replay)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋™์ผํ•œ ์ˆœ์„œ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์‹œ ์‹คํ–‰ํ•ด์„œ ์‹คํŒจํ•˜๋Š” ์‹œํ€€์Šค๋ฅผ ์ตœ์†Œํ™”ํ•˜๋Š” ๋ฐ์— ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ### ํ…Œ์ŠคํŠธ ์ˆœ์„œ์™€ ๋ฐ˜๋ณต[[test-order-and-repetition]] ์ž ์žฌ์ ์ธ ์ข…์†์„ฑ ๋ฐ ์ƒํƒœ ๊ด€๋ จ ๋ฒ„๊ทธ(tear down)๋ฅผ ๊ฐ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ํ…Œ์ŠคํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ, ์—ฐ์†์œผ๋กœ, ๋ฌด์ž‘์œ„๋กœ ๋˜๋Š” ์„ธํŠธ๋กœ ๋ฐ˜๋ณตํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ง์ ‘์ ์ธ ์—ฌ๋Ÿฌ ๋ฒˆ์˜ ๋ฐ˜๋ณต์€ DL์˜ ๋ฌด์ž‘์œ„์„ฑ์— ์˜ํ•ด ๋ฐœ๊ฒฌ๋˜๋Š” ์ผ๋ถ€ ๋ฌธ์ œ๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋ฐ์—๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. #### ํ…Œ์ŠคํŠธ๋ฅผ ๋ฐ˜๋ณต[[repeat-tests]] - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder ``` ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ๊ฐ’์€ 50๋ฒˆ): ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py ``` <Tip> ์ด ํ”Œ๋Ÿฌ๊ทธ์ธ์€ `pytest-xdist`์˜ `-n` ํ”Œ๋ž˜๊ทธ์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> <Tip> `pytest-repeat`๋ผ๋Š” ๋˜ ๋‹ค๋ฅธ ํ”Œ๋Ÿฌ๊ทธ์ธ๋„ ์žˆ์ง€๋งŒ `unittest`์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> #### ํ…Œ์ŠคํŠธ๋ฅผ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์‹คํ–‰[[run-tests-in-a-random-order]] ```bash pip install pytest-random-order ``` ์ค‘์š”: `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž๋™์œผ๋กœ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์„ž์ž…๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ๋ณ€๊ฒฝ์ด๋‚˜ ์ปค๋งจ๋“œ ๋ผ์ธ ์˜ต์…˜์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์•ž์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์ด๋ฅผ ํ†ตํ•ด ํ•œ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ๊ฐ€ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๊ฒฐํ•ฉ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ•ด๋‹น ์„ธ์…˜์—์„œ ์‚ฌ์šฉ๋œ ๋žœ๋ค ์‹œ๋“œ๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉฐ ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` ๋”ฐ๋ผ์„œ ํŠน์ • ์‹œํ€€์Šค๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ •ํ™•ํ•œ ์‹œ๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-seed=573663 [...] 
Using --random-order-bucket=module
Using --random-order-seed=573663
```

์ •ํ™•ํžˆ ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ๋ชฉ๋ก(๋˜๋Š” ๋ชฉ๋ก์ด ์—†์Œ)์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋งŒ ์ •ํ™•ํ•œ ์ˆœ์„œ๋ฅผ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ๋ชฉ๋ก์„ ์ˆ˜๋™์œผ๋กœ ์ขํžˆ๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ๋” ์ด์ƒ ์‹œ๋“œ์— ์˜์กดํ•  ์ˆ˜ ์—†๊ณ  ์‹คํŒจํ–ˆ๋˜ ์ •ํ™•ํ•œ ์ˆœ์„œ๋กœ ์ˆ˜๋™์œผ๋กœ ๋ชฉ๋ก์„ ๋‚˜์—ดํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  `--random-order-bucket=none`์„ ์‚ฌ์šฉํ•˜์—ฌ pytest์—๊ฒŒ ์ˆœ์„œ๋ฅผ ์ž„์˜๋กœ ์„ค์ •ํ•˜์ง€ ์•Š๋„๋ก ์•Œ๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```bash
pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py
```

๋ชจ๋“  ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด ์„ž๊ธฐ๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```bash
pytest --random-order-bucket=none
```

๊ธฐ๋ณธ์ ์œผ๋กœ `--random-order-bucket=module`์ด ๋‚ด์žฌ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋“ˆ ์ˆ˜์ค€์—์„œ ํŒŒ์ผ์„ ์„ž์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ `class`, `package`, `global` ๋ฐ `none` ์ˆ˜์ค€์—์„œ๋„ ์„ž์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ•ด๋‹น [๋ฌธ์„œ](https://github.com/jbasko/pytest-random-order)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

๋˜ ๋‹ค๋ฅธ ๋ฌด์ž‘์œ„ํ™”์˜ ๋Œ€์•ˆ์€ [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly)์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๋งค์šฐ ์œ ์‚ฌํ•œ ๊ธฐ๋Šฅ/์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ, `pytest-random-order`์— ์žˆ๋Š” ๋ฒ„ํ‚ท ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์„ค์น˜ ํ›„์—๋Š” ์ž๋™์œผ๋กœ ์ ์šฉ๋˜๋Š” ๋ฌธ์ œ๋„ ๋™์ผํ•˜๊ฒŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค.

### ์™ธ๊ด€๊ณผ ๋Š๋‚Œ์„ ๋ณ€๊ฒฝ[[look-and-feel-variations]]

#### pytest-sugar ์‚ฌ์šฉ[[pytest-sugar]]

[pytest-sugar](https://github.com/Frozenball/pytest-sugar)๋Š” ํ…Œ์ŠคํŠธ๊ฐ€ ๋ณด์—ฌ์ง€๋Š” ํ˜•ํƒœ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ , ์ง„ํ–‰ ์ƒํ™ฉ ๋ฐ”๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉฐ, ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ์™€ ๊ฒ€์ฆ์„ ์ฆ‰์‹œ ํ‘œ์‹œํ•˜๋Š” ํ”Œ๋Ÿฌ๊ทธ์ธ์ž…๋‹ˆ๋‹ค. ์„ค์น˜ํ•˜๋ฉด ์ž๋™์œผ๋กœ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค.

```bash
pip install pytest-sugar
```

pytest-sugar ์—†์ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```bash
pytest -p no:sugar
```

๋˜๋Š” ์ œ๊ฑฐํ•˜์„ธ์š”.

#### ๊ฐ ํ•˜์œ„ ํ…Œ์ŠคํŠธ ์ด๋ฆ„๊ณผ ์ง„ํ–‰ ์ƒํ™ฉ ๋ณด๊ณ [[report-each-sub-test-name-and-its-progress]]

`pytest`๋ฅผ ํ†ตํ•ด ๋‹จ์ผ ๋˜๋Š” ๊ทธ๋ฃน์˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ(`pip install pytest-pspec` ์ดํ›„):

```bash
pytest --pspec tests/test_optimization.py
```

#### ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ฆ‰์‹œ ํ‘œ์‹œ[[instantly-shows-failed-tests]]

[pytest-instafail](https://github.com/pytest-dev/pytest-instafail)์€ ํ…Œ์ŠคํŠธ ์„ธ์…˜์˜ ๋๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ์ง€ ์•Š๊ณ  ์‹คํŒจ ๋ฐ ์˜ค๋ฅ˜๋ฅผ ์ฆ‰์‹œ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค.

```bash
pip install pytest-instafail
```

```bash
pytest --instafail
```

### GPU ์‚ฌ์šฉ ์—ฌ๋ถ€[[to-GPU-or-not-to-GPU]]

GPU๊ฐ€ ํ™œ์„ฑํ™”๋œ ํ™˜๊ฒฝ์—์„œ, CPU ์ „์šฉ ๋ชจ๋“œ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `CUDA_VISIBLE_DEVICES=""`๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค:

```bash
CUDA_VISIBLE_DEVICES="" pytest tests/test_logging.py
```

๋˜๋Š” ๋‹ค์ค‘ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ `pytest`์—์„œ ์‚ฌ์šฉํ•  GPU๋ฅผ ์ง€์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, GPU `0` ๋ฐ `1`์ด ์žˆ๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
CUDA_VISIBLE_DEVICES="1" pytest tests/test_logging.py
```

์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹ค๋ฅธ GPU์—์„œ ๋‹ค๋ฅธ ์ž‘์—…์„ ์‹คํ–‰ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค.

์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ๋ฐ˜๋“œ์‹œ CPU ์ „์šฉ์œผ๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋ฉฐ, ์ผ๋ถ€๋Š” CPU ๋˜๋Š” GPU ๋˜๋Š” TPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•˜๊ณ , ์ผ๋ถ€๋Š” ์—ฌ๋Ÿฌ GPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ์˜ ์š”๊ตฌ ์‚ฌํ•ญ์„ CPU/GPU/TPU๋ณ„๋กœ ์„ค์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค:

- `require_torch` - ์ด ํ…Œ์ŠคํŠธ๋Š” torch์—์„œ๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค.
- `require_torch_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_non_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ ๋˜๋Š” 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_up_to_2_gpus` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ, 1๊ฐœ ๋˜๋Š” 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
- `require_torch_tpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ TPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

GPU ์š”๊ตฌ ์‚ฌํ•ญ์„ ํ‘œ๋กœ ์ •๋ฆฌํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค:

| n gpus | decorator                      |
|--------+--------------------------------|
| `>= 0` | `@require_torch`               |
| `>= 1` | `@require_torch_gpu`           |
| `>= 2` | `@require_torch_multi_gpu`     |
| `< 2`  | `@require_torch_non_multi_gpu` |
| `< 3`  | `@require_torch_up_to_2_gpus`  |

์˜ˆ๋ฅผ ๋“ค์–ด, 2๊ฐœ ์ด์ƒ์˜ GPU๊ฐ€ ์žˆ๊ณ  pytorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์„ ๋•Œ์—๋งŒ ์‹คํ–‰๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
```

`tensorflow`๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ `require_tf` ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

์ด๋Ÿฌํ•œ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์ค‘์ฒฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์ง„ํ–‰๋˜๊ณ  pytorch์—์„œ ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```

`@parametrized`์™€ ๊ฐ™์€ ์ผ๋ถ€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— `@require_*` ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋ ค๋ฉด ํ•ญ์ƒ ๋งจ ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```

`@pytest.mark.parametrize`์—๋Š” ์ด๋Ÿฌํ•œ ์ˆœ์„œ ๋ฌธ์ œ๋Š” ์—†์œผ๋ฏ€๋กœ ์ฒ˜์Œ ํ˜น์€ ๋งˆ์ง€๋ง‰์— ์œ„์น˜์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ณ  ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ์—๋„ ์ž˜ ์ž‘๋™ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ unittest๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋งŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค.

ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

- ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU ์ˆ˜:

```python
from transformers.testing_utils import get_gpu_count

n_gpu = get_gpu_count()  # torch์™€ tf์™€ ํ•จ๊ป˜ ์ž‘๋™
```

### ๋ถ„์‚ฐ ํ›ˆ๋ จ[[distributed-training]]

`pytest`๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์ง์ ‘์ ์œผ๋กœ ๋‹ค๋ฃจ์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์‹œ๋„ํ•˜๋ฉด ํ•˜์œ„ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์ง€ ์•Š๊ณ  `pytest`๋ผ๊ณ  ์ƒ๊ฐํ•˜๊ธฐ์— ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๋ฅผ ๋ฐ˜๋ณตํ•ด์„œ ์‹คํ–‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ผ๋ฐ˜ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ƒ์„ฑํ•œ ๋‹ค์Œ ์—ฌ๋Ÿฌ ์›Œ์ปค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  IO ํŒŒ์ดํ”„๋ฅผ ๊ด€๋ฆฌํ•˜๋„๋ก ํ•˜๋ฉด ๋™์ž‘ํ•ฉ๋‹ˆ๋‹ค.

๋‹ค์Œ์€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ํ…Œ์ŠคํŠธ์ž…๋‹ˆ๋‹ค:

- [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py)
- [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py)

์‹คํ–‰ ์ง€์ ์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•˜๋ ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ์—์„œ `execute_subprocess_async` ํ˜ธ์ถœ์„ ๊ฒ€์ƒ‰ํ•˜์„ธ์š”.

์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

```bash
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
```

### ์ถœ๋ ฅ ์บก์ฒ˜[[output-capture]]

ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ค‘ `stdout` ๋ฐ `stderr`๋กœ ์ „์†ก๋œ ๋ชจ๋“  ์ถœ๋ ฅ์ด ์บก์ฒ˜๋ฉ๋‹ˆ๋‹ค.
ํ…Œ์ŠคํŠธ๋‚˜ ์„ค์ • ๋ฉ”์†Œ๋“œ๊ฐ€ ์‹คํŒจํ•˜๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹คํŒจ ์ถ”์  ์ •๋ณด์™€ ํ•จ๊ป˜ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค.

์ถœ๋ ฅ ์บก์ฒ˜๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๊ณ  `stdout` ๋ฐ `stderr`๋ฅผ ์ •์ƒ์ ์œผ๋กœ ๋ฐ›์œผ๋ ค๋ฉด `-s` ๋˜๋Š” `--capture=no`๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”:

```bash
pytest -s tests/test_logging.py
```

ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ๋ฅผ JUnit ํ˜•์‹์˜ ์ถœ๋ ฅ์œผ๋กœ ๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์„ธ์š”:

```bash
py.test tests --junitxml=result.xml
```

### ์ƒ‰์ƒ ์กฐ์ ˆ[[color-control]]

์ƒ‰์ƒ์ด ์—†๊ฒŒ ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•˜์„ธ์š”(์˜ˆ๋ฅผ ๋“ค์–ด ํฐ์ƒ‰ ๋ฐฐ๊ฒฝ์— ๋…ธ๋ž€์ƒ‰ ๊ธ€์”จ๋Š” ๊ฐ€๋…์„ฑ์ด ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค):

```bash
pytest --color=no tests/test_logging.py
```

### online pastebin service์— ํ…Œ์ŠคํŠธ ๋ณด๊ณ ์„œ ์ „์†ก[[sending test report to online pastebin service]]

๊ฐ ํ…Œ์ŠคํŠธ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค:

```bash
pytest --pastebin=failed tests/test_logging.py
```

์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ฐ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ์ œ๊ณตํ•˜๋Š” remote Paste service์— ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ •๋ณด๋ฅผ ์ œ์ถœํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์„ ํƒํ•  ์ˆ˜๋„ ์žˆ๊ณ  ํ˜น์€ ํŠน์ • ์‹คํŒจ๋งŒ ๋ณด๋‚ด๋ ค๋ฉด `-x`์™€ ๊ฐ™์ด ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

์ „์ฒด ํ…Œ์ŠคํŠธ ์„ธ์…˜ ๋กœ๊ทธ์— ๋Œ€ํ•œ URL์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค:

```bash
pytest --pastebin=all tests/test_logging.py
```

## ํ…Œ์ŠคํŠธ ์ž‘์„ฑ[[writing-tests]]

๐Ÿค— transformers ํ…Œ์ŠคํŠธ๋Š” ๋Œ€๋ถ€๋ถ„ `unittest`๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ, `pytest`์—์„œ ์‹คํ–‰๋˜๋ฏ€๋กœ ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋‘ ์‹œ์Šคํ…œ์˜ ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ง€์›๋˜๋Š” ๊ธฐ๋Šฅ์— ๋Œ€ํ•ด [์—ฌ๊ธฐ](https://docs.pytest.org/en/stable/unittest.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ธฐ์–ตํ•ด์•ผ ํ•  ์ค‘์š”ํ•œ ์ ์€ ๋Œ€๋ถ€๋ถ„์˜ `pytest` fixture๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํŒŒ๋ผ๋ฏธํ„ฐํ™”๋„ ์ž‘๋™ํ•˜์ง€ ์•Š์ง€๋งŒ, ์šฐ๋ฆฌ๋Š” ๋น„์Šทํ•œ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•˜๋Š” `parameterized` ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

### ๋งค๊ฐœ๋ณ€์ˆ˜ํ™”[[parametrization]]

๋™์ผํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค๋ฅธ ์ธ์ˆ˜๋กœ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์ข…์ข… ์žˆ์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ๋‚ด์—์„œ ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ฉด ํ•˜๋‚˜์˜ ์ธ์ˆ˜ ์„ธํŠธ์— ๋Œ€ํ•ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค.

```python
# test_this1.py
import math
import unittest

from parameterized import parameterized


class TestMathUnitTest(unittest.TestCase):
    @parameterized.expand(
        [
            ("negative", -1.5, -2.0),
            ("integer", 1, 1.0),
            ("large fraction", 1.6, 1),
        ]
    )
    def test_floor(self, name, input, expected):
        self.assertEqual(math.floor(input), expected)
```

์ด์ œ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด ํ…Œ์ŠคํŠธ๋Š” `test_floor`์˜ ๋งˆ์ง€๋ง‰ 3๊ฐœ ์ธ์ˆ˜๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜ ๋ชฉ๋ก์˜ ํ•ด๋‹น ์ธ์ˆ˜์— ํ• ๋‹น๋˜๋Š” ๊ฒƒ์œผ๋กœ 3๋ฒˆ ์‹คํ–‰๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

๊ทธ๋ฆฌ๊ณ  `negative` ๋ฐ `integer` ๋งค๊ฐœ๋ณ€์ˆ˜ ์ง‘ํ•ฉ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest -k "negative and integer" tests/test_mytest.py
```

๋˜๋Š” `negative` ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest -k "not negative" tests/test_mytest.py
```

์•ž์—์„œ ์–ธ๊ธ‰ํ•œ `-k` ํ•„ํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„, ๊ฐ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ •ํ™•ํ•œ ์ด๋ฆ„์„ ํ™•์ธํ•œ ํ›„์— ์ผ๋ถ€ ํ˜น์€ ์ „์ฒด ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```bash
pytest test_this1.py --collect-only -q
```

๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```bash
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
```

2๊ฐœ์˜ ํŠน์ •ํ•œ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
```

`transformers`์˜ ๊ฐœ๋ฐœ์ž ์ข…์†์„ฑ์— ์ด๋ฏธ ์žˆ๋Š” [parameterized](https://pypi.org/project/parameterized/) ๋ชจ๋“ˆ์€ `unittests`์™€ `pytest` ํ…Œ์ŠคํŠธ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค.

๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๊ฐ€ `unittest`๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ด๋ฏธ ์žˆ๋Š” ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃผ๋กœ `examples` ํ•˜์œ„์— ์žˆ์Šต๋‹ˆ๋‹ค).

๋‹ค์Œ์€ `pytest`์˜ `parametrize` ๋งˆ์ปค๋ฅผ ์‚ฌ์šฉํ•œ ๋™์ผํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```python
# test_this2.py
import math

import pytest


@pytest.mark.parametrize(
    "name, input, expected",
    [
        ("negative", -1.5, -2.0),
        ("integer", 1, 1.0),
        ("large fraction", 1.6, 1),
    ],
)
def test_floor(name, input, expected):
    assert math.floor(input) == expected
```

`parameterized`์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด `-k` ํ•„ํ„ฐ๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋„ ์‹คํ–‰ํ•  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹จ, ์ด ๋งค๊ฐœ๋ณ€์ˆ˜ํ™” ํ•จ์ˆ˜๋Š” ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ด๋ฆ„ ์ง‘ํ•ฉ์„ ์•ฝ๊ฐ„ ๋‹ค๋ฅด๊ฒŒ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ์Šต์ž…๋‹ˆ๋‹ค:

```bash
pytest test_this2.py --collect-only -q
```

๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```bash
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
```

ํŠน์ •ํ•œ ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด์„œ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
```

์ด์ „์˜ ์˜ˆ์‹œ์™€ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

### ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[files-and-directories]]

ํ…Œ์ŠคํŠธ์—์„œ ์ข…์ข… ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ๊ด€๋ จ๋œ ์ƒ๋Œ€์ ์ธ ์œ„์น˜๋ฅผ ์•Œ์•„์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ๊ฐ€ ์—ฌ๋Ÿฌ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํ˜ธ์ถœ๋˜๊ฑฐ๋‚˜ ๊นŠ์ด๊ฐ€ ๋‹ค๋ฅธ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์žˆ์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๊ทธ ์œ„์น˜๋ฅผ ์•„๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. `transformers.test_utils.TestCasePlus`๋ผ๋Š” ํ—ฌํผ ํด๋ž˜์Šค๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ๊ฒฝ๋กœ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ๊ฐ„๋‹จํ•œ ์•ก์„ธ์„œ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค:

- `pathlib` ๊ฐ์ฒด(์™„์ „ํžˆ ์ •ํ•ด์ง„ ๊ฒฝ๋กœ)
  - `test_file_path` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ (์˜ˆ: `__file__`)
  - `test_file_dir` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์ด ํฌํ•จ๋œ ๋””๋ ‰ํ„ฐ๋ฆฌ
  - `tests_dir` - `tests` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ
  - `examples_dir` - `examples` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ
  - `repo_root_dir` - ์ €์žฅ์†Œ ๋””๋ ‰ํ„ฐ๋ฆฌ
  - `src_dir` - `src`์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ(์˜ˆ: `transformers` ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์žˆ๋Š” ๊ณณ)
- ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜๋œ ๊ฒฝ๋กœ---์œ„์™€ ๋™์ผํ•˜์ง€๋งŒ, `pathlib` ๊ฐ์ฒด๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž์—ด๋กœ ๊ฒฝ๋กœ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:
  - `test_file_path_str`
  - `test_file_dir_str`
  - `tests_dir_str`
  - `examples_dir_str`
  - `repo_root_dir_str`
  - `src_dir_str`

์œ„์˜ ๋‚ด์šฉ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ `transformers.test_utils.TestCasePlus`์˜ ์„œ๋ธŒํด๋ž˜์Šค์— ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_local_locations(self): data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro" ``` ๋งŒ์•ฝ `pathlib`๋ฅผ ํ†ตํ•ด ๊ฒฝ๋กœ๋ฅผ ์กฐ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†๊ฑฐ๋‚˜ ๊ฒฝ๋กœ๋ฅผ ๋ฌธ์ž์—ด๋กœ๋งŒ ํ•„์š”๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” `pathlib` ๊ฐ์ฒด์— `str()`์„ ํ˜ธ์ถœํ•˜๊ฑฐ๋‚˜ `_str`๋กœ ๋๋‚˜๋Š” ์ ‘๊ทผ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_stringified_locations(self): examples_dir = self.examples_dir_str ``` ### ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[temporary-files-and-directories]] ๊ณ ์œ ํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๋ณ‘๋ ฌ ํ…Œ์ŠคํŠธ ์‹คํ–‰์— ์žˆ์–ด ํ•„์ˆ˜์ ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•จ์œผ๋กœ์จ ํ…Œ์ŠคํŠธ๋“ค์ด ์„œ๋กœ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฎ์–ด์“ฐ์ง€ ์•Š๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์šฐ๋ฆฌ๋Š” ์ƒ์„ฑ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ด๋Ÿฌํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ œ๊ฑฐํ•˜๊ณ  ์‹ถ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋Ÿฌํ•œ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์ถฉ์กฑ์‹œ์ผœ์ฃผ๋Š” `tempfile`๊ณผ ๊ฐ™์€ ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๋ฅผ ๋””๋ฒ„๊น…ํ•  ๋•Œ๋Š” ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ๋“ค์–ด๊ฐ€๋Š” ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ, ์žฌ์‹คํ–‰๋˜๋Š” ๊ฐ ํ…Œ์ŠคํŠธ๋งˆ๋‹ค ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ์— ๋Œ€ํ•ด ๋ฌด์ž‘์œ„ ๊ฐ’์ด ์•„๋‹Œ ์ •ํ™•ํ•œ ๊ฐ’์„ ์•Œ๊ณ  ์‹ถ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `transformers.test_utils.TestCasePlus`๋ผ๋Š” ๋„์šฐ๋ฏธ ํด๋ž˜์Šค๋Š” ์ด๋Ÿฌํ•œ ๋ชฉ์ ์— ๊ฐ€์žฅ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” `unittest.TestCase`์˜ ํ•˜์œ„ ํด๋ž˜์Šค์ด๋ฏ€๋กœ, ์šฐ๋ฆฌ๋Š” ์ด๊ฒƒ์„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์—์„œ ์‰ฝ๊ฒŒ ์ƒ์†ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ํ•ด๋‹น ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` ์ด ์ฝ”๋“œ๋Š” ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `tmp_dir`์„ ํ•ด๋‹น ์œ„์น˜๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. - ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` `tmp_dir`์—๋Š” ์ƒ์„ฑ๋œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. - ์„ ํƒํ•œ ๊ฒฝ๋กœ๋กœ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑ ํ›„์— ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์ „์— ๋น„์–ด ์žˆ๋Š” ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜๊ณ , ํ…Œ์ŠคํŠธ ํ›„์—๋Š” ๋น„์šฐ์ง€ ๋งˆ์„ธ์š”. ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir("./xxx") ``` ์ด๊ฒƒ์€ ๋””๋ฒ„๊น…ํ•  ๋•Œ ํŠน์ • ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ณ , ๊ทธ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์ด์ „์— ์‹คํ–‰๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ฐ์ดํ„ฐ๋ฅผ ๋‚จ๊ธฐ์ง€ ์•Š๋„๋ก ํ•˜๋Š” ๋ฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. - `before` ๋ฐ `after` ์ธ์ˆ˜๋ฅผ ์ง์ ‘ ์˜ค๋ฒ„๋ผ์ด๋”ฉํ•˜์—ฌ ๊ธฐ๋ณธ ๋™์ž‘์„ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋‹ค์Œ ์ค‘ ํ•˜๋‚˜์˜ ๋™์ž‘์œผ๋กœ ์ด์–ด์ง‘๋‹ˆ๋‹ค: - `before=True`: ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์ง€์›Œ์ง‘๋‹ˆ๋‹ค. - `before=False`: ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ํŒŒ์ผ์€ ๊ทธ๋Œ€๋กœ ๋‚จ์Šต๋‹ˆ๋‹ค. - `after=True`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. - `after=False`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ๊ทธ๋Œ€๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. 
<Tip> `rm -r`์— ํ•ด๋‹นํ•˜๋Š” ๋ช…๋ น์„ ์•ˆ์ „ํ•˜๊ฒŒ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด, ๋ช…์‹œ์ ์ธ `tmp_dir`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ํ”„๋กœ์ ํŠธ ์ €์žฅ์†Œ ์ฒดํฌ ์•„์›ƒ์˜ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๋งŒ ํ—ˆ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์‹ค์ˆ˜๋กœ `/tmp`๊ฐ€ ์•„๋‹Œ ์ค‘์š”ํ•œ ํŒŒ์ผ ์‹œ์Šคํ…œ์˜ ์ผ๋ถ€๊ฐ€ ์‚ญ์ œ๋˜์ง€ ์•Š๋„๋ก ํ•ญ์ƒ `./`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒฝ๋กœ๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> <Tip> ๊ฐ ํ…Œ์ŠคํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋“ฑ๋กํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋ณ„๋„๋กœ ์š”์ฒญํ•˜์ง€ ์•Š๋Š” ํ•œ ๋ชจ๋‘ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. </Tip> ### ์ž„์‹œ sys.path ์˜ค๋ฒ„๋ผ์ด๋“œ[[temporary-sys.path-override]] `sys.path`๋ฅผ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ๋กœ ์ž„์‹œ๋กœ ์˜ค๋ฒ„๋ผ์ด๋“œํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ๋ฅผ ๋“ค์–ด `ExtendSysPath` ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath(f"{bindir}/.."): from test_trainer import TrainerIntegrationCommon # noqa ``` ### ํ…Œ์ŠคํŠธ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skipping-tests]] ์ด๊ฒƒ์€ ๋ฒ„๊ทธ๊ฐ€ ๋ฐœ๊ฒฌ๋˜์–ด ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž‘์„ฑ๋˜์—ˆ์ง€๋งŒ ์•„์ง ๊ทธ ๋ฒ„๊ทธ๊ฐ€ ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ๋ฅผ ์ฃผ ์ €์žฅ์†Œ์— ์ปค๋ฐ‹ํ•˜๋ ค๋ฉด `make test` ์ค‘์— ๊ฑด๋„ˆ๋›ฐ๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐฉ๋ฒ•: - **skip**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ์ผ๋ถ€ ์กฐ๊ฑด์ด ์ถฉ์กฑ๋  ๊ฒฝ์šฐ์—๋งŒ ํ†ต๊ณผ๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๊ณ , ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด pytest๊ฐ€ ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด์•ผ ํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” Windows๊ฐ€ ์•„๋‹Œ ํ”Œ๋žซํผ์—์„œ Windows ์ „์šฉ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๊ฑฐ๋‚˜ ์™ธ๋ถ€ ๋ฆฌ์†Œ์Šค(์˜ˆ๋ฅผ ๋“ค์–ด ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค)์— ์˜์กดํ•˜๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๊ฒƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - **xfail**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ํŠน์ •ํ•œ ์ด์œ ๋กœ ์ธํ•ด ์‹คํŒจํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์€ ๊ธฐ๋Šฅ์ด๋‚˜ ์•„์ง ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๋ฒ„๊ทธ์˜ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `xfail`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํŒจํ•˜์ง€ ์•Š๊ณ  ํ†ต๊ณผ๋œ ๊ฒฝ์šฐ, ์ด๊ฒƒ์€ xpass์ด๋ฉฐ ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ ์š”์•ฝ์— ๊ธฐ๋ก๋ฉ๋‹ˆ๋‹ค. ๋‘ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์ฐจ์ด์  ์ค‘ ํ•˜๋‚˜๋Š” `skip`์€ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์ง€๋งŒ `xfail`์€ ์‹คํ–‰ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์˜ค๋ฅ˜๊ฐ€ ์žˆ๋Š” ์ฝ”๋“œ๊ฐ€ ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ `xfail`์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. 
#### ๊ตฌํ˜„[[implementation]] - ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๋ฌด์กฐ๊ฑด ๊ฑด๋„ˆ๋›ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python no-style @unittest.skip("this bug needs to be fixed") def test_feature_x(): ``` ๋˜๋Š” pytest๋ฅผ ํ†ตํ•ด: ```python no-style @pytest.mark.skip(reason="this bug needs to be fixed") ``` ๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ: ```python no-style @pytest.mark.xfail def test_feature_x(): ``` - ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‚ด๋ถ€ ํ™•์ธ์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python def test_feature_x(): if not has_something(): pytest.skip("unsupported configuration") ``` ๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด: ```python import pytest if not pytest.config.getoption("--custom-flag"): pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True) ``` ๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ: ```python def test_feature_x(): pytest.xfail("expected to fail until bug XYZ is fixed") ``` - import๊ฐ€ missing๋œ ๋ชจ๋“ˆ์ด ์žˆ์„ ๋•Œ ๊ทธ ๋ชจ๋“ˆ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python docutils = pytest.importorskip("docutils", minversion="0.3") ``` - ์กฐ๊ฑด์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python no-style @pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher") def test_feature_x(): ``` ๋˜๋Š”: ```python no-style @unittest.skipIf(torch_device == "cpu", "Can't do half precision") def test_feature_x(): ``` ๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•: ```python no-style @pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows") class TestClass(): def test_feature_x(self): ``` ๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ ๋ฐ ๋ฐฉ๋ฒ•์€ [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/skipping.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ[[slow-tests]] ํ…Œ์ŠคํŠธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ง€์†์ ์œผ๋กœ ํ™•์žฅ๋˜๊ณ  ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ์‹คํ–‰ํ•˜๋Š” ๋ฐ ๋ช‡ ๋ถ„์ด ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์šฐ๋ฆฌ์—๊ฒŒ๋Š” ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ CI๋ฅผ ํ†ตํ•ด ์™„๋ฃŒ๋˜๊ธฐ๊นŒ์ง€ ํ•œ ์‹œ๊ฐ„์„ ๊ธฐ๋‹ค๋ฆด ์—ฌ์œ ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•„์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์œ„ํ•œ ์ผ๋ถ€ ์˜ˆ์™ธ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python no-style from transformers.testing_utils import slow @slow def test_integration_foo(): ``` `@slow`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด `RUN_SLOW=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest tests ``` `@parameterized`์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ `@slow`์™€ ๋‚˜๋จธ์ง€ ๊ฑด๋„ˆ๋›ฐ๊ธฐ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ `@require_*`๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™๋˜๋ ค๋ฉด ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค. ```python no-style @parameterized.expand(...) @slow def test_integration_foo(): ``` ์ด ๋ฌธ์„œ์˜ ์ดˆ๋ฐ˜๋ถ€์— ์„ค๋ช…๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” PR์˜ CI ํ™•์ธ์ด ์•„๋‹Œ ์˜ˆ์•ฝ๋œ ์ผ์ • ๊ธฐ๋ฐ˜์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PR ์ œ์ถœ ์ค‘์— ์ผ๋ถ€ ๋ฌธ์ œ๋ฅผ ๋†“์นœ ์ฑ„๋กœ ๋ณ‘ํ•ฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์€ ๋‹ค์Œ๋ฒˆ์˜ ์˜ˆ์ •๋œ CI ์ž‘์—… ์ค‘์— ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ PR์„ ์ œ์ถœํ•˜๊ธฐ ์ „์— ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ ๋˜ํ•œ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ํ‘œ์‹œํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋Œ€๋žต์ ์ธ ๊ฒฐ์ • ๊ธฐ์ค€์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. 
๋งŒ์•ฝ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‚ด๋ถ€ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด(์˜ˆ: ๋ชจ๋ธ๋ง ํŒŒ์ผ, ํ† ํฐํ™” ํŒŒ์ผ, ํŒŒ์ดํ”„๋ผ์ธ), ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ์ธก๋ฉด(์˜ˆ: ๋ฌธ์„œ ๋˜๋Š” ์˜ˆ์ œ)์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ์™ธ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋ฌด๊ฑฐ์šด ๊ฐ€์ค‘์น˜ ์„ธํŠธ๋‚˜ 50MB๋ณด๋‹ค ํฐ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•ด์•ผ ํ•˜๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ(์˜ˆ: ๋ชจ๋ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํ† ํฌ๋‚˜์ด์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํŒŒ์ดํ”„๋ผ์ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ)๋ฅผ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ์ž‘์€ ๋ฒ„์ „์„ ๋งŒ๋“ค์–ด ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‚ด์šฉ์€ ์•„๋ž˜ ๋‹จ๋ฝ์—์„œ ์„ค๋ช…๋ฉ๋‹ˆ๋‹ค. - ํŠน๋ณ„ํžˆ ๋น ๋ฅด๊ฒŒ ์‹คํ–‰๋˜๋„๋ก ์ตœ์ ํ™”๋˜์ง€ ์•Š์€ ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋Š๋ฆฌ์ง€ ์•Š์•„์•ผ ํ•  ํ…Œ์ŠคํŠธ ์ค‘ ์ผ๋ถ€๊ฐ€ ๊ทน๋„๋กœ ๋Š๋ฆฐ ๊ฒฝ์šฐ ์˜ˆ์™ธ๋ฅผ ๋„์ž…ํ•˜๊ณ  ์ด๋ฅผ `@slow`๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์šฉ๋Ÿ‰ ํŒŒ์ผ์„ ๋””์Šคํฌ์— ์ €์žฅํ•˜๊ณ  ๋ถˆ๋Ÿฌ์˜ค๋Š” ์ž๋™ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ๋Š” `@slow`์œผ๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข‹์€ ์˜ˆ์ž…๋‹ˆ๋‹ค. - CI์—์„œ 1์ดˆ ์ด๋‚ด์— ํ…Œ์ŠคํŠธ๊ฐ€ ์™„๋ฃŒ๋˜๋Š” ๊ฒฝ์šฐ(๋‹ค์šด๋กœ๋“œ ํฌํ•จ)์—๋Š” ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹ˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์–‘ํ•œ ๋‚ด๋ถ€๋ฅผ ์™„์ „ํžˆ ์ปค๋ฒ„ํ•˜๋ฉด์„œ ๋น ๋ฅด๊ฒŒ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน๋ณ„ํžˆ ์ƒ์„ฑ๋œ ์ž‘์€ ๋ชจ๋ธ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ปค๋ฒ„๋ฆฌ์ง€๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ตœ์†Œํ•œ์˜ ๋ ˆ์ด์–ด ์ˆ˜(์˜ˆ: 2), ์–ดํœ˜ ํฌ๊ธฐ(์˜ˆ: 1000) ๋“ฑ์˜ ์š”์†Œ๋งŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `@slow` ํ…Œ์ŠคํŠธ๋Š” ๋Œ€ํ˜• ๋Š๋ฆฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ •์„ฑ์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด *tiny* ๋ชจ๋ธ์„ ์ฐพ์•„๋ณด์„ธ์š”. ```bash grep tiny tests examples ``` ๋‹ค์Œ์€ ์ž‘์€ ๋ชจ๋ธ[stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de)์„ ๋งŒ๋“  [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋Œ€์šฉ๋Ÿ‰ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๊ฒฝ์šฐ ๋Ÿฐํƒ€์ž„์„ ์ž˜๋ชป ์ธก์ •ํ•˜๊ธฐ ์‰ฝ์ง€๋งŒ, ๋กœ์ปฌ์—์„œ ํ…Œ์ŠคํŠธํ•˜๋ฉด ๋‹ค์šด๋กœ๋“œํ•œ ํŒŒ์ผ์ด ์บ์‹œ๋˜์–ด ๋‹ค์šด๋กœ๋“œ ์‹œ๊ฐ„์ด ์ธก์ •๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  CI ๋กœ๊ทธ์˜ ์‹คํ–‰ ์†๋„ ๋ณด๊ณ ์„œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”(`pytest --durations=0 tests`์˜ ์ถœ๋ ฅ). ์ด ๋ณด๊ณ ์„œ๋Š” ๋Š๋ฆฐ ์ด์ƒ๊ฐ’์œผ๋กœ ํ‘œ์‹œ๋˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋น ๋ฅด๊ฒŒ ๋‹ค์‹œ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋Š” ๋Š๋ฆฐ ์ด์ƒ๊ฐ’์„ ์ฐพ๋Š” ๋ฐ๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. CI์—์„œ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ๋Š๋ ค์ง€๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ์ด ๋ณด๊ณ ์„œ์˜ ๋งจ ์œ„ ๋ชฉ๋ก์— ๊ฐ€์žฅ ๋Š๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ### stdout/stderr ์ถœ๋ ฅ ํ…Œ์ŠคํŠธ[[testing-the-stdout/stderr-output]] `stdout` ๋ฐ/๋˜๋Š” `stderr`๋กœ ์“ฐ๋Š” ํ•จ์ˆ˜๋ฅผ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `pytest`์˜ [capsys ์‹œ์Šคํ…œ](https://docs.pytest.org/en/latest/capture.html)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ•ด๋‹น ์ŠคํŠธ๋ฆผ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python import sys def print_to_stdout(s): print(s) def print_to_stderr(s): sys.stderr.write(s) def test_result_and_stdout(capsys): msg = "Hello" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # ์บก์ฒ˜๋œ ์ถœ๋ ฅ ์ŠคํŠธ๋ฆผ ์‚ฌ์šฉ # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ sys.stdout.write(out) sys.stderr.write(err) # ํ…Œ์ŠคํŠธ: assert msg in out assert msg in err ``` ๊ทธ๋ฆฌ๊ณ , ๋ฌผ๋ก  ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” `stderr`๋Š” ์˜ˆ์™ธ์˜ ์ผ๋ถ€๋กœ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ ํ•ด๋‹น ๊ฒฝ์šฐ์—๋Š” try/except๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python def raise_exception(msg): raise ValueError(msg) def test_something_exception(): msg = "Not a good value" error = "" try: raise_exception(msg) except Exception as e: error = str(e) assert msg in error, f"{msg} is in the exception:\n{error}" ``` `stdout`๋ฅผ ์บก์ฒ˜ํ•˜๋Š” ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ `contextlib.redirect_stdout`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```python from io import StringIO from contextlib import redirect_stdout def print_to_stdout(s): print(s) def test_result_and_stdout(): msg = "Hello" buffer = StringIO() with redirect_stdout(buffer): print_to_stdout(msg) out = buffer.getvalue() # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ sys.stdout.write(out) # ํ…Œ์ŠคํŠธ: assert msg in out ``` `stdout` ์บก์ฒ˜์— ๊ด€๋ จ๋œ ์ค‘์š”ํ•œ ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ณดํ†ต `print`์—์„œ ์ด์ „์— ์ธ์‡„๋œ ๋‚ด์šฉ์„ ์žฌ์„ค์ •ํ•˜๋Š” `\r` ๋ฌธ์ž๊ฐ€ ํฌํ•จ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `pytest`์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ์—†์ง€๋งŒ `pytest -s`์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌธ์ž๊ฐ€ ๋ฒ„ํผ์— ํฌํ•จ๋˜๋ฏ€๋กœ `-s`๊ฐ€ ์žˆ๊ฑฐ๋‚˜ ์—†๋Š” ์ƒํƒœ์—์„œ ํƒœ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ ค๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์— ๋Œ€ํ•ด ์ถ”๊ฐ€์ ์ธ ์ •๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” `re.sub(r'~.*\r', '', buf, 0, re.M)`์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋„์šฐ๋ฏธ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž ๋ž˜ํผ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ถœ๋ ฅ์— `\r`์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€์˜ ์—ฌ๋ถ€์— ๊ด€๊ณ„์—†์ด ๋ชจ๋“  ๊ฒƒ์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฏ€๋กœ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ```python from transformers.testing_utils import CaptureStdout with CaptureStdout() as cs: function_that_writes_to_stdout() print(cs.out) ``` ๋‹ค์Œ์€ ์ „์ฒด ํ…Œ์ŠคํŠธ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ```python from transformers.testing_utils import CaptureStdout msg = "Secret message\r" final = "Hello World" with CaptureStdout() as cs: print(msg + final) assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}" ``` `stderr`๋ฅผ ์บก์ฒ˜ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๋Œ€์‹  `CaptureStderr` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```python from transformers.testing_utils import CaptureStderr with CaptureStderr() as cs: function_that_writes_to_stderr() print(cs.err) ``` ๋‘ ์ŠคํŠธ๋ฆผ์„ ๋™์‹œ์— ์บก์ฒ˜ํ•ด์•ผ ํ•œ๋‹ค๋ฉด, ๋ถ€๋ชจ `CaptureStd` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```python from transformers.testing_utils import CaptureStd with CaptureStd() as cs: function_that_writes_to_stdout_and_stderr() print(cs.err, cs.out) ``` ๋˜ํ•œ, ํ…Œ์ŠคํŠธ์˜ ๋””๋ฒ„๊น…์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ์ด๋Ÿฌํ•œ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ปจํ…์ŠคํŠธ์—์„œ ์ข…๋ฃŒํ•  ๋•Œ ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ์„ ์ž๋™์œผ๋กœ ๋‹ค์‹œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ๋กœ๊ฑฐ ์ŠคํŠธ๋ฆผ ์บก์ฒ˜[[capturing-logger-stream]] ๋กœ๊ฑฐ ์ถœ๋ ฅ์„ ๊ฒ€์ฆํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ `CaptureLogger`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out == msg + "\n"
```

### 환경 변수를 이용하여 테스트[[testing-with-environment-variables]]

특정 테스트의 환경 변수 영향을 검증하려면 `transformers.testing_utils.mockenv`라는 도우미 데코레이터를 사용할 수 있습니다.

```python
import os
import unittest

from transformers.testing_utils import mockenv


class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```

일부 경우에는 외부 프로그램을 호출해야 할 수도 있는데, 이때는 여러 개의 로컬 경로를 포함하도록 `os.environ`의 `PYTHONPATH`를 설정해야 합니다. 헬퍼 클래스 `transformers.testing_utils.TestCasePlus`가 도움이 됩니다:

```python
from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # 이제 `env`를 사용하여 외부 프로그램 호출
```

테스트 파일이 `tests` 테스트 스위트 또는 `examples`에 있는지에 따라 `env[PYTHONPATH]`가 두 디렉터리 중 하나를 포함하도록 설정되며, 현재 저장소에 대해 테스트가 수행되도록 `src` 디렉터리도 포함됩니다. 테스트 호출 이전에 설정된 경우에는 `env[PYTHONPATH]`를 그대로 사용합니다.

이 헬퍼 메소드는 `os.environ` 객체의 사본을 생성하므로 원본은 그대로 유지됩니다.

### 재현 가능한 결과 얻기[[getting-reproducible-results]]

일부 상황에서는 테스트에서 임의성을 제거하여 동일하게 재현 가능한 결과를 얻고 싶을 수 있습니다. 이를 위해서는 다음과 같이 시드를 고정해야 합니다.

```python
seed = 42

# 파이썬 RNG
import random

random.seed(seed)

# 파이토치 RNG
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# 넘파이 RNG
import numpy as np

np.random.seed(seed)

# 텐서플로 RNG
import tensorflow as tf

tf.random.set_seed(seed)
```

### 테스트 디버깅[[debugging-tests]]

경고가 발생한 지점에서 디버거를 시작하려면 다음을 수행하세요.

```bash
pytest tests/test_logging.py -W error::UserWarning --pdb
```

## Github Actions 워크플로우 작업 처리[[working-with-github-actions-workflows]]

셀프 푸시 워크플로우 CI 작업을 트리거하려면, 다음을 수행해야 합니다.

1. `transformers` 원본 저장소에서 새 브랜치를 만듭니다(포크가 아닙니다!).
2. 브랜치 이름은 `ci_` 또는 `ci-`로 시작해야 합니다(`main`도 트리거하지만 `main`에서는 PR을 만들 수 없습니다). 또한 특정 경로에 대해서만 트리거되므로, 이 문서가 작성된 이후 변경된 내용은 [여기](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml)의 *push:*에서 확인할 수 있습니다.
3. 이 브랜치에서 PR을 생성합니다.
4. 그런 다음 [여기](https://github.com/huggingface/transformers/actions/workflows/self-push.yml)에서 작업이 나타나는지 확인할 수 있습니다. 백로그가 있는 경우, 바로 실행되지 않을 수도 있습니다.
## ์‹คํ—˜์ ์ธ CI ๊ธฐ๋Šฅ ํ…Œ์ŠคํŠธ[[testing-Experimental-CI-Features]] CI ๊ธฐ๋Šฅ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฒƒ์€ ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๊ฐ€ ๋  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ž ์žฌ์ ์œผ๋กœ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•  ๋‚ด์šฉ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ์ƒˆ๋กœ์šด ์ „์šฉ ์ž‘์—…์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ์ƒˆ๋กœ์šด ์ž‘์—…์€ ํ•ญ์ƒ ์„ฑ๊ณตํ•ด์•ผ๋งŒ ๋…น์ƒ‰ โœ“๋ฅผ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์•„๋ž˜์— ์ž์„ธํ•œ ๋‚ด์šฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค). 3. ๋‹ค์–‘ํ•œ PR ์œ ํ˜•์— ๋Œ€ํ•œ ํ™•์ธ์„ ์œ„ํ•ด (์‚ฌ์šฉ์ž ํฌํฌ ๋ธŒ๋žœ์น˜, ํฌํฌ๋˜์ง€ ์•Š์€ ๋ธŒ๋žœ์น˜, github.com UI ์ง์ ‘ ํŒŒ์ผ ํŽธ์ง‘์—์„œ ์ƒ์„ฑ๋œ ๋ธŒ๋žœ์น˜, ๊ฐ•์ œ ํ‘ธ์‹œ ๋“ฑ PR์˜ ์œ ํ˜•์€ ์•„์ฃผ ๋‹ค์–‘ํ•ฉ๋‹ˆ๋‹ค.) ๋ฉฐ์น  ๋™์•ˆ ์‹คํ—˜ ์ž‘์—…์˜ ๋กœ๊ทธ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ฉด์„œ ์‹คํ–‰ํ•ด๋ด…๋‹ˆ๋‹ค. (์˜๋„์ ์œผ๋กœ ํ•ญ์ƒ ๋…น์ƒ‰์„ ํ‘œ์‹œํ•˜๋ฏ€๋กœ ์ž‘์—… ์ „์ฒด๊ฐ€ ๋…น์ƒ‰์€ ์•„๋‹ˆ๋ผ๋Š” ์ ์— ์œ ์˜ํ•ฉ๋‹ˆ๋‹ค.) 4. ๋ชจ๋“  ๊ฒƒ์ด ์•ˆ์ •์ ์ธ์ง€ ํ™•์ธํ•œ ํ›„, ์ƒˆ๋กœ์šด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ธฐ์กด ์ž‘์—…์— ๋ณ‘ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด CI ๊ธฐ๋Šฅ ์ž์ฒด์— ๋Œ€ํ•œ ์‹คํ—˜์ด ์ผ๋ฐ˜ ์ž‘์—… ํ๋ฆ„์— ๋ฐฉํ•ด๊ฐ€ ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์ด ๊ฐœ๋ฐœ ์ค‘์ธ ๋™์•ˆ, ํ•ญ์ƒ ์„ฑ๊ณตํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ผ๊นŒ์š”? TravisCI์™€ ๊ฐ™์€ ์ผ๋ถ€ CI๋Š” `ignore-step-failure`๋ฅผ ์ง€์›ํ•˜๋ฉฐ ์ „์ฒด ์ž‘์—…์„ ์„ฑ๊ณตํ•œ ๊ฒƒ์œผ๋กœ ๋ณด๊ณ ํ•˜์ง€๋งŒ, ํ˜„์žฌ ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š” CircleCI์™€ Github Actions๋Š” ์ด๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ•ด๊ฒฐ์ฑ…์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. bash ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€๋Šฅํ•œ ๋งŽ์€ ์˜ค๋ฅ˜๋ฅผ ์–ต์ œํ•˜๊ธฐ ์œ„ํ•ด ์‹คํ–‰ ๋ช…๋ น์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— `set +euo pipefail`์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 2. ๋งˆ์ง€๋ง‰ ๋ช…๋ น์€ ๋ฐ˜๋“œ์‹œ ์„ฑ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `echo "done"` ๋˜๋Š” `true`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ```yaml - run: name: run CI experiment command: | set +euo pipefail echo "setting run-all-despite-any-errors-mode" this_command_will_fail echo "but bash continues to run" # emulate another failure false # but the last command must be a success echo "during experiment do not remove: reporting success to CI, even if there were failures" ``` ๊ฐ„๋‹จํ•œ ๋ช…๋ น์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash cmd_that_may_fail || true ``` ๊ฒฐ๊ณผ์— ๋งŒ์กฑํ•œ ํ›„์—๋Š” ๋ฌผ๋ก , ์‹คํ—˜์ ์ธ ๋‹จ๊ณ„ ๋˜๋Š” ์ž‘์—…์„ ์ผ๋ฐ˜ ์ž‘์—…์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„๊ณผ ํ†ตํ•ฉํ•˜๋ฉด์„œ `set +euo pipefail` ๋˜๋Š” ๊ธฐํƒ€ ์ถ”๊ฐ€ํ•œ ์š”์†Œ๋ฅผ ์ œ๊ฑฐํ•˜์—ฌ ์‹คํ—˜ ์ž‘์—…์ด ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๋˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ „๋ฐ˜์ ์ธ ๊ณผ์ •์€ ์‹คํ—˜ ๋‹จ๊ณ„๊ฐ€ PR์˜ ์ „๋ฐ˜์ ์ธ ์ƒํƒœ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š๊ณ  ์‹คํŒจํ•˜๋„๋ก `allow-failure`์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ๋‹ค๋ฉด ํ›จ์”ฌ ๋” ์‰ฌ์› ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฐ”์™€ ๊ฐ™์ด CircleCI์™€ Github Actions๋Š” ํ˜„์žฌ ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ์˜ ์ง€์›์„ ์œ„ํ•œ ํˆฌํ‘œ์— ์ฐธ์—ฌํ•˜๊ณ  CI ๊ด€๋ จ ์Šค๋ ˆ๋“œ๋“ค์—์„œ ์ด๋Ÿฌํ•œ ์ƒํ™ฉ์„ ํ™•์ธํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. - [Github Actions:](https://github.com/actions/toolkit/issues/399) - [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์„ค์น˜๋ฐฉ๋ฒ•[[installation]] ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉ ์ค‘์ธ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋งž์ถฐ ์„ค์น˜ํ•˜๊ณ , ์บ์‹œ๋ฅผ ๊ตฌ์„ฑํ•˜๊ฑฐ๋‚˜ ์„ ํƒ์ ์œผ๋กœ ์˜คํ”„๋ผ์ธ์—์„œ๋„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋„๋ก ๐Ÿค— Transformers๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šฐ๊ฒ ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+ ๋ฐ Flax์—์„œ ํ…Œ์ŠคํŠธ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋งํฌ๋œ ์ €๋งˆ๋‹ค์˜ ๊ณต์‹ ์‚ฌ์ดํŠธ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. * [PyTorch](https://pytorch.org/get-started/locally/) ์„ค์น˜ํ•˜๊ธฐ * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) ์„ค์น˜ํ•˜๊ธฐ * [Flax](https://flax.readthedocs.io/en/latest/) ์„ค์น˜ํ•˜๊ธฐ ## pip์œผ๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-pip]] ๐Ÿค— Transformers๋ฅผ [๊ฐ€์ƒ ํ™˜๊ฒฝ](https://docs.python.org/3/library/venv.html)์— ์„ค์น˜ํ•˜๋Š” ๊ฒƒ์„ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Python ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ์ด [๊ฐ€์ด๋“œ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ์‚ฌ์šฉํ•˜๋ฉด ์„œ๋กœ ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ๋“ค์„ ๋ณด๋‹ค ์‰ฝ๊ฒŒ ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ๊ณ , ์˜์กด์„ฑ ๊ฐ„์˜ ํ˜ธํ™˜์„ฑ ๋ฌธ์ œ๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ํ”„๋กœ์ ํŠธ ๋””๋ ‰ํ† ๋ฆฌ์—์„œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ๋งŒ๋“ค์–ด ์ค๋‹ˆ๋‹ค. ```bash python -m venv .env ``` ๊ฐ€์ƒ ํ™˜๊ฒฝ์„ ํ™œ์„ฑํ™”ํ•ด์ฃผ์„ธ์š”. Linux๋‚˜ MacOS์˜ ๊ฒฝ์šฐ: ```bash source .env/bin/activate ``` Windows์˜ ๊ฒฝ์šฐ: ```bash .env/Scripts/activate ``` ์ด์ œ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash pip install transformers ``` CPU๋งŒ ์จ๋„ ๋œ๋‹ค๋ฉด, ๐Ÿค— Transformers์™€ ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋‹จ 1์ค„๋กœ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๐Ÿค— Transformers์™€ PyTorch์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers์™€ TensorFlow 2.0์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers์™€ Flax์˜ ๊ฒฝ์šฐ: ```bash pip install transformers[flax] ``` ๋งˆ์ง€๋ง‰์œผ๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` ๋ผ๋ฒจ๊ณผ ์ ์ˆ˜๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉด ์ž˜ ์„ค์น˜๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๊ธฐ[[install-from-source]] ๐Ÿค— Transformers๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด ์•„๋ž˜ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash pip install git+https://github.com/huggingface/transformers ``` ์œ„ ๋ช…๋ น์€ ์ตœ์‹ ์ด์ง€๋งŒ (์•ˆ์ •์ ์ธ) `stable` ๋ฒ„์ „์ด ์•„๋‹Œ ์‹คํ—˜์„ฑ์ด ์ง™์€ `main` ๋ฒ„์ „์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค. `main` ๋ฒ„์ „์€ ๊ฐœ๋ฐœ ํ˜„ํ™ฉ๊ณผ ๋ฐœ๋งž์ถ”๋Š”๋ฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ์‹œ๋กœ ๋งˆ์ง€๋ง‰ ๊ณต์‹ ๋ฆด๋ฆฌ์Šค ์ดํ›„ ๋ฐœ๊ฒฌ๋œ ๋ฒ„๊ทธ๊ฐ€ ํŒจ์น˜๋˜์—ˆ์ง€๋งŒ, ์ƒˆ ๋ฆด๋ฆฌ์Šค๋กœ ์•„์ง ๋กค์•„์›ƒ๋˜์ง€๋Š” ์•Š์€ ๊ฒฝ์šฐ๋ฅผ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ”๊ฟ” ๋งํ•˜๋ฉด `main` ๋ฒ„์ „์ด ์•ˆ์ •์„ฑ๊ณผ๋Š” ๊ฑฐ๋ฆฌ๊ฐ€ ์žˆ๋‹ค๋Š” ๋œป์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” `main` ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†๋„๋ก ๋…ธ๋ ฅํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์ œ๋Š” ๋Œ€๊ฐœ ๋ช‡ ์‹œ๊ฐ„์ด๋‚˜ ํ•˜๋ฃจ ์•ˆ์— ํ•ด๊ฒฐ๋ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด [์ด์Šˆ](https://github.com/huggingface/transformers/issues)๋ฅผ ์—ด์–ด์ฃผ์‹œ๋ฉด ๋” ๋นจ๋ฆฌ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๐Ÿค— Transformers๊ฐ€ ์ œ๋Œ€๋กœ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜[[editable-install]] ์ˆ˜์ • ๊ฐ€๋Šฅํ•œ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. * `main` ๋ฒ„์ „์˜ ์†Œ์Šค ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด * ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ณ  ์‹ถ์–ด์„œ ์ฝ”๋“œ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ…Œ์ŠคํŠธํ•˜๊ธฐ ์œ„ํ•ด ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•˜๊ณ  ๐Ÿค— Transformers๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”. ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` ์œ„ ๋ช…๋ น์€ ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ๋ณต์ œํ•œ ์œ„์น˜์˜ ํด๋”์™€ Python ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ๋กœ๋ฅผ ์—ฐ๊ฒฐ์‹œํ‚ต๋‹ˆ๋‹ค. Python์ด ์ผ๋ฐ˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๊ฒฝ๋กœ ์™ธ์— ๋ณต์ œํ•œ ํด๋” ๋‚ด๋ถ€๋ฅผ ํ™•์ธํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด Python ํŒจํ‚ค์ง€๊ฐ€ ์ผ๋ฐ˜์ ์œผ๋กœ `~/anaconda3/envs/main/lib/python3.7/site-packages/`์— ์„ค์น˜๋˜์–ด ์žˆ๋Š”๋ฐ, ๋ช…๋ น์„ ๋ฐ›์€ Python์ด ์ด์ œ ๋ณต์ œํ•œ ํด๋”์ธ `~/transformers/`๋„ ๊ฒ€์ƒ‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ณ„์† ์‚ฌ์šฉํ•˜๋ ค๋ฉด `transformers` ํด๋”๋ฅผ ๊ผญ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋ณต์ œ๋ณธ์€ ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋กœ ์‰ฝ๊ฒŒ ์—…๋ฐ์ดํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash cd ~/transformers/ git pull ``` Python ํ™˜๊ฒฝ์„ ๋‹ค์‹œ ์‹คํ–‰ํ•˜๋ฉด ์—…๋ฐ์ดํŠธ๋œ ๐Ÿค— Transformers์˜ `main` ๋ฒ„์ „์„ ์ฐพ์•„๋‚ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## conda๋กœ ์„ค์น˜ํ•˜๊ธฐ[[install-with-conda]] `huggingface` conda ์ฑ„๋„์—์„œ ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash conda install -c huggingface transformers ``` ## ์บ์‹œ ๊ตฌ์„ฑํ•˜๊ธฐ[[cache-setup]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ๋‹ค์šด๋กœ๋“œ๋œ ํ›„ ๋กœ์ปฌ ๊ฒฝ๋กœ `~/.cache/huggingface/hub`์— ์บ์‹œ๋ฉ๋‹ˆ๋‹ค. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์˜ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ์ž…๋‹ˆ๋‹ค. Windows์˜ ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋””๋ ‰ํ„ฐ๋ฆฌ๋Š” `C:\Users\username\.cache\huggingface\hub`์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ (์šฐ์„  ์ˆœ์œ„) ์ˆœ์„œ๋Œ€๋กœ ๋ณ€๊ฒฝํ•˜์—ฌ ๋‹ค๋ฅธ ์บ์‹œ ๋””๋ ‰ํ† ๋ฆฌ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ (๊ธฐ๋ณธ): `HUGGINGFACE_HUB_CACHE` ๋˜๋Š” `TRANSFORMERS_CACHE` 2. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `HF_HOME` 3. ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜: `XDG_CACHE_HOME` + `/huggingface` <Tip> ๊ณผ๊ฑฐ ๐Ÿค— Transformers์—์„œ ์“ฐ์˜€๋˜ ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `PYTORCH_TRANSFORMERS_CACHE` ๋˜๋Š” `PYTORCH_PRETRAINED_BERT_CACHE`์ด ์„ค์ •๋˜์žˆ๋‹ค๋ฉด, ์…ธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ `TRANSFORMERS_CACHE`์„ ์ง€์ •ํ•˜์ง€ ์•Š๋Š” ํ•œ ์šฐ์„  ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์˜คํ”„๋ผ์ธ ๋ชจ๋“œ[[offline-mode]] ๐Ÿค— Transformers๋ฅผ ๋กœ์ปฌ ํŒŒ์ผ๋งŒ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ด์„œ ๋ฐฉํ™”๋ฒฝ ๋˜๋Š” ์˜คํ”„๋ผ์ธ ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `TRANSFORMERS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. <Tip> `HF_DATASETS_OFFLINE=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์—ฌ ์˜คํ”„๋ผ์ธ ํ›ˆ๋ จ ๊ณผ์ •์— [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/)์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip>

예를 들어 외부 기기 사이에 방화벽을 둔 일반 네트워크에서 평소처럼 프로그램을 다음과 같이 실행할 수 있습니다.

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

오프라인 기기에서 동일한 프로그램을 다음과 같이 실행할 수 있습니다.

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

이제 스크립트는 로컬 파일에 한해서만 검색할 것이므로, 스크립트가 중단되거나 시간이 초과될 때까지 멈춰있지 않고 잘 실행될 것입니다.

### 오프라인용 모델 및 토크나이저 만들어두기[[fetch-models-and-tokenizers-to-use-offline]]

🤗 Transformers를 오프라인으로 사용하는 또 다른 방법은 파일을 미리 다운로드한 다음, 오프라인일 때 사용할 로컬 경로를 지정해두는 것입니다. 3가지 중 편한 방법을 고르세요.

* [Model Hub](https://huggingface.co/models)의 UI를 통해 파일을 다운로드하려면 ↓ 아이콘을 클릭하세요.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* [`PreTrainedModel.from_pretrained`]와 [`PreTrainedModel.save_pretrained`] 워크플로를 활용하세요.

    1. 미리 [`PreTrainedModel.from_pretrained`]로 파일을 다운로드해두세요.

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. [`PreTrainedModel.save_pretrained`]로 지정된 경로에 파일을 저장해두세요.

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. 이제 오프라인일 때 [`PreTrainedModel.from_pretrained`]로 저장해뒀던 파일을 지정된 경로에서 다시 불러오세요.

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
    ```

* [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) 라이브러리를 활용해서 파일을 다운로드하세요.

    1. 가상환경에 `huggingface_hub` 라이브러리를 설치하세요.

    ```bash
    python -m pip install huggingface_hub
    ```

    2. [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) 함수로 파일을 특정 위치에 다운로드할 수 있습니다. 예를 들어 아래 명령은 [T0](https://huggingface.co/bigscience/T0_3B) 모델의 `config.json` 파일을 지정된 경로에 다운로드합니다.

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
    ```

파일을 다운로드하고 로컬에 캐시해놓고 나면, 나중에 불러와 사용할 수 있도록 로컬 경로를 지정해두세요.
```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Hub์— ์ €์žฅ๋œ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [Hub์—์„œ ํŒŒ์ผ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ](https://huggingface.co/docs/hub/how-to-downstream) ์„น์…˜์„ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. </Tip>
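참고로, 특정 파일 대신 리포지터리 전체를 미리 받아두고 싶다면 `huggingface_hub`의 `snapshot_download` 함수를 사용할 수도 있습니다. 아래는 위와 동일한 [T0](https://huggingface.co/bigscience/T0_3B) 모델을 예로 든 간단한 스케치이며, 저장 경로는 설명을 위한 가정입니다.

```py
>>> from huggingface_hub import snapshot_download

>>> # 리포지터리의 모든 파일을 지정한 캐시 경로로 내려받고, 내려받은 로컬 폴더 경로를 반환합니다.
>>> local_path = snapshot_download(repo_id="bigscience/T0_3B", cache_dir="./your/path/bigscience_t0")

>>> # 이후 오프라인 상태에서는 반환된 로컬 경로로 토크나이저와 모델을 불러올 수 있습니다.
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained(local_path)
>>> model = AutoModelForSeq2SeqLM.from_pretrained(local_path)
```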
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/pipeline_webserver.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using_pipelines_for_a_webserver]] <Tip> ์ถ”๋ก  ์—”์ง„์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์€ ๋ณต์žกํ•œ ์ฃผ์ œ์ด๋ฉฐ, "์ตœ์„ ์˜" ์†”๋ฃจ์…˜์€ ๋ฌธ์ œ ๊ณต๊ฐ„์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. CPU ๋˜๋Š” GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š”์ง€์— ๋”ฐ๋ผ ๋‹ค๋ฅด๊ณ  ๋‚ฎ์€ ์ง€์—ฐ ์‹œ๊ฐ„์„ ์›ํ•˜๋Š”์ง€, ๋†’์€ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์›ํ•˜๋Š”์ง€, ๋‹ค์–‘ํ•œ ๋ชจ๋ธ์„ ์ง€์›ํ•  ์ˆ˜ ์žˆ๊ธธ ์›ํ•˜๋Š”์ง€, ํ•˜๋‚˜์˜ ํŠน์ • ๋ชจ๋ธ์„ ๊ณ ๋„๋กœ ์ตœ์ ํ™”ํ•˜๊ธธ ์›ํ•˜๋Š”์ง€ ๋“ฑ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ์ฃผ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ, ์ด ์žฅ์—์„œ ์ œ์‹œํ•˜๋Š” ๊ฒƒ์€ ์ฒ˜์Œ ์‹œ๋„ํ•ด ๋ณด๊ธฐ์— ์ข‹์€ ์ถœ๋ฐœ์ ์ผ ์ˆ˜๋Š” ์žˆ์ง€๋งŒ, ์ด ์žฅ์„ ์ฝ๋Š” ์—ฌ๋Ÿฌ๋ถ„์ด ํ•„์š”๋กœ ํ•˜๋Š” ์ตœ์ ์˜ ์†”๋ฃจ์…˜์€ ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ํ•ต์‹ฌ์ ์œผ๋กœ ์ดํ•ดํ•ด์•ผ ํ•  ์ ์€ [dataset](pipeline_tutorial#using-pipelines-on-a-dataset)๋ฅผ ๋‹ค๋ฃฐ ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐ˜๋ณต์ž๋ฅผ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด, ์›น ์„œ๋ฒ„๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์š”์ฒญ์„ ๊ธฐ๋‹ค๋ฆฌ๊ณ  ๋“ค์–ด์˜ค๋Š” ๋Œ€๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์‹œ์Šคํ…œ์ด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ณดํ†ต ์›น ์„œ๋ฒ„๋Š” ๋‹ค์–‘ํ•œ ์š”์ฒญ์„ ๋™์‹œ์— ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด ๋งค์šฐ ๋‹ค์ค‘ํ™”๋œ ๊ตฌ์กฐ(๋ฉ€ํ‹ฐ ์Šค๋ ˆ๋”ฉ, ๋น„๋™๊ธฐ ๋“ฑ)๋ฅผ ์ง€๋‹ˆ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, ํŒŒ์ดํ”„๋ผ์ธ(๋Œ€๋ถ€๋ถ„ ํŒŒ์ดํ”„๋ผ์ธ ์•ˆ์— ์žˆ๋Š” ๋ชจ๋ธ)์€ ๋ณ‘๋ ฌ์ฒ˜๋ฆฌ์— ๊ทธ๋‹ค์ง€ ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์€ ๋งŽ์€ RAM์„ ์ฐจ์ง€ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, ํŒŒ์ดํ”„๋ผ์ธ์ด ์‹คํ–‰ ์ค‘์ด๊ฑฐ๋‚˜ ๊ณ„์‚ฐ ์ง‘์•ฝ์ ์ธ ์ž‘์—… ์ค‘์ผ ๋•Œ ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ฆฌ์†Œ์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ์šฐ๋ฆฌ๋Š” ์›น ์„œ๋ฒ„๊ฐ€ ์š”์ฒญ์„ ๋ฐ›๊ณ  ๋ณด๋‚ด๋Š” ๊ฐ€๋ฒผ์šด ๋ถ€ํ•˜๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ , ์‹ค์ œ ์ž‘์—…์„ ์ฒ˜๋ฆฌํ•˜๋Š” ๋‹จ์ผ ์Šค๋ ˆ๋“œ๋ฅผ ๊ฐ–๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ํ•ด๊ฒฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋Š” `starlette` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ํ”„๋ ˆ์ž„์›Œํฌ๋Š” ์ค‘์š”ํ•˜์ง€ ์•Š์ง€๋งŒ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋™์ผํ•œ ํšจ๊ณผ๋ฅผ ๋ณด๊ธฐ ์œ„ํ•ด์„  ์ฝ”๋“œ๋ฅผ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `server.py`๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py from starlette.applications import Starlette from starlette.responses import JSONResponse from starlette.routing import Route from transformers import pipeline import asyncio async def homepage(request): payload = await request.body() string = payload.decode("utf-8") response_q = asyncio.Queue() await request.app.model_queue.put((string, response_q)) output = await response_q.get() return JSONResponse(output) async def server_loop(q): pipe = pipeline(model="bert-base-uncased") while True: (string, response_q) = await q.get() out = pipe(string) await response_q.put(out) app = Starlette( routes=[ Route("/", homepage, methods=["POST"]), ], ) @app.on_event("startup") async def startup_event(): q = asyncio.Queue() app.model_queue = q asyncio.create_task(server_loop(q)) ``` ์ด์ œ ๋‹ค์Œ ๋ช…๋ น์–ด๋กœ ์‹คํ–‰์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash uvicorn server:app ``` ์ด์ œ ์ฟผ๋ฆฌ๋ฅผ ๋‚ ๋ ค๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash curl -X POST -d "test [MASK]" http://localhost:8000/ #[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...] ``` ์ž, ์ด์ œ ์›น ์„œ๋ฒ„๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ข‹์€ ๊ฐœ๋…์„ ์•Œ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์„ **ํ•œ ๋ฒˆ๋งŒ** ๊ฐ€์ ธ์˜จ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ ์›น ์„œ๋ฒ„์—๋Š” ๋ชจ๋ธ์˜ ์‚ฌ๋ณธ์ด ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๋ฐฉ์‹์€ ๋ถˆํ•„์š”ํ•œ RAM์ด ์‚ฌ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ํ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์‚ฌ์šฉํ•˜๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋™์  ๋ฐฐ์น˜๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ถ”๋ก  ์ „ ๋‹จ๊ณ„์— ๋ช‡ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ์ถ•์ ํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์€ ๋ฉ‹์ง„ ์ž‘์—…์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py (string, rq) = await q.get() strings = [] queues = [] while True: try: (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms except asyncio.exceptions.TimeoutError: break strings.append(string) queues.append(rq) strings outs = pipe(strings, batch_size=len(strings)) for rq, out in zip(queues, outs): await rq.put(out) ``` <Tip warning={true}> ์œ„์˜ ์ฝ”๋“œ๋ฅผ ์ž‘๋™์‹œํ‚ค๊ธฐ ์ „์— ๋‹น์‹ ์˜ ์‹œ์Šคํ…œ ์ž์›์ด ์ถฉ๋ถ„ํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ œ์•ˆ๋œ ์ฝ”๋“œ๋Š” ๊ฐ€๋…์„ฑ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋˜์—ˆ์œผ๋ฉฐ, ์ตœ์ƒ์˜ ์ฝ”๋“œ๋Š” ์•„๋‹™๋‹ˆ๋‹ค. ์ฒซ์งธ, ๋ฐฐ์น˜ ํฌ๊ธฐ ์ œํ•œ์ด ์—†์œผ๋ฉฐ ์ด๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ข‹์€ ๋ฐฉ์‹์ด ์•„๋‹™๋‹ˆ๋‹ค. ๋‘˜์งธ, ๋ชจ๋“  ํ ๊ฐ€์ ธ์˜ค๊ธฐ์—์„œ ํƒ€์ž„์•„์›ƒ์ด ์žฌ์„ค์ •๋˜๋ฏ€๋กœ ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— 1ms๋ณด๋‹ค ํ›จ์”ฌ ์˜ค๋ž˜ ๊ธฐ๋‹ค๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ฒซ ๋ฒˆ์งธ ์š”์ฒญ์„ ๊ทธ๋งŒํผ ์ง€์—ฐ์‹œํ‚ด). ๋‹จ์ผ 1ms ๊ธธ์ด์˜ ๋ฐ๋“œ๋ผ์ธ์„ ๋‘๋Š” ํŽธ์ด ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ํ๊ฐ€ ๋น„์–ด ์žˆ์–ด๋„ ํ•ญ์ƒ 1ms๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ์— ์•„๋ฌด๊ฒƒ๋„ ์—†์„ ๋•Œ ์ถ”๋ก ์„ ์›ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ตœ์„ ์˜ ๋ฐฉ๋ฒ•์ด ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ฐฐ์น˜ ์ž‘์—…์ด ์‚ฌ์šฉ๋ก€์— ๋”ฐ๋ผ ์ •๋ง๋กœ ์ค‘์š”ํ•˜๋‹ค๋ฉด ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ตœ์ƒ์˜ ์†”๋ฃจ์…˜์€ ์—†์Šต๋‹ˆ๋‹ค. ## ๊ณ ๋ คํ•ด์•ผ ํ•  ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ[[few_things_you_might want_to_consider]] ### ์—๋Ÿฌ ํ™•์ธ[[error_checking]] ํ”„๋กœ๋•์…˜ ํ™˜๊ฒฝ์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์—ฌ์ง€๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ชจ์ž๋ผ๊ฑฐ๋‚˜, ๊ณต๊ฐ„์ด ๋ถ€์กฑํ•˜๊ฑฐ๋‚˜, ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐ์— ์‹คํŒจํ•˜๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๊ฐ€ ์ž˜๋ชป๋˜์—ˆ๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๋Š” ์ •ํ™•ํ•ด๋„ ๋ชจ๋ธ ์„ค์ •์ด ์ž˜๋ชป๋˜์–ด ์‹คํ–‰์— ์‹คํŒจํ•˜๋Š” ๋“ฑ๋“ฑ ๋งŽ์€ ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์„œ๋ฒ„๊ฐ€ ์‚ฌ์šฉ์ž์—๊ฒŒ ์˜ค๋ฅ˜๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ฒƒ์ด ์ข‹์œผ๋ฏ€๋กœ ์˜ค๋ฅ˜๋ฅผ ํ‘œ์‹œํ•˜๊ธฐ ์œ„ํ•ด `try...except` ๋ฌธ์„ ๋งŽ์ด ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ณด์•ˆ ์ƒํ™ฉ์— ๋”ฐ๋ผ ๋ชจ๋“  ์˜ค๋ฅ˜๋ฅผ ํ‘œ์‹œํ•˜๋Š” ๊ฒƒ์€ ๋ณด์•ˆ์ƒ ์œ„ํ—˜ํ•  ์ˆ˜๋„ ์žˆ๋‹ค๋Š” ์ ์„ ๋ช…์‹ฌํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ### ์„œํ‚ท ๋ธŒ๋ ˆ์ดํ‚น[[circuit_breaking]] ์›น ์„œ๋ฒ„๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์„œํ‚ท ๋ธŒ๋ ˆ์ดํ‚น์„ ์ˆ˜ํ–‰ํ•  ๋•Œ ๋” ๋‚˜์€ ์ƒํ™ฉ์— ์ง๋ฉดํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์ด๋Š” ์„œ๋ฒ„๊ฐ€ ์ฟผ๋ฆฌ๋ฅผ ๋ฌด๊ธฐํ•œ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๋Œ€์‹  ๊ณผ๋ถ€ํ•˜ ์ƒํƒœ์ผ ๋•Œ ์ ์ ˆํ•œ ์˜ค๋ฅ˜๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์„œ๋ฒ„๊ฐ€ ๋งค์šฐ ์˜ค๋žœ ์‹œ๊ฐ„ ๋™์•ˆ ๋Œ€๊ธฐํ•˜๊ฑฐ๋‚˜ ์ ๋‹นํ•œ ์‹œ๊ฐ„์ด ์ง€๋‚œ ํ›„์— 504 ์—๋Ÿฌ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ๋Œ€์‹  503 ์—๋Ÿฌ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋ฐ˜ํ™˜ํ•˜๊ฒŒ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ œ์•ˆ๋œ ์ฝ”๋“œ์—๋Š” ๋‹จ์ผ ํ๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ๊ตฌํ˜„ํ•˜๊ธฐ๊ฐ€ ๋น„๊ต์  ์‰ฝ์Šต๋‹ˆ๋‹ค. ํ ํฌ๊ธฐ๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์€ ์›น ์„œ๋ฒ„๊ฐ€ ๊ณผ๋ถ€ํ•˜ ์ƒํ•ญ ํ•˜์— ์žˆ์„ ๋•Œ ์—๋Ÿฌ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐ€์žฅ ๊ธฐ์ดˆ์ ์ธ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ### ๋ฉ”์ธ ์“ฐ๋ ˆ๋“œ ์ฐจ๋‹จ[[blocking_the_main_thread]] ํ˜„์žฌ PyTorch๋Š” ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์œผ๋ฉฐ, ์‹คํ–‰ ์ค‘์—๋Š” ๋ฉ”์ธ ์Šค๋ ˆ๋“œ๊ฐ€ ์ฐจ๋‹จ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorch๋ฅผ ๋ณ„๋„์˜ ์Šค๋ ˆ๋“œ/ํ”„๋กœ์„ธ์Šค์—์„œ ์‹คํ–‰ํ•˜๋„๋ก ๊ฐ•์ œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด ์ž‘์—…์ด ์ˆ˜ํ–‰๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. 
์™œ๋ƒํ•˜๋ฉด ์ฝ”๋“œ๊ฐ€ ํ›จ์”ฌ ๋” ๋ณต์žกํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(์ฃผ๋กœ ์Šค๋ ˆ๋“œ, ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ, ํ๊ฐ€ ์„œ๋กœ ์ž˜ ๋งž์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค). ํ•˜์ง€๋งŒ ๊ถ๊ทน์ ์œผ๋กœ๋Š” ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹จ์ผ ํ•ญ๋ชฉ์˜ ์ถ”๋ก ์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฐ๋‹ค๋ฉด (> 1์ดˆ), ๋ฉ”์ธ ์“ฐ๋ ˆ๋“œ๋ฅผ ์ฐจ๋‹จํ•˜๋Š” ๊ฒƒ์€ ์ค‘์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ด ๊ฒฝ์šฐ ์ถ”๋ก  ์ค‘ ๋ชจ๋“  ์ฟผ๋ฆฌ๋Š” ์˜ค๋ฅ˜๋ฅผ ๋ฐ›๊ธฐ ์ „์— 1์ดˆ๋ฅผ ๊ธฐ๋‹ค๋ ค์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ### ๋™์  ๋ฐฐ์น˜[[dynamic_batching]] ์ผ๋ฐ˜์ ์œผ๋กœ, ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๊ฐ€ 1๊ฐœ ํ•ญ๋ชฉ์„ ํ•œ ๋ฒˆ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์— ๋น„ํ•ด ๋ฐ˜๋“œ์‹œ ์„ฑ๋Šฅ ํ–ฅ์ƒ์ด ์žˆ๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค(์ž์„ธํ•œ ๋‚ด์šฉ์€ [`batching details`](./main_classes/pipelines#pipeline-batching)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”). ํ•˜์ง€๋งŒ ์˜ฌ๋ฐ”๋ฅธ ์„ค์ •์—์„œ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ํšจ๊ณผ์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API์—๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์†๋„ ์ €ํ•˜์˜ ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’๊ธฐ ๋•Œ๋ฌธ์— ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋งค์šฐ ํฐ ๋ชจ๋ธ์ธ BLOOM ์ถ”๋ก ์˜ ๊ฒฝ์šฐ ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ์—๊ฒŒ ์ ์ ˆํ•œ ๊ฒฝํ—˜์„ ์ œ๊ณตํ•˜๋Š” ๋ฐ **ํ•„์ˆ˜**์ž…๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: ๋‘˜๋Ÿฌ๋ณด๊ธฐ - local: installation title: ์„ค์น˜๋ฐฉ๋ฒ• title: ์‹œ์ž‘ํ•˜๊ธฐ - sections: - local: pipeline_tutorial title: Pipeline์œผ๋กœ ์ถ”๋ก ํ•˜๊ธฐ - local: autoclass_tutorial title: AutoClass๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ์ธ์Šคํ„ด์Šค ๋กœ๋“œํ•˜๊ธฐ - local: preprocessing title: ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ - local: training title: ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - local: run_scripts title: ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ - local: accelerate title: ๐Ÿค— Accelerate๋กœ ๋ถ„์‚ฐ ํ•™์Šต ๊ตฌ์„ฑํ•˜๊ธฐ - local: model_sharing title: ๋งŒ๋“  ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ title: ํŠœํ† ๋ฆฌ์–ผ - sections: - sections: - local: tasks/sequence_classification title: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ - local: tasks/token_classification title: ํ† ํฐ ๋ถ„๋ฅ˜ - local: tasks/question_answering title: ์งˆ์˜ ์‘๋‹ต(Question Answering) - local: tasks/language_modeling title: ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง(Causal language modeling) - local: tasks/masked_language_modeling title: ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling) - local: tasks/translation title: ๋ฒˆ์—ญ - local: tasks/summarization title: ์š”์•ฝ - local: tasks/multiple_choice title: ๊ฐ๊ด€์‹ ๋ฌธ์ œ(Multiple Choice) title: ์ž์—ฐ์–ด์ฒ˜๋ฆฌ isExpanded: false - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Audio classification - local: tasks/asr title: ์ž๋™ ์Œ์„ฑ ์ธ์‹ title: (๋ฒˆ์—ญ์ค‘) ์˜ค๋””์˜ค isExpanded: false - sections: - local: tasks/image_classification title: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Semantic segmentation - local: tasks/video_classification title: ์˜์ƒ ๋ถ„๋ฅ˜ - local: tasks/object_detection title: ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_object_detection title: ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_image_classification title: ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: tasks/monocular_depth_estimation title: ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ • title: (๋ฒˆ์—ญ์ค‘) ์ปดํ“จํ„ฐ ๋น„์ „ isExpanded: false - sections: - local: tasks/image_captioning title: ์ด๋ฏธ์ง€ ์บก์…”๋‹ - local: tasks/document_question_answering title: ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) title: ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ isExpanded: false title: ํƒœ์Šคํฌ ๊ฐ€์ด๋“œ - sections: - local: fast_tokenizers title: ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ - local: multilingual title: ๋‹ค๊ตญ์–ด ๋ชจ๋ธ ์ถ”๋ก ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Customize text generation strategy - local: create_a_model title: ๋ชจ๋ธ๋ณ„ API ์‚ฌ์šฉํ•˜๊ธฐ - local: custom_models title: ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ - local: sagemaker title: Amazon SageMaker์—์„œ ํ•™์Šต ์‹คํ–‰ํ•˜๊ธฐ - local: serialization title: ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: tflite title: TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: torchscript title: TorchScript๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Benchmarks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Notebooks with examples - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Community resources - local: custom_tools title: ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ - local: troubleshooting title: ๋ฌธ์ œ ํ•ด๊ฒฐ title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋ฐœ์ž ๊ฐ€์ด๋“œ - sections: - local: performance title: ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on one GPU - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on many GPUs - local: perf_train_cpu title: CPU์—์„œ ํ›ˆ๋ จ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on many CPUs - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on TPUs - local: in_translation title: 
(๋ฒˆ์—ญ์ค‘) Training on TPU with TensorFlow - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on Specialized Hardware - local: perf_infer_cpu title: CPU๋กœ ์ถ”๋ก ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Inference on one GPU - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Inference on many GPUs - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Inference on Specialized Hardware - local: perf_hardware title: ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Instantiating a big model - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Debugging - local: hpo_train title: Trainer API๋ฅผ ์‚ฌ์šฉํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ - local: tf_xla title: TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ title: (๋ฒˆ์—ญ์ค‘) ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) How to contribute to transformers? - local: in_translation title: (๋ฒˆ์—ญ์ค‘) How to add a model to ๐Ÿค— Transformers? - local: in_translation title: (๋ฒˆ์—ญ์ค‘) How to convert a ๐Ÿค— Transformers model to TensorFlow? - local: in_translation title: (๋ฒˆ์—ญ์ค‘) How to add a pipeline to ๐Ÿค— Transformers? - local: testing title: ํ…Œ์ŠคํŠธ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Checks on a Pull Request title: (๋ฒˆ์—ญ์ค‘) ๊ธฐ์—ฌํ•˜๊ธฐ - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Philosophy - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Glossary - local: task_summary title: ๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—… - local: tasks_explained title: ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ• - local: in_translation title: (๋ฒˆ์—ญ์ค‘) The Transformer model family - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Summary of the tokenizers - local: attention title: ์–ดํ…์…˜ ๋งค์ปค๋‹ˆ์ฆ˜ - local: pad_truncation title: ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ - local: bertology title: BERTology - local: perplexity title: ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity) - local: pipeline_webserver title: ์ถ”๋ก  ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋… ๊ฐ€์ด๋“œ - sections: - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Auto Classes - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Callbacks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Configuration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data Collator - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Keras callbacks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Logging - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Text Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ONNX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Optimization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Model outputs - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Quantization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Tokenizer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeepSpeed Integration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Feature Extractor - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image Processor title: (๋ฒˆ์—ญ์ค‘) ๋ฉ”์ธ ํด๋ž˜์Šค - sections: - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BART - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARThez - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARTpho - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertGeneration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertJapanese - local: in_translation title: 
(๋ฒˆ์—ญ์ค‘) Bertweet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBird - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBirdPegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BioGpt - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot Small - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLOOM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BORT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ByT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CamemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CANINE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CodeGen - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPMANT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CTRL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa-v2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DialoGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DistilBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ELECTRA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ERNIE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ErnieM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ESM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FlauBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FSMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Funnel Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT Neo - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT-J - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTBigCode - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSAN Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSw3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) HerBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) I-BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Jukebox - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LED - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LLaMA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Longformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LongT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) M2M100 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MarianMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MarkupLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MBart and MBart-50 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MEGA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronGPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) mLUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MPNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MVP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NEZHA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB-MoE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Nystrรถmformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Open-Llama - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PEGASUS-X - local: in_translation 
title: (๋ฒˆ์—ญ์ค‘) PhoBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PLBart - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) QDQBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RAG - local: in_translation title: (๋ฒˆ์—ญ์ค‘) REALM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Reformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RetriBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa-PreLayerNorm - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoCBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Splinter - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SqueezeBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SwitchTransformers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) T5v1.1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPEX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Transformer XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-MOD - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XGLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa-XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-V - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOSO title: (๋ฒˆ์—ญ์ค‘) ํ…์ŠคํŠธ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BEiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Conditional DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXTV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CvT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Deformable DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiNAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FocalNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GLPN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ImageGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LeViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Mask2Former - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MaskFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PoolFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RegNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ResNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SegFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer V2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin2SR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Table Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TimeSformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UperNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VAN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VideoMAE - local: 
in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Transformer (ViT) - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViT Hybrid - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMAE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMSN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOLOS title: (๋ฒˆ์—ญ์ค‘) ๋น„์ „ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Audio Spectrogram Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLAP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Hubert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MCTCT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW-D - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SpeechT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech-SAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2-Conformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2Phoneme - local: in_translation title: (๋ฒˆ์—ญ์ค‘) WavLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Whisper - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLS-R - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLSR-Wav2Vec2 title: (๋ฒˆ์—ญ์ค‘) ์˜ค๋””์˜ค ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALIGN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) AltCLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP-2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BridgeTower - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Chinese-CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIPSeg - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data2Vec - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DePlot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Donut - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAVA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GIT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GroupViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutXLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LiLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LXMERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MatCha - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MGP-STR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OneFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OWL-ViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Perceiver - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pix2Struct - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Segment Anything - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPAS - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TrOCR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TVLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Text Dual Encoder - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VisualBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-CLIP title: (๋ฒˆ์—ญ์ค‘) ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Decision Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trajectory Transformer title: (๋ฒˆ์—ญ์ค‘) ๊ฐ•ํ™”ํ•™์Šต ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation 
title: (๋ฒˆ์—ญ์ค‘) Informer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Time Series Transformer title: (๋ฒˆ์—ญ์ค‘) ์‹œ๊ณ„์—ด ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Graphormer title: (๋ฒˆ์—ญ์ค‘) Graph models title: (๋ฒˆ์—ญ์ค‘) ๋ชจ๋ธ - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Custom Layers and Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Tokenizers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Image Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Audio processing - local: in_translation title: (๋ฒˆ์—ญ์ค‘) General Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Time Series title: (๋ฒˆ์—ญ์ค‘) Internal Helpers title: (๋ฒˆ์—ญ์ค‘) API
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹ค๊ตญ์–ด ๋ชจ๋ธ ์ถ”๋ก ํ•˜๊ธฐ[[multilingual-models-for-inference]] [[open-in-colab]] ๐Ÿค— Transformers์—๋Š” ์—ฌ๋Ÿฌ ์ข…๋ฅ˜์˜ ๋‹ค๊ตญ์–ด(multilingual) ๋ชจ๋ธ์ด ์žˆ์œผ๋ฉฐ, ๋‹จ์ผ ์–ธ์–ด(monolingual) ๋ชจ๋ธ๊ณผ ์ถ”๋ก  ์‹œ ์‚ฌ์šฉ๋ฒ•์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๊ณ  ํ•ด์„œ *๋ชจ๋“ * ๋‹ค๊ตญ์–ด ๋ชจ๋ธ์˜ ์‚ฌ์šฉ๋ฒ•์ด ๋‹ค๋ฅธ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)์™€ ๊ฐ™์€ ๋ช‡๋ช‡ ๋ชจ๋ธ์€ ๋‹จ์ผ ์–ธ์–ด ๋ชจ๋ธ์ฒ˜๋Ÿผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๊ตญ์–ด ๋ชจ๋ธ์˜ ์ถ”๋ก  ์‹œ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## XLM[[xlm]] XLM์—๋Š” 10๊ฐ€์ง€ ์ฒดํฌํฌ์ธํŠธ(checkpoint)๊ฐ€ ์žˆ๋Š”๋ฐ, ์ด ์ค‘ ํ•˜๋‚˜๋งŒ ๋‹จ์ผ ์–ธ์–ด์ž…๋‹ˆ๋‹ค. ๋‚˜๋จธ์ง€ ์ฒดํฌํฌ์ธํŠธ 9๊ฐœ๋Š” ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ์™€ ๊ทธ๋ ‡์ง€ ์•Š์€ ์ฒดํฌํฌ์ธํŠธ์˜ ๋‘ ๊ฐ€์ง€ ๋ฒ”์ฃผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•˜๋Š” XLM[[xlm-with-language-embeddings]] ๋‹ค์Œ XLM ๋ชจ๋ธ์€ ์ถ”๋ก  ์‹œ์— ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: - `xlm-mlm-ende-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-๋…์ผ์–ด) - `xlm-mlm-enfr-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-ํ”„๋ž‘์Šค์–ด) - `xlm-mlm-enro-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์˜์–ด-๋ฃจ๋งˆ๋‹ˆ์•„์–ด) - `xlm-mlm-xnli15-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง, XNLI ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ œ๊ณตํ•˜๋Š” 15๊ฐœ ๊ตญ์–ด) - `xlm-mlm-tlm-xnli15-1024` (๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง + ๋ฒˆ์—ญ, XNLI ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ œ๊ณตํ•˜๋Š” 15๊ฐœ ๊ตญ์–ด) - `xlm-clm-enfr-1024` (Causal language modeling, ์˜์–ด-ํ”„๋ž‘์Šค์–ด) - `xlm-clm-ende-1024` (Causal language modeling, ์˜์–ด-๋…์ผ์–ด) ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์€ ๋ชจ๋ธ์— ์ „๋‹ฌ๋œ `input_ids`์™€ ๋™์ผํ•œ shape์˜ ํ…์„œ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…์„œ์˜ ๊ฐ’์€ ์‚ฌ์šฉ๋œ ์–ธ์–ด์— ๋”ฐ๋ผ ๋‹ค๋ฅด๋ฉฐ ํ† ํฌ๋‚˜์ด์ €์˜ `lang2id` ๋ฐ `id2lang` ์†์„ฑ์— ์˜ํ•ด ์‹๋ณ„๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ์ œ์—์„œ๋Š” `xlm-clm-enfr-1024` ์ฒดํฌํฌ์ธํŠธ(์ฝ”์ž˜ ์–ธ์–ด ๋ชจ๋ธ๋ง(causal language modeling), ์˜์–ด-ํ”„๋ž‘์Šค์–ด)๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") ``` ํ† ํฌ๋‚˜์ด์ €์˜ `lang2id` ์†์„ฑ์€ ๋ชจ๋ธ์˜ ์–ธ์–ด์™€ ํ•ด๋‹น ID๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` ๋‹ค์Œ์œผ๋กœ, ์˜ˆ์ œ ์ž…๋ ฅ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” 1์ž…๋‹ˆ๋‹ค ``` ์–ธ์–ด ID๋ฅผ `"en"`์œผ๋กœ ์„ค์ •ํ•ด ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ์ž„๋ฒ ๋”ฉ์€ ์˜์–ด์˜ ์–ธ์–ด ID์ธ `0`์œผ๋กœ ์ฑ„์›Œ์ง„ ํ…์„œ์ž…๋‹ˆ๋‹ค. ์ด ํ…์„œ๋Š” `input_ids`์™€ ๊ฐ™์€ ํฌ๊ธฐ์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
```py
>>> language_id = tokenizer.lang2id["en"]  # 0
>>> langs = torch.tensor([language_id] * input_ids.shape[1])  # torch.tensor([0, 0, 0, ..., 0])

>>> # (batch_size, sequence_length) shape의 텐서가 되도록 만듭니다.
>>> langs = langs.view(1, -1)  # 이제 [1, sequence_length] shape이 되었습니다(배치 크기는 1입니다)
```

이제 `input_ids`와 언어 임베딩을 모델로 전달합니다:

```py
>>> outputs = model(input_ids, langs=langs)
```

[run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) 스크립트로 `xlm-clm` 체크포인트를 사용해 텍스트와 언어 임베딩을 생성할 수 있습니다.

### 언어 임베딩을 사용하지 않는 XLM[[xlm-without-language-embeddings]]

다음 XLM 모델은 추론 시에 언어 임베딩이 필요하지 않습니다:

- `xlm-mlm-17-1280` (마스킹된 언어 모델링, 17개 국어)
- `xlm-mlm-100-1280` (마스킹된 언어 모델링, 100개 국어)

이전의 XLM 체크포인트와 달리 이 모델은 일반 문장 표현에 사용됩니다.

## BERT[[bert]]

다음 BERT 모델은 다국어 태스크에 사용할 수 있습니다:

- `bert-base-multilingual-uncased` (마스킹된 언어 모델링 + 다음 문장 예측, 102개 국어)
- `bert-base-multilingual-cased` (마스킹된 언어 모델링 + 다음 문장 예측, 104개 국어)

이러한 모델은 추론 시에 언어 임베딩이 필요하지 않습니다. 문맥에서 언어를 식별하고, 식별된 언어로 추론합니다.

## XLM-RoBERTa[[xlmroberta]]

다음 XLM-RoBERTa 모델 또한 다국어 태스크에 사용할 수 있습니다:

- `xlm-roberta-base` (마스킹된 언어 모델링, 100개 국어)
- `xlm-roberta-large` (마스킹된 언어 모델링, 100개 국어)

XLM-RoBERTa는 100개 국어에 대해 새로 생성되고 정제된 2.5TB 규모의 CommonCrawl 데이터로 학습되었습니다. 이전에 공개된 mBERT나 XLM과 같은 다국어 모델에 비해 분류, 시퀀스 라벨링, 질의 응답과 같은 다운스트림(downstream) 작업에서 이점이 있습니다.

## M2M100[[m2m100]]

다음 M2M100 모델 또한 다국어 태스크에 사용할 수 있습니다:

- `facebook/m2m100_418M` (번역)
- `facebook/m2m100_1.2B` (번역)

이 예제에서는 `facebook/m2m100_418M` 체크포인트를 가져와서 중국어를 영어로 번역합니다. 토크나이저에서 번역할 원본 언어(source language)를 설정할 수 있습니다:

```py
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒."

>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
```

문장을 토큰화합니다:

```py
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
```

M2M100은 번역을 진행하기 위해 첫 번째로 생성되는 토큰을 번역할 언어(target language) ID로 강제 지정합니다.
์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `generate` ๋ฉ”์†Œ๋“œ์—์„œ `forced_bos_token_id`๋ฅผ `en`์œผ๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' ``` ## MBart[[mbart]] ๋‹ค์Œ MBart ๋ชจ๋ธ ๋˜ํ•œ ๋‹ค๊ตญ์–ด ํƒœ์Šคํฌ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `facebook/mbart-large-50-one-to-many-mmt` (์ผ๋Œ€๋‹ค ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50-many-to-many-mmt` (๋‹ค๋Œ€๋‹ค ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50-many-to-one-mmt` (๋‹ค๋Œ€์ผ ๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-50` (๋‹ค๊ตญ์–ด ๋ฒˆ์—ญ, 50๊ฐœ ๊ตญ์–ด) - `facebook/mbart-large-cc25` ์ด ์˜ˆ์ œ์—์„œ๋Š” ํ•€๋ž€๋“œ์–ด๋ฅผ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `facebook/mbart-large-50-many-to-many-mmt` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €์—์„œ ์›๋ณธ ์–ธ์–ด(source language)๋ฅผ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` ํ•€๋ž€๋“œ์–ด ๋ฌธ์žฅ์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> encoded_fi = tokenizer(fi_text, return_tensors="pt") ``` MBart๋Š” ๋ฒˆ์—ญ์„ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์ฒซ ๋ฒˆ์งธ๋กœ ์ƒ์„ฑ๋˜๋Š” ํ† ํฐ์„ ๋ฒˆ์—ญํ•  ์–ธ์–ด(target language) ID๋กœ ๊ฐ•์ œ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด `generate` ๋ฉ”์†Œ๋“œ์—์„œ `forced_bos_token_id`๋ฅผ `en`์œผ๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` `facebook/mbart-large-50-many-to-one-mmt` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, ์ฒซ ๋ฒˆ์งธ๋กœ ์ƒ์„ฑ๋˜๋Š” ํ† ํฐ์„ ๋ฒˆ์—ญํ•  ์–ธ์–ด(target language) ID๋กœ ๊ฐ•์ œ ์ง€์ •ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค.
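์˜ˆ๋ฅผ ๋“ค์–ด ์œ„์—์„œ ์ •์˜ํ•œ `fi_text`๋ฅผ `facebook/mbart-large-50-many-to-one-mmt` ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ฒˆ์—ญํ•œ๋‹ค๋ฉด ๋Œ€๋žต ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•ํƒœ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์œ„ ์˜ˆ์ œ๋ฅผ ์‘์šฉํ•œ ์ฐธ๊ณ ์šฉ ์Šค์ผ€์น˜์ด๋ฉฐ, ์‹ค์ œ ๋ฒˆ์—ญ ๊ฒฐ๊ณผ ๋ฌธ์žฅ์€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> # many-to-one ์ฒดํฌํฌ์ธํŠธ๋Š” ํ•ญ์ƒ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๋ฏ€๋กœ forced_bos_token_id๋ฅผ ์ง€์ •ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค
>>> generated_tokens = model.generate(**encoded_fi)
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```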
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์—ด์‹ฌํžˆ ๋ฒˆ์—ญ ์ค‘์ž…๋‹ˆ๋‹ค. ์กฐ๊ธˆ ์ด๋”ฐ ๋งŒ๋‚˜์š”!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/tasks_explained.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•[[how-transformers-solve-tasks]] [๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—…](task_summary)์—์„œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP), ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค, ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—… ๋“ฑ์˜ ์ค‘์š”ํ•œ ์‘์šฉ์„ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€์—์„œ๋Š” ๋ชจ๋ธ์ด ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐํ•˜๋Š”์ง€ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ฃผ์–ด์ง„ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋งŽ์€ ๋ฐฉ๋ฒ•์ด ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ๊ธฐ์ˆ ์„ ๊ตฌํ˜„ํ•˜๊ฑฐ๋‚˜ ์‹ฌ์ง€์–ด ์ƒˆ๋กœ์šด ๋ฐฉ์‹์œผ๋กœ ์ž‘์—…์— ์ ‘๊ทผํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, Transformer ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ผ๋ฐ˜์ ์ธ ์•„์ด๋””์–ด๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•œ ์•„ํ‚คํ…์ฒ˜ ๋•๋ถ„์— ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์€ ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ์˜ ๋ณ€ํ˜•์ž…๋‹ˆ๋‹ค. Transformer ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์šฐ๋ฆฌ์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์˜ค๋Š˜๋‚  ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์‚ฌ์šฉ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง(CNNs)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์šฐ๋ฆฌ๋Š” ํ˜„๋Œ€ CNN์˜ ์ž‘๋™ ๋ฐฉ์‹์— ๋Œ€ํ•ด ์„ค๋ช…ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ž‘์—…์ด ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐ๋˜๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด, ์œ ์šฉํ•œ ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•˜๊ณ ์ž ๋ชจ๋ธ ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. - ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ [Wav2Vec2](model_doc/wav2vec2) - ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ [Vision Transformer (ViT)](model_doc/vit) ๋ฐ [ConvNeXT](model_doc/convnext) - ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ [DETR](model_doc/detr) - ์ด๋ฏธ์ง€ ๋ถ„ํ• ์„ ์œ„ํ•œ [Mask2Former](model_doc/mask2former) - ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ [GLPN](model_doc/glpn) - ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ํ† ํฐ ๋ถ„๋ฅ˜ ๋ฐ ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BERT](model_doc/bert) - ๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [GPT2](model_doc/gpt2) - ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ๋ฐ ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BART](model_doc/bart) <Tip> ๋” ๋‚˜์•„๊ฐ€๊ธฐ ์ „์—, ๊ธฐ์กด Transformer ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ธฐ๋ณธ์ ์ธ ์ง€์‹์„ ์ˆ™์ง€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋ฐ ์–ดํ…์…˜์˜ ์ž‘๋™ ๋ฐฉ์‹์„ ์•Œ๋ฉด ๋‹ค์–‘ํ•œ Transformer ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ ๋‹จ๊ณ„๊ฑฐ๋‚˜ ๋ณต์Šต์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ์œ„ํ•ด [์ฝ”์Šค](https://huggingface.co/course/chapter1/4?fw=pt)๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ## ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค[[speech-and-audio]] [Wav2Vec2](model_doc/wav2vec2)๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋˜์ง€ ์•Š์€ ์Œ์„ฑ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 
<div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/> </div> ์ด ๋ชจ๋ธ์—๋Š” 4๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. *ํŠน์ง• ์ธ์ฝ”๋”(feature encoder)*๋Š” ์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•(raw audio waveform)์„ ๊ฐ€์ ธ์™€์„œ ์ œ๋กœ ํ‰๊ท  ๋ฐ ๋‹จ์œ„ ๋ถ„์‚ฐ์œผ๋กœ ํ‘œ์ค€ํ™”ํ•˜๊ณ , ๊ฐ๊ฐ 20ms ๊ธธ์ด์˜ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒํ˜•์€ ๋ณธ์งˆ์ ์œผ๋กœ ์—ฐ์†์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๋‹จ์–ด๋กœ ๋‚˜๋ˆ„๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ๋ถ„ํ• ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ *์–‘์žํ™” ๋ชจ๋“ˆ(quantization module)*๋กœ ์ „๋‹ฌ๋˜๋Š” ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์ด์‚ฐํ˜• ์Œ์„ฑ ๋‹จ์œ„๋ฅผ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์Œ์„ฑ ๋‹จ์œ„๋Š” *์ฝ”๋“œ๋ถ(codebook)*(์–ดํœ˜์ง‘์ด๋ผ๊ณ  ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค)์ด๋ผ๋Š” ์ฝ”๋“œ๋‹จ์–ด(codewords) ์ฝœ๋ ‰์…˜์—์„œ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ถ์—์„œ ์—ฐ์†์ ์ธ ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ๊ฐ€์žฅ ์ž˜ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฒกํ„ฐ ๋˜๋Š” ์Œ์„ฑ ๋‹จ์œ„๊ฐ€ ์„ ํƒ๋˜์–ด ๋ชจ๋ธ์„ ํ†ต๊ณผํ•ฉ๋‹ˆ๋‹ค. 3. ํŠน์ง• ๋ฒกํ„ฐ์˜ ์ ˆ๋ฐ˜์€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํฌ๊ฐ€ ์ ์šฉ๋˜๋ฉฐ, ๋งˆ์Šคํฌ๋œ ํŠน์ง• ๋ฒกํ„ฐ๋Š” *์ƒ๋Œ€์  ์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์„ ์ถ”๊ฐ€ํ•˜๋Š” Transformer ์ธ์ฝ”๋”์ธ *๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ(context network)*๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 4. ๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” *๋Œ€์กฐ์  ์ž‘์—…(contrastive task)*์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ์ž˜๋ชป๋œ ์˜ˆ์ธก ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํฌ๋œ ์˜ˆ์ธก์˜ ์‹ค์ œ ์–‘์žํ™”๋œ ์Œ์„ฑ ํ‘œํ˜„์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์œ ์‚ฌํ•œ ์ปจํ…์ŠคํŠธ ๋ฒกํ„ฐ์™€ ์–‘์žํ™”๋œ ์Œ์„ฑ ๋‹จ์œ„(ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”)๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ wav2vec2๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋˜๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์— ๋งž์ถฐ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ### ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio-classification]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ(hidden states)๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ๊ฐ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์˜ค๋””์˜ค ํ”„๋ ˆ์ž„์—์„œ ํ•™์Šต๋œ ํŠน์ง•์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ณ ์ • ๊ธธ์ด์˜ ๋ฒกํ„ฐ ํ•˜๋‚˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด, ์€๋‹‰ ์ƒํƒœ๋Š” ๋จผ์ € ํ’€๋ง๋˜๊ณ , ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์ด ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/audio_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, [์—ฐ๊ฒฐ์ฃผ์˜์  ์‹œ๊ฐ„ ๋ถ„๋ฅ˜(CTC, Connectionist Temporal Classification)](glossary#connectionist-temporal-classification-ctc)๋ฅผ ์œ„ํ•ด ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›์•„์„œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋กœ์ง“์€ ํ† ํฐ ํด๋ž˜์Šค(ํ† ํฐ ์ˆ˜๋Š” ์ž‘์—…์˜ ์–ดํœ˜์—์„œ ๋‚˜ํƒ€๋‚ฉ๋‹ˆ๋‹ค)๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. CTC ์†์‹ค์€ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉ๋œ ํ† ํฐ์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐ ์‹œํ€€์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? 
์™„์ „ํ•œ [์ž๋™ ์Œ์„ฑ ์ธ์‹ ๊ฐ€์ด๋“œ](tasks/asr)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์ ‘๊ทผํ•˜๋Š” 2๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„๋ฆฌํ•˜๊ณ  Transformer๋กœ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 2. [ConvNeXT](model_doc/convnext)์™€ ๊ฐ™์€ ํ˜„๋Œ€ CNN์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์„ธ ๋ฒˆ์งธ ๋ฐฉ๋ฒ•์€ Transformer์™€ ํ•ฉ์„ฑ๊ณฑ(์˜ˆ๋ฅผ ๋“ค์–ด, [Convolutional Vision Transformer](model_doc/cvt) ๋˜๋Š” [LeViT](model_doc/levit))์„ ๊ฒฐํ•ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์‚ดํŽด๋ณผ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•๋งŒ ๊ฒฐํ•ฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์—ฌ๊ธฐ์„œ ์ด ๋ฐฉ๋ฒ•์„ ๋‹ค๋ฃจ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> ViT์™€ ConvNeXT๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€๋งŒ, ๋ฌผ์ฒด ๊ฐ์ง€, ๋ถ„ํ• , ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ ๋น„์ „ ์ž‘์—…์—๋Š” ๊ฐ๊ฐ DETR, Mask2Former, GLPN์ด ๋” ์ ํ•ฉํ•˜๋ฏ€๋กœ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]] ViT์™€ ConvNeXT ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์ง€๋งŒ, ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„, ConvNeXT๋Š” ํ•ฉ์„ฑ๊ณฑ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ฃผ๋œ ์ฐจ์ด์ž…๋‹ˆ๋‹ค. #### Transformer[[transformer]] [ViT](model_doc/vit)์€ ํ•ฉ์„ฑ๊ณฑ์„ ์ „์ ์œผ๋กœ ์ˆœ์ˆ˜ Transformer ์•„ํ‚คํ…์ฒ˜๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด Transformer์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด, ViT๋ฅผ ์ดํ•ดํ•˜๋Š” ๋ฐฉ๋ฒ•์˜ ๋Œ€๋ถ€๋ถ„์„ ์ด๋ฏธ ํŒŒ์•…ํ–ˆ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/> </div> ViT๊ฐ€ ๋„์ž…ํ•œ ์ฃผ์š” ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ ์ด๋ฏธ์ง€๊ฐ€ Transformer๋กœ ์–ด๋–ป๊ฒŒ ์ „๋‹ฌ๋˜๋Š”์ง€์— ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€๋Š” ์„œ๋กœ ์ค‘์ฒฉ๋˜์ง€ ์•Š๋Š” ์ •์‚ฌ๊ฐํ˜• ํŒจ์น˜๋กœ ๋ถ„ํ• ๋˜๊ณ , ๊ฐ ํŒจ์น˜๋Š” ๋ฒกํ„ฐ ๋˜๋Š” *ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ(patch embedding)*์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์€ ์ ์ ˆํ•œ ์ž…๋ ฅ ์ฐจ์›์„ ๋งŒ๋“œ๋Š” 2D ํ•ฉ์„ฑ๊ณฑ ๊ณ„์ธต์—์„œ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ Transformer์˜ ๊ฒฝ์šฐ ๊ฐ ํŒจ์น˜์˜ ์ž„๋ฒ ๋”ฉ๋งˆ๋‹ค 768๊ฐœ์˜ ๊ฐ’์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค). 224x224 ํ”ฝ์…€ ์ด๋ฏธ์ง€๊ฐ€ ์žˆ๋‹ค๋ฉด, 16x16 ์ด๋ฏธ์ง€ ํŒจ์น˜ 196๊ฐœ๋กœ ๋ถ„ํ• ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๊ฐ€ ๋‹จ์–ด๋กœ ํ† ํฐํ™”๋˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ, ์ด๋ฏธ์ง€๋„ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ "ํ† ํฐํ™”"๋ฉ๋‹ˆ๋‹ค. 2. *ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ(learnable embedding)*(ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ)์ด BERT์™€ ๊ฐ™์ด ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰ ์ƒํƒœ๋Š” ๋ถ€์ฐฉ๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๊ณ , ๋‹ค๋ฅธ ์ถœ๋ ฅ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. ์ด ํ† ํฐ์€ ๋ชจ๋ธ์ด ์ด๋ฏธ์ง€์˜ ํ‘œํ˜„์„ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. 3. ํŒจ์น˜์™€ ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ์— ๋งˆ์ง€๋ง‰์œผ๋กœ ์ถ”๊ฐ€ํ•  ๊ฒƒ์€ *์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ํŒจ์น˜์˜ ์ˆœ์„œ๋ฅผ ๋ชจ๋ฅด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๋„ ํ•™์Šต ๊ฐ€๋Šฅํ•˜๋ฉฐ, ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๋™์ผํ•œ ํฌ๊ธฐ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ตœ์ข…์ ์œผ๋กœ, ๋ชจ๋“  ์ž„๋ฒ ๋”ฉ์ด Transformer ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 4. `[CLS]` ํ† ํฐ์„ ํฌํ•จํ•œ ์ถœ๋ ฅ์€ ๋‹ค์ธต ํผ์…‰ํŠธ๋ก  ํ—ค๋“œ(MLP)์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ViT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” ๋‹จ์ˆœํžˆ ๋ถ„๋ฅ˜์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ๊ฐ™์ด, MLP ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/image_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ ViT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! #### CNN[[cnn]] <Tip> ์ด ์„น์…˜์—์„œ๋Š” ํ•ฉ์„ฑ๊ณฑ์— ๋Œ€ํ•ด ๊ฐ„๋žตํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€์˜ ๋ชจ์–‘๊ณผ ํฌ๊ธฐ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณ€ํ™”ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์‚ฌ์ „ ์ดํ•ด๊ฐ€ ์žˆ๋‹ค๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, fastai book์˜ [ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ์ฑ•ํ„ฐ](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb)๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> [ConvNeXT](model_doc/convnext)๋Š” ์„ฑ๋Šฅ์„ ๋†’์ด๊ธฐ ์œ„ํ•ด ์ƒˆ๋กœ์šด ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•œ CNN ๊ตฌ์กฐ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ฉ์„ฑ๊ณฑ์€ ์—ฌ์ „ํžˆ ๋ชจ๋ธ์˜ ํ•ต์‹ฌ์ž…๋‹ˆ๋‹ค. ๋†’์€ ์ˆ˜์ค€์˜ ๊ด€์ ์—์„œ ๋ณผ ๋•Œ, [ํ•ฉ์„ฑ๊ณฑ](glossary#convolution)์€ ์ž‘์€ ํ–‰๋ ฌ(*์ปค๋„*)์— ์ด๋ฏธ์ง€ ํ”ฝ์…€์˜ ์ž‘์€ ์œˆ๋„์šฐ๋ฅผ ๊ณฑํ•˜๋Š” ์—ฐ์‚ฐ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ํŠน์ • ํ…์Šค์ณ(texture)์ด๋‚˜ ์„ ์˜ ๊ณก๋ฅ ๊ณผ ๊ฐ™์€ ์ผ๋ถ€ ํŠน์ง•์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๊ณ  ๋‹ค์Œ ํ”ฝ์…€ ์œˆ๋„์šฐ๋กœ ๋„˜์–ด๊ฐ€๋Š”๋ฐ, ์—ฌ๊ธฐ์„œ ํ•ฉ์„ฑ๊ณฑ์ด ์ด๋™ํ•˜๋Š” ๊ฑฐ๋ฆฌ๋ฅผ *๋ณดํญ(stride)*์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/> </div> <small>ํŒจ๋”ฉ์ด๋‚˜ ๋ณดํญ์ด ์—†๋Š” ๊ธฐ๋ณธ ํ•ฉ์„ฑ๊ณฑ, <a href="https://arxiv.org/abs/1603.07285">๋”ฅ๋Ÿฌ๋‹์„ ์œ„ํ•œ ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ ๊ฐ€์ด๋“œ</a></small> ์ด ์ถœ๋ ฅ์„ ๋‹ค๋ฅธ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๊ฐ ์—ฐ์†์ ์ธ ๋ ˆ์ด์–ด๋ฅผ ํ†ตํ•ด ๋„คํŠธ์›Œํฌ๋Š” ํ•ซ๋„๊ทธ๋‚˜ ๋กœ์ผ“๊ณผ ๊ฐ™์ด ๋” ๋ณต์žกํ•˜๊ณ  ์ถ”์ƒ์ ์ธ ๊ฒƒ์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด ์‚ฌ์ด์— ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ  ํŠน์ง•์˜ ์œ„์น˜ ๋ณ€ํ™”์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/> </div> ConvNeXT๋Š” CNN์„ 5๊ฐ€์ง€ ๋ฐฉ์‹์œผ๋กœ ํ˜„๋Œ€ํ™”ํ•ฉ๋‹ˆ๋‹ค: 1. ๊ฐ ๋‹จ๊ณ„์˜ ๋ธ”๋ก ์ˆ˜๋ฅผ ๋ณ€๊ฒฝํ•˜๊ณ  ๋” ํฐ ๋ณดํญ๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์ปค๋„ ํฌ๊ธฐ๋กœ ์ด๋ฏธ์ง€๋ฅผ "ํŒจ์น˜ํ™”(patchify)"ํ•ฉ๋‹ˆ๋‹ค. ๊ฒน์น˜์ง€ ์•Š๋Š” ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ๋Š” ViT๊ฐ€ ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜๋กœ ๋ถ„ํ• ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ ์ด ํŒจ์น˜ํ™” ์ „๋žต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 2. *๋ณ‘๋ชฉ(bottleneck)* ๋ ˆ์ด์–ด๋Š” ์ฑ„๋„ ์ˆ˜๋ฅผ ์ค„์˜€๋‹ค๊ฐ€ ๋‹ค์‹œ ๋ณต์›ํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ด ๋” ๋น ๋ฅด๊ณ , ๊นŠ์ด๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์—ญ ๋ณ‘๋ชฉ(inverted bottlenect)์€ ์ฑ„๋„ ์ˆ˜๋ฅผ ํ™•์žฅํ•˜๊ณ  ์ถ•์†Œํ•จ์œผ๋กœ์จ ๊ทธ ๋ฐ˜๋Œ€๋กœ ์ˆ˜ํ–‰ํ•˜๋ฏ€๋กœ, ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค. 3. ๋ณ‘๋ชฉ ๋ ˆ์ด์–ด์˜ ์ผ๋ฐ˜์ ์ธ 3x3 ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ ์ž…๋ ฅ ์ฑ„๋„์— ๊ฐœ๋ณ„์ ์œผ๋กœ ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•œ ๋‹ค์Œ ๋งˆ์ง€๋ง‰์— ์Œ“๋Š” *๊นŠ์ด๋ณ„ ํ•ฉ์„ฑ๊ณฑ(depthwise convolution)*์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋„คํŠธ์›Œํฌ ํญ์ด ๋„“ํ˜€ ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. 4. ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜ ๋•๋ถ„์— ํ•œ ๋ฒˆ์— ๋” ๋งŽ์€ ์ด๋ฏธ์ง€๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์ „์—ญ ์ˆ˜์‹  ํ•„๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ConvNeXT๋Š” ์ปค๋„ ํฌ๊ธฐ๋ฅผ 7x7๋กœ ๋Š˜๋ ค ์ด ํšจ๊ณผ๋ฅผ ์žฌํ˜„ํ•˜๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค. 5. ๋˜ํ•œ ConvNeXT๋Š” Transformer ๋ชจ๋ธ์„ ๋ชจ๋ฐฉํ•˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋ ˆ์ด์–ด ์„ค๊ณ„๋ฅผ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ™œ์„ฑํ™” ๋ฐ ์ •๊ทœํ™” ๋ ˆ์ด์–ด๊ฐ€ ๋” ์ ๊ณ , ํ™œ์„ฑํ™” ํ•จ์ˆ˜๊ฐ€ ReLU ๋Œ€์‹  GELU๋กœ ์ „ํ™˜๋˜๊ณ , BatchNorm ๋Œ€์‹  LayerNorm์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
ํ•ฉ์„ฑ๊ณฑ ๋ธ”๋ก์˜ ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ### ๊ฐ์ฒด ํƒ์ง€[[object-detection]] [DETR](model_doc/detr), *DEtection TRansformer*๋Š” CNN๊ณผ Transformer ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ์ข…๋‹จ๊ฐ„(end-to-end) ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/> </div> 1. ์‚ฌ์ „ํ›ˆ๋ จ๋œ CNN *๋ฐฑ๋ณธ(backbone)*์€ ํ”ฝ์…€ ๊ฐ’์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ์ €ํ•ด์ƒ๋„ ํŠน์ง• ๋งต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ํŠน์ง• ๋งต์— ๋Œ€ํ•ด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ , ๊ณ ์ˆ˜์ค€ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ๊ฐ€์ง„ ์ƒˆ๋กœ์šด ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. Transformer๋Š” ์‹œํ€€์Šค ๋ชจ๋ธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํŠน์ง• ๋งต์„ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๊ฒฐํ•ฉ๋œ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ํ‰ํƒ„ํ™”ํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๋Š” ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๋””์ฝ”๋”์—์„œ *๊ฐ์ฒด ์ฟผ๋ฆฌ*์™€ ๊ฒฐํ•ฉ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ์ฟผ๋ฆฌ๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ์˜์—ญ์— ์ดˆ์ ์„ ๋งž์ถ˜ ํ•™์Šต๋œ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ํ•™์Šต๋˜๊ณ , ๊ฐ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์ง„ํ–‰ํ•˜๋ฉด์„œ ๊ฐฑ์‹ ๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ์— ๋Œ€ํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ์ขŒํ‘œ์™€ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ์— ์ „๋‹ฌ๋˜๋ฉฐ, ๊ฐ์ฒด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `no object`๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. DETR์€ ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ๋ณ‘๋ ฌ๋กœ ๋””์ฝ”๋”ฉํ•˜์—ฌ *N* ๊ฐœ์˜ ์ตœ์ข… ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ *N*์€ ์ฟผ๋ฆฌ ์ˆ˜์ž…๋‹ˆ๋‹ค. ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์š”์†Œ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ผ๋ฐ˜์ ์ธ ์ž๊ธฐํšŒ๊ท€ ๋ชจ๋ธ๊ณผ ๋‹ฌ๋ฆฌ, ๊ฐ์ฒด ํƒ์ง€๋Š” ํ•œ ๋ฒˆ์— *N* ๊ฐœ์˜ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ง‘ํ•ฉ ์˜ˆ์ธก ์ž‘์—…(`๋ฐ”์šด๋”ฉ ๋ฐ•์Šค`, `ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`)์ž…๋‹ˆ๋‹ค. 3. DETR์€ ํ›ˆ๋ จ ์ค‘ *์ด๋ถ„ ๋งค์นญ ์†์‹ค(bipartite matching loss)*์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ณ ์ •๋œ ์ˆ˜์˜ ์˜ˆ์ธก๊ณผ ๊ณ ์ •๋œ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”(ground truth labels) ์„ธํŠธ๋ฅผ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค. *N*๊ฐœ์˜ ๋ ˆ์ด๋ธ” ์„ธํŠธ์— ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”๋ณด๋‹ค ์ ์€ ๊ฒฝ์šฐ, `no object` ํด๋ž˜์Šค๋กœ ํŒจ๋”ฉ๋ฉ๋‹ˆ๋‹ค. ์ด ์†์‹ค ํ•จ์ˆ˜๋Š” DETR์ด ์˜ˆ์ธก๊ณผ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ” ๊ฐ„ 1:1 ๋Œ€์‘์„ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์ค‘ ํ•˜๋‚˜๋ผ๋„ ์ž˜๋ชป๋œ ๊ฒฝ์šฐ, ์†์‹ค์ด ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์กด์žฌํ•˜์ง€ ์•Š๋Š” ๊ฐ์ฒด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒฝ์šฐ, ํŒจ๋„ํ‹ฐ๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด DETR์€ ์ด๋ฏธ์ง€์—์„œ ๋ˆˆ์— ์ž˜ ๋„๋Š” ๋ฌผ์ฒด ํ•˜๋‚˜์— ์ง‘์ค‘ํ•˜๋Š” ๋Œ€์‹ , ๋‹ค๋ฅธ ๊ฐ์ฒด๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ๊ฐ€ DETR ์ƒ๋‹จ์— ์ถ”๊ฐ€๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”๊ณผ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ์—๋Š” ๋‘ ๊ฐ€์ง€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์˜ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด ๋ฐ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์˜ˆ์ธกํ•˜๋Š” MLP ๊ฐ์ฒด ํƒ์ง€์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๊ฐ์ฒด ํƒ์ง€ ๊ฐ€์ด๋“œ](tasks/object_detection)๋ฅผ ํ™•์ธํ•˜์—ฌ DETR์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ด๋ฏธ์ง€ ๋ถ„ํ• [[image-segmentation]] [Mask2Former](model_doc/mask2former)๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ์ด๋ฏธ์ง€ ๋ถ„ํ•  ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฒ”์šฉ ์•„ํ‚คํ…์ฒ˜์ž…๋‹ˆ๋‹ค. 
์ „ํ†ต์ ์ธ ๋ถ„ํ•  ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹œ๋ฉ˜ํ‹ฑ(semantic) ๋˜๋Š” ํŒŒ๋†‰ํ‹ฑ(panoptic) ๋ถ„ํ• ๊ณผ ๊ฐ™์€ ์ด๋ฏธ์ง€ ๋ถ„ํ• ์˜ ํŠน์ • ํ•˜์œ„ ์ž‘์—…์— ๋งž์ถฐ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. Mask2Former๋Š” ๋ชจ๋“  ์ž‘์—…์„ *๋งˆ์Šคํฌ ๋ถ„๋ฅ˜* ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ ๋ถ„๋ฅ˜๋Š” ํ”ฝ์…€์„ *N*๊ฐœ ์„ธ๊ทธ๋จผํŠธ๋กœ ๊ทธ๋ฃนํ™”ํ•˜๊ณ , ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•ด *N*๊ฐœ์˜ ๋งˆ์Šคํฌ์™€ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ Mask2Former์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> Mask2Former์—๋Š” 3๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. [Swin](model_doc/swin) ๋ฐฑ๋ณธ์ด ์ด๋ฏธ์ง€๋ฅผ ๋ฐ›์•„ 3๊ฐœ์˜ ์—ฐ์†๋œ 3x3 ํ•ฉ์„ฑ๊ณฑ์—์„œ ์ €ํ•ด์ƒ๋„ ์ด๋ฏธ์ง€ ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ง• ๋งต์€ *ํ”ฝ์…€ ๋””์ฝ”๋”*์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋””์ฝ”๋”๋Š” ์ €ํ•ด์ƒ๋„ ํŠน์ง•์„ ๊ณ ํ•ด์ƒ๋„ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ์ ์ง„์ ์œผ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ํ”ฝ์…€ ๋””์ฝ”๋”๋Š” ์‹ค์ œ๋กœ ์›๋ณธ ์ด๋ฏธ์ง€์˜ 1/32, 1/16, 1/8 ํ•ด์ƒ๋„์˜ ๋‹ค์ค‘ ์Šค์ผ€์ผ ํŠน์ง•(์ €ํ•ด์ƒ๋„ ๋ฐ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง• ๋ชจ๋‘ ํฌํ•จ)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 3. ์ด๋Ÿฌํ•œ ์„œ๋กœ ๋‹ค๋ฅธ ํฌ๊ธฐ์˜ ํŠน์ง• ๋งต์€ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง•์—์„œ ์ž‘์€ ๊ฐ์ฒด๋ฅผ ํฌ์ฐฉํ•˜๊ธฐ ์œ„ํ•ด ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ Transformer ๋””์ฝ”๋” ๋ ˆ์ด์–ด์— ์—ฐ์†์ ์œผ๋กœ ๊ณต๊ธ‰๋ฉ๋‹ˆ๋‹ค. Mask2Former์˜ ํ•ต์‹ฌ์€ ๋””์ฝ”๋”์˜ *๋งˆ์Šคํฌ ์–ดํ…์…˜* ๋ฉ”์ปค๋‹ˆ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ „์ฒด ์ด๋ฏธ์ง€๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ๋Š” ํฌ๋กœ์Šค ์–ดํ…์…˜(cross-attention)๊ณผ ๋‹ฌ๋ฆฌ, ๋งˆ์Šคํฌ ์–ดํ…์…˜์€ ์ด๋ฏธ์ง€์˜ ํŠน์ • ์˜์—ญ์—๋งŒ ์ง‘์ค‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€์˜ ์ง€์—ญ์  ํŠน์ง•๋งŒ์œผ๋กœ ๋ชจ๋ธ์ด ์ถฉ๋ถ„ํžˆ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. 4. [DETR](tasks_explained#object-detection)๊ณผ ๊ฐ™์ด, Mask2Former๋Š” ํ•™์Šต๋œ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์ด๋ฅผ ํ”ฝ์…€ ๋””์ฝ”๋”์—์„œ์˜ ์ด๋ฏธ์ง€ ํŠน์ง•๊ณผ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ธก ์ง‘ํ•ฉ(`ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`, `๋งˆ์Šคํฌ ์˜ˆ์ธก`)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด๋กœ ์ „๋‹ฌ๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋กœ์ง“๊ณผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ฒƒ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ ์˜ˆ์ธก์€ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ๊ณผ ์ตœ์ข… ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์‹œ๊ทธ๋ชจ์ด๋“œ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ๋ฐ Dice ์†์‹ค์€ ๋กœ์ง“๊ณผ ์‹ค์ œ ์ •๋‹ต ๋งˆ์Šคํฌ(ground truth mask) ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋งˆ์Šคํฌ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„ํ• ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„ํ•  ๊ฐ€์ด๋“œ](tasks/semantic_segmentation)๋ฅผ ํ™•์ธํ•˜์—ฌ SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ๊นŠ์ด ์ถ”์ •[[depth-estimation]] [GLPN](model_doc/glpn), *Global-Local Path Network*๋Š” [SegFormer](model_doc/segformer) ์ธ์ฝ”๋”์™€ ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ Transformer์ž…๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/> </div> 1. ViT์™€ ๊ฐ™์ด, ์ด๋ฏธ์ง€๋Š” ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„ํ• ๋˜์ง€๋งŒ, ์ด๋ฏธ์ง€ ํŒจ์น˜๊ฐ€ ๋” ์ž‘๋‹ค๋Š” ์ ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋Š” ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜์ด๋‚˜ ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋ฐ€๋„ ์˜ˆ์ธก ์ž‘์—…์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ํŒจ์น˜๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ๋ณ€ํ™˜๋˜์–ด(ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์ด ์ƒ์„ฑ๋˜๋Š” ๋ฐฉ๋ฒ•์€ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜](#image-classification) ์„น์…˜์„ ์ฐธ์กฐํ•˜์„ธ์š”), ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 2. ์ธ์ฝ”๋”๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์•„, ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ธ”๋ก์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ธ”๋ก์€ ์–ดํ…์…˜ ๋ฐ Mix-FFN ๋ ˆ์ด์–ด๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ›„์ž์˜ ๋ชฉ์ ์€ ์œ„์น˜ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ๋์—๋Š” ๊ณ„์ธต์  ํ‘œํ˜„์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•œ *ํŒจ์น˜ ๋ณ‘ํ•ฉ(patch merging)* ๋ ˆ์ด์–ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ์ธ์ ‘ํ•œ ํŒจ์น˜ ๊ทธ๋ฃน์˜ ํŠน์ง•์€ ์—ฐ๊ฒฐ๋˜๊ณ , ์—ฐ๊ฒฐ๋œ ํŠน์ง•์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์ ์šฉ๋˜์–ด ํŒจ์น˜ ์ˆ˜๋ฅผ 1/4์˜ ํ•ด์ƒ๋„๋กœ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ์ž…๋ ฅ์ด ๋˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ „์ฒด ํ”„๋กœ์„ธ์Šค๋Š” 1/8, 1/16, 1/32 ํ•ด์ƒ๋„์˜ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ๊ฐ€์งˆ ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. 3. ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์—์„œ ๋งˆ์ง€๋ง‰ ํŠน์ง• ๋งต(1/32 ํฌ๊ธฐ)์„ ๊ฐ€์ ธ์™€ 1/16 ํฌ๊ธฐ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ, ํŠน์ง•์€ *์„ ํƒ์  ํŠน์ง• ์œตํ•ฉ(SFF, Selective Feature Fusion)* ๋ชจ๋“ˆ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๊ฐ ํŠน์ง•์— ๋Œ€ํ•ด ์–ดํ…์…˜ ๋งต์—์„œ ๋กœ์ปฌ ๋ฐ ์ „์—ญ ํŠน์ง•์„ ์„ ํƒํ•˜๊ณ  ๊ฒฐํ•ฉํ•œ ๋‹ค์Œ, 1/8๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์ด ํ”„๋กœ์„ธ์Šค๋Š” ๋””์ฝ”๋”ฉ๋œ ํŠน์„ฑ์ด ์›๋ณธ ์ด๋ฏธ์ง€์™€ ๋™์ผํ•œ ํฌ๊ธฐ๊ฐ€ ๋  ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ์€ ๋‘ ๊ฐœ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์นœ ๋‹ค์Œ, ์‹œ๊ทธ๋ชจ์ด๋“œ ํ™œ์„ฑํ™”๊ฐ€ ์ ์šฉ๋˜์–ด ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] Transformer๋Š” ์ดˆ๊ธฐ์— ๊ธฐ๊ณ„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ๊ณ , ๊ทธ ์ดํ›„๋กœ๋Š” ์‚ฌ์‹ค์ƒ ๋ชจ๋“  NLP ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋ณธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์–ด๋–ค ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋” ๊ตฌ์กฐ์— ์ ํ•ฉํ•˜๋ฉฐ, ๋‹ค๋ฅธ ์ž‘์—…์€ ๋””์ฝ”๋”์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ ๋‹ค๋ฅธ ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ๋ฅผ ๋ชจ๋‘ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [BERT](model_doc/bert)๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ด๋ฉฐ, ํ…์ŠคํŠธ์˜ ํ’๋ถ€ํ•œ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด ์–‘๋ฐฉํ–ฅ์˜ ๋‹จ์–ด์— ์ฃผ๋ชฉํ•จ์œผ๋กœ์จ ์‹ฌ์ธต ์–‘๋ฐฉํ–ฅ์„ฑ(deep bidirectionality)์„ ํšจ๊ณผ์ ์œผ๋กœ ๊ตฌํ˜„ํ•œ ์ตœ์ดˆ์˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. 1. BERT๋Š” [WordPiece](tokenizer_summary#wordpiece) ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฌธ์žฅ์˜ ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ ๋ฌธ์žฅ๊ณผ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์„ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ˆ˜ํ•œ `[SEP]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…์ŠคํŠธ ์‹œํ€€์Šค์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์—๋Š” ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์ด ์žˆ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ž…๋ ฅ์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. BERT๋Š” ๋˜ํ•œ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์—์„œ ๊ฐ ํ† ํฐ์ด ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ์ธ์ง€ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ์— ์†ํ•˜๋Š”์ง€ ๋‚˜ํƒ€๋‚ด๋Š” ์„ธ๊ทธ๋จผํŠธ ์ž„๋ฒ ๋”ฉ(segment embedding)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 2. BERT๋Š” ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก, ๋‘ ๊ฐ€์ง€ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง์—์„œ๋Š” ์ž…๋ ฅ ํ† ํฐ์˜ ์ผ๋ถ€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚น๋˜๊ณ , ๋ชจ๋ธ์€ ์ด๋ฅผ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ชจ๋“  ๋‹จ์–ด๋ฅผ ๋ณด๊ณ  ๋‹ค์Œ ๋‹จ์–ด๋ฅผ "์˜ˆ์ธก"ํ•  ์ˆ˜ ์žˆ๋Š” ์–‘๋ฐฉํ–ฅ์„ฑ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก๋œ ๋งˆ์Šคํฌ ํ† ํฐ์˜ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋Š” ์–ดํœ˜์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋˜์–ด ๋งˆ์Šคํฌ๋œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์‚ฌ์ „ํ›ˆ๋ จ ๋Œ€์ƒ์€ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๋ฌธ์žฅ B๊ฐ€ ๋ฌธ์žฅ A ๋‹ค์Œ์— ์˜ค๋Š”์ง€ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฌธ์žฅ B๊ฐ€ ๋‹ค์Œ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ์™€ ๋ฌด์ž‘์œ„ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ ๊ฐ๊ฐ 50%์˜ ํ™•๋ฅ ๋กœ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฌธ์žฅ์ธ์ง€ ์•„๋‹Œ์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์€ ๋‘ ๊ฐœ์˜ ํด๋ž˜์Šค(`IsNext` ๋ฐ `NotNext`)์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 3. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์ณ์„œ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/sequence_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)๊ณผ ๊ฐ™์€ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—…์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ํ† ํฐ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ† ํฐ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/token_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์งˆ์˜์‘๋‹ต[[question-answering]] ์งˆ์˜์‘๋‹ต์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์œ„์— ์ŠคํŒฌ(span) ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๊ณ , ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” `์ŠคํŒฌ`์˜ ์‹œ์ž‘๊ณผ ๋ ๋กœ๊ทธ๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ๋ ˆ์ด๋ธ” ์œ„์น˜ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ…์ŠคํŠธ์˜ ์ŠคํŒฌ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์งˆ์˜์‘๋‹ต ๊ฐ€์ด๋“œ](tasks/question_answering)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ๐Ÿ’ก ์‚ฌ์ „ํ›ˆ๋ จ๋œ BERT๋ฅผ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์–ผ๋งˆ๋‚˜ ์‰ฌ์šด์ง€ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์— ํŠน์ • ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ์€๋‹‰ ์ƒํƒœ๋ฅผ ์›ํ•˜๋Š” ์ถœ๋ ฅ์œผ๋กœ ์กฐ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </Tip> ### ํ…์ŠคํŠธ ์ƒ์„ฑ[[text-generation]] [GPT-2](model_doc/gpt2)๋Š” ๋Œ€๋Ÿ‰์˜ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋””์ฝ”๋”ฉ ์ „์šฉ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฃผ์–ด์ง€๋ฉด ์„ค๋“๋ ฅ ์žˆ๋Š” (ํ•ญ์ƒ ์‚ฌ์‹ค์€ ์•„๋‹ˆ์ง€๋งŒ!) ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๋ช…์‹œ์ ์œผ๋กœ ํ›ˆ๋ จ๋˜์ง€ ์•Š์•˜์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ NLP ์ž‘์—…์„ ์™„์ˆ˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2๋Š” ๋‹จ์–ด๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด [๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ(BPE, byte pair encoding)](tokenizer_summary#bytepair-encoding-bpe)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ์‹œํ€€์Šค์—์„œ ๊ฐ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ํ† ํฐ ์ž„๋ฒ ๋”ฉ์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. 
์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ๋””์ฝ”๋” ๋ธ”๋ก์„ ๊ฑฐ์ณ ์ผ๋ถ€ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋””์ฝ”๋” ๋ธ”๋ก ๋‚ด์—์„œ GPT-2๋Š” *๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜(masked self-attention)* ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ์ดํ›„ ํ† ํฐ(future tokens)์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์—†๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์™ผ์ชฝ์— ์žˆ๋Š” ํ† ํฐ์—๋งŒ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜์—์„œ๋Š” ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ดํ›„ ํ† ํฐ์— ๋Œ€ํ•œ ์ ์ˆ˜(score)๋ฅผ `0`์œผ๋กœ ์„ค์ •ํ•˜๊ธฐ ๋•Œ๋ฌธ์— BERT์˜ [`mask`] ํ† ํฐ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. 2. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์‹œํ€€์Šค์˜ ๋‹ค์Œ ํ† ํฐ์œผ๋กœ, ๋กœ์ง“์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•˜๋‚˜์”ฉ ์ด๋™ํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ์ด๋™๋œ ๋กœ์ง“๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋‹ค์Œ ํ† ํฐ์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. GPT-2์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉ์ ์€ ์ „์ ์œผ๋กœ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง](glossary#causal-language-modeling)์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ, ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๊ด€๋ จ๋œ ์ž‘์—…์— ํŠนํžˆ ์šฐ์ˆ˜ํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง ๊ฐ€์ด๋“œ](tasks/language_modeling#causal-language-modeling)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilGPT-2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ์š”์•ฝ[[summarization]] [BART](model_doc/bart) ๋ฐ [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์€ ์š”์•ฝ ์ž‘์—…์˜ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ํŒจํ„ด์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. BART์˜ ์ธ์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋Š” BERT์™€ ๋งค์šฐ ์œ ์‚ฌํ•˜๋ฉฐ ํ…์ŠคํŠธ์˜ ํ† ํฐ ๋ฐ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. BART๋Š” ์ž…๋ ฅ์„ ๋ณ€ํ˜•์‹œํ‚ค๊ณ  ๋””์ฝ”๋”๋กœ ์žฌ๊ตฌ์„ฑํ•˜์—ฌ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ์žˆ๋Š” ๋‹ค๋ฅธ ์ธ์ฝ”๋”์™€๋Š” ๋‹ฌ๋ฆฌ, BART๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ๋ณ€ํ˜•์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ *text infilling* ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ๊ฐ€์žฅ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. Text Infiling์—์„œ๋Š” ์—ฌ๋Ÿฌ ํ…์ŠคํŠธ ์ŠคํŒฌ์„ **๋‹จ์ผ** [`mask`] ํ† ํฐ์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋งˆ์Šคํฌ๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•˜๊ณ , ๋ชจ๋ธ์— ๋ˆ„๋ฝ๋œ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ์˜ˆ์ธกํ•˜๋„๋ก ๊ฐ€๋ฅด์น˜๊ธฐ ๋•Œ๋ฌธ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ๊ณผ ๋งˆ์Šคํฌ๋œ ์ŠคํŒฌ์ด ์ธ์ฝ”๋”๋ฅผ ๊ฑฐ์ณ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•˜์ง€๋งŒ, BERT์™€ ๋‹ฌ๋ฆฌ BART๋Š” ๋งˆ์ง€๋ง‰์— ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋ฅผ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ๋””์ฝ”๋”๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์—์„œ ๋งˆ์Šคํฌ ํ† ํฐ๊ณผ ๋ณ€ํ˜•๋˜์ง€ ์•Š์€ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋””์ฝ”๋”๊ฐ€ ์›๋ณธ ํ…์ŠคํŠธ๋ฅผ ๋ณต์›ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ์ถ”๊ฐ€์ ์ธ ๋ฌธ๋งฅ์„ ์–ป๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํ† ํฐ์ด ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์ด๋™๋œ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์š”์•ฝ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ๋ฒˆ์—ญ[[translation]] ๋ฒˆ์—ญ์€ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ์ž‘์—…์˜ ๋˜ ๋‹ค๋ฅธ ์˜ˆ๋กœ, [BART](model_doc/bart) ๋˜๋Š” [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. BART๋Š” ์›์ฒœ ์–ธ์–ด๋ฅผ ํƒ€๊ฒŸ ์–ธ์–ด๋กœ ๋””์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ์ž…๋ ฅ์— ๋งคํ•‘ํ•˜๊ธฐ ์œ„ํ•ด ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ณ„๋„์˜ ์ธ์ฝ”๋”๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ฒˆ์—ญ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์ƒˆ๋กœ์šด ์ธ์ฝ”๋”์˜ ์ž„๋ฒ ๋”ฉ์€ ์›๋ณธ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ๋Œ€์‹  ์‚ฌ์ „ํ›ˆ๋ จ๋œ ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์›์ฒœ ์ธ์ฝ”๋”๋Š” ๋ชจ๋ธ ์ถœ๋ ฅ์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค๋กœ๋ถ€ํ„ฐ ์›์ฒœ ์ธ์ฝ”๋”, ์œ„์น˜ ์ž„๋ฒ ๋”ฉ, ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์„ ๊ฐฑ์‹ ํ•˜์—ฌ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ๊ณ ์ •๋˜๊ณ , ๋‘ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋“  ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ํ•จ๊ป˜ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. BART๋Š” ์ดํ›„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ์–ธ์–ด๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋‹ค๊ตญ์–ด ๋ฒ„์ „์˜ mBART๋กœ ํ™•์žฅ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๋ฒˆ์—ญ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/preprocessing.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ „์ฒ˜๋ฆฌ[[preprocess]] [[open-in-colab]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ๋งž๋Š” ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ „์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€ ๋˜๋Š” ์˜ค๋””์˜ค์ธ์ง€ ๊ด€๊ณ„์—†์ด ๋ฐ์ดํ„ฐ๋ฅผ ํ…์„œ ๋ฐฐ์น˜๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์กฐ๋ฆฝํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ์ผ๋ จ์˜ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ํ…์ŠคํŠธ๋Š” [Tokenizer](./main_classes/tokenizer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ† ํฐ ์‹œํ€€์Šค๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ํ† ํฐ์˜ ์ˆซ์ž ํ‘œํ˜„์„ ๋งŒ๋“  ํ›„ ํ…์„œ๋กœ ์กฐ๋ฆฝํ•ฉ๋‹ˆ๋‹ค. * ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค๋Š” [Feature extractor](./main_classes/feature_extractor)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒํ˜•์—์„œ ์‹œํ€€์Šค ํŠน์„ฑ์„ ํŒŒ์•…ํ•˜์—ฌ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ์ž…๋ ฅ์€ [ImageProcessor](./main_classes/image)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. * ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์€ [Processor](./main_classes/processors)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ† ํฌ๋‚˜์ด์ €์™€ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. <Tip> `AutoProcessor`๋Š” **์–ธ์ œ๋‚˜** ์ž‘๋™ํ•˜์—ฌ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ํ”„๋กœ์„ธ์„œ ๋“ฑ ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ์— ๋งž๋Š” ํด๋ž˜์Šค๋ฅผ ์ž๋™์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ๐Ÿค— Datasets๋ฅผ ์„ค์น˜ํ•˜์—ฌ ์‹คํ—˜์— ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ๋ฅผ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install datasets ``` ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] <Youtube id="Yffk5aydLzg"/> ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋ณธ ๋„๊ตฌ๋Š” [tokenizer](main_classes/tokenizer)์ž…๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์ผ๋ จ์˜ ๊ทœ์น™์— ๋”ฐ๋ผ ํ…์ŠคํŠธ๋ฅผ *ํ† ํฐ*์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ํ† ํฐ์€ ์ˆซ์ž๋กœ ๋ณ€ํ™˜๋˜๊ณ  ํ…์„œ๋Š” ๋ชจ๋ธ ์ž…๋ ฅ์ด ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ํ•„์š”ํ•œ ์ถ”๊ฐ€ ์ž…๋ ฅ์€ ํ† ํฌ๋‚˜์ด์ €์— ์˜ํ•ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. <Tip> ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ๊ณ„ํš์ด๋ผ๋ฉด ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ…์ŠคํŠธ๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์™€ ๋™์ผํ•œ ๋ฐฉ์‹์œผ๋กœ ๋ถ„ํ• ๋˜๊ณ  ์‚ฌ์ „ํ›ˆ๋ จ ์ค‘์— ๋™์ผํ•œ ํ•ด๋‹น ํ† ํฐ-์ธ๋ฑ์Šค ์Œ(์ผ๋ฐ˜์ ์œผ๋กœ *vocab*์ด๋ผ๊ณ  ํ•จ)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. </Tip> ์‹œ์ž‘ํ•˜๋ ค๋ฉด [`AutoTokenizer.from_pretrained`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”. 
๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ํ›ˆ๋ จ๋œ *vocab*์„ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` ๊ทธ ๋‹ค์Œ์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ๋„ฃ์–ด์ฃผ์„ธ์š”: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](glossary#input-ids)๋Š” ๋ฌธ์žฅ์˜ ๊ฐ ํ† ํฐ์— ํ•ด๋‹นํ•˜๋Š” ์ธ๋ฑ์Šค์ž…๋‹ˆ๋‹ค. * [attention_mask](glossary#attention-mask)๋Š” ํ† ํฐ์„ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. * [token_type_ids](glossary#token-type-ids)๋Š” ๋‘ ๊ฐœ ์ด์ƒ์˜ ์‹œํ€€์Šค๊ฐ€ ์žˆ์„ ๋•Œ ํ† ํฐ์ด ์†ํ•œ ์‹œํ€€์Šค๋ฅผ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. `input_ids`๋ฅผ ๋””์ฝ”๋”ฉํ•˜์—ฌ ์ž…๋ ฅ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋‘ ๊ฐœ์˜ ํŠน์ˆ˜ํ•œ ํ† ํฐ(๋ถ„๋ฅ˜ ํ† ํฐ `CLS`์™€ ๋ถ„ํ•  ํ† ํฐ `SEP`)์„ ๋ฌธ์žฅ์— ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์— ํŠน์ˆ˜ํ•œ ํ† ํฐ์ด ํ•„์š”ํ•œ ๊ฒƒ์€ ์•„๋‹ˆ์ง€๋งŒ, ํ•„์š”ํ•˜๋‹ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌํ•  ๋ฌธ์žฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ฆฌ์ŠคํŠธ๋กœ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### ํŒจ๋”ฉ[[pad]] ๋ชจ๋ธ ์ž…๋ ฅ์ธ ํ…์„œ๋Š” ๋ชจ์–‘์ด ๊ท ์ผํ•ด์•ผ ํ•˜์ง€๋งŒ, ๋ฌธ์žฅ์˜ ๊ธธ์ด๊ฐ€ ํ•ญ์ƒ ๊ฐ™์ง€๋Š” ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ์งง์€ ๋ฌธ์žฅ์— ํŠน์ˆ˜ํ•œ *ํŒจ๋”ฉ ํ† ํฐ*์„ ์ถ”๊ฐ€ํ•˜์—ฌ ํ…์„œ๋ฅผ ์ง์‚ฌ๊ฐํ˜• ๋ชจ์–‘์ด ๋˜๋„๋ก ํ•˜๋Š” ์ „๋žต์ž…๋‹ˆ๋‹ค. `padding` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐฐ์น˜ ๋‚ด์˜ ์งง์€ ์‹œํ€€์Šค๋ฅผ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค์— ๋งž์ถฐ ํŒจ๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` ๊ธธ์ด๊ฐ€ ์งง์€ ์ฒซ ๋ฌธ์žฅ๊ณผ ์„ธ ๋ฒˆ์งธ ๋ฌธ์žฅ์ด ์ด์ œ `0`์œผ๋กœ ์ฑ„์›Œ์กŒ์Šต๋‹ˆ๋‹ค. ### ์ž˜๋ผ๋‚ด๊ธฐ[[truncation]] ํ•œํŽธ, ๋•Œ๋กœ๋Š” ์‹œํ€€์Šค๊ฐ€ ๋ชจ๋ธ์—์„œ ์ฒ˜๋ฆฌํ•˜๊ธฐ์— ๋„ˆ๋ฌด ๊ธธ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ, ์‹œํ€€์Šค๋ฅผ ๋” ์งง๊ฒŒ ์ค„์ผ ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์—์„œ ํ—ˆ์šฉํ•˜๋Š” ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์‹œํ€€์Šค๋ฅผ ์ž๋ฅด๋ ค๋ฉด `truncation` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` <Tip> ๋‹ค์–‘ํ•œ ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ ์ธ์ˆ˜์— ๋Œ€ํ•ด ๋” ์•Œ์•„๋ณด๋ ค๋ฉด [ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ](./pad_truncation) ๊ฐœ๋… ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”. </Tip> ### ํ…์„œ ๋งŒ๋“ค๊ธฐ[[build-tensors]] ๋งˆ์ง€๋ง‰์œผ๋กœ, ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋ชจ๋ธ์— ๊ณต๊ธ‰๋˜๋Š” ์‹ค์ œ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. `return_tensors` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ PyTorch์˜ ๊ฒฝ์šฐ `pt`, TensorFlow์˜ ๊ฒฝ์šฐ `tf`๋กœ ์„ค์ •ํ•˜์„ธ์š”: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} ``` </tf> </frameworkcontent> ## ์˜ค๋””์˜ค[[audio]] ์˜ค๋””์˜ค ์ž‘์—…์€ ๋ชจ๋ธ์— ๋งž๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [ํŠน์„ฑ ์ถ”์ถœ๊ธฐ](main_classes/feature_extractor)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋Š” ์›์‹œ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์—์„œ ํŠน์„ฑ๋ฅผ ์ถ”์ถœํ•˜๊ณ  ์ด๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์ด ๋ชฉ์ ์ž…๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด๊ธฐ ์œ„ํ•ด [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. (๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub.html)์—์„œ ์ž์„ธํžˆ ์„ค๋ช…ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.) ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` `audio` ์—ด์˜ ์ฒซ ๋ฒˆ์งธ ์š”์†Œ์— ์ ‘๊ทผํ•˜์—ฌ ์ž…๋ ฅ์„ ์‚ดํŽด๋ณด์„ธ์š”. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜๋ฉด ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์„ธ ๊ฐ€์ง€ ํ•ญ๋ชฉ์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค: * `array`๋Š” 1D ๋ฐฐ์—ด๋กœ ๊ฐ€์ ธ์™€์„œ (ํ•„์š”ํ•œ ๊ฒฝ์šฐ) ๋ฆฌ์ƒ˜ํ”Œ๋ง๋œ ์Œ์„ฑ ์‹ ํ˜ธ์ž…๋‹ˆ๋‹ค. * `path`๋Š” ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์œ„์น˜๋ฅผ ๊ฐ€๋ฆฌํ‚ต๋‹ˆ๋‹ค. * `sampling_rate`๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์—์„œ ์ดˆ๋‹น ์ธก์ •๋˜๋Š” ๋ฐ์ดํ„ฐ ํฌ์ธํŠธ ์ˆ˜๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ๋ฅผ ๋ณด๋ฉด Wav2Vec2๊ฐ€ 16kHz ์ƒ˜ํ”Œ๋ง๋œ ์Œ์„ฑ ์˜ค๋””์˜ค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์ „ํ›ˆ๋ จํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์™€ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ๋‹ค๋ฅด๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. ๐Ÿค— Datasets์˜ [`~datasets.Dataset.cast_column`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ 16kHz๋กœ ์—…์ƒ˜ํ”Œ๋งํ•˜์„ธ์š”: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. 
์˜ค๋””์˜ค ํŒŒ์ผ์„ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด `audio` ์—ด์„ ๋‹ค์‹œ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` ๋‹ค์Œ์œผ๋กœ, ์ž…๋ ฅ์„ ์ •๊ทœํ™”ํ•˜๊ณ  ํŒจ๋”ฉํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ๊ฒฝ์šฐ, ๋” ์งง์€ ์‹œํ€€์Šค์— ๋Œ€ํ•ด `0`์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์—๋„ ๊ฐ™์€ ๊ฐœ๋…์ด ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋Š” ๋ฐฐ์—ด์— `0`(๋ฌต์Œ์œผ๋กœ ํ•ด์„)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. [`AutoFeatureExtractor.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` ์˜ค๋””์˜ค `array`๋ฅผ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋˜ํ•œ, ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์กฐ์šฉํ•œ ์˜ค๋ฅ˜(silent errors)๋ฅผ ๋” ์ž˜ ๋””๋ฒ„๊น…ํ•  ์ˆ˜ ์žˆ๋„๋ก ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์— `sampling_rate` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ํ† ํฌ๋‚˜์ด์ €์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐฐ์น˜ ๋‚ด์—์„œ ๊ฐ€๋ณ€์ ์ธ ์‹œํ€€์Šค๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ํŒจ๋”ฉ ๋˜๋Š” ์ž˜๋ผ๋‚ด๊ธฐ๋ฅผ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐœ์˜ ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ์‹œํ€€์Šค ๊ธธ์ด๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` ์˜ค๋””์˜ค ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๊ฐ€ ๋™์ผํ•˜๋„๋ก ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ตœ๋Œ€ ์ƒ˜ํ”Œ ๊ธธ์ด๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๊ฐ€ ํ•ด๋‹น ๊ธธ์ด์— ๋งž์ถฐ ์‹œํ€€์Šค๋ฅผ ํŒจ๋”ฉํ•˜๊ฑฐ๋‚˜ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` `preprocess_function`์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ฒ˜์Œ ์˜ˆ์‹œ ๋ช‡ ๊ฐœ์— ์ ์šฉํ•ด๋ณด์„ธ์š”: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` ์ด์ œ ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๋ชจ๋‘ ๊ฐ™๊ณ  ์ง€์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด์— ๋งž๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ „์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ](main_classes/image_processor)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์—ฌ๋Ÿฌ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋‹จ๊ณ„์—๋Š” ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”, ์ƒ‰์ƒ ์ฑ„๋„ ๋ณด์ •, ์ด๋ฏธ์ง€์˜ ํ…์„œ ๋ณ€ํ™˜ ๋“ฑ์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์„ ๋ช‡ ๊ฐ€์ง€ ์ ์šฉํ•œ ๋’ค์— ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ ๋ฐ ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ˜•ํ•˜์ง€๋งŒ, ์„œ๋กœ ๋‹ค๋ฅธ ๋ชฉ์ ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์€ ๊ณผ์ ํ•ฉ(over-fitting)์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์˜ ๊ฒฌ๊ณ ํ•จ(resiliency)์„ ๋†’์ด๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ๊ธฐ์™€ ์ƒ‰์ƒ ์กฐ์ •, ์ž๋ฅด๊ธฐ, ํšŒ์ „, ํฌ๊ธฐ ์กฐ์ •, ํ™•๋Œ€/์ถ•์†Œ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ฆ๊ฐ•์œผ๋กœ ์ด๋ฏธ์ง€์˜ ์˜๋ฏธ๊ฐ€ ๋ฐ”๋€Œ์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋Š” ์ด๋ฏธ์ง€๊ฐ€ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ž…๋ ฅ ํ˜•์‹๊ณผ ์ผ์น˜ํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ์ด๋ฏธ์ง€๋Š” ๋ชจ๋ธ์ด ์ดˆ๊ธฐ์— ํ›ˆ๋ จ๋  ๋•Œ์™€ ์ •ํ™•ํžˆ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ „์ฒ˜๋ฆฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์—๋Š” ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋ฌด์—‡์ด๋“  ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ์—๋Š” ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ `ImageProcessor`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. </Tip> [food101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ปดํ“จํ„ฐ ๋น„์ „ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ์•Œ์•„๋ณด์„ธ์š”. ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub.html)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. <Tip> ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ƒ๋‹นํžˆ ํฌ๊ธฐ ๋•Œ๋ฌธ์— ๐Ÿค— Datasets์˜ `split` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ์„ธํŠธ์—์„œ ์ž‘์€ ์ƒ˜ํ”Œ๋งŒ ๊ฐ€์ ธ์˜ค์„ธ์š”! </Tip> ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` ๋‹ค์Œ์œผ๋กœ, ๐Ÿค— Datasets์˜ [`image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image)๋กœ ์ด๋ฏธ์ง€๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> dataset[0]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/> </div> [`AutoImageProcessor.from_pretrained`]๋กœ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ๋จผ์ € ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋‹จ๊ณ„๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ด…์‹œ๋‹ค. ์•„๋ฌด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋‚˜ ์‚ฌ์šฉํ•ด๋„ ๊ดœ์ฐฎ์ง€๋งŒ, ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) ๋˜๋Š” [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)์—์„œ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉํ•˜๋Š”์ง€ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)๋กœ [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)์™€ [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) ๋“ฑ ๋ณ€ํ™˜์„ ๋ช‡ ๊ฐ€์ง€ ์—ฐ๊ฒฐํ•˜์„ธ์š”. ์ฐธ๊ณ ๋กœ ํฌ๊ธฐ ์กฐ์ •์— ํ•„์š”ํ•œ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ ์š”๊ตฌ์‚ฌํ•ญ์€ `image_processor`์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์€ ์ •ํ™•ํ•œ ๋†’์ด์™€ ๋„ˆ๋น„๋ฅผ ์š”๊ตฌํ•˜์ง€๋งŒ, ์ œ์ผ ์งง์€ ๋ณ€์˜ ๊ธธ์ด(`shortest_edge`)๋งŒ ์ •์˜๋œ ๋ชจ๋ธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)]) ``` 2. ๋ชจ๋ธ์€ ์ž…๋ ฅ์œผ๋กœ [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values)๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. `ImageProcessor`๋Š” ์ด๋ฏธ์ง€ ์ •๊ทœํ™” ๋ฐ ์ ์ ˆํ•œ ํ…์„œ ์ƒ์„ฑ์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฐ์น˜ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ๋ฐ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  `pixel_values`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> def transforms(examples): ... images = [_transforms(img.convert("RGB")) for img in examples["image"]] ... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"] ... return examples ``` <Tip> ์œ„์˜ ์˜ˆ์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์— `do_resize=False`๋กœ ์„ค์ •ํ•˜๊ณ , ํ•ด๋‹น `image_processor`์—์„œ `size` ์†์„ฑ์„ ํ™œ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ฆ๊ฐ• ์ค‘์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ƒ๋žตํ•˜์„ธ์š”. ๊ธฐ๋ณธ์ ์œผ๋กœ๋Š” `ImageProcessor`๊ฐ€ ํฌ๊ธฐ ์กฐ์ •์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ฆ๊ฐ• ๋ณ€ํ™˜ ๊ณผ์ •์—์„œ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋ ค๋ฉด `image_processor.image_mean` ๋ฐ `image_processor.image_std` ๊ฐ’์„ ์‚ฌ์šฉํ•˜์„ธ์š”. </Tip> 3. ๐Ÿค— Datasets์˜ [`set_transform`](https://huggingface.co/docs/datasets/process.html#format-transform)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ค์‹œ๊ฐ„์œผ๋กœ ๋ณ€ํ™˜์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset.set_transform(transforms) ``` 4. ์ด์ œ ์ด๋ฏธ์ง€์— ์ ‘๊ทผํ•˜๋ฉด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ `pixel_values`๋ฅผ ์ถ”๊ฐ€ํ•œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py >>> dataset[0].keys() ``` ๋‹ค์Œ์€ ๋ณ€ํ˜•์ด ์ ์šฉ๋œ ํ›„์˜ ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ž˜๋ ค๋‚˜๊ฐ”๊ณ  ์ƒ‰์ƒ ์†์„ฑ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/> </div> <Tip> `ImageProcessor`๋Š” ๊ฐ์ฒด ๊ฐ์ง€, ์‹œ๋งจํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(semantic segmentation), ์ธ์Šคํ„ด์Šค ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(instance segmentation), ํŒŒ๋†‰ํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜(panoptic segmentation)๊ณผ ๊ฐ™์€ ์ž‘์—…์— ๋Œ€ํ•œ ํ›„์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์€ ๋ชจ๋ธ์˜ ์›์‹œ ์ถœ๋ ฅ์„ ๊ฒฝ๊ณ„ ์ƒ์ž๋‚˜ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜ ๋งต๊ณผ ๊ฐ™์€ ์˜๋ฏธ ์žˆ๋Š” ์˜ˆ์ธก์œผ๋กœ ๋ณ€ํ™˜ํ•ด์ค๋‹ˆ๋‹ค. </Tip> ### ํŒจ๋”ฉ[[pad]] ์˜ˆ๋ฅผ ๋“ค์–ด, [DETR](./model_doc/detr)์™€ ๊ฐ™์€ ๊ฒฝ์šฐ์—๋Š” ๋ชจ๋ธ์ด ํ›ˆ๋ จํ•  ๋•Œ ํฌ๊ธฐ ์กฐ์ • ์ฆ๊ฐ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด ๋ฐฐ์น˜ ๋‚ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`DetrImageProcessor`]์˜ [`DetrImageProcessor.pad`]๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ •์˜ํ•ด์„œ ๋ฐฐ์น˜ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... 
return batch ``` ## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ[[multimodal]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์ด ํ•„์š”ํ•œ ์ž‘์—…์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•œ [ํ”„๋กœ์„ธ์„œ](main_classes/processors)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ํ† ํฌ๋‚˜์ด์ €์™€ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ๊ฐ™์€ ๋‘ ๊ฐ€์ง€ ์ฒ˜๋ฆฌ ๊ฐ์ฒด๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. [LJ Speech](https://huggingface.co/datasets/lj_speech) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. (๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— [๋ฐ์ดํ„ฐ ์„ธํŠธ ํŠœํ† ๋ฆฌ์–ผ](https://huggingface.co/docs/datasets/load_hub.html)์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.) ```py >>> from datasets import load_dataset >>> lj_speech = load_dataset("lj_speech", split="train") ``` ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์—์„œ๋Š” `audio`์™€ `text`์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋˜๋ฏ€๋กœ, ๋‹ค๋ฅธ ์—ด๋“ค์€ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` ์ด์ œ `audio`์™€ `text`์—ด์„ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` ๊ธฐ์กด์— ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์ƒˆ๋กœ์šด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ ์ผ์น˜์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋ฅผ [๋ฆฌ์ƒ˜ํ”Œ๋ง](preprocessing#audio)ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` [`AutoProcessor.from_pretrained`]๋กœ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. `array`์— ๋“ค์–ด ์žˆ๋Š” ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ๋ฅผ `input_values`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  `text`๋ฅผ ํ† ํฐํ™”ํ•˜์—ฌ `labels`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. ์ƒ˜ํ”Œ์„ `prepare_dataset` ํ•จ์ˆ˜์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> prepare_dataset(lj_speech[0]) ``` ์ด์ œ ํ”„๋กœ์„ธ์„œ๊ฐ€ `input_values`์™€ `labels`๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ , ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ 16kHz๋กœ ๋‹ค์šด์ƒ˜ํ”Œ๋งํ–ˆ์Šต๋‹ˆ๋‹ค. ๋“œ๋””์–ด ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
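샘플 하나가 아니라 데이터 세트 전체를 처리하려면 🤗 Datasets의 `map`을 사용할 수 있습니다. 아래는 위에서 정의한 `prepare_dataset`과 `lj_speech` 객체를 그대로 사용한다고 가정한 간단한 스케치이며, `remove_columns`에 넘긴 열 목록은 예시일 뿐입니다:

```py
>>> # prepare_dataset을 전체 데이터 세트에 적용하는 예시 스케치입니다.
>>> # remove_columns는 더 이상 필요 없는 원본 열을 정리하기 위한 선택 사항입니다.
>>> lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
>>> lj_speech[0].keys()  # 이제 input_values와 labels가 포함됩니다.
```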
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/performance.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ [[performance-and-scalability]] ์ ์  ๋” ํฐ ๊ทœ๋ชจ์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ”„๋กœ๋•์…˜์— ๋ฐฐํฌํ•˜๋Š” ๋ฐ์—๋Š” ๋‹ค์–‘ํ•œ ์–ด๋ ค์›€์ด ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ค‘์—๋Š” ๋ชจ๋ธ์ด ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ณด๋‹ค ๋” ๋งŽ์€ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ•„์š”๋กœ ํ•˜๊ฑฐ๋‚˜ ํ›ˆ๋ จ ์†๋„๊ฐ€ ๋งค์šฐ ๋Š๋ฆด ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฐฐํฌํ•  ๋•Œ๋Š” ์ œํ’ˆ ํ™˜๊ฒฝ์—์„œ ์š”๊ตฌ๋˜๋Š” ์ฒ˜๋ฆฌ๋Ÿ‰์œผ๋กœ ์ธํ•ด ๊ณผ๋ถ€ํ•˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ณ  ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๊ฐ€์žฅ ์ ํ•ฉํ•œ ์„ค์ •์„ ์ฐพ๋„๋ก ๋„์›€์„ ์ฃผ๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ๊ณผ ์ถ”๋ก ์œผ๋กœ ๊ฐ€์ด๋“œ๋ฅผ ๋ถ„ํ• ํ–ˆ๋Š”๋ฐ, ์ด๋Š” ๊ฐ๊ฐ ๋‹ค๋ฅธ ๋ฌธ์ œ์™€ ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ๊ฐ ๊ฐ€์ด๋“œ์—๋Š” ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ ํ•˜๋“œ์›จ์–ด ์„ค์ •์— ๋Œ€ํ•œ ๋ณ„๋„์˜ ๊ฐ€์ด๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋‹จ์ผ GPU vs ๋‹ค์ค‘ GPU ๋˜๋Š” ์ถ”๋ก ์„ ์œ„ํ•œ CPU vs GPU). ![perf_overview](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perf_overview.png) ์ด ๋ฌธ์„œ๋Š” ์‚ฌ์šฉ์ž์˜ ์ƒํ™ฉ์— ์œ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•๋“ค์— ๋Œ€ํ•œ ๊ฐœ์š” ๋ฐ ์‹œ์ž‘์  ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ [[training]] ํšจ์œจ์ ์ธ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋Š” GPU๋‚˜ TPU์™€ ๊ฐ™์€ ๊ฐ€์†๊ธฐ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๊ฒฝ์šฐ๋Š” ๋‹จ์ผ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์ง€๋งŒ, ๋‹ค์ค‘ GPU ๋ฐ CPU ํ›ˆ๋ จ์— ๋Œ€ํ•œ ์„น์…˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค(๊ณง ๋” ๋งŽ์€ ๋‚ด์šฉ์ด ์ถ”๊ฐ€๋  ์˜ˆ์ •). <Tip> ์ฐธ๊ณ : ๋‹จ์ผ GPU ์„น์…˜์—์„œ ์†Œ๊ฐœ๋œ ๋Œ€๋ถ€๋ถ„์˜ ์ „๋žต(์˜ˆ: ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ ๋˜๋Š” ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์ )์€ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋„ ์ ์šฉ๋˜๋ฏ€๋กœ, ๋‹ค์ค‘ GPU๋‚˜ CPU ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ์„น์…˜์„ ์‚ดํŽด๋ณด๊ธฐ ์ „์— ๊ผญ ์ฐธ๊ณ ํ•˜์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> ### ๋‹จ์ผ GPU [[single-gpu]] ๋‹จ์ผ GPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์ง€๋งŒ, ์ด๋ฅผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋„๊ตฌ์™€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ, ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์  ๋ฐ ์ฒดํฌํฌ์ธํŒ…, ํšจ์œจ์ ์ธ ์˜ตํ‹ฐ๋งˆ์ด์ €, ์ตœ์ ์˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๊ฒฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต ๋“ฑ์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. [๋‹จ์ผ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] ๋‹จ์ผ GPU์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋Š๋ฆฌ๊ฑฐ๋‚˜ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์— ์ ํ•ฉํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ์„ค์ •์œผ๋กœ ์ „ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ๋…ผ๋ฆฌ์ ์ธ ๋‹จ๊ณ„์ด์ง€๋งŒ, ์—ฌ๋Ÿฌ GPU์—์„œ ํ•œ ๋ฒˆ์— ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๊ฐ GPU๋งˆ๋‹ค ๋ชจ๋ธ์˜ ์ „์ฒด ์‚ฌ๋ณธ์„ ๋‘˜์ง€, ํ˜น์€ ๋ชจ๋ธ ์ž์ฒด๋„ ์—ฌ๋Ÿฌ GPU์— ๋ถ„์‚ฐํ•˜์—ฌ ๋‘˜์ง€ ๋“ฑ ์ƒˆ๋กœ์šด ๊ฒฐ์ •์„ ๋‚ด๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ๋ฐ์ดํ„ฐ, ํ…์„œ ๋ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”์— ๋Œ€ํ•ด ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. 
[๋‹ค์ค‘ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_many) ### CPU [[cpu]] [CPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_cpu) ### TPU [[tpu]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_tpu) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_special) ## ์ถ”๋ก  [[inference]] ์ œํ’ˆ ๋ฐ ์„œ๋น„์Šค ํ™˜๊ฒฝ์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ๋งŒํผ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์ง€๋Š” ์„น์…˜์—์„œ๋Š” CPU ๋ฐ ๋‹จ์ผ/๋‹ค์ค‘ GPU ์„ค์ •์—์„œ ์ถ”๋ก ์„ ์ง„ํ–‰ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. ### CPU [[cpu]] [CPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_cpu) ### ๋‹จ์ผ GPU [[single-gpu]] [๋‹จ์ผ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] [๋‹ค์ค‘ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_many) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_infer_special) ## ํ•˜๋“œ์›จ์–ด [[hardware]] ํ•˜๋“œ์›จ์–ด ์„น์…˜์—์„œ๋Š” ์ž์‹ ๋งŒ์˜ ๋”ฅ๋Ÿฌ๋‹ ์žฅ๋น„๋ฅผ ๊ตฌ์ถ•ํ•  ๋•Œ ์œ ์šฉํ•œ ํŒ๊ณผ ์š”๋ น์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [ํ•˜๋“œ์›จ์–ด ์„น์…˜์œผ๋กœ ์ด๋™](perf_hardware) ## ๊ธฐ์—ฌํ•˜๊ธฐ [[contribute]] ์ด ๋ฌธ์„œ๋Š” ์™„์„ฑ๋˜์ง€ ์•Š์€ ์ƒํƒœ์ด๋ฉฐ, ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๋‚ด์šฉ์ด๋‚˜ ์ˆ˜์ • ์‚ฌํ•ญ์ด ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ถ”๊ฐ€ํ•˜๊ฑฐ๋‚˜ ์ˆ˜์ •ํ•  ๋‚ด์šฉ์ด ์žˆ์œผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  PR์„ ์—ด์–ด ์ฃผ์‹œ๊ฑฐ๋‚˜, ์ž์„ธํ•œ ๋‚ด์šฉ์„ ๋…ผ์˜ํ•˜๊ธฐ ์œ„ํ•ด Issue๋ฅผ ์‹œ์ž‘ํ•ด ์ฃผ์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. A๊ฐ€ B๋ณด๋‹ค ์ข‹๋‹ค๊ณ  ํ•˜๋Š” ๊ธฐ์—ฌ๋ฅผ ํ•  ๋•Œ๋Š”, ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๋ฒค์น˜๋งˆํฌ์™€/๋˜๋Š” ํ•ด๋‹น ์ •๋ณด์˜ ์ถœ์ฒ˜ ๋งํฌ๋ฅผ ํฌํ•จํ•ด์ฃผ์„ธ์š”(๋‹น์‹ ์œผ๋กœ๋ถ€ํ„ฐ์˜ ์ง์ ‘์ ์ธ ์ •๋ณด๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ).
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/hpo_train.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Trainer API๋ฅผ ์‚ฌ์šฉํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ [[hyperparameter-search-using-trainer-api]] ๐Ÿค— Transformers์—์„œ๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋Š”๋ฐ ์ตœ์ ํ™”๋œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์‚ฌ์šฉ์ž๋Š” ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•  ํ•„์š” ์—†์ด ๋”์šฑ ๊ฐ„ํŽธํ•˜๊ฒŒ ํ•™์Šต์„ ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, [`Trainer`]๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ์œ„ํ•œ API๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ ์ด API๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์˜ˆ์‹œ์™€ ํ•จ๊ป˜ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ## ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ [[hyperparameter-search-backend]] [`Trainer`]๋Š” ํ˜„์žฌ ์•„๋ž˜ 4๊ฐ€์ง€ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: [optuna](https://optuna.org/)์™€ [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html), [wandb](https://wandb.ai/site/sweeps) ์ž…๋‹ˆ๋‹ค. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ์•„๋ž˜์˜ ๋ช…๋ น์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋“ค์„ ์„ค์น˜ํ•˜์„ธ์š”. ```bash pip install optuna/sigopt/wandb/ray[tune] ``` ## ์˜ˆ์ œ์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ํ™œ์„ฑํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ• [[how-to-enable-hyperparameter-search-in-example]] ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๊ณต๊ฐ„์„ ์ •์˜ํ•˜์„ธ์š”. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๋ฐฑ์—”๋“œ๋งˆ๋‹ค ์„œ๋กœ ๋‹ค๋ฅธ ํ˜•์‹์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. sigopt์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def sigopt_hp_space(trial): ... return [ ... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"}, ... { ... "categorical_values": ["16", "32", "64", "128"], ... "name": "per_device_train_batch_size", ... "type": "categorical", ... }, ... ] ``` optuna์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def optuna_hp_space(trial): ... return { ... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True), ... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]), ... } ``` raytune์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def ray_hp_space(trial): ... return { ... "learning_rate": tune.loguniform(1e-6, 1e-4), ... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]), ... } ``` wandb์˜ ๊ฒฝ์šฐ, ํ•ด๋‹น [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์•„๋ž˜์™€ ๊ฐ™์ด ์ž‘์„ฑํ•˜์„ธ์š”: ```py >>> def wandb_hp_space(trial): ... return { ... "method": "random", ... 
"metric": {"name": "objective", "goal": "minimize"}, ... "parameters": { ... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4}, ... "per_device_train_batch_size": {"values": [16, 32, 64, 128]}, ... }, ... } ``` `model_init` ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜๊ณ  ์ด๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. ์•„๋ž˜๋Š” ๊ทธ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ```py >>> def model_init(trial): ... return AutoModelForSequenceClassification.from_pretrained( ... model_args.model_name_or_path, ... from_tf=bool(".ckpt" in model_args.model_name_or_path), ... config=config, ... cache_dir=model_args.cache_dir, ... revision=model_args.model_revision, ... use_auth_token=True if model_args.use_auth_token else None, ... ) ``` ์•„๋ž˜์™€ ๊ฐ™์ด `model_init` ํ•จ์ˆ˜, ํ›ˆ๋ จ ์ธ์ˆ˜, ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹, ๊ทธ๋ฆฌ๊ณ  ํ‰๊ฐ€ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ [`Trainer`]๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> trainer = Trainer( ... model=None, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... tokenizer=tokenizer, ... model_init=model_init, ... data_collator=data_collator, ... ) ``` ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์„ ํ˜ธ์ถœํ•˜๊ณ , ์ตœ์ ์˜ ์‹œํ—˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ๋ฐฑ์—”๋“œ๋Š” `"optuna"`/`"sigopt"`/`"wandb"`/`"ray"` ์ค‘์—์„œ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฉํ–ฅ์€ `"minimize"` ๋˜๋Š” `"maximize"` ์ค‘ ์„ ํƒํ•˜๋ฉฐ, ๋ชฉํ‘œ๋ฅผ ์ตœ์†Œํ™”ํ•  ๊ฒƒ์ธ์ง€ ์ตœ๋Œ€ํ™”ํ•  ๊ฒƒ์ธ์ง€๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์ž์‹ ๋งŒ์˜ compute_objective ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์ด ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜์ง€ ์•Š์œผ๋ฉด, ๊ธฐ๋ณธ compute_objective๊ฐ€ ํ˜ธ์ถœ๋˜๊ณ , f1๊ณผ ๊ฐ™์€ ํ‰๊ฐ€ ์ง€ํ‘œ์˜ ํ•ฉ์ด ๋ชฉํ‘ฏ๊ฐ’์œผ๋กœ ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค. ```py >>> best_trial = trainer.hyperparameter_search( ... direction="maximize", ... backend="optuna", ... hp_space=optuna_hp_space, ... n_trials=20, ... compute_objective=compute_objective, ... ) ``` ## DDP ๋ฏธ์„ธ ์กฐ์ •์„ ์œ„ํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ [[hyperparameter-search-for-ddp-finetune]] ํ˜„์žฌ, DDP(Distributed Data Parallelism; ๋ถ„์‚ฐ ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์ฒ˜๋ฆฌ)๋ฅผ ์œ„ํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰์€ optuna์™€ sigopt์—์„œ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ตœ์ƒ์œ„ ํ”„๋กœ์„ธ์Šค๊ฐ€ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ ๊ณผ์ •์„ ์‹œ์ž‘ํ•˜๊ณ  ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค๋ฅธ ํ”„๋กœ์„ธ์Šค์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/perf_infer_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPU์—์„œ ํšจ์œจ์ ์ธ ์ถ”๋ก ํ•˜๊ธฐ [[efficient-inference-on-cpu]] ์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ค‘์ ์„ ๋‘๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋” ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•œ `BetterTransformer` [[bettertransformer-for-faster-inference]] ์šฐ๋ฆฌ๋Š” ์ตœ๊ทผ CPU์—์„œ ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๋ชจ๋ธ์˜ ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•ด `BetterTransformer`๋ฅผ ํ†ตํ•ฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ํ†ตํ•ฉ์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ด ๋ฌธ์„œ](https://huggingface.co/docs/optimum/bettertransformer/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## PyTorch JIT ๋ชจ๋“œ (TorchScript) [[pytorch-jitmode-torchscript]] TorchScript๋Š” PyTorch ์ฝ”๋“œ์—์„œ ์ง๋ ฌํ™”์™€ ์ตœ์ ํ™”๊ฐ€ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ์ƒ์„ฑํ• ๋•Œ ์“ฐ์ž…๋‹ˆ๋‹ค. TorchScript๋กœ ๋งŒ๋“ค์–ด์ง„ ํ”„๋กœ๊ทธ๋žจ์€ ๊ธฐ์กด Python ํ”„๋กœ์„ธ์Šค์—์„œ ์ €์žฅํ•œ ๋’ค, ์ข…์†์„ฑ์ด ์—†๋Š” ์ƒˆ๋กœ์šด ํ”„๋กœ์„ธ์Šค๋กœ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ๊ธฐ๋ณธ ์„ค์ •์ธ `eager` ๋ชจ๋“œ์™€ ๋น„๊ตํ–ˆ์„๋•Œ, `jit` ๋ชจ๋“œ๋Š” ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ๊ณผ ๊ฐ™์€ ์ตœ์ ํ™” ๋ฐฉ๋ฒ•๋ก ์„ ํ†ตํ•ด ๋ชจ๋ธ ์ถ”๋ก ์—์„œ ๋Œ€๋ถ€๋ถ„ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. TorchScript์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ๋Š” [PyTorch TorchScript ํŠœํ† ๋ฆฌ์–ผ](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ### JIT ๋ชจ๋“œ์™€ ํ•จ๊ป˜ํ•˜๋Š” IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™” [[ipex-graph-optimization-with-jitmode]] Intelยฎ Extension for PyTorch(IPEX)๋Š” Transformers ๊ณ„์—ด ๋ชจ๋ธ์˜ jit ๋ชจ๋“œ์—์„œ ์ถ”๊ฐ€์ ์ธ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. jit ๋ชจ๋“œ์™€ ๋”๋ถˆ์–ด Intelยฎ Extension for PyTorch(IPEX)๋ฅผ ํ™œ์šฉํ•˜์‹œ๊ธธ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Transformers ๋ชจ๋ธ์—์„œ ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ์ผ๋ถ€ ์—ฐ์‚ฐ์ž ํŒจํ„ด์€ ์ด๋ฏธ jit ๋ชจ๋“œ ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ(operator fusion)์˜ ํ˜•ํƒœ๋กœ Intelยฎ Extension for PyTorch(IPEX)์—์„œ ์ง€์›๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. Multi-head-attention, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm ๊ฒฐํ•ฉ ํŒจํ„ด ๋“ฑ์ด ์ด์šฉ ๊ฐ€๋Šฅํ•˜๋ฉฐ ํ™œ์šฉํ–ˆ์„ ๋•Œ ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ์˜ ์ด์ ์€ ์‚ฌ์šฉ์ž์—๊ฒŒ ๊ณ ์Šค๋ž€ํžˆ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋ถ„์„์— ๋”ฐ๋ฅด๋ฉด, ์งˆ์˜ ์‘๋‹ต, ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ฐ ํ† ํฐ ๋ถ„๋ฅ˜์™€ ๊ฐ™์€ ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” NLP ํƒœ์Šคํฌ ์ค‘ ์•ฝ 70%๊ฐ€ ์ด๋Ÿฌํ•œ ๊ฒฐํ•ฉ ํŒจํ„ด์„ ์‚ฌ์šฉํ•˜์—ฌ Float32 ์ •๋ฐ€๋„์™€ BFloat16 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ชจ๋‘์—์„œ ์„ฑ๋Šฅ์ƒ์˜ ์ด์ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์„ธ์š”. #### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฐฐํฌ ์ฃผ๊ธฐ๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ์„œ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [IPEX ์„ค์น˜ ๋ฐฉ๋ฒ•](https://intel.github.io/intel-extension-for-pytorch/)์„ ํ™•์ธํ•˜์„ธ์š”. 
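참고로 일반적인 환경에서는 보통 아래처럼 pip로 설치할 수 있습니다. 다만 설치 명령은 PyTorch 버전에 따라 달라질 수 있으므로, 위 링크의 공식 안내를 먼저 확인하세요:

```bash
pip install intel_extension_for_pytorch
```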
### JIT ๋ชจ๋“œ ์‚ฌ์šฉ๋ฒ• [[usage-of-jitmode]] ํ‰๊ฐ€ ๋˜๋Š” ์˜ˆ์ธก์„ ์œ„ํ•ด Trainer์—์„œ JIT ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด Trainer์˜ ๋ช…๋ น ์ธ์ˆ˜์— `jit_mode_eval`์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ์ด์ƒ์ด๋ผ๋ฉด, jit ๋ชจ๋“œ๋Š” jit.trace์—์„œ dict ์ž…๋ ฅ์ด ์ง€์›๋˜๋ฏ€๋กœ, ๋ชจ๋“  ๋ชจ๋ธ์˜ ์˜ˆ์ธก๊ณผ ํ‰๊ฐ€๊ฐ€ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ๋ฏธ๋งŒ์ด๋ผ๋ฉด, ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์— ๋“์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ, jit.trace๊ฐ€ ์‹คํŒจํ•˜๋ฉฐ ์˜ˆ์™ธ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ ์˜ˆ์™ธ์ƒํ™ฉ์„ ์‚ฌ์šฉ์ž์—๊ฒŒ ์•Œ๋ฆฌ๊ธฐ ์œ„ํ•ด Logging์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> [Transformers ์งˆ์˜ ์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€ ์˜ˆ์‹œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - CPU์—์„œ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--jit_mode_eval </b></pre> - CPU์—์„œ IPEX์™€ ํ•จ๊ป˜ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--use_ipex \</b> <b>--jit_mode_eval</b></pre>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-a-model]] ์ง€๋‚œ ๋‘ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋ถ„์‚ฐ ์„ค์ •์„ ์œ„ํ•ด PyTorch, Keras ๋ฐ ๐Ÿค— Accelerate๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! Hugging Face๋Š” ์ธ๊ณต์ง€๋Šฅ์˜ ๋ฏผ์ฃผํ™”๋ฅผ ์œ„ํ•ด ๋ชจ๋‘์—๊ฒŒ ์ง€์‹๊ณผ ์ž์›์„ ๊ณต๊ฐœ์ ์œผ๋กœ ๊ณต์œ ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๋ฏฟ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‹œ๊ฐ„๊ณผ ์ž์›์„ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ๋„๋ก ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ณ ๋ คํ•ด ๋ณด์„ธ์š”. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ [Model Hub](https://huggingface.co/models)์—์„œ ํ›ˆ๋ จ๋˜๊ฑฐ๋‚˜ ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…์‹œ๋‹ค: - API๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์›น์‚ฌ์ดํŠธ๋ฅผ ํ†ตํ•ด ํŒŒ์ผ์„ Hub๋กœ ๋Œ์–ด๋‹ค ๋†“์Šต๋‹ˆ๋‹ค. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋ ค๋ฉด, [huggingface.co](https://huggingface.co/join)์— ๊ณ„์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด ์กฐ์ง์— ๊ฐ€์ž…ํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ ๋งŒ๋“ค ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## ์ €์žฅ์†Œ ํŠน์ง•[[repository-features]] ๋ชจ๋ธ ํ—ˆ๋ธŒ์˜ ๊ฐ ์ €์žฅ์†Œ๋Š” ์ผ๋ฐ˜์ ์ธ GitHub ์ €์žฅ์†Œ์ฒ˜๋Ÿผ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋ฒ„์ „ ๊ด€๋ฆฌ, ์ปค๋ฐ‹ ๊ธฐ๋ก, ์ฐจ์ด์  ์‹œ๊ฐํ™” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ๋‚ด์žฅ๋œ ๋ฒ„์ „ ๊ด€๋ฆฌ๋Š” git ๋ฐ [git-lfs](https://git-lfs.github.com/)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ํ•˜๋‚˜์˜ ๋ชจ๋ธ์„ ํ•˜๋‚˜์˜ ์ €์žฅ์†Œ๋กœ ์ทจ๊ธ‰ํ•˜์—ฌ ์ ‘๊ทผ ์ œ์–ด ๋ฐ ํ™•์žฅ์„ฑ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. ๋ฒ„์ „ ์ œ์–ด๋Š” ์ปค๋ฐ‹ ํ•ด์‹œ, ํƒœ๊ทธ ๋˜๋Š” ๋ธŒ๋žœ์น˜๋กœ ๋ชจ๋ธ์˜ ํŠน์ • ๋ฒ„์ „์„ ๊ณ ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์ธ *revision*์„ ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ `revision` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋ชจ๋ธ ๋ฒ„์ „์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` ๋˜ํ•œ ์ €์žฅ์†Œ์—์„œ ํŒŒ์ผ์„ ์‰ฝ๊ฒŒ ํŽธ์ง‘ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ปค๋ฐ‹ ๊ธฐ๋ก๊ณผ ์ฐจ์ด๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## ์„ค์ •[[setup]] ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๊ธฐ ์ „์— Hugging Face ์ž๊ฒฉ ์ฆ๋ช…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ„ฐ๋ฏธ๋„์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋œ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋ฉด Hugging Face ์บ์‹œ ํด๋”(๊ธฐ๋ณธ์ ์œผ๋กœ `~/.cache/`)์— ์•ก์„ธ์Šค ํ† ํฐ์„ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` Jupyter ๋˜๋Š” Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์„ ์‚ฌ์šฉ ์ค‘์ธ ๊ฒฝ์šฐ, [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด API๋กœ ํ—ˆ๋ธŒ์™€ ์ƒํ˜ธ ์ž‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pip install huggingface_hub ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `notebook_login`๋กœ ํ—ˆ๋ธŒ์— ๋กœ๊ทธ์ธํ•˜๊ณ , [์—ฌ๊ธฐ](https://huggingface.co/settings/token) ๋งํฌ์—์„œ ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ[[convert-a-model-for-all-frameworks]] ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์—…ํ•˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋ ค๋ฉด, PyTorch ๋ฐ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•˜๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด๋„ ์‚ฌ์šฉ์ž๋Š” ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๊ฐ€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ฆ‰์„์—์„œ ๋ณ€ํ™˜ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์†๋„๊ฐ€ ๋Š๋ ค์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ์‰ฝ์Šต๋‹ˆ๋‹ค. PyTorch ๋ฐ TensorFlow๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•œ ๋‹ค์Œ(์„ค์น˜ ์ง€์นจ์€ [์—ฌ๊ธฐ](installation) ์ฐธ์กฐ) ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์ž‘์—…์— ๋Œ€ํ•œ ํŠน์ • ๋ชจ๋ธ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์ฒดํฌํฌ์ธํŠธ๋ฅผ TensorFlow์—์„œ PyTorch๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_tf=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> ์ฒดํฌํฌ์ธํŠธ๋ฅผ PyTorch์—์„œ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด `from_pt=True`๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒˆ๋กœ์šด ์ฒดํฌํฌ์ธํŠธ์™€ ํ•จ๊ป˜ ์ƒˆ๋กœ์šด TensorFlow ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> Flax์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, PyTorch์—์„œ Flax๋กœ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ ํ‘ธ์‹œํ•˜๊ธฐ[[push-a-model-during-training]] <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์€ ์ถ”๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋‚˜ ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋งŒํผ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. [๋ฏธ์„ธ ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](training)์—์„œ [`TrainingArguments`] ํด๋ž˜์Šค๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ์ถ”๊ฐ€ ํ›ˆ๋ จ ์˜ต์…˜์„ ์ง€์ •ํ•˜๋Š” ๊ณณ์ด๋ผ๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ์ด๋Ÿฌํ•œ ํ›ˆ๋ จ ์˜ต์…˜ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ์ง์ ‘ ํ‘ธ์‹œํ•˜๋Š” ๊ธฐ๋Šฅ์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. [`TrainingArguments`]์—์„œ `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` ํ‰์†Œ์™€ ๊ฐ™์ด ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... 
) ``` ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ํ›„, [`Trainer`]์—์„œ [`~transformers.Trainer.push_to_hub`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜์„ธ์š”. ๐Ÿค— Transformers๋Š” ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ํ›ˆ๋ จ ๊ฒฐ๊ณผ ๋ฐ ํ”„๋ ˆ์ž„์›Œํฌ ๋ฒ„์ „์„ ๋ชจ๋ธ ์นด๋“œ์— ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค! ```py >>> trainer.push_to_hub() ``` </pt> <tf> [`PushToHubCallback`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜๋ ค๋ฉด, [`PushToHubCallback`]์— ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•˜์„ธ์š”: - ์ถœ๋ ฅ๋œ ๋ชจ๋ธ์˜ ํŒŒ์ผ ๊ฒฝ๋กœ - ํ† ํฌ๋‚˜์ด์ € - `{Hub ์‚ฌ์šฉ์ž ์ด๋ฆ„}/{๋ชจ๋ธ ์ด๋ฆ„}` ํ˜•์‹์˜ `hub_model_id` ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` [`fit`](https://keras.io/api/models/model_training_apis/)์— ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๋ฉด, ๐Ÿค— Transformers๊ฐ€ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## `push_to_hub` ํ•จ์ˆ˜ ์‚ฌ์šฉํ•˜๊ธฐ[[use-the-pushtohub-function]] ๋ชจ๋ธ์—์„œ ์ง์ ‘ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `push_to_hub`์— ๋ชจ๋ธ ์ด๋ฆ„์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ์ž ์ด๋ฆ„ ์•„๋ž˜์— ๋ชจ๋ธ ์ด๋ฆ„ `my-awesome-model`๋กœ ์ €์žฅ์†Œ๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์šฉ์ž๋Š” `from_pretrained` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` ์กฐ์ง์— ์†ํ•˜๊ณ  ๋ชจ๋ธ์„ ์กฐ์ง ์ด๋ฆ„์œผ๋กœ ๋Œ€์‹  ํ‘ธ์‹œํ•˜๋ ค๋ฉด `repo_id`์— ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` `push_to_hub` ํ•จ์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ์†Œ์— ๋‹ค๋ฅธ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ชจ๋ธ ์ €์žฅ์†Œ์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •๋œ PyTorch ๋ชจ๋ธ์˜ TensorFlow ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` ์ด์ œ Hugging Face ํ”„๋กœํ•„๋กœ ์ด๋™ํ•˜๋ฉด, ์ƒˆ๋กœ ์ƒ์„ฑํ•œ ๋ชจ๋ธ ์ €์žฅ์†Œ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. **Files** ํƒญ์„ ํด๋ฆญํ•˜๋ฉด ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•œ ๋ชจ๋“  ํŒŒ์ผ์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŒŒ์ผ์„ ๋งŒ๋“ค๊ณ  ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ—ˆ๋ธŒ ์„ค๋ช…์„œ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/how-to-upstream)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ์›น ์ธํ„ฐํŽ˜์ด์Šค๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ[[upload-with-the-web-interface]] ์ฝ”๋“œ ์—†๋Š” ์ ‘๊ทผ ๋ฐฉ์‹์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ํ—ˆ๋ธŒ์˜ ์›น ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [huggingface.co/new](https://huggingface.co/new)๋ฅผ ๋ฐฉ๋ฌธํ•˜์—ฌ ์ƒˆ๋กœ์šด ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) ์—ฌ๊ธฐ์„œ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ •๋ณด๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”: - ์ €์žฅ์†Œ์˜ **์†Œ์œ ์ž**๋ฅผ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ์ž ๋˜๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ์†ํ•œ ์กฐ์ง์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ €์žฅ์†Œ ์ด๋ฆ„์ด ๋  ๋ชจ๋ธ์˜ ์ด๋ฆ„์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์ด ๊ณต๊ฐœ์ธ์ง€ ๋น„๊ณต๊ฐœ์ธ์ง€ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ๋ผ์ด์„ผ์Šค ์‚ฌ์šฉ์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
์ด์ œ **Files** ํƒญ์„ ํด๋ฆญํ•˜๊ณ  **Add file** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ƒˆ๋กœ์šด ํŒŒ์ผ์„ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—…๋กœ๋“œํ•  ํŒŒ์ผ์„ ๋Œ์–ด๋‹ค ๋†“๊ณ  ์ปค๋ฐ‹ ๋ฉ”์‹œ์ง€๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## ๋ชจ๋ธ ์นด๋“œ ์ถ”๊ฐ€ํ•˜๊ธฐ[[add-a-model-card]] ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์˜ ๊ธฐ๋Šฅ, ์ œํ•œ, ์ž ์žฌ์  ํŽธํ–ฅ ๋ฐ ์œค๋ฆฌ์  ๊ณ ๋ ค ์‚ฌํ•ญ์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ €์žฅ์†Œ์— ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ๋ชจ๋ธ ์นด๋“œ๋Š” `README.md` ํŒŒ์ผ์— ์ •์˜๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * `README.md` ํŒŒ์ผ์„ ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑํ•˜์—ฌ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ ์ €์žฅ์†Œ์—์„œ **Edit model card** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ์— ํฌํ•จํ•  ์ •๋ณด ์œ ํ˜•์— ๋Œ€ํ•œ ์ข‹์€ ์˜ˆ๋Š” DistilBert [๋ชจ๋ธ ์นด๋“œ](https://huggingface.co/distilbert-base-uncased)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ชจ๋ธ์˜ ํƒ„์†Œ ๋ฐœ์ž๊ตญ์ด๋‚˜ ์œ„์ ฏ ์˜ˆ์‹œ ๋“ฑ `README.md` ํŒŒ์ผ์—์„œ ์ œ์–ดํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์˜ต์…˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/hub/models-cards) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ถ”๋ก ์„ ์œ„ํ•œ Pipeline[[pipelines-for-inference]] [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋ฉด ์–ธ์–ด, ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์— ๋Œ€ํ•œ ์ถ”๋ก ์„ ์œ„ํ•ด [Hub](https://huggingface.co/models)์˜ ์–ด๋–ค ๋ชจ๋ธ์ด๋“  ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ๋ถ„์•ผ์— ๋Œ€ํ•œ ๊ฒฝํ—˜์ด ์—†๊ฑฐ๋‚˜, ๋ชจ๋ธ์„ ์ด๋ฃจ๋Š” ์ฝ”๋“œ๊ฐ€ ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋„ [`pipeline`]์„ ์‚ฌ์šฉํ•ด์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์–ด์š”! ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ์„ ๋ฐฐ์›Œ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. * ์ถ”๋ก ์„ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• * ํŠน์ • ํ† ํฌ๋‚˜์ด์ € ๋˜๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• * ์–ธ์–ด, ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์—์„œ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• <Tip> ์ง€์›ํ•˜๋Š” ๋ชจ๋“  ํƒœ์Šคํฌ์™€ ์“ธ ์ˆ˜ ์žˆ๋Š” ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋‹ด์€ ๋ชฉ๋ก์€ [`pipeline`] ์„ค๋ช…์„œ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. </Tip> ## Pipeline ์‚ฌ์šฉํ•˜๊ธฐ[[pipeline-usage]] ๊ฐ ํƒœ์Šคํฌ๋งˆ๋‹ค ๊ณ ์œ ์˜ [`pipeline`]์ด ์žˆ์ง€๋งŒ, ๊ฐœ๋ณ„ ํŒŒ์ดํ”„๋ผ์ธ์„ ๋‹ด๊ณ ์žˆ๋Š” ์ถ”์ƒํ™”๋œ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. [`pipeline`]์€ ํƒœ์Šคํฌ์— ์•Œ๋งž๊ฒŒ ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•œ ๊ธฐ๋ณธ ๋ชจ๋ธ๊ณผ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์ž๋™์œผ๋กœ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. 1. ๋จผ์ € [`pipeline`]์„ ์ƒ์„ฑํ•˜๊ณ  ํƒœ์Šคํฌ๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ```py >>> from transformers import pipeline >>> generator = pipeline(task="automatic-speech-recognition") ``` 2. ๊ทธ๋ฆฌ๊ณ  [`pipeline`]์— ์ž…๋ ฅ์„ ๋„ฃ์–ด์ฃผ์„ธ์š”. ```py >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} ``` ๊ธฐ๋Œ€ํ–ˆ๋˜ ๊ฒฐ๊ณผ๊ฐ€ ์•„๋‹Œ๊ฐ€์š”? Hub์—์„œ [๊ฐ€์žฅ ๋งŽ์ด ๋‹ค์šด๋กœ๋“œ๋œ ์ž๋™ ์Œ์„ฑ ์ธ์‹ ๋ชจ๋ธ](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads)๋กœ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ด๋ณด์„ธ์š”. ๋‹ค์Œ์€ [openai/whisper-large](https://huggingface.co/openai/whisper-large)๋กœ ์‹œ๋„ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> generator = pipeline(model="openai/whisper-large") >>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` ํ›จ์”ฌ ๋” ๋‚˜์•„์กŒ๊ตฐ์š”! Hub์˜ ๋ชจ๋ธ๋“ค์€ ์—ฌ๋Ÿฌ ๋‹ค์–‘ํ•œ ์–ธ์–ด์™€ ์ „๋ฌธ๋ถ„์•ผ๋ฅผ ์•„์šฐ๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๊ผญ ์ž์‹ ์˜ ์–ธ์–ด๋‚˜ ๋ถ„์•ผ์— ํŠนํ™”๋œ ๋ชจ๋ธ์„ ์ฐพ์•„๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๋ธŒ๋ผ์šฐ์ €๋ฅผ ๋ฒ—์–ด๋‚  ํ•„์š”์—†์ด Hub์—์„œ ์ง์ ‘ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์„ ํ™•์ธํ•˜๊ณ  ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ๋น„๊ตํ•ด์„œ ์ž์‹ ์˜ ์ƒํ™ฉ์— ๋” ์ ํ•ฉํ•œ์ง€, ์• ๋งคํ•œ ์ž…๋ ฅ์„ ๋” ์ž˜ ์ฒ˜๋ฆฌํ•˜๋Š”์ง€๋„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋งŒ์•ฝ ์ƒํ™ฉ์— ์•Œ๋งž๋Š” ๋ชจ๋ธ์„ ์—†๋‹ค๋ฉด ์–ธ์ œ๋‚˜ ์ง์ ‘ [ํ›ˆ๋ จ](training)์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž…๋ ฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ, ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py generator( [ "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac", "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", ] ) ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์„ ์ˆœํšŒํ•˜๊ฑฐ๋‚˜ ์›น์„œ๋ฒ„์— ์˜ฌ๋ ค๋‘์–ด ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๊ฐ ์ƒ์„ธ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. [๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](#using-pipelines-on-a-dataset) [์›น์„œ๋ฒ„์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](./pipeline_webserver) ## ๋งค๊ฐœ๋ณ€์ˆ˜[[parameters]] [`pipeline`]์€ ๋งŽ์€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ํƒœ์Šคํฌ์šฉ์ธ ๊ฒƒ๋„ ์žˆ๊ณ , ๋ฒ”์šฉ์ธ ๊ฒƒ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์›ํ•˜๋Š” ์œ„์น˜์— ์–ด๋””๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋„ฃ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", my_parameter=1) out = generate(...) # This will use `my_parameter=1`. out = generate(..., my_parameter=2) # This will override and use `my_parameter=2`. out = generate(...) # This will go back to using `my_parameter=1`. ``` ์ค‘์š”ํ•œ 3๊ฐ€์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๊ธฐ๊ธฐ(device)[[device]] `device=n`์ฒ˜๋Ÿผ ๊ธฐ๊ธฐ๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์ด ์ž๋™์œผ๋กœ ํ•ด๋‹น ๊ธฐ๊ธฐ์— ๋ชจ๋ธ์„ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ๋‚˜ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ๋„ ๋ชจ๋‘ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", device=0) ``` ๋ชจ๋ธ์ด GPU ํ•˜๋‚˜์— ๋Œ์•„๊ฐ€๊ธฐ ๋ฒ„๊ฒ๋‹ค๋ฉด, `device_map="auto"`๋ฅผ ์ง€์ •ํ•ด์„œ ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋–ป๊ฒŒ ๋กœ๋“œํ•˜๊ณ  ์ €์žฅํ• ์ง€ ์ž๋™์œผ๋กœ ๊ฒฐ์ •ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py #!pip install accelerate generator(model="openai/whisper-large", device_map="auto") ``` ### ๋ฐฐ์น˜ ์‚ฌ์ด์ฆˆ[[batch-size]] ๊ธฐ๋ณธ์ ์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)์— ๋‚˜์˜จ ์ด์œ ๋กœ ์ถ”๋ก ์„ ์ผ๊ด„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜์ž๋ฉด ์ผ๊ด„ ์ฒ˜๋ฆฌ๊ฐ€ ๋ฐ˜๋“œ์‹œ ๋” ๋น ๋ฅด์ง€ ์•Š๊ณ  ์˜คํžˆ๋ ค ๋” ๋Š๋ ค์งˆ ์ˆ˜๋„ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ž์‹ ์˜ ์ƒํ™ฉ์— ์ ํ•ฉํ•˜๋‹ค๋ฉด, ์ด๋ ‡๊ฒŒ ์‚ฌ์šฉํ•˜์„ธ์š”. ```py generator(model="openai/whisper-large", device=0, batch_size=2) audio_filenames = [f"audio_{i}.flac" for i in range(10)] texts = generator(audio_filenames) ``` ํŒŒ์ดํ”„๋ผ์ธ ์œ„ ์ œ๊ณต๋œ 10๊ฐœ์˜ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ถ”๊ฐ€๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์ฝ”๋“œ ์—†์ด (์ผ๊ด„ ์ฒ˜๋ฆฌ์— ๋ณด๋‹ค ํšจ๊ณผ์ ์ธ GPU ์œ„) ๋ชจ๋ธ์— 2๊ฐœ์”ฉ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ์€ ์ผ๊ด„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์•˜์„ ๋•Œ์™€ ๋˜‘๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ์†๋„๋ฅผ ๋” ๋‚ผ ์ˆ˜๋„ ์žˆ๋Š” ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜์ผ ๋ฟ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ์ผ๊ด„ ์ฒ˜๋ฆฌ์˜ ๋ณต์žกํ•œ ๋ถ€๋ถ„์„ ์ค„์—ฌ์ฃผ๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. (์˜ˆ๋ฅผ ๋“ค์–ด ๊ธด ์˜ค๋””์˜ค ํŒŒ์ผ์ฒ˜๋Ÿผ) ์—ฌ๋Ÿฌ ๋ถ€๋ถ„์œผ๋กœ ๋‚˜๋ˆ ์•ผ ๋ชจ๋ธ์ด ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์„ [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching)์ด๋ผ๊ณ  ํ•˜๋Š”๋ฐ, ํŒŒ์ดํ”„๋ผ์ธ์„ ์‚ฌ์šฉํ•˜๋ฉด ์ž๋™์œผ๋กœ ๋‚˜๋ˆ ์ค๋‹ˆ๋‹ค. ### ํŠน์ • ํƒœ์Šคํฌ์šฉ ๋งค๊ฐœ๋ณ€์ˆ˜[[task-specific-parameters]] ๊ฐ ํƒœ์Šคํฌ๋งˆ๋‹ค ๊ตฌํ˜„ํ•  ๋•Œ ์œ ์—ฐ์„ฑ๊ณผ ์˜ต์…˜์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด ํƒœ์Šคํฌ์šฉ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] ๋ฉ”์„œ๋“œ์—๋Š” ๋™์˜์ƒ์˜ ์ž๋ง‰์„ ๋„ฃ์„ ๋•Œ ์œ ์šฉํ•  ๊ฒƒ ๊ฐ™์€ `return_timestamps` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py
>>> # Not using whisper, as it cannot provide timestamps.
>>> generator = pipeline(model="facebook/wav2vec2-large-960h-lv60-self", return_timestamps="word")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP AND LIVE OUT THE TRUE MEANING OF ITS CREED',
 'chunks': [{'text': 'I', 'timestamp': (1.22, 1.24)},
  {'text': 'HAVE', 'timestamp': (1.42, 1.58)},
  {'text': 'A', 'timestamp': (1.66, 1.68)},
  {'text': 'DREAM', 'timestamp': (1.76, 2.14)},
  {'text': 'BUT', 'timestamp': (3.68, 3.8)},
  {'text': 'ONE', 'timestamp': (3.94, 4.06)},
  {'text': 'DAY', 'timestamp': (4.16, 4.3)},
  {'text': 'THIS', 'timestamp': (6.36, 6.54)},
  {'text': 'NATION', 'timestamp': (6.68, 7.1)},
  {'text': 'WILL', 'timestamp': (7.32, 7.56)},
  {'text': 'RISE', 'timestamp': (7.8, 8.26)},
  {'text': 'UP', 'timestamp': (8.38, 8.48)},
  {'text': 'AND', 'timestamp': (10.08, 10.18)},
  {'text': 'LIVE', 'timestamp': (10.26, 10.48)},
  {'text': 'OUT', 'timestamp': (10.58, 10.7)},
  {'text': 'THE', 'timestamp': (10.82, 10.9)},
  {'text': 'TRUE', 'timestamp': (10.98, 11.18)},
  {'text': 'MEANING', 'timestamp': (11.26, 11.58)},
  {'text': 'OF', 'timestamp': (11.66, 11.7)},
  {'text': 'ITS', 'timestamp': (11.76, 11.88)},
  {'text': 'CREED', 'timestamp': (12.0, 12.38)}]}
```

보시다시피 모델이 텍스트를 추론할 뿐만 아니라 각 단어를 말한 시점까지도 출력했습니다.

태스크마다 다양한 매개변수를 가지고 있는데요. 원하는 태스크의 API를 참조해서 바꿔볼 수 있는 여러 매개변수를 살펴보세요!
지금까지 다뤄본 [`~transformers.AutomaticSpeechRecognitionPipeline`]에는 `chunk_length_s` 매개변수가 있습니다. 영화나 1시간 분량의 동영상의 자막 작업을 할 때처럼, 일반적으로 모델이 자체적으로 처리할 수 없는 매우 긴 오디오 파일을 처리할 때 유용하죠.

도움이 될 만한 매개변수를 찾지 못했다면 언제든지 [요청](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)해주세요!

## 데이터세트에서 Pipeline 사용하기[[using-pipelines-on-a-dataset]]

파이프라인은 대규모 데이터세트에서도 추론 작업을 할 수 있습니다. 이때 이터레이터를 사용하는 걸 추천드립니다.

```py
def data():
    for i in range(1000):
        yield f"My example {i}"


pipe = pipeline(model="gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
    generated_characters += len(out["generated_text"])
```

이터레이터 `data()`는 각 결과를 호출마다 생성하고, 파이프라인은 입력이 순회할 수 있는 자료구조임을 자동으로 인식하여 GPU에서 기존 데이터가 처리되는 동안 새로운 데이터를 가져오기 시작합니다.(이때 내부적으로 [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)를 사용해요.) 이 과정은 전체 데이터세트를 메모리에 적재하지 않고도 GPU에 최대한 빠르게 새로운 작업을 공급할 수 있기 때문에 중요합니다.

그리고 일괄 처리가 더 빠를 수 있기 때문에, `batch_size` 매개변수를 조정해봐도 좋아요.

데이터세트를 순회하는 가장 간단한 방법은 🤗 [Datasets](https://github.com/huggingface/datasets/)를 활용하는 것인데요.

```py
# KeyDataset is a util that will just output the item we're interested in.
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```

## 웹서버에서 Pipeline 사용하기[[using-pipelines-for-a-webserver]]

<Tip>
추론 엔진을 만드는 과정은 따로 페이지를 작성할 만한 복잡한 주제입니다.
</Tip>

[Link](./pipeline_webserver)

## 비전 Pipeline[[vision-pipeline]]

비전 태스크를 위해 [`pipeline`]을 사용하는 일은 거의 동일합니다.

태스크를 지정하고 이미지를 분류기에 전달하면 됩니다. 이미지는 인터넷 링크 또는 로컬 경로의 형태로 전달해주세요. 예를 들어 아래에 표시된 고양이는 어떤 종인가요?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```

## 텍스트 Pipeline[[text-pipeline]]

NLP 태스크를 위해 [`pipeline`]을 사용하는 일도 거의 동일합니다.

```py
>>> from transformers import pipeline

>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```

## 멀티모달 Pipeline[[multimodal-pipeline]]

[`pipeline`]은 여러 모달리티(역주: 오디오, 비디오, 텍스트와 같은 데이터 형태)를 지원합니다. 예시로 시각적 질의응답(VQA; Visual Question Answering) 태스크는 텍스트와 이미지를 모두 사용합니다. 그 어떤 이미지 링크나 묻고 싶은 질문도 자유롭게 전달할 수 있습니다. 이미지는 URL 또는 로컬 경로의 형태로 전달해주세요.

예를 들어 이 [거래명세서 사진](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png)에서 거래명세서 번호를 묻고 싶다면,

```py
>>> from transformers import pipeline

>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.42514941096305847, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
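이미지를 로컬 경로로 전달하는 경우도 동일합니다. 아래는 현재 디렉터리에 `invoice.png`라는 파일이 있다고 가정한 예시입니다:

```py
>>> vqa(
...     image="invoice.png",  # 가정: 로컬에 저장된 거래명세서 이미지 파일
...     question="What is the invoice number?",
... )
```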
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/fast_tokenizers.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ[[use-tokenizers-from-tokenizers]] [`PreTrainedTokenizerFast`]๋Š” [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๊ธฐ๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers๋กœ ๋งค์šฐ ๊ฐ„๋‹จํ•˜๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์ฒด์ ์ธ ๋‚ด์šฉ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์—, ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋”๋ฏธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` ์šฐ๋ฆฌ๊ฐ€ ์ •์˜ํ•œ ํŒŒ์ผ์„ ํ†ตํ•ด ์ด์ œ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ–๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋Ÿฐํƒ€์ž„์—์„œ ๊ณ„์† ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ JSON ํŒŒ์ผ๋กœ ์ €์žฅํ•˜์—ฌ ๋‚˜์ค‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋กœ๋ถ€ํ„ฐ ์ง์ ‘ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-directly-from-the-tokenizer-object]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ด ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [`PreTrainedTokenizerFast`] ํด๋ž˜์Šค๋Š” ์ธ์Šคํ„ด์Šคํ™”๋œ *ํ† ํฌ๋‚˜์ด์ €* ๊ฐ์ฒด๋ฅผ ์ธ์ˆ˜๋กœ ๋ฐ›์•„ ์‰ฝ๊ฒŒ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## JSON ํŒŒ์ผ์—์„œ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-from-a-JSON-file]] <!--In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:--> JSON ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ ์œ„ํ•ด, ๋จผ์ € ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ €์žฅํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> tokenizer.save("tokenizer.json") ``` JSON ํŒŒ์ผ์„ ์ €์žฅํ•œ ๊ฒฝ๋กœ๋Š” `tokenizer_file` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ [`PreTrainedTokenizerFast`] ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
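만든 빠른 토크나이저를 다른 🤗 Transformers 토크나이저처럼 디렉터리에 저장하고 다시 불러올 수도 있습니다. 아래는 이를 가정한 간단한 예시이며, 디렉터리 이름은 임의입니다:

```python
>>> fast_tokenizer.save_pretrained("my-tokenizer")  # 디렉터리 이름은 예시입니다.

>>> from transformers import AutoTokenizer

>>> reloaded_tokenizer = AutoTokenizer.from_pretrained("my-tokenizer")
```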
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ [[xla-integration-for-tensorflow-models]] [[open-in-colab]] XLA(Accelerated Linear Algebra)๋Š” TensorFlow ๋ชจ๋ธ์˜ ์‹คํ–‰ ์‹œ๊ฐ„์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ปดํŒŒ์ผ๋Ÿฌ์ž…๋‹ˆ๋‹ค. [๊ณต์‹ ๋ฌธ์„œ](https://www.tensorflow.org/xla)์— ๋”ฐ๋ฅด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: XLA(Accelerated Linear Algebra)๋Š” ์„ ํ˜• ๋Œ€์ˆ˜๋ฅผ ์œ„ํ•œ ๋„๋ฉ”์ธ ํŠนํ™” ์ปดํŒŒ์ผ๋Ÿฌ๋กœ, TensorFlow ๋ชจ๋ธ์„ ์†Œ์Šค ์ฝ”๋“œ ๋ณ€๊ฒฝ ์—†์ด ๊ฐ€์†ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. XLA๋Š” `tensorflow` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋‚ด์— ํŒจํ‚ค์ง€๋กœ ์ œ๊ณต๋˜๋ฉฐ, [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)๊ณผ ๊ฐ™์€ ๊ทธ๋ž˜ํ”„ ์ƒ์„ฑ ํ•จ์ˆ˜์—์„œ `jit_compile` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `fit()` ๋ฐ `predict()`์™€ ๊ฐ™์€ Keras ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, `jit_compile` ์ธ์ˆ˜๋ฅผ `model.compile()`์— ์ „๋‹ฌํ•˜์—ฌ XLA๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ XLA๋Š” ์ด๋Ÿฌํ•œ ๋ฉ”์†Œ๋“œ์— ๊ตญํ•œ๋˜์ง€ ์•Š๊ณ  ์ž„์˜์˜ `tf.function`์„ ๊ฐ€์†ํ™”ํ•˜๋Š” ๋ฐ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์—์„œ๋Š” [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5), [OPT](https://huggingface.co/docs/transformers/model_doc/opt)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ํ…์ŠคํŠธ ์ƒ์„ฑ, ๊ทธ๋ฆฌ๊ณ  [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์Œ์„ฑ ์ฒ˜๋ฆฌ๋ฅผ ํฌํ•จํ•˜์—ฌ ์—ฌ๋Ÿฌ TensorFlow ๋ฉ”์†Œ๋“œ๊ฐ€ XLA์™€ ํ˜ธํ™˜๋˜๋„๋ก ๋‹ค์‹œ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ์†๋„ ํ–ฅ์ƒ์€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ, ๐Ÿค— Transformers ๋‚ด์˜ TensorFlow ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ตœ๋Œ€ 100๋ฐฐ์˜ ์†๋„ ํ–ฅ์ƒ์„ ํ™•์ธํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์— ๋Œ€ํ•ด XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ์–ป๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ XLA ํ†ตํ•ฉ์˜ ๋ฒค์น˜๋งˆํฌ ๋ฐ ๋””์ž์ธ ์ฒ ํ•™์— ๋Œ€ํ•œ ์ถ”๊ฐ€ ์ž๋ฃŒ ๋งํฌ๋„ ์ œ๊ณตํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ•จ์ˆ˜ ์‹คํ–‰ํ•˜๊ธฐ [[running-tf-functions-with-xla]] TensorFlow์—์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ์„ ๊ณ ๋ คํ•ด ๋ด…์‹œ๋‹ค: ```py import tensorflow as tf model = tf.keras.Sequential( [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")] ) ``` ์œ„ ๋ชจ๋ธ์€ ์ฐจ์›์ด `(10, )`์ธ ์ž…๋ ฅ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž„์˜์˜ ์ž…๋ ฅ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. batch_size = 16 input_vector_dim = 10 random_inputs = tf.random.normal((batch_size, input_vector_dim)) # ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
_ = model(random_inputs) ``` XLA๋กœ ์ปดํŒŒ์ผ๋œ ํ•จ์ˆ˜๋กœ ์ˆœ์ „ํŒŒ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py xla_fn = tf.function(model, jit_compile=True) _ = xla_fn(random_inputs) ``` `model`์˜ ๊ธฐ๋ณธ `call()` ํ•จ์ˆ˜๋Š” XLA ๊ทธ๋ž˜ํ”„๋ฅผ ์ปดํŒŒ์ผํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ํ•จ์ˆ˜๋ฅผ XLA๋กœ ์ปดํŒŒ์ผํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True) ``` ## ๐Ÿค— Transformers์—์„œ XLA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TF ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ ์‹คํ–‰ํ•˜๊ธฐ [[running-a-tf-text-generation-model-with-xla-from-transformers]] ๐Ÿค— Transformers์—์„œ XLA๋กœ ๊ฐ€์†ํ™”๋œ ์ƒ์„ฑ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์ตœ์‹  ๋ฒ„์ „์˜ `transformers`๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pip install transformers --upgrade ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM # ์ตœ์†Œ ๋ฒ„์ „์˜ Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์ง€ ์•Š๋‹ค๋ฉด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. from transformers.utils import check_min_version check_min_version("4.21.0") tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] # XLA ์ƒ์„ฑ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•œ ํ•œ ์ค„ xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_string, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") # Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the ``` ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `generate()`์—์„œ XLA๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ๊ฒƒ์€ ๋‹จ ํ•œ ์ค„์˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค. ์ฝ”๋“œ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์€ ๋ณ€๊ฒฝ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์œ„ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ์—์„œ๋Š” XLA์— ํŠน์ •ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฃผ์˜ํ•  ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. XLA๊ฐ€ ๊ฐ€์ ธ๋‹ค์ค„ ์†๋„ ํ–ฅ์ƒ์„ ์‹คํ˜„ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ด๋ฅผ ์•Œ๊ณ  ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ์ด์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. ## ์ฃผ์˜ํ•  ์  [[gotchas-to-be-aware-of]] XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜(`xla_generate()`์™€ ๊ฐ™์€)๋ฅผ ์ฒ˜์Œ ์‹คํ–‰ํ•  ๋•Œ ๋‚ด๋ถ€์ ์œผ๋กœ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•˜๋ ค๊ณ  ํ•˜๋ฉฐ, ์ด๋Š” ์‹œ๊ฐ„์ด ์†Œ์š”๋ฉ๋‹ˆ๋‹ค. ์ด ๊ณผ์ •์€ [โ€œ์ถ”์ (tracing)โ€](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing)์ด๋ผ๊ณ  ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋น ๋ฅด์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `xla_generate()`(๋˜๋Š” ๋‹ค๋ฅธ XLA ํ™œ์„ฑํ™” ํ•จ์ˆ˜)์˜ ์—ฐ์† ํ˜ธ์ถœ์€ ํ•จ์ˆ˜์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ์ด ์ดˆ๊ธฐ์— ๊ตฌ์ถ•๋œ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„์™€ ๋™์ผํ•œ ํ˜•ํƒœ๋ฅผ ๋”ฐ๋ฅธ๋‹ค๋ฉด, ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ์ถ”๋ก ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ž…๋ ฅ ํ˜•ํƒœ๊ฐ€ ๊ณ ์ •๋œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ์ด๋ฏธ์ง€)์—๋Š” ๋ฌธ์ œ๊ฐ€ ๋˜์ง€ ์•Š์ง€๋งŒ, ๊ฐ€๋ณ€ ์ž…๋ ฅ ํ˜•ํƒœ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ(์˜ˆ: ํ…์ŠคํŠธ)๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `xla_generate()`๊ฐ€ ํ•ญ์ƒ ๋™์ผํ•œ ์ž…๋ ฅ ํ˜•ํƒœ๋กœ ๋™์ž‘ํ•˜๋„๋ก ํ•˜๋ ค๋ฉด, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ `padding` ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") input_string = ["TensorFlow is"] xla_generate = tf.function(model.generate, jit_compile=True) # ์—ฌ๊ธฐ์„œ, padding ์˜ต์…˜์ด ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") generated_tokens = xla_generate(**tokenized_input, num_beams=2) decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True) print(f"Generated -- {decoded_text}") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `xla_generate()`์— ๋Œ€ํ•œ ์ž…๋ ฅ์ด ํ•ญ์ƒ ์ถ”์ ๋œ ํ˜•ํƒœ๋กœ ์ „๋‹ฌ๋˜์–ด ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๊ฐ€์†ํ™”๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋กœ ์ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py import time import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>") model = TFAutoModelForCausalLM.from_pretrained("gpt2") xla_generate = tf.function(model.generate, jit_compile=True) for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]: tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf") start = time.time_ns() generated_tokens = xla_generate(**tokenized_input, num_beams=2) end = time.time_ns() print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n") ``` Tesla T4 GPU์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ถœ๋ ฅ์„ ์˜ˆ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Execution time -- 30819.6 ms Execution time -- 79.0 ms Execution time -- 78.9 ms ``` `xla_generate()`์˜ ์ฒซ ๋ฒˆ์งธ ํ˜ธ์ถœ์€ ์ถ”์  ๋•Œ๋ฌธ์— ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ์ง€๋งŒ, ์—ฐ์† ํ˜ธ์ถœ์€ ๋ช‡ ๋ฐฐ๋‚˜ ๋น ๋ฆ…๋‹ˆ๋‹ค. ์ƒ์„ฑ ์˜ต์…˜์— ๋Œ€ํ•œ ์–ด๋–ค ๋ณ€๊ฒฝ์ด๋“  ๋‹ค์‹œ ์ถ”์ ์„ ์œ ๋ฐœํ•˜๋ฏ€๋กœ ์ƒ์„ฑ ์‹œ๊ฐ„์ด ๋Š๋ ค์งˆ ์ˆ˜ ์žˆ์Œ์„ ๋ช…์‹ฌํ•˜์„ธ์š”. ์ด ๋ฌธ์„œ์—์„œ๋Š” ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•˜๋Š” ๋ชจ๋“  ํ…์ŠคํŠธ ์ƒ์„ฑ ์˜ต์…˜์„ ๋‹ค๋ฃจ์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ณ ๊ธ‰ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋Œ€ํ•ด ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ## ์ถ”๊ฐ€ ์ž๋ฃŒ [[additional-resources]] ์—ฌ๊ธฐ์— ๐Ÿค— Transformers์™€ XLA์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ž๋ฃŒ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด Colab ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb)์€ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋”([T5](https://huggingface.co/docs/transformers/model_doc/t5)์™€ ๊ฐ™์€) ๋ฐ ๋””์ฝ”๋” ์ „์šฉ([GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)์™€ ๊ฐ™์€) ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์„ ์‹คํ—˜ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ™”ํ˜• ๋ฐ๋ชจ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://huggingface.co/blog/tf-xla-generate)์€ TensorFlow์—์„œ XLA์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ์™€ ํ•จ๊ป˜ XLA์™€ ํ˜ธํ™˜๋˜๋Š” ๋ชจ๋ธ์˜ ๋น„๊ต ๋ฒค์น˜๋งˆํฌ์— ๋Œ€ํ•œ ๊ฐœ์š”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. * [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html)์€ ๐Ÿค— Transformers์˜ TensorFlow ๋ชจ๋ธ์— XLA ์ง€์›์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•œ ๋””์ž์ธ ์ฒ ํ•™์„ ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. 
* XLA์™€ TensorFlow ๊ทธ๋ž˜ํ”„์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ ์ถ”์ฒœํ•˜๋Š” ๊ธ€: * [XLA: ๊ธฐ๊ณ„ ํ•™์Šต์„ ์œ„ํ•œ ์ตœ์ ํ™” ์ปดํŒŒ์ผ๋Ÿฌ](https://www.tensorflow.org/xla) * [๊ทธ๋ž˜ํ”„ ๋ฐ tf.function ์†Œ๊ฐœ](https://www.tensorflow.org/guide/intro_to_graphs) * [tf.function์œผ๋กœ ์„ฑ๋Šฅ ํ–ฅ์ƒํ•˜๊ธฐ](https://www.tensorflow.org/guide/function)
hf_public_repos/transformers/docs/source/ko/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Transformers ์„ค์น˜ ๋ฐฉ๋ฒ• ! pip install transformers datasets # ๋งˆ์ง€๋ง‰ ๋ฆด๋ฆฌ์Šค ๋Œ€์‹  ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด, ์œ„ ๋ช…๋ น์„ ์ฃผ์„์œผ๋กœ ๋ฐ”๊พธ๊ณ  ์•„๋ž˜ ๋ช…๋ น์„ ํ•ด์ œํ•˜์„ธ์š”. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
hf_public_repos/transformers/docs/source/ko/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ œํ’ˆ ํ™˜๊ฒฝ์—์„œ ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋ธ์„ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  ํŠน์ • ๋Ÿฐํƒ€์ž„๊ณผ ํ•˜๋“œ์›จ์–ด์—์„œ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ฉด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ Transformers์˜ ํ™•์žฅ์œผ๋กœ, PyTorch ๋˜๋Š” TensorFlow์—์„œ ๋ชจ๋ธ์„ ONNX์™€ TFLite์™€ ๊ฐ™์€ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” `exporters` ๋ชจ๋“ˆ์„ ํ†ตํ•ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๋˜ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™” ๋„๊ตฌ ์„ธํŠธ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŠน์ • ํ•˜๋“œ์›จ์–ด์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ์‹คํ–‰ํ•  ๋•Œ ์ตœ๋Œ€ ํšจ์œจ์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์•ˆ๋‚ด์„œ๋Š” ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. TFLite๋กœ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋Š” ์•ˆ๋‚ด์„œ๋Š” [TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ํŽ˜์ด์ง€](tflite)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] [ONNX (Open Neural Network eXchange)](http://onnx.ai)๋Š” PyTorch์™€ TensorFlow๋ฅผ ํฌํ•จํ•œ ๋‹ค์–‘ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์‹ฌ์ธต ํ•™์Šต ๋ชจ๋ธ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๊ณตํ†ต ์—ฐ์‚ฐ์ž ์„ธํŠธ์™€ ๊ณตํ†ต ํŒŒ์ผ ํ˜•์‹์„ ์ •์˜ํ•˜๋Š” ์˜คํ”ˆ ํ‘œ์ค€์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด์ง€๋ฉด ์ด๋Ÿฌํ•œ ์—ฐ์‚ฐ์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ ๊ฒฝ๋ง์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๊ฐ€ ํ๋ฅด๋Š” ํ๋ฆ„์„ ๋‚˜ํƒ€๋‚ด๋Š” ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„(์ผ๋ฐ˜์ ์œผ๋กœ _์ค‘๊ฐ„ ํ‘œํ˜„_์ด๋ผ๊ณ  ํ•จ)๊ฐ€ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ‘œ์ค€ํ™”๋œ ์—ฐ์‚ฐ์ž์™€ ๋ฐ์ดํ„ฐ ์œ ํ˜•์„ ๊ฐ€์ง„ ๊ทธ๋ž˜ํ”„๋ฅผ ๋…ธ์ถœํ•จ์œผ๋กœ์จ, ONNX๋Š” ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„์— ์‰ฝ๊ฒŒ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, PyTorch์—์„œ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  TensorFlow์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๊ทธ ๋ฐ˜๋Œ€๋„ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค). ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ธ ๋ชจ๋ธ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - [๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) ๋ฐ [์–‘์žํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)์™€ ๊ฐ™์€ ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋ฉ๋‹ˆ๋‹ค. - ONNX Runtime์„ ํ†ตํ•ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`ORTModelForXXX` ํด๋ž˜์Šค๋“ค](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort)์„ ํ†ตํ•ด ๋™์ผํ•œ `AutoModel` API๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ์ด API๋Š” ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. - [์ตœ์ ํ™”๋œ ์ถ”๋ก  ํŒŒ์ดํ”„๋ผ์ธ](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers์˜ [`pipeline`] ํ•จ์ˆ˜์™€ ๋™์ผํ•œ API๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๊ตฌ์„ฑ ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜์—ฌ ONNX ๋‚ด๋ณด๋‚ด๊ธฐ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ๊ฐ์ฒด๋Š” ์—ฌ๋Ÿฌ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ๋ฏธ๋ฆฌ ์ค€๋น„๋˜์–ด ์žˆ์œผ๋ฉฐ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜์— ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ๋ฆฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ๋ชจ๋‘ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: - ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ CLI๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Optimum์œผ๋กœ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ### CLI๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-cli]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋จผ์ € ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install optimum[exporters] ``` ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ์ธ์ˆ˜๋ฅผ ํ™•์ธํ•˜๋ ค๋ฉด [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)๋ฅผ ์ฐธ์กฐํ•˜๊ฑฐ๋‚˜ ๋ช…๋ น์ค„์—์„œ ๋„์›€๋ง์„ ๋ณด์„ธ์š”. ```bash optimum-cli export onnx --help ``` ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Hub์—์„œ `distilbert-base-uncased-distilled-squad`์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ ``` ์œ„์™€ ๊ฐ™์ด ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋กœ๊ทธ๊ฐ€ ํ‘œ์‹œ๋˜๊ณ  ๊ฒฐ๊ณผ์ธ `model.onnx`๊ฐ€ ์ €์žฅ๋œ ์œ„์น˜๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx... -[โœ“] ONNX model output names match reference model (start_logits, end_logits) - Validating ONNX Model output "start_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) - Validating ONNX Model output "end_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx ``` ์œ„์˜ ์˜ˆ์ œ๋Š” ๐Ÿค— Hub์—์„œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ๋•Œ์—๋Š” ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ(`local_path`)์— ์ €์žฅํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. CLI๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ์—๋Š” ๐Ÿค— Hub์˜ ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„ ๋Œ€์‹  `model` ์ธ์ˆ˜์— `local_path`๋ฅผ ์ „๋‹ฌํ•˜๊ณ  `--task` ์ธ์ˆ˜๋ฅผ ์ œ๊ณตํ•˜์„ธ์š”. ์ง€์›๋˜๋Š” ์ž‘์—…์˜ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/task_manager)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. `task` ์ธ์ˆ˜๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์œผ๋ฉด ์ž‘์—…์— ํŠนํ™”๋œ ํ—ค๋“œ ์—†์ด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋กœ ๊ธฐ๋ณธ ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ```bash optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/ ``` ๊ทธ ๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ [๊ฐ€์†๊ธฐ](https://onnx.ai/supported-tools.html#deployModel) ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, [ONNX Runtime](https://onnxruntime.ai/)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx") >>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx") >>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt") >>> outputs = model(**inputs) ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [Keras organization](https://huggingface.co/keras-io)์—์„œ ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/ ``` ### `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-optimumonnxruntime]] CLI ๋Œ€์‹ ์— `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ๋ฐฉ์‹์œผ๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ํ•˜์„ธ์š”: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert_base_uncased_squad" >>> save_directory = "onnx/" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the onnx model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) ``` ### ์ง€์›๋˜์ง€ ์•Š๋Š” ์•„ํ‚คํ…์ฒ˜์˜ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-for-an-unsupported-architecture]] ํ˜„์žฌ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์—†๋Š” ๋ชจ๋ธ์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ๊ธฐ์—ฌํ•˜๋ ค๋ฉด, ๋จผ์ € [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview)์—์„œ ์ง€์›๋˜๋Š”์ง€ ํ™•์ธํ•œ ํ›„ ์ง€์›๋˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋Š” [๐Ÿค— Optimum์— ๊ธฐ์—ฌ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute)ํ•˜์„ธ์š”. ### `transformers.onnx`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-with-transformersonnx]] <Tip warning={true}> `tranformers.onnx`๋Š” ๋” ์ด์ƒ ์œ ์ง€๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„ธ์š”. ์ด ์„น์…˜์€ ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ œ๊ฑฐ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install transformers[onnx] ``` `transformers.onnx` ํŒจํ‚ค์ง€๋ฅผ Python ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=distilbert-base-uncased onnx/ ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `--model` ์ธ์ˆ˜์— ์ •์˜๋œ ์ฒดํฌํฌ์ธํŠธ์˜ ONNX ๊ทธ๋ž˜ํ”„๊ฐ€ ๋‚ด๋ณด๋‚ด์ง‘๋‹ˆ๋‹ค. ๐Ÿค— Hub์—์„œ ์ œ๊ณตํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋‚˜ ๋กœ์ปฌ์— ์ €์žฅ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ ๊ฐ€์†๊ธฐ ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ONNX Runtime์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` ํ•„์š”ํ•œ ์ถœ๋ ฅ ์ด๋ฆ„(์˜ˆ: `["last_hidden_state"]`)์€ ๊ฐ ๋ชจ๋ธ์˜ ONNX ๊ตฌ์„ฑ์„ ํ™•์ธํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, DistilBERT์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` ๋กœ์ปฌ์— ์ €์žฅ๋œ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜ ํŒŒ์ผ๊ณผ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ์— ์ €์žฅํ•œ ๋‹ค์Œ, transformers.onnx ํŒจํ‚ค์ง€์˜ --model ์ธ์ˆ˜๋ฅผ ์›ํ•˜๋Š” ๋””๋ ‰ํ† ๋ฆฌ๋กœ ์ง€์ •ํ•˜์—ฌ ONNX๋กœ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ```
hf_public_repos/transformers/docs/source/ko/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPU์—์„œ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ [[efficient-training-on-cpu]] ์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ›ˆ๋ จํ•˜๋Š” ๋ฐ ์ดˆ์ ์„ ๋งž์ถฅ๋‹ˆ๋‹ค. ## IPEX์™€ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ [[mixed-precision-with-ipex]] IPEX๋Š” AVX-512 ์ด์ƒ์„ ์ง€์›ํ•˜๋Š” CPU์— ์ตœ์ ํ™”๋˜์–ด ์žˆ์œผ๋ฉฐ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU์—๋„ ๊ธฐ๋Šฅ์ ์œผ๋กœ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ AVX-512 ์ด์ƒ์˜ Intel CPU ์„ธ๋Œ€์—์„œ๋Š” ์„ฑ๋Šฅ์ƒ ์ด์ ์ด ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜์ง€๋งŒ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU (์˜ˆ: AMD CPU ๋˜๋Š” ์˜ค๋ž˜๋œ Intel CPU)์˜ ๊ฒฝ์šฐ์—๋Š” IPEX ์•„๋ž˜์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์ด๋Š” ๋ณด์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. IPEX๋Š” Float32์™€ BFloat16๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ CPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. BFloat16์˜ ์‚ฌ์šฉ์€ ๋‹ค์Œ ์„น์…˜์˜ ์ฃผ์š” ์ดˆ์ ์ž…๋‹ˆ๋‹ค. ์ €์ •๋ฐ€๋„ ๋ฐ์ดํ„ฐ ํƒ€์ž…์ธ BFloat16์€ 3์„ธ๋Œ€ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ (์ฝ”๋“œ๋ช…: Cooper Lake)์—์„œ AVX512 ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ๋„ค์ดํ‹ฐ๋ธŒ๋กœ ์ง€์›ํ•ด ์™”์œผ๋ฉฐ, ๋‹ค์Œ ์„ธ๋Œ€์˜ Intelยฎ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ์—์„œ Intelยฎ Advanced Matrix Extensions (Intelยฎ AMX) ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ์ง€์›ํ•˜์—ฌ ์„ฑ๋Šฅ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚ฌ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. CPU ๋ฐฑ์—”๋“œ์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๊ธฐ๋Šฅ์€ PyTorch-1.10๋ถ€ํ„ฐ ํ™œ์„ฑํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋™์‹œ์—, Intelยฎ Extension for PyTorch์—์„œ BFloat16์— ๋Œ€ํ•œ CPU์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ฐ ์—ฐ์‚ฐ์ž์˜ BFloat16 ์ตœ์ ํ™”๋ฅผ ๋Œ€๊ทœ๋ชจ๋กœ ํ™œ์„ฑํ™”ํ•˜๊ณ , PyTorch ๋งˆ์Šคํ„ฐ ๋ธŒ๋žœ์น˜๋กœ ๋ถ€๋ถ„์ ์œผ๋กœ ์—…์ŠคํŠธ๋ฆผ์„ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž๋“ค์€ IPEX ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์„ฑ๋Šฅ๊ณผ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฆด๋ฆฌ์Šค๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ๊ฐ‘๋‹ˆ๋‹ค. pip๋ฅผ ํ†ตํ•ด ์„ค์น˜ํ•˜๋ ค๋ฉด: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | ``` pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` [IPEX ์„ค์น˜](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### Trainer์—์„œ์˜ ์‚ฌ์šฉ๋ฒ• [[usage-in-trainer]] Trainer์—์„œ IPEX์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž๋Š” ํ›ˆ๋ จ ๋ช…๋ น ์ธ์ˆ˜์— `use_ipex`, `bf16`, `no_cuda`๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Transformers ์งˆ๋ฌธ-์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
- CPU์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ IPEX๋กœ ํ›ˆ๋ จํ•˜๊ธฐ: <pre> python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex \</b> <b>--bf16 --no_cuda</b></pre> ### ์‹ค์Šต ์˜ˆ์‹œ [[practice-example]] ๋ธ”๋กœ๊ทธ: [Intel Sapphire Rapids๋กœ PyTorch Transformers ๊ฐ€์†ํ™”](https://huggingface.co/blog/intel-sapphire-rapids)
hf_public_repos/transformers/docs/source/ko/bertology.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BERTology BERT์™€ ๊ฐ™์€ ๋Œ€๊ทœ๋ชจ ํŠธ๋žœ์Šคํฌ๋จธ์˜ ๋‚ด๋ถ€ ๋™์ž‘์„ ์กฐ์‚ฌํ•˜๋Š” ์—ฐ๊ตฌ ๋ถ„์•ผ๊ฐ€ ์ ์  ๋” ์ค‘์š”ํ•ด์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜น์ž๋Š” "BERTology"๋ผ ์นญํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ„์•ผ์˜ ์ข‹์€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - BERT๋Š” ๊ณ ์ „์ ์ธ NLP ํŒŒ์ดํ”„๋ผ์ธ์˜ ์žฌ๋ฐœ๊ฒฌ - Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950 - 16๊ฐœ์˜ ํ—ค๋“œ๊ฐ€ ์ •๋ง๋กœ 1๊ฐœ๋ณด๋‹ค ๋‚˜์€๊ฐ€? - Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 - BERT๋Š” ๋ฌด์—‡์„ ๋ณด๋Š”๊ฐ€? BERT์˜ ์–ดํ…์…˜ ๋ถ„์„ - Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341 - CAT-probing: ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ฝ”๋“œ ๊ตฌ์กฐ๋ฅผ ๋ณด๋Š”์ง€ ์•Œ์•„๋ณด๊ธฐ ์œ„ํ•œ ๋ฉ”ํŠธ๋ฆญ ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ๋ฒ•: https://arxiv.org/abs/2210.04633 ์šฐ๋ฆฌ๋Š” ์ด ์ƒˆ๋กœ์šด ์—ฐ๊ตฌ ๋ถ„์•ผ์˜ ๋ฐœ์ „์„ ๋•๊ธฐ ์œ„ํ•ด, BERT/GPT/GPT-2 ๋ชจ๋ธ์— ๋‚ด๋ถ€ ํ‘œํ˜„์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ๋“ค์€ ์ฃผ๋กœ Paul Michel์˜ ํ›Œ๋ฅญํ•œ ์ž‘์—…์„ ์ฐธ๊ณ ํ•˜์—ฌ ๊ฐœ๋ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค (https://arxiv.org/abs/1905.10650): - BERT/GPT/GPT-2์˜ ๋ชจ๋“  ์€๋‹‰ ์ƒํƒœ์— ์ ‘๊ทผํ•˜๊ธฐ, - BERT/GPT/GPT-2์˜ ๊ฐ ํ—ค๋“œ์˜ ๋ชจ๋“  ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ์ ‘๊ทผํ•˜๊ธฐ, - ํ—ค๋“œ์˜ ์ถœ๋ ฅ ๊ฐ’๊ณผ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ ํ—ค๋“œ ์ค‘์š”๋„ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  https://arxiv.org/abs/1905.10650์—์„œ ์„ค๋ช…๋œ ๋Œ€๋กœ ํ—ค๋“œ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค์„ ์ดํ•ดํ•˜๊ณ  ์ง์ ‘ ์‚ฌ์šฉํ•ด๋ณผ ์ˆ˜ ์žˆ๋„๋ก [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py) ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” GLUE์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๊ฐ€์ง€์น˜๊ธฐ(prune)ํ•ด๋ด…๋‹ˆ๋‹ค.
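์˜ˆ๋ฅผ ๋“ค์–ด, ์€๋‹‰ ์ƒํƒœ์™€ ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ์ ‘๊ทผํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ์ˆœ์ „ํŒŒ ์‹œ `output_hidden_states`์™€ `output_attentions`๋ฅผ ์ผœ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” `bert-base-uncased` ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERTology studies what BERT learns.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

# hidden_states: ์ž„๋ฒ ๋”ฉ ์ถœ๋ ฅ + ๊ฐ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ, attentions: ๋ ˆ์ด์–ด๋ณ„ ์–ดํ…์…˜ ํ™•๋ฅ 
print(len(outputs.hidden_states))   # 13 (์ž„๋ฒ ๋”ฉ + 12๊ฐœ ๋ ˆ์ด์–ด)
print(outputs.attentions[0].shape)  # (๋ฐฐ์น˜, ํ—ค๋“œ ์ˆ˜, ์‹œํ€€์Šค ๊ธธ์ด, ์‹œํ€€์Šค ๊ธธ์ด)
```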
hf_public_repos/transformers/docs/source/ko/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งž์ถคํ˜• ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ[[create-a-custom-architecture]] [`AutoClass`](model_doc/auto)๋Š” ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๋ฏธ๋ฆฌ ํ•™์Šต๋œ configuration๊ณผ ๊ฐ€์ค‘์น˜๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ฒดํฌํฌ์ธํŠธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š” ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด `AutoClass`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํŠน์ • ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ณด๋‹ค ์„ธ๋ฐ€ํ•˜๊ฒŒ ์ œ์–ดํ•˜๊ณ ์ž ํ•˜๋Š” ์‚ฌ์šฉ์ž๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋งŒ์œผ๋กœ ์ปค์Šคํ…€ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์—ฐ๊ตฌ, ๊ต์œก ๋˜๋Š” ์‹คํ—˜ํ•˜๋Š” ๋ฐ ๊ด€์‹ฌ์ด ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ์šฉ์ž์—๊ฒŒ ํŠนํžˆ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” 'AutoClass'๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  ์ปค์Šคํ…€ ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋ชจ๋ธ configuration์„ ๊ฐ€์ ธ์˜ค๊ณ  ์‚ฌ์šฉ์ž ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ๋Š๋ฆฌ๊ฑฐ๋‚˜ ๋น ๋ฅธ ํ† ํฐํ™”๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. - ๋น„์ „ ์ž‘์—…์„ ์œ„ํ•œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ์˜ค๋””์˜ค ์ž‘์—…์„ ์œ„ํ•œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. - ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์šฉ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ## Configuration[[configuration]] [configuration](main_classes/configuration)์€ ๋ชจ๋ธ์˜ ํŠน์ • ์†์„ฑ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ฐ ๋ชจ๋ธ ๊ตฌ์„ฑ์—๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ชจ๋“  NLP ๋ชจ๋ธ์—๋Š” `hidden_size`, `num_attention_heads`, `num_hidden_layers` ๋ฐ `vocab_size` ์†์„ฑ์ด ๊ณตํ†ต์œผ๋กœ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์†์„ฑ์€ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•  attention heads ๋˜๋Š” hidden layers์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. [DistilBERT](model_doc/distilbert) ์†์„ฑ์„ ๊ฒ€์‚ฌํ•˜๊ธฐ ์œ„ํ•ด [`DistilBertConfig`]์— ์ ‘๊ทผํ•˜์—ฌ ์ž์„ธํžˆ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print(config) DistilBertConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` [`DistilBertConfig`]๋Š” ๊ธฐ๋ณธ [`DistilBertModel`]์„ ๋นŒ๋“œํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ์†์„ฑ์„ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์†์„ฑ์€ ์ปค์Šคํ„ฐ๋งˆ์ด์ง•์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์‹คํ—˜์„ ์œ„ํ•œ ๊ณต๊ฐ„์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ธฐ๋ณธ ๋ชจ๋ธ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `activation` ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ๋‹ค๋ฅธ ํ™œ์„ฑํ™” ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด์„ธ์š”. - `attention_dropout` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์–ดํ…์…˜ ํ™•๋ฅ ์— ๋” ๋†’์€ ๋“œ๋กญ์•„์›ƒ ๋น„์œจ์„ ์‚ฌ์šฉํ•˜์„ธ์š”. 
```py >>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4) >>> print(my_config) DistilBertConfig { "activation": "relu", "attention_dropout": 0.4, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.16.2", "vocab_size": 30522 } ``` ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ์†์„ฑ์€ [`~PretrainedConfig.from_pretrained`] ํ•จ์ˆ˜์—์„œ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4) ``` ๋ชจ๋ธ ๊ตฌ์„ฑ์ด ๋งŒ์กฑ์Šค๋Ÿฌ์šฐ๋ฉด [`~PretrainedConfig.save_pretrained`]๋กœ ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค์ • ํŒŒ์ผ์€ ์ง€์ •๋œ ์ž‘์—… ๊ฒฝ๋กœ์— JSON ํŒŒ์ผ๋กœ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` configuration ํŒŒ์ผ์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`~PretrainedConfig.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> configuration ํŒŒ์ผ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ €์žฅํ•˜๊ฑฐ๋‚˜ ์‚ฌ์šฉ์ž ์ •์˜ configuration ์†์„ฑ๊ณผ ๊ธฐ๋ณธ configuration ์†์„ฑ์˜ ์ฐจ์ด์ ๋งŒ ์ €์žฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [configuration](main_classes/configuration) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ๋ชจ๋ธ[[model]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” [๋ชจ๋ธ(model)](main_classes/models)์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋Š์Šจํ•˜๊ฒŒ ์•„ํ‚คํ…์ฒ˜๋ผ๊ณ ๋„ ๋ถˆ๋ฆฌ๋Š” ๋ชจ๋ธ์€ ๊ฐ ๊ณ„์ธต์ด ์ˆ˜ํ–‰ํ•˜๋Š” ๋™์ž‘๊ณผ ๋ฐœ์ƒํ•˜๋Š” ์ž‘์—…์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. configuration์˜ `num_hidden_layers`์™€ ๊ฐ™์€ ์†์„ฑ์€ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ •์˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ธฐ๋ณธ ํด๋ž˜์Šค [`PreTrainedModel`]๊ณผ ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์…€ํ”„ ์–ดํ…์…˜ ํ—ค๋“œ ๊ฐ€์ง€ ์น˜๊ธฐ์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) ๋˜๋Š” [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/flax.linen.html#module)์˜ ์„œ๋ธŒํด๋ž˜์Šค์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ๊ฐ ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์‚ฌ์šฉ๋ฒ•๊ณผ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค. <frameworkcontent> <pt> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~PreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </pt> <tf> ์‚ฌ์šฉ์ž ์ง€์ • configuration ์†์„ฑ์„ ๋ชจ๋ธ์— ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> tf_model = TFDistilBertModel(my_config) ``` ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜ ๋Œ€์‹  ์ž„์˜์˜ ๊ฐ’์„ ๊ฐ€์ง„ ๋ชจ๋ธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „๊นŒ์ง€๋Š” ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ํ”„๋กœ์„ธ์Šค์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ฆฌ์†Œ์Šค์˜ ์ผ๋ถ€๋งŒ ์‚ฌ์šฉํ•˜๋ฉด์„œ ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ๋” ๋นจ๋ฆฌ ์–ป์œผ๋ ค๋ฉด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ [`~TFPreTrainedModel.from_pretrained`]๋กœ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") ``` ๐Ÿค— Transformers์—์„œ ์ œ๊ณตํ•œ ๋ชจ๋ธ์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration์„ ์ž๋™์œผ๋กœ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ configuration ์†์„ฑ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ถ€๋ฅผ ์‚ฌ์šฉ์ž ์ง€์ •์œผ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent> ### ๋ชจ๋ธ ํ—ค๋“œ[[model-heads]] ์ด ์‹œ์ ์—์„œ *์€๋‹‰ ์ƒํƒœ(hidden state)*๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์„ ๊ฐ–๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ํ—ค๋“œ์— ์ž…๋ ฅ์œผ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ์ด ํ•ด๋‹น ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ํ•œ ๊ฐ ์ž‘์—…๋งˆ๋‹ค ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค(์ฆ‰, ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ ์‹œํ€€์Šค ๊ฐ„ ์ž‘์—…์—๋Š” DistilBERT๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Œ). <frameworkcontent> <pt> ์˜ˆ๋ฅผ ๋“ค์–ด, [`DistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`DistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </pt> <tf> ์˜ˆ๋ฅผ ๋“ค์–ด, [`TFDistilBertForSequenceClassification`]์€ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๊ฐ€ ์žˆ๋Š” ๊ธฐ๋ณธ DistilBERT ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ํ’€๋ง๋œ ์ถœ๋ ฅ ์œ„์— ์žˆ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๋‹ค๋ฅธ ๋ชจ๋ธ ํ—ค๋“œ๋กœ ์ „ํ™˜ํ•˜์—ฌ ์ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‹ค๋ฅธ ์ž‘์—…์— ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต ์ž‘์—…์˜ ๊ฒฝ์šฐ, [`TFDistilBertForQuestionAnswering`] ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์งˆ์˜์‘๋‹ต ํ—ค๋“œ๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ ์ถœ๋ ฅ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </tf> </frameworkcontent> ## ํ† ํฌ๋‚˜์ด์ €[[tokenizer]] ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋งˆ์ง€๋ง‰์œผ๋กœ ํ•„์š”ํ•œ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” [ํ† ํฌ๋‚˜์ด์ €](main_classes/tokenizer)์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - [`PreTrainedTokenizer`]: ํŒŒ์ด์ฌ์œผ๋กœ ๊ตฌํ˜„๋œ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. - [`PreTrainedTokenizerFast`]: Rust ๊ธฐ๋ฐ˜ [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ๋งŒ๋“ค์–ด์ง„ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. ์ด ํ† ํฌ๋‚˜์ด์ €๋Š” Rust๋กœ ๊ตฌํ˜„๋˜์–ด ๋ฐฐ์น˜ ํ† ํฐํ™”์—์„œ ํŠนํžˆ ๋น ๋ฆ…๋‹ˆ๋‹ค. ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋Š” ํ† ํฐ์„ ์›๋ž˜ ๋‹จ์–ด๋‚˜ ๋ฌธ์ž์— ๋งคํ•‘ํ•˜๋Š” *์˜คํ”„์…‹ ๋งคํ•‘*๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ๋ฉ”์†Œ๋“œ๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋‘ ํ† ํฌ๋‚˜์ด์ € ๋ชจ๋‘ ์ธ์ฝ”๋”ฉ ๋ฐ ๋””์ฝ”๋”ฉ, ์ƒˆ ํ† ํฐ ์ถ”๊ฐ€, ํŠน์ˆ˜ ํ† ํฐ ๊ด€๋ฆฌ์™€ ๊ฐ™์€ ์ผ๋ฐ˜์ ์ธ ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> ๋ชจ๋“  ๋ชจ๋ธ์ด ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์›ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ด [ํ‘œ](index#supported-frameworks)์—์„œ ๋ชจ๋ธ์˜ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ € ์ง€์› ์—ฌ๋ถ€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. </Tip> ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง์ ‘ ํ•™์Šตํ•œ ๊ฒฝ์šฐ, *์–ดํœ˜(vocabulary)* ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` ์‚ฌ์šฉ์ž ์ง€์ • ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜๋Š” ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €์—์„œ ์ƒ์„ฑ๋œ ์–ดํœ˜์™€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ž…๋ ฅ์ด ์˜๋ฏธ๋ฅผ ๊ฐ–์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค. [`DistilBertTokenizer`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์–ดํœ˜๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") ``` [`DistilBertTokenizerFast`] ํด๋ž˜์Šค๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased") ``` <Tip> [`AutoTokenizer`]๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ๋น ๋ฅธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด `from_pretrained`์—์„œ `use_fast=False`๋ฅผ ์„ค์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ## ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ[[image-processor]] ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ(image processor)๋Š” ๋น„์ „ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~image_processing_utils.ImageProcessingMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— [ViT](model_doc/vit)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`ViTImageProcessor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์„ ์›ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`ViTImageProcessor`] ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "feature_extractor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` ## ํŠน์„ฑ ์ถ”์ถœ๊ธฐ[[feature-extractor]] ํŠน์„ฑ ์ถ”์ถœ๊ธฐ(feature extractor)๋Š” ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ [`~feature_extraction_utils.FeatureExtractionMixin`] ํด๋ž˜์Šค์—์„œ ์ƒ์†๋˜๋ฉฐ, ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด [`SequenceFeatureExtractor`] ํด๋ž˜์Šค์—์„œ ์ƒ์†ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋ชจ๋ธ๊ณผ ์—ฐ๊ฒฐ๋œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— [Wav2Vec2](model_doc/wav2vec2)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ [`Wav2Vec2FeatureExtractor`]๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` <Tip> ์‚ฌ์šฉ์ž ์ง€์ •์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ใ…๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ถˆ๋Ÿฌ ์˜ค๋ฉด ๋ฉ๋‹ˆ๋‹ค. </Tip> ์‚ฌ์šฉ์ž ์ง€์ • ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`Wav2Vec2FeatureExtractor`] ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": false, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 8000 } ``` ## ํ”„๋กœ์„ธ์„œ[[processor]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๐Ÿค— Transformers๋Š” ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋ฐ ํ† ํฌ๋‚˜์ด์ €์™€ ๊ฐ™์€ ์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋‹จ์ผ ๊ฐ์ฒด๋กœ ํŽธ๋ฆฌํ•˜๊ฒŒ ๋ž˜ํ•‘ํ•˜๋Š” ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…(Automatic Speech Recognition task (ASR))์— [`Wav2Vec2Processor`]๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹ ์ž‘์—…์€ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๋ฏ€๋กœ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
์˜ค๋””์˜ค ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํŠน์„ฑ ์ถ”์ถœ๊ธฐ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True) ``` ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•  ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt") ``` [`Wav2Vec2Processor`]์—์„œ ํŠน์„ฑ ์ถ”์ถœ๊ธฐ์™€ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` configuration๊ณผ ๋ชจ๋ธ์ด๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค์™€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค(ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์„ฑ ์ถ”์ถœ๊ธฐ ๋˜๋Š” ํ”„๋กœ์„ธ์„œ)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๐Ÿค— Transformers์—์„œ ์ง€์›ํ•˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ์›ํ•˜๋Š” ํŠน์ • ์†์„ฑ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ์„ค์ •ํ•˜๊ฑฐ๋‚˜ ๊ธฐ์กด์˜ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ˆ˜์ •ํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
hf_public_repos/transformers/docs/source/ko/custom_tools.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ[[custom-tools-and-prompts]] <Tip> Transformers์™€ ๊ด€๋ จํ•˜์—ฌ ์–ด๋–ค ๋„๊ตฌ์™€ ์—์ด์ „ํŠธ๊ฐ€ ์žˆ๋Š”์ง€ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [Transformers Agents](transformers_agents) ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ์ฝ์–ด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> <Tip warning={true}> Transformers Agent๋Š” ์‹คํ—˜ ์ค‘์ธ API๋กœ ์–ธ์ œ๋“ ์ง€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API ๋˜๋Š” ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜๊ธฐ ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์— ์—์ด์ „ํŠธ๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒฐ๊ณผ๋„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์—์ด์ „ํŠธ์—๊ฒŒ ๊ถŒํ•œ์„ ๋ถ€์—ฌํ•˜๊ณ  ์ƒˆ๋กœ์šด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค๊ณ  ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ• ## ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-prompt]] [Transformers Agents](transformers_agents)์—์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” [`~Agent.run`] ๋ฐ [`~Agent.chat`] ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `run`(์‹คํ–‰) ๋ชจ๋“œ์™€ `chat`(์ฑ„ํŒ…) ๋ชจ๋“œ ๋ชจ๋‘ ๋™์ผํ•œ ๋กœ์ง์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋ฅผ ๊ตฌ๋™ํ•˜๋Š” ์–ธ์–ด ๋ชจ๋ธ์€ ๊ธด ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์กฐ๊ฑด์ด ์ง€์ •๋˜๊ณ , ์ค‘์ง€ ํ† ํฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ๋‹ค์Œ ํ† ํฐ์„ ์ƒ์„ฑํ•˜์—ฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์™„์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ด์ „ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ๋ฐ ๋ชจ๋ธ ์ƒ์„ฑ์œผ๋กœ ์—ฐ์žฅ๋œ๋‹ค๋Š” ์ ์ด ๋‘ ๋ชจ๋“œ์˜ ์œ ์ผํ•œ ์ฐจ์ด์ ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์—์ด์ „ํŠธ๊ฐ€ ๊ณผ๊ฑฐ ์ƒํ˜ธ์ž‘์šฉ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜๋ฏ€๋กœ ์—์ด์ „ํŠธ์—๊ฒŒ ์ผ์ข…์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ œ๊ณตํ•˜๋Š” ์…ˆ์ž…๋‹ˆ๋‹ค. ### ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ[[structure-of-the-prompt]] ์–ด๋–ป๊ฒŒ ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜๋ฅผ ์ž˜ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ด…์‹œ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋Š” ํฌ๊ฒŒ ๋„ค ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - 1. ๋„์ž…: ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ํ–‰๋™ํ•ด์•ผ ํ•˜๋Š”์ง€, ๋„๊ตฌ์˜ ๊ฐœ๋…์— ๋Œ€ํ•œ ์„ค๋ช…. - 2. ๋ชจ๋“  ๋„๊ตฌ์— ๋Œ€ํ•œ ์„ค๋ช…. ์ด๋Š” ๋Ÿฐํƒ€์ž„์— ์‚ฌ์šฉ์ž๊ฐ€ ์ •์˜/์„ ํƒํ•œ ๋„๊ตฌ๋กœ ๋™์ ์œผ๋กœ ๋Œ€์ฒด๋˜๋Š” `<<all_tools>>` ํ† ํฐ์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. - 3. ์ž‘์—… ์˜ˆ์ œ ๋ฐ ํ•ด๋‹น ์†”๋ฃจ์…˜ ์„ธํŠธ. - 4. ํ˜„์žฌ ์˜ˆ์ œ ๋ฐ ํ•ด๊ฒฐ ์š”์ฒญ. ๊ฐ ๋ถ€๋ถ„์„ ๋” ์ž˜ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์งง์€ ๋ฒ„์ „์„ ํ†ตํ•ด `run` ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณด์ด๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task. [...] You can print intermediate results if it makes sense to do so. Tools: - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. 
It returns a text that contains the answer to the question. - image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to the caption and returns a text that contains the description in English. [...] Task: "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French." I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image. Answer: ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(image=image, question=translated_question) print(f"The answer is {answer}") ``` Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` [...] Task: "Draw me a picture of rivers and lakes" I will use the following ```` ๋„์ž…(*"๋„๊ตฌ:"* ์•ž์˜ ํ…์ŠคํŠธ)์—์„œ๋Š” ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๊ณ  ๋ฌด์—‡์„ ํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋Š” ํ•ญ์ƒ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์ด ๋ถ€๋ถ„์€ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ํ•„์š”๊ฐ€ ์—†์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„(*"๋„๊ตฌ"* ์•„๋ž˜์˜ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ)์€ `run` ๋˜๋Š” `chat`์„ ํ˜ธ์ถœํ•  ๋•Œ ๋™์ ์œผ๋กœ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํžˆ `agent.toolbox`์— ์žˆ๋Š” ๋„๊ตฌ ์ˆ˜๋งŒํผ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๊ฐ€ ์žˆ๊ณ , ๊ฐ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์œผ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค: ```text - <tool.name>: <tool.description> ``` ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ถœ๋ ฅํ•ด์„œ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from transformers import load_tool document_qa = load_tool("document-question-answering") print(f"- {document_qa.name}: {document_qa.description}") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. ``` ์—ฌ๊ธฐ์„œ ๋„๊ตฌ ์ด๋ฆ„์ด ์งง๊ณ  ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค๋ช…์€ ๋‘ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”๋ฐ, ์ฒซ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ๋„๊ตฌ์˜ ๊ธฐ๋Šฅ์„ ์„ค๋ช…ํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ์˜ˆ์ƒ๋˜๋Š” ์ž…๋ ฅ ์ธ์ˆ˜์™€ ๋ฐ˜ํ™˜ ๊ฐ’์„ ๋ช…์‹œํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ข‹์€ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ๋„๊ตฌ ์„ค๋ช…์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ์— ๋Œ€ํ•ด ์•Œ ์ˆ˜ ์žˆ๋Š” ์œ ์ผํ•œ ์ •๋ณด๋Š” ์ด๋ฆ„๊ณผ ์„ค๋ช…๋ฟ์ด๋ฏ€๋กœ, ์ด ๋‘ ๊ฐ€์ง€๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ž‘์„ฑํ•˜๊ณ  ๋„๊ตฌ ์ƒ์ž์— ์žˆ๋Š” ๊ธฐ์กด ๋„๊ตฌ์˜ ์Šคํƒ€์ผ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ์ด๋ฆ„์— ๋”ฐ๋ผ ์˜ˆ์ƒ๋˜๋Š” ๋ชจ๋“  ์ธ์ˆ˜๊ฐ€ ์„ค๋ช…์— ์ฝ”๋“œ ์Šคํƒ€์ผ๋กœ ์–ธ๊ธ‰๋˜์–ด ์žˆ๋Š”์ง€, ์˜ˆ์ƒ๋˜๋Š” ์œ ํ˜•๊ณผ ๊ทธ ์œ ํ˜•์ด ๋ฌด์—‡์ธ์ง€์— ๋Œ€ํ•œ ์„ค๋ช…์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. 
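์—์ด์ „ํŠธ๋ฅผ ์ด๋ฏธ ๋งŒ๋“ค์–ด ๋‘์—ˆ๋‹ค๋ฉด ๋„๊ตฌ ์ƒ์ž ์ „์ฒด์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ํ•œ๊บผ๋ฒˆ์— ์ถœ๋ ฅํ•ด์„œ ๊ธฐ์กด ๋„๊ตฌ๋“ค์ด ์–ด๋–ค ์Šคํƒ€์ผ๋กœ ์ž‘์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” `HfAgent`๋ฅผ ์ƒ์„ฑํ•ด ๋‘์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# ๋„๊ตฌ ์ƒ์ž์— ์žˆ๋Š” ๋ชจ๋“  ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค.
for name, tool in agent.toolbox.items():
    print(f"- {name}: {tool.description}")
```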
<Tip> ๋„๊ตฌ์— ์–ด๋–ค ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋ ค๋ฉด ์—„์„ ๋œ Transformers ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ํ™•์ธํ•˜์„ธ์š”. [`Agent.toolbox`] ์†์„ฑ์„ ๊ฐ€์ง„ ๋ชจ๋“  ๋„๊ตฌ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์„ธ ๋ฒˆ์งธ ๋ถ€๋ถ„์—๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ค ์ข…๋ฅ˜์˜ ์‚ฌ์šฉ์ž ์š”์ฒญ์— ๋Œ€ํ•ด ์–ด๋–ค ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ๋ณด์—ฌ์ฃผ๋Š” ์—„์„ ๋œ ์˜ˆ์ œ ์„ธํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋ฅผ ์ง€์›ํ•˜๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์€ ํ”„๋กฌํ”„ํŠธ์—์„œ ํŒจํ„ด์„ ์ธ์‹ํ•˜๊ณ  ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋กœ ํŒจํ„ด์„ ๋ฐ˜๋ณตํ•˜๋Š” ๋ฐ ๋งค์šฐ ๋Šฅ์ˆ™ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‹ค์ œ๋กœ ์˜ฌ๋ฐ”๋ฅธ ์‹คํ–‰ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ฐ€์ง€ ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` ```` ์ž‘์—… ์„ค๋ช…, ์—์ด์ „ํŠธ๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์— ๋Œ€ํ•œ ์„ค๋ช…, ๋งˆ์ง€๋ง‰์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ, ์ด ์„ธ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋œ ํ”„๋กฌํ”„ํŠธ๋Š” ๋ชจ๋ธ์— ๋ฐ˜๋ณตํ•˜์—ฌ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ์ผ๋ถ€์ธ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ์ด๋Ÿฌํ•œ ์ •ํ™•ํ•œ ํŒจํ„ด์œผ๋กœ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์—์ด์ „ํŠธ๊ฐ€ ์ƒˆ ํ† ํฐ์„ ์ƒ์„ฑํ•  ๋•Œ ์ •ํ™•ํžˆ ๋™์ผํ•œ ํŒจํ„ด์„ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ ์˜ˆ์ œ๋Š” Transformers ํŒ€์ด ์„ ๋ณ„ํ•˜๊ณ  ์ผ๋ จ์˜ [problem statements](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py)์— ๋”ฐ๋ผ ์—„๊ฒฉํ•˜๊ฒŒ ํ‰๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์—์ด์ „ํŠธ์˜ ์‹ค์ œ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์ตœ๋Œ€ํ•œ ์ž˜ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ๋ถ€๋ถ„์€ ๋‹ค์Œ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค: ```text Task: "Draw me a picture of rivers and lakes" I will use the following ``` ์ด๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์™„๋ฃŒํ•ด์•ผ ํ•  ์ตœ์ข…์ ์ธ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๋ฏธ์™„์„ฑ ์˜ˆ์ œ๋Š” ์‹ค์ œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”ฐ๋ผ ๋™์ ์œผ๋กœ ๋งŒ๋“ค์–ด์ง‘๋‹ˆ๋‹ค. ์œ„ ์˜ˆ์‹œ์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes") ``` ์‚ฌ์šฉ์ž ์ž…๋ ฅ - *์ฆ‰* Task: *"Draw me a picture of rivers and lakes"*๊ฐ€ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์— ๋งž์ถฐ "Task: <task> \n\n I will use the following"๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์žฅ์€ ์—์ด์ „ํŠธ์—๊ฒŒ ์กฐ๊ฑด์ด ์ ์šฉ๋˜๋Š” ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ์ค„์„ ๊ตฌ์„ฑํ•˜๋ฏ€๋กœ ์—์ด์ „ํŠธ๊ฐ€ ์ด์ „ ์˜ˆ์ œ์—์„œ ์ˆ˜ํ–‰ํ•œ ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์™„๋ฃŒํ•˜๋„๋ก ๊ฐ•๋ ฅํ•˜๊ฒŒ ์˜ํ–ฅ์„ ๋ฏธ์นฉ๋‹ˆ๋‹ค. ๋„ˆ๋ฌด ์ž์„ธํžˆ ์„ค๋ช…ํ•˜์ง€ ์•Š๋”๋ผ๋„ ์ฑ„ํŒ… ํ…œํ”Œ๋ฆฟ์˜ ํ”„๋กฌํ”„ํŠธ ๊ตฌ์กฐ๋Š” ๋™์ผํ•˜์ง€๋งŒ ์˜ˆ์ œ์˜ ์Šคํƒ€์ผ์ด ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค๋ฉด*: ````text [...] ===== Human: Answer the question in the variable `question` about the image stored in the variable `image`. Assistant: I will use the tool `image_qa` to answer the question on the input image. ```py answer = image_qa(text=question, image=image) print(f"The answer is {answer}") ``` Human: I tried this code, it worked but didn't give me a good result. The question is in French Assistant: In this case, the question needs to be translated first. 
I will use the tool `translator` to do this. ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(text=translated_question, image=image) print(f"The answer is {answer}") ``` ===== [...] ```` `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€๋Š” ๋ฐ˜๋Œ€๋กœ, ๊ฐ `chat` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์—๋Š” *Human(์‚ฌ๋žŒ)*๊ณผ *Assistant(์–ด์‹œ์Šคํ„ดํŠธ)* ๊ฐ„์— ํ•˜๋‚˜ ์ด์ƒ์˜ ๊ตํ™˜์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ตํ™˜์€ `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋กœ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ์ด *Human:* ๋’ค์— ์ถ”๊ฐ€๋˜๋ฉฐ, ์—์ด์ „ํŠธ์—๊ฒŒ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์ „์— ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ์ž‘์—…์„ ๋จผ์ € ์ƒ์„ฑํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๊ตํ™˜์€ ์ด์ „ ๊ตํ™˜์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์œ„์™€ ๊ฐ™์ด ์‚ฌ์šฉ์ž๊ฐ€ "**์ด** ์ฝ”๋“œ๋ฅผ ์‹œ๋„ํ–ˆ์Šต๋‹ˆ๋‹ค"๋ผ๊ณ  ์ž…๋ ฅํ•˜๋ฉด ์ด์ „์— ์ƒ์„ฑ๋œ ์—์ด์ „ํŠธ์˜ ์ฝ”๋“œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ๊ณผ๊ฑฐ ๊ตํ™˜์„ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `.chat`์„ ์‹คํ–‰ํ•˜๋ฉด ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ ๋˜๋Š” *์ž‘์—…*์ด ๋ฏธ์™„์„ฑ๋œ ์–‘์‹์˜ ์˜ˆ์‹œ๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค: ```text Human: <user-input>\n\nAssistant: ``` ๊ทธ๋Ÿฌ๋ฉด ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฅผ ์™„์„ฑํ•ฉ๋‹ˆ๋‹ค. `run` ๋ช…๋ น๊ณผ ๋‹ฌ๋ฆฌ `chat` ๋ช…๋ น์€ ์™„๋ฃŒ๋œ ์˜ˆ์ œ๋ฅผ ํ”„๋กฌํ”„ํŠธ์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์—๊ฒŒ ๋‹ค์Œ `chat` ์ฐจ๋ก€์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ๋ฌธ๋งฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์•Œ์•˜์œผ๋‹ˆ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค! ### ์ข‹์€ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ์ž‘์„ฑํ•˜๊ธฐ[[writing-good-user-inputs]] ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์ด ์‚ฌ์šฉ์ž์˜ ์˜๋„๋ฅผ ์ดํ•ดํ•˜๋Š” ๋Šฅ๋ ฅ์ด ์ ์  ๋” ํ–ฅ์ƒ๋˜๊ณ  ์žˆ์ง€๋งŒ, ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์„ ํƒํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ๋Œ€ํ•œ ์ •ํ™•์„ฑ์„ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์€ ํฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ตœ๋Œ€ํ•œ ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์€ ๋ฌด์—‡์„ ์˜๋ฏธํ• ๊นŒ์š”? ์—์ด์ „ํŠธ๋Š” ํ”„๋กฌํ”„ํŠธ์—์„œ ๋„๊ตฌ ์ด๋ฆ„ ๋ชฉ๋ก๊ณผ ํ•ด๋‹น ์„ค๋ช…์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋ ์ˆ˜๋ก ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ๋ฅผ ์„ ํƒํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ค์›Œ์ง€๊ณ  ์‹คํ–‰ํ•  ๋„๊ตฌ์˜ ์˜ฌ๋ฐ”๋ฅธ ์ˆœ์„œ๋ฅผ ์„ ํƒํ•˜๋Š” ๊ฒƒ์€ ๋”์šฑ ์–ด๋ ค์›Œ์ง‘๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์‹คํŒจ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๋ถ„์„ํ•  ์ฝ”๋“œ๋งŒ ๋ฐ˜ํ™˜ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.run("Show me a tree", return_code=True) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool: `image_segmenter` to create a segmentation mask for the image. ==Code generated by the agent== mask = image_segmenter(image, prompt="tree") ``` ์šฐ๋ฆฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒฐ๊ณผ๊ฐ€ ์•„๋‹ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ๋‚˜๋ฌด ์ด๋ฏธ์ง€๊ฐ€ ์ƒ์„ฑ๋˜๊ธฐ๋ฅผ ์›ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ํŠน์ • ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ์œ ๋„ํ•˜๋ ค๋ฉด ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์žˆ๋Š” ์ค‘์š”ํ•œ ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.toolbox["image_generator"].description ``` ```text 'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image. ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์€ "image", "prompt", "create" ๋ฐ "generate" ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‹จ์–ด๋“ค์„ ์‚ฌ์šฉํ•˜๋ฉด ๋” ์ž˜ ์ž‘๋™ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. 
ํ”„๋กฌํ”„ํŠธ๋ฅผ ์กฐ๊ธˆ ๋” ๊ตฌ์ฒดํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.run("Create an image of a tree", return_code=True) ``` ์ด ์ฝ”๋“œ๋Š” ๋‹ค์Œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ƒ…๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool `image_generator` to generate an image of a tree. ==Code generated by the agent== image = image_generator(prompt="tree") ``` ํ›จ์”ฌ ๋‚ซ๋„ค์š”! ์ €ํฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋น„์Šทํ•ด ๋ณด์ž…๋‹ˆ๋‹ค. ์ฆ‰, ์—์ด์ „ํŠธ๊ฐ€ ์ž‘์—…์„ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋งคํ•‘ํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์„ ๊ฒช๊ณ  ์žˆ๋‹ค๋ฉด ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ์ด ๋†’์€ ํ‚ค์›Œ๋“œ๋ฅผ ์ฐพ์•„๋ณด๊ณ  ์ด๋ฅผ ํ†ตํ•ด ์ž‘์—… ์š”์ฒญ์„ ๊ตฌ์ฒดํ™”ํ•ด ๋ณด์„ธ์š”. ### ๋„๊ตฌ ์„ค๋ช… ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-tool-descriptions]] ์•ž์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” ๊ฐ ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋„๊ตฌ์—๋Š” ๋งค์šฐ ์ •ํ™•ํ•œ ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜์ง€๋งŒ ํŠน์ • ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ๋„๊ตฌ์˜ ์„ค๋ช…์ด๋‚˜ ์ด๋ฆ„์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋งค์šฐ ์œ ์‚ฌํ•œ ์—ฌ๋Ÿฌ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ–ˆ๊ฑฐ๋‚˜ ํŠน์ • ๋„๋ฉ”์ธ(*์˜ˆ*: ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋ฐ ๋ณ€ํ™˜)์—๋งŒ ์—์ด์ „ํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ์— ํŠนํžˆ ์ค‘์š”ํ•ด์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ ์ž‘์—…์— ๋งŽ์ด ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฏธ์ง€ ์ƒ์„ฑ๊ณผ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜/์ˆ˜์ •์„ ํ˜ผ๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด,* ```py agent.run("Make an image of a house and a car", return_code=True) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") house_car_image = image_transformer(image=car_image, prompt="A house") ``` ๊ฒฐ๊ณผ๋ฌผ์ด ์šฐ๋ฆฌ๊ฐ€ ์—ฌ๊ธฐ์„œ ์›ํ•˜๋Š” ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ `image_generator`์™€ `image_transformer`์˜ ์ฐจ์ด์ ์„ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์›Œ์„œ ๋‘ ๊ฐ€์ง€๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์€ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ `image_transformer`์˜ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ๋ณ€๊ฒฝํ•˜์—ฌ ์—์ด์ „ํŠธ๊ฐ€ ๋„์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. "image" ๋ฐ "prompt"์™€ ์•ฝ๊ฐ„ ๋ถ„๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `modifier`๋ผ๊ณ  ๋Œ€์‹  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer") agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace( "transforms an image according to a prompt", "modifies an image" ) ``` ์ด์ œ "modify"์€ ์ƒˆ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋ผ๋Š” ๊ฐ•๋ ฅํ•œ ์‹ ํ˜ธ์ด๋ฏ€๋กœ ์œ„์˜ ํ”„๋กฌํ”„ํŠธ์— ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ์‹คํ–‰ํ•ด ๋ด…์‹œ๋‹ค. ```py agent.run("Make an image of a house and a car", return_code=True) ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") ``` ์šฐ๋ฆฌ๊ฐ€ ์—ผ๋‘์— ๋‘์—ˆ๋˜ ๊ฒƒ๊ณผ ํ™•์‹คํžˆ ๋” ๊ฐ€๊นŒ์›Œ์กŒ์Šต๋‹ˆ๋‹ค! ํ•˜์ง€๋งŒ ์ง‘๊ณผ ์ž๋™์ฐจ๊ฐ€ ๋ชจ๋‘ ๊ฐ™์€ ์ด๋ฏธ์ง€์— ํฌํ•จ๋˜๋ฉด ์ข‹๊ฒ ์Šต๋‹ˆ๋‹ค. 
์ž‘์—…์„ ๋‹จ์ผ ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ๋” ์ง‘์ค‘ํ•˜๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py agent.run("Create image: 'A house and car'", return_code=True) ``` ```text ==Explanation from the agent== I will use the following tool: `image_generator` to generate an image. ==Code generated by the agent== image = image_generator(prompt="A house and car") ``` <Tip warning={true}> ์—์ด์ „ํŠธ๋Š” ์—ฌ์ „ํžˆ ํŠนํžˆ ์—ฌ๋Ÿฌ ๊ฐœ์ฒด์˜ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ์•ฝ๊ฐ„ ๋” ๋ณต์žกํ•œ ์‚ฌ์šฉ ์‚ฌ๋ก€์—์„œ ์ทจ์•ฝํ•œ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์•ž์œผ๋กœ ๋ช‡ ๋‹ฌ ์•ˆ์— ์—์ด์ „ํŠธ ์ž์ฒด์™€ ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋”์šฑ ๊ฐœ์„ ๋˜์–ด ์—์ด์ „ํŠธ๊ฐ€ ๋‹ค์–‘ํ•œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”์šฑ ๊ฐ•๋ ฅํ•˜๊ฒŒ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ### ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-whole-prompt]] ์‚ฌ์šฉ์ž์—๊ฒŒ ์ตœ๋Œ€ํ•œ์˜ ์œ ์—ฐ์„ฑ์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด [์œ„](#structure-of-the-prompt)์— ์„ค๋ช…๋œ ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉ์ž๊ฐ€ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ์— ์†Œ๊ฐœ ์„น์…˜, ๋„๊ตฌ ์„น์…˜, ์˜ˆ์ œ ์„น์…˜ ๋ฐ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ ์„น์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. `run` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py template = """ [...] """ agent = HfAgent(your_endpoint, run_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•˜๊ณ  ์‚ฌ์šฉ์ž์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฝ์ž…ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด๊ณผ `<<prompt>>`๋ฅผ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ•ญ์ƒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ตํ™˜ ํ˜•์‹์„ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”: ```text Human: <<task>> Assistant: ``` ๋”ฐ๋ผ์„œ ์‚ฌ์šฉ์ž ์ •์˜ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์˜ ์˜ˆ์ œ์—์„œ๋„ ์ด ํ˜•์‹์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ธ์Šคํ„ด์Šคํ™” ํ•  ๋•Œ `chat` ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` template = """ [...] """ agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด์„ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ ์ปค๋ฎค๋‹ˆํ‹ฐ์˜ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ํ˜ธ์ŠคํŒ…ํ•˜๋Š” ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ ๋Œ€์‹  ์ €์žฅ์†Œ ID๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๋Š” [์ด ์ €์žฅ์†Œ](https://huggingface.co/datasets/huggingface-tools/default-prompts)๋ฅผ ์˜ˆ๋กœ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์˜ ์ €์žฅ์†Œ์— ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์—…๋กœ๋“œํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ํ™•์ธํ•˜์„ธ์š”: - ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. - `run` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `run_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. - `chat` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `chat_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. ## ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ ์‚ฌ์šฉํ•˜๊ธฐ[[using-custom-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ํŠนํ™”๋œ ๋‘ ๊ฐ€์ง€ ๊ธฐ์กด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋” ๋งŽ์€ ์ด๋ฏธ์ง€ ์ˆ˜์ •์„ ํ—ˆ์šฉํ•˜๊ธฐ ์œ„ํ•ด [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation)์„ [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool)๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. 
- ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์— ์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค: [diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool)๊ฐ€ ๊ธฐ์กด ์ด๋ฏธ์ง€ ๋ณ€ํ™˜ ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ํŽธ๋ฆฌํ•œ [`load_tool`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py from transformers import load_tool controlnet_transformer = load_tool("diffusers/controlnet-canny-tool") upscaler = load_tool("diffusers/latent-upscaler-tool") ``` ์—์ด์ „ํŠธ์—๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์ด ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ์— ์ž๋™์œผ๋กœ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์ž˜ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `controlnet_transformer`์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py print(f"Description: '{controlnet_transformer.description}'") print(f"Name: '{controlnet_transformer.name}'") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.' Name: 'image_transformer' ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์ •ํ™•ํ•˜๊ณ  [ํ๋ ˆ์ดํŒ… ๋œ ๋„๊ตฌ ์„ธํŠธ(curated set of tools)](./transformers_agents#a-curated-set-of-tools)์˜ ์Šคํƒ€์ผ์— ๋งž์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, `controlnet_transformer`์™€ `upscaler`๋กœ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด ๋ด…์‹œ๋‹ค: ```py tools = [controlnet_transformer, upscaler] agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools) ``` ์ด ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋‹ค์Œ ์ •๋ณด๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```text image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0 8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools` ``` ํ๋ ˆ์ดํŒ…๋œ ๋„๊ตฌ ์„ธํŠธ์—๋Š” ์ด๋ฏธ 'image_transformer' ๋„๊ตฌ๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ๋„๊ตฌ๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. <Tip> ๊ธฐ์กด ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์€ ์ž‘์—…์— ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋ฎ์–ด์“ฐ๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ•ด๋‹น ์ž‘์—…์— ๋Šฅ์ˆ™ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๊ฐ€ ๋ฎ์–ด์“ด ๋„๊ตฌ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ API๋ฅผ ๋”ฐ๋ผ์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ํ•ด๋‹น ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ์˜ˆ์ œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋˜๋„๋ก ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์กฐ์ •ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. </Tip> ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— ์ง€์ •๋œ 'image_upscaler'๋ผ๋Š” ์ด๋ฆ„ ์•„์ง ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์—๋Š” ์กด์žฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋„๊ตฌ ๋ชฉ๋ก์— ํ•ด๋‹น ์ด๋ฆ„์ด ๊ฐ„๋‹จํžˆ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ˜„์žฌ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ ์ƒ์ž๋Š” ์–ธ์ œ๋“ ์ง€ `agent.toolbox` ์†์„ฑ์„ ํ†ตํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py print("\n".join([f"- {a}" for a in agent.toolbox.keys()])) ``` ```text - document_qa - image_captioner - image_qa - image_segmenter - transcriber - summarizer - text_classifier - text_qa - text_reader - translator - image_transformer - text_downloader - image_generator - video_generator - image_upscaler ``` ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— `image_upscaler`๊ฐ€ ์ถ”๊ฐ€๋œ ์ ์„ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์ด์ œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ด…์‹œ๋‹ค! 
[Transformers Agents Quickstart](./transformers_agents#single-execution-run)์—์„œ ์ƒ์„ฑํ•œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from diffusers.utils import load_image image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" ) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ์ด๋ฏธ์ง€๋ฅผ ์•„๋ฆ„๋‹ค์šด ๊ฒจ์šธ ํ’๊ฒฝ์œผ๋กœ ๋ฐ”๊ฟ” ๋ด…์‹œ๋‹ค: ```py image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_transformer` to transform the image. ==Code generated by the agent== image = image_transformer(image, prompt="A frozen lake and snowy forest") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter.png" width=200> ์ƒˆ๋กœ์šด ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋งค์šฐ ๊ฐ•๋ ฅํ•˜๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š” ControlNet์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” 512x512 ํ”ฝ์…€ ํฌ๊ธฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—…์Šค์ผ€์ผ๋งํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py image = agent.run("Upscale the image", image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_upscaler` to upscale the image. ==Code generated by the agent== upscaled_image = image_upscaler(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter_upscale.png" width=400> ์—์ด์ „ํŠธ๋Š” ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„๋งŒ ๋ณด๊ณ  ๋ฐฉ๊ธˆ ์ถ”๊ฐ€ํ•œ ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— "์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง"์ด๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž๋™์œผ๋กœ ๋งคํ•‘ํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ ์ƒˆ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ์ƒˆ ๋„๊ตฌ ์ถ”๊ฐ€ํ•˜๊ธฐ[[adding-new-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์—์ด์ „ํŠธ์—๊ฒŒ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ ๋“œ๋ฆฝ๋‹ˆ๋‹ค. #### ์ƒˆ ๋„๊ตฌ ๋งŒ๋“ค๊ธฐ[[creating-a-new-tool]] ๋จผ์ € ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋งŽ์€ ๋‹ค์šด๋กœ๋“œ๋ฅผ ๋ฐ›์€ Hugging Face Hub์˜ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š”, ๊ทธ๋‹ค์ง€ ์œ ์šฉํ•˜์ง€๋Š” ์•Š์ง€๋งŒ ์žฌ๋ฏธ์žˆ๋Š” ์ž‘์—…์„ ์ถ”๊ฐ€ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python from huggingface_hub import list_models task = "text-classification" model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(model.id) ``` `text-classification`(ํ…์ŠคํŠธ ๋ถ„๋ฅ˜) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'facebook/bart-large-mnli'`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , `translation`(๋ฒˆ์—ญ) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'t5-base'`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—์ด์ „ํŠธ๊ฐ€ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ• ๊นŒ์š”? ๋ชจ๋“  ๋„๊ตฌ๋Š” ํ•„์š”ํ•œ ์ฃผ์š” ์†์„ฑ์„ ๋ณด์œ ํ•˜๋Š” ์Šˆํผํด๋ž˜์Šค `Tool`์— ์˜์กดํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์ƒ์†ํ•˜๋Š” ํด๋ž˜์Šค๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool class HFModelDownloadsTool(Tool): pass ``` ์ด ํด๋ž˜์Šค์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์š”๊ตฌ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋„๊ตฌ ์ž์ฒด์˜ ์ด๋ฆ„์— ํ•ด๋‹นํ•˜๋Š” `name` ์†์„ฑ. ์ˆ˜ํ–‰๋ช…์ด ์žˆ๋Š” ๋‹ค๋ฅธ ๋„๊ตฌ์™€ ํ˜ธํ™˜๋˜๋„๋ก `model_download_counter`๋กœ ์ด๋ฆ„์„ ์ง€์ •ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. - ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฑ„์šฐ๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์†์„ฑ `description`. - `inputs` ๋ฐ `outputs` ์†์„ฑ. 
์ด๋ฅผ ์ •์˜ํ•˜๋ฉด Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๊ฐ€ ์œ ํ˜•์— ๋Œ€ํ•œ ์ •๋ณด์— ์ž…๊ฐํ•œ ์„ ํƒ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋ฉฐ, ๋„๊ตฌ๋ฅผ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•  ๋•Œ gradio ๋ฐ๋ชจ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ์†์„ฑ ๋ชจ๋‘ ๊ฐ’์€ 'ํ…์ŠคํŠธ', '์ด๋ฏธ์ง€' ๋˜๋Š” '์˜ค๋””์˜ค'๊ฐ€ ๋  ์ˆ˜ ์žˆ๋Š” ์˜ˆ์ƒ ๊ฐ’์˜ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - ์ถ”๋ก  ์ฝ”๋“œ๊ฐ€ ํฌํ•จ๋œ `__call__` ๋ฉ”์†Œ๋“œ. ์ด๊ฒƒ์ด ์šฐ๋ฆฌ๊ฐ€ ์œ„์—์„œ ๋‹ค๋ฃจ์—ˆ๋˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค! ์ด์ œ ํด๋ž˜์Šค์˜ ๋ชจ์Šต์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool(Tool): name = "model_download_counter" description = ( "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. " "It takes the name of the category (such as text-classification, depth-estimation, etc), and " "returns the name of the checkpoint." ) inputs = ["text"] outputs = ["text"] def __call__(self, task: str): model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id ``` ์ด์ œ ๋„๊ตฌ๋ฅผ ์†์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋„๊ตฌ๋ฅผ ํŒŒ์ผ์— ์ €์žฅํ•˜๊ณ  ๋ฉ”์ธ ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ์˜ ์ด๋ฆ„์„ `model_downloads.py`๋กœ ์ง€์ •ํ•˜๋ฉด ๊ฒฐ๊ณผ์ ์œผ๋กœ ๊ฐ€์ ธ์˜ค๊ธฐ ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from model_downloads import HFModelDownloadsTool tool = HFModelDownloadsTool() ``` ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์ด ๊ธฐ๋Šฅ์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๊ณ  ์ดˆ๊ธฐํ™”๋ฅผ ๋” ๊ฐ„๋‹จํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์˜ Hub๋กœ ํ‘ธ์‹œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ ค๋ฉด `tool` ๋ณ€์ˆ˜์—์„œ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python tool.push_to_hub("hf-model-downloads") ``` ์ด์ œ ํ—ˆ๋ธŒ์— ์ฝ”๋“œ๊ฐ€ ์ƒ๊ฒผ์Šต๋‹ˆ๋‹ค! ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„์ธ ์—์ด์ „ํŠธ๊ฐ€ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. #### ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ํ•˜๊ธฐ[[Having-the-agent-use-the-tool]] ์ด์ œ ์ด๋Ÿฐ ์‹์œผ๋กœ ํ—ˆ๋ธŒ์— ์กด์žฌํ•˜๋Š” ๋„๊ตฌ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋„๊ตฌ์˜ ์‚ฌ์šฉ์ž ์ด๋ฆ„์€ ๋ณ€๊ฒฝํ•˜์„ธ์š”): We now have our tool that lives on the Hub which can be instantiated as such (change the user name for your tool): ```python from transformers import load_tool tool = load_tool("lysandre/hf-model-downloads") ``` ์ด ๋„๊ตฌ๋ฅผ ์—์ด์ „ํŠธ์—์„œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์—์ด์ „ํŠธ ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์˜ `additional_tools` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool]) agent.run( "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?" ) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Code generated by the agent== model = model_download_counter(task="text-to-video") print(f"The model with the most downloads is {model}.") audio_model = text_reader(model) ==Result== The model with the most downloads is damo-vilab/text-to-video-ms-1.7b. ``` and generates the following audio. 
| **Audio** | |------------------------------------------------------------------------------------------------------------------------------------------------------| | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/> | <Tip> LLM์— ๋”ฐ๋ผ ์ผ๋ถ€๋Š” ๋งค์šฐ ์ทจ์•ฝํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋ ค๋ฉด ๋งค์šฐ ์ •ํ™•ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์ž˜ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ž˜ ์ •์˜ํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. </Tip> ### ๊ธฐ์กด ๋„๊ตฌ ๋Œ€์ฒดํ•˜๊ธฐ[[replacing-existing-tools]] ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— ์ƒˆ ํ•ญ๋ชฉ์„ ๋ฐฐ์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import HfAgent, load_tool agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool") ``` <Tip> ๋‹ค๋ฅธ ๋„๊ตฌ๋กœ ๊ต์ฒดํ•  ๋•Œ๋Š” ์ฃผ์˜ํ•˜์„ธ์š”! ์ด ์ž‘์—…์œผ๋กœ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋„ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. ์ž‘์—…์— ๋” ์ ํ•ฉํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์žˆ์œผ๋ฉด ์ข‹์„ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋‹ค๋ฅธ ๋„๊ตฌ๋ณด๋‹ค ๋” ๋งŽ์ด ์„ ํƒ๋˜๊ฑฐ๋‚˜ ์ •์˜ํ•œ ๋„๊ตฌ ๋Œ€์‹  ๋‹ค๋ฅธ ๋„๊ตฌ๊ฐ€ ์„ ํƒ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## gradio-tools ์‚ฌ์šฉํ•˜๊ธฐ[[leveraging-gradio-tools]] [gradio-tools](https://github.com/freddyaboulton/gradio-tools)๋Š” Hugging Face Spaces๋ฅผ ๋„๊ตฌ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ•๋ ฅํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ๊ธฐ์กด์˜ ๋งŽ์€ Spaces๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‚ฌ์šฉ์ž ์ •์˜ Spaces๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋””์ž์ธํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” `Tool.from_gradio` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `gradio_tools`์— ๋Œ€ํ•œ ์ง€์›์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ  ๋” ๋‚˜์€ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด `gradio-tools` ํˆดํ‚ท์—์„œ ์ œ๊ณต๋˜๋Š” `StableDiffusionPromptGeneratorTool` ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € `gradio_tools`์—์„œ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python from gradio_tools import StableDiffusionPromptGeneratorTool gradio_tool = StableDiffusionPromptGeneratorTool() ``` ํ•ด๋‹น ์ธ์Šคํ„ด์Šค๋ฅผ `Tool.from_gradio` ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import Tool tool = Tool.from_gradio(gradio_tool) ``` ์ด์ œ ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์ด ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ํ™œ์šฉํ•˜์—ฌ `a rabbit wearing a space suit'(์šฐ์ฃผ๋ณต์„ ์ž…์€ ํ† ๋ผ)๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ–ˆ์Šต๋‹ˆ๋‹ค: ```python from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool]) agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit") ``` ๋ชจ๋ธ์ด ๋„๊ตฌ๋ฅผ ์ ์ ˆํžˆ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt. 
==Code generated by the agent== improved_prompt = StableDiffusionPromptGenerator(prompt) print(f"The improved prompt is {improved_prompt}.") image = image_generator(improved_prompt) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์ „์—: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"> <Tip warning={true}> gradio-tools๋Š” ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋กœ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌํ˜„์€ ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๊ฐ์ฒด์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ๋Š” ์ด ๋‘ ๊ฐ€์ง€๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์ง€๋งŒ ์ง€์› ๊ฐœ์„ ์„ ์œ„ํ•ด ๋…ธ๋ ฅํ•˜๋ฉด์„œ ๋น ๋ฅด๊ฒŒ ํ˜ธํ™˜๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. </Tip> ## ํ–ฅํ›„ Langchain๊ณผ์˜ ํ˜ธํ™˜์„ฑ[[future-compatibility-with-langchain]] ์ €ํฌ๋Š” Langchain์„ ์ข‹์•„ํ•˜๋ฉฐ ๋งค์šฐ ๋งค๋ ฅ์ ์ธ ๋„๊ตฌ ๋ชจ์Œ์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด Langchain์€ ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ข…์ข… ๊ฐ์ฒด์˜ ์ง๋ ฌํ™”๋œ(์ฆ‰, ๋””์Šคํฌ์— ์ €์žฅ๋œ) ๋ฒ„์ „์ž…๋‹ˆ๋‹ค. ์ด ์ฐจ์ด๋กœ ์ธํ•ด transformers-agents์™€ Langchain ๊ฐ„์—๋Š” ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๊ฐ€ ์ฒ˜๋ฆฌ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ด ์ œํ•œ์ด ํ•ด๊ฒฐ๋˜๊ธฐ๋ฅผ ๋ฐ”๋ผ๋ฉฐ, ์ด ํ˜ธํ™˜์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์—ด๋ ฌํ•œ Langchain ์‚ฌ์šฉ์ž์˜ ๋„์›€์„ ํ™˜์˜ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” ๋” ๋‚˜์€ ์ง€์›์„ ์ œ๊ณตํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋„์›€์„ ์ฃผ๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, [์ด์Šˆ๋ฅผ ์—ด์–ด](https://github.com/huggingface/transformers/issues/new) ์˜๊ฒฌ์„ ๊ณต์œ ํ•ด ์ฃผ์„ธ์š”.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Accelerate๋ฅผ ํ™œ์šฉํ•œ ๋ถ„์‚ฐ ํ•™์Šต[[distributed-training-with-accelerate]] ๋ชจ๋ธ์ด ์ปค์ง€๋ฉด์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋Š” ์ œํ•œ๋œ ํ•˜๋“œ์›จ์–ด์—์„œ ๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ›ˆ๋ จ ์†๋„๋ฅผ ๋ช‡ ๋ฐฐ๋กœ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต์œผ๋กœ ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ํ•˜๋‚˜์˜ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ์—ฌ๋Ÿฌ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ๋ชจ๋“  ์œ ํ˜•์˜ ๋ถ„์‚ฐ ์„ค์ •์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๊ธฐ ์œ„ํ•ด [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋ถ„์‚ฐ ํ™˜๊ฒฝ์—์„œ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ธฐ๋ณธ PyTorch ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค. ## ์„ค์ •[[setup]] ๐Ÿค— Accelerate ์„ค์น˜ ์‹œ์ž‘ํ•˜๊ธฐ: ```bash pip install accelerate ``` ๊ทธ ๋‹ค์Œ, [`~accelerate.Accelerator`] ๊ฐ์ฒด๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. [`~accelerate.Accelerator`]๋Š” ์ž๋™์œผ๋กœ ๋ถ„์‚ฐ ์„ค์ • ์œ ํ˜•์„ ๊ฐ์ง€ํ•˜๊ณ  ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์žฅ์น˜์— ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ ๋ฐฐ์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## ๊ฐ€์†ํ™”๋ฅผ ์œ„ํ•œ ์ค€๋น„[[prepare-to-accelerate]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๊ด€๋ จ๋œ ๋ชจ๋“  ํ›ˆ๋ จ ๊ฐ์ฒด๋ฅผ [`~accelerate.Accelerator.prepare`] ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ๋กœ๋”, ๋ชจ๋ธ ๋ฐ ์˜ตํ‹ฐ๋งˆ์ด์ €๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## ๋ฐฑ์›Œ๋“œ(Backward)[[backward]] ๋งˆ์ง€๋ง‰์œผ๋กœ ํ›ˆ๋ จ ๋ฃจํ”„์˜ ์ผ๋ฐ˜์ ์ธ `loss.backward()`๋ฅผ ๐Ÿค— Accelerate์˜ [`~accelerate.Accelerator.backward`] ๋ฉ”์†Œ๋“œ๋กœ ๋Œ€์ฒดํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ๋‹ค์Œ ์ฝ”๋“œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ํ›ˆ๋ จ ๋ฃจํ”„์— ์ฝ”๋“œ ๋„ค ์ค„๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ถ„์‚ฐ ํ•™์Šต์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## ํ•™์Šต[[train]]

๊ด€๋ จ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ์Šคํฌ๋ฆฝํŠธ๋‚˜ Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์—์„œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”.

### ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-script]]

์Šคํฌ๋ฆฝํŠธ์—์„œ ํ›ˆ๋ จ์„ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate config
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ์•„๋ž˜ ๋ช…๋ น์œผ๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”:

```bash
accelerate launch train.py
```

### ๋…ธํŠธ๋ถ์œผ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-notebook]]

Colaboratory์˜ TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, ๋…ธํŠธ๋ถ์—์„œ๋„ ๐Ÿค— Accelerate๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ๋‹ด๋‹นํ•˜๋Š” ๋ชจ๋“  ์ฝ”๋“œ๋ฅผ ํ•จ์ˆ˜๋กœ ๊ฐ์‹ธ์„œ [`~accelerate.notebook_launcher`]์— ์ „๋‹ฌํ•˜์„ธ์š”:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

๐Ÿค— Accelerate ๋ฐ ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [documentation](https://huggingface.co/docs/accelerate)์„ ์ฐธ์กฐํ•˜์„ธ์š”.
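์ฐธ๊ณ ๋กœ, ๋…ธํŠธ๋ถ์—์„œ ์‚ฌ์šฉํ•  ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ์ง์ ‘ ์ง€์ •ํ•˜๊ณ  ์‹ถ์„ ๋•Œ๋ฅผ ์œ„ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜๋ฅผ ์•„๋ž˜์— ์ ์–ด ๋‘ก๋‹ˆ๋‹ค. ํ›ˆ๋ จ ๋ฃจํ”„ ์ „์ฒด๊ฐ€ `training_function` ์•ˆ์— ๋“ค์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉฐ, `num_processes=8`์€ ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค(์‚ฌ์šฉ ์ค‘์ธ ํ•˜๋“œ์›จ์–ด์— ๋งž๊ฒŒ ์กฐ์ •ํ•˜์„ธ์š”):

```py
from accelerate import notebook_launcher


def training_function():
    # Accelerator ์ƒ์„ฑ, prepare() ํ˜ธ์ถœ, accelerator.backward(loss) ๋“ฑ
    # ์œ„์—์„œ ์ž‘์„ฑํ•œ ํ›ˆ๋ จ ๋ฃจํ”„ ์ „์ฒด๋ฅผ ์ด ํ•จ์ˆ˜ ์•ˆ์— ๋„ฃ์Šต๋‹ˆ๋‹ค.
    ...


# ์˜ˆ์‹œ: 8๊ฐœ ํ”„๋กœ์„ธ์Šค(์˜ˆ: TPU ์ฝ”์–ด 8๊ฐœ)๋กœ ์‹คํ–‰ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ํ˜ธ์ถœ์ž…๋‹ˆ๋‹ค.
notebook_launcher(training_function, num_processes=8)
```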
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/perplexity.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity)[[perplexity-of-fixedlength-models]] [[open-in-colab]] ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity, PPL)๋Š” ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์–ธ์–ด ๋ชจ๋ธ ํ‰๊ฐ€์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ธฐ ์ „์— ์ด ํ‰๊ฐ€์ง€ํ‘œ๋Š” ๊ณ ์ „์ ์ธ ์–ธ์–ด ๋ชจ๋ธ(์ž๊ธฐํšŒ๊ท€ ๋˜๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ์ด๋ผ๊ณ ๋„ ํ•จ)์—๋งŒ ์ ์šฉ๋˜๋ฉฐ BERT์™€ ๊ฐ™์€ ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ์—๋Š” ์ž˜ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค (BERT๋Š” [summary of the models](../en/model_summary) ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”). ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์‹œํ€€์Šค์˜ ์Œ์˜ ๋กœ๊ทธ ์šฐ๋„(negative log-likelihood, NLL) ๊ฐ’์˜ ํ‰๊ท ์— ์ง€์ˆ˜(exponentiate)๋ฅผ ์ทจํ•œ ๊ฐ’์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™”๋œ ์‹œํ€€์Šค \\(X = (x_0, x_1, \dots, x_t)\\) ๊ฐ€ ์žˆ์„ ๋•Œ, \\(X\\) ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์•„๋ž˜ ์ˆ˜์‹๊ณผ ๊ฐ™์ด ๊ตฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ \\(\log p_\theta (x_i|x_{<i})\\) ๋Š” ๋ชจ๋ธ์— i๋ฒˆ์งธ ์ด์ „๊นŒ์ง€ ํ† ํฐ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ i๋ฒˆ์งธ ํ† ํฐ์˜ ๋กœ๊ทธ ์šฐ๋„๊ฐ’์ž…๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ ๋ง๋ญ‰์น˜์—์„œ ์ง€์ •๋œ ํ† ํฐ ์ง‘ํ•ฉ์„ ๊ท ์ผํ•˜๊ฒŒ ์˜ˆ์ธกํ•˜๋Š” ๋ชจ๋ธ์˜ ๋Šฅ๋ ฅ์— ๋Œ€ํ•œ ํ‰๊ฐ€๋กœ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ค‘์š”ํ•œ ์ ์€ ํ† ํฐํ™” ๊ณผ์ •์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์— ์ง์ ‘์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋ฏ€๋กœ ์„œ๋กœ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋น„๊ตํ•  ๋•Œ ํ•ญ์ƒ ์ด๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ฐ์ดํ„ฐ์™€ ๋ชจ๋ธ ์˜ˆ์ธก ๊ฐ„์˜ cross-entropy ๊ฐ’์— ์ง€์ˆ˜๋ฅผ ์ทจํ•œ ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์™€ ๋ฌธ์ž๋‹น ๋น„ํŠธ ์ˆ˜(BPC) ๋ฐ ๋ฐ์ดํ„ฐ ์••์ถ•๊ณผ์˜ ๊ด€๊ณ„์— ๋Œ€ํ•ด ๋” ์ง๊ด€์ ์ธ ์ดํ•ด๋ฅผ ์›ํ•˜์‹ ๋‹ค๋ฉด ๋‹ค์Œ ๊ธ€ [fantastic blog post on The Gradient](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/)์„ ํ™•์ธํ•˜์„ธ์š”. ## ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(PPL) ๊ณ„์‚ฐํ•˜๊ธฐ[[calculating-ppl-with-fixedlength-models]] ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ ํฌ๊ธฐ๊ฐ€ ์ •ํ•ด์ ธ์žˆ์ง€ ์•Š๋‹ค๋ฉด, ์•„๋ž˜์™€ ๊ฐ™์ด ์‹œํ€€์Šค๋ฅผ ์ž๋™ ํšŒ๊ท€์ ์œผ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ ๋‹จ๊ณ„์—์„œ ์„ ํ–‰ ํ•˜๋Š” ์ „์ฒด ์‹œํ€€์Šค๋ฅผ ์กฐ๊ฑด๋ถ€ ํ™•๋ฅ ์— ๋„ฃ์–ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/> ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ์˜ ๊ทผ์‚ฌ์น˜๋ฅผ ๊ตฌํ•  ๋•Œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์ด ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฐ ์ˆ˜์— ์ œํ•œ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๊ฐ€์žฅ ํฐ ๋ฒ„์ „์˜ [GPT-2](model_doc/gpt2)๋Š” ํ† ํฐ์˜ ๊ธธ์ด๊ฐ€ 1024๋กœ ๊ณ ์ •๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ \\(t\\) ๊ฐ€ 1024๋ณด๋‹ค ํฐ ๊ฒฝ์šฐ์— \\(p_\theta(x_t|x_{<t})\\) ์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ์‹œํ€€์Šค๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ํฌ๊ธฐ์™€ ๋™์ผํ•œ ๊ธธ์ด๋Š” ๊ฐ€์ง€๋Š” ๋ถ€๋ถ„ ์‹œํ€€์Šค๋กœ ์ชผ๊ฐญ๋‹ˆ๋‹ค. 
๋งŒ์•ฝ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ \\(k\\) ๋ผ๋ฉด, ํ† ํฐ \\(x_t\\) ์˜ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ด์ „ ํ† ํฐ์„ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ , \\(k-1\\) ํ† ํฐ๊นŒ์ง€ ์‚ฌ์šฉํ•ด ๋Œ€๋žต์ ์ธ ์šฐ๋„ ๊ฐ’์„ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์‹œํ€€์Šค์— ๋Œ€ํ•œ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๋•Œ, ์ˆ˜์›”ํ•˜์ง€๋งŒ ์ฐจ์„ ์ฑ…์€ ์‹œํ€€์Šค๋ฅผ ์ฒญํฌ๋กœ ์ชผ๊ฐœ๊ณ  ๋ถ„ํ•ด๋œ ๊ฐ ๋ถ€๋ถ„์˜ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์„ ๋…๋ฆฝ์ ์œผ๋กœ ํ•ฉ์‚ฐํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> ์ด ๋ฐฉ๋ฒ•์€ ๊ฐ ๋ถ€๋ถ„์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ํ•œ ๋ฒˆ์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๋กœ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ์–ด ๋น ๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๋†’์€(๋” ๋‚˜์œ) PPL์„ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋Œ€๋ถ€๋ถ„์˜ ์˜ˆ์ธก ๋‹จ๊ณ„์—์„œ ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋Œ€์‹ , ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ PPL์€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์œผ๋กœ ํ‰๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ „๋žต์—๋Š” ์ปจํ…์ŠคํŠธ ์œˆ๋„์šฐ์„ ๋ฐ˜๋ณต์ ์œผ๋กœ ์Šฌ๋ผ์ด๋”ฉํ•ด ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๊ฐ–๋„๋ก ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> ์ด๋Š” ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๊ทผ์‚ฌ์น˜์ด๋ฉฐ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ์œ ๋ฆฌํ•œ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ ์€ ๋ง๋ญ‰์น˜์˜ ๊ฐ ํ† ํฐ์— ๋Œ€ํ•ด ๋ณ„๋„์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๊ฐ€ ํ•„์š”ํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ˜„์‹ค์ ์œผ๋กœ ์ข‹์€ ์ ˆ์ถฉ์•ˆ์€ ํ•œ ๋ฒˆ์— ํ•œ ํ† ํฐ์”ฉ ์Šฌ๋ผ์ด๋”ฉํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๋” ํฐ ๊ฐ„๊ฒฉ์œผ๋กœ ์ปจํ…์ŠคํŠธ๋ฅผ ์ด๋™ํ•˜๋Š” ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ ์šฉ๋œ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ณ„์‚ฐ์„ ํ›จ์”ฌ ๋” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๋ฉด์„œ๋„ ๋ชจ๋ธ์— ๊ฐ ๋‹จ๊ณ„์—์„œ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธด ์ปจํ…์ŠคํŠธ๋ฅผ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์˜ˆ์ œ: ๐Ÿค— Transformers์—์„œ GPT-2๋กœ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity) ๊ณ„์‚ฐํ•˜๊ธฐ[[example-calculating-perplexity-with-gpt2-in-transformers]] ์ด์ œ GPT-2๋กœ ์œ„์˜ ๊ณผ์ •์„ ์‹œ์—ฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` WikiText-2 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๋ช‡ ๊ฐ€์ง€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํฌ๊ธฐ๊ฐ€ ์ž‘๊ณ  ํฌ์›Œ๋“œ ํŒจ์Šค ํ•œ ๋ฒˆ๋งŒ ์ˆ˜ํ–‰ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ๊ฐ€์ ธ์˜ค๊ณ  ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์˜ `labels`๋กœ `input_ids`๋ฅผ ์ „๋‹ฌํ•ด ๊ฐ ํ† ํฐ์— ๋Œ€ํ•œ ํ‰๊ท  ์Œ์˜ ์šฐ๋„ ๊ฐ’์„ ์†์‹ค๋กœ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๊ฐ ๋ฐ˜๋ณต๋งˆ๋‹ค ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๋Š” ํ† ํฐ์ด ๊ฒน์นฉ๋‹ˆ๋‹ค. ์ปจํ…์ŠคํŠธ๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฐ์— ๋Œ€ํ•œ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์ด ์†์‹ค์— ํฌํ•จ๋˜๋Š” ๊ฒƒ์„ ์›ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด๋Ÿฌํ•œ ํ† ํฐ์˜ `input_ids`๋ฅผ `-100`์œผ๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฌด์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ŠคํŠธ๋ผ์ด๋“œ(stride)๋ฅผ `512`๋กœ ์‚ฌ์šฉํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. 
์ฆ‰, ๋ชจ๋ธ์ด ํ•œ ํ† ํฐ์˜ ์กฐ๊ฑด๋ถ€ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ปจํ…์ŠคํŠธ์— ์ตœ์†Œํ•œ 512๊ฐœ์˜ ํ† ํฐ์ด ํฌํ•จ๋˜์–ด์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค (ํ•ด๋‹น ํ† ํฐ ์•ž์— 512๊ฐœ์˜ ํ† ํฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ). ```python import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # ๋งˆ์ง€๋ง‰ ๋ฃจํ”„์˜ ์ŠคํŠธ๋ผ์ด๋“œ ๊ฐ’๊ณผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Œ input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # ์†์‹ค์€ ๋ชจ๋“  ์œ ํšจํ•œ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ํ‰๊ท ๊ฐ’์„ ๊ตฌํ•˜๋Š” ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ(cross entropy)๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. # ๋‚˜์ด๋ธŒ ๋ฒ ์ด์ง€์•ˆ ๋ชจ๋ธ์€ ๋‚ด๋ถ€์ ์œผ๋กœ ๋ ˆ์ด๋ธ”์„ ์™ผ์ชฝ์œผ๋กœ 1๊ฐœ์”ฉ ๋ฐ€๊ธฐ ๋•Œ๋ฌธ์—, (ํƒ€์ผ“ - 1)๊ฐœ ๋งŒํผ์˜ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) ``` ์ŠคํŠธ๋ผ์ด๋“œ๋ฅผ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ ๋™์ผํ•˜๊ฒŒ ์„ค์ •ํ•˜๋ฉด ์œ„์—์„œ ์„ค๋ช…ํ•œ ์ฐจ์„ ์ฑ…์ธ ๋น„์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๊ฒŒ ๋˜์–ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ ๊ฐ’์ด ์ข‹์•„์ง‘๋‹ˆ๋‹ค. ์œ„์˜ ๊ณ„์‚ฐ์„ ํ† ํฐ์ด ๊ฒน์น˜์ง€ ์•Š๋„๋ก `stride = 1024`๋กœ ์„ค์ •ํ•˜๋ฉด PPL์€ `19.44`๋กœ GPT-2 ๋…ผ๋ฌธ์—์„œ ๋ณด๊ณ ๋œ `19.93`๊ณผ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. `stride = 512`๋กœ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋ฉด PPL์€ `16.45`๋กœ ๋–จ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Š” ๋” ์ข‹์€ ์ ์ˆ˜์ผ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ์ž๋™ ํšŒ๊ท€ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๋ฐฉ์‹์œผ๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ[[finetune-a-pretrained-model]] [[open-in-colab]] ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ด์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณ„์‚ฐ ๋น„์šฉ๊ณผ ํƒ„์†Œ๋ฐœ์ž๊ตญ์„ ์ค„์ด๊ณ , ์ฒ˜์Œ๋ถ€ํ„ฐ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ฌ ํ•„์š” ์—†์ด ์ตœ์‹  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋‹ค์–‘ํ•œ ์ž‘์—…์„ ์œ„ํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ ์ˆ˜์ฒœ ๊ฐœ์˜ ๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ์ž์‹ ์˜ ์ž‘์—…๊ณผ ๊ด€๋ จ๋œ ๋ฐ์ดํ„ฐ์…‹์„ ์‚ฌ์šฉํ•ด ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๋ฏธ์„ธ ํŠœ๋‹์ด๋ผ๊ณ  ํ•˜๋Š” ๋งค์šฐ ๊ฐ•๋ ฅํ•œ ํ›ˆ๋ จ ๊ธฐ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹น์‹ ์ด ์„ ํƒํ•œ ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: * ๐Ÿค— Transformers๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ [`Trainer`]. * Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TensorFlow์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. * ๊ธฐ๋ณธ PyTorch์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. <a id='data-processing'></a> ## ๋ฐ์ดํ„ฐ์…‹ ์ค€๋น„[[prepare-a-dataset]] <Youtube id="_BZearw7f0w"/> ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•˜์„ธ์š”. ์ด์ „ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ํ›ˆ๋ จ์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ๋“œ๋ ธ๋Š”๋ฐ, ์ง€๊ธˆ์ด ๋ฐฐ์šธ ๊ฑธ ๋˜์งš์„ ๊ธฐํšŒ์ž…๋‹ˆ๋‹ค! ๋จผ์ € [Yelp ๋ฆฌ๋ทฐ](https://huggingface.co/datasets/yelp_review_full) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. 
It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ์„œ๋กœ ๋‹ค๋ฅธ ๊ธธ์ด์˜ ์‹œํ€€์Šค ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ ์ „๋žต์„ ํฌํ•จํ•˜๋ ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ๐Ÿค— Dataset [`map`](https://huggingface.co/docs/datasets/process.html#map) ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋ฏธ์„ธ ํŠœ๋‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค์–ด ๋ฏธ์„ธ ํŠœ๋‹ ์ž‘์—… ์‹œ๊ฐ„์„ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Train ์—ฌ๊ธฐ์„œ๋ถ€ํ„ฐ๋Š” ์‚ฌ์šฉํ•˜๋ ค๋Š” ํ”„๋ ˆ์ž„์›Œํฌ์— ํ•ด๋‹นํ•˜๋Š” ์„น์…˜์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ค๋ฅธ์ชฝ ์‚ฌ์ด๋“œ๋ฐ”์˜ ๋งํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ด๋™ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํŠน์ • ํ”„๋ ˆ์ž„์›Œํฌ์˜ ๋ชจ๋“  ์ฝ˜ํ…์ธ ๋ฅผ ์ˆจ๊ธฐ๋ ค๋ฉด ํ•ด๋‹น ํ”„๋ ˆ์ž„์›Œํฌ ๋ธ”๋ก์˜ ์˜ค๋ฅธ์ชฝ ์ƒ๋‹จ์— ์žˆ๋Š” ๋ฒ„ํŠผ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค! <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> ## ํŒŒ์ดํ† ์น˜ Trainer๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-with-pytorch-trainer]] ๐Ÿค— Transformers๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ ํ›ˆ๋ จ์— ์ตœ์ ํ™”๋œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•˜์ง€ ์•Š๊ณ ๋„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Trainer`] API๋Š” ๋กœ๊น…(logging), ๊ฒฝ์‚ฌ ๋ˆ„์ (gradient accumulation), ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision) ๋“ฑ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜๊ณผ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. Yelp ๋ฆฌ๋ทฐ [๋ฐ์ดํ„ฐ์…‹ ์นด๋“œ](https://huggingface.co/datasets/yelp_review_full#data-fields)์—์„œ 5๊ฐœ์˜ ๋ ˆ์ด๋ธ”์ด ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` <Tip> ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜ ์ค‘ ์ผ๋ถ€๊ฐ€ ์‚ฌ์šฉ๋˜์ง€ ์•Š๊ณ  ์ผ๋ถ€ ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ํ‘œ์‹œ๋œ๋‹ค๋Š” ๊ฒฝ๊ณ ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๊ฑฑ์ •๋งˆ์„ธ์š”. ์ด๊ฒƒ์€ ์˜ฌ๋ฐ”๋ฅธ ๋™์ž‘์ž…๋‹ˆ๋‹ค! ์‚ฌ์ „ ํ•™์Šต๋œ BERT ๋ชจ๋ธ์˜ ํ—ค๋“œ๋Š” ํ๊ธฐ๋˜๊ณ  ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ง€์‹์œผ๋กœ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ๋ฏธ์„ธ ํŠœ๋‹ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ### ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํ›ˆ๋ จ[[training-hyperparameters]] ๋‹ค์Œ์œผ๋กœ ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜์„ ํ™œ์„ฑํ™”ํ•˜๊ธฐ ์œ„ํ•œ ํ”Œ๋ž˜๊ทธ๋ฅผ ํฌํ•จํ•˜๋Š” [`TrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ [ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)๋กœ ์‹œ์ž‘ํ•˜์ง€๋งŒ, ์ž์œ ๋กญ๊ฒŒ ์‹คํ—˜ํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„๋“ค์—๊ฒŒ ๋งž๋Š” ์ตœ์ ์˜ ์„ค์ •์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
ํ›ˆ๋ จ์—์„œ ์ฒดํฌํฌ์ธํŠธ(checkpoints)๋ฅผ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] [`Trainer`]๋Š” ํ›ˆ๋ จ ์ค‘์— ๋ชจ๋ธ ์„ฑ๋Šฅ์„ ์ž๋™์œผ๋กœ ํ‰๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ณด๊ณ ํ•  ํ•จ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [๐Ÿค— Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” [`evaluate.load`](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ•จ์ˆ˜๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ [`accuracy`]ํ•จ์ˆ˜๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค (์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import numpy as np >>> import evaluate >>> metric = evaluate.load("accuracy") ``` `metric`์—์„œ [`~evaluate.compute`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก์„ `compute`์— ์ „๋‹ฌํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ์€ ๋กœ์ง“์œผ๋กœ ๋ฐ˜ํ™˜ํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”): ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` ๋ฏธ์„ธ ํŠœ๋‹ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์ธ์ˆ˜์— `evaluation_strategy` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch") ``` ### ํ›ˆ๋ จ ํ•˜๊ธฐ[[trainer]] ๋ชจ๋ธ, ํ›ˆ๋ จ ์ธ์ˆ˜, ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹, ํ‰๊ฐ€ ํ•จ์ˆ˜๊ฐ€ ํฌํ•จ๋œ [`Trainer`] ๊ฐ์ฒด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` ๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.train() ``` </pt> <tf> <a id='keras'></a> <Youtube id="rnTGBy2ax1c"/> ## Keras๋กœ ํ…์„œํ”Œ๋กœ์šฐ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-a-tensorflow-model-with-keras]] Keras API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ### Keras์šฉ ๋ฐ์ดํ„ฐ ๋กœ๋“œ[[loading-data-for-keras]] Keras API๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋ ค๋ฉด ๋ฐ์ดํ„ฐ์…‹์„ Keras๊ฐ€ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ž‘์€ ๊ฒฝ์šฐ, ์ „์ฒด๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ Keras๋กœ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ๋ณต์žกํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์ „์— ๋จผ์ € ์ด ์ž‘์—…์„ ์‹œ๋„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [GLUE ๋ฒค์น˜๋งˆํฌ](https://huggingface.co/datasets/glue)์˜ CoLA ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ๋ฐ”์ด๋„ˆ๋ฆฌ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ž‘์—…์ด๋ฏ€๋กœ ์ง€๊ธˆ์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py from datasets import load_dataset dataset = load_dataset("glue", "cola") dataset = dataset["train"] # Just take the training split for now ``` ๋‹ค์Œ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์ด๋ฏธ 0๊ณผ 1๋กœ ๋œ ๋ฆฌ์ŠคํŠธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ† ํฐํ™”ํ•˜์ง€ ์•Š๊ณ  ๋ฐ”๋กœ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
```py
import numpy as np

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ ๋กœ๋“œ, [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)ํ•ฉ๋‹ˆ๋‹ค:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))

model.fit(tokenized_data, labels)
```

<Tip>

๋ชจ๋ธ์„ `compile()`ํ•  ๋•Œ ์†์‹ค ์ธ์ˆ˜๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค! ์ด ์ธ์ˆ˜๋ฅผ ๋น„์›Œ๋‘๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ชจ๋ธ์€ ์ž‘์—…๊ณผ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์ ํ•ฉํ•œ ์†์‹ค์„ ์ž๋™์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด ์–ธ์ œ๋“ ์ง€ ์ง์ ‘ ์†์‹ค์„ ์ง€์ •ํ•˜์—ฌ ์ด๋ฅผ ์žฌ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

</Tip>

์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์†Œ๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์—์„œ๋Š” ์ž˜ ์ž‘๋™ํ•˜์ง€๋งŒ, ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™œ ๊ทธ๋Ÿด๊นŒ์š”? ํ† ํฐํ™”๋œ ๋ฐฐ์—ด๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ฉ”๋ชจ๋ฆฌ์— ์™„์ „ํžˆ ๋กœ๋“œํ•˜๊ณ  NumPy๋Š” "๋“ค์ญ‰๋‚ ์ญ‰ํ•œ" ๋ฐฐ์—ด์„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋ชจ๋“  ํ† ํฐํ™”๋œ ์ƒ˜ํ”Œ์„ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๊ฐ€์žฅ ๊ธด ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๋งŒํผ ํŒจ๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐฐ์—ด์ด ํ›จ์”ฌ ๋” ์ปค์ง€๊ณ  ์ด ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ธํ•ด ํ•™์Šต ์†๋„๋„ ๋Š๋ ค์ง‘๋‹ˆ๋‹ค!

### ๋ฐ์ดํ„ฐ๋ฅผ tf.data.Dataset์œผ๋กœ ๋กœ๋“œํ•˜๊ธฐ[[loading-data-as-a-tfdatadataset]]

ํ•™์Šต ์†๋„๊ฐ€ ๋Š๋ ค์ง€๋Š” ๊ฒƒ์„ ํ”ผํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ `tf.data.Dataset`์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด `tf.data` ํŒŒ์ดํ”„๋ผ์ธ์„ ์ง์ ‘ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์ด ์ž‘์—…์„ ๊ฐ„ํŽธํ•˜๊ฒŒ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

- [`~TFPreTrainedModel.prepare_tf_dataset`]: ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ์ด ๋ฐฉ๋ฒ•์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋ฉ”์„œ๋“œ์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์„ ๊ฒ€์‚ฌํ•˜์—ฌ ๋ชจ๋ธ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์—ด์„ ์ž๋™์œผ๋กœ ํŒŒ์•…ํ•˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ๋ฒ„๋ ค์„œ ๋” ๋‹จ์ˆœํ•˜๊ณ  ์„ฑ๋Šฅ์ด ์ข‹์€ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
- [`~datasets.Dataset.to_tf_dataset`]: ์ด ๋ฐฉ๋ฒ•์€ ์ข€ ๋” ๋‚ฎ์€ ์ˆ˜์ค€์˜ ๋ฐฉ๋ฒ•์œผ๋กœ, ํฌํ•จํ•  `columns`์™€ `label_cols`๋ฅผ ์ง์ ‘ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์–ด ๋ฐ์ดํ„ฐ์…‹์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ •ํ™•ํžˆ ์ œ์–ดํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค.

[`~TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋‹ค์Œ ์ฝ”๋“œ ์ƒ˜ํ”Œ๊ณผ ๊ฐ™์ด ํ† ํฌ๋‚˜์ด์ € ์ถœ๋ ฅ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์—ด๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["text"])


dataset = dataset.map(tokenize_dataset)
```

ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ฐ์ดํ„ฐ์…‹์€ ๊ธฐ๋ณธ์ ์œผ๋กœ ๋””์Šคํฌ์— ์ €์žฅ๋˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ๋Š˜๋ฆฌ์ง€ ์•Š๋Š”๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”! ์—ด์ด ์ถ”๊ฐ€๋˜๋ฉด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ฐฐ์น˜๋ฅผ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๊ณ  ๊ฐ ๋ฐฐ์น˜์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ํŒจ๋”ฉ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ํฌ๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
```py
>>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
```

์œ„์˜ ์ฝ”๋“œ ์ƒ˜ํ”Œ์—์„œ๋Š” ๋ฐฐ์น˜๊ฐ€ ๋กœ๋“œ๋  ๋•Œ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ๋„๋ก `prepare_tf_dataset`์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์˜ ๋ชจ๋“  ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๊ฐ™๊ณ  ํŒจ๋”ฉ์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ์ธ์ˆ˜๋ฅผ ๊ฑด๋„ˆ๋›ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ˜ํ”Œ์„ ์ฑ„์šฐ๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋ณต์žกํ•œ ์ž‘์—…(์˜ˆ: ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด์˜ ํ† ํฐ ์†์ƒ ๋ชจ๋ธ๋ง)์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ํ† ํฐ์„ ์†์ƒ์‹œ์ผœ์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ, `collate_fn` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ˜ํ”Œ ๋ชฉ๋ก์„ ๋ฐฐ์น˜๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์›ํ•˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•  ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์˜ˆ์‹œ](https://github.com/huggingface/transformers/tree/main/examples) ๋˜๋Š” [๋…ธํŠธ๋ถ](https://huggingface.co/docs/transformers/notebooks)์„ ์ฐธ์กฐํ•˜์—ฌ ์ด ์ ‘๊ทผ ๋ฐฉ์‹์ด ์‹ค์ œ๋กœ ์ž‘๋™ํ•˜๋Š” ๋ชจ์Šต์„ ํ™•์ธํ•˜์„ธ์š”.

`tf.data.Dataset`์„ ์ƒ์„ฑํ•œ ํ›„์—๋Š” ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ํ›ˆ๋ จ(fit)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
model.compile(optimizer=Adam(3e-5))

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## ๊ธฐ๋ณธ ํŒŒ์ดํ† ์น˜๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-in-native-pytorch]]

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`]๋Š” ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒ˜๋ฆฌํ•˜๋ฉฐ ํ•œ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž์˜ ๊ฒฝ์šฐ, ๊ธฐ๋ณธ PyTorch์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ์‹œ์ ์—์„œ ๋…ธํŠธ๋ถ์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜๊ฑฐ๋‚˜ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ™•๋ณดํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
del model
del trainer
torch.cuda.empty_cache()
```

๋‹ค์Œ์œผ๋กœ, `tokenized_datasets`๋ฅผ ์ˆ˜๋™์œผ๋กœ ํ›„์ฒ˜๋ฆฌํ•˜์—ฌ ํ›ˆ๋ จ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค.

1. ๋ชจ๋ธ์ด ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ํ—ˆ์šฉํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ `text` ์—ด์„ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
```

2. ๋ชจ๋ธ์—์„œ ์ธ์ˆ˜์˜ ์ด๋ฆ„์ด `labels`๋กœ ์ง€์ •๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋ฏ€๋กœ `label` ์—ด์˜ ์ด๋ฆ„์„ `labels`๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```

3.
๋ฐ์ดํ„ฐ์…‹์˜ ํ˜•์‹์„ List ๋Œ€์‹  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets.set_format("torch") ``` ๊ทธ๋ฆฌ๊ณ  ์•ž์„œ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ฐ์ดํ„ฐ์…‹์˜ ๋” ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์ƒ์„ฑํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ • ์†๋„๋ฅผ ๋†’์ž…๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader[[dataloader]] ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹์— ๋Œ€ํ•œ 'DataLoader'๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜๋ฅผ ๋ฐ˜๋ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` ์˜ˆ์ธก์„ ์œ„ํ•œ ๋ ˆ์ด๋ธ” ๊ฐœ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` ### ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ[[optimizer-and-learning-rate-scheduler]] ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ ์ œ๊ณตํ•˜๋Š” [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` [`Trainer`]์—์„œ ๊ธฐ๋ณธ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ 'device'๋ฅผ ์ง€์ •ํ•˜์—ฌ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด CPU์—์„œ ํ›ˆ๋ จํ•˜๋ฉฐ ๋ช‡ ๋ถ„์ด ์•„๋‹Œ ๋ช‡ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> [Colaboratory](https://colab.research.google.com/) ๋˜๋Š” [SageMaker StudioLab](https://studiolab.sagemaker.aws/)๊ณผ ๊ฐ™์€ ํ˜ธ์ŠคํŒ… ๋…ธํŠธ๋ถ์ด ์—†๋Š” ๊ฒฝ์šฐ ํด๋ผ์šฐ๋“œ GPU์— ๋ฌด๋ฃŒ๋กœ ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์ด์ œ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿฅณ ### ํ›ˆ๋ จ ๋ฃจํ”„[[training-loop]] ํ›ˆ๋ จ ์ง„ํ–‰ ์ƒํ™ฉ์„ ์ถ”์ ํ•˜๋ ค๋ฉด [tqdm](https://tqdm.github.io/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠธ๋ ˆ์ด๋‹ ๋‹จ๊ณ„ ์ˆ˜์— ์ง„ํ–‰๋ฅ  ํ‘œ์‹œ์ค„์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] [`Trainer`]์— ํ‰๊ฐ€ ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ฐฉ๋ฒ•๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•  ๋•Œ๋„ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฒˆ์—๋Š” ๊ฐ ์—ํฌํฌ๊ฐ€ ๋๋‚  ๋•Œ๋งˆ๋‹ค ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๋ณด๊ณ ํ•˜๋Š” ๋Œ€์‹ , [`~evaluate.add_batch`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ๋ฐฐ์น˜๋ฅผ ๋ˆ„์ ํ•˜๊ณ  ๋งจ ๋งˆ์ง€๋ง‰์— ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. 
```py >>> import evaluate >>> metric = evaluate.load("accuracy") >>> model.eval() >>> for batch in eval_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... outputs = model(**batch) ... logits = outputs.logits ... predictions = torch.argmax(logits, dim=-1) ... metric.add_batch(predictions=predictions, references=batch["labels"]) >>> metric.compute() ``` </pt> </frameworkcontent> <a id='additional-resources'></a> ## ์ถ”๊ฐ€ ์ž๋ฃŒ[[additional-resources]] ๋” ๋งŽ์€ ๋ฏธ์„ธ ํŠœ๋‹ ์˜ˆ์ œ๋Š” ๋‹ค์Œ์„ ์ฐธ์กฐํ•˜์„ธ์š”: - [๐Ÿค— Trnasformers ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ์ผ๋ฐ˜์ ์ธ NLP ์ž‘์—…์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - [๐Ÿค— Transformers ๋…ธํŠธ๋ถ](notebooks)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ํŠน์ • ์ž‘์—…์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ๋…ธํŠธ๋ถ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/sagemaker.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Amazon SageMaker์—์„œ ํ•™์Šต ์‹คํ–‰ํ•˜๊ธฐ[[run-training-on-amazon-sagemaker]] ๋ฌธ์„œ๊ฐ€ [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker)๋กœ ์ด๋™๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€๋Š” `transformers` 5.0 ์—์„œ ์‚ญ์ œ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ### ๋ชฉ์ฐจ[[table-of-content]] - [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train) - [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference) - [Frequently Asked Questions](https://huggingface.co/docs/sagemaker/faq)
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/question_answering.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์งˆ์˜ ์‘๋‹ต(Question Answering)[[question-answering]] [[open-in-colab]] <Youtube id="ajPx5LwJD-I"/> ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. Alexa, Siri ๋˜๋Š” Google๊ณผ ๊ฐ™์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ๋‚ ์”จ๊ฐ€ ์–ด๋–ค์ง€ ๋ฌผ์–ด๋ณธ ์ ์ด ์žˆ๋‹ค๋ฉด ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด๋ณธ ์ ์ด ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต: ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์—์„œ ๋‹ต๋ณ€์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ์ (Abstractive) ์งˆ์˜ ์‘๋‹ต: ๋ฌธ๋งฅ์—์„œ ์งˆ๋ฌธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋‹ตํ•˜๋Š” ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฐฉ๋ฒ•๋“ค์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. 1. ์ถ”์ถœ์  ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด [SQuAD](https://huggingface.co/datasets/squad) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค. 
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•ด์„œ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-squad-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ํ›ˆ๋ จํ•˜๋ฉฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> squad = load_dataset("squad", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ถ„ํ• ๋œ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ค๋‹ˆ๋‹ค: ```py >>> squad = squad.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ ๋‚˜์„œ ์˜ˆ์‹œ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> squad["train"][0] {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. 
Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame' } ``` ์ด ์ค‘์—์„œ ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `answers`: ๋‹ต์•ˆ ํ† ํฐ์˜ ์‹œ์ž‘ ์œ„์น˜์™€ ๋‹ต์•ˆ ํ…์ŠคํŠธ - `context`: ๋ชจ๋ธ์ด ๋‹ต์„ ์ถ”์ถœํ•˜๋Š”๋ฐ ํ•„์š”ํ•œ ๋ฐฐ๊ฒฝ ์ง€์‹ - `question`: ๋ชจ๋ธ์ด ๋‹ตํ•ด์•ผ ํ•˜๋Š” ์งˆ๋ฌธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="qgaM0weJHpA"/> ๋‹ค์Œ ๋‹จ๊ณ„์—์„œ๋Š” `question` ๋ฐ `context` ํ•ญ๋ชฉ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์™€ ๊ด€๋ จํ•ด์„œ ํŠนํžˆ ์œ ์˜ํ•ด์•ผํ•  ๋ช‡ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€ ์˜ˆ์ œ์—๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์ดˆ๊ณผํ•˜๋Š” ๋งค์šฐ ๊ธด `context`๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธด ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด์„œ๋Š”, `truncation="only_second"`๋กœ ์„ค์ •ํ•ด `context`๋งŒ ์ž˜๋ผ๋‚ด๋ฉด ๋ฉ๋‹ˆ๋‹ค. 2. ๊ทธ ๋‹ค์Œ, `return_offset_mapping=True`๋กœ ์„ค์ •ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ์ข…๋ฃŒ ์œ„์น˜๋ฅผ ์›๋ž˜์˜ `context`์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 3. ๋งคํ•‘์„ ์™„๋ฃŒํ•˜๋ฉด, ์ด์ œ ๋‹ต๋ณ€์—์„œ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํ”„์…‹์˜ ์–ด๋Š ๋ถ€๋ถ„์ด `question`๊ณผ `context`์— ํ•ด๋‹นํ•˜๋Š”์ง€ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก [`~tokenizers.Encoding.sequence_ids`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ์€ `answer`์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ž˜๋ผ๋‚ด์„œ `context`์— ๋งคํ•‘ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... questions = [q.strip() for q in examples["question"]] ... inputs = tokenizer( ... questions, ... examples["context"], ... max_length=384, ... truncation="only_second", ... return_offsets_mapping=True, ... padding="max_length", ... ) ... offset_mapping = inputs.pop("offset_mapping") ... answers = examples["answers"] ... start_positions = [] ... end_positions = [] ... for i, offset in enumerate(offset_mapping): ... answer = answers[i] ... start_char = answer["answer_start"][0] ... end_char = answer["answer_start"][0] + len(answer["text"][0]) ... sequence_ids = inputs.sequence_ids(i) ... # Find the start and end of the context ... idx = 0 ... while sequence_ids[idx] != 1: ... idx += 1 ... context_start = idx ... while sequence_ids[idx] == 1: ... idx += 1 ... context_end = idx - 1 ... # If the answer is not fully inside the context, label it (0, 0) ... if offset[context_start][0] > end_char or offset[context_end][1] < start_char: ... start_positions.append(0) ... end_positions.append(0) ... else: ... # Otherwise it's the start and end token positions ... idx = context_start ... while idx <= context_end and offset[idx][0] <= start_char: ... idx += 1 ... start_positions.append(idx - 1) ... idx = context_end ... while idx >= context_start and offset[idx][1] >= end_char: ... idx -= 1 ... end_positions.append(idx + 1) ... inputs["start_positions"] = start_positions ... inputs["end_positions"] = end_positions ... 
return inputs ``` ๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋“ค์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ๋ชจ๋‘ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์ด์šฉํ•ด ์˜ˆ์‹œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(data collator)์™€ ๋‹ฌ๋ฆฌ, [`DefaultDataCollator`]๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> <tf> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer >>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ผญ ํ•„์š”ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir` ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•ด์„œ ์ด ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_qa_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_squad["train"], ... eval_dataset=tokenized_squad["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋งค์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•ด์„œ ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๊ณต์œ ํ•ด์ฃผ์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> TensorFlow๋ฅผ ์ด์šฉํ•œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 2 >>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs >>> optimizer, schedule = create_optimizer( ... init_lr=2e-5, ... num_warmup_steps=0, ... num_train_steps=total_train_steps, ... 
)
```

๊ทธ ๋‹ค์Œ [`TFAutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋กœ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•  ๋ฐฉ๋ฒ•์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ๊ฒฝ๋กœ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_qa_model",
...     tokenizer=tokenizer,
... )
```

๋“œ๋””์–ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์„ค์ •ํ•œ ํ›„ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

์งˆ์˜ ์‘๋‹ต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์‹œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ํ‰๊ฐ€[[evaluate]]

์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. [`Trainer`]๋Š” ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์‹œ๊ฐ„์— ์—ฌ์œ ๊ฐ€ ์žˆ๊ณ  ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๊ด€์‹ฌ์ด ์žˆ๋‹ค๋ฉด ๐Ÿค— Hugging Face Course์˜ [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) ์ฑ•ํ„ฐ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

## ์ถ”๋ก [[inference]]

์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์งˆ๋ฌธ๊ณผ ๋ชจ๋ธ์ด ์˜ˆ์ธกํ•˜๊ธฐ ์›ํ•˜๋Š” ๋ฌธ๋งฅ(context)๋ฅผ ์ƒ๊ฐํ•ด๋ณด์„ธ์š”:

```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```

์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด์„œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model") >>> question_answerer(question=question, context=context) {'score': 0.2058267742395401, 'start': 10, 'end': 95, 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'} ``` ์›ํ•œ๋‹ค๋ฉด `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ง์ ‘ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") >>> inputs = tokenizer(question, context, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering >>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") >>> with torch.no_grad(): ... outputs = model(**inputs) ``` ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() ``` ์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") >>> inputs = tokenizer(question, text, return_tensors="tf") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForQuestionAnswering >>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") >>> outputs = model(**inputs) ``` ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) ``` ์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค: ```py >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' ``` </tf> </frameworkcontent>
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/language_modeling.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง[[causal-language-modeling]] [[open-in-colab]] ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์œผ๋กœ ๋‚˜๋‰ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ž์ฃผ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋˜ ์ฐฝ์˜์ ์ธ ๋ฐฉํ–ฅ์œผ๋กœ ์‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ์‚ฌ์šฉํ•˜๋ฉฐ ์žฌ๋ฏธ์žˆ๋Š” ํƒ๊ตฌ๋ฅผ ํ•ด๋ณด๊ฑฐ๋‚˜, Copilot ๋˜๋Š” CodeParrot์™€ ๊ฐ™์€ ์ง€๋Šฅํ˜• ์ฝ”๋”ฉ ์–ด์‹œ์Šคํ„ดํŠธ์˜ ๊ธฐ๋ฐ˜์ด ๋˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. <Youtube id="Vpjb1lu0MDk"/> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ํ† ํฐ ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์™ผ์ชฝ์˜ ํ† ํฐ์—๋งŒ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ฏธ๋ž˜์˜ ํ† ํฐ์„ ๋ณผ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์˜ ์˜ˆ๋กœ GPT-2๊ฐ€ ์žˆ์ฃ . ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค: 1. [DistilGPT2](https://huggingface.co/distilgpt2) ๋ชจ๋ธ์„ [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ [r/askscience](https://www.reddit.com/r/askscience/) ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ๋ฏธ์„ธ ์กฐ์ • 2. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉ <Tip> ์ด ์•ˆ๋‚ด์„œ์˜ ๋‹จ๊ณ„์™€ ๋™์ผํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋‹ค์Œ ์•„ํ‚คํ…์ฒ˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์„ ํƒํ•˜์„ธ์š”: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์•Œ๋ฆผ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ r/askscience์˜ ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์ธ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๊ธฐ ์ „์—, ์‹คํ—˜ํ•ด๋ด„์œผ๋กœ์จ ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks` ๋ถ„ํ• ์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. 
I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ๋งŒ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ์žฅ์ ์€ ๋ ˆ์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ์–ด *์ž์ฒด๊ฐ€* ๋ ˆ์ด๋ธ”์ž…๋‹ˆ๋‹ค. (์ด๋ ‡๊ฒŒ ๋ ˆ์ด๋ธ”์„ ์ œ๊ณตํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ํ•™์Šต์„ ๋น„์ง€๋„ ํ•™์Šต์ด๋ผ๊ณ  ์ผ์ปซ์Šต๋‹ˆ๋‹ค) ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="ma1TrR7gE7I"/> ๋‹ค์Œ ๋‹จ๊ณ„๋Š” `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilGPT2 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `text` ํ•„๋“œ๋Š” `answers` ์•„๋ž˜์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘์ฒฉ ๊ตฌ์กฐ์—์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? 
And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” ์ด์ œ `answers` ์ ‘๋‘์‚ฌ๋ฅผ ๊ฐ€์ง„ ๋ณ„๋„์˜ ์—ด๋กœ ๋‚˜๋‰˜์—ˆ์œผ๋ฉฐ, `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹ , ๋จผ์ € ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๊บผ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ฌธ์ž์—ด ๋ฆฌ์ŠคํŠธ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ณ , `num_proc`๋ฅผ ์ฆ๊ฐ€์‹œ์ผœ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š” ์—†๋Š” ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์‹œํ€€์Šค๊ฐ€ ํ† ํฐํ™”๋์ง€๋งŒ, ์ผ๋ถ€ ์‹œํ€€์Šค๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ , - `block_size`๋กœ ์ •์˜๋œ ๊ธธ์ด๋กœ ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์งง์€ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ GPU RAM์„ ๊ณ ๋ คํ•ด ์ถฉ๋ถ„ํžˆ ์งง์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... # customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค, ์ทจํ•ฉ ๋‹จ๊ณ„์—์„œ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) ``` </pt> <tf> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. 
์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-with-pytorch-trainer)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") ``` ์—ฌ๊ธฐ๊นŒ์ง€ ์ง„ํ–‰ํ•˜๋ฉด ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ, ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. (๋จผ์ € Hugging Face์— ๋กœ๊ทธ์ธ ํ•„์ˆ˜) `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 2. ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_eli5_clm-model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=lm_dataset["train"], ... eval_dataset=lm_dataset["test"], ... data_collator=data_collator, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํผํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import math >>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 49.61 ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋ฐ ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... 
) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ๊ตฌ์„ฑํ•˜์„ธ์š”. Transformers ๋ชจ๋ธ์€ ๋ชจ๋‘ ๊ธฐ๋ณธ์ ์ธ ์ž‘์—… ๊ด€๋ จ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ์›ํ•œ๋‹ค๋ฉด ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # ๋ณ„๋„๋กœ loss ์ธ์ž๋ฅผ ๋„ฃ์ง€ ์•Š์•˜์–ด์š”! ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_clm-model", ... tokenizer=tokenizer, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋‘๊ฐ€ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹นํ•˜๋Š” [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ƒ์„ฑํ•  ํ…์ŠคํŠธ๋ฅผ ์œ„ํ•œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ณด์„ธ์š”: ```py >>> prompt = "Somatic hypermutation allows the immune system to" ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ๊ฐ„๋‹จํžˆ ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model") >>> generator(prompt) [{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}] ``` <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="pt").input_ids ``` [`~transformers.generation_utils.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
```py >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="tf").input_ids ``` [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์š”์•ฝ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for'] ``` </tf> </frameworkcontent>
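
์ฐธ๊ณ ๋กœ, ํ›ˆ๋ จ ์„น์…˜์—์„œ ๊ณ„์‚ฐํ•œ ํผํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์ž„์˜์˜ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์ง์ ‘ ๊ณ„์‚ฐํ•ด ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” PyTorch ๊ธฐ์ค€์˜ ์ตœ์†Œํ•œ์˜ ์Šค์ผ€์น˜๋กœ, ์œ„์—์„œ ๋ฏธ์„ธ ์กฐ์ •ํ•œ `my_awesome_eli5_clm-model` ์ฒดํฌํฌ์ธํŠธ์™€ ์ž„์˜์˜ ์˜ˆ์‹œ ๋ฌธ์žฅ์„ ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์€ `labels`๋กœ `input_ids`๋ฅผ ๊ทธ๋Œ€๋กœ ๋ฐ›์•„ ๋‚ด๋ถ€์—์„œ ํ•œ ์นธ ์‹œํ”„ํŠธํ•œ ๋’ค ์†์‹ค์„ ๊ณ„์‚ฐํ•˜๋ฏ€๋กœ, ๊ทธ ์†์‹ค์— ์ง€์ˆ˜๋ฅผ ์ทจํ•˜๋ฉด ํผํ”Œ๋ ‰์„œํ‹ฐ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค:

```py
>>> import math

>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")

>>> text = "Somatic hypermutation allows the immune system to adapt."  # ์ž„์˜์˜ ์˜ˆ์‹œ ๋ฌธ์žฅ์ž…๋‹ˆ๋‹ค
>>> inputs = tokenizer(text, return_tensors="pt")

>>> # labels๋กœ input_ids๋ฅผ ๊ทธ๋Œ€๋กœ ์ „๋‹ฌํ•˜๋ฉด ๋ชจ๋ธ์ด ๋‚ด๋ถ€์—์„œ ํ•œ ์นธ ์‹œํ”„ํŠธํ•ด ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค
>>> with torch.no_grad():
...     loss = model(**inputs, labels=inputs["input_ids"]).loss

>>> print(f"Perplexity: {math.exp(loss.item()):.2f}")
```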
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/document_question_answering.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) [[document_question_answering]] [[open-in-colab]] ๋ฌธ์„œ ์‹œ๊ฐ์  ์งˆ์˜ ์‘๋‹ต(Document Visual Question Answering)์ด๋ผ๊ณ ๋„ ํ•˜๋Š” ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering)์€ ๋ฌธ์„œ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€์„ ์ฃผ๋Š” ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ์ด ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์˜ ์กฐํ•ฉ์ด๊ณ , ์ถœ๋ ฅ์€ ์ž์—ฐ์–ด๋กœ ๋œ ๋‹ต๋ณ€์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ํ…์ŠคํŠธ, ๋‹จ์–ด์˜ ์œ„์น˜(๋ฐ”์šด๋”ฉ ๋ฐ•์Šค), ์ด๋ฏธ์ง€ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: - [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut)์„ ์‚ฌ์šฉํ•ด [LayoutLMv2](../model_doc/layoutlmv2) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3) <!--End of the generated tip--> </Tip> LayoutLMv2๋Š” ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰์ธต ์œ„์— ์งˆ์˜ ์‘๋‹ต ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ๋ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์˜ˆ์ธกํ•จ์œผ๋กœ์จ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ ์งˆ๋ฌธ์— ๋‹ตํ•˜๋Š” ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ์ถ”์ถœํ˜• ์งˆ์˜ ์‘๋‹ต(Extractive question answering)์œผ๋กœ ๋ฌธ์ œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ๋งฅ์€ OCR ์—”์ง„์˜ ์ถœ๋ ฅ์—์„œ ๊ฐ€์ ธ์˜ค๋ฉฐ, ์—ฌ๊ธฐ์„œ๋Š” Google์˜ Tesseract๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. LayoutLMv2๋Š” detectron2, torchvision ๋ฐ ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ```bash pip install -q transformers datasets ``` ```bash pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ``` ```bash sudo apt install tesseract-ocr pip install -q pytesseract ``` ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋“ค์„ ๋ชจ๋‘ ์„ค์น˜ํ•œ ํ›„ ๋Ÿฐํƒ€์ž„์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋‹น์‹ ์˜ ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•ด์„œ ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•˜์„ธ์š”. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์‹คํ–‰๋˜๋ฉด, ๋กœ๊ทธ์ธ์„ ์œ„ํ•ด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ช‡ ๊ฐ€์ง€ ์ „์—ญ ๋ณ€์ˆ˜๋ฅผ ์ •์˜ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> model_checkpoint = "microsoft/layoutlmv2-base-uncased" >>> batch_size = 4 ``` ## ๋ฐ์ดํ„ฐ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๐Ÿค— Hub์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ์ „์ฒ˜๋ฆฌ๋œ DocVQA์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
DocVQA์˜ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17)์— ๊ฐ€์ž… ํ›„ ๋‹ค์šด๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ–ˆ๋‹ค๋ฉด, ์ด ๊ฐ€์ด๋“œ๋ฅผ ๊ณ„์† ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด [๐Ÿค— dataset์— ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•](https://huggingface.co/docs/datasets/loading#local-and-remote-files)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from datasets import load_dataset >>> dataset = load_dataset("nielsr/docvqa_1200_examples") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ, ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์ด๋ฏธ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌด์ž‘์œ„๋กœ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด๋ฉด์„œ ํŠน์„ฑ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ```py >>> dataset["train"].features ``` ๊ฐ ํ•„๋“œ๊ฐ€ ๋‚˜ํƒ€๋‚ด๋Š” ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * `id`: ์˜ˆ์ œ์˜ id * `image`: ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” PIL.Image.Image ๊ฐ์ฒด * `query`: ์งˆ๋ฌธ ๋ฌธ์ž์—ด - ์—ฌ๋Ÿฌ ์–ธ์–ด์˜ ์ž์—ฐ์–ด๋กœ ๋œ ์งˆ๋ฌธ * `answers`: ์‚ฌ๋žŒ์ด ์ฃผ์„์„ ๋‹จ ์ •๋‹ต ๋ฆฌ์ŠคํŠธ * `words` and `bounding_boxes`: OCR์˜ ๊ฒฐ๊ณผ๊ฐ’๋“ค์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • * `answer`: ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ์ผ์น˜ํ•˜๋Š” ๋‹ต๋ณ€์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • ์˜์–ด๋กœ ๋œ ์งˆ๋ฌธ๋งŒ ๋‚จ๊ธฐ๊ณ  ๋‹ค๋ฅธ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ํฌํ•จํ•˜๋Š” `answer` ํŠน์„ฑ์„ ์‚ญ์ œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ฃผ์„ ์ž‘์„ฑ์ž๊ฐ€ ์ œ๊ณตํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ฒซ ๋ฒˆ์งธ ๋‹ต๋ณ€์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ๋˜๋Š” ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"]) >>> updated_dataset = updated_dataset.map( ... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"] ... ) ``` ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” LayoutLMv2 ์ฒดํฌํฌ์ธํŠธ๋Š” `max_position_embeddings = 512`๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค(์ด ์ •๋ณด๋Š” [์ฒดํฌํฌ์ธํŠธ์˜ `config.json` ํŒŒ์ผ](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค). ๋ฐ”๋กœ ์˜ˆ์ œ๋ฅผ ์ž˜๋ผ๋‚ผ ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๊ธด ๋ฌธ์„œ์˜ ๋์— ๋‹ต๋ณ€์ด ์žˆ์–ด ์ž˜๋ฆฌ๋Š” ์ƒํ™ฉ์„ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ์—ฌ๊ธฐ์„œ๋Š” ์ž„๋ฒ ๋”ฉ์ด 512๋ณด๋‹ค ๊ธธ์–ด์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์ œ๋ฅผ ์ œ๊ฑฐํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์„œ๊ฐ€ ๊ธด ๊ฒฝ์šฐ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค - ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ์‹ถ์œผ๋ฉด ์ด [๋…ธํŠธ๋ถ](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ``` ์ด ์‹œ์ ์—์„œ ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ OCR ํŠน์„ฑ๋„ ์ œ๊ฑฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. OCR ํŠน์„ฑ์€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์œผ๋กœ, ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ ์š”๊ตฌ ์‚ฌํ•ญ๊ณผ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด ํŠน์„ฑ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ผ๋ถ€ ์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ณธ ๋ฐ์ดํ„ฐ์— [`LayoutLMv2Processor`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ OCR ๋ฐ ํ† ํฐํ™”๋ฅผ ๋ชจ๋‘ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด, [`LayoutLMv2` model documentation](../model_doc/layoutlmv2)์—์„œ ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ ํฌ๋งท์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ```py >>> updated_dataset = updated_dataset.remove_columns("words") >>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ฐ์ดํ„ฐ ํƒ์ƒ‰์„ ์™„๋ฃŒํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py >>> updated_dataset["train"][11]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"/> </div> ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocess-the-data]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์ด๋ฉฐ, ๊ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ์ž…๋ ฅ์ด ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์ „์ฒ˜๋ฆฌ ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•œ [`LayoutLMv2Processor`]๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ``` ### ๋ฌธ์„œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ [[preprocessing-document-images]] ๋จผ์ €, ํ”„๋กœ์„ธ์„œ์˜ `image_processor`๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ 224x224๋กœ ์กฐ์ •ํ•˜๊ณ  ์ƒ‰์ƒ ์ฑ„๋„์˜ ์ˆœ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ™•์ธํ•œ ํ›„ ๋‹จ์–ด์™€ ์ •๊ทœํ™”๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ์‚ฌ์šฉํ•ด OCR๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์šฐ๋ฆฌ๊ฐ€ ํ•„์š”ํ•œ ๊ฒƒ๊ณผ ๊ธฐ๋ณธ๊ฐ’์€ ์™„์ „ํžˆ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๊ณ  OCR์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ```py >>> image_processor = processor.image_processor >>> def get_ocr_words_and_boxes(examples): ... images = [image.convert("RGB") for image in examples["image"]] ... encoded_inputs = image_processor(images) ... examples["image"] = encoded_inputs.pixel_values ... examples["words"] = encoded_inputs.words ... examples["boxes"] = encoded_inputs.boxes ... return examples ``` ์ด ์ „์ฒ˜๋ฆฌ๋ฅผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋น ๋ฅด๊ฒŒ ์ ์šฉํ•˜๋ ค๋ฉด [`~datasets.Dataset.map`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```py >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ``` ### ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-text-data]] ์ด๋ฏธ์ง€์— OCR์„ ์ ์šฉํ–ˆ์œผ๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ…์ŠคํŠธ ๋ถ€๋ถ„์„ ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ฝ”๋”ฉ์—๋Š” ์ด์ „ ๋‹จ๊ณ„์—์„œ ๊ฐ€์ ธ์˜จ ๋‹จ์–ด์™€ ๋ฐ•์Šค๋ฅผ ํ† ํฐ ์ˆ˜์ค€์˜ `input_ids`, `attention_mask`, `token_type_ids` ๋ฐ `bbox`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ํ”„๋กœ์„ธ์„œ์˜ `tokenizer`๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> tokenizer = processor.tokenizer ``` ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ์ „์ฒ˜๋ฆฌ ์™ธ์—๋„ ๋ชจ๋ธ์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ `xxxForQuestionAnswering` ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๋ ˆ์ด๋ธ”์€ `start_positions`์™€ `end_positions`๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ” ์ถ”๊ฐ€๋ฅผ ์œ„ํ•ด์„œ, ๋จผ์ € ๋” ํฐ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด ๋ฆฌ์ŠคํŠธ)์—์„œ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด๋กœ ๋ถ„ํ• ๋œ ๋‹ต๋ณ€)์„ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ํ—ฌํผ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” `words_list`์™€ `answer_list`, ์ด๋ ‡๊ฒŒ ๋‘ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์Šต๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ `words_list`๋ฅผ ๋ฐ˜๋ณตํ•˜์—ฌ `words_list`์˜ ํ˜„์žฌ ๋‹จ์–ด(words_list[i])๊ฐ€ `answer_list`์˜ ์ฒซ ๋ฒˆ์งธ ๋‹จ์–ด(answer_list[0])์™€ ๊ฐ™์€์ง€, ํ˜„์žฌ ๋‹จ์–ด์—์„œ ์‹œ์ž‘ํ•ด `answer_list`์™€ ๊ฐ™์€ ๊ธธ์ด๋งŒํผ์˜ `words_list`์˜ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ๊ฐ€ `answer_list`์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์ด ์กฐ๊ฑด์ด ์ฐธ์ด๋ผ๋ฉด ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ–ˆ์Œ์„ ์˜๋ฏธํ•˜๋ฉฐ, ํ•จ์ˆ˜๋Š” ์ผ์น˜ ํ•ญ๋ชฉ, ์‹œ์ž‘ ์ธ๋ฑ์Šค(idx) ๋ฐ ์ข…๋ฃŒ ์ธ๋ฑ์Šค(idx + len(answer_list) - 1)๋ฅผ ๊ธฐ๋กํ•ฉ๋‹ˆ๋‹ค. ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ๋‘ ๊ฐœ ์ด์ƒ ๋ฐœ๊ฒฌ๋˜๋ฉด ํ•จ์ˆ˜๋Š” ์ฒซ ๋ฒˆ์งธ ํ•ญ๋ชฉ๋งŒ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ์—†๋‹ค๋ฉด ํ•จ์ˆ˜๋Š” (`None`, 0, 0)์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def subfinder(words_list, answer_list): ... matches = [] ... start_indices = [] ... end_indices = [] ... for idx, i in enumerate(range(len(words_list))): ... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: ... matches.append(answer_list) ... start_indices.append(idx) ... end_indices.append(idx + len(answer_list) - 1) ... if matches: ... return matches[0], start_indices[0], end_indices[0] ... else: ... return None, 0, 0 ``` ์ด ํ•จ์ˆ˜๊ฐ€ ์–ด๋–ป๊ฒŒ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset_with_ocr["train"][1] >>> words = [word.lower() for word in example["words"]] >>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) >>> print("Question: ", example["question"]) >>> print("Words:", words) >>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end) Question: Who is in cc in this letter? Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 
'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', 'ยซshort', 'cigarette,', 'tobacco', 'section', '30', 'mm.', 'ยซextremely', 'fast', 'buming', 'cigarette.', 'ยซnovel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', 'ยซmore', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: T.F. Riehl start_index 17 end_index 18 ``` ํ•œํŽธ, ์œ„ ์˜ˆ์ œ๊ฐ€ ์ธ์ฝ”๋”ฉ๋˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```py >>> encoding = tokenizer(example["question"], example["words"], example["boxes"]) >>> tokenizer.decode(encoding["input_ids"]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ... ``` ์ด์ œ ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. * `token_type_ids`๋Š” ์–ด๋–ค ํ† ํฐ์ด ์งˆ๋ฌธ์— ์†ํ•˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ์–ด๋–ค ํ† ํฐ์ด ๋ฌธ์„œ์˜ ๋‹จ์–ด์— ํฌํ•จ๋˜๋Š”์ง€๋ฅผ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. * `tokenizer.cls_token_id` ์ž…๋ ฅ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์žˆ๋Š” ํŠน์ˆ˜ ํ† ํฐ์„ ์ฐพ๋Š” ๋ฐ ๋„์›€์„ ์ค๋‹ˆ๋‹ค. * `word_ids`๋Š” ์›๋ณธ `words`์—์„œ ์ฐพ์€ ๋‹ต๋ณ€์„ ์ „์ฒด ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์˜ ๋™์ผํ•œ ๋‹ต๊ณผ ์ผ์น˜์‹œํ‚ค๊ณ  ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ๋‹ต๋ณ€์˜ ์‹œ์ž‘/๋ ์œ„์น˜๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์œ„ ๋‚ด์šฉ๋“ค์„ ์—ผ๋‘์— ๋‘๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def encode_dataset(examples, max_length=512): ... questions = examples["question"] ... words = examples["words"] ... boxes = examples["boxes"] ... answers = examples["answer"] ... # ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๊ณ  start_positions์™€ end_positions๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค ... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True) ... start_positions = [] ... end_positions = [] ... # ๋ฐฐ์น˜์˜ ์˜ˆ์ œ๋ฅผ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค ... for i in range(len(questions)): ... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id) ... # ์˜ˆ์ œ์˜ words์—์„œ ๋‹ต๋ณ€์˜ ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... words_example = [word.lower() for word in words[i]] ... answer = answers[i] ... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) ... if match: ... # ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ•˜๋ฉด, `token_type_ids`๋ฅผ ์‚ฌ์šฉํ•ด ์ธ์ฝ”๋”ฉ์—์„œ ๋‹จ์–ด๊ฐ€ ์‹œ์ž‘ํ•˜๋Š” ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... token_type_ids = encoding["token_type_ids"][i] ... token_start_index = 0 ... while token_type_ids[token_start_index] != 1: ... token_start_index += 1 ... token_end_index = len(encoding["input_ids"][i]) - 1 ... while token_type_ids[token_end_index] != 1: ... token_end_index -= 1 ... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1] ... 
start_position = cls_index ... end_position = cls_index ... # words์˜ ๋‹ต๋ณ€ ์œ„์น˜์™€ ์ผ์น˜ํ•  ๋•Œ๊นŒ์ง€ word_ids๋ฅผ ๋ฐ˜๋ณตํ•˜๊ณ  `token_start_index`๋ฅผ ๋Š˜๋ฆฝ๋‹ˆ๋‹ค ... # ์ผ์น˜ํ•˜๋ฉด `token_start_index`๋ฅผ ์ธ์ฝ”๋”ฉ์—์„œ ๋‹ต๋ณ€์˜ `start_position`์œผ๋กœ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค ... for id in word_ids: ... if id == word_idx_start: ... start_position = token_start_index ... else: ... token_start_index += 1 ... # ๋น„์Šทํ•˜๊ฒŒ, ๋์—์„œ ์‹œ์ž‘ํ•ด `word_ids`๋ฅผ ๋ฐ˜๋ณตํ•˜๋ฉฐ ๋‹ต๋ณ€์˜ `end_position`์„ ์ฐพ์Šต๋‹ˆ๋‹ค ... for id in word_ids[::-1]: ... if id == word_idx_end: ... end_position = token_end_index ... else: ... token_end_index -= 1 ... start_positions.append(start_position) ... end_positions.append(end_position) ... else: ... start_positions.append(cls_index) ... end_positions.append(cls_index) ... encoding["image"] = examples["image"] ... encoding["start_positions"] = start_positions ... encoding["end_positions"] = end_positions ... return encoding ``` ์ด์ œ ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๊ฐ€ ์žˆ์œผ๋‹ˆ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset = dataset_with_ocr["train"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names ... ) >>> encoded_test_dataset = dataset_with_ocr["test"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names ... ) ``` ์ธ์ฝ”๋”ฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŠน์„ฑ์ด ์–ด๋–ป๊ฒŒ ์ƒ๊ฒผ๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)} ``` ## ํ‰๊ฐ€ [[evaluation]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. [`Trainer`]๊ฐ€ ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต์€ ๋ณดํ†ต F1/exact match ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ง์ ‘ ๊ตฌํ˜„ํ•ด๋ณด๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, Hugging Face course์˜ [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. ## ํ›ˆ๋ จ [[train]] ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ์˜ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ฒ˜๋ฆฌํ–ˆ์œผ๋‹ˆ ์ด์ œ ๋‚˜๋งŒ์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: * ์ „์ฒ˜๋ฆฌ์—์„œ์˜ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`AutoModelForDocumentQuestionAnswering`]์œผ๋กœ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. * [`TrainingArguments`]๋กœ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. * ์˜ˆ์ œ๋ฅผ ๋ฐฐ์น˜ ์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” [`DefaultDataCollator`]๊ฐ€ ์ ๋‹นํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(Data collator)์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 
* [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForDocumentQuestionAnswering >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๊ณ , ์ ์ ˆํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์„ธ์š” (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ์ด ๊ฒฝ์šฐ `output_dir`์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ‘ธ์‹œํ•  ๋ ˆํฌ์ง€ํ† ๋ฆฌ์˜ ์ด๋ฆ„์ด ๋ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments >>> # ๋ณธ์ธ์˜ ๋ ˆํฌ์ง€ํ† ๋ฆฌ ID๋กœ ๋ฐ”๊พธ์„ธ์š” >>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... evaluation_strategy="steps", ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๊ฐ„๋‹จํ•œ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋ฅผ ์ •์˜ํ•˜์—ฌ ์˜ˆ์ œ๋ฅผ ํ•จ๊ป˜ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋“  ๊ฒƒ์„ ํ•œ ๊ณณ์— ๋ชจ์•„ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=encoded_train_dataset, ... eval_dataset=encoded_test_dataset, ... tokenizer=processor, ... ) >>> trainer.train() ``` ์ตœ์ข… ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด, ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.create_model_card() >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ์ด์ œ LayoutLMv2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`Pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset["test"][2] >>> question = example["query"]["en"] >>> image = example["image"] >>> print(question) >>> print(example["answers"]) 'Who is โ€˜presidingโ€™ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] ``` ๊ทธ ๋‹ค์Œ, ๋ชจ๋ธ๋กœ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€ + ์งˆ๋ฌธ ์กฐํ•ฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}] ``` ์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ์„ ํ†ตํ•ด ๊ฒฐ๊ณผ ๋˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ์€ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘์— ์žˆ๋Š”์ง€, ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์ด ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” `start_logits`์™€ `end_logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค (batch_size, sequence_length) ํ˜•ํƒœ๋ฅผ ๊ฐ–์Šต๋‹ˆ๋‹ค. 4. `start_logits`์™€ `end_logits`์˜ ๋งˆ์ง€๋ง‰ ์ฐจ์›์„ ์ตœ๋Œ€๋กœ ๋งŒ๋“œ๋Š” ๊ฐ’์„ ์ฐพ์•„ ์˜ˆ์ƒ `start_idx`์™€ `end_idx`๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. 5. ํ† ํฌ๋‚˜์ด์ €๋กœ ๋‹ต๋ณ€์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. 
```py >>> import torch >>> from transformers import AutoProcessor >>> from transformers import AutoModelForDocumentQuestionAnswering >>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> with torch.no_grad(): ... encoding = processor(image.convert("RGB"), question, return_tensors="pt") ... outputs = model(**encoding) ... start_logits = outputs.start_logits ... end_logits = outputs.end_logits ... predicted_start_idx = start_logits.argmax(-1).item() ... predicted_end_idx = end_logits.argmax(-1).item() >>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller' ```
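
์œ„ ๋‹จ๊ณ„๋ฅผ ์—ฌ๋Ÿฌ ์˜ˆ์ œ์— ๋ฐ˜๋ณตํ•ด์„œ ์ ์šฉํ•˜๋ ค๋ฉด ํ•˜๋‚˜์˜ ํ•จ์ˆ˜๋กœ ๋ฌถ์–ด๋‘๋Š” ๊ฒƒ๋„ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๋ฐ”๋กœ ์œ„์˜ ์ฝ”๋“œ๋ฅผ ๊ทธ๋Œ€๋กœ ๊ฐ์‹ผ ์ตœ์†Œํ•œ์˜ ์Šค์ผ€์น˜์ด๋ฉฐ, `answer_question`์ด๋ผ๋Š” ํ•จ์ˆ˜ ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
>>> def answer_question(image, question, model, processor):
...     # ์ด๋ฏธ์ง€ + ์งˆ๋ฌธ ์กฐํ•ฉ์„ ๋ชจ๋ธ ์ž…๋ ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค
...     encoding = processor(image.convert("RGB"), question, return_tensors="pt")
...     with torch.no_grad():
...         outputs = model(**encoding)
...     # start/end logits์—์„œ ๊ฐ€์žฅ ํ™•๋ฅ ์ด ๋†’์€ ํ† ํฐ ์œ„์น˜๋ฅผ ๊ณ ๋ฆ…๋‹ˆ๋‹ค
...     start_idx = outputs.start_logits.argmax(-1).item()
...     end_idx = outputs.end_logits.argmax(-1).item()
...     # ์˜ˆ์ธก ๊ตฌ๊ฐ„์˜ ํ† ํฐ์„ ๋””์ฝ”๋”ฉํ•ด ๋‹ต๋ณ€ ๋ฌธ์ž์—ด์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค
...     return processor.tokenizer.decode(encoding.input_ids.squeeze()[start_idx : end_idx + 1])

>>> answer_question(image, question, model, processor)
'lee a. waller'
```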
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/asr.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] [[open-in-colab]] <Youtube id="TksaY_FDgnk"/> ์ž๋™ ์Œ์„ฑ ์ธ์‹(Automatic Speech Recognition, ASR)์€ ์Œ์„ฑ ์‹ ํ˜ธ๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์Œ์„ฑ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ํ…์ŠคํŠธ ์ถœ๋ ฅ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. Siri์™€ Alexa์™€ ๊ฐ™์€ ๊ฐ€์ƒ ์–ด์‹œ์Šคํ„ดํŠธ๋Š” ASR ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ์ƒ์ ์œผ๋กœ ์‚ฌ์šฉ์ž๋ฅผ ๋•๊ณ  ์žˆ์œผ๋ฉฐ, ํšŒ์˜ ์ค‘ ๋ผ์ด๋ธŒ ์บก์…˜ ๋ฐ ๋ฉ”๋ชจ ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์‚ฌ์šฉ์ž ์นœํ™”์  ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ๋„ ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate jiwer ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-minds-14-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ถ„์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ์‹œ๊ฐ„์„ ๋“ค์ด๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]") ``` [`~Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> minds = minds.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 16 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 4 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id`์™€ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ, ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio`์™€ `transcription`์— ์ดˆ์ ์„ ๋งž์ถœ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ๋‹ค์‹œ ํ•œ๋ฒˆ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> minds["train"][0] {'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414, 0.00024414, 0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 8000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `array(๋ฐฐ์—ด)` - `transcription`: ๋ชฉํ‘œ ํ…์ŠคํŠธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ Wav2Vec2 ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base") ``` MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋Š” 8000kHz์ด๋ฏ€๋กœ([๋ฐ์ดํ„ฐ ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธ), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16000kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) >>> minds["train"][0] {'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ..., 2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 16000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ์œ„์˜ 'transcription'์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ํ…์ŠคํŠธ๋Š” ๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ์„ž์—ฌ ์žˆ์Šต๋‹ˆ๋‹ค. 
Wav2Vec2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋Œ€๋ฌธ์ž ๋ฌธ์ž์— ๋Œ€ํ•ด์„œ๋งŒ ํ›ˆ๋ จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ…์ŠคํŠธ๊ฐ€ ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> def uppercase(example): ... return {"transcription": example["transcription"].upper()} >>> minds = minds.map(uppercase) ``` ์ด์ œ ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์—์„œ `input_values`๋ฅผ ์ถ”์ถœํ•˜๊ณ  ํ”„๋กœ์„ธ์„œ๋กœ `transcription` ์—ด์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def prepare_dataset(batch): ... audio = batch["audio"] ... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"]) ... batch["input_length"] = len(batch["input_values"][0]) ... return batch ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `num_proc` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4) ``` ๐Ÿค— Transformers์—๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ”์„ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์š”์†Œ์˜ ๊ธธ์ด์— ๋™์ ์œผ๋กœ ํŒจ๋”ฉํ•˜์—ฌ ๊ธธ์ด๋ฅผ ๊ท ์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. `tokenizer` ํ•จ์ˆ˜์—์„œ `padding=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๋™์  ํŒจ๋”ฉ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ ์ด ํŠน์ • ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” `input_values`์™€ `labels`์— ๋Œ€ํ•ด ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> from dataclasses import dataclass, field >>> from typing import Any, Dict, List, Optional, Union >>> @dataclass ... class DataCollatorCTCWithPadding: ... processor: AutoProcessor ... padding: Union[bool, str] = "longest" ... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: ... # ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค ... # ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค ... input_features = [{"input_values": feature["input_values"][0]} for feature in features] ... label_features = [{"input_ids": feature["labels"]} for feature in features] ... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt") ... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt") ... # ํŒจ๋”ฉ์— ๋Œ€ํ•ด ์†์‹ค์„ ์ ์šฉํ•˜์ง€ ์•Š๋„๋ก -100์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค ... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) ... batch["labels"] = labels ... return batch ``` ์ด์ œ `DataCollatorForCTCWithPadding`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest") ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ์ž‘์—…์—์„œ๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate, WER)](https://huggingface.co/spaces/evaluate-metric/wer) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> wer = evaluate.load("wer") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ WER์„ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(pred): ... pred_logits = pred.predictions ... pred_ids = np.argmax(pred_logits, axis=-1) ... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id ... pred_str = processor.batch_decode(pred_ids) ... label_str = processor.batch_decode(pred.label_ids, group_tokens=False) ... wer = wer.compute(predictions=pred_str, references=label_str) ... return {"wer": wer} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCTC`]๋กœ Wav2Vec2๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. `ctc_loss_reduction` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ CTC ์†์‹ค์— ์ ์šฉํ•  ์ถ•์†Œ(reduction) ๋ฐฉ๋ฒ•์„ ์ง€์ •ํ•˜์„ธ์š”. ๊ธฐ๋ณธ๊ฐ’์ธ ํ•ฉ๊ณ„ ๋Œ€์‹  ํ‰๊ท ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋” ์ข‹์€ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCTC, TrainingArguments, Trainer >>> model = AutoModelForCTC.from_pretrained( ... "facebook/wav2vec2-base", ... ctc_loss_reduction="mean", ... pad_token_id=processor.tokenizer.pad_token_id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ๋ชจ๋ธ์„ ์ €์žฅํ•  ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). [`Trainer`]๋Š” ๊ฐ ์—ํญ๋งˆ๋‹ค WER์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_asr_mind_model", ... per_device_train_batch_size=8, ... gradient_accumulation_steps=2, ... learning_rate=1e-5, ... warmup_steps=500, ... max_steps=2000, ... gradient_checkpointing=True, ... fp16=True, ... group_by_length=True, ... evaluation_strategy="steps", ... per_device_eval_batch_size=8, ... save_steps=1000, ... eval_steps=1000, ... logging_steps=25, ... load_best_model_at_end=True, ... metric_for_best_model="wer", ... greater_is_better=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=processor.feature_extractor, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... 
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋‘๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <Tip> ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ์˜์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-wav2vec2-english)์™€ ๋‹ค๊ตญ์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ๋น„์œจ์„ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์— ๋งž๊ฒŒ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features["audio"].sampling_rate >>> audio_file = dataset[0]["audio"]["path"] ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model") >>> transcriber(audio_file) {'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'} ``` <Tip> ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜๋œ ๊ฒฐ๊ณผ๊ฐ€ ๊ฝค ๊ดœ์ฐฎ์ง€๋งŒ ๋” ์ข‹์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋” ๋งŽ์€ ์˜ˆ์ œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”! </Tip> `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์žฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์˜ค๋””์˜ค ํŒŒ์ผ๊ณผ ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  PyTorch ํ…์„œ๋กœ `input`์„ ๋ฐ˜ํ™˜ํ•  ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForCTC >>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์˜ `input_ids`๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ธก๋œ `input_ids`๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> import torch >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) >>> transcription ['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'] ``` </pt> </frameworkcontent>
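๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์ด ์–ผ๋งˆ๋‚˜ ์ž˜ ๋™์ž‘ํ•˜๋Š”์ง€ ๊ฐ์„ ์žก๊ณ  ์‹ถ๋‹ค๋ฉด, ์œ„์—์„œ ์–ป์€ ์˜ˆ์ธก ๊ฒฐ๊ณผ๋ฅผ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ฐธ์กฐ ์ „์‚ฌ(`transcription`) ์—ด๊ณผ WER๋กœ ๋น„๊ตํ•ด ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์ƒ˜ํ”Œ ํ•˜๋‚˜์— ๋Œ€ํ•ด์„œ๋งŒ ๊ณ„์‚ฐํ•ด ๋ณด๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ด๋ฉฐ, ๋ชจ๋ธ์ด ๋Œ€๋ฌธ์ž๋กœ ํ•™์Šต๋˜์—ˆ์œผ๋ฏ€๋กœ ์ฐธ์กฐ ๋ฌธ์žฅ๋„ ๋Œ€๋ฌธ์ž๋กœ ๋งž์ถฅ๋‹ˆ๋‹ค:

```py
>>> import evaluate

>>> wer = evaluate.load("wer")
>>> # ์ถ”๋ก ์— ์‚ฌ์šฉํ•œ ์ฒซ ๋ฒˆ์งธ ์ƒ˜ํ”Œ์˜ ์ฐธ์กฐ ์ „์‚ฌ๋ฅผ ๋Œ€๋ฌธ์ž๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค
>>> reference = dataset[0]["transcription"].upper()
>>> wer.compute(predictions=[transcription[0]], references=[reference])
```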
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/zero_shot_object_detection.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€[[zeroshot-object-detection]] [[open-in-colab]] ์ผ๋ฐ˜์ ์œผ๋กœ [๊ฐ์ฒด ํƒ์ง€](object_detection)์— ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํ•™์Šต ๋ฐ์ดํ„ฐ์— ์กด์žฌํ•˜๋Š” ํด๋ž˜์Šค(๋ ˆ์ด๋ธ”)๋งŒ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ํ•œ๊ณ„์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋Š” [OWL-ViT](../model_doc/owlvit) ๋ชจ๋ธ๋กœ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€๊ฐ€ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. OWL-ViT๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open-vocabulary) ๊ฐ์ฒด ํƒ์ง€๊ธฐ์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜์ง€ ์•Š๊ณ  ์ž์œ  ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์€ ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ ํ‘œํ˜„์„ ํ™œ์šฉํ•ด ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€(open-vocabulary detection)๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. [CLIP](../model_doc/clip) ๋ชจ๋ธ์— ๊ฒฝ๋Ÿ‰ํ™”(lightweight)๋œ ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™”(localization) ํ—ค๋“œ๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€๋Š” CLIP์˜ ํ…์ŠคํŠธ ์ธ์ฝ”๋”๋กœ free-text ์ฟผ๋ฆฌ๋ฅผ ์ž„๋ฒ ๋”ฉํ•˜๊ณ , ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™” ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ํ…์ŠคํŠธ ์„ค๋ช…์„ ์—ฐ๊ฒฐํ•˜๋ฉด ViT๊ฐ€ ์ด๋ฏธ์ง€ ํŒจ์น˜(image patches)๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์˜ ์ €์ž๋“ค์€ CLIP ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ•™์Šต(scratch learning)ํ•œ ํ›„์—, bipartite matching loss๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‘œ์ค€ ๊ฐ์ฒด ์ธ์‹ ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ OWL-ViT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์€ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ์‚ฌ์ „ ํ•™์Šต ์—†์ด๋„ ํ…์ŠคํŠธ ์„ค๋ช…์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ๋Š” OWL-ViT ๋ชจ๋ธ์˜ ์‚ฌ์šฉ๋ฒ•์„ ๋‹ค๋ฃฐ ๊ฒƒ์ž…๋‹ˆ๋‹ค: - ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€ - ์ผ๊ด„ ๊ฐ์ฒด ํƒ์ง€ - ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-object-detection-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€์šฉ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python >>> from transformers import pipeline >>> checkpoint = "google/owlvit-base-patch32" >>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection") ``` ๋‹ค์Œ์œผ๋กœ, ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”. 
์—ฌ๊ธฐ์„œ๋Š” [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€์ธ ์šฐ์ฃผ๋น„ํ–‰์‚ฌ ์—์ผ๋ฆฐ ์ฝœ๋ฆฐ์Šค(Eileen Collins) ์‚ฌ์ง„์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> import skimage >>> import numpy as np >>> from PIL import Image >>> image = skimage.data.astronaut() >>> image = Image.fromarray(np.uint8(image)).convert("RGB") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์„ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. candidate_labels๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰(query)ํ•˜๋ ค๋Š” ๋ชจ๋“  ํ•ญ๋ชฉ์— ๋Œ€ํ•œ ํ…์ŠคํŠธ ์„ค๋ช…๋„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> predictions = detector( ... image, ... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"], ... ) >>> predictions [{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}}, {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}}, {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}}, {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}}, {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}}, {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}] ``` ์ด์ œ ์˜ˆ์ธก๊ฐ’์„ ์‹œ๊ฐํ™”ํ•ด๋ด…์‹œ๋‹ค: ```py >>> from PIL import ImageDraw >>> draw = ImageDraw.Draw(image) >>> for prediction in predictions: ... box = prediction["box"] ... label = prediction["label"] ... score = prediction["score"] ... xmin, ymin, xmax, ymax = box.values() ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/> </div> ## ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€[[textprompted-zeroshot-object-detection-by-hand]] ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•ด ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์ด์ œ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?other=owlvit)์—์„œ ๊ด€๋ จ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. 
์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import requests >>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640" >>> im = Image.open(requests.get(url, stream=True).raw) >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” [`CLIPTokenizer`]๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> text_queries = ["hat", "book", "sunglasses", "camera"] >>> inputs = processor(text=text_queries, images=im, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌ ๋ฐ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ๋ชจ๋ธ์— ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅํ•˜๊ธฐ ์ „์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์—, [`~OwlViTImageProcessor.post_process_object_detection`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ์˜ˆ์ธก๊ฐ’์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค(bounding box)๊ฐ€ ์›๋ณธ ์ด๋ฏธ์ง€์˜ ์ขŒํ‘œ์™€ ์ƒ๋Œ€์ ์œผ๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) ... target_sizes = torch.tensor([im.size[::-1]]) ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(im) >>> scores = results["scores"].tolist() >>> labels = results["labels"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white") >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ผ๊ด„ ์ฒ˜๋ฆฌ[[batch-processing]] ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์—์„œ ์„œ๋กœ ๋‹ค๋ฅธ(๋˜๋Š” ๋™์ผํ•œ) ๊ฐ์ฒด๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๊ด„ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋Š” ์ด์ค‘ ๋ฆฌ์ŠคํŠธ๋กœ, ์ด๋ฏธ์ง€๋Š” PIL ์ด๋ฏธ์ง€, PyTorch ํ…์„œ, ๋˜๋Š” NumPy ๋ฐฐ์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋กœ ํ”„๋กœ์„ธ์„œ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> images = [image, im] >>> text_queries = [ ... ["human face", "rocket", "nasa badge", "star-spangled banner"], ... ["hat", "book", "sunglasses", "camera"], ... ] >>> inputs = processor(text=text_queries, images=images, return_tensors="pt") ``` ์ด์ „์—๋Š” ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ๋‹จ์ผ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ๋ฅผ ํ…์„œ๋กœ ์ „๋‹ฌํ–ˆ์ง€๋งŒ, ํŠœํ”Œ์„ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๊ณ , ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ํŠœํ”Œ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋‘ ์˜ˆ์ œ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์ƒ์„ฑํ•˜๊ณ , ๋‘ ๋ฒˆ์งธ ์ด๋ฏธ์ง€(`image_idx = 1`)๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model(**inputs) ... 
target_sizes = [x.size[::-1] for x in images] ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes) >>> image_idx = 1 >>> draw = ImageDraw.Draw(images[image_idx]) >>> scores = results[image_idx]["scores"].tolist() >>> labels = results[image_idx]["labels"].tolist() >>> boxes = results[image_idx]["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white") >>> images[image_idx] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€[[imageguided-object-detection]] ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ด์šฉํ•œ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ์™ธ์—๋„ OWL-ViT ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด ๋Œ€์ƒ ์ด๋ฏธ์ง€์—์„œ ์œ ์‚ฌํ•œ ๊ฐ์ฒด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ฟผ๋ฆฌ์™€ ๋‹ฌ๋ฆฌ ํ•˜๋‚˜์˜ ์˜ˆ์ œ ์ด๋ฏธ์ง€์—์„œ๋งŒ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์†ŒํŒŒ์— ๊ณ ์–‘์ด ๋‘ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋Œ€์ƒ ์ด๋ฏธ์ง€(target image)๋กœ, ๊ณ ์–‘์ด ํ•œ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image_target = Image.open(requests.get(url, stream=True).raw) >>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg" >>> query_image = Image.open(requests.get(query_url, stream=True).raw) ``` ๋‹ค์Œ ์ด๋ฏธ์ง€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> fig, ax = plt.subplots(1, 2) >>> ax[0].imshow(image_target) >>> ax[1].imshow(query_image) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/> </div> ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„์—์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ ๋Œ€์‹ ์— `query_images`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt") ``` ์˜ˆ์ธก์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๋Š” ๋Œ€์‹  [`~OwlViTForObjectDetection.image_guided_detection`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์ด ์—†๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์ด์ „๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด์ „๊ณผ ๋™์ผํ•˜๊ฒŒ ์ด๋ฏธ์ง€๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model.image_guided_detection(**inputs) ... target_sizes = torch.tensor([image_target.size[::-1]]) ... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(image_target) >>> scores = results["scores"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4) >>> image_target ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/> </div> OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์•„๋ž˜ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: <iframe src="https://adirik-owl-vit.hf.space" frameborder="0" width="850" height="450" ></iframe>
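ํƒ์ง€๋œ ๋ฐ•์Šค๊ฐ€ ๋„ˆ๋ฌด ๋งŽ๊ฑฐ๋‚˜ ์ ๋‹ค๋ฉด ํ›„์ฒ˜๋ฆฌ ๋‹จ๊ณ„์˜ `threshold` ๊ฐ’์„ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜, ํ›„์ฒ˜๋ฆฌ๋œ ๊ฒฐ๊ณผ๋ฅผ ์ ์ˆ˜ ๊ธฐ์ค€์œผ๋กœ ์ง์ ‘ ๊ฑธ๋Ÿฌ๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์˜ ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ํƒ์ง€ ๊ฒฐ๊ณผ(`results`)์—์„œ ์ ์ˆ˜๊ฐ€ 0.5๋ฅผ ๋„˜๋Š” ๋ฐ•์Šค๋งŒ ๋‚จ๊ธฐ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค(0.5๋ผ๋Š” ๊ธฐ์ค€๊ฐ’์€ ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค):

```py
>>> # ์ ์ˆ˜๊ฐ€ 0.5๋ณด๋‹ค ํฐ ๋ฐ•์Šค๋งŒ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค
>>> keep = results["scores"] > 0.5
>>> filtered_boxes = results["boxes"][keep]
>>> filtered_scores = results["scores"][keep]
>>> print(f"{keep.sum().item()} / {len(results['scores'])} boxes kept")
```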
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/token_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] [[open-in-colab]] <Youtube id="wVHdVlPScxA"/> ํ† ํฐ ๋ถ„๋ฅ˜๋Š” ๋ฌธ์žฅ์˜ ๊ฐœ๋ณ„ ํ† ํฐ์— ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—… ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)์ž…๋‹ˆ๋‹ค. ๊ฐœ์ฒด๋ช… ์ธ์‹์€ ๋ฌธ์žฅ์—์„œ ์‚ฌ๋žŒ, ์œ„์น˜ ๋˜๋Š” ์กฐ์ง๊ณผ ๊ฐ™์€ ๊ฐ ๊ฐœ์ฒด์˜ ๋ ˆ์ด๋ธ”์„ ์ฐพ์œผ๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [WNUT 17](https://huggingface.co/datasets/wnut_17) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์ƒˆ๋กœ์šด ๊ฐœ์ฒด๋ฅผ ํƒ์ง€ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate seqeval ``` Hugging Face ๊ณ„์ 
•์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-wnut-17-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> wnut = load_dataset("wnut_17") ``` ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> wnut["train"][0] {'id': '0', 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0], 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'] } ``` `ner_tags`์˜ ๊ฐ ์ˆซ์ž๋Š” ๊ฐœ์ฒด๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ˆซ์ž๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๊ฐœ์ฒด๊ฐ€ ๋ฌด์—‡์ธ์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> label_list = wnut["train"].features[f"ner_tags"].feature.names >>> label_list [ "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work", "B-group", "I-group", "B-location", "I-location", "B-person", "I-person", "B-product", "I-product", ] ``` ๊ฐ `ner_tag`์˜ ์•ž์— ๋ถ™์€ ๋ฌธ์ž๋Š” ๊ฐœ์ฒด์˜ ํ† ํฐ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค: - `B-`๋Š” ๊ฐœ์ฒด์˜ ์‹œ์ž‘์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. - `I-`๋Š” ํ† ํฐ์ด ๋™์ผํ•œ ๊ฐœ์ฒด ๋‚ด๋ถ€์— ํฌํ•จ๋˜์–ด ์žˆ์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค(์˜ˆ๋ฅผ ๋“ค์–ด `State` ํ† ํฐ์€ `Empire State Building`์™€ ๊ฐ™์€ ๊ฐœ์ฒด์˜ ์ผ๋ถ€์ž…๋‹ˆ๋‹ค). - `0`๋Š” ํ† ํฐ์ด ์–ด๋–ค ๊ฐœ์ฒด์—๋„ ํ•ด๋‹นํ•˜์ง€ ์•Š์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="iY2AZYdZAr0"/> ๋‹ค์Œ์œผ๋กœ `tokens` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` ์œ„์˜ ์˜ˆ์ œ `tokens` ํ•„๋“œ๋ฅผ ๋ณด๋ฉด ์ž…๋ ฅ์ด ์ด๋ฏธ ํ† ํฐํ™”๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‹ค์ œ๋กœ ์ž…๋ ฅ์€ ์•„์ง ํ† ํฐํ™”๋˜์ง€ ์•Š์•˜์œผ๋ฏ€๋กœ ๋‹จ์–ด๋ฅผ ํ•˜์œ„ ๋‹จ์–ด๋กœ ํ† ํฐํ™”ํ•˜๊ธฐ ์œ„ํ•ด `is_split_into_words=True`๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ๋กœ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> example = wnut["train"][0] >>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True) >>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"]) >>> tokens ['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]'] ``` ๊ทธ๋Ÿฌ๋‚˜ ์ด๋กœ ์ธํ•ด `[CLS]`๊ณผ `[SEP]`๋ผ๋Š” ํŠน์ˆ˜ ํ† ํฐ์ด ์ถ”๊ฐ€๋˜๊ณ , ํ•˜์œ„ ๋‹จ์–ด ํ† ํฐํ™”๋กœ ์ธํ•ด ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๋ถˆ์ผ์น˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ๋ ˆ์ด๋ธ”์— ํ•ด๋‹นํ•˜๋Š” ๋‹จ์ผ ๋‹จ์–ด๋Š” ์ด์ œ ๋‘ ๊ฐœ์˜ ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„ํ• ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์žฌ์ •๋ ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋“  ํ† ํฐ์„ ํ•ด๋‹น ๋‹จ์–ด์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ˆ˜ ํ† ํฐ `[CLS]`์™€ `[SEP]`์— `-100` ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•˜์—ฌ, PyTorch ์†์‹ค ํ•จ์ˆ˜๊ฐ€ ํ•ด๋‹น ํ† ํฐ์„ ๋ฌด์‹œํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 3. ์ฃผ์–ด์ง„ ๋‹จ์–ด์˜ ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์—๋งŒ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ™์€ ๋‹จ์–ด์˜ ๋‹ค๋ฅธ ํ•˜์œ„ ํ† ํฐ์— `-100`์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. 
๋‹ค์Œ์€ ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ์žฌ์ •๋ ฌํ•˜๊ณ  DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def tokenize_and_align_labels(examples): ... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True) ... labels = [] ... for i, label in enumerate(examples[f"ner_tags"]): ... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. ... previous_word_idx = None ... label_ids = [] ... for word_idx in word_ids: # Set the special tokens to -100. ... if word_idx is None: ... label_ids.append(-100) ... elif word_idx != previous_word_idx: # Only label the first token of a given word. ... label_ids.append(label[word_idx]) ... else: ... label_ids.append(-100) ... previous_word_idx = word_idx ... labels.append(label_ids) ... tokenized_inputs["labels"] = labels ... return tokenized_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluation]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). Seqeval์€ ์‹ค์ œ๋กœ ์ •๋ฐ€๋„, ์žฌํ˜„๋ฅ , F1 ๋ฐ ์ •ํ™•๋„์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> seqeval = evaluate.load("seqeval") ``` ๋จผ์ € NER ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ, [`~evaluate.EvaluationModule.compute`]์— ์‹ค์ œ ์˜ˆ์ธก๊ณผ ์‹ค์ œ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> labels = [label_list[i] for i in example[f"ner_tags"]] >>> def compute_metrics(p): ... predictions, labels = p ... predictions = np.argmax(predictions, axis=2) ... true_predictions = [ ... [label_list[p] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... true_labels = [ ... [label_list[l] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... results = seqeval.compute(predictions=true_predictions, references=true_labels) ... return { ... "precision": results["overall_precision"], ... 
"recall": results["overall_recall"], ... "f1": results["overall_f1"], ... "accuracy": results["overall_accuracy"], ... } ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = { ... 0: "O", ... 1: "B-corporation", ... 2: "I-corporation", ... 3: "B-creative-work", ... 4: "I-creative-work", ... 5: "B-group", ... 6: "I-group", ... 7: "B-location", ... 8: "I-location", ... 9: "B-person", ... 10: "I-person", ... 11: "B-product", ... 12: "I-product", ... } >>> label2id = { ... "O": 0, ... "B-corporation": 1, ... "I-corporation": 2, ... "B-creative-work": 3, ... "I-creative-work": 4, ... "B-group": 5, ... "I-group": 6, ... "B-location": 7, ... "I-location": 8, ... "B-person": 9, ... "I-person": 10, ... "B-product": 11, ... "I-product": 12, ... } ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer >>> model = AutoModelForTokenClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” seqeval ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_wnut_model", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=2, ... weight_decay=0.01, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_wnut["train"], ... eval_dataset=tokenized_wnut["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! 
</Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 3 >>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs >>> optimizer, lr_schedule = create_optimizer( ... init_lr=2e-5, ... num_train_steps=num_train_steps, ... weight_decay_rate=0.01, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSequenceClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_wnut["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_wnut["validation"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ seqeval ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ , ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_wnut_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ํ† ํฐ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco." ``` ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ NER์˜ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”: ```py >>> from transformers import pipeline >>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model") >>> classifier(text) [{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10}, {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16}, {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25}, {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83}, {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predictions = torch.argmax(logits, dim=2) >>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </tf> </frameworkcontent>
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/zero_shot_image_classification.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[zeroshot-image-classification]] [[open-in-colab]] ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์˜ ์˜ˆ์‹œ๊ฐ€ ํฌํ•จ๋œ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ๋‹ฌ๋ฆฐ ํŠน์ • ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ ๋ชจ๋ธ ํ•™์Šต์ด ํ•„์š”ํ•˜๋ฉฐ, ์ด ๋ชจ๋ธ์€ ํŠน์ • ์ด๋ฏธ์ง€์˜ ํŠน์ง•์„ ๋ ˆ์ด๋ธ”์— "๋งคํ•‘"ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด ์žˆ๋Š” ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š”, ๋ชจ๋ธ์„ "์žฌ๋ณด์ •"ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด์™€ ๋Œ€์กฐ์ ์œผ๋กœ, ์ œ๋กœ์ƒท ๋˜๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open vocabulary) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€๊ทœ๋ชจ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์™€ ํ•ด๋‹น ์„ค๋ช…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ(multimodal) ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ํฌํ•จํ•œ ๋งŽ์€ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ •๋ ฌ๋œ(aligned) ๋น„์ „ ์–ธ์–ด ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์œ ์—ฐํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์œผ๋กœ, ์ถ”๊ฐ€ ํ•™์Šต ๋ฐ์ดํ„ฐ ์—†์ด ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด๋‚˜ ํ•™์Šตํ•˜์ง€ ๋ชปํ•œ ์นดํ…Œ๊ณ ๋ฆฌ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ์ผ๋ฐ˜ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž๊ฐ€ ๋Œ€์ƒ ๊ฐœ์ฒด์— ๋Œ€ํ•œ ์ž์œ  ํ˜•์‹์˜ ํ…์ŠคํŠธ ์„ค๋ช…์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์ถ”๋ก  ์‹คํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-image-classification-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import pipeline >>> checkpoint = "openai/clip-vit-large-patch14" >>> classifier = pipeline(model=checkpoint, task="zero-shot-image-classification") ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„๋ฅ˜ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”.
```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์ธ `candidate_labels`๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `candidate_labels`๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> predictions = classifier(image, candidate_labels=["fox", "bear", "seagull", "owl"]) >>> predictions [{'score': 0.9996670484542847, 'label': 'owl'}, {'score': 0.000199399160919711, 'label': 'seagull'}, {'score': 7.392891711788252e-05, 'label': 'fox'}, {'score': 5.96074532950297e-05, 'label': 'bear'}] ``` ## ์ง์ ‘ ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ํ•˜๊ธฐ[[zeroshot-image-classification-by-hand]] ์ด์ œ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification >>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> candidate_labels = ["tree", "car", "bike", "cat"] >>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ , ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits_per_image[0] >>> probs = logits.softmax(dim=-1).numpy() >>> scores = probs.tolist() >>> result = [ ... {"score": score, "label": candidate_label} ... for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0]) ... ] >>> result [{'score': 0.998572, 'label': 'car'}, {'score': 0.0010570387, 'label': 'bike'}, {'score': 0.0003393686, 'label': 'tree'}, {'score': 3.1572064e-05, 'label': 'cat'}] ```
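CLIP ๊ณ„์—ด ๋ชจ๋ธ์€ ๋ ˆ์ด๋ธ”์„ "a photo of a ..."์™€ ๊ฐ™์€ ๋ฌธ์žฅ ํ˜•ํƒœ์˜ ํ”„๋กฌํ”„ํŠธ๋กœ ๊ฐ์‹ธ๋ฉด ์ •ํ™•๋„๊ฐ€ ์˜ฌ๋ผ๊ฐ€๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์€ ๊ฒƒ์œผ๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ์‚ฌ์šฉํ•œ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•ด ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์ ์šฉํ•ด ๋ณด๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค(ํ…œํ”Œ๋ฆฟ ๋ฌธ๊ตฌ๋Š” ์˜ˆ์‹œ๋กœ ๊ฐ€์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค):

```py
>>> # ํ›„๋ณด ๋ ˆ์ด๋ธ”์„ ๋ฌธ์žฅ ํ˜•ํƒœ์˜ ํ”„๋กฌํ”„ํŠธ๋กœ ๊ฐ์Œ‰๋‹ˆ๋‹ค
>>> prompted_labels = [f"a photo of a {label}" for label in candidate_labels]
>>> inputs = processor(images=image, text=prompted_labels, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> probs = outputs.logits_per_image[0].softmax(dim=-1)
>>> {label: round(prob.item(), 4) for label, prob in zip(candidate_labels, probs)}
```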
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/video_classification.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์˜์ƒ ๋ถ„๋ฅ˜ [[video-classification]] [[open-in-colab]] ์˜์ƒ ๋ถ„๋ฅ˜๋Š” ์˜์ƒ ์ „์ฒด์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ ์˜์ƒ์—๋Š” ํ•˜๋‚˜์˜ ํด๋ž˜์Šค๊ฐ€ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์˜์ƒ์„ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์•„ ์–ด๋Š ํด๋ž˜์Šค์— ์†ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์˜์ƒ์ด ์–ด๋–ค ๋‚ด์šฉ์ธ์ง€ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์‘์šฉ ์˜ˆ๋Š” ํ”ผํŠธ๋‹ˆ์Šค ์•ฑ์—์„œ ์œ ์šฉํ•œ ๋™์ž‘ / ์šด๋™ ์ธ์‹ ์„œ๋น„์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋˜ํ•œ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ์ด๋™ํ•  ๋•Œ ๋ณด์กฐํ•˜๋Š”๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ํ†ตํ•ด [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q pytorchvideo transformers evaluate ``` ์˜์ƒ์„ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [PyTorchVideo](https://pytorchvideo.org/)(์ดํ•˜ `pytorchvideo`)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## UCF101 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-ufc101-dataset]] [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ(subset)์„ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•™์Šตํ•˜๋Š”๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ฐ์ดํ„ฐ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์ด ๋‹ค์šด๋กœ๋“œ ๋˜๋ฉด, ์••์ถ•๋œ ํŒŒ์ผ์˜ ์••์ถ•์„ ํ•ด์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. 
```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` ์ •๋ ฌ๋œ ์˜์ƒ์˜ ๊ฒฝ๋กœ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` ๋™์ผํ•œ ๊ทธ๋ฃน/์žฅ๋ฉด์— ์†ํ•˜๋Š” ์˜์ƒ ํด๋ฆฝ์€ ํŒŒ์ผ ๊ฒฝ๋กœ์—์„œ `g`๋กœ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, `v_ApplyEyeMakeup_g07_c04.avi`์™€ `v_ApplyEyeMakeup_g07_c06.avi` ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘˜์€ ๊ฐ™์€ ๊ทธ๋ฃน์ž…๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ์„ ํ•  ๋•Œ, [๋ฐ์ดํ„ฐ ๋ˆ„์ถœ(data leakage)](https://www.kaggle.com/code/alexisbcook/data-leakage)์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ๋™์ผํ•œ ๊ทธ๋ฃน / ์žฅ๋ฉด์˜ ์˜์ƒ ํด๋ฆฝ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ํ•˜์œ„ ์ง‘ํ•ฉ์€ ์ด๋Ÿฌํ•œ ์ •๋ณด๋ฅผ ๊ณ ๋ คํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์กด์žฌํ•˜๋Š” ๋ผ๋ฒจ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ ๋„์›€์ด ๋  ๋”•์…”๋„ˆ๋ฆฌ(dictionary data type)๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. * `label2id`: ํด๋ž˜์Šค ์ด๋ฆ„์„ ์ •์ˆ˜์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. * `id2label`: ์ •์ˆ˜๋ฅผ ํด๋ž˜์Šค ์ด๋ฆ„์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ์ด 10๊ฐœ์˜ ๊ณ ์œ ํ•œ ํด๋ž˜์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ํด๋ž˜์Šค๋งˆ๋‹ค 30๊ฐœ์˜ ์˜์ƒ์ด ํ›ˆ๋ จ ์„ธํŠธ์— ์žˆ์Šต๋‹ˆ๋‹ค ## ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-a-model-to-fine-tune]] ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์™€ ์ฒดํฌํฌ์ธํŠธ์— ์—ฐ๊ด€๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ธ์ฝ”๋”์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ œ๊ณต๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋Š” ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... 
) ``` ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋™์•ˆ, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฝ๊ณ ๋ฅผ ๋งˆ์ฃผ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ์œ„ ๊ฒฝ๊ณ ๋Š” ์šฐ๋ฆฌ๊ฐ€ ์ผ๋ถ€ ๊ฐ€์ค‘์น˜(์˜ˆ: `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ)๋ฅผ ๋ฒ„๋ฆฌ๊ณ  ์ƒˆ๋กœ์šด `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ์„ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๋Š” ์ƒˆ๋กœ์šด ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ผ๊ณ  ๊ฒฝ๊ณ ๋ฅผ ๋ณด๋‚ด๋Š” ๊ฒƒ์€ ๋‹น์—ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด์ œ ์šฐ๋ฆฌ๋Š” ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. **์ฐธ๊ณ ** ์ด [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics)๋Š” ๋„๋ฉ”์ธ์ด ๋งŽ์ด ์ค‘์ฒฉ๋œ ์œ ์‚ฌํ•œ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ ์ฒดํฌํฌ์ธํŠธ์ด๋ฏ€๋กœ ์ด ์ž‘์—…์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `MCG-NJU/videomae-base-finetuned-kinetics` ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ[[prepare-the-datasets-for-training]] ์˜์ƒ ์ „์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด [PyTorchVideo ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://pytorchvideo.org/)๋ฅผ ํ™œ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ``` ํ•™์Šต ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๊ท ์ผํ•œ ์‹œ๊ฐ„ ์ƒ˜ํ”Œ๋ง(uniform temporal subsampling)', 'ํ”ฝ์…€ ์ •๊ทœํ™”(pixel normalization)', '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ(random cropping)' ๋ฐ '๋žœ๋ค ์ˆ˜ํ‰ ๋’ค์ง‘๊ธฐ(random horizontal flipping)'์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ'์™€ '๋žœ๋ค ๋’ค์ง‘๊ธฐ'๋ฅผ ์ œ์™ธํ•œ ๋™์ผํ•œ ๋ณ€ํ™˜ ์ฒด์ธ์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [PyTorchVideo ๊ณต์‹ ๋ฌธ์„œ](https://pytorchvideo.org)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์Œ ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์˜์ƒ ํ”„๋ ˆ์ž„ ํ”ฝ์…€์„ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ * ์˜์ƒ ํ”„๋ ˆ์ž„์ด ์กฐ์ •๋  ๊ณต๊ฐ„ ํ•ด์ƒ๋„ ๋จผ์ €, ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠนํ™”๋œ ์ „์ฒ˜๋ฆฌ(transform)๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ž์ฒด๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` ๊ฐ™์€ ๋ฐฉ์‹์˜ ์ž‘์—… ํ๋ฆ„์„ ๊ฒ€์ฆ๊ณผ ํ‰๊ฐ€ ์„ธํŠธ์—๋„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **์ฐธ๊ณ **: ์œ„์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŒŒ์ดํ”„๋ผ์ธ์€ [๊ณต์‹ ํŒŒ์ดํ† ์น˜ ์˜ˆ์ œ](https://pytorchvideo.org/docs/tutorial_classification#dataset)์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” UCF-101 ๋ฐ์ดํ„ฐ์…‹์— ๋งž๊ฒŒ [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚ด๋ถ€์ ์œผ๋กœ ์ด ํ•จ์ˆ˜๋Š” [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `LabeledVideoDataset` ํด๋ž˜์Šค๋Š” PyTorchVideo ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ชจ๋“  ์˜์ƒ ๊ด€๋ จ ์ž‘์—…์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorchVideo์—์„œ ๋ฏธ๋ฆฌ ์ œ๊ณตํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ์ด ํด๋ž˜์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ํ™•์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ์ž์„ธํ•œ ์‚ฌํ•ญ์ด ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด `data` API [๋ฌธ์„œ](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. 
๋˜ํ•œ ์œ„์˜ ์˜ˆ์‹œ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋ฅผ ๊ฐ–๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์˜์ƒ์˜ ๊ฐœ์ˆ˜๋ฅผ ์•Œ๊ธฐ ์œ„ํ•ด `num_videos` ์ธ์ˆ˜์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## ๋” ๋‚˜์€ ๋””๋ฒ„๊น…์„ ์œ„ํ•ด ์ „์ฒ˜๋ฆฌ ์˜์ƒ ์‹œ๊ฐํ™”ํ•˜๊ธฐ[[visualize-the-preprocessed-video-for-better-debugging]] ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-the-model]] ๐Ÿค— Transformers์˜ [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ๋ณด์„ธ์š”. `Trainer`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์„ค์ •๊ณผ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments)์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ชจ๋“  ์†์„ฑ์„ ํฌํ•จํ•˜๋ฉฐ, ํ›ˆ๋ จ ์ค‘ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•  ์ถœ๋ ฅ ํด๋” ์ด๋ฆ„์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๐Ÿค— Hub์˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ๋ชจ๋“  ์ •๋ณด๋ฅผ ๋™๊ธฐํ™”ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋Š” ๋”ฐ๋กœ ์„ค๋ช…ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ์—์„œ ์ค‘์š”ํ•œ ์ธ์ˆ˜๋Š” `remove_unused_columns=False` ์ž…๋‹ˆ๋‹ค. ์ด ์ธ์ž๋Š” ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๋ชจ๋“  ์†์„ฑ ์—ด(columns)์„ ์‚ญ์ œํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์€ ์ผ๋ฐ˜์ ์œผ๋กœ True์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ ์—ด์„ ์‚ญ์ œํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ด๋ฉฐ, ์ž…๋ ฅ์„ ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜๋กœ ํ’€๊ธฐ(unpack)๊ฐ€ ์‰ฌ์›Œ์ง€๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด ๊ฒฝ์šฐ์—๋Š” `pixel_values`(๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ํ•„์ˆ˜์ ์ธ ํ‚ค)๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ('video'๊ฐ€ ํŠนํžˆ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ remove_unused_columns์„ False๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... 
evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋กœ ๋ฐ˜ํ™˜๋˜๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” `__len__` ๋ฉ”์†Œ๋“œ๊ฐ€ ์ด์‹๋˜์–ด ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, `TrainingArguments`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ๋•Œ `max_steps`๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ์˜ˆ์ธก๊ฐ’์—์„œ ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•  ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ „์ฒ˜๋ฆฌ ์ž‘์—…์€ ์˜ˆ์ธก๋œ ๋กœ์ง“(logits)์— argmax ๊ฐ’์„ ์ทจํ•˜๋Š” ๊ฒƒ๋ฟ์ž…๋‹ˆ๋‹ค: ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **ํ‰๊ฐ€์— ๋Œ€ํ•œ ์ฐธ๊ณ ์‚ฌํ•ญ**: [VideoMAE ๋…ผ๋ฌธ](https://arxiv.org/abs/2203.12602)์—์„œ ์ €์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ‰๊ฐ€ ์ „๋žต์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ์˜์ƒ์—์„œ ์—ฌ๋Ÿฌ ํด๋ฆฝ์„ ์„ ํƒํ•˜๊ณ  ๊ทธ ํด๋ฆฝ์— ๋‹ค์–‘ํ•œ ํฌ๋กญ์„ ์ ์šฉํ•˜์—ฌ ์ง‘๊ณ„ ์ ์ˆ˜๋ฅผ ๋ณด๊ณ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ฐ„๋‹จํ•จ๊ณผ ๊ฐ„๊ฒฐํ•จ์„ ์œ„ํ•ด ํ•ด๋‹น ์ „๋žต์„ ๊ณ ๋ คํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์˜ˆ์ œ๋ฅผ ๋ฌถ์–ด์„œ ๋ฐฐ์น˜๋ฅผ ํ˜•์„ฑํ•˜๋Š” `collate_fn`์„ ์ •์˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ฐฐ์น˜๋Š” `pixel_values`์™€ `labels`๋ผ๋Š” 2๊ฐœ์˜ ํ‚ค๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... return {"pixel_values": pixel_values, "labels": labels} ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ๋ชจ๋“  ๊ฒƒ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ•จ๊ป˜ `Trainer`์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ด๋ฏธ ์ฒ˜๋ฆฌํ–ˆ๋Š”๋ฐ๋„ ๋ถˆ๊ตฌํ•˜๊ณ  `image_processor`๋ฅผ ํ† ํฌ๋‚˜์ด์ € ์ธ์ˆ˜๋กœ ๋„ฃ์€ ์ด์œ ๋Š” JSON์œผ๋กœ ์ €์žฅ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๊ตฌ์„ฑ ํŒŒ์ผ์ด Hub์˜ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œ๋˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•จ์ž…๋‹ˆ๋‹ค. `train` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> train_results = trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์„ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์—ฌ ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜์ƒ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”: ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline)์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์˜์ƒ ๋ถ„๋ฅ˜๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜์ƒ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` ๋ชจ๋ธ์— ์ž…๋ ฅ๊ฐ’์„ ๋„ฃ๊ณ  `logits`์„ ๋ฐ˜ํ™˜๋ฐ›์œผ์„ธ์š”: ``` >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits`์„ ๋””์ฝ”๋”ฉํ•˜๋ฉด, ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/monocular_depth_estimation.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •[[depth-estimation-pipeline]] ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ ํ•œ ์žฅ๋ฉด์˜ ๋‹จ์ผ ์ด๋ฏธ์ง€์—์„œ ์žฅ๋ฉด์˜ ๊นŠ์ด ์ •๋ณด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋‹จ์ผ ์นด๋ฉ”๋ผ ์‹œ์ ์˜ ์žฅ๋ฉด์— ์žˆ๋Š” ๋ฌผ์ฒด์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ 3D ์žฌ๊ตฌ์„ฑ, ์ฆ๊ฐ• ํ˜„์‹ค, ์ž์œจ ์ฃผํ–‰, ๋กœ๋ด‡ ๊ณตํ•™ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์—์„œ ์‘์šฉ๋ฉ๋‹ˆ๋‹ค. ์กฐ๋ช… ์กฐ๊ฑด, ๊ฐ€๋ ค์ง, ํ…์Šค์ฒ˜์™€ ๊ฐ™์€ ์š”์†Œ์˜ ์˜ํ–ฅ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋Š” ์žฅ๋ฉด ๋‚ด ๋ฌผ์ฒด์™€ ํ•ด๋‹น ๊นŠ์ด ์ •๋ณด ๊ฐ„์˜ ๋ณต์žกํ•œ ๊ด€๊ณ„๋ฅผ ๋ชจ๋ธ์ด ์ดํ•ดํ•ด์•ผ ํ•˜๋ฏ€๋กœ ๊นŒ๋‹ค๋กœ์šด ์ž‘์—…์ž…๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋‹ค๋ฃจ๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [DPT](../model_doc/dpt), [GLPN](../model_doc/glpn) <!--End of the generated tip--> </Tip> ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ[[depth-estimation-inference-by-hand]] ๊นŠ์ด ์ถ”์ •์„ ์ถ”๋ก ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ํ•ด๋‹น ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> checkpoint = "vinvino02/glpn-nyu" >>> depth_estimator = pipeline("depth-estimation", model=checkpoint) ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„์„ํ•  ์ด๋ฏธ์ง€๋ฅผ ํ•œ ์žฅ ์„ ํƒํ•˜์„ธ์š”: ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/> </div> ์ด๋ฏธ์ง€๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> predictions = depth_estimator(image) ``` ํŒŒ์ดํ”„๋ผ์ธ์€ ๋‘ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ๊ฐ€์ง€๋Š” ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `predicted_depth`๋กœ ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ๋ฏธํ„ฐ๋กœ ํ‘œํ˜„ํ•œ ๊ฐ’์„ ๊ฐ€์ง€๋Š” ํ…์„œ์ž…๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ๋Š” `depth`๋กœ ๊นŠ์ด ์ถ”์ • ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋Š” PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. 
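์˜ˆ๋ฅผ ๋“ค์–ด `predicted_depth` ํ…์„œ๋ฅผ ์ง์ ‘ ํ™•์ธํ•ด ๋Œ€๋žต์ ์ธ ๊นŠ์ด ๋ฒ”์œ„๋ฅผ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐ„๋‹จํ•œ ํ™•์ธ์šฉ ์˜ˆ์‹œ์ด๋ฉฐ, ํ…์„œ์˜ ํฌ๊ธฐ์™€ ๊ฐ’์€ ์‚ฌ์šฉํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ์™€ ์ž…๋ ฅ ์ด๋ฏธ์ง€์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค:

```py
>>> predicted_depth = predictions["predicted_depth"]
>>> predicted_depth.shape  # ํ…์„œ ํฌ๊ธฐ ํ™•์ธ
>>> predicted_depth.min(), predicted_depth.max()  # ์ตœ์†Œยท์ตœ๋Œ€ ๊นŠ์ด ๊ฐ’ ํ™•์ธ
```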
์ด์ œ ์‹œ๊ฐํ™”ํ•œ ๊ฒฐ๊ณผ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> predictions["depth"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div> ## ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ[[depth-estimation-inference-by-hand]] ์ด์ œ ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์ด์ „์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ๊ฒƒ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation >>> checkpoint = "vinvino02/glpn-nyu" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) >>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint) ``` ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•˜๋Š” `image_processor`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. `image_processor`๋Š” ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์ •๊ทœํ™” ๋“ฑ ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values ``` ์ค€๋น„ํ•œ ์ž…๋ ฅ์„ ๋ชจ๋ธ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(pixel_values) ... predicted_depth = outputs.predicted_depth ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> # ์›๋ณธ ์‚ฌ์ด์ฆˆ๋กœ ๋ณต์› >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... ).squeeze() >>> output = prediction.numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted) >>> depth ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div>
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/multiple_choice.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๊ฐ๊ด€์‹ ๋ฌธ์ œ[[multiple-choice]] [[open-in-colab]] ๊ฐ๊ด€์‹ ๊ณผ์ œ๋Š” ๋ฌธ๋งฅ๊ณผ ํ•จ๊ป˜ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต๋ณ€์ด ์ œ๊ณต๋˜๊ณ  ๋ชจ๋ธ์ด ์ •๋‹ต์„ ์„ ํƒํ•˜๋„๋ก ํ•™์Šต๋œ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์งˆ์˜์‘๋‹ต๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์ง„ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [SWAG](https://huggingface.co/datasets/swag) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ 'regular' ๊ตฌ์„ฑ์œผ๋กœ [BERT](https://huggingface.co/bert-base-uncased)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์—ฌ๋Ÿฌ ์˜ต์…˜๊ณผ ์ผ๋ถ€ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•Œ ๊ฐ€์žฅ ์ ํ•ฉํ•œ ๋‹ต์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. 
๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SWAG ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-swag-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SWAG ๋ฐ์ดํ„ฐ์…‹์˜ '์ผ๋ฐ˜' ๊ตฌ์„ฑ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> swag = load_dataset("swag", "regular") ``` ์ด์ œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> swag["train"][0] {'ending0': 'passes by walking down the street playing their instruments.', 'ending1': 'has heard approaching them.', 'ending2': "arrives and they're outside dancing and asleep.", 'ending3': 'turns the lead singer watches the performance.', 'fold-ind': '3416', 'gold-source': 'gold', 'label': 0, 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.', 'sent2': 'A drum line', 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line', 'video-id': 'anetv_jkn6uvmqwh4'} ``` ์—ฌ๊ธฐ์—๋Š” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ด์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค: - `sent1` ๋ฐ `sent2`: ์ด ํ•„๋“œ๋Š” ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ์‹œ์ž‘๋˜๋Š”์ง€ ๋ณด์—ฌ์ฃผ๋ฉฐ, ์ด ๋‘ ํ•„๋“œ๋ฅผ ํ•ฉ์น˜๋ฉด `์‹œ์ž‘ ๊ตฌ์ ˆ(startphrase)` ํ•„๋“œ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. - `์ข…๋ฃŒ ๊ตฌ์ ˆ(ending)`: ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ๋๋‚  ์ˆ˜ ์žˆ๋Š”์ง€์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ์ข…๋ฃŒ ๊ตฌ์ ˆ๋ฅผ ์ œ์‹œํ•˜์ง€๋งŒ ๊ทธ ์ค‘ ํ•˜๋‚˜๋งŒ ์ •๋‹ต์ž…๋‹ˆ๋‹ค. - `๋ ˆ์ด๋ธ”(label)`: ์˜ฌ๋ฐ”๋ฅธ ๋ฌธ์žฅ ์ข…๋ฃŒ ๊ตฌ์ ˆ์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌธ์žฅ์˜ ์‹œ์ž‘๊ณผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๊ตฌ์ ˆ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด BERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. `sent1` ํ•„๋“œ๋ฅผ ๋„ค ๊ฐœ ๋ณต์‚ฌํ•œ ๋‹ค์Œ ๊ฐ๊ฐ์„ `sent2`์™€ ๊ฒฐํ•ฉํ•˜์—ฌ ๋ฌธ์žฅ์ด ์‹œ์ž‘๋˜๋Š” ๋ฐฉ์‹์„ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. 2. `sent2`๋ฅผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋ฌธ์žฅ ๊ตฌ์ ˆ ๊ฐ๊ฐ๊ณผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. 3. ์ด ๋‘ ๋ชฉ๋ก์„ ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ‰ํƒ„ํ™”(flatten)ํ•˜๊ณ , ๊ฐ ์˜ˆ์ œ์— ํ•ด๋‹นํ•˜๋Š” `input_ids`, `attention_mask` ๋ฐ `labels` ํ•„๋“œ๋ฅผ ๊ฐ–๋„๋ก ๋‹ค์ฐจ์›ํ™”(unflatten) ํ•ฉ๋‹ˆ๋‹ค. ```py >>> ending_names = ["ending0", "ending1", "ending2", "ending3"] >>> def preprocess_function(examples): ... first_sentences = [[context] * 4 for context in examples["sent1"]] ... question_headers = examples["sent2"] ... second_sentences = [ ... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers) ... ] ... first_sentences = sum(first_sentences, []) ... second_sentences = sum(second_sentences, []) ... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True) ... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()} ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py tokenized_swag = swag.map(preprocess_function, batched=True) ``` ๐Ÿค— Transformers์—๋Š” ๊ฐ๊ด€์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹  ๋ฐฐ์น˜ ์ค‘ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. `DataCollatorForMultipleChoice`๋Š” ๋ชจ๋“  ๋ชจ๋ธ ์ž…๋ ฅ์„ ํ‰ํƒ„ํ™”ํ•˜๊ณ  ํŒจ๋”ฉ์„ ์ ์šฉํ•˜๋ฉฐ ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค์ฐจ์›ํ™”ํ•ฉ๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import torch >>> @dataclass ... class DataCollatorForMultipleChoice: ... """ ... Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="pt", ... ) ... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()} ... batch["labels"] = torch.tensor(labels, dtype=torch.int64) ... return batch ``` </pt> <tf> ```py >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import tensorflow as tf >>> @dataclass ... class DataCollatorForMultipleChoice: ... """ ... Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="tf", ... ) ... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()} ... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64) ... return batch ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค—[Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค(๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋Œ์•„๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ ํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer >>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_swag_model", ... evaluation_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_swag["train"], ... eval_dataset=tokenized_swag["validation"], ... tokenizer=tokenizer, ... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer), ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! 
</Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์ตœ์ ํ™” ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 2 >>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs >>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps) ``` ๊ทธ๋ฆฌ๊ณ  [`TFAutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer) >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_swag["train"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_swag["validation"], ... shuffle=False, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ์ž‘์—…์€ ๋ชจ๋‘ [Keras ์ฝœ๋ฐฑ](../main_classes/keras_callbacks)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `compute_metrics`ํ•จ์ˆ˜๋ฅผ [`~transformers.KerasMetricCallback`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋ฆฌ๊ณ  ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๊ฐ๊ด€์‹ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ๋Š” ์•„๋ž˜ ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). </Tip> ## ์ถ”๋ก  ํ•˜๊ธฐ[[inference]] ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ํ…์ŠคํŠธ์™€ ๋‘ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต์•ˆ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> prompt = "France has a bread law, Le Dรฉcret Pain, with strict rules on what is allowed in a traditional baguette." 
>>> candidate1 = "The law does not apply to croissants and brioche." >>> candidate2 = "The law applies to baguettes." ``` <frameworkcontent> <pt> ๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต๋ณ€ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `labels`์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True) >>> labels = torch.tensor(0).unsqueeze(0) ``` ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMultipleChoice >>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") >>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels) >>> logits = outputs.logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> predicted_class = logits.argmax().item() >>> predicted_class '0' ``` </pt> <tf> ๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต์•ˆ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ ํ…์„œํ”Œ๋กœ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") >>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") >>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()} >>> outputs = model(inputs) >>> logits = outputs.logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0]) >>> predicted_class '0' ``` </tf> </frameworkcontent>
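์˜ˆ์ธก๋œ ์ธ๋ฑ์Šค๋Š” ์ž…๋ ฅํ•œ ํ›„๋ณด ๋‹ต์•ˆ์˜ ์ˆœ์„œ์™€ ๋Œ€์‘ํ•˜๋ฏ€๋กœ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ฐ„๋‹จํžˆ ํ›„๋ณด ๋ฌธ์žฅ์œผ๋กœ ๋˜๋Œ๋ ค ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> candidates = [candidate1, candidate2]
>>> print(candidates[predicted_class])
The law does not apply to croissants and brioche.
```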
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/image_captioning.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ์บก์…”๋‹[[image-captioning]] [[open-in-colab]] ์ด๋ฏธ์ง€ ์บก์…”๋‹(Image captioning)์€ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์บก์…˜์„ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ๋‹ค์–‘ํ•œ ์ƒํ™ฉ์„ ํƒ์ƒ‰ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ๋„๋ก ์‹œ๊ฐ ์žฅ์• ์ธ์„ ๋ณด์กฐํ•˜๋Š” ๋“ฑ ์‹ค์ƒํ™œ์—์„œ ํ”ํžˆ ํ™œ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์ด๋ฏธ์ง€๋ฅผ ์„ค๋ช…ํ•จ์œผ๋กœ์จ ์‚ฌ๋žŒ๋“ค์˜ ์ฝ˜ํ…์ธ  ์ ‘๊ทผ์„ฑ์„ ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์บก์…”๋‹ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. * ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate -q pip install jiwer -q ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```python from huggingface_hub import notebook_login notebook_login() ``` ## ํฌ์ผ“๋ชฌ BLIP ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-the-pokmon-blip-captions-dataset]] {์ด๋ฏธ์ง€-์บก์…˜} ์Œ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๐Ÿค— Dataset ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. PyTorch์—์„œ ์ž์‹ ๋งŒ์˜ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [์ด ๋…ธํŠธ๋ถ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ```python from datasets import load_dataset ds = load_dataset("lambdalabs/pokemon-blip-captions") ds ``` ```bash DatasetDict({ train: Dataset({ features: ['image', 'text'], num_rows: 833 }) }) ``` ์ด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” `image`์™€ `text`๋ผ๋Š” ๋‘ ํŠน์„ฑ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> ๋งŽ์€ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€๋‹น ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์บก์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ, ์ผ๋ฐ˜์ ์œผ๋กœ ํ•™์Šต ์ค‘์— ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์บก์…˜ ์ค‘์—์„œ ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. </Tip> [~datasets.Dataset.train_test_split] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํ•™์Šต ๋ถ„ํ• ์„ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค: ```python ds = ds["train"].train_test_split(test_size=0.1) train_ds = ds["train"] test_ds = ds["test"] ``` ํ•™์Šต ์„ธํŠธ์˜ ์ƒ˜ํ”Œ ๋ช‡ ๊ฐœ๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ด…์‹œ๋‹ค. Let's visualize a couple of samples from the training set. 
```python from textwrap import wrap import matplotlib.pyplot as plt import numpy as np def plot_images(images, captions): plt.figure(figsize=(20, 20)) for i in range(len(images)): ax = plt.subplot(1, len(images), i + 1) caption = captions[i] caption = "\n".join(wrap(caption, 12)) plt.title(caption) plt.imshow(images[i]) plt.axis("off") sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)] sample_captions = [train_ds[i]["text"] for i in range(5)] plot_images(sample_images_to_visualize, sample_captions) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"/> </div> ## ๋ฐ์ดํ„ฐ์„ธํŠธ ์ „์ฒ˜๋ฆฌ[[preprocess-the-dataset]] ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ์–‘์‹์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ์ด๋ฏธ์ง€์™€ ์บก์…˜์„ ๋ชจ๋‘ ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ ์ž‘์—…์„ ์œ„ํ•ด, ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋Š” ๋ชจ๋ธ์— ์—ฐ๊ฒฐ๋œ ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoProcessor checkpoint = "microsoft/git-base" processor = AutoProcessor.from_pretrained(checkpoint) ``` ํ”„๋กœ์„ธ์„œ๋Š” ๋‚ด๋ถ€์ ์œผ๋กœ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ํ”ฝ์…€ ํฌ๊ธฐ ์กฐ์ •์„ ํฌํ•จํ•œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ  ์บก์…˜์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python def transforms(example_batch): images = [x for x in example_batch["image"]] captions = [x for x in example_batch["text"]] inputs = processor(images=images, text=captions, padding="max_length") inputs.update({"labels": inputs["input_ids"]}) return inputs train_ds.set_transform(transforms) test_ds.set_transform(transforms) ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๊ฐ€ ์ค€๋น„๋˜์—ˆ์œผ๋‹ˆ ์ด์ œ ํŒŒ์ธํŠœ๋‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๊ธฐ๋ณธ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-a-base-model]] ["microsoft/git-base"](https://huggingface.co/microsoft/git-base)๋ฅผ [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) ๊ฐ์ฒด๋กœ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(checkpoint) ``` ## ํ‰๊ฐ€[[evaluate]] ์ด๋ฏธ์ง€ ์บก์…˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ [Rouge ์ ์ˆ˜](https://huggingface.co/spaces/evaluate-metric/rouge) ๋˜๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate)](https://huggingface.co/spaces/evaluate-metric/wer)๋กœ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹จ์–ด ์˜ค๋ฅ˜์œจ(WER)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— Evaluate ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. WER์˜ ์ž ์žฌ์  ์ œํ•œ ์‚ฌํ•ญ ๋ฐ ๊ธฐํƒ€ ๋ฌธ์ œ์ ์€ [์ด ๊ฐ€์ด๋“œ](https://huggingface.co/spaces/evaluate-metric/wer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```python from evaluate import load import torch wer = load("wer") def compute_metrics(eval_pred): logits, labels = eval_pred predicted = logits.argmax(-1) decoded_labels = processor.batch_decode(labels, skip_special_tokens=True) decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True) wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels) return {"wer_score": wer_score} ``` ## ํ•™์Šต![[train!]] ์ด์ œ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, [`TrainingArguments`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 
```python from transformers import TrainingArguments, Trainer model_name = checkpoint.split("/")[1] training_args = TrainingArguments( output_dir=f"{model_name}-pokemon", learning_rate=5e-5, num_train_epochs=50, fp16=True, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=2, save_total_limit=3, evaluation_strategy="steps", eval_steps=50, save_strategy="steps", save_steps=50, logging_steps=50, remove_unused_columns=False, push_to_hub=True, label_names=["labels"], load_best_model_at_end=True, ) ``` ํ•™์Šต ์ธ์ˆ˜๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ, ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๐Ÿค— Trainer์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```python trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) ``` ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๋ ค๋ฉด [`Trainer`] ๊ฐ์ฒด์—์„œ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```python trainer.train() ``` ํ•™์Šต์ด ์ง„ํ–‰๋˜๋ฉด์„œ ํ•™์Šต ์†์‹ค์ด ์›ํ™œํ•˜๊ฒŒ ๊ฐ์†Œํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```python trainer.push_to_hub() ``` ## ์ถ”๋ก [[inference]] `test_ds`์—์„œ ์ƒ˜ํ”Œ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ```python from PIL import Image import requests url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png" image = Image.open(requests.get(url, stream=True).raw) image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png" alt="Test image"/> </div> ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```python device = "cuda" if torch.cuda.is_available() else "cpu" inputs = processor(images=image, return_tensors="pt").to(device) pixel_values = inputs.pixel_values ``` [`generate`]๋ฅผ ํ˜ธ์ถœํ•˜๊ณ  ์˜ˆ์ธก์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```python generated_ids = model.generate(pixel_values=pixel_values, max_length=50) generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_caption) ``` ```bash a drawing of a pink and blue pokemon ``` ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์ด ๊ฝค ๊ดœ์ฐฎ์€ ์บก์…˜์„ ์ƒ์„ฑํ•œ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค!
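์ฐธ๊ณ ๋กœ, ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•œ ์ฒดํฌํฌ์ธํŠธ๋Š” `image-to-text` [`pipeline`]๋กœ๋„ ๊ฐ„ํŽธํ•˜๊ฒŒ ๋ถˆ๋Ÿฌ์™€ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ์‹œ์˜ ๋ชจ๋ธ ์ด๋ฆ„์€ ์œ„ `TrainingArguments`์—์„œ ์ •ํ•œ ์ถœ๋ ฅ ํด๋” ์ด๋ฆ„(`git-base-pokemon`)์„ ๊ทธ๋Œ€๋กœ ๊ฐ€์ •ํ•œ ๊ฒƒ์ด๋ฏ€๋กœ, ์‹ค์ œ๋กœ๋Š” ๋ณธ์ธ์˜ ํ—ˆ๋ธŒ ๊ณ„์ •/์ €์žฅ์†Œ ์ด๋ฆ„์— ๋งž๊ฒŒ ๋ฐ”๊ฟ” ์‚ฌ์šฉํ•˜์„ธ์š”:

```python
from transformers import pipeline

# ๊ฐ€์ •: ์œ„์—์„œ ์ •ํ•œ ์ €์žฅ์†Œ ์ด๋ฆ„(git-base-pokemon)์„ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ
captioner = pipeline("image-to-text", model="git-base-pokemon")

# ์ด๋ฏธ์ง€ URL, ๋กœ์ปฌ ๊ฒฝ๋กœ ๋˜๋Š” PIL ์ด๋ฏธ์ง€๋ฅผ ๊ทธ๋Œ€๋กœ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค
captioner("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png")
```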
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/sequence_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [[open-in-colab]] <Youtube id="leNG9fN9FQU"/> ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋Š” ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ์˜ ์ผ์ข…์œผ๋กœ, ํ…์ŠคํŠธ์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋งŽ์€ ๋Œ€๊ธฐ์—…์ด ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์‘์šฉ ๋ถ„์•ผ์—์„œ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์šด์˜ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ํ˜•ํƒœ ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐ์„ฑ ๋ถ„์„์œผ๋กœ, ํ…์ŠคํŠธ ์‹œํ€€์Šค์— ๐Ÿ™‚ ๊ธ์ •, ๐Ÿ™ ๋ถ€์ • ๋˜๋Š” ๐Ÿ˜ ์ค‘๋ฆฝ๊ณผ ๊ฐ™์€ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [IMDb](https://huggingface.co/datasets/imdb) ๋ฐ์ดํ„ฐ์…‹์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์˜ํ™” ๋ฆฌ๋ทฐ๊ฐ€ ๊ธ์ •์ ์ธ์ง€ ๋ถ€์ •์ ์ธ์ง€ ํŒ๋‹จํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [LLaMA](../model_doc/llama), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Perceiver](../model_doc/perceiver), [PLBart](../model_doc/plbart), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), 
[RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Transformer-XL](../model_doc/transfo-xl), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## IMDb ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-imdb-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ IMDb ๋ฐ์ดํ„ฐ์…‹์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> imdb = load_dataset("imdb") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค: ```py >>> imdb["test"][0] { "label": 0, "text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichรฉd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it's rubbish as they have to always say \"Gene Roddenberry's Earth...\" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.", } ``` ์ด ๋ฐ์ดํ„ฐ์…‹์—๋Š” ๋‘ ๊ฐ€์ง€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `text`: ์˜ํ™” ๋ฆฌ๋ทฐ ํ…์ŠคํŠธ - `label`: `0`์€ ๋ถ€์ •์ ์ธ ๋ฆฌ๋ทฐ, `1`์€ ๊ธ์ •์ ์ธ ๋ฆฌ๋ทฐ๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` `text`๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ์‹œํ€€์Šค๊ฐ€ DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์ž๋ฅด๊ธฐ ์œ„ํ•œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def preprocess_function(examples): ... return tokenizer(examples["text"], truncation=True) ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. 
๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `batched=True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ ๋ฐ์ดํ„ฐ์…‹ `map`๋ฅผ ๋” ๋น ๋ฅด๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py tokenized_imdb = imdb.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด์„œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ ๊ณ„์‚ฐํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋„๋ก [`~evaluate.EvaluationModule.compute`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = {0: "NEGATIVE", 1: "POSITIVE"} >>> label2id = {"NEGATIVE": 0, "POSITIVE": 1} ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ณ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer >>> model = AutoModelForSequenceClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. 
3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_imdb["train"],
...     eval_dataset=tokenized_imdb["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

<Tip>

[`Trainer`]๋Š” `tokenizer`๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๊ธฐ๋ณธ์ ์œผ๋กœ ๋™์  ํŒจ๋”ฉ์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ, ๋ช…์‹œ์ ์œผ๋กœ ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ๋ฅผ ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค.

</Tip>

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import create_optimizer
>>> import tensorflow as tf

>>> batch_size = 16
>>> num_epochs = 5
>>> batches_per_epoch = len(tokenized_imdb["train"]) // batch_size
>>> total_train_steps = int(batches_per_epoch * num_epochs)
>>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSequenceClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_imdb["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_imdb["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒƒ๊ณผ, ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค.

`compute_metrics` ํ•จ์ˆ˜๋ฅผ [`~transformers.KerasMetricCallback`]์— ์ „๋‹ฌํ•˜์—ฌ ํ›ˆ๋ จ ์ค‘ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

[`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์…‹, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ์ถ”๋ก [[inference]]

์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค:

```py
>>> text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."
```

ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis", model="stevhliu/my_awesome_model")
>>> classifier(text)
[{'label': 'POSITIVE', 'score': 0.9994940757751465}]
```

์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์žฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค:

<frameworkcontent>
<pt>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```

์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'POSITIVE'
```
</pt>
<tf>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```

์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
>>> logits = model(**inputs).logits
```

๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'POSITIVE'
```
</tf>
</frameworkcontent>
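argmax๋กœ ์–ป์€ ๋ ˆ์ด๋ธ” ๋Œ€์‹  ๊ฐ ํด๋ž˜์Šค์˜ ํ™•๋ฅ ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด `logits`์— ์†Œํ”„ํŠธ๋งฅ์Šค๋ฅผ ์ ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ PyTorch ์˜ˆ์ œ์—์„œ ๊ณ„์‚ฐํ•œ `logits`์™€ `model`์ด ์ด๋ฏธ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์‹ค์ œ ํ™•๋ฅ  ๊ฐ’์€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค:

```py
>>> import torch

>>> # ๋กœ์ง“์„ ํ™•๋ฅ ๋กœ ๋ณ€ํ™˜ํ•œ ๋’ค ๋ ˆ์ด๋ธ”๋ณ„๋กœ ์ •๋ฆฌํ•ฉ๋‹ˆ๋‹ค
>>> probabilities = torch.softmax(logits, dim=-1)[0]
>>> {model.config.id2label[i]: round(prob.item(), 4) for i, prob in enumerate(probabilities)}
```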
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/masked_language_modeling.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling)[[masked-language-modeling]] [[open-in-colab]] <Youtube id="mqElG5QJWUg"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์–‘๋ฐฉํ–ฅ์œผ๋กœ ํ† ํฐ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ํ† ํฐ์˜ ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์–‘์ชฝ์—์„œ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•œ ๋ฌธ๋งฅ์  ์ดํ•ด๊ฐ€ ํ•„์š”ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•˜๋ฉฐ, BERT๊ฐ€ ๊ทธ ์˜ˆ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๋ฃฐ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [r/askscience](https://www.reddit.com/r/askscience/) ๋ถ€๋ถ„์„ ์‚ฌ์šฉํ•ด [DistilRoBERTa](https://huggingface.co/distilroberta-base) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก  ์‹œ์— ์ง์ ‘ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ์ฒ˜๋Ÿผ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์•„ํ‚คํ…์ณ ์ค‘ ํ•˜๋‚˜๋ฅผ ์„ ํƒํ•˜์„ธ์š”: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: 
```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€์˜ ๊ณต์œ ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด(When prompted) ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ r/askscience ์ค‘ ์ผ๋ถ€๋งŒ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ ํ•™์Šต์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks`๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋ฉ๋‚˜๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ๋ฉ‹์ง„ ์ ์€ (๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ) *๋‹ค์Œ ๋‹จ์–ด๊ฐ€ ๋ ˆ์ด๋ธ”*์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ ˆ์ด๋ธ”์ด ๋”ฐ๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="8PmhEIXhBvI"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด, ๋‹ค์Œ ๋‹จ๊ณ„๋กœ DistilRoBERTa ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, `text` ํ•„๋“œ๋Š” `answers` ์•ˆ์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ์—์„œ [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ์ด์ œ ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” `answers` ์ ‘๋‘์‚ฌ(prefix)๋กœ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ณ„๋„์˜ ์—ด์ด ๋˜๊ณ , `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹  ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ ์˜ˆ์ œ์— ๋Œ€ํ•ด ๋ฌธ์ž์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ `join`ํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋„๋ก `batched=True`๋ฅผ ์„ค์ •ํ•˜๊ณ  `num_proc`๋กœ ์ฒ˜๋ฆฌ ํšŸ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ํ† ํฐ ์‹œํ€€์Šค๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ์ค‘ ์ผ๋ถ€๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊น๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ  - ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์ •์˜ํ•œ `block_size` ๋ณด๋‹ค ๋” ์งง์€ ๋ฉ์–ด๋ฆฌ๋กœ ๋ถ„ํ• ํ•˜๋Š”๋ฐ, ์ด ๋ฉ์–ด๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ์งง๊ณ  GPU RAM์ด ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ธธ์ด์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... 
# customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```

์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```

์ด์ œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค collation ๋‹จ๊ณ„์—์„œ ๋งค ๋ฐฐ์น˜ ์•ˆ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค.

<frameworkcontent>
<pt>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
</pt>
<tf>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```
</tf>
</frameworkcontent>

## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๊ฐ€ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค:

1. [`TrainingArguments`]์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค).
2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(collator)์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_mlm_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
...
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity)๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import math >>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 8.76 ``` ๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, Hub๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> TensorFlow๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer) ํ•จ์ˆ˜ ์„ค์ •, ํ•™์Šต๋ฅ (learning rate) ์Šค์ผ€์ฅด๋ง, ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์„ค์ •๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๋‹ค์Œ์œผ๋กœ [`TFAutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์†Œ๋“œ๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ์ด๋Š” ์—…๋กœ๋“œํ•  ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €์˜ ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์— ์ง€์ •ํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_mlm_model", ... tokenizer=tokenizer, ... ) ``` ๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜, ์ฝœ๋ฐฑ์ด ํฌํ•จ๋œ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ์ž๋™์œผ๋กœ Hub๋กœ ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ์ œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ง€๊ธˆ๊นŒ์ง€ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •์„ ์ž˜ ํ–ˆ์œผ๋‹ˆ, ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์ด ๋นˆ์นธ์„ ์ฑ„์šธ ํ…์ŠคํŠธ๋ฅผ ์ŠคํŽ˜์…œ ํ† ํฐ(special token)์ธ `<mask>` ํ† ํฐ์œผ๋กœ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "The Milky Way is a <mask> galaxy." 
``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `fill-mask`ํƒœ์Šคํฌ๋กœ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. `top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•˜๋Š” ์˜ˆ์ธก์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model") >>> mask_filler(text, top_k=3) [{'score': 0.5150994658470154, 'token': 21300, 'token_str': ' spiral', 'sequence': 'The Milky Way is a spiral galaxy.'}, {'score': 0.07087188959121704, 'token': 2232, 'token_str': ' massive', 'sequence': 'The Milky Way is a massive galaxy.'}, {'score': 0.06434620916843414, 'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}] ``` <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="pt") >>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1] ``` ๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMaskedLM >>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์€ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist() >>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_mlm_model") >>> inputs = tokenizer(text, return_tensors="tf") >>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1] ``` ๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMaskedLM >>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model") >>> logits = model(**inputs).logits >>> mask_token_logits = logits[0, mask_token_index, :] ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์€ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy() >>> for token in top_3_tokens: ... print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. ``` </tf> </frameworkcontent>
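ํŒŒ์ดํ”„๋ผ์ธ์ด ๋ฐ˜ํ™˜ํ•˜๋Š” `score`์™€ ๊ฐ™์€ ํ™•๋ฅ  ๊ฐ’์ด ํ•„์š”ํ•˜๋‹ค๋ฉด ๋งˆ์Šคํฌ ์œ„์น˜์˜ ๋กœ์ง“์— ์†Œํ”„ํŠธ๋งฅ์Šค๋ฅผ ์ ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ PyTorch ์˜ˆ์ œ์˜ `mask_token_logits`์™€ `tokenizer`๊ฐ€ ์ด๋ฏธ ์ค€๋น„๋˜์–ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์ถœ๋ ฅ๋˜๋Š” ํ™•๋ฅ ์€ ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค:

```py
>>> import torch

>>> # ๋งˆ์Šคํฌ ์œ„์น˜์˜ ๋กœ์ง“์„ ํ™•๋ฅ ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์ƒ์œ„ 3๊ฐœ ํ† ํฐ๊ณผ ํ™•๋ฅ ์„ ํ•จ๊ป˜ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค
>>> probs = torch.softmax(mask_token_logits, dim=-1)[0]
>>> top_3 = torch.topk(probs, 3)
>>> for prob, token_id in zip(top_3.values.tolist(), top_3.indices.tolist()):
...     print(tokenizer.decode([token_id]), round(prob, 3))
```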
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/object_detection.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๊ฐ์ฒด ํƒ์ง€ [[object-detection]] [[open-in-colab]] ๊ฐ์ฒด ํƒ์ง€๋Š” ์ด๋ฏธ์ง€์—์„œ ์ธ์Šคํ„ด์Šค(์˜ˆ: ์‚ฌ๋žŒ, ๊ฑด๋ฌผ ๋˜๋Š” ์ž๋™์ฐจ)๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›๊ณ  ํƒ์ง€๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ์™€ ๊ด€๋ จ๋œ ๋ ˆ์ด๋ธ”์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ์ฒด๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ๊ฐ์€ ์ž์ฒด์ ์ธ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ฐจ์™€ ๊ฑด๋ฌผ์ด ์žˆ๋Š” ์ด๋ฏธ์ง€). ๋˜ํ•œ ๊ฐ ๊ฐ์ฒด๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ๋ถ€๋ถ„์— ์กด์žฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ด๋ฏธ์ง€์— ์—ฌ๋Ÿฌ ๋Œ€์˜ ์ฐจ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Œ). ์ด ์ž‘์—…์€ ๋ณดํ–‰์ž, ๋„๋กœ ํ‘œ์ง€ํŒ, ์‹ ํ˜ธ๋“ฑ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์„ ๊ฐ์ง€ํ•˜๋Š” ์ž์œจ ์ฃผํ–‰์— ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‘์šฉ ๋ถ„์•ผ๋กœ๋Š” ์ด๋ฏธ์ง€ ๋‚ด ๊ฐ์ฒด ์ˆ˜ ๊ณ„์‚ฐ ๋ฐ ์ด๋ฏธ์ง€ ๊ฒ€์ƒ‰ ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ๋‹ค์Œ์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค: 1. ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ(์ธํ’‹ ๋ฐ์ดํ„ฐ์˜ ํŠน์„ฑ์„ ์ถ”์ถœํ•˜๋Š” ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ)๊ณผ ์ธ์ฝ”๋”-๋””์ฝ”๋” ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์„ ๊ฒฐํ•ฉํ•œ [DETR](https://huggingface.co/docs/transformers/model_doc/detr) ๋ชจ๋ธ์„ [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•ด ๋ฏธ์„ธ์กฐ์ • ํ•˜๊ธฐ 2. ๋ฏธ์„ธ์กฐ์ • ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์˜ ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q datasets transformers evaluate timm albumentations ``` ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•œ ๐Ÿค— Datasets๊ณผ ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๐Ÿค— Transformers, ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•˜๊ธฐ ์œ„ํ•œ `albumentations`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. DETR ๋ชจ๋ธ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ์„ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ˜„์žฌ `timm`์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## CPPE-5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-CPPE-5-dataset]] [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” COVID-19 ๋Œ€์œ ํ–‰ ์ƒํ™ฉ์—์„œ ์˜๋ฃŒ ์ „๋ฌธ์ธ๋ ฅ ๋ณดํ˜ธ ์žฅ๋น„(PPE)๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ํฌํ•จ๋œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ด๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> cppe5 = load_dataset("cppe-5") >>> cppe5 DatasetDict({ train: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 1000 }) test: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 29 }) }) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํ•™์Šต ์„ธํŠธ ์ด๋ฏธ์ง€ 1,000๊ฐœ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ ์ด๋ฏธ์ง€ 29๊ฐœ๋ฅผ ๊ฐ–๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ ์œ„ํ•ด, ์˜ˆ์‹œ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณด์„ธ์š”. ```py >>> cppe5["train"][0] {'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>, 'width': 943, 'height': 663, 'objects': {'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0]], 'category': [4, 4, 0, 0]}} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ์˜ˆ์‹œ๋Š” ๋‹ค์Œ์˜ ์˜์—ญ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: - `image_id`: ์˜ˆ์‹œ ์ด๋ฏธ์ง€ id - `image`: ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” `PIL.Image.Image` ๊ฐ์ฒด - `width`: ์ด๋ฏธ์ง€์˜ ๋„ˆ๋น„ - `height`: ์ด๋ฏธ์ง€์˜ ๋†’์ด - `objects`: ์ด๋ฏธ์ง€ ์•ˆ์˜ ๊ฐ์ฒด๋“ค์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋ฅผ ํฌํ•จํ•˜๋Š” ๋”•์…”๋„ˆ๋ฆฌ: - `id`: ์–ด๋…ธํ…Œ์ด์…˜ id - `area`: ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ๋ฉด์  - `bbox`: ๊ฐ์ฒด์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ([COCO ํฌ๋งท](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco)์œผ๋กœ) - `category`: ๊ฐ์ฒด์˜ ์นดํ…Œ๊ณ ๋ฆฌ, ๊ฐ€๋Šฅํ•œ ๊ฐ’์œผ๋กœ๋Š” `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` ๋ฐ `Mask (4)` ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. `bbox` ํ•„๋“œ๊ฐ€ DETR ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” COCO ํ˜•์‹์„ ๋”ฐ๋ฅธ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ `objects` ๋‚ด๋ถ€์˜ ํ•„๋“œ ๊ทธ๋ฃน์€ DETR์ด ์š”๊ตฌํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜ ํ˜•์‹๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•œ ๊ฐ€์ง€ ์˜ˆ์‹œ๋ฅผ ์‹œ๊ฐํ™”ํ•˜์„ธ์š”. ```py >>> import numpy as np >>> import os >>> from PIL import Image, ImageDraw >>> image = cppe5["train"][0]["image"] >>> annotations = cppe5["train"][0]["objects"] >>> draw = ImageDraw.Draw(image) >>> categories = cppe5["train"].features["objects"].feature["category"].names >>> id2label = {index: x for index, x in enumerate(categories, start=0)} >>> label2id = {v: k for k, v in id2label.items()} >>> for i in range(len(annotations["id"])): ... box = annotations["bbox"][i - 1] ... class_idx = annotations["category"][i - 1] ... x, y, w, h = tuple(box) ... draw.rectangle((x, y, x + w, y + h), outline="red", width=1) ... draw.text((x, y), id2label[class_idx], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/TdaqPJO.png" alt="CPPE-5 Image Example"/> </div> ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ์—ฐ๊ฒฐ๋œ ๋ ˆ์ด๋ธ”์„ ์‹œ๊ฐํ™”ํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€ ๋ฐ์ดํ„ฐ, ํŠนํžˆ `category` ํ•„๋“œ์—์„œ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์™€์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค์— ๋งคํ•‘ํ•˜๋Š” `id2label`๊ณผ ๋ฐ˜๋Œ€๋กœ ๋งคํ•‘ํ•˜๋Š” `label2id` ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์„ค์ •ํ•  ๋•Œ ์ด๋Ÿฌํ•œ ๋งคํ•‘์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋งคํ•‘์€ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ชจ๋ธ์„ ๊ณต์œ ํ–ˆ์„ ๋•Œ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•œ ์ตœ์ข… ๋‹จ๊ณ„๋กœ, ์ž ์žฌ์ ์ธ ๋ฌธ์ œ๋ฅผ ์ฐพ์•„๋ณด์„ธ์š”. 
๊ฐ์ฒด ๊ฐ์ง€๋ฅผ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๊ฐ€ ์ด๋ฏธ์ง€์˜ ๊ฐ€์žฅ์ž๋ฆฌ๋ฅผ ๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ "๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ(run away)"์€ ํ›ˆ๋ จ ์ค‘์— ์˜ค๋ฅ˜๋ฅผ ๋ฐœ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ธฐ์— ์ด ๋‹จ๊ณ„์—์„œ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋„ ๊ฐ™์€ ๋ฌธ์ œ๊ฐ€ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ„๋‹จํ•˜๊ฒŒํ•˜๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์—์„œ ์ด๋Ÿฌํ•œ ์ด๋ฏธ์ง€๋ฅผ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ```py >>> remove_idx = [590, 821, 822, 875, 876, 878, 879] >>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx] >>> cppe5["train"] = cppe5["train"].select(keep) ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ [[preprocess-the-data]] ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด, ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉํ•œ ์ „์ฒ˜๋ฆฌ ๋ฐฉ์‹๊ณผ ์ •ํ™•ํ•˜๊ฒŒ ์ผ์น˜ํ•˜๋„๋ก ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [`AutoImageProcessor`]๋Š” ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜์—ฌ DETR ๋ชจ๋ธ์ด ํ•™์Šต์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `pixel_values`, `pixel_mask`, ๊ทธ๋ฆฌ๊ณ  `labels`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ž‘์—…์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์—๋Š” ๊ฑฑ์ •ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `image_mean = [0.485, 0.456, 0.406 ]` - `image_std = [0.229, 0.224, 0.225]` ์ด ๊ฐ’๋“ค์€ ๋ชจ๋ธ ์‚ฌ์ „ ํ›ˆ๋ จ ์ค‘ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ’๋“ค์€ ์ถ”๋ก  ๋˜๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ด๋ฏธ์ง€ ๋ชจ๋ธ์„ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์กฐ์ •ํ•  ๋•Œ ๋ณต์ œํ•ด์•ผ ํ•˜๋Š” ์ค‘์š”ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "facebook/detr-resnet-50" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` `image_processor`์— ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜๊ธฐ ์ „์—, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋‘ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์ด๋ฏธ์ง€ ์ฆ๊ฐ• - DETR ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋‹ค์‹œ ํฌ๋งทํŒ… ์ฒซ์งธ๋กœ, ๋ชจ๋ธ์ด ํ•™์Šต ๋ฐ์ดํ„ฐ์— ๊ณผ์ ํ•ฉ ๋˜์ง€ ์•Š๋„๋ก ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ค‘ ์•„๋ฌด๊ฑฐ๋‚˜ ์‚ฌ์šฉํ•˜์—ฌ ๋ณ€ํ™˜์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” [Albumentations](https://albumentations.ai/docs/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค... ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋ณ€ํ™˜์„ ์ด๋ฏธ์ง€์— ์ ์šฉํ•˜๊ณ  ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์—…๋ฐ์ดํŠธํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฌธ์„œ์—๋Š” [๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•ด ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ฐ•ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๊ฐ€์ด๋“œ](https://huggingface.co/docs/datasets/object_detection)๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ์˜ˆ์ œ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๊ฐ ์ด๋ฏธ์ง€๋ฅผ (480, 480) ํฌ๊ธฐ๋กœ ์กฐ์ •ํ•˜๊ณ , ์ขŒ์šฐ๋กœ ๋’ค์ง‘๊ณ , ๋ฐ๊ธฐ๋ฅผ ๋†’์ด๋Š” ๋™์ผํ•œ ์ ‘๊ทผ๋ฒ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> import albumentations >>> import numpy as np >>> import torch >>> transform = albumentations.Compose( ... [ ... albumentations.Resize(480, 480), ... albumentations.HorizontalFlip(p=1.0), ... albumentations.RandomBrightnessContrast(p=1.0), ... ], ... bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]), ... ) ``` ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•์‹์ผ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•ฉ๋‹ˆ๋‹ค: `{'image_id': int, 'annotations': List[Dict]}`, ์—ฌ๊ธฐ์„œ ๊ฐ ๋”•์…”๋„ˆ๋ฆฌ๋Š” COCO ๊ฐ์ฒด ์–ด๋…ธํ…Œ์ด์…˜์ž…๋‹ˆ๋‹ค. 
๋‹จ์ผ ์˜ˆ์ œ์— ๋Œ€ํ•ด ์–ด๋…ธํ…Œ์ด์…˜์˜ ํ˜•์‹์„ ๋‹ค์‹œ ์ง€์ •ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def formatted_anns(image_id, category, area, bbox): ... annotations = [] ... for i in range(0, len(category)): ... new_ann = { ... "image_id": image_id, ... "category_id": category[i], ... "isCrowd": 0, ... "area": area[i], ... "bbox": list(bbox[i]), ... } ... annotations.append(new_ann) ... return annotations ``` ์ด์ œ ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜ ์ „์ฒ˜๋ฆฌ ๋ณ€ํ™˜์„ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> # transforming a batch >>> def transform_aug_ann(examples): ... image_ids = examples["image_id"] ... images, bboxes, area, categories = [], [], [], [] ... for image, objects in zip(examples["image"], examples["objects"]): ... image = np.array(image.convert("RGB"))[:, :, ::-1] ... out = transform(image=image, bboxes=objects["bbox"], category=objects["category"]) ... area.append(objects["area"]) ... images.append(out["image"]) ... bboxes.append(out["bboxes"]) ... categories.append(out["category"]) ... targets = [ ... {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)} ... for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes) ... ] ... return image_processor(images=images, annotations=targets, return_tensors="pt") ``` ์ด์ „ ๋‹จ๊ณ„์—์„œ ๋งŒ๋“  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๐Ÿค— Datasets์˜ [`~datasets.Dataset.with_transform`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ๋งˆ๋‹ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ๋Š” ์ „์ฒ˜๋ฆฌ ํ›„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์˜ˆ์‹œ ํ•˜๋‚˜๋ฅผ ๊ฐ€์ ธ์™€์„œ ๋ณ€ํ™˜ ํ›„ ๋ชจ์–‘์ด ์–ด๋–ป๊ฒŒ ๋˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ, `pixel_values` ํ…์„œ, `pixel_mask` ํ…์„œ, ๊ทธ๋ฆฌ๊ณ  `labels`๋กœ ๊ตฌ์„ฑ๋œ ํ…์„œ๊ฐ€ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
```py >>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann) >>> cppe5["train"][15] {'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638], ..., [-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]], [[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256], ..., [-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]], [[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302], ..., [-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]), 'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1]]), 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}} ``` ๊ฐ๊ฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์ฆ๊ฐ•ํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ์ค€๋น„ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ „์ฒ˜๋ฆฌ๋Š” ์•„์ง ๋๋‚˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, ์ด๋ฏธ์ง€๋ฅผ ๋ฐฐ์น˜๋กœ ๋งŒ๋“ค ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ํฐ ์ด๋ฏธ์ง€์— ์ด๋ฏธ์ง€(ํ˜„์žฌ `pixel_values` ์ธ)๋ฅผ ํŒจ๋“œํ•˜๊ณ , ์‹ค์ œ ํ”ฝ์…€(1)๊ณผ ํŒจ๋”ฉ(0)์„ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ๊ทธ์— ํ•ด๋‹นํ•˜๋Š” ์ƒˆ๋กœ์šด `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... return batch ``` ## DETR ๋ชจ๋ธ ํ•™์Šต์‹œํ‚ค๊ธฐ [[training-the-DETR-model]] ์ด์ „ ์„น์…˜์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์ด์ œ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ด๋ฏธ์ง€๋Š” ๋ฆฌ์‚ฌ์ด์ฆˆ ํ›„์—๋„ ์—ฌ์ „ํžˆ ์šฉ๋Ÿ‰์ด ํฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์€ ๋‹ค์Œ์˜ ๋‹จ๊ณ„๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค: 1. [`AutoModelForObjectDetection`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒ˜๋ฆฌ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 2. [`TrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 4. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ๋•Œ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ์—์„œ ๋งŒ๋“  `label2id`์™€ `id2label` ๋งคํ•‘์„ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋˜ํ•œ, `ignore_mismatched_sizes=True`๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ธฐ์กด ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ชจ๋ธ์—์„œ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋ฅผ ์ƒˆ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForObjectDetection >>> model = AutoModelForObjectDetection.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ignore_mismatched_sizes=True, ... ) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•œ ๋‹ค์Œ, ํ•„์š”์— ๋”ฐ๋ผ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๊ตฌ์„ฑํ•˜์„ธ์š”. ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ `remove_unused_columns`๊ฐ€ `True`์ผ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ์—ด์ด ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์—ด์ด ์—†๋Š” ๊ฒฝ์šฐ `pixel_values`๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์— `remove_unused_columns`๋ฅผ `False`๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜์—ฌ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์‹ญ์‹œ์˜ค(ํ—ˆ๊น…ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="detr-resnet-50_finetuned_cppe5", ... per_device_train_batch_size=8, ... num_train_epochs=10, ... fp16=True, ... save_steps=200, ... logging_steps=50, ... learning_rate=1e-5, ... weight_decay=1e-4, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ `model`, `training_args`, `collate_fn`, `image_processor`์™€ ๋ฐ์ดํ„ฐ ์„ธํŠธ(`cppe5`)๋ฅผ ๋ชจ๋‘ ๊ฐ€์ ธ์˜จ ํ›„, [`~transformers.Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=collate_fn, ... train_dataset=cppe5["train"], ... tokenizer=image_processor, ... ) >>> trainer.train() ``` `training_args`์—์„œ `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•œ ๊ฒฝ์šฐ, ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋ฉ๋‹ˆ๋‹ค. ํ•™์Šต ์™„๋ฃŒ ํ›„, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์ตœ์ข… ๋ชจ๋ธ์„ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ [[evaluate]] ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ผ๋ จ์˜ <a href="https://cocodataset.org/#detection-eval">COCO-์Šคํƒ€์ผ ์ง€ํ‘œ</a>๋กœ ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด์— ๊ตฌํ˜„๋œ ํ‰๊ฐ€ ์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์—ฌ๊ธฐ์—์„œ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•œ ์ตœ์ข… ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ `torchvision`์—์„œ ์ œ๊ณตํ•˜๋Š” ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `torchvision` ํ‰๊ฐ€์ž(evaluator)๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‹ค์ธก๊ฐ’์ธ COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋นŒ๋“œํ•˜๋Š” API๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ํŠน์ • ํ˜•์‹์œผ๋กœ ์ €์žฅํ•ด์•ผ ํ•˜๋ฏ€๋กœ, ๋จผ์ € ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋””์Šคํฌ์— ์ €์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•  ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, cppe5["test"]์—์„œ์˜ ์–ด๋…ธํ…Œ์ด์…˜์€ ํฌ๋งท์„ ๋งž์ถฐ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋Œ€๋กœ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ‰๊ฐ€ ๋‹จ๊ณ„๋Š” ์•ฝ๊ฐ„์˜ ์ž‘์—…์ด ํ•„์š”ํ•˜์ง€๋งŒ, ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€ ์ฃผ์š” ๋‹จ๊ณ„๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, `cppe5["test"]` ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค: ์–ด๋…ธํ…Œ์ด์…˜์„ ํฌ๋งท์— ๋งž๊ฒŒ ๋งŒ๋“ค๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋””์Šคํฌ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> import json >>> # format annotations the same as for training, no need for data augmentation >>> def val_formatted_anns(image_id, objects): ... annotations = [] ... for i in range(0, len(objects["id"])): ... new_ann = { ... 
"id": objects["id"][i], ... "category_id": objects["category"][i], ... "iscrowd": 0, ... "image_id": image_id, ... "area": objects["area"][i], ... "bbox": objects["bbox"][i], ... } ... annotations.append(new_ann) ... return annotations >>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects >>> def save_cppe5_annotation_file_images(cppe5): ... output_json = {} ... path_output_cppe5 = f"{os.getcwd()}/cppe5/" ... if not os.path.exists(path_output_cppe5): ... os.makedirs(path_output_cppe5) ... path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json") ... categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label] ... output_json["images"] = [] ... output_json["annotations"] = [] ... for example in cppe5: ... ann = val_formatted_anns(example["image_id"], example["objects"]) ... output_json["images"].append( ... { ... "id": example["image_id"], ... "width": example["image"].width, ... "height": example["image"].height, ... "file_name": f"{example['image_id']}.png", ... } ... ) ... output_json["annotations"].extend(ann) ... output_json["categories"] = categories_json ... with open(path_anno, "w") as file: ... json.dump(output_json, file, ensure_ascii=False, indent=4) ... for im, img_id in zip(cppe5["image"], cppe5["image_id"]): ... path_img = os.path.join(path_output_cppe5, f"{img_id}.png") ... im.save(path_img) ... return path_output_cppe5, path_anno ``` ๋‹ค์Œ์œผ๋กœ, `cocoevaluator`์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `CocoDetection` ํด๋ž˜์Šค์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torchvision >>> class CocoDetection(torchvision.datasets.CocoDetection): ... def __init__(self, img_folder, image_processor, ann_file): ... super().__init__(img_folder, ann_file) ... self.image_processor = image_processor ... def __getitem__(self, idx): ... # read in PIL image and target in COCO format ... img, target = super(CocoDetection, self).__getitem__(idx) ... # preprocess image and target: converting target to DETR format, ... # resizing + normalization of both image and target) ... image_id = self.ids[idx] ... target = {"image_id": image_id, "annotations": target} ... encoding = self.image_processor(images=img, annotations=target, return_tensors="pt") ... pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension ... target = encoding["labels"][0] # remove batch dimension ... return {"pixel_values": pixel_values, "labels": target} >>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"]) >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์™€์„œ ํ‰๊ฐ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> from tqdm import tqdm >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco) >>> val_dataloader = torch.utils.data.DataLoader( ... test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn ... ) >>> with torch.no_grad(): ... for idx, batch in enumerate(tqdm(val_dataloader)): ... pixel_values = batch["pixel_values"] ... pixel_mask = batch["pixel_mask"] ... labels = [ ... {k: v for k, v in t.items()} for t in batch["labels"] ... ] # these are in DETR format, resized + normalized ... # forward pass ... 
outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask) ... orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0) ... results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to COCO api ... module.add(prediction=results, reference=labels) ... del batch >>> results = module.compute() >>> print(results) Accumulating evaluation results... DONE (t=0.08s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590 ``` ์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ๋Š” [`~transformers.TrainingArguments`]์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์กฐ์ •ํ•˜์—ฌ ๋”์šฑ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‹œ๋„ํ•ด ๋ณด์„ธ์š”! ## ์ถ”๋ก ํ•˜๊ธฐ [[inference]] DETR ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ํ‰๊ฐ€ํ•˜๊ณ , ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> import requests >>> url = "https://i.imgur.com/2lnWoly.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5") >>> obj_detector(image) ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> with torch.no_grad(): ... inputs = image_processor(images=image, return_tensors="pt") ... outputs = model(**inputs) ... target_sizes = torch.tensor([image.size[::-1]]) ... results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08] Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9] ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> draw = ImageDraw.Draw(image) >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... x, y, x2, y2 = tuple(box) ... draw.rectangle((x, y, x2, y2), outline="red", width=1) ... 
draw.text((x, y), model.config.id2label[label.item()], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/4QZnf9A.png" alt="Object detection result on a new image"/> </div>
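
ํ•„์š”ํ•˜๋‹ค๋ฉด ํƒ์ง€ ๊ฒฐ๊ณผ๋ฅผ ํด๋ž˜์Šค๋ณ„๋กœ ์ง‘๊ณ„ํ•ด ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ ์˜ˆ์ œ์˜ `model`๊ณผ `results` ๋ณ€์ˆ˜๊ฐ€ ๊ทธ๋Œ€๋กœ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์˜ˆ์‹œ ์ถœ๋ ฅ์€ ์œ„์—์„œ ํƒ์ง€๋œ ๋‘ ๊ฐ์ฒด(Coverall, Mask)๋ฅผ ๊ธฐ์ค€์œผ๋กœ ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
>>> from collections import Counter

>>> # count detections per class using the `results` from the snippet above
>>> detected_labels = [model.config.id2label[label.item()] for label in results["labels"]]
>>> Counter(detected_labels)
Counter({'Coverall': 1, 'Mask': 1})
```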
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/image_classification.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]] [[open-in-colab]] <Youtube id="tjAIM7BOYhw"/> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ์ด๋ฏธ์ง€์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋˜๋Š” ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์™€ ๋‹ฌ๋ฆฌ ์ž…๋ ฅ์€ ์ด๋ฏธ์ง€๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ํ”ฝ์…€ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—๋Š” ์ž์—ฐ์žฌํ•ด ํ›„ ํ”ผํ•ด ๊ฐ์ง€, ๋†์ž‘๋ฌผ ๊ฑด๊ฐ• ๋ชจ๋‹ˆํ„ฐ๋ง, ์˜๋ฃŒ ์ด๋ฏธ์ง€์—์„œ ์งˆ๋ณ‘์˜ ์ง•ํ›„ ๊ฒ€์‚ฌ ์ง€์› ๋“ฑ ๋‹ค์–‘ํ•œ ์‘์šฉ ์‚ฌ๋ก€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: 1. [Food-101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [ViT](model_doc/vit)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ด๋ฏธ์ง€์—์„œ ์‹ํ’ˆ ํ•ญ๋ชฉ์„ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2), [VAN](../model_doc/van), [ViT](../model_doc/vit), [ViT Hybrid](../model_doc/vit_hybrid), [ViTMSN](../model_doc/vit_msn) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-food101-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋” ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ์‹คํ—˜์„ ํ†ตํ•ด ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from datasets import load_dataset >>> food = load_dataset("food101", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”: ```py >>> food = food.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> food["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>, 'label': 79} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๊ฐ ์˜ˆ์ œ์—๋Š” ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `image`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ PIL ์ด๋ฏธ์ง€ - `label`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค ๋ชจ๋ธ์ด ๋ ˆ์ด๋ธ” ID์—์„œ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์‰ฝ๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•˜๊ณ , ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> labels = food["train"].features["label"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ``` ์ด์ œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> id2label[str(79)] 'prime_rib' ``` ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์ด๋ฏธ์ง€๋ฅผ ํ…์„œ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ViT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "google/vit-base-patch16-224-in21k" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` <frameworkcontent> <pt> ์ด๋ฏธ์ง€์— ๋ช‡ ๊ฐ€์ง€ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ Torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€์˜ ์ž„์˜ ๋ถ€๋ถ„์„ ํฌ๋กญํ•˜๊ณ  ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•œ ๋‹ค์Œ, ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ๋กœ ์ •๊ทœํ™”ํ•˜์„ธ์š”: ```py >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ `pixel_values`(๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž…๋ ฅ)๋ฅผ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]] ... del examples["image"] ... return examples ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.with_transform`]์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๋ณ€ํ™˜์ด ์ฆ‰์‹œ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> food = food.with_transform(transforms) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ, `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€์ ์ธ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ๊ณผ์ ํ•ฉ์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ›ˆ๋ จ ๋ถ€๋ถ„์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 
์—ฌ๊ธฐ์„œ Keras ์ „์ฒ˜๋ฆฌ ๋ ˆ์ด์–ด๋กœ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ํฌํ•จ)๊ณผ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(์ค‘์•™ ํฌ๋กœํ•‘, ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”๋งŒ)์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. `tf.image` ๋˜๋Š” ๋‹ค๋ฅธ ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from tensorflow import keras >>> from tensorflow.keras import layers >>> size = (image_processor.size["height"], image_processor.size["width"]) >>> train_data_augmentation = keras.Sequential( ... [ ... layers.RandomCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... layers.RandomFlip("horizontal"), ... layers.RandomRotation(factor=0.02), ... layers.RandomZoom(height_factor=0.2, width_factor=0.2), ... ], ... name="train_data_augmentation", ... ) >>> val_data_augmentation = keras.Sequential( ... [ ... layers.CenterCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... ], ... name="val_data_augmentation", ... ) ``` ๋‹ค์Œ์œผ๋กœ ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€๊ฐ€ ์•„๋‹ˆ๋ผ ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ์ ์ ˆํ•œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import tensorflow as tf >>> from PIL import Image >>> def convert_to_tf_tensor(image: Image): ... np_image = np.array(image) ... tf_image = tf.convert_to_tensor(np_image) ... # `expand_dims()` is used to add a batch dimension since ... # the TF augmentation layers operates on batched inputs. ... return tf.expand_dims(tf_image, 0) >>> def preprocess_train(example_batch): ... """Apply train_transforms across a batch.""" ... images = [ ... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ... def preprocess_val(example_batch): ... """Apply val_transforms across a batch.""" ... images = [ ... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ``` ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฆ‰์‹œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์„ธ์š”: ```py food["train"].set_transform(preprocess_train) food["test"].set_transform(preprocess_val) ``` ์ตœ์ข… ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋กœ `DefaultDataCollator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 
(๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForImageClassification`]๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜, ๋ ˆ์ด๋ธ” ๋งคํ•‘ ๋ฐ ๋ ˆ์ด๋ธ” ์ˆ˜๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer >>> model = AutoModelForImageClassification.from_pretrained( ... checkpoint, ... num_labels=len(labels), ... id2label=id2label, ... label2id=label2id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `image` ์—ด์ด ์‚ญ์ œ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋ฏธ์‚ฌ์šฉ ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์—†์œผ๋ฉด `pixel_values`์„ ์ƒ์„ฑํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด `remove_unused_columns=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”! ๋‹ค๋ฅธ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•˜๋ฉด ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_food_model", ... remove_unused_columns=False, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... gradient_accumulation_steps=4, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=food["train"], ... eval_dataset=food["test"], ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... 
)

>>> trainer.train()
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”:

```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>

<frameworkcontent>
<tf>

<Tip>

Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, ๋จผ์ € [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](./training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”:

1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.
2. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค.
3. ๐Ÿค— Dataset์„ `tf.data.Dataset`์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
4. ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค.
5. ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `fit()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
6. ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค.

ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์ •์˜ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=learning_rate,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=weight_decay_rate,
...     num_warmup_steps=0,
... )
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ํ•จ๊ป˜ [`TFAutoModelForImageClassification`]์œผ๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
```

๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ [`~datasets.Dataset.to_tf_dataset`]์™€ `data_collator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”:

```py
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )

>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```

`compile()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”:

```py
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy

>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```

์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ๐Ÿค— Hub๋กœ ํ‘ธ์‹œํ•˜๋ ค๋ฉด [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
`compute_metrics` ํ•จ์ˆ˜๋ฅผ [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback)์— ์ „๋‹ฌํ•˜๊ณ , [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="food_classifier",
...     tokenizer=image_processor,
...     save_strategy="no",
... )

>>> callbacks = [metric_callback, push_to_hub_callback]
```

์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! 
ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜์™€ ํ•จ๊ป˜ `fit()`์„ ํ˜ธ์ถœํ•˜๊ณ , ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks) Epoch 1/5 250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290 Epoch 2/5 250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690 Epoch 3/5 250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820 Epoch 4/5 250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900 Epoch 5/5 250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890 ``` ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ๊ณต์œ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> ds = load_dataset("food101", split="validation[:10]") >>> image = ds["image"][0] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/> </div> ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("image-classification", model="my_awesome_food_model") >>> classifier(image) [{'score': 0.31856709718704224, 'label': 'beignets'}, {'score': 0.015232225880026817, 'label': 'bruschetta'}, {'score': 0.01519392803311348, 'label': 'chicken_wings'}, {'score': 0.013022331520915031, 'label': 'pork_chop'}, {'score': 0.012728818692266941, 'label': 'prime_rib'}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> import torch >>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model") >>> inputs = image_processor(image, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForImageClassification >>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_label = logits.argmax(-1).item() >>> model.config.id2label[predicted_label] 'beignets' ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier") >>> inputs = image_processor(image, return_tensors="tf") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier") >>> logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'beignets' ``` </tf> </frameworkcontent>
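
ํŒŒ์ดํ”„๋ผ์ธ์ฒ˜๋Ÿผ ์ƒ์œ„ 5๊ฐœ ์˜ˆ์ธก์„ ํ™•์ธํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ๋กœ์ง“์„ ํ™•๋ฅ ๋กœ ๋ณ€ํ™˜ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๋ฐ”๋กœ ์œ„ TensorFlow ์˜ˆ์ œ์˜ `logits`์™€ `model`์ด ๊ทธ๋Œ€๋กœ ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> # convert logits to probabilities and take the top 5 predictions
>>> probs = tf.nn.softmax(logits, axis=-1)[0]
>>> top5 = tf.math.top_k(probs, k=5)
>>> for score, class_id in zip(top5.values.numpy(), top5.indices.numpy()):
...     print(model.config.id2label[int(class_id)], round(float(score), 3))
```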
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/summarization.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์š”์•ฝ[[summarization]] [[open-in-colab]] <Youtube id="yHnr5Dk2zCI"/> ์š”์•ฝ์€ ๋ฌธ์„œ๋‚˜ ๊ธฐ์‚ฌ์—์„œ ์ค‘์š”ํ•œ ์ •๋ณด๋ฅผ ๋ชจ๋‘ ํฌํ•จํ•˜๋˜ ์งง๊ฒŒ ๋งŒ๋“œ๋Š” ์ผ์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ์ž‘์—… ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์š”์•ฝ์—๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ถ”์ถœ(Extractive) ์š”์•ฝ: ๋ฌธ์„œ์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ(Abstractive) ์š”์•ฝ: ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ํฌ์ฐฉํ•ด๋‚ด๋Š” ์ƒˆ๋กœ์šด ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ์ƒ์„ฑ ์š”์•ฝ์„ ์œ„ํ•œ [BillSum](https://huggingface.co/datasets/billsum) ๋ฐ์ดํ„ฐ์…‹ ์ค‘ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ [T5](https://huggingface.co/t5-small)๋ฅผ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate rouge_score ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. 
```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## BillSum ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-billsum-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ BillSum ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ฒ„์ „์ธ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> billsum = load_dataset("billsum", split="ca_test") ``` [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ์ดํ„ฐ์…‹์„ ํ•™์Šต์šฉ์™€ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> billsum = billsum.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> billsum["train"][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. 
Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employeeโ€™s or dependentโ€™s actual or perceived gender identity, including, but not limited to, the employeeโ€™s or dependentโ€™s identification as transgender.\n(2) For purposes of this section, โ€œcontractโ€ includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractorโ€™s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractorโ€™s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractorโ€™s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining 
the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ ๋‘ ๊ฐœ์˜ ํ•„๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: - `text`: ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋  ๋ฒ•์•ˆ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - `summary`: `text`์˜ ๊ฐ„๋žตํ•œ ๋ฒ„์ „์œผ๋กœ ๋ชจ๋ธ์˜ ํƒ€๊ฒŸ์ด ๋ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ `text`์™€ `summary`๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> checkpoint = "t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์กฐ๊ฑด์„ ๋งŒ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ถ™์—ฌ T5๊ฐ€ ์š”์•ฝ ์ž‘์—…์ž„์„ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ NLP ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ ˆ์ด๋ธ”์„ ํ† ํฐํ™”ํ•  ๋•Œ `text_target` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 3. 
`max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด๋ฅผ ๋„˜์ง€ ์•Š๋„๋ก ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ```py >>> prefix = "summarize: " >>> def preprocess_function(examples): ... inputs = [prefix + doc for doc in examples["text"]] ... model_inputs = tokenizer(inputs, max_length=1024, truncation=True) ... labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) ... model_inputs["labels"] = labels["input_ids"] ... return model_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tokenized_billsum = billsum.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋ฐฐ์น˜๋งˆ๋‹ค ๊ฐ€์žฅ ๊ธด ๋ฌธ์žฅ ๊ธธ์ด์— ๋งž์ถฐ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ•™์Šต ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.) ```py >>> import evaluate >>> rouge = evaluate.load("rouge") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ROUGE ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] ... result["gen_len"] = np.mean(prediction_lens) ... return {k: round(v, 4) for k, v in result.items()} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ•™์Šต์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ•™์Šต[[train]] <frameworkcontent> <pt> <Tip> ๋ชจ๋ธ์„ [`Trainer`]๋กœ ํŒŒ์ธํŠœ๋‹ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. 
[`Seq2SeqTrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค ROUGE ์ง€ํ‘œ๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ•™์Šต ์ธ์ˆ˜๋ฅผ [`Seq2SeqTrainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_billsum_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=4, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_billsum["train"], ... eval_dataset=tokenized_billsum["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ Hub์— ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ์ ์ธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ €, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๊ทธ๋ฆฌ๊ณ  ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSeq2SeqLM`]์„ ์‚ฌ์šฉํ•˜์—ฌ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_billsum["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_billsum["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ตฌ์„ฑํ•˜์„ธ์š”: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ROUGE ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‘ ์ž‘์—… ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)์œผ๋กœ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
[`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_billsum_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ๋ฒˆ๋“ค๋กœ ๋ฌถ์–ด์ค๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ•™์Šต ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์š”์•ฝ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋ฅผ ๋ณด๋ ค๋ฉด [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์š”์•ฝํ•  ํ…์ŠคํŠธ๋ฅผ ์ž‘์„ฑํ•ด๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์ž‘์—…์— ๋”ฐ๋ผ ์ž…๋ ฅ ์•ž์— ์ ‘๋‘์‚ฌ๋ฅผ ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์˜ ๊ฒฝ์šฐ, ์•„๋ž˜์™€ ๊ฐ™์€ ์ ‘๋‘์‚ฌ๋ฅผ ์ž…๋ ฅ ์•ž์— ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes." ``` ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์š”์•ฝ์„ ์ˆ˜ํ–‰ํ•  [`pipeline`]์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") >>> summarizer(text) [{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. 
It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}] ``` ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ [`pipeline`]์˜ ๊ฒฐ๊ณผ์™€ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~transformers.generation_utils.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </tf> </frameworkcontent>
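
์ƒ์„ฑ๋œ ์š”์•ฝ์˜ ํ’ˆ์งˆ์„ ์ˆ˜์น˜๋กœ ํ™•์ธํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ํ‰๊ฐ€ ์„น์…˜์—์„œ ์‚ฌ์šฉํ•œ ROUGE ์ง€ํ‘œ๋ฅผ ๋‹จ์ผ ์˜ˆ์‹œ์—๋„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„ ์˜ˆ์ œ์˜ `outputs`์™€ `tokenizer`๊ฐ€ ๊ทธ๋Œ€๋กœ ์žˆ๊ณ , ์ฐธ์กฐ ์š”์•ฝ์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ์˜ˆ์‹œ๋ผ๊ณ  ์ „์ œํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
>>> import evaluate

>>> rouge = evaluate.load("rouge")
>>> generated_summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> # the reference summary below is an assumed example for illustration only
>>> reference_summary = "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs."
>>> rouge.compute(predictions=[generated_summary], references=[reference_summary], use_stemmer=True)
```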
0
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/translation.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฒˆ์—ญ[[translation]] [[open-in-colab]] <Youtube id="1JvfrvZgi6c"/> ๋ฒˆ์—ญ์€ ํ•œ ์–ธ์–ด๋กœ ๋œ ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ์€ ์ž…๋ ฅ์„ ๋ฐ›์•„ ์ผ๋ จ์˜ ์ถœ๋ ฅ์„ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์ธ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ ์‹œ์Šคํ…œ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋œ ํ…์ŠคํŠธ ๊ฐ„์˜ ๋ฒˆ์—ญ์— ์‚ฌ์šฉ๋˜์ง€๋งŒ, ์Œ์„ฑ ๊ฐ„์˜ ํ†ต์—ญ์ด๋‚˜ ํ…์ŠคํŠธ-์Œ์„ฑ ๋˜๋Š” ์Œ์„ฑ-ํ…์ŠคํŠธ์™€ ๊ฐ™์€ ์กฐํ•ฉ์—๋„ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. ์˜์–ด ํ…์ŠคํŠธ๋ฅผ ํ”„๋ž‘์Šค์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด [T5](https://huggingface.co/t5-small) ๋ชจ๋ธ์„ OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. <Tip> ์ด ํƒœ์Šคํฌ ๊ฐ€์ด๋“œ๋Š” ์•„๋ž˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—๋„ ์‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate sacrebleu ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์ฐฝ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-opus-books-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [OPUS Books](https://huggingface.co/datasets/opus_books) ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. 
```py >>> from datasets import load_dataset >>> books = load_dataset("opus_books", "en-fr") ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”. ```py >>> books = books["train"].train_test_split(test_size=0.2) ``` ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณผ๊นŒ์š”? ```py >>> books["train"][0] {'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'fr': 'Mais ce plateau รฉlevรฉ ne mesurait que quelques toises, et bientรดt nous fรปmes rentrรฉs dans notre รฉlรฉment.'}} ``` ๋ฐ˜ํ™˜๋œ ๋”•์…”๋„ˆ๋ฆฌ์˜ `translation` ํ‚ค๊ฐ€ ํ…์ŠคํŠธ์˜ ์˜์–ด, ํ”„๋ž‘์Šค์–ด ๋ฒ„์ „์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="XAR8jnZZuUs"/> ๋‹ค์Œ ๋‹จ๊ณ„๋กœ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ์Œ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ```py >>> from transformers import AutoTokenizer >>> checkpoint = "t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` ๋งŒ๋“ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์š”๊ตฌ์‚ฌํ•ญ์„ ์ถฉ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. T5๊ฐ€ ๋ฒˆ์—ญ ํƒœ์Šคํฌ์ž„์„ ์ธ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ์—ฌ๋Ÿฌ NLP ํƒœ์Šคํฌ๋ฅผ ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ ์ค‘ ์ผ๋ถ€๋Š” ์ด๋ ‡๊ฒŒ ํƒœ์Šคํฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ฏธ๋ฆฌ ์ค˜์•ผํ•ฉ๋‹ˆ๋‹ค. 2. ์›์–ด(์˜์–ด)๊ณผ ๋ฒˆ์—ญ์–ด(ํ”„๋ž‘์Šค์–ด)๋ฅผ ๋ณ„๋„๋กœ ํ† ํฐํ™”ํ•˜์„ธ์š”. ์˜์–ด ์–ดํœ˜๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋กœ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•  ์ˆ˜๋Š” ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. 3. `max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ truncateํ•˜์„ธ์š”. ```py >>> source_lang = "en" >>> target_lang = "fr" >>> prefix = "translate English to French: " >>> def preprocess_function(examples): ... inputs = [prefix + example[source_lang] for example in examples["translation"]] ... targets = [example[target_lang] for example in examples["translation"]] ... model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) ... return model_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ด๋ ค๋ฉด `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tokenized_books = books.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ „๋ถ€๋ฅผ paddingํ•˜๋Š” ๋Œ€์‹ , ๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ padding*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evalulate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•(evaluation method)์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ํƒœ์Šคํฌ์— ์ ํ•ฉํ•œ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. 
(๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> metric = evaluate.load("sacrebleu") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`~evaluate.EvaluationModule.compute`]์— ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ SacreBLEU ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> import numpy as np >>> def postprocess_text(preds, labels): ... preds = [pred.strip() for pred in preds] ... labels = [[label.strip()] for label in labels] ... return preds, labels >>> def compute_metrics(eval_preds): ... preds, labels = eval_preds ... if isinstance(preds, tuple): ... preds = preds[0] ... decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) ... labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) ... result = metric.compute(predictions=decoded_preds, references=decoded_labels) ... result = {"bleu": result["score"]} ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] ... result["gen_len"] = np.mean(prediction_lens) ... result = {k: round(v, 4) for k, v in result.items()} ... return result ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ๊ตฐ์š”! [`AutoModelForSeq2SeqLM`]์œผ๋กœ T5๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`Seq2SeqTrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜์ธ `output_dir`์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ์—ํญ์ด ๋๋‚ ๋•Œ๋งˆ๋‹ค SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Seq2SeqTrainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, data collator ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋„ ๋ฉ๋‹ฌ์•„ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_opus_books_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=2, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_books["train"], ... eval_dataset=tokenized_books["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ```` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.push_to_hub`] ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”. 
์ด๋Ÿฌ๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด ์šฐ์„  optimizer ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋“ฑ์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ์ด์ œ [`TFAutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]๋กœ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_books["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_books["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก๊ฐ’์œผ๋กœ๋ถ€ํ„ฐ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ• ๋‘ ๊ฐ€์ง€๋ฅผ ๋ฏธ๋ฆฌ ์„ค์ •ํ•ด๋‘ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค [Keras callbacks](../main_classes/keras_callbacks)๋กœ ๊ตฌํ˜„ํ•˜์„ธ์š”. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_opus_books_model", ... tokenizer=tokenizer, ... ) ``` ์ด์ œ ์ฝœ๋ฐฑ๋“ค์„ ํ•œ๋ฐ๋กœ ๋ฌถ์–ด์ฃผ์„ธ์š”: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋ชจ๋“  ์ค€๋น„๋ฅผ ๋งˆ์ณค๊ตฐ์š”! ์ด์ œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์„œ๋“œ๋ฅผ ์—ํญ ์ˆ˜์™€ ๋งŒ๋“ค์–ด๋‘” ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜๊ณ , ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹น [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ณ  ์‹ถ์€ ํ…์ŠคํŠธ๋ฅผ ์จ๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์›ํ•˜๋Š” ํƒœ์Šคํฌ๋ฅผ ์ž…๋ ฅ์˜ ์ ‘๋‘์‚ฌ๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
For example, to translate from English to French, the prefix looks like this:

```py
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a translation `pipeline` with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> translator = pipeline("translation", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```

You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and the parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```
</pt>
<tf>
Tokenize the text and return the `input_ids` as TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and the parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
```
</tf>
</frameworkcontent>
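The sampling parameters above (`do_sample`, `top_k`, `top_p`) produce a slightly different translation on every run. If you prefer more deterministic output, one of the strategies covered in the [Text Generation](../main_classes/text_generation) API is beam search. The snippet below is a minimal PyTorch sketch, not part of the original recipe: it reuses the `text` variable and the `my_awesome_opus_books_model` checkpoint from this guide, and the exact output still depends on your finetuned weights.

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")

>>> inputs = tokenizer(text, return_tensors="pt").input_ids
>>> # Beam search keeps the 4 most promising partial translations at each step
>>> # instead of sampling, so repeated calls return the same result.
>>> outputs = model.generate(inputs, max_new_tokens=40, num_beams=4, early_stopping=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```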
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ms/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

State-of-the-art Machine Learning for [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), and [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.<br>
🖼️ **Computer Vision**: image classification, object detection, and segmentation.<br>
🗣️ **Audio**: automatic speech recognition and audio classification.<br>
🙏 **Multimodal**: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

🤗 Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life; train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX.

Join the growing community on the [Hub](https://huggingface.co/models), [forum](https://discuss.huggingface.co/), or [Discord](https://discord.com/invite/JfAtkvEtRb) today!

## If you are looking for custom support from the Hugging Face team

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

The documentation is organized into five sections:

- **GET STARTED** provides a quick tour of the library and installation instructions to get up and running.
- **TUTORIALS** are a great place to start if you're a beginner. This section will help you gain the basic skills you need to start using the library.
- **HOW-TO GUIDES** show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or how to write and share a custom model.
- **CONCEPTUAL GUIDES** offer more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
- **API** describes all classes and functions:

  - **MAIN CLASSES** details the most important classes like configuration, model, tokenizer, and pipeline.
  - **MODELS** details the classes and functions related to each model implemented in the library.
  - **INTERNAL HELPERS** details utility classes and functions used internally.

### Supported models

<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->

1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
1. **[Autoformer](model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1.
**[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLIP-2](model_doc/blip-2)** (from Salesforce) released with the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. 
**[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 1. **[CLAP](model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. 
**[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CPM-Ant](model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/). 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DePlot](model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. 1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krรคhenbรผhl. 1. 
**[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. 
**[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ErnieM](model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FLAN-UL2](model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. 
**[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[FocalNet](model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren. 1. 
**[GPTBigCode](model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo Garcรญa del Rรญo, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Informer](model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. 
**[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[LLaMA](model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothรฉe Lacroix, Baptiste Roziรจre, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. 
**[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[MatCha](model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[MEGA](model_doc/mega)** (from Meta/USC/CMU/SJTU) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[MGP-STR](model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao. 1. 
**[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 1. **[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[NLLB-MOE](model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. 
**[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OpenLlama](model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released in [Open-Llama](https://github.com/s-JoL/Open-Llama). 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[Pix2Struct](model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. 
**[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[RWKV](model_doc/rwkv)** (from Bo Peng), released on [this repo](https://github.com/BlinkDL/RWKV-LM) by Bo Peng. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. 
**[Segment Anything](model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. **[SwiftFormer](model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. 1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[Swin2SR](model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. 1. 
**[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[TVLT](model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. 
**[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. 
**[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[X-MOD](model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 1. 
**[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. **[XLM-V](model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa. 1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### Rangka kerja yang disokong Jadual di bawah menunjukkan sokongan semasa dalam perpustakaan untuk setiap model: sama ada model itu mempunyai tokenizer Python (dipanggil "lambat"), tokenizer "pantas" yang disokong oleh perpustakaan 🤗 Tokenizers, serta sama ada ia mempunyai sokongan dalam Jax (melalui Flax), PyTorch, dan/atau TensorFlow. <!--Jadual ini dikemas kini secara automatik daripada modul auto dengan _make fix-copies_.
Jangan kemas kini secara manual!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | ALIGN | โŒ | โŒ | โœ… | โŒ | โŒ | | AltCLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | Audio Spectrogram Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Autoformer | โŒ | โŒ | โœ… | โŒ | โŒ | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | BioGpt | โœ… | โŒ | โœ… | โŒ | โŒ | | BiT | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLIP | โŒ | โŒ | โœ… | โœ… | โŒ | | BLIP-2 | โŒ | โŒ | โœ… | โŒ | โŒ | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | BridgeTower | โŒ | โŒ | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | Chinese-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | CLAP | โŒ | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CLIPSeg | โŒ | โŒ | โœ… | โŒ | โŒ | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | Conditional DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | ConvNeXTV2 | โŒ | โŒ | โœ… | โŒ | โŒ | | CPM-Ant | โœ… | โŒ | โœ… | โŒ | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Deformable DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETA | โŒ | โŒ | โœ… | โŒ | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DiNAT | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DonutSwin | โŒ | โŒ | โœ… | โŒ | โŒ | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | EfficientFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | EfficientNet | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | ERNIE | โŒ | โŒ | โœ… | โŒ | โŒ | | ErnieM | โœ… | โŒ | โœ… | โŒ | โŒ | | ESM | โœ… | โŒ | โœ… | โœ… | โŒ | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | FocalNet | โŒ | โŒ | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GIT | โŒ | โŒ | โœ… | โŒ | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT NeoX Japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GPT-Sw3 | โœ… | โœ… | โœ… | โœ… | โœ… | | GPTBigCode | โŒ | โŒ | โœ… | โŒ | โŒ | | GPTSAN-japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | Graphormer | โŒ | โŒ | โœ… | โŒ | โŒ | | GroupViT | โŒ | โŒ | โœ… | โœ… | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Informer | โŒ | โŒ 
| โœ… | โŒ | โŒ | | Jukebox | โœ… | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | LiLT | โŒ | โŒ | โœ… | โŒ | โŒ | | LLaMA | โœ… | โœ… | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MarkupLM | โœ… | โœ… | โœ… | โŒ | โŒ | | Mask2Former | โŒ | โŒ | โœ… | โŒ | โŒ | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | MaskFormerSwin | โŒ | โŒ | โŒ | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | MEGA | โŒ | โŒ | โœ… | โŒ | โŒ | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MGP-STR | โœ… | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileNetV1 | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileNetV2 | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileViT | โŒ | โŒ | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | NAT | โŒ | โŒ | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | NLLB-MOE | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OneFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OpenLlama | โŒ | โŒ | โœ… | โŒ | โŒ | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | PEGASUS-X | โŒ | โŒ | โœ… | โŒ | โŒ | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | Pix2Struct | โŒ | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoBERTa-PreLayerNorm | โŒ | โŒ | โœ… | โœ… | โœ… | | RoCBert | โœ… | โŒ | โœ… | โŒ | โŒ | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | RWKV | โŒ | โŒ | โœ… | โŒ | โŒ | | SAM | โŒ | โŒ | โœ… | โœ… | โŒ | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | SpeechT5 | โœ… | โŒ | โœ… | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | SwiftFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | Swin2SR | โŒ | โŒ | โœ… | โŒ | โŒ | | SwitchTransformers | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Table Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Time Series Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TimeSformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ 
| | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | TVLT | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | UPerNet | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViT Hybrid | โŒ | โŒ | โœ… | โŒ | โŒ | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | ViTMSN | โŒ | โŒ | โœ… | โŒ | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | Whisper | โœ… | โœ… | โœ… | โœ… | โœ… | | X-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | X-MOD | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- Tamat -->
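Sebagai ilustrasi ringkas (bukan sebahagian daripada jadual asal; hanya lakaran andaian yang menganggap `torch`, `tensorflow` dan `flax` telah dipasang), contoh berikut menunjukkan maksud lajur-lajur di atas untuk model BERT:

```py
from transformers import AutoModel, TFAutoModel, FlaxAutoModel
from transformers import BertTokenizer, BertTokenizerFast

# BERT mempunyai ✅ untuk PyTorch, TensorFlow dan Flax dalam jadual di atas,
# jadi checkpoint yang sama boleh dimuatkan dalam ketiga-tiga rangka kerja.
pt_model = AutoModel.from_pretrained("bert-base-uncased")        # PyTorch
tf_model = TFAutoModel.from_pretrained("bert-base-uncased")      # TensorFlow
flax_model = FlaxAutoModel.from_pretrained("bert-base-uncased")  # Flax

# Lajur tokenizer: BertTokenizer ialah tokenizer "lambat" (Python tulen),
# manakala BertTokenizerFast disokong oleh perpustakaan 🤗 Tokenizers.
slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
print(fast_tokenizer.is_fast)  # True
```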
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ms/_toctree.yml
- sections: - local: index title: 🤗 Transformers - local: quicktour title: Lawatan cepat - local: installation title: Pemasangan title: Mulakan - sections: - local: pipeline_tutorial title: Jalankan inferens dengan saluran paip - local: autoclass_tutorial title: Tulis kod mudah alih dengan AutoClass - local: preprocessing title: Praproses data - local: training title: Perhalusi model yang telah dilatih - local: run_scripts title: Latih dengan skrip - local: accelerate title: Sediakan latihan yang diedarkan dengan 🤗 Accelerate - local: model_sharing title: Kongsi model anda - local: transformers_agents title: Ejen title: Tutorials - sections: - sections: - local: tasks/sequence_classification title: Klasifikasi teks - local: tasks/token_classification title: Klasifikasi token - local: tasks/question_answering title: Soalan menjawab - local: tasks/language_modeling title: Pemodelan bahasa sebab-akibat - local: tasks/masked_language_modeling title: Pemodelan bahasa Masked - local: tasks/translation title: Terjemahan - local: tasks/summarization title: Rumusan - local: tasks/multiple_choice title: Pilihan title: Natural Language Processing isExpanded: false - sections: - local: tasks/audio_classification title: Klasifikasi audio - local: tasks/asr title: Pengecaman pertuturan automatik title: Audio isExpanded: false - sections: - local: tasks/image_classification title: Klasifikasi imej - local: tasks/semantic_segmentation title: Segmentasi semantik - local: tasks/video_classification title: Klasifikasi video - local: tasks/object_detection title: Pengesanan objek - local: tasks/zero_shot_object_detection title: Pengesanan objek Zero-Shot - local: tasks/zero_shot_image_classification title: Klasifikasi imej tangkapan Zero-Shot - local: tasks/monocular_depth_estimation title: Anggaran kedalaman title: Visi komputer isExpanded: false - sections: - local: tasks/image_captioning title: Kapsyen imej - local: tasks/document_question_answering title: Menjawab Soalan Dokumen - local: tasks/text-to-speech title: Teks kepada ucapan title: Multimodal isExpanded: false title: Panduan Tugasan - sections: - local: fast_tokenizers title: Gunakan tokenizer cepat dari 🤗 Tokenizers - local: multilingual title: Jalankan inferens dengan model berbilang bahasa - local: generation_strategies title: Sesuaikan strategi penjanaan teks - local: create_a_model title: Gunakan API khusus model - local: custom_models title: Kongsi model tersuai - local: sagemaker title: Jalankan latihan di Amazon SageMaker - local: serialization title: Eksport ke ONNX - local: torchscript title: Eksport ke TorchScript - local: benchmarks title: Penanda aras - local: notebooks title: Buku nota dengan contoh - local: community title: Sumber komuniti - local: custom_tools title: Alat dan Gesaan Tersuai - local: troubleshooting title: Selesaikan masalah title: Panduan Developer - sections: - local: performance title: Gambaran keseluruhan - local: perf_train_gpu_one title: Latihan pada satu GPU - local: perf_train_gpu_many title: Latihan pada banyak GPU - local: perf_train_cpu title: Latihan mengenai CPU - local: perf_train_cpu_many title: Latihan pada banyak CPU - local: perf_train_tpu title: Latihan mengenai TPU - local: perf_train_tpu_tf title: Latihan tentang TPU dengan TensorFlow - local: perf_train_special title: Latihan mengenai Perkakasan Khusus - local: perf_infer_cpu title: Inferens pada CPU - local: perf_infer_gpu_one title: Inferens pada satu GPU - local: perf_infer_gpu_many title:
Inferens pada banyak GPUs - local: perf_infer_special title: Inferens pada Perkakasan Khusus - local: perf_hardware title: Perkakasan tersuai untuk latihan - local: big_models title: Menghidupkan model besar - local: debugging title: Penyahpepijatan - local: hpo_train title: Carian Hiperparameter menggunakan API Pelatih - local: tf_xla title: Penyepaduan XLA untuk Model TensorFlow title: Prestasi dan kebolehskalaan - sections: - local: contributing title: Bagaimana untuk menyumbang kepada transformer? - local: add_new_model title: Bagaimana untuk menambah model pada ๐Ÿค— Transformers? - local: add_tensorflow_model title: Bagaimana untuk menukar model Transformers kepada TensorFlow? - local: add_new_pipeline title: Bagaimana untuk menambah saluran paip ke ๐Ÿค— Transformers? - local: testing title: Ujian - local: pr_checks title: Menyemak Permintaan Tarik title: Sumbangkan - sections: - local: philosophy title: Falsafah - local: glossary title: Glosari - local: task_summary title: Apa ๐Ÿค— Transformers boleh buat - local: tasks_explained title: Bagaimana ๐Ÿค— Transformers menyelesaikan tugasan - local: model_summary title: Keluarga model Transformer - local: tokenizer_summary title: Ringkasan tokenizer - local: attention title: Mekanisme perhatian - local: pad_truncation title: Padding dan pemotongan - local: bertology title: BERTology - local: perplexity title: Kekeliruan model panjang tetap - local: pipeline_webserver title: Saluran paip untuk inferens pelayan web title: Panduan konsep - sections: - sections: - local: main_classes/agent title: Ejen dan Alat - local: model_doc/auto title: Kelas Auto - local: main_classes/callback title: Panggilan balik - local: main_classes/configuration title: Configuration - local: main_classes/data_collator title: Data Collator - local: main_classes/keras_callbacks title: Keras callbacks - local: main_classes/logging title: Logging - local: main_classes/model title: Models - local: main_classes/text_generation title: Text Generation - local: main_classes/onnx title: ONNX - local: main_classes/optimizer_schedules title: Optimization - local: main_classes/output title: Model outputs - local: main_classes/pipelines title: Pipelines - local: main_classes/processors title: Processors - local: main_classes/quantization title: Quantization - local: main_classes/tokenizer title: Tokenizer - local: main_classes/trainer title: Trainer - local: main_classes/deepspeed title: DeepSpeed Integration - local: main_classes/feature_extractor title: Feature Extractor - local: main_classes/image_processor title: Image Processor title: Main Classes - sections: - isExpanded: false sections: - local: model_doc/albert title: ALBERT - local: model_doc/bart title: BART - local: model_doc/barthez title: BARThez - local: model_doc/bartpho title: BARTpho - local: model_doc/bert title: BERT - local: model_doc/bert-generation title: BertGeneration - local: model_doc/bert-japanese title: BertJapanese - local: model_doc/bertweet title: Bertweet - local: model_doc/big_bird title: BigBird - local: model_doc/bigbird_pegasus title: BigBirdPegasus - local: model_doc/biogpt title: BioGpt - local: model_doc/blenderbot title: Blenderbot - local: model_doc/blenderbot-small title: Blenderbot Small - local: model_doc/bloom title: BLOOM - local: model_doc/bort title: BORT - local: model_doc/byt5 title: ByT5 - local: model_doc/camembert title: CamemBERT - local: model_doc/canine title: CANINE - local: model_doc/codegen title: CodeGen - local: model_doc/convbert title: ConvBERT - local: model_doc/cpm 
title: CPM - local: model_doc/cpmant title: CPMANT - local: model_doc/ctrl title: CTRL - local: model_doc/deberta title: DeBERTa - local: model_doc/deberta-v2 title: DeBERTa-v2 - local: model_doc/dialogpt title: DialoGPT - local: model_doc/distilbert title: DistilBERT - local: model_doc/dpr title: DPR - local: model_doc/electra title: ELECTRA - local: model_doc/encoder-decoder title: Encoder Decoder Models - local: model_doc/ernie title: ERNIE - local: model_doc/ernie_m title: ErnieM - local: model_doc/esm title: ESM - local: model_doc/flan-t5 title: FLAN-T5 - local: model_doc/flan-ul2 title: FLAN-UL2 - local: model_doc/flaubert title: FlauBERT - local: model_doc/fnet title: FNet - local: model_doc/fsmt title: FSMT - local: model_doc/funnel title: Funnel Transformer - local: model_doc/openai-gpt title: GPT - local: model_doc/gpt_neo title: GPT Neo - local: model_doc/gpt_neox title: GPT NeoX - local: model_doc/gpt_neox_japanese title: GPT NeoX Japanese - local: model_doc/gptj title: GPT-J - local: model_doc/gpt2 title: GPT2 - local: model_doc/gpt_bigcode title: GPTBigCode - local: model_doc/gptsan-japanese title: GPTSAN Japanese - local: model_doc/gpt-sw3 title: GPTSw3 - local: model_doc/herbert title: HerBERT - local: model_doc/ibert title: I-BERT - local: model_doc/jukebox title: Jukebox - local: model_doc/led title: LED - local: model_doc/llama title: LLaMA - local: model_doc/longformer title: Longformer - local: model_doc/longt5 title: LongT5 - local: model_doc/luke title: LUKE - local: model_doc/m2m_100 title: M2M100 - local: model_doc/marian title: MarianMT - local: model_doc/markuplm title: MarkupLM - local: model_doc/mbart title: MBart and MBart-50 - local: model_doc/mega title: MEGA - local: model_doc/megatron-bert title: MegatronBERT - local: model_doc/megatron_gpt2 title: MegatronGPT2 - local: model_doc/mluke title: mLUKE - local: model_doc/mobilebert title: MobileBERT - local: model_doc/mpnet title: MPNet - local: model_doc/mt5 title: MT5 - local: model_doc/mvp title: MVP - local: model_doc/nezha title: NEZHA - local: model_doc/nllb title: NLLB - local: model_doc/nllb-moe title: NLLB-MoE - local: model_doc/nystromformer title: Nystrรถmformer - local: model_doc/open-llama title: Open-Llama - local: model_doc/opt title: OPT - local: model_doc/pegasus title: Pegasus - local: model_doc/pegasus_x title: PEGASUS-X - local: model_doc/phobert title: PhoBERT - local: model_doc/plbart title: PLBart - local: model_doc/prophetnet title: ProphetNet - local: model_doc/qdqbert title: QDQBert - local: model_doc/rag title: RAG - local: model_doc/realm title: REALM - local: model_doc/reformer title: Reformer - local: model_doc/rembert title: RemBERT - local: model_doc/retribert title: RetriBERT - local: model_doc/roberta title: RoBERTa - local: model_doc/roberta-prelayernorm title: RoBERTa-PreLayerNorm - local: model_doc/roc_bert title: RoCBert - local: model_doc/roformer title: RoFormer - local: model_doc/rwkv title: RWKV - local: model_doc/splinter title: Splinter - local: model_doc/squeezebert title: SqueezeBERT - local: model_doc/switch_transformers title: SwitchTransformers - local: model_doc/t5 title: T5 - local: model_doc/t5v1.1 title: T5v1.1 - local: model_doc/tapex title: TAPEX - local: model_doc/transfo-xl title: Transformer XL - local: model_doc/ul2 title: UL2 - local: model_doc/xmod title: X-MOD - local: model_doc/xglm title: XGLM - local: model_doc/xlm title: XLM - local: model_doc/xlm-prophetnet title: XLM-ProphetNet - local: model_doc/xlm-roberta title: XLM-RoBERTa - local: 
model_doc/xlm-roberta-xl title: XLM-RoBERTa-XL - local: model_doc/xlm-v title: XLM-V - local: model_doc/xlnet title: XLNet - local: model_doc/yoso title: YOSO title: Text models - isExpanded: false sections: - local: model_doc/beit title: BEiT - local: model_doc/bit title: BiT - local: model_doc/conditional_detr title: Conditional DETR - local: model_doc/convnext title: ConvNeXT - local: model_doc/convnextv2 title: ConvNeXTV2 - local: model_doc/cvt title: CvT - local: model_doc/deformable_detr title: Deformable DETR - local: model_doc/deit title: DeiT - local: model_doc/deta title: DETA - local: model_doc/detr title: DETR - local: model_doc/dinat title: DiNAT - local: model_doc/dit title: DiT - local: model_doc/dpt title: DPT - local: model_doc/efficientformer title: EfficientFormer - local: model_doc/efficientnet title: EfficientNet - local: model_doc/focalnet title: FocalNet - local: model_doc/glpn title: GLPN - local: model_doc/imagegpt title: ImageGPT - local: model_doc/levit title: LeViT - local: model_doc/mask2former title: Mask2Former - local: model_doc/maskformer title: MaskFormer - local: model_doc/mobilenet_v1 title: MobileNetV1 - local: model_doc/mobilenet_v2 title: MobileNetV2 - local: model_doc/mobilevit title: MobileViT - local: model_doc/nat title: NAT - local: model_doc/poolformer title: PoolFormer - local: model_doc/regnet title: RegNet - local: model_doc/resnet title: ResNet - local: model_doc/segformer title: SegFormer - local: model_doc/swiftformer title: SwiftFormer - local: model_doc/swin title: Swin Transformer - local: model_doc/swinv2 title: Swin Transformer V2 - local: model_doc/swin2sr title: Swin2SR - local: model_doc/table-transformer title: Table Transformer - local: model_doc/timesformer title: TimeSformer - local: model_doc/upernet title: UperNet - local: model_doc/van title: VAN - local: model_doc/videomae title: VideoMAE - local: model_doc/vit title: Vision Transformer (ViT) - local: model_doc/vit_hybrid title: ViT Hybrid - local: model_doc/vit_mae title: ViTMAE - local: model_doc/vit_msn title: ViTMSN - local: model_doc/yolos title: YOLOS title: Vision models - isExpanded: false sections: - local: model_doc/audio-spectrogram-transformer title: Audio Spectrogram Transformer - local: model_doc/clap title: CLAP - local: model_doc/hubert title: Hubert - local: model_doc/mctct title: MCTCT - local: model_doc/sew title: SEW - local: model_doc/sew-d title: SEW-D - local: model_doc/speech_to_text title: Speech2Text - local: model_doc/speech_to_text_2 title: Speech2Text2 - local: model_doc/speecht5 title: SpeechT5 - local: model_doc/unispeech title: UniSpeech - local: model_doc/unispeech-sat title: UniSpeech-SAT - local: model_doc/wav2vec2 title: Wav2Vec2 - local: model_doc/wav2vec2-conformer title: Wav2Vec2-Conformer - local: model_doc/wav2vec2_phoneme title: Wav2Vec2Phoneme - local: model_doc/wavlm title: WavLM - local: model_doc/whisper title: Whisper - local: model_doc/xls_r title: XLS-R - local: model_doc/xlsr_wav2vec2 title: XLSR-Wav2Vec2 title: Audio models - isExpanded: false sections: - local: model_doc/align title: ALIGN - local: model_doc/altclip title: AltCLIP - local: model_doc/blip title: BLIP - local: model_doc/blip-2 title: BLIP-2 - local: model_doc/bridgetower title: BridgeTower - local: model_doc/chinese_clip title: Chinese-CLIP - local: model_doc/clip title: CLIP - local: model_doc/clipseg title: CLIPSeg - local: model_doc/data2vec title: Data2Vec - local: model_doc/deplot title: DePlot - local: model_doc/donut title: Donut - local: 
model_doc/flava title: FLAVA - local: model_doc/git title: GIT - local: model_doc/groupvit title: GroupViT - local: model_doc/layoutlm title: LayoutLM - local: model_doc/layoutlmv2 title: LayoutLMV2 - local: model_doc/layoutlmv3 title: LayoutLMV3 - local: model_doc/layoutxlm title: LayoutXLM - local: model_doc/lilt title: LiLT - local: model_doc/lxmert title: LXMERT - local: model_doc/matcha title: MatCha - local: model_doc/mgp-str title: MGP-STR - local: model_doc/oneformer title: OneFormer - local: model_doc/owlvit title: OWL-ViT - local: model_doc/perceiver title: Perceiver - local: model_doc/pix2struct title: Pix2Struct - local: model_doc/sam title: Segment Anything - local: model_doc/speech-encoder-decoder title: Speech Encoder Decoder Models - local: model_doc/tapas title: TAPAS - local: model_doc/trocr title: TrOCR - local: model_doc/tvlt title: TVLT - local: model_doc/vilt title: ViLT - local: model_doc/vision-encoder-decoder title: Vision Encoder Decoder Models - local: model_doc/vision-text-dual-encoder title: Vision Text Dual Encoder - local: model_doc/visual_bert title: VisualBERT - local: model_doc/xclip title: X-CLIP title: Multimodal models - isExpanded: false sections: - local: model_doc/decision_transformer title: Decision Transformer - local: model_doc/trajectory_transformer title: Trajectory Transformer title: Reinforcement learning models - isExpanded: false sections: - local: model_doc/informer title: Informer - local: model_doc/time_series_transformer title: Time Series Transformer title: Time series models - isExpanded: false sections: - local: model_doc/graphormer title: Graphormer title: Graph models title: Models - sections: - local: internal/modeling_utils title: Custom Layers and Utilities - local: internal/pipelines_utils title: Utilities for pipelines - local: internal/tokenization_utils title: Utilities for Tokenizers - local: internal/trainer_utils title: Utilities for Trainer - local: internal/generation_utils title: Utilities for Generation - local: internal/image_processing_utils title: Utilities for Image Processors - local: internal/audio_utils title: Utilities for Audio processing - local: internal/file_utils title: General Utilities - local: internal/time_series_utils title: Utilities for Time Series title: Internal Helpers title: API
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Visite rapide [[open-in-colab]] Soyez opรฉrationnel avec ๐Ÿค— Transformers ! Que vous soyez un dรฉveloppeur ou un utilisateur lambda, cette visite rapide vous aidera ร  dรฉmarrer et vous montrera comment utiliser le [`pipeline`] pour l'infรฉrence, charger un modรจle prรฉ-entraรฎnรฉ et un prรฉprocesseur avec une [AutoClass](./model_doc/auto), et entraรฎner rapidement un modรจle avec PyTorch ou TensorFlow. Si vous รชtes un dรฉbutant, nous vous recommandons de consulter nos tutoriels ou notre [cours](https://huggingface.co/course/chapter1/1) suivant pour des explications plus approfondies des concepts prรฉsentรฉs ici. Avant de commencer, assurez-vous que vous avez installรฉ toutes les bibliothรจques nรฉcessaires : ```bash !pip install transformers datasets ``` Vous aurez aussi besoin d'installer votre bibliothรจque d'apprentissage profond favorite : <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## Pipeline <Youtube id="tiZFewofSLM"/> Le [`pipeline`] est le moyen le plus simple d'utiliser un modรจle prรฉ-entraรฎnรฉ pour l'infรฉrence. Vous pouvez utiliser le [`pipeline`] prรชt ร  l'emploi pour de nombreuses tรขches dans diffรฉrentes modalitรฉs. Consultez le tableau ci-dessous pour connaรฎtre les tรขches prises en charge : | **Tรขche** | **Description** | **Modalitรฉ** | **Identifiant du pipeline** | |------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------|-----------------------------------------------| | Classification de texte | Attribue une catรฉgorie ร  une sรฉquence de texte donnรฉe | Texte | pipeline(task="sentiment-analysis") | | Gรฉnรฉration de texte | Gรฉnรจre du texte ร  partir d'une consigne donnรฉe | Texte | pipeline(task="text-generation") | | Reconnaissance de token nommรฉ | Attribue une catรฉgorie ร  chaque token dans une sรฉquence (personnes, organisation, localisation, etc.) 
| Texte | pipeline(task="ner") | | Question rรฉponse | Extrait une rรฉponse du texte en fonction du contexte et d'une question | Texte | pipeline(task="question-answering") | | Prรฉdiction de token masquรฉ | Prรฉdit correctement le token masquรฉ dans une sรฉquence | Texte | pipeline(task="fill-mask") | | Gรฉnรฉration de rรฉsumรฉ | Gรฉnรจre un rรฉsumรฉ d'une sรฉquence de texte donnรฉe ou d'un document | Texte | pipeline(task="summarization") | | Traduction | Traduit du texte d'un langage ร  un autre | Texte | pipeline(task="translation") | | Classification d'image | Attribue une catรฉgorie ร  une image | Image | pipeline(task="image-classification") | | Segmentation d'image | Attribue une catรฉgorie ร  chaque pixel d'une image (supporte la segmentation sรฉmantique, panoptique et d'instance) | Image | pipeline(task="image-segmentation") | | Dรฉtection d'objects | Prรฉdit les dรฉlimitations et catรฉgories d'objects dans une image | Image | pipeline(task="object-detection") | | Classification d'audio | Attribue une catรฉgorie ร  un fichier audio | Audio | pipeline(task="audio-classification") | | Reconnaissance automatique de la parole | Extrait le discours d'un fichier audio en texte | Audio | pipeline(task="automatic-speech-recognition") | | Question rรฉponse visuels | Etant donnรฉes une image et une question, rรฉpond correctement ร  une question sur l'image | Modalitรฉs multiples | pipeline(task="vqa") | Commencez par crรฉer une instance de [`pipeline`] et spรฉcifiez la tรขche pour laquelle vous souhaitez l'utiliser. Vous pouvez utiliser le [`pipeline`] pour n'importe laquelle des tรขches mentionnรฉes dans le tableau prรฉcรฉdent. Pour obtenir une liste complรจte des tรขches prises en charge, consultez la documentation de l'[API pipeline](./main_classes/pipelines). Dans ce guide, nous utiliserons le [`pipeline`] pour l'analyse des sentiments ร  titre d'exemple : ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Le [`pipeline`] tรฉlรฉcharge et stocke en cache un [modรจle prรฉ-entraรฎnรฉ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) et un tokenizer par dรฉfaut pour l'analyse des sentiments. Vous pouvez maintenant utiliser le `classifier` sur le texte de votre choix : ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` Si vous voulez classifier plus qu'un texte, donnez une liste de textes au [`pipeline`] pour obtenir une liste de dictionnaires en retour : ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, avec le score de: {round(result['score'], 4)}") label: POSITIVE, avec le score de: 0.9998 label: NEGATIVE, avec le score de: 0.5309 ``` Le [`pipeline`] peut aussi itรฉrer sur un jeu de donnรฉes entier pour n'importe quelle tรขche. Prenons par exemple la reconnaissance automatique de la parole : ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Chargez un jeu de donnรฉes audio (voir le ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) pour plus de dรฉtails) sur lequel vous souhaitez itรฉrer. 
Pour cet exemple, nous chargeons le jeu de données [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) : ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` Vous devez vous assurer que le taux d'échantillonnage de l'ensemble de données correspond au taux d'échantillonnage sur lequel [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) a été entraîné : ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Les fichiers audio sont automatiquement chargés et rééchantillonnés lors de l'appel de la colonne `"audio"`. Extrayez les tableaux de formes d'ondes brutes des quatre premiers échantillons et passez-les comme une liste au pipeline : ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Pour les ensembles de données plus importants où les entrées sont volumineuses (comme dans les domaines de la parole ou de la vision), utilisez plutôt un générateur au lieu d'une liste afin d'éviter de charger toutes les entrées en mémoire. Pour plus d'informations, consultez la documentation de l'[API pipeline](./main_classes/pipelines). ### Utiliser un autre modèle et tokenizer dans le pipeline Le [`pipeline`] peut être utilisé avec n'importe quel modèle du [Hub](https://huggingface.co/models), ce qui permet d'adapter facilement le [`pipeline`] à d'autres cas d'utilisation. Par exemple, si vous souhaitez un modèle capable de traiter du texte français, utilisez les filtres du Hub pour trouver un modèle approprié.
Le premier rรฉsultat renvoie un [modรจle BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue finetunรฉ pour l'analyse des sentiments que vous pouvez utiliser pour le texte franรงais : ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Utilisez [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `AutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Utilisez [`TFAutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `TFAutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Specifiez le modรจle et le tokenizer dans le [`pipeline`], et utilisez le `classifier` sur le texte en franรงais : ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Si vous ne parvenez pas ร  trouver un modรจle adaptรฉ ร  votre cas d'utilisation, vous devrez finetuner un modรจle prรฉ-entraรฎnรฉ sur vos donnรฉes. Jetez un coup d'ล“il ร  notre [tutoriel sur le finetuning](./training) pour apprendre comment faire. Enfin, aprรจs avoir finetunรฉ votre modรจle prรฉ-entraรฎnรฉ, pensez ร  [partager](./model_sharing) le modรจle avec la communautรฉ sur le Hub afin de dรฉmocratiser l'apprentissage automatique pour tous ! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Les classes [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] fonctionnent ensemble pour crรฉer un [`pipeline`] comme celui que vous avez utilisรฉ ci-dessus. Une [AutoClass](./model_doc/auto) est un raccourci qui rรฉcupรจre automatiquement l'architecture d'un modรจle prรฉ-entraรฎnรฉ ร  partir de son nom ou de son emplacement. Il vous suffit de sรฉlectionner l'`AutoClass` appropriรฉe ร  votre tรขche et la classe de prรฉtraitement qui lui est associรฉe. Reprenons l'exemple de la section prรฉcรฉdente et voyons comment vous pouvez utiliser l'`AutoClass` pour reproduire les rรฉsultats du [`pipeline`]. ### AutoTokenizer Un tokenizer est chargรฉ de prรฉtraiter le texte pour en faire un tableau de chiffres qui servira d'entrรฉe ร  un modรจle. De nombreuses rรจgles rรฉgissent le processus de tokenisation, notamment la maniรจre de diviser un mot et le niveau auquel les mots doivent รชtre divisรฉs (pour en savoir plus sur la tokenisation, consultez le [rรฉsumรฉ](./tokenizer_summary)). La chose la plus importante ร  retenir est que vous devez instancier un tokenizer avec le mรชme nom de modรจle pour vous assurer que vous utilisez les mรชmes rรจgles de tokenisation que celles avec lesquelles un modรจle a รฉtรฉ prรฉ-entraรฎnรฉ. 
Chargez un tokenizer avec [`AutoTokenizer`] : ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Passez votre texte au tokenizer : ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Le tokenizer retourne un dictionnaire contenant : * [input_ids](./glossary#input-ids): la reprรฉsentation numรฉrique des tokens. * [attention_mask](.glossary#attention-mask): indique quels tokens doivent faire l'objet d'une attention particuliรจre (plus particuliรจrement les tokens de remplissage). Un tokenizer peut รฉgalement accepter une liste de textes, et remplir et tronquer le texte pour retourner un รฉchantillon de longueur uniforme : <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> Consultez le tutoriel [prรฉtraitement](./preprocessing) pour plus de dรฉtails sur la tokenisation, et sur la maniรจre d'utiliser un [`AutoImageProcessor`], un [`AutoFeatureExtractor`] et un [`AutoProcessor`] pour prรฉtraiter les images, l'audio et les contenus multimodaux. </Tip> ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉes. Cela signifie que vous pouvez charger un [`AutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner l'[`AutoModel`] appropriรฉ pour la tรขche. Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`AutoModelForSequenceClassification`] : ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Maintenant, passez votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle. Il vous suffit de dรฉcompresser le dictionnaire en ajoutant `**` : ```py >>> pt_outputs = pt_model(**pt_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉs. Cela signifie que vous pouvez charger un [`TFAutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner le [`TFAutoModel`] appropriรฉ pour la tรขche. 
Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`TFAutoModelForSequenceClassification`] : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Passez maintenant votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle en passant les clรฉs du dictionnaire directement aux tensors : ```py >>> tf_outputs = tf_model(tf_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tous les modรจles ๐Ÿค— Transformers (PyTorch ou TensorFlow) produisent les tensors *avant* la fonction d'activation finale (comme softmax) car la fonction d'activation finale est souvent fusionnรฉe avec le calcul de la perte. Les structures produites par le modรจle sont des classes de donnรฉes spรฉciales, de sorte que leurs attributs sont autocomplรฉtรฉs dans un environnement de dรฉveloppement. Les structures produites par le modรจle se comportent comme un tuple ou un dictionnaire (vous pouvez les indexer avec un entier, une tranche ou une chaรฎne), auquel cas les attributs qui sont None sont ignorรฉs. </Tip> ### Sauvegarder un modรจle <frameworkcontent> <pt> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`PreTrainedModel.save_pretrained`] : ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`PreTrainedModel.from_pretrained`] : ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`TFPreTrainedModel.save_pretrained`] : ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`TFPreTrainedModel.from_pretrained`] : ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Une fonctionnalitรฉ particuliรจrement cool ๐Ÿค— Transformers est la possibilitรฉ d'enregistrer un modรจle et de le recharger en tant que modรจle PyTorch ou TensorFlow. 
Le paramรจtre `from_pt` ou `from_tf` permet de convertir le modรจle d'un framework ร  l'autre : <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Constructions de modรจles personnalisรฉs Vous pouvez modifier la configuration du modรจle pour changer la faรงon dont un modรจle est construit. La configuration spรฉcifie les attributs d'un modรจle, tels que le nombre de couches ou de tรชtes d'attention. Vous partez de zรฉro lorsque vous initialisez un modรจle ร  partir d'une configuration personnalisรฉe. Les attributs du modรจle sont initialisรฉs de maniรจre alรฉatoire et vous devrez entraรฎner le modรจle avant de pouvoir l'utiliser pour obtenir des rรฉsultats significatifs. Commencez par importer [`AutoConfig`], puis chargez le modรจle prรฉ-entraรฎnรฉ que vous voulez modifier. Dans [`AutoConfig.from_pretrained`], vous pouvez spรฉcifier l'attribut que vous souhaitez modifier, tel que le nombre de tรชtes d'attention : ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`AutoModel.from_config`] : ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`TFAutoModel.from_config`] : ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Consultez le guide [Crรฉer une architecture personnalisรฉe](./create_a_model) pour plus d'informations sur la crรฉation de configurations personnalisรฉes. ## Trainer - une boucle d'entraรฎnement optimisรฉe par PyTorch Tous les modรจles sont des [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) standard, vous pouvez donc les utiliser dans n'importe quelle boucle d'entraรฎnement typique. Bien que vous puissiez รฉcrire votre propre boucle d'entraรฎnement, ๐Ÿค— Transformers fournit une classe [`Trainer`] pour PyTorch, qui contient la boucle d'entraรฎnement de base et ajoute des fonctionnalitรฉs supplรฉmentaires comme l'entraรฎnement distribuรฉ, la prรฉcision mixte, et plus encore. En fonction de votre tรขche, vous passerez gรฉnรฉralement les paramรจtres suivants ร  [`Trainer`] : 1. Un [`PreTrainedModel`] ou un [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille due l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage : ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... 
per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. Chargez un jeu de donnรฉes : ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Puis appliquez-la ร  l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร  partir de votre jeu de donnรฉes : ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] : ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement : ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร  sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`]. </Tip> Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร  l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies. L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร  la place. ## Entraรฎnement avec TensorFlow Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐Ÿค— Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras. 1. Vous commencez avec un modรจle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. 
## Entraînement avec TensorFlow

Tous les modèles sont des modèles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent être entraînés avec TensorFlow via l'API [Keras](https://keras.io/). 🤗 Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de données comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraînement immédiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras.

1. Vous commencez avec un modèle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) :

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```

2. Une classe de prétraitement comme un tokenizer, un processeur d'images ou un extracteur de caractéristiques :

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

3. Créez une fonction qui transforme le texte du jeu de données en tokens :

```py
>>> def tokenize_dataset(dataset):
...     return tokenizer(dataset["text"])  # doctest: +SKIP
```

4. Appliquez le tokenizer à l'ensemble du jeu de données avec [`~datasets.Dataset.map`] et passez ensuite le jeu de données et le tokenizer à [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez également modifier la taille des batchs et mélanger le jeu de données ici si vous le souhaitez :

```py
>>> dataset = dataset.map(tokenize_dataset)  # doctest: +SKIP
>>> tf_dataset = model.prepare_tf_dataset(
...     dataset, batch_size=16, shuffle=True, tokenizer=tokenizer
... )  # doctest: +SKIP
```

5. Une fois que vous êtes prêt, appelez les fonctions `compile` et `fit` pour commencer l'entraînement. Notez que c'est le jeu de données préparé (`tf_dataset`) qui est passé à `fit` :

```py
>>> from tensorflow.keras.optimizers import Adam

>>> model.compile(optimizer=Adam(3e-5))
>>> model.fit(tf_dataset)  # doctest: +SKIP
```

## Et après ?

Maintenant que vous avez terminé la visite rapide de 🤗 Transformers, consultez nos guides et apprenez à faire des choses plus spécifiques comme créer un modèle personnalisé, finetuner un modèle pour une tâche, et comment entraîner un modèle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de 🤗 Transformers, jetez un œil à nos guides conceptuels !
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 🤗 Transformers

Apprentissage automatique de pointe pour [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), et [JAX](https://jax.readthedocs.io/en/latest/).

🤗 Transformers fournit des API et des outils pour télécharger et entraîner facilement des modèles pré-entraînés de pointe. L'utilisation de modèles pré-entraînés peut réduire vos coûts de calcul, votre empreinte carbone, et vous faire économiser le temps et les ressources nécessaires pour entraîner un modèle à partir de zéro. Ces modèles prennent en charge des tâches courantes dans différentes modalités, telles que :

📝 **Traitement automatique des langues**: classification de texte, reconnaissance d'entités, système de question-réponse, modèle de langage, génération de résumé, traduction, question à choix multiples et génération de texte.<br>
🖼️ **Vision par ordinateur**: classification d'image, détection d'objet et segmentation.<br>
🗣️ **Audio**: reconnaissance automatique de la parole et classification audio.<br>
🐙 **Multimodalité**: système de question-réponse avec des tableaux ou images, reconnaissance optique de caractères, extraction d'information depuis des documents scannés et classification de vidéo.

🤗 Transformers prend en charge l'interopérabilité entre PyTorch, TensorFlow et JAX. Cela permet d'utiliser un framework différent à chaque étape de la vie d'un modèle, par exemple entraîner un modèle en trois lignes de code avec un framework, et le charger pour l'inférence avec un autre. Les modèles peuvent également être exportés dans un format comme ONNX et TorchScript pour être déployés dans des environnements de production.

Rejoignez la communauté grandissante sur le [Hub](https://huggingface.co/models), le [forum](https://discuss.huggingface.co/) ou [Discord](https://discord.com/invite/JfAtkvEtRb) dès aujourd'hui !

## Si vous cherchez un support personnalisé de l'équipe Hugging Face

<a target="_blank" href="https://huggingface.co/support">
    <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a>

## Contents

La documentation est organisée en 5 parties:

- **DEMARRER** propose une visite rapide de la bibliothèque et des instructions d'installation pour être opérationnel.
- **TUTORIELS** : excellent point de départ pour les débutants. Cette section vous aidera à acquérir les compétences de base dont vous avez besoin pour commencer à utiliser la bibliothèque.
- **GUIDES D'UTILISATION** pour différentes tâches comme par exemple le finetuning d'un modèle pré-entraîné pour la classification de texte ou comment créer et partager votre propre modèle.
- **GUIDES CONCEPTUELS** pour plus de discussions et d'explications sur les concepts et les idées sous-jacentes aux modèles, aux tâches et à la philosophie de conception de 🤗 Transformers.
- **API** décrit toutes les classes et fonctions :
  - **CLASSES PRINCIPALES** détaille les classes les plus importantes comme la configuration, le modèle, le tokenizer et le pipeline.
  - **MODELES** détaille les classes et les fonctions propres à chaque modèle de la bibliothèque.
  - **UTILITAIRES INTERNES** détaille les classes et fonctions utilitaires utilisées en interne.

### Modèles supportés

<!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! -->

1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[AltCLIP](model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
1. **[Audio Spectrogram Transformer](model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1.
**[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BioGpt](model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLIP](model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[BridgeTower](model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. 
**[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[Chinese-CLIP](model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CLIPSeg](model_doc/clipseg)** (from University of Gรถttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lรผddecke and Alexander Ecker. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. 
**[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETA](model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krรคhenbรผhl. 1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. 
**[DiNAT](model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientFormer](model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. 
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GIT](model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. 
**[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey ร–hman, Fredrik Carlsson, Magnus Sahlgren. 1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. 
**[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. 
The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileNetV1](model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 1. **[MobileNetV2](model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. 
**[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[NAT](model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 1. 
**[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. **[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. 
**[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoBERTa-PreLayerNorm](model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. 1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. 
**[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[Swin2SR](model_doc/swin2sr)** (from University of Wรผrzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. 1. **[SwitchTransformers](model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[TimeSformer](model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. 
Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[UPerNet](model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViT Hybrid](model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. 
**[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. 
**[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.

### Frameworks compatibles

Le tableau ci-dessous indique, pour chacun de ces modèles, la prise en charge actuelle dans la bibliothèque : tokenizer Python (appelé "slow"), tokenizer rapide ("fast") s'appuyant sur la bibliothèque 🤗 Tokenizers, et support en Jax (via Flax), PyTorch et/ou TensorFlow.

<!--This table is updated automatically from the auto modules with _make fix-copies_.
Do not update manually!--> | Modรจle | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:-----------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | AltCLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | Audio Spectrogram Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | BioGpt | โœ… | โŒ | โœ… | โŒ | โŒ | | BiT | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | BridgeTower | โŒ | โŒ | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | Chinese-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CLIPSeg | โŒ | โŒ | โœ… | โŒ | โŒ | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | Conditional DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Deformable DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETA | โŒ | โŒ | โœ… | โŒ | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DiNAT | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DonutSwin | โŒ | โŒ | โœ… | โŒ | โŒ | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | EfficientFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | ERNIE | โŒ | โŒ | โœ… | โŒ | โŒ | | ESM | โœ… | โŒ | โœ… | โœ… | โŒ | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GIT | โŒ | โŒ | โœ… | โŒ | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT NeoX Japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GPT-Sw3 | โœ… | โœ… | โœ… | โœ… | โœ… | | Graphormer | โŒ | โŒ | โœ… | โŒ | โŒ | | GroupViT | โŒ | โŒ | โœ… | โœ… | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Jukebox | โœ… | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | LiLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | 
M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MarkupLM | โœ… | โœ… | โœ… | โŒ | โŒ | | Mask2Former | โŒ | โŒ | โœ… | โŒ | โŒ | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | MaskFormerSwin | โŒ | โŒ | โŒ | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileNetV1 | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileNetV2 | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileViT | โŒ | โŒ | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | NAT | โŒ | โŒ | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OneFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | PEGASUS-X | โŒ | โŒ | โœ… | โŒ | โŒ | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โŒ | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoBERTa-PreLayerNorm | โŒ | โŒ | โœ… | โœ… | โœ… | | RoCBert | โœ… | โŒ | โœ… | โŒ | โŒ | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | SpeechT5 | โœ… | โŒ | โœ… | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | Swin2SR | โŒ | โŒ | โœ… | โŒ | โŒ | | SwitchTransformers | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Table Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Time Series Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TimeSformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | UPerNet | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViT Hybrid | โŒ | โŒ | โœ… | โŒ | โŒ | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | ViTMSN | โŒ | โŒ | โœ… | โŒ | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | Whisper | โœ… | โŒ | โœ… | โœ… | โŒ | | X-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | 
โœ… | โœ… | โœ… | โœ… | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Visite rapide - local: in_translation title: Installation title: Dรฉmarrer - sections: - local: in_translation title: Pipelines pour l'infรฉrence - local: in_translation title: Chargement d'instances prรฉ-entraรฎnรฉes avec une AutoClass - local: in_translation title: Prรฉparation des donnรฉes - local: in_translation title: Fine-tune un modรจle prรฉ-entraรฎnรฉ - local: in_translation title: Entraรฎnement distribuรฉ avec ๐Ÿค— Accelerate - local: in_translation title: Partager un modรจle title: Tutoriels - sections: - sections: - local: in_translation title: Crรฉer votre architecture - local: in_translation title: Partager vos modรจles - local: in_translation title: Entraรฎnement avec un script - local: in_translation title: Entraรฎnement avec Amazon SageMaker - local: in_translation title: Convertir depuis des checkpoints Tensorflow - local: in_translation title: Exporter vers ONNX - local: in_translation title: Exporter vers TorchScript - local: in_translation title: Aide au dรฉpannage title: Usage gรฉnรฉral - sections: - local: in_translation title: Utiliser les tokenizers de ๐Ÿค— Tokenizers - local: in_translation title: Infรฉrence avec les modรจles multilingues - local: in_translation title: Stratรฉgies de gรฉnรฉration de texte - sections: - isExpanded: false local: in_translation title: Classification de texte - local: in_translation title: Classification de token - local: in_translation title: Systรจme de question-rรฉponse - local: in_translation title: Modรฉlisation causale du langage - local: in_translation title: Modรฉlisation du langage avec masque - local: in_translation title: Traduction - local: in_translation title: Gรฉnรฉration de rรฉsumรฉ - local: in_translation title: Question ร  choix multiple title: Guides des tรขches title: Traitement automatique des langues - sections: - local: in_translation title: Classification audio - local: in_translation title: Reconnaissance automatique de la parole title: Audio - sections: - local: in_translation title: Classification d'images - local: in_translation title: Segmentation sรฉmantique - local: in_translation title: Classification de vidรฉos - local: in_translation title: Dรฉtection d'objets title: Vision par ordinateur - sections: - local: in_translation title: Performance et extensibilitรฉ - sections: - local: in_translation title: Comment contribuer ร  transformers? - local: in_translation title: Comment ajouter un modรจle ร  ๐Ÿค— Transformers? - local: in_translation title: Comment convertir un modรจle ๐Ÿค— Transformers vers TensorFlow? - local: in_translation title: Comment ajouter un pipeline ร  ๐Ÿค— Transformers? - local: in_translation title: Tester - local: in_translation title: Vรฉrification pour une Pull Request title: Contribuer - local: in_translation title: ๐Ÿค— Transformers Notebooks - local: in_translation title: Ressources communautaires - local: in_translation title: Benchmarks - local: in_translation title: Migration ร  partir de versions prรฉcรฉdentes title: Guides d'utilisation - sections: - local: in_translation title: Philosophie - local: in_translation title: Glossaire - local: in_translation title: Qu'est ce ๐Ÿค— Transformers peut faire ? - local: in_translation title: Quelles tรขches ๐Ÿค— Transformers peut rรฉsoudre ? 
- local: in_translation title: Rรฉsumรฉ des modรจles - local: in_translation title: Rรฉsumรฉ des tokenizers - local: in_translation title: Remplissage et troncature - local: in_translation title: BERTology - local: in_translation title: Perplexitรฉ des modรจles ร  longueur fixe - local: in_translation title: Pipelines pour infรฉrence avec des serveurs web title: Guides conceptuels - sections: - isExpanded: false sections: - local: in_translation title: Classes principales - local: in_translation title: Modรจles textuels - local: in_translation title: Modรจles visuels - local: in_translation title: Modรจles audio - local: in_translation title: Modรจles multimodal - local: in_translation title: Modรจles d'apprentissage par renforcement - local: in_translation title: Modรจles de sรฉries temporelles - local: in_translation title: Graph models title: Modรจles - sections: - local: in_translation title: Utilitaires internes title: API
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/in_translation.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Traduction en cours.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/fr/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Installation de Transformers
! pip install transformers datasets
# Pour installer à partir du code source au lieu de la dernière version, commentez la commande ci-dessus et décommentez la suivante.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Vortrainierte Instanzen mit einer AutoClass laden Bei so vielen verschiedenen Transformator-Architekturen kann es eine Herausforderung sein, eine fรผr Ihren Checkpoint zu erstellen. Als Teil der ๐Ÿค— Transformers Kernphilosophie, die Bibliothek leicht, einfach und flexibel nutzbar zu machen, leitet eine `AutoClass` automatisch die richtige Architektur aus einem gegebenen Checkpoint ab und lรคdt sie. Mit der Methode `from_pretrained()` kann man schnell ein vortrainiertes Modell fรผr eine beliebige Architektur laden, so dass man keine Zeit und Ressourcen aufwenden muss, um ein Modell von Grund auf zu trainieren. Die Erstellung dieser Art von Checkpoint-agnostischem Code bedeutet, dass Ihr Code, wenn er fรผr einen Checkpoint funktioniert, auch mit einem anderen Checkpoint funktionieren wird - solange er fรผr eine รคhnliche Aufgabe trainiert wurde - selbst wenn die Architektur unterschiedlich ist. <Tip> Denken Sie daran, dass sich die Architektur auf das Skelett des Modells bezieht und die Checkpoints die Gewichte fรผr eine bestimmte Architektur sind. Zum Beispiel ist [BERT](https://huggingface.co/bert-base-uncased) eine Architektur, wรคhrend `bert-base-uncased` ein Checkpoint ist. Modell ist ein allgemeiner Begriff, der entweder Architektur oder Prรผfpunkt bedeuten kann. </Tip> In dieser Anleitung lernen Sie, wie man: * Einen vortrainierten Tokenizer lรคdt. * Einen vortrainierten Merkmalsextraktor lรคdt. * Einen vortrainierten Prozessor lรคdt. * Ein vortrainiertes Modell lรคdt. ## AutoTokenizer Nahezu jede NLP-Aufgabe beginnt mit einem Tokenizer. Ein Tokenizer wandelt Ihre Eingabe in ein Format um, das vom Modell verarbeitet werden kann. Laden Sie einen Tokenizer mit [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ``` Dann tokenisieren Sie Ihre Eingabe wie unten gezeigt: ```py >>> sequence = "In a hole in the ground there lived a hobbit." >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoFeatureExtractor Fรผr Audio- und Bildverarbeitungsaufgaben verarbeitet ein Merkmalsextraktor das Audiosignal oder Bild in das richtige Eingabeformat. Laden Sie einen Merkmalsextraktor mit [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor Multimodale Aufgaben erfordern einen Prozessor, der zwei Arten von Vorverarbeitungswerkzeugen kombiniert. 
Das Modell [LayoutLMV2](model_doc/layoutlmv2) beispielsweise benรถtigt einen Feature-Extraktor fรผr Bilder und einen Tokenizer fรผr Text; ein Prozessor kombiniert beide. Laden Sie einen Prozessor mit [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel <frameworkcontent> <pt> Mit den `AutoModelFor`-Klassen kรถnnen Sie schlieรŸlich ein vortrainiertes Modell fรผr eine bestimmte Aufgabe laden (siehe [hier](model_doc/auto) fรผr eine vollstรคndige Liste der verfรผgbaren Aufgaben). Laden Sie zum Beispiel ein Modell fรผr die Sequenzklassifikation mit [`AutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Sie kรถnnen denselben Prรผfpunkt problemlos wiederverwenden, um eine Architektur fรผr eine andere Aufgabe zu laden: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` <Tip warning={true}> Fรผr PyTorch-Modelle verwendet die Methode `from_pretrained()` `torch.load()`, die intern `pickle` verwendet und als unsicher bekannt ist. Generell sollte man niemals ein Modell laden, das aus einer nicht vertrauenswรผrdigen Quelle stammen kรถnnte, oder das manipuliert worden sein kรถnnte. Dieses Sicherheitsrisiko wird fรผr รถffentliche Modelle, die auf dem Hugging Face Hub gehostet werden, teilweise gemildert, da diese bei jeder รœbertragung [auf Malware](https://huggingface.co/docs/hub/security-malware) gescannt werden. Siehe die [Hub-Dokumentation](https://huggingface.co/docs/hub/security) fรผr Best Practices wie [signierte Commit-Verifizierung](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg) mit GPG. TensorFlow- und Flax-Checkpoints sind nicht betroffen und kรถnnen in PyTorch-Architekturen mit den Kwargs `from_tf` und `from_flax` fรผr die Methode `from_pretrained` geladen werden, um dieses Problem zu umgehen. </Tip> Im Allgemeinen empfehlen wir die Verwendung der Klasse "AutoTokenizer" und der Klasse "AutoModelFor", um trainierte Instanzen von Modellen zu laden. Dadurch wird sichergestellt, dass Sie jedes Mal die richtige Architektur laden. Im nรคchsten [Tutorial] (Vorverarbeitung) erfahren Sie, wie Sie Ihren neu geladenen Tokenizer, Feature Extractor und Prozessor verwenden, um einen Datensatz fรผr die Feinabstimmung vorzuverarbeiten. </pt> <tf> Mit den Klassen `TFAutoModelFor` schlieรŸlich kรถnnen Sie ein vortrainiertes Modell fรผr eine bestimmte Aufgabe laden (siehe [hier](model_doc/auto) fรผr eine vollstรคndige Liste der verfรผgbaren Aufgaben). Laden Sie zum Beispiel ein Modell fรผr die Sequenzklassifikation mit [`TFAutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Sie kรถnnen denselben Prรผfpunkt problemlos wiederverwenden, um eine Architektur fรผr eine andere Aufgabe zu laden: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` Im Allgemeinen empfehlen wir, die Klasse "AutoTokenizer" und die Klasse "TFAutoModelFor" zu verwenden, um vortrainierte Instanzen von Modellen zu laden. 
Dadurch wird sichergestellt, dass Sie jedes Mal die richtige Architektur laden. Im nächsten [Tutorial zur Vorverarbeitung](preprocessing) erfahren Sie, wie Sie Ihren neu geladenen Tokenizer, Feature Extractor und Prozessor verwenden, um einen Datensatz für die Feinabstimmung vorzuverarbeiten. </tf> </frameworkcontent>
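Als kleine Ergänzung zum obigen Tipp zu den Kwargs `from_tf` und `from_flax`: die folgende Skizze zeigt, wie sich ein TensorFlow-Checkpoint in eine PyTorch-Architektur laden lässt. Sie setzt voraus, dass TensorFlow installiert ist und der Checkpoint TensorFlow-Gewichte bereitstellt; `distilbert-base-uncased` dient hier nur als Beispiel:

```py
from transformers import AutoModelForSequenceClassification

# Annahme: für diesen Checkpoint liegen TensorFlow-Gewichte (tf_model.h5) vor
# und TensorFlow ist installiert. `from_tf=True` lädt diese Gewichte in die
# PyTorch-Architektur; ein im Checkpoint nicht enthaltener Kopf (hier für die
# Sequenzklassifizierung) wird dabei neu initialisiert.
pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", from_tf=True)
```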
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Schnellstart [[open-in-colab]] Mit ๐Ÿค— Transformers kรถnnen Sie sofort loslegen! Verwenden Sie die [`pipeline`] fรผr schnelle Inferenz und laden Sie schnell ein vortrainiertes Modell und einen Tokenizer mit einer [AutoClass](./model_doc/auto), um Ihre Text-, Bild- oder Audioaufgabe zu lรถsen. <Tip> Alle in der Dokumentation vorgestellten Codebeispiele haben oben links einen Umschalter fรผr PyTorch und TensorFlow. Wenn nicht, wird erwartet, dass der Code fรผr beide Backends ohne ร„nderungen funktioniert. </Tip> ## Pipeline [`pipeline`] ist der einfachste Weg, ein vortrainiertes Modell fรผr eine bestimmte Aufgabe zu verwenden. <Youtube id="tiZFewofSLM"/> Die [`pipeline`] unterstรผtzt viele gรคngige Aufgaben: **Text**: * Stimmungsanalyse: Klassifizierung der Polaritรคt eines gegebenen Textes. * Textgenerierung (auf Englisch): Generierung von Text aus einer gegebenen Eingabe. * Name-Entity-Recognition (NER): Kennzeichnung jedes Worts mit der Entitรคt, die es reprรคsentiert (Person, Datum, Ort usw.). * Beantwortung von Fragen: Extrahieren der Antwort aus dem Kontext, wenn ein gewisser Kontext und eine Frage gegeben sind. * Fill-mask: Ausfรผllen von Lรผcken in einem Text mit maskierten Wรถrtern. * Zusammenfassung: Erstellung einer Zusammenfassung einer langen Text- oder Dokumentensequenz. * รœbersetzung: รœbersetzen eines Textes in eine andere Sprache. * Merkmalsextraktion: Erstellen einer Tensordarstellung des Textes. **Bild**: * Bildklassifizierung: Klassifizierung eines Bildes. * Bildsegmentierung: Klassifizierung jedes Pixels in einem Bild. * Objekterkennung: Erkennen von Objekten innerhalb eines Bildes. **Audio**: * Audioklassifizierung: Zuweisung eines Labels zu einem bestimmten Audiosegment. * Automatische Spracherkennung (ASR): Transkription von Audiodaten in Text. <Tip> Fรผr mehr Details รผber die [`pipeline`] und assoziierte Aufgaben, schauen Sie in die Dokumentation [hier](./main_classes/pipelines). </Tip> ### Verwendung der Pipeline Im folgenden Beispiel werden Sie die [`pipeline`] fรผr die Stimmungsanalyse verwenden. Installieren Sie die folgenden Abhรคngigkeiten, falls Sie dies nicht bereits getan haben: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importieren sie die [`pipeline`] und spezifizieren sie die Aufgabe, welche sie lรถsen mรถchten: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Die Pipeline lรคdt ein standardmรครŸiges [vortrainiertes Modell] (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) und einen Tokenizer fรผr die Stimmungs-Analyse herunter und speichert sie. 
Jetzt kรถnnen Sie den "Klassifikator" auf Ihren Zieltext anwenden: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` For more than one sentence, pass a list of sentences to the [`pipeline`] which returns a list of dictionaries: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` Die [`pipeline`] kann auch รผber einen ganzen Datensatz iterieren. Starten wir mit der Installation der [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) Bibliothek: ```bash pip install datasets ``` Erstellen wir eine [`pipeline`] mit der Aufgabe die wir lรถsen und dem Modell welches wir nutzen mรถchten. ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Als nรคchstes laden wir den Datensatz (siehe ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) fรผr mehr Details) welches wir nutzen mรถchten. Zum Beispiel laden wir den [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) Datensatz: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` Wir mรผssen sicherstellen, dass die Abtastrate des Datensatzes der Abtastrate entspricht, mit der `facebook/wav2vec2-base-960h` trainiert wurde. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Audiodateien werden automatisch geladen und neu abgetastet, wenn die Spalte "audio" aufgerufen wird. Extrahieren wir die rohen Wellenform-Arrays der ersten 4 Beispiele und รผbergeben wir sie als Liste an die Pipeline: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Bei einem grรถรŸeren Datensatz mit vielen Eingaben (wie bei Sprache oder Bildverarbeitung) sollten Sie einen Generator anstelle einer Liste รผbergeben, der alle Eingaben in den Speicher lรคdt. Weitere Informationen finden Sie in der [Pipeline-Dokumentation](./main_classes/pipelines). ### Ein anderes Modell und einen anderen Tokenizer in der Pipeline verwenden Die [`pipeline`] kann jedes Modell aus dem [Model Hub] (https://huggingface.co/models) verwenden, wodurch es einfach ist, die [`pipeline`] fรผr andere Anwendungsfรคlle anzupassen. Wenn Sie beispielsweise ein Modell wรผnschen, das franzรถsischen Text verarbeiten kann, verwenden Sie die Tags im Model Hub, um nach einem geeigneten Modell zu filtern. Das oberste gefilterte Ergebnis liefert ein mehrsprachiges [BERT-Modell](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment), das auf die Stimmungsanalyse abgestimmt ist. 
GroรŸartig, verwenden wir dieses Modell! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Use the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `AutoClass` below): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Use the [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and it's associated tokenizer (more on an `TFAutoClass` below): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Dann kรถnnen Sie das Modell und den Tokenizer in der [`pipeline`] angeben und den `Klassifikator` auf Ihren Zieltext anwenden: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Wenn Sie kein Modell fรผr Ihren Anwendungsfall finden kรถnnen, mรผssen Sie ein vortrainiertes Modell auf Ihren Daten feinabstimmen. Schauen Sie sich unser [Feinabstimmungs-Tutorial](./training) an, um zu erfahren, wie das geht. Und schlieรŸlich, nachdem Sie Ihr trainiertes Modell verfeinert haben, sollten Sie es mit der Community im Model Hub teilen (siehe Tutorial [hier](./model_sharing)), um NLP fรผr alle zu demokratisieren! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Unter der Haube arbeiten die Klassen [`AutoModelForSequenceClassification`] und [`AutoTokenizer`] zusammen, um die [`pipeline`] zu betreiben. Eine [`AutoClass`](./model_doc/auto) ist eine Abkรผrzung, die automatisch die Architektur eines trainierten Modells aus dessen Namen oder Pfad abruft. Sie mรผssen nur die passende `AutoClass` fรผr Ihre Aufgabe und den zugehรถrigen Tokenizer mit [`AutoTokenizer`] auswรคhlen. Kehren wir zu unserem Beispiel zurรผck und sehen wir uns an, wie Sie die `AutoClass` verwenden kรถnnen, um die Ergebnisse der [`pipeline`] zu replizieren. ### AutoTokenizer Ein Tokenizer ist fรผr die Vorverarbeitung von Text in ein fรผr das Modell verstรคndliches Format zustรคndig. Zunรคchst zerlegt der Tokenisierer den Text in Wรถrter, die *Token* genannt werden. Es gibt mehrere Regeln fรผr den Tokenisierungsprozess, z. B. wie und auf welcher Ebene ein Wort aufgespalten wird (weitere Informationen รผber Tokenisierung [hier](./tokenizer_summary)). Das Wichtigste ist jedoch, dass Sie den Tokenizer mit demselben Modellnamen instanziieren mรผssen, um sicherzustellen, dass Sie dieselben Tokenisierungsregeln verwenden, mit denen ein Modell zuvor trainiert wurde. Laden sie einen Tokenizer mit [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` AnschlieรŸend wandelt der Tokenizer die Token in Zahlen um, um einen Tensor als Eingabe fรผr das Modell zu konstruieren. Dieser wird als *Vokabular* des Modells bezeichnet. 
รœbergeben Sie Ihren Text an den Tokenizer: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Der Tokenizer gibt ein Wรถrterbuch zurรผck, das Folgendes enthรคlt: * [input_ids](./glossary#input-ids): numerische Reprรคsentationen Ihrer Token. * [atttention_mask](.glossary#attention-mask): gibt an, welche Token beachtet werden sollen. Genau wie die [`pipeline`] akzeptiert der Tokenizer eine Liste von Eingaben. Darรผber hinaus kann der Tokenizer den Text auch auffรผllen und kรผrzen, um einen Stapel mit einheitlicher Lรคnge zurรผckzugeben: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Lesen Sie das Tutorial [preprocessing](./preprocessing) fรผr weitere Details zur Tokenisierung. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers bietet eine einfache und einheitliche Mรถglichkeit, vortrainierte Instanzen zu laden. Das bedeutet, dass Sie ein [`AutoModel`] laden kรถnnen, wie Sie einen [`AutoTokenizer`] laden wรผrden. Der einzige Unterschied ist die Auswahl des richtigen [`AutoModel`] fรผr die Aufgabe. Da Sie eine Text- oder Sequenzklassifizierung vornehmen, laden Sie [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> In der [Aufgabenzusammenfassung](./task_summary) steht, welche [AutoModel]-Klasse fรผr welche Aufgabe zu verwenden ist. </Tip> Jetzt kรถnnen Sie Ihren vorverarbeiteten Stapel von Eingaben direkt an das Modell รผbergeben. Sie mรผssen nur das Wรถrterbuch entpacken, indem Sie `**` hinzufรผgen: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Das Modell gibt die endgรผltigen Aktivierungen in dem Attribut "logits" aus. Wenden Sie die Softmax-Funktion auf die "logits" an, um die Wahrscheinlichkeiten zu erhalten: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers bietet eine einfache und einheitliche Methode zum Laden von vortrainierten Instanzen. Das bedeutet, dass Sie ein [`TFAutoModel`] genauso laden kรถnnen, wie Sie einen [`AutoTokenizer`] laden wรผrden. Der einzige Unterschied ist die Auswahl des richtigen [`TFAutoModel`] fรผr die Aufgabe. 
Da Sie Text - oder Sequenz - Klassifizierung machen, laden Sie [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> In der [Aufgabenzusammenfassung](./task_summary) steht, welche [AutoModel]-Klasse fรผr welche Aufgabe zu verwenden ist. </Tip> Jetzt kรถnnen Sie Ihren vorverarbeiteten Stapel von Eingaben direkt an das Modell รผbergeben, indem Sie die Wรถrterbuchschlรผssel direkt an die Tensoren รผbergeben: ```py >>> tf_outputs = tf_model(tf_batch) ``` Das Modell gibt die endgรผltigen Aktivierungen in dem Attribut "logits" aus. Wenden Sie die Softmax-Funktion auf die "logits" an, um die Wahrscheinlichkeiten zu erhalten: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Alle ๐Ÿค— Transformers-Modelle (PyTorch oder TensorFlow) geben die Tensoren *vor* der endgรผltigen Aktivierungsfunktion Funktion (wie Softmax) aus, da die endgรผltige Aktivierungsfunktion oft mit dem Verlusten verschmolzen ist. </Tip> Modelle sind ein standardmรครŸiges [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) oder ein [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model), sodass Sie sie in Ihrer รผblichen Trainingsschleife verwenden kรถnnen. Um jedoch die Dinge einfacher zu machen, bietet ๐Ÿค— Transformers eine [`Trainer`]-Klasse fรผr PyTorch, die Funktionalitรคt fรผr verteiltes Training, gemischte Prรคzision und mehr bietet. Fรผr TensorFlow kรถnnen Sie die Methode `fit` aus [Keras](https://keras.io/) verwenden. Siehe das [training tutorial](./training) fรผr weitere Details. <Tip> Transformers-Modellausgaben sind spezielle Datenklassen, so dass ihre Attribute in einer IDE automatisch vervollstรคndigt werden. Die Modellausgรคnge verhalten sich auch wie ein Tupel oder ein Wรถrterbuch (z.B. kรถnnen Sie mit einem Integer, einem Slice oder einem String indexieren), wobei die Attribute, die "None" sind, ignoriert werden. </Tip> ### Modell speichern <frameworkcontent> <pt> Sobald Ihr Modell feinabgestimmt ist, kรถnnen Sie es mit seinem Tokenizer speichern, indem Sie [`PreTrainedModel.save_pretrained`] verwenden: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Wenn Sie bereit sind, das Modell erneut zu verwenden, laden Sie es mit [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Sobald Ihr Modell feinabgestimmt ist, kรถnnen Sie es mit seinem Tokenizer unter Verwendung von [`TFPreTrainedModel.save_pretrained`] speichern: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Wenn Sie bereit sind, das Modell wieder zu verwenden, laden Sie es mit [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Ein besonders cooles ๐Ÿค— Transformers-Feature ist die Mรถglichkeit, ein Modell zu speichern und es entweder als PyTorch- oder TensorFlow-Modell wieder zu laden. 
Der Parameter "from_pt" oder "from_tf" kann das Modell von einem Framework in das andere konvertieren: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Custom model builds Sie kรถnnen die Konfigurationsklasse des Modells รคndern, um zu bestimmen, wie ein Modell aufgebaut ist. Die Konfiguration legt die Attribute eines Modells fest, z. B. die Anzahl der verborgenen Schichten oder der Aufmerksamkeitskรถpfe. Wenn Sie ein Modell aus einer benutzerdefinierten Konfigurationsklasse initialisieren, beginnen Sie bei Null. Die Modellattribute werden zufรคllig initialisiert, und Sie mรผssen das Modell trainieren, bevor Sie es verwenden kรถnnen, um aussagekrรคftige Ergebnisse zu erhalten. Beginnen Sie mit dem Import von [`AutoConfig`] und laden Sie dann das trainierte Modell, das Sie รคndern mรถchten. Innerhalb von [`AutoConfig.from_pretrained`] kรถnnen Sie das Attribut angeben, das Sie รคndern mรถchten, z. B. die Anzahl der Aufmerksamkeitskรถpfe: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Create a model from your custom configuration with [`AutoModel.from_config`]: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Create a model from your custom configuration with [`TFAutoModel.from_config`]: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Weitere Informationen zur Erstellung von benutzerdefinierten Konfigurationen finden Sie in der Anleitung [Erstellen einer benutzerdefinierten Architektur](./create_a_model). ## Wie geht es weiter? Nachdem Sie nun die ๐Ÿค— Transformers-Kurztour abgeschlossen haben, schauen Sie sich unsere Anleitungen an und erfahren Sie, wie Sie spezifischere Dinge tun kรถnnen, wie das Schreiben eines benutzerdefinierten Modells, die Feinabstimmung eines Modells fรผr eine Aufgabe und wie man ein Modell mit einem Skript trainiert. Wenn Sie mehr รผber die Kernkonzepte von ๐Ÿค— Transformers erfahren mรถchten, nehmen Sie sich eine Tasse Kaffee und werfen Sie einen Blick auf unsere konzeptionellen Leitfรคden!
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Maschinelles Lernen auf dem neuesten Stand der Technik fรผr PyTorch, TensorFlow und JAX. ๐Ÿค— Transformers bietet APIs zum einfachen Herunterladen und Trainieren von vortrainierten Modellen auf dem neuesten Stand der Technik. Die Verwendung von vortrainierten Modellen kann Rechenkosten sparen und den CO2-FuรŸabdruck reduzieren und Zeit sparen, die fรผr das Training eines Modells von Grund auf benรถtigt wird. Die Modelle kรถnnen fรผr verschiedene Modalitรคten verwendet werden, wie z. B.: * ๐Ÿ“ Text: Textklassifizierung, Informationsextrahierung, Beantwortung von Fragen, Zusammenfassung, รœbersetzung und Texterstellung in รผber 100 Sprachen. * ๐Ÿ–ผ๏ธ Bilder: Bildklassifizierung, Objekterkennung und Segmentierung. * ๐Ÿ—ฃ๏ธ Audio: Spracherkennung und Audioklassifizierung. * ๐Ÿ™ Multimodal: Beantwortung von Tabellenfragen, optische Zeichenerkennung, Informationsextraktion aus gescannten Dokumenten, Videoklassifizierung und Beantwortung visueller Fragen. Unsere Bibliothek unterstรผtzt die nahtlose Integration von drei der beliebtesten Deep-Learning-Bibliotheken: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) und [JAX](https://jax.readthedocs.io/en/latest/). Trainieren Sie Ihr Modell in drei Codezeilen in einem Framework und laden Sie es zur Inferenz mit einem anderen. Jede ๐Ÿค— Transformers-Architektur ist in einem eigenstรคndigen Python-Modul definiert, so dass sie leicht fรผr Forschung und Experimente angepasst werden kann. ## Wenn Sie auf der Suche nach individueller Unterstรผtzung durch das Hugging Face-Team sind <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Inhalt Die Dokumentation ist in fรผnf Teile gegliedert: - **GET STARTED** enthรคlt eine kurze Tour und Installationsanweisungen, um mit ๐Ÿค— Transformers loszulegen. - **TUTORIALS** sind ein hervorragender Ausgangspunkt, wenn Sie neu in unserer Bibliothek sind. Dieser Abschnitt hilft Ihnen, die grundlegenden Fรคhigkeiten zu erlangen, die Sie benรถtigen, um mit ๐Ÿค— Transformers zu arbeiten. - **HOW-TO GUIDES** zeigen Ihnen, wie Sie ein bestimmtes Ziel erreichen kรถnnen, z. B. die Feinabstimmung eines vortrainierten Modells fรผr die Sprachmodellierung oder die Erstellung eines benutzerdefinierten Modellkopfs. - **KONZEPTUELLE ANLEITUNGEN** bietet weitere Diskussionen und Erklรคrungen zu den zugrunde liegenden Konzepten und Ideen hinter Modellen, Aufgaben und der Designphilosophie von ๐Ÿค— Transformers. 
- **API** beschreibt jede Klasse und Funktion, gruppiert in: - **MAIN CLASSES** fรผr die Hauptklassen, die die wichtigsten APIs der Bibliothek darstellen. - MODELLE** fรผr die Klassen und Funktionen, die zu jedem in der Bibliothek implementierten Modell gehรถren. - **INTERNAL HELPERS** fรผr die Klassen und Funktionen, die wir intern verwenden. Die Bibliothek enthรคlt derzeit JAX-, PyTorch- und TensorFlow-Implementierungen, vortrainierte Modellgewichte, Nutzungsskripte und Konvertierungsprogramme fรผr die folgenden Modelle. ### Unterstรผtze Modelle <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from ร‰cole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah and Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. 
**[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 1. **[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. 
The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. 
**[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. 
**[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. 1. **[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jรถrg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. 
**[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 1. **[Nezha](model_doc/nezha)** (from Huawei Noahโ€™s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 1. **[Nystrรถmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. 
**[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 1. **[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kรผttler, Mike Lewis, Wen-tau Yih, Tim Rocktรคschel, Sebastian Riedel, Douwe Kiela. 1. 
**[REALM](model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. 
**[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 1. **[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. 
**[UMT5](model_doc/umt5)** (from Google Research) released with the paper [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. 
**[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. **[XLM-V](model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa. 1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. 
**[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### Unterstรผtzte Frameworks Die folgende Tabelle zeigt die derzeitige Unterstรผtzung in der Bibliothek fรผr jedes dieser Modelle, unabhรคngig davon, ob sie einen Python Tokenizer haben (als "langsam" bezeichnet), ein "schneller" Tokenizer, der von der ๐Ÿค— Tokenizers Bibliothek unterstรผtzt wird, ob sie Unterstรผtzung in Jax (via Flax), PyTorch, und/oder TensorFlow haben. <!--This table is updated automatically from the auto modules with _make fix-copies_. Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GroupViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โŒ | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | 
LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileViT | โŒ | โŒ | โœ… | โŒ | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โŒ | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
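Welche Haken in der Tabelle gesetzt sind, bestimmt in der Praxis, welche Klassen Sie verwenden können: PyTorch-Gewichte laden Sie über die `AutoModel`-Klassen, TensorFlow-Gewichte über die `TFAutoModel`-Klassen, und ein "schneller" Tokenizer wird, sofern vorhanden, von `AutoTokenizer` standardmäßig verwendet. Die folgende kurze Skizze zeigt das; der Checkpoint `bert-base-cased` ist dabei nur ein frei gewähltes Beispiel:

```py
from transformers import AutoModel, AutoTokenizer, TFAutoModel

# PyTorch-Gewichte laden (Spalte "PyTorch support")
pt_model = AutoModel.from_pretrained("bert-base-cased")

# TensorFlow-Gewichte laden (Spalte "TensorFlow support")
tf_model = TFAutoModel.from_pretrained("bert-base-cased")

# Der "schnelle" Tokenizer ist Standard; use_fast=False erzwingt den langsamen Python-Tokenizer
fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
```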
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installation Installieren Sie ๐Ÿค— Transformers fรผr die Deep-Learning-Bibliothek, mit der Sie arbeiten, richten Sie Ihren Cache ein und konfigurieren Sie ๐Ÿค— Transformers optional fรผr den Offline-Betrieb. ๐Ÿค— Transformers wurde unter Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, und Flax getestet. Folgen Sie den Installationsanweisungen unten fรผr die von Ihnen verwendete Deep-Learning-Bibliothek: * [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions. * [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. ## Installation mit pip Sie sollten ๐Ÿค— Transformers in einer [virtuellen Umgebung](https://docs.python.org/3/library/venv.html) installieren. Wenn Sie mit virtuellen Python-Umgebungen nicht vertraut sind, werfen Sie einen Blick auf diese [Anleitung](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Eine virtuelle Umgebung macht es einfacher, verschiedene Projekte zu verwalten und Kompatibilitรคtsprobleme zwischen Abhรคngigkeiten zu vermeiden. Beginnen wir mit der Erstellung einer virtuellen Umgebung in Ihrem Projektverzeichnis: ```bash python -m venv .env ``` Aktivieren wir die virtuelle Umgebung. Unter Linux und MacOs: ```bash source .env/bin/activate ``` Aktivieren wir die virtuelle Umgebung unter Windows ```bash .env/Scripts/activate ``` Jetzt kรถnnen wir die ๐Ÿค— Transformers mit dem folgenden Befehl installieren: ```bash pip install transformers ``` Bei reiner CPU-Unterstรผtzung kรถnnen wir ๐Ÿค— Transformers und eine Deep-Learning-Bibliothek bequem in einer Zeile installieren. Installieren wir zum Beispiel ๐Ÿค— Transformers und PyTorch mit: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers und TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers und Flax: ```bash pip install transformers[flax] ``` รœberprรผfen wir abschlieรŸend, ob ๐Ÿค— Transformers ordnungsgemรครŸ installiert wurde, indem wir den folgenden Befehl ausfรผhren. Es wird ein vortrainiertes Modell heruntergeladen: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Dann wird die Kategorie und die Wahrscheinlichkeit ausgegeben: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Installation aus dem Code Installieren wir ๐Ÿค— Transformers aus dem Quellcode mit dem folgenden Befehl: ```bash pip install git+https://github.com/huggingface/transformers ``` Dieser Befehl installiert die aktuelle `main` Version und nicht die neueste `stable` Version. Die `main`-Version ist nรผtzlich, um mit den neuesten Entwicklungen Schritt zu halten. 
Das ist zum Beispiel der Fall, wenn ein Fehler seit der letzten offiziellen Version behoben wurde, aber noch keine neue Version veröffentlicht wurde. Das bedeutet jedoch, dass die `main`-Version nicht immer stabil ist. Wir bemühen uns, die `main`-Version einsatzbereit zu halten, und die meisten Probleme werden normalerweise innerhalb weniger Stunden oder eines Tages behoben. Wenn Sie auf ein Problem stoßen, öffnen Sie bitte ein [Issue](https://github.com/huggingface/transformers/issues), damit wir es noch schneller beheben können!

Überprüfen wir, ob 🤗 Transformers richtig installiert wurde, indem Sie den folgenden Befehl ausführen:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```

## Editierbare Installation

Sie benötigen eine editierbare Installation, wenn Sie:

* die `main`-Version des Quellcodes verwenden möchten.
* zu 🤗 Transformers beitragen und Änderungen am Code testen wollen.

Klonen Sie das Repository und installieren Sie 🤗 Transformers mit den folgenden Befehlen:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```

Diese Befehle verknüpfen den Ordner, in den Sie das Repository geklont haben, mit den Pfaden Ihrer Python-Bibliotheken. Python sucht nun zusätzlich zu den normalen Bibliothekspfaden auch in dem Ordner, in den Sie geklont haben. Wenn Ihre Python-Pakete zum Beispiel normalerweise in `~/anaconda3/envs/main/lib/python3.7/site-packages/` installiert sind, durchsucht Python auch den geklonten Ordner: `~/transformers/`.

<Tip warning={true}>

Sie müssen den Ordner `transformers` behalten, wenn Sie die Bibliothek weiter verwenden wollen.

</Tip>

Jetzt können Sie Ihren Klon mit dem folgenden Befehl ganz einfach auf die neueste Version von 🤗 Transformers aktualisieren:

```bash
cd ~/transformers/
git pull
```

Ihre Python-Umgebung wird beim nächsten Ausführen die `main`-Version von 🤗 Transformers finden.

## Installation mit conda

Installation über den conda-Kanal `huggingface`:

```bash
conda install -c huggingface transformers
```

## Cache Einrichtung

Vortrainierte Modelle werden heruntergeladen und lokal zwischengespeichert unter: `~/.cache/huggingface/hub`. Dies ist das Standardverzeichnis, das durch die Shell-Umgebungsvariable `TRANSFORMERS_CACHE` vorgegeben ist. Unter Windows lautet das Standardverzeichnis `C:\Benutzer\Benutzername\.cache\huggingface\hub`. Sie können die unten aufgeführten Shell-Umgebungsvariablen - in der Reihenfolge ihrer Priorität - ändern, um ein anderes Cache-Verzeichnis anzugeben:

1. Shell-Umgebungsvariable (Standard): `HUGGINGFACE_HUB_CACHE` oder `TRANSFORMERS_CACHE`.
2. Shell-Umgebungsvariable: `HF_HOME`.
3. Shell-Umgebungsvariable: `XDG_CACHE_HOME` + `/huggingface`.

<Tip>

Transformers verwendet die Shell-Umgebungsvariablen `PYTORCH_TRANSFORMERS_CACHE` oder `PYTORCH_PRETRAINED_BERT_CACHE`, wenn Sie von einer früheren Iteration dieser Bibliothek kommen und diese Umgebungsvariablen gesetzt haben, sofern Sie nicht die Shell-Umgebungsvariable `TRANSFORMERS_CACHE` angeben.

</Tip>

## Offline Modus

Transformers ist in der Lage, in einer Firewall- oder Offline-Umgebung zu laufen, indem es nur lokale Dateien verwendet. Setzen Sie die Umgebungsvariable `TRANSFORMERS_OFFLINE=1`, um dieses Verhalten zu aktivieren.

<Tip>

Fügen Sie [🤗 Datasets](https://huggingface.co/docs/datasets/) zu Ihrem Offline-Trainingsworkflow hinzu, indem Sie die Umgebungsvariable `HF_DATASETS_OFFLINE=1` setzen.
</Tip> So wรผrden Sie beispielsweise ein Programm in einem normalen Netzwerk mit einer Firewall fรผr externe Instanzen mit dem folgenden Befehl ausfรผhren: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Fรผhren Sie das gleiche Programm in einer Offline-Instanz mit aus: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Das Skript sollte nun laufen, ohne sich aufzuhรคngen oder eine Zeitรผberschreitung abzuwarten, da es weiรŸ, dass es nur nach lokalen Dateien suchen soll. ### Abrufen von Modellen und Tokenizern zur Offline-Verwendung Eine andere Mรถglichkeit, ๐Ÿค— Transformers offline zu verwenden, besteht darin, die Dateien im Voraus herunterzuladen und dann auf ihren lokalen Pfad zu verweisen, wenn Sie sie offline verwenden mรผssen. Es gibt drei Mรถglichkeiten, dies zu tun: * Laden Sie eine Datei รผber die Benutzeroberflรคche des [Model Hub](https://huggingface.co/models) herunter, indem Sie auf das โ†“-Symbol klicken. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Verwenden Sie den [PreTrainedModel.from_pretrained] und [PreTrainedModel.save_pretrained] Workflow: 1. Laden Sie Ihre Dateien im Voraus mit [`PreTrainedModel.from_pretrained`] herunter: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Speichern Sie Ihre Dateien in einem bestimmten Verzeichnis mit [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./your/path/bigscience_t0") >>> model.save_pretrained("./your/path/bigscience_t0") ``` 3. Wenn Sie nun offline sind, laden Sie Ihre Dateien mit [`PreTrainedModel.from_pretrained`] aus dem bestimmten Verzeichnis: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0") ``` * Programmatisches Herunterladen von Dateien mit der [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) Bibliothek: 1. Installieren Sie die "huggingface_hub"-Bibliothek in Ihrer virtuellen Umgebung: ```bash python -m pip install huggingface_hub ``` 2. Verwenden Sie die Funktion [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub), um eine Datei in einen bestimmten Pfad herunterzuladen. Der folgende Befehl lรคdt zum Beispiel die Datei "config.json" aus dem Modell [T0](https://huggingface.co/bigscience/T0_3B) in den gewรผnschten Pfad herunter: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0") ``` Sobald Ihre Datei heruntergeladen und lokal zwischengespeichert ist, geben Sie den lokalen Pfad an, um sie zu laden und zu verwenden: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json") ``` <Tip> Weitere Informationen zum Herunterladen von Dateien, die auf dem Hub gespeichert sind, finden Sie im Abschnitt [Wie man Dateien vom Hub herunterlรคdt] (https://huggingface.co/docs/hub/how-to-downstream). </Tip>
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Schnellstart - local: installation title: Installation title: Erste Schritte - sections: - local: pipeline_tutorial title: Pipelines fรผr Inferenzen - local: autoclass_tutorial title: Laden von vortrainierten Instanzen mit einer AutoClass - local: preprocessing title: Vorverarbeiten - local: training title: Optimierung eines vortrainierten Modells - local: accelerate title: Verteiltes Training mit ๐Ÿค— Accelerate - local: model_sharing title: Ein Modell teilen title: Tutorials
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/preprocessing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Vorverarbeiten [[open-in-colab]] Bevor Sie Ihre Daten in einem Modell verwenden kรถnnen, mรผssen die Daten in ein fรผr das Modell akzeptables Format gebracht werden. Ein Modell versteht keine Rohtexte, Bilder oder Audiodaten. Diese Eingaben mรผssen in Zahlen umgewandelt und zu Tensoren zusammengesetzt werden. In dieser Anleitung werden Sie: * Textdaten mit einem Tokenizer vorverarbeiten. * Bild- oder Audiodaten mit einem Feature Extractor vorverarbeiten. * Daten fรผr eine multimodale Aufgabe mit einem Prozessor vorverarbeiten. ## NLP <Youtube id="Yffk5aydLzg"/> Das wichtigste Werkzeug zur Verarbeitung von Textdaten ist ein [Tokenizer](main_classes/tokenizer). Ein Tokenizer zerlegt Text zunรคchst nach einer Reihe von Regeln in *Token*. Die Token werden in Zahlen umgewandelt, die zum Aufbau von Tensoren als Eingabe fรผr ein Modell verwendet werden. Alle zusรคtzlichen Eingaben, die ein Modell benรถtigt, werden ebenfalls vom Tokenizer hinzugefรผgt. <Tip> Wenn Sie ein vortrainiertes Modell verwenden mรถchten, ist es wichtig, den zugehรถrigen vortrainierten Tokenizer zu verwenden. Dadurch wird sichergestellt, dass der Text auf die gleiche Weise aufgeteilt wird wie das Pretraining-Korpus und die gleichen entsprechenden Token-zu-Index (in der Regel als *vocab* bezeichnet) wรคhrend des Pretrainings verwendet werden. </Tip> Laden Sie einen vortrainierten Tokenizer mit der Klasse [AutoTokenizer], um schnell loszulegen. Damit wird das *vocab* heruntergeladen, das verwendet wird, wenn ein Modell vortrainiert wird. ### Tokenize Laden Sie einen vortrainierten Tokenizer mit [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` Dann รผbergeben Sie Ihren Satz an den Tokenizer: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Der Tokenizer gibt ein Wรถrterbuch mit drei wichtigen Elementen zurรผck: * [input_ids](glossary#input-ids) sind die Indizes, die den einzelnen Token im Satz entsprechen. * [attention_mask](glossary#attention-mask) gibt an, ob ein Token beachtet werden soll oder nicht. * [token_type_ids](glossary#token-type-ids) gibt an, zu welcher Sequenz ein Token gehรถrt, wenn es mehr als eine Sequenz gibt. 
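Wenn Sie nachvollziehen möchten, welches Token hinter den einzelnen `input_ids` steckt, können Sie die IDs zum Beispiel mit `convert_ids_to_tokens` wieder in Token übersetzen (eine kleine Ergänzung zum obigen Beispiel):

```py
>>> tokens = tokenizer.convert_ids_to_tokens(encoded_input["input_ids"])
>>> # enthält neben den Wortstücken auch die Spezialtoken [CLS] und [SEP]
>>> print(tokens)
```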
Sie kรถnnen die `input_ids` dekodieren, um die ursprรผngliche Eingabe zurรผckzugeben: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` Wie Sie sehen kรถnnen, hat der Tokenisierer zwei spezielle Token - `CLS` und `SEP` (Klassifikator und Separator) - zum Satz hinzugefรผgt. Nicht alle Modelle benรถtigen spezielle Token, aber wenn dies der Fall ist, fรผgt der Tokenisierer sie automatisch fรผr Sie hinzu. Wenn Sie mehrere Sรคtze verarbeiten wollen, รผbergeben Sie die Sรคtze als Liste an den Tokenizer: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### Pad Dies bringt uns zu einem wichtigen Thema. Wenn Sie einen Haufen von Sรคtzen verarbeiten, sind diese nicht immer gleich lang. Das ist ein Problem, weil Tensoren, die Eingabe fรผr das Modell, eine einheitliche Form haben mรผssen. Padding ist eine Strategie, die sicherstellt, dass Tensoren rechteckig sind, indem ein spezielles *Padding-Token* zu Sรคtzen mit weniger Token hinzugefรผgt wird. Setzen Sie den Parameter "padding" auf "true", um die kรผrzeren Sequenzen im Stapel so aufzufรผllen, dass sie der lรคngsten Sequenz entsprechen: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` Beachten Sie, dass der Tokenizer den ersten und den dritten Satz mit einer "0" aufgefรผllt hat, weil sie kรผrzer sind! ### Kรผrzung Auf der anderen Seite des Spektrums kann es vorkommen, dass eine Sequenz zu lang fรผr ein Modell ist. In diesem Fall mรผssen Sie die Sequenz auf eine kรผrzere Lรคnge kรผrzen. Setzen Sie den Parameter "truncation" auf "true", um eine Sequenz auf die vom Modell akzeptierte Hรถchstlรคnge zu kรผrzen: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` ### Tensoren erstellen SchlieรŸlich mรถchten Sie, dass der Tokenizer die tatsรคchlichen Tensoren zurรผckgibt, die dem Modell zugefรผhrt werden. Setzen Sie den Parameter `return_tensors` entweder auf `pt` fรผr PyTorch, oder `tf` fรผr TensorFlow: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} ``` </tf> </frameworkcontent> ## Audio Audioeingaben werden anders vorverarbeitet als Texteingaben, aber das Endziel bleibt dasselbe: numerische Sequenzen zu erstellen, die das Modell verstehen kann. Ein [feature extractor](main_classes/feature_extractor) dient dem ausdrรผcklichen Zweck, Merkmale aus Rohbild- oder Audiodaten zu extrahieren und in Tensoren zu konvertieren. 
Bevor Sie beginnen, installieren Sie ๐Ÿค— Datasets, um einen Audio-Datensatz zu laden, mit dem Sie experimentieren kรถnnen: ```bash pip install datasets ``` Laden Sie den [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) Datensatz (weitere Informationen zum Laden eines Datensatzes finden Sie im ๐Ÿค— [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub.html)): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Greifen Sie auf das erste Element der `audio`-Spalte zu, um einen Blick auf die Eingabe zu werfen. Durch den Aufruf der Spalte "audio" wird die Audiodatei automatisch geladen und neu gesampelt: ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` Dies gibt drei Elemente zurรผck: * "array" ist das Sprachsignal, das als 1D-Array geladen - und mรถglicherweise neu gesampelt - wurde. * Pfad" zeigt auf den Speicherort der Audiodatei. * `sampling_rate` bezieht sich darauf, wie viele Datenpunkte im Sprachsignal pro Sekunde gemessen werden. ### Resample Fรผr dieses Tutorial werden Sie das Modell [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) verwenden. Wie Sie aus der Modellkarte ersehen kรถnnen, ist das Wav2Vec2-Modell auf 16kHz abgetastetes Sprachaudio vortrainiert. Es ist wichtig, dass die Abtastrate Ihrer Audiodaten mit der Abtastrate des Datensatzes รผbereinstimmt, der fรผr das Pre-Training des Modells verwendet wurde. Wenn die Abtastrate Ihrer Daten nicht dieselbe ist, mรผssen Sie Ihre Audiodaten neu abtasten. Der Datensatz [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) hat zum Beispiel eine Abtastrate von 8000 kHz. Um das Wav2Vec2-Modell mit diesem Datensatz verwenden zu kรถnnen, mรผssen Sie die Abtastrate auf 16 kHz erhรถhen: ```py >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` 1. Verwenden Sie die Methode [~datasets.Dataset.cast_column] von ๐Ÿค— Datasets, um die Abtastrate auf 16kHz zu erhรถhen: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. Laden Sie die Audiodatei: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` Wie Sie sehen kรถnnen, ist die Abtastrate jetzt 16kHz! ### Merkmalsextraktor Der nรคchste Schritt ist das Laden eines Merkmalsextraktors, um die Eingabe zu normalisieren und aufzufรผllen. Beim Auffรผllen von Textdaten wird fรผr kรผrzere Sequenzen ein `0` hinzugefรผgt. Die gleiche Idee gilt fรผr Audiodaten, und der Audio-Feature-Extraktor fรผgt eine `0` - interpretiert als Stille - zu `array` hinzu. 
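Zur reinen Veranschaulichung dieses Prinzips, unabhängig von 🤗 Transformers und nur mit NumPy: Das kürzere Signal wird am Ende so lange mit Nullen aufgefüllt, bis es die Länge des längeren Signals erreicht.

```py
>>> import numpy as np

>>> kurz = np.array([0.1, -0.2, 0.3], dtype=np.float32)
>>> lang = np.array([0.5, 0.1, -0.4, 0.2, 0.7], dtype=np.float32)

>>> # mit Nullen ("Stille") auf die Länge des längeren Signals auffüllen
>>> np.pad(kurz, (0, len(lang) - len(kurz)))
array([ 0.1, -0.2,  0.3,  0. ,  0. ], dtype=float32)
```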
Laden Sie den Merkmalsextraktor mit [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` รœbergeben Sie das Audio-"Array" an den Feature-Extraktor. Wir empfehlen auch, das Argument `sampling_rate` im Feature Extractor hinzuzufรผgen, um eventuell auftretende stille Fehler besser zu beheben. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ### Auffรผllen und Kรผrzen Genau wie beim Tokenizer kรถnnen Sie variable Sequenzen in einem Stapel durch Auffรผllen oder Abschneiden behandeln. Werfen Sie einen Blick auf die Sequenzlรคnge dieser beiden Audiobeispiele: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` Wie Sie sehen kรถnnen, hat das erste Beispiel eine lรคngere Sequenz als das zweite Beispiel. Lassen Sie uns eine Funktion erstellen, die den Datensatz vorverarbeitet. Geben Sie eine maximale Lรคnge der Probe an, und der Feature-Extraktor wird die Sequenzen entweder auffรผllen oder abschneiden, damit sie dieser Lรคnge entsprechen: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` Wenden Sie die Funktion auf die ersten paar Beispiele im Datensatz an: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` Schauen Sie sich nun noch einmal die verarbeiteten Beispiel-Lรคngen an: ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` Die Lรคnge der ersten beiden Beispiele entspricht nun der von Ihnen angegebenen Maximallรคnge. ## Bildverarbeitung Ein Merkmalsextraktor wird auch verwendet, um Bilder fรผr Bildverarbeitungsaufgaben zu verarbeiten. Auch hier besteht das Ziel darin, das Rohbild in eine Reihe von Tensoren als Eingabe zu konvertieren. Laden wir den [food101](https://huggingface.co/datasets/food101) Datensatz fรผr dieses Tutorial. Verwenden Sie den Parameter ๐Ÿค— Datasets `split`, um nur eine kleine Stichprobe aus dem Trainingssplit zu laden, da der Datensatz recht groรŸ ist: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` Als Nรคchstes sehen Sie sich das Bild mit dem Merkmal ๐Ÿค— Datensรคtze [Bild] (https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) an: ```py >>> dataset[0]["image"] ``` ![vision-preprocess-tutorial.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png) ### Merkmalsextraktor Laden Sie den Merkmalsextraktor mit [`AutoImageProcessor.from_pretrained`]: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ### Datenerweiterung Bei Bildverarbeitungsaufgaben ist es รผblich, den Bildern als Teil der Vorverarbeitung eine Art von Datenerweiterung hinzuzufรผgen. 
Sie kรถnnen Erweiterungen mit jeder beliebigen Bibliothek hinzufรผgen, aber in diesem Tutorial werden Sie das Modul [`transforms`](https://pytorch.org/vision/stable/transforms.html) von torchvision verwenden. 1. Normalisieren Sie das Bild und verwenden Sie [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html), um einige Transformationen - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) und [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) - miteinander zu verknรผpfen: ```py >>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> _transforms = Compose( ... [RandomResizedCrop(image_processor.size["height"]), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize] ... ) ``` 2. Das Modell akzeptiert [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) als Eingabe. Dieser Wert wird vom Merkmalsextraktor erzeugt. Erstellen Sie eine Funktion, die `pixel_values` aus den Transformationen erzeugt: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]] ... return examples ``` 3. Dann verwenden Sie ๐Ÿค— Datasets [`set_transform`](https://huggingface.co/docs/datasets/process.html#format-transform), um die Transformationen im laufenden Betrieb anzuwenden: ```py >>> dataset.set_transform(transforms) ``` 4. Wenn Sie nun auf das Bild zugreifen, werden Sie feststellen, dass der Feature Extractor die Modelleingabe "pixel_values" hinzugefรผgt hat: ```py >>> dataset[0]["image"] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>, 'label': 6, 'pixel_values': tensor([[[ 0.0353, 0.0745, 0.1216, ..., -0.9922, -0.9922, -0.9922], [-0.0196, 0.0667, 0.1294, ..., -0.9765, -0.9843, -0.9922], [ 0.0196, 0.0824, 0.1137, ..., -0.9765, -0.9686, -0.8667], ..., [ 0.0275, 0.0745, 0.0510, ..., -0.1137, -0.1216, -0.0824], [ 0.0667, 0.0824, 0.0667, ..., -0.0588, -0.0745, -0.0980], [ 0.0353, 0.0353, 0.0431, ..., -0.0039, -0.0039, -0.0588]], [[ 0.2078, 0.2471, 0.2863, ..., -0.9451, -0.9373, -0.9451], [ 0.1608, 0.2471, 0.3098, ..., -0.9373, -0.9451, -0.9373], [ 0.2078, 0.2706, 0.3020, ..., -0.9608, -0.9373, -0.8275], ..., [-0.0353, 0.0118, -0.0039, ..., -0.2392, -0.2471, -0.2078], [ 0.0196, 0.0353, 0.0196, ..., -0.1843, -0.2000, -0.2235], [-0.0118, -0.0039, -0.0039, ..., -0.0980, -0.0980, -0.1529]], [[ 0.3961, 0.4431, 0.4980, ..., -0.9216, -0.9137, -0.9216], [ 0.3569, 0.4510, 0.5216, ..., -0.9059, -0.9137, -0.9137], [ 0.4118, 0.4745, 0.5216, ..., -0.9137, -0.8902, -0.7804], ..., [-0.2314, -0.1922, -0.2078, ..., -0.4196, -0.4275, -0.3882], [-0.1843, -0.1686, -0.2000, ..., -0.3647, -0.3804, -0.4039], [-0.1922, -0.1922, -0.1922, ..., -0.2941, -0.2863, -0.3412]]])} ``` Hier sehen Sie, wie das Bild nach der Vorverarbeitung aussieht. Wie von den angewandten Transformationen zu erwarten, wurde das Bild willkรผrlich beschnitten und seine Farbeigenschaften sind anders. 
```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

![preprocessed_image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png)

## Multimodal

Fรผr multimodale Aufgaben werden Sie eine Kombination aus allem, was Sie bisher gelernt haben, verwenden und Ihre Fรคhigkeiten auf eine Aufgabe der automatischen Spracherkennung (ASR) anwenden. Dies bedeutet, dass Sie Folgendes benรถtigen:

* einen Feature Extractor zur Vorverarbeitung der Audiodaten,
* einen Tokenizer, um den Text zu verarbeiten.

Kehren wir zum [LJ Speech](https://huggingface.co/datasets/lj_speech) Datensatz zurรผck:

```py
>>> from datasets import load_dataset

>>> lj_speech = load_dataset("lj_speech", split="train")
```

Da Sie hauptsรคchlich an den Spalten "audio" und "text" interessiert sind, entfernen Sie die anderen Spalten:

```py
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```

Schauen Sie sich nun die Spalten "audio" und "text" an:

```py
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
         7.3242188e-04,  2.1362305e-04,  6.1035156e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
 'sampling_rate': 22050}

>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```

Erinnern Sie sich an den frรผheren Abschnitt รผber die Verarbeitung von Audiodaten: Sie sollten die Abtastrate Ihrer Audiodaten immer [neu abtasten](preprocessing#audio), damit sie mit der Abtastrate des Datensatzes รผbereinstimmt, der fรผr das Vortraining eines Modells verwendet wird:

```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```

### Prozessor

Ein Processor kombiniert einen Feature-Extraktor und einen Tokenizer. Laden Sie einen Processor mit [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```

1. Erstellen Sie eine Funktion, die die Audiodaten zu `input_values` verarbeitet und den Text zu `labels` tokenisiert. Dies sind Ihre Eingaben fรผr das Modell:

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]
...     example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
...     return example
```

2. Wenden Sie die Funktion `prepare_dataset` auf ein Beispiel an:

```py
>>> prepare_dataset(lj_speech[0])
```

Beachten Sie, dass der Processor `input_values` und `labels` hinzugefรผgt hat. Auch die Abtastrate wurde korrekt auf 16kHz heruntergerechnet.

Toll, Sie sollten jetzt in der Lage sein, Daten fรผr jede Modalitรคt vorzuverarbeiten und sogar verschiedene Modalitรคten zu kombinieren! Im nรคchsten Kurs lernen Sie, wie Sie ein Modell mit Ihren neu aufbereiteten Daten feinabstimmen kรถnnen.
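Als optionale Ergรคnzung: eine kleine, rein illustrative Skizze, wie Sie die Funktion `prepare_dataset` mit der ๐Ÿค— Datasets-Methode [`~datasets.Dataset.map`] auf den gesamten Datensatz anwenden kรถnnten. Die zu entfernenden Spaltennamen sind Annahmen auf Basis der obigen Beispiele:

```py
>>> # Illustrative Skizze; die Spaltennamen "audio" und "text" sind Annahmen aus den obigen Beispielen
>>> lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
>>> lj_speech[0].keys()  # enthรคlt nun unter anderem "input_values" und "labels"
```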
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Ein Modell teilen Die letzten beiden Tutorials haben gezeigt, wie man ein Modell mit PyTorch, Keras und ๐Ÿค— Accelerate fรผr verteilte Setups feinabstimmen kann. Der nรคchste Schritt besteht darin, Ihr Modell mit der Community zu teilen! Bei Hugging Face glauben wir an den offenen Austausch von Wissen und Ressourcen, um kรผnstliche Intelligenz fรผr alle zu demokratisieren. Wir ermutigen Sie, Ihr Modell mit der Community zu teilen, um anderen zu helfen, Zeit und Ressourcen zu sparen. In diesem Tutorial lernen Sie zwei Methoden kennen, wie Sie ein trainiertes oder verfeinertes Modell auf dem [Model Hub](https://huggingface.co/models) teilen kรถnnen: - Programmgesteuertes รœbertragen Ihrer Dateien auf den Hub. - Ziehen Sie Ihre Dateien per Drag-and-Drop รผber die Weboberflรคche in den Hub. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> Um ein Modell mit der ร–ffentlichkeit zu teilen, benรถtigen Sie ein Konto auf [huggingface.co](https://huggingface.co/join). Sie kรถnnen auch einer bestehenden Organisation beitreten oder eine neue Organisation grรผnden. </Tip> ## Repository-Funktionen Jedes Repository im Model Hub verhรคlt sich wie ein typisches GitHub-Repository. Unsere Repositorys bieten Versionierung, Commit-Historie und die Mรถglichkeit, Unterschiede zu visualisieren. Die integrierte Versionierung des Model Hub basiert auf Git und [git-lfs](https://git-lfs.github.com/). Mit anderen Worten: Sie kรถnnen ein Modell als ein Repository behandeln, was eine bessere Zugriffskontrolle und Skalierbarkeit ermรถglicht. Die Versionskontrolle ermรถglicht *Revisionen*, eine Methode zum Anheften einer bestimmten Version eines Modells mit einem Commit-Hash, Tag oder Branch. Folglich kรถnnen Sie eine bestimmte Modellversion mit dem Parameter "Revision" laden: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ... ) ``` Dateien lassen sich auch in einem Repository leicht bearbeiten, und Sie kรถnnen die Commit-Historie sowie die Unterschiede einsehen: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Einrichtung Bevor Sie ein Modell fรผr den Hub freigeben, benรถtigen Sie Ihre Hugging Face-Anmeldedaten. Wenn Sie Zugang zu einem Terminal haben, fรผhren Sie den folgenden Befehl in der virtuellen Umgebung aus, in der ๐Ÿค— Transformers installiert ist. 
Dadurch werden Ihre Zugangsdaten in Ihrem Hugging Face-Cache-Ordner (standardmรครŸig `~/.cache/`) gespeichert: ```bash huggingface-cli login ``` Wenn Sie ein Notebook wie Jupyter oder Colaboratory verwenden, stellen Sie sicher, dass Sie die [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) Bibliothek installiert haben. Diese Bibliothek ermรถglicht Ihnen die programmatische Interaktion mit dem Hub. ```bash pip install huggingface_hub ``` Verwenden Sie dann `notebook_login`, um sich beim Hub anzumelden, und folgen Sie dem Link [hier](https://huggingface.co/settings/token), um ein Token fรผr die Anmeldung zu generieren: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Ein Modell fรผr alle Frameworks konvertieren Um sicherzustellen, dass Ihr Modell von jemandem verwendet werden kann, der mit einem anderen Framework arbeitet, empfehlen wir Ihnen, Ihr Modell sowohl mit PyTorch- als auch mit TensorFlow-Checkpoints zu konvertieren und hochzuladen. Wรคhrend Benutzer immer noch in der Lage sind, Ihr Modell von einem anderen Framework zu laden, wenn Sie diesen Schritt รผberspringen, wird es langsamer sein, weil ๐Ÿค— Transformers den Checkpoint on-the-fly konvertieren mรผssen. Die Konvertierung eines Checkpoints fรผr ein anderes Framework ist einfach. Stellen Sie sicher, dass Sie PyTorch und TensorFlow installiert haben (siehe [hier](installation) fรผr Installationsanweisungen), und finden Sie dann das spezifische Modell fรผr Ihre Aufgabe in dem anderen Framework. <frameworkcontent> <pt> Geben Sie `from_tf=True` an, um einen Prรผfpunkt von TensorFlow nach PyTorch zu konvertieren: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> Geben Sie `from_pt=True` an, um einen Prรผfpunkt von PyTorch nach TensorFlow zu konvertieren: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` Dann kรถnnen Sie Ihr neues TensorFlow-Modell mit seinem neuen Checkpoint speichern: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> Wenn ein Modell in Flax verfรผgbar ist, kรถnnen Sie auch einen Kontrollpunkt von PyTorch nach Flax konvertieren: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Ein Modell wรคhrend des Trainings hochladen <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Die Weitergabe eines Modells an den Hub ist so einfach wie das Hinzufรผgen eines zusรคtzlichen Parameters oder Rรผckrufs. Erinnern Sie sich an das [Feinabstimmungs-Tutorial](training), in der Klasse [`TrainingArguments`] geben Sie Hyperparameter und zusรคtzliche Trainingsoptionen an. Eine dieser Trainingsoptionen beinhaltet die Mรถglichkeit, ein Modell direkt an den Hub zu pushen. Setzen Sie `push_to_hub=True` in Ihrer [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` รœbergeben Sie Ihre Trainingsargumente wie gewohnt an [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... 
) ``` Nach der Feinabstimmung Ihres Modells rufen Sie [`~transformers.Trainer.push_to_hub`] auf [`Trainer`] auf, um das trainierte Modell an den Hub zu รผbertragen. Transformers fรผgt sogar automatisch Trainings-Hyperparameter, Trainingsergebnisse und Framework-Versionen zu Ihrer Modellkarte hinzu! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Geben Sie ein Modell mit [`PushToHubCallback`] an den Hub weiter. In der [`PushToHubCallback`] Funktion, fรผgen Sie hinzu: - Ein Ausgabeverzeichnis fรผr Ihr Modell. - Einen Tokenizer. - Die `hub_model_id`, die Ihr Hub-Benutzername und Modellname ist. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` Fรผgen Sie den Callback zu [`fit`](https://keras.io/api/models/model_training_apis/) hinzu, und ๐Ÿค— Transformers wird das trainierte Modell an den Hub weiterleiten: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Verwenden Sie die Funktion `push_to_hub`. Sie kรถnnen `push_to_hub` auch direkt fรผr Ihr Modell aufrufen, um es in den Hub hochzuladen. Geben Sie den Namen Ihres Modells in "push_to_hub" an: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` Dadurch wird ein Repository unter Ihrem Benutzernamen mit dem Modellnamen `my-awesome-model` erstellt. Benutzer kรถnnen nun Ihr Modell mit der Funktion `from_pretrained` laden: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` Wenn Sie zu einer Organisation gehรถren und Ihr Modell stattdessen unter dem Namen der Organisation pushen wollen, fรผgen Sie diesen einfach zur `repo_id` hinzu: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` Die Funktion "push_to_hub" kann auch verwendet werden, um andere Dateien zu einem Modell-Repository hinzuzufรผgen. Zum Beispiel kann man einen Tokenizer zu einem Modell-Repository hinzufรผgen: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` Oder vielleicht mรถchten Sie die TensorFlow-Version Ihres fein abgestimmten PyTorch-Modells hinzufรผgen: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` Wenn Sie nun zu Ihrem Hugging Face-Profil navigieren, sollten Sie Ihr neu erstelltes Modell-Repository sehen. Wenn Sie auf die Registerkarte **Dateien** klicken, werden alle Dateien angezeigt, die Sie in das Repository hochgeladen haben. Weitere Einzelheiten zum Erstellen und Hochladen von Dateien in ein Repository finden Sie in der Hub-Dokumentation [hier](https://huggingface.co/docs/hub/how-to-upstream). ## Hochladen mit der Weboberflรคche Benutzer, die einen no-code Ansatz bevorzugen, kรถnnen ein Modell รผber das Webinterface des Hubs hochladen. Besuchen Sie [huggingface.co/new](https://huggingface.co/new) um ein neues Repository zu erstellen: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) Fรผgen Sie von hier aus einige Informationen รผber Ihr Modell hinzu: - Wรคhlen Sie den **Besitzer** des Repositorys. Dies kรถnnen Sie selbst oder eine der Organisationen sein, denen Sie angehรถren. - Wรคhlen Sie einen Namen fรผr Ihr Modell, der auch der Name des Repositorys sein wird. - Wรคhlen Sie, ob Ihr Modell รถffentlich oder privat ist. - Geben Sie die Lizenzverwendung fรผr Ihr Modell an. 
Klicken Sie nun auf die Registerkarte **Dateien** und klicken Sie auf die Schaltflรคche **Datei hinzufรผgen**, um eine neue Datei in Ihr Repository hochzuladen. Ziehen Sie dann eine Datei per Drag-and-Drop hoch und fรผgen Sie eine รœbergabemeldung hinzu. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Hinzufรผgen einer Modellkarte Um sicherzustellen, dass die Benutzer die Fรคhigkeiten, Grenzen, mรถglichen Verzerrungen und ethischen Aspekte Ihres Modells verstehen, fรผgen Sie bitte eine Modellkarte zu Ihrem Repository hinzu. Die Modellkarte wird in der Datei `README.md` definiert. Sie kรถnnen eine Modellkarte hinzufรผgen, indem Sie: * Manuelles Erstellen und Hochladen einer "README.md"-Datei. * Klicken Sie auf die Schaltflรคche **Modellkarte bearbeiten** in Ihrem Modell-Repository. Werfen Sie einen Blick auf die DistilBert [model card](https://huggingface.co/distilbert-base-uncased) als gutes Beispiel fรผr die Art von Informationen, die eine Modellkarte enthalten sollte. Weitere Details รผber andere Optionen, die Sie in der Datei "README.md" einstellen kรถnnen, wie z.B. den Kohlenstoff-FuรŸabdruck eines Modells oder Beispiele fรผr Widgets, finden Sie in der Dokumentation [hier](https://huggingface.co/docs/hub/models-cards).
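Alternativ lรคsst sich eine Modellkarte auch programmatisch anlegen. Die folgende Skizze verwendet die `huggingface_hub`-Bibliothek und geht davon aus, dass das Repository `your-username/my-awesome-model` bereits existiert; die Metadaten sind reine Platzhalter:

```py
>>> from huggingface_hub import ModelCard

>>> inhalt = """---
... license: apache-2.0
... language: de
... ---
...
... # my-awesome-model
...
... Kurze Beschreibung des Modells, der Trainingsdaten und der bekannten Einschrรคnkungen.
... """
>>> card = ModelCard(inhalt)
>>> card.push_to_hub("your-username/my-awesome-model")
```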
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pipelines fรผr Inferenzen Die [`pipeline`] macht es einfach, jedes beliebige Modell aus dem [Hub](https://huggingface.co/models) fรผr die Inferenz auf jede Sprache, Computer Vision, Sprache und multimodale Aufgaben zu verwenden. Selbst wenn Sie keine Erfahrung mit einer bestimmten Modalitรคt haben oder nicht mit dem zugrundeliegenden Code hinter den Modellen vertraut sind, kรถnnen Sie sie mit der [`pipeline`] fรผr Inferenzen verwenden! In diesem Beispiel lernen Sie, wie: * Eine [`pipeline`] fรผr Inferenz zu verwenden. * Einen bestimmten Tokenizer oder ein bestimmtes Modell zu verwenden. * Eine [`pipeline`] fรผr Audio-, Vision- und multimodale Aufgaben zu verwenden. <Tip> Eine vollstรคndige Liste der unterstรผtzten Aufgaben und verfรผgbaren Parameter finden Sie in der [`pipeline`]-Dokumentation. </Tip> ## Verwendung von Pipelines Obwohl jede Aufgabe eine zugehรถrige [`pipeline`] hat, ist es einfacher, die allgemeine [`pipeline`]-Abstraktion zu verwenden, die alle aufgabenspezifischen Pipelines enthรคlt. Die [`pipeline`] lรคdt automatisch ein Standardmodell und eine Vorverarbeitungsklasse, die fรผr Ihre Aufgabe inferenzfรคhig ist. 1. Beginnen Sie mit der Erstellung einer [`pipeline`] und geben Sie eine Inferenzaufgabe an: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation") ``` 2. รœbergeben Sie Ihren Eingabetext an die [`pipeline`]: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}] ``` Wenn Sie mehr als eine Eingabe haben, รผbergeben Sie die Eingabe als Liste: ```py >>> generator( ... [ ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne", ... ] ... ) # doctest: +SKIP ``` Alle zusรคtzlichen Parameter fรผr Ihre Aufgabe kรถnnen auch in die [`pipeline`] aufgenommen werden. Die Aufgabe `Text-Generierung` hat eine [`~generation.GenerationMixin.generate`]-Methode mit mehreren Parametern zur Steuerung der Ausgabe. Wenn Sie zum Beispiel mehr als eine Ausgabe erzeugen wollen, setzen Sie den Parameter `num_return_sequences`: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... num_return_sequences=2, ... ) # doctest: +SKIP ``` ### Wรคhlen Sie ein Modell und einen Tokenizer Die [`pipeline`] akzeptiert jedes Modell aus dem [Hub] (https://huggingface.co/models). 
Auf dem Hub gibt es Tags, mit denen Sie nach einem Modell filtern kรถnnen, das Sie fรผr Ihre Aufgabe verwenden mรถchten. Sobald Sie ein passendes Modell ausgewรคhlt haben, laden Sie es mit der entsprechenden `AutoModelFor` und [`AutoTokenizer`] Klasse. Laden Sie zum Beispiel die Klasse [`AutoModelForCausalLM`] fรผr eine kausale Sprachmodellierungsaufgabe: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") ``` Erstellen Sie eine [`pipeline`] fรผr Ihre Aufgabe, und geben Sie das Modell und den Tokenizer an, die Sie geladen haben: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) ``` รœbergeben Sie Ihren Eingabetext an die [`pipeline`] , um einen Text zu erzeugen: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}] ``` ## Audio-Pipeline Die [`pipeline`] unterstรผtzt auch Audioaufgaben wie Audioklassifizierung und automatische Spracherkennung. Lassen Sie uns zum Beispiel die Emotion in diesem Audioclip klassifizieren: ```py >>> from datasets import load_dataset >>> import torch >>> torch.manual_seed(42) # doctest: +IGNORE_RESULT >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> audio_file = ds[0]["audio"]["path"] ``` Finden Sie ein [Audioklassifikation](https://huggingface.co/models?pipeline_tag=audio-classification) Modell auf dem Model Hub fรผr Emotionserkennung und laden Sie es in die [`pipeline`]: ```py >>> from transformers import pipeline >>> audio_classifier = pipeline( ... task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` รœbergeben Sie die Audiodatei an die [`pipeline`]: ```py >>> preds = audio_classifier(audio_file) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}] ``` ## Bildverarbeitungs-Pipeline Die Verwendung einer [`pipeline`] fรผr Bildverarbeitungsaufgaben ist praktisch identisch. Geben Sie Ihre Aufgabe an und รผbergeben Sie Ihr Bild an den Klassifikator. Das Bild kann ein Link oder ein lokaler Pfad zu dem Bild sein. Zum Beispiel: Welche Katzenart ist unten abgebildet? ![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) ```py >>> from transformers import pipeline >>> vision_classifier = pipeline(task="image-classification") >>> preds = vision_classifier( ... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... 
) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ``` ## Multimodale Pipeline Die [`pipeline`] unterstรผtzt mehr als eine Modalitรคt. Eine Aufgabe zur Beantwortung visueller Fragen (VQA) kombiniert zum Beispiel Text und Bild. Verwenden Sie einen beliebigen Bildlink und eine Frage, die Sie zu dem Bild stellen mรถchten. Das Bild kann eine URL oder ein lokaler Pfad zu dem Bild sein. Wenn Sie zum Beispiel das gleiche Bild wie in der obigen Vision-Pipeline verwenden: ```py >>> image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" >>> question = "Where is the cat?" ``` Erstellen Sie eine Pipeline fรผr "vqa" und รผbergeben Sie ihr das Bild und die Frage: ```py >>> from transformers import pipeline >>> vqa = pipeline(task="vqa") >>> preds = vqa(image=image, question=question) >>> preds = [{"score": round(pred["score"], 4), "answer": pred["answer"]} for pred in preds] >>> preds [{'score': 0.9112, 'answer': 'snow'}, {'score': 0.8796, 'answer': 'in snow'}, {'score': 0.6717, 'answer': 'outside'}, {'score': 0.0291, 'answer': 'on ground'}, {'score': 0.027, 'answer': 'ground'}] ```
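Ein ergรคnzender Hinweis: Wie bei den anderen Aufgaben kรถnnen Sie auch hier ein Modell explizit angeben und die Pipeline mit dem Parameter `device` auf eine GPU legen. Die folgende Skizze ist nur ein Beispiel; der Checkpoint `dandelin/vilt-b32-finetuned-vqa` dient dabei als angenommenes Modell fรผr visuelle Fragebeantwortung:

```py
>>> from transformers import pipeline

>>> vqa = pipeline(task="vqa", model="dandelin/vilt-b32-finetuned-vqa", device=0)  # device=0 nutzt die erste GPU
>>> vqa(image=image, question=question)  # doctest: +SKIP
```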
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Transformers installation ! pip install transformers datasets # To install from source instead of the last release, comment the command above and uncomment the following one. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/accelerate.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Verteiltes Training mit ๐Ÿค— Accelerate Da die Modelle immer grรถรŸer werden, hat sich die Parallelitรคt als Strategie zum Trainieren grรถรŸerer Modelle auf begrenzter Hardware und zur Beschleunigung der Trainingsgeschwindigkeit um mehrere GrรถรŸenordnungen erwiesen. Bei Hugging Face haben wir die Bibliothek [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) entwickelt, um Nutzern zu helfen, ein ๐Ÿค— Transformers-Modell auf jeder Art von verteiltem Setup zu trainieren, egal ob es sich um mehrere GPUs auf einer Maschine oder mehrere GPUs auf mehreren Maschinen handelt. In diesem Tutorial lernen Sie, wie Sie Ihre native PyTorch-Trainingsschleife anpassen, um das Training in einer verteilten Umgebung zu ermรถglichen. ## Einrichtung Beginnen Sie mit der Installation von ๐Ÿค— Accelerate: ```bash pip install accelerate ``` Dann importieren und erstellen Sie ein [`~accelerate.Accelerator`]-Objekt. Der [`~accelerate.Accelerator`] wird automatisch Ihre Art der verteilten Einrichtung erkennen und alle notwendigen Komponenten fรผr das Training initialisieren. Sie mรผssen Ihr Modell nicht explizit auf einem Gerรคt platzieren. ```py >>> from accelerate import Accelerator >>> accelerator = Accelerator() ``` ## Vorbereiten auf die Beschleunigung Der nรคchste Schritt ist die รœbergabe aller relevanten Trainingsobjekte an die Methode [`~accelerate.Accelerator.prepare`]. Dazu gehรถren Ihre Trainings- und Evaluierungs-DataLoader, ein Modell und ein Optimierer: ```py >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) ``` ## Rรผckwรคrts Die letzte Ergรคnzung besteht darin, das typische `loss.backward()` in der Trainingsschleife durch die ๐Ÿค— Accelerate-Methode [`~accelerate.Accelerator.backward`] zu ersetzen: ```py >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` Wie Sie im folgenden Code sehen kรถnnen, mรผssen Sie nur vier zusรคtzliche Codezeilen zu Ihrer Trainingsschleife hinzufรผgen, um verteiltes Training zu ermรถglichen! 
```diff + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: - batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss - loss.backward() + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ``` ## Trainieren Sobald Sie die entsprechenden Codezeilen hinzugefรผgt haben, starten Sie Ihr Training in einem Skript oder einem Notebook wie Colaboratory. ### Trainieren mit einem Skript Wenn Sie Ihr Training mit einem Skript durchfรผhren, fรผhren Sie den folgenden Befehl aus, um eine Konfigurationsdatei zu erstellen und zu speichern: ```bash accelerate config ``` Dann starten Sie Ihr Training mit: ```bash accelerate launch train.py ``` ### Trainieren mit einem Notebook ๐Ÿค— Accelerate kann auch in einem Notebook laufen, wenn Sie planen, die TPUs von Colaboratory zu verwenden. Verpacken Sie den gesamten Code, der fรผr das Training verantwortlich ist, in eine Funktion und รผbergeben Sie diese an [`~accelerate.notebook_launcher`]: ```py >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) ``` Weitere Informationen รผber ๐Ÿค— Accelerate und seine umfangreichen Funktionen finden Sie in der [Dokumentation](https://huggingface.co/docs/accelerate).
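Ergรคnzend dazu: [`~accelerate.notebook_launcher`] akzeptiert auch Argumente fรผr Ihre Trainingsfunktion sowie die Anzahl der zu startenden Prozesse. Eine kurze, rein illustrative Skizze (die Werte sind Platzhalter):

```py
>>> from accelerate import notebook_launcher

>>> args = (model, train_dataloader)  # hypothetische Argumente fรผr training_function
>>> notebook_launcher(training_function, args=args, num_processes=8)  # z. B. 8 TPU-Kerne
```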
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/de/training.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Optimierung eines vortrainierten Modells [[open-in-colab]] Die Verwendung eines vorab trainierten Modells hat erhebliche Vorteile. Es reduziert die Rechenkosten und den CO2-FuรŸabdruck und ermรถglicht Ihnen die Verwendung von Modellen, die dem neuesten Stand der Technik entsprechen, ohne dass Sie ein Modell von Grund auf neu trainieren mรผssen. Transformers bietet Zugang zu Tausenden von vortrainierten Modellen fรผr eine Vielzahl von Aufgaben. Wenn Sie ein vorab trainiertes Modell verwenden, trainieren Sie es auf einem fรผr Ihre Aufgabe spezifischen Datensatz. Dies wird als Feinabstimmung bezeichnet und ist eine unglaublich leistungsfรคhige Trainingstechnik. In diesem Tutorial werden Sie ein vortrainiertes Modell mit einem Deep-Learning-Framework Ihrer Wahl feinabstimmen: * Feinabstimmung eines vorab trainierten Modells mit ๐Ÿค— Transformers [`Trainer`]. * Feinabstimmung eines vorab trainierten Modells in TensorFlow mit Keras. * Feinabstimmung eines vorab trainierten Modells in nativem PyTorch. <a id='data-processing'></a> ## Vorbereitung eines Datensatzes <Youtube id="_BZearw7f0w"/> Bevor Sie die Feinabstimmung eines vortrainierten Modells vornehmen kรถnnen, mรผssen Sie einen Datensatz herunterladen und fรผr das Training vorbereiten. Im vorangegangenen Leitfaden haben Sie gelernt, wie man Daten fรผr das Training aufbereitet, und jetzt haben Sie die Gelegenheit, diese Fรคhigkeiten zu testen! Laden Sie zunรคchst den Datensatz [Yelp Reviews](https://huggingface.co/datasets/yelp_review_full): ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. 
It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```

Wie Sie nun wissen, benรถtigen Sie einen Tokenizer, um den Text zu verarbeiten und eine Auffรผll- und Abschneidungsstrategie einzubauen, um mit variablen Sequenzlรคngen umzugehen. Um Ihren Datensatz in einem Schritt zu verarbeiten, verwenden Sie die Methode [`map`](https://huggingface.co/docs/datasets/process.html#map) von ๐Ÿค— Datasets, um eine Vorverarbeitungsfunktion auf den gesamten Datensatz anzuwenden:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


>>> def tokenize_function(examples):
...     return tokenizer(examples["text"], padding="max_length", truncation=True)


>>> tokenized_datasets = dataset.map(tokenize_function, batched=True)
```

Wenn Sie mรถchten, kรถnnen Sie eine kleinere Teilmenge des gesamten Datensatzes fรผr die Feinabstimmung erstellen, um den Zeitaufwand zu verringern:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```

<a id='trainer'></a>

## Training

An dieser Stelle sollten Sie dem Abschnitt folgen, der dem Framework entspricht, das Sie verwenden mรถchten. รœber die Links in der rechten Seitenleiste kรถnnen Sie zu dem gewรผnschten Abschnitt springen - und wenn Sie den gesamten Inhalt eines bestimmten Frameworks ausblenden mรถchten, klicken Sie einfach auf die Schaltflรคche oben rechts im Block des jeweiligen Frameworks!

<frameworkcontent>
<pt>
<Youtube id="nvBXf7s7vTI"/>

## Trainieren mit PyTorch Trainer

๐Ÿค— Transformers bietet eine [`Trainer`]-Klasse, die fรผr das Training von ๐Ÿค— Transformers-Modellen optimiert ist und es einfacher macht, mit dem Training zu beginnen, ohne manuell eine eigene Trainingsschleife zu schreiben. Die [`Trainer`]-API unterstรผtzt eine breite Palette von Trainingsoptionen und Funktionen wie Logging, Gradientenakkumulation und gemischte Prรคzision.

Beginnen Sie mit dem Laden Ihres Modells und geben Sie die Anzahl der erwarteten Labels an. Aus der Yelp Review [dataset card](https://huggingface.co/datasets/yelp_review_full#data-fields) wissen Sie, dass es fรผnf Labels gibt:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
```

<Tip>

Es wird eine Warnung angezeigt, dass einige der trainierten Parameter nicht verwendet werden und einige Parameter zufรคllig initialisiert werden. Machen Sie sich keine Sorgen, das ist vรถllig normal! Der vorher trainierte Kopf des BERT-Modells wird verworfen und durch einen zufรคllig initialisierten Klassifikationskopf ersetzt. Sie werden diesen neuen Modellkopf in Ihrer Sequenzklassifizierungsaufgabe feinabstimmen, indem Sie das Wissen des vortrainierten Modells auf ihn รผbertragen.

</Tip>

### Hyperparameter fรผr das Training

Als Nรคchstes erstellen Sie eine Klasse [`TrainingArguments`], die alle Hyperparameter enthรคlt, die Sie einstellen kรถnnen, sowie Flags zur Aktivierung verschiedener Trainingsoptionen. Fรผr dieses Lernprogramm kรถnnen Sie mit den Standard-[Hyperparametern](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) beginnen, aber Sie kรถnnen mit diesen experimentieren, um Ihre optimalen Einstellungen zu finden.
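Nur zur Veranschaulichung, welche Stellschrauben es unter anderem gibt - die folgenden Werte sind reine Platzhalter und keine Empfehlung:

```py
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(
...     output_dir="test_trainer",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
... )
```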
Geben Sie an, wo die Kontrollpunkte Ihres Trainings gespeichert werden sollen:

```py
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(output_dir="test_trainer")
```

### Auswerten

Der [`Trainer`] wertet die Leistung des Modells wรคhrend des Trainings nicht automatisch aus. Sie mรผssen [`Trainer`] eine Funktion รผbergeben, um Metriken zu berechnen und zu berichten. Die [๐Ÿค— Evaluate](https://huggingface.co/docs/evaluate/index) Bibliothek bietet eine einfache [`accuracy`](https://huggingface.co/spaces/evaluate-metric/accuracy) Funktion, die Sie mit der [`evaluate.load`] Funktion laden kรถnnen (siehe diese [quicktour](https://huggingface.co/docs/evaluate/a_quick_tour) fรผr weitere Informationen):

```py
>>> import numpy as np
>>> import evaluate

>>> metric = evaluate.load("accuracy")
```

Rufen Sie [`~evaluate.compute`] auf `metric` auf, um die Genauigkeit Ihrer Vorhersagen zu berechnen. Bevor Sie Ihre Vorhersagen an `compute` รผbergeben, mรผssen Sie die Logits in Vorhersagen umwandeln (denken Sie daran, dass alle ๐Ÿค— Transformers-Modelle Logits zurรผckgeben):

```py
>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     predictions = np.argmax(logits, axis=-1)
...     return metric.compute(predictions=predictions, references=labels)
```

Wenn Sie Ihre Bewertungsmetriken wรคhrend der Feinabstimmung รผberwachen mรถchten, geben Sie den Parameter `evaluation_strategy` in Ihren Trainingsargumenten an, um die Bewertungsmetrik am Ende jeder Epoche zu ermitteln:

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
```

### Trainer

Erstellen Sie ein [`Trainer`]-Objekt mit Ihrem Modell, Trainingsargumenten, Trainings- und Testdatensรคtzen und einer Evaluierungsfunktion:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
... )
```

AnschlieรŸend kรถnnen Sie Ihr Modell durch den Aufruf von [`~transformers.Trainer.train`] optimieren:

```py
>>> trainer.train()
```

</pt>
<tf>
<a id='keras'></a>

<Youtube id="rnTGBy2ax1c"/>

## Trainieren Sie ein TensorFlow-Modell mit Keras

Sie kรถnnen auch ๐Ÿค— Transformers Modelle in TensorFlow mit der Keras API trainieren!

### Laden von Daten fรผr Keras

Wenn Sie ein ๐Ÿค— Transformers Modell mit der Keras API trainieren wollen, mรผssen Sie Ihren Datensatz in ein Format konvertieren, das Keras versteht. Wenn Ihr Datensatz klein ist, kรถnnen Sie das Ganze einfach in NumPy-Arrays konvertieren und an Keras รผbergeben. Probieren wir das zuerst aus, bevor wir etwas Komplizierteres tun.

Laden Sie zunรคchst ein Dataset. Wir werden den CoLA-Datensatz aus dem [GLUE-Benchmark](https://huggingface.co/datasets/glue) verwenden, da es sich um eine einfache Aufgabe zur Klassifizierung von binรคrem Text handelt, und nehmen vorerst nur den Trainingssplit.

```py
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
dataset = dataset["train"]  # Just take the training split for now
```

Als nรคchstes laden Sie einen Tokenizer und tokenisieren die Daten als NumPy-Arrays. Beachten Sie, dass die Beschriftungen bereits eine Liste von 0 und 1en sind. Wir kรถnnen sie also ohne Tokenisierung direkt in ein NumPy-Array konvertieren!
```py
import numpy as np

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["text"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1
```

Laden Sie schlieรŸlich das Modell und rufen Sie [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) und [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) auf:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))

model.fit(tokenized_data, labels)
```

<Tip>

Sie mรผssen Ihren Modellen kein Verlustargument รผbergeben, wenn Sie sie mit `compile()` kompilieren! Hugging-Face-Modelle wรคhlen automatisch einen Loss, der fรผr ihre Aufgabe und Modellarchitektur geeignet ist, wenn dieses Argument leer gelassen wird. Sie kรถnnen dies jederzeit auรŸer Kraft setzen, indem Sie selbst einen Loss angeben, wenn Sie das mรถchten!

</Tip>

Dieser Ansatz eignet sich hervorragend fรผr kleinere Datensรคtze, aber bei grรถรŸeren Datensรคtzen kann er zu einem Problem werden. Warum? Weil das tokenisierte Array und die Beschriftungen vollstรคndig in den Speicher geladen werden mรผssten und weil NumPy keine "gezackten" (ragged) Arrays verarbeiten kann, so dass jedes tokenisierte Sample auf die Lรคnge des lรคngsten Samples im gesamten Datensatz aufgefรผllt werden mรผsste. Dadurch wird das Array noch grรถรŸer, und all die aufgefรผllten Token verlangsamen auch das Training!

### Laden von Daten als tf.data.Dataset

Wenn Sie eine Verlangsamung des Trainings vermeiden wollen, kรถnnen Sie Ihre Daten stattdessen als `tf.data.Dataset` laden. Sie kรถnnen zwar Ihre eigene `tf.data`-Pipeline schreiben, wenn Sie wollen, aber wir haben zwei bequeme Methoden, um dies zu tun:

- [`~TFPreTrainedModel.prepare_tf_dataset`]: Dies ist die Methode, die wir in den meisten Fรคllen empfehlen. Da es sich um eine Methode Ihres Modells handelt, kann sie das Modell inspizieren, um automatisch herauszufinden, welche Spalten als Modelleingaben verwendet werden kรถnnen, und verwirft die anderen, um einen einfacheren, leistungsfรคhigeren Datensatz zu erstellen.
- [`~datasets.Dataset.to_tf_dataset`]: Diese Methode ist eher auf niedriger Ebene angesiedelt und ist nรผtzlich, wenn Sie genau kontrollieren wollen, wie der Datensatz erstellt wird, indem Sie genau angeben, welche `columns` und `label_cols` einbezogen werden sollen.

Bevor Sie [`~TFPreTrainedModel.prepare_tf_dataset`] verwenden kรถnnen, mรผssen Sie die Tokenizer-Ausgaben als Spalten zu Ihrem Datensatz hinzufรผgen, wie in dem folgenden Codebeispiel:

```py
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["text"])


dataset = dataset.map(tokenize_dataset)
```

Denken Sie daran, dass Hugging Face-Datensรคtze standardmรครŸig auf der Festplatte gespeichert werden, so dass dies nicht zu einem erhรถhten Arbeitsspeicherbedarf fรผhren wird!
Sobald die Spalten hinzugefรผgt wurden, kรถnnen Sie Batches aus dem Datensatz streamen und zu jedem Batch Auffรผllungen hinzufรผgen, was die Anzahl der Auffรผllungs-Token im Vergleich zum Auffรผllen des gesamten Datensatzes reduziert.

```py
>>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
```

Beachten Sie, dass Sie im obigen Codebeispiel den Tokenizer an `prepare_tf_dataset` รผbergeben mรผssen, damit die Stapel beim Laden korrekt aufgefรผllt werden kรถnnen. Wenn alle Stichproben in Ihrem Datensatz die gleiche Lรคnge haben und kein Auffรผllen erforderlich ist, kรถnnen Sie dieses Argument weglassen. Wenn Sie etwas Komplexeres als nur das Auffรผllen von Stichproben benรถtigen (z. B. das Korrumpieren von Token fรผr die maskierte Sprachmodellierung), kรถnnen Sie stattdessen das Argument `collate_fn` verwenden, um eine Funktion zu รผbergeben, die die Liste der Stichproben in einen Stapel umwandelt und alle gewรผnschten Vorverarbeitungen vornimmt. Siehe unsere [examples](https://github.com/huggingface/transformers/tree/main/examples) oder [notebooks](https://huggingface.co/docs/transformers/notebooks), um diesen Ansatz in Aktion zu sehen.

Sobald Sie einen `tf.data.Dataset` erstellt haben, kรถnnen Sie das Modell wie zuvor kompilieren und anpassen:

```py
model.compile(optimizer=Adam(3e-5))

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## Trainieren in nativem PyTorch

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`] kรผmmert sich um die Trainingsschleife und ermรถglicht die Feinabstimmung eines Modells in einer einzigen Codezeile. Wenn Sie es vorziehen, Ihre eigene Trainingsschleife zu schreiben, kรถnnen Sie auch eine Feinabstimmung eines ๐Ÿค— Transformers-Modells in nativem PyTorch vornehmen.

An diesem Punkt mรผssen Sie mรถglicherweise Ihr Notebook neu starten oder den folgenden Code ausfรผhren, um etwas Speicher freizugeben:

```py
del model
del pytorch_model
del trainer
torch.cuda.empty_cache()
```

Als Nรคchstes mรผssen Sie den Datensatz `tokenized_dataset` manuell nachbearbeiten, um ihn fรผr das Training vorzubereiten.

1. Entfernen Sie die Spalte "text", da das Modell keinen Rohtext als Eingabe akzeptiert:

```py
>>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
```

2. Benennen Sie die Spalte "label" in "labels" um, da das Modell erwartet, dass das Argument `labels` heiรŸt:

```py
>>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```

3.
Stellen Sie das Format des Datensatzes so ein, dass PyTorch-Tensoren anstelle von Listen zurรผckgegeben werden: ```py >>> tokenized_datasets.set_format("torch") ``` Erstellen Sie dann eine kleinere Teilmenge des Datensatzes, wie zuvor gezeigt, um die Feinabstimmung zu beschleunigen: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader Erstellen Sie einen `DataLoader` fรผr Ihre Trainings- und Testdatensรคtze, damit Sie รผber die Datenstapel iterieren kรถnnen: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` Laden Sie Ihr Modell mit der Anzahl der erwarteten Kennzeichnungen: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` ### Optimierer und Lernratensteuerung Erstellen Sie einen Optimierer und einen Scheduler fรผr die Lernrate, um das Modell fein abzustimmen. Wir verwenden den Optimierer [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) aus PyTorch: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` Erstellen Sie den Standard-Lernratenplaner aus [`Trainer`]: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` Geben Sie schlieรŸlich `device` an, um einen Grafikprozessor zu verwenden, wenn Sie Zugang zu einem solchen haben. Andernfalls kann das Training auf einer CPU mehrere Stunden statt ein paar Minuten dauern. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> Holen Sie sich mit einem gehosteten Notebook wie [Colaboratory](https://colab.research.google.com/) oder [SageMaker StudioLab](https://studiolab.sagemaker.aws/) kostenlosen Zugang zu einem Cloud-GPU, wenn Sie noch keinen haben. </Tip> GroรŸartig, Sie sind bereit fรผr das Training! ๐Ÿฅณ ### Trainingsschleife Um Ihren Trainingsfortschritt zu verfolgen, verwenden Sie die [tqdm](https://tqdm.github.io/) Bibliothek, um einen Fortschrittsbalken รผber die Anzahl der Trainingsschritte hinzuzufรผgen: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ``` ### Auswertung Genauso wie Sie eine Bewertungsfunktion zu [`Trainer`] hinzugefรผgt haben, mรผssen Sie dasselbe tun, wenn Sie Ihre eigene Trainingsschleife schreiben. Aber anstatt die Metrik am Ende jeder Epoche zu berechnen und zu melden, werden Sie dieses Mal alle Stapel mit [`~evaluate.add_batch`] akkumulieren und die Metrik ganz am Ende berechnen. ```py >>> import evaluate >>> metric = evaluate.load("accuracy") >>> model.eval() >>> for batch in eval_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... 
outputs = model(**batch) ... logits = outputs.logits ... predictions = torch.argmax(logits, dim=-1) ... metric.add_batch(predictions=predictions, references=batch["labels"]) >>> metric.compute() ``` </pt> </frameworkcontent> <a id='additional-resources'></a> ## Zusรคtzliche Ressourcen Weitere Beispiele fรผr die Feinabstimmung finden Sie unter: - [๐Ÿค— Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) enthรคlt Skripte um gรคngige NLP-Aufgaben in PyTorch und TensorFlow zu trainieren. - [๐Ÿค— Transformers Notebooks](notebooks) enthรคlt verschiedene Notebooks zur Feinabstimmung eines Modells fรผr bestimmte Aufgaben in PyTorch und TensorFlow.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Carica istanze pre-allenate con AutoClass Con cosรฌ tante architetture Transformer differenti, puรฒ essere sfidante crearne una per il tuo checkpoint. Come parte della filosofia centrale di ๐Ÿค— Transformers per rendere la libreria facile, semplice e flessibile da utilizzare, una `AutoClass` inferisce e carica automaticamente l'architettura corretta da un dato checkpoint. Il metodo `from_pretrained` ti permette di caricare velocemente un modello pre-allenato per qualsiasi architettura, cosรฌ non devi utilizzare tempo e risorse per allenare un modello da zero. Produrre questo codice agnostico ai checkpoint significa che se il tuo codice funziona per un checkpoint, funzionerร  anche per un altro checkpoint, purchรฉ sia stato allenato per un compito simile, anche se l'architettura รจ differente. <Tip> Ricorda, con architettura ci si riferisce allo scheletro del modello e con checkpoint ai pesi di una determinata architettura. Per esempio, [BERT](https://huggingface.co/bert-base-uncased) รจ un'architettura, mentre `bert-base-uncased` รจ un checkpoint. Modello รจ un termine generale che puรฒ significare sia architettura che checkpoint. </Tip> In questo tutorial, imparerai a: * Caricare un tokenizer pre-allenato. * Caricare un estrattore di caratteristiche (feature extractor, in inglese) pre-allenato. * Caricare un processore pre-allenato. * Caricare un modello pre-allenato. ## AutoTokenizer Quasi tutti i compiti di NLP iniziano con un tokenizer. Un tokenizer converte il tuo input in un formato che possa essere elaborato dal modello. Carica un tokenizer con [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") ``` Poi tokenizza il tuo input come mostrato in seguito: ```py >>> sequenza = "In un buco nel terreno viveva uno Hobbit." >>> print(tokenizer(sequenza)) {'input_ids': [0, 360, 51, 373, 587, 1718, 54644, 22597, 330, 3269, 2291, 22155, 18, 5, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoFeatureExtractor Per compiti inerenti a audio e video, un feature extractor processa il segnale audio o l'immagine nel formato di input corretto. Carica un feature extractor con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor Compiti multimodali richiedono un processore che combini i due tipi di strumenti di elaborazione. Per esempio, il modello [LayoutLMV2](model_doc/layoutlmv2) richiede un feature extractor per gestire le immagine e un tokenizer per gestire il testo; un processore li combina entrambi. 
Carica un processore con [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel <frameworkcontent> <pt> Infine, le classi `AutoModelFor` ti permettono di caricare un modello pre-allenato per un determinato compito (guarda [qui](model_doc/auto) per una lista completa di compiti presenti). Per esempio, carica un modello per la classificazione di sequenze con [`AutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Semplicemente utilizza lo stesso checkpoint per caricare un'architettura per un task differente: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` Generalmente, raccomandiamo di utilizzare la classe `AutoTokenizer` e la classe `AutoModelFor` per caricare istanze pre-allenate dei modelli. Questo ti assicurerร  di aver caricato la corretta architettura ogni volta. Nel prossimo [tutorial](preprocessing), imparerai come utilizzare il tokenizer, il feature extractor e il processore per elaborare un dataset per il fine-tuning. </pt> <tf> Infine, le classi `TFAutoModelFor` ti permettono di caricare un modello pre-allenato per un determinato compito (guarda [qui](model_doc/auto) per una lista completa di compiti presenti). Per esempio, carica un modello per la classificazione di sequenze con [`TFAutoModelForSequenceClassification.from_pretrained`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Semplicemente utilizza lo stesso checkpoint per caricare un'architettura per un task differente: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` Generalmente, raccomandiamo di utilizzare la classe `AutoTokenizer` e la classe `TFAutoModelFor` per caricare istanze pre-allenate dei modelli. Questo ti assicurerร  di aver caricato la corretta architettura ogni volta. Nel prossimo [tutorial](preprocessing), imparerai come utilizzare il tokenizer, il feature extractor e il processore per elaborare un dataset per il fine-tuning. </tf> </frameworkcontent>
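Mettendo insieme i passaggi precedenti, ecco uno sketch minimale di inferenza in PyTorch che usa lo stesso checkpoint sia per il tokenizer che per il modello (qui `distilbert-base-uncased`, già usato sopra). Nota che la testa di classificazione di questo checkpoint non è stata affinata, quindi i logits servono solo a illustrare il flusso:

```py
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

>>> # Tokenizza l'input restituendo tensori PyTorch
>>> inputs = tokenizer("In un buco nel terreno viveva uno Hobbit.", return_tensors="pt")

>>> # Forward pass senza calcolo dei gradienti
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> classe_predetta = outputs.logits.argmax(dim=-1).item()
```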
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/converting_tensorflow_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Convertire checkpoint di Tensorflow รˆ disponibile un'interfaccia a linea di comando per convertire gli originali checkpoint di Bert/GPT/GPT-2/Transformer-XL/XLNet/XLM in modelli che possono essere caricati utilizzando i metodi `from_pretrained` della libreria. <Tip> A partire dalla versione 2.3.0 lo script di conversione รจ parte di transformers CLI (**transformers-cli**), disponibile in ogni installazione di transformers >=2.3.0. La seguente documentazione riflette il formato dei comandi di **transformers-cli convert**. </Tip> ## BERT Puoi convertire qualunque checkpoint Tensorflow di BERT (in particolare [i modeli pre-allenati rilasciati da Google](https://github.com/google-research/bert#pre-trained-models)) in un file di salvataggio Pytorch utilizzando lo script [convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py). Questo CLI prende come input un checkpoint di Tensorflow (tre files che iniziano con `bert_model.ckpt`) ed il relativo file di configurazione (`bert_config.json`), crea un modello Pytorch per questa configurazione, carica i pesi dal checkpoint di Tensorflow nel modello di Pytorch e salva il modello che ne risulta in un file di salvataggio standard di Pytorch che puรฒ essere importato utilizzando `from_pretrained()` (vedi l'esempio nel [quicktour](quicktour) , [run_glue.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_glue.py) ). Devi soltanto lanciare questo script di conversione **una volta** per ottenere un modello Pytorch. Dopodichรจ, potrai tralasciare il checkpoint di Tensorflow (i tre files che iniziano con `bert_model.ckpt`), ma assicurati di tenere il file di configurazione (`bert_config.json`) ed il file di vocabolario (`vocab.txt`) in quanto queste componenti sono necessarie anche per il modello di Pytorch. Per lanciare questo specifico script di conversione avrai bisogno di un'installazione di Tensorflow e di Pytorch (`pip install tensorflow`). Il resto della repository richiede soltanto Pytorch. Questo รจ un esempio del processo di conversione per un modello `BERT-Base Uncased` pre-allenato: ```bash export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12 transformers-cli convert --model_type bert \ --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \ --config $BERT_BASE_DIR/bert_config.json \ --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin ``` Puoi scaricare i modelli pre-allenati di Google per la conversione [qua](https://github.com/google-research/bert#pre-trained-models). 
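Una volta ottenuto il file Pytorch, uno sketch indicativo di come ricaricarlo con `from_pretrained()` (si assume di aver copiato `bert_config.json` come `config.json` nella stessa cartella di `pytorch_model.bin`; i percorsi sono puramente illustrativi):

```python
from transformers import BertForPreTraining

# La cartella deve contenere pytorch_model.bin e config.json
# (cioè bert_config.json rinominato/copiato), come descritto sopra
model = BertForPreTraining.from_pretrained("/path/to/bert/uncased_L-12_H-768_A-12")
model.eval()
```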
## ALBERT Per il modello ALBERT, converti checkpoint di Tensoflow in Pytorch utilizzando lo script [convert_albert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/main/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py). Il CLI prende come input un checkpoint di Tensorflow (tre files che iniziano con `model.ckpt-best`) e i relativi file di configurazione (`albert_config.json`), dopodichรจ crea e salva un modello Pytorch. Per lanciare questa conversione avrai bisogno di un'installazione di Tensorflow e di Pytorch. Ecco un esempio del procedimento di conversione di un modello `ALBERT Base` pre-allenato: ```bash export ALBERT_BASE_DIR=/path/to/albert/albert_base transformers-cli convert --model_type albert \ --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \ --config $ALBERT_BASE_DIR/albert_config.json \ --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin ``` Puoi scaricare i modelli pre-allenati di Google per la conversione [qui](https://github.com/google-research/albert#pre-trained-models). ## OpenAI GPT Ecco un esempio del processo di conversione di un modello OpenAI GPT pre-allenato, assumendo che il tuo checkpoint di NumPy sia salvato nello stesso formato dei modelli pre-allenati OpenAI (vedi [qui](https://github.com/openai/finetune-transformer-lm)): ```bash export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights transformers-cli convert --model_type gpt \ --tf_checkpoint $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--config OPENAI_GPT_CONFIG] \ [--finetuning_task_name OPENAI_GPT_FINETUNED_TASK] \ ``` ## OpenAI GPT-2 Ecco un esempio del processo di conversione di un modello OpenAI GPT-2 pre-allenato (vedi [qui](https://github.com/openai/gpt-2)): ```bash export OPENAI_GPT2_CHECKPOINT_PATH=/path/to/gpt2/pretrained/weights transformers-cli convert --model_type gpt2 \ --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--config OPENAI_GPT2_CONFIG] \ [--finetuning_task_name OPENAI_GPT2_FINETUNED_TASK] ``` ## Transformer-XL Ecco un esempio del processo di conversione di un modello Transformer-XL pre-allenato (vedi [qui](https://github.com/kimiyoung/transformer-xl/tree/master/tf#obtain-and-evaluate-pretrained-sota-models)): ```bash export TRANSFO_XL_CHECKPOINT_FOLDER_PATH=/path/to/transfo/xl/checkpoint transformers-cli convert --model_type transfo_xl \ --tf_checkpoint $TRANSFO_XL_CHECKPOINT_FOLDER_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--config TRANSFO_XL_CONFIG] \ [--finetuning_task_name TRANSFO_XL_FINETUNED_TASK] ``` ## XLNet Ecco un esempio del processo di conversione di un modello XLNet pre-allenato: ```bash export TRANSFO_XL_CHECKPOINT_PATH=/path/to/xlnet/checkpoint export TRANSFO_XL_CONFIG_PATH=/path/to/xlnet/config transformers-cli convert --model_type xlnet \ --tf_checkpoint $TRANSFO_XL_CHECKPOINT_PATH \ --config $TRANSFO_XL_CONFIG_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \ [--finetuning_task_name XLNET_FINETUNED_TASK] \ ``` ## XLM Ecco un esempio del processo di conversione di un modello XLM pre-allenato: ```bash export XLM_CHECKPOINT_PATH=/path/to/xlm/checkpoint transformers-cli convert --model_type xlm \ --tf_checkpoint $XLM_CHECKPOINT_PATH \ --pytorch_dump_output $PYTORCH_DUMP_OUTPUT [--config XML_CONFIG] \ [--finetuning_task_name XML_FINETUNED_TASK] ``` ## T5 Ecco un esempio del processo di conversione di un modello T5 pre-allenato: ```bash export 
T5=/path/to/t5/uncased_L-12_H-768_A-12
transformers-cli convert --model_type t5 \
  --tf_checkpoint $T5/t5_model.ckpt \
  --config $T5/t5_config.json \
  --pytorch_dump_output $T5/pytorch_model.bin
```
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/add_new_model.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Come aggiungere un modello a ๐Ÿค— Transformers? Aggiungere un nuovo modello รฉ spesso difficile e richiede una profonda conoscenza della libreria ๐Ÿค— Transformers e anche della repository originale del modello. A Hugging Face cerchiamo di dare alla community sempre piรบ poteri per aggiungere modelli independentemente. Quindi, per alcuni nuovi modelli che la community vuole aggiungere a ๐Ÿค— Transformers, abbiamo creato una specifica *call-for-model-addition* che spiega passo dopo passo come aggiungere il modello richiesto. Con questo *call-for-model-addition* vogliamo insegnare a volenterosi e esperti collaboratori della community come implementare un modello in ๐Ÿค— Transformers. Se questo รฉ qualcosa che puรฒ interessarvi, siete liberi di controllare l'attuale โ€œcalls-for-model-additionโ€ [qui](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model/open_model_proposals/README.md) e contattarci. Se il modello sarร  selezionato, allora potrete lavorare insieme a un membro di Hugging Face per integrare il modello in ๐Ÿค— Transformers. Cosรฌ facendo, ci guadagnerai in una comprensione totale, sia teorica che pratica, del modello proposto. Inoltre, sarai l'artefice di un importante contributo open-source a ๐Ÿค— Transformers. Durante l'implementazione avrai l'opportunitร  di: - ottenere piรน comprensione delle best practices in open-source - capire i principi di design di una della librerie NLP piรน popolari - capire come efficientemente testare complessi modelli NLP - capire come integrare utilit Python come `black`, `ruff`, `make fix-copies` in una libreria per garantire sempre di avere un codice leggibile e pulito Siamo anche contenti se vuoi aggiungere un modello che non puรฒ essere trovato nella cartella โ€œcalls-for-model-additionโ€. Le seguenti sezioni spiegano in dettaglio come aggiungere un nuovo modello. Puรฒ anche essere molto utile controllare modelli giร  aggiunti [qui](https://github.com/huggingface/transformers/pulls?q=is%3Apr+label%3A%22PR+for+Model+Addition%22+is%3Aclosed), per capire se richiamano il modello che vorreste aggiungere. Per cominciare, vediamo una panoramica general della libreria Transformers. ## Panoramica generale su ๐Ÿค— Transformers Prima di tutto, vediamo in generale ๐Ÿค— Transformers. ๐Ÿค— Transformers รฉ una libreria molto strutturata, quindi puร  essere che a volte ci sia un disaccordo con alcune filosofie della libreria o scelte di design. Dalla nostra esperienza, tuttavia, abbiamo trovato che le scelte fondamentali di design della libreria sono cruciali per usare ๐Ÿค— Transformers efficacemente su larga scala, mantenendo i costi a un livello accettabile. 
Un buon primo punto di partenza per capire al meglio la libreria รฉ leggere la [documentazione sulla nostra filosofia](filosofia) Da qui, ci sono alcune scelte sul modo di lavorare che cerchiamo di applicare a tutti i modelli: - La composizione รฉ generalmente favorita sulla sovra-astrazione - Duplicare il codice non รฉ sempre male, soprattutto se migliora notevolmente la leggibilitร  e accessibilitร  del modello - Tutti i files creati per il nuovo modello devono il piu possibile "compatti". Questo vuol dire che quando qualcuno leggerรก il codice di uno specifico modello, potrรก vedere solo il corrispettivo file `modeling_....py` senza avere multiple dipendenze. La cosa piรบ importante, รฉ che consideriamo la libreria non solo un mezzo per dare un prodotto, *per esempio* dare la possibilitร  di usare BERT per inferenza, ma รฉ anche il prodotto reale che noi vogliamo migliorare sempre piรน. Quindi, quando aggiungi un modello, non sei solo la persona che userร  il modello, ma rappresenti anche tutti coloro che leggeranno, cercheranno di capire e modificare il tuo modello. Tenendo questi principi in mente, immergiamoci nel design generale della libreria. ### Panoramica sui modelli Per aggiungere con successo un modello, รฉ importante capire l'interazione tra il tuo modello e la sua configurazione, [`PreTrainedModel`], e [`PretrainedConfig`]. Per dare un esempio, chiameremo il modello da aggiungere a ๐Ÿค— Transformers `BrandNewBert`. Diamo un'occhiata: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> Come potete vedere, ci basiamo sull'ereditarietร  in ๐Ÿค— Transformers, tenendo perรฒ il livello di astrazione a un minimo assoluto. Non ci sono mai piรน di due livelli di astrazione per ogni modello nella libreria. `BrandNewBertModel` eredita da `BrandNewBertPreTrainedModel` che, a sua volta, eredita da [`PreTrainedModel`] - semplice no? Come regola generale, vogliamo essere sicuri che un nuovo modello dipenda solo da [`PreTrainedModel`]. Le funzionalitร  importanti che sono automaticamente conferite a ogni nuovo modello sono [`~PreTrainedModel.from_pretrained`] e [`~PreTrainedModel.save_pretrained`], che sono usate per serializzazione e deserializzazione. Tutte le altre importanti funzionalitร , come ad esempio `BrandNewBertModel.forward` devono essere definite completamente nel nuovo script `modeling_brand_new_bert.py`. Inoltre, vogliamo essere sicuri che un modello con uno specifico head layer, come `BrandNewBertForMaskedLM` non erediti da `BrandNewBertModel`, ma piuttosto usi `BrandNewBertModel` come componente che puรฒ essere chiamata nel passaggio forward per mantenere il livello di astrazione basso. Ogni nuovo modello richieste una classe di configurazione, chiamata `BrandNewBertConfig`. Questa configurazione รฉ sempre mantenuta come un attributo in [`PreTrainedModel`], e quindi puรฒ essere accessibile tramite l'attributo `config` per tutte le classi che ereditano da `BrandNewBertPreTrainedModel`: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # il modello ha accesso al suo config ``` Analogamente al modello, la configurazione eredita le funzionalitร  base di serializzazione e deserializzazione da [`PretrainedConfig`]. ร‰ da notare che la configurazione e il modello sono sempre serializzati in due formati differenti - il modello รฉ serializzato in un file *pytorch_model.bin* mentre la configurazione con *config.json*. 
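Per fissare le idee, uno sketch minimale (i percorsi e il nome del checkpoint `brandy/brand_new_bert` sono puramente illustrativi, come sopra) che mostra i due file prodotti dalla serializzazione:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.save_pretrained("/path/to/saved/brand_new_bert")

# La cartella ora contiene due file:
#   pytorch_model.bin -> i pesi serializzati del modello
#   config.json       -> la configurazione (BrandNewBertConfig)

# Entrambi vengono riletti con una sola chiamata
model = BrandNewBertModel.from_pretrained("/path/to/saved/brand_new_bert")
```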
Chiamando [`~PreTrainedModel.save_pretrained`] automaticamente chiamerร  [`~PretrainedConfig.save_pretrained`], cosicchรฉ sia il modello che la configurazione siano salvati. ### Stile per il codice Quando codifichi un nuovo modello, tieni presente che Transformers ha una sua struttura di fondo come libreria, perciรฒ ci sono alcuni fatti da considerare su come scrivere un codice :-) 1. Il forward pass del tuo modello dev'essere scritto completamente nel file del modello, mentre dev'essere indipendente da altri modelli nella libreria. Se vuoi riutilizzare un blocco di codice da un altro modello, copia e incolla il codice con un commento `# Copied from` in cima al codice (guarda [qui](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) per un ottimo esempio). 2. Il codice dev'essere interamente comprensibile, anche da persone che non parlano in inglese. Questo significa che le variabili devono avere un nome descrittivo e bisogna evitare abbreviazioni. Per esempio, `activation` รฉ molto meglio che `act`. Le variabili con una lettera sono da evitare fortemente, almeno che non sia per un indce in un for loop. 3. Generamente รฉ meglio avere un codice esplicito e piรบ lungo che un codice corto e magico. 4. Evita di subclassare `nn.Sequential` in Pytorch, puoi subclassare `nn.Module` e scrivere il forward pass, cosicchรฉ chiunque puรฒ effettuare debug sul tuo codice, aggiungendo print o breaking points. 5. La tua function-signature dev'essere type-annoted. Per il resto, รฉ meglio preferire variabili con un nome accettabile piuttosto che annotazioni per aumentare la comprensione e leggibilitร  del codice. ### Panoramica sui tokenizers Questa sezione sarร  creata al piu presto :-( ## Aggiungere un modello a ๐Ÿค— Transformers passo dopo passo Ci sono differenti modi per aggiungere un modello a Hugging Face. Qui trovi una lista di blog posts da parte della community su come aggiungere un modello: 1. [Aggiungere GPT2](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) scritto da [Thomas](https://huggingface.co/thomwolf) 2. [Aggiungere WMT19 MT](https://huggingface.co/blog/porting-fsmt) scritto da [Stas](https://huggingface.co/stas) Per esperienza, possiamo dirti che quando si aggiunge un modello รฉ meglio tenere a mente le seguenti considerazioni: - Non sfondare una porta giรก aperta! La maggior parte del codice che aggiungerai per un nuovo modello ๐Ÿค— Transformers esiste giร  da qualche parte in ๐Ÿค— Transformers. Prendi un po' di tempo per trovare codici simili in modelli e tokenizers esistenti e fare un copia-incolla. Ricorda che [grep](https://www.gnu.org/software/grep/) e [rg](https://github.com/BurntSushi/ripgrep) sono tuoi buoni amici. Inoltre, ricorda che puรณ essere molto probabile che il tokenizer per il tuo modello sia basato sull'implementazione di un altro modello, e il codice del tuo modello stesso su un altro ancora. *Per esempio* il modello FSMT รฉ basato su BART, mentre il tokenizer di FSMT รฉ basato su XLM. - Ricorda che qui รฉ piu una sfida ingegneristica che scientifica. Spendi piรบ tempo per create un efficiente ambiente di debugging piuttosto che cercare di capire tutti gli aspetti teorici dell'articolo del modello. - Chiedi aiuto se sei in panne! I modelli sono la parte principale di ๐Ÿค— Transformers, perciรฒ qui a Hugging Face siamo piรน che contenti di aiutarti in ogni passo per aggiungere il tuo modello. Non esitare a chiedere se vedi che non riesci a progredire. 
Di seguito, diamo una ricetta generale per aiutare a portare un modello in ๐Ÿค— Transformers. La lista seguente รฉ un sommario di tutto quello che รฉ stato fatto per aggiungere un modello, e puรฒ essere usata come To-Do List: - 1. โ˜ (Opzionale) Capire gli aspetti teorici del modello - 2. โ˜ Preparare l'ambiente dev per transformers - 3. โ˜ Preparare l'ambiente debugging della repository originale - 4. โ˜ Create uno script che gestisca con successo il forward pass usando la repository originale e checkpoint - 5. โ˜ Aggiungere con successo lo scheletro del modello a Transformers - 6. โ˜ Convertire i checkpoint original a Transformers checkpoint - 7. โ˜ Effettuare con successo la forward pass in Transformers, di modo che dia un output identico al checkpoint originale - 8. โ˜ Finire i tests per il modello in Transformers - 9. โ˜ Aggiungere con successo Tokenizer in Transformers - 10. โ˜ Testare e provare gli integration tests da capo a fine - 11. โ˜ Completare i docs - 12. โ˜ Caricare i moedl weights all'hub - 13. โ˜ Sottomettere una pull request - 14. โ˜ (Opzionale) Aggiungere un notebook con una demo Per cominciare di solito consigliamo `BrandNewBert`, partendo dalla teoria, di modo da avere una buona comprensione della teoria generale. TUttavia, se preferisci imparare l'aspetto teorico del modello mentre *lavori* sul modello รฉ ok immergersi direttamente nel codice di `BrandNewBert`. Questa opzione puรณ essere buona se le tue skills ingegneristiche sono meglio che quelle teoriche, o se il paper `BrandNewBert` ti dรก problemi, o se semplicemente ti piace programmare piรบ che leggere articoli scientifici. ### 1. (Opzionale) Aspetti teorici di BrandNewBert Allora con calma, prendi un po' di tempo per leggere l'articolo su *BrandNewBert* . Sicuramente, alcune sezioni dell'articolo sono molto complesse, ma non preoccuparti! L'obiettivo non รฉ avere una compresione immensa della teoria alla base, ma estrarre le informazioni necessarie per re-implementare con successo il modello in ๐Ÿค— Transformers. Quindi, non impazzire sugli aspetti teorici, ma piuttosto focalizzati su quelli pratici, ossia: - Che tipo di modello รฉ *brand_new_bert*? ร‰ solo un encoder in stile BERT? O tipo decoder come GPT2? O encoder e decoder stile BART? Dai un'occhiata a [model_summary](model_summary) se non sei famigliare con le differenze tra questi modelli - Quali sono le applicazioni di *brand_new_bert*? Classificazione di testo? Generazione di testo? O per tasks del genere seq2seq? - Quali sono le nuove aggiunte al modello che lo rendono diverso da BERT/GPT-2/BART? - Quali modelli estistenti in [๐Ÿค— Transformers models](https://huggingface.co/transformers/#contents) sono molto simili a *brand_new_bert*? - Che tipo di tokenizer si usa in questo caso? Un sentencepiece tokenizer? O un word piece tokenizer? Il tokenizer รฉ lo stesso di BERT o BART? Una volta che senti che hai avuto una bella overview dell'architettura del modello, puoi scrivere senza problemi al team di Hugging Face per ogni domanda che tu hai. Questo puรณ includere domande sull'architettura del modello, o sull'attention layer, etc. Saremo molto felici di aiutarti :) ### 2. Prepare il tuo ambiente 1. Forka la [repository](https://github.com/huggingface/transformers) cliccando sul tasto โ€˜Fork' nella pagina della repository. Questo crea una copia del codice nel tuo account GitHub 2. 
Clona il tuo fork `transfomers` sul tuo dico locale, e aggiungi la repository base come remota: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. Crea un ambiente di sviluppo, per esempio tramite questo comando: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` quindi torna alla directory principale: ```bash cd .. ``` 4. Attenzione, raccomandiamo di aggiungere la versione di PyTorch di *brand_new_bert* a Transfomers. Per installare PyTorch, basta seguire queste istruzioni https://pytorch.org/get-started/locally/. **Nota bene:** Non c'รฉ bisogno di installare o avere installato CUDA. Il nuovo modello puรฒ funzionare senza problemi su una CPU. 5. Per trasferire *brand_new_bert* To port *brand_new_bert* avrai bisogno anche accesso alla sua repository originale: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` Ok, ora hai un ambiente di sviluppo per portare *brand_new_bert* in ๐Ÿค— Transformers. ### 3.-4. Provare un pretrained checkpoint usando la repo originale Per cominciare, comincerai a lavorare sulla repo originale di *brand_new_bert*. Come spesso accade, l'implementazione originale รฉ molto sullo stile "ricerca". Questo significa che a volte la documentazione non รฉ al top, magari manca qualche cosa e il codice puรณ essere difficile da capire. Tuttavia, questa รฉ e dev'essere la motivazione per reimplementare *brand_new_bert*. In Hugging Face, uno degli obiettivi principali รฉ di *mettere le persone sulle spalle dei giganti*, il che si traduce, in questo contesto, di prendere un modello funzionante e riscriverlo e renderlo il piรบ possibile **accessibile, user-friendly, e leggibile**. Questa รฉ la top motivazione per re-implementare modelli in ๐Ÿค— Transformers - cercare di creare nuove complesse tecnologie NLP accessibili a **chiunque**. Riuscire a far girare il modello pretrained originale dalla repository ufficiale รฉ spesso il passo **piu arduo**. Dalla nostra esperienza, รฉ molto importante spendere un p' di tempo per diventare familiari con il codice base originale. Come test, prova a capire i seguenti punti: - Dove si trovano i pretrained weights? - Come caricare i pretrained weights nel modello corrispondente? - Come girare un tokenizer independentemente dal modello? - Prova a tracciare un singolo forward pass, cosicchรฉ potrai sapere che classi e funzioni sono richieste per un semplice forward pass. Di solito, dovrai reimplementare queste funzioni e basta - Prova a localizzare i componenti importanti del modello: Dove si trova la classe del modello? Ci sono sotto classi nel modello *per esempio* EngoderModel, DecoderMOdel? Dove si trova il self-attention layer? Ci sono molteplici differenti layer di attention, *per esempio * *self-attention*, *cross-attention*...? - Come puoi fare debug sul modello nell'ambiente originale della repo? Devi aggiungere dei *print* o puoi usare *ipdb* come debugger interattivo, o vabene anche un IDE efficiente per debug come PyCharm? ร‰ molto importante che prima di cominciare a trasferire il modello nuovo tu spenda tempo a fare debug del codice originale in maniera **efficiente**! Inoltre, ricorda che tutta la library รฉ open-soruce, quindi non temere di aprire issue o fare una pull request nella repo originale. 
Tutti coloro che mantengono la repository saranno piรบ che felici di avere qualcuno che guarda e gioca con i loro codici! A questo punto, sta a te decidere quale ambiente per debug vuoi usare. Noi consilgiamo di evitare setup con GPU, che potrebbero costare assai, lavorare su una CPU puรณ essere un ottimo punto di partenza per indagare la repository originale e per cominciare a scrivere il codice per ๐Ÿค— Transformers. Solo alla fine, quando il modello รฉ stato portato con successo in ๐Ÿค— Transformers, allora si potrรก verificare il suo funzionamento su GPU. In generale ci sono due possibili ambienti di debug per il testare il modello originale: - [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb) - Scripts locali in Python Il vantaggio dei Jupyter notebooks รฉ la possibilitร  di eseguire cella per cella, il che puรฒ essere utile per decomporre tutte le componenti logiche, cosi da a vere un ciclo di debug piรน rapido, siccome si possono salvare i risultati da steps intermedi. Inoltre, i notebooks spesso sono molto facili da condividere con altri contributors, il che puรฒ essere molto utile se vuoi chiedere aiuto al team di Hugging Face. Se sei famigliare con Jupyter notebooks allora racommandiamo di lavorare in questa maniera. Ovviamente se non siete abituati a lavorare con i notebook, questo puรฒ essere uno svantaggio nell'usare questa tecnologia, sprecando un sacco di tempo per setup e portare tutto al nuovo ambiente, siccome non potreste neanche usare dei tools di debug come `ipdb`. Per ogni pratica code-base, รฉ sempre meglio come primo step caricare un **piccolo** checkpoint pretrained e cercare di riprodurre un singolo forward pass usando un vettore fittizio di IDs fatti da numeri interi. Un esempio per uno script simile, in pseudocodice รฉ: ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` Per quanto riguarda la strategia di debugging, si puรฒ scegliere tra: - Decomporre il modello originario in piccole componenenti e testare ognuna di esse - Decomporre il modello originario nel *tokenizer* originale e nel *modello* originale, testare un forward pass su questi, e usare dei print statement o breakpoints intermedi per verificare Ancora una volta, siete liberi di scegliere quale strategia sia ottimale per voi. Spesso una strategia รฉ piu avvantaggiosa di un'altra, ma tutto dipende dall'code-base originario. Se il code-base vi permette di decomporre il modello in piccole sub-componenenti, *per esempio* se il code-base originario puรฒ essere facilmente testato in eager mode, allora vale la pena effettuare un debugging di questo genere. 
Ricordate che ci sono dei vantaggi nel decidere di prendere la strada piu impegnativa sin da subito: - negli stage piu finali, quando bisognerร  comparare il modello originario all'implementazione in Hugging Face, potrete verificare automaticamente ogni componente, individualmente, di modo che ci sia una corrispondenza 1:1 - avrete l'opportunitร  di decomporre un problema molto grande in piccoli passi, cosรฌ da strutturare meglio il vostro lavoro - separare il modello in componenti logiche vi aiuterร  ad avere un'ottima overview sul design del modello, quindi una migliore comprensione del modello stesso - verso gli stage finali i test fatti componente per componente vi aiuterร  ad essere sicuri di non andare avanti e indietro nell'implementazione, cosรฌ da continuare la modifica del codice senza interruzione Un ottimo esempio di come questo puรฒ essere fatto รฉ dato da [Lysandre](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) per il modello ELECTRA Tuttavia, se il code-base originale รฉ molto complesso o le componenti intermedie possono essere testate solo in tramite compilazione, potrebbe richiedere parecchio tempo o addirittura essere impossibile separare il modello in piccole sotto-componenti. Un buon esempio รฉ [MeshTensorFlow di T5](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow). Questa libreria รฉ molto complessa e non offre un metodo semplice di decomposizione in sotto-componenti. Per simili librerie, potrete fare affidamento ai print statements. In ogni caso, indipendentemente da quale strategia scegliete, la procedura raccomandata รฉ di cominciare a fare debug dal primo layer al layer finale. ร‰ consigliato recuperare gli output dai layers, tramite print o sotto-componenti, nel seguente ordine: 1. Recuperare gli IDs di input dati al modello 2. Recuperare i word embeddings 3. Recuperare l'input del primo Transformer layer 4. Recuperare l'output del primo Transformer layer 5. Recuperare l'output dei seguenti `n - 1` Transformer layers 6. Recuperare l'output dell'intero BrandNewBert Model Gli IDs in input dovrebbero essere un arrary di interi, *per esempio* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` Gli output dei seguenti layer di solito dovrebbero essere degli array di float multi-dimensionali come questo: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` Ci aspettiamo che ogni modello aggiunto a ๐Ÿค— Transformers passi con successo un paio di test d'integrazione. Questo significa che il modello originale e la sua implementazione in ๐Ÿค— Transformers abbiano lo stesso output con una precisione di 0.001! Siccome รฉ normale che lo stesso esatto modello, scritto in librerie diverse, possa dare output leggermente diversi, la tolleranza accettata รฉ 1e-3 (0.001). Ricordate che i due modelli devono dare output quasi identici. Dunque, รฉ molto conveniente comparare gli output intermedi di ๐Ÿค— Transformers molteplici volte con gli output intermedi del modello originale di *brand_new_bert*. Di seguito vi diamo alcuni consigli per avere un ambiente di debug il piu efficiente possibile: - Trovate la migliore strategia per fare debug dei risultati intermedi. Per esempio, รฉ la repository originale scritta in PyTorch? 
Se si, molto probabilmente dovrete dedicare un po' di tempo per scrivere degli script piu lunghi, cosรฌ da decomporre il modello originale in piccole sotto-componenti, in modo da poter recuperare i valori intermedi. Oppure, la repo originale รฉ scritta in Tensorflow 1? Se รฉ cosรฌ dovrete fare affidamento ai print di Tensorflow [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) per avere i valori intermedi. Altro caso, la repo รฉ scritta in Jax? Allora assicuratevi che il modello non sia in **jit** quanto testate il foward pass, *per esempio* controllate [questo link](https://github.com/google/jax/issues/196). - Usate i piรน piccoli pretrained checkpoint che potete trovare. Piu piccolo รฉ il checkpoint, piu velocemente sarร  il vostro ciclo di debug. Non รฉ efficiente avere un pretrained model cosรฌ gigante che per il forward pass impieghi piu di 10 secondi. Nel caso in cui i checkpoints siano molto grandi, e non si possa trovare di meglio, allora รฉ buona consuetudine ricorrere a fare un dummy model nel nuovo ambiente, con weights inizializzati random e salvare quei weights per comprare la versione ๐Ÿค— Transformers con il vostro modello - Accertatevi di usare la via piu semplice per chiamare il forward pass nella repo originale. Sarebbe opportuno trovare la funzione originaria che chiami **solo** un singolo forward pass, *per esempio* questa funzione spesso viene chiamata `predict`, `evaluate`, `forward` o `__call__`. Siate sicuri di non fare debug su una funzione che chiami `forward` molteplici volte, *per esempio* per generare testo, come `autoregressive_sample`, `generate`. - Cercate di separare la tokenization dal forward pass del modello. Se la repo originaria mostra esempio dove potete dare come input una stringa, provate a cercare dove nella forward call la stringa viene cambiata in input ids e cominciate il debug da questo punto. Questo vi garantisce un ottimo punto di partenza per scrivere un piccolo script personale dove dare gli input al modello, anziche delle stringhe in input. - Assicuratevi che il debugging **non** sia in training mode. Spesso questo potra il modello a dare degli output random, per via dei molteplici dropout layers. Assicuratevi che il forward pass nell'ambiente di debug sia **deterministico**, cosicche i dropout non siano usati. Alternativamente, potete usare *transformers.utils.set_seed* se la vecchia e nuova implementazione sono nello stesso framework. La seguente sezione vi da ulteriori dettagli e accorgimenti su come potete fare tutto questo per *brand_new_bert*. ### 5.-14. Trasferire BrandNewBert in ๐Ÿค— Transformers Allora cominciamo ad aggiungere un nuovo codice in ๐Ÿค— Transformers. Andate nel vostro fork clone di ๐Ÿค— Transformers: ```bash cd transformers ``` Nel caso speciale in cui stiate aggiungendo un modello, la cui architettura sia identica a una di un modello giร  esistente, dovrete solo aggiugnere uno script di conversione, come descritto [qui](#write-a-conversion-script). In questo caso, potete riutilizzare l'intera architettura del modello gia esistente. Se questo non รฉ il caso, cominciamo con il generare un nuovo modello. Avrete due opzioni: - `transformers-cli add-new-model-like` per aggiungere un nuovo modello come uno che gia esiste - `transformers-cli add-new-model` per aggiungere un nuovo modello da un nostro template (questo assomigliera a BERT o Bart, in base al modello che selezionerete) In entrambi i casi, l'output vi darร  un questionario da riempire con informazioni basi sul modello. 
Il secondo comando richiede di installare un `cookiecutter` - maggiori informazioni [qui](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model). **Aprire una Pull Request in main huggingface/transformers repo** Prime di cominciare ad adattare il codice automaticamente generato, aprite una nuova PR come "Work in progress (WIP)", *per esempio* "[WIP] Aggiungere *brand_new_bert*", cosicchรฉ il team di Hugging Face possa lavorare al vostro fianco nell' integrare il modello in ๐Ÿค— Transformers. Questi sarebbero gli step generali da seguire: 1. Creare un branch dal main branch con un nome descrittivo ```bash git checkout -b add_brand_new_bert ``` 2. Commit del codice automaticamente generato ```bash git add . git commit ``` 3. Fare fetch e rebase del main esistente ```bash git fetch upstream git rebase upstream/main ``` 4. Push dei cambiamenti al proprio account: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. Una volte che siete soddisfatti dei nuovi cambiamenti, andate sulla webpage del vostro fork su GitHub. Cliccate "Pull request". Assiuratevi di aggiungere alcuni membri di Hugging Face come reviewers, nel riguardo alla destra della pagina della PR, cosicche il team Hugging Face verrร  notificato anche per i futuri cambiamenti. 6. Cambiare la PR a draft, cliccando su "Convert to draft" alla destra della pagina della PR Da quel punto in poi, ricordate di fare commit di ogni progresso e cambiamento, cosicche venga mostrato nella PR. Inoltre, ricordatevi di tenere aggiornato il vostro lavoro con il main esistente: ```bash git fetch upstream git merge upstream/main ``` In generale, tutte le domande che avrete riguardo al modello o l'implementazione dovranno essere fatte nella vostra PR e discusse/risolte nella PR stessa. In questa maniera, il team di Hugging Face sarร  sempre notificato quando farete commit di un nuovo codice o se avrete qualche domanda. ร‰ molto utile indicare al team di Hugging Face il codice a cui fate riferimento nella domanda, cosicche il team potra facilmente capire il problema o la domanda. Per fare questo andate sulla tab "Files changed", dove potrete vedere tutti i vostri cambiamenti al codice, andate sulla linea dove volete chiedere una domanda, e cliccate sul simbolo "+" per aggiungere un commento. Ogni volta che una domanda o problema รฉ stato risolto, cliccate sul bottone "Resolve". In questa stessa maniera, Hugging Face aprirร  domande o commenti nel rivedere il vostro codice. Mi raccomando, chiedete piรน domande possibili nella pagina della vostra PR. Se avete domande molto generali, non molto utili per il pubblico, siete liberi di chiedere al team Hugging Face direttamente su slack o email. **5. Adattare i codici per brand_new_bert** Per prima cosa, ci focalizzeremo sul modello e non sui tokenizer. Tutto il codice relative dovrebbe trovarsi in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` e `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. Ora potete finalmente cominciare il codice :). Il codice generato in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` avrร  sia la stessa architettura di BERT se รฉ un modello encoder-only o BART se รฉ encoder-decoder. A questo punto, ricordatevi cio che avete imparato all'inizio, riguardo agli aspetti teorici del modello: *In che maniera il modello che sto implmementando รฉ diverso da BERT o BART?*. 
Implementare questi cambi spesso vuol dire cambiare il layer *self-attention*, l'ordine dei layer di normalizzazione e cosรฌ via... Ancora una volta ripetiamo, รฉ molto utile vedere architetture simili di modelli gia esistenti in Transformers per avere un'idea migliore su come implementare il modello. **Notate** che a questo punto non dovete avere subito un codice tutto corretto o pulito. Piuttosto, รฉ consigliato cominciare con un codice poco pulito, con copia-incolla del codice originale in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` fino a che non avrete tutto il codice necessario. In base alla nostra esperienza, รฉ molto meglio aggiungere una prima bozza del codice richiesto e poi correggere e migliorare iterativamente. L'unica cosa essenziale che deve funzionare qui รฉ la seguente instanza: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` Questo comando creerร  un modello con i parametri di default definiti in `BrandNewBergConfig()` e weights random. Questo garantisce che `init()` di tutte le componenti funzioni correttamente. **6. Scrivere uno script di conversione** Il prossimo step รฉ scrivere uno script per convertire il checkpoint che avete usato per fare debug su *brand_new_berts* nella repo originale in un checkpoint per la nuova implementazione di *brand_new_bert* in ๐Ÿค— Transformers. Non รฉ consigliato scrivere lo script di conversione da zero, ma piuttosto cercate e guardate script gia esistenti in ๐Ÿค— Transformers, cosรฌ da trovarne uno simile al vostro modello. Di solito basta fare una copia di uno script gia esistente e adattarlo al vostro caso. Non esistate a chiedre al team di Hugging Face a riguardo. - Se state convertendo un modello da TensorFlow a PyTorch, un ottimo inizio รฉ vedere [questo script di conversione per BERT](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91) - Se state convertendo un modello da PyTorch a PyTorch, [lo script di conversione di BART puรฒ esservi utile](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) Qui di seguito spiegheremo come i modelli PyTorch salvano i weights per ogni layer e come i nomi dei layer sono definiti. In PyTorch, il nomde del layer รฉ definito dal nome della class attribute che date al layer. Definiamo un modello dummy in PyTorch, chiamato `SimpleModel`: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` Ora possiamo creare un'instanza di questa definizione di modo da inizializzare a random weights: `dense`, `intermediate`, `layer_norm`. Possiamo usare print per vedere l'architettura del modello: ```python model = SimpleModel() print(model) ``` Da cui si ottiene: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` Si puรฒ vedere come i nomi dei layers siano definiti dal nome della class attribute in PyTorch. 
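In uno script di conversione è spesso utile elencare tutti i parametri con i rispettivi nomi e dimensioni, ad esempio tramite `named_parameters`; un piccolo sketch basato sul `SimpleModel` definito sopra:

```python
# Elenca nome e shape di ogni peso del modello dummy definito sopra
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# dense.weight (10, 10)
# dense.bias (10,)
# intermediate.weight (10, 10)
# intermediate.bias (10,)
# layer_norm.weight (10,)
# layer_norm.bias (10,)
```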
I valori dei weights di uno specifico layer possono essere visualizzati: ```python print(model.dense.weight.data) ``` ad esempio: ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` Nello script di conversione, dovreste riempire quei valori di inizializzazione random con gli stessi weights del corrispondente layer nel checkpoint. *Per esempio* ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` Cosรฌ facendo, dovete verificare che ogni inizializzazione random di un peso del modello PyTorch e il suo corrispondente peso nel pretrained checkpoint siano esattamente gli stessi e uguali in **dimensione/shape e nome**. Per fare questo, รฉ **necessario** aggiungere un `assert` per la dimensione/shape e nome: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` Inoltre, dovrete fare il print sia dei nomi che dei weights per essere sicuri che siano gli stessi: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` Se la dimensione o il nome non sono uguali, probabilmente avete sbagliato ad assegnare il peso nel checkpoint o nel layer costrutture di ๐Ÿค— Transformers. Una dimensione sbagliata puรฒ essere dovuta ad un errore nei parameteri in `BrandNewBertConfig()`. Tuttavia, puรฒ essere anche che l'implementazione del layer in PyTorch richieda di fare una transposizione della matrice dei weights. Infine, controllate **tutti** che tutti i weights inizializzati e fate print di tutti i weights del checkpoint che non sono stati usati per l'inizializzazione, di modo da essere sicuri che il modello sia correttamente convertito. ร‰ normale che ci siano errori nel test di conversione, fai per un errore in `BrandNewBertConfig()`, o un errore nell'architettura in ๐Ÿค— Transformers, o un bug in `init()`. Questo step dev'essere fatto tramite iterazioni fino a che non si raggiungano gli stessi valori per i weights. Una volta che il checkpoint รฉ stato correttamente caricato in ๐Ÿค— Transformers, potete salvare il modello in una cartella di vostra scelta `/path/to/converted/checkpoint/folder` che contenga sia `pytorch_model.bin` che `config.json`: ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. 
Implementare il forward pass** Una volta che i weights pretrained sono stati correttamente caricati in ๐Ÿค— Transformers, dovrete assicurarvi che il forward pass sia correttamente implementato. [Qui](#provare-un-pretrained-checkpoint-usando-la-repo-originale), avete give creato e provato uno script che testi il forward pass del modello usando la repo originaria. Ora dovrete fare lo stesso con uno script analogo usando l'implementazione in ๐Ÿค— Transformers anzichรฉ l'originale. Piu o meno lo script dovrebbe essere: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` Di solito l'output da ๐Ÿค— Transformers non รฉ uguale uguale all'output originario, sopratto la prima volta. Non vi abbattete - รฉ normale! Prima di tutto assicuratevi che non ci siano errori o che non vengano segnalati degli errori nella forward pass. Spesso capita che ci siano dimensioni sbagliate o data type sbagliati, *ad esempio* `torch.long` anziche `torch.float32`. Non esistate a chiedere al team Hugging Face! Nella parte finale assicuratevi che l'implementazione ๐Ÿค— Transformers funzioni correttamente cosi da testare che gli output siano equivalenti a una precisione di `1e-3`. Controllate che `outputs.shape` siano le stesse tra ๐Ÿค— Transformers e l'implementazione originaria. Poi, controllate che i valori in output siano identici. Questa รฉ sicuramente la parte piรน difficile, qui una serie di errori comuni quando gli output non sono uguali: - Alcuni layers non sono stati aggiunti, *ad esempio* un *activation* layer non รฉ stato aggiunto, o ci si รฉ scordati di una connessione - La matrice del word embedding non รฉ stata ripareggiata - Ci sono degli embeddings posizionali sbagliati perchรฉ l'implementazione originaria ha un offset - Il dropout รฉ in azione durante il forward pass. Per sistemare questo errore controllate che *model.training = False* e che il dropout non sia stato attivato nel forward pass, * per esempio * passate *self.training* a [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout) La miglior maniera per sistemare il problema รฉ di vedere all'implementazione originaria del forward pass e in ๐Ÿค— Transformers fianco a fianco e vedere se ci sono delle differenze. In teoria, con debug e print degli output intermedie di entrambe le implementazioni nel forward pass nell'esatta posizione del network dovrebbe aiutarvi a vedere dove ci sono differenze tra i due frameworks. Come prima mossa controllate che `input_ids` siano identici in entrambi gli scripts. Da lรฌ andate fino all'ultimo layer. Potrete notare una differenza tra le due implementazioni a quel punto. Una volta che lo stesso output รฉ stato ragguingi, verificate gli output con `torch.allclose(original_output, output, atol=1e-3)`. A questo punto se รฉ tutto a posto: complimenti! Le parti seguenti saranno una passeggiata ๐Ÿ˜Š. **8. Aggiungere i test necessari per il modello** A questo punto avete aggiunto con successo il vostro nuovo modello. Tuttavia, รฉ molto probabile che il modello non sia del tutto ok con il design richiesto. Per essere sicuri che l'implementazione sia consona e compatibile con ๐Ÿค— Transformers รฉ necessario implementare dei tests. Il Cookiecutter dovrebbe fornire automaticamente dei file per test per il vostro modello, di solito nella folder `tests/test_modeling_brand_new_bert.py`. 
Provate questo per verificare l'ok nei test piu comuni: ```bash pytest tests/test_modeling_brand_new_bert.py ``` Una volta sistemati i test comuni, bisogna assicurarsi che il vostro lavoro sia correttamente testato cosicchรจ: - a) La community puo capire in maniera semplice il vostro lavoro controllando tests specifici del modello *brand_new_bert*, - b) Implementazioni future del vostro modello non rompano alcune feature importante del modello. Per prima cosa agguingete dei test d'integrazione. Questi sono essenziali perche fanno la stessa funzione degli scripts di debug usati precedentemente. Un template per questi tests esiste gia nel Cookiecutter ed รฉ sotto il nome di `BrandNewBertModelIntegrationTests`, voi dovrete solo completarlo. Una volta che questi tests sono OK, provate: ```bash RUN_SLOW=1 pytest -sv tests/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Nel caso siate su Windows, sostituite `RUN_SLOW=1` con `SET RUN_SLOW=1` </Tip> Di seguito, tutte le features che sono utili e necessarire per *brand_new_bert* devono essere testate in test separati, contenuti in `BrandNewBertModelTester`/ `BrandNewBertModelTest`. spesso la gente si scorda questi test, ma ricordate che sono utili per: - Aiuta gli utenti a capire il vostro codice meglio, richiamando l'attenzione su queste nuove features - Developers e contributors futuri potranno velocemente testare nuove implementazioni del modello testanto questi casi speciali. **9. Implementare il tokenizer** A questo punto avremo bisogno un tokenizer per *brand_new_bert*. Di solito il tokenizer รฉ uguale ad altri modelli in ๐Ÿค— Transformers. ร‰ importante che troviate il file con il tokenizer originale e che lo carichiate in ๐Ÿค— Transformers. Per controllare che il tokenizer funzioni in modo corretto, create uno script nella repo originaria che riceva come input una stringa e ritorni gli `input_ids`. Piu o meno questo potrebbe essere il codice: ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` Potrebbe richiedere un po' di tempo, ma guardate ancora alla repo originaria per trovare la funzione corretta del tokenizer. A volte capita di dover riscrivere il tokenizer nella repo originaria, di modo da avere come output gli `input_ids`. A quel punto uno script analogo รฉ necessario in ๐Ÿค— Transformers: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` Una volta che `input_ids` sono uguali, bisogna aggiungere un test per il tokenizer. Il file test per tokenizer di *brand_new_brand* dovrebbe avere un paio di hard-coded test d'integrazione. **10. Test end-to-end** Ora che avete il tokenizer, dovrete aggiungere dei test d'integrazione per l'intero workflow in `tests/test_modeling_brand_new_bert.py` in ๐Ÿค— Transformer. Questi test devono mostrare che un significante campione text-to-text funzioni come ci si aspetta nell'implementazione di ๐Ÿค— Transformers. *Per esempio* potreste usare dei source-to-target-translation, o un sommario di un articolo, o un domanda-risposta e cosi via. 
Se nessuno dei checkpoints รฉ stato ultra parametrizzato per task simili, allora i tests per il modello sono piu che sufficienti. Nello step finale dovete assicurarvi che il modello sia totalmente funzionale, e consigliamo anche di provare a testare su GPU. Puo succedere che ci si scordi un `.to(self.device)` ad esempio. Se non avete accesso a GPU, il team Hugging Face puo provvedere a testare questo aspetto per voi. **11. Aggiungere una Docstring** Siete quasi alla fine! L'ultima cosa rimasta รฉ avere una bella docstring e una pagina doc. Il Cookiecutter dovrebbe provvedere giร  un template chiamato `docs/source/model_doc/brand_new_bert.rst`, che dovrete compilare. La prima cosa che un utente farร  per usare il vostro modello sarร  dare una bella lettura al doc. Quindi proponete una documentazione chiara e concisa. ร‰ molto utile per la community avere anche delle *Tips* per mostrare come il modello puo' essere usato. Non esitate a chiedere a Hugging Face riguardo alle docstirng. Quindi, assicuratevi che la docstring sia stata aggiunta a `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`. Assicuratevi che la docstring sia corretta e che includa tutti i necessari input e output. Abbiamo una guida dettagliata per scrivere la documentazione e docstring. **Rifattorizzare il codice** Perfetto! Ora che abbiamo tutto per *brand_new_bert* controllate che lo stile del codice sia ok: ```bash make style ``` E che il codice passi i quality check: ```bash make quality ``` A volte capita che manchino delle informazioninella docstring o alcuni nomi sbagliati, questo farร  fallire i tests sopra. Ripetiamo: chiedete pure a Hugging Face, saremo lieti di aiutarvi. Per ultimo, fare del refactoring del codice una volta che รฉ stato creato. Avete finito con il codice, congratulazioni! ๐ŸŽ‰ Siete fantasticiiiiiii! ๐Ÿ˜Ž **12. Caricare il modello sul model hub** In questa ultima parte dovrete convertire e caricare il modello, con tutti i checkpoints, nel model hub e aggiungere una model card per ogni checkpoint caricato. Leggete la nostra guida [Model sharing and uploading Page](model_sharing) per avere familiaritร  con l'hub. Di solito in questa parte lavorate a fianco di Hugging face per decidere un nome che sia ok per ogni checkpoint, per ottenere i permessi necessari per caricare il modello nell'organizzazione dell'autore di *brand_new_bert*. Il metodo `push_to_hub`, presente in tutti i modelli `transformers`, รฉ una maniera rapida e indolore per caricare il vostro checkpoint sull'hub: ```python brand_new_bert.push_to_hub( repo_path_or_name="brand_new_bert", # Uncomment the following line to push to an organization # organization="<ORGANIZATION>", commit_message="Add model", use_temp_dir=True, ) ``` Vale la pena spendere un po' di tempo per creare una model card ad-hoc per ogni checkpoint. Le model cards dovrebbero suggerire le caratteristiche specifiche del checkpoint, *per esempio* su che dataset il checkpoint รฉ stato pretrained o fine-tuned. O che su che genere di task il modello lavoro? E anche buona pratica includere del codice su come usare il modello correttamente. **13. (Opzionale) Aggiungere un notebook** ร‰ molto utile aggiungere un notebook, che dimostri in dettaglio come *brand_new_bert* si utilizzi per fare inferenza e/o fine-tuned su specifiche task. Non รฉ una cosa obbligatoria da avere nella vostra PR, ma รฉ molto utile per la community. **14. Sottomettere la PR** L'ultimissimo step! Ovvero il merge della PR nel main. 
Di solito il team Hugging Face a questo punto vi avrà già aiutato, ma è comunque una buona idea prendersi un po' di tempo per ripulire la descrizione della PR e i commenti nel codice.

### Condividete il vostro lavoro!!

È ora di ricevere un po' di riconoscimento dalla comunità per il vostro lavoro! Caricare e implementare un nuovo modello è un grandissimo contributo per Transformers e per l'intera community NLP. Il vostro codice e la conversione dei modelli pre-trained saranno sicuramente utilizzati da centinaia o migliaia di sviluppatori e ricercatori. Siate fieri e orgogliosi di condividere il vostro traguardo con l'intera community :)

**Avete creato un altro modello che è super facile da usare per tutti quanti nella community! 🤯**
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/custom_models.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Condividere modelli personalizzati La libreria ๐Ÿค— Transformers รจ studiata per essere facilmente estendibile. Il codice di ogni modello รจ interamente situato in una sottocartella del repository senza alcuna astrazione, perciรฒ puoi facilmente copiare il file di un modello e modificarlo in base ai tuoi bisogni. Se stai scrivendo un nuovo modello, potrebbe essere piรน semplice iniziare da zero. In questo tutorial, ti mostreremo come scrivere un modello personalizzato e la sua configurazione in modo che possa essere utilizzato allโ€™interno di Transformers, e come condividerlo con la community (assieme al relativo codice) cosรฌ che tutte le persone possano usarlo, anche se non presente nella libreria ๐Ÿค— Transformers. Illustriamo tutto questo su un modello ResNet, avvolgendo la classe ResNet della [libreria timm](https://github.com/rwightman/pytorch-image-models) in un [`PreTrainedModel`]. ## Scrivere una configurazione personalizzata Prima di iniziare a lavorare al modello, scriviamone la configurazione. La configurazione di un modello รจ un oggetto che contiene tutte le informazioni necessarie per la build del modello. Come vedremo nella prossima sezione, il modello puรฒ soltanto essere inizializzato tramite `config`, per cui dovremo rendere tale oggetto piรน completo possibile. Nel nostro esempio, prenderemo un paio di argomenti della classe ResNet che potremmo voler modificare. Configurazioni differenti ci daranno quindi i differenti possibili tipi di ResNet. Salveremo poi questi argomenti, dopo averne controllato la validitร . 
```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` Le tre cose piรน importanti da ricordare quando scrivi le tue configurazioni sono le seguenti: - Devi ereditare da `Pretrainedconfig`, - Il metodo `__init__` del tuo `Pretrainedconfig` deve accettare i kwargs, - I `kwargs` devono essere passati alla superclass `__init__` Lโ€™ereditร  รจ importante per assicurarsi di ottenere tutte le funzionalitร  della libreria ๐Ÿค— transformers, mentre gli altri due vincoli derivano dal fatto che un `Pretrainedconfig` ha piรน campi di quelli che stai settando. Quando ricarichi una config da un metodo `from_pretrained`, questi campi devono essere accettati dalla tua config e poi inviati alla superclasse. Definire un `model_type` per la tua configurazione (qua `model_type = โ€œresnetโ€`) non รจ obbligatorio, a meno che tu non voglia registrare il modello con le classi Auto (vedi l'ultima sezione). Una volta completato, puoi facilmente creare e salvare la tua configurazione come faresti con ogni altra configurazione di modelli della libreria. Ecco come possiamo creare la config di un resnet50d e salvarlo: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` Questo salverร  un file chiamato `config.json` all'interno della cartella `custom-resnet`. Potrai poi ricaricare la tua config con il metodo `from_pretrained`. ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` Puoi anche usare qualunque altro metodo della classe [`PretrainedConfig`], come [`~PretrainedConfig.push_to_hub`] per caricare direttamente la tua configurazione nell'hub. ## Scrivere un modello personalizzato Ora che abbiamo la nostra configurazione ResNet, possiamo continuare a scrivere il modello. In realtร , ne scriveremo due: uno che estrae le features nascoste da una batch di immagini (come [`BertModel`]) e uno che รจ utilizzabile per la classificazione di immagini (come [`BertModelForSequenceClassification`]). Come abbiamo menzionato in precedenza, scriveremo soltanto un wrapper del modello, per mantenerlo semplice ai fini di questo esempio. L'unica cosa che dobbiamo fare prima di scrivere questa classe รจ una mappatura fra i tipi di blocco e le vere classi dei blocchi. Successivamente il modello รจ definito tramite la configurazione, passando tutto quanto alla classe `ResNet`. 
```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` Per il modello che classificherร  le immagini, cambiamo soltanto il metodo forward: ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` Nota come, in entrambi i casi, ereditiamo da `PreTrainedModel` e chiamiamo l'inizializzazione della superclasse con il metodo `config` (un po' come quando scrivi un normale `torch.nn.Module`). La riga che imposta la `config_class` non รจ obbligatoria, a meno che tu non voglia registrare il modello con le classi Auto (vedi l'ultima sezione). <Tip> Se il tuo modello รจ molto simile a un modello all'interno della libreria, puoi ri-usare la stessa configurazione di quel modello. </Tip> Puoi fare in modo che il tuo modello restituisca in output qualunque cosa tu voglia, ma far restituire un dizionario come abbiamo fatto per `ResnetModelForImageClassification`, con la funzione di perdita inclusa quando vengono passate le labels, renderร  il tuo modello direttamente utilizzabile all'interno della classe [`Trainer`]. Utilizzare altri formati di output va bene se hai in progetto di utilizzare un tuo loop di allenamento, o se utilizzerai un'altra libreria per l'addestramento. Ora che abbiamo la classe del nostro modello, creiamone uno: ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` Ribadiamo, puoi usare qualunque metodo dei [`PreTrainedModel`], come [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`]. Utilizzeremo quest'ultimo nella prossima sezione, e vedremo come caricare i pesi del modello assieme al codice del modello stesso. Ma prima, carichiamo alcuni pesi pre-allenati all'interno del nostro modello. Nel tuo caso specifico, probabilmente allenerai il tuo modello sui tuoi dati. Per velocizzare in questo tutorial, utilizzeremo la versione pre-allenata del resnet50d. Dato che il nostro modello รจ soltanto un wrapper attorno a quel modello, sarร  facile trasferirne i pesi: ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Vediamo adesso come assicurarci che quando facciamo [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`], il codice del modello venga salvato. 
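Prima di passare al salvataggio, può essere utile una rapida verifica (solo indicativa) che il wrapper produca output con la forma attesa:

```py
import torch

# batch fittizio di 1 immagine RGB 224x224
dummy_input = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    outputs = resnet50d(dummy_input)

print(outputs["logits"].shape)  # ci aspettiamo torch.Size([1, 1000])
```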
## Inviare il codice all'Hub <Tip warning={true}> Questa API รจ sperimentale e potrebbe avere alcuni cambiamenti nei prossimi rilasci. </Tip> Innanzitutto, assicurati che il tuo modello sia completamente definito in un file `.py`. Puรฒ sfruttare import relativi ad altri file, purchรจ questi siano nella stessa directory (non supportiamo ancora sotto-moduli per questa funzionalitร ). Per questo esempio, definiremo un file `modeling_resnet.py` e un file `configuration_resnet.py` in una cartella dell'attuale working directory chiamata `resnet_model`. Il file configuration contiene il codice per `ResnetConfig` e il file modeling contiene il codice di `ResnetModel` e `ResnetModelForImageClassification`. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` Il file `__init__.py` puรฒ essere vuoto, serve solo perchรจ Python capisca che `resnet_model` puรฒ essere utilizzato come un modulo. <Tip warning={true}> Se stai copiando i file relativi alla modellazione della libreria, dovrai sostituire tutti gli import relativi in cima al file con import del pacchetto `transformers`. </Tip> Nota che puoi ri-utilizzare (o usare come sottoclassi) un modello/configurazione esistente. Per condividere il tuo modello con la community, segui questi passi: prima importa il modello ResNet e la sua configurazione dai nuovi file creati: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` Dopodichรจ dovrai dire alla libreria che vuoi copiare i file con il codice di quegli oggetti quando utilizzi il metodo `save_pretrained` e registrarli in modo corretto con una Auto classe (specialmente per i modelli). Utilizza semplicemente: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` Nota che non c'รจ bisogno di specificare una Auto classe per la configurazione (c'รจ solo una Auto classe per le configurazioni, [`AutoConfig`], ma รจ diversa per i modelli). Il tuo modello personalizato potrebbe essere utilizzato per diverse tasks, per cui devi specificare quale delle classi Auto รจ quella corretta per il tuo modello. Successivamente, creiamo i modelli e la config come abbiamo fatto in precedenza: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Adesso, per inviare il modello all'Hub, assicurati di aver effettuato l'accesso. Lancia dal tuo terminale: ```bash huggingface-cli login ``` O da un notebook: ```py from huggingface_hub import notebook_login notebook_login() ``` Potrai poi inviare il tutto sul tuo profilo (o di un'organizzazione di cui fai parte) in questo modo: ```py resnet50d.push_to_hub("custom-resnet50d") ``` Oltre ai pesi del modello e alla configurazione in formato json, questo ha anche copiato i file `.py` modeling e configuration all'interno della cartella `custom-resnet50d` e ha caricato i risultati sull'Hub. Puoi controllare i risultati in questa [model repo](https://huggingface.co/sgugger/custom-resnet50d). Puoi controllare il tutorial di condivisione [tutorial di condivisione](model_sharing) per piรน informazioni sul metodo con cui inviare all'Hub. 
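Se vuoi verificare in locale che i file con il codice vengano effettivamente copiati, puoi anche salvare il modello in una cartella e controllarne il contenuto (sketch indicativo, con un nome di cartella locale di fantasia):

```py
import os

resnet50d.save_pretrained("custom-resnet50d-local")
print(sorted(os.listdir("custom-resnet50d-local")))
# oltre a config.json e al file dei pesi, dovremmo trovare anche
# configuration_resnet.py e modeling_resnet.py
```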
## Usare un modello con codice personalizzato Puoi usare ogni configurazione, modello o tokenizer con file di codice personalizzati nella sua repository con le classi Auto e il metodo `from_pretrained`. Tutti i files e il codice caricati sull'Hub sono scansionati da malware (fai riferimento alla documentazione [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) per piรน informazioni), ma dovresti comunque assicurarti dell'affidabilitร  del codice e dell'autore per evitare di eseguire codice dannoso sulla tua macchina. Imposta `trust_remote_code=True` per usare un modello con codice personalizzato: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` Inoltre, raccomandiamo fortemente di passare un hash del commit come `revision` per assicurarti che le autrici o gli autori del modello non abbiano modificato il codice con alcune nuove righe dannose (a meno che non ti fidi completamente della fonte): ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Nota che quando cerchi la storia dei commit della repo del modello sull'Hub, c'รจ un bottone con cui facilmente copiare il commit hash di ciascun commit. ## Registrare un modello con codice personalizzato nelle classi Auto Se stai scrivendo una libreria che estende ๐Ÿค— Transformers, potresti voler estendere le classi Auto per includere il tuo modello. Questo รจ diverso dall'inviare codice nell'Hub: gli utenti dovranno importare la tua libreria per ottenere il modello personalizzato (anzichรจ scaricare automaticamente il modello dall'Hub). Finchรจ il tuo file di configurazione ha un attributo `model_type` diverso dai model types esistenti, e finchรจ le tue classi modello hanno i corretti attributi `config_class`, potrai semplicemente aggiungerli alle classi Auto come segue: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` Nota che il primo argomento utilizzato quando registri la configurazione di un modello personalizzato con [`AutoConfig`] deve corrispondere al `model_type` della tua configurazione personalizzata, ed il primo argomento utilizzato quando registri i tuoi modelli personalizzati in una qualunque classe Auto del modello deve corrispondere alla `config_class` di quei modelli.
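Dopo la registrazione, le classi Auto dovrebbero risolvere automaticamente le tue classi personalizzate. Per esempio, uno sketch che riutilizza la cartella `custom-resnet` salvata in precedenza:

```py
from transformers import AutoConfig, AutoModelForImageClassification

config = AutoConfig.from_pretrained("custom-resnet")  # restituisce una ResnetConfig
model = AutoModelForImageClassification.from_config(config)  # istanzia ResnetModelForImageClassification
```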
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_hardware.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hardware ottimizzato per l'addestramento L'hardware utilizzato per eseguire l'addestramento del modello e l'inferenza puรฒ avere un grande effetto sulle prestazioni. Per un analisi approfondita delle GPUs, assicurati di dare un'occhiata all'eccellente [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/) di Tim Dettmer. Diamo un'occhiata ad alcuni consigli pratici per la configurazione della GPU. ## GPU Quando si addestrano modelli piรน grandi ci sono essenzialmente tre opzioni: - GPUs piu' grandi - Piu' GPUs - Piu' CPU e piu' NVMe (scaricato da [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support)) Iniziamo dal caso in cui ci sia una singola GPU. ### Potenza e Raffreddamento Se hai acquistato una costosa GPU di fascia alta, assicurati di darle la potenza corretta e un raffreddamento sufficiente. **Potenza**: Alcune schede GPU consumer di fascia alta hanno 2 e talvolta 3 prese di alimentazione PCI-E a 8 pin. Assicurati di avere tanti cavi PCI-E a 8 pin indipendenti da 12 V collegati alla scheda quante sono le prese. Non utilizzare le 2 fessure a un'estremitร  dello stesso cavo (noto anche come cavo a spirale). Cioรจ se hai 2 prese sulla GPU, vuoi 2 cavi PCI-E a 8 pin che vanno dall'alimentatore alla scheda e non uno che abbia 2 connettori PCI-E a 8 pin alla fine! In caso contrario, non otterrai tutte le prestazioni ufficiali. Ciascun cavo di alimentazione PCI-E a 8 pin deve essere collegato a una guida da 12 V sul lato dell'alimentatore e puรฒ fornire fino a 150 W di potenza. Alcune altre schede possono utilizzare connettori PCI-E a 12 pin e questi possono fornire fino a 500-600 W di potenza. Le schede di fascia bassa possono utilizzare connettori a 6 pin, che forniscono fino a 75 W di potenza. Inoltre vuoi un alimentatore (PSU) di fascia alta che abbia una tensione stabile. Alcuni PSU di qualitร  inferiore potrebbero non fornire alla scheda la tensione stabile di cui ha bisogno per funzionare al massimo. E ovviamente l'alimentatore deve avere abbastanza Watt inutilizzati per alimentare la scheda. **Raffreddamento**: Quando una GPU si surriscalda, inizierร  a rallentare e non fornirร  le prestazioni mssimali e potrebbe persino spegnersi se diventasse troppo calda. รˆ difficile dire l'esatta temperatura migliore a cui aspirare quando una GPU รจ molto caricata, ma probabilmente qualsiasi cosa al di sotto di +80ยฐC va bene, ma piรน bassa รจ meglio - forse 70-75ยฐC รจ un intervallo eccellente in cui trovarsi. รˆ probabile che il rallentamento inizi a circa 84-90ยฐC. Ma oltre alla limitazione delle prestazioni, una temperatura molto elevata prolungata รจ probabile che riduca la durata di una GPU. Diamo quindi un'occhiata a uno degli aspetti piรน importanti quando si hanno piรน GPU: la connettivitร . 
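A proposito di temperature: se vuoi monitorarle direttamente da Python durante l'addestramento, ecco uno sketch che assume installato il pacchetto `pynvml` (i binding Python di NVML, la stessa libreria usata da `nvidia-smi`):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # prima GPU

temperatura = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # in gradi Celsius
potenza = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # in Watt

print(f"GPU 0: {temperatura} C, {potenza:.0f} W")
pynvml.nvmlShutdown()
```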
### Connettivitร  multi-GPU Se utilizzi piรน GPU, il modo in cui le schede sono interconnesse puรฒ avere un enorme impatto sul tempo totale di allenamento. Se le GPU si trovano sullo stesso nodo fisico, puoi eseguire: ``` nvidia-smi topo -m ``` e ti dirร  come sono interconnesse le GPU. Su una macchina con doppia GPU e collegata a NVLink, molto probabilmente vedrai qualcosa del tipo: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` su una macchina diversa senza NVLink potremmo vedere: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` Il rapporto include questa legenda: ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` Quindi il primo rapporto `NV2` ci dice che le GPU sono interconnesse con 2 NVLinks e nel secondo report `PHB` abbiamo una tipica configurazione PCIe+Bridge a livello di consumatore. Controlla che tipo di connettivitร  hai sulla tua configurazione. Alcuni di questi renderanno la comunicazione tra le carte piรน veloce (es. NVLink), altri piรน lenta (es. PHB). A seconda del tipo di soluzione di scalabilitร  utilizzata, la velocitร  di connettivitร  potrebbe avere un impatto maggiore o minore. Se le GPU devono sincronizzarsi raramente, come in DDP, l'impatto di una connessione piรน lenta sarร  meno significativo. Se le GPU devono scambiarsi messaggi spesso, come in ZeRO-DP, una connettivitร  piรน veloce diventa estremamente importante per ottenere un addestramento piรน veloce. #### NVlink [NVLink](https://en.wikipedia.org/wiki/NVLink) รจ un collegamento di comunicazione a corto raggio multilinea seriale basato su cavo sviluppato da Nvidia. Ogni nuova generazione fornisce una larghezza di banda piรน veloce, ad es. ecco una citazione da [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf): > Third-Generation NVLinkยฎ > GA102 GPUs utilize NVIDIAโ€™s third-generation NVLink interface, which includes four x4 links, > with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four > links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth > between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. > (Note that 3-Way and 4-Way SLI configurations are not supported.) Quindi piรน `X` si ottiene nel rapporto di `NVX` nell'output di `nvidia-smi topo -m`, meglio รจ. La generazione dipenderร  dall'architettura della tua GPU. Confrontiamo l'esecuzione di un training del modello di linguaggio gpt2 su un piccolo campione di wikitext I risultati sono: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | Puoi vedere che NVLink completa l'addestramento circa il 23% piรน velocemente. Nel secondo benchmark utilizziamo `NCCL_P2P_DISABLE=1` per dire alle GPU di non utilizzare NVLink. 
Ecco il codice benchmark completo e gli output: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`) Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
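In aggiunta a `nvidia-smi topo -m`, puoi verificare da Python se due GPU possono comunicare direttamente peer-to-peer (cosa che in genere riflette la presenza di NVLink o comunque di un buon collegamento); uno sketch:

```python
import torch

if torch.cuda.device_count() >= 2:
    print("P2P fra GPU 0 e GPU 1:", torch.cuda.can_device_access_peer(0, 1))
```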
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_cpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Addestramento effciente su multiple CPU Quando l'addestramento su una singola CPU รจ troppo lento, possiamo usare CPU multiple. Quasta guida si concentra su DDP basato su PyTorch abilitando l'addetramento distribuito su CPU in maniera efficiente. ## Intelยฎ oneCCL Bindings per PyTorch [Intelยฎ oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) รจ una libreria per l'addestramento efficiente del deep learning in distribuito e implementa collettivi come allreduce, allgather, alltoall. Per maggiori informazioni su oneCCL, fai riferimento a [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) e [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html). Il modulo `oneccl_bindings_for_pytorch` (`torch_ccl` precedentemente alla versione 1.12) implementa PyTorch C10D ProcessGroup API e puรฒ essere caricato dinamicamente com external ProcessGroup e funziona solo su piattaforma Linux al momento. Qui trovi informazioni piรน dettagliate per [oneccl_bind_pt](https://github.com/intel/torch-ccl). ### Intelยฎ oneCCL Bindings per l'installazione PyTorch: I file wheel sono disponibili per le seguenti versioni di Python: | Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | | :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | | 1.13.0 | | โˆš | โˆš | โˆš | โˆš | | 1.12.100 | | โˆš | โˆš | โˆš | โˆš | | 1.12.0 | | โˆš | โˆš | โˆš | โˆš | | 1.11.0 | | โˆš | โˆš | โˆš | โˆš | | 1.10.0 | โˆš | โˆš | โˆš | โˆš | | ```bash pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu ``` dove `{pytorch_version}` deve essere la tua versione di PyTorch, per l'stanza 1.13.0. Verifica altri approcci per [oneccl_bind_pt installation](https://github.com/intel/torch-ccl). Le versioni di oneCCL e PyTorch devono combaciare. <Tip warning={true}> oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0) PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100 </Tip> ## Intelยฎ MPI library Usa questa implementazione basata su standard MPI per fornire una architettura flessibile, efficiente, scalabile su cluster per Intelยฎ. Questo componente รจ parte di Intelยฎ oneAPI HPC Toolkit. oneccl_bindings_for_pytorch รจ installato insieme al set di strumenti MPI. Necessitร  di reperire l'ambiente prima di utilizzarlo. 
per Intelยฎ oneCCL >= 1.12.0 ```bash oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` per Intelยฎ oneCCL con versione < 1.12.0 ```bash torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### Installazione IPEX: IPEX fornisce ottimizzazioni delle prestazioni per l'addestramento della CPU sia con Float32 che con BFloat16; puoi fare riferimento a [single CPU section](./perf_train_cpu). Il seguente "Utilizzo in Trainer" prende come esempio mpirun nella libreria Intelยฎ MPI. ## Utilizzo in Trainer Per abilitare l'addestramento distribuito multi CPU nel Trainer con il ccl backend, gli utenti devono aggiungere **`--ddp_backend ccl`** negli argomenti del comando. Vediamo un esempio per il [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) Il seguente comando abilita due processi sul nodo Xeon, con un processo in esecuzione per ogni socket. Le variabili OMP_NUM_THREADS/CCL_WORKER_COUNT possono essere impostate per una prestazione ottimale. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` Il seguente comando abilita l'addestramento per un totale di quattro processi su due Xeon (node0 e node1, prendendo node0 come processo principale), ppn (processes per node) รจ impostato a 2, on un processo in esecuzione per ogni socket. Le variabili OMP_NUM_THREADS/CCL_WORKER_COUNT possono essere impostate per una prestazione ottimale. In node0, รจ necessario creare un file di configurazione che contenga gli indirizzi IP di ciascun nodo (per esempio hostfile) e passare il percorso del file di configurazione come parametro. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` A questo punto, esegui il seguente comando nel nodo0 e **4DDP** sarร  abilitato in node0 e node1 con BF16 auto mixed precision: ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
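Per farsi un'idea di che cosa succede dietro le quinte con `--ddp_backend ccl`, ecco uno sketch semplificato dell'inizializzazione manuale del process group (assume le variabili d'ambiente tipiche di Intel® MPI, come `PMI_RANK` e `PMI_SIZE`):

```python
import os

import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  # l'import registra il backend "ccl"

# mpirun di Intel MPI imposta in genere PMI_RANK e PMI_SIZE;
# MASTER_ADDR e MASTER_PORT vanno esportati come negli esempi sopra
rank = int(os.environ.get("PMI_RANK", 0))
world_size = int(os.environ.get("PMI_SIZE", 1))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)
print(f"rank {dist.get_rank()} di {dist.get_world_size()}")
```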
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_infer_gpu_one.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inferenza efficiente su GPU singola Questo documento sarร  presto completato con informazioni su come effetture l'inferenza su una singola GPU. Nel frattempo รจ possibile consultare [la guida per l'addestramento su una singola GPU](perf_train_gpu_one) e [la guida per l'inferenza su CPU](perf_infer_cpu). ## `BetterTransformer` per l'inferenza piรน veloce Abbiamo recentemente integrato `BetterTransformer` per velocizzare l'inferenza su GPU per modelli di testo, immagini e audio. Per maggiori dettagli, consultare la documentazione su questa integrazione [qui](https://huggingface.co/docs/optimum/bettertransformer/overview). ## Integrazione di `bitsandbytes` per Int8 mixed-precision matrix decomposition <Tip> Nota che questa funzione puรฒ essere utilizzata anche nelle configurazioni multi GPU. </Tip> Dal paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), noi supportiamo l'integrazione di Hugging Face per tutti i modelli dell'Hub con poche righe di codice. Il metodo `nn.Linear` riduce la dimensione di 2 per i pesi `float16` e `bfloat16` e di 4 per i pesi `float32`, con un impatto quasi nullo sulla qualitร , operando sugli outlier in half-precision. ![HFxbitsandbytes.png](https://s3.amazonaws.com/moonup/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png) Il metodo Int8 mixed-precision matrix decomposition funziona separando la moltiplicazione tra matrici in due flussi: (1) una matrice di flusso di outlier di caratteristiche sistematiche moltiplicata in fp16, (2) in flusso regolare di moltiplicazione di matrici int8 (99,9%). Con questo metodo, รจ possibile effettutare inferenza int8 per modelli molto grandi senza degrado predittivo. Per maggiori dettagli sul metodo, consultare il [paper](https://arxiv.org/abs/2208.07339) o il nostro [blogpost sull'integrazione](https://huggingface.co/blog/hf-bitsandbytes-integration). ![MixedInt8.gif](https://s3.amazonaws.com/moonup/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif) Nota che รจ necessaria una GPU per eseguire modelli di tipo mixed-8bit, poichรฉ i kernel sono stati compilati solo per le GPU. Prima di utilizzare questa funzione, assicurarsi di disporre di memoria sufficiente sulla GPU per memorizzare un quarto del modello (o la metร  se i pesi del modello sono in mezza precisione). Di seguito sono riportate alcune note per aiutarvi a utilizzare questo modulo, oppure seguite le dimostrazioni su [Google colab](#colab-demos). ### Requisiti - Se si dispone di `bitsandbytes<0.37.0`, assicurarsi di eseguire su GPU NVIDIA che supportano tensor cores a 8 bit (Turing, Ampere o architetture piรน recenti - ad esempio T4, RTX20s RTX30s, A40-A100). Per `bitsandbytes>=0.37.0`, tutte le GPU dovrebbero essere supportate. - Installare la versione corretta di `bitsandbytes` eseguendo: `pip install bitsandbytes>=0.31.5`. 
- Installare `accelerate` eseguendo: `pip install accelerate>=0.12.0`

### Esecuzione di modelli mixed-Int8 - configurazione per singola GPU

Dopo aver installato le librerie necessarie, puoi caricare il tuo modello mixed 8-bit come segue:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

Per la generazione di testo, si consiglia di:

* utilizzare il metodo `generate()` del modello invece della funzione `pipeline()`. Sebbene l'inferenza sia possibile con la funzione `pipeline()`, essa non è ottimizzata per i modelli mixed-8bit e sarà più lenta rispetto all'uso del metodo `generate()`. Inoltre, alcune strategie di campionamento, come il nucleus sampling, non sono supportate dalla funzione `pipeline()` per i modelli mixed-8bit.
* collocare tutti gli input sullo stesso dispositivo del modello.

Ecco un semplice esempio:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

text = "Hello, my llama is cute"
inputs = tokenizer(text, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

### Esecuzione di modelli mixed-8bit - configurazione multi GPU

Puoi caricare il modello mixed-8bit su più GPU nel modo seguente (stesso comando della configurazione a GPU singola):

```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

Puoi però controllare la quantità di RAM GPU da allocare su ogni GPU usando `accelerate`. Utilizza l'argomento `max_memory` come segue:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```

In questo esempio, la prima GPU utilizzerà 1 GB di memoria e la seconda 2 GB.

### Colab demos

Con questo metodo è possibile fare inferenza su modelli che prima non era possibile eseguire su Google Colab.
Guardate la demo per l'esecuzione di T5-11b (42GB in fp32), utilizzando la quantizzazione a 8 bit su Google Colab:

[![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)

Oppure questa demo di BLOOM-3B:

[![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
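Per farsi un'idea del risparmio di memoria, si può stampare l'ingombro del modello caricato in 8 bit con `get_memory_footprint` (sketch indicativo):

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

print(f"Memoria occupata: {model_8bit.get_memory_footprint() / 1024**3:.2f} GB")
```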
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/quicktour.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Quick tour [[open-in-colab]] Entra in azione con ๐Ÿค— Transformers! Inizia utilizzando [`pipeline`] per un'inferenza veloce, carica un modello pre-allenato e un tokenizer con una [AutoClass](./model_doc/auto) per risolvere i tuoi compiti legati a testo, immagini o audio. <Tip> Tutti gli esempi di codice presenti in questa documentazione hanno un pulsante in alto a sinistra che permette di selezionare tra PyTorch e TensorFlow. Se questo non รจ presente, ci si aspetta che il codice funzioni per entrambi i backend senza alcun cambiamento. </Tip> ## Pipeline [`pipeline`] รจ il modo piรน semplice per utilizzare un modello pre-allenato per un dato compito. <Youtube id="tiZFewofSLM"/> La [`pipeline`] supporta molti compiti comuni: **Testo**: * Analisi del Sentimento (Sentiment Analysis, in inglese): classifica la polaritร  di un testo dato. * Generazione del Testo (Text Generation, in inglese): genera del testo a partire da un dato input. * Riconoscimento di Entitร  (Name Entity Recognition o NER, in inglese): etichetta ogni parola con l'entitร  che questa rappresenta (persona, data, luogo, ecc.). * Rispondere a Domande (Question answering, in inglese): estrae la risposta da un contesto, dato del contesto e una domanda. * Riempimento di Maschere (Fill-mask, in inglese): riempie gli spazi mancanti in un testo che ha parole mascherate. * Riassumere (Summarization, in inglese): genera una sintesi di una lunga sequenza di testo o di un documento. * Traduzione (Translation, in inglese): traduce un testo in un'altra lingua. * Estrazione di Caratteristiche (Feature Extraction, in inglese): crea un tensore che rappresenta un testo. **Immagini**: * Classificazione di Immagini (Image Classification, in inglese): classifica un'immagine. * Segmentazione di Immagini (Image Segmentation, in inglese): classifica ogni pixel di un'immagine. * Rilevazione di Oggetti (Object Detection, in inglese): rileva oggetti all'interno di un'immagine. **Audio**: * Classificazione di Audio (Audio Classification, in inglese): assegna un'etichetta ad un segmento di audio dato. * Riconoscimento Vocale Automatico (Automatic Speech Recognition o ASR, in inglese): trascrive il contenuto di un audio dato in un testo. <Tip> Per maggiori dettagli legati alla [`pipeline`] e ai compiti ad essa associati, fai riferimento alla documentazione [qui](./main_classes/pipelines). </Tip> ### Utilizzo della Pipeline Nel seguente esempio, utilizzerai la [`pipeline`] per l'analisi del sentimento. 
Installa le seguenti dipendenze se non lo hai giร  fatto: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importa [`pipeline`] e specifica il compito che vuoi completare: ```py >>> from transformers import pipeline >>> classificatore = pipeline("sentiment-analysis", model="MilaNLProc/feel-it-italian-sentiment") ``` La pipeline scarica e salva il [modello pre-allenato](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) e il tokenizer per l'analisi del sentimento. Se non avessimo scelto un modello, la pipeline ne avrebbe scelto uno di default. Ora puoi utilizzare il `classifier` sul tuo testo obiettivo: ```py >>> classificatore("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") [{'label': 'positive', 'score': 0.9997}] ``` Per piรน di una frase, passa una lista di frasi alla [`pipeline`] la quale restituirร  una lista di dizionari: ```py >>> risultati = classificatore( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."] ... ) >>> for risultato in risultati: ... print(f"etichetta: {risultato['label']}, con punteggio: {round(risultato['score'], 4)}") etichetta: positive, con punteggio: 0.9998 etichetta: negative, con punteggio: 0.9998 ``` La [`pipeline`] puรฒ anche iterare su un dataset intero. Inizia installando la libreria [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/): ```bash pip install datasets ``` Crea una [`pipeline`] con il compito che vuoi risolvere e con il modello che vuoi utilizzare. ```py >>> import torch >>> from transformers import pipeline >>> riconoscitore_vocale = pipeline( ... "automatic-speech-recognition", model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram" ... ) ``` Poi, carica un dataset (vedi ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) per maggiori dettagli) sul quale vuoi iterare. Per esempio, carichiamo il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="it-IT", split="train") # doctest: +IGNORE_RESULT ``` Dobbiamo assicurarci che la frequenza di campionamento del set di dati corrisponda alla frequenza di campionamento con cui รจ stato addestrato `radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram`. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=riconoscitore_vocale.feature_extractor.sampling_rate)) ``` I file audio vengono caricati automaticamente e ri-campionati quando chiamiamo la colonna "audio". Estraiamo i vettori delle forme d'onda grezze delle prime 4 osservazioni e passiamoli come lista alla pipeline: ```py >>> risultato = riconoscitore_vocale(dataset[:4]["audio"]) >>> print([d["text"] for d in risultato]) ['dovrei caricare dei soldi sul mio conto corrente', 'buongiorno e senza vorrei depositare denaro sul mio conto corrente come devo fare per cortesia', 'sรฌ salve vorrei depositare del denaro sul mio conto', 'e buon pomeriggio vorrei depositare dei soldi sul mio conto bancario volleo sapere come posso fare se e posso farlo online ed un altro conto o andandoo tramite bancomut'] ``` Per un dataset piรน grande dove gli input sono di dimensione maggiore (come nel parlato/audio o nella visione), dovrai passare un generatore al posto di una lista che carica tutti gli input in memoria. Guarda la [documentazione della pipeline](./main_classes/pipelines) per maggiori informazioni. 
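Per esempio, uno sketch di come passare un generatore invece di una lista, assumendo il dataset già ricampionato come sopra:

```py
>>> def flusso_audio():
...     for campione in dataset:
...         yield campione["audio"]["array"]  # un array alla volta, senza caricare tutto in memoria

>>> for risultato in riconoscitore_vocale(flusso_audio()):
...     print(risultato["text"])  # doctest: +SKIP
```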
### Utilizzare un altro modello e tokenizer nella pipeline La [`pipeline`] puรฒ ospitare qualsiasi modello del [Model Hub](https://huggingface.co/models), rendendo semplice l'adattamento della [`pipeline`] per altri casi d'uso. Per esempio, se si vuole un modello capace di trattare testo in francese, usa i tag presenti nel Model Hub in modo da filtrare per ottenere un modello appropriato. Il miglior risultato filtrato restituisce un modello multi-lingua [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned per l'analisi del sentimento. Ottimo, utilizziamo questo modello! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Usa [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `AutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Usa [`TFAutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `TFAutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Poi puoi specificare il modello e il tokenizer nella [`pipeline`], e applicare il `classifier` sul tuo testo obiettivo: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Se non riesci a trovare un modello per il tuo caso d'uso, dovrai fare fine-tuning di un modello pre-allenato sui tuoi dati. Dai un'occhiata al nostro tutorial [fine-tuning tutorial](./training) per imparare come. Infine, dopo che hai completato il fine-tuning del tuo modello pre-allenato, considera per favore di condividerlo (vedi il tutorial [qui](./model_sharing)) con la comunitร  sul Model Hub per democratizzare l'NLP! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Al suo interno, le classi [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] lavorano assieme per dare potere alla [`pipeline`]. Una [AutoClass](./model_doc/auto) รจ una scorciatoia che automaticamente recupera l'architettura di un modello pre-allenato a partire dal suo nome o path. Hai solo bisogno di selezionare la `AutoClass` appropriata per il tuo compito e il suo tokenizer associato con [`AutoTokenizer`]. Ritorniamo al nostro esempio e vediamo come puoi utilizzare la `AutoClass` per replicare i risultati della [`pipeline`]. ### AutoTokenizer Un tokenizer รจ responsabile dell'elaborazione del testo in modo da trasformarlo in un formato comprensibile dal modello. Per prima cosa, il tokenizer dividerร  il testo in parole chiamate *token*. Ci sono diverse regole che governano il processo di tokenizzazione, tra cui come dividere una parola e a quale livello (impara di piรน sulla tokenizzazione [qui](./tokenizer_summary)). 
La cosa piรน importante da ricordare comunque รจ che hai bisogno di inizializzare il tokenizer con lo stesso nome del modello in modo da assicurarti che stai utilizzando le stesse regole di tokenizzazione con cui il modello รจ stato pre-allenato. Carica un tokenizer con [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(nome_del_modello) ``` Dopodichรฉ, il tokenizer converte i token in numeri in modo da costruire un tensore come input del modello. Questo รจ conosciuto come il *vocabolario* del modello. Passa il tuo testo al tokenizer: ```py >>> encoding = tokenizer("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") >>> print(encoding) {'input_ids': [101, 56821, 10132, 14407, 13019, 13007, 10120, 47201, 10330, 10106, 91686, 100, 58263, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Il tokenizer restituirร  un dizionario contenente: * [input_ids](./glossary#input-ids): rappresentazioni numeriche dei tuoi token. * [attention_mask](.glossary#attention-mask): indica quali token devono essere presi in considerazione. Come con la [`pipeline`], il tokenizer accetterร  una lista di input. In piรน, il tokenizer puรฒ anche completare (pad, in inglese) e troncare il testo in modo da restituire un lotto (batch, in inglese) di lunghezza uniforme: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Leggi il tutorial sul [preprocessing](./preprocessing) per maggiori dettagli sulla tokenizzazione. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`AutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare l'[`AutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello. Devi solo spacchettare il dizionario aggiungendo `**`: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. 
Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0041, 0.0037, 0.0203, 0.2005, 0.7713], [0.3766, 0.3292, 0.1832, 0.0558, 0.0552]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`TFAutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare il [`TFAutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(nome_del_modello) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello passando le chiavi del dizionario al tensore: ```py >>> tf_outputs = tf_model(tf_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tutti i modelli di ๐Ÿค— Transformers (PyTorch e TensorFlow) restituiscono i tensori *prima* della funzione finale di attivazione (come la softmax) perchรฉ la funzione di attivazione finale viene spesso unita a quella di perdita. </Tip> I modelli sono [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) o [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) standard cosรฌ puoi utilizzarli all'interno del tuo training loop usuale. Tuttavia, per rendere le cose piรน semplici, ๐Ÿค— Transformers fornisce una classe [`Trainer`] per PyTorch che aggiunge delle funzionalitร  per l'allenamento distribuito, precisione mista, e altro ancora. Per TensorFlow, puoi utilizzare il metodo `fit` di [Keras](https://keras.io/). Fai riferimento al [tutorial per il training](./training) per maggiori dettagli. <Tip> Gli output del modello di ๐Ÿค— Transformers sono delle dataclasses speciali in modo che i loro attributi vengano auto-completati all'interno di un IDE. Gli output del modello si comportano anche come una tupla o un dizionario (ad esempio, puoi indicizzare con un intero, una slice o una stringa) nel qual caso gli attributi che sono `None` vengono ignorati. 
</Tip> ### Salva un modello <frameworkcontent> <pt> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`PreTrainedModel.save_pretrained`]: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`TFPreTrainedModel.save_pretrained`]: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Una caratteristica particolarmente interessante di ๐Ÿค— Transformers รจ la sua abilitร  di salvare un modello e ri-caricarlo sia come modello di PyTorch che di TensorFlow. I parametri `from_pt` o `from_tf` possono convertire un modello da un framework all'altro: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent>
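Un'ultima nota, legata al Tip precedente sugli output dei modelli: gli stessi valori sono accessibili in più modi. Per esempio, riprendendo `pt_outputs` calcolato in precedenza:

```py
>>> pt_outputs.logits  # accesso come attributo
>>> pt_outputs["logits"]  # accesso come dizionario
>>> pt_outputs[0]  # accesso come tupla (gli attributi None vengono saltati)
```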
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/run_scripts.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Addestramento con script Insieme ai [notebooks](./noteboks/README) ๐Ÿค— Transformers, ci sono anche esempi di script che dimostrano come addestrare un modello per un task con [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), o [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). Troverai anche script che abbiamo usato nei nostri [progetti di ricerca](https://github.com/huggingface/transformers/tree/main/examples/research_projects) e [precedenti esempi](https://github.com/huggingface/transformers/tree/main/examples/legacy) a cui contribuisce per lo piรน la comunitร . Questi script non sono attivamente mantenuti e richiedono una specifica versione di ๐Ÿค— Transformers che sarร  molto probabilmente incompatibile con l'ultima versione della libreria. Non รจ dato per scontato che gli script di esempio funzionino senza apportare modifiche per ogni problema, bensรฌ potrebbe essere necessario adattare lo script al tuo caso specifico. Per aiutarti in ciรฒ, la maggioranza degli script espone le modalitร  di pre-processamento dei dati, consentendoti di modificare lo script come preferisci. Per qualsiasi feature che vorresti implementare in uno script d'esempio, per favore discutine nel [forum](https://discuss.huggingface.co/) o in un'[issue](https://github.com/huggingface/transformers/issues) prima di inviare una Pull Request. Mentre accogliamo con piacere la correzione di bug, รจ piรน improbabile che faremo la stessa con una PR che aggiunge funzionalitร  sacrificando la leggibilitร . Questa guida ti mostrerร  come eseguire uno script di esempio relativo al task di summarization in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) e [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Tutti gli esempi funzioneranno con entrambi i framework a meno che non sia specificato altrimenti. ## Installazione Per eseguire con successo l'ultima versione degli script di esempio, devi **installare ๐Ÿค— Transformers dalla fonte** in un nuovo ambiente virtuale: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` Per le precedenti versioni degli script di esempio, clicca sul pulsante di seguito: <details> <summary>Esempi per versioni precedenti di ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> Successivamente, cambia la tua attuale copia di ๐Ÿค— Transformers specificandone la versione, ad esempio v3.5.1: ```bash git checkout tags/v3.5.1 ``` Dopo aver configurato correttamente la versione della libreria, naviga nella cartella degli esempi di tua scelta e installa i requisiti: ```bash pip install -r requirements.txt ``` ## Esegui uno script <frameworkcontent> <pt> Lo script di esempio scarica e pre-processa un dataset dalla libreria ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Successivamente, lo script esegue il fine-tuning su un dataset usando il [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) su un'architettura che supporta la summarization. 
Il seguente esempio mostra come eseguire il fine-tuning di [T5-small](https://huggingface.co/t5-small) sul dataset [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). Il modello T5 richiede un parametro addizionale `source_prefix` a causa del modo in cui è stato addestrato. Questo prefisso permette a T5 di sapere che si tratta di un task di summarization.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Lo script di esempio scarica e pre-processa un dataset dalla libreria 🤗 [Datasets](https://huggingface.co/docs/datasets/). Successivamente, lo script esegue il fine-tuning su un dataset usando Keras su un'architettura che supporta la summarization. Il seguente esempio mostra come eseguire il fine-tuning di [T5-small](https://huggingface.co/t5-small) sul dataset [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). Il modello T5 richiede un parametro addizionale `source_prefix` a causa del modo in cui è stato addestrato. Questo prefisso permette a T5 di sapere che si tratta di un task di summarization.

```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Addestramento distribuito e precisione mista

Il [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supporta l'addestramento distribuito e la precisione mista, il che significa che puoi usarle anche in uno script. Per abilitare entrambe le funzionalità:

- Aggiungi l'argomento `fp16` per abilitare la precisione mista.
- Imposta il numero di GPU da usare con l'argomento `nproc_per_node`.

```bash
python -m torch.distributed.launch \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Gli script TensorFlow utilizzano una [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) per il training distribuito e non devi aggiungere alcun argomento addizionale allo script di training. Lo script TensorFlow userà multiple GPU in modo predefinito se queste ultime sono disponibili.

## Esegui uno script su TPU

<frameworkcontent>
<pt>
Le Tensor Processing Units (TPU) sono state progettate per migliorare le prestazioni. PyTorch supporta le TPU con il compilatore per deep learning [XLA](https://www.tensorflow.org/xla) (guarda [questo link](https://github.com/pytorch/xla/blob/master/README.md) per maggiori dettagli). Per usare una TPU, avvia lo script `xla_spawn.py` e usa l'argomento `num_cores` per impostare il numero di core TPU che intendi usare.
```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Le Tensor Processing Units (TPU) sono state progettate per migliorare le prestazioni. Gli script TensorFlow utilizzano una [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) per eseguire l'addestramento su TPU. Per usare una TPU, passa il nome della risorsa TPU all'argomento `tpu`.

```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## Esegui uno script con 🤗 Accelerate

🤗 [Accelerate](https://huggingface.co/docs/accelerate) è una libreria compatibile solo con PyTorch che offre un metodo unificato per addestrare modelli su diverse tipologie di configurazioni (CPU, multiple GPU, TPU) mantenendo una completa visibilità rispetto al ciclo di training di PyTorch. Assicurati di aver effettuato l'installazione di 🤗 Accelerate, nel caso non lo avessi fatto:

> Nota: dato che Accelerate è in rapido sviluppo, è necessario installare la versione proveniente da git per eseguire gli script:
```bash
pip install git+https://github.com/huggingface/accelerate
```

Invece che usare lo script `run_summarization.py`, devi usare lo script `run_summarization_no_trainer.py`. Gli script supportati in 🤗 Accelerate avranno un file chiamato `task_no_trainer.py` nella rispettiva cartella. Per iniziare, esegui il seguente comando per creare e salvare un file di configurazione:

```bash
accelerate config
```

Testa la tua configurazione per assicurarti della sua correttezza:

```bash
accelerate test
```

Ora sei pronto per avviare l'addestramento:

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## Uso di un dataset personalizzato

Lo script di summarization supporta dataset personalizzati purché siano file CSV o JSON Lines. Quando usi il tuo dataset, devi specificare diversi argomenti aggiuntivi:

- `train_file` e `validation_file` specificano dove si trovano i file di addestramento e validazione.
- `text_column` è la colonna con il testo di input da riassumere.
- `summary_column` è la colonna con il testo di destinazione per l'output.
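A titolo puramente illustrativo, un piccolo script come questo crea un file JSON Lines compatibile con gli argomenti descritti sopra (il nome di file `train.json` e i nomi di colonna `text` e `summary` sono esempi ipotetici: vanno poi passati rispettivamente a `train_file`, `text_column` e `summary_column`):

```py
import json

# Due esempi fittizi: ogni riga del file è un oggetto JSON con il testo di input e il riassunto di destinazione
esempi = [
    {"text": "Il consiglio comunale ha approvato ieri sera il nuovo piano per la mobilità ciclabile...", "summary": "Approvato il piano per la mobilità ciclabile."},
    {"text": "La squadra di casa ha vinto la finale ai tempi supplementari davanti al proprio pubblico...", "summary": "Vittoria ai supplementari per la squadra di casa."},
]

with open("train.json", "w", encoding="utf-8") as f:
    for esempio in esempi:
        f.write(json.dumps(esempio, ensure_ascii=False) + "\n")
```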
Uno script di summarization usando un dataset personalizzato sarebbe simile a questo:

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --train_file path_to_csv_or_jsonlines_file \
    --validation_file path_to_csv_or_jsonlines_file \
    --text_column text_column_name \
    --summary_column summary_column_name \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --overwrite_output_dir \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --predict_with_generate
```

## Testare uno script

È spesso una buona idea avviare il tuo script su un numero inferiore di esempi tratti dal dataset, per assicurarti che tutto funzioni come previsto prima di eseguire lo script sull'intero dataset, che potrebbe necessitare di ore. Usa i seguenti argomenti per limitare il dataset ad un massimo numero di esempi:

- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --max_train_samples 50 \
    --max_eval_samples 50 \
    --max_predict_samples 50 \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Non tutti gli esempi di script supportano l'argomento `max_predict_samples`. Se non sei sicuro circa il supporto di questo argomento da parte del tuo script, aggiungi l'argomento `-h` per controllare:

```bash
examples/pytorch/summarization/run_summarization.py -h
```

## Riavviare addestramento da un checkpoint

Un'altra utile opzione è riavviare un addestramento da un checkpoint precedente. Questo garantirà che tu possa riprendere da dove hai interrotto senza ricominciare se l'addestramento viene interrotto. Ci sono due metodi per riavviare l'addestramento da un checkpoint:

Il primo metodo usa l'argomento `output_dir previous_output_dir` per riavviare l'addestramento dall'ultima versione del checkpoint contenuto in `output_dir`. In questo caso, dovresti rimuovere `overwrite_output_dir`:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --output_dir previous_output_dir \
    --predict_with_generate
```

Il secondo metodo usa l'argomento `resume_from_checkpoint path_to_specific_checkpoint` per riavviare un addestramento da una specifica cartella di checkpoint.

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --resume_from_checkpoint path_to_specific_checkpoint \
    --predict_with_generate
```

## Condividi il tuo modello

Tutti gli script possono caricare il tuo modello finale al [Model Hub](https://huggingface.co/models). Prima di iniziare, assicurati di aver effettuato l'accesso su Hugging Face:

```bash
huggingface-cli login
```

Poi, aggiungi l'argomento `push_to_hub` allo script.
Questo argomento consentirà di creare un repository con il tuo username Hugging Face e la cartella specificata in `output_dir`. Per dare uno specifico nome al repository, usa l'argomento `push_to_hub_model_id`. Il repository verrà automaticamente elencato sotto al tuo namespace.

Il seguente esempio mostra come caricare un modello specificando il nome del repository:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --push_to_hub \
    --push_to_hub_model_id finetuned-t5-cnn_dailymail \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
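Una volta terminato il caricamento, il modello può essere riutilizzato direttamente dall'Hub come un qualsiasi altro checkpoint. Uno schizzo indicativo (il nome utente `tuo-username` è un segnaposto ipotetico da sostituire con il tuo username Hugging Face):

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "tuo-username" è un segnaposto: il nome completo del repository dipende dal tuo namespace sull'Hub
tokenizer = AutoTokenizer.from_pretrained("tuo-username/finetuned-t5-cnn_dailymail")
model = AutoModelForSeq2SeqLM.from_pretrained("tuo-username/finetuned-t5-cnn_dailymail")
```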
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/index.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers Machine Learning allo stato dell'arte per PyTorch, TensorFlow e JAX. ๐Ÿค— Transformers fornisce delle API per scaricare in modo semplice e allenare modelli pre-allenati allo stato dell'arte. L'utilizzo di modelli pre-allenati puรฒ ridurre i tuoi costi computazionali, l'impatto ambientale, e farti risparmiare il tempo che utilizzeresti per allenare un modello da zero. I modelli possono essere utilizzati in diverse modalitร  come ad esempio: * ๐Ÿ“ Testo: classificazione del testo, estrazione delle informazioni, rispondere a domande, riassumere, traduzione e generazione del testo in piรน di 100 lingue. * ๐Ÿ–ผ๏ธ Immagini: classificazione di immagini, rilevazione di oggetti e segmentazione. * ๐Ÿ—ฃ๏ธ Audio: riconoscimento vocale e classificazione dell'audio. * ๐Ÿ™ Multimodale: rispondere a domande inerenti dati tabulari, riconoscimento ottico dei caratteri, estrazione di informazioni a partire da documenti scannerizzati, classificazione di video e risposta visuale a domande. La nostra libreria supporta un'integrazione perfetta tra tre delle librerie per il deep learning piรน popolari: [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/) e [JAX](https://jax.readthedocs.io/en/latest/). Allena il tuo modello in tre righe di codice in un framework, e caricalo per l'inferenza in un altro. Ogni architettura di ๐Ÿค— Transformers รจ definita in un modulo Python indipendente cosรฌ da poter essere personalizzata in modo semplice per la ricerca e gli esperimenti. ## Se stai cercando supporto personalizzato dal team di Hugging Face <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## Contenuti La documentazione รจ organizzata in cinque parti: - **INIZIARE** contiene un tour rapido e le istruzioni di installazione per cominciare ad utilizzare ๐Ÿค— Transformers. - **TUTORIALS** รจ un buon posto da cui iniziare se per te la nostra libreria รจ nuova. Questa sezione ti aiuterร  ad acquisire le competenze basilari di cui hai bisogno per iniziare ad utilizzare ๐Ÿค— Transformers. - **GUIDE PRATICHE** ti mostrerร  come raggiungere obiettivi specifici come fare fine-tuning di un modello pre-allenato per la modellizzazione del linguaggio o come creare una testa per un modello personalizzato. - **GUIDE CONCETTUALI** fornisce discussioni e spiegazioni dei concetti sottostanti alle idee dietro ai modelli, compiti, e la filosofia di progettazione di ๐Ÿค— Transformers. 
- **API** descrive ogni classe e funzione, raggruppate in: - **CLASSI PRINCIPALI** per le classi principali che espongono le API importanti della libreria. - **MODELLI** per le classi e le funzioni relative ad ogni modello implementato all'interno della libreria. - **HELPERS INTERNI** per le classi e le funzioni che utilizziamo internamente. La libreria attualmente contiene implementazioni in JAX, PyTorch e TensorFlow, pesi di modelli pre-allenati, script di utilizzo e strumenti di conversione per i seguenti modelli. ### Modelli supportati <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (da Google Research e l'Istituto Tecnologico di Chicago) rilasciato con il paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), da Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](model_doc/align)** (from Google Research) rilasciato con il paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) da Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 1. **[BART](model_doc/bart)** (da Facebook) rilasciato con il paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) da Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov e Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (da politecnico di ร‰cole) rilasciato con il paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) da Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (da VinAI Research) rilasciato con il paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) da Nguyen Luong Tran, Duong Minh Le e Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (da Microsoft) rilasciato con il paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) da Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (da Google) rilasciato con il paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) da Jacob Devlin, Ming-Wei Chang, Kenton Lee e Kristina Toutanova. 1. **[BERTweet](model_doc/bertweet)** (da VinAI Research) rilasciato con il paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) da Dat Quoc Nguyen, Thanh Vu e Anh Tuan Nguyen. 1. **[BERT For Sequence Generation](model_doc/bert-generation)** (da Google) rilasciato con il paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) da Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BigBird-RoBERTa](model_doc/big_bird)** (da Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. 
**[BigBird-Pegasus](model_doc/bigbird_pegasus)** (v Google Research) rilasciato con il paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) da Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[Blenderbot](model_doc/blenderbot)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](model_doc/blenderbot-small)** (da Facebook) rilasciato con il paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) da Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BORT](model_doc/bort)** (da Alexa) rilasciato con il paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) da Adrian de Wynter e Daniel J. Perry. 1. **[ByT5](model_doc/byt5)** (da Google Research) rilasciato con il paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) da Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 1. **[CamemBERT](model_doc/camembert)** (da Inria/Facebook/Sorbonne) rilasciato con il paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) da Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suรกrez*, Yoann Dupont, Laurent Romary, ร‰ric Villemonte de la Clergerie, Djamรฉ Seddah e Benoรฎt Sagot. 1. **[CANINE](model_doc/canine)** (da Google Research) rilasciato con il paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) da Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 1. **[ConvNeXT](model_doc/convnext)** (da Facebook AI) rilasciato con il paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) da Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 1. **[ConvNeXTV2](model_doc/convnextv2)** (da Facebook AI) rilasciato con il paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) da Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CLIP](model_doc/clip)** (da OpenAI) rilasciato con il paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) da Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 1. **[ConvBERT](model_doc/convbert)** (da YituTech) rilasciato con il paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) da Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 1. 
**[CPM](model_doc/cpm)** (dalla Universitร  di Tsinghua) rilasciato con il paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) da Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 1. **[CTRL](model_doc/ctrl)** (da Salesforce) rilasciato con il paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) da Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong e Richard Socher. 1. **[CvT](model_doc/cvt)** (da Microsoft) rilasciato con il paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) da Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 1. **[Data2Vec](model_doc/data2vec)** (da Facebook) rilasciato con il paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) da Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 1. **[DeBERTa](model_doc/deberta)** (da Microsoft) rilasciato con il paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) da Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[DeBERTa-v2](model_doc/deberta-v2)** (da Microsoft) rilasciato con il paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) da Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 1. **[Decision Transformer](model_doc/decision_transformer)** (da Berkeley/Facebook/Google) rilasciato con il paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) da Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **[DiT](model_doc/dit)** (da Microsoft Research) rilasciato con il paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) da Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 1. **[DeiT](model_doc/deit)** (da Facebook) rilasciato con il paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) da Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervรฉ Jรฉgou. 1. **[DETR](model_doc/detr)** (da Facebook) rilasciato con il paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) da Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 1. **[DialoGPT](model_doc/dialogpt)** (da Microsoft Research) rilasciato con il paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) da Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 1. **[DistilBERT](model_doc/distilbert)** (da HuggingFace), rilasciato assieme al paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) da Victor Sanh, Lysandre Debut e Thomas Wolf. 
La stessa tecnica รจ stata applicata per comprimere GPT2 in [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa in [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT in [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 1. **[DPR](model_doc/dpr)** (da Facebook) rilasciato con il paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) da Vladimir Karpukhin, Barlas OฤŸuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, e Wen-tau Yih. 1. **[DPT](master/model_doc/dpt)** (da Intel Labs) rilasciato con il paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) da Renรฉ Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le. 1. **[EncoderDecoder](model_doc/encoder-decoder)** (da Google Research) rilasciato con il paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) da Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[ELECTRA](model_doc/electra)** (da Google Research/Stanford University) rilasciato con il paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) da Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 1. **[FlauBERT](model_doc/flaubert)** (da CNRS) rilasciato con il paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) da Hang Le, Loรฏc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoรฎt Crabbรฉ, Laurent Besacier, Didier Schwab. 1. **[FLAVA](model_doc/flava)** (da Facebook AI) rilasciato con il paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) da Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, e Douwe Kiela. 1. **[FNet](model_doc/fnet)** (da Google Research) rilasciato con il paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) da James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[Funnel Transformer](model_doc/funnel)** (da CMU/Google Brain) rilasciato con il paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) da Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 1. **[GLPN](model_doc/glpn)** (da KAIST) rilasciato con il paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) da Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 1. **[GPT](model_doc/openai-gpt)** (da OpenAI) rilasciato con il paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) da Alec Radford, Karthik Narasimhan, Tim Salimans e Ilya Sutskever. 1. 
**[GPT-2](model_doc/gpt2)** (da OpenAI) rilasciato con il paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) da Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** e Ilya Sutskever**. 1. **[GPT-J](model_doc/gptj)** (da EleutherAI) rilasciato nel repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) da Ben Wang e Aran Komatsuzaki. 1. **[GPT Neo](model_doc/gpt_neo)** (da EleutherAI) rilasciato nel repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) da Sid Black, Stella Biderman, Leo Gao, Phil Wang e Connor Leahy. 1. **[GPT NeoX](model_doc/gpt_neox)** (da EleutherAI) rilasciato con il paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) da Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 1. **[Hubert](model_doc/hubert)** (da Facebook) rilasciato con il paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) da Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 1. **[I-BERT](model_doc/ibert)** (da Berkeley) rilasciato con il paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) da Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 1. **[ImageGPT](model_doc/imagegpt)** (da OpenAI) rilasciato con il paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) da Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 1. **[LayoutLM](model_doc/layoutlm)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) da Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 1. **[LayoutLMv2](model_doc/layoutlmv2)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) da Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 1. **[LayoutLMv3](model_doc/layoutlmv3)** (da Microsoft Research Asia) rilasciato con il paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) da Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 1. **[LayoutXLM](model_doc/layoutlxlm)** (da Microsoft Research Asia) rilasciato con il paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) da Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 1. **[LED](model_doc/led)** (da AllenAI) rilasciato con il paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) da Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. **[Longformer](model_doc/longformer)** (da AllenAI) rilasciato con il paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) da Iz Beltagy, Matthew E. Peters, Arman Cohan. 1. 
**[LUKE](model_doc/luke)** (da Studio Ousia) rilasciato con il paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) da Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 1. **[mLUKE](model_doc/mluke)** (da Studio Ousia) rilasciato con il paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) da Ryokan Ri, Ikuya Yamada, e Yoshimasa Tsuruoka. 1. **[LXMERT](model_doc/lxmert)** (da UNC Chapel Hill) rilasciato con il paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) da Hao Tan e Mohit Bansal. 1. **[M2M100](model_doc/m2m_100)** (da Facebook) rilasciato con il paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) da Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 1. **[MarianMT](model_doc/marian)** Modello di machine learning per le traduzioni allenato utilizzando i dati [OPUS](http://opus.nlpl.eu/) di Jรถrg Tiedemann. Il [Framework Marian](https://marian-nmt.github.io/) รจ stato sviluppato dal Microsoft Translator Team. 1. **[Mask2Former](model_doc/mask2former)** (da FAIR e UIUC) rilasciato con il paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) da Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. 1. **[MaskFormer](model_doc/maskformer)** (da Meta e UIUC) rilasciato con il paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) da Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[MBart](model_doc/mbart)** (da Facebook) rilasciato con il paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) da Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 1. **[MBart-50](model_doc/mbart)** (da Facebook) rilasciato con il paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) da Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 1. **[Megatron-BERT](model_doc/megatron-bert)** (da NVIDIA) rilasciato con il paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) da Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper e Bryan Catanzaro. 1. **[Megatron-GPT2](model_doc/megatron_gpt2)** (da NVIDIA) rilasciato con il paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) da Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper e Bryan Catanzaro. 1. **[MPNet](model_doc/mpnet)** (da Microsoft Research) rilasciato con il paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) da Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 1. 
**[MT5](model_doc/mt5)** (da Google AI) rilasciato con il paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) da Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 1. **[Nystrรถmformer](model_doc/nystromformer)** (dalla Universitร  del Wisconsin - Madison) rilasciato con il paper [Nystrรถmformer: A Nystrรถm-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) da Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 1. **[OneFormer](model_doc/oneformer)** (da SHI Labs) rilasciato con il paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) da Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. 1. **[OPT](master/model_doc/opt)** (da Meta AI) rilasciato con il paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) da Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 1. **[Pegasus](model_doc/pegasus)** (da Google) rilasciato con il paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) da Jingqing Zhang, Yao Zhao, Mohammad Saleh e Peter J. Liu. 1. **[Perceiver IO](model_doc/perceiver)** (da Deepmind) rilasciato con il paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) da Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira. 1. **[PhoBERT](model_doc/phobert)** (da VinAI Research) rilasciato con il paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) da Dat Quoc Nguyen e Anh Tuan Nguyen. 1. **[PLBart](model_doc/plbart)** (da UCLA NLP) rilasciato con il paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) da Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 1. **[PoolFormer](model_doc/poolformer)** (da Sea AI Labs) rilasciato con il paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) da Yu, Weihao e Luo, Mi e Zhou, Pan e Si, Chenyang e Zhou, Yichen e Wang, Xinchao e Feng, Jiashi e Yan, Shuicheng. 1. **[ProphetNet](model_doc/prophetnet)** (da Microsoft Research) rilasciato con il paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) da Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang e Ming Zhou. 1. **[QDQBert](model_doc/qdqbert)** (da NVIDIA) rilasciato con il paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) da Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev e Paulius Micikevicius. 1. **[REALM](model_doc/realm.html)** (da Google Research) rilasciato con il paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) da Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat e Ming-Wei Chang. 1. 
**[Reformer](model_doc/reformer)** (da Google Research) rilasciato con il paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) da Nikita Kitaev, ลukasz Kaiser, Anselm Levskaya. 1. **[RemBERT](model_doc/rembert)** (da Google Research) rilasciato con il paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) da Hyung Won Chung, Thibault Fรฉvry, Henry Tsai, M. Johnson, Sebastian Ruder. 1. **[RegNet](model_doc/regnet)** (da META Platforms) rilasciato con il paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) da Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollรกr. 1. **[ResNet](model_doc/resnet)** (da Microsoft Research) rilasciato con il paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) da Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 1. **[RoBERTa](model_doc/roberta)** (da Facebook), rilasciato assieme al paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) da Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 1. **[RoFormer](model_doc/roformer)** (da ZhuiyiTechnology), rilasciato assieme al paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) da Jianlin Su e Yu Lu e Shengfeng Pan e Bo Wen e Yunfeng Liu. 1. **[SegFormer](model_doc/segformer)** (da NVIDIA) rilasciato con il paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) da Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 1. **[SEW](model_doc/sew)** (da ASAPP) rilasciato con il paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) da Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SEW-D](model_doc/sew_d)** (da ASAPP) rilasciato con il paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) da Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (da Facebook), rilasciato assieme al paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) da Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (da Facebook), rilasciato assieme al paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) da Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 1. **[Splinter](model_doc/splinter)** (dalla Universitร  di Tel Aviv), rilasciato assieme al paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) da Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 1. **[SqueezeBert](model_doc/squeezebert)** (da Berkeley) rilasciato con il paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) da Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, e Kurt W. Keutzer. 1. 
**[Swin Transformer](model_doc/swin)** (da Microsoft) rilasciato con il paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) da Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 1. **[T5](model_doc/t5)** (da Google AI) rilasciato con il paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) da Colin Raffel e Noam Shazeer e Adam Roberts e Katherine Lee e Sharan Narang e Michael Matena e Yanqi Zhou e Wei Li e Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (da Google AI) rilasciato nel repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) da Colin Raffel e Noam Shazeer e Adam Roberts e Katherine Lee e Sharan Narang e Michael Matena e Yanqi Zhou e Wei Li e Peter J. Liu. 1. **[TAPAS](model_doc/tapas)** (da Google AI) rilasciato con il paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) da Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno e Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (da Microsoft Research) rilasciato con il paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) da Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (dall'Universitร  della California a Berkeley) rilasciato con il paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) da Michael Janner, Qiyang Li, Sergey Levine 1. **[Transformer-XL](model_doc/transfo-xl)** (da Google/CMU) rilasciato con il paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) da Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (da Microsoft), rilasciato assieme al paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) da Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UniSpeech](model_doc/unispeech)** (da Microsoft Research) rilasciato con il paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) da Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (da Microsoft Research) rilasciato con il paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) da Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (dalle Universitร  di Tsinghua e Nankai) rilasciato con il paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) da Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. **[ViLT](model_doc/vilt)** (da NAVER AI Lab/Kakao Enterprise/Kakao Brain) rilasciato con il paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) da Wonjae Kim, Bokyung Son, Ildoo Kim. 1. 
**[Vision Transformer (ViT)](model_doc/vit)** (da Google AI) rilasciato con il paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) da Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[ViTMAE](model_doc/vit_mae)** (da Meta AI) rilasciato con il paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) da Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[VisualBERT](model_doc/visual_bert)** (da UCLA NLP) rilasciato con il paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) da Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[WavLM](model_doc/wavlm)** (da Microsoft Research) rilasciato con il paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) da Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Wav2Vec2](model_doc/wav2vec2)** (da Facebook AI) rilasciato con il paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) da Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (da Facebook AI) rilasciato con il paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) da Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[XGLM](model_doc/xglm)** (da Facebook AI) rilasciato con il paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) da Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (v Facebook) rilasciato assieme al paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) da Guillaume Lample e Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (da Microsoft Research) rilasciato con il paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) da Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang e Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (da Facebook AI), rilasciato assieme al paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) da Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer e Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (da Facebook AI), rilasciato assieme al paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) da Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. 
**[XLNet](model_doc/xlnet)** (da Google/CMU) rilasciato con il paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) da Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (da Facebook AI) rilasciato con il paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) da Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[XLS-R](model_doc/xls_r)** (da Facebook AI) rilasciato con il paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) da Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (dalla Universitร  della scienza e tecnologia di Huazhong) rilasciato con il paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) da Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (dall'Universitร  del Wisconsin - Madison) rilasciato con il paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) da Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### Framework supportati La tabella seguente rappresenta il supporto attuale nella libreria per ognuno di questi modelli, si puรฒ identificare se questi hanno un Python tokenizer (chiamato "slow"). Un tokenizer "fast" supportato dalla libreria ๐Ÿค— Tokenizers, e se hanno supporto in Jax (via Flax), PyTorch, e/o TensorFlow. <!--This table is updated automatically from the auto modules with _make fix-copies_. 
Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBirdPegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | Canine | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNext | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โŒ | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | Flava | โŒ | โŒ | โœ… | โŒ | โŒ | | FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | MegatronBert | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | mT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Nystromformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ | | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | Realm | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | 
| SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin | โŒ | โŒ | โœ… | โœ… | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBert | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โŒ | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLMProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/migration.md
<!--- Copyright 2020 The HuggingFace Team. Tutti i diritti riservati. Concesso in licenza in base alla Licenza Apache, Versione 2.0 (la "Licenza"); non รจ possibile utilizzare questo file se non in conformitร  con la Licenza. รˆ possibile ottenere una copia della Licenza all'indirizzo http://www.apache.org/licenses/LICENSE-2.0 A meno che non sia richiesto dalla legge applicabile o concordato per iscritto, il software distribuito con la Licenza รจ distribuito su BASE "COSรŒ COM'รˆ", SENZA GARANZIE O CONDIZIONI DI ALCUN TIPO, espresse o implicite. Per la lingua specifica vedi la Licenza che regola le autorizzazioni e le limitazioni ai sensi della STESSA. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Migrazione da pacchetti precedenti ## Migrazione da transformers `v3.x` a `v4.x` Un paio di modifiche sono state introdotte nel passaggio dalla versione 3 alla versione 4. Di seguito รจ riportato un riepilogo delle modifiche previste: #### 1. AutoTokenizer e pipeline ora utilizzano tokenizer veloci (rust) per impostazione predefinita. I tokenizer python e rust hanno all'incirca le stesse API, ma i tokenizer rust hanno un set di funzionalitร  piรน completo. Ciรฒ introduce due modifiche sostanziali: - La gestione dei token in overflow tra i tokenizer Python e Rust รจ diversa. - I tokenizers di rust non accettano numeri interi nei metodi di codifica. ##### Come ottenere lo stesso comportamento di v3.x in v4.x - Le pipeline ora contengono funzionalitร  aggiuntive pronte all'uso. Vedi la [pipeline di classificazione dei token con il flag `grouped_entities`](main_classes/pipelines#transformers.TokenClassificationPipeline). - Gli auto-tokenizer ora restituiscono tokenizer rust. Per ottenere invece i tokenizer python, l'utente deve usare il flag `use_fast` impostandolo `False`: Nella versione `v3.x`: ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` per ottenere lo stesso nella versione `v4.x`: ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False) ``` #### 2. SentencePiece รจ stato rimosso dalle dipendenze richieste Il requisito sulla dipendenza SentencePiece รจ stato rimosso da `setup.py`. รˆ stato fatto per avere un canale su anaconda cloud senza basarsi su `conda-forge`. Ciรฒ significa che i tokenizer che dipendono dalla libreria SentencePiece non saranno disponibili con un'installazione standard di `transformers`. Ciรฒ include le versioni **lente** di: - `XLNetTokenizer` - `AlbertTokenizer` - `CamembertTokenizer` - `MBartTokenizer` - `PegasusTokenizer` - `T5Tokenizer` - `ReformerTokenizer` - `XLMRobertaTokenizer` ##### Come ottenere lo stesso comportamento della v3.x nella v4.x Per ottenere lo stesso comportamento della versione `v3.x`, devi installare anche `sentencepiece`: Nella versione `v3.x`: ```bash pip install transformers ``` per ottenere lo stesso nella versione `v4.x`: ```bash pip install transformers[sentencepiece] ``` o ```bash pip install transformers stentencepiece ``` #### 3. L'architettura delle repo รจ stato aggiornata in modo che ogni modello abbia la propria cartella Con lโ€™aggiunta di nuovi modelli, il numero di file nella cartella `src/transformers` continua a crescere e diventa piรน difficile navigare e capire. Abbiamo fatto la scelta di inserire ogni modello e i file che lo accompagnano nelle proprie sottocartelle. 
Si tratta di una modifica sostanziale in quanto l'importazione di layer intermedi utilizzando direttamente il modulo di un modello deve essere eseguita tramite un percorso diverso. ##### Come ottenere lo stesso comportamento della v3.x nella v4.x Per ottenere lo stesso comportamento della versione `v3.x`, devi aggiornare il percorso utilizzato per accedere ai layer. Nella versione `v3.x`: ```bash from transformers.modeling_bert import BertLayer ``` per ottenere lo stesso nella versione `v4.x`: ```bash from transformers.models.bert.modeling_bert import BertLayer ``` #### 4. Impostare l'argomento `return_dict` su `True` per impostazione predefinita L'[argomento `return_dict`](main_classes/output) abilita la restituzione di oggetti python dict-like contenenti gli output del modello, invece delle tuple standard. Questo oggetto รจ self-documented poichรฉ le chiavi possono essere utilizzate per recuperare valori, comportandosi anche come una tupla e gli utenti possono recuperare oggetti per indexing o slicing. Questa รจ una modifica sostanziale poichรฉ la tupla non puรฒ essere decompressa: `value0, value1 = outputs` non funzionerร . ##### Come ottenere lo stesso comportamento della v3.x nella v4.x Per ottenere lo stesso comportamento della versione `v3.x`, specifica l'argomento `return_dict` come `False`, sia nella configurazione del modello che nel passaggio successivo. Nella versione `v3.x`: ```bash model = BertModel.from_pretrained("bert-base-cased") outputs = model(**inputs) ``` per ottenere lo stesso nella versione `v4.x`: ```bash model = BertModel.from_pretrained("bert-base-cased") outputs = model(**inputs, return_dict=False) ``` o ```bash model = BertModel.from_pretrained("bert-base-cased", return_dict=False) outputs = model(**inputs) ``` #### 5. Rimozione di alcuni attributi deprecati Gli attributi sono stati rimossi se deprecati da almeno un mese. L'elenco completo degli attributi obsoleti รจ disponibile in [#8604](https://github.com/huggingface/transformers/pull/8604). Ecco un elenco di questi attributi/metodi/argomenti e quali dovrebbero essere le loro sostituzioni: In diversi modelli, le etichette diventano coerenti con gli altri modelli: - `masked_lm_labels` diventa `labels` in `AlbertForMaskedLM` e `AlbertForPreTraining`. - `masked_lm_labels` diventa `labels` in `BertForMaskedLM` e `BertForPreTraining`. - `masked_lm_labels` diventa `labels` in `DistilBertForMaskedLM`. - `masked_lm_labels` diventa `labels` in `ElectraForMaskedLM`. - `masked_lm_labels` diventa `labels` in `LongformerForMaskedLM`. - `masked_lm_labels` diventa `labels` in `MobileBertForMaskedLM`. - `masked_lm_labels` diventa `labels` in `RobertaForMaskedLM`. - `lm_labels` diventa `labels` in `BartForConditionalGeneration`. - `lm_labels` diventa `labels` in `GPT2DoubleHeadsModel`. - `lm_labels` diventa `labels` in `OpenAIGPTDoubleHeadsModel`. - `lm_labels` diventa `labels` in `T5ForConditionalGeneration`. In diversi modelli, il meccanismo di memorizzazione nella cache diventa coerente con gli altri: - `decoder_cached_states` diventa `past_key_values` in tutti i modelli BART-like, FSMT e T5. - `decoder_past_key_values` diventa `past_key_values` in tutti i modelli BART-like, FSMT e T5. - `past` diventa `past_key_values` in tutti i modelli CTRL. - `past` diventa `past_key_values` in tutti i modelli GPT-2. Per quanto riguarda le classi tokenizer: - L'attributo tokenizer `max_len` diventa `model_max_length`. - L'attributo tokenizer `return_lengths` diventa `return_length`. 
- L'argomento di codifica del tokenizer `is_pretokenized` diventa `is_split_into_words`. Per quanto riguarda la classe `Trainer`: - L'argomento `tb_writer` di `Trainer` รจ stato rimosso in favore della funzione richiamabile `TensorBoardCallback(tb_writer=...)`. - L'argomento `prediction_loss_only` di `Trainer` รจ stato rimosso in favore dell'argomento di classe `args.prediction_loss_only`. - L'attributo `data_collator` di `Trainer` sarร  richiamabile. - Il metodo `_log` di `Trainer` รจ deprecato a favore di `log`. - Il metodo `_training_step` di `Trainer` รจ deprecato a favore di `training_step`. - Il metodo `_prediction_loop` di `Trainer` รจ deprecato a favore di `prediction_loop`. - Il metodo `is_local_master` di `Trainer` รจ deprecato a favore di `is_local_process_zero`. - Il metodo `is_world_master` di `Trainer` รจ deprecato a favore di `is_world_process_zero`. Per quanto riguarda la classe `TFTrainer`: - L'argomento `prediction_loss_only` di `TFTrainer` รจ stato rimosso a favore dell'argomento di classe `args.prediction_loss_only`. - Il metodo `_log` di `Trainer` รจ deprecato a favore di `log`. - Il metodo `_prediction_loop` di `TFTrainer` รจ deprecato a favore di `prediction_loop`. - Il metodo `_setup_wandb` di `TFTrainer` รจ deprecato a favore di `setup_wandb`. - Il metodo `_run_model` di `TFTrainer` รจ deprecato a favore di `run_model`. Per quanto riguarda la classe `TrainingArguments`: - L'argomento `evaluate_during_training` di `TrainingArguments` รจ deprecato a favore di `evaluation_strategy`. Per quanto riguarda il modello Transfo-XL: - L'attributo di configurazione `tie_weight` di Transfo-XL diventa `tie_words_embeddings`. - Il metodo di modellazione `reset_length` di Transfo-XL diventa `reset_memory_length`. Per quanto riguarda le pipeline: - L'argomento `topk` di `FillMaskPipeline` diventa `top_k`. ## Passaggio da pytorch-transformers a ๐Ÿค— Transformers Ecco un breve riepilogo di ciรฒ a cui prestare attenzione durante il passaggio da `pytorch-transformers` a ๐Ÿค— Transformers. ### Lโ€™ordine posizionale di alcune parole chiave di input dei modelli (`attention_mask`, `token_type_ids`...) รจ cambiato Per usare Torchscript (vedi #1010, #1204 e #1195) l'ordine specifico delle **parole chiave di input** di alcuni modelli (`attention_mask`, `token_type_ids`...) รจ stato modificato. Se inizializzavi i modelli usando parole chiave per gli argomenti, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento. Se inizializzavi i modelli con input posizionali per gli argomenti, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input. ## Migrazione da pytorch-pretrained-bert Ecco un breve riepilogo di ciรฒ a cui prestare attenzione durante la migrazione da `pytorch-pretrained-bert` a ๐Ÿค— Transformers ### I modelli restituiscono sempre `tuple` La principale modifica di rilievo durante la migrazione da `pytorch-pretrained-bert` a ๐Ÿค— Transformers รจ che il metodo dei modelli di previsione dร  sempre una `tupla` con vari elementi a seconda del modello e dei parametri di configurazione. Il contenuto esatto delle tuple per ciascun modello รจ mostrato in dettaglio nelle docstring dei modelli e nella [documentazione](https://huggingface.co/transformers/). In quasi tutti i casi, andrร  bene prendendo il primo elemento dell'output come quello che avresti precedentemente utilizzato in `pytorch-pretrained-bert`. 
Ecco un esempio di conversione da `pytorch-pretrained-bert` a 🤗 Transformers per un modello di classificazione `BertForSequenceClassification`: ```python # Carichiamo il nostro modello model = BertForSequenceClassification.from_pretrained("bert-base-uncased") # Se usavi questa riga in pytorch-pretrained-bert: loss = model(input_ids, labels=labels) # Ora usa questa riga in 🤗 Transformers per estrarre la perdita dalla tupla di output: outputs = model(input_ids, labels=labels) loss = outputs[0] # In 🤗 Transformers puoi anche avere accesso ai logit: loss, logits = outputs[:2] # Ed anche agli attention weight se configuri il modello per restituirli (e anche altri output, vedi le docstring e la documentazione) model = BertForSequenceClassification.from_pretrained("bert-base-uncased", output_attentions=True) outputs = model(input_ids, labels=labels) loss, logits, attentions = outputs ``` ### Serializzazione Modifica sostanziale nel metodo `from_pretrained()`: 1. I modelli sono ora impostati in modalità di valutazione in maniera predefinita quando usi il metodo `from_pretrained()`. Per addestrarli non dimenticare di riportarli in modalità di addestramento (`model.train()`) per attivare i moduli di dropout. 2. Gli argomenti aggiuntivi `*inputs` e `**kwargs` forniti al metodo `from_pretrained()` venivano passati direttamente al metodo `__init__()` della classe sottostante del modello. Ora sono usati per aggiornare prima l'attributo di configurazione del modello, il che può non funzionare con le classi di modello derivate costruite basandosi sui precedenti esempi di `BertForSequenceClassification`. Più precisamente, gli argomenti posizionali `*inputs` forniti a `from_pretrained()` vengono inoltrati direttamente al metodo `__init__()` del modello, mentre gli argomenti keyword `**kwargs` (i) che corrispondono agli attributi della classe di configurazione vengono utilizzati per aggiornare tali attributi, (ii) che non corrispondono ad alcun attributo della classe di configurazione vengono inoltrati al metodo `__init__()`. Inoltre, sebbene non si tratti di una modifica sostanziale, i metodi di serializzazione sono stati standardizzati e probabilmente dovresti passare al nuovo metodo `save_pretrained(save_directory)` se prima usavi qualsiasi altro metodo di serializzazione. 
Ecco un esempio: ```python ### Carichiamo un modello e un tokenizer model = BertForSequenceClassification.from_pretrained("bert-base-uncased") tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") ### Facciamo fare alcune cose al nostro modello e tokenizer # Es: aggiungiamo nuovi token al vocabolario e agli embending del nostro modello tokenizer.add_tokens(["[SPECIAL_TOKEN_1]", "[SPECIAL_TOKEN_2]"]) model.resize_token_embeddings(len(tokenizer)) # Alleniamo il nostro modello train(model) ### Ora salviamo il nostro modello e il tokenizer in una cartella model.save_pretrained("./my_saved_model_directory/") tokenizer.save_pretrained("./my_saved_model_directory/") ### Ricarichiamo il modello e il tokenizer model = BertForSequenceClassification.from_pretrained("./my_saved_model_directory/") tokenizer = BertTokenizer.from_pretrained("./my_saved_model_directory/") ``` ### Ottimizzatori: BertAdam e OpenAIAdam ora sono AdamW, lo scheduling รจ quello standard PyTorch I due ottimizzatori precedenti inclusi, `BertAdam` e `OpenAIAdam`, sono stati sostituiti da un singolo `AdamW` che presenta alcune differenze: - implementa solo la correzione del weights decay, - lo scheduling ora รจ esterno (vedi sotto), - anche il gradient clipping ora รจ esterno (vedi sotto). Il nuovo ottimizzatore `AdamW` corrisponde alle API di `Adam` di PyTorch e ti consente di utilizzare metodi PyTorch o apex per lo scheduling e il clipping. Lo scheduling รจ ora standard [PyTorch learning rate schedulers](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) e non fanno piรน parte dell'ottimizzatore. Ecco un esempio di linear warmup e decay con `BertAdam` e con `AdamW`: ```python # Parametri: lr = 1e-3 max_grad_norm = 1.0 num_training_steps = 1000 num_warmup_steps = 100 warmup_proportion = float( num_warmup_steps) / float(num_training_steps) # 0.1 ### In precedenza l'ottimizzatore BertAdam veniva istanziato in questo modo: optimizer = BertAdam( model.parameters(), lr=lr, schedule="warmup_linear", warmup=warmup_proportion, num_training_steps=num_training_steps, ) ### e usato in questo modo: for batch in train_data: loss = model(batch) loss.backward() optimizer.step() ### In ๐Ÿค— Transformers, ottimizzatore e schedule sono divisi e usati in questo modo: optimizer = AdamW( model.parameters(), lr=lr, correct_bias=False ) # Per riprodurre il comportamento specifico di BertAdam impostare correct_bias=False scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps ) # PyTorch scheduler ### e va usato cosรฌ: for batch in train_data: loss = model(batch) loss.backward() torch.nn.utils.clip_grad_norm_( model.parameters(), max_grad_norm ) # Gradient clipping non รจ piรน in AdamW (quindi puoi usare amp senza problemi) optimizer.step() scheduler.step() ```
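Poiché lo scheduling e il gradient clipping sono ora esterni all'ottimizzatore, è anche possibile usare direttamente l'`AdamW` nativo di PyTorch (`torch.optim.AdamW`) insieme allo scheduler di 🤗 Transformers. Quello che segue è solo uno schema indicativo, non parte della guida originale: riusa `model`, `train_data` e gli iperparametri definiti nell'esempio precedente, e tieni presente che `torch.optim.AdamW` non espone il parametro `correct_bias`.

```python
# Schema indicativo: ottimizzatore AdamW nativo di PyTorch + scheduler di 🤗 Transformers.
# Si assume che model, train_data, lr, max_grad_norm, num_warmup_steps e num_training_steps
# siano definiti come nell'esempio precedente.
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps
)

for batch in train_data:
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # clipping esterno
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()  # azzera i gradienti accumulati prima del batch successivo
```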
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installazione Installa ๐Ÿค— Transformers per qualsiasi libreria di deep learning con cui stai lavorando, imposta la tua cache, e opzionalmente configura ๐Ÿค— Transformers per l'esecuzione offline. ๐Ÿค— Transformers รจ testato su Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, e Flax. Segui le istruzioni di installazione seguenti per la libreria di deep learning che stai utilizzando: * [PyTorch](https://pytorch.org/get-started/locally/) istruzioni di installazione. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) istruzioni di installazione. * [Flax](https://flax.readthedocs.io/en/latest/) istruzioni di installazione. ## Installazione con pip Puoi installare ๐Ÿค— Transformers in un [ambiente virtuale](https://docs.python.org/3/library/venv.html). Se non sei familiare con gli ambienti virtuali in Python, dai un'occhiata a questa [guida](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Un ambiente virtuale rende piรน semplice la gestione di progetti differenti, evitando problemi di compatibilitร  tra dipendenze. Inizia creando un ambiente virtuale nella directory del tuo progetto: ```bash python -m venv .env ``` Attiva l'ambiente virtuale: ```bash source .env/bin/activate ``` Ora puoi procedere con l'installazione di ๐Ÿค— Transformers eseguendo il comando seguente: ```bash pip install transformers ``` Per il solo supporto della CPU, puoi installare facilmente ๐Ÿค— Transformers e una libreria di deep learning in solo una riga. Ad esempio, installiamo ๐Ÿค— Transformers e PyTorch con: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers e TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers e Flax: ```bash pip install transformers[flax] ``` Infine, verifica se ๐Ÿค— Transformers รจ stato installato in modo appropriato eseguendo il seguente comando. Questo scaricherร  un modello pre-allenato: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Dopodichรฉ stampa l'etichetta e il punteggio: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Installazione dalla fonte Installa ๐Ÿค— Transformers dalla fonte con il seguente comando: ```bash pip install git+https://github.com/huggingface/transformers ``` Questo comando installa la versione `main` piรน attuale invece dell'ultima versione stabile. Questo รจ utile per stare al passo con gli ultimi sviluppi. Ad esempio, se un bug รจ stato sistemato da quando รจ uscita l'ultima versione ufficiale ma non รจ stata ancora rilasciata una nuova versione. Tuttavia, questo significa che questa versione `main` puรฒ non essere sempre stabile. Ci sforziamo per mantenere la versione `main` operativa, e la maggior parte dei problemi viene risolta in poche ore o in un giorno. 
Se riscontri un problema, per favore apri una [Issue](https://github.com/huggingface/transformers/issues) cosรฌ possiamo sistemarlo ancora piรน velocemente! Controlla se ๐Ÿค— Transformers รจ stata installata in modo appropriato con il seguente comando: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Installazione modificabile Hai bisogno di un'installazione modificabile se vuoi: * Usare la versione `main` del codice dalla fonte. * Contribuire a ๐Ÿค— Transformers e hai bisogno di testare i cambiamenti nel codice. Clona il repository e installa ๐Ÿค— Transformers con i seguenti comandi: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` Questi comandi collegheranno la cartella in cui รจ stato clonato il repository e i path delle librerie Python. Python guarderร  ora all'interno della cartella clonata, oltre ai normali path delle librerie. Per esempio, se i tuoi pacchetti Python sono installati tipicamente in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cercherร  anche nella cartella clonata: `~/transformers/`. <Tip warning={true}> Devi tenere la cartella `transformers` se vuoi continuare ad utilizzare la libreria. </Tip> Ora puoi facilmente aggiornare il tuo clone all'ultima versione di ๐Ÿค— Transformers con il seguente comando: ```bash cd ~/transformers/ git pull ``` Il tuo ambiente Python troverร  la versione `main` di ๐Ÿค— Transformers alla prossima esecuzione. ## Installazione con conda Installazione dal canale conda `huggingface`: ```bash conda install -c huggingface transformers ``` ## Impostazione della cache I modelli pre-allenati sono scaricati e memorizzati localmente nella cache in: `~/.cache/huggingface/transformers/`. Questa รจ la directory di default data dalla variabile d'ambiente della shell `TRANSFORMERS_CACHE`. Su Windows, la directory di default รจ data da `C:\Users\username\.cache\huggingface\transformers`. Puoi cambiare le variabili d'ambiente della shell indicate in seguito, in ordine di prioritร , per specificare una directory differente per la cache: 1. Variabile d'ambiente della shell (default): `TRANSFORMERS_CACHE`. 2. Variabile d'ambiente della shell: `HF_HOME` + `transformers/`. 3. Variabile d'ambiente della shell: `XDG_CACHE_HOME` + `/huggingface/transformers`. <Tip> ๐Ÿค— Transformers utilizzerร  le variabili d'ambiente della shell `PYTORCH_TRANSFORMERS_CACHE` o `PYTORCH_PRETRAINED_BERT_CACHE` se si proviene da un'iterazione precedente di questa libreria e sono state impostate queste variabili d'ambiente, a meno che non si specifichi la variabile d'ambiente della shell `TRANSFORMERS_CACHE`. </Tip> ## Modalitร  Offline ๐Ÿค— Transformers puรฒ essere eseguita in un ambiente firewalled o offline utilizzando solo file locali. Imposta la variabile d'ambiente `TRANSFORMERS_OFFLINE=1` per abilitare questo comportamento. <Tip> Aggiungi [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) al tuo flusso di lavoro offline di training impostando la variabile d'ambiente `HF_DATASETS_OFFLINE=1`. </Tip> Ad esempio, in genere si esegue un programma su una rete normale, protetta da firewall per le istanze esterne, con il seguente comando: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... 
``` Esegui lo stesso programma in un'istanza offline con: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Lo script viene ora eseguito senza bloccarsi o attendere il timeout, perchรฉ sa di dover cercare solo file locali. ### Ottenere modelli e tokenizer per l'uso offline Un'altra opzione per utilizzare offline ๐Ÿค— Transformers รจ scaricare i file in anticipo, e poi puntare al loro path locale quando hai la necessitร  di utilizzarli offline. Ci sono tre modi per fare questo: * Scarica un file tramite l'interfaccia utente sul [Model Hub](https://huggingface.co/models) premendo sull'icona โ†“. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Utilizza il flusso [`PreTrainedModel.from_pretrained`] e [`PreTrainedModel.save_pretrained`]: 1. Scarica i tuoi file in anticipo con [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Salva i tuoi file in una directory specificata con [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./il/tuo/path/bigscience_t0") >>> model.save_pretrained("./il/tuo/path/bigscience_t0") ``` 3. Ora quando sei offline, carica i tuoi file con [`PreTrainedModel.from_pretrained`] dalla directory specificata: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./il/tuo/path/bigscience_t0") ``` * Scarica in maniera programmatica i file con la libreria [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub): 1. Installa la libreria `huggingface_hub` nel tuo ambiente virtuale: ```bash python -m pip install huggingface_hub ``` 2. Utilizza la funzione [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) per scaricare un file in un path specifico. Per esempio, il seguente comando scarica il file `config.json` dal modello [T0](https://huggingface.co/bigscience/T0_3B) nel path che desideri: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./il/tuo/path/bigscience_t0") ``` Una volta che il tuo file รจ scaricato e salvato in cache localmente, specifica il suo path locale per caricarlo e utilizzarlo: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./il/tuo/path/bigscience_t0/config.json") ``` <Tip> Fai riferimento alla sezione [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) per avere maggiori dettagli su come scaricare modelli presenti sull Hub. </Tip>
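In alternativa alla variabile d'ambiente, i metodi `from_pretrained` accettano anche il parametro `local_files_only`: impostandolo a `True`, vengono usati solo i file già presenti nella cache o in un path locale, senza tentare di contattare l'Hub. Quello che segue è solo uno schema indicativo, non presente nella guida originale, e assume che il modello sia già stato scaricato in precedenza:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> # local_files_only=True evita qualsiasi richiesta di rete e usa solo i file in cache
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B", local_files_only=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B", local_files_only=True)
```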
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Tour rapido - local: installation title: Installazione title: Iniziare - sections: - local: pipeline_tutorial title: Pipeline per l'inferenza - local: autoclass_tutorial title: Carica istanze pre-allenate con AutoClass - local: preprocessing title: Preprocess - local: training title: Fine-tuning di un modello pre-addestrato - local: accelerate title: Allenamento distribuito con ๐Ÿค— Accelerate - local: model_sharing title: Condividere un modello title: Esercitazione - sections: - local: create_a_model title: Crea un'architettura personalizzata - local: custom_models title: Condividere modelli personalizzati - local: run_scripts title: Addestramento con script - local: multilingual title: Modelli multilingua per l'inferenza - local: converting_tensorflow_models title: Convertire modelli tensorflow - local: serialization title: Esporta modelli Transformers - local: perf_train_cpu title: Addestramento efficiente su CPU - local: perf_train_cpu_many title: Addestramento efficiente su multiple CPU - local: perf_train_tpu title: Addestramento su TPU - local: perf_train_special title: Addestramento su Hardware Specializzato - local: perf_infer_cpu title: Inferenza Efficiente su CPU - local: perf_infer_gpu_one title: Inferenza su una GPU - local: perf_infer_gpu_many title: Inferenza Efficiente su GPU Multiple - local: perf_infer_special title: Inferenza su Hardware Specializzato - local: big_models title: Istanziare un big model - local: migration title: Passaggio da pacchetti precedenti - local: debugging title: Debugging title: Guide pratiche - sections: - local: add_new_pipeline title: Come aggiungere una pipeline a ๐Ÿค— Transformers? - local: add_new_model title: Come aggiungere un modello a ๐Ÿค— Transformers? - local: perf_hardware title: Hardware ottimizzato per l'addestramento - local: community title: Risorse della comunitร  - local: pr_checks title: Controlli su una Pull Request title: Guide How-to
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Modelli multilingue per l'inferenza [[open-in-colab]] Ci sono diversi modelli multilingue in ๐Ÿค— Transformers, e il loro utilizzo per l'inferenza differisce da quello dei modelli monolingua. Non *tutti* gli utilizzi dei modelli multilingue sono perรฒ diversi. Alcuni modelli, come [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), possono essere usati come un modello monolingua. Questa guida ti mostrerร  come utilizzare modelli multilingue che utilizzano un modo diverso per fare l'inferenza. ## XLM XLM ha dieci diversi checkpoint, di cui solo uno รจ monolingua. I nove checkpoint rimanenti possono essere suddivisi in due categorie: i checkpoint che utilizzano i language embeddings e quelli che non li utilizzano. ### XLM con language embeddings I seguenti modelli XLM utilizzano gli embeddings linguistici per specificare la lingua utilizzata per l'inferenza: - `xlm-mlm-ende-1024` (Modellazione mascherata del linguaggio (Masked language modeling, in inglese), Inglese-Tedesco) - `xlm-mlm-enfr-1024` (Modellazione mascherata del linguaggio, Inglese-Francese) - `xlm-mlm-enro-1024` (Modellazione mascherata del linguaggio, Inglese-Rumeno) - `xlm-mlm-xnli15-1024` (Modellazione mascherata del linguaggio, lingue XNLI) - `xlm-mlm-tlm-xnli15-1024` (Modellazione mascherata del linguaggio + traduzione, lingue XNLI) - `xlm-clm-enfr-1024` (Modellazione causale del linguaggio, Inglese-Francese) - `xlm-clm-ende-1024` (Modellazione causale del linguaggio, Inglese-Tedesco) Gli embeddings linguistici sono rappresentati come un tensore delle stesse dimensioni dell' `input_ids` passato al modello. I valori in questi tensori dipendono dal linguaggio usato e sono identificati dagli attributi `lang2id` e `id2lang` del tokenizer. In questo esempio, carica il checkpoint `xlm-clm-enfr-1024` (Modellazione causale del linguaggio, Inglese-Francese): ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") ``` L'attributo `lang2id` del tokenizer mostra il linguaggio del modello e il suo ids: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` Poi, crea un esempio di input: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` Imposta l'id del linguaggio a `"en"` e usalo per definire il language embedding. Il language embedding รจ un tensore riempito con `0` perchรฉ questo รจ il language id per l'inglese. Questo tensore dovrebbe avere la stessa dimensione di `input_ids`. 
```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` Adesso puoi inserire `input_ids` e language embedding nel modello: ```py >>> outputs = model(input_ids, langs=langs) ``` Lo script [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) può generare testo tramite i language embeddings usando i checkpoint `xlm-clm`. ### XLM senza language embeddings I seguenti modelli XLM non richiedono l'utilizzo dei language embeddings per fare inferenza: - `xlm-mlm-17-1280` (Modellazione mascherata del linguaggio, 17 lingue) - `xlm-mlm-100-1280` (Modellazione mascherata del linguaggio, 100 lingue) Questi modelli sono utilizzati per rappresentazioni generiche di frasi, a differenza dei precedenti checkpoint XLM. ## BERT I seguenti modelli BERT possono essere usati per compiti multilingue: - `bert-base-multilingual-uncased` (Modellazione mascherata del linguaggio + Previsione della prossima frase, 102 lingue) - `bert-base-multilingual-cased` (Modellazione mascherata del linguaggio + Previsione della prossima frase, 104 lingue) Questi modelli non richiedono language embeddings per fare inferenza. Riescono ad identificare il linguaggio dal contesto e a inferire di conseguenza. ## XLM-RoBERTa I seguenti modelli XLM-RoBERTa possono essere usati per compiti multilingue: - `xlm-roberta-base` (Modellazione mascherata del linguaggio, 100 lingue) - `xlm-roberta-large` (Modellazione mascherata del linguaggio, 100 lingue) XLM-RoBERTa è stato addestrato su 2.5TB di dati CommonCrawl appena creati e puliti in 100 lingue. Offre notevoli vantaggi rispetto ai modelli multilingue rilasciati in precedenza, come mBERT o XLM, in compiti come la classificazione, l'etichettatura delle sequenze e la risposta alle domande. ## M2M100 I seguenti modelli M2M100 possono essere usati per compiti multilingue: - `facebook/m2m100_418M` (Traduzione) - `facebook/m2m100_1.2B` (Traduzione) In questo esempio, carica il checkpoint `facebook/m2m100_418M` per tradurre dal cinese all'inglese. Puoi impostare la lingua di partenza nel tokenizer: ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒." >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` Applica il tokenizer al testo: ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100 forza l'id della lingua obiettivo come primo token generato per tradurre nella lingua obiettivo. Imposta il parametro `forced_bos_token_id` a `en` nel metodo `generate` per tradurre in inglese: ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' 
``` ## MBart I seguenti modelli MBart possono essere usati per compiti multilingue: - `facebook/mbart-large-50-one-to-many-mmt` (Traduzione automatica multilingue uno-a-molti, 50 lingue) - `facebook/mbart-large-50-many-to-many-mmt` (Traduzione automatica multilingue molti-a-molti, 50 lingue) - `facebook/mbart-large-50-many-to-one-mmt` (Traduzione automatica multilingue molti-a-uno, 50 lingue) - `facebook/mbart-large-50` (Traduzione multilingue, 50 lingue) - `facebook/mbart-large-cc25` In questo esempio, carica il checkpoint `facebook/mbart-large-50-many-to-many-mmt` per tradurre dal finlandese all'inglese. Puoi impostare la lingua di partenza nel tokenizer: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` Applica il tokenizer al testo finlandese (il testo di partenza da tradurre): ```py >>> encoded_fi = tokenizer(fi_text, return_tensors="pt") ``` MBart forza l'id della lingua obiettivo come primo token generato per tradurre nella lingua obiettivo. Imposta il parametro `forced_bos_token_id` a `en` nel metodo `generate` per tradurre in inglese: ```py >>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` Se stai usando il checkpoint `facebook/mbart-large-50-many-to-one-mmt`, non hai bisogno di forzare l'id della lingua obiettivo come primo token generato; per il resto l'uso è lo stesso.
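A titolo puramente illustrativo, ecco uno schema minimale (non presente nella guida originale) per il checkpoint molti-a-uno appena citato: dato che la lingua obiettivo è sempre l'inglese, `forced_bos_token_id` non è necessario.

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> # Schema indicativo: checkpoint molti-a-uno, traduzione dal finlandese verso l'inglese
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
>>> generated_tokens = model.generate(**encoded_fi)  # nessun forced_bos_token_id
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```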
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/preprocessing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Preprocess [[open-in-colab]] Prima di poter usare i dati in un modello, bisogna processarli in un formato accettabile per quest'ultimo. Un modello non comprende il testo grezzo, le immagini o l'audio. Bisogna convertire questi input in numeri e assemblarli all'interno di tensori. In questa esercitazione, tu potrai: * Preprocessare dati testuali con un tokenizer. * Preprocessare immagini o dati audio con un estrattore di caratteristiche. * Preprocessare dati per attivitร  multimodali mediante un processore. ## NLP <Youtube id="Yffk5aydLzg"/> Lo strumento principale per processare dati testuali รจ un [tokenizer](main_classes/tokenizer). Un tokenizer inizia separando il testo in *tokens* secondo una serie di regole. I tokens sono convertiti in numeri, questi vengono utilizzati per costruire i tensori di input del modello. Anche altri input addizionali se richiesti dal modello vengono aggiunti dal tokenizer. <Tip> Se stai pensando si utilizzare un modello preaddestrato, รจ importante utilizzare il tokenizer preaddestrato associato. Questo assicura che il testo sia separato allo stesso modo che nel corpus usato per l'addestramento, e venga usata la stessa mappatura tokens-to-index (solitamente indicato come il *vocabolario*) come nel preaddestramento. </Tip> Iniziamo subito caricando un tokenizer preaddestrato con la classe [`AutoTokenizer`]. Questo scarica il *vocabolario* usato quando il modello รจ stato preaddestrato. ### Tokenize Carica un tokenizer preaddestrato con [`AutoTokenizer.from_pretrained`]: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` Poi inserisci le tue frasi nel tokenizer: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Il tokenizer restituisce un dizionario contenente tre oggetti importanti: * [input_ids](glossary#input-ids) sono gli indici che corrispondono ad ogni token nella frase. * [attention_mask](glossary#attention-mask) indicata se un token deve essere elaborato o no. * [token_type_ids](glossary#token-type-ids) identifica a quale sequenza appartiene un token se รจ presente piรน di una sequenza. Si possono decodificare gli `input_ids` per farsi restituire l'input originale: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. 
[SEP]' ``` Come si puรฒ vedere, il tokenizer aggiunge due token speciali - `CLS` e `SEP` (classificatore e separatore) - alla frase. Non tutti i modelli hanno bisogno dei token speciali, ma se servono, il tokenizer li aggiungerร  automaticamente. Se ci sono piรน frasi che vuoi processare, passale come una lista al tokenizer: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### Pad Questo รจ un argomento importante. Quando processi un insieme di frasi potrebbero non avere tutte la stessa lunghezza. Questo รจ un problema perchรจ i tensori, in input del modello, devono avere dimensioni uniformi. Il padding รจ una strategia per assicurarsi che i tensori siano rettangolari aggiungendo uno speciale *padding token* alle frasi piรน corte. Imposta il parametro `padding` a `True` per imbottire le frasi piรน corte nel gruppo in modo che combacino con la massima lunghezza presente: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` Nota che il tokenizer aggiunge alle sequenze degli `0` perchรจ sono troppo corte! ### Truncation L'altra faccia della medaglia รจ che avolte le sequenze possono essere troppo lunghe per essere gestite dal modello. In questo caso, avrai bisogno di troncare la sequenza per avere una lunghezza minore. Imposta il parametro `truncation` a `True` per troncare una sequenza alla massima lunghezza accettata dal modello: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` ### Costruire i tensori Infine, vuoi che il tokenizer restituisca i tensori veri e propri da dare in input al modello. Imposta il parametro `return_tensors` su `pt` per PyTorch, o `tf` per TensorFlow: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102], [ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0]])} ===PT-TF-SPLIT=== >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102], [ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>} ``` ## Audio Gli input audio sono processati in modo differente rispetto al testo, ma l'obiettivo rimane lo stesso: creare sequenze numeriche che il modello può capire. Un [estrattore di caratteristiche](main_classes/feature_extractor) è progettato con lo scopo preciso di estrarre caratteristiche da immagini o dati audio grezzi e convertirli in tensori. Prima di iniziare, installa 🤗 Datasets per caricare un dataset audio e sperimentare: ```bash pip install datasets ``` Carica il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) (vedi il 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub.html) per avere maggiori dettagli su come caricare un dataset): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Accedi al primo elemento della colonna `audio` per dare uno sguardo all'input. Richiamando la colonna `audio`, il file audio viene caricato e ricampionato automaticamente: ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. 
], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` Questo restituisce tre oggetti: * `array` è il segnale vocale caricato - e potenzialmente ricampionato - come vettore 1D. * `path` è il percorso del file audio. * `sampling_rate` si riferisce al numero di campioni del segnale vocale misurati al secondo. ### Ricampionamento Per questo tutorial, puoi usare il modello [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base). Come puoi vedere dalla model card, il modello Wav2Vec2 è preaddestrato su un campionamento vocale a 16kHz. È importante che la frequenza di campionamento dei tuoi dati audio combaci con la frequenza di campionamento del dataset usato per preaddestrare il modello. Se la frequenza di campionamento dei tuoi dati non è uguale, dovrai ricampionare i tuoi dati audio. Per esempio, il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ha una frequenza di campionamento di 8kHz (8000 Hz). Per utilizzare il modello Wav2Vec2 su questo dataset, alzala a 16kHz: ```py >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` 1. Usa il metodo [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.cast_column) di 🤗 Datasets per alzare la frequenza di campionamento a 16kHz: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. Carica il file audio: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` Come puoi notare, la `sampling_rate` adesso è 16kHz! ### Feature extractor Il prossimo passo è caricare un estrattore di caratteristiche per normalizzare e fare padding sull'input. Quando applichiamo il padding sui dati testuali, uno `0` è aggiunto alle sequenze più brevi. La stessa idea si applica ai dati audio: l'estrattore di caratteristiche per gli audio aggiungerà uno `0` - interpretato come silenzio - agli `array`. Carica l'estrattore delle caratteristiche con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` Inserisci l'`array` audio nell'estrattore delle caratteristiche. Raccomandiamo sempre di aggiungere il parametro `sampling_rate` nell'estrattore delle caratteristiche, per poter individuare e correggere meglio eventuali errori silenziosi che potrebbero verificarsi. 
```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` ### Pad e truncate Come per il tokenizer, puoi applicare le operazioni di padding o truncation per gestire sequenze di lunghezza variabile all'interno di un lotto. Dai uno sguardo alla lunghezza delle sequenze di questi due campioni audio: ```py >>> dataset[0]["audio"]["array"].shape (173398,) >>> dataset[1]["audio"]["array"].shape (106496,) ``` Come puoi vedere, il primo campione ha una sequenza più lunga del secondo. Crea una funzione che preprocesserà il dataset. Specifica una lunghezza massima del campione, e l'estrattore di features si occuperà di riempire o troncare la sequenza per farla coincidere con essa: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` Applica la funzione ai primi esempi nel dataset: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` Adesso guarda la lunghezza dei campioni elaborati: ```py >>> processed_dataset["input_values"][0].shape (100000,) >>> processed_dataset["input_values"][1].shape (100000,) ``` La lunghezza dei campioni adesso coincide con la massima lunghezza impostata nella funzione. ## Vision Un estrattore di caratteristiche si può usare anche per processare immagini e per compiti di visione. Ancora una volta, l'obiettivo è convertire l'immagine grezza in un lotto di tensori come input. Carica il dataset [food101](https://huggingface.co/datasets/food101) per questa esercitazione. Usa il parametro `split` di 🤗 Datasets per caricare solo un piccolo campione dal dataset di addestramento poiché il set di dati è molto grande: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` Secondo passo, dai uno sguardo alle immagini usando la caratteristica [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) di 🤗 Datasets: ```py >>> dataset[0]["image"] ``` ![vision-preprocess-tutorial.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png) ### Feature extractor Carica l'estrattore di caratteristiche con [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") ``` ### Data augmentation Per le attività di visione, è usuale aggiungere alcuni tipi di data augmentation alle immagini come parte del preprocessing. Puoi aggiungere augmentations con qualsiasi libreria che preferisci, ma in questa esercitazione userai il modulo [`transforms`](https://pytorch.org/vision/stable/transforms.html) di torchvision. 1. 
Normalizza l'immagine e usa [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) per concatenare alcune trasformazioni - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) e [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) - insieme: ```py >>> from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ColorJitter, ToTensor >>> normalize = Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std) >>> _transforms = Compose( ... [RandomResizedCrop(feature_extractor.size), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize] ... ) ``` 2. Il modello accetta [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) come input. Questo valore รจ generato dall'estrattore di caratteristiche. Crea una funzione che genera `pixel_values` dai transforms: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(image.convert("RGB")) for image in examples["image"]] ... return examples ``` 3. Poi utilizza ๐Ÿค— Datasets [`set_transform`](https://huggingface.co/docs/datasets/process.html#format-transform)per applicare al volo la trasformazione: ```py >>> dataset.set_transform(transforms) ``` 4. Adesso quando accedi all'immagine, puoi notare che l'estrattore di caratteristiche ha aggiunto `pixel_values` allo schema di input: ```py >>> dataset[0]["image"] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=384x512 at 0x7F1A7B0630D0>, 'label': 6, 'pixel_values': tensor([[[ 0.0353, 0.0745, 0.1216, ..., -0.9922, -0.9922, -0.9922], [-0.0196, 0.0667, 0.1294, ..., -0.9765, -0.9843, -0.9922], [ 0.0196, 0.0824, 0.1137, ..., -0.9765, -0.9686, -0.8667], ..., [ 0.0275, 0.0745, 0.0510, ..., -0.1137, -0.1216, -0.0824], [ 0.0667, 0.0824, 0.0667, ..., -0.0588, -0.0745, -0.0980], [ 0.0353, 0.0353, 0.0431, ..., -0.0039, -0.0039, -0.0588]], [[ 0.2078, 0.2471, 0.2863, ..., -0.9451, -0.9373, -0.9451], [ 0.1608, 0.2471, 0.3098, ..., -0.9373, -0.9451, -0.9373], [ 0.2078, 0.2706, 0.3020, ..., -0.9608, -0.9373, -0.8275], ..., [-0.0353, 0.0118, -0.0039, ..., -0.2392, -0.2471, -0.2078], [ 0.0196, 0.0353, 0.0196, ..., -0.1843, -0.2000, -0.2235], [-0.0118, -0.0039, -0.0039, ..., -0.0980, -0.0980, -0.1529]], [[ 0.3961, 0.4431, 0.4980, ..., -0.9216, -0.9137, -0.9216], [ 0.3569, 0.4510, 0.5216, ..., -0.9059, -0.9137, -0.9137], [ 0.4118, 0.4745, 0.5216, ..., -0.9137, -0.8902, -0.7804], ..., [-0.2314, -0.1922, -0.2078, ..., -0.4196, -0.4275, -0.3882], [-0.1843, -0.1686, -0.2000, ..., -0.3647, -0.3804, -0.4039], [-0.1922, -0.1922, -0.1922, ..., -0.2941, -0.2863, -0.3412]]])} ``` Di seguito come si vede l'immagine dopo la fase di preprocessing. Come ci si aspetterebbe dalle trasformazioni applicate, l'immagine รจ stata ritagliata in modo casuale e le proprietร  del colore sono diverse. ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` ![preprocessed_image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png) ## Multimodal Per attivitร  multimodali userai una combinazione di tutto quello che hai imparato poco fa e applicherai le tue competenze alla comprensione automatica del parlato (Automatic Speech Recognition - ASR). Questo significa che avrai bisogno di: * Un estrattore delle caratteristiche per processare i dati audio. 
* Il Tokenizer per processare i testi. Ritorna sul datasere [LJ Speech](https://huggingface.co/datasets/lj_speech): ```py >>> from datasets import load_dataset >>> lj_speech = load_dataset("lj_speech", split="train") ``` Visto che sei interessato solo alle colonne `audio` e `text`, elimina tutte le altre: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` Adesso guarda le colonne `audio` e `text`: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050} >>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` Ricorda dalla sezione precedente sull'elaborazione dei dati audio, tu dovresti sempre [ricampionare](preprocessing#audio) la frequenza di campionamento dei tuoi dati audio per farla coincidere con quella del dataset usato dal modello preaddestrato: ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` ### Processor Un processor combina un estrattore di caratteristiche e un tokenizer. Carica un processor con [`AutoProcessor.from_pretrained]: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. Crea una funzione che processi i dati audio in `input_values`, e tokenizza il testo in `labels`. Questi sono i tuoi input per il modello: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. Applica la funzione `prepare_dataset` ad un campione: ```py >>> prepare_dataset(lj_speech[0]) ``` Nota che il processor ha aggiunto `input_values` e `labels`. La frequenza di campionamento รจ stata corretta riducendola a 16kHz. Fantastico, ora dovresti essere in grado di preelaborare i dati per qualsiasi modalitร  e persino di combinare modalitร  diverse! Nella prossima esercitazione, impareremo a mettere a punto un modello sui dati appena pre-elaborati.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_special.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Addestramento su Hardware Specializzato

<Tip>

Nota: molte delle strategie introdotte nella [sezione sulla GPU singola](perf_train_gpu_one) (come mixed precision training o gradient accumulation) e nella [sezione multi-GPU](perf_train_gpu_many) sono generiche e applicabili all'addestramento di modelli in generale, quindi assicurati di darci un'occhiata prima di immergerti in questa sezione.

</Tip>

Questo documento sarà presto completato con informazioni su come effettuare l'addestramento su hardware specializzato.
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_infer_gpu_many.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Inferenza Efficiente su GPU Multiple

Questo documento contiene informazioni su come fare inferenza in maniera efficiente su GPU multiple.

<Tip>

Nota: un setup con GPU multiple può utilizzare la maggior parte delle strategie descritte nella [sezione con GPU singola](./perf_infer_gpu_one). Tuttavia, è utile conoscere alcune tecniche semplici che possono essere utilizzate per ottenere risultati migliori.

</Tip>

## `BetterTransformer` per inferenza più rapida

Abbiamo recentemente integrato `BetterTransformer` per un'inferenza più rapida su multi-GPU per modelli di testo, immagini e audio. Controlla il documento con queste integrazioni [qui](https://huggingface.co/docs/optimum/bettertransformer/overview) per maggiori dettagli.
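A titolo puramente indicativo, ecco uno schema minimo di come si può convertire un modello con `BetterTransformer` tramite la libreria 🤗 Optimum (il checkpoint usato qui è scelto solo come esempio illustrativo):

```python
from transformers import AutoModelForSequenceClassification
from optimum.bettertransformer import BetterTransformer

# Checkpoint scelto solo a scopo illustrativo
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

# Converte il modello per usare i kernel ottimizzati di BetterTransformer
model = BetterTransformer.transform(model)
```

Per l'elenco dei modelli supportati e le opzioni disponibili fai sempre riferimento alla documentazione di 🤗 Optimum linkata sopra.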
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_infer_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inferenza Efficiente su CPU Questa guida si concentra sull'inferenza di modelli di grandi dimensioni in modo efficiente sulla CPU. ## `BetterTransformer` per inferenza piรน rapida Abbiamo integrato di recente `BetterTransformer` per fare inferenza piรน rapidamente con modelli per testi, immagini e audio. Visualizza la documentazione sull'integrazione [qui](https://huggingface.co/docs/optimum/bettertransformer/overview) per maggiori dettagli. ## PyTorch JIT-mode (TorchScript) TorchScript รจ un modo di creare modelli serializzabili e ottimizzabili da codice PyTorch. Ogni programmma TorchScript puรฒ esere salvato da un processo Python e caricato in un processo dove non ci sono dipendenze Python. Comparandolo con l'eager mode di default, jit mode in PyTorch normalmente fornisce prestazioni migliori per l'inferenza del modello da parte di metodologie di ottimizzazione come la operator fusion. Per una prima introduzione a TorchScript, vedi la Introduction to [PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules). ### IPEX Graph Optimization con JIT-mode Intelยฎ Extension per PyTorch fornnisce ulteriori ottimizzazioni in jit mode per i modelli della serie Transformers. Consigliamo vivamente agli utenti di usufruire dei vantaggi di Intelยฎ Extension per PyTorch con jit mode. Alcuni operator patterns usati fequentemente dai modelli Transformers models sono giร  supportati in Intelยฎ Extension per PyTorch con jit mode fusions. Questi fusion patterns come Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm fusion and etc. sono abilitati e hanno buone performance. I benefici della fusion รจ fornito agli utenti in modo trasparente. In base alle analisi, il ~70% dei problemi piรน popolari in NLP question-answering, text-classification, and token-classification possono avere benefici sulle performance grazie ai fusion patterns sia per Float32 precision che per BFloat16 Mixed precision. Vedi maggiori informazioni per [IPEX Graph Optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html). #### Installazione di IPEX I rilasci di IPEX seguono PyTorch, verifica i vari approcci per [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/). ### Utilizzo del JIT-mode Per abilitare JIT-mode in Trainer per evaluation e prediction, devi aggiungere `jit_mode_eval` negli argomenti di Trainer. <Tip warning={true}> per PyTorch >= 1.14.0. JIT-mode potrebe giovare a qualsiasi modello di prediction e evaluaion visto che il dict input รจ supportato in jit.trace per PyTorch < 1.14.0. JIT-mode potrebbe giovare ai modelli il cui ordine dei parametri corrisponde all'ordine delle tuple in ingresso in jit.trace, come i modelli per question-answering. 
Nel caso in cui l'ordine dei parametri seguenti non corrisponda all'ordine delle tuple in ingresso in jit.trace, come nei modelli di text-classification, jit.trace fallirร  e lo cattureremo con una eccezione al fine di renderlo un fallback. Il logging รจ usato per notificare gli utenti. </Tip> Trovi un esempo con caso d'uso in [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) - Inference using jit mode on CPU: <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--jit_mode_eval </b></pre> - Inference with IPEX using jit mode on CPU: <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--use_ipex \</b> <b>--jit_mode_eval</b></pre>
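In alternativa alla riga di comando, gli stessi flag possono essere passati direttamente a [`TrainingArguments`] in Python. Quello che segue è solo uno schema minimo (il percorso di output è ipotetico):

```python
from transformers import TrainingArguments

# Schema indicativo: abilita IPEX e il jit mode per evaluation/prediction su CPU
training_args = TrainingArguments(
    output_dir="/tmp/output",  # percorso ipotetico
    do_eval=True,
    no_cuda=True,        # forza l'esecuzione su CPU
    use_ipex=True,       # usa Intel® Extension for PyTorch
    jit_mode_eval=True,  # usa il jit mode di PyTorch per l'inferenza
)
```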
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Condividi un modello Gli ultimi due tutorial ti hanno mostrato come puoi fare fine-tuning di un modello con PyTorch, Keras e ๐Ÿค— Accelerate per configurazioni distribuite. Il prossimo passo รจ quello di condividere il tuo modello con la community! In Hugging Face, crediamo nella condivisione della conoscenza e delle risorse in modo da democratizzare l'intelligenza artificiale per chiunque. Ti incoraggiamo a considerare di condividere il tuo modello con la community per aiutare altre persone a risparmiare tempo e risorse. In questo tutorial, imparerai due metodi per la condivisione di un modello trained o fine-tuned nel [Model Hub](https://huggingface.co/models): - Condividi in modo programmatico i tuoi file nell'Hub. - Trascina i tuoi file nell'Hub mediante interfaccia grafica. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip> Per condividere un modello con la community, hai bisogno di un account su [huggingface.co](https://huggingface.co/join). Puoi anche unirti ad un'organizzazione esistente o crearne una nuova. </Tip> ## Caratteristiche dei repository Ogni repository nel Model Hub si comporta come un tipico repository di GitHub. I nostri repository offrono il versionamento, la cronologia dei commit, e la possibilitร  di visualizzare le differenze. Il versionamento all'interno del Model Hub รจ basato su git e [git-lfs](https://git-lfs.github.com/). In altre parole, puoi trattare un modello come un unico repository, consentendo un maggiore controllo degli accessi e maggiore scalabilitร . Il controllo delle versioni consente *revisions*, un metodo per appuntare una versione specifica di un modello con un hash di commit, un tag o un branch. Come risultato, puoi caricare una specifica versione di un modello con il parametro `revision`: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="v2.0.1" # nome di un tag, di un branch, o commit hash ... ) ``` Anche i file possono essere modificati facilmente in un repository ed รจ possibile visualizzare la cronologia dei commit e le differenze: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png) ## Configurazione Prima di condividere un modello nell'Hub, hai bisogno delle tue credenziali di Hugging Face. Se hai accesso ad un terminale, esegui il seguente comando nell'ambiente virtuale in cui รจ installata la libreria ๐Ÿค— Transformers. 
Questo memorizzerร  il tuo token di accesso nella cartella cache di Hugging Face (di default `~/.cache/`): ```bash huggingface-cli login ``` Se stai usando un notebook come Jupyter o Colaboratory, assicurati di avere la libreria [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) installata. Questa libreria ti permette di interagire in maniera programmatica con l'Hub. ```bash pip install huggingface_hub ``` Utilizza `notebook_login` per accedere all'Hub, e segui il link [qui](https://huggingface.co/settings/token) per generare un token con cui effettuare il login: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Converti un modello per tutti i framework Per assicurarti che il tuo modello possa essere utilizzato da persone che lavorano con un framework differente, ti raccomandiamo di convertire e caricare il tuo modello sia con i checkpoint di PyTorch che con quelli di TensorFlow. Anche se รจ possibile caricare il modello da un framework diverso, se si salta questo passaggio, il caricamento sarร  piรน lento perchรฉ ๐Ÿค— Transformers ha bisogno di convertire i checkpoint al momento. Convertire un checkpoint per un altro framework รจ semplice. Assicurati di avere PyTorch e TensorFlow installati (vedi [qui](installation) per le istruzioni d'installazione), e poi trova il modello specifico per il tuo compito nell'altro framework. <frameworkcontent> <pt> Specifica `from_tf=True` per convertire un checkpoint da TensorFlow a PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_tf=True ... ) >>> pt_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </pt> <tf> Specifica `from_pt=True` per convertire un checkpoint da PyTorch a TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` Poi puoi salvare il tuo nuovo modello in TensorFlow con il suo nuovo checkpoint: ```py >>> tf_model.save_pretrained("path/verso/il-nome-magnifico-che-hai-scelto") ``` </tf> <jax> Se un modello รจ disponibile in Flax, puoi anche convertire un checkpoint da PyTorch a Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/verso/il-nome-magnifico-che-hai-scelto", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Condividi un modello durante il training <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Condividere un modello nell'Hub รจ tanto semplice quanto aggiungere un parametro extra o un callback. Ricorda dal [tutorial sul fine-tuning](training), la classe [`TrainingArguments`] รจ dove specifichi gli iperparametri e le opzioni addizionali per l'allenamento. Una di queste opzioni di training include l'abilitร  di condividere direttamente un modello nell'Hub. Imposta `push_to_hub=True` in [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="il-mio-bellissimo-modello", push_to_hub=True) ``` Passa gli argomenti per il training come di consueto al [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` Dopo aver effettuato il fine-tuning del tuo modello, chiama [`~transformers.Trainer.push_to_hub`] sul [`Trainer`] per condividere il modello allenato nell'Hub. 
๐Ÿค— Transformers aggiungerร  in modo automatico persino gli iperparametri, i risultati del training e le versioni del framework alla scheda del tuo modello (model card, in inglese)! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Condividi un modello nell'Hub con [`PushToHubCallback`]. Nella funzione [`PushToHubCallback`], aggiungi: - Una directory di output per il tuo modello. - Un tokenizer. - L'`hub_model_id`, che รจ il tuo username sull'Hub e il nome del modello. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./il_path_dove_salvare_il_tuo_modello", ... tokenizer=tokenizer, ... hub_model_id="il-tuo-username/il-mio-bellissimo-modello", ... ) ``` Aggiungi il callback a [`fit`](https://keras.io/api/models/model_training_apis/), e ๐Ÿค— Transformers caricherร  il modello allenato nell'Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Utilizzare la funzione `push_to_hub` Puoi anche chiamare `push_to_hub` direttamente sul tuo modello per caricarlo nell'Hub. Specifica il nome del tuo modello in `push_to_hub`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello") ``` Questo crea un repository sotto il proprio username con il nome del modello `il-mio-bellissimo-modello`. Ora chiunque puรฒ caricare il tuo modello con la funzione `from_pretrained`: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("il-tuo-username/il-mio-bellissimo-modello") ``` Se fai parte di un'organizzazione e vuoi invece condividere un modello sotto il nome dell'organizzazione, aggiungi il parametro `organization`: ```py >>> pt_model.push_to_hub("il-mio-bellissimo-modello", organization="la-mia-fantastica-org") ``` La funzione `push_to_hub` puรฒ essere anche utilizzata per aggiungere altri file al repository del modello. Per esempio, aggiungi un tokenizer ad un repository di un modello: ```py >>> tokenizer.push_to_hub("il-mio-bellissimo-modello") ``` O magari potresti voler aggiungere la versione di TensorFlow del tuo modello PyTorch a cui hai fatto fine-tuning: ```py >>> tf_model.push_to_hub("il-mio-bellissimo-modello") ``` Ora quando navighi nel tuo profilo Hugging Face, dovresti vedere il tuo repository del modello appena creato. Premendo sulla scheda **Files** vengono visualizzati tutti i file caricati nel repository. Per maggiori dettagli su come creare e caricare file ad un repository, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/how-to-upstream). ## Carica un modello utilizzando l'interfaccia web Chi preferisce un approccio senza codice puรฒ caricare un modello tramite l'interfaccia web dell'hub. Visita [huggingface.co/new](https://huggingface.co/new) per creare un nuovo repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) Da qui, aggiungi alcune informazioni sul tuo modello: - Seleziona il/la **owner** del repository. Puoi essere te o qualunque organizzazione di cui fai parte. - Scegli un nome per il tuo modello, il quale sarร  anche il nome del repository. - Scegli se il tuo modello รจ pubblico o privato. - Specifica la licenza utilizzata per il tuo modello. Ora premi sulla scheda **Files** e premi sul pulsante **Add file** per caricare un nuovo file al tuo repository. Trascina poi un file per caricarlo e aggiungere un messaggio di commit. 
![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Aggiungi una scheda del modello Per assicurarti che chiunque possa comprendere le abilitร , limitazioni, i potenziali bias e le considerazioni etiche del tuo modello, per favore aggiungi una scheda del modello (model card, in inglese) al tuo repository. La scheda del modello รจ definita nel file `README.md`. Puoi aggiungere una scheda del modello: * Creando manualmente e caricando un file `README.md`. * Premendo sul pulsante **Edit model card** nel repository del tuo modello. Dai un'occhiata alla [scheda del modello](https://huggingface.co/distilbert-base-uncased) di DistilBert per avere un buon esempio del tipo di informazioni che una scheda di un modello deve includere. Per maggiori dettagli legati ad altre opzioni che puoi controllare nel file `README.md`, come l'impatto ambientale o widget di esempio, fai riferimento alla documentazione [qui](https://huggingface.co/docs/hub/models-cards).
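A titolo puramente indicativo, una scheda del modello scritta in locale può anche essere caricata in modo programmatico con la libreria `huggingface_hub` (il nome del repository è solo un esempio):

```python
from huggingface_hub import HfApi

# Schema indicativo: carica un README.md già scritto in locale nel repository del modello
api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",  # percorso locale ipotetico
    path_in_repo="README.md",
    repo_id="il-tuo-username/il-mio-bellissimo-modello",
)
```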
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pipeline per l'inferenza La [`pipeline`] rende semplice usare qualsiasi modello dal [Model Hub](https://huggingface.co/models) per fare inferenza su diversi compiti come generazione del testo, segmentazione di immagini e classificazione di audio. Anche se non hai esperienza con una modalitร  specifica o non comprendi bene il codice che alimenta i modelli, รจ comunque possibile utilizzarli con l'opzione [`pipeline`]! Questa esercitazione ti insegnerร  a: * Usare una [`pipeline`] per fare inferenza. * Usare uno specifico tokenizer o modello. * Usare una [`pipeline`] per compiti che riguardano audio e video. <Tip> Dai un'occhiata alla documentazione di [`pipeline`] per una lista completa dei compiti supportati. </Tip> ## Utilizzo della Pipeline Nonostante ogni compito abbia una [`pipeline`] associata, รจ piรน semplice utilizzare l'astrazione generica della [`pipeline`] che contiene tutte quelle specifiche per ogni mansione. La [`pipeline`] carica automaticamente un modello predefinito e un tokenizer in grado di fare inferenza per il tuo compito. 1. Inizia creando una [`pipeline`] e specificando il compito su cui fare inferenza: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation") ``` 2. Inserisci il testo in input nella [`pipeline`]: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Iron-priests at the door to the east, and thirteen for the Lord Kings at the end of the mountain'}] ``` Se hai piรน di un input, inseriscilo in una lista: ```py >>> generator( ... [ ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... "Nine for Mortal Men, doomed to die, One for the Dark Lord on his dark throne", ... ] ... ) # doctest: +SKIP ``` Qualsiasi parametro addizionale per il tuo compito puรฒ essere incluso nella [`pipeline`]. La mansione `text-generation` ha un metodo [`~generation.GenerationMixin.generate`] con diversi parametri per controllare l'output. Ad esempio, se desideri generare piรน di un output, utilizza il parametro `num_return_sequences`: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone", ... num_return_sequences=2, ... ) # doctest: +SKIP ``` ### Scegliere modello e tokenizer La [`pipeline`] accetta qualsiasi modello dal [Model Hub](https://huggingface.co/models). Ci sono tag nel Model Hub che consentono di filtrare i modelli per attivitร . 
Una volta che avrai scelto il modello appropriato, caricalo usando la corrispondente classe `AutoModelFor` e [`AutoTokenizer`]. Ad esempio, carica la classe [`AutoModelForCausalLM`] per un compito di causal language modeling: ```py >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") ``` Crea una [`pipeline`] per il tuo compito, specificando il modello e il tokenizer che hai caricato: ```py >>> from transformers import pipeline >>> generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) ``` Inserisci il testo di input nella [`pipeline`] per generare del testo: ```py >>> generator( ... "Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone" ... ) # doctest: +SKIP [{'generated_text': 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Seven for the Dragon-lords (for them to rule in a world ruled by their rulers, and all who live within the realm'}] ``` ## Audio pipeline La flessibilitร  della [`pipeline`] fa si che possa essere estesa ad attivitร  sugli audio. Per esempio, classifichiamo le emozioni in questo clip audio: ```py >>> from datasets import load_dataset >>> import torch >>> torch.manual_seed(42) # doctest: +IGNORE_RESULT >>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> audio_file = ds[0]["audio"]["path"] ``` Trova un modello per la [classificazione audio](https://huggingface.co/models?pipeline_tag=audio-classification) sul Model Hub per eseguire un compito di riconoscimento automatico delle emozioni e caricalo nella [`pipeline`]: ```py >>> from transformers import pipeline >>> audio_classifier = pipeline( ... task="audio-classification", model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` Inserisci il file audio nella [`pipeline`]: ```py >>> preds = audio_classifier(audio_file) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}] ``` ## Vision pipeline Infine, usare la [`pipeline`] per le attivitร  sulle immagini รจ praticamente la stessa cosa. Specifica la tua attivitร  e inserisci l'immagine nel classificatore. L'immagine puรฒ essere sia un link che un percorso sul tuo pc in locale. Per esempio, quale specie di gatto รจ raffigurata qui sotto? ![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg) ```py >>> from transformers import pipeline >>> vision_classifier = pipeline(task="image-classification") >>> preds = vision_classifier( ... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ```
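Come per le altre modalità, puoi anche indicare un modello specifico del Model Hub e, se disponibile, una GPU tramite il parametro `device`. Il checkpoint qui sotto è indicato solo a titolo di esempio:

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(
...     task="image-classification",
...     model="google/vit-base-patch16-224",  # checkpoint scelto solo a scopo illustrativo
...     device=0,  # usa la prima GPU; ometti il parametro per eseguire su CPU
... )
```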
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Installazione di Transformers ! pip install transformers datasets # Per installare dalla fonte invece dell'ultima versione rilasciata, commenta il comando sopra e # rimuovi la modalitร  commento al comando seguente. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Esporta modelli ๐Ÿค— Transformers Se devi implementare ๐Ÿค— modelli Transformers in ambienti di produzione, noi consigliamo di esportarli in un formato serializzato che puรฒ essere caricato ed eseguito su runtime e hardware specializzati. In questa guida ti mostreremo come farlo esporta ๐Ÿค— Modelli Transformers in due formati ampiamente utilizzati: ONNX e TorchScript. Una volta esportato, un modello puรฒ essere ottimizato per l'inferenza tramite tecniche come la quantizzazione e soppressione. Se sei interessato a ottimizzare i tuoi modelli per l'esecuzione con la massima efficienza, dai un'occhiata a [๐Ÿค— Optimum library](https://github.com/huggingface/optimum). ## ONNX Il progetto [ONNX (Open Neural Network eXchange)](http://onnx.ai) Il progetto onnx รจ un open standard che definisce un insieme comune di operatori e un formato di file comune a rappresentano modelli di deep learning in un'ampia varietร  di framework, tra cui PyTorch e TensorFlow. Quando un modello viene esportato nel formato ONNX, questi operatori sono usati per costruire un grafico computazionale (often called an _intermediate representation_) che rappresenta il flusso di dati attraverso la rete neurale. Esponendo un grafico con operatori e tipi di dati standardizzati, ONNX rende piรน facile passare da un framework all'altro. Ad esempio, un modello allenato in PyTorch puรฒ essere esportato in formato ONNX e quindi importato in TensorFlow (e viceversa). ๐Ÿค— Transformers fornisce un pacchetto `transformers.onnx` che ti consente di convertire i checkpoint del modello in un grafico ONNX sfruttando gli oggetti di configurazione. Questi oggetti di configurazione sono giร  pronti per una serie di architetture di modelli, e sono progettati per essere facilmente estensibili ad altre architetture. Le configurazioni pronte includono le seguenti architetture: <!--This table is automatically generated by `make fix-copies`, do not fill manually!--> - ALBERT - BART - BEiT - BERT - BigBird - BigBird-Pegasus - Blenderbot - BlenderbotSmall - CamemBERT - ConvBERT - Data2VecText - Data2VecVision - DeiT - DistilBERT - ELECTRA - FlauBERT - GPT Neo - GPT-J - I-BERT - LayoutLM - M2M100 - Marian - mBART - MobileBERT - OpenAI GPT-2 - Perceiver - PLBart - RoBERTa - RoFormer - SqueezeBERT - T5 - ViT - XLM - XLM-RoBERTa - XLM-RoBERTa-XL Nelle prossime due sezioni, ti mostreremo come: * Esporta un modello supportato usando il pacchetto `transformers.onnx`. * Esporta un modello personalizzato per un'architettura non supportata. 
### Esportazione di un modello in ONNX Per esportare un modello ๐Ÿค— Transformers in ONNX, dovrai prima installarne alcune dipendenze extra: ```bash pip install transformers[onnx] ``` Il pacchetto `transformers.onnx` puรฒ essere usato come modulo Python: ```bash python -m transformers.onnx --help usage: Hugging Face Transformers ONNX exporter [-h] -m MODEL [--feature {causal-lm, ...}] [--opset OPSET] [--atol ATOL] output positional arguments: output Path indicating where to store generated ONNX model. optional arguments: -h, --help show this help message and exit -m MODEL, --model MODEL Model ID on huggingface.co or path on disk to load model from. --feature {causal-lm, ...} The type of features to export the model with. --opset OPSET ONNX opset version to export the model with. --atol ATOL Absolute difference tolerance when validating the model. ``` L'esportazione di un checkpoint utilizzando una configurazione giร  pronta puรฒ essere eseguita come segue: ```bash python -m transformers.onnx --model=distilbert-base-uncased onnx/ ``` che dovrebbe mostrare i seguenti log: ```bash Validating ONNX model... -[โœ“] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[โœ“] (2, 8, 768) matches (2, 8, 768) -[โœ“] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Questo esporta un grafico ONNX del checkpoint definito dall'argomento `--model`. In questo esempio รจ `distilbert-base-uncased`, ma puรฒ essere qualsiasi checkpoint Hugging Face Hub o uno memorizzato localmente. Il file risultante `model.onnx` puรฒ quindi essere eseguito su uno dei [tanti acceleratori](https://onnx.ai/supported-tools.html#deployModel) che supportano il lo standard ONNX. Ad esempio, possiamo caricare ed eseguire il modello con [ONNX Runtime](https://onnxruntime.ai/) come segue: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` I nomi di output richiesti (cioรจ `["last_hidden_state"]`) possono essere ottenuti dando un'occhiata alla configurazione ONNX di ogni modello. Ad esempio, per DistilBERT abbiamo: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` Il processo รจ identico per i checkpoint TensorFlow sull'hub. Ad esempio, noi possiamo esportare un checkpoint TensorFlow puro da [Keras organizzazione](https://huggingface.co/keras-io) come segue: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` Per esportare un modello memorizzato localmente, devi disporre dei pesi del modello e file tokenizer memorizzati in una directory. 
Ad esempio, possiamo caricare e salvare un checkpoint come segue: <frameworkcontent> <pt> ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> # Load tokenizer and PyTorch weights form the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> pt_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-pt-checkpoint") >>> pt_model.save_pretrained("local-pt-checkpoint") ``` Una volta salvato il checkpoint, possiamo esportarlo su ONNX puntando l'argomento `--model` del pacchetto `transformers.onnx` nella directory desiderata: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ``` </pt> <tf> ```python >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> # Load tokenizer and TensorFlow weights from the Hub >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") >>> # Save to disk >>> tokenizer.save_pretrained("local-tf-checkpoint") >>> tf_model.save_pretrained("local-tf-checkpoint") ``` Once the checkpoint is saved, we can export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory: ```bash python -m transformers.onnx --model=local-tf-checkpoint onnx/ ``` </tf> </frameworkcontent> ### Selezione delle caratteristiche per diverse topologie di modello Ogni configurazione giร  pronta viene fornita con una serie di _caratteristiche_ che ti consentono di esportare modelli per diversi tipi di topologie o attivitร . Come mostrato nella tabella di seguito, ogni caratteristica รจ associata a una diversa Auto Class: | Caratteristica | Auto Class | | ------------------------------------ | ------------------------------------ | | `causal-lm`, `causal-lm-with-past` | `AutoModelForCausalLM` | | `default`, `default-with-past` | `AutoModel` | | `masked-lm` | `AutoModelForMaskedLM` | | `question-answering` | `AutoModelForQuestionAnswering` | | `seq2seq-lm`, `seq2seq-lm-with-past` | `AutoModelForSeq2SeqLM` | | `sequence-classification` | `AutoModelForSequenceClassification` | | `token-classification` | `AutoModelForTokenClassification` | Per ciascuna configurazione, puoi trovare l'elenco delle funzionalitร  supportate tramite il `FeaturesManager`. Ad esempio, per DistilBERT abbiamo: ```python >>> from transformers.onnx.features import FeaturesManager >>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type("distilbert").keys()) >>> print(distilbert_features) ["default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "question-answering"] ``` Puoi quindi passare una di queste funzionalitร  all'argomento `--feature` nel pacchetto `transformers.onnx`. Ad esempio, per esportare un modello di classificazione del testo possiamo scegliere un modello ottimizzato dall'Hub ed eseguire: ```bash python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \ --feature=sequence-classification onnx/ ``` che visualizzerร  i seguenti registri: ```bash Validating ONNX model... 
-[โœ“] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[โœ“] (2, 2) matches (2, 2) -[โœ“] all values close (atol: 1e-05) All good, model saved at: onnx/model.onnx ``` Puoi notare che in questo caso, i nomi di output del modello ottimizzato sono `logits` invece di `last_hidden_state` che abbiamo visto con il checkpoint `distilbert-base-uncased` precedente. Questo รจ previsto dal modello ottimizato visto che ha una testa di e. <Tip> Le caratteristiche che hanno un suffisso `wtih-past` (ad es. `causal-lm-with-past`) corrispondono a topologie di modello con stati nascosti precalcolati (chiave e valori nei blocchi di attenzione) che possono essere utilizzati per la decodifica autoregressiva veloce. </Tip> ### Esportazione di un modello per un'architettura non supportata Se desideri esportare un modello la cui architettura non รจ nativamente supportata dalla libreria, ci sono tre passaggi principali da seguire: 1. Implementare una configurazione ONNX personalizzata. 2. Esportare il modello in ONNX. 3. Convalidare gli output di PyTorch e dei modelli esportati. In questa sezione, vedremo come DistilBERT รจ stato implementato per mostrare cosa รจ coinvolto in ogni passaggio. #### Implementazione di una configurazione ONNX personalizzata Iniziamo con l'oggetto di configurazione ONNX. Forniamo tre classi astratte da cui ereditare, a seconda del tipo di archittettura del modello che desideri esportare: * I modelli basati su encoder ereditano da [`~onnx.config.OnnxConfig`] * I modelli basati su decoder ereditano da [`~onnx.config.OnnxConfigWithPast`] * I modelli encoder-decoder ereditano da[`~onnx.config.OnnxSeq2SeqConfigWithPast`] <Tip> Un buon modo per implementare una configurazione ONNX personalizzata รจ guardare l'implementazione esistente nel file `configuration_<model_name>.py` di un'architettura simile. </Tip> Poichรฉ DistilBERT รจ un modello basato su encoder, la sua configurazione eredita da `OnnxConfig`: ```python >>> from typing import Mapping, OrderedDict >>> from transformers.onnx import OnnxConfig >>> class DistilBertOnnxConfig(OnnxConfig): ... @property ... def inputs(self) -> Mapping[str, Mapping[int, str]]: ... return OrderedDict( ... [ ... ("input_ids", {0: "batch", 1: "sequence"}), ... ("attention_mask", {0: "batch", 1: "sequence"}), ... ] ... ) ``` Ogni oggetto di configurazione deve implementare la proprietร  `inputs` e restituire una mappatura, dove ogni chiave corrisponde a un input previsto e ogni valore indica l'asse di quell'input. Per DistilBERT, possiamo vedere che sono richiesti due input: `input_ids` e `attention_mask`. Questi inputs hanno la stessa forma di `(batch_size, sequence_length)` per questo motivo vediamo gli stessi assi usati nella configurazione. <Tip> Puoi notare che la proprietร  `inputs` per `DistilBertOnnxConfig` restituisce un `OrdinatoDict`. Ciรฒ garantisce che gli input corrispondano alla loro posizione relativa all'interno del metodo `PreTrainedModel.forward()` durante il tracciamento del grafico. Raccomandiamo di usare un `OrderedDict` per le proprietร  `inputs` e `outputs` quando si implementano configurazioni ONNX personalizzate. </Tip> Dopo aver implementato una configurazione ONNX, รจ possibile istanziarla fornendo alla configurazione del modello base come segue: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert-base-uncased") >>> onnx_config = DistilBertOnnxConfig(config) ``` L'oggetto risultante ha diverse proprietร  utili. 
Ad esempio รจ possibile visualizzare il Set operatore ONNX che verrร  utilizzato durante l'esportazione: ```python >>> print(onnx_config.default_onnx_opset) 11 ``` รˆ inoltre possibile visualizzare gli output associati al modello come segue: ```python >>> print(onnx_config.outputs) OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})]) ``` Puoi notare che la proprietร  degli output segue la stessa struttura degli input; esso restituisce un `OrderedDict` di output con nome e le loro forme. La struttura di output รจ legato alla scelta della funzione con cui viene inizializzata la configurazione. Per impostazione predefinita, la configurazione ONNX viene inizializzata con la funzione 'predefinita' che corrisponde all'esportazione di un modello caricato con la classe `AutoModel`. Se tu desideri esportare una topologia di modello diversa, รจ sufficiente fornire una funzionalitร  diversa a l'argomento `task` quando inizializzi la configurazione ONNX. Ad esempio, se volevamo esportare DistilBERT con una testa di classificazione per sequenze, potremmo usare: ```python >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("distilbert-base-uncased") >>> onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification") >>> print(onnx_config_for_seq_clf.outputs) OrderedDict([('logits', {0: 'batch'})]) ``` <Tip> Tutte le proprietร  e i metodi di base associati a [`~onnx.config.OnnxConfig`] e le altre classi di configurazione possono essere sovrascritte se necessario. Guarda [`BartOnnxConfig`] per un esempio avanzato. </Tip> #### Esportazione del modello Una volta implementata la configurazione ONNX, il passaggio successivo consiste nell'esportare il modello. Qui possiamo usare la funzione `export()` fornita dal pacchetto `transformers.onnx`. Questa funzione prevede la configurazione ONNX, insieme con il modello base e il tokenizer e il percorso per salvare il file esportato: ```python >>> from pathlib import Path >>> from transformers.onnx import export >>> from transformers import AutoTokenizer, AutoModel >>> onnx_path = Path("model.onnx") >>> model_ckpt = "distilbert-base-uncased" >>> base_model = AutoModel.from_pretrained(model_ckpt) >>> tokenizer = AutoTokenizer.from_pretrained(model_ckpt) >>> onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ``` Gli `onnx_inputs` e `onnx_outputs` restituiti dalla funzione `export()` sono liste di chiavi definite nelle proprietร  di `input` e `output` della configurazione. Una volta esportato il modello, puoi verificare che il modello sia ben formato come segue: ```python >>> import onnx >>> onnx_model = onnx.load("model.onnx") >>> onnx.checker.check_model(onnx_model) ``` <Tip> Se il tuo modello รจ piรน largo di 2 GB, vedrai che molti file aggiuntivi sono creati durante l'esportazione. Questo รจ _previsto_ perchรฉ ONNX utilizza [Protocol Buffer](https://developers.google.com/protocol-buffers/) per memorizzare il modello e questi hanno un limite di dimensione 2 GB. Vedi la [Documentazione ONNX](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md) per istruzioni su come caricare modelli con dati esterni. </Tip> #### Convalida degli output del modello Il passaggio finale consiste nel convalidare gli output dal modello di base e quello esportato corrispondere entro una soglia di tolleranza assoluta. 
Qui possiamo usare la Funzione `validate_model_outputs()` fornita dal pacchetto `transformers.onnx` come segue: ```python >>> from transformers.onnx import validate_model_outputs >>> validate_model_outputs( ... onnx_config, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config.atol_for_validation ... ) ``` Questa funzione usa il metodo `OnnxConfig.generate_dummy_inputs()` per generare input per il modello di base e quello esportato e la tolleranza assoluta puรฒ essere definita nella configurazione. Generalmente troviamo una corrispondenza numerica nell'intervallo da 1e-6 a 1e-4, anche se รจ probabile che qualsiasi cosa inferiore a 1e-3 vada bene. ### Contribuire con una nuova configurazione a ๐Ÿค— Transformers Stiamo cercando di espandere l'insieme di configurazioni giร  pronte e di accettare contributi della community! Se vuoi contribuire con la tua aggiunta nella libreria, dovrai: * Implementare la configurazione ONNX nella corrispondente `configuration file _<model_name>.py` * Includere l'architettura del modello e le funzioni corrispondenti in [`~onnx.features.FeatureManager`] * Aggiungere la tua architettura del modello ai test in `test_onnx_v2.py` Scopri come stato contribuito la configurazione per [IBERT] (https://github.com/huggingface/transformers/pull/14868/files) per avere un'idea di cosa รจ coinvolto. ## TorchScript <Tip> Questo รจ l'inizio dei nostri esperimenti con TorchScript e stiamo ancora esplorando le sue capacitร  con modelli con variable-input-size. รˆ una nostra prioritร  e approfondiremo le nostre analisi nelle prossime versioni, con piรน esempi di codici, un'implementazione piรน flessibile e benchmark che confrontano i codici basati su Python con quelli compilati con TorchScript. </Tip> Secondo la documentazione di Pytorch: "TorchScript รจ un modo per creare modelli serializzabili e ottimizzabili da codice Pytorch". I due moduli di Pytorch [JIT e TRACE](https://pytorch.org/docs/stable/jit.html) consentono allo sviluppatore di esportare il loro modello da riutilizzare in altri programmi, come i programmi C++ orientati all'efficienza. Abbiamo fornito un'interfaccia che consente l'esportazione di modelli ๐Ÿค— Transformers in TorchScript in modo che possano essere riutilizzati in un ambiente diverso rispetto a un programma Python basato su Pytorch. Qui spieghiamo come esportare e utilizzare i nostri modelli utilizzando TorchScript. Esportare un modello richiede due cose: - Un passaggio in avanti con input fittizzi. - Istanziazione del modello con flag `torchscript`. Queste necessitร  implicano diverse cose a cui gli sviluppatori dovrebbero prestare attenzione. Questi dettagli mostrati sotto. ### Flag TorchScript e pesi legati Questo flag รจ necessario perchรฉ la maggior parte dei modelli linguistici in questo repository hanno pesi legati tra il loro strato "Embedding" e lo strato "Decoding". TorchScript non consente l'esportazione di modelli che hanno pesi legati, quindi รจ necessario prima slegare e clonare i pesi. Ciรฒ implica che i modelli istanziati con il flag `torchscript` hanno il loro strato `Embedding` e strato `Decoding` separato, il che significa che non dovrebbero essere addestrati in futuro. L'allenamento de-sincronizza i due strati, portando a risultati inaspettati. Questo non รจ il caso per i modelli che non hanno una testa del modello linguistico, poichรฉ quelli non hanno pesi legati. Questi modelli puรฒ essere esportato in sicurezza senza il flag `torchscript`. 
### Input fittizi e standard lengths Gli input fittizzi sono usati per fare un modello passaggio in avanti . Mentre i valori degli input si propagano attraverso i strati, Pytorch tiene traccia delle diverse operazioni eseguite su ciascun tensore. Queste operazioni registrate vengono quindi utilizzate per creare la "traccia" del modello. La traccia viene creata relativamente alle dimensioni degli input. รˆ quindi vincolato dalle dimensioni dell'input fittizio e non funzionerร  per altre lunghezze di sequenza o dimensioni batch. Quando si proverร  con una dimensione diversa, ci sarร  errore come: `La dimensione espansa del tensore (3) deve corrispondere alla dimensione esistente (7) nella dimensione non singleton 2` will be raised. Si consiglia pertanto di tracciare il modello con una dimensione di input fittizia grande almeno quanto il piรน grande input che verrร  fornito al modello durante l'inferenza. รˆ possibile eseguire il padding per riempire i valori mancanti. Il modello sarร  tracciato con una grande dimensione di input, tuttavia, anche le dimensioni della diverse matrici saranno grandi, risultando in piรน calcoli. Si raccomanda di prestare attenzione al numero totale di operazioni eseguite su ciascun input e di seguire da vicino le prestazioni durante l'esportazione di modelli di sequenza-lunghezza variabili. ### Usare TorchSscript in Python Di seguito รจ riportato un esempio, che mostra come salvare, caricare modelli e come utilizzare la traccia per l'inferenza. #### Salvare un modello Questo frammento di codice mostra come usare TorchScript per esportare un `BertModel`. Qui il `BertModel` รจ istanziato secondo una classe `BertConfig` e quindi salvato su disco con il nome del file `traced_bert.pt` ```python from transformers import BertModel, BertTokenizer, BertConfig import torch enc = BertTokenizer.from_pretrained("bert-base-uncased") # Tokenizing input text text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) # Masking one of the input tokens masked_index = 8 tokenized_text[masked_index] = "[MASK]" indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # Creating a dummy input tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] # Initializing the model with the torchscript flag # Flag set to True even though it is not necessary as this model does not have an LM Head. config = BertConfig( vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True, ) # Instantiating the model model = BertModel(config) # The model needs to be in evaluation mode model.eval() # If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag model = BertModel.from_pretrained("bert-base-uncased", torchscript=True) # Creating the trace traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "traced_bert.pt") ``` #### Caricare un modello Questo frammento di codice mostra come caricare il `BertModel` che era stato precedentemente salvato su disco con il nome `traced_bert.pt`. Stiamo riutilizzando il `dummy_input` precedentemente inizializzato. 
```python loaded_model = torch.jit.load("traced_bert.pt") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(*dummy_input) ``` #### Utilizzare un modello tracciato per l'inferenza Usare il modello tracciato per l'inferenza รจ semplice come usare il suo metodo dunder `__call__`: ```python traced_model(tokens_tensor, segments_tensors) ``` ###Implementare modelli HuggingFace TorchScript su AWS utilizzando Neuron SDK AWS ha introdotto [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) famiglia di istanze per l'inferenza di machine learning a basso costo e ad alte prestazioni nel cloud. Le istanze Inf1 sono alimentate dal chip AWS Inferentia, un acceleratore hardware personalizzato, specializzato in carichi di lavoro di inferenza di deep learning. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#) รจ l'SDK per Inferentia che supporta il tracciamento e l'ottimizzazione dei modelli transformers per distribuzione su Inf1. L'SDK Neuron fornisce: 1. API di facile utilizzo con una riga di modifica del codice per tracciare e ottimizzare un modello TorchScript per l'inferenza nel cloud. 2. Ottimizzazioni delle prestazioni pronte all'uso per [miglioramento dei costi-prestazioni](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/>) 3. Supporto per i modelli di trasformatori HuggingFace costruiti con [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) o [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html). #### Implicazioni Modelli Transformers basati su architettura [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert), o sue varianti come [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) e [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta) funzioneranno meglio su Inf1 per attivitร  non generative come la question answering estrattive, Classificazione della sequenza, Classificazione dei token. In alternativa, generazione di testo le attivitร  possono essere adattate per essere eseguite su Inf1, secondo questo [tutorial AWS Neuron MarianMT](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html). Ulteriori informazioni sui modelli che possono essere convertiti fuori dagli schemi su Inferentia possono essere trovati nella [sezione Model Architecture Fit della documentazione Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia). #### Dipendenze L'utilizzo di AWS Neuron per convertire i modelli richiede le seguenti dipendenze e l'ambiente: * A [Neuron SDK environment](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide), which comes pre-configured on [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html). #### Convertire un modello per AWS Neuron Usando lo stesso script come in [Usando TorchScipt in Python](https://huggingface.co/docs/transformers/main/en/serialization#using-torchscript-in-python) per tracciare un "BertModel", importi l'estensione del framework `torch.neuron` per accedere i componenti di Neuron SDK tramite un'API Python. 
```python from transformers import BertModel, BertTokenizer, BertConfig import torch import torch.neuron ``` E modificare solo la riga di codice di traccia Da: ```python torch.jit.trace(model, [tokens_tensor, segments_tensors]) ``` A: ```python torch.neuron.trace(model, [token_tensor, segments_tensors]) ``` Questa modifica consente a Neuron SDK di tracciare il modello e ottimizzarlo per l'esecuzione nelle istanze Inf1. Per ulteriori informazioni sulle funzionalitร , gli strumenti, i tutorial di esempi e gli ultimi aggiornamenti di AWS Neuron SDK, consultare la [documentazione AWS NeuronSDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html).
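A titolo indicativo, il modello tracciato con Neuron può poi essere salvato su disco e ricaricato come un normale modulo TorchScript (il nome del file è un'ipotesi a scopo illustrativo):

```python
# Schema indicativo: salva su disco il modello tracciato con Neuron e ricaricalo
model_neuron = torch.neuron.trace(model, [tokens_tensor, segments_tensors])
model_neuron.save("bert_neuron.pt")  # nome file ipotetico

loaded_neuron_model = torch.jit.load("bert_neuron.pt")
```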
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Istanziare un big model

Quando vuoi utilizzare un modello preaddestrato (pretrained) molto grande, una sfida รจ minimizzare l'uso della RAM. Il workflow classico in PyTorch รจ:

1. Crea il tuo modello con pesi casuali (random weights).
2. Carica i tuoi pesi preaddestrati.
3. Inserisci i pesi preaddestrati nel tuo modello casuale.

I passi 1 e 2 richiedono entrambi una versione completa del modello in memoria: in molti casi non รจ un problema, ma se il modello inizia a pesare diversi GigaByte queste due copie possono saturare la nostra RAM. Ancora peggio, se stai usando `torch.distributed` per eseguire l'addestramento (training) distribuito, ogni processo caricherร  il modello preaddestrato e memorizzerร  queste due copie nella RAM.

<Tip>

Nota che il modello creato casualmente รจ inizializzato con tensori "vuoti", che occupano spazio in memoria senza riempirlo (quindi i valori casuali sono quelli che si trovavano in questa porzione di memoria in un determinato momento). L'inizializzazione casuale che segue la distribuzione appropriata per il tipo di modello/parametri istanziato (come ad esempio la distribuzione normale) viene eseguita solo dopo il passaggio 3, e solo sui pesi non inizializzati, per essere il piรน rapida possibile!

</Tip>

In questa guida esploreremo le soluzioni che Transformers offre per affrontare questo problema. Tieni conto che questa รจ un'area in fase di sviluppo attivo, quindi le API spiegate qui possono variare velocemente in futuro.

## Checkpoints condivisi

Dalla versione 4.18.0, i checkpoints dei modelli che occupano piรน di 10GB di spazio vengono automaticamente frammentati in piรน parti. Invece di avere un unico checkpoint quando si utilizza `model.save_pretrained(save_dir)`, si hanno diversi checkpoint parziali (ognuno con dimensione < 10GB) e un indice che mappa i nomi dei parametri ai file in cui sono memorizzati.

Puoi controllare la dimensione massima dopo la frammentazione con il parametro `max_shard_size`; nel prossimo esempio useremo un modello di dimensioni normali con frammenti di piccole dimensioni: prendiamo un modello BERT classico.

```py
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
```

Se salvi il modello usando [`~PreTrainedModel.save_pretrained`], avrai una nuova cartella con due file: la configurazione del modello e i suoi pesi:

```py
>>> import os
>>> import tempfile

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir)
...     print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model.bin']
```

Adesso usiamo una dimensione massima di frammentazione di 200MB:

```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json']
```

In aggiunta alla configurazione del modello, vediamo tre differenti file dei pesi e un file `index.json` che รจ il nostro indice. Un checkpoint puรฒ essere ricaricato totalmente usando il metodo [`~PreTrainedModel.from_pretrained`]:

```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     new_model = AutoModel.from_pretrained(tmp_dir)
```

Il vantaggio principale di questo approccio per i modelli grandi รจ che, durante il passo 2 del workflow illustrato in precedenza, ogni frammento del checkpoint viene caricato dopo il precedente, limitando l'utilizzo della RAM alla dimensione del modello piรน la dimensione del frammento piรน grande.

Dietro le quinte, il file indice รจ utilizzato per determinare quali chiavi sono nel checkpoint e dove i corrispondenti pesi sono memorizzati. Possiamo caricare l'indice come un qualsiasi file JSON e ottenere un dizionario:

```py
>>> import json

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f:
...         index = json.load(f)

>>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
```

Per ora i metadati consistono solo nella dimensione totale del modello. Abbiamo in programma di aggiungere altre informazioni in futuro:

```py
>>> index["metadata"]
{'total_size': 433245184}
```

La mappa dei pesi รจ la parte principale di questo indice: mappa ogni nome di parametro (come si trova solitamente nello `state_dict` di un modello PyTorch) al file in cui รจ memorizzato:

```py
>>> index["weight_map"]
{'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin',
 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin',
 ...
```

Se vuoi caricare direttamente un checkpoint frammentato in un modello senza usare [`~PreTrainedModel.from_pretrained`] (come si farebbe con `model.load_state_dict()` per un checkpoint completo) devi usare [`~modeling_utils.load_sharded_checkpoint`]:

```py
>>> from transformers.modeling_utils import load_sharded_checkpoint

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     load_sharded_checkpoint(model, tmp_dir)
```

## Caricamento low memory

Frammentare i checkpoint riduce l'utilizzo di memoria al passo 2 del workflow citato in precedenza, ma per utilizzare questo modello in un ambiente con poca memoria consigliamo di utilizzare i nostri strumenti basati sulla libreria Accelerate.

Per ulteriori informazioni, leggere la seguente guida: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
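A titolo puramente illustrativo, ecco uno schizzo minimale di come si puรฒ ridurre il picco di RAM in fase di caricamento usando l'argomento `low_cpu_mem_usage` di `from_pretrained` (che si appoggia ad Accelerate); per i dettagli e le altre opzioni fa fede la guida linkata sopra:

```py
from transformers import AutoModel

# Con low_cpu_mem_usage=True il modello viene prima creato "vuoto" e i pesi
# vengono poi caricati frammento per frammento: il picco di RAM resta vicino
# alla dimensione del modello invece di raddoppiare.
model = AutoModel.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
```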
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Addestramento efficiente su CPU

Questa guida si concentra su come addestrare in maniera efficiente grandi modelli su CPU.

## Mixed precision con IPEX

IPEX รจ ottimizzato per CPU con AVX-512 o superiore e funziona comunque anche sulle CPU con solo AVX2. Pertanto, ci si aspetta un vantaggio di prestazioni per le CPU Intel con AVX-512 o superiori, mentre le CPU con solo AVX2 (ad esempio le CPU AMD o le CPU Intel piรน vecchie) potrebbero ottenere prestazioni migliori con IPEX, ma senza garanzie.

IPEX offre ottimizzazioni delle prestazioni per l'addestramento su CPU sia con Float32 che con BFloat16. L'uso di BFloat16 รจ l'argomento principale delle sezioni seguenti.

Il tipo di dati a bassa precisione BFloat16 รจ supportato in modo nativo sui 3rd Generation Xeonยฎ Scalable Processors (aka Cooper Lake) con AVX512 e sarร  supportato dalla prossima generazione di Intelยฎ Xeonยฎ Scalable Processors con l'instruction set Intelยฎ Advanced Matrix Extensions (Intelยฎ AMX), con prestazioni ulteriormente migliorate. L'Auto Mixed Precision per il backend CPU รจ stata abilitata a partire da PyTorch-1.10. Allo stesso tempo, il supporto dell'Auto Mixed Precision con BFloat16 per CPU e l'ottimizzazione degli operatori BFloat16 sono stati abilitati in modo massiccio in Intelยฎ Extension for PyTorch e parzialmente portati nel branch master di PyTorch. Gli utenti possono ottenere prestazioni e user experience migliori con l'Auto Mixed Precision di IPEX.

Vedi informazioni piรน dettagliate su [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).

### Installazione di IPEX:

Il rilascio di IPEX segue quello di PyTorch, da installare via pip:

| PyTorch Version   | IPEX version |
| :---------------: | :----------: |
| 1.13              | 1.13.0+cpu   |
| 1.12              | 1.12.300+cpu |
| 1.11              | 1.11.200+cpu |
| 1.10              | 1.10.100+cpu |

```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

Vedi altri approcci per [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### Utilizzo nel Trainer

Per abilitare l'auto mixed precision con IPEX nel Trainer, l'utente dovrebbe aggiungere `use_ipex`, `bf16` e `no_cuda` agli argomenti del comando di addestramento.

Vedi un esempio di caso d'uso [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)

- Addestramento con IPEX usando l'auto mixed precision BF16 su CPU:

<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>

### Esempi pratici

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
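Se non usi il `Trainer`, uno schema puramente indicativo (non una ricetta ufficiale di questa guida) per applicare IPEX con BFloat16 in un ciclo di addestramento PyTorch personalizzato potrebbe essere il seguente; il nome del modello e `train_dataloader` sono solo d'esempio e si assume di averli giร  definiti:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# ipex.optimize applica le ottimizzazioni per CPU e prepara modello e optimizer per BF16
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for batch in train_dataloader:  # train_dataloader รจ un DataLoader giร  definito altrove
    optimizer.zero_grad()
    # l'autocast su CPU esegue le operazioni supportate in BFloat16
    with torch.cpu.amp.autocast():
        outputs = model(**batch)
        loss = outputs.loss
    loss.backward()
    optimizer.step()
```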
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/community.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Comunitร  Questa pagina raggruppa le risorse sviluppate dalla comunitร  riguardo ๐Ÿค— Transformers. ## Risorse della comunitร : | Risorsa | Descrizione | Autore | |:----------|:-------------|------:| | [Glossario delle Flashcards di Transformers](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | Un insieme di flashcards basate sul [glossario della documentazione di Transformers](glossary), creato in un formato tale da permettere un facile apprendimento e revisione usando [Anki](https://apps.ankiweb.net/), un'applicazione open-source e multi-piattaforma, specificatamente progettata per ricordare informazioni nel lungo termine. Guarda questo [video introduttivo su come usare le flashcards](https://www.youtube.com/watch?v=Dji_h7PILrw). | [Darigov Research](https://www.darigovresearch.com/) | ## Notebook della comunitร : | Notebook | Descrizione | Autore | | |:----------|:-------------|:-------------|------:| | [Fine-tuning di un Transformer pre-addestrato, al fine di generare testi di canzoni](https://github.com/AlekseyKorshuk/huggingartists) | Come generare testi di canzoni nello stile del vostro artista preferito attraverso il fine-tuning di un modello GPT-2. | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Addestramento di T5 in Tensorflow 2 ](https://github.com/snapthat/TF-T5-text-to-text) | Come addestrare T5 per qualsiasi attivitร  usando Tensorflow 2. Questo notebook mostra come risolvere l'attivitร  di "Question Answering" usando Tensorflow 2 e SQUAD. | [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [Addestramento di T5 con TPU](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Come addestrare T5 su SQUAD con Transformers e NLP. | [Suraj Patil](https://github.com/patil-suraj) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [Fine-tuning di T5 per la classificazione e scelta multipla](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | Come effettuare il fine-tuning di T5 per le attivitร  di classificazione a scelta multipla - usando un formato testo-a-testo - con PyTorch Lightning. | [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [Fine-tuning di DialoGPT su nuovi dataset e lingue](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | Come effettuare il fine-tuning di un modello DialoGPT su un nuovo dataset per chatbots conversazionali open-dialog. 
| [Nathan Cooper](https://github.com/ncoop57) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Modellamento di una lunga sequenza con Reformer](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Come addestrare su sequenze di lunghezza fino a 500 mila token con Reformer. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [Fine-tuning di BART per riassumere testi](https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi con fastai usando blurr. | [Wayde Gilliam](https://ohmeow.com/) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) | | [Fine-tuning di un Transformer pre-addestrato su tweet](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | Come generare tweet nello stile del tuo account Twitter preferito attraverso il fine-tuning di un modello GPT-2. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Ottimizzazione di modelli ๐Ÿค— Hugging Face con Weights & Biases](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | Un tutorial completo che mostra l'integrazione di W&B con Hugging Face. | [Boris Dayma](https://github.com/borisdayma) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformer pre-addestrato](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | Come costruire una versione "long" degli esistenti modelli pre-addestrati. | [Iz Beltagy](https://beltagy.net) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [Fine-tuning di Longformer per QA](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | Come effettuare il fine-tuning di un modello longformer per un task di QA.| [Suraj Patil](https://github.com/patil-suraj) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [Valutazione di modelli con ๐Ÿค—NLP](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | Come valutare longformer su TriviaQA con `NLP`. 
| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [Fine-tuning di T5 per Sentiment Span Extraction](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | Come effettuare il fine-tuning di T5 per la sentiment span extraction - usando un formato testo-a-testo - con PyTorch Lightning. | [Lorenzo Ampil](https://github.com/enzoampil) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [Fine-tuning di DistilBert per la classificazione multi-classe](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | Come effettuare il fine-tuning di DistilBert per la classificazione multi-classe con PyTorch. | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| |[Fine-tuning di BERT per la classificazione multi-etichetta](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)|Come effettuare il fine-tuning di BERT per la classificazione multi-etichetta con PyTorch. |[Abhishek Kumar Mishra](https://github.com/abhimishra91) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| |[Accelerazione del fine-tuning con il Dynamic Padding / Bucketing](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| Come velocizzare il fine-tuning di un fattore 2X usando il dynamic padding / bucketing. |[Michael Benesty](https://github.com/pommedeterresautee) |[![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[Pre-addestramento di Reformer per Masked Language Modeling](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| Come addestrare un modello Reformer usando livelli di self-attention bi-direzionali.| [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| |[Espansione e fine-tuning di Sci-BERT](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| Come incrementare il vocabolario di un modello SciBERT - pre-addestrato da AllenAI sul dataset CORD - e crearne una pipeline. 
| [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| |[Fine-tuning di BlenderBotSmall per riassumere testi usando Trainer API](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| Come effettuare il fine-tuning di BlenderBotSmall per riassumere testi su un dataset personalizzato, usando Trainer API. | [Tanmay Thakur](https://github.com/lordtt13) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| |[Fine-tuning di Electra e interpretazione con Integrated Gradients](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | Come effettuare il fine-tuning di Electra per l'analisi dei sentimenti e intepretare le predizioni con Captum Integrated Gradients. | [Eliza Szczechla](https://elsanns.github.io) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| |[Fine-tuning di un modello GPT-2 non inglese con la classe Trainer](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Come effettuare il fine-tuning di un modello GPT-2 non inglese con la classe Trainer. | [Philipp Schmid](https://www.philschmid.de) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[Fine-tuning di un modello DistilBERT per la classficazione multi-etichetta](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | Come effettuare il fine-tuning di un modello DistilBERT per l'attivitร  di classificazione multi-etichetta. | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[Fine-tuning di ALBERT per la classifcazione di coppie di frasi](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | Come effettuare il fine-tuning di un modello ALBERT - o un altro modello BERT-based - per l'attivitร  di classificazione di coppie di frasi. | [Nadir El Manouzi](https://github.com/NadirEM) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[Fine-tuning di Roberta per l'analisi di sentimenti](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | Come effettuare il fine-tuning di un modello Roberta per l'analisi di sentimenti. 
| [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[Valutazione di modelli che generano domande](https://github.com/flexudy-pipe/qugeev) | Quanto sono accurante le risposte alle domande generate dal tuo modello transformer seq2seq? | [Pascal Zoleko](https://github.com/zolekode) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[Classificazione di testo con DistilBERT e Tensorflow](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | Come effettuare il fine-tuning di DistilBERT per la classificazione di testo in TensorFlow. | [Peter Bayerle](https://github.com/peterbayerle) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[Utilizzo di BERT per riassumere testi con un modello Encoder-Decoder su CNN/Dailymail](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* attraverso l'utilizzo di un checkpoint *bert-base-uncased* per riassumere testi su CNN/Dailymail. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Aprilo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[Utilizzo di RoBERTa per riassumere testi con un modello Encoder-Decoder su BBC XSum](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | Come avviare "a caldo" un *EncoderDecoderModel* (condiviso) attraverso l'utilizzo di un checkpoint *roberta-base* per riassumere testi su BBC/XSum. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[Fine-tuning di TAPAS su Sequential Question Answering (SQA)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | Come effettuare il fine-tuning di un modello *TapasForQuestionAnswering* attraverso l'utilizzo di un checkpoint *tapas-base* sul dataset Sequential Question Answering (SQA). | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[Valutazione di TAPAS su Table Fact Checking (TabFact)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | Come valutare un modello *TapasForSequenceClassification* - fine-tuned con un checkpoint *tapas-base-finetuned-tabfact* - usando una combinazione delle librerie ๐Ÿค— datasets e ๐Ÿค— transformers. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[Fine-tuning di mBART per la traduzione](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | Come effettuare il fine-tuning di mBART usando Seq2SeqTrainer per la traduzione da hindi a inglese.| [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[Fine-tuning di LayoutLM su FUNSD (un dataset per la comprensione della forma)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForTokenClassification* sul dataset FUNSD per l'estrazione di informazioni da documenti scannerizzati.| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[Fine-tuning di DistilGPT2 e generazione di testo](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | Come effettuare il fine-tuning di DistilGPT2 e generare testo. | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[Fine-tuning di LED fino a 8 mila token](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | Come effettuare il fine-tuning di LED su PubMed per riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Valutazione di LED su Arxiv](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | Come valutare efficacemente LED sull'attivitร  di riassumere "lunghi" testi. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[Fine-tuning di LayoutLM su RVL-CDIP, un dataset per la classificazione di documenti (immagini)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | Come effettuare il fine-tuning di un modello *LayoutLMForSequenceClassification* sul dataset RVL-CDIP per la classificazione di documenti scannerizzati. 
| [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[Decodifica Wav2Vec2 CTC con variazioni di GPT2](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | Come decodificare sequenze CTC, variate da modelli di linguaggio. | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing) |[Fine-tuning di BART per riassumere testi in due lingue con la classe Trainer](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Come effettuare il fine-tuning di BART per riassumere testi in due lingue usando la classe Trainer. | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Valutazione di Big Bird su Trivia QA](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Come valutare BigBird su question answering di "lunghi" documenti attraverso Trivia QA. | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Creazione di sottotitoli per video usando Wav2Vec2](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Come creare sottotitoli per qualsiasi video di YouTube trascrivendo l'audio con Wav2Vec. | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e PyTorch Lightning.| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [Fine-tuning di Vision Transformer su CIFAR-10 usando ๐Ÿค— Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Come effettuare il fine-tuning di Vision Transformer (ViT) su CIFAR-10 usando HuggingFace Transformers, Datasets e ๐Ÿค— Trainer. 
| [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [Valutazione di LUKE su Open Entity, un dataset di entity typing](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Come valutare un modello *LukeForEntityClassification* sul dataset Open Entity. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [Valutazione di LUKE su TACRED, un dataset per l'estrazione di relazioni](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | Come valutare un modello *LukeForEntityPairClassification* sul dataset TACRED. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [Valutazione di LUKE su CoNLL-2003, un importante benchmark NER](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | Come valutare un modello *LukeForEntitySpanClassification* sul dataset CoNLL-2003. | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [Valutazione di BigBird-Pegasus su dataset PubMed](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | Come valutare un modello *BigBirdPegasusForConditionalGeneration* su dataset PubMed. | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | | [Classificazione di emozioni dal discorso con Wav2Vec2](https://github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | Come utilizzare un modello pre-addestrato Wav2Vec2 per la classificazione di emozioni sul dataset MEGA. | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | | [Rilevamento oggetti in un'immagine con DETR](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | Come usare un modello addestrato *DetrForObjectDetection* per rilevare oggetti in un'immagine e visualizzare l'attention. 
| [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | | [Fine-tuning di DETR su un dataset personalizzato per rilevare oggetti](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | Come effettuare fine-tuning di un modello *DetrForObjectDetection* su un dataset personalizzato per rilevare oggetti. | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | | [Fine-tuning di T5 per Named Entity Recognition](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | Come effettuare fine-tunining di *T5* per un'attivitร  di Named Entity Recognition. | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/add_new_pipeline.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Come creare una pipeline personalizzata?

In questa guida scopriremo come creare una pipeline personalizzata e condividerla sull'[Hub](https://hf.co/models) o aggiungerla alla libreria Transformers.

Innanzitutto, รจ necessario decidere gli input grezzi che la pipeline sarร  in grado di accettare. Possono essere strings, raw bytes, dictionaries o qualsiasi altra cosa sembri essere l'input desiderato piรน probabile. Cerca di mantenere questi input in puro Python per quanto possibile, in quanto ciรฒ facilita la compatibilitร  (anche con altri linguaggi, tramite JSON). Questi saranno gli `inputs` della pipeline (`preprocess`).

Poi definisci gli `outputs`. Stessa strategia degli `inputs`: piรน รจ semplice, meglio รจ. Questi saranno gli output del metodo `postprocess`.

Si parte ereditando la classe base `Pipeline`, con i 4 metodi che bisogna implementare: `preprocess`, `_forward`, `postprocess` e `_sanitize_parameters`.

```python
from transformers import Pipeline


class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        model_input = Tensor(inputs["input_ids"])
        return {"model_input": model_input}

    def _forward(self, model_inputs):
        # model_inputs == {"model_input": model_input}
        outputs = self.model(**model_inputs)
        # Maybe {"logits": Tensor(...)}
        return outputs

    def postprocess(self, model_outputs):
        best_class = model_outputs["logits"].softmax(-1)
        return best_class
```

La struttura di questa suddivisione serve a supportare in modo relativamente trasparente CPU/GPU, permettendo allo stesso tempo di eseguire la pre/post elaborazione sulla CPU su thread diversi.

`preprocess` prenderร  gli input originariamente definiti e li trasformerร  in qualcosa che possa essere dato in input al modello. Potrebbe contenere piรน informazioni e di solito รจ un `Dict`.

`_forward` รจ il dettaglio dell'implementazione e non รจ destinato a essere chiamato direttamente. `forward` รจ il metodo preferito per assicurarsi che tutto funzioni correttamente, perchรฉ contiene delle salvaguardie. Se qualcosa รจ collegato a un modello reale, appartiene al metodo `_forward`; tutto il resto va in `preprocess`/`postprocess`.

`postprocess` prende l'output di `_forward` e lo trasforma nell'output finale che era stato deciso in precedenza.

`_sanitize_parameters` esiste per consentire agli utenti di passare i parametri ogni volta che lo desiderano, sia al momento dell'inizializzazione `pipeline(...., maybe_arg=4)` che al momento della chiamata `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.

`_sanitize_parameters` ritorna 3 dict di kwargs che vengono passati direttamente a `preprocess`, `_forward` e `postprocess`. Non riempire nulla se il chiamante non ha passato alcun parametro aggiuntivo. Questo consente di mantenere gli argomenti predefiniti nella definizione della funzione, che รจ sempre piรน "naturale".

Un esempio classico potrebbe essere l'argomento `top_k` nel post processing dei classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05}
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]

>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

Per ottenere questo risultato, aggiorneremo il nostro metodo `postprocess` con un parametro di default impostato a `5` e modificheremo `_sanitize_parameters` per consentire questo nuovo parametro.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Cerca di mantenere gli input/output molto semplici e idealmente serializzabili in JSON, in quanto ciรฒ rende l'uso della pipeline molto facile senza richiedere agli utenti di comprendere nuovi tipi di oggetti. รˆ anche relativamente comune supportare molti tipi diversi di argomenti per facilitarne l'uso (ad esempio i file audio possono essere nomi di file, URL o byte puri).

## Aggiungilo alla lista dei tasks supportati

Per registrare il tuo `new-task` nella lista dei tasks supportati, devi aggiungerlo al `PIPELINE_REGISTRY`:

```python
from transformers.pipelines import PIPELINE_REGISTRY

PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```

Puoi specificare il modello di default che desideri; in questo caso dovrebbe essere accompagnato da una revisione specifica (che puรฒ essere il nome di un branch o l'hash di un commit, in questo caso abbiamo preso `"abcdef"`) e anche dal type:

```python
PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
    default={"pt": ("user/awesome_model", "abcdef")},
    type="text",  # current support type: text, audio, image, multimodal
)
```

## Condividi la tua pipeline sull'Hub

Per condividere la tua pipeline personalizzata sull'Hub, devi solo salvare il codice della tua sottoclasse `Pipeline` in un file python. Per esempio, supponiamo di voler utilizzare una pipeline personalizzata per la classificazione delle coppie di frasi come la seguente:

```py
import numpy as np

from transformers import Pipeline


def softmax(outputs):
    maxes = np.max(outputs, axis=-1, keepdims=True)
    shifted_exp = np.exp(outputs - maxes)
    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)


class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "second_text" in kwargs:
            preprocess_kwargs["second_text"] = kwargs["second_text"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, text, second_text=None):
        return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        logits = model_outputs.logits[0].numpy()
        probabilities = softmax(logits)

        best_class = np.argmax(probabilities)
        label = self.model.config.id2label[best_class]
        score = probabilities[best_class].item()
        logits = logits.tolist()
        return {"label": label, "score": score, "logits": logits}
```

L'implementazione รจ agnostica rispetto al framework e funzionerร  sia con modelli PyTorch che TensorFlow. Se l'abbiamo salvata in un file chiamato `pair_classification.py`, puรฒ essere successivamente importata e registrata in questo modo:

```py
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification

PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
    tf_model=TFAutoModelForSequenceClassification,
)
```

Una volta fatto, possiamo usarla con un modello pretrained. L'istanza `sgugger/finetuned-bert-mrpc` รจ stata fine-tuned sul dataset MRPC, che classifica le coppie di frasi come parafrasi o no.

```py
from transformers import pipeline

classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
```

Successivamente possiamo condividerla sull'Hub usando il metodo `save_pretrained` in un `Repository`:

```py
from huggingface_hub import Repository

repo = Repository("test-dynamic-pipeline", clone_from="{your_username}/test-dynamic-pipeline")
classifier.save_pretrained("test-dynamic-pipeline")
repo.push_to_hub()
```

Questo codice copierร  il file in cui รจ stato definito `PairClassificationPipeline` all'interno della cartella `"test-dynamic-pipeline"`, insieme al salvataggio del modello e del tokenizer della pipeline, prima di pushare il tutto nel repository `{your_username}/test-dynamic-pipeline`. Dopodichรฉ chiunque potrร  utilizzarla, purchรฉ fornisca l'opzione `trust_remote_code=True`:

```py
from transformers import pipeline

classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
```

## Aggiungere la pipeline a Transformers

Se vuoi contribuire con la tua pipeline a Transformers, dovrai aggiungere un modulo nel sottomodulo `pipelines` con il codice della tua pipeline, quindi aggiungerla all'elenco dei tasks definiti in `pipelines/__init__.py`.

Poi hai bisogno di aggiungere i test. Crea un nuovo file `tests/test_pipelines_MY_PIPELINE.py` con esempi ed altri test.

La funzione `run_pipeline_test` sarร  molto generica e verrร  eseguita su piccoli modelli casuali per ogni possibile architettura, come definito da `model_mapping` e `tf_model_mapping`.

Questo รจ molto importante per testare la compatibilitร  futura, nel senso che se qualcuno aggiunge un nuovo modello per `XXXForQuestionAnswering` allora il test della pipeline tenterร  di essere eseguito su di esso. Poichรฉ i modelli sono casuali, รจ impossibile controllare i valori effettivi; per questo esiste un helper `ANY` che si limita a verificare che il TYPE dell'output della pipeline corrisponda.

Hai anche *bisogno* di implementare 2 (idealmente 4) test; un esempio di scheletro per il primo รจ mostrato dopo questa lista.

- `test_small_model_pt` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_tf`.
- `test_small_model_tf` : Definire 1 piccolo modello per questa pipeline (non importa se i risultati non hanno senso) e testare i risultati della pipeline. I risultati dovrebbero essere gli stessi di `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future.
- `test_large_model_tf` (`optional`): Testare la pipeline su una pipeline reale in cui i risultati dovrebbero avere senso. Questi test sono lenti e dovrebbero essere contrassegnati come tali. In questo caso l'obiettivo รจ mostrare la pipeline e assicurarsi che non ci siano derive nelle versioni future.
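A puro titolo di esempio (il checkpoint di test e la struttura sono ipotetici, non il test ufficiale della libreria), uno scheletro di `test_small_model_pt` per la pipeline `pair-classification` vista sopra potrebbe assomigliare a questo:

```python
import unittest

from pair_classification import PairClassificationPipeline
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class PairClassificationPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        # Modello volutamente minuscolo e casuale: i risultati non devono avere senso,
        # contano solo i tipi e la struttura dell'output.
        model_id = "hf-internal-testing/tiny-random-bert"  # checkpoint di test, puramente indicativo
        model = AutoModelForSequenceClassification.from_pretrained(model_id)
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        pipe = PairClassificationPipeline(model=model, tokenizer=tokenizer)

        output = pipe("I like cats", second_text="I love felines")

        # Equivalente "manuale" dell'helper ANY: verifichiamo solo i TYPE dell'output.
        self.assertIsInstance(output, dict)
        self.assertIsInstance(output["label"], str)
        self.assertIsInstance(output["score"], float)
        self.assertIsInstance(output["logits"], list)
```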
0
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/it/debugging.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Debugging

## Debug dei problemi di rete multi-GPU

Quando addestri o fai inferenza con `DistributedDataParallel` e GPU multiple, se si verificano problemi di intercomunicazione tra processi e/o nodi, puoi utilizzare il seguente script per diagnosticare i problemi della rete.

```bash
wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py
```

Per esempio, per testare come 2 GPU interagiscono fai:

```bash
python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

Se entrambi i processi sono in grado di comunicare tra loro e di allocare la memoria della GPU, ciascuno di essi stamperร  lo stato OK.

Per piรน GPU o nodi adatta gli argomenti nello script.

All'interno dello script di diagnostica troverai molti altri dettagli e anche una guida per eseguirlo in ambiente SLURM.

Un livello di debug superiore si ottiene aggiungendo la variabile d'ambiente `NCCL_DEBUG=INFO` come di seguito:

```bash
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
```

In questo modo si scaricano molte informazioni di debug relative a NCCL, che puoi cercare online in caso di problemi. Oppure, se non hai la sicurezza di come interpretare l'output, puoi condividere il file di log in una Issue.

## Rilevamento di Underflow e Overflow

<Tip>

Questa funzionalitร  al momento รจ disponibile solo per PyTorch.

</Tip>

<Tip>

Per l'addestramento multi-GPU questa funzionalitร  richiede DDP (`torch.distributed.launch`).

</Tip>

<Tip>

Questa funzionalitร  puรฒ essere usata con qualsiasi modello basato su `nn.Module`.

</Tip>

Se inizi a ottenere `loss=NaN` o il modello presenta qualche altro comportamento anomalo a causa di valori `inf` o `nan` nelle attivazioni o nei pesi, รจ necessario scoprire dove si verifica il primo underflow o overflow e cosa lo ha determinato. Fortunatamente รจ possibile farlo facilmente attivando un modulo speciale che effettuerร  il rilevamento automaticamente.

Se stai usando [`Trainer`], hai bisogno di aggiungere solo:

```bash
--debug underflow_overflow
```

ai normali argomenti della riga di comando, o passare `debug="underflow_overflow"` quando viene creato l'oggetto [`TrainingArguments`].

Se stai usando il tuo ciclo di allenamento o un altro trainer, puoi ottenere lo stesso risultato con:

```python
from .debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model)
```

[`~debug_utils.DebugUnderflowOverflow`] inserisce degli hook nel modello che, dopo ogni chiamata, testeranno le variabili di ingresso e di uscita e anche i pesi del modulo corrispondente.

Non appena viene rilevato `inf` o `nan` in almeno un elemento delle attivazioni o dei pesi, il programma lo notifica e stampa un rapporto come il seguente (questo รจ stato rilevato con `google/mt5-small` sotto fp16 mixed precision):

```
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min  abs max  metadata
                  encoder.block.1.layer.1.DenseReluDense.dropout Dropout
0.00e+00 2.57e+02 input[0]
0.00e+00 2.85e+02 output
[...]
                  encoder.block.2.layer.0 T5LayerSelfAttention
6.78e-04 3.15e+03 input[0]
2.65e-04 3.42e+03 output[0]
             None output[1]
2.25e-01 1.00e+04 output[2]
                  encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
                  encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
                  encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
                  encoder.block.2.layer.1.DenseReluDense.dropout Dropout
0.00e+00 8.76e+03 input[0]
0.00e+00 9.74e+03 output
                  encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00      inf output
```

L'output di esempio รจ stato tagliato al centro per brevitร .

La seconda colonna mostra il valore assoluto dell'elemento piรน grande: cosรฌ, se osserviamo da vicino gli ultimi frame, vediamo che input e output erano nel range di `1e4`. Questo addestramento รจ stato eseguito con mixed precision fp16 e l'ultimo passo รจ andato in overflow (sotto `fp16` il valore piรน grande prima di `inf` รจ `64e3`). Per evitare overflow sotto `fp16` le attivazioni devono rimanere molto al di sotto di `1e4`, perchรฉ `1e4 * 1e4 = 1e8`, quindi qualsiasi moltiplicazione di matrici con grandi attivazioni porterร  a una condizione di overflow numerico.

All'inizio della traccia รจ possibile scoprire in quale lotto si รจ verificato il problema (qui `Detected inf/nan during batch_number=0` significa che il problema si รจ verificato nel primo lotto).

Ogni frame segnalato inizia dichiarando il nome completamente qualificato del modulo corrispondente per il quale il frame รจ stato segnalato. Se osserviamo il seguente frame:

```
                  encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
```

qui `encoder.block.2.layer.1.layer_norm` indica che si tratta della layer norm del primo layer del secondo blocco dell'encoder, e la chiamata specifica di `forward` รจ `T5LayerNorm`.

Osserviamo gli ultimi frame del report:

```
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min  abs max  metadata
[...]
                  encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
                  encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
                  encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
                  encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00      inf output
```

L'ultimo frame riporta la funzione `Dropout.forward`, con la prima voce per l'unico input e la seconda per l'unico output. Si puรฒ notare che รจ stata richiamata da un attributo `dropout` dentro la classe `DenseReluDense`. Si puรฒ notare che ciรฒ รจ avvenuto durante il primo layer del 2ยฐ blocco, durante il primissimo lotto. Infine, gli elementi di input piรน grandi in assoluto erano `6.27e+04` e l'equivalente per l'output era `inf`.

Puoi vedere qui che `T5DenseGatedGeluDense.forward` produce attivazioni in output il cui valore massimo assoluto era circa 62,7K, molto vicino al limite massimo di 64K di fp16. Nel frame successivo abbiamo `Dropout`, che rinormalizza i pesi dopo aver azzerato alcuni elementi, il che spinge il valore massimo assoluto oltre 64K e si verifica un overflow (`inf`).

Come puoi notare, sono i frame precedenti quelli da esaminare, cioรจ quelli in cui i numeri iniziano a diventare molto grandi per valori fp16.

Confrontiamo il report con il codice in `models/t5/modeling_t5.py`:

```python
class T5DenseGatedGeluDense(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
        self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
        self.dropout = nn.Dropout(config.dropout_rate)
        self.gelu_act = ACT2FN["gelu_new"]

    def forward(self, hidden_states):
        hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
        hidden_linear = self.wi_1(hidden_states)
        hidden_states = hidden_gelu * hidden_linear
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.wo(hidden_states)
        return hidden_states
```

Ora รจ facile vedere la chiamata `dropout` e tutte le chiamate precedenti.

Poichรฉ il rilevamento avviene in un forward hook, i rapporti vengono creati immediatamente dopo ogni ritorno da `forward` (forward returns in eng.).

Tornando al rapporto completo, per agire e risolvere il problema dobbiamo andare qualche frame piรน in alto, dove i numeri hanno iniziato a salire, e probabilmente passare alla modalitร  `fp32`, in modo che i numeri non vadano in overflow quando vengono moltiplicati o sommati. Naturalmente, potrebbero esserci altre soluzioni. Per esempio, potremmo spegnere temporaneamente `amp` se รจ abilitato, dopo aver spostato il `forward` originale in un helper wrapper, come:

```python
def _forward(self, hidden_states):
    hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
    hidden_linear = self.wi_1(hidden_states)
    hidden_states = hidden_gelu * hidden_linear
    hidden_states = self.dropout(hidden_states)
    hidden_states = self.wo(hidden_states)
    return hidden_states


import torch


def forward(self, hidden_states):
    if torch.is_autocast_enabled():
        with torch.cuda.amp.autocast(enabled=False):
            return self._forward(hidden_states)
    else:
        return self._forward(hidden_states)
```

Poichรฉ il rilevatore automatico riporta solo gli ingressi e le uscite dei frame completi, una volta che si sa dove cercare si possono analizzare anche le fasi intermedie di una specifica funzione `forward`. In questo caso puoi usare la funzione di supporto `detect_overflow` per indirizzare il rilevatore dove preferisci, ad esempio:

```python
from debug_utils import detect_overflow


class T5LayerFF(nn.Module):
    [...]

    def forward(self, hidden_states):
        forwarded_states = self.layer_norm(hidden_states)
        detect_overflow(forwarded_states, "after layer_norm")
        forwarded_states = self.DenseReluDense(forwarded_states)
        detect_overflow(forwarded_states, "after DenseReluDense")
        return hidden_states + self.dropout(forwarded_states)
```

Si puรฒ vedere che abbiamo aggiunto 2 di queste chiamate e ora teniamo traccia di eventuali `inf` o `nan` per `forwarded_states` rilevati da qualche parte. In realtร , il rilevatore li riporta giร , perchรฉ ciascuna delle chiamate nell'esempio precedente รจ un `nn.Module`; ma se avessimo dei calcoli intermedi locali, questo รจ il modo in cui li controlleremmo.

Inoltre, se si istanzia il debugger nel proprio codice, รจ possibile modificare il numero di frame stampati rispetto al valore predefinito, ad esempio:

```python
from .debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
```

### Tracciamento dei valori minimi e massimi assoluti di un lotto specifico

La stessa classe di debug puรฒ essere utilizzata per il tracciamento per-batch con la funzione di rilevamento di underflow/overflow disattivata.

Supponiamo di voler osservare i valori minimi e massimi assoluti per tutti gli ingredienti di ogni chiamata `forward` di un dato lotto, e di volerlo fare solo per i lotti 1 e 3. Si istanzia questa classe come:

```python
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
```

Ora i lotti completi 1 e 3 saranno tracciati utilizzando lo stesso formato del rilevatore di underflow/overflow. I lotti sono indicizzati a partire da 0 (0-indexed).

Questo รจ utile se si sa che il programma inizia a comportarsi male dopo un certo numero di lotti, in modo da poter avanzare velocemente direttamente fino a quell'area. Ecco un esempio di output troncato per questa configurazione:

```
*** Starting batch number=1 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.47e+04 input[0]
5.36e-05 7.92e+02 output
[...]
                  decoder.dropout Dropout
1.60e-07 2.27e+01 input[0]
0.00e+00 2.52e+01 output
                  decoder T5Stack
     not a tensor output
                  lm_head Linear
1.01e-06 7.92e+02 weight
0.00e+00 1.11e+00 input[0]
6.06e-02 8.39e+01 output
                   T5ForConditionalGeneration
     not a tensor output

*** Starting batch number=3 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.78e+04 input[0]
5.36e-05 7.92e+02 output
[...]
```

Qui verrร  scaricato un numero enorme di frame, tanti quante sono le chiamate forward nel modello, quindi potrebbe essere o meno quello che vuoi, ma a volte puรฒ essere piรน utile di un classico debugger. Per esempio, se il problema inizia a verificarsi a partire dal lotto numero 150, puoi scaricare le tracce dei lotti 149 e 150 e confrontare i punti in cui i numeri hanno iniziato a divergere.

รˆ inoltre possibile specificare il numero di lotti dopo il quale interrompere l'addestramento, con:

```python
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
```
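Per fissare le idee, ecco uno schizzo minimale (con nomi puramente dimostrativi, in particolare `train_dataloader` si assume giร  definito) di come istanziare il rilevatore in un ciclo di addestramento personalizzato: il rapporto viene stampato automaticamente non appena compare un `inf`/`nan`, senza altre modifiche al ciclo.

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers.debug_utils import DebugUnderflowOverflow

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Registra gli hook su tutti i sottomoduli del modello: da qui in poi il ciclo
# di addestramento resta identico a prima.
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=40)

model.train()
for step, batch in enumerate(train_dataloader):  # train_dataloader รจ definito altrove
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```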
0