# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks. The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/HeZRS15, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {CoRR}, volume = {abs/1512.03385}, year = {2015}, url = {http://arxiv.org/abs/1512.03385}, archivePrefix = {arXiv}, eprint = {1512.03385}, timestamp = {Wed, 17 Apr 2019 17:23:45 +0200}, biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: Gloun ResNet Paper: Title: Deep Residual Learning for Image Recognition URL: https://paperswithcode.com/paper/deep-residual-learning-for-image-recognition Models: - Name: gluon_resnet101_v1b In Collection: Gloun ResNet Metadata: FLOPs: 10068547584 Parameters: 44550000 File Size: 178723172 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet101_v1b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L89 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1b-3b017079.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.3% Top 5 Accuracy: 94.53% - Name: gluon_resnet101_v1c In Collection: Gloun ResNet Metadata: FLOPs: 10376567296 Parameters: 44570000 File Size: 178802575 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet101_v1c Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L113 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1c-1f26822a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.53% Top 5 Accuracy: 94.59% - Name: gluon_resnet101_v1d In Collection: Gloun ResNet Metadata: FLOPs: 10377018880 Parameters: 44570000 File Size: 178802755 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet101_v1d Crop Pct:
'0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L138 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1d-0f9c8644.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.4% Top 5 Accuracy: 95.02% - Name: gluon_resnet101_v1s In Collection: Gloun ResNet Metadata: FLOPs: 11805511680 Parameters: 44670000 File Size: 179221777 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet101_v1s Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L166 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1s-60fe0cc1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.29% Top 5 Accuracy: 95.16% - Name: gluon_resnet152_v1b In Collection: Gloun ResNet Metadata: FLOPs: 14857660416 Parameters: 60190000 File Size: 241534001 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet152_v1b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L97 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1b-c1edb0dd.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.69% Top 5 Accuracy: 94.73% - Name: gluon_resnet152_v1c In Collection: Gloun ResNet Metadata: FLOPs: 15165680128 Parameters: 60210000 File Size: 241613404 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet152_v1c Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L121 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1c-a3bb0b98.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.91% Top 5 Accuracy: 94.85% - Name: gluon_resnet152_v1d In Collection: Gloun ResNet Metadata: FLOPs: 15166131712 Parameters: 60210000 File Size: 241613584 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet152_v1d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L147 Weights: 
https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1d-bd354e12.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.48% Top 5 Accuracy: 95.2% - Name: gluon_resnet152_v1s In Collection: Gloun ResNet Metadata: FLOPs: 16594624512 Parameters: 60320000 File Size: 242032606 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet152_v1s Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L175 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1s-dcc41b81.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.02% Top 5 Accuracy: 95.42% - Name: gluon_resnet18_v1b In Collection: Gloun ResNet Metadata: FLOPs: 2337073152 Parameters: 11690000 File Size: 46816736 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet18_v1b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L65 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet18_v1b-0757602b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 70.84% Top 5 Accuracy: 89.76% - Name: gluon_resnet34_v1b In Collection: Gloun ResNet Metadata: FLOPs: 4718469120 Parameters: 21800000 File Size: 87295112 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet34_v1b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L73 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet34_v1b-c6d82d59.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 74.59% Top 5 Accuracy: 92.0% - Name: gluon_resnet50_v1b In Collection: Gloun ResNet Metadata: FLOPs: 5282531328 Parameters: 25560000 File Size: 102493763 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet50_v1b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L81 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1b-0ebe02e2.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.58% Top 5 Accuracy: 93.72% - Name: gluon_resnet50_v1c In 
Collection: Gloun ResNet Metadata: FLOPs: 5590551040 Parameters: 25580000 File Size: 102573166 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet50_v1c Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L105 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1c-48092f55.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.01% Top 5 Accuracy: 93.99% - Name: gluon_resnet50_v1d In Collection: Gloun ResNet Metadata: FLOPs: 5591002624 Parameters: 25580000 File Size: 102573346 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet50_v1d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L129 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1d-818a1b1b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.06% Top 5 Accuracy: 94.46% - Name: gluon_resnet50_v1s In Collection: Gloun ResNet Metadata: FLOPs: 7019495424 Parameters: 25680000 File Size: 102992368 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: gluon_resnet50_v1s Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L156 Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1s-1762acc0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.7% Top 5 Accuracy: 94.25% -->
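To make the residual idea above concrete, here is a minimal, illustrative PyTorch block (a sketch, not the exact `timm` Gluon ResNet implementation): the stacked layers fit only the residual mapping F(x), and the block returns F(x) + x, with the input carried over unchanged by the identity shortcut.

```py
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block sketch: output = F(x) + x via an identity shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        # These stacked layers learn the residual F(x), not the full mapping.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The layer input is added back unchanged ("reference to the layer inputs").
        return self.act(self.residual(x) + x)

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```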
pytorch-image-models/docs/models/.templates/models/gloun-resnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/gloun-resnet.md", "repo_id": "pytorch-image-models", "token_count": 6383 }
170
# MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in the [MBConv blocks](https://paperswithcode.com/method/inverted-residual-block). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1905-02244, author = {Andrew Howard and Mark Sandler and Grace Chu and Liang{-}Chieh Chen and Bo Chen and Mingxing Tan and Weijun Wang and Yukun Zhu and Ruoming Pang and Vijay Vasudevan and Quoc V. Le and Hartwig Adam}, title = {Searching for MobileNetV3}, journal = {CoRR}, volume = {abs/1905.02244}, year = {2019}, url = {http://arxiv.org/abs/1905.02244}, archivePrefix = {arXiv}, eprint = {1905.02244}, timestamp = {Tue, 12 Jan 2021 15:30:06 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: MobileNet V3 Paper: Title: Searching for MobileNetV3 URL: https://paperswithcode.com/paper/searching-for-mobilenetv3 Models: - Name: mobilenetv3_large_100 In Collection: MobileNet V3 Metadata: FLOPs: 287193752 Parameters: 5480000 File Size: 22076443 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Depthwise Separable Convolution - Dropout - Global Average Pooling - Hard Swish - Inverted Residual Block - ReLU - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 4x4 TPU Pod ID: mobilenetv3_large_100 LR: 0.1 Dropout: 0.8 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 4096 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/mobilenetv3.py#L363 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.77% Top 5 Accuracy: 92.54% - Name: mobilenetv3_rw In Collection: MobileNet V3 Metadata: FLOPs: 287190638 Parameters: 5480000 File Size: 22064048 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Depthwise Separable Convolution - Dropout - Global Average Pooling - Hard Swish - Inverted Residual Block - ReLU - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 4x4 TPU Pod ID: mobilenetv3_rw LR: 0.1 Dropout: 0.8 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 4096 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/mobilenetv3.py#L384 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_100-35495452.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.62% Top 5 Accuracy: 92.71% -->
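For reference, the hard swish activation mentioned above has the closed form x * ReLU6(x + 3) / 6, a piecewise-linear approximation of swish that is cheap on mobile CPUs. The snippet below is a small illustrative sketch; PyTorch also ships this activation directly as `nn.Hardswish`.

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

def hard_swish(x: torch.Tensor) -> torch.Tensor:
    # x * ReLU6(x + 3) / 6: avoids the sigmoid used by regular swish
    return x * F.relu6(x + 3.0) / 6.0

x = torch.linspace(-4, 4, steps=9)
print(hard_swish(x))
print(nn.Hardswish()(x))  # built-in equivalent, for comparison
```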
pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md", "repo_id": "pytorch-image-models", "token_count": 1755 }
171
# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{li2019selective, title={Selective Kernel Networks}, author={Xiang Li and Wenhai Wang and Xiaolin Hu and Jian Yang}, year={2019}, eprint={1903.06586}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: SKResNet Paper: Title: Selective Kernel Networks URL: https://paperswithcode.com/paper/selective-kernel-networks Models: - Name: skresnet18 In Collection: SKResNet Metadata: FLOPs: 2333467136 Parameters: 11960000 File Size: 47923238 Architecture: - Convolution - Dense Connections - Global Average Pooling - Max Pooling - Residual Connection - Selective Kernel - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x GPUs ID: skresnet18 LR: 0.1 Epochs: 100 Layers: 18 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L148 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet18_ra-4eec2804.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.03% Top 5 Accuracy: 91.17% - Name: skresnet34 In Collection: SKResNet Metadata: FLOPs: 4711849952 Parameters: 22280000 File Size: 89299314 Architecture: - Convolution - Dense Connections - Global Average Pooling - Max Pooling - Residual Connection - Selective Kernel - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x GPUs ID: skresnet34 LR: 0.1 Epochs: 100 Layers: 34 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L165 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet34_ra-bdc0ccde.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.93% Top 5 Accuracy: 93.32% -->
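To make the "adaptive receptive field" idea concrete, here is a deliberately simplified two-branch sketch of a selective kernel unit. It is illustrative only and is not timm's `SelectiveKernel` module: branches with different receptive fields are computed in parallel and fused with a per-channel softmax over the branches.

```py
import torch
import torch.nn as nn

class TinySelectiveKernel(nn.Module):
    """Simplified two-branch selective kernel: fuse branches by channel-wise
    softmax attention (illustrative sketch only)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # Dilated 3x3 conv gives a larger (5x5-like) receptive field.
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2, bias=False)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * channels, 1),  # one set of logits per branch
        )

    def forward(self, x):
        feats = torch.stack([self.branch3(x), self.branch5(x)], dim=1)  # (B, 2, C, H, W)
        attn = self.fc(feats.sum(dim=1))                                # (B, 2C, 1, 1)
        attn = attn.view(x.shape[0], 2, x.shape[1], 1, 1).softmax(dim=1)
        return (feats * attn).sum(dim=1)                                # (B, C, H, W)

x = torch.randn(2, 32, 28, 28)
print(TinySelectiveKernel(32)(x).shape)  # torch.Size([2, 32, 28, 28])
```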
pytorch-image-models/docs/models/.templates/models/skresnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/skresnet.md", "repo_id": "pytorch-image-models", "token_count": 1276 }
172
# Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution). The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models). {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @misc{chollet2017xception, title={Xception: Deep Learning with Depthwise Separable Convolutions}, author={François Chollet}, year={2017}, eprint={1610.02357}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: Xception Paper: Title: 'Xception: Deep Learning with Depthwise Separable Convolutions' URL: https://paperswithcode.com/paper/xception-deep-learning-with-depthwise Models: - Name: xception In Collection: Xception Metadata: FLOPs: 10600506792 Parameters: 22860000 File Size: 91675053 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Depthwise Separable Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: xception Crop Pct: '0.897' Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/xception.py#L229 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/xception-43020ad28.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.05% Top 5 Accuracy: 94.4% - Name: xception41 In Collection: Xception Metadata: FLOPs: 11681983232 Parameters: 26970000 File Size: 108422028 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Depthwise Separable Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: xception41 Crop Pct: '0.903' Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/xception_aligned.py#L181 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_xception_41-e6439c97.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.54% Top 5 Accuracy: 94.28% - Name: xception65 In Collection: Xception Metadata: FLOPs: 17585702144 Parameters: 39920000 File Size: 160536780 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Depthwise Separable Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: xception65 Crop Pct: '0.903' Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/xception_aligned.py#L200 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_xception_65-c9ae96e8.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.55% Top 5 Accuracy: 94.66% - Name: xception71 In Collection: Xception Metadata: FLOPs: 22817346560 Parameters: 42340000 File Size: 170295556 Architecture: - 1x1 Convolution - Convolution - Dense Connections - Depthwise Separable Convolution
- Global Average Pooling - Max Pooling - ReLU - Residual Connection - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: xception71 Crop Pct: '0.903' Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/xception_aligned.py#L219 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_xception_71-8eec7df1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.88% Top 5 Accuracy: 94.93% -->
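Since the whole architecture is built from depthwise separable convolutions, a short sketch of that primitive may help (illustrative only, not the exact timm module): a per-channel spatial convolution followed by a 1x1 pointwise convolution that mixes channels.

```py
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution sketch: spatial filtering per channel,
    then a 1x1 pointwise conv to mix channels."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch -> one spatial filter per input channel (depthwise step)
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size, padding=kernel_size // 2,
            groups=in_ch, bias=False)
        # 1x1 conv combines information across channels (pointwise step)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 64, 64)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```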
pytorch-image-models/docs/models/.templates/models/xception.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/xception.md", "repo_id": "pytorch-image-models", "token_count": 1874 }
173
# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('cspresnet50', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `cspresnet50`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('cspresnet50', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
## Citation ```BibTeX @misc{wang2019cspnet, title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN}, author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh}, year={2019}, eprint={1911.11929}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: CSP ResNet Paper: Title: 'CSPNet: A New Backbone that can Enhance Learning Capability of CNN' URL: https://paperswithcode.com/paper/cspnet-a-new-backbone-that-can-enhance Models: - Name: cspresnet50 In Collection: CSP ResNet Metadata: FLOPs: 5924992000 Parameters: 21620000 File Size: 86679303 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Label Smoothing - Polynomial Learning Rate Decay - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: cspresnet50 LR: 0.1 Layers: 50 Crop Pct: '0.887' Momentum: 0.9 Batch Size: 128 Image Size: '256' Weight Decay: 0.005 Interpolation: bilinear Training Steps: 8000000 Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/cspnet.py#L415 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnet50_ra-d3e8d487.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.57% Top 5 Accuracy: 94.71% -->
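The split-and-merge idea described in the first paragraph can be sketched in a few lines. This is a rough illustration under simplifying assumptions (an equal channel split and a placeholder block stack), not timm's `cspnet.py` implementation.

```py
import torch
import torch.nn as nn

class TinyCSPStage(nn.Module):
    """Cross Stage Partial sketch: split the feature map, send only one half
    through the heavy blocks, then concatenate and fuse (illustrative only)."""

    def __init__(self, channels: int, blocks: nn.Module):
        super().__init__()
        assert channels % 2 == 0
        self.blocks = blocks                          # e.g. a stack of residual blocks
        self.fuse = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                      # partition the base-layer feature map
        b = self.blocks(b)                            # only half the channels pass through the blocks
        return self.fuse(torch.cat([a, b], dim=1))    # cross-stage merge keeps both gradient paths

blocks = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))
x = torch.randn(1, 64, 32, 32)
print(TinyCSPStage(64, blocks)(x).shape)  # torch.Size([1, 64, 32, 32])
```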
pytorch-image-models/hfdocs/source/models/csp-resnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/csp-resnet.mdx", "repo_id": "pytorch-image-models", "token_count": 1706 }
174
# RegNetX **RegNetX** is a convolutional network design space with simple, regular models with parameters: depth \\( d \\), initial width \\( w\_{0} > 0 \\), and slope \\( w\_{a} > 0 \\); these generate a different block width \\( u\_{j} \\) for each block \\( j < d \\). The key restriction for the RegNet types of model is that there is a linear parameterisation of block widths (the design space only contains models with this linear structure): \\( u\_{j} = w\_{0} + w\_{a}\cdot{j} \\) For **RegNetX** we have additional restrictions: we set \\( b = 1 \\) (the bottleneck ratio), \\( 12 \leq d \leq 28 \\), and \\( w\_{m} \geq 2 \\) (the width multiplier). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('regnetx_002', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `regnetx_002`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('regnetx_002', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh.
## Citation ```BibTeX @misc{radosavovic2020designing, title={Designing Network Design Spaces}, author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollรกr}, year={2020}, eprint={2003.13678}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: RegNetX Paper: Title: Designing Network Design Spaces URL: https://paperswithcode.com/paper/designing-network-design-spaces Models: - Name: regnetx_002 In Collection: RegNetX Metadata: FLOPs: 255276032 Parameters: 2680000 File Size: 10862199 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_002 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L337 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_002-e7e85e5c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 68.75% Top 5 Accuracy: 88.56% - Name: regnetx_004 In Collection: RegNetX Metadata: FLOPs: 510619136 Parameters: 5160000 File Size: 20841309 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_004 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L343 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_004-7d0e9424.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.39% Top 5 Accuracy: 90.82% - Name: regnetx_006 In Collection: RegNetX Metadata: FLOPs: 771659136 Parameters: 6200000 File Size: 24965172 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_006 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L349 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_006-85ec1baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.84% Top 5 Accuracy: 91.68% - Name: regnetx_008 In Collection: RegNetX Metadata: FLOPs: 1027038208 Parameters: 7260000 File Size: 29235944 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 
GPUs ID: regnetx_008 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L355 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_008-d8b470eb.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.05% Top 5 Accuracy: 92.34% - Name: regnetx_016 In Collection: RegNetX Metadata: FLOPs: 2059337856 Parameters: 9190000 File Size: 36988158 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_016 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L361 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_016-65ca972a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.95% Top 5 Accuracy: 93.43% - Name: regnetx_032 In Collection: RegNetX Metadata: FLOPs: 4082555904 Parameters: 15300000 File Size: 61509573 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_032 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L367 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_032-ed0c7f7e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.15% Top 5 Accuracy: 94.09% - Name: regnetx_040 In Collection: RegNetX Metadata: FLOPs: 5095167744 Parameters: 22120000 File Size: 88844824 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_040 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L373 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_040-73c2a654.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.48% Top 5 Accuracy: 94.25% - Name: regnetx_064 In Collection: RegNetX Metadata: FLOPs: 8303405824 Parameters: 26210000 File Size: 105184854 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - 
ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_064 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L379 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_064-29278baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.06% Top 5 Accuracy: 94.47% - Name: regnetx_080 In Collection: RegNetX Metadata: FLOPs: 10276726784 Parameters: 39570000 File Size: 158720042 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_080 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L385 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_080-7c7fcab1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.21% Top 5 Accuracy: 94.55% - Name: regnetx_120 In Collection: RegNetX Metadata: FLOPs: 15536378368 Parameters: 46110000 File Size: 184866342 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_120 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L391 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_120-65d5521e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.61% Top 5 Accuracy: 94.73% - Name: regnetx_160 In Collection: RegNetX Metadata: FLOPs: 20491740672 Parameters: 54280000 File Size: 217623862 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_160 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L397 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_160-c98c4112.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.84% Top 5 Accuracy: 94.82% - Name: regnetx_320 In Collection: RegNetX Metadata: FLOPs: 40798958592 Parameters: 107810000 File Size: 431962133 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - 
SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_320 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L403 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_320-8ea38b93.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.25% Top 5 Accuracy: 95.03% -->
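The linear parameterisation above is easy to turn into code. The sketch below follows the usual RegNet recipe of starting from u_j = w_0 + w_a * j, snapping each width to the geometric grid controlled by w_m, and rounding to a multiple of 8; it is a simplified illustration rather than timm's exact width generator, and the parameter values in the example call are purely illustrative.

```py
import numpy as np

def regnet_widths(w_0: float, w_a: float, w_m: float, depth: int, q: int = 8):
    """Sketch of the RegNet block-width schedule (simplified)."""
    u = w_0 + w_a * np.arange(depth)              # linear parameterisation u_j = w_0 + w_a * j
    s = np.round(np.log(u / w_0) / np.log(w_m))   # nearest exponent on the w_m grid
    w = w_0 * np.power(w_m, s)                    # quantised per-block widths
    return (np.round(w / q) * q).astype(int).tolist()  # round to a multiple of q

# Illustrative parameter values only.
print(regnet_widths(w_0=24, w_a=36.44, w_m=2.49, depth=13))
```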
pytorch-image-models/hfdocs/source/models/regnetx.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/regnetx.mdx", "repo_id": "pytorch-image-models", "token_count": 6574 }
175
# SWSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks. The models in this collection utilise semi-weakly supervised learning to improve the performance of the model. The approach brings important gains to standard architectures for image, video and fine-grained classification. Please note the CC-BY-NC 4.0 license on these weights; non-commercial use only. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('swsl_resnet18', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `swsl_resnet18`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('swsl_resnet18', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1905-00546, author = {I.
Zeki Yalniz and Herv{\'{e}} J{\'{e}}gou and Kan Chen and Manohar Paluri and Dhruv Mahajan}, title = {Billion-scale semi-supervised learning for image classification}, journal = {CoRR}, volume = {abs/1905.00546}, year = {2019}, url = {http://arxiv.org/abs/1905.00546}, archivePrefix = {arXiv}, eprint = {1905.00546}, timestamp = {Mon, 28 Sep 2020 08:19:37 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-00546.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: SWSL ResNet Paper: Title: Billion-scale semi-supervised learning for image classification URL: https://paperswithcode.com/paper/billion-scale-semi-supervised-learning-for Models: - Name: swsl_resnet18 In Collection: SWSL ResNet Metadata: FLOPs: 2337073152 Parameters: 11690000 File Size: 46811375 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - IG-1B-Targeted - ImageNet Training Resources: 64x GPUs ID: swsl_resnet18 LR: 0.0015 Epochs: 30 Layers: 18 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L954 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet18-118f1556.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.28% Top 5 Accuracy: 91.76% - Name: swsl_resnet50 In Collection: SWSL ResNet Metadata: FLOPs: 5282531328 Parameters: 25560000 File Size: 102480594 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - IG-1B-Targeted - ImageNet Training Resources: 64x GPUs ID: swsl_resnet50 LR: 0.0015 Epochs: 30 Layers: 50 Crop Pct: '0.875' Batch Size: 1536 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L965 Weights: https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet50-16a12f1b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 81.14% Top 5 Accuracy: 95.97% -->
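Building on the finetuning note above, a common extra step is to freeze the pretrained backbone and train only the new classifier at first. The sketch below assumes a ResNet-style head named `fc` (other timm architectures use different head names, which `model.default_cfg['classifier']` reports), and uses an example class count of 10.

```py
>>> import timm
>>> import torch

>>> model = timm.create_model('swsl_resnet18', pretrained=True, num_classes=10)

>>> # Freeze everything except the freshly initialised classifier head ('fc' for ResNets).
>>> for name, param in model.named_parameters():
...     param.requires_grad = name.startswith('fc')

>>> optimizer = torch.optim.SGD(
...     [p for p in model.parameters() if p.requires_grad], lr=0.01, momentum=0.9)
```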
pytorch-image-models/hfdocs/source/models/swsl-resnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/swsl-resnet.mdx", "repo_id": "pytorch-image-models", "token_count": 2442 }
176
# Results

CSV files containing ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights are located in the repository [results folder](https://github.com/rwightman/pytorch-image-models/tree/master/results).

## Self-trained Weights

The table below includes ImageNet-1k validation results of model weights that I've trained myself. It is not updated as frequently as the csv results outputs linked above.

|Model | Acc@1 (Err) | Acc@5 (Err) | Param # (M) | Interpolation | Image Size |
|---|---|---|---|---|---|
| efficientnet_b3a | 82.242 (17.758) | 96.114 (3.886) | 12.23 | bicubic | 320 (1.0 crop) |
| efficientnet_b3 | 82.076 (17.924) | 96.020 (3.980) | 12.23 | bicubic | 300 |
| regnety_032 | 82.002 (17.998) | 95.906 (4.094) | 19.44 | bicubic | 224 |
| skresnext50d_32x4d | 81.278 (18.722) | 95.366 (4.634) | 27.5 | bicubic | 288 (1.0 crop) |
| seresnext50d_32x4d | 81.266 (18.734) | 95.620 (4.380) | 27.6 | bicubic | 224 |
| efficientnet_b2a | 80.608 (19.392) | 95.310 (4.690) | 9.11 | bicubic | 288 (1.0 crop) |
| resnet50d | 80.530 (19.470) | 95.160 (4.840) | 25.6 | bicubic | 224 |
| mixnet_xl | 80.478 (19.522) | 94.932 (5.068) | 11.90 | bicubic | 224 |
| efficientnet_b2 | 80.402 (19.598) | 95.076 (4.924) | 9.11 | bicubic | 260 |
| seresnet50 | 80.274 (19.726) | 95.070 (4.930) | 28.1 | bicubic | 224 |
| skresnext50d_32x4d | 80.156 (19.844) | 94.642 (5.358) | 27.5 | bicubic | 224 |
| cspdarknet53 | 80.058 (19.942) | 95.084 (4.916) | 27.6 | bicubic | 256 |
| cspresnext50 | 80.040 (19.960) | 94.944 (5.056) | 20.6 | bicubic | 224 |
| resnext50_32x4d | 79.762 (20.238) | 94.600 (5.400) | 25 | bicubic | 224 |
| resnext50d_32x4d | 79.674 (20.326) | 94.868 (5.132) | 25.1 | bicubic | 224 |
| cspresnet50 | 79.574 (20.426) | 94.712 (5.288) | 21.6 | bicubic | 256 |
| ese_vovnet39b | 79.320 (20.680) | 94.710 (5.290) | 24.6 | bicubic | 224 |
| resnetblur50 | 79.290 (20.710) | 94.632 (5.368) | 25.6 | bicubic | 224 |
| dpn68b | 79.216 (20.784) | 94.414 (5.586) | 12.6 | bicubic | 224 |
| resnet50 | 79.038 (20.962) | 94.390 (5.610) | 25.6 | bicubic | 224 |
| mixnet_l | 78.976 (21.024) | 94.184 (5.816) | 7.33 | bicubic | 224 |
| efficientnet_b1 | 78.692 (21.308) | 94.086 (5.914) | 7.79 | bicubic | 240 |
| efficientnet_es | 78.066 (21.934) | 93.926 (6.074) | 5.44 | bicubic | 224 |
| seresnext26t_32x4d | 77.998 (22.002) | 93.708 (6.292) | 16.8 | bicubic | 224 |
| seresnext26tn_32x4d | 77.986 (22.014) | 93.746 (6.254) | 16.8 | bicubic | 224 |
| efficientnet_b0 | 77.698 (22.302) | 93.532 (6.468) | 5.29 | bicubic | 224 |
| seresnext26d_32x4d | 77.602 (22.398) | 93.608 (6.392) | 16.8 | bicubic | 224 |
| mobilenetv2_120d | 77.294 (22.706) | 93.502 (6.498) | 5.8 | bicubic | 224 |
| mixnet_m | 77.256 (22.744) | 93.418 (6.582) | 5.01 | bicubic | 224 |
| resnet34d | 77.116 (22.884) | 93.382 (6.618) | 21.8 | bicubic | 224 |
| seresnext26_32x4d | 77.104 (22.896) | 93.316 (6.684) | 16.8 | bicubic | 224 |
| skresnet34 | 76.912 (23.088) | 93.322 (6.678) | 22.2 | bicubic | 224 |
| ese_vovnet19b_dw | 76.798 (23.202) | 93.268 (6.732) | 6.5 | bicubic | 224 |
| resnet26d | 76.68 (23.32) | 93.166 (6.834) | 16 | bicubic | 224 |
| densenetblur121d | 76.576 (23.424) | 93.190 (6.810) | 8.0 | bicubic | 224 |
| mobilenetv2_140 | 76.524 (23.476) | 92.990 (7.010) | 6.1 | bicubic | 224 |
| mixnet_s | 75.988 (24.012) | 92.794 (7.206) | 4.13 | bicubic | 224 |
| mobilenetv3_large_100 | 75.766 (24.234) | 92.542 (7.458) | 5.5 | bicubic | 224 |
| mobilenetv3_rw | 75.634 (24.366) | 92.708 (7.292) | 5.5 | bicubic | 224 |
| mnasnet_a1 | 75.448 (24.552) | 92.604 (7.396) | 3.89 | bicubic | 224 |
| resnet26 | 75.292 (24.708) | 92.57 (7.43) | 16 | bicubic | 224 |
| fbnetc_100 | 75.124 (24.876) | 92.386 (7.614) | 5.6 | bilinear | 224 |
| resnet34 | 75.110 (24.890) | 92.284 (7.716) | 22 | bilinear | 224 |
| mobilenetv2_110d | 75.052 (24.948) | 92.180 (7.820) | 4.5 | bicubic | 224 |
| seresnet34 | 74.808 (25.192) | 92.124 (7.876) | 22 | bilinear | 224 |
| mnasnet_b1 | 74.658 (25.342) | 92.114 (7.886) | 4.38 | bicubic | 224 |
| spnasnet_100 | 74.084 (25.916) | 91.818 (8.182) | 4.42 | bilinear | 224 |
| skresnet18 | 73.038 (26.962) | 91.168 (8.832) | 11.9 | bicubic | 224 |
| mobilenetv2_100 | 72.978 (27.022) | 91.016 (8.984) | 3.5 | bicubic | 224 |
| resnet18d | 72.260 (27.740) | 90.696 (9.304) | 11.7 | bicubic | 224 |
| seresnet18 | 71.742 (28.258) | 90.334 (9.666) | 11.8 | bicubic | 224 |

## Ported and Other Weights

For weights ported from other deep learning frameworks (Tensorflow, MXNet GluonCV) or copied from other PyTorch sources, please see the full results tables for ImageNet and various OOD test sets in the [results tables](https://github.com/rwightman/pytorch-image-models/tree/master/results).

Model code .py files contain links to original sources of models and weights.
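As a quick way to work with the CSV results mentioned at the top of this page, the following sketch loads one of the files with pandas and sorts by top-1 accuracy. The filename and column names (`model`, `top1`, `top5`, `param_count`, `img_size`) are assumptions about the current CSV layout; check `df.columns` against the file you download from the results folder.

```py
>>> import pandas as pd

>>> # Assumed filename; see the results folder linked above for the full list of CSVs.
>>> df = pd.read_csv('results-imagenet.csv')
>>> df.sort_values('top1', ascending=False).head(10)[['model', 'top1', 'top5', 'param_count', 'img_size']]
```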
pytorch-image-models/hfdocs/source/results.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/results.mdx", "repo_id": "pytorch-image-models", "token_count": 2259 }
177
import logging from .constants import * _logger = logging.getLogger(__name__) def resolve_data_config( args=None, pretrained_cfg=None, model=None, use_test_size=False, verbose=False ): assert model or args or pretrained_cfg, "At least one of model, args, or pretrained_cfg required for data config." args = args or {} pretrained_cfg = pretrained_cfg or {} if not pretrained_cfg and model is not None and hasattr(model, 'pretrained_cfg'): pretrained_cfg = model.pretrained_cfg data_config = {} # Resolve input/image size in_chans = 3 if args.get('in_chans', None) is not None: in_chans = args['in_chans'] elif args.get('chans', None) is not None: in_chans = args['chans'] input_size = (in_chans, 224, 224) if args.get('input_size', None) is not None: assert isinstance(args['input_size'], (tuple, list)) assert len(args['input_size']) == 3 input_size = tuple(args['input_size']) in_chans = input_size[0] # input_size overrides in_chans elif args.get('img_size', None) is not None: assert isinstance(args['img_size'], int) input_size = (in_chans, args['img_size'], args['img_size']) else: if use_test_size and pretrained_cfg.get('test_input_size', None) is not None: input_size = pretrained_cfg['test_input_size'] elif pretrained_cfg.get('input_size', None) is not None: input_size = pretrained_cfg['input_size'] data_config['input_size'] = input_size # resolve interpolation method data_config['interpolation'] = 'bicubic' if args.get('interpolation', None): data_config['interpolation'] = args['interpolation'] elif pretrained_cfg.get('interpolation', None): data_config['interpolation'] = pretrained_cfg['interpolation'] # resolve dataset + model mean for normalization data_config['mean'] = IMAGENET_DEFAULT_MEAN if args.get('mean', None) is not None: mean = tuple(args['mean']) if len(mean) == 1: mean = tuple(list(mean) * in_chans) else: assert len(mean) == in_chans data_config['mean'] = mean elif pretrained_cfg.get('mean', None): data_config['mean'] = pretrained_cfg['mean'] # resolve dataset + model std deviation for normalization data_config['std'] = IMAGENET_DEFAULT_STD if args.get('std', None) is not None: std = tuple(args['std']) if len(std) == 1: std = tuple(list(std) * in_chans) else: assert len(std) == in_chans data_config['std'] = std elif pretrained_cfg.get('std', None): data_config['std'] = pretrained_cfg['std'] # resolve default inference crop crop_pct = DEFAULT_CROP_PCT if args.get('crop_pct', None): crop_pct = args['crop_pct'] else: if use_test_size and pretrained_cfg.get('test_crop_pct', None): crop_pct = pretrained_cfg['test_crop_pct'] elif pretrained_cfg.get('crop_pct', None): crop_pct = pretrained_cfg['crop_pct'] data_config['crop_pct'] = crop_pct # resolve default crop percentage crop_mode = DEFAULT_CROP_MODE if args.get('crop_mode', None): crop_mode = args['crop_mode'] elif pretrained_cfg.get('crop_mode', None): crop_mode = pretrained_cfg['crop_mode'] data_config['crop_mode'] = crop_mode if verbose: _logger.info('Data processing configuration for current model + dataset:') for n, v in data_config.items(): _logger.info('\t%s: %s' % (n, str(v))) return data_config def resolve_model_data_config( model, args=None, pretrained_cfg=None, use_test_size=False, verbose=False, ): """ Resolve Model Data Config This is equivalent to resolve_data_config() but with arguments re-ordered to put model first. 
Args: model (nn.Module): the model instance args (dict): command line arguments / configuration in dict form (overrides pretrained_cfg) pretrained_cfg (dict): pretrained model config (overrides pretrained_cfg attached to model) use_test_size (bool): use the test time input resolution (if one exists) instead of default train resolution verbose (bool): enable extra logging of resolved values Returns: dictionary of config """ return resolve_data_config( args=args, pretrained_cfg=pretrained_cfg, model=model, use_test_size=use_test_size, verbose=verbose, )
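The resolvers above are typically paired with `timm.data.create_transform` to build a matching eval pipeline. A minimal sketch, assuming both helpers are exported from `timm.data` in your timm version and using an arbitrary model name:

```python
import timm
from timm.data import create_transform, resolve_model_data_config

model = timm.create_model('resnet50', pretrained=True)

# Resolve input size, interpolation, mean/std, crop pct/mode from the model's pretrained_cfg.
data_cfg = resolve_model_data_config(model, use_test_size=True, verbose=True)

# Build the matching evaluation transform by unpacking the resolved config.
transform = create_transform(**data_cfg, is_training=False)
print(data_cfg)
```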
pytorch-image-models/timm/data/config.py/0
{ "file_path": "pytorch-image-models/timm/data/config.py", "repo_id": "pytorch-image-models", "token_count": 1927 }
178
""" Dataset reader for HF IterableDataset """ import math import os from itertools import repeat, chain from typing import Optional import torch import torch.distributed as dist from PIL import Image try: import datasets from datasets.distributed import split_dataset_by_node from datasets.splits import SplitInfo except ImportError as e: print("Please install Hugging Face datasets package `pip install datasets`.") raise e from .class_map import load_class_map from .reader import Reader from .shared_count import SharedCount SHUFFLE_SIZE = int(os.environ.get('HFIDS_SHUFFLE_SIZE', 4096)) class ReaderHfids(Reader): def __init__( self, name: str, root: Optional[str] = None, split: str = 'train', is_training: bool = False, batch_size: int = 1, download: bool = False, repeats: int = 0, seed: int = 42, class_map: Optional[dict] = None, input_key: str = 'image', input_img_mode: str = 'RGB', target_key: str = 'label', target_img_mode: str = '', shuffle_size: Optional[int] = None, num_samples: Optional[int] = None, ): super().__init__() self.root = root self.split = split self.is_training = is_training self.batch_size = batch_size self.download = download self.repeats = repeats self.common_seed = seed # a seed that's fixed across all worker / distributed instances self.shuffle_size = shuffle_size or SHUFFLE_SIZE self.input_key = input_key self.input_img_mode = input_img_mode self.target_key = target_key self.target_img_mode = target_img_mode self.builder = datasets.load_dataset_builder(name, cache_dir=root) if download: self.builder.download_and_prepare() split_info: Optional[SplitInfo] = None if self.builder.info.splits and split in self.builder.info.splits: if isinstance(self.builder.info.splits[split], SplitInfo): split_info: Optional[SplitInfo] = self.builder.info.splits[split] if num_samples: self.num_samples = num_samples elif split_info and split_info.num_examples: self.num_samples = split_info.num_examples else: raise ValueError( "Dataset length is unknown, please pass `num_samples` explicitely. " "The number of steps needs to be known in advance for the learning rate scheduler." 
) self.remap_class = False if class_map: self.class_to_idx = load_class_map(class_map) self.remap_class = True else: self.class_to_idx = {} # Distributed world state self.dist_rank = 0 self.dist_num_replicas = 1 if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1: self.dist_rank = dist.get_rank() self.dist_num_replicas = dist.get_world_size() # Attributes that are updated in _lazy_init self.worker_info = None self.worker_id = 0 self.num_workers = 1 self.global_worker_id = 0 self.global_num_workers = 1 # Initialized lazily on each dataloader worker process self.ds: Optional[datasets.IterableDataset] = None self.epoch = SharedCount() def set_epoch(self, count): # to update the shuffling effective_seed = seed + epoch self.epoch.value = count def set_loader_cfg( self, num_workers: Optional[int] = None, ): if self.ds is not None: return if num_workers is not None: self.num_workers = num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers def _lazy_init(self): """ Lazily initialize worker (in worker processes) """ if self.worker_info is None: worker_info = torch.utils.data.get_worker_info() if worker_info is not None: self.worker_info = worker_info self.worker_id = worker_info.id self.num_workers = worker_info.num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers self.global_worker_id = self.dist_rank * self.num_workers + self.worker_id if self.download: dataset = self.builder.as_dataset(split=self.split) # to distribute evenly to workers ds = dataset.to_iterable_dataset(num_shards=self.global_num_workers) else: # in this case the number of shard is determined by the number of remote files ds = self.builder.as_streaming_dataset(split=self.split) if self.is_training: # will shuffle the list of shards and use a shuffle buffer ds = ds.shuffle(seed=self.common_seed, buffer_size=self.shuffle_size) # Distributed: # The dataset has a number of shards that is a factor of `dist_num_replicas` (i.e. if `ds.n_shards % dist_num_replicas == 0`), # so the shards are evenly assigned across the nodes. # If it's not the case for dataset streaming, each node keeps 1 example out of `dist_num_replicas`, skipping the other examples. # Workers: # In a node, datasets.IterableDataset assigns the shards assigned to the node as evenly as possible to workers. 
self.ds = split_dataset_by_node(ds, rank=self.dist_rank, world_size=self.dist_num_replicas) def _num_samples_per_worker(self): num_worker_samples = \ max(1, self.repeats) * self.num_samples / max(self.global_num_workers, self.dist_num_replicas) if self.is_training or self.dist_num_replicas > 1: num_worker_samples = math.ceil(num_worker_samples) if self.is_training and self.batch_size is not None: num_worker_samples = math.ceil(num_worker_samples / self.batch_size) * self.batch_size return int(num_worker_samples) def __iter__(self): if self.ds is None: self._lazy_init() self.ds.set_epoch(self.epoch.value) target_sample_count = self._num_samples_per_worker() sample_count = 0 if self.is_training: ds_iter = chain.from_iterable(repeat(self.ds)) else: ds_iter = iter(self.ds) for sample in ds_iter: input_data: Image.Image = sample[self.input_key] if self.input_img_mode and input_data.mode != self.input_img_mode: input_data = input_data.convert(self.input_img_mode) target_data = sample[self.target_key] if self.target_img_mode: assert isinstance(target_data, Image.Image), "target_img_mode is specified but target is not an image" if target_data.mode != self.target_img_mode: target_data = target_data.convert(self.target_img_mode) elif self.remap_class: target_data = self.class_to_idx[target_data] yield input_data, target_data sample_count += 1 if self.is_training and sample_count >= target_sample_count: break def __len__(self): num_samples = self._num_samples_per_worker() * self.num_workers return num_samples def _filename(self, index, basename=False, absolute=False): assert False, "Not supported" # no random access to examples def filenames(self, basename=False, absolute=False): """ Return all filenames in dataset, overrides base""" if self.ds is None: self._lazy_init() names = [] for sample in self.ds: if 'file_name' in sample: name = sample['file_name'] elif 'filename' in sample: name = sample['filename'] elif 'id' in sample: name = sample['id'] elif 'image_id' in sample: name = sample['image_id'] else: assert False, "No supported name field present" names.append(name) return names
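This reader is normally not constructed directly; it is selected through `timm.data.create_dataset` via a reader prefix. A hedged sketch, assuming the `hfids/` prefix routes to `ReaderHfids` in your timm version and that you have access to the chosen Hub dataset:

```python
from timm.data import create_dataset, create_loader

# Stream an image classification dataset from the Hugging Face Hub as an IterableDataset.
dataset = create_dataset(
    'hfids/imagenet-1k',   # 'hfids/' selects the HF IterableDataset reader (assumed prefix)
    root=None,
    split='validation',
    is_training=False,
    batch_size=256,        # needed so each worker can round its sample count to full batches
)

loader = create_loader(
    dataset,
    input_size=(3, 224, 224),
    batch_size=256,
    is_training=False,
)
```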
pytorch-image-models/timm/data/readers/reader_hfids.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader_hfids.py", "repo_id": "pytorch-image-models", "token_count": 3722 }
179
from typing import Optional import torch import torch.nn as nn import torch.nn.functional as F from .config import use_fused_attn from .mlp import Mlp from .weight_init import trunc_normal_tf_ class AttentionPoolLatent(nn.Module): """ Attention pooling w/ latent query """ fused_attn: torch.jit.Final[bool] def __init__( self, in_features: int, out_features: int = None, embed_dim: int = None, num_heads: int = 8, mlp_ratio: float = 4.0, qkv_bias: bool = True, qk_norm: bool = False, latent_len: int = 1, latent_dim: int = None, pos_embed: str = '', pool_type: str = 'token', norm_layer: Optional[nn.Module] = None, drop: float = 0.0, ): super().__init__() embed_dim = embed_dim or in_features out_features = out_features or in_features assert embed_dim % num_heads == 0 self.num_heads = num_heads self.head_dim = embed_dim // num_heads self.scale = self.head_dim ** -0.5 self.pool = pool_type self.fused_attn = use_fused_attn() if pos_embed == 'abs': spatial_len = self.feat_size self.pos_embed = nn.Parameter(torch.zeros(spatial_len, in_features)) else: self.pos_embed = None self.latent_dim = latent_dim or embed_dim self.latent_len = latent_len self.latent = nn.Parameter(torch.zeros(1, self.latent_len, embed_dim)) self.q = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.kv = nn.Linear(embed_dim, embed_dim * 2, bias=qkv_bias) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.proj = nn.Linear(embed_dim, embed_dim) self.proj_drop = nn.Dropout(drop) self.norm = norm_layer(out_features) if norm_layer is not None else nn.Identity() self.mlp = Mlp(embed_dim, int(embed_dim * mlp_ratio)) self.init_weights() def init_weights(self): if self.pos_embed is not None: trunc_normal_tf_(self.pos_embed, std=self.pos_embed.shape[1] ** -0.5) trunc_normal_tf_(self.latent, std=self.latent_dim ** -0.5) def forward(self, x): B, N, C = x.shape if self.pos_embed is not None: # FIXME interpolate x = x + self.pos_embed.unsqueeze(0).to(x.dtype) q_latent = self.latent.expand(B, -1, -1) q = self.q(q_latent).reshape(B, self.latent_len, self.num_heads, self.head_dim).transpose(1, 2) kv = self.kv(x).reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) k, v = kv.unbind(0) q, k = self.q_norm(q), self.k_norm(k) if self.fused_attn: x = F.scaled_dot_product_attention(q, k, v) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) x = attn @ v x = x.transpose(1, 2).reshape(B, self.latent_len, C) x = self.proj(x) x = self.proj_drop(x) x = x + self.mlp(self.norm(x)) # optional pool if latent seq_len > 1 and pooled output is desired if self.pool == 'token': x = x[:, 0] elif self.pool == 'avg': x = x.mean(1) return x
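A small usage sketch of the module above, pooling a ViT-style token sequence down to a single feature vector; the shapes are illustrative:

```python
import torch

# Pool a (B, N, C) token sequence to (B, C) with a single learned latent query.
pool = AttentionPoolLatent(in_features=384, num_heads=6)

tokens = torch.randn(2, 196, 384)   # e.g. 14x14 patch tokens from a ViT
pooled = pool(tokens)
print(pooled.shape)                 # torch.Size([2, 384])
```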
pytorch-image-models/timm/layers/attention_pool.py/0
{ "file_path": "pytorch-image-models/timm/layers/attention_pool.py", "repo_id": "pytorch-image-models", "token_count": 1758 }
180
""" ECA module from ECAnet paper: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks https://arxiv.org/abs/1910.03151 Original ECA model borrowed from https://github.com/BangguWu/ECANet Modified circular ECA implementation and adaption for use in timm package by Chris Ha https://github.com/VRandme Original License: MIT License Copyright (c) 2019 BangguWu, Qilong Wang Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ import math from torch import nn import torch.nn.functional as F from .create_act import create_act_layer from .helpers import make_divisible class EcaModule(nn.Module): """Constructs an ECA module. Args: channels: Number of channels of the input feature map for use in adaptive kernel sizes for actual calculations according to channel. gamma, beta: when channel is given parameters of mapping function refer to original paper https://arxiv.org/pdf/1910.03151.pdf (default=None. if channel size not given, use k_size given for kernel size.) kernel_size: Adaptive selection of kernel size (default=3) gamm: used in kernel_size calc, see above beta: used in kernel_size calc, see above act_layer: optional non-linearity after conv, enables conv bias, this is an experiment gate_layer: gating non-linearity to use """ def __init__( self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid', rd_ratio=1/8, rd_channels=None, rd_divisor=8, use_mlp=False): super(EcaModule, self).__init__() if channels is not None: t = int(abs(math.log(channels, 2) + beta) / gamma) kernel_size = max(t if t % 2 else t + 1, 3) assert kernel_size % 2 == 1 padding = (kernel_size - 1) // 2 if use_mlp: # NOTE 'mlp' mode is a timm experiment, not in paper assert channels is not None if rd_channels is None: rd_channels = make_divisible(channels * rd_ratio, divisor=rd_divisor) act_layer = act_layer or nn.ReLU self.conv = nn.Conv1d(1, rd_channels, kernel_size=1, padding=0, bias=True) self.act = create_act_layer(act_layer) self.conv2 = nn.Conv1d(rd_channels, 1, kernel_size=kernel_size, padding=padding, bias=True) else: self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) self.act = None self.conv2 = None self.gate = create_act_layer(gate_layer) def forward(self, x): y = x.mean((2, 3)).view(x.shape[0], 1, -1) # view for 1d conv y = self.conv(y) if self.conv2 is not None: y = self.act(y) y = self.conv2(y) y = self.gate(y).view(x.shape[0], -1, 1, 1) return x * y.expand_as(x) EfficientChannelAttn = EcaModule # alias class CecaModule(nn.Module): """Constructs a circular ECA module. 
ECA module where the conv uses circular padding rather than zero padding. Unlike the spatial dimension, the channels do not have inherent ordering nor locality. Although this module in essence, applies such an assumption, it is unnecessary to limit the channels on either "edge" from being circularly adapted to each other. This will fundamentally increase connectivity and possibly increase performance metrics (accuracy, robustness), without significantly impacting resource metrics (parameter size, throughput,latency, etc) Args: channels: Number of channels of the input feature map for use in adaptive kernel sizes for actual calculations according to channel. gamma, beta: when channel is given parameters of mapping function refer to original paper https://arxiv.org/pdf/1910.03151.pdf (default=None. if channel size not given, use k_size given for kernel size.) kernel_size: Adaptive selection of kernel size (default=3) gamm: used in kernel_size calc, see above beta: used in kernel_size calc, see above act_layer: optional non-linearity after conv, enables conv bias, this is an experiment gate_layer: gating non-linearity to use """ def __init__(self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid'): super(CecaModule, self).__init__() if channels is not None: t = int(abs(math.log(channels, 2) + beta) / gamma) kernel_size = max(t if t % 2 else t + 1, 3) has_act = act_layer is not None assert kernel_size % 2 == 1 # PyTorch circular padding mode is buggy as of pytorch 1.4 # see https://github.com/pytorch/pytorch/pull/17240 # implement manual circular padding self.padding = (kernel_size - 1) // 2 self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=0, bias=has_act) self.gate = create_act_layer(gate_layer) def forward(self, x): y = x.mean((2, 3)).view(x.shape[0], 1, -1) # Manually implement circular padding, F.pad does not seemed to be bugged y = F.pad(y, (self.padding, self.padding), mode='circular') y = self.conv(y) y = self.gate(y).view(x.shape[0], -1, 1, 1) return x * y.expand_as(x) CircularEfficientChannelAttn = CecaModule
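A quick sketch of both variants in use, showing the adaptive kernel size; the channel count is arbitrary:

```python
import torch

# For 256 channels: t = int(abs(log2(256) + 1) / 2) = 4, rounded up to the next odd -> kernel_size = 5.
eca = EcaModule(channels=256)
ceca = CecaModule(channels=256)

x = torch.randn(2, 256, 32, 32)
print(eca(x).shape, ceca(x).shape)  # both torch.Size([2, 256, 32, 32])
```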
pytorch-image-models/timm/layers/eca.py/0
{ "file_path": "pytorch-image-models/timm/layers/eca.py", "repo_id": "pytorch-image-models", "token_count": 2411 }
181
""" PyTorch Mixed Convolution Paper: MixConv: Mixed Depthwise Convolutional Kernels (https://arxiv.org/abs/1907.09595) Hacked together by / Copyright 2020 Ross Wightman """ import torch from torch import nn as nn from .conv2d_same import create_conv2d_pad def _split_channels(num_chan, num_groups): split = [num_chan // num_groups for _ in range(num_groups)] split[0] += num_chan - sum(split) return split class MixedConv2d(nn.ModuleDict): """ Mixed Grouped Convolution Based on MDConv and GroupedConv in MixNet impl: https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mixnet/custom_layers.py """ def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding='', dilation=1, depthwise=False, **kwargs): super(MixedConv2d, self).__init__() kernel_size = kernel_size if isinstance(kernel_size, list) else [kernel_size] num_groups = len(kernel_size) in_splits = _split_channels(in_channels, num_groups) out_splits = _split_channels(out_channels, num_groups) self.in_channels = sum(in_splits) self.out_channels = sum(out_splits) for idx, (k, in_ch, out_ch) in enumerate(zip(kernel_size, in_splits, out_splits)): conv_groups = in_ch if depthwise else 1 # use add_module to keep key space clean self.add_module( str(idx), create_conv2d_pad( in_ch, out_ch, k, stride=stride, padding=padding, dilation=dilation, groups=conv_groups, **kwargs) ) self.splits = in_splits def forward(self, x): x_split = torch.split(x, self.splits, 1) x_out = [c(x_split[i]) for i, c in enumerate(self.values())] x = torch.cat(x_out, 1) return x
pytorch-image-models/timm/layers/mixed_conv2d.py/0
{ "file_path": "pytorch-image-models/timm/layers/mixed_conv2d.py", "repo_id": "pytorch-image-models", "token_count": 834 }
182
""" Split Attention Conv2d (for ResNeSt Models) Paper: `ResNeSt: Split-Attention Networks` - /https://arxiv.org/abs/2004.08955 Adapted from original PyTorch impl at https://github.com/zhanghang1989/ResNeSt Modified for torchscript compat, performance, and consistency with timm by Ross Wightman """ import torch import torch.nn.functional as F from torch import nn from .helpers import make_divisible class RadixSoftmax(nn.Module): def __init__(self, radix, cardinality): super(RadixSoftmax, self).__init__() self.radix = radix self.cardinality = cardinality def forward(self, x): batch = x.size(0) if self.radix > 1: x = x.view(batch, self.cardinality, self.radix, -1).transpose(1, 2) x = F.softmax(x, dim=1) x = x.reshape(batch, -1) else: x = torch.sigmoid(x) return x class SplitAttn(nn.Module): """Split-Attention (aka Splat) """ def __init__(self, in_channels, out_channels=None, kernel_size=3, stride=1, padding=None, dilation=1, groups=1, bias=False, radix=2, rd_ratio=0.25, rd_channels=None, rd_divisor=8, act_layer=nn.ReLU, norm_layer=None, drop_layer=None, **kwargs): super(SplitAttn, self).__init__() out_channels = out_channels or in_channels self.radix = radix mid_chs = out_channels * radix if rd_channels is None: attn_chs = make_divisible(in_channels * radix * rd_ratio, min_value=32, divisor=rd_divisor) else: attn_chs = rd_channels * radix padding = kernel_size // 2 if padding is None else padding self.conv = nn.Conv2d( in_channels, mid_chs, kernel_size, stride, padding, dilation, groups=groups * radix, bias=bias, **kwargs) self.bn0 = norm_layer(mid_chs) if norm_layer else nn.Identity() self.drop = drop_layer() if drop_layer is not None else nn.Identity() self.act0 = act_layer(inplace=True) self.fc1 = nn.Conv2d(out_channels, attn_chs, 1, groups=groups) self.bn1 = norm_layer(attn_chs) if norm_layer else nn.Identity() self.act1 = act_layer(inplace=True) self.fc2 = nn.Conv2d(attn_chs, mid_chs, 1, groups=groups) self.rsoftmax = RadixSoftmax(radix, groups) def forward(self, x): x = self.conv(x) x = self.bn0(x) x = self.drop(x) x = self.act0(x) B, RC, H, W = x.shape if self.radix > 1: x = x.reshape((B, self.radix, RC // self.radix, H, W)) x_gap = x.sum(dim=1) else: x_gap = x x_gap = x_gap.mean((2, 3), keepdim=True) x_gap = self.fc1(x_gap) x_gap = self.bn1(x_gap) x_gap = self.act1(x_gap) x_attn = self.fc2(x_gap) x_attn = self.rsoftmax(x_attn).view(B, -1, 1, 1) if self.radix > 1: out = (x * x_attn.reshape((B, self.radix, RC // self.radix, 1, 1))).sum(dim=1) else: out = x * x_attn return out.contiguous()
pytorch-image-models/timm/layers/split_attn.py/0
{ "file_path": "pytorch-image-models/timm/layers/split_attn.py", "repo_id": "pytorch-image-models", "token_count": 1533 }
183
""" EfficientNet, MobileNetV3, etc Builder Assembles EfficieNet and related network feature blocks from string definitions. Handles stride, dilation calculations, and selects feature extraction points. Hacked together by / Copyright 2019, Ross Wightman """ import logging import math import re from copy import deepcopy from functools import partial from typing import Any, Dict, List import torch.nn as nn from ._efficientnet_blocks import * from timm.layers import CondConv2d, get_condconv_initializer, get_act_layer, get_attn, make_divisible __all__ = ["EfficientNetBuilder", "decode_arch_def", "efficientnet_init_weights", 'resolve_bn_args', 'resolve_act_layer', 'round_channels', 'BN_MOMENTUM_TF_DEFAULT', 'BN_EPS_TF_DEFAULT'] _logger = logging.getLogger(__name__) _DEBUG_BUILDER = False # Defaults used for Google/Tensorflow training of mobile networks /w RMSprop as per # papers and TF reference implementations. PT momentum equiv for TF decay is (1 - TF decay) # NOTE: momentum varies btw .99 and .9997 depending on source # .99 in official TF TPU impl # .9997 (/w .999 in search space) for paper BN_MOMENTUM_TF_DEFAULT = 1 - 0.99 BN_EPS_TF_DEFAULT = 1e-3 _BN_ARGS_TF = dict(momentum=BN_MOMENTUM_TF_DEFAULT, eps=BN_EPS_TF_DEFAULT) BlockArgs = List[List[Dict[str, Any]]] def get_bn_args_tf(): return _BN_ARGS_TF.copy() def resolve_bn_args(kwargs): bn_args = {} bn_momentum = kwargs.pop('bn_momentum', None) if bn_momentum is not None: bn_args['momentum'] = bn_momentum bn_eps = kwargs.pop('bn_eps', None) if bn_eps is not None: bn_args['eps'] = bn_eps return bn_args def resolve_act_layer(kwargs, default='relu'): return get_act_layer(kwargs.pop('act_layer', default)) def round_channels(channels, multiplier=1.0, divisor=8, channel_min=None, round_limit=0.9): """Round number of filters based on depth multiplier.""" if not multiplier: return channels return make_divisible(channels * multiplier, divisor, channel_min, round_limit=round_limit) def _log_info_if(msg, condition): if condition: _logger.info(msg) def _parse_ksize(ss): if ss.isdigit(): return int(ss) else: return [int(k) for k in ss.split('.')] def _decode_block_str(block_str): """ Decode block definition string Gets a list of block arg (dicts) through a string notation of arguments. E.g. ir_r2_k3_s2_e1_i32_o16_se0.25_noskip All args can exist in any order with the exception of the leading string which is assumed to indicate the block type. leading string - block type ( ir = InvertedResidual, ds = DepthwiseSep, dsa = DeptwhiseSep with pw act, cn = ConvBnAct) r - number of repeat blocks, k - kernel size, s - strides (1-9), e - expansion ratio, c - output channels, se - squeeze/excitation ratio n - activation fn ('re', 'r6', 'hs', or 'sw') Args: block_str: a string representation of block arguments. 
Returns: A list of block args (dicts) Raises: ValueError: if the string def not properly specified (TODO) """ assert isinstance(block_str, str) ops = block_str.split('_') block_type = ops[0] # take the block type off the front ops = ops[1:] options = {} skip = None for op in ops: # string options being checked on individual basis, combine if they grow if op == 'noskip': skip = False # force no skip connection elif op == 'skip': skip = True # force a skip connection elif op.startswith('n'): # activation fn key = op[0] v = op[1:] if v == 're': value = get_act_layer('relu') elif v == 'r6': value = get_act_layer('relu6') elif v == 'hs': value = get_act_layer('hard_swish') elif v == 'sw': value = get_act_layer('swish') # aka SiLU elif v == 'mi': value = get_act_layer('mish') else: continue options[key] = value else: # all numeric options splits = re.split(r'(\d.*)', op) if len(splits) >= 2: key, value = splits[:2] options[key] = value # if act_layer is None, the model default (passed to model init) will be used act_layer = options['n'] if 'n' in options else None exp_kernel_size = _parse_ksize(options['a']) if 'a' in options else 1 pw_kernel_size = _parse_ksize(options['p']) if 'p' in options else 1 force_in_chs = int(options['fc']) if 'fc' in options else 0 # FIXME hack to deal with in_chs issue in TPU def num_repeat = int(options['r']) # each type of block has different valid arguments, fill accordingly block_args = dict( block_type=block_type, out_chs=int(options['c']), stride=int(options['s']), act_layer=act_layer, ) if block_type == 'ir': block_args.update(dict( dw_kernel_size=_parse_ksize(options['k']), exp_kernel_size=exp_kernel_size, pw_kernel_size=pw_kernel_size, exp_ratio=float(options['e']), se_ratio=float(options['se']) if 'se' in options else 0., noskip=skip is False, )) if 'cc' in options: block_args['num_experts'] = int(options['cc']) elif block_type == 'ds' or block_type == 'dsa': block_args.update(dict( dw_kernel_size=_parse_ksize(options['k']), pw_kernel_size=pw_kernel_size, se_ratio=float(options['se']) if 'se' in options else 0., pw_act=block_type == 'dsa', noskip=block_type == 'dsa' or skip is False, )) elif block_type == 'er': block_args.update(dict( exp_kernel_size=_parse_ksize(options['k']), pw_kernel_size=pw_kernel_size, exp_ratio=float(options['e']), force_in_chs=force_in_chs, se_ratio=float(options['se']) if 'se' in options else 0., noskip=skip is False, )) elif block_type == 'cn': block_args.update(dict( kernel_size=int(options['k']), skip=skip is True, )) else: assert False, 'Unknown block type (%s)' % block_type if 'gs' in options: block_args['group_size'] = options['gs'] return block_args, num_repeat def _scale_stage_depth(stack_args, repeats, depth_multiplier=1.0, depth_trunc='ceil'): """ Per-stage depth scaling Scales the block repeats in each stage. This depth scaling impl maintains compatibility with the EfficientNet scaling method, while allowing sensible scaling for other models that may have multiple block arg definitions in each stage. """ # We scale the total repeat count for each stage, there may be multiple # block arg defs per stage so we need to sum. num_repeat = sum(repeats) if depth_trunc == 'round': # Truncating to int by rounding allows stages with few repeats to remain # proportionally smaller for longer. 
This is a good choice when stage definitions # include single repeat stages that we'd prefer to keep that way as long as possible num_repeat_scaled = max(1, round(num_repeat * depth_multiplier)) else: # The default for EfficientNet truncates repeats to int via 'ceil'. # Any multiplier > 1.0 will result in an increased depth for every stage. num_repeat_scaled = int(math.ceil(num_repeat * depth_multiplier)) # Proportionally distribute repeat count scaling to each block definition in the stage. # Allocation is done in reverse as it results in the first block being less likely to be scaled. # The first block makes less sense to repeat in most of the arch definitions. repeats_scaled = [] for r in repeats[::-1]: rs = max(1, round((r / num_repeat * num_repeat_scaled))) repeats_scaled.append(rs) num_repeat -= r num_repeat_scaled -= rs repeats_scaled = repeats_scaled[::-1] # Apply the calculated scaling to each block arg in the stage sa_scaled = [] for ba, rep in zip(stack_args, repeats_scaled): sa_scaled.extend([deepcopy(ba) for _ in range(rep)]) return sa_scaled def decode_arch_def( arch_def, depth_multiplier=1.0, depth_trunc='ceil', experts_multiplier=1, fix_first_last=False, group_size=None, ): """ Decode block architecture definition strings -> block kwargs Args: arch_def: architecture definition strings, list of list of strings depth_multiplier: network depth multiplier depth_trunc: networ depth truncation mode when applying multiplier experts_multiplier: CondConv experts multiplier fix_first_last: fix first and last block depths when multiplier is applied group_size: group size override for all blocks that weren't explicitly set in arch string Returns: list of list of block kwargs """ arch_args = [] if isinstance(depth_multiplier, tuple): assert len(depth_multiplier) == len(arch_def) else: depth_multiplier = (depth_multiplier,) * len(arch_def) for stack_idx, (block_strings, multiplier) in enumerate(zip(arch_def, depth_multiplier)): assert isinstance(block_strings, list) stack_args = [] repeats = [] for block_str in block_strings: assert isinstance(block_str, str) ba, rep = _decode_block_str(block_str) if ba.get('num_experts', 0) > 0 and experts_multiplier > 1: ba['num_experts'] *= experts_multiplier if group_size is not None: ba.setdefault('group_size', group_size) stack_args.append(ba) repeats.append(rep) if fix_first_last and (stack_idx == 0 or stack_idx == len(arch_def) - 1): arch_args.append(_scale_stage_depth(stack_args, repeats, 1.0, depth_trunc)) else: arch_args.append(_scale_stage_depth(stack_args, repeats, multiplier, depth_trunc)) return arch_args class EfficientNetBuilder: """ Build Trunk Blocks This ended up being somewhat of a cross between https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mnasnet_models.py and https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/modeling/backbone/fbnet_builder.py """ def __init__(self, output_stride=32, pad_type='', round_chs_fn=round_channels, se_from_exp=False, act_layer=None, norm_layer=None, se_layer=None, drop_path_rate=0., feature_location=''): self.output_stride = output_stride self.pad_type = pad_type self.round_chs_fn = round_chs_fn self.se_from_exp = se_from_exp # calculate se channel reduction from expanded (mid) chs self.act_layer = act_layer self.norm_layer = norm_layer self.se_layer = get_attn(se_layer) try: self.se_layer(8, rd_ratio=1.0) # test if attn layer accepts rd_ratio arg self.se_has_ratio = True except TypeError: self.se_has_ratio = False self.drop_path_rate = 
drop_path_rate if feature_location == 'depthwise': # old 'depthwise' mode renamed 'expansion' to match TF impl, old expansion mode didn't make sense _logger.warning("feature_location=='depthwise' is deprecated, using 'expansion'") feature_location = 'expansion' self.feature_location = feature_location assert feature_location in ('bottleneck', 'expansion', '') self.verbose = _DEBUG_BUILDER # state updated during build, consumed by model self.in_chs = None self.features = [] def _make_block(self, ba, block_idx, block_count): drop_path_rate = self.drop_path_rate * block_idx / block_count bt = ba.pop('block_type') ba['in_chs'] = self.in_chs ba['out_chs'] = self.round_chs_fn(ba['out_chs']) if 'force_in_chs' in ba and ba['force_in_chs']: # NOTE this is a hack to work around mismatch in TF EdgeEffNet impl ba['force_in_chs'] = self.round_chs_fn(ba['force_in_chs']) ba['pad_type'] = self.pad_type # block act fn overrides the model default ba['act_layer'] = ba['act_layer'] if ba['act_layer'] is not None else self.act_layer assert ba['act_layer'] is not None ba['norm_layer'] = self.norm_layer ba['drop_path_rate'] = drop_path_rate if bt != 'cn': se_ratio = ba.pop('se_ratio') if se_ratio and self.se_layer is not None: if not self.se_from_exp: # adjust se_ratio by expansion ratio if calculating se channels from block input se_ratio /= ba.get('exp_ratio', 1.0) if self.se_has_ratio: ba['se_layer'] = partial(self.se_layer, rd_ratio=se_ratio) else: ba['se_layer'] = self.se_layer if bt == 'ir': _log_info_if(' InvertedResidual {}, Args: {}'.format(block_idx, str(ba)), self.verbose) block = CondConvResidual(**ba) if ba.get('num_experts', 0) else InvertedResidual(**ba) elif bt == 'ds' or bt == 'dsa': _log_info_if(' DepthwiseSeparable {}, Args: {}'.format(block_idx, str(ba)), self.verbose) block = DepthwiseSeparableConv(**ba) elif bt == 'er': _log_info_if(' EdgeResidual {}, Args: {}'.format(block_idx, str(ba)), self.verbose) block = EdgeResidual(**ba) elif bt == 'cn': _log_info_if(' ConvBnAct {}, Args: {}'.format(block_idx, str(ba)), self.verbose) block = ConvBnAct(**ba) else: assert False, 'Uknkown block type (%s) while building model.' % bt self.in_chs = ba['out_chs'] # update in_chs for arg of next block return block def __call__(self, in_chs, model_block_args): """ Build the blocks Args: in_chs: Number of input-channels passed to first block model_block_args: A list of lists, outer list defines stages, inner list contains strings defining block configuration(s) Return: List of block stacks (each stack wrapped in nn.Sequential) """ _log_info_if('Building model trunk with %d stages...' 
% len(model_block_args), self.verbose) self.in_chs = in_chs total_block_count = sum([len(x) for x in model_block_args]) total_block_idx = 0 current_stride = 2 current_dilation = 1 stages = [] if model_block_args[0][0]['stride'] > 1: # if the first block starts with a stride, we need to extract first level feat from stem feature_info = dict(module='bn1', num_chs=in_chs, stage=0, reduction=current_stride) self.features.append(feature_info) # outer list of block_args defines the stacks for stack_idx, stack_args in enumerate(model_block_args): last_stack = stack_idx + 1 == len(model_block_args) _log_info_if('Stack: {}'.format(stack_idx), self.verbose) assert isinstance(stack_args, list) blocks = [] # each stack (stage of blocks) contains a list of block arguments for block_idx, block_args in enumerate(stack_args): last_block = block_idx + 1 == len(stack_args) _log_info_if(' Block: {}'.format(block_idx), self.verbose) assert block_args['stride'] in (1, 2) if block_idx >= 1: # only the first block in any stack can have a stride > 1 block_args['stride'] = 1 extract_features = False if last_block: next_stack_idx = stack_idx + 1 extract_features = next_stack_idx >= len(model_block_args) or \ model_block_args[next_stack_idx][0]['stride'] > 1 next_dilation = current_dilation if block_args['stride'] > 1: next_output_stride = current_stride * block_args['stride'] if next_output_stride > self.output_stride: next_dilation = current_dilation * block_args['stride'] block_args['stride'] = 1 _log_info_if(' Converting stride to dilation to maintain output_stride=={}'.format( self.output_stride), self.verbose) else: current_stride = next_output_stride block_args['dilation'] = current_dilation if next_dilation != current_dilation: current_dilation = next_dilation # create the block block = self._make_block(block_args, total_block_idx, total_block_count) blocks.append(block) # stash feature module name and channel info for model feature extraction if extract_features: feature_info = dict( stage=stack_idx + 1, reduction=current_stride, **block.feature_info(self.feature_location), ) leaf_name = feature_info.get('module', '') if leaf_name: feature_info['module'] = '.'.join([f'blocks.{stack_idx}.{block_idx}', leaf_name]) else: assert last_block feature_info['module'] = f'blocks.{stack_idx}' self.features.append(feature_info) total_block_idx += 1 # incr global block idx (across all stacks) stages.append(nn.Sequential(*blocks)) return stages def _init_weight_goog(m, n='', fix_group_fanout=True): """ Weight initialization as per Tensorflow official implementations. 
Args: m (nn.Module): module to init n (str): module name fix_group_fanout (bool): enable correct (matching Tensorflow TPU impl) fanout calculation w/ group convs Handles layers in EfficientNet, EfficientNet-CondConv, MixNet, MnasNet, MobileNetV3, etc: * https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mnasnet_model.py * https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py """ if isinstance(m, CondConv2d): fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels if fix_group_fanout: fan_out //= m.groups init_weight_fn = get_condconv_initializer( lambda w: nn.init.normal_(w, 0, math.sqrt(2.0 / fan_out)), m.num_experts, m.weight_shape) init_weight_fn(m.weight) if m.bias is not None: nn.init.zeros_(m.bias) elif isinstance(m, nn.Conv2d): fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels if fix_group_fanout: fan_out //= m.groups nn.init.normal_(m.weight, 0, math.sqrt(2.0 / fan_out)) if m.bias is not None: nn.init.zeros_(m.bias) elif isinstance(m, nn.BatchNorm2d): nn.init.ones_(m.weight) nn.init.zeros_(m.bias) elif isinstance(m, nn.Linear): fan_out = m.weight.size(0) # fan-out fan_in = 0 if 'routing_fn' in n: fan_in = m.weight.size(1) init_range = 1.0 / math.sqrt(fan_in + fan_out) nn.init.uniform_(m.weight, -init_range, init_range) nn.init.zeros_(m.bias) def efficientnet_init_weights(model: nn.Module, init_fn=None): init_fn = init_fn or _init_weight_goog for n, m in model.named_modules(): init_fn(m, n)
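The block-string notation documented in `_decode_block_str` can be exercised directly; a small sketch decoding a two-stage definition (the strings are illustrative but follow the same format used by the EfficientNet / MobileNetV3 model definitions):

```python
# Stage 0: one depthwise-separable block, k3, stride 1, 16 output channels.
# Stage 1: two inverted-residual blocks, k3, stride 2, expansion 6, SE ratio 0.25, 24 channels.
arch_def = [
    ['ds_r1_k3_s1_e1_c16'],
    ['ir_r2_k3_s2_e6_c24_se0.25'],
]

block_args = decode_arch_def(arch_def, depth_multiplier=1.0)
print(len(block_args))      # 2 stages
print(len(block_args[1]))   # 2 repeated 'ir' block dicts in stage 1
print(block_args[1][0]['block_type'], block_args[1][0]['out_chs'], block_args[1][0]['se_ratio'])
```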
pytorch-image-models/timm/models/_efficientnet_builder.py/0
{ "file_path": "pytorch-image-models/timm/models/_efficientnet_builder.py", "repo_id": "pytorch-image-models", "token_count": 9013 }
184
""" Bring-Your-Own-Attention Network A flexible network w/ dataclass based config for stacking NN blocks including self-attention (or similar) layers. Currently used to implement experimental variants of: * Bottleneck Transformers * Lambda ResNets * HaloNets Consider all of the models definitions here as experimental WIP and likely to change. Hacked together by / copyright Ross Wightman, 2021. """ from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from ._builder import build_model_with_cfg from ._registry import register_model, generate_default_cfgs from .byobnet import ByoBlockCfg, ByoModelCfg, ByobNet, interleave_blocks __all__ = [] model_cfgs = dict( botnet26t=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', fixed_input_size=True, self_attn_layer='bottleneck', self_attn_kwargs=dict() ), sebotnet33ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg('self_attn', d=2, c=1536, s=2, gs=0, br=0.333), ), stem_chs=64, stem_type='tiered', stem_pool='', act_layer='silu', num_features=1280, attn_layer='se', self_attn_layer='bottleneck', self_attn_kwargs=dict() ), botnet50ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', act_layer='silu', fixed_input_size=True, self_attn_layer='bottleneck', self_attn_kwargs=dict() ), eca_botnext26ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=16, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', fixed_input_size=True, act_layer='silu', attn_layer='eca', self_attn_layer='bottleneck', self_attn_kwargs=dict(dim_head=16) ), halonet_h1=ByoModelCfg( blocks=( ByoBlockCfg(type='self_attn', d=3, c=64, s=1, gs=0, br=1.0), ByoBlockCfg(type='self_attn', d=3, c=128, s=2, gs=0, br=1.0), ByoBlockCfg(type='self_attn', d=10, c=256, s=2, gs=0, br=1.0), ByoBlockCfg(type='self_attn', d=3, c=512, s=2, gs=0, br=1.0), ), stem_chs=64, stem_type='7x7', stem_pool='maxpool', self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=3), ), halonet26t=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=2) ), sehalonet33ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), 
interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg('self_attn', d=2, c=1536, s=2, gs=0, br=0.333), ), stem_chs=64, stem_type='tiered', stem_pool='', act_layer='silu', num_features=1280, attn_layer='se', self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=3) ), halonet50ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), interleave_blocks( types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25, self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=3, num_heads=4)), interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', act_layer='silu', self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=3) ), eca_halonext26ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=16, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', act_layer='silu', attn_layer='eca', self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=2, dim_head=16) ), lambda_resnet26t=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', self_attn_layer='lambda', self_attn_kwargs=dict(r=9) ), lambda_resnet50ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', act_layer='silu', self_attn_layer='lambda', self_attn_kwargs=dict(r=9) ), lambda_resnet26rpt_256=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25), ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25), interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25), ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25), ), stem_chs=64, stem_type='tiered', stem_pool='maxpool', self_attn_layer='lambda', self_attn_kwargs=dict(r=None) ), # experimental haloregnetz_b=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=2, c=48, s=2, gs=16, br=3), ByoBlockCfg(type='bottle', d=6, c=96, s=2, gs=16, br=3), interleave_blocks(types=('bottle', 'self_attn'), every=3, d=12, c=192, s=2, gs=16, br=3), ByoBlockCfg('self_attn', d=2, c=288, s=2, gs=16, br=3), ), stem_chs=32, stem_pool='', downsample='', num_features=1536, act_layer='silu', attn_layer='se', attn_kwargs=dict(rd_ratio=0.25), block_kwargs=dict(bottle_in=True, linear_out=True), self_attn_layer='halo', self_attn_kwargs=dict(block_size=7, halo_size=2, qk_ratio=0.33) ), # experimental lamhalobotnet50ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), 
interleave_blocks( types=('bottle', 'self_attn'), d=4, c=512, s=2, gs=0, br=0.25, self_attn_layer='lambda', self_attn_kwargs=dict(r=13)), interleave_blocks( types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25, self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)), interleave_blocks( types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25, self_attn_layer='bottleneck', self_attn_kwargs=dict()), ), stem_chs=64, stem_type='tiered', stem_pool='', act_layer='silu', ), halo2botnet50ts=ByoModelCfg( blocks=( ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25), interleave_blocks( types=('bottle', 'self_attn'), d=4, c=512, s=2, gs=0, br=0.25, self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)), interleave_blocks( types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25, self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)), interleave_blocks( types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25, self_attn_layer='bottleneck', self_attn_kwargs=dict()), ), stem_chs=64, stem_type='tiered', stem_pool='', act_layer='silu', ), ) def _create_byoanet(variant, cfg_variant=None, pretrained=False, **kwargs): return build_model_with_cfg( ByobNet, variant, pretrained, model_cfg=model_cfgs[variant] if not cfg_variant else model_cfgs[cfg_variant], feature_cfg=dict(flatten_sequential=True), **kwargs, ) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.95, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv1.conv', 'classifier': 'head.fc', 'fixed_input_size': False, 'min_input_size': (3, 224, 224), **kwargs } default_cfgs = generate_default_cfgs({ # GPU-Efficient (ResNet) weights 'botnet26t_256.c1_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/botnet26t_c1_256-167a0e9f.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), 'sebotnet33ts_256.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/sebotnet33ts_a1h2_256-957e3c3e.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94), 'botnet50ts_256.untrained': _cfg( fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), 'eca_botnext26ts_256.c1_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/eca_botnext26ts_c_256-95a898f6.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), 'halonet_h1.untrained': _cfg(input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), 'halonet26t.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halonet26t_a1h_256-3083328c.pth', hf_hub_id='timm/', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)), 'sehalonet33ts.ra2_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/sehalonet33ts_256-87e053f9.pth', hf_hub_id='timm/', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94), 'halonet50ts.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halonet50ts_a1h2_256-f3a3daee.pth', hf_hub_id='timm/', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94), 
'eca_halonext26ts.c1_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/eca_halonext26ts_c_256-06906299.pth', hf_hub_id='timm/', input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94), 'lambda_resnet26t.c1_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet26t_c_256-e5a5c857.pth', hf_hub_id='timm/', min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94), 'lambda_resnet50ts.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet50ts_a1h_256-b87370f7.pth', hf_hub_id='timm/', min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8)), 'lambda_resnet26rpt_256.c1_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet26rpt_c_256-ab00292d.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94), 'haloregnetz_b.ra3_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/haloregnetz_c_raa_256-c8ad7616.pth', hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), first_conv='stem.conv', input_size=(3, 224, 224), pool_size=(7, 7), min_input_size=(3, 224, 224), crop_pct=0.94), 'lamhalobotnet50ts_256.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lamhalobotnet50ts_a1h2_256-fe3d9445.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), 'halo2botnet50ts_256.a1h_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halo2botnet50ts_a1h2_256-fd9c11a3.pth', hf_hub_id='timm/', fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)), }) @register_model def botnet26t_256(pretrained=False, **kwargs) -> ByobNet: """ Bottleneck Transformer w/ ResNet26-T backbone. """ kwargs.setdefault('img_size', 256) return _create_byoanet('botnet26t_256', 'botnet26t', pretrained=pretrained, **kwargs) @register_model def sebotnet33ts_256(pretrained=False, **kwargs) -> ByobNet: """ Bottleneck Transformer w/ a ResNet33-t backbone, SE attn for non Halo blocks, SiLU, """ return _create_byoanet('sebotnet33ts_256', 'sebotnet33ts', pretrained=pretrained, **kwargs) @register_model def botnet50ts_256(pretrained=False, **kwargs) -> ByobNet: """ Bottleneck Transformer w/ ResNet50-T backbone, silu act. """ kwargs.setdefault('img_size', 256) return _create_byoanet('botnet50ts_256', 'botnet50ts', pretrained=pretrained, **kwargs) @register_model def eca_botnext26ts_256(pretrained=False, **kwargs) -> ByobNet: """ Bottleneck Transformer w/ ResNet26-T backbone, silu act. """ kwargs.setdefault('img_size', 256) return _create_byoanet('eca_botnext26ts_256', 'eca_botnext26ts', pretrained=pretrained, **kwargs) @register_model def halonet_h1(pretrained=False, **kwargs) -> ByobNet: """ HaloNet-H1. Halo attention in all stages as per the paper. NOTE: This runs very slowly! """ return _create_byoanet('halonet_h1', pretrained=pretrained, **kwargs) @register_model def halonet26t(pretrained=False, **kwargs) -> ByobNet: """ HaloNet w/ a ResNet26-t backbone. 
Halo attention in final two stages """ return _create_byoanet('halonet26t', pretrained=pretrained, **kwargs) @register_model def sehalonet33ts(pretrained=False, **kwargs) -> ByobNet: """ HaloNet w/ a ResNet33-t backbone, SE attn for non Halo blocks, SiLU, 1-2 Halo in stage 2,3,4. """ return _create_byoanet('sehalonet33ts', pretrained=pretrained, **kwargs) @register_model def halonet50ts(pretrained=False, **kwargs) -> ByobNet: """ HaloNet w/ a ResNet50-t backbone, silu act. Halo attention in final two stages """ return _create_byoanet('halonet50ts', pretrained=pretrained, **kwargs) @register_model def eca_halonext26ts(pretrained=False, **kwargs) -> ByobNet: """ HaloNet w/ a ResNet26-t backbone, silu act. Halo attention in final two stages """ return _create_byoanet('eca_halonext26ts', pretrained=pretrained, **kwargs) @register_model def lambda_resnet26t(pretrained=False, **kwargs) -> ByobNet: """ Lambda-ResNet-26-T. Lambda layers w/ conv pos in last two stages. """ return _create_byoanet('lambda_resnet26t', pretrained=pretrained, **kwargs) @register_model def lambda_resnet50ts(pretrained=False, **kwargs) -> ByobNet: """ Lambda-ResNet-50-TS. SiLU act. Lambda layers w/ conv pos in last two stages. """ return _create_byoanet('lambda_resnet50ts', pretrained=pretrained, **kwargs) @register_model def lambda_resnet26rpt_256(pretrained=False, **kwargs) -> ByobNet: """ Lambda-ResNet-26-R-T. Lambda layers w/ rel pos embed in last two stages. """ kwargs.setdefault('img_size', 256) return _create_byoanet('lambda_resnet26rpt_256', pretrained=pretrained, **kwargs) @register_model def haloregnetz_b(pretrained=False, **kwargs) -> ByobNet: """ Halo + RegNetZ """ return _create_byoanet('haloregnetz_b', pretrained=pretrained, **kwargs) @register_model def lamhalobotnet50ts_256(pretrained=False, **kwargs) -> ByobNet: """ Combo Attention (Lambda + Halo + Bot) Network """ return _create_byoanet('lamhalobotnet50ts_256', 'lamhalobotnet50ts', pretrained=pretrained, **kwargs) @register_model def halo2botnet50ts_256(pretrained=False, **kwargs) -> ByobNet: """ Combo Attention (Halo + Halo + Bot) Network """ return _create_byoanet('halo2botnet50ts_256', 'halo2botnet50ts', pretrained=pretrained, **kwargs)
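All of these configs are registered as ordinary timm models, so they go through the usual factory; a hedged sketch (pretrained weights exist only for the variants with weight URLs in `default_cfgs` above):

```python
import timm
import torch

# HaloNet-26-T: halo self-attention in the final two stages of a ResNet26-T style trunk.
model = timm.create_model('halonet26t', pretrained=False).eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 1000])
```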
pytorch-image-models/timm/models/byoanet.py/0
{ "file_path": "pytorch-image-models/timm/models/byoanet.py", "repo_id": "pytorch-image-models", "token_count": 9703 }
185
""" EfficientFormer-V2 @article{ li2022rethinking, title={Rethinking Vision Transformers for MobileNet Size and Speed}, author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian}, journal={arXiv preprint arXiv:2212.08059}, year={2022} } Significantly refactored and cleaned up for timm from original at: https://github.com/snap-research/EfficientFormer Original code licensed Apache 2.0, Copyright (c) 2022 Snap Inc. Modifications and timm support by / Copyright 2023, Ross Wightman """ import math from functools import partial from typing import Dict import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import create_conv2d, create_norm_layer, get_act_layer, get_norm_layer, ConvNormAct from timm.layers import DropPath, trunc_normal_, to_2tuple, to_ntuple, ndgrid from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq from ._registry import generate_default_cfgs, register_model EfficientFormer_width = { 'L': (40, 80, 192, 384), # 26m 83.3% 6attn 'S2': (32, 64, 144, 288), # 12m 81.6% 4attn dp0.02 'S1': (32, 48, 120, 224), # 6.1m 79.0 'S0': (32, 48, 96, 176), # 75.0 75.7 } EfficientFormer_depth = { 'L': (5, 5, 15, 10), # 26m 83.3% 'S2': (4, 4, 12, 8), # 12m 'S1': (3, 3, 9, 6), # 79.0 'S0': (2, 2, 6, 4), # 75.7 } EfficientFormer_expansion_ratios = { 'L': (4, 4, (4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4), (4, 4, 4, 3, 3, 3, 3, 4, 4, 4)), 'S2': (4, 4, (4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4), (4, 4, 3, 3, 3, 3, 4, 4)), 'S1': (4, 4, (4, 4, 3, 3, 3, 3, 4, 4, 4), (4, 4, 3, 3, 4, 4)), 'S0': (4, 4, (4, 3, 3, 3, 4, 4), (4, 3, 3, 4)), } class ConvNorm(nn.Module): def __init__( self, in_channels, out_channels, kernel_size=1, stride=1, padding='', dilation=1, groups=1, bias=True, norm_layer='batchnorm2d', norm_kwargs=None, ): norm_kwargs = norm_kwargs or {} super(ConvNorm, self).__init__() self.conv = create_conv2d( in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias, ) self.bn = create_norm_layer(norm_layer, out_channels, **norm_kwargs) def forward(self, x): x = self.conv(x) x = self.bn(x) return x class Attention2d(torch.nn.Module): attention_bias_cache: Dict[str, torch.Tensor] def __init__( self, dim=384, key_dim=32, num_heads=8, attn_ratio=4, resolution=7, act_layer=nn.GELU, stride=None, ): super().__init__() self.num_heads = num_heads self.scale = key_dim ** -0.5 self.key_dim = key_dim resolution = to_2tuple(resolution) if stride is not None: resolution = tuple([math.ceil(r / stride) for r in resolution]) self.stride_conv = ConvNorm(dim, dim, kernel_size=3, stride=stride, groups=dim) self.upsample = nn.Upsample(scale_factor=stride, mode='bilinear') else: self.stride_conv = None self.upsample = None self.resolution = resolution self.N = self.resolution[0] * self.resolution[1] self.d = int(attn_ratio * key_dim) self.dh = int(attn_ratio * key_dim) * num_heads self.attn_ratio = attn_ratio kh = self.key_dim * self.num_heads self.q = ConvNorm(dim, kh) self.k = ConvNorm(dim, kh) self.v = ConvNorm(dim, self.dh) self.v_local = ConvNorm(self.dh, self.dh, kernel_size=3, groups=self.dh) self.talking_head1 = nn.Conv2d(self.num_heads, self.num_heads, kernel_size=1) self.talking_head2 = nn.Conv2d(self.num_heads, self.num_heads, kernel_size=1) self.act = act_layer() self.proj = ConvNorm(self.dh, dim, 1) pos = torch.stack(ndgrid(torch.arange(self.resolution[0]), 
torch.arange(self.resolution[1]))).flatten(1) rel_pos = (pos[..., :, None] - pos[..., None, :]).abs() rel_pos = (rel_pos[0] * self.resolution[1]) + rel_pos[1] self.attention_biases = torch.nn.Parameter(torch.zeros(num_heads, self.N)) self.register_buffer('attention_bias_idxs', torch.LongTensor(rel_pos), persistent=False) self.attention_bias_cache = {} # per-device attention_biases cache (data-parallel compat) @torch.no_grad() def train(self, mode=True): super().train(mode) if mode and self.attention_bias_cache: self.attention_bias_cache = {} # clear ab cache def get_attention_biases(self, device: torch.device) -> torch.Tensor: if torch.jit.is_tracing() or self.training: return self.attention_biases[:, self.attention_bias_idxs] else: device_key = str(device) if device_key not in self.attention_bias_cache: self.attention_bias_cache[device_key] = self.attention_biases[:, self.attention_bias_idxs] return self.attention_bias_cache[device_key] def forward(self, x): B, C, H, W = x.shape if self.stride_conv is not None: x = self.stride_conv(x) q = self.q(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2) k = self.k(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 2, 3) v = self.v(x) v_local = self.v_local(v) v = v.reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2) attn = (q @ k) * self.scale attn = attn + self.get_attention_biases(x.device) attn = self.talking_head1(attn) attn = attn.softmax(dim=-1) attn = self.talking_head2(attn) x = (attn @ v).transpose(2, 3) x = x.reshape(B, self.dh, self.resolution[0], self.resolution[1]) + v_local if self.upsample is not None: x = self.upsample(x) x = self.act(x) x = self.proj(x) return x class LocalGlobalQuery(torch.nn.Module): def __init__(self, in_dim, out_dim): super().__init__() self.pool = nn.AvgPool2d(1, 2, 0) self.local = nn.Conv2d(in_dim, in_dim, kernel_size=3, stride=2, padding=1, groups=in_dim) self.proj = ConvNorm(in_dim, out_dim, 1) def forward(self, x): local_q = self.local(x) pool_q = self.pool(x) q = local_q + pool_q q = self.proj(q) return q class Attention2dDownsample(torch.nn.Module): attention_bias_cache: Dict[str, torch.Tensor] def __init__( self, dim=384, key_dim=16, num_heads=8, attn_ratio=4, resolution=7, out_dim=None, act_layer=nn.GELU, ): super().__init__() self.num_heads = num_heads self.scale = key_dim ** -0.5 self.key_dim = key_dim self.resolution = to_2tuple(resolution) self.resolution2 = tuple([math.ceil(r / 2) for r in self.resolution]) self.N = self.resolution[0] * self.resolution[1] self.N2 = self.resolution2[0] * self.resolution2[1] self.d = int(attn_ratio * key_dim) self.dh = int(attn_ratio * key_dim) * num_heads self.attn_ratio = attn_ratio self.out_dim = out_dim or dim kh = self.key_dim * self.num_heads self.q = LocalGlobalQuery(dim, kh) self.k = ConvNorm(dim, kh, 1) self.v = ConvNorm(dim, self.dh, 1) self.v_local = ConvNorm(self.dh, self.dh, kernel_size=3, stride=2, groups=self.dh) self.act = act_layer() self.proj = ConvNorm(self.dh, self.out_dim, 1) self.attention_biases = nn.Parameter(torch.zeros(num_heads, self.N)) k_pos = torch.stack(ndgrid(torch.arange(self.resolution[0]), torch.arange(self.resolution[1]))).flatten(1) q_pos = torch.stack(ndgrid( torch.arange(0, self.resolution[0], step=2), torch.arange(0, self.resolution[1], step=2) )).flatten(1) rel_pos = (q_pos[..., :, None] - k_pos[..., None, :]).abs() rel_pos = (rel_pos[0] * self.resolution[1]) + rel_pos[1] self.register_buffer('attention_bias_idxs', rel_pos, persistent=False) self.attention_bias_cache = {} # per-device attention_biases 
cache (data-parallel compat) @torch.no_grad() def train(self, mode=True): super().train(mode) if mode and self.attention_bias_cache: self.attention_bias_cache = {} # clear ab cache def get_attention_biases(self, device: torch.device) -> torch.Tensor: if torch.jit.is_tracing() or self.training: return self.attention_biases[:, self.attention_bias_idxs] else: device_key = str(device) if device_key not in self.attention_bias_cache: self.attention_bias_cache[device_key] = self.attention_biases[:, self.attention_bias_idxs] return self.attention_bias_cache[device_key] def forward(self, x): B, C, H, W = x.shape q = self.q(x).reshape(B, self.num_heads, -1, self.N2).permute(0, 1, 3, 2) k = self.k(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 2, 3) v = self.v(x) v_local = self.v_local(v) v = v.reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2) attn = (q @ k) * self.scale attn = attn + self.get_attention_biases(x.device) attn = attn.softmax(dim=-1) x = (attn @ v).transpose(2, 3) x = x.reshape(B, self.dh, self.resolution2[0], self.resolution2[1]) + v_local x = self.act(x) x = self.proj(x) return x class Downsample(nn.Module): def __init__( self, in_chs, out_chs, kernel_size=3, stride=2, padding=1, resolution=7, use_attn=False, act_layer=nn.GELU, norm_layer=nn.BatchNorm2d, ): super().__init__() kernel_size = to_2tuple(kernel_size) stride = to_2tuple(stride) padding = to_2tuple(padding) norm_layer = norm_layer or nn.Identity() self.conv = ConvNorm( in_chs, out_chs, kernel_size=kernel_size, stride=stride, padding=padding, norm_layer=norm_layer, ) if use_attn: self.attn = Attention2dDownsample( dim=in_chs, out_dim=out_chs, resolution=resolution, act_layer=act_layer, ) else: self.attn = None def forward(self, x): out = self.conv(x) if self.attn is not None: return self.attn(x) + out return out class ConvMlpWithNorm(nn.Module): """ Implementation of MLP with 1*1 convolutions. 
Input: tensor with shape [B, C, H, W] """ def __init__( self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, norm_layer=nn.BatchNorm2d, drop=0., mid_conv=False, ): super().__init__() out_features = out_features or in_features hidden_features = hidden_features or in_features self.fc1 = ConvNormAct( in_features, hidden_features, 1, bias=True, norm_layer=norm_layer, act_layer=act_layer) if mid_conv: self.mid = ConvNormAct( hidden_features, hidden_features, 3, groups=hidden_features, bias=True, norm_layer=norm_layer, act_layer=act_layer) else: self.mid = nn.Identity() self.drop1 = nn.Dropout(drop) self.fc2 = ConvNorm(hidden_features, out_features, 1, norm_layer=norm_layer) self.drop2 = nn.Dropout(drop) def forward(self, x): x = self.fc1(x) x = self.mid(x) x = self.drop1(x) x = self.fc2(x) x = self.drop2(x) return x class LayerScale2d(nn.Module): def __init__(self, dim, init_values=1e-5, inplace=False): super().__init__() self.inplace = inplace self.gamma = nn.Parameter(init_values * torch.ones(dim)) def forward(self, x): gamma = self.gamma.view(1, -1, 1, 1) return x.mul_(gamma) if self.inplace else x * gamma class EfficientFormerV2Block(nn.Module): def __init__( self, dim, mlp_ratio=4., act_layer=nn.GELU, norm_layer=nn.BatchNorm2d, proj_drop=0., drop_path=0., layer_scale_init_value=1e-5, resolution=7, stride=None, use_attn=True, ): super().__init__() if use_attn: self.token_mixer = Attention2d( dim, resolution=resolution, act_layer=act_layer, stride=stride, ) self.ls1 = LayerScale2d( dim, layer_scale_init_value) if layer_scale_init_value is not None else nn.Identity() self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() else: self.token_mixer = None self.ls1 = None self.drop_path1 = None self.mlp = ConvMlpWithNorm( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, norm_layer=norm_layer, drop=proj_drop, mid_conv=True, ) self.ls2 = LayerScale2d( dim, layer_scale_init_value) if layer_scale_init_value is not None else nn.Identity() self.drop_path2 = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() def forward(self, x): if self.token_mixer is not None: x = x + self.drop_path1(self.ls1(self.token_mixer(x))) x = x + self.drop_path2(self.ls2(self.mlp(x))) return x class Stem4(nn.Sequential): def __init__(self, in_chs, out_chs, act_layer=nn.GELU, norm_layer=nn.BatchNorm2d): super().__init__() self.stride = 4 self.conv1 = ConvNormAct( in_chs, out_chs // 2, kernel_size=3, stride=2, padding=1, bias=True, norm_layer=norm_layer, act_layer=act_layer ) self.conv2 = ConvNormAct( out_chs // 2, out_chs, kernel_size=3, stride=2, padding=1, bias=True, norm_layer=norm_layer, act_layer=act_layer ) class EfficientFormerV2Stage(nn.Module): def __init__( self, dim, dim_out, depth, resolution=7, downsample=True, block_stride=None, downsample_use_attn=False, block_use_attn=False, num_vit=1, mlp_ratio=4., proj_drop=.0, drop_path=0., layer_scale_init_value=1e-5, act_layer=nn.GELU, norm_layer=nn.BatchNorm2d, ): super().__init__() self.grad_checkpointing = False mlp_ratio = to_ntuple(depth)(mlp_ratio) resolution = to_2tuple(resolution) if downsample: self.downsample = Downsample( dim, dim_out, use_attn=downsample_use_attn, resolution=resolution, norm_layer=norm_layer, act_layer=act_layer, ) dim = dim_out resolution = tuple([math.ceil(r / 2) for r in resolution]) else: assert dim == dim_out self.downsample = nn.Identity() blocks = [] for block_idx in range(depth): remain_idx = depth - num_vit - 1 b = EfficientFormerV2Block( dim, resolution=resolution, stride=block_stride, mlp_ratio=mlp_ratio[block_idx], use_attn=block_use_attn and block_idx > remain_idx, proj_drop=proj_drop, drop_path=drop_path[block_idx], layer_scale_init_value=layer_scale_init_value, act_layer=act_layer, norm_layer=norm_layer, ) blocks += [b] self.blocks = nn.Sequential(*blocks) def forward(self, x): x = self.downsample(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) return x class EfficientFormerV2(nn.Module): def __init__( self, depths, in_chans=3, img_size=224, global_pool='avg', embed_dims=None, downsamples=None, mlp_ratios=4, norm_layer='batchnorm2d', norm_eps=1e-5, act_layer='gelu', num_classes=1000, drop_rate=0., proj_drop_rate=0., drop_path_rate=0., layer_scale_init_value=1e-5, num_vit=0, distillation=True, ): super().__init__() assert global_pool in ('avg', '') self.num_classes = num_classes self.global_pool = global_pool self.feature_info = [] img_size = to_2tuple(img_size) norm_layer = partial(get_norm_layer(norm_layer), eps=norm_eps) act_layer = get_act_layer(act_layer) self.stem = Stem4(in_chans, embed_dims[0], act_layer=act_layer, norm_layer=norm_layer) prev_dim = embed_dims[0] stride = 4 num_stages = len(depths) dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)] downsamples = downsamples or (False,) + (True,) * (len(depths) - 1) mlp_ratios = to_ntuple(num_stages)(mlp_ratios) stages = [] for i in range(num_stages): curr_resolution = tuple([math.ceil(s / stride) for s in img_size]) stage = EfficientFormerV2Stage( prev_dim, embed_dims[i], depth=depths[i], resolution=curr_resolution, downsample=downsamples[i], block_stride=2 if i == 2 else None, downsample_use_attn=i >= 3, block_use_attn=i >= 2, num_vit=num_vit, mlp_ratio=mlp_ratios[i], proj_drop=proj_drop_rate, drop_path=dpr[i], layer_scale_init_value=layer_scale_init_value, act_layer=act_layer, norm_layer=norm_layer, ) if downsamples[i]: stride *= 2 prev_dim = embed_dims[i] self.feature_info += [dict(num_chs=prev_dim, reduction=stride, 
module=f'stages.{i}')] stages.append(stage) self.stages = nn.Sequential(*stages) # Classifier head self.num_features = embed_dims[-1] self.norm = norm_layer(embed_dims[-1]) self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity() self.dist = distillation if self.dist: self.head_dist = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity() else: self.head_dist = None self.apply(self.init_weights) self.distilled_training = False # init for classification def init_weights(self, m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if m.bias is not None: nn.init.constant_(m.bias, 0) @torch.jit.ignore def no_weight_decay(self): return {k for k, _ in self.named_parameters() if 'attention_biases' in k} @torch.jit.ignore def group_matcher(self, coarse=False): matcher = dict( stem=r'^stem', # stem and embed blocks=[(r'^stages\.(\d+)', None), (r'^norm', (99999,))] ) return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in self.stages: s.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head, self.head_dist def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes if global_pool is not None: self.global_pool = global_pool self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() self.head_dist = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() @torch.jit.ignore def set_distilled_training(self, enable=True): self.distilled_training = enable def forward_features(self, x): x = self.stem(x) x = self.stages(x) x = self.norm(x) return x def forward_head(self, x, pre_logits: bool = False): if self.global_pool == 'avg': x = x.mean(dim=(2, 3)) x = self.head_drop(x) if pre_logits: return x x, x_dist = self.head(x), self.head_dist(x) if self.distilled_training and self.training and not torch.jit.is_scripting(): # only return separate classification predictions when training in distilled mode return x, x_dist else: # during standard train/finetune, inference average the classifier predictions return (x + x_dist) / 2 def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'fixed_input_size': True, 'crop_pct': .95, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'classifier': ('head', 'head_dist'), 'first_conv': 'stem.conv1.conv', **kwargs } default_cfgs = generate_default_cfgs({ 'efficientformerv2_s0.snap_dist_in1k': _cfg( hf_hub_id='timm/', ), 'efficientformerv2_s1.snap_dist_in1k': _cfg( hf_hub_id='timm/', ), 'efficientformerv2_s2.snap_dist_in1k': _cfg( hf_hub_id='timm/', ), 'efficientformerv2_l.snap_dist_in1k': _cfg( hf_hub_id='timm/', ), }) def _create_efficientformerv2(variant, pretrained=False, **kwargs): out_indices = kwargs.pop('out_indices', (0, 1, 2, 3)) model = build_model_with_cfg( EfficientFormerV2, variant, pretrained, feature_cfg=dict(flatten_sequential=True, out_indices=out_indices), **kwargs) return model @register_model def efficientformerv2_s0(pretrained=False, **kwargs) -> EfficientFormerV2: model_args = dict( depths=EfficientFormer_depth['S0'], embed_dims=EfficientFormer_width['S0'], num_vit=2, drop_path_rate=0.0, mlp_ratios=EfficientFormer_expansion_ratios['S0'], ) return _create_efficientformerv2('efficientformerv2_s0', pretrained=pretrained, 
**dict(model_args, **kwargs)) @register_model def efficientformerv2_s1(pretrained=False, **kwargs) -> EfficientFormerV2: model_args = dict( depths=EfficientFormer_depth['S1'], embed_dims=EfficientFormer_width['S1'], num_vit=2, drop_path_rate=0.0, mlp_ratios=EfficientFormer_expansion_ratios['S1'], ) return _create_efficientformerv2('efficientformerv2_s1', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def efficientformerv2_s2(pretrained=False, **kwargs) -> EfficientFormerV2: model_args = dict( depths=EfficientFormer_depth['S2'], embed_dims=EfficientFormer_width['S2'], num_vit=4, drop_path_rate=0.02, mlp_ratios=EfficientFormer_expansion_ratios['S2'], ) return _create_efficientformerv2('efficientformerv2_s2', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def efficientformerv2_l(pretrained=False, **kwargs) -> EfficientFormerV2: model_args = dict( depths=EfficientFormer_depth['L'], embed_dims=EfficientFormer_width['L'], num_vit=6, drop_path_rate=0.1, mlp_ratios=EfficientFormer_expansion_ratios['L'], ) return _create_efficientformerv2('efficientformerv2_l', pretrained=pretrained, **dict(model_args, **kwargs))
pytorch-image-models/timm/models/efficientformer_v2.py/0
{ "file_path": "pytorch-image-models/timm/models/efficientformer_v2.py", "repo_id": "pytorch-image-models", "token_count": 12721 }
186
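For context, the `@register_model` entrypoints defined in the file above (`efficientformerv2_s0` through `efficientformerv2_l`) are normally built through `timm.create_model` rather than by constructing `EfficientFormerV2` directly. A minimal sketch, assuming `timm` is installed; the pretrained tags map to the `snap_dist_in1k` configs listed in `default_cfgs`:

```python
import torch
import timm

# Sketch: build the smallest registered variant. Setting pretrained=True would pull
# the efficientformerv2_s0.snap_dist_in1k weights referenced in default_cfgs.
model = timm.create_model('efficientformerv2_s0', pretrained=False)
model.eval()

# The default cfg is fixed at 224x224 (fixed_input_size=True) because the
# attention bias tables are sized from the input resolution.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1000]); head and head_dist are averaged at inference
```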
""" InceptionNeXt paper: https://arxiv.org/abs/2303.16900 Original implementation & weights from: https://github.com/sail-sg/inceptionnext """ from functools import partial import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import trunc_normal_, DropPath, to_2tuple, get_padding, SelectAdaptivePool2d from ._builder import build_model_with_cfg from ._manipulate import checkpoint_seq from ._registry import register_model, generate_default_cfgs class InceptionDWConv2d(nn.Module): """ Inception depthwise convolution """ def __init__( self, in_chs, square_kernel_size=3, band_kernel_size=11, branch_ratio=0.125, dilation=1, ): super().__init__() gc = int(in_chs * branch_ratio) # channel numbers of a convolution branch square_padding = get_padding(square_kernel_size, dilation=dilation) band_padding = get_padding(band_kernel_size, dilation=dilation) self.dwconv_hw = nn.Conv2d( gc, gc, square_kernel_size, padding=square_padding, dilation=dilation, groups=gc) self.dwconv_w = nn.Conv2d( gc, gc, (1, band_kernel_size), padding=(0, band_padding), dilation=(1, dilation), groups=gc) self.dwconv_h = nn.Conv2d( gc, gc, (band_kernel_size, 1), padding=(band_padding, 0), dilation=(dilation, 1), groups=gc) self.split_indexes = (in_chs - 3 * gc, gc, gc, gc) def forward(self, x): x_id, x_hw, x_w, x_h = torch.split(x, self.split_indexes, dim=1) return torch.cat(( x_id, self.dwconv_hw(x_hw), self.dwconv_w(x_w), self.dwconv_h(x_h) ), dim=1, ) class ConvMlp(nn.Module): """ MLP using 1x1 convs that keeps spatial dims copied from timm: https://github.com/huggingface/pytorch-image-models/blob/v0.6.11/timm/models/layers/mlp.py """ def __init__( self, in_features, hidden_features=None, out_features=None, act_layer=nn.ReLU, norm_layer=None, bias=True, drop=0., ): super().__init__() out_features = out_features or in_features hidden_features = hidden_features or in_features bias = to_2tuple(bias) self.fc1 = nn.Conv2d(in_features, hidden_features, kernel_size=1, bias=bias[0]) self.norm = norm_layer(hidden_features) if norm_layer else nn.Identity() self.act = act_layer() self.drop = nn.Dropout(drop) self.fc2 = nn.Conv2d(hidden_features, out_features, kernel_size=1, bias=bias[1]) def forward(self, x): x = self.fc1(x) x = self.norm(x) x = self.act(x) x = self.drop(x) x = self.fc2(x) return x class MlpClassifierHead(nn.Module): """ MLP classification head """ def __init__( self, dim, num_classes=1000, pool_type='avg', mlp_ratio=3, act_layer=nn.GELU, norm_layer=partial(nn.LayerNorm, eps=1e-6), drop=0., bias=True ): super().__init__() self.global_pool = SelectAdaptivePool2d(pool_type=pool_type, flatten=True) in_features = dim * self.global_pool.feat_mult() hidden_features = int(mlp_ratio * in_features) self.fc1 = nn.Linear(in_features, hidden_features, bias=bias) self.act = act_layer() self.norm = norm_layer(hidden_features) self.fc2 = nn.Linear(hidden_features, num_classes, bias=bias) self.drop = nn.Dropout(drop) def forward(self, x): x = self.global_pool(x) x = self.fc1(x) x = self.act(x) x = self.norm(x) x = self.drop(x) x = self.fc2(x) return x class MetaNeXtBlock(nn.Module): """ MetaNeXtBlock Block Args: dim (int): Number of input channels. drop_path (float): Stochastic depth rate. Default: 0.0 ls_init_value (float): Init value for Layer Scale. Default: 1e-6. 
""" def __init__( self, dim, dilation=1, token_mixer=InceptionDWConv2d, norm_layer=nn.BatchNorm2d, mlp_layer=ConvMlp, mlp_ratio=4, act_layer=nn.GELU, ls_init_value=1e-6, drop_path=0., ): super().__init__() self.token_mixer = token_mixer(dim, dilation=dilation) self.norm = norm_layer(dim) self.mlp = mlp_layer(dim, int(mlp_ratio * dim), act_layer=act_layer) self.gamma = nn.Parameter(ls_init_value * torch.ones(dim)) if ls_init_value else None self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x): shortcut = x x = self.token_mixer(x) x = self.norm(x) x = self.mlp(x) if self.gamma is not None: x = x.mul(self.gamma.reshape(1, -1, 1, 1)) x = self.drop_path(x) + shortcut return x class MetaNeXtStage(nn.Module): def __init__( self, in_chs, out_chs, stride=2, depth=2, dilation=(1, 1), drop_path_rates=None, ls_init_value=1.0, token_mixer=InceptionDWConv2d, act_layer=nn.GELU, norm_layer=None, mlp_ratio=4, ): super().__init__() self.grad_checkpointing = False if stride > 1 or dilation[0] != dilation[1]: self.downsample = nn.Sequential( norm_layer(in_chs), nn.Conv2d( in_chs, out_chs, kernel_size=2, stride=stride, dilation=dilation[0], ), ) else: self.downsample = nn.Identity() drop_path_rates = drop_path_rates or [0.] * depth stage_blocks = [] for i in range(depth): stage_blocks.append(MetaNeXtBlock( dim=out_chs, dilation=dilation[1], drop_path=drop_path_rates[i], ls_init_value=ls_init_value, token_mixer=token_mixer, act_layer=act_layer, norm_layer=norm_layer, mlp_ratio=mlp_ratio, )) self.blocks = nn.Sequential(*stage_blocks) def forward(self, x): x = self.downsample(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) return x class MetaNeXt(nn.Module): r""" MetaNeXt A PyTorch impl of : `InceptionNeXt: When Inception Meets ConvNeXt` - https://arxiv.org/abs/2303.16900 Args: in_chans (int): Number of input image channels. Default: 3 num_classes (int): Number of classes for classification head. Default: 1000 depths (tuple(int)): Number of blocks at each stage. Default: (3, 3, 9, 3) dims (tuple(int)): Feature dimension at each stage. Default: (96, 192, 384, 768) token_mixers: Token mixer function. Default: nn.Identity norm_layer: Normalization layer. Default: nn.BatchNorm2d act_layer: Activation function for MLP. Default: nn.GELU mlp_ratios (int or tuple(int)): MLP ratios. Default: (4, 4, 4, 3) head_fn: classifier head drop_rate (float): Head dropout rate drop_path_rate (float): Stochastic depth rate. Default: 0. ls_init_value (float): Init value for Layer Scale. Default: 1e-6. 
""" def __init__( self, in_chans=3, num_classes=1000, global_pool='avg', output_stride=32, depths=(3, 3, 9, 3), dims=(96, 192, 384, 768), token_mixers=InceptionDWConv2d, norm_layer=nn.BatchNorm2d, act_layer=nn.GELU, mlp_ratios=(4, 4, 4, 3), head_fn=MlpClassifierHead, drop_rate=0., drop_path_rate=0., ls_init_value=1e-6, ): super().__init__() num_stage = len(depths) if not isinstance(token_mixers, (list, tuple)): token_mixers = [token_mixers] * num_stage if not isinstance(mlp_ratios, (list, tuple)): mlp_ratios = [mlp_ratios] * num_stage self.num_classes = num_classes self.global_pool = global_pool self.drop_rate = drop_rate self.feature_info = [] self.stem = nn.Sequential( nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4), norm_layer(dims[0]) ) dp_rates = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)] prev_chs = dims[0] curr_stride = 4 dilation = 1 # feature resolution stages, each consisting of multiple residual blocks self.stages = nn.Sequential() for i in range(num_stage): stride = 2 if curr_stride == 2 or i > 0 else 1 if curr_stride >= output_stride and stride > 1: dilation *= stride stride = 1 curr_stride *= stride first_dilation = 1 if dilation in (1, 2) else 2 out_chs = dims[i] self.stages.append(MetaNeXtStage( prev_chs, out_chs, stride=stride if i > 0 else 1, dilation=(first_dilation, dilation), depth=depths[i], drop_path_rates=dp_rates[i], ls_init_value=ls_init_value, act_layer=act_layer, token_mixer=token_mixers[i], norm_layer=norm_layer, mlp_ratio=mlp_ratios[i], )) prev_chs = out_chs self.feature_info += [dict(num_chs=prev_chs, reduction=curr_stride, module=f'stages.{i}')] self.num_features = prev_chs if self.num_classes > 0: if issubclass(head_fn, MlpClassifierHead): assert self.global_pool, 'Cannot disable global pooling with MLP head present.' self.head = head_fn(self.num_features, num_classes, pool_type=self.global_pool, drop=drop_rate) else: if self.global_pool: self.head = SelectAdaptivePool2d(pool_type=self.global_pool, flatten=True) else: self.head = nn.Identity() self.apply(self._init_weights) def _init_weights(self, m): if isinstance(m, (nn.Conv2d, nn.Linear)): trunc_normal_(m.weight, std=.02) if m.bias is not None: nn.init.constant_(m.bias, 0) @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^stem', blocks=r'^stages\.(\d+)' if coarse else [ (r'^stages\.(\d+)\.downsample', (0,)), # blocks (r'^stages\.(\d+)\.blocks\.(\d+)', None), ] ) @torch.jit.ignore def get_classifier(self): return self.head.fc2 def reset_classifier(self, num_classes=0, global_pool=None, head_fn=MlpClassifierHead): if global_pool is not None: self.global_pool = global_pool if num_classes > 0: if issubclass(head_fn, MlpClassifierHead): assert self.global_pool, 'Cannot disable global pooling with MLP head present.' 
self.head = head_fn(self.num_features, num_classes, pool_type=self.global_pool, drop=self.drop_rate) else: if self.global_pool: self.head = SelectAdaptivePool2d(pool_type=self.global_pool, flatten=True) else: self.head = nn.Identity() @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in self.stages: s.grad_checkpointing = enable @torch.jit.ignore def no_weight_decay(self): return set() def forward_features(self, x): x = self.stem(x) x = self.stages(x) return x def forward_head(self, x, pre_logits: bool = False): if pre_logits: if hasattr(self.head, 'global_pool'): x = self.head.global_pool(x) return x return self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.0', 'classifier': 'head.fc2', **kwargs } default_cfgs = generate_default_cfgs({ 'inception_next_tiny.sail_in1k': _cfg( hf_hub_id='timm/', # url='https://github.com/sail-sg/inceptionnext/releases/download/model/inceptionnext_tiny.pth', ), 'inception_next_small.sail_in1k': _cfg( hf_hub_id='timm/', # url='https://github.com/sail-sg/inceptionnext/releases/download/model/inceptionnext_small.pth', ), 'inception_next_base.sail_in1k': _cfg( hf_hub_id='timm/', # url='https://github.com/sail-sg/inceptionnext/releases/download/model/inceptionnext_base.pth', crop_pct=0.95, ), 'inception_next_base.sail_in1k_384': _cfg( hf_hub_id='timm/', # url='https://github.com/sail-sg/inceptionnext/releases/download/model/inceptionnext_base_384.pth', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, ), }) def _create_inception_next(variant, pretrained=False, **kwargs): model = build_model_with_cfg( MetaNeXt, variant, pretrained, feature_cfg=dict(out_indices=(0, 1, 2, 3), flatten_sequential=True), **kwargs, ) return model @register_model def inception_next_tiny(pretrained=False, **kwargs): model_args = dict( depths=(3, 3, 9, 3), dims=(96, 192, 384, 768), token_mixers=InceptionDWConv2d, ) return _create_inception_next('inception_next_tiny', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def inception_next_small(pretrained=False, **kwargs): model_args = dict( depths=(3, 3, 27, 3), dims=(96, 192, 384, 768), token_mixers=InceptionDWConv2d, ) return _create_inception_next('inception_next_small', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def inception_next_base(pretrained=False, **kwargs): model_args = dict( depths=(3, 3, 27, 3), dims=(128, 256, 512, 1024), token_mixers=InceptionDWConv2d, ) return _create_inception_next('inception_next_base', pretrained=pretrained, **dict(model_args, **kwargs))
pytorch-image-models/timm/models/inception_next.py/0
{ "file_path": "pytorch-image-models/timm/models/inception_next.py", "repo_id": "pytorch-image-models", "token_count": 7709 }
187
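As a usage note, `_create_inception_next` above wires `feature_cfg` with `out_indices=(0, 1, 2, 3)`, so besides classification the registered InceptionNeXt variants can also serve as multi-scale feature backbones. A small sketch of that path, assuming `timm` is installed:

```python
import torch
import timm

# Sketch: use InceptionNeXt-Tiny as a 4-stage feature backbone.
# features_only relies on the feature_cfg/out_indices set in _create_inception_next.
backbone = timm.create_model('inception_next_tiny', pretrained=False, features_only=True)
backbone.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = backbone(x)

# feature_info above reports channels (96, 192, 384, 768) at strides 4/8/16/32,
# so for a 224x224 input the maps are 56/28/14/7 pixels on a side.
for f in features:
    print(tuple(f.shape))
```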
""" pnasnet5large implementation grabbed from Cadene's pretrained models Additional credit to https://github.com/creafz https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/pnasnet.py """ from collections import OrderedDict from functools import partial import torch import torch.nn as nn import torch.nn.functional as F from timm.layers import ConvNormAct, create_conv2d, create_pool2d, create_classifier from ._builder import build_model_with_cfg from ._registry import register_model, generate_default_cfgs __all__ = ['PNASNet5Large'] class SeparableConv2d(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''): super(SeparableConv2d, self).__init__() self.depthwise_conv2d = create_conv2d( in_channels, in_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=in_channels) self.pointwise_conv2d = create_conv2d( in_channels, out_channels, kernel_size=1, padding=padding) def forward(self, x): x = self.depthwise_conv2d(x) x = self.pointwise_conv2d(x) return x class BranchSeparables(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, stem_cell=False, padding=''): super(BranchSeparables, self).__init__() middle_channels = out_channels if stem_cell else in_channels self.act_1 = nn.ReLU() self.separable_1 = SeparableConv2d( in_channels, middle_channels, kernel_size, stride=stride, padding=padding) self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001) self.act_2 = nn.ReLU() self.separable_2 = SeparableConv2d( middle_channels, out_channels, kernel_size, stride=1, padding=padding) self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act_1(x) x = self.separable_1(x) x = self.bn_sep_1(x) x = self.act_2(x) x = self.separable_2(x) x = self.bn_sep_2(x) return x class ActConvBn(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''): super(ActConvBn, self).__init__() self.act = nn.ReLU() self.conv = create_conv2d( in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) self.bn = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act(x) x = self.conv(x) x = self.bn(x) return x class FactorizedReduction(nn.Module): def __init__(self, in_channels, out_channels, padding=''): super(FactorizedReduction, self).__init__() self.act = nn.ReLU() self.path_1 = nn.Sequential(OrderedDict([ ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), ])) self.path_2 = nn.Sequential(OrderedDict([ ('pad', nn.ZeroPad2d((-1, 1, -1, 1))), # shift ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), ])) self.final_path_bn = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act(x) x_path1 = self.path_1(x) x_path2 = self.path_2(x) out = self.final_path_bn(torch.cat([x_path1, x_path2], 1)) return out class CellBase(nn.Module): def cell_forward(self, x_left, x_right): x_comb_iter_0_left = self.comb_iter_0_left(x_left) x_comb_iter_0_right = self.comb_iter_0_right(x_left) x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right x_comb_iter_1_left = self.comb_iter_1_left(x_right) x_comb_iter_1_right = self.comb_iter_1_right(x_right) x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right x_comb_iter_2_left = self.comb_iter_2_left(x_right) x_comb_iter_2_right = 
self.comb_iter_2_right(x_right) x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right x_comb_iter_3_left = self.comb_iter_3_left(x_comb_iter_2) x_comb_iter_3_right = self.comb_iter_3_right(x_right) x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right x_comb_iter_4_left = self.comb_iter_4_left(x_left) if self.comb_iter_4_right is not None: x_comb_iter_4_right = self.comb_iter_4_right(x_right) else: x_comb_iter_4_right = x_right x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right x_out = torch.cat([x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) return x_out class CellStem0(CellBase): def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): super(CellStem0, self).__init__() self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type) self.comb_iter_0_left = BranchSeparables( in_chs_left, out_chs_left, kernel_size=5, stride=2, stem_cell=True, padding=pad_type) self.comb_iter_0_right = nn.Sequential(OrderedDict([ ('max_pool', create_pool2d('max', 3, stride=2, padding=pad_type)), ('conv', create_conv2d(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)), ('bn', nn.BatchNorm2d(out_chs_left, eps=0.001)), ])) self.comb_iter_1_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=7, stride=2, padding=pad_type) self.comb_iter_1_right = create_pool2d('max', 3, stride=2, padding=pad_type) self.comb_iter_2_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=5, stride=2, padding=pad_type) self.comb_iter_2_right = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, stride=2, padding=pad_type) self.comb_iter_3_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, padding=pad_type) self.comb_iter_3_right = create_pool2d('max', 3, stride=2, padding=pad_type) self.comb_iter_4_left = BranchSeparables( in_chs_right, out_chs_right, kernel_size=3, stride=2, stem_cell=True, padding=pad_type) self.comb_iter_4_right = ActConvBn( out_chs_right, out_chs_right, kernel_size=1, stride=2, padding=pad_type) def forward(self, x_left): x_right = self.conv_1x1(x_left) x_out = self.cell_forward(x_left, x_right) return x_out class Cell(CellBase): def __init__( self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type='', is_reduction=False, match_prev_layer_dims=False, ): super(Cell, self).__init__() # If `is_reduction` is set to `True` stride 2 is used for # convolution and pooling layers to reduce the spatial size of # the output of a cell approximately by a factor of 2. stride = 2 if is_reduction else 1 # If `match_prev_layer_dimensions` is set to `True` # `FactorizedReduction` is used to reduce the spatial size # of the left input of a cell approximately by a factor of 2. 
self.match_prev_layer_dimensions = match_prev_layer_dims if match_prev_layer_dims: self.conv_prev_1x1 = FactorizedReduction(in_chs_left, out_chs_left, padding=pad_type) else: self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type) self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type) self.comb_iter_0_left = BranchSeparables( out_chs_left, out_chs_left, kernel_size=5, stride=stride, padding=pad_type) self.comb_iter_0_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_1_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=7, stride=stride, padding=pad_type) self.comb_iter_1_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_2_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=5, stride=stride, padding=pad_type) self.comb_iter_2_right = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, stride=stride, padding=pad_type) self.comb_iter_3_left = BranchSeparables(out_chs_right, out_chs_right, kernel_size=3) self.comb_iter_3_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_4_left = BranchSeparables( out_chs_left, out_chs_left, kernel_size=3, stride=stride, padding=pad_type) if is_reduction: self.comb_iter_4_right = ActConvBn( out_chs_right, out_chs_right, kernel_size=1, stride=stride, padding=pad_type) else: self.comb_iter_4_right = None def forward(self, x_left, x_right): x_left = self.conv_prev_1x1(x_left) x_right = self.conv_1x1(x_right) x_out = self.cell_forward(x_left, x_right) return x_out class PNASNet5Large(nn.Module): def __init__( self, num_classes=1000, in_chans=3, output_stride=32, drop_rate=0., global_pool='avg', pad_type='', ): super(PNASNet5Large, self).__init__() self.num_classes = num_classes self.num_features = 4320 assert output_stride == 32 self.conv_0 = ConvNormAct( in_chans, 96, kernel_size=3, stride=2, padding=0, norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.1), apply_act=False) self.cell_stem_0 = CellStem0( in_chs_left=96, out_chs_left=54, in_chs_right=96, out_chs_right=54, pad_type=pad_type) self.cell_stem_1 = Cell( in_chs_left=96, out_chs_left=108, in_chs_right=270, out_chs_right=108, pad_type=pad_type, match_prev_layer_dims=True, is_reduction=True) self.cell_0 = Cell( in_chs_left=270, out_chs_left=216, in_chs_right=540, out_chs_right=216, pad_type=pad_type, match_prev_layer_dims=True) self.cell_1 = Cell( in_chs_left=540, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_2 = Cell( in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_3 = Cell( in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_4 = Cell( in_chs_left=1080, out_chs_left=432, in_chs_right=1080, out_chs_right=432, pad_type=pad_type, is_reduction=True) self.cell_5 = Cell( in_chs_left=1080, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type, match_prev_layer_dims=True) self.cell_6 = Cell( in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type) self.cell_7 = Cell( in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type) self.cell_8 = Cell( in_chs_left=2160, out_chs_left=864, in_chs_right=2160, out_chs_right=864, pad_type=pad_type, is_reduction=True) self.cell_9 = Cell( in_chs_left=2160, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type, 
match_prev_layer_dims=True) self.cell_10 = Cell( in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type) self.cell_11 = Cell( in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type) self.act = nn.ReLU() self.feature_info = [ dict(num_chs=96, reduction=2, module='conv_0'), dict(num_chs=270, reduction=4, module='cell_stem_1.conv_1x1.act'), dict(num_chs=1080, reduction=8, module='cell_4.conv_1x1.act'), dict(num_chs=2160, reduction=16, module='cell_8.conv_1x1.act'), dict(num_chs=4320, reduction=32, module='act'), ] self.global_pool, self.head_drop, self.last_linear = create_classifier( self.num_features, self.num_classes, pool_type=global_pool, drop_rate=drop_rate) @torch.jit.ignore def group_matcher(self, coarse=False): return dict(stem=r'^conv_0|cell_stem_[01]', blocks=r'^cell_(\d+)') @torch.jit.ignore def set_grad_checkpointing(self, enable=True): assert not enable, 'gradient checkpointing not supported' @torch.jit.ignore def get_classifier(self): return self.last_linear def reset_classifier(self, num_classes, global_pool='avg'): self.num_classes = num_classes self.global_pool, self.last_linear = create_classifier( self.num_features, self.num_classes, pool_type=global_pool) def forward_features(self, x): x_conv_0 = self.conv_0(x) x_stem_0 = self.cell_stem_0(x_conv_0) x_stem_1 = self.cell_stem_1(x_conv_0, x_stem_0) x_cell_0 = self.cell_0(x_stem_0, x_stem_1) x_cell_1 = self.cell_1(x_stem_1, x_cell_0) x_cell_2 = self.cell_2(x_cell_0, x_cell_1) x_cell_3 = self.cell_3(x_cell_1, x_cell_2) x_cell_4 = self.cell_4(x_cell_2, x_cell_3) x_cell_5 = self.cell_5(x_cell_3, x_cell_4) x_cell_6 = self.cell_6(x_cell_4, x_cell_5) x_cell_7 = self.cell_7(x_cell_5, x_cell_6) x_cell_8 = self.cell_8(x_cell_6, x_cell_7) x_cell_9 = self.cell_9(x_cell_7, x_cell_8) x_cell_10 = self.cell_10(x_cell_8, x_cell_9) x_cell_11 = self.cell_11(x_cell_9, x_cell_10) x = self.act(x_cell_11) return x def forward_head(self, x, pre_logits: bool = False): x = self.global_pool(x) x = self.head_drop(x) return x if pre_logits else self.last_linear(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _create_pnasnet(variant, pretrained=False, **kwargs): return build_model_with_cfg( PNASNet5Large, variant, pretrained, feature_cfg=dict(feature_cls='hook', no_rewrite=True), # not possible to re-write this model **kwargs, ) default_cfgs = generate_default_cfgs({ 'pnasnet5large.tf_in1k': { 'hf_hub_id': 'timm/', 'input_size': (3, 331, 331), 'pool_size': (11, 11), 'crop_pct': 0.911, 'interpolation': 'bicubic', 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5), 'num_classes': 1000, 'first_conv': 'conv_0.conv', 'classifier': 'last_linear', }, }) @register_model def pnasnet5large(pretrained=False, **kwargs) -> PNASNet5Large: r"""PNASNet-5 model architecture from the `"Progressive Neural Architecture Search" <https://arxiv.org/abs/1712.00559>`_ paper. """ model_kwargs = dict(pad_type='same', **kwargs) return _create_pnasnet('pnasnet5large', pretrained, **model_kwargs)
pytorch-image-models/timm/models/pnasnet.py/0
{ "file_path": "pytorch-image-models/timm/models/pnasnet.py", "repo_id": "pytorch-image-models", "token_count": 7653 }
188
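For reference, the `pnasnet5large.tf_in1k` default cfg above declares 331x331 inputs with 0.5/0.5 mean/std and a 0.911 crop percentage, so preprocessing is usually derived from the resolved config rather than hard-coded. An illustrative sketch (not part of the source file), assuming `timm` and `Pillow` are installed and using a blank placeholder image:

```python
import torch
import timm
from timm.data import resolve_data_config, create_transform
from PIL import Image

model = timm.create_model('pnasnet5large', pretrained=False)
model.eval()

# Build eval preprocessing from the model's resolved pretrained config
# (resize via crop_pct 0.911, center-crop to 331, normalize with mean/std 0.5).
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

img = Image.new('RGB', (331, 331))   # stand-in for a real image
x = transform(img).unsqueeze(0)      # -> (1, 3, 331, 331)
with torch.no_grad():
    logits = model(x)
print(logits.shape)                  # torch.Size([1, 1000])
```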
""" Swin Transformer V2 A PyTorch impl of : `Swin Transformer V2: Scaling Up Capacity and Resolution` - https://arxiv.org/abs/2111.09883 Code/weights from https://github.com/microsoft/Swin-Transformer, original copyright/license info below Modifications and additions for timm hacked together by / Copyright 2022, Ross Wightman """ # -------------------------------------------------------- # Swin Transformer V2 # Copyright (c) 2022 Microsoft # Licensed under The MIT License [see LICENSE for details] # Written by Ze Liu # -------------------------------------------------------- import math from typing import Callable, Optional, Tuple, Union, Set, Dict import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as checkpoint from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import PatchEmbed, Mlp, DropPath, to_2tuple, trunc_normal_, _assert, ClassifierHead,\ resample_patch_embed, ndgrid from ._builder import build_model_with_cfg from ._features_fx import register_notrace_function from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['SwinTransformerV2'] # model_registry will add each entrypoint fn to this _int_or_tuple_2_t = Union[int, Tuple[int, int]] def window_partition(x: torch.Tensor, window_size: Tuple[int, int]) -> torch.Tensor: """ Args: x: (B, H, W, C) window_size (int): window size Returns: windows: (num_windows*B, window_size, window_size, C) """ B, H, W, C = x.shape x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C) windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C) return windows @register_notrace_function # reason: int argument is a Proxy def window_reverse(windows: torch.Tensor, window_size: Tuple[int, int], img_size: Tuple[int, int]) -> torch.Tensor: """ Args: windows: (num_windows * B, window_size[0], window_size[1], C) window_size (Tuple[int, int]): Window size img_size (Tuple[int, int]): Image size Returns: x: (B, H, W, C) """ H, W = img_size C = windows.shape[-1] x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C) return x class WindowAttention(nn.Module): r""" Window based multi-head self attention (W-MSA) module with relative position bias. It supports both of shifted and non-shifted window. Args: dim (int): Number of input channels. window_size (tuple[int]): The height and width of the window. num_heads (int): Number of attention heads. qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 proj_drop (float, optional): Dropout ratio of output. Default: 0.0 pretrained_window_size (tuple[int]): The height and width of the window in pre-training. 
""" def __init__( self, dim: int, window_size: Tuple[int, int], num_heads: int, qkv_bias: bool = True, attn_drop: float = 0., proj_drop: float = 0., pretrained_window_size: Tuple[int, int] = (0, 0), ) -> None: super().__init__() self.dim = dim self.window_size = window_size # Wh, Ww self.pretrained_window_size = pretrained_window_size self.num_heads = num_heads self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1)))) # mlp to generate continuous relative position bias self.cpb_mlp = nn.Sequential( nn.Linear(2, 512, bias=True), nn.ReLU(inplace=True), nn.Linear(512, num_heads, bias=False) ) # get relative_coords_table relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0]).to(torch.float32) relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1]).to(torch.float32) relative_coords_table = torch.stack(ndgrid(relative_coords_h, relative_coords_w)) relative_coords_table = relative_coords_table.permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 if pretrained_window_size[0] > 0: relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) else: relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) relative_coords_table *= 8 # normalize to -8, 8 relative_coords_table = torch.sign(relative_coords_table) * torch.log2( torch.abs(relative_coords_table) + 1.0) / math.log2(8) self.register_buffer("relative_coords_table", relative_coords_table, persistent=False) # get pair-wise relative position index for each token inside the window coords_h = torch.arange(self.window_size[0]) coords_w = torch.arange(self.window_size[1]) coords = torch.stack(ndgrid(coords_h, coords_w)) # 2, Wh, Ww coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 relative_coords[:, :, 1] += self.window_size[1] - 1 relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww self.register_buffer("relative_position_index", relative_position_index, persistent=False) self.qkv = nn.Linear(dim, dim * 3, bias=False) if qkv_bias: self.q_bias = nn.Parameter(torch.zeros(dim)) self.register_buffer('k_bias', torch.zeros(dim), persistent=False) self.v_bias = nn.Parameter(torch.zeros(dim)) else: self.q_bias = None self.k_bias = None self.v_bias = None self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop) self.softmax = nn.Softmax(dim=-1) def forward(self, x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor: """ Args: x: input features with shape of (num_windows*B, N, C) mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None """ B_, N, C = x.shape qkv_bias = None if self.q_bias is not None: qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias)) qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) q, k, v = qkv.unbind(0) # cosine attention attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) logit_scale = torch.clamp(self.logit_scale, max=math.log(1. 
/ 0.01)).exp() attn = attn * logit_scale relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww relative_position_bias = 16 * torch.sigmoid(relative_position_bias) attn = attn + relative_position_bias.unsqueeze(0) if mask is not None: num_win = mask.shape[0] attn = attn.view(-1, num_win, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) attn = attn.view(-1, self.num_heads, N, N) attn = self.softmax(attn) else: attn = self.softmax(attn) attn = self.attn_drop(attn) x = (attn @ v).transpose(1, 2).reshape(B_, N, C) x = self.proj(x) x = self.proj_drop(x) return x class SwinTransformerV2Block(nn.Module): """ Swin Transformer Block. """ def __init__( self, dim: int, input_resolution: _int_or_tuple_2_t, num_heads: int, window_size: _int_or_tuple_2_t = 7, shift_size: _int_or_tuple_2_t = 0, mlp_ratio: float = 4., qkv_bias: bool = True, proj_drop: float = 0., attn_drop: float = 0., drop_path: float = 0., act_layer: nn.Module = nn.GELU, norm_layer: nn.Module = nn.LayerNorm, pretrained_window_size: _int_or_tuple_2_t = 0, ) -> None: """ Args: dim: Number of input channels. input_resolution: Input resolution. num_heads: Number of attention heads. window_size: Window size. shift_size: Shift size for SW-MSA. mlp_ratio: Ratio of mlp hidden dim to embedding dim. qkv_bias: If True, add a learnable bias to query, key, value. proj_drop: Dropout rate. attn_drop: Attention dropout rate. drop_path: Stochastic depth rate. act_layer: Activation layer. norm_layer: Normalization layer. pretrained_window_size: Window size in pretraining. """ super().__init__() self.dim = dim self.input_resolution = to_2tuple(input_resolution) self.num_heads = num_heads ws, ss = self._calc_window_shift(window_size, shift_size) self.window_size: Tuple[int, int] = ws self.shift_size: Tuple[int, int] = ss self.window_area = self.window_size[0] * self.window_size[1] self.mlp_ratio = mlp_ratio self.attn = WindowAttention( dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop, pretrained_window_size=to_2tuple(pretrained_window_size), ) self.norm1 = norm_layer(dim) self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.mlp = Mlp( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop, ) self.norm2 = norm_layer(dim) self.drop_path2 = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() if any(self.shift_size): # calculate attention mask for SW-MSA H, W = self.input_resolution img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 cnt = 0 for h in ( slice(0, -self.window_size[0]), slice(-self.window_size[0], -self.shift_size[0]), slice(-self.shift_size[0], None)): for w in ( slice(0, -self.window_size[1]), slice(-self.window_size[1], -self.shift_size[1]), slice(-self.shift_size[1], None)): img_mask[:, h, w, :] = cnt cnt += 1 mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 mask_windows = mask_windows.view(-1, self.window_area) attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) else: attn_mask = None self.register_buffer("attn_mask", attn_mask, persistent=False) def _calc_window_shift(self, target_window_size: _int_or_tuple_2_t, target_shift_size: _int_or_tuple_2_t) -> Tuple[Tuple[int, int], Tuple[int, int]]: target_window_size = to_2tuple(target_window_size) target_shift_size = to_2tuple(target_shift_size) window_size = [r if r <= w else w for r, w in zip(self.input_resolution, target_window_size)] shift_size = [0 if r <= w else s for r, w, s in zip(self.input_resolution, window_size, target_shift_size)] return tuple(window_size), tuple(shift_size) def _attn(self, x: torch.Tensor) -> torch.Tensor: B, H, W, C = x.shape # cyclic shift has_shift = any(self.shift_size) if has_shift: shifted_x = torch.roll(x, shifts=(-self.shift_size[0], -self.shift_size[1]), dims=(1, 2)) else: shifted_x = x # partition windows x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C x_windows = x_windows.view(-1, self.window_area, C) # nW*B, window_size*window_size, C # W-MSA/SW-MSA attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C # merge windows attn_windows = attn_windows.view(-1, self.window_size[0], self.window_size[1], C) shifted_x = window_reverse(attn_windows, self.window_size, self.input_resolution) # B H' W' C # reverse cyclic shift if has_shift: x = torch.roll(shifted_x, shifts=self.shift_size, dims=(1, 2)) else: x = shifted_x return x def forward(self, x: torch.Tensor) -> torch.Tensor: B, H, W, C = x.shape x = x + self.drop_path1(self.norm1(self._attn(x))) x = x.reshape(B, -1, C) x = x + self.drop_path2(self.norm2(self.mlp(x))) x = x.reshape(B, H, W, C) return x class PatchMerging(nn.Module): """ Patch Merging Layer. """ def __init__(self, dim: int, out_dim: Optional[int] = None, norm_layer: nn.Module = nn.LayerNorm) -> None: """ Args: dim (int): Number of input channels. out_dim (int): Number of output channels (or 2 * dim if None) norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm """ super().__init__() self.dim = dim self.out_dim = out_dim or 2 * dim self.reduction = nn.Linear(4 * dim, self.out_dim, bias=False) self.norm = norm_layer(self.out_dim) def forward(self, x: torch.Tensor) -> torch.Tensor: B, H, W, C = x.shape _assert(H % 2 == 0, f"x height ({H}) is not even.") _assert(W % 2 == 0, f"x width ({W}) is not even.") x = x.reshape(B, H // 2, 2, W // 2, 2, C).permute(0, 1, 3, 4, 2, 5).flatten(3) x = self.reduction(x) x = self.norm(x) return x class SwinTransformerV2Stage(nn.Module): """ A Swin Transformer V2 Stage. 
""" def __init__( self, dim: int, out_dim: int, input_resolution: _int_or_tuple_2_t, depth: int, num_heads: int, window_size: _int_or_tuple_2_t, downsample: bool = False, mlp_ratio: float = 4., qkv_bias: bool = True, proj_drop: float = 0., attn_drop: float = 0., drop_path: float = 0., norm_layer: nn.Module = nn.LayerNorm, pretrained_window_size: _int_or_tuple_2_t = 0, output_nchw: bool = False, ) -> None: """ Args: dim: Number of input channels. out_dim: Number of output channels. input_resolution: Input resolution. depth: Number of blocks. num_heads: Number of attention heads. window_size: Local window size. downsample: Use downsample layer at start of the block. mlp_ratio: Ratio of mlp hidden dim to embedding dim. qkv_bias: If True, add a learnable bias to query, key, value. proj_drop: Projection dropout rate attn_drop: Attention dropout rate. drop_path: Stochastic depth rate. norm_layer: Normalization layer. pretrained_window_size: Local window size in pretraining. output_nchw: Output tensors on NCHW format instead of NHWC. """ super().__init__() self.dim = dim self.input_resolution = input_resolution self.output_resolution = tuple(i // 2 for i in input_resolution) if downsample else input_resolution self.depth = depth self.output_nchw = output_nchw self.grad_checkpointing = False window_size = to_2tuple(window_size) shift_size = tuple([w // 2 for w in window_size]) # patch merging / downsample layer if downsample: self.downsample = PatchMerging(dim=dim, out_dim=out_dim, norm_layer=norm_layer) else: assert dim == out_dim self.downsample = nn.Identity() # build blocks self.blocks = nn.ModuleList([ SwinTransformerV2Block( dim=out_dim, input_resolution=self.output_resolution, num_heads=num_heads, window_size=window_size, shift_size=0 if (i % 2 == 0) else shift_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, proj_drop=proj_drop, attn_drop=attn_drop, drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, norm_layer=norm_layer, pretrained_window_size=pretrained_window_size, ) for i in range(depth)]) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.downsample(x) for blk in self.blocks: if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint.checkpoint(blk, x) else: x = blk(x) return x def _init_respostnorm(self) -> None: for blk in self.blocks: nn.init.constant_(blk.norm1.bias, 0) nn.init.constant_(blk.norm1.weight, 0) nn.init.constant_(blk.norm2.bias, 0) nn.init.constant_(blk.norm2.weight, 0) class SwinTransformerV2(nn.Module): """ Swin Transformer V2 A PyTorch impl of : `Swin Transformer V2: Scaling Up Capacity and Resolution` - https://arxiv.org/abs/2111.09883 """ def __init__( self, img_size: _int_or_tuple_2_t = 224, patch_size: int = 4, in_chans: int = 3, num_classes: int = 1000, global_pool: str = 'avg', embed_dim: int = 96, depths: Tuple[int, ...] = (2, 2, 6, 2), num_heads: Tuple[int, ...] = (3, 6, 12, 24), window_size: _int_or_tuple_2_t = 7, mlp_ratio: float = 4., qkv_bias: bool = True, drop_rate: float = 0., proj_drop_rate: float = 0., attn_drop_rate: float = 0., drop_path_rate: float = 0.1, norm_layer: Callable = nn.LayerNorm, pretrained_window_sizes: Tuple[int, ...] = (0, 0, 0, 0), **kwargs, ): """ Args: img_size: Input image size. patch_size: Patch size. in_chans: Number of input image channels. num_classes: Number of classes for classification head. embed_dim: Patch embedding dimension. depths: Depth of each Swin Transformer stage (layer). num_heads: Number of attention heads in different layers. window_size: Window size. 
mlp_ratio: Ratio of mlp hidden dim to embedding dim. qkv_bias: If True, add a learnable bias to query, key, value. drop_rate: Head dropout rate. proj_drop_rate: Projection dropout rate. attn_drop_rate: Attention dropout rate. drop_path_rate: Stochastic depth rate. norm_layer: Normalization layer. patch_norm: If True, add normalization after patch embedding. pretrained_window_sizes: Pretrained window sizes of each layer. output_fmt: Output tensor format if not None, otherwise output 'NHWC' by default. """ super().__init__() self.num_classes = num_classes assert global_pool in ('', 'avg') self.global_pool = global_pool self.output_fmt = 'NHWC' self.num_layers = len(depths) self.embed_dim = embed_dim self.num_features = int(embed_dim * 2 ** (self.num_layers - 1)) self.feature_info = [] if not isinstance(embed_dim, (tuple, list)): embed_dim = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] # split image into non-overlapping patches self.patch_embed = PatchEmbed( img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim[0], norm_layer=norm_layer, output_fmt='NHWC', ) dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)] layers = [] in_dim = embed_dim[0] scale = 1 for i in range(self.num_layers): out_dim = embed_dim[i] layers += [SwinTransformerV2Stage( dim=in_dim, out_dim=out_dim, input_resolution=( self.patch_embed.grid_size[0] // scale, self.patch_embed.grid_size[1] // scale), depth=depths[i], downsample=i > 0, num_heads=num_heads[i], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, pretrained_window_size=pretrained_window_sizes[i], )] in_dim = out_dim if i > 0: scale *= 2 self.feature_info += [dict(num_chs=out_dim, reduction=4 * scale, module=f'layers.{i}')] self.layers = nn.Sequential(*layers) self.norm = norm_layer(self.num_features) self.head = ClassifierHead( self.num_features, num_classes, pool_type=global_pool, drop_rate=drop_rate, input_fmt=self.output_fmt, ) self.apply(self._init_weights) for bly in self.layers: bly._init_respostnorm() def _init_weights(self, m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if isinstance(m, nn.Linear) and m.bias is not None: nn.init.constant_(m.bias, 0) @torch.jit.ignore def no_weight_decay(self): nod = set() for n, m in self.named_modules(): if any([kw in n for kw in ("cpb_mlp", "logit_scale")]): nod.add(n) return nod @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^absolute_pos_embed|patch_embed', # stem and embed blocks=r'^layers\.(\d+)' if coarse else [ (r'^layers\.(\d+).downsample', (0,)), (r'^layers\.(\d+)\.\w+\.(\d+)', None), (r'^norm', (99999,)), ] ) @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for l in self.layers: l.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head.fc def reset_classifier(self, num_classes, global_pool=None): self.num_classes = num_classes self.head.reset(num_classes, global_pool) def forward_features(self, x): x = self.patch_embed(x) x = self.layers(x) x = self.norm(x) return x def forward_head(self, x, pre_logits: bool = False): return self.head(x, pre_logits=True) if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def checkpoint_filter_fn(state_dict, model): state_dict = state_dict.get('model', state_dict) state_dict = state_dict.get('state_dict', state_dict) native_checkpoint = 
'head.fc.weight' in state_dict out_dict = {} import re for k, v in state_dict.items(): if any([n in k for n in ('relative_position_index', 'relative_coords_table', 'attn_mask')]): continue # skip buffers that should not be persistent if 'patch_embed.proj.weight' in k: _, _, H, W = model.patch_embed.proj.weight.shape if v.shape[-2] != H or v.shape[-1] != W: v = resample_patch_embed( v, (H, W), interpolation='bicubic', antialias=True, verbose=True, ) if not native_checkpoint: # skip layer remapping for updated checkpoints k = re.sub(r'layers.(\d+).downsample', lambda x: f'layers.{int(x.group(1)) + 1}.downsample', k) k = k.replace('head.', 'head.fc.') out_dict[k] = v return out_dict def _create_swin_transformer_v2(variant, pretrained=False, **kwargs): default_out_indices = tuple(i for i, _ in enumerate(kwargs.get('depths', (1, 1, 1, 1)))) out_indices = kwargs.pop('out_indices', default_out_indices) model = build_model_with_cfg( SwinTransformerV2, variant, pretrained, pretrained_filter_fn=checkpoint_filter_fn, feature_cfg=dict(flatten_sequential=True, out_indices=out_indices), **kwargs) return model def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 256, 256), 'pool_size': (8, 8), 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'patch_embed.proj', 'classifier': 'head.fc', 'license': 'mit', **kwargs } default_cfgs = generate_default_cfgs({ 'swinv2_base_window12to16_192to256.ms_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12to16_192to256_22kto1k_ft.pth', ), 'swinv2_base_window12to24_192to384.ms_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12to24_192to384_22kto1k_ft.pth', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, ), 'swinv2_large_window12to16_192to256.ms_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12to16_192to256_22kto1k_ft.pth', ), 'swinv2_large_window12to24_192to384.ms_in22k_ft_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12to24_192to384_22kto1k_ft.pth', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, ), 'swinv2_tiny_window8_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_tiny_patch4_window8_256.pth', ), 'swinv2_tiny_window16_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_tiny_patch4_window16_256.pth', ), 'swinv2_small_window8_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_small_patch4_window8_256.pth', ), 'swinv2_small_window16_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_small_patch4_window16_256.pth', ), 'swinv2_base_window8_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window8_256.pth', ), 'swinv2_base_window16_256.ms_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window16_256.pth', ), 'swinv2_base_window12_192.ms_in22k': 
_cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12_192_22k.pth', num_classes=21841, input_size=(3, 192, 192), pool_size=(6, 6) ), 'swinv2_large_window12_192.ms_in22k': _cfg( hf_hub_id='timm/', url='https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12_192_22k.pth', num_classes=21841, input_size=(3, 192, 192), pool_size=(6, 6) ), }) @register_model def swinv2_tiny_window16_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=16, embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24)) return _create_swin_transformer_v2( 'swinv2_tiny_window16_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_tiny_window8_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=8, embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24)) return _create_swin_transformer_v2( 'swinv2_tiny_window8_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_small_window16_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=16, embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24)) return _create_swin_transformer_v2( 'swinv2_small_window16_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_small_window8_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=8, embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24)) return _create_swin_transformer_v2( 'swinv2_small_window8_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_base_window16_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=16, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32)) return _create_swin_transformer_v2( 'swinv2_base_window16_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_base_window8_256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=8, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32)) return _create_swin_transformer_v2( 'swinv2_base_window8_256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_base_window12_192(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=12, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32)) return _create_swin_transformer_v2( 'swinv2_base_window12_192', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_base_window12to16_192to256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict( window_size=16, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32), pretrained_window_sizes=(12, 12, 12, 6)) return _create_swin_transformer_v2( 'swinv2_base_window12to16_192to256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_base_window12to24_192to384(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict( window_size=24, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32), pretrained_window_sizes=(12, 12, 12, 6)) return _create_swin_transformer_v2( 'swinv2_base_window12to24_192to384', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_large_window12_192(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict(window_size=12, 
embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48)) return _create_swin_transformer_v2( 'swinv2_large_window12_192', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_large_window12to16_192to256(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict( window_size=16, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48), pretrained_window_sizes=(12, 12, 12, 6)) return _create_swin_transformer_v2( 'swinv2_large_window12to16_192to256', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def swinv2_large_window12to24_192to384(pretrained=False, **kwargs) -> SwinTransformerV2: """ """ model_args = dict( window_size=24, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48), pretrained_window_sizes=(12, 12, 12, 6)) return _create_swin_transformer_v2( 'swinv2_large_window12to24_192to384', pretrained=pretrained, **dict(model_args, **kwargs)) register_model_deprecations(__name__, { 'swinv2_base_window12_192_22k': 'swinv2_base_window12_192.ms_in22k', 'swinv2_base_window12to16_192to256_22kft1k': 'swinv2_base_window12to16_192to256.ms_in22k_ft_in1k', 'swinv2_base_window12to24_192to384_22kft1k': 'swinv2_base_window12to24_192to384.ms_in22k_ft_in1k', 'swinv2_large_window12_192_22k': 'swinv2_large_window12_192.ms_in22k', 'swinv2_large_window12to16_192to256_22kft1k': 'swinv2_large_window12to16_192to256.ms_in22k_ft_in1k', 'swinv2_large_window12to24_192to384_22kft1k': 'swinv2_large_window12to24_192to384.ms_in22k_ft_in1k', })
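# ---------------------------------------------------------------------------------------------
# Usage sketch (editorial addition, not part of the upstream module): exercises the SwinV2
# entrypoints registered above via the timm factory. Assumes timm is installed with this module
# on the path; run as a module (e.g. `python -m timm.models.swin_transformer_v2`) so the relative
# imports at the top resolve. The 256x256 input size comes from the `_cfg` entries above; the
# batch size and the expected shapes in the comments are illustrative, not guaranteed.
# ---------------------------------------------------------------------------------------------
if __name__ == '__main__':
    import torch
    import timm

    model = timm.create_model('swinv2_tiny_window8_256', pretrained=False)
    model.eval()
    x = torch.randn(1, 3, 256, 256)  # default cfg uses a fixed 256x256 input
    with torch.no_grad():
        feats = model.forward_features(x)  # NHWC feature map; should be (1, 8, 8, 768) here
        logits = model(x)                  # classification logits; should be (1, 1000)
    print(feats.shape, logits.shape)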
pytorch-image-models/timm/models/swin_transformer_v2.py/0
""" Cross-Covariance Image Transformer (XCiT) in PyTorch Paper: - https://arxiv.org/abs/2106.09681 Same as the official implementation, with some minor adaptations, original copyright below - https://github.com/facebookresearch/xcit/blob/master/xcit.py Modifications and additions for timm hacked together by / Copyright 2021, Ross Wightman """ # Copyright (c) 2015-present, Facebook, Inc. # All rights reserved. import math from functools import partial import torch import torch.nn as nn from torch.utils.checkpoint import checkpoint from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import DropPath, trunc_normal_, to_2tuple from ._builder import build_model_with_cfg from ._features_fx import register_notrace_module from ._registry import register_model, generate_default_cfgs, register_model_deprecations from .cait import ClassAttn from .vision_transformer import Mlp __all__ = ['Xcit'] # model_registry will add each entrypoint fn to this @register_notrace_module # reason: FX can't symbolically trace torch.arange in forward method class PositionalEncodingFourier(nn.Module): """ Positional encoding relying on a fourier kernel matching the one used in the "Attention is all you Need" paper. Based on the official XCiT code - https://github.com/facebookresearch/xcit/blob/master/xcit.py """ def __init__(self, hidden_dim=32, dim=768, temperature=10000): super().__init__() self.token_projection = nn.Conv2d(hidden_dim * 2, dim, kernel_size=1) self.scale = 2 * math.pi self.temperature = temperature self.hidden_dim = hidden_dim self.dim = dim self.eps = 1e-6 def forward(self, B: int, H: int, W: int): device = self.token_projection.weight.device dtype = self.token_projection.weight.dtype y_embed = torch.arange(1, H + 1, device=device).to(torch.float32).unsqueeze(1).repeat(1, 1, W) x_embed = torch.arange(1, W + 1, device=device).to(torch.float32).repeat(1, H, 1) y_embed = y_embed / (y_embed[:, -1:, :] + self.eps) * self.scale x_embed = x_embed / (x_embed[:, :, -1:] + self.eps) * self.scale dim_t = torch.arange(self.hidden_dim, device=device).to(torch.float32) dim_t = self.temperature ** (2 * torch.div(dim_t, 2, rounding_mode='floor') / self.hidden_dim) pos_x = x_embed[:, :, :, None] / dim_t pos_y = y_embed[:, :, :, None] / dim_t pos_x = torch.stack([pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()], dim=4).flatten(3) pos_y = torch.stack([pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()], dim=4).flatten(3) pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) pos = self.token_projection(pos.to(dtype)) return pos.repeat(B, 1, 1, 1) # (B, C, H, W) def conv3x3(in_planes, out_planes, stride=1): """3x3 convolution + batch norm""" return torch.nn.Sequential( nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False), nn.BatchNorm2d(out_planes) ) class ConvPatchEmbed(nn.Module): """Image to Patch Embedding using multiple convolutional layers""" def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, act_layer=nn.GELU): super().__init__() img_size = to_2tuple(img_size) num_patches = (img_size[1] // patch_size) * (img_size[0] // patch_size) self.img_size = img_size self.patch_size = patch_size self.num_patches = num_patches if patch_size == 16: self.proj = torch.nn.Sequential( conv3x3(in_chans, embed_dim // 8, 2), act_layer(), conv3x3(embed_dim // 8, embed_dim // 4, 2), act_layer(), conv3x3(embed_dim // 4, embed_dim // 2, 2), act_layer(), conv3x3(embed_dim // 2, embed_dim, 2), ) elif patch_size == 8: self.proj = 
torch.nn.Sequential( conv3x3(in_chans, embed_dim // 4, 2), act_layer(), conv3x3(embed_dim // 4, embed_dim // 2, 2), act_layer(), conv3x3(embed_dim // 2, embed_dim, 2), ) else: raise('For convolutional projection, patch size has to be in [8, 16]') def forward(self, x): x = self.proj(x) Hp, Wp = x.shape[2], x.shape[3] x = x.flatten(2).transpose(1, 2) # (B, N, C) return x, (Hp, Wp) class LPI(nn.Module): """ Local Patch Interaction module that allows explicit communication between tokens in 3x3 windows to augment the implicit communication performed by the block diagonal scatter attention. Implemented using 2 layers of separable 3x3 convolutions with GeLU and BatchNorm2d """ def __init__(self, in_features, out_features=None, act_layer=nn.GELU, kernel_size=3): super().__init__() out_features = out_features or in_features padding = kernel_size // 2 self.conv1 = torch.nn.Conv2d( in_features, in_features, kernel_size=kernel_size, padding=padding, groups=in_features) self.act = act_layer() self.bn = nn.BatchNorm2d(in_features) self.conv2 = torch.nn.Conv2d( in_features, out_features, kernel_size=kernel_size, padding=padding, groups=out_features) def forward(self, x, H: int, W: int): B, N, C = x.shape x = x.permute(0, 2, 1).reshape(B, C, H, W) x = self.conv1(x) x = self.act(x) x = self.bn(x) x = self.conv2(x) x = x.reshape(B, C, N).permute(0, 2, 1) return x class ClassAttentionBlock(nn.Module): """Class Attention Layer as in CaiT https://arxiv.org/abs/2103.17239""" def __init__( self, dim, num_heads, mlp_ratio=4., qkv_bias=False, proj_drop=0., attn_drop=0., drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, eta=1., tokens_norm=False, ): super().__init__() self.norm1 = norm_layer(dim) self.attn = ClassAttn( dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop) self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.norm2 = norm_layer(dim) self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop) if eta is not None: # LayerScale Initialization (no layerscale when None) self.gamma1 = nn.Parameter(eta * torch.ones(dim)) self.gamma2 = nn.Parameter(eta * torch.ones(dim)) else: self.gamma1, self.gamma2 = 1.0, 1.0 # See https://github.com/rwightman/pytorch-image-models/pull/747#issuecomment-877795721 self.tokens_norm = tokens_norm def forward(self, x): x_norm1 = self.norm1(x) x_attn = torch.cat([self.attn(x_norm1), x_norm1[:, 1:]], dim=1) x = x + self.drop_path(self.gamma1 * x_attn) if self.tokens_norm: x = self.norm2(x) else: x = torch.cat([self.norm2(x[:, 0:1]), x[:, 1:]], dim=1) x_res = x cls_token = x[:, 0:1] cls_token = self.gamma2 * self.mlp(cls_token) x = torch.cat([cls_token, x[:, 1:]], dim=1) x = x_res + self.drop_path(x) return x class XCA(nn.Module): """ Cross-Covariance Attention (XCA) Operation where the channels are updated using a weighted sum. 
The weights are obtained from the (softmax normalized) Cross-covariance matrix (Q^T \\cdot K \\in d_h \\times d_h) """ def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.): super().__init__() self.num_heads = num_heads self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1)) self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop) def forward(self, x): B, N, C = x.shape # Result of next line is (qkv, B, num (H)eads, (C')hannels per head, N) qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 4, 1) q, k, v = qkv.unbind(0) # make torchscript happy (cannot use tensor as tuple) # Paper section 3.2 l2-Normalization and temperature scaling q = torch.nn.functional.normalize(q, dim=-1) k = torch.nn.functional.normalize(k, dim=-1) attn = (q @ k.transpose(-2, -1)) * self.temperature attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) # (B, H, C', N), permute -> (B, N, H, C') x = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x @torch.jit.ignore def no_weight_decay(self): return {'temperature'} class XCABlock(nn.Module): def __init__( self, dim, num_heads, mlp_ratio=4., qkv_bias=False, proj_drop=0., attn_drop=0., drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, eta=1., ): super().__init__() self.norm1 = norm_layer(dim) self.attn = XCA(dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop) self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.norm3 = norm_layer(dim) self.local_mp = LPI(in_features=dim, act_layer=act_layer) self.norm2 = norm_layer(dim) self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop) self.gamma1 = nn.Parameter(eta * torch.ones(dim)) self.gamma3 = nn.Parameter(eta * torch.ones(dim)) self.gamma2 = nn.Parameter(eta * torch.ones(dim)) def forward(self, x, H: int, W: int): x = x + self.drop_path(self.gamma1 * self.attn(self.norm1(x))) # NOTE official code has 3 then 2, so keeping it the same to be consistent with loaded weights # See https://github.com/rwightman/pytorch-image-models/pull/747#issuecomment-877795721 x = x + self.drop_path(self.gamma3 * self.local_mp(self.norm3(x), H, W)) x = x + self.drop_path(self.gamma2 * self.mlp(self.norm2(x))) return x class Xcit(nn.Module): """ Based on timm and DeiT code bases https://github.com/rwightman/pytorch-image-models/tree/master/timm https://github.com/facebookresearch/deit/ """ def __init__( self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, global_pool='token', embed_dim=768, depth=12, num_heads=12, mlp_ratio=4., qkv_bias=True, drop_rate=0., pos_drop_rate=0., proj_drop_rate=0., attn_drop_rate=0., drop_path_rate=0., act_layer=None, norm_layer=None, cls_attn_layers=2, use_pos_embed=True, eta=1., tokens_norm=False, ): """ Args: img_size (int, tuple): input image size patch_size (int): patch size in_chans (int): number of input channels num_classes (int): number of classes for classification head embed_dim (int): embedding dimension depth (int): depth of transformer num_heads (int): number of attention heads mlp_ratio (int): ratio of mlp hidden dim to embedding dim qkv_bias (bool): enable bias for qkv if True drop_rate (float): dropout rate after positional embedding, and in XCA/CA projection + MLP pos_drop_rate: position embedding dropout rate proj_drop_rate (float): projection dropout rate 
attn_drop_rate (float): attention dropout rate drop_path_rate (float): stochastic depth rate (constant across all layers) norm_layer: (nn.Module): normalization layer cls_attn_layers: (int) Depth of Class attention layers use_pos_embed: (bool) whether to use positional encoding eta: (float) layerscale initialization value tokens_norm: (bool) Whether to normalize all tokens or just the cls_token in the CA Notes: - Although `layer_norm` is user specifiable, there are hard-coded `BatchNorm2d`s in the local patch interaction (class LPI) and the patch embedding (class ConvPatchEmbed) """ super().__init__() assert global_pool in ('', 'avg', 'token') img_size = to_2tuple(img_size) assert (img_size[0] % patch_size == 0) and (img_size[0] % patch_size == 0), \ '`patch_size` should divide image dimensions evenly' norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) act_layer = act_layer or nn.GELU self.num_classes = num_classes self.num_features = self.embed_dim = embed_dim self.global_pool = global_pool self.grad_checkpointing = False self.patch_embed = ConvPatchEmbed( img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, act_layer=act_layer, ) self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if use_pos_embed: self.pos_embed = PositionalEncodingFourier(dim=embed_dim) else: self.pos_embed = None self.pos_drop = nn.Dropout(p=pos_drop_rate) self.blocks = nn.ModuleList([ XCABlock( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=drop_path_rate, act_layer=act_layer, norm_layer=norm_layer, eta=eta, ) for _ in range(depth)]) self.cls_attn_blocks = nn.ModuleList([ ClassAttentionBlock( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, proj_drop=drop_rate, attn_drop=attn_drop_rate, act_layer=act_layer, norm_layer=norm_layer, eta=eta, tokens_norm=tokens_norm, ) for _ in range(cls_attn_layers)]) # Classifier head self.norm = norm_layer(embed_dim) self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() # Init weights trunc_normal_(self.cls_token, std=.02) self.apply(self._init_weights) def _init_weights(self, m): if isinstance(m, nn.Linear): trunc_normal_(m.weight, std=.02) if isinstance(m, nn.Linear) and m.bias is not None: nn.init.constant_(m.bias, 0) @torch.jit.ignore def no_weight_decay(self): return {'pos_embed', 'cls_token'} @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^cls_token|pos_embed|patch_embed', # stem and embed blocks=r'^blocks\.(\d+)', cls_attn_blocks=[(r'^cls_attn_blocks\.(\d+)', None), (r'^norm', (99999,))] ) @torch.jit.ignore def set_grad_checkpointing(self, enable=True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self): return self.head def reset_classifier(self, num_classes, global_pool=''): self.num_classes = num_classes if global_pool is not None: assert global_pool in ('', 'avg', 'token') self.global_pool = global_pool self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def forward_features(self, x): B = x.shape[0] # x is (B, N, C). 
(Hp, Hw) is (height in units of patches, width in units of patches) x, (Hp, Wp) = self.patch_embed(x) if self.pos_embed is not None: # `pos_embed` (B, C, Hp, Wp), reshape -> (B, C, N), permute -> (B, N, C) pos_encoding = self.pos_embed(B, Hp, Wp).reshape(B, -1, x.shape[1]).permute(0, 2, 1) x = x + pos_encoding x = self.pos_drop(x) for blk in self.blocks: if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint(blk, x, Hp, Wp) else: x = blk(x, Hp, Wp) x = torch.cat((self.cls_token.expand(B, -1, -1), x), dim=1) for blk in self.cls_attn_blocks: if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint(blk, x) else: x = blk(x) x = self.norm(x) return x def forward_head(self, x, pre_logits: bool = False): if self.global_pool: x = x[:, 1:].mean(dim=1) if self.global_pool == 'avg' else x[:, 0] x = self.head_drop(x) return x if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def checkpoint_filter_fn(state_dict, model): if 'model' in state_dict: state_dict = state_dict['model'] # For consistency with timm's transformer models while being compatible with official weights source we rename # pos_embeder to pos_embed. Also account for use_pos_embed == False use_pos_embed = getattr(model, 'pos_embed', None) is not None pos_embed_keys = [k for k in state_dict if k.startswith('pos_embed')] for k in pos_embed_keys: if use_pos_embed: state_dict[k.replace('pos_embeder.', 'pos_embed.')] = state_dict.pop(k) else: del state_dict[k] # timm's implementation of class attention in CaiT is slightly more efficient as it does not compute query vectors # for all tokens, just the class token. To use official weights source we must split qkv into q, k, v if 'cls_attn_blocks.0.attn.qkv.weight' in state_dict and 'cls_attn_blocks.0.attn.q.weight' in model.state_dict(): num_ca_blocks = len(model.cls_attn_blocks) for i in range(num_ca_blocks): qkv_weight = state_dict.pop(f'cls_attn_blocks.{i}.attn.qkv.weight') qkv_weight = qkv_weight.reshape(3, -1, qkv_weight.shape[-1]) for j, subscript in enumerate('qkv'): state_dict[f'cls_attn_blocks.{i}.attn.{subscript}.weight'] = qkv_weight[j] qkv_bias = state_dict.pop(f'cls_attn_blocks.{i}.attn.qkv.bias', None) if qkv_bias is not None: qkv_bias = qkv_bias.reshape(3, -1) for j, subscript in enumerate('qkv'): state_dict[f'cls_attn_blocks.{i}.attn.{subscript}.bias'] = qkv_bias[j] return state_dict def _create_xcit(variant, pretrained=False, default_cfg=None, **kwargs): if kwargs.get('features_only', None): raise RuntimeError('features_only not implemented for Cross-Covariance Image Transformers models.') model = build_model_with_cfg( Xcit, variant, pretrained, pretrained_filter_fn=checkpoint_filter_fn, **kwargs, ) return model def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': 1.0, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'patch_embed.proj.0.0', 'classifier': 'head', **kwargs } default_cfgs = generate_default_cfgs({ # Patch size 16 'xcit_nano_12_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p16_224.pth'), 'xcit_nano_12_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p16_224_dist.pth'), 'xcit_nano_12_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p16_384_dist.pth', 
input_size=(3, 384, 384)), 'xcit_tiny_12_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p16_224.pth'), 'xcit_tiny_12_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p16_224_dist.pth'), 'xcit_tiny_12_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p16_384_dist.pth', input_size=(3, 384, 384)), 'xcit_tiny_24_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p16_224.pth'), 'xcit_tiny_24_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p16_224_dist.pth'), 'xcit_tiny_24_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p16_384_dist.pth', input_size=(3, 384, 384)), 'xcit_small_12_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p16_224.pth'), 'xcit_small_12_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p16_224_dist.pth'), 'xcit_small_12_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p16_384_dist.pth', input_size=(3, 384, 384)), 'xcit_small_24_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p16_224.pth'), 'xcit_small_24_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p16_224_dist.pth'), 'xcit_small_24_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p16_384_dist.pth', input_size=(3, 384, 384)), 'xcit_medium_24_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p16_224.pth'), 'xcit_medium_24_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p16_224_dist.pth'), 'xcit_medium_24_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p16_384_dist.pth', input_size=(3, 384, 384)), 'xcit_large_24_p16_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p16_224.pth'), 'xcit_large_24_p16_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p16_224_dist.pth'), 'xcit_large_24_p16_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p16_384_dist.pth', input_size=(3, 384, 384)), # Patch size 8 'xcit_nano_12_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p8_224.pth'), 'xcit_nano_12_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p8_224_dist.pth'), 'xcit_nano_12_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_nano_12_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_tiny_12_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p8_224.pth'), 'xcit_tiny_12_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p8_224_dist.pth'), 'xcit_tiny_12_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_12_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_tiny_24_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', 
url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p8_224.pth'), 'xcit_tiny_24_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p8_224_dist.pth'), 'xcit_tiny_24_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_tiny_24_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_small_12_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p8_224.pth'), 'xcit_small_12_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p8_224_dist.pth'), 'xcit_small_12_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_12_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_small_24_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p8_224.pth'), 'xcit_small_24_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p8_224_dist.pth'), 'xcit_small_24_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_small_24_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_medium_24_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p8_224.pth'), 'xcit_medium_24_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p8_224_dist.pth'), 'xcit_medium_24_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_medium_24_p8_384_dist.pth', input_size=(3, 384, 384)), 'xcit_large_24_p8_224.fb_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p8_224.pth'), 'xcit_large_24_p8_224.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p8_224_dist.pth'), 'xcit_large_24_p8_384.fb_dist_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/xcit/xcit_large_24_p8_384_dist.pth', input_size=(3, 384, 384)), }) @register_model def xcit_nano_12_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=128, depth=12, num_heads=4, eta=1.0, tokens_norm=False) model = _create_xcit('xcit_nano_12_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_nano_12_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=128, depth=12, num_heads=4, eta=1.0, tokens_norm=False, img_size=384) model = _create_xcit('xcit_nano_12_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_12_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=192, depth=12, num_heads=4, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_tiny_12_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_12_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=192, depth=12, num_heads=4, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_tiny_12_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_12_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=384, depth=12, num_heads=8, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_small_12_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def 
xcit_small_12_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=384, depth=12, num_heads=8, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_small_12_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_24_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=192, depth=24, num_heads=4, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_tiny_24_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_24_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=192, depth=24, num_heads=4, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_tiny_24_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_24_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=384, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_small_24_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_24_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=384, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_small_24_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_medium_24_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=512, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_medium_24_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_medium_24_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=512, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_medium_24_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_large_24_p16_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=768, depth=24, num_heads=16, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_large_24_p16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_large_24_p16_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=16, embed_dim=768, depth=24, num_heads=16, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_large_24_p16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model # Patch size 8x8 models @register_model def xcit_nano_12_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=128, depth=12, num_heads=4, eta=1.0, tokens_norm=False) model = _create_xcit('xcit_nano_12_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_nano_12_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=128, depth=12, num_heads=4, eta=1.0, tokens_norm=False) model = _create_xcit('xcit_nano_12_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_12_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=192, depth=12, num_heads=4, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_tiny_12_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_12_p8_384(pretrained=False, **kwargs) -> 
Xcit: model_args = dict( patch_size=8, embed_dim=192, depth=12, num_heads=4, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_tiny_12_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_12_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=384, depth=12, num_heads=8, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_small_12_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_12_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=384, depth=12, num_heads=8, eta=1.0, tokens_norm=True) model = _create_xcit('xcit_small_12_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_24_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=192, depth=24, num_heads=4, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_tiny_24_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_tiny_24_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=192, depth=24, num_heads=4, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_tiny_24_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_24_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=384, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_small_24_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_small_24_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=384, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_small_24_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_medium_24_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=512, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_medium_24_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_medium_24_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=512, depth=24, num_heads=8, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_medium_24_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_large_24_p8_224(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=768, depth=24, num_heads=16, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_large_24_p8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def xcit_large_24_p8_384(pretrained=False, **kwargs) -> Xcit: model_args = dict( patch_size=8, embed_dim=768, depth=24, num_heads=16, eta=1e-5, tokens_norm=True) model = _create_xcit('xcit_large_24_p8_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model register_model_deprecations(__name__, { # Patch size 16 'xcit_nano_12_p16_224_dist': 'xcit_nano_12_p16_224.fb_dist_in1k', 'xcit_nano_12_p16_384_dist': 'xcit_nano_12_p16_384.fb_dist_in1k', 'xcit_tiny_12_p16_224_dist': 'xcit_tiny_12_p16_224.fb_dist_in1k', 'xcit_tiny_12_p16_384_dist': 'xcit_tiny_12_p16_384.fb_dist_in1k', 'xcit_tiny_24_p16_224_dist': 'xcit_tiny_24_p16_224.fb_dist_in1k', 'xcit_tiny_24_p16_384_dist': 'xcit_tiny_24_p16_384.fb_dist_in1k', 
'xcit_small_12_p16_224_dist': 'xcit_small_12_p16_224.fb_dist_in1k', 'xcit_small_12_p16_384_dist': 'xcit_small_12_p16_384.fb_dist_in1k', 'xcit_small_24_p16_224_dist': 'xcit_small_24_p16_224.fb_dist_in1k', 'xcit_small_24_p16_384_dist': 'xcit_small_24_p16_384.fb_dist_in1k', 'xcit_medium_24_p16_224_dist': 'xcit_medium_24_p16_224.fb_dist_in1k', 'xcit_medium_24_p16_384_dist': 'xcit_medium_24_p16_384.fb_dist_in1k', 'xcit_large_24_p16_224_dist': 'xcit_large_24_p16_224.fb_dist_in1k', 'xcit_large_24_p16_384_dist': 'xcit_large_24_p16_384.fb_dist_in1k', # Patch size 8 'xcit_nano_12_p8_224_dist': 'xcit_nano_12_p8_224.fb_dist_in1k', 'xcit_nano_12_p8_384_dist': 'xcit_nano_12_p8_384.fb_dist_in1k', 'xcit_tiny_12_p8_224_dist': 'xcit_tiny_12_p8_224.fb_dist_in1k', 'xcit_tiny_12_p8_384_dist': 'xcit_tiny_12_p8_384.fb_dist_in1k', 'xcit_tiny_24_p8_224_dist': 'xcit_tiny_24_p8_224.fb_dist_in1k', 'xcit_tiny_24_p8_384_dist': 'xcit_tiny_24_p8_384.fb_dist_in1k', 'xcit_small_12_p8_224_dist': 'xcit_small_12_p8_224.fb_dist_in1k', 'xcit_small_12_p8_384_dist': 'xcit_small_12_p8_384.fb_dist_in1k', 'xcit_small_24_p8_224_dist': 'xcit_small_24_p8_224.fb_dist_in1k', 'xcit_small_24_p8_384_dist': 'xcit_small_24_p8_384.fb_dist_in1k', 'xcit_medium_24_p8_224_dist': 'xcit_medium_24_p8_224.fb_dist_in1k', 'xcit_medium_24_p8_384_dist': 'xcit_medium_24_p8_384.fb_dist_in1k', 'xcit_large_24_p8_224_dist': 'xcit_large_24_p8_224.fb_dist_in1k', 'xcit_large_24_p8_384_dist': 'xcit_large_24_p8_384.fb_dist_in1k', })
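# ---------------------------------------------------------------------------------------------
# Usage sketch (editorial addition, not part of the upstream module): a minimal smoke test for
# the smallest entrypoint defined above. The expected shapes follow from patch_size=16 and
# embed_dim=128 (14*14 patch tokens plus one class token); run as `python -m timm.models.xcit`
# so the relative imports resolve. Batch size and the commented shapes are illustrative.
# ---------------------------------------------------------------------------------------------
if __name__ == '__main__':
    import torch

    model = xcit_nano_12_p16_224(pretrained=False)
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        tokens = model.forward_features(x)  # should be (1, 197, 128): cls token + 196 patch tokens
        logits = model(x)                   # should be (1, 1000)
    print(tokens.shape, logits.shape)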
pytorch-image-models/timm/models/xcit.py/0
""" Optimizer Factory w/ Custom Weight Decay Hacked together by / Copyright 2021 Ross Wightman """ import logging from itertools import islice from typing import Optional, Callable, Tuple import torch import torch.nn as nn import torch.optim as optim from timm.models import group_parameters from .adabelief import AdaBelief from .adafactor import Adafactor from .adahessian import Adahessian from .adamp import AdamP from .adan import Adan from .lamb import Lamb from .lars import Lars from .lion import Lion from .lookahead import Lookahead from .madgrad import MADGRAD from .nadam import Nadam from .nadamw import NAdamW from .nvnovograd import NvNovoGrad from .radam import RAdam from .rmsprop_tf import RMSpropTF from .sgdp import SGDP from .sgdw import SGDW _logger = logging.getLogger(__name__) # optimizers to default to multi-tensor _DEFAULT_FOREACH = { 'lion', } def param_groups_weight_decay( model: nn.Module, weight_decay=1e-5, no_weight_decay_list=() ): no_weight_decay_list = set(no_weight_decay_list) decay = [] no_decay = [] for name, param in model.named_parameters(): if not param.requires_grad: continue if param.ndim <= 1 or name.endswith(".bias") or name in no_weight_decay_list: no_decay.append(param) else: decay.append(param) return [ {'params': no_decay, 'weight_decay': 0.}, {'params': decay, 'weight_decay': weight_decay}] def _group(it, size): it = iter(it) return iter(lambda: tuple(islice(it, size)), ()) def _layer_map(model, layers_per_group=12, num_groups=None): def _in_head(n, hp): if not hp: return True elif isinstance(hp, (tuple, list)): return any([n.startswith(hpi) for hpi in hp]) else: return n.startswith(hp) head_prefix = getattr(model, 'pretrained_cfg', {}).get('classifier', None) names_trunk = [] names_head = [] for n, _ in model.named_parameters(): names_head.append(n) if _in_head(n, head_prefix) else names_trunk.append(n) # group non-head layers num_trunk_layers = len(names_trunk) if num_groups is not None: layers_per_group = -(num_trunk_layers // -num_groups) names_trunk = list(_group(names_trunk, layers_per_group)) num_trunk_groups = len(names_trunk) layer_map = {n: i for i, l in enumerate(names_trunk) for n in l} layer_map.update({n: num_trunk_groups for n in names_head}) return layer_map def param_groups_layer_decay( model: nn.Module, weight_decay: float = 0.05, no_weight_decay_list: Tuple[str] = (), layer_decay: float = .75, end_layer_decay: Optional[float] = None, verbose: bool = False, ): """ Parameter groups for layer-wise lr decay & weight decay Based on BEiT: https://github.com/microsoft/unilm/blob/master/beit/optim_factory.py#L58 """ no_weight_decay_list = set(no_weight_decay_list) param_group_names = {} # NOTE for debugging param_groups = {} if hasattr(model, 'group_matcher'): # FIXME interface needs more work layer_map = group_parameters(model, model.group_matcher(coarse=False), reverse=True) else: # fallback layer_map = _layer_map(model) num_layers = max(layer_map.values()) + 1 layer_max = num_layers - 1 layer_scales = list(layer_decay ** (layer_max - i) for i in range(num_layers)) for name, param in model.named_parameters(): if not param.requires_grad: continue # no decay: all 1D parameters and model specific ones if param.ndim == 1 or name in no_weight_decay_list: g_decay = "no_decay" this_decay = 0. 
else: g_decay = "decay" this_decay = weight_decay layer_id = layer_map.get(name, layer_max) group_name = "layer_%d_%s" % (layer_id, g_decay) if group_name not in param_groups: this_scale = layer_scales[layer_id] param_group_names[group_name] = { "lr_scale": this_scale, "weight_decay": this_decay, "param_names": [], } param_groups[group_name] = { "lr_scale": this_scale, "weight_decay": this_decay, "params": [], } param_group_names[group_name]["param_names"].append(name) param_groups[group_name]["params"].append(param) if verbose: import json _logger.info("parameter groups: \n%s" % json.dumps(param_group_names, indent=2)) return list(param_groups.values()) def optimizer_kwargs(cfg): """ cfg/argparse to kwargs helper Convert optimizer args in argparse args or cfg like object to keyword args for updated create fn. """ kwargs = dict( opt=cfg.opt, lr=cfg.lr, weight_decay=cfg.weight_decay, momentum=cfg.momentum, ) if getattr(cfg, 'opt_eps', None) is not None: kwargs['eps'] = cfg.opt_eps if getattr(cfg, 'opt_betas', None) is not None: kwargs['betas'] = cfg.opt_betas if getattr(cfg, 'layer_decay', None) is not None: kwargs['layer_decay'] = cfg.layer_decay if getattr(cfg, 'opt_args', None) is not None: kwargs.update(cfg.opt_args) if getattr(cfg, 'opt_foreach', None) is not None: kwargs['foreach'] = cfg.opt_foreach return kwargs def create_optimizer(args, model, filter_bias_and_bn=True): """ Legacy optimizer factory for backwards compatibility. NOTE: Use create_optimizer_v2 for new code. """ return create_optimizer_v2( model, **optimizer_kwargs(cfg=args), filter_bias_and_bn=filter_bias_and_bn, ) def create_optimizer_v2( model_or_params, opt: str = 'sgd', lr: Optional[float] = None, weight_decay: float = 0., momentum: float = 0.9, foreach: Optional[bool] = None, filter_bias_and_bn: bool = True, layer_decay: Optional[float] = None, param_group_fn: Optional[Callable] = None, **kwargs, ): """ Create an optimizer. TODO currently the model is passed in and all parameters are selected for optimization. For more general use an interface that allows selection of parameters to optimize and lr groups, one of: * a filter fn interface that further breaks params into groups in a weight_decay compatible fashion * expose the parameters interface and leave it up to caller Args: model_or_params (nn.Module): model containing parameters to optimize opt: name of optimizer to create lr: initial learning rate weight_decay: weight decay to apply in optimizer momentum: momentum for momentum based optimizers (others may use betas via kwargs) foreach: Enable / disable foreach (multi-tensor) operation if True / False. Choose safe default if None filter_bias_and_bn: filter out bias, bn and other 1d params from weight decay **kwargs: extra optimizer specific kwargs to pass through Returns: Optimizer """ if isinstance(model_or_params, nn.Module): # a model was passed in, extract parameters and add weight decays to appropriate layers no_weight_decay = {} if hasattr(model_or_params, 'no_weight_decay'): no_weight_decay = model_or_params.no_weight_decay() if param_group_fn: parameters = param_group_fn(model_or_params) elif layer_decay is not None: parameters = param_groups_layer_decay( model_or_params, weight_decay=weight_decay, layer_decay=layer_decay, no_weight_decay_list=no_weight_decay, ) weight_decay = 0. elif weight_decay and filter_bias_and_bn: parameters = param_groups_weight_decay(model_or_params, weight_decay, no_weight_decay) weight_decay = 0. 
else: parameters = model_or_params.parameters() else: # iterable of parameters or param groups passed in parameters = model_or_params opt_lower = opt.lower() opt_split = opt_lower.split('_') opt_lower = opt_split[-1] if opt_lower.startswith('fused'): try: from apex.optimizers import FusedNovoGrad, FusedAdam, FusedLAMB, FusedSGD has_apex = True except ImportError: has_apex = False assert has_apex and torch.cuda.is_available(), 'APEX and CUDA required for fused optimizers' if opt_lower.startswith('bnb'): try: import bitsandbytes as bnb has_bnb = True except ImportError: has_bnb = False assert has_bnb and torch.cuda.is_available(), 'bitsandbytes and CUDA required for bnb optimizers' opt_args = dict(weight_decay=weight_decay, **kwargs) if lr is not None: opt_args.setdefault('lr', lr) if foreach is None: if opt in _DEFAULT_FOREACH: opt_args.setdefault('foreach', True) else: opt_args['foreach'] = foreach # basic SGD & related if opt_lower == 'sgd' or opt_lower == 'nesterov': # NOTE 'sgd' refers to SGD + nesterov momentum for legacy / backwards compat reasons opt_args.pop('eps', None) optimizer = optim.SGD(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'momentum': opt_args.pop('eps', None) optimizer = optim.SGD(parameters, momentum=momentum, nesterov=False, **opt_args) elif opt_lower == 'sgdp': optimizer = SGDP(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'sgdw' or opt_lower == 'nesterovw': # NOTE 'sgd' refers to SGD + nesterov momentum for legacy / backwards compat reasons opt_args.pop('eps', None) optimizer = SGDW(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'momentumw': opt_args.pop('eps', None) optimizer = SGDW(parameters, momentum=momentum, nesterov=False, **opt_args) # adaptive elif opt_lower == 'adam': optimizer = optim.Adam(parameters, **opt_args) elif opt_lower == 'adamw': optimizer = optim.AdamW(parameters, **opt_args) elif opt_lower == 'adamp': optimizer = AdamP(parameters, wd_ratio=0.01, nesterov=True, **opt_args) elif opt_lower == 'nadam': try: # NOTE PyTorch >= 1.10 should have native NAdam optimizer = optim.Nadam(parameters, **opt_args) except AttributeError: optimizer = Nadam(parameters, **opt_args) elif opt_lower == 'nadamw': optimizer = NAdamW(parameters, **opt_args) elif opt_lower == 'radam': optimizer = RAdam(parameters, **opt_args) elif opt_lower == 'adamax': optimizer = optim.Adamax(parameters, **opt_args) elif opt_lower == 'adabelief': optimizer = AdaBelief(parameters, rectify=False, **opt_args) elif opt_lower == 'radabelief': optimizer = AdaBelief(parameters, rectify=True, **opt_args) elif opt_lower == 'adadelta': optimizer = optim.Adadelta(parameters, **opt_args) elif opt_lower == 'adagrad': opt_args.setdefault('eps', 1e-8) optimizer = optim.Adagrad(parameters, **opt_args) elif opt_lower == 'adafactor': optimizer = Adafactor(parameters, **opt_args) elif opt_lower == 'adanp': optimizer = Adan(parameters, no_prox=False, **opt_args) elif opt_lower == 'adanw': optimizer = Adan(parameters, no_prox=True, **opt_args) elif opt_lower == 'lamb': optimizer = Lamb(parameters, **opt_args) elif opt_lower == 'lambc': optimizer = Lamb(parameters, trust_clip=True, **opt_args) elif opt_lower == 'larc': optimizer = Lars(parameters, momentum=momentum, trust_clip=True, **opt_args) elif opt_lower == 'lars': optimizer = Lars(parameters, momentum=momentum, **opt_args) elif opt_lower == 'nlarc': optimizer = Lars(parameters, momentum=momentum, trust_clip=True, nesterov=True, **opt_args) elif opt_lower == 
'nlars': optimizer = Lars(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'madgrad': optimizer = MADGRAD(parameters, momentum=momentum, **opt_args) elif opt_lower == 'madgradw': optimizer = MADGRAD(parameters, momentum=momentum, decoupled_decay=True, **opt_args) elif opt_lower == 'novograd' or opt_lower == 'nvnovograd': optimizer = NvNovoGrad(parameters, **opt_args) elif opt_lower == 'rmsprop': optimizer = optim.RMSprop(parameters, alpha=0.9, momentum=momentum, **opt_args) elif opt_lower == 'rmsproptf': optimizer = RMSpropTF(parameters, alpha=0.9, momentum=momentum, **opt_args) elif opt_lower == 'lion': opt_args.pop('eps', None) optimizer = Lion(parameters, **opt_args) # second order elif opt_lower == 'adahessian': optimizer = Adahessian(parameters, **opt_args) # NVIDIA fused optimizers, require APEX to be installed elif opt_lower == 'fusedsgd': opt_args.pop('eps', None) optimizer = FusedSGD(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'fusedmomentum': opt_args.pop('eps', None) optimizer = FusedSGD(parameters, momentum=momentum, nesterov=False, **opt_args) elif opt_lower == 'fusedadam': optimizer = FusedAdam(parameters, adam_w_mode=False, **opt_args) elif opt_lower == 'fusedadamw': optimizer = FusedAdam(parameters, adam_w_mode=True, **opt_args) elif opt_lower == 'fusedlamb': optimizer = FusedLAMB(parameters, **opt_args) elif opt_lower == 'fusednovograd': opt_args.setdefault('betas', (0.95, 0.98)) optimizer = FusedNovoGrad(parameters, **opt_args) # bitsandbytes optimizers, require bitsandbytes to be installed elif opt_lower == 'bnbsgd': opt_args.pop('eps', None) optimizer = bnb.optim.SGD(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'bnbsgd8bit': opt_args.pop('eps', None) optimizer = bnb.optim.SGD8bit(parameters, momentum=momentum, nesterov=True, **opt_args) elif opt_lower == 'bnbmomentum': opt_args.pop('eps', None) optimizer = bnb.optim.SGD(parameters, momentum=momentum, **opt_args) elif opt_lower == 'bnbmomentum8bit': opt_args.pop('eps', None) optimizer = bnb.optim.SGD8bit(parameters, momentum=momentum, **opt_args) elif opt_lower == 'bnbadam': optimizer = bnb.optim.Adam(parameters, **opt_args) elif opt_lower == 'bnbadam8bit': optimizer = bnb.optim.Adam8bit(parameters, **opt_args) elif opt_lower == 'bnbadamw': optimizer = bnb.optim.AdamW(parameters, **opt_args) elif opt_lower == 'bnbadamw8bit': optimizer = bnb.optim.AdamW8bit(parameters, **opt_args) elif opt_lower == 'bnblamb': optimizer = bnb.optim.LAMB(parameters, **opt_args) elif opt_lower == 'bnblamb8bit': optimizer = bnb.optim.LAMB8bit(parameters, **opt_args) elif opt_lower == 'bnblars': optimizer = bnb.optim.LARS(parameters, **opt_args) elif opt_lower == 'bnblarsb8bit': optimizer = bnb.optim.LAMB8bit(parameters, **opt_args) elif opt_lower == 'bnblion': optimizer = bnb.optim.Lion(parameters, **opt_args) elif opt_lower == 'bnblion8bit': optimizer = bnb.optim.Lion8bit(parameters, **opt_args) else: assert False and "Invalid optimizer" raise ValueError if len(opt_split) > 1: if opt_split[0] == 'lookahead': optimizer = Lookahead(optimizer) return optimizer
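# ---------------------------------------------------------------------------------------------
# Usage sketch (editorial addition, not part of the upstream module): shows the default
# bias/norm weight-decay filtering and the 'lookahead_' prefix handling in create_optimizer_v2
# above, on a throwaway model. Hyperparameter values are illustrative; run as
# `python -m timm.optim.optim_factory` so the relative imports resolve.
# ---------------------------------------------------------------------------------------------
if __name__ == '__main__':
    toy = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1),
        nn.BatchNorm2d(8),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 10),
    )
    # filter_bias_and_bn=True (default) puts 1D params (biases, BN weights) in a zero-decay group
    opt = create_optimizer_v2(toy, opt='adamw', lr=1e-3, weight_decay=0.05)
    print([(len(g['params']), g['weight_decay']) for g in opt.param_groups])
    # a 'lookahead_' prefix wraps the base optimizer with the Lookahead helper
    la_opt = create_optimizer_v2(toy, opt='lookahead_adamw', lr=1e-3)
    print(type(la_opt).__name__)  # Lookahead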
pytorch-image-models/timm/optim/optim_factory.py/0
{ "file_path": "pytorch-image-models/timm/optim/optim_factory.py", "repo_id": "pytorch-image-models", "token_count": 6927 }
191
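The block above is the tail of timm's optimizer factory: it lower-cases the requested optimizer name, strips optional `fused` / `bnb` / `lookahead` prefixes, and dispatches to the matching optimizer class with the shared `opt_args`. A minimal usage sketch, assuming the surrounding file defines timm's `create_optimizer_v2` entry point (the function whose body this appears to be in current timm releases); the model and hyperparameters below are placeholders:

```python
import torch.nn as nn

from timm.optim import create_optimizer_v2  # factory assumed to be defined in this file

model = nn.Linear(10, 2)  # stand-in for a real timm model

# 'adamw' dispatches to torch.optim.AdamW in the branch above.
optimizer = create_optimizer_v2(model, opt='adamw', lr=1e-3, weight_decay=0.05)

# 'sgd' maps to SGD with Nesterov momentum for backwards-compatibility, as noted in the code;
# prefixing the name with 'lookahead_' (e.g. 'lookahead_adamw') would wrap the result in Lookahead.
sgd_optimizer = create_optimizer_v2(model.parameters(), opt='sgd', lr=0.1, momentum=0.9)
```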
""" Checkpoint Saver Track top-n training checkpoints and maintain recovery checkpoints on specified intervals. Hacked together by / Copyright 2020 Ross Wightman """ import glob import operator import os import logging import torch from .model import unwrap_model, get_state_dict _logger = logging.getLogger(__name__) class CheckpointSaver: def __init__( self, model, optimizer, args=None, model_ema=None, amp_scaler=None, checkpoint_prefix='checkpoint', recovery_prefix='recovery', checkpoint_dir='', recovery_dir='', decreasing=False, max_history=10, unwrap_fn=unwrap_model): # objects to save state_dicts of self.model = model self.optimizer = optimizer self.args = args self.model_ema = model_ema self.amp_scaler = amp_scaler # state self.checkpoint_files = [] # (filename, metric) tuples in order of decreasing betterness self.best_epoch = None self.best_metric = None self.curr_recovery_file = '' self.last_recovery_file = '' # config self.checkpoint_dir = checkpoint_dir self.recovery_dir = recovery_dir self.save_prefix = checkpoint_prefix self.recovery_prefix = recovery_prefix self.extension = '.pth.tar' self.decreasing = decreasing # a lower metric is better if True self.cmp = operator.lt if decreasing else operator.gt # True if lhs better than rhs self.max_history = max_history self.unwrap_fn = unwrap_fn assert self.max_history >= 1 def save_checkpoint(self, epoch, metric=None): assert epoch >= 0 tmp_save_path = os.path.join(self.checkpoint_dir, 'tmp' + self.extension) last_save_path = os.path.join(self.checkpoint_dir, 'last' + self.extension) self._save(tmp_save_path, epoch, metric) if os.path.exists(last_save_path): os.unlink(last_save_path) # required for Windows support. os.rename(tmp_save_path, last_save_path) worst_file = self.checkpoint_files[-1] if self.checkpoint_files else None if (len(self.checkpoint_files) < self.max_history or metric is None or self.cmp(metric, worst_file[1])): if len(self.checkpoint_files) >= self.max_history: self._cleanup_checkpoints(1) filename = '-'.join([self.save_prefix, str(epoch)]) + self.extension save_path = os.path.join(self.checkpoint_dir, filename) os.link(last_save_path, save_path) self.checkpoint_files.append((save_path, metric)) self.checkpoint_files = sorted( self.checkpoint_files, key=lambda x: x[1], reverse=not self.decreasing) # sort in descending order if a lower metric is not better checkpoints_str = "Current checkpoints:\n" for c in self.checkpoint_files: checkpoints_str += ' {}\n'.format(c) _logger.info(checkpoints_str) if metric is not None and (self.best_metric is None or self.cmp(metric, self.best_metric)): self.best_epoch = epoch self.best_metric = metric best_save_path = os.path.join(self.checkpoint_dir, 'model_best' + self.extension) if os.path.exists(best_save_path): os.unlink(best_save_path) os.link(last_save_path, best_save_path) return (None, None) if self.best_metric is None else (self.best_metric, self.best_epoch) def _save(self, save_path, epoch, metric=None): save_state = { 'epoch': epoch, 'arch': type(self.model).__name__.lower(), 'state_dict': get_state_dict(self.model, self.unwrap_fn), 'optimizer': self.optimizer.state_dict(), 'version': 2, # version < 2 increments epoch before save } if self.args is not None: save_state['arch'] = self.args.model save_state['args'] = self.args if self.amp_scaler is not None: save_state[self.amp_scaler.state_dict_key] = self.amp_scaler.state_dict() if self.model_ema is not None: save_state['state_dict_ema'] = get_state_dict(self.model_ema, self.unwrap_fn) if metric is not None: 
save_state['metric'] = metric torch.save(save_state, save_path) def _cleanup_checkpoints(self, trim=0): trim = min(len(self.checkpoint_files), trim) delete_index = self.max_history - trim if delete_index < 0 or len(self.checkpoint_files) <= delete_index: return to_delete = self.checkpoint_files[delete_index:] for d in to_delete: try: _logger.debug("Cleaning checkpoint: {}".format(d)) os.remove(d[0]) except Exception as e: _logger.error("Exception '{}' while deleting checkpoint".format(e)) self.checkpoint_files = self.checkpoint_files[:delete_index] def save_recovery(self, epoch, batch_idx=0): assert epoch >= 0 filename = '-'.join([self.recovery_prefix, str(epoch), str(batch_idx)]) + self.extension save_path = os.path.join(self.recovery_dir, filename) self._save(save_path, epoch) if os.path.exists(self.last_recovery_file): try: _logger.debug("Cleaning recovery: {}".format(self.last_recovery_file)) os.remove(self.last_recovery_file) except Exception as e: _logger.error("Exception '{}' while removing {}".format(e, self.last_recovery_file)) self.last_recovery_file = self.curr_recovery_file self.curr_recovery_file = save_path def find_recovery(self): recovery_path = os.path.join(self.recovery_dir, self.recovery_prefix) files = glob.glob(recovery_path + '*' + self.extension) files = sorted(files) return files[0] if len(files) else ''
pytorch-image-models/timm/utils/checkpoint_saver.py/0
{ "file_path": "pytorch-image-models/timm/utils/checkpoint_saver.py", "repo_id": "pytorch-image-models", "token_count": 2818 }
192
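`CheckpointSaver` above tracks the top-n checkpoints by a chosen metric and maintains rolling recovery files. A hedged sketch of wiring it into a training loop; the output directory, model, and metric values are placeholders, not values the class requires:

```python
import os

import torch
import torch.nn as nn
from timm.utils import CheckpointSaver  # the class defined above, exposed via timm.utils

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

output_dir = 'output/train'  # placeholder location; must exist before saving
os.makedirs(output_dir, exist_ok=True)

saver = CheckpointSaver(
    model=model,
    optimizer=optimizer,
    checkpoint_dir=output_dir,
    recovery_dir=output_dir,
    decreasing=False,  # higher metric (e.g. top-1 accuracy) is better
    max_history=3,
)

for epoch in range(2):
    # ... train and validate here ...
    metric = 75.0 + epoch  # placeholder validation metric
    best_metric, best_epoch = saver.save_checkpoint(epoch, metric=metric)
```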
#!/usr/bin/env python3 """ ImageNet Validation Script This is intended to be a lean and easily modifiable ImageNet validation script for evaluating pretrained models or training checkpoints against ImageNet or similarly organized image datasets. It prioritizes canonical PyTorch, standard Python style, and good performance. Repurpose as you see fit. Hacked together by Ross Wightman (https://github.com/rwightman) """ import argparse import csv import glob import json import logging import os import time from collections import OrderedDict from contextlib import suppress from functools import partial import torch import torch.nn as nn import torch.nn.parallel from timm.data import create_dataset, create_loader, resolve_data_config, RealLabelsImagenet from timm.layers import apply_test_time_pool, set_fast_norm from timm.models import create_model, load_checkpoint, is_model, list_models from timm.utils import accuracy, AverageMeter, natural_key, setup_default_logging, set_jit_fuser, \ decay_batch_step, check_batch_size_retry, ParseKwargs, reparameterize_model try: from apex import amp has_apex = True except ImportError: has_apex = False has_native_amp = False try: if getattr(torch.cuda.amp, 'autocast') is not None: has_native_amp = True except AttributeError: pass try: from functorch.compile import memory_efficient_fusion has_functorch = True except ImportError as e: has_functorch = False has_compile = hasattr(torch, 'compile') _logger = logging.getLogger('validate') parser = argparse.ArgumentParser(description='PyTorch ImageNet Validation') parser.add_argument('data', nargs='?', metavar='DIR', const=None, help='path to dataset (*deprecated*, use --data-dir)') parser.add_argument('--data-dir', metavar='DIR', help='path to dataset (root dir)') parser.add_argument('--dataset', metavar='NAME', default='', help='dataset type + name ("<type>/<name>") (default: ImageFolder or ImageTar if empty)') parser.add_argument('--split', metavar='NAME', default='validation', help='dataset split (default: validation)') parser.add_argument('--num-samples', default=None, type=int, metavar='N', help='Manually specify num samples in dataset split, for IterableDatasets.') parser.add_argument('--dataset-download', action='store_true', default=False, help='Allow download of dataset for torch/ and tfds/ datasets that support it.') parser.add_argument('--class-map', default='', type=str, metavar='FILENAME', help='path to class to idx mapping file (default: "")') parser.add_argument('--input-key', default=None, type=str, help='Dataset key for input images.') parser.add_argument('--input-img-mode', default=None, type=str, help='Dataset image conversion mode for input images.') parser.add_argument('--target-key', default=None, type=str, help='Dataset key for target labels.') parser.add_argument('--model', '-m', metavar='NAME', default='dpn92', help='model architecture (default: dpn92)') parser.add_argument('--pretrained', dest='pretrained', action='store_true', help='use pre-trained model') parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)') parser.add_argument('-b', '--batch-size', default=256, type=int, metavar='N', help='mini-batch size (default: 256)') parser.add_argument('--img-size', default=None, type=int, metavar='N', help='Input image dimension, uses model default if empty') parser.add_argument('--in-chans', type=int, default=None, metavar='N', help='Image input channels (default: None => 3)') parser.add_argument('--input-size', default=None, 
nargs=3, type=int, metavar='N N N', help='Input all image dimensions (d h w, e.g. --input-size 3 224 224), uses model default if empty') parser.add_argument('--use-train-size', action='store_true', default=False, help='force use of train input size, even when test size is specified in pretrained cfg') parser.add_argument('--crop-pct', default=None, type=float, metavar='N', help='Input image center crop pct') parser.add_argument('--crop-mode', default=None, type=str, metavar='N', help='Input image crop mode (squash, border, center). Model default if None.') parser.add_argument('--crop-border-pixels', type=int, default=None, help='Crop pixels from image border.') parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN', help='Override mean pixel value of dataset') parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD', help='Override std deviation of of dataset') parser.add_argument('--interpolation', default='', type=str, metavar='NAME', help='Image resize interpolation type (overrides model)') parser.add_argument('--num-classes', type=int, default=None, help='Number classes in dataset') parser.add_argument('--gp', default=None, type=str, metavar='POOL', help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.') parser.add_argument('--log-freq', default=10, type=int, metavar='N', help='batch logging frequency (default: 10)') parser.add_argument('--checkpoint', default='', type=str, metavar='PATH', help='path to latest checkpoint (default: none)') parser.add_argument('--num-gpu', type=int, default=1, help='Number of GPUS to use') parser.add_argument('--test-pool', dest='test_pool', action='store_true', help='enable test time pool') parser.add_argument('--no-prefetcher', action='store_true', default=False, help='disable fast prefetcher') parser.add_argument('--pin-mem', action='store_true', default=False, help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.') parser.add_argument('--channels-last', action='store_true', default=False, help='Use channels_last memory layout') parser.add_argument('--device', default='cuda', type=str, help="Device (accelerator) to use.") parser.add_argument('--amp', action='store_true', default=False, help='use NVIDIA Apex AMP or Native AMP for mixed precision training') parser.add_argument('--amp-dtype', default='float16', type=str, help='lower precision AMP dtype (default: float16)') parser.add_argument('--amp-impl', default='native', type=str, help='AMP impl to use, "native" or "apex" (default: native)') parser.add_argument('--tf-preprocessing', action='store_true', default=False, help='Use Tensorflow preprocessing pipeline (require CPU TF installed') parser.add_argument('--use-ema', dest='use_ema', action='store_true', help='use ema version of weights if present') parser.add_argument('--fuser', default='', type=str, help="Select jit fuser. 
One of ('', 'te', 'old', 'nvfuser')") parser.add_argument('--fast-norm', default=False, action='store_true', help='enable experimental fast-norm') parser.add_argument('--reparam', default=False, action='store_true', help='Reparameterize model') parser.add_argument('--model-kwargs', nargs='*', default={}, action=ParseKwargs) scripting_group = parser.add_mutually_exclusive_group() scripting_group.add_argument('--torchscript', default=False, action='store_true', help='torch.jit.script the full model') scripting_group.add_argument('--torchcompile', nargs='?', type=str, default=None, const='inductor', help="Enable compilation w/ specified backend (default: inductor).") scripting_group.add_argument('--aot-autograd', default=False, action='store_true', help="Enable AOT Autograd support.") parser.add_argument('--results-file', default='', type=str, metavar='FILENAME', help='Output csv file for validation results (summary)') parser.add_argument('--results-format', default='csv', type=str, help='Format for results file one of (csv, json) (default: csv).') parser.add_argument('--real-labels', default='', type=str, metavar='FILENAME', help='Real labels JSON file for imagenet evaluation') parser.add_argument('--valid-labels', default='', type=str, metavar='FILENAME', help='Valid label indices txt file for validation of partial label space') parser.add_argument('--retry', default=False, action='store_true', help='Enable batch size decay & retry for single model validation') def validate(args): # might as well try to validate something args.pretrained = args.pretrained or not args.checkpoint args.prefetcher = not args.no_prefetcher if torch.cuda.is_available(): torch.backends.cuda.matmul.allow_tf32 = True torch.backends.cudnn.benchmark = True device = torch.device(args.device) # resolve AMP arguments based on PyTorch / Apex availability use_amp = None amp_autocast = suppress if args.amp: if args.amp_impl == 'apex': assert has_apex, 'AMP impl specified as APEX but APEX is not installed.' assert args.amp_dtype == 'float16' use_amp = 'apex' _logger.info('Validating in mixed precision with NVIDIA APEX AMP.') else: assert has_native_amp, 'Please update PyTorch to a version with native AMP (or use APEX).' assert args.amp_dtype in ('float16', 'bfloat16') use_amp = 'native' amp_dtype = torch.bfloat16 if args.amp_dtype == 'bfloat16' else torch.float16 amp_autocast = partial(torch.autocast, device_type=device.type, dtype=amp_dtype) _logger.info('Validating in mixed precision with native PyTorch AMP.') else: _logger.info('Validating in float32. AMP not enabled.') if args.fuser: set_jit_fuser(args.fuser) if args.fast_norm: set_fast_norm() # create model in_chans = 3 if args.in_chans is not None: in_chans = args.in_chans elif args.input_size is not None: in_chans = args.input_size[0] model = create_model( args.model, pretrained=args.pretrained, num_classes=args.num_classes, in_chans=in_chans, global_pool=args.gp, scriptable=args.torchscript, **args.model_kwargs, ) if args.num_classes is None: assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.' 
args.num_classes = model.num_classes if args.checkpoint: load_checkpoint(model, args.checkpoint, args.use_ema) if args.reparam: model = reparameterize_model(model) param_count = sum([m.numel() for m in model.parameters()]) _logger.info('Model %s created, param count: %d' % (args.model, param_count)) data_config = resolve_data_config( vars(args), model=model, use_test_size=not args.use_train_size, verbose=True, ) test_time_pool = False if args.test_pool: model, test_time_pool = apply_test_time_pool(model, data_config) model = model.to(device) if args.channels_last: model = model.to(memory_format=torch.channels_last) if args.torchscript: assert not use_amp == 'apex', 'Cannot use APEX AMP with torchscripted model' model = torch.jit.script(model) elif args.torchcompile: assert has_compile, 'A version of torch w/ torch.compile() is required for --compile, possibly a nightly.' torch._dynamo.reset() model = torch.compile(model, backend=args.torchcompile) elif args.aot_autograd: assert has_functorch, "functorch is needed for --aot-autograd" model = memory_efficient_fusion(model) if use_amp == 'apex': model = amp.initialize(model, opt_level='O1') if args.num_gpu > 1: model = torch.nn.DataParallel(model, device_ids=list(range(args.num_gpu))) criterion = nn.CrossEntropyLoss().to(device) root_dir = args.data or args.data_dir if args.input_img_mode is None: input_img_mode = 'RGB' if data_config['input_size'][0] == 3 else 'L' else: input_img_mode = args.input_img_mode dataset = create_dataset( root=root_dir, name=args.dataset, split=args.split, download=args.dataset_download, load_bytes=args.tf_preprocessing, class_map=args.class_map, num_samples=args.num_samples, input_key=args.input_key, input_img_mode=input_img_mode, target_key=args.target_key, ) if args.valid_labels: with open(args.valid_labels, 'r') as f: valid_labels = [int(line.rstrip()) for line in f] else: valid_labels = None if args.real_labels: real_labels = RealLabelsImagenet(dataset.filenames(basename=True), real_json=args.real_labels) else: real_labels = None crop_pct = 1.0 if test_time_pool else data_config['crop_pct'] loader = create_loader( dataset, input_size=data_config['input_size'], batch_size=args.batch_size, use_prefetcher=args.prefetcher, interpolation=data_config['interpolation'], mean=data_config['mean'], std=data_config['std'], num_workers=args.workers, crop_pct=crop_pct, crop_mode=data_config['crop_mode'], crop_border_pixels=args.crop_border_pixels, pin_memory=args.pin_mem, device=device, tf_preprocessing=args.tf_preprocessing, ) batch_time = AverageMeter() losses = AverageMeter() top1 = AverageMeter() top5 = AverageMeter() model.eval() with torch.no_grad(): # warmup, reduce variability of first batch time, especially for comparing torchscript vs non input = torch.randn((args.batch_size,) + tuple(data_config['input_size'])).to(device) if args.channels_last: input = input.contiguous(memory_format=torch.channels_last) with amp_autocast(): model(input) end = time.time() for batch_idx, (input, target) in enumerate(loader): if args.no_prefetcher: target = target.to(device) input = input.to(device) if args.channels_last: input = input.contiguous(memory_format=torch.channels_last) # compute output with amp_autocast(): output = model(input) if valid_labels is not None: output = output[:, valid_labels] loss = criterion(output, target) if real_labels is not None: real_labels.add_result(output) # measure accuracy and record loss acc1, acc5 = accuracy(output.detach(), target, topk=(1, 5)) losses.update(loss.item(), input.size(0)) 
top1.update(acc1.item(), input.size(0)) top5.update(acc5.item(), input.size(0)) # measure elapsed time batch_time.update(time.time() - end) end = time.time() if batch_idx % args.log_freq == 0: _logger.info( 'Test: [{0:>4d}/{1}] ' 'Time: {batch_time.val:.3f}s ({batch_time.avg:.3f}s, {rate_avg:>7.2f}/s) ' 'Loss: {loss.val:>7.4f} ({loss.avg:>6.4f}) ' 'Acc@1: {top1.val:>7.3f} ({top1.avg:>7.3f}) ' 'Acc@5: {top5.val:>7.3f} ({top5.avg:>7.3f})'.format( batch_idx, len(loader), batch_time=batch_time, rate_avg=input.size(0) / batch_time.avg, loss=losses, top1=top1, top5=top5 ) ) if real_labels is not None: # real labels mode replaces topk values at the end top1a, top5a = real_labels.get_accuracy(k=1), real_labels.get_accuracy(k=5) else: top1a, top5a = top1.avg, top5.avg results = OrderedDict( model=args.model, top1=round(top1a, 4), top1_err=round(100 - top1a, 4), top5=round(top5a, 4), top5_err=round(100 - top5a, 4), param_count=round(param_count / 1e6, 2), img_size=data_config['input_size'][-1], crop_pct=crop_pct, interpolation=data_config['interpolation'], ) _logger.info(' * Acc@1 {:.3f} ({:.3f}) Acc@5 {:.3f} ({:.3f})'.format( results['top1'], results['top1_err'], results['top5'], results['top5_err'])) return results def _try_run(args, initial_batch_size): batch_size = initial_batch_size results = OrderedDict() error_str = 'Unknown' while batch_size: args.batch_size = batch_size * args.num_gpu # multiply by num-gpu for DataParallel case try: if torch.cuda.is_available() and 'cuda' in args.device: torch.cuda.empty_cache() results = validate(args) return results except RuntimeError as e: error_str = str(e) _logger.error(f'"{error_str}" while running validation.') if not check_batch_size_retry(error_str): break batch_size = decay_batch_step(batch_size) _logger.warning(f'Reducing batch size to {batch_size} for retry.') results['error'] = error_str _logger.error(f'{args.model} failed to validate ({error_str}).') return results _NON_IN1K_FILTERS = ['*_in21k', '*_in22k', '*in12k', '*_dino', '*fcmae', '*seer'] def main(): setup_default_logging() args = parser.parse_args() model_cfgs = [] model_names = [] if os.path.isdir(args.checkpoint): # validate all checkpoints in a path with same model checkpoints = glob.glob(args.checkpoint + '/*.pth.tar') checkpoints += glob.glob(args.checkpoint + '/*.pth') model_names = list_models(args.model) model_cfgs = [(args.model, c) for c in sorted(checkpoints, key=natural_key)] else: if args.model == 'all': # validate all models in a list of names with pretrained checkpoints args.pretrained = True model_names = list_models( pretrained=True, exclude_filters=_NON_IN1K_FILTERS, ) model_cfgs = [(n, '') for n in model_names] elif not is_model(args.model): # model name doesn't exist, try as wildcard filter model_names = list_models( args.model, pretrained=True, ) model_cfgs = [(n, '') for n in model_names] if not model_cfgs and os.path.isfile(args.model): with open(args.model) as f: model_names = [line.rstrip() for line in f] model_cfgs = [(n, None) for n in model_names if n] if len(model_cfgs): _logger.info('Running bulk validation on these pretrained models: {}'.format(', '.join(model_names))) results = [] try: initial_batch_size = args.batch_size for m, c in model_cfgs: args.model = m args.checkpoint = c r = _try_run(args, initial_batch_size) if 'error' in r: continue if args.checkpoint: r['checkpoint'] = args.checkpoint results.append(r) except KeyboardInterrupt as e: pass results = sorted(results, key=lambda x: x['top1'], reverse=True) else: if args.retry: results = 
_try_run(args, args.batch_size) else: results = validate(args) if args.results_file: write_results(args.results_file, results, format=args.results_format) # output results in JSON to stdout w/ delimiter for runner script print(f'--result\n{json.dumps(results, indent=4)}') def write_results(results_file, results, format='csv'): with open(results_file, mode='w') as cf: if format == 'json': json.dump(results, cf, indent=4) else: if not isinstance(results, (list, tuple)): results = [results] if not results: return dw = csv.DictWriter(cf, fieldnames=results[0].keys()) dw.writeheader() for r in results: dw.writerow(r) cf.flush() if __name__ == '__main__': main()
pytorch-image-models/validate.py/0
{ "file_path": "pytorch-image-models/validate.py", "repo_id": "pytorch-image-models", "token_count": 9310 }
193
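The validation script above is driven entirely by its argparse flags. A minimal sketch of invoking it, shown here as a Python subprocess call; the dataset path is a placeholder and the flags used are a small subset of the options defined above:

```python
import subprocess

subprocess.run(
    [
        "python", "validate.py",
        "--data-dir", "/path/to/imagenet",  # hypothetical dataset root
        "--model", "resnet50",
        "--pretrained",
        "--batch-size", "128",
        "--amp",
        "--results-file", "results.csv",
    ],
    check=True,
)
```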
<div align="center"> <a href="https://www.youtube.com/watch?v=jlMAX2Oaht0"> <img width=560 width=315 alt="Making TGI deployment optimal" src="https://huggingface.co/datasets/Narsil/tgi_assets/resolve/main/thumbnail.png"> </a> # Text Generation Inference <a href="https://github.com/huggingface/text-generation-inference"> <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/huggingface/text-generation-inference?style=social"> </a> <a href="https://huggingface.github.io/text-generation-inference"> <img alt="Swagger API documentation" src="https://img.shields.io/badge/API-Swagger-informational"> </a> A Rust, Python and gRPC server for text generation inference. Used in production at [HuggingFace](https://huggingface.co) to power Hugging Chat, the Inference API and Inference Endpoint. </div> ## Table of contents - [Get Started](#get-started) - [API Documentation](#api-documentation) - [Using a private or gated model](#using-a-private-or-gated-model) - [A note on Shared Memory](#a-note-on-shared-memory-shm) - [Distributed Tracing](#distributed-tracing) - [Local Install](#local-install) - [CUDA Kernels](#cuda-kernels) - [Optimized architectures](#optimized-architectures) - [Run Mistral](#run-a-model) - [Run](#run) - [Quantization](#quantization) - [Develop](#develop) - [Testing](#testing) Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and [more](https://huggingface.co/docs/text-generation-inference/supported_models). TGI implements many features, such as: - Simple launcher to serve most popular LLMs - Production ready (distributed tracing with Open Telemetry, Prometheus metrics) - Tensor Parallelism for faster inference on multiple GPUs - Token streaming using Server-Sent Events (SSE) - Continuous batching of incoming requests for increased total throughput - Optimized transformers code for inference using [Flash Attention](https://github.com/HazyResearch/flash-attention) and [Paged Attention](https://github.com/vllm-project/vllm) on the most popular architectures - Quantization with : - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) - [GPT-Q](https://arxiv.org/abs/2210.17323) - [EETQ](https://github.com/NetEase-FuXi/EETQ) - [AWQ](https://github.com/casper-hansen/AutoAWQ) - [Safetensors](https://github.com/huggingface/safetensors) weight loading - Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226) - Logits warper (temperature scaling, top-p, top-k, repetition penalty, more details see [transformers.LogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.LogitsProcessor)) - Stop sequences - Log probabilities - Custom Prompt Generation: Easily generate text by providing custom prompts to guide the model's output - Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance ### Hardware support - [Nvidia](https://github.com/huggingface/text-generation-inference/pkgs/container/text-generation-inference) - [AMD](https://github.com/huggingface/text-generation-inference/pkgs/container/text-generation-inference) (-rocm) - [Inferentia](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference) - [Intel GPU](https://github.com/huggingface/text-generation-inference/pull/1475) - [Gaudi](https://github.com/huggingface/tgi-gaudi) ## Get 
Started ### Docker For a detailed starting guide, please see the [Quick Tour](https://huggingface.co/docs/text-generation-inference/quicktour). The easiest way of getting started is using the official Docker container: ```shell model=HuggingFaceH4/zephyr-7b-beta volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model ``` And then you can make requests like ```bash curl 127.0.0.1:8080/generate \ -X POST \ -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \ -H 'Content-Type: application/json' ``` **Note:** To use NVIDIA GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 12.2 or higher. For running the Docker container on a machine with no GPUs or CUDA support, it is enough to remove the `--gpus all` flag and add `--disable-custom-kernels`, please note CPU is not the intended platform for this project, so performance might be subpar. **Note:** TGI supports AMD Instinct MI210 and MI250 GPUs. Details can be found in the [Supported Hardware documentation](https://huggingface.co/docs/text-generation-inference/supported_models#supported-hardware). To use AMD GPUs, please use `docker run --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4-rocm --model-id $model` instead of the command above. To see all options to serve your models (in the [code](https://github.com/huggingface/text-generation-inference/blob/main/launcher/src/main.rs) or in the cli): ``` text-generation-launcher --help ``` ### API documentation You can consult the OpenAPI documentation of the `text-generation-inference` REST API using the `/docs` route. The Swagger UI is also available at: [https://huggingface.github.io/text-generation-inference](https://huggingface.github.io/text-generation-inference). ### Using a private or gated model You have the option to utilize the `HUGGING_FACE_HUB_TOKEN` environment variable for configuring the token employed by `text-generation-inference`. This allows you to gain access to protected resources. For example, if you want to serve the gated Llama V2 model variants: 1. Go to https://huggingface.co/settings/tokens 2. Copy your cli READ token 3. Export `HUGGING_FACE_HUB_TOKEN=<your cli READ token>` or with Docker: ```shell model=meta-llama/Llama-2-7b-chat-hf volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run token=<your cli READ token> docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model ``` ### A note on Shared Memory (shm) [`NCCL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html) is a communication framework used by `PyTorch` to do distributed training/inference. `text-generation-inference` make use of `NCCL` to enable Tensor Parallelism to dramatically speed up inference for large language models. In order to share data between the different devices of a `NCCL` group, `NCCL` might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible. To allow the container to use 1G of Shared Memory and support SHM sharing, we add `--shm-size 1g` on the above command. 
If you are running `text-generation-inference` inside `Kubernetes`. You can also add Shared Memory to the container by creating a volume with: ```yaml - name: shm emptyDir: medium: Memory sizeLimit: 1Gi ``` and mounting it to `/dev/shm`. Finally, you can also disable SHM sharing by using the `NCCL_SHM_DISABLE=1` environment variable. However, note that this will impact performance. ### Distributed Tracing `text-generation-inference` is instrumented with distributed tracing using OpenTelemetry. You can use this feature by setting the address to an OTLP collector with the `--otlp-endpoint` argument. ### Architecture ![TGI architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/TGI.png) ### Local install You can also opt to install `text-generation-inference` locally. First [install Rust](https://rustup.rs/) and create a Python virtual environment with at least Python 3.9, e.g. using `conda`: ```shell curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh conda create -n text-generation-inference python=3.11 conda activate text-generation-inference ``` You may also need to install Protoc. On Linux: ```shell PROTOC_ZIP=protoc-21.12-linux-x86_64.zip curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*' rm -f $PROTOC_ZIP ``` On MacOS, using Homebrew: ```shell brew install protobuf ``` Then run: ```shell BUILD_EXTENSIONS=True make install # Install repository and HF/transformer fork with CUDA kernels text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2 ``` **Note:** on some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run: ```shell sudo apt-get install libssl-dev gcc -y ``` ## Optimized architectures TGI works out of the box to serve optimized models for all modern models. They can be found in [this list](https://huggingface.co/docs/text-generation-inference/supported_models). Other architectures are supported on a best-effort basis using: `AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")` or `AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")` ## Run locally ### Run ```shell text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2 ``` ### Quantization You can also quantize the weights with bitsandbytes to reduce the VRAM requirement: ```shell text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2 --quantize ``` 4bit quantization is available using the [NF4 and FP4 data types from bitsandbytes](https://arxiv.org/pdf/2305.14314.pdf). It can be enabled by providing `--quantize bitsandbytes-nf4` or `--quantize bitsandbytes-fp4` as a command line argument to `text-generation-launcher`. ## Develop ```shell make server-dev make router-dev ``` ## Testing ```shell # python make python-server-tests make python-client-tests # or both server and client tests make python-tests # rust cargo tests make rust-tests # integration tests make integration-tests ```
text-generation-inference/README.md/0
{ "file_path": "text-generation-inference/README.md", "repo_id": "text-generation-inference", "token_count": 3286 }
194
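For completeness, a Python equivalent of the curl request in the TGI quick start above; it assumes the container from that command is listening on 127.0.0.1:8080 and that the `/generate` route returns a JSON body with a `generated_text` field:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```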
[tool.poetry] name = "text-generation" version = "0.6.1" description = "Hugging Face Text Generation Python Client" license = "Apache-2.0" authors = ["Olivier Dehaene <[email protected]>"] maintainers = ["Olivier Dehaene <[email protected]>"] readme = "README.md" homepage = "https://github.com/huggingface/text-generation-inference" repository = "https://github.com/huggingface/text-generation-inference" [tool.poetry.dependencies] python = "^3.7" pydantic = "> 1.10, < 3" aiohttp = "^3.8" huggingface-hub = ">= 0.12, < 1.0" [tool.poetry.dev-dependencies] pytest = "^6.2.5" pytest-asyncio = "^0.17.2" pytest-cov = "^3.0.0" [tool.pytest.ini_options] asyncio_mode = "auto" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api"
text-generation-inference/clients/python/pyproject.toml/0
{ "file_path": "text-generation-inference/clients/python/pyproject.toml", "repo_id": "text-generation-inference", "token_count": 336 }
195
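The pyproject above packages the `text_generation` Python client. A hedged sketch of calling a running TGI server with it; the `Client` / `generate` / `generate_stream` names follow the client's documented public API, and the local URL assumes the Docker quick-start shown earlier:

```python
from text_generation import Client

client = Client("http://127.0.0.1:8080")

# Single-shot generation
response = client.generate("What is Deep Learning?", max_new_tokens=20)
print(response.generated_text)

# Token streaming via Server-Sent Events
text = ""
for stream_response in client.generate_stream("What is Deep Learning?", max_new_tokens=20):
    if not stream_response.token.special:
        text += stream_response.token.text
print(text)
```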
# Text-generation-launcher arguments <!-- WRAP CODE BLOCKS --> ```shell Text Generation Launcher Usage: text-generation-launcher [OPTIONS] Options: ``` ## MODEL_ID ```shell --model-id <MODEL_ID> The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by `save_pretrained(...)` methods of transformers [env: MODEL_ID=] [default: bigscience/bloom-560m] ``` ## REVISION ```shell --revision <REVISION> The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2` [env: REVISION=] ``` ## VALIDATION_WORKERS ```shell --validation-workers <VALIDATION_WORKERS> The number of tokenizer workers used for payload validation and truncation inside the router [env: VALIDATION_WORKERS=] [default: 2] ``` ## SHARDED ```shell --sharded <SHARDED> Whether to shard the model across multiple GPUs By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard` [env: SHARDED=] [possible values: true, false] ``` ## NUM_SHARD ```shell --num-shard <NUM_SHARD> The number of shards to use if you don't want to use all GPUs on a given machine. You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num_shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num_shard 2` to launch 2 copies with 2 shard each on a given machine with 4 GPUs for instance [env: NUM_SHARD=] ``` ## QUANTIZE ```shell --quantize <QUANTIZE> Whether you want the model to be quantized [env: QUANTIZE=] Possible values: - awq: 4 bit quantization. Requires a specific AWQ quantized model: https://hf.co/models?search=awq. Should replace GPTQ models wherever possible because of the better latency - eetq: 8 bit quantization, doesn't require specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from https://github.com/NetEase-FuXi/EETQ.git - gptq: 4 bit quantization. Requires a specific GTPQ quantized model: https://hf.co/models?search=gptq. text-generation-inference will use exllama (faster) kernels wherever possible, and use triton kernel (wider support) when it's not. AWQ has faster kernels - bitsandbytes: Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16 - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16 - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for you model ``` ## SPECULATE ```shell --speculate <SPECULATE> The number of input_ids to speculate on If using a medusa model, the heads will be picked up automatically Other wise, it will use n-gram speculation which is relatively free in terms of compute, but the speedup heavily depends on the task [env: SPECULATE=] ``` ## DTYPE ```shell --dtype <DTYPE> The dtype to be forced upon the model. This option cannot be used with `--quantize` [env: DTYPE=] [possible values: float16, bfloat16] ``` ## TRUST_REMOTE_CODE ```shell --trust-remote-code Whether you want to execute hub modelling code. 
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision [env: TRUST_REMOTE_CODE=] ``` ## MAX_CONCURRENT_REQUESTS ```shell --max-concurrent-requests <MAX_CONCURRENT_REQUESTS> The maximum amount of concurrent requests for this particular deployment. Having a low limit will refuse clients requests instead of having them wait for too long and is usually good to handle backpressure correctly [env: MAX_CONCURRENT_REQUESTS=] [default: 128] ``` ## MAX_BEST_OF ```shell --max-best-of <MAX_BEST_OF> This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and return the best in terms of overall log probability over the entire generated sequence [env: MAX_BEST_OF=] [default: 2] ``` ## MAX_STOP_SEQUENCES ```shell --max-stop-sequences <MAX_STOP_SEQUENCES> This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt [env: MAX_STOP_SEQUENCES=] [default: 4] ``` ## MAX_TOP_N_TOKENS ```shell --max-top-n-tokens <MAX_TOP_N_TOKENS> This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens is used to return information about the the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like for classification or ranking [env: MAX_TOP_N_TOKENS=] [default: 5] ``` ## MAX_INPUT_LENGTH ```shell --max-input-length <MAX_INPUT_LENGTH> This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompt users can send which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequence they can handle [env: MAX_INPUT_LENGTH=] [default: 1024] ``` ## MAX_TOTAL_TOKENS ```shell --max-total-tokens <MAX_TOTAL_TOKENS> This is the most important value to set as it defines the "memory budget" of running clients requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. with a value of `1512` users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger amount each request will be in your RAM and the less effective batching can be [env: MAX_TOTAL_TOKENS=] [default: 2048] ``` ## WAITING_SERVED_RATIO ```shell --waiting-served-ratio <WAITING_SERVED_RATIO> This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` Means when 12 queries are waiting and there's only 10 queries left in the current batch we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens delaying the 10 running queries by a `prefill` run. This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`. [env: WAITING_SERVED_RATIO=] [default: 1.2] ``` ## MAX_BATCH_PREFILL_TOKENS ```shell --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS> Limits the number of tokens for the prefill operation. 
Since this operation take the most memory and is compute bound, it is interesting to limit the number of requests that can be sent [env: MAX_BATCH_PREFILL_TOKENS=] [default: 4096] ``` ## MAX_BATCH_TOTAL_TOKENS ```shell --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS> **IMPORTANT** This is one critical control to allow maximum usage of the available hardware. This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be equivalent of `batch_size` * `max_total_tokens`. However in the non-padded (flash attention) version this can be much finer. For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens. Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically. [env: MAX_BATCH_TOTAL_TOKENS=] ``` ## MAX_WAITING_TOKENS ```shell --max-waiting-tokens <MAX_WAITING_TOKENS> This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch. With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot. With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s. This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users. [env: MAX_WAITING_TOKENS=] [default: 20] ``` ## HOSTNAME ```shell --hostname <HOSTNAME> The IP address to listen on [env: HOSTNAME=] [default: 0.0.0.0] ``` ## PORT ```shell -p, --port <PORT> The port to listen on [env: PORT=] [default: 3000] ``` ## SHARD_UDS_PATH ```shell --shard-uds-path <SHARD_UDS_PATH> The name of the socket for gRPC communication between the webserver and the shards [env: SHARD_UDS_PATH=] [default: /tmp/text-generation-server] ``` ## MASTER_ADDR ```shell --master-addr <MASTER_ADDR> The address the master shard will listen on. (setting used by torch distributed) [env: MASTER_ADDR=] [default: localhost] ``` ## MASTER_PORT ```shell --master-port <MASTER_PORT> The address the master port will listen on. (setting used by torch distributed) [env: MASTER_PORT=] [default: 29500] ``` ## HUGGINGFACE_HUB_CACHE ```shell --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE> The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance [env: HUGGINGFACE_HUB_CACHE=] ``` ## WEIGHTS_CACHE_OVERRIDE ```shell --weights-cache-override <WEIGHTS_CACHE_OVERRIDE> The location of the huggingface hub cache. 
Used to override the location if you want to provide a mounted disk for instance [env: WEIGHTS_CACHE_OVERRIDE=] ``` ## DISABLE_CUSTOM_KERNELS ```shell --disable-custom-kernels For some models (like bloom), text-generation-inference implemented custom cuda kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues [env: DISABLE_CUSTOM_KERNELS=] ``` ## CUDA_MEMORY_FRACTION ```shell --cuda-memory-fraction <CUDA_MEMORY_FRACTION> Limit the CUDA available memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction [env: CUDA_MEMORY_FRACTION=] [default: 1.0] ``` ## ROPE_SCALING ```shell --rope-scaling <ROPE_SCALING> Rope scaling will only be used for RoPE models and allow rescaling the position rotary to accomodate for larger prompts. Goes together with `rope_factor`. `--rope-factor 2.0` gives linear scaling with a factor of 2.0 `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0 `--rope-scaling linear` gives linear scaling with a factor of 1.0 (Nothing will be changed basically) `--rope-scaling linear --rope-factor` fully describes the scaling you want [env: ROPE_SCALING=] [possible values: linear, dynamic] ``` ## ROPE_FACTOR ```shell --rope-factor <ROPE_FACTOR> Rope scaling will only be used for RoPE models See `rope_scaling` [env: ROPE_FACTOR=] ``` ## JSON_OUTPUT ```shell --json-output Outputs the logs in JSON format (useful for telemetry) [env: JSON_OUTPUT=] ``` ## OTLP_ENDPOINT ```shell --otlp-endpoint <OTLP_ENDPOINT> [env: OTLP_ENDPOINT=] ``` ## CORS_ALLOW_ORIGIN ```shell --cors-allow-origin <CORS_ALLOW_ORIGIN> [env: CORS_ALLOW_ORIGIN=] ``` ## WATERMARK_GAMMA ```shell --watermark-gamma <WATERMARK_GAMMA> [env: WATERMARK_GAMMA=] ``` ## WATERMARK_DELTA ```shell --watermark-delta <WATERMARK_DELTA> [env: WATERMARK_DELTA=] ``` ## NGROK ```shell --ngrok Enable ngrok tunneling [env: NGROK=] ``` ## NGROK_AUTHTOKEN ```shell --ngrok-authtoken <NGROK_AUTHTOKEN> ngrok authentication token [env: NGROK_AUTHTOKEN=] ``` ## NGROK_EDGE ```shell --ngrok-edge <NGROK_EDGE> ngrok edge [env: NGROK_EDGE=] ``` ## TOKENIZER_CONFIG_PATH ```shell --tokenizer-config-path <TOKENIZER_CONFIG_PATH> The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub [env: TOKENIZER_CONFIG_PATH=] ``` ## ENV ```shell -e, --env Display a lot of information about your runtime environment ``` ## HELP ```shell -h, --help Print help (see a summary with '-h') ``` ## VERSION ```shell -V, --version Print version ```
text-generation-inference/docs/source/basic_tutorials/launcher.md/0
{ "file_path": "text-generation-inference/docs/source/basic_tutorials/launcher.md", "repo_id": "text-generation-inference", "token_count": 5833 }
196
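An illustrative launch that combines several of the flags documented above, expressed as a Python subprocess call; the model id is taken from the README earlier in this dump and the limits are example values, not recommended defaults:

```python
import subprocess

subprocess.run(
    [
        "text-generation-launcher",
        "--model-id", "mistralai/Mistral-7B-Instruct-v0.2",
        "--num-shard", "2",
        "--quantize", "eetq",
        "--max-input-length", "2048",
        "--max-total-tokens", "4096",
        "--max-batch-prefill-tokens", "4096",
        "--port", "8080",
    ],
    check=True,
)
```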
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.5625, "text": " dรฉg" }, { "id": 21543, "logprob": -0.14770508, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.4609375, "text": " ort" }, { "id": 35567, "logprob": -7.5585938, "text": "olan" }, { "id": 15, "logprob": -1.4003906, "text": "," }, { "id": 1669, "logprob": -1.5673828, "text": " il" }, { "id": 11580, "logprob": -0.94628906, "text": " faut" }, { "id": 3913, "logprob": -3.703125, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": 0, "tokens": [ { "id": 578, "logprob": -1.6591797, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.4492188, "special": false, "text": " faire" }, { "id": 159570, "logprob": -6.6835938, "special": false, "text": " rรฉch" }, { "id": 810, "logprob": 0.0, "special": false, "text": "au" }, { "id": 12736, "logprob": 0.0, "special": false, "text": "ffer" }, { "id": 1742, "logprob": -2.5175781, "special": false, "text": " au" }, { "id": 6105, "logprob": -2.0078125, "special": false, "text": " bain" }, { "id": 88254, "logprob": -0.12695312, "special": false, "text": "-mar" }, { "id": 641, "logprob": 0.0, "special": false, "text": "ie" }, { "id": 2940, "logprob": -3.5175781, "special": false, "text": " avec" } ] }, "generated_text": " le faire rรฉchauffer au bain-marie avec" }
text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m.json", "repo_id": "text-generation-inference", "token_count": 1544 }
197
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1, "logprob": null, "text": "<s>" }, { "id": 4321, "logprob": -9.59375, "text": "Test" }, { "id": 2009, "logprob": -9.6640625, "text": "request" } ], "seed": null, "tokens": [ { "id": 29918, "logprob": -2.3867188, "special": false, "text": "_" }, { "id": 5338, "logprob": -2.8183594, "special": false, "text": "uri" }, { "id": 13, "logprob": -1.6367188, "special": false, "text": "\n" }, { "id": 3057, "logprob": -1.0527344, "special": false, "text": "Test" }, { "id": 2009, "logprob": -0.6542969, "special": false, "text": " request" }, { "id": 29918, "logprob": -0.056121826, "special": false, "text": "_" }, { "id": 5338, "logprob": -0.01600647, "special": false, "text": "uri" }, { "id": 13, "logprob": -0.87939453, "special": false, "text": "\n" }, { "id": 3057, "logprob": -0.7529297, "special": false, "text": "Test" }, { "id": 2009, "logprob": -0.2980957, "special": false, "text": " request" } ] }, "generated_text": "_uri\nTest request_uri\nTest request" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_gptq/test_flash_llama_gptq.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_gptq/test_flash_llama_gptq.json", "repo_id": "text-generation-inference", "token_count": 1036 }
198
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 563, "logprob": null, "text": "def" }, { "id": 942, "logprob": -5.1367188, "text": " print" }, { "id": 62, "logprob": -0.24450684, "text": "_" }, { "id": 7196, "logprob": -6.9609375, "text": "hello" } ], "seed": null, "tokens": [ { "id": 1241, "logprob": -0.9863281, "special": false, "text": "():" }, { "id": 258, "logprob": -0.21447754, "special": false, "text": "\n " }, { "id": 942, "logprob": -0.43701172, "special": false, "text": " print" }, { "id": 372, "logprob": -0.5361328, "special": false, "text": "(\"" }, { "id": 7371, "logprob": -0.44555664, "special": false, "text": "Hello" }, { "id": 9956, "logprob": -1.2412109, "special": false, "text": " World" }, { "id": 8657, "logprob": -0.7583008, "special": false, "text": "!\")" }, { "id": 185, "logprob": -0.76171875, "special": false, "text": "\n" }, { "id": 185, "logprob": -0.20837402, "special": false, "text": "\n" }, { "id": 1018, "logprob": -1.2470703, "special": false, "text": "print" } ] }, "generated_text": "():\n print(\"Hello World!\")\n\nprint" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_santacoder/test_flash_santacoder.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_santacoder/test_flash_santacoder.json", "repo_id": "text-generation-inference", "token_count": 1111 }
199
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text": "What" }, { "id": 434, "logprob": -4.1953125, "text": "'s" }, { "id": 634, "logprob": -5.125, "text": " your" }, { "id": 12315, "logprob": -9.8828125, "text": " mood" }, { "id": 3063, "logprob": -3.9980469, "text": " today" }, { "id": 32, "logprob": -0.14672852, "text": "?" }, { "id": 50279, "logprob": -0.26489258, "text": "<|ASSISTANT|>" } ], "seed": null, "tokens": [ { "id": 42, "logprob": -0.8618164, "special": false, "text": "I" }, { "id": 1353, "logprob": -0.9506836, "special": false, "text": "'m" }, { "id": 7016, "logprob": -2.1738281, "special": false, "text": " sorry" }, { "id": 13, "logprob": -0.0758667, "special": false, "text": "," }, { "id": 1394, "logprob": -0.9135742, "special": false, "text": "You" }, { "id": 452, "logprob": -1.1445312, "special": false, "text": " have" }, { "id": 247, "logprob": -1.4375, "special": false, "text": " a" }, { "id": 4327, "logprob": -1.1103516, "special": false, "text": " choice" }, { "id": 273, "logprob": -1.0058594, "special": false, "text": " of" }, { "id": 752, "logprob": -1.921875, "special": false, "text": " what" } ] }, "generated_text": "I'm sorry,You have a choice of what" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text": "What" }, { "id": 434, "logprob": -4.1953125, "text": "'s" }, { "id": 634, "logprob": -5.125, "text": " your" }, { "id": 12315, "logprob": -9.8828125, "text": " mood" }, { "id": 3063, "logprob": -3.9980469, "text": " today" }, { "id": 32, "logprob": -0.14672852, "text": "?" }, { "id": 50279, "logprob": -0.26489258, "text": "<|ASSISTANT|>" } ], "seed": null, "tokens": [ { "id": 42, "logprob": -0.8618164, "special": false, "text": "I" }, { "id": 1353, "logprob": -0.9506836, "special": false, "text": "'m" }, { "id": 7016, "logprob": -2.1738281, "special": false, "text": " sorry" }, { "id": 13, "logprob": -0.0758667, "special": false, "text": "," }, { "id": 1394, "logprob": -0.9135742, "special": false, "text": "You" }, { "id": 452, "logprob": -1.1445312, "special": false, "text": " have" }, { "id": 247, "logprob": -1.4375, "special": false, "text": " a" }, { "id": 4327, "logprob": -1.1103516, "special": false, "text": " choice" }, { "id": 273, "logprob": -1.0058594, "special": false, "text": " of" }, { "id": 752, "logprob": -1.921875, "special": false, "text": " what" } ] }, "generated_text": "I'm sorry,You have a choice of what" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text": "What" }, { "id": 434, "logprob": -4.1953125, "text": "'s" }, { "id": 634, "logprob": -5.125, "text": " your" }, { "id": 12315, "logprob": -9.8828125, "text": " mood" }, { "id": 3063, "logprob": -3.9980469, "text": " today" }, { "id": 32, "logprob": -0.14672852, "text": "?" 
}, { "id": 50279, "logprob": -0.26489258, "text": "<|ASSISTANT|>" } ], "seed": null, "tokens": [ { "id": 42, "logprob": -0.8618164, "special": false, "text": "I" }, { "id": 1353, "logprob": -0.9506836, "special": false, "text": "'m" }, { "id": 7016, "logprob": -2.1738281, "special": false, "text": " sorry" }, { "id": 13, "logprob": -0.0758667, "special": false, "text": "," }, { "id": 1394, "logprob": -0.9135742, "special": false, "text": "You" }, { "id": 452, "logprob": -1.1445312, "special": false, "text": " have" }, { "id": 247, "logprob": -1.4375, "special": false, "text": " a" }, { "id": 4327, "logprob": -1.1103516, "special": false, "text": " choice" }, { "id": 273, "logprob": -1.0058594, "special": false, "text": " of" }, { "id": 752, "logprob": -1.921875, "special": false, "text": " what" } ] }, "generated_text": "I'm sorry,You have a choice of what" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text": "What" }, { "id": 434, "logprob": -4.1953125, "text": "'s" }, { "id": 634, "logprob": -5.125, "text": " your" }, { "id": 12315, "logprob": -9.8828125, "text": " mood" }, { "id": 3063, "logprob": -3.9980469, "text": " today" }, { "id": 32, "logprob": -0.14672852, "text": "?" }, { "id": 50279, "logprob": -0.26489258, "text": "<|ASSISTANT|>" } ], "seed": null, "tokens": [ { "id": 42, "logprob": -0.8618164, "special": false, "text": "I" }, { "id": 1353, "logprob": -0.9506836, "special": false, "text": "'m" }, { "id": 7016, "logprob": -2.1738281, "special": false, "text": " sorry" }, { "id": 13, "logprob": -0.0758667, "special": false, "text": "," }, { "id": 1394, "logprob": -0.9135742, "special": false, "text": "You" }, { "id": 452, "logprob": -1.1445312, "special": false, "text": " have" }, { "id": 247, "logprob": -1.4375, "special": false, "text": " a" }, { "id": 4327, "logprob": -1.1103516, "special": false, "text": " choice" }, { "id": 273, "logprob": -1.0058594, "special": false, "text": " of" }, { "id": 752, "logprob": -1.921875, "special": false, "text": " what" } ] }, "generated_text": "I'm sorry,You have a choice of what" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_neox/test_neox_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_neox/test_neox_load.json", "repo_id": "text-generation-inference", "token_count": 6296 }
200
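The snapshot above records four concurrent NeoX generations, each stopped by `finish_reason: "length"` after 10 tokens. Below is a minimal sketch of how such a snapshot could be sanity-checked offline; it assumes the JSON file is available at the relative path shown in the entry's metadata and uses only field names that appear in the snapshot itself.

```python
import json

# Assumption: the snapshot is readable at this relative path (taken from the entry above).
with open("integration-tests/models/__snapshots__/test_neox/test_neox_load.json") as f:
    responses = json.load(f)

# Every response in the load test generated exactly 10 tokens and was cut off by the limit.
assert all(r["details"]["generated_tokens"] == 10 for r in responses)
assert all(r["details"]["finish_reason"] == "length" for r in responses)

# The four concurrent requests recorded here agree on the generated text.
texts = [r["generated_text"] for r in responses]
assert len(set(texts)) == 1, texts

# Cross-check: the per-token records concatenate back to the generated text.
for r in responses:
    rebuilt = "".join(t["text"] for t in r["details"]["tokens"])
    assert rebuilt == r["generated_text"]
```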
import pytest


@pytest.fixture(scope="module")
def flash_phi_handle(launcher):
    with launcher("microsoft/phi-2", num_shard=1) as handle:
        yield handle


@pytest.fixture(scope="module")
async def flash_phi(flash_phi_handle):
    await flash_phi_handle.health(300)
    return flash_phi_handle.client


@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_phi(flash_phi, response_snapshot):
    response = await flash_phi.generate(
        "Test request", max_new_tokens=10, decoder_input_details=True
    )

    assert response.details.generated_tokens == 10
    assert response.generated_text == ': {request}")\n response = self'
    assert response == response_snapshot


@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_phi_all_params(flash_phi, response_snapshot):
    response = await flash_phi.generate(
        "Test request",
        max_new_tokens=10,
        repetition_penalty=1.2,
        return_full_text=True,
        stop_sequences=["network"],
        temperature=0.5,
        top_p=0.9,
        top_k=10,
        truncate=5,
        typical_p=0.9,
        watermark=True,
        decoder_input_details=True,
        seed=0,
    )

    assert response.details.generated_tokens == 6
    assert response.generated_text == "Test request to send data over a network"
    assert response == response_snapshot


@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_phi_load(flash_phi, generate_load, response_snapshot):
    responses = await generate_load(flash_phi, "Test request", max_new_tokens=10, n=4)

    assert len(responses) == 4
    assert all(
        [r.generated_text == responses[0].generated_text for r in responses]
    ), f"{[r.generated_text for r in responses]}"
    assert responses[0].generated_text == ': {request}")\n response = self'

    assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_phi.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_phi.py", "repo_id": "text-generation-inference", "token_count": 749 }
201
use std::fmt;
use std::process::Command;

pub(crate) struct Env {
    cargo_target: &'static str,
    cargo_version: &'static str,
    git_sha: &'static str,
    docker_label: &'static str,
    nvidia_env: String,
}

impl Env {
    pub fn new() -> Self {
        let nvidia_env = nvidia_smi();

        Self {
            nvidia_env: nvidia_env.unwrap_or("N/A".to_string()),
            cargo_target: env!("VERGEN_CARGO_TARGET_TRIPLE"),
            cargo_version: env!("VERGEN_RUSTC_SEMVER"),
            git_sha: option_env!("VERGEN_GIT_SHA").unwrap_or("N/A"),
            docker_label: option_env!("DOCKER_LABEL").unwrap_or("N/A"),
        }
    }
}

impl fmt::Display for Env {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        writeln!(f, "Runtime environment:")?;

        writeln!(f, "Target: {}", self.cargo_target)?;
        writeln!(f, "Cargo version: {}", self.cargo_version)?;
        writeln!(f, "Commit sha: {}", self.git_sha)?;
        writeln!(f, "Docker label: {}", self.docker_label)?;
        write!(f, "nvidia-smi:\n{}", self.nvidia_env)?;

        Ok(())
    }
}

fn nvidia_smi() -> Option<String> {
    let output = Command::new("nvidia-smi").output().ok()?;
    let nvidia_smi = String::from_utf8(output.stdout).ok()?;
    let output = nvidia_smi.replace('\n', "\n ");
    Some(output.trim().to_string())
}
text-generation-inference/launcher/src/env_runtime.rs/0
{ "file_path": "text-generation-inference/launcher/src/env_runtime.rs", "repo_id": "text-generation-inference", "token_count": 650 }
202
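The `Env` helper above gathers build metadata from `vergen`-provided variables at compile time and appends indented `nvidia-smi` output. As a rough illustration of the same reporting pattern, here is a Python sketch that is not part of the repository; it reads the same variable names from the process environment at runtime, whereas the Rust code bakes them in at build time.

```python
import os
import subprocess
from typing import Optional


def nvidia_smi() -> Optional[str]:
    """Return indented nvidia-smi output, or None if the binary is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi"], capture_output=True, text=True, check=False
        )
    except FileNotFoundError:
        return None
    # Mirror the Rust helper: indent continuation lines and trim the result.
    return out.stdout.replace("\n", "\n   ").strip()


def runtime_environment() -> str:
    """Build a report shaped like `impl fmt::Display for Env` above (illustrative only)."""
    return "\n".join(
        [
            "Runtime environment:",
            f"Target: {os.environ.get('VERGEN_CARGO_TARGET_TRIPLE', 'N/A')}",
            f"Cargo version: {os.environ.get('VERGEN_RUSTC_SEMVER', 'N/A')}",
            f"Commit sha: {os.environ.get('VERGEN_GIT_SHA', 'N/A')}",
            f"Docker label: {os.environ.get('DOCKER_LABEL', 'N/A')}",
            f"nvidia-smi:\n{nvidia_smi() or 'N/A'}",
        ]
    )


if __name__ == "__main__":
    print(runtime_environment())
```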
[package]
name = "grpc-metadata"
version = "0.1.0"
edition = "2021"

[dependencies]
opentelemetry = "^0.20"
tonic = "^0.10"
tracing = "^0.1"
tracing-opentelemetry = "^0.21"
text-generation-inference/router/grpc-metadata/Cargo.toml/0
{ "file_path": "text-generation-inference/router/grpc-metadata/Cargo.toml", "repo_id": "text-generation-inference", "token_count": 83 }
203
flash_att_v2_commit_cuda := 02ac572f3ffc4f402e4183aaa6824b45859d3ed3
flash_att_v2_commit_rocm := 8736558c287ff2ef28b24878e42828c595ac3e69

flash-attention-v2-cuda:
	# Clone flash attention
	pip install -U packaging ninja --no-cache-dir
	git clone https://github.com/HazyResearch/flash-attention.git flash-attention-v2

build-flash-attention-v2-cuda: flash-attention-v2-cuda
	cd flash-attention-v2 && git fetch && git checkout $(flash_att_v2_commit_cuda)
	cd flash-attention-v2 && git submodule update --init --recursive
	cd flash-attention-v2 && python setup.py build

install-flash-attention-v2-cuda: build-flash-attention-v2-cuda
	cd flash-attention-v2 && git submodule update --init --recursive && python setup.py install

flash-attention-v2-rocm:
	# Clone flash attention
	pip install -U packaging ninja --no-cache-dir
	git clone https://github.com/fxmarty/flash-attention-rocm flash-attention-v2

build-flash-attention-v2-rocm: flash-attention-v2-rocm
	cd flash-attention-v2 && git fetch && git checkout $(flash_att_v2_commit_rocm)
	cd flash-attention-v2 && git submodule update --init --recursive
	cd flash-attention-v2 && PYTORCH_ROCM_ARCH=gfx90a python setup.py build

install-flash-attention-v2-rocm: build-flash-attention-v2-rocm
	cd flash-attention-v2 && git submodule update --init --recursive && python setup.py install
text-generation-inference/server/Makefile-flash-att-v2/0
{ "file_path": "text-generation-inference/server/Makefile-flash-att-v2", "repo_id": "text-generation-inference", "token_count": 496 }
204
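The Makefile above pins flash-attention v2 to a backend-specific commit before building. As a hedged sketch of what the CUDA path does step by step, here is the same clone/checkout/build sequence expressed with Python's `subprocess`; the commit hash and URLs are copied from the Makefile, and in practice you would simply run `make install-flash-attention-v2-cuda` rather than this script.

```python
import subprocess

# Copied from the Makefile above; not a value chosen here.
FLASH_ATT_V2_COMMIT_CUDA = "02ac572f3ffc4f402e4183aaa6824b45859d3ed3"


def run(cmd, cwd=None):
    """Run a command and fail loudly, mirroring how make stops on the first error."""
    subprocess.run(cmd, cwd=cwd, check=True)


# Illustrative only: the repository drives these steps via `make`.
# flash-attention-v2-cuda target: install build deps and clone the repo.
run(["pip", "install", "-U", "packaging", "ninja", "--no-cache-dir"])
run(["git", "clone", "https://github.com/HazyResearch/flash-attention.git", "flash-attention-v2"])

# build-flash-attention-v2-cuda target: pin the commit, init submodules, build.
run(["git", "fetch"], cwd="flash-attention-v2")
run(["git", "checkout", FLASH_ATT_V2_COMMIT_CUDA], cwd="flash-attention-v2")
run(["git", "submodule", "update", "--init", "--recursive"], cwd="flash-attention-v2")
run(["python", "setup.py", "build"], cwd="flash-attention-v2")

# install-flash-attention-v2-cuda target: install into the current environment.
run(["python", "setup.py", "install"], cwd="flash-attention-v2")
```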
// Adapted from turboderp exllama: https://github.com/turboderp/exllama

#ifndef _hip_compat_cuh
#define _hip_compat_cuh

// Workaround for a bug in hipamd, backported from upstream, this is fixed in ROCm 5.6.
__device__ __forceinline__ __half __compat_hrcp(__half x) {
    return __half_raw{
        static_cast<_Float16>(__builtin_amdgcn_rcph(static_cast<__half_raw>(x).data))};
}

__device__ __forceinline__ __half2 __compat_h2rcp(__half2 x) {
    return _Float16_2{static_cast<_Float16>(__builtin_amdgcn_rcph(x.x)),
                      static_cast<_Float16>(__builtin_amdgcn_rcph(x.y))};
}

#define hrcp __compat_hrcp
#define h2rcp __compat_h2rcp

// Automatic conversion of hipblasHgemm doesn't convert half to hipblasHalf.
__host__ __forceinline__ hipblasStatus_t __compat_hipblasHgemm(hipblasHandle_t handle,
                                                               hipblasOperation_t transA,
                                                               hipblasOperation_t transB,
                                                               int m,
                                                               int n,
                                                               int k,
                                                               const half* alpha,
                                                               const half* AP,
                                                               int lda,
                                                               const half* BP,
                                                               int ldb,
                                                               const half* beta,
                                                               half* CP,
                                                               int ldc) {
    return hipblasHgemm(handle, transA, transB, m, n, k,
                        reinterpret_cast<const hipblasHalf *>(alpha),
                        reinterpret_cast<const hipblasHalf *>(AP), lda,
                        reinterpret_cast<const hipblasHalf *>(BP), ldb,
                        reinterpret_cast<const hipblasHalf *>(beta),
                        reinterpret_cast<hipblasHalf *>(CP), ldc);
}

#define hipblasHgemm __compat_hipblasHgemm

// Previous version of PyTorch were converting to rocBLAS instead of hipBLAS.
#define rocblas_handle hipblasHandle_t
#define rocblas_operation_none HIPBLAS_OP_N
#define rocblas_get_stream hipblasGetStream
#define rocblas_set_stream hipblasSetStream
#define rocblas_hgemm __compat_hipblasHgemm

#endif
text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh", "repo_id": "text-generation-inference", "token_count": 1707 }
205
#ifndef _qdq_3_cuh #define _qdq_3_cuh #include "qdq_util.cuh" #include "../../config.h" #if QMODE_3BIT == 1 // Permutation: // // v9997775 55333111 u8886664 44222000 (u, v lsb) // vjjjhhhf ffdddbbb uiiiggge eecccaaa // vtttrrrp ppnnnlll usssqqqo oommmkkk __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { uint32_t qa = q[0 * stride]; uint32_t qb = q[1 * stride]; uint32_t qc = q[2 * stride]; // qa: aa999888 77766655 54443332 22111000 // qb: lkkkjjji iihhhggg fffeeedd dcccbbba // qc: vvvuuutt tsssrrrq qqpppooo nnnmmmll uint32_t qd = qc >> 26; qc <<= 4; qc |= qb >> 28; qb <<= 2; qb |= qa >> 30; // qa: ..999888 77766655 54443332 22111000 // qb: ..jjjiii hhhgggff feeedddc ccbbbaaa // qc: ..tttsss rrrqqqpp pooonnnm mmlllkkk // qd: vvvuuu uint32_t za = 0; uint32_t zb = 0; uint32_t zc = 0; for (int i = 0; i < 5; i++) { uint32_t t0 = qa & 0x07; uint32_t t1 = (qa & 0x38) >> 3; qa >>= 6; za |= (t0 << (i * 3)); za |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qb & 0x07; uint32_t t1 = (qb & 0x38) >> 3; qb >>= 6; zb |= (t0 << (i * 3)); zb |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qc & 0x07; uint32_t t1 = (qc & 0x38) >> 3; qc >>= 6; zc |= (t0 << (i * 3)); zc |= (t1 << (i * 3 + 16)); } // za: 9997775 55333111 8886664 44222000 // zb: jjjhhhf ffdddbbb iiiggge eecccaaa // zc: tttrrrp ppnnnlll sssqqqo oommmkkk // qd: vvvuuu za |= ((qd & 0x01) >> 0) << 15; zb |= ((qd & 0x02) >> 1) << 15; zc |= ((qd & 0x04) >> 2) << 15; za |= ((qd & 0x08) >> 3) << 31; zb |= ((qd & 0x10) >> 4) << 31; zc |= ((qd & 0x20) >> 5) << 31; // za: v9997775 55333111 u8886664 44222000 (u, v lsb) // zb: vjjjhhhf ffdddbbb uiiiggge eecccaaa // zc: vtttrrrp ppnnnlll usssqqqo oommmkkk q[0 * stride] = za; q[1 * stride] = zb; q[2 * stride] = zc; } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { const uint32_t c0 = 0x64006400; const half y8_ = __float2half_rn(1.0f / 8.0f); const half y64_ = __float2half_rn(1.0f / 64.0f); const half2 y8 = __halves2half2(y8_, y8_); const half2 y64 = __halves2half2(y64_, y64_); const half z1_ = __float2half_rn(-1024.0f - 4.0f); const half z8_ = __float2half_rn(-1024.0f / 8.0f - 4.0f); const half z64_ = __float2half_rn(-1024.0f / 64.0f - 4.0f); const half2 z1 = __halves2half2(z1_, z1_); const half2 z8 = __halves2half2(z8_, z8_); const half2 z64 = __halves2half2(z64_, z64_); uint32_t qa = q_0; uint32_t qb = q_1; uint32_t qc = q_2; half2_uint32 q0((qa & 0x00070007) | c0); // half2(q[ 0], q[ 1]) + 1024 half2_uint32 q1((qa & 0x00380038) | c0); // half2(q[ 2], q[ 3]) * 8 + 1024 qa >>= 6; half2_uint32 q2((qa & 0x00070007) | c0); // half2(q[ 4], q[ 5]) + 1024 half2_uint32 q3((qa & 0x00380038) | c0); // half2(q[ 6], q[ 7]) * 8 + 1024 half2_uint32 q4((qa & 0x01c001c0) | c0); // half2(q[ 8], q[ 9]) * 64 + 1024 qa >>= 9; qa &= 0x00010001; half2_uint32 q5((qb & 0x00070007) | c0); // half2(q[10], q[11]) + 1024 half2_uint32 q6((qb & 0x00380038) | c0); // half2(q[12], q[13]) * 8 + 1024 qb >>= 6; half2_uint32 q7((qb & 0x00070007) | c0); // half2(q[14], q[15]) + 1024 half2_uint32 q8((qb & 0x00380038) | c0); // half2(q[16], q[17]) * 8 + 1024 half2_uint32 q9((qb & 0x01c001c0) | c0); // half2(q[18], q[19]) * 64 + 1024 qb >>= 8; qb &= 0x00020002; half2_uint32 q10((qc & 0x00070007) | c0); // half2(q[20], q[21]) + 1024 half2_uint32 q11((qc & 0x00380038) | c0); // half2(q[22], q[23]) * 8 + 1024 qc >>= 6; half2_uint32 q12((qc & 0x00070007) | c0); // half2(q[24], q[25]) + 
1024 half2_uint32 q13((qc & 0x00380038) | c0); // half2(q[26], q[27]) * 8 + 1024 half2_uint32 q14((qc & 0x01c001c0) | c0); // half2(q[28], q[29]) * 64 + 1024 qc >>= 7; qc &= 0x00040004; half2_uint32 q15((qa | qb | qc) | c0); dq[ 0] = __hadd2( q0.as_half2, z1); dq[ 1] = __hfma2( q1.as_half2, y8, z8); dq[ 2] = __hadd2( q2.as_half2, z1); dq[ 3] = __hfma2( q3.as_half2, y8, z8); dq[ 4] = __hfma2( q4.as_half2, y64, z64); dq[ 5] = __hadd2( q5.as_half2, z1); dq[ 6] = __hfma2( q6.as_half2, y8, z8); dq[ 7] = __hadd2( q7.as_half2, z1); dq[ 8] = __hfma2( q8.as_half2, y8, z8); dq[ 9] = __hfma2( q9.as_half2, y64, z64); dq[10] = __hadd2(q10.as_half2, z1); dq[11] = __hfma2(q11.as_half2, y8, z8); dq[12] = __hadd2(q12.as_half2, z1); dq[13] = __hfma2(q13.as_half2, y8, z8); dq[14] = __hfma2(q14.as_half2, y64, z64); dq[15] = __hadd2(q15.as_half2, z1); } #else __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { half dqh[32]; for (int i = 0; i < 10; i++) dqh[ i] = dq_ns(exb( q_0, i * 3 , 0x07), 4); dqh[10 ] = dq_ns(exb(q_1, q_0, 30, 0x07), 4); for (int i = 0; i < 10; i++) dqh[11 + i] = dq_ns(exb( q_1, i * 3 + 1, 0x07), 4); dqh[21 ] = dq_ns(exb(q_2, q_1, 31, 0x07), 4); for (int i = 0; i < 10; i++) dqh[22 + i] = dq_ns(exb( q_2, i * 3 + 2, 0x07), 4); for (int i = 0; i < 16; i++) dq[i] = __halves2half2(dqh[i * 2], dqh[i * 2 + 1]); } #endif #endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh", "repo_id": "text-generation-inference", "token_count": 3335 }
206
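The `qdq_3.cuh` kernel above packs 32 three-bit weights into exactly three 32-bit words (32 × 3 = 96 = 3 × 32 bits) and dequantizes them in pairs using fp16 FMA tricks, after a shuffle that interleaves the values for efficient in-register extraction. The packing-density argument itself is easier to see in plain integer arithmetic. The sketch below is a simplified illustration only: it uses a straightforward little-endian layout, not the interleaved permutation of the CUDA code, and the zero point of 4 mirrors the symmetric offset used conceptually by the fallback `dq_ns(..., 4)` path.

```python
from typing import List

MASK32 = 0xFFFFFFFF


def pack_3bit_32(values: List[int]) -> List[int]:
    """Pack 32 values in [0, 7] into three 32-bit words (simplified layout)."""
    assert len(values) == 32 and all(0 <= v < 8 for v in values)
    acc = 0
    for i, v in enumerate(values):
        acc |= v << (3 * i)          # value i occupies bits [3i, 3i + 3)
    return [(acc >> (32 * w)) & MASK32 for w in range(3)]


def unpack_3bit_32(words: List[int]) -> List[int]:
    """Inverse of pack_3bit_32."""
    acc = sum((w & MASK32) << (32 * i) for i, w in enumerate(words))
    return [(acc >> (3 * i)) & 0x7 for i in range(32)]


def dequant(q: int, zero: int = 4) -> float:
    """Symmetric dequantization: subtract the 3-bit zero point (assumption: zero = 4)."""
    return float(q - zero)


if __name__ == "__main__":
    vals = [(i * 5) % 8 for i in range(32)]
    packed = pack_3bit_32(vals)
    assert unpack_3bit_32(packed) == vals
    print([dequant(v) for v in unpack_3bit_32(packed)])
```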
import pytest import torch from copy import copy from transformers import AutoTokenizer from text_generation_server.pb import generate_pb2 from text_generation_server.models.causal_lm import CausalLM, CausalLMBatch @pytest.fixture(scope="session") def default_causal_lm(): return CausalLM("gpt2") @pytest.fixture(scope="session") def gpt2_tokenizer(): tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left") tokenizer.pad_token_id = 50256 return tokenizer @pytest.fixture def default_pb_request(default_pb_parameters, default_pb_stop_parameters): return generate_pb2.Request( id=0, inputs="Test", prefill_logprobs=True, truncate=100, parameters=default_pb_parameters, stopping_parameters=default_pb_stop_parameters, ) @pytest.fixture def default_pb_batch(default_pb_request): return generate_pb2.Batch(id=0, requests=[default_pb_request], size=1) @pytest.fixture def default_causal_lm_batch(default_pb_batch, gpt2_tokenizer): return CausalLMBatch.from_pb( default_pb_batch, gpt2_tokenizer, torch.float32, torch.device("cpu") ) @pytest.fixture def default_multi_requests_causal_lm_batch(default_pb_request, gpt2_tokenizer): req_0 = copy(default_pb_request) req_0.id = 1 req_1 = default_pb_request req_1.id = 2 req_1.stopping_parameters.max_new_tokens = 5 batch_pb = generate_pb2.Batch(id=1, requests=[req_0, req_1], size=2) return CausalLMBatch.from_pb( batch_pb, gpt2_tokenizer, torch.float32, torch.device("cpu") ) def test_batch_from_pb(default_pb_batch, default_causal_lm_batch): batch = default_causal_lm_batch assert batch.batch_id == default_pb_batch.id assert batch.requests == default_pb_batch.requests assert len(batch.input_ids) == default_pb_batch.size assert batch.input_ids[0][-1] == 14402 assert torch.all(batch.input_ids[0][:-1] == 50256) assert batch.attention_mask[0, 0] == 1 assert torch.all(batch.attention_mask[0, 1:] == 0) assert batch.past_key_values is None assert all( [ torch.equal(input_ids, all_input_ids[:, 0]) for input_ids, all_input_ids in zip(batch.input_ids, batch.all_input_ids) ] ) assert batch.input_lengths == [1] assert len(batch) == default_pb_batch.size assert len(batch.next_token_choosers) == len(batch.stopping_criterias) == len(batch) assert batch.max_input_length == batch.input_lengths[0] def test_batch_concatenate_no_prefill(default_causal_lm_batch): with pytest.raises(ValueError): CausalLMBatch.concatenate([default_causal_lm_batch, default_causal_lm_batch]) def test_causal_lm_batch_type(default_causal_lm): assert default_causal_lm.batch_type == CausalLMBatch def test_causal_lm_generate_token(default_causal_lm, default_causal_lm_batch): sequence_length = len(default_causal_lm_batch.all_input_ids[0]) generations, next_batch, _ = default_causal_lm.generate_token( default_causal_lm_batch ) assert len(generations) == len(next_batch) assert isinstance(next_batch, CausalLMBatch) assert len(next_batch.all_input_ids) == len(next_batch) assert len(next_batch.all_input_ids[0]) == sequence_length + 1 assert len(next_batch.attention_mask[0]) == 11 assert next_batch.all_input_ids[0][-1] == 13 assert next_batch.all_input_ids[0][-2] == 14402 assert torch.all(next_batch.all_input_ids[0][:-2] == 50256) assert torch.all(next_batch.attention_mask[0][0:2] == 1) assert torch.all(next_batch.attention_mask[0][2:] == 0) assert next_batch.input_ids.shape == (len(next_batch), 1) assert next_batch.input_ids[0, 0] == 13 assert next_batch.input_lengths == [2] assert next_batch.max_input_length == next_batch.input_lengths[0] assert next_batch.past_key_values is not None assert all( [p[0].shape == 
(1, 12, sequence_length, 64) for p in next_batch.past_key_values] ) assert all( [p[1].shape == (1, 12, sequence_length, 64) for p in next_batch.past_key_values] ) assert all([generation.generated_text is None for generation in generations]) assert all([len(generation.prefill_tokens) == 1 for generation in generations]) assert all( [ token_id.item() == 13 for generation in generations for token_id in generation.tokens.token_ids ] ) assert all( [ token_text == "." for generation in generations for token_text in generation.tokens.texts ] ) assert generations[0].request_id == 0 def test_causal_lm_generate_token_completion( default_causal_lm, default_causal_lm_batch ): next_batch = default_causal_lm_batch for _ in range(default_causal_lm_batch.stopping_criterias[0].max_new_tokens - 1): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert generations[0].request_id == default_causal_lm_batch.requests[0].id assert ( generations[0].generated_text.generated_tokens == default_causal_lm_batch.stopping_criterias[0].max_new_tokens ) def test_causal_lm_generate_token_completion_multi( default_causal_lm, default_multi_requests_causal_lm_batch ): next_batch = default_multi_requests_causal_lm_batch for i in range( default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 1 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 2 assert generations[1].generated_text.text == ".java:784)" assert ( generations[1].request_id == default_multi_requests_causal_lm_batch.requests[1].id ) assert ( generations[1].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens ) # Copy stopping_criterias before filtering stopping_criterias = ( default_multi_requests_causal_lm_batch.stopping_criterias.copy() ) next_batch = next_batch.filter([next_batch.requests[0].id]) for _ in range( stopping_criterias[0].max_new_tokens - stopping_criterias[1].max_new_tokens - 1 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." 
assert ( generations[0].request_id == default_multi_requests_causal_lm_batch.requests[0].id ) assert ( generations[0].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens ) def test_batch_concatenate( default_causal_lm, default_causal_lm_batch, default_multi_requests_causal_lm_batch ): next_batch_0 = default_causal_lm_batch _, next_batch_0, _ = default_causal_lm.generate_token(next_batch_0) _, next_batch_0, _ = default_causal_lm.generate_token(next_batch_0) next_batch_1 = default_multi_requests_causal_lm_batch _, next_batch_1, _ = default_causal_lm.generate_token(next_batch_1) # Clone past_key_values before concatenating to compare after, # because they are removed from the concatenated batches next_batch_0_past_key_values = [ (k.clone(), v.clone()) for (k, v) in next_batch_0.past_key_values ] next_batch_1_past_key_values = [ (k.clone(), v.clone()) for (k, v) in next_batch_1.past_key_values ] next_batch = CausalLMBatch.concatenate([next_batch_0, next_batch_1]) assert torch.equal(next_batch.all_input_ids[0], next_batch_0.all_input_ids[0]) assert torch.equal(next_batch.all_input_ids[1], next_batch_1.all_input_ids[0]) assert torch.equal(next_batch.all_input_ids[2], next_batch_1.all_input_ids[1]) assert torch.all( next_batch.attention_mask[0, : -next_batch.padding_right_offset] == 1 ) assert torch.all( next_batch.attention_mask[1:, 1 : -next_batch.padding_right_offset] == 1 ) assert torch.all(next_batch.attention_mask[1:, 3:] == 0) assert next_batch.batch_id == 0 assert next_batch.input_ids[0, 0] == 12355 assert torch.all(next_batch.input_ids[1:] == 13) assert next_batch.input_lengths == [3, 2, 2] assert next_batch.max_input_length == 3 assert next_batch.requests[0] == next_batch_0.requests[0] assert next_batch.requests[1:] == next_batch_1.requests assert next_batch.next_token_choosers[0] == next_batch_0.next_token_choosers[0] assert next_batch.next_token_choosers[1:] == next_batch_1.next_token_choosers assert next_batch.stopping_criterias[0] == next_batch_0.stopping_criterias[0] assert next_batch.stopping_criterias[1:] == next_batch_1.stopping_criterias assert next_batch.past_key_values is not None assert all([p[0].shape == (3, 12, 2, 64) for p in next_batch.past_key_values]) assert all([p[1].shape == (3, 12, 2, 64) for p in next_batch.past_key_values]) for i, past in enumerate(next_batch.past_key_values): assert torch.equal(next_batch_0_past_key_values[i][0][0, :, -2:], past[0][0]) assert torch.equal( next_batch_1_past_key_values[i][0][:, :, -1:], past[0][1:, :, -1:, :] ) assert torch.equal(next_batch_0_past_key_values[i][1][0, :, -2:], past[1][0]) assert torch.equal( next_batch_1_past_key_values[i][1][:, :, -1:], past[1][1:, :, -1:, :] ) for _ in range( default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 2 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 3 assert generations[2].generated_text.text == ".java:784)" assert ( generations[2].request_id == default_multi_requests_causal_lm_batch.requests[1].id ) assert ( generations[2].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens ) next_batch = next_batch.filter( [next_batch.requests[0].id, next_batch.requests[1].id] ) for _ in range( default_causal_lm_batch.stopping_criterias[0].max_new_tokens - 
default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 2 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is not None assert len(generations) == 2 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert generations[0].request_id == default_causal_lm_batch.requests[0].id assert ( generations[0].generated_text.generated_tokens == default_causal_lm_batch.stopping_criterias[0].max_new_tokens ) next_batch = next_batch.filter([next_batch.requests[1].id]) for _ in range( default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens - default_causal_lm_batch.stopping_criterias[0].max_new_tokens - default_multi_requests_causal_lm_batch.stopping_criterias[1].max_new_tokens - 4 ): generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert len(generations) == len(next_batch) generations, next_batch, _ = default_causal_lm.generate_token(next_batch) assert next_batch is None assert len(generations) == 1 assert generations[0].generated_text.text == ".java:784) at net.minecraft." assert ( generations[0].request_id == default_multi_requests_causal_lm_batch.requests[0].id ) assert ( generations[0].generated_text.generated_tokens == default_multi_requests_causal_lm_batch.stopping_criterias[0].max_new_tokens )
text-generation-inference/server/tests/models/test_causal_lm.py/0
{ "file_path": "text-generation-inference/server/tests/models/test_causal_lm.py", "repo_id": "text-generation-inference", "token_count": 5345 }
207
import torch import time from dataclasses import dataclass from opentelemetry import trace from transformers import AutoTokenizer, AutoModelForCausalLM, PreTrainedTokenizerBase from typing import Optional, Tuple, List, Type, Dict from text_generation_server.models import Model from text_generation_server.utils.tokens import batch_top_tokens from text_generation_server.models.types import ( Batch, Tokens, Generation, GeneratedText, ) from text_generation_server.pb import generate_pb2 from text_generation_server.utils import NextTokenChooser, StoppingCriteria, Sampling tracer = trace.get_tracer(__name__) @dataclass class CausalLMBatch(Batch): batch_id: int requests: List[generate_pb2.Request] requests_idx_mapping: Dict[int, int] # Decoder values input_ids: torch.Tensor attention_mask: torch.Tensor position_ids: torch.Tensor past_key_values: Optional[List[Tuple]] # All tokens all_input_ids: List[torch.Tensor] # Lengths of all generations present in the batch input_lengths: List[int] prefix_offsets: List[int] read_offsets: List[int] # Generation helpers next_token_choosers: List[NextTokenChooser] stopping_criterias: List[StoppingCriteria] top_n_tokens: List[int] top_n_tokens_tensor: torch.Tensor # Metadata used for padding max_input_length: int padding_right_offset: int # Maximum number of tokens this batch will grow to max_tokens: int # Past metadata keys_head_dim_last: bool = True def to_pb(self) -> generate_pb2.CachedBatch: return generate_pb2.CachedBatch( id=self.batch_id, request_ids=[r.id for r in self.requests], size=len(self), max_tokens=self.max_tokens, ) @classmethod def from_pb( cls, pb: generate_pb2.Batch, tokenizer: PreTrainedTokenizerBase, dtype: torch.dtype, device: torch.device, ) -> "CausalLMBatch": inputs = [] next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] prefix_offsets = [] read_offsets = [] requests_idx_mapping = {} # Parse batch max_truncation = 0 padding_right_offset = 0 max_decode_tokens = 0 for i, r in enumerate(pb.requests): requests_idx_mapping[r.id] = i inputs.append(r.inputs) next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device)) stopping_criteria = StoppingCriteria.from_pb( r.stopping_parameters, tokenizer ) stopping_criterias.append(stopping_criteria) top_n_tokens.append(r.top_n_tokens) max_truncation = max(max_truncation, r.truncate) max_decode_tokens += stopping_criteria.max_new_tokens padding_right_offset = max( padding_right_offset, stopping_criteria.max_new_tokens ) tokenized_inputs = tokenizer( inputs, return_tensors="pt", padding=True, return_token_type_ids=False, truncation=True, max_length=max_truncation, ).to(device) for _ in pb.requests: input_len = tokenized_inputs["input_ids"].shape[1] prefix_offsets.append(input_len - 5) read_offsets.append(input_len) input_lengths = tokenized_inputs["attention_mask"].sum(1) max_input_length = input_lengths.max() input_ids = tokenized_inputs["input_ids"] # Allocate maximum attention_mask attention_mask = input_ids.new_zeros( (pb.size, max_input_length + padding_right_offset) ) # Copy tokenizer attention_mask into fully allocated attention_mask attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"] position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1 position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1) all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1) top_n_tokens_tensor = torch.tensor( top_n_tokens, device=device, dtype=torch.int64 ) max_tokens = len(inputs) * (max_input_length + max_decode_tokens) return 
cls( batch_id=pb.id, requests=pb.requests, requests_idx_mapping=requests_idx_mapping, input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=None, all_input_ids=list(all_input_ids), input_lengths=input_lengths.tolist(), prefix_offsets=prefix_offsets, read_offsets=read_offsets, next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, top_n_tokens=top_n_tokens, top_n_tokens_tensor=top_n_tokens_tensor, max_input_length=max_input_length.item(), padding_right_offset=padding_right_offset, max_tokens=max_tokens, ) @tracer.start_as_current_span("filter") def filter(self, request_ids: List[int]) -> Optional["CausalLMBatch"]: if len(request_ids) == 0: raise ValueError("Batch must have at least one request") if len(request_ids) == len(self): return self keep_indices = [] # New values after filtering requests_idx_mapping = {} requests = [] input_lengths = [] prefix_offsets = [] read_offsets = [] all_input_ids = [] max_input_length = 0 next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] total_remaining_decode_tokens = 0 new_padding_right_offset = 0 for i, request_id in enumerate(request_ids): idx = self.requests_idx_mapping[request_id] requests_idx_mapping[request_id] = i keep_indices.append(idx) requests.append(self.requests[idx]) prefix_offsets.append(self.prefix_offsets[idx]) read_offsets.append(self.read_offsets[idx]) all_input_ids.append(self.all_input_ids[idx]) request_input_length = self.input_lengths[idx] input_lengths.append(request_input_length) max_input_length = max(max_input_length, request_input_length) next_token_choosers.append(self.next_token_choosers[idx]) stopping_criteria = self.stopping_criterias[idx] stopping_criterias.append(stopping_criteria) top_n_tokens.append(self.top_n_tokens[idx]) remaining_decode_tokens = ( stopping_criteria.max_new_tokens - stopping_criteria.current_tokens ) total_remaining_decode_tokens += remaining_decode_tokens new_padding_right_offset = max( new_padding_right_offset, remaining_decode_tokens ) # Apply indices to input_ids, attention mask, past key values and other items that need to be cached input_ids = self.input_ids[keep_indices] position_ids = self.position_ids[keep_indices] self.attention_mask = self.attention_mask[ keep_indices, -(self.padding_right_offset + max_input_length) : ( self.attention_mask.shape[1] - self.padding_right_offset ) + new_padding_right_offset, ] # Ensure that past_key_values tensors can be updated in-place if type(self.past_key_values[0]) == tuple: self.past_key_values = [list(layer) for layer in self.past_key_values] # Update tensors in-place to allow incremental garbage collection past_kv_length = max_input_length - 1 for layer in self.past_key_values: past_keys, past_values = layer if len(past_keys.shape) == 3: # Force past to be of dim [self_size, num_heads, ...] 
for easy indexing past_keys = past_keys.view(len(self), -1, *past_keys.shape[-2:]) past_values = past_values.view(len(self), -1, *past_values.shape[-2:]) if self.keys_head_dim_last: layer[0] = past_keys[keep_indices, :, -past_kv_length:, :] else: layer[0] = past_keys[keep_indices, :, :, -past_kv_length:] del past_keys layer[1] = past_values[keep_indices, :, -past_kv_length:, :] del past_values top_n_tokens_tensor = self.top_n_tokens_tensor[keep_indices] max_tokens = len(request_ids) * max_input_length + total_remaining_decode_tokens self.requests = requests self.requests_idx_mapping = requests_idx_mapping self.input_ids = input_ids self.position_ids = position_ids self.all_input_ids = all_input_ids self.input_lengths = input_lengths self.prefix_offsets = prefix_offsets self.read_offsets = read_offsets self.next_token_choosers = next_token_choosers self.stopping_criterias = stopping_criterias self.top_n_tokens = top_n_tokens self.top_n_tokens_tensor = top_n_tokens_tensor self.max_input_length = max_input_length self.padding_right_offset = new_padding_right_offset self.max_tokens = max_tokens return self @classmethod @tracer.start_as_current_span("concatenate") def concatenate(cls, batches: List["CausalLMBatch"]) -> "CausalLMBatch": # Used for padding total_batch_size = 0 max_input_length = 0 padding_right_offset = 0 for batch in batches: total_batch_size += len(batch) max_input_length = max(max_input_length, batch.max_input_length) padding_right_offset = max(padding_right_offset, batch.padding_right_offset) # Batch attributes requests = [] requests_idx_mapping = {} input_lengths = [] prefix_offsets = [] read_offsets = [] all_input_ids = [] next_token_choosers = [] stopping_criterias = [] top_n_tokens = [] max_tokens = 0 # Batch tensors input_ids = None attention_mask = None position_ids = None past_key_values = [] top_n_tokens_tensor = None # Used for slicing correctly inside the tensors # Equivalent to a cumsum on batch sizes start_index = 0 for i, batch in enumerate(batches): requests.extend(batch.requests) input_lengths.extend(batch.input_lengths) prefix_offsets.extend(batch.prefix_offsets) read_offsets.extend(batch.read_offsets) all_input_ids.extend(batch.all_input_ids) next_token_choosers.extend(batch.next_token_choosers) stopping_criterias.extend(batch.stopping_criterias) top_n_tokens.extend(batch.top_n_tokens) if i == 0: requests_idx_mapping = batch.requests_idx_mapping else: # We need to offset the mapping for each batch by the cumulative batch size for k, v in batch.requests_idx_mapping.items(): requests_idx_mapping[k] = v + start_index # Slicing end index for this batch end_index = start_index + len(batch) # We only concatenate batches that did at least one step if batch.past_key_values is None: raise ValueError("only concatenate prefilled batches") # Create empty tensor # input_ids is always of shape [batch_size, 1] # We do not need to pad it if input_ids is None: input_ids = batch.input_ids.new_empty((total_batch_size, 1)) # Copy to correct indices input_ids[start_index:end_index] = batch.input_ids # Create padded tensor if attention_mask is None: attention_mask = batch.attention_mask.new_zeros( (total_batch_size, max_input_length + padding_right_offset), ) if top_n_tokens_tensor is None: top_n_tokens_tensor = batches[0].top_n_tokens_tensor.new_zeros( total_batch_size, ) top_n_tokens_tensor[start_index:end_index] = batch.top_n_tokens_tensor # We need to slice the attention mask to remove padding from previous steps # and to remove unused allocated space left_offset = 
max_input_length - batch.max_input_length batch_left_offset = ( batch.attention_mask.shape[1] - batch.max_input_length - batch.padding_right_offset ) attention_mask[ start_index:end_index, left_offset:-padding_right_offset, ] = batch.attention_mask[ :, batch_left_offset : -batch.padding_right_offset, ] # Create empty tensor # position_ids is always of shape [batch_size, 1] if position_ids is None: position_ids = batch.position_ids.new_empty((total_batch_size, 1)) position_ids[start_index:end_index] = batch.position_ids # Shenanigans to get dimensions because BLOOM outputs a past with a different shape # BLOOM Keys: [batch_size * num_heads, head_dim, seq_length] # BLOOM Values: [batch_size * num_heads, seq_length, head_dim] # And ensure that we can update tensors in-place if type(batch.past_key_values[0]) == tuple: batch.past_key_values = [ [t.view(len(batch), -1, *t.shape[-2:]) for t in layer] for layer in batch.past_key_values ] elif len(batch.past_key_values[0][0].shape) == 3: for layer in batch.past_key_values: for k, t in enumerate(layer): layer[k] = t.view(len(batch), -1, *t.shape[-2:]) # Add eventual padding tokens that were added while concatenating max_tokens += batch.max_tokens + ( max_input_length - batch.max_input_length ) * len(batch) start_index = end_index first_past_kvs = batches[0].past_key_values _, num_heads, padded_sequence_length, head_dim = first_past_kvs[0][1].shape padded_past_values_shape = ( total_batch_size, num_heads, max_input_length - 1, head_dim, ) if batches[0].keys_head_dim_last: padded_past_keys_shape = padded_past_values_shape else: # seq_length is last for BLOOM padded_past_keys_shape = ( total_batch_size, num_heads, head_dim, max_input_length - 1, ) # Iterate over attention layers # Concatenate past key values layer by layer to allow incremental garbage collection for j in range(len(first_past_kvs)): padded_past_keys = first_past_kvs[j][0].new_zeros(padded_past_keys_shape) start_index = 0 for batch in batches: past_keys = batch.past_key_values[j][0] # Clear reference to the original tensor batch.past_key_values[j][0] = None # Slicing end index for this batch end_index = start_index + len(batch) # We slice the keys to remove the padding from previous batches past_seq_len = batch.max_input_length - 1 if batch.keys_head_dim_last: padded_past_keys[ start_index:end_index, :, -past_seq_len:, : ] = past_keys[:, :, -past_seq_len:, :] else: # BLOOM case padded_past_keys[ start_index:end_index, :, :, -past_seq_len: ] = past_keys[:, :, :, -past_seq_len:] del past_keys start_index = end_index padded_past_values = first_past_kvs[j][1].new_zeros( padded_past_values_shape ) start_index = 0 for batch in batches: past_values = batch.past_key_values[j][1] # Clear reference to the original tensor batch.past_key_values[j][1] = None # Slicing end index for this batch end_index = start_index + len(batch) # We slice the past values to remove the padding from previous batches past_seq_len = batch.max_input_length - 1 padded_past_values[ start_index:end_index, :, -past_seq_len:, : ] = past_values[:, :, -past_seq_len:, :] del past_values # Update values start_index = end_index past_key_values.append([padded_past_keys, padded_past_values]) return cls( batch_id=batches[0].batch_id, requests=requests, requests_idx_mapping=requests_idx_mapping, input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=past_key_values, all_input_ids=all_input_ids, input_lengths=input_lengths, prefix_offsets=prefix_offsets, read_offsets=read_offsets, 
next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, top_n_tokens=top_n_tokens, top_n_tokens_tensor=top_n_tokens_tensor, max_input_length=max_input_length, padding_right_offset=padding_right_offset, keys_head_dim_last=batches[0].keys_head_dim_last, max_tokens=max_tokens, ) def __len__(self): return len(self.requests) class CausalLM(Model): def __init__( self, model_id: str, revision: Optional[str] = None, quantize: Optional[str] = None, dtype: Optional[torch.dtype] = None, trust_remote_code: bool = False, ): if torch.cuda.is_available(): device = torch.device("cuda") dtype = torch.float16 if dtype is None else dtype else: if quantize: raise ValueError("quantization is not available on CPU") device = torch.device("cpu") dtype = torch.float32 if dtype is None else dtype tokenizer = AutoTokenizer.from_pretrained( model_id, revision=revision, padding_side="left", truncation_side="left", trust_remote_code=trust_remote_code, ) model = AutoModelForCausalLM.from_pretrained( model_id, revision=revision, torch_dtype=dtype, device_map="auto" if torch.cuda.is_available() and torch.cuda.device_count() > 1 else None, load_in_8bit=quantize == "bitsandbytes", trust_remote_code=trust_remote_code, ) if ( torch.cuda.is_available() and torch.cuda.device_count() == 1 and quantize != "bitsandbytes" ): model = model.cuda() if tokenizer.pad_token_id is None: if model.config.pad_token_id is not None: tokenizer.pad_token_id = model.config.pad_token_id elif model.config.eos_token_id is not None: tokenizer.pad_token_id = model.config.eos_token_id elif tokenizer.eos_token_id is not None: tokenizer.pad_token_id = tokenizer.eos_token_id else: tokenizer.add_special_tokens({"pad_token": "[PAD]"}) super(CausalLM, self).__init__( model=model, tokenizer=tokenizer, requires_padding=True, dtype=dtype, device=device, ) @property def batch_type(self) -> Type[CausalLMBatch]: return CausalLMBatch def decode(self, generated_ids: List[int]) -> str: return self.tokenizer.decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) def forward( self, input_ids, attention_mask, position_ids, past_key_values: Optional = None ) -> Tuple[torch.Tensor, List[Tuple[torch.Tensor, torch.Tensor]]]: # Model Forward kwargs = { "input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values, "use_cache": True, "return_dict": True, } if self.has_position_ids: kwargs["position_ids"] = position_ids outputs = self.model.forward(**kwargs) return outputs.logits, outputs.past_key_values @tracer.start_as_current_span("generate_token") def generate_token( self, batch: CausalLMBatch ) -> Tuple[List[Generation], Optional[CausalLMBatch], Tuple[int, int]]: start = time.time_ns() # slice the attention mask to the correct shape attention_mask = batch.attention_mask[:, : -batch.padding_right_offset] logits, past = self.forward( batch.input_ids, attention_mask, batch.position_ids, batch.past_key_values, ) # Results generations: List[Generation] = [] stopped = True # Speculation is not active for causal accepted_ids = torch.ones_like(batch.input_ids)[:, 0] batch_top_token_ids, batch_top_token_logprobs = batch_top_tokens( batch.top_n_tokens, batch.top_n_tokens_tensor, torch.log_softmax(logits[:, -1], -1), accepted_ids, ) start_decode = time.time_ns() # Zipped iterator iterator = zip( batch.requests, batch.input_lengths, batch.prefix_offsets, batch.read_offsets, logits, batch.next_token_choosers, batch.stopping_criterias, batch.all_input_ids, batch.top_n_tokens, batch_top_token_ids, 
batch_top_token_logprobs, ) # For each member of the batch for i, ( request, input_length, prefix_offset, read_offset, logits, next_token_chooser, stopping_criteria, all_input_ids, top_n_tokens, top_token_ids, top_token_logprobs, ) in enumerate(iterator): # Select next token next_token_id, logprobs = next_token_chooser( all_input_ids.view(1, -1), logits[-1:, :] ) # Append next token to all tokens all_input_ids = torch.cat([all_input_ids, next_token_id]) new_input_length = input_length + 1 # Generated token next_token_logprob = logprobs[-1, next_token_id] next_token_id_squeezed = next_token_id.squeeze() next_token_text, prefix_offset, read_offset = self.decode_token( all_input_ids[:, 0], prefix_offset, read_offset ) # Evaluate stopping criteria stop, reason = stopping_criteria( next_token_id_squeezed, next_token_text, ) if not stop: stopped = False # Shard generations # All generations will be appended in the rust sharded client if i % self.world_size == self.rank: if stop: # Decode generated tokens output_text, _, _ = self.decode_token( all_input_ids[:, 0], prefix_offset=len(all_input_ids) - stopping_criteria.current_tokens - 1, read_offset=len(all_input_ids) - stopping_criteria.current_tokens, skip_special_tokens=True, ) # Get seed if isinstance(next_token_chooser.choice, Sampling): seed = next_token_chooser.choice.seed else: seed = None generated_text = GeneratedText( output_text, stopping_criteria.current_tokens, reason, seed ) else: generated_text = None # Prefill if stopping_criteria.current_tokens == 1 and request.prefill_logprobs: # Remove generated token to only have prefill and add nan for first prompt token prefill_logprobs = [float("nan")] + torch.log_softmax( logits, -1 ).gather(1, all_input_ids[1:]).squeeze(1)[ -new_input_length:-1 ].tolist() prefill_token_ids = all_input_ids[-new_input_length:-1] prefill_texts = self.tokenizer.batch_decode( prefill_token_ids, clean_up_tokenization_spaces=False, skip_special_tokens=False, ) prefill_tokens = Tokens( prefill_token_ids, prefill_logprobs, prefill_texts, is_special=[], ) else: prefill_tokens = None if top_n_tokens > 0: all_top_tokens = [] for (top_token_ids, top_token_logprobs) in zip(top_token_ids, top_token_logprobs): toptoken_texts = self.tokenizer.batch_decode( top_token_ids, clean_up_tokenization_spaces=False, skip_special_tokens=False, ) special_toptokens = [ token_id in self.all_special_ids for token_id in top_token_ids ] top_tokens = Tokens( top_token_ids, top_token_logprobs, toptoken_texts, special_toptokens, ) all_top_tokens.append(top_tokens) top_tokens = all_top_tokens else: top_tokens = None generation = Generation( request.id, prefill_tokens, Tokens( [next_token_id_squeezed], [next_token_logprob], [next_token_text], [next_token_id_squeezed.item() in self.all_special_ids], ), generated_text, top_tokens, ) generations.append(generation) # Update values batch.input_ids[i, 0] = next_token_id batch.all_input_ids[i] = all_input_ids batch.input_lengths[i] = new_input_length batch.prefix_offsets[i] = prefix_offset batch.read_offsets[i] = read_offset batch.max_input_length = max(batch.max_input_length, new_input_length) # We finished all generations in the batch; there is no next batch if stopped: forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, None, (forward_ns, decode_ns) # Slice unused values from prefill batch.input_ids = batch.input_ids[:, :1] # Update attention_mask as we added a new token to input_ids batch.attention_mask[:, -batch.padding_right_offset] = 1 # Decrease 
right offset batch.padding_right_offset -= 1 # Update position_ids batch.position_ids = batch.position_ids[:, -1:] + 1 # Update past key values batch.past_key_values = past forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, batch, (forward_ns, decode_ns)
text-generation-inference/server/text_generation_server/models/causal_lm.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/causal_lm.py", "repo_id": "text-generation-inference", "token_count": 14874 }
208
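A detail worth calling out in `CausalLMBatch` above: the attention mask is allocated once at prefill with `padding_right_offset` empty columns on the right, so each decode step only flips one column to 1 and shrinks the offset instead of re-allocating or concatenating the mask. Below is a minimal sketch of that bookkeeping in isolation, with invented toy shapes; it only mirrors the mask handling, not the model forward or token selection.

```python
import torch

# Toy shapes for illustration only.
batch_size, max_input_length, max_new_tokens = 2, 4, 3
padding_right_offset = max_new_tokens

# Prefill: the mask is allocated once, with max_new_tokens unused columns on the right.
attention_mask = torch.zeros(
    (batch_size, max_input_length + padding_right_offset), dtype=torch.long
)
attention_mask[:, :max_input_length] = 1  # prompt tokens (left padding omitted here)

for _ in range(max_new_tokens):
    # The forward pass only sees the currently valid prefix of the mask,
    # mirroring `batch.attention_mask[:, : -batch.padding_right_offset]`.
    step_mask = attention_mask[:, :-padding_right_offset]
    # ... model forward with step_mask and the last generated token ...

    # One new token per sequence: flip its column and shrink the right offset,
    # mirroring the update at the end of `generate_token`.
    attention_mask[:, -padding_right_offset] = 1
    padding_right_offset -= 1

# Every prompt and generated position is now attended.
assert attention_mask.sum().item() == batch_size * (max_input_length + max_new_tokens)
```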
"""A simple, flexible implementation of a GPT model. Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py """ import math import os import warnings from typing import List, Optional, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F from transformers import PreTrainedModel, PreTrainedTokenizer, PreTrainedTokenizerFast from transformers.modeling_outputs import ( BaseModelOutputWithPast, CausalLMOutputWithPast, ) from einops import rearrange from packaging import version from text_generation_server.utils.layers import ( TensorParallelEmbedding, TensorParallelColumnLinear, TensorParallelRowLinear, TensorParallelHead, get_linear, ) EPS = 1e-5 def load_col(config, prefix, weights, bias): assert config.quantize != "gptq", NotImplementedError slice_ = weights._get_slice(f"{prefix}.weight") rank = weights.process_group.rank() size = weights.process_group.size() h3, h = slice_.get_shape() block_size = h // size q_part = slice_[rank * block_size : (rank + 1) * block_size] k_part = slice_[h + rank * block_size : h + (rank + 1) * block_size] v_part = slice_[2 * h + rank * block_size : 2 * h + (rank + 1) * block_size] weight = torch.cat([q_part, k_part, v_part], dim=0) if weight.dtype != torch.int32: weight = weight.to(dtype=weights.dtype) weight = weight.to(device=weights.device) if bias: bias_slice_ = weights._get_slice(f"{prefix}.bias") bias_rank = weights.process_group.rank() bias_size = weights.process_group.size() bias_h = bias_slice_.get_shape() bias_h = bias_h[0] bias_block_size = bias_h // bias_size bias_q_part = bias_slice_[ bias_rank * bias_block_size : (bias_rank + 1) * bias_block_size ] bias_k_part = bias_slice_[ bias_h + bias_rank * bias_block_size : bias_h + (bias_rank + 1) * bias_block_size ] bias_v_part = bias_slice_[ 2 * bias_h + bias_rank * bias_block_size : 2 * bias_h + (bias_rank + 1) * bias_block_size ] bias = torch.cat([bias_q_part, bias_k_part, bias_v_part], dim=0) if bias.dtype != torch.int32: bias = bias.to(dtype=weights.dtype) bias = bias.to(device=weights.device) else: bias = None linear = get_linear(weight, bias, config.quantize) return TensorParallelColumnLinear(linear) def _reset_is_causal( num_query_tokens: int, num_key_tokens: int, original_is_causal: bool ): if original_is_causal and num_query_tokens != num_key_tokens: if num_query_tokens != 1: raise NotImplementedError( "MPT does not support query and key with different number of tokens, unless number of query tokens is 1." 
) else: return False return original_is_causal def scaled_multihead_dot_product_attention( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): q = rearrange(query, "b s (h d) -> b h s d", h=n_heads) kv_n_heads = 1 if multiquery else n_heads k = rearrange(key, "b s (h d) -> b h d s", h=kv_n_heads) v = rearrange(value, "b s (h d) -> b h s d", h=kv_n_heads) if past_key_value is not None: if len(past_key_value) != 0: k = torch.cat([past_key_value[0], k], dim=3) v = torch.cat([past_key_value[1], v], dim=2) past_key_value = (k, v) (b, _, s_q, d) = q.shape s_k = k.size(-1) attn_weight = q.matmul(k) * softmax_scale if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - s_q) _s_k = max(0, attn_bias.size(3) - s_k) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if ( attn_bias.size(-1) != 1 and attn_bias.size(-1) != s_k or (attn_bias.size(-2) != 1 and attn_bias.size(-2) != s_q) ): raise RuntimeError( f"attn_bias (shape: {attn_bias.shape}) is expected to broadcast to shape: {attn_weight.shape}." ) attn_weight = attn_weight + attn_bias min_val = torch.finfo(q.dtype).min if key_padding_mask is not None: if attn_bias is not None: warnings.warn( "Propogating key_padding_mask to the attention module " + "and applying it within the attention module can cause " + "unneccessary computation/memory usage. Consider integrating " + "into attn_bias once and passing that to each attention " + "module instead." ) attn_weight = attn_weight.masked_fill( ~key_padding_mask.view((b, 1, 1, s_k)), min_val ) if is_causal and (not q.size(2) == 1): s = max(s_q, s_k) causal_mask = attn_weight.new_ones(s, s, dtype=torch.float16) causal_mask = causal_mask.tril() causal_mask = causal_mask.to(torch.bool) causal_mask = ~causal_mask causal_mask = causal_mask[-s_q:, -s_k:] attn_weight = attn_weight.masked_fill(causal_mask.view(1, 1, s_q, s_k), min_val) attn_weight = torch.softmax(attn_weight, dim=-1) if dropout_p: attn_weight = torch.nn.functional.dropout( attn_weight, p=dropout_p, training=training, inplace=True ) out = attn_weight.to(v.dtype).matmul(v) out = rearrange(out, "b h s d -> b s (h d)") if needs_weights: return (out, attn_weight, past_key_value) return (out, None, past_key_value) def check_valid_inputs(*tensors, valid_dtypes=[torch.float16, torch.bfloat16]): for tensor in tensors: if tensor.dtype not in valid_dtypes: raise TypeError( f"tensor.dtype={tensor.dtype!r} must be in valid_dtypes={valid_dtypes!r}." ) if not tensor.is_cuda: raise TypeError( f"Inputs must be cuda tensors (tensor.is_cuda={tensor.is_cuda!r})." 
) def flash_attn_fn( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): try: from flash_attn import bert_padding, flash_attn_interface except: raise RuntimeError("Please install flash-attn==1.0.3.post0") check_valid_inputs(query, key, value) if past_key_value is not None: if len(past_key_value) != 0: key = torch.cat([past_key_value[0], key], dim=1) value = torch.cat([past_key_value[1], value], dim=1) past_key_value = (key, value) if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - query.size(1)) _s_k = max(0, attn_bias.size(3) - key.size(1)) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if attn_bias is not None: raise NotImplementedError(f"attn_bias not implemented for flash attn.") (batch_size, seqlen) = query.shape[:2] if key_padding_mask is None: key_padding_mask = torch.ones_like(key[:, :, 0], dtype=torch.bool) query_padding_mask = key_padding_mask[:, -query.size(1) :] (query_unpad, indices_q, cu_seqlens_q, max_seqlen_q) = bert_padding.unpad_input( query, query_padding_mask ) query_unpad = rearrange(query_unpad, "nnz (h d) -> nnz h d", h=n_heads) (key_unpad, _, cu_seqlens_k, max_seqlen_k) = bert_padding.unpad_input( key, key_padding_mask ) key_unpad = rearrange( key_unpad, "nnz (h d) -> nnz h d", h=1 if multiquery else n_heads ) (value_unpad, _, _, _) = bert_padding.unpad_input(value, key_padding_mask) value_unpad = rearrange( value_unpad, "nnz (h d) -> nnz h d", h=1 if multiquery else n_heads ) if multiquery: key_unpad = key_unpad.expand(key_unpad.size(0), n_heads, key_unpad.size(-1)) value_unpad = value_unpad.expand( value_unpad.size(0), n_heads, value_unpad.size(-1) ) dropout_p = dropout_p if training else 0.0 reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) output_unpad = flash_attn_interface.flash_attn_unpadded_func( query_unpad, key_unpad, value_unpad, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale=softmax_scale, causal=reset_is_causal, return_attn_probs=needs_weights, ) output = bert_padding.pad_input( rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices_q, batch_size, seqlen ) return (output, None, past_key_value) def triton_flash_attn_fn( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): try: from .flash_attn_triton import flash_attn_func except: _installed = False if version.parse(torch.__version__) < version.parse("2.0.0"): _installed = True try: from flash_attn.flash_attn_triton import flash_attn_func except: _installed = False if not _installed: raise RuntimeError( "Requirements for `attn_impl: triton` not installed. Either (1) have a CUDA-compatible GPU and `pip install .[gpu]` if installing from llm-foundry source or `pip install triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python` if installing from pypi, or (2) use torch attn model.attn_config.attn_impl=torch (torch attn_impl will be slow). Note: (1) requires you have CMake and PyTorch already installed." 
) check_valid_inputs(query, key, value) if past_key_value is not None: if len(past_key_value) != 0: key = torch.cat([past_key_value[0], key], dim=1) value = torch.cat([past_key_value[1], value], dim=1) past_key_value = (key, value) if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - query.size(1)) _s_k = max(0, attn_bias.size(3) - key.size(1)) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if dropout_p: raise NotImplementedError(f"Dropout not implemented for attn_impl: triton.") if needs_weights: raise NotImplementedError(f"attn_impl: triton cannot return attn weights.") if key_padding_mask is not None: warnings.warn( "Propagating key_padding_mask to the attention module " + "and applying it within the attention module can cause " + "unnecessary computation/memory usage. Consider integrating " + "into attn_bias once and passing that to each attention " + "module instead." ) (b_size, s_k) = key_padding_mask.shape[:2] if attn_bias is None: attn_bias = query.new_zeros(b_size, 1, 1, s_k) attn_bias = attn_bias.masked_fill( ~key_padding_mask.view((b_size, 1, 1, s_k)), torch.finfo(query.dtype).min ) query = rearrange(query, "b s (h d) -> b s h d", h=n_heads) key = rearrange(key, "b s (h d) -> b s h d", h=1 if multiquery else n_heads) value = rearrange(value, "b s (h d) -> b s h d", h=1 if multiquery else n_heads) if multiquery: key = key.expand(*key.shape[:2], n_heads, key.size(-1)) value = value.expand(*value.shape[:2], n_heads, value.size(-1)) reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) attn_output = flash_attn_func( query, key, value, attn_bias, reset_is_causal, softmax_scale ) output = attn_output.view(*attn_output.shape[:2], -1) return (output, None, past_key_value) class MultiheadAttention(nn.Module): """Multi-head self attention. Using torch or triton attention implementation enables user to also use additive bias. 
""" def __init__( self, config, prefix, weights, ): super().__init__() attn_impl = config.attn_config["attn_impl"] self.attn_impl = config.attn_config["attn_impl"] self.clip_qkv = config.attn_config["clip_qkv"] self.qk_ln = config.attn_config["qk_ln"] self.d_model = config.d_model d_model = config.d_model self.n_heads = config.n_heads self.softmax_scale = config.attn_config["softmax_scale"] if self.softmax_scale is None: self.softmax_scale = 1 / math.sqrt(self.d_model / self.n_heads) self.attn_dropout_p = config.attn_config["attn_pdrop"] if self.n_heads % weights.process_group.size() != 0: raise ValueError( f"`n_heads` must be divisible by `num_shards` (got `n_heads`: {self.n_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.n_heads = self.n_heads // weights.process_group.size() self.Wqkv = load_col( config, prefix=f"{prefix}.Wqkv", weights=weights, bias=not config.no_bias ) if self.qk_ln: bias = not config.no_bias hidden_size = config.d_model head_dim = hidden_size // self.n_heads self.q_ln = LPLayerNorm( d_model, bias=bias, prefix=f"{prefix}.q_ln", weights=weights ) self.k_ln = LPLayerNorm( self.n_heads * head_dim, prefix=f"{prefix}.k_ln", weights=weights ) if self.attn_impl == "flash": self.attn_fn = flash_attn_fn elif self.attn_impl == "triton": self.attn_fn = triton_flash_attn_fn elif self.attn_impl == "torch": self.attn_fn = scaled_multihead_dot_product_attention else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") self.out_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.out_proj", weights=weights, bias=not config.no_bias, ) def forward( self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False, ): qkv = self.Wqkv(x) if self.clip_qkv: qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) (query, key, value) = qkv.chunk(3, dim=2) key_padding_mask = attention_mask if self.qk_ln: dtype = query.dtype query = self.q_ln(query).to(dtype) key = self.k_ln(key).to(dtype) (context, attn_weights, past_key_value) = self.attn_fn( query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, ) out = self.out_proj(context) return (out, attn_weights, past_key_value) class MultiQueryAttention(nn.Module): """Multi-Query self attention. Using torch or triton attention implementation enables user to also use additive bias. 
""" def __init__(self, config, prefix, weights): super().__init__() attn_impl = config.attn_config["attn_impl"] self.attn_impl = config.attn_config["attn_impl"] self.clip_qkv = config.attn_config["clip_qkv"] self.qk_ln = config.attn_config["qk_ln"] self.d_model = config.d_model d_model = config.d_model self.n_heads = config.n_heads self.softmax_scale = config.attn_config["softmax_scale"] if self.softmax_scale is None: self.softmax_scale = 1 / math.sqrt(self.head_dim) self.attn_dropout_p = config.attn_config["attn_pdrop"] # self.Wqkv = nn.Linear(d_model, d_model + 2 * self.head_dim, device=device) self.Wqkv = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.Wqkv", weights=weights, bias=not config.no_bias ) fuse_splits = (d_model, d_model + self.head_dim) if self.qk_ln: raise NotImplementedError("qk_ln not supported") if self.attn_impl == "flash": self.attn_fn = flash_attn_fn elif self.attn_impl == "triton": self.attn_fn = triton_flash_attn_fn if verbose: warnings.warn( "While `attn_impl: triton` can be faster than `attn_impl: flash` " + "it uses more memory. When training larger models this can trigger " + "alloc retries which hurts performance. If encountered, we recommend " + "using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`." ) elif self.attn_impl == "torch": self.attn_fn = scaled_multihead_dot_product_attention if torch.cuda.is_available() and verbose: warnings.warn( "Using `attn_impl: torch`. If your model does not use `alibi` or " + "`prefix_lm` we recommend using `attn_impl: flash` otherwise " + "we recommend using `attn_impl: triton`." ) else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") self.out_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.out_proj", weights=weights, bias=not config.no_bias, ) # self.out_proj._is_residual = True def forward( self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False, ): qkv = self.Wqkv(x) if self.clip_qkv: qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) (query, key, value) = qkv.split( [self.d_model, self.head_dim, self.head_dim], dim=2 ) key_padding_mask = attention_mask if self.qk_ln: dtype = query.dtype query = self.q_ln(query).to(dtype) key = self.k_ln(key).to(dtype) (context, attn_weights, past_key_value) = self.attn_fn( query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, multiquery=True, ) return (self.out_proj(context), attn_weights, past_key_value) def attn_bias_shape( attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id ): if attn_impl == "flash": return None elif attn_impl in ["torch", "triton"]: if alibi: if (prefix_lm or not causal) or use_sequence_id: return (1, n_heads, seq_len, seq_len) return (1, n_heads, 1, seq_len) elif prefix_lm or use_sequence_id: return (1, 1, seq_len, seq_len) return None else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") def build_attn_bias( attn_impl, attn_bias, n_heads, seq_len, causal=False, alibi=False, alibi_bias_max=8 ): if attn_impl == "flash": return None elif attn_impl in ["torch", "triton"]: if alibi: (device, dtype) = (attn_bias.device, attn_bias.dtype) attn_bias = attn_bias.add( build_alibi_bias( n_heads, seq_len, full=not causal, alibi_bias_max=alibi_bias_max, device=device, dtype=dtype, ) ) return attn_bias else: raise 
ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") def gen_slopes(n_heads, alibi_bias_max=8, device=None): _n_heads = 2 ** math.ceil(math.log2(n_heads)) m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device) m = m.mul(alibi_bias_max / _n_heads) slopes = 1.0 / torch.pow(2, m) if _n_heads != n_heads: slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads] return slopes.view(1, n_heads, 1, 1) def build_alibi_bias( n_heads, seq_len, full=False, alibi_bias_max=8, device=None, dtype=None ): alibi_bias = torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view( 1, 1, 1, seq_len ) if full: alibi_bias = alibi_bias - torch.arange( 1 - seq_len, 1, dtype=torch.int32, device=device ).view(1, 1, seq_len, 1) alibi_bias = alibi_bias.abs().mul(-1) slopes = gen_slopes(n_heads, alibi_bias_max, device=device) alibi_bias = alibi_bias * slopes return alibi_bias.to(dtype=dtype) ATTN_CLASS_REGISTRY = { "multihead_attention": MultiheadAttention, "multiquery_attention": MultiQueryAttention, } """GPT Blocks used for the GPT Model.""" class MPTMLP(nn.Module): def __init__(self, config, prefix, weights): super().__init__() # self.up_proj = nn.Linear(d_model, expansion_ratio * d_model, device=device) self.up_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.up_proj", weights=weights, bias=not config.no_bias ) self.act = nn.GELU(approximate="none") # self.down_proj = nn.Linear(expansion_ratio * d_model, d_model, device=device) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=not config.no_bias, ) # self.down_proj._is_residual = True def forward(self, x): return self.down_proj(self.act(self.up_proj(x))) class MPTBlock(nn.Module): def __init__(self, config, prefix, weights): super().__init__() self.prefix = prefix if config.attn_config["attn_type"] != "multihead_attention": raise NotImplementedError( f"""Not implemented attn {config.attn_config["attn_type"]}""" ) resid_pdrop = config.resid_pdrop if config.no_bias: self.norm_1 = nn.LayerNorm.load_no_bias( prefix=f"{prefix}.norm_1", weights=weights, eps=EPS ) self.norm_2 = nn.LayerNorm.load_no_bias( prefix=f"{prefix}.norm_2", weights=weights, eps=EPS ) else: self.norm_1 = nn.LayerNorm.load( prefix=f"{prefix}.norm_1", weights=weights, eps=EPS ) self.norm_2 = nn.LayerNorm.load( prefix=f"{prefix}.norm_2", weights=weights, eps=EPS ) self.attn = MultiheadAttention(config, prefix=f"{prefix}.attn", weights=weights) self.ffn = MPTMLP(config, prefix=f"{prefix}.ffn", weights=weights) self.resid_attn_dropout = nn.Dropout(resid_pdrop) self.resid_ffn_dropout = nn.Dropout(resid_pdrop) def forward( self, x: torch.Tensor, past_key_value: Optional[Tuple[torch.Tensor]] = None, attn_bias: Optional[torch.Tensor] = None, attention_mask: Optional[torch.ByteTensor] = None, is_causal: bool = True, ) -> Tuple[torch.Tensor, Optional[Tuple[torch.Tensor]]]: a = self.norm_1(x) (b, attn_weights, past_key_value) = self.attn( a, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=is_causal, ) x = x + self.resid_attn_dropout(b) m = self.norm_2(x) n = self.ffn(m) x = x + self.resid_ffn_dropout(n) return (x, attn_weights, past_key_value) def _cast_if_autocast_enabled(tensor): if torch.is_autocast_enabled(): if tensor.device.type == "cuda": dtype = torch.get_autocast_gpu_dtype() elif tensor.device.type == "cpu": dtype = torch.get_autocast_cpu_dtype() else: raise NotImplementedError() return tensor.to(dtype=dtype) return tensor class 
LPLayerNorm(torch.nn.LayerNorm): def __init__( self, normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None, bias: Optional[bool] = True, prefix=None, weights=None, ): super().__init__( normalized_shape=normalized_shape, eps=eps, elementwise_affine=elementwise_affine, device=device, dtype=dtype, bias=bias, ) if weights is not None: self.weight = nn.Parameter(weights.get_sharded(f"{prefix}.weight", dim=0)) if bias: self.bias = nn.Parameter(weights.get_sharded(f"{prefix}.bias", dim=0)) self.normalized_shape = self.weight.shape def forward(self, x): module_device = x.device downcast_x = _cast_if_autocast_enabled(x) downcast_weight = ( _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight ) downcast_bias = ( _cast_if_autocast_enabled(self.bias) if self.bias is not None else self.bias ) with torch.autocast(enabled=False, device_type=module_device.type): return torch.nn.functional.layer_norm( downcast_x, self.normalized_shape, downcast_weight, downcast_bias, self.eps, ) def rms_norm(x, weight=None, eps=1e-05): output = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) if weight is not None: return output * weight return output class RMSNorm(torch.nn.Module): def __init__( self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None ): super().__init__() self.eps = eps if weight: self.weight = torch.nn.Parameter( torch.ones(normalized_shape, dtype=dtype, device=device) ) else: self.register_parameter("weight", None) def forward(self, x): return rms_norm(x.float(), self.weight, self.eps).to(dtype=x.dtype) class LPRMSNorm(RMSNorm): def __init__( self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None ): super().__init__( normalized_shape=normalized_shape, eps=eps, weight=weight, dtype=dtype, device=device, ) def forward(self, x): downcast_x = _cast_if_autocast_enabled(x) downcast_weight = ( _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight ) with torch.autocast(enabled=False, device_type=x.device.type): return rms_norm(downcast_x, downcast_weight, self.eps).to(dtype=x.dtype) NORM_CLASS_REGISTRY = { "layernorm": torch.nn.LayerNorm, "low_precision_layernorm": LPLayerNorm, "rmsnorm": RMSNorm, "low_precision_rmsnorm": LPRMSNorm, } Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast] class MPTPreTrainedModel(PreTrainedModel): base_model_prefix = "model" _no_split_modules = ["MPTBlock"] class MPTModel(MPTPreTrainedModel): def __init__(self, config, weights): # config._validate_config() super().__init__(config) self.world_size = weights.process_group.size() self.rank = weights.process_group.rank() self.n_heads = config.n_heads self.attn_impl = config.attn_config["attn_impl"] self.prefix_lm = config.attn_config["prefix_lm"] self.attn_uses_sequence_id = config.attn_config["attn_uses_sequence_id"] self.alibi = config.attn_config["alibi"] self.alibi_bias_max = config.attn_config["alibi_bias_max"] if config.init_device == "mixed": if dist.get_local_rank() == 0: config.init_device = "cpu" else: config.init_device = "meta" if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys(): norm_options = " | ".join(NORM_CLASS_REGISTRY.keys()) raise NotImplementedError( f"Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options})." ) if config.norm_type.lower() != "low_precision_layernorm": raise NotImplementedError( f"Requested norm type ({config.norm_type}) is not implemented within this repo." 
) self.wte = TensorParallelEmbedding("transformer.wte", weights) if not self.alibi: self.wpe = TensorParallelEmbedding("transformer.wpe", weights) self.blocks = nn.ModuleList( [ MPTBlock(config, prefix=f"transformer.blocks.{i}", weights=weights) for i in range(config.n_layers) ] ) if config.no_bias: self.norm_f = nn.LayerNorm.load_no_bias( prefix="transformer.norm_f", weights=weights, eps=EPS ) else: self.norm_f = nn.LayerNorm.load( prefix="transformer.norm_f", weights=weights, eps=EPS ) self.is_causal = not self.prefix_lm self._attn_bias_initialized = False self.attn_bias = None self.attn_bias_shape = attn_bias_shape( self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id, ) if config.no_bias: for module in self.modules(): if hasattr(module, "bias") and isinstance(module.bias, nn.Parameter): if config.verbose: warnings.warn(f"Removing bias ({module.bias}) from {module}.") module.register_parameter("bias", None) if hasattr(self.config, "verbose"): if config.verbose and config.verbose > 2: print(self) if "verbose" not in self.config.init_config: self.config.init_config["verbose"] = self.config.verbose if self.config.init_config["verbose"] > 1: init_fn_name = self.config.init_config["name"] warnings.warn(f"Using {init_fn_name} initialization.") @torch.no_grad() def _attn_bias( self, device, dtype, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, ): if not self._attn_bias_initialized: if self.attn_bias_shape: self.attn_bias = torch.zeros( self.attn_bias_shape, device=device, dtype=dtype ) self.attn_bias = build_attn_bias( self.attn_impl, self.attn_bias, self.config.n_heads, self.config.max_seq_len, causal=self.is_causal, alibi=self.alibi, alibi_bias_max=self.alibi_bias_max, ) assert self.n_heads % self.world_size == 0 block_size = self.n_heads // self.world_size self.attn_bias = self.attn_bias[ :, self.rank * block_size : (self.rank + 1) * block_size ] self._attn_bias_initialized = True if self.attn_impl == "flash": return (self.attn_bias, attention_mask) if self.attn_bias is not None: self.attn_bias = self.attn_bias.to(dtype=dtype, device=device) attn_bias = self.attn_bias if self.prefix_lm: assert isinstance(attn_bias, torch.Tensor) assert isinstance(prefix_mask, torch.Tensor) attn_bias = self._apply_prefix_mask(attn_bias, prefix_mask) if self.attn_uses_sequence_id and sequence_id is not None: assert isinstance(attn_bias, torch.Tensor) attn_bias = self._apply_sequence_id(attn_bias, sequence_id) if attention_mask is not None: s_k = attention_mask.shape[-1] if attn_bias is None: attn_bias = torch.zeros((1, 1, 1, s_k), device=device, dtype=dtype) else: _s_k = max(0, attn_bias.size(-1) - s_k) attn_bias = attn_bias[:, :, :, _s_k:] if prefix_mask is not None and attention_mask.shape != prefix_mask.shape: raise ValueError( f"attention_mask shape={attention_mask.shape} " + f"and prefix_mask shape={prefix_mask.shape} are not equal." ) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill( ~attention_mask.view(-1, 1, 1, s_k), min_val ) return (attn_bias, None) def _apply_prefix_mask(self, attn_bias: torch.Tensor, prefix_mask: torch.Tensor): (s_k, s_q) = attn_bias.shape[-2:] if s_k != self.config.max_seq_len or s_q != self.config.max_seq_len: raise ValueError( "attn_bias does not match the expected shape. 
" + f"The last two dimensions should both be {self.config.max_length} " + f"but are {s_k} and {s_q}." ) seq_len = prefix_mask.shape[-1] if seq_len > self.config.max_seq_len: raise ValueError( f"prefix_mask sequence length cannot exceed max_seq_len={self.config.max_seq_len}" ) attn_bias = attn_bias[..., :seq_len, :seq_len] causal = torch.tril( torch.ones((seq_len, seq_len), dtype=torch.bool, device=prefix_mask.device) ).view(1, 1, seq_len, seq_len) prefix = prefix_mask.view(-1, 1, 1, seq_len) cannot_attend = ~torch.logical_or(causal, prefix.bool()) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill(cannot_attend, min_val) return attn_bias def _apply_sequence_id( self, attn_bias: torch.Tensor, sequence_id: torch.LongTensor ): seq_len = sequence_id.shape[-1] if seq_len > self.config.max_seq_len: raise ValueError( f"sequence_id sequence length cannot exceed max_seq_len={self.config.max_seq_len}" ) attn_bias = attn_bias[..., :seq_len, :seq_len] cannot_attend = torch.logical_not( torch.eq(sequence_id.view(-1, seq_len, 1), sequence_id.view(-1, 1, seq_len)) ).unsqueeze(1) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill(cannot_attend, min_val) return attn_bias def forward( self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, return_dict: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, use_cache: Optional[bool] = None, ): return_dict = ( return_dict if return_dict is not None else self.config.return_dict ) use_cache = use_cache if use_cache is not None else self.config.use_cache if attention_mask is not None: attention_mask = attention_mask.bool() if prefix_mask is not None: prefix_mask = prefix_mask.bool() if not return_dict: raise NotImplementedError( "return_dict False is not implemented yet for MPT" ) if output_attentions: if self.attn_impl != "torch": raise NotImplementedError( "output_attentions is not implemented for MPT when using attn_impl `flash` or `triton`." ) if ( attention_mask is not None and attention_mask[:, 0].sum() != attention_mask.shape[0] and self.training ): raise NotImplementedError( "MPT does not support training with left padding." ) if self.prefix_lm and prefix_mask is None: raise ValueError( "prefix_mask is a required argument when MPT is configured with prefix_lm=True." ) if self.training: if self.attn_uses_sequence_id and sequence_id is None: raise ValueError( "sequence_id is a required argument when MPT is configured with attn_uses_sequence_id=True " + "and the model is in train mode." ) elif self.attn_uses_sequence_id is False and sequence_id is not None: warnings.warn( "MPT received non-None input for `sequence_id` but is configured with attn_uses_sequence_id=False. " + "This input will be ignored. If you want the model to use `sequence_id`, set attn_uses_sequence_id to True." 
) S = input_ids.size(1) assert ( S <= self.config.max_seq_len ), f"Cannot forward input with seq_len={S}, this model only supports seq_len<={self.config.max_seq_len}" tok_emb = self.wte(input_ids) if self.alibi: x = tok_emb else: past_position = 0 if past_key_values is not None: if len(past_key_values) != self.config.n_layers: raise ValueError( f"past_key_values must provide a past_key_value for each attention " + f"layer in the network (len(past_key_values)={len(past_key_values)!r}; self.config.n_layers={self.config.n_layers!r})." ) past_position = past_key_values[0][0].size(1) if self.attn_impl == "torch": past_position = past_key_values[0][0].size(3) if S + past_position > self.config.max_seq_len: raise ValueError( f"Cannot forward input with past sequence length {past_position} and current sequence length {S + 1}, this model only supports total sequence length <= {self.config.max_seq_len}." ) pos = torch.arange( past_position, S + past_position, dtype=torch.long, device=input_ids.device, ).unsqueeze(0) if attention_mask is not None: pos = torch.clamp( pos - torch.cumsum((~attention_mask).to(torch.int32), dim=1)[ :, past_position: ], min=0, ) pos_emb = self.wpe(pos) x = tok_emb + pos_emb (attn_bias, attention_mask) = self._attn_bias( device=x.device, dtype=torch.float32, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, ) if use_cache and past_key_values is None: past_key_values = [() for _ in range(self.config.n_layers)] all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None for b_idx, block in enumerate(self.blocks): if output_hidden_states: assert all_hidden_states is not None all_hidden_states = all_hidden_states + (x,) past_key_value = ( past_key_values[b_idx] if past_key_values is not None else None ) (x, attn_weights, past_key_value) = block( x, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=self.is_causal, ) if past_key_values is not None: past_key_values[b_idx] = past_key_value if output_attentions: assert all_self_attns is not None all_self_attns = all_self_attns + (attn_weights,) x = self.norm_f(x) if output_hidden_states: assert all_hidden_states is not None all_hidden_states = all_hidden_states + (x,) return BaseModelOutputWithPast( last_hidden_state=x, past_key_values=past_key_values, hidden_states=all_hidden_states, attentions=all_self_attns, ) class MPTForCausalLM(MPTPreTrainedModel): def __init__(self, config, weights): super().__init__(config) if not config.tie_word_embeddings: raise ValueError("MPTForCausalLM only supports tied word embeddings") self.transformer = MPTModel(config, weights) self.lm_head = TensorParallelHead.load( config, prefix="transformer.wte", weights=weights ) self.logit_scale = None if config.logit_scale is not None: logit_scale = config.logit_scale if isinstance(logit_scale, str): if logit_scale == "inv_sqrt_d_model": logit_scale = 1 / math.sqrt(config.d_model) else: raise ValueError( f"logit_scale={logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'." 
) self.logit_scale = logit_scale def forward( self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, return_dict: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, use_cache: Optional[bool] = None, ): return_dict = ( return_dict if return_dict is not None else self.config.return_dict ) use_cache = use_cache if use_cache is not None else self.config.use_cache outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, return_dict=return_dict, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache, ) logits = self.lm_head(outputs.last_hidden_state) if self.logit_scale is not None: if self.logit_scale == 0: warnings.warn( f"Multiplying logits by self.logit_scale={self.logit_scale!r}. This will produce uniform (uninformative) outputs." ) logits *= self.logit_scale loss = None if labels is not None: labels = torch.roll(labels, shifts=-1) labels[:, -1] = -100 loss = F.cross_entropy( logits.view(-1, logits.size(-1)), labels.to(logits.device).view(-1) ) return CausalLMOutputWithPast( loss=loss, logits=logits, past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) def prepare_inputs_for_generation( self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs ): if inputs_embeds is not None: raise NotImplementedError("inputs_embeds is not implemented for MPT yet") attention_mask = kwargs["attention_mask"].bool() if attention_mask[:, -1].sum() != attention_mask.shape[0]: raise NotImplementedError( "MPT does not support generation with right padding." ) if self.transformer.attn_uses_sequence_id and self.training: sequence_id = torch.zeros_like(input_ids[:1]) else: sequence_id = None if past_key_values is not None: input_ids = input_ids[:, -1].unsqueeze(-1) if self.transformer.prefix_lm: prefix_mask = torch.ones_like(attention_mask) if kwargs.get("use_cache") == False: raise NotImplementedError( "MPT with prefix_lm=True does not support use_cache=False." ) else: prefix_mask = None return { "input_ids": input_ids, "attention_mask": attention_mask, "prefix_mask": prefix_mask, "sequence_id": sequence_id, "past_key_values": past_key_values, "use_cache": kwargs.get("use_cache", True), } @staticmethod def _reorder_cache(past_key_values, beam_idx): """Used by HuggingFace generate when using beam search with kv-caching. See https://github.com/huggingface/transformers/blob/3ec7a47664ebe40c40f4b722f6bb1cd30c3821ec/src/transformers/models/gpt2/modeling_gpt2.py#L1122-L1133 for an example in transformers. """ reordered_past = [] for layer_past in past_key_values: reordered_past += [ tuple( (past_state.index_select(0, beam_idx) for past_state in layer_past) ) ] return reordered_past
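# Illustrative sketch, not part of the original module: the gen_slopes /
# build_alibi_bias helpers above implement ALiBi, where each attention head gets a
# fixed slope and attention logits are penalised linearly with key distance. The
# self-contained toy below re-derives that math for made-up sizes (n_heads=4,
# seq_len=6) so the resulting bias tensor can be inspected; `toy_slopes` and
# `toy_alibi_bias` are names local to this sketch, not library API.
import math

import torch


def toy_slopes(n_heads: int, alibi_bias_max: int = 8) -> torch.Tensor:
    # Round n_heads up to a power of two, space the exponents evenly, and take
    # reciprocal powers of two, mirroring gen_slopes above.
    _n_heads = 2 ** math.ceil(math.log2(n_heads))
    m = torch.arange(1, _n_heads + 1, dtype=torch.float32) * (alibi_bias_max / _n_heads)
    slopes = 1.0 / torch.pow(2, m)
    if _n_heads != n_heads:
        # Interleave: odd-indexed slopes first, as gen_slopes does.
        slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads]
    return slopes.view(1, n_heads, 1, 1)


def toy_alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Signed distance of each key position from the last query position,
    # scaled per head; broadcasting gives shape (1, n_heads, 1, seq_len).
    distances = torch.arange(1 - seq_len, 1, dtype=torch.float32).view(1, 1, 1, seq_len)
    return distances * toy_slopes(n_heads)


if __name__ == "__main__":
    bias = toy_alibi_bias(n_heads=4, seq_len=6)
    print(bias.shape)     # torch.Size([1, 4, 1, 6])
    print(bias[0, 0, 0])  # 0 for the newest key, increasingly negative for older keys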
text-generation-inference/server/text_generation_server/models/custom_modeling/mpt_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/mpt_modeling.py", "repo_id": "text-generation-inference", "token_count": 23558 }
209
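# Illustrative sketch, not part of the original module: MultiQueryAttention in
# mpt_modeling.py above projects d_model query features but only one head's worth
# of key/value features, and the attention helpers broadcast that single KV head
# across every query head. The toy tensors below (made-up sizes) show just the
# split / rearrange / expand step, using einops.rearrange as the module does.
import torch
from einops import rearrange

batch, seq_len, n_heads, head_dim = 2, 5, 4, 8
d_model = n_heads * head_dim

# What Wqkv would emit: queries for every head, but a single shared K and V head.
qkv = torch.randn(batch, seq_len, d_model + 2 * head_dim)
query, key, value = qkv.split([d_model, head_dim, head_dim], dim=2)

query = rearrange(query, "b s (h d) -> b s h d", h=n_heads)  # (2, 5, 4, 8)
key = rearrange(key, "b s (h d) -> b s h d", h=1)            # (2, 5, 1, 8)
value = rearrange(value, "b s (h d) -> b s h d", h=1)

# The shared KV head is expanded (a view, no copy) to match the query heads.
key = key.expand(*key.shape[:2], n_heads, key.size(-1))
value = value.expand(*value.shape[:2], n_heads, value.size(-1))
print(query.shape, key.shape, value.shape)  # all torch.Size([2, 5, 4, 8])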
import torch import time from dataclasses import dataclass from opentelemetry import trace from transformers import ( AutoProcessor, AutoTokenizer, PreTrainedTokenizerBase, ProcessorMixin, ) from typing import Optional, Tuple, List, Type, Dict from text_generation_server.models import Model from text_generation_server.models.types import ( Batch, Tokens, Generation, GeneratedText, ) from text_generation_server.pb import generate_pb2 from text_generation_server.utils import NextTokenChooser, StoppingCriteria, Sampling import re IMAGES = re.compile(r"!\[[^\]]*\]\((.*?)\s*(\"(?:.*[^\"])\")?\s*\)") def split(string): parts = [] cursor = 0 for pattern in IMAGES.finditer(string): start = pattern.start() if start != cursor: parts.append(string[cursor:start]) parts.append(pattern.group(1)) cursor = pattern.end() if cursor != len(string): parts.append(string[cursor:]) return parts tracer = trace.get_tracer(__name__) @dataclass class IdeficsCausalLMBatch(Batch): batch_id: int requests: List[generate_pb2.Request] requests_idx_mapping: Dict[int, int] # Decoder values input_ids: torch.Tensor attention_mask: torch.Tensor position_ids: torch.Tensor pixel_values: Optional[torch.Tensor] image_hidden_states: Optional[torch.Tensor] image_attention_mask: Optional[torch.Tensor] past_key_values: Optional[List[Tuple]] # All tokens all_input_ids: List[torch.Tensor] # Lengths of all generations present in the batch input_lengths: List[int] prefix_offsets: List[int] read_offsets: List[int] # Generation helpers next_token_choosers: List[NextTokenChooser] stopping_criterias: List[StoppingCriteria] # Metadata used for padding max_input_length: int padding_right_offset: int # Maximum number of tokens this batch will grow to max_tokens: int # Past metadata keys_head_dim_last: bool = True def to_pb(self) -> generate_pb2.CachedBatch: return generate_pb2.CachedBatch( id=self.batch_id, request_ids=[r.id for r in self.requests], size=len(self), max_tokens=self.max_tokens, ) @classmethod def from_pb( cls, pb: generate_pb2.Batch, tokenizer: PreTrainedTokenizerBase, processor: ProcessorMixin, # Hack dtype: torch.dtype, device: torch.device, ) -> "IdeficsCausalLMBatch": inputs = [] next_token_choosers = [] stopping_criterias = [] prefix_offsets = [] read_offsets = [] requests_idx_mapping = {} # Parse batch max_truncation = 0 padding_right_offset = 0 max_decode_tokens = 0 for i, r in enumerate(pb.requests): requests_idx_mapping[r.id] = i inputs.append(r.inputs) next_token_choosers.append(NextTokenChooser.from_pb(r.parameters, device)) stopping_criteria = StoppingCriteria.from_pb( r.stopping_parameters, tokenizer ) stopping_criterias.append(stopping_criteria) max_truncation = max(max_truncation, r.truncate) max_decode_tokens += stopping_criteria.max_new_tokens padding_right_offset = max( padding_right_offset, stopping_criteria.max_new_tokens ) prompts = [] for inp in inputs: # Each input is encoded into a list, where each element of this input list is either a string or a URL prompts.append(split(inp)) # The processor replaces the call to tokenizer, and # a/ takes care of fetching images from the URL # b/ generate the correct input_ids, attention_mask, pixel_values, image_attention_mask to feed to the model tokenized_inputs = processor( prompts, return_tensors="pt", padding=True, truncation=True, max_length=max_truncation, add_end_of_utterance_token=False, # Already taken care of inside the prompts, so bypassing the processor's handling of this token ).to(device) for _ in pb.requests: input_len = 
tokenized_inputs["input_ids"].shape[1] prefix_offsets.append( input_len - 5 ) # To decode without potential fallbacks errors read_offsets.append( input_len ) # To decode without potential fallbacks errors input_lengths = tokenized_inputs["attention_mask"].sum(1) max_input_length = input_lengths.max() input_ids = tokenized_inputs["input_ids"] pixel_values = tokenized_inputs["pixel_values"] image_hidden_states = None # Allocate maximum attention_mask attention_mask = input_ids.new_zeros( (pb.size, max_input_length + padding_right_offset) ) # Copy tokenizer attention_mask into fully allocated attention_mask attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"] # Do the same for image_attention_mask image_attention_mask = input_ids.new_zeros( ( pb.size, max_input_length + padding_right_offset, tokenized_inputs["pixel_values"].size(1), ) ) image_attention_mask[:, :max_input_length, :] = tokenized_inputs[ "image_attention_mask" ] position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1 position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1) all_input_ids = tokenized_inputs["input_ids"].T.split( 1, dim=1 ) # It's input_ids but splitted into a tuple of tensors where each tensor is (seq_len, 1) size. It is then transformed into a list max_tokens = len(inputs) * (max_input_length + max_decode_tokens) return cls( batch_id=pb.id, requests=pb.requests, requests_idx_mapping=requests_idx_mapping, input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, pixel_values=pixel_values, image_hidden_states=image_hidden_states, image_attention_mask=image_attention_mask, past_key_values=None, all_input_ids=list(all_input_ids), input_lengths=input_lengths.tolist(), prefix_offsets=prefix_offsets, read_offsets=read_offsets, next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, max_input_length=max_input_length.item(), padding_right_offset=padding_right_offset, max_tokens=max_tokens, ) @tracer.start_as_current_span("filter") def filter(self, request_ids: List[int]) -> Optional["IdeficsCausalLMBatch"]: # It deletes requests from the batch. 
For instance when client lost connection if len(request_ids) == 0: raise ValueError("Batch must have at least one request") if len(request_ids) == len(self): return self keep_indices = [] # New values after filtering requests_idx_mapping = {} requests = [] input_lengths = [] prefix_offsets = [] read_offsets = [] all_input_ids = [] max_input_length = 0 next_token_choosers = [] stopping_criterias = [] total_remaining_decode_tokens = 0 new_padding_right_offset = 0 for i, request_id in enumerate(request_ids): idx = self.requests_idx_mapping[request_id] requests_idx_mapping[request_id] = i keep_indices.append(idx) requests.append(self.requests[idx]) prefix_offsets.append(self.prefix_offsets[idx]) read_offsets.append(self.read_offsets[idx]) all_input_ids.append(self.all_input_ids[idx]) request_input_length = self.input_lengths[idx] input_lengths.append(request_input_length) max_input_length = max(max_input_length, request_input_length) next_token_choosers.append(self.next_token_choosers[idx]) stopping_criteria = self.stopping_criterias[idx] stopping_criterias.append(stopping_criteria) remaining_decode_tokens = ( stopping_criteria.max_new_tokens - stopping_criteria.current_tokens ) total_remaining_decode_tokens += remaining_decode_tokens new_padding_right_offset = max( new_padding_right_offset, remaining_decode_tokens ) # Apply indices to input_ids, attention mask, past key values and other items that need to be cached input_ids = self.input_ids[keep_indices] position_ids = self.position_ids[keep_indices] self.attention_mask = self.attention_mask[ keep_indices, -(self.padding_right_offset + max_input_length) : ( self.attention_mask.shape[1] - self.padding_right_offset ) + new_padding_right_offset, ] # Do the same for pixel_values and image_attention_mask pixel_values = self.pixel_values[keep_indices] self.image_attention_mask = self.image_attention_mask[ keep_indices, -(self.padding_right_offset + max_input_length) : ( self.image_attention_mask.shape[1] - self.padding_right_offset ) + new_padding_right_offset, :, ] if self.image_hidden_states is None: image_hidden_states = None else: image_hidden_states = self.image_hidden_states[keep_indices] # Ensure that past_key_values tensors can be updated in-place if type(self.past_key_values[0]) == tuple: self.past_key_values = [list(layer) for layer in self.past_key_values] # Update tensors in-place to allow incremental garbage collection past_kv_length = max_input_length - 1 for layer in self.past_key_values: past_keys, past_values = layer if len(past_keys.shape) == 3: # Force past to be of dim [self_size, num_heads, ...] 
for easy indexing past_keys = past_keys.view(len(self), -1, *past_keys.shape[-2:]) past_values = past_values.view(len(self), -1, *past_values.shape[-2:]) if self.keys_head_dim_last: layer[0] = past_keys[keep_indices, :, -past_kv_length:, :] else: layer[0] = past_keys[keep_indices, :, :, -past_kv_length:] del past_keys layer[1] = past_values[keep_indices, :, -past_kv_length:, :] del past_values max_tokens = len(request_ids) * max_input_length + total_remaining_decode_tokens self.requests = requests self.requests_idx_mapping = requests_idx_mapping self.input_ids = input_ids self.pixel_values = pixel_values self.image_hidden_states = image_hidden_states self.position_ids = position_ids self.all_input_ids = all_input_ids self.input_lengths = input_lengths self.prefix_offsets = prefix_offsets self.read_offsets = read_offsets self.next_token_choosers = next_token_choosers self.stopping_criterias = stopping_criterias self.max_input_length = max_input_length self.padding_right_offset = new_padding_right_offset self.max_tokens = max_tokens return self @classmethod @tracer.start_as_current_span("concatenate") def concatenate( cls, batches: List["IdeficsCausalLMBatch"] ) -> "IdeficsCausalLMBatch": # It adds new requests to the batch # Used for padding total_batch_size = 0 max_input_length = 0 max_num_images = 0 padding_right_offset = 0 for batch in batches: total_batch_size += len(batch) max_input_length = max(max_input_length, batch.max_input_length) max_num_images = max(max_num_images, batch.pixel_values.size(1)) padding_right_offset = max(padding_right_offset, batch.padding_right_offset) # Batch attributes requests = [] requests_idx_mapping = {} input_lengths = [] prefix_offsets = [] read_offsets = [] all_input_ids = [] next_token_choosers = [] stopping_criterias = [] max_tokens = 0 # Batch tensors input_ids = None attention_mask = None position_ids = None pixel_values = None image_hidden_states = None image_attention_mask = None past_key_values = [] # Used for slicing correctly inside the tensors # Equivalent to a cumsum on batch sizes start_index = 0 for i, batch in enumerate(batches): requests.extend(batch.requests) input_lengths.extend(batch.input_lengths) prefix_offsets.extend(batch.prefix_offsets) read_offsets.extend(batch.read_offsets) all_input_ids.extend(batch.all_input_ids) next_token_choosers.extend(batch.next_token_choosers) stopping_criterias.extend(batch.stopping_criterias) if i == 0: requests_idx_mapping = batch.requests_idx_mapping else: # We need to offset the mapping for each batch by the cumulative batch size for k, v in batch.requests_idx_mapping.items(): requests_idx_mapping[k] = v + start_index # Slicing end index for this batch end_index = start_index + len(batch) # We only concatenate batches that did at least one step if batch.past_key_values is None: raise ValueError("only concatenate prefilled batches") # Create empty tensor # input_ids is always of shape [batch_size, 1] # We do not need to pad it if input_ids is None: input_ids = batch.input_ids.new_empty((total_batch_size, 1)) # Copy to correct indices input_ids[start_index:end_index] = batch.input_ids # Create padded tensor if attention_mask is None: attention_mask = batch.attention_mask.new_zeros( (total_batch_size, max_input_length + padding_right_offset), ) curr_batch_max_num_images = batch.pixel_values.size(1) if pixel_values is None: pixel_values = batch.pixel_values.new_zeros( (total_batch_size, max_num_images, 3, 224, 224) ) pixel_values[ start_index:end_index, :curr_batch_max_num_images ] = batch.pixel_values 
if image_attention_mask is None: image_attention_mask = batch.image_attention_mask.new_zeros( ( total_batch_size, max_input_length + padding_right_offset, max_num_images, ) ) # We need to slice the attention mask to remove padding from previous steps # and to remove unused allocated space left_offset = max_input_length - batch.max_input_length batch_left_offset = ( batch.attention_mask.shape[1] - batch.max_input_length - batch.padding_right_offset ) attention_mask[ start_index:end_index, left_offset:-padding_right_offset, ] = batch.attention_mask[ :, batch_left_offset : -batch.padding_right_offset, ] image_attention_mask[ start_index:end_index, left_offset:-padding_right_offset, :curr_batch_max_num_images, ] = batch.image_attention_mask[ :, batch_left_offset : -batch.padding_right_offset, : ] # Create empty tensor # position_ids is always of shape [batch_size, 1] if position_ids is None: position_ids = batch.position_ids.new_empty((total_batch_size, 1)) position_ids[start_index:end_index] = batch.position_ids # Shenanigans to get dimensions because BLOOM outputs a past with a different shape # BLOOM Keys: [batch_size * num_heads, head_dim, seq_length] # BLOOM Values: [batch_size * num_heads, seq_length, head_dim] # And ensure that we can update tensors in-place if type(batch.past_key_values[0]) == tuple: batch.past_key_values = [ [t.view(len(batch), -1, *t.shape[-2:]) for t in layer] for layer in batch.past_key_values ] elif len(batch.past_key_values[0][0].shape) == 3: for layer in batch.past_key_values: for k, t in enumerate(layer): layer[k] = t.view(len(batch), -1, *t.shape[-2:]) # Add eventual padding tokens that were added while concatenating max_tokens += batch.max_tokens + ( max_input_length - batch.max_input_length ) * len(batch) start_index = end_index first_past_kvs = batches[0].past_key_values _, num_heads, padded_sequence_length, head_dim = first_past_kvs[0][1].shape padded_past_values_shape = ( total_batch_size, num_heads, max_input_length - 1, head_dim, ) if batches[0].keys_head_dim_last: padded_past_keys_shape = padded_past_values_shape else: # seq_length is last for BLOOM padded_past_keys_shape = ( total_batch_size, num_heads, head_dim, max_input_length - 1, ) # Iterate over attention layers # Concatenate past key values layer by layer to allow incremental garbage collection for j in range(len(first_past_kvs)): padded_past_keys = first_past_kvs[j][0].new_zeros(padded_past_keys_shape) start_index = 0 for batch in batches: past_keys = batch.past_key_values[j][0] # Clear reference to the original tensor batch.past_key_values[j][0] = None # Slicing end index for this batch end_index = start_index + len(batch) # We slice the keys to remove the padding from previous batches past_seq_len = batch.max_input_length - 1 if batch.keys_head_dim_last: padded_past_keys[ start_index:end_index, :, -past_seq_len:, : ] = past_keys[:, :, -past_seq_len:, :] else: # BLOOM case padded_past_keys[ start_index:end_index, :, :, -past_seq_len: ] = past_keys[:, :, :, -past_seq_len:] del past_keys start_index = end_index padded_past_values = first_past_kvs[j][1].new_zeros( padded_past_values_shape ) start_index = 0 for batch in batches: past_values = batch.past_key_values[j][1] # Clear reference to the original tensor batch.past_key_values[j][1] = None # Slicing end index for this batch end_index = start_index + len(batch) # We slice the past values to remove the padding from previous batches past_seq_len = batch.max_input_length - 1 padded_past_values[ start_index:end_index, :, -past_seq_len:, : ] = 
past_values[:, :, -past_seq_len:, :] del past_values # Update values start_index = end_index past_key_values.append([padded_past_keys, padded_past_values]) return cls( batch_id=batches[0].batch_id, requests=requests, requests_idx_mapping=requests_idx_mapping, input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, pixel_values=pixel_values, image_hidden_states=image_hidden_states, image_attention_mask=image_attention_mask, past_key_values=past_key_values, all_input_ids=all_input_ids, input_lengths=input_lengths, prefix_offsets=prefix_offsets, read_offsets=read_offsets, next_token_choosers=next_token_choosers, stopping_criterias=stopping_criterias, max_input_length=max_input_length, padding_right_offset=padding_right_offset, keys_head_dim_last=batches[0].keys_head_dim_last, max_tokens=max_tokens, ) def __len__(self): return len(self.requests) class IdeficsCausalLM(Model): def __init__( self, model_id: str, revision: Optional[str] = None, quantize: Optional[str] = None, dtype: Optional[torch.dtype] = None, trust_remote_code: bool = False, ): from text_generation_server.models.custom_modeling.idefics_modeling import ( IdeficsForVisionText2Text, ) if torch.cuda.is_available(): device = torch.device("cuda") dtype = torch.bfloat16 if dtype is None else dtype else: if quantize: raise ValueError("quantization is not available on CPU") device = torch.device("cpu") dtype = torch.float32 if dtype is None else dtype tokenizer = AutoTokenizer.from_pretrained( model_id, revision=revision, padding_side="left", truncation_side="left", trust_remote_code=trust_remote_code, ) self.processor = AutoProcessor.from_pretrained( model_id, revision=revision, padding_side="left", truncation_side="left", trust_remote_code=trust_remote_code, ) model = IdeficsForVisionText2Text.from_pretrained( model_id, revision=revision, torch_dtype=dtype, device_map="auto" if torch.cuda.is_available() and torch.cuda.device_count() > 1 else None, load_in_8bit=quantize == "bitsandbytes", trust_remote_code=trust_remote_code, ) if torch.cuda.is_available() and torch.cuda.device_count() == 1: model = model.cuda() if tokenizer.pad_token_id is None: if model.config.pad_token_id is not None: tokenizer.pad_token_id = model.config.pad_token_id elif model.config.eos_token_id is not None: tokenizer.pad_token_id = model.config.eos_token_id elif tokenizer.eos_token_id is not None: tokenizer.pad_token_id = tokenizer.eos_token_id else: tokenizer.add_special_tokens({"pad_token": "<unk>"}) super(IdeficsCausalLM, self).__init__( model=model, tokenizer=tokenizer, requires_padding=True, dtype=dtype, device=device, ) @property def batch_type(self) -> Type[IdeficsCausalLMBatch]: return IdeficsCausalLMBatch def forward( self, input_ids, attention_mask, position_ids, pixel_values, image_hidden_states, image_attention_mask, past_key_values: Optional = None, ) -> Tuple[torch.Tensor, List[Tuple[torch.Tensor, torch.Tensor]]]: # Model Forward kwargs = { "input_ids": input_ids, "attention_mask": attention_mask, "pixel_values": pixel_values, "image_hidden_states": image_hidden_states, "image_attention_mask": image_attention_mask, "past_key_values": past_key_values, "use_cache": True, "return_dict": True, } if self.has_position_ids: kwargs["position_ids"] = position_ids outputs = self.model.forward(**kwargs) return outputs.logits, outputs.past_key_values, outputs.image_hidden_states @tracer.start_as_current_span("generate_token") def generate_token( self, batch: IdeficsCausalLMBatch ) -> Tuple[List[Generation], Optional[IdeficsCausalLMBatch], 
Tuple[int, int]]: start = time.time_ns() # slice the attention mask to the correct shape attention_mask = batch.attention_mask[:, : -batch.padding_right_offset] if batch.input_ids.size(1) == 1: # THIS is a hack: when calling idefics.generate, the first time, we need the whole image_attention_mask (size bs x max_seq_len x max_num_images), # but the subsequent times, we only need the last attention mask along the `max_seq_len` dimension # this is due to the nature IDEFICS: it's an encoder decoder, and so when decoding, only the currently generated # token need to attend to the encoder hidden states (i.e. the vision encoder) # Also see seq2seq_lm.Seq2SeqLM.generate_token which has roughly the same logic image_attention_mask = batch.image_attention_mask[ :, -(batch.padding_right_offset + 1) ].unsqueeze(1) else: image_attention_mask = batch.image_attention_mask[ :, : -batch.padding_right_offset ] logits, past, image_hidden_states = self.forward( input_ids=batch.input_ids, attention_mask=attention_mask, position_ids=batch.position_ids, pixel_values=batch.pixel_values, image_hidden_states=batch.image_hidden_states, image_attention_mask=image_attention_mask, past_key_values=batch.past_key_values, ) # Hardcoded remove image tokens logits[:, 32000:32001] = torch.finfo(logits.dtype).min start_decode = time.time_ns() # Results generations: List[Generation] = [] stopped = True # Zipped iterator iterator = zip( batch.requests, batch.input_lengths, batch.prefix_offsets, batch.read_offsets, logits, batch.next_token_choosers, batch.stopping_criterias, batch.all_input_ids, ) # For each member of the batch for i, ( request, input_length, prefix_offset, read_offset, logits, next_token_chooser, stopping_criteria, all_input_ids, ) in enumerate(iterator): # Select next token next_token_id, logprobs = next_token_chooser( all_input_ids.view(1, -1), logits[-1:, :] ) # Append next token to all tokens all_input_ids = torch.cat([all_input_ids, next_token_id]) new_input_length = input_length + 1 # Generated token next_token_logprob = logprobs[-1, next_token_id] next_token_id_squeezed = next_token_id.squeeze() next_token_text, prefix_offset, read_offset = self.decode_token( all_input_ids[:, 0], prefix_offset, read_offset ) # Evaluate stopping criteria stop, reason = stopping_criteria( next_token_id_squeezed, next_token_text, ) if not stop: stopped = False # Shard generations # All generations will be appended in the rust sharded client if i % self.world_size == self.rank: if stop: # Decode generated tokens output_text, _, _ = self.decode_token( all_input_ids[:, 0], prefix_offset=len(all_input_ids) - stopping_criteria.current_tokens - 1, read_offset=len(all_input_ids) - stopping_criteria.current_tokens, skip_special_tokens=True, ) # Get seed if isinstance(next_token_chooser.choice, Sampling): seed = next_token_chooser.choice.seed else: seed = None generated_text = GeneratedText( output_text, stopping_criteria.current_tokens, reason, seed ) else: generated_text = None # Prefill if stopping_criteria.current_tokens == 1 and request.prefill_logprobs: # Remove generated token to only have prefill and add nan for first prompt token prefill_logprobs = [float("nan")] + torch.log_softmax( logits, -1 ).gather(1, all_input_ids[1:]).squeeze(1)[ -new_input_length:-1 ].tolist() prefill_token_ids = all_input_ids[-new_input_length:-1] prefill_texts = self.tokenizer.batch_decode( prefill_token_ids, clean_up_tokenization_spaces=False, skip_special_tokens=False, ) prefill_tokens = Tokens( prefill_token_ids, prefill_logprobs, prefill_texts, 
is_special=[], ) else: prefill_tokens = None top_tokens = None generation = Generation( request.id, prefill_tokens, Tokens( [next_token_id_squeezed], [next_token_logprob], [next_token_text], [next_token_id_squeezed.item() in self.all_special_ids], ), generated_text, top_tokens, ) generations.append(generation) # Update values batch.input_ids[i, 0] = next_token_id batch.all_input_ids[i] = all_input_ids batch.input_lengths[i] = new_input_length batch.prefix_offsets[i] = prefix_offset batch.read_offsets[i] = read_offset batch.max_input_length = max(batch.max_input_length, new_input_length) # We finished all generations in the batch; there is no next batch if stopped: forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, None, (forward_ns, decode_ns) # Slice unused values from prefill batch.input_ids = batch.input_ids[:, :1] # Update attention_mask as we added a new token to input_ids batch.attention_mask[:, -batch.padding_right_offset] = 1 batch.image_attention_mask[ :, -batch.padding_right_offset, : ] = batch.image_attention_mask[:, -(batch.padding_right_offset + 1), :] # Decrease right offset batch.padding_right_offset -= 1 # Update position_ids batch.position_ids = batch.position_ids[:, -1:] + 1 # Update past key values batch.past_key_values = past batch.image_hidden_states = image_hidden_states forward_ns = start_decode - start decode_ns = time.time_ns() - start_decode return generations, batch, (forward_ns, decode_ns)
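# Illustrative sketch, not part of the original module: IdeficsCausalLMBatch
# pre-allocates its attention mask with `padding_right_offset` empty slots on the
# right (one per future token), and generate_token then flips one slot per decode
# step and shrinks the offset. The toy below (made-up sizes, left-padded prompts)
# isolates that bookkeeping; it is a sketch of the idea, not the class's API.
import torch

batch_size = 2
max_input_length = 4   # longest prompt in the batch
max_new_tokens = 3
padding_right_offset = max_new_tokens

# Mask as the processor would return it, with left padding on the shorter prompt.
tokenizer_mask = torch.tensor([[1, 1, 1, 1],
                               [0, 0, 1, 1]])

# Allocate the full-width mask once; copy the prompt part into the left columns.
attention_mask = torch.zeros(
    (batch_size, max_input_length + padding_right_offset), dtype=torch.long
)
attention_mask[:, :max_input_length] = tokenizer_mask

for step in range(max_new_tokens):
    # Only the already-valid columns are handed to the model at this step.
    visible = attention_mask[:, : attention_mask.shape[1] - padding_right_offset]
    print(f"step {step}: visible mask width = {visible.shape[1]}")
    # After sampling, mark the next slot as valid and decrease the offset.
    attention_mask[:, -padding_right_offset] = 1
    padding_right_offset -= 1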
text-generation-inference/server/text_generation_server/models/idefics_causal_lm.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/idefics_causal_lm.py", "repo_id": "text-generation-inference", "token_count": 16143 }
210
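# Illustrative sketch, not part of the original module: the IMAGES regex and
# split() helper at the top of idefics_causal_lm.py above cut a prompt into an
# alternating list of text segments and image URLs, which is the format the
# IDEFICS processor expects. The definitions are copied verbatim so the example
# runs standalone; the prompt and URL are made up for illustration.
import re

IMAGES = re.compile(r"!\[[^\]]*\]\((.*?)\s*(\"(?:.*[^\"])\")?\s*\)")


def split(string):
    parts = []
    cursor = 0
    for pattern in IMAGES.finditer(string):
        start = pattern.start()
        if start != cursor:
            parts.append(string[cursor:start])
        parts.append(pattern.group(1))
        cursor = pattern.end()
    if cursor != len(string):
        parts.append(string[cursor:])
    return parts


if __name__ == "__main__":
    prompt = "User: What is in this image?![](https://example.com/cat.png)Assistant:"
    print(split(prompt))
    # ['User: What is in this image?', 'https://example.com/cat.png', 'Assistant:']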
import os import torch from datetime import timedelta from loguru import logger # Tensor Parallelism settings RANK = int(os.getenv("RANK", "0")) WORLD_SIZE = int(os.getenv("WORLD_SIZE", "1")) # CUDA memory fraction MEMORY_FRACTION = float(os.getenv("CUDA_MEMORY_FRACTION", "1.0")) class FakeBarrier: def wait(self): pass class FakeGroup: def __init__(self, rank, size): self._rank = rank self._size = size def allreduce(self, *args, **kwargs): return FakeBarrier() def allgather(self, inputs, local_tensor, **kwargs): assert ( len(inputs[0]) == len(local_tensor) == 1 ), f"{len(inputs[0])} != {len(local_tensor)} != 1, and the FakeGroup is supposed to join on simple tensors" for input_ in inputs: input_[0].data = local_tensor[0].data return FakeBarrier() def barrier(self, *args, **kwargs): return FakeBarrier() def size(self): return self._size def rank(self): return self._rank def initialize_torch_distributed(): if torch.cuda.is_available(): from torch.distributed import ProcessGroupNCCL # Set the device id. assert WORLD_SIZE <= torch.cuda.device_count(), "Each process is one gpu" device = RANK % torch.cuda.device_count() torch.cuda.set_device(device) torch.cuda.set_per_process_memory_fraction(MEMORY_FRACTION, device) backend = "nccl" options = ProcessGroupNCCL.Options() options.is_high_priority_stream = True options._timeout = timedelta(seconds=60) else: backend = "gloo" options = None if WORLD_SIZE == 1: return FakeGroup(RANK, WORLD_SIZE), RANK, WORLD_SIZE else: if os.getenv("DEBUG", None) == "1": return FakeGroup(RANK, WORLD_SIZE), RANK, WORLD_SIZE if not torch.distributed.is_initialized(): # Call the init process. torch.distributed.init_process_group( backend=backend, world_size=WORLD_SIZE, rank=RANK, timeout=timedelta(seconds=60), pg_options=options, ) else: logger.warning("torch.distributed is already initialized.") return torch.distributed.group.WORLD, RANK, WORLD_SIZE
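# Illustrative sketch, not part of the original module: callers of
# initialize_torch_distributed only rely on the group's rank()/size() pair (plus
# no-op collectives), which is why the FakeGroup above is a valid stand-in when
# WORLD_SIZE == 1 or DEBUG=1. The helper below is hypothetical and just shows the
# even-sharding arithmetic that, for example, the MPT model applies to its
# attention heads; it relies on the FakeGroup class defined above in this file.
def shard_bounds(total: int, group) -> tuple:
    """Return the [start, stop) slice owned by this rank for an even split."""
    world_size = group.size()
    rank = group.rank()
    assert total % world_size == 0, "dimension must divide evenly across shards"
    block = total // world_size
    return rank * block, (rank + 1) * block


if __name__ == "__main__":
    group = FakeGroup(rank=0, size=1)  # FakeGroup is defined above in this file
    print(shard_bounds(32, group))     # (0, 32): a single process owns all 32 heads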
text-generation-inference/server/text_generation_server/utils/dist.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/dist.py", "repo_id": "text-generation-inference", "token_count": 1042 }
211
import re from typing import Callable, List, Optional, Tuple import torch from text_generation_server.pb import generate_pb2 from text_generation_server.pb.generate_pb2 import FinishReason from text_generation_server.utils.logits_process import ( HeterogeneousProcessorWrapper, HeterogeneousRepetitionPenaltyLogitsProcessor, HeterogeneousTemperatureLogitsWarper, HeterogeneousTopKLogitsWarper, HeterogeneousTopPLogitsWarper, HeterogeneousTypicalLogitsWarper, static_warper, ) from text_generation_server.utils.watermark import WatermarkLogitsProcessor from transformers import PreTrainedTokenizerBase, RepetitionPenaltyLogitsProcessor class NextTokenChooser: def __init__( self, watermark=False, temperature=1.0, repetition_penalty=1.0, top_k=None, top_p=None, typical_p=None, do_sample=False, seed=0, device="cpu", ): self.watermark_processor = ( WatermarkLogitsProcessor(device=device) if watermark else None ) self.repetition_processor = ( RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty) if repetition_penalty else None ) has_warpers = ( (temperature is not None and temperature != 1.0) or (top_k is not None and top_k != 0) or (top_p is not None and top_p < 1.0) or (typical_p is not None and typical_p < 1.0) ) if has_warpers: self.static_warper = static_warper( temperature=temperature, top_k=top_k, top_p=top_p, typical_p=typical_p ) else: self.static_warper = None sampling = do_sample or has_warpers self.choice = Sampling(seed, device) if sampling else Greedy() def __call__(self, input_ids, scores): if self.watermark_processor is not None: scores = self.watermark_processor(input_ids, scores) if self.repetition_processor is not None: scores = self.repetition_processor(input_ids, scores) if self.static_warper is None: next_logprob = torch.log_softmax(scores, -1) else: scores, next_logprob = self.static_warper(scores) next_id = self.choice(scores[-1]).view(1, 1) return next_id, next_logprob @classmethod def from_pb( cls, pb: generate_pb2.NextTokenChooserParameters, device: torch.device, ) -> "NextTokenChooser": return NextTokenChooser( watermark=pb.watermark, temperature=pb.temperature, repetition_penalty=pb.repetition_penalty, top_k=pb.top_k, top_p=pb.top_p, typical_p=pb.typical_p, do_sample=pb.do_sample, seed=pb.seed, device=device, ) class StopSequenceCriteria: def __init__(self, stop_sequence: str): stop_sequence = re.escape(stop_sequence) self.regex = re.compile(f"{stop_sequence}$") def __call__(self, output: str) -> bool: if self.regex.findall(output): return True return False class StoppingCriteria: def __init__( self, eos_token_id: int, stop_sequence_criterias: List[StopSequenceCriteria], max_new_tokens: int = 20, ignore_eos_token: bool = False, ): self.eos_token_id = eos_token_id self.stop_sequence_criterias = stop_sequence_criterias self.max_new_tokens = max_new_tokens self.current_tokens = 0 self.current_output = "" self.ignore_eos_token = ignore_eos_token def __call__(self, last_token: int, last_output: str) -> Tuple[bool, Optional[str]]: self.current_tokens += 1 if self.current_tokens >= self.max_new_tokens: return True, FinishReason.FINISH_REASON_LENGTH if not self.ignore_eos_token and last_token == self.eos_token_id: return True, FinishReason.FINISH_REASON_EOS_TOKEN if self.stop_sequence_criterias: self.current_output += last_output # There is no need to keep an output that is too long if len(self.current_output) > 300: # Slice to -200 to avoid doing it all the time self.current_output = self.current_output[-200:] for stop_sequence_criteria in self.stop_sequence_criterias: if 
stop_sequence_criteria(self.current_output): return True, FinishReason.FINISH_REASON_STOP_SEQUENCE return False, None @classmethod def from_pb( cls, pb: generate_pb2.StoppingCriteriaParameters, tokenizer: PreTrainedTokenizerBase, ) -> "StoppingCriteria": stop_sequence_criterias = [ StopSequenceCriteria(sequence) for sequence in pb.stop_sequences ] return StoppingCriteria( tokenizer.eos_token_id, stop_sequence_criterias, pb.max_new_tokens, pb.ignore_eos_token, ) def create_n_gram_speculation( input_ids: torch.Tensor, next_ids: torch.Tensor, accepted_ids: torch.Tensor, speculate: int, verbose: bool, ): # Very trivial approach, find first match in the string. # This is much less refined than actual n-gram but seems to work # relatively OK in grounded mode and is by far much faster with # much less worst case complexity as everything happens on device. B = accepted_ids.shape[0] device = input_ids.device seeds = next_ids[accepted_ids.cumsum(dim=-1) - 1] indices = (input_ids == seeds.unsqueeze(-1)).max(dim=1).indices + 1 all_indices = indices.unsqueeze(-1).expand(B, speculate) + torch.arange( speculate, device=device ) all_indices = torch.clamp(all_indices, max=input_ids.shape[1] - 1) speculative_ids = input_ids.gather(dim=-1, index=all_indices) return speculative_ids class HeterogeneousNextTokenChooser: def __init__( self, dtype: torch.dtype, device: torch.device, watermark: List[bool], temperature: List[float], repetition_penalty: List[float], top_k: List[int], top_p: List[float], typical_p: List[float], do_sample: List[bool], seeds: List[int], ): warpers = [] self.watermark_processor = ( HeterogeneousProcessorWrapper( { i: WatermarkLogitsProcessor(device=device) for i, do_watermark in enumerate(watermark) if do_watermark } ) if any(watermark) else None ) self.repetition_processor = ( HeterogeneousRepetitionPenaltyLogitsProcessor( repetition_penalty, dtype, device ) if any([x != 1.0 for x in repetition_penalty]) else None ) if any([x != 1.0 for x in temperature]): do_sample = [ sample or x != 1.0 for x, sample in zip(temperature, do_sample) ] warpers.append( HeterogeneousTemperatureLogitsWarper(temperature, dtype, device) ) if any([x != 0 for x in top_k]): do_sample = [sample or x != 0 for x, sample in zip(top_k, do_sample)] warpers.append(HeterogeneousTopKLogitsWarper(top_k, device)) if any([x < 1.0 for x in top_p]): do_sample = [sample or x < 1.0 for x, sample in zip(top_p, do_sample)] warpers.append(HeterogeneousTopPLogitsWarper(top_p, dtype, device)) if any([x < 1.0 for x in typical_p]): do_sample = [sample or x < 1.0 for x, sample in zip(typical_p, do_sample)] warpers.append(HeterogeneousTypicalLogitsWarper(typical_p, dtype, device)) self.warpers = warpers if any(do_sample): self.choice = HeterogeneousSampling(do_sample, seeds, device) else: self.choice = Greedy() self.seeds = seeds self.do_sample = do_sample self.dtype = dtype self.device = device def __call__( self, input_ids: torch.Tensor, scores: torch.Tensor, speculate: int, speculated_ids: Optional[torch.Tensor] = None, speculative_scores: Optional[torch.Tensor] = None, verbose=False, ): if speculated_ids is not None: B = scores.shape[0] // (speculated_ids.shape[1] + 1) S = speculated_ids.shape[1] + 1 scores = scores.view(B, S, -1) else: B = scores.shape[0] S = 1 scores = scores.view(B, S, -1) next_ids = torch.zeros((B, S), device=scores.device, dtype=torch.long) for j in range(S): _scores = scores[:, j] if self.watermark_processor is not None: _scores = self.watermark_processor(input_ids, _scores) if self.repetition_processor is 
not None: _scores = self.repetition_processor(input_ids, _scores) for warper in self.warpers: _scores = warper(input_ids, _scores) _next_ids = self.choice(_scores) scores[:, j] = _scores next_ids[:, j] = _next_ids next_ids = next_ids.view(B * S) allscores = scores.view(B * S, -1) alllogprobs = torch.log_softmax(allscores, -1) if speculated_ids is not None: accepted_ids = [] B = next_ids.shape[0] // (speculated_ids.shape[1] + 1) S = speculated_ids.shape[1] + 1 indices = [] for i in range(B): _next_ids = next_ids[i * S : (i + 1) * S] _speculated_ids = speculated_ids[i] validate_speculative = _next_ids[:-1] == _speculated_ids index = i * S accepted = 1 # First is always valid indices.append(index) for valid in validate_speculative.tolist(): if valid: index += 1 accepted += 1 indices.append(index) else: break accepted_ids.append(accepted) accepted_ids = torch.tensor( accepted_ids, device=input_ids.device, dtype=input_ids.dtype ) next_ids = next_ids[indices] logprobs = alllogprobs[indices] indices = torch.arange(B, device=input_ids.device) * S if speculative_scores is not None: speculative_scores = speculative_scores[indices + accepted_ids - 1] else: accepted_ids = torch.ones_like(next_ids) logprobs = alllogprobs next_logprobs = torch.gather(logprobs, 1, next_ids.view(-1, 1)).view(-1) if speculate > 0: if speculative_scores is not None: # Medusa provided some scores speculative_ids = Greedy()(speculative_scores) else: # n-gram speculative_ids = create_n_gram_speculation( input_ids, next_ids, accepted_ids, speculate, verbose ) else: speculative_ids = None return next_ids, next_logprobs, alllogprobs, accepted_ids, speculative_ids def filter(self, indices): if self.watermark_processor is not None: self.watermark_processor = self.watermark_processor.filter(indices) if self.repetition_processor is not None: self.repetition_processor = self.repetition_processor.filter(indices) filtered_warpers = [] for warper in self.warpers: filtered_warper = warper.filter(indices) if filtered_warper is not None: filtered_warpers.append(filtered_warper) self.warpers = filtered_warpers self.seeds = [self.seeds[i] for i in indices] self.do_sample = [self.do_sample[i] for i in indices] if any(self.do_sample): self.choice.filter(indices) else: self.choice = Greedy() return self @classmethod def from_pb( cls, pb: List[generate_pb2.NextTokenChooserParameters], dtype: torch.dtype, device: torch.device, ) -> "HeterogeneousNextTokenChooser": return HeterogeneousNextTokenChooser( watermark=[pb_.watermark for pb_ in pb], temperature=[pb_.temperature for pb_ in pb], repetition_penalty=[pb_.repetition_penalty for pb_ in pb], top_k=[pb_.top_k for pb_ in pb], top_p=[pb_.top_p for pb_ in pb], typical_p=[pb_.typical_p for pb_ in pb], do_sample=[pb_.do_sample for pb_ in pb], seeds=[pb_.seed for pb_ in pb], device=device, dtype=dtype, ) class Sampling: def __init__(self, seed: int, device: str = "cpu"): self.generator = torch.Generator(device) self.generator.manual_seed(seed) self.seed = seed def __call__(self, logits): probs = torch.nn.functional.softmax(logits, -1) # Avoid GPU<->CPU sync done by torch multinomial # See: https://github.com/pytorch/pytorch/blob/925a3788ec5c06db62ca732a0e9425a26a00916f/aten/src/ATen/native/Distributions.cpp#L631-L637 q = torch.empty_like(probs).exponential_(1, generator=self.generator) return probs.div_(q).argmax() class Greedy: def __call__(self, logits): return logits.argmax(dim=-1) class HeterogeneousSampling: r""" Mixed greedy and probabilistic sampling. 
Compute both and pick the right one for each sample. """ def __init__(self, do_sample: List[bool], seeds: List[int], device: torch.device): self.seeds = seeds self.greedy_indices = [] self.sampling_mapping = {} for i, (sample, seed) in enumerate(zip(do_sample, seeds)): if sample: self.sampling_mapping[i] = Sampling(seed, device) else: self.greedy_indices.append(i) self.greedy = Greedy() def __call__(self, logits): out = torch.empty(logits.shape[0], dtype=torch.int64, device=logits.device) if self.greedy_indices: # Computing for all indices is faster than slicing torch.argmax(logits, -1, out=out) for i, sampling in self.sampling_mapping.items(): out[i] = sampling(logits[i]) return out def filter(self, indices): new_greedy_indices = [] new_sampling_mapping = {} for i, idx in enumerate(indices): if idx in self.sampling_mapping: new_sampling_mapping[i] = self.sampling_mapping[idx] else: new_greedy_indices.append(i) self.greedy_indices = new_greedy_indices self.sampling_mapping = new_sampling_mapping return self def batch_top_tokens( top_n_tokens: List[int], top_n_tokens_tensor: torch.Tensor, logprobs: torch.Tensor, accepted_ids: torch.Tensor ) -> Tuple[List[List[List[int]]], List[List[List[float]]]]: """Find the top n most likely tokens for a batch of generations. When multiple tokens have equal probabilities and they don't all fit, the remaining tokens are also returned. """ max_top_n = max(top_n_tokens) # Early exit when top_n_tokens is not used if max_top_n == 0: return [[[]]] * len(top_n_tokens), [[[]]] * len(top_n_tokens) batch_size = accepted_ids.shape[0] speculate_size = logprobs.shape[0] // batch_size top_n_tokens_tensor = top_n_tokens_tensor.repeat_interleave(speculate_size) # Ensure top_n doesn't exceed vocab size top_n_tokens = [min(tok, logprobs.size(-1)) for tok in top_n_tokens for _ in range(speculate_size)] # Parallel kthvalue adapted from https://discuss.pytorch.org/t/how-to-efficiently-get-the-k-th-largest-values-in-parallel/160529/2 # Sorted topk is faster than torch.sort() since we only need a small subset sorted_top_k = torch.topk(logprobs, k=max_top_n, dim=-1, sorted=True).values nth_highest = torch.gather( sorted_top_k, 1, (top_n_tokens_tensor - 1).clip(min=0).unsqueeze(1) ) nth_highest[nth_highest == -float("inf")] = torch.finfo(logprobs.dtype).min # Find the new "fuzzy" top n values top_n_indices = (logprobs >= nth_highest).nonzero() _, top_n_ishes = torch.unique_consecutive(top_n_indices[:, 0], return_counts=True) k = 1 if top_n_ishes.numel() == 0 else top_n_ishes.max() # Take a new topk for these new max n values top_k = torch.topk(logprobs, k=k, dim=1, sorted=True) top_n_ishes = top_n_ishes.tolist() top_indices = top_k.indices.tolist() top_values = top_k.values.tolist() batch_top_token_ids = [] batch_top_token_logprobs = [] accepted_ids_list = accepted_ids.tolist() for i, n_accepted_ids in enumerate(accepted_ids_list): start = speculate_size * i stop = speculate_size * (i + 1) _top_indices = top_indices[start: stop] _top_values = top_values[start: stop] _top_n_ishes = top_n_ishes[start: stop] _top_n_tokens = top_n_tokens[start: stop] _top_indices = _top_indices[:n_accepted_ids] _top_values = _top_values[:n_accepted_ids] _top_n_ishes = _top_n_ishes[:n_accepted_ids] _top_n_tokens = _top_n_tokens[:n_accepted_ids] row_top_token_ids = [] row_top_token_logprobs = [] for idxs, vals, n, req_n in zip(_top_indices, _top_values, _top_n_ishes, _top_n_tokens): indices = idxs[:n] if req_n > 0 else [] values = vals[:n] if req_n > 0 else [] row_top_token_ids.append(indices) 
row_top_token_logprobs.append(values) batch_top_token_ids.append(row_top_token_ids) batch_top_token_logprobs.append(row_top_token_logprobs) return batch_top_token_ids, batch_top_token_logprobs
text-generation-inference/server/text_generation_server/utils/tokens.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/tokens.py", "repo_id": "text-generation-inference", "token_count": 8706 }
212
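The `Sampling` helper in the `tokens.py` module above draws a token by dividing the softmax probabilities by i.i.d. Exp(1) noise and taking the argmax, specifically to avoid the GPU<->CPU sync that `torch.multinomial` performs. The following is a minimal, self-contained sketch of that trick outside the server code; it assumes only `torch`, and the toy logits are made up for illustration:

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])
probs = torch.nn.functional.softmax(logits, dim=-1)

def sample_exponential_race(probs: torch.Tensor, generator: torch.Generator) -> int:
    # Dividing by Exp(1) noise and taking the argmax picks an index with
    # probability proportional to `probs`, without the GPU<->CPU sync that
    # torch.multinomial would trigger.
    q = torch.empty_like(probs).exponential_(1.0, generator=generator)
    return int(probs.div(q).argmax())

gen = torch.Generator().manual_seed(42)
counts = torch.zeros_like(probs)
for _ in range(20_000):
    counts[sample_exponential_race(probs, gen)] += 1

print("empirical:", (counts / counts.sum()).tolist())
print("target:   ", probs.tolist())
```

Run long enough, the empirical frequencies approach the softmax target, which is what allows the server to substitute this trick for `torch.multinomial` in the hot path.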
# This CITATION.cff file was generated with cffinit. # Visit https://bit.ly/cffinit to generate yours today! cff-version: 1.2.0 title: HuggingFace's Tokenizers message: >- Fast State-of-the-Art Tokenizers optimized for Research and Production. type: software authors: - given-names: Anthony family-names: Moi email: [email protected] affiliation: HuggingFace - given-names: Nicolas family-names: Patry affiliation: HuggingFace repository-code: 'https://github.com/huggingface/tokenizers' url: 'https://github.com/huggingface/tokenizers' repository: 'https://huggingface.co' abstract: >- Fast State-of-the-Art Tokenizers optimized for Research and Production. keywords: - Rust - Tokenizer - NLP license: Apache-2.0 commit: 37372b6 version: 0.13.4 date-released: '2023-04-05'
tokenizers/CITATION.cff/0
{ "file_path": "tokenizers/CITATION.cff", "repo_id": "tokenizers", "token_count": 293 }
213
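Since a `CITATION.cff` file is plain YAML, it can also be consumed programmatically. A small sketch, assuming PyYAML is installed and the file above is saved locally as `CITATION.cff`:

```python
import yaml  # assumes PyYAML is available; CFF files are plain YAML

with open("CITATION.cff", encoding="utf-8") as f:
    cff = yaml.safe_load(f)

# Build a one-line, human-readable citation from the structured fields.
authors = ", ".join(
    f"{a.get('given-names', '')} {a.get('family-names', '')}".strip()
    for a in cff["authors"]
)
print(f"{cff['title']} v{cff['version']} ({cff['date-released']}) by {authors}")
print(f"{cff['license']} - {cff['repository-code']}")
```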
<p align="center"> <br> <img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/> <br> <p> <p align="center"> <a href="https://badge.fury.io/js/tokenizers"> <img alt="Build" src="https://badge.fury.io/js/tokenizers.svg"> </a> <a href="https://github.com/huggingface/tokenizers/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue"> </a> </p> <br> NodeJS implementation of today's most used tokenizers, with a focus on performance and versatility. Bindings over the [Rust](https://github.com/huggingface/tokenizers/tree/master/tokenizers) implementation. If you are interested in the High-level design, you can go check it there. ## Main features - Train new vocabularies and tokenize using 4 pre-made tokenizers (Bert WordPiece and the 3 most common BPE versions). - Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU. - Easy to use, but also extremely versatile. - Designed for research and production. - Normalization comes with alignments tracking. It's always possible to get the part of the original sentence that corresponds to a given token. - Does all the pre-processing: Truncate, Pad, add the special tokens your model needs. ## Installation ```bash npm install tokenizers@latest ``` ## Basic example ```ts import { Tokenizer } from "tokenizers"; const tokenizer = await Tokenizer.fromFile("tokenizer.json"); const wpEncoded = await tokenizer.encode("Who is John?"); console.log(wpEncoded.getLength()); console.log(wpEncoded.getTokens()); console.log(wpEncoded.getIds()); console.log(wpEncoded.getAttentionMask()); console.log(wpEncoded.getOffsets()); console.log(wpEncoded.getOverflowing()); console.log(wpEncoded.getSpecialTokensMask()); console.log(wpEncoded.getTypeIds()); console.log(wpEncoded.getWordIds()); ``` ## License [Apache License 2.0](../../LICENSE)
tokenizers/bindings/node/README.md/0
{ "file_path": "tokenizers/bindings/node/README.md", "repo_id": "tokenizers", "token_count": 651 }
214
/* eslint-disable @typescript-eslint/no-explicit-any */ /* eslint-disable @typescript-eslint/no-empty-function */ import { TruncationStrategy, BPE, Encoding, AddedToken, Tokenizer } from '../../' // jest.mock('../../bindings/tokenizer'); // jest.mock('../../bindings/models', () => ({ // __esModule: true, // Model: jest.fn() // })); // Or: // jest.mock('../../bindings/models', () => { // return require('../../bindings/__mocks__/models'); // }); // const TokenizerMock = mocked(Tokenizer); describe('AddedToken', () => { it('instantiates with only content', () => { const addToken = new AddedToken('test', false) expect(addToken.constructor.name).toEqual('AddedToken') }) it('instantiates with empty options', () => { const addToken = new AddedToken('test', false, {}) expect(addToken.constructor.name).toEqual('AddedToken') }) it('instantiates with options', () => { const addToken = new AddedToken('test', false, { leftStrip: true, rightStrip: true, singleWord: true, }) expect(addToken.constructor.name).toEqual('AddedToken') }) describe('getContent', () => { it('returns the string content of AddedToken', () => { const addedToken = new AddedToken('test', false) expect(addedToken.getContent()).toEqual('test') }) }) }) describe('Tokenizer', () => { it('has expected methods', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(typeof Tokenizer.fromFile).toBe('function') expect(typeof Tokenizer.fromString).toBe('function') // expect(typeof Tokenizer.fromPretrained).toBe('function') expect(typeof tokenizer.addSpecialTokens).toBe('function') expect(typeof tokenizer.addTokens).toBe('function') expect(typeof tokenizer.decode).toBe('function') expect(typeof tokenizer.decodeBatch).toBe('function') expect(typeof tokenizer.disablePadding).toBe('function') expect(typeof tokenizer.disableTruncation).toBe('function') expect(typeof tokenizer.encode).toBe('function') expect(typeof tokenizer.encodeBatch).toBe('function') expect(typeof tokenizer.getDecoder).toBe('function') expect(typeof tokenizer.getNormalizer).toBe('function') expect(typeof tokenizer.getPostProcessor).toBe('function') expect(typeof tokenizer.getPreTokenizer).toBe('function') expect(typeof tokenizer.getVocab).toBe('function') expect(typeof tokenizer.getVocabSize).toBe('function') expect(typeof tokenizer.idToToken).toBe('function') expect(typeof tokenizer.runningTasks).toBe('function') expect(typeof tokenizer.save).toBe('function') expect(typeof tokenizer.setDecoder).toBe('function') expect(typeof tokenizer.setModel).toBe('function') expect(typeof tokenizer.setNormalizer).toBe('function') expect(typeof tokenizer.setPadding).toBe('function') expect(typeof tokenizer.setPostProcessor).toBe('function') expect(typeof tokenizer.setPreTokenizer).toBe('function') expect(typeof tokenizer.setTruncation).toBe('function') expect(typeof tokenizer.tokenToId).toBe('function') expect(typeof tokenizer.toString).toBe('function') expect(typeof tokenizer.train).toBe('function') }) // it('can be instantiated from the hub', async () => { // let tokenizer: Tokenizer // let output: Encoding // tokenizer = Tokenizer.fromPretrained('bert-base-cased') // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['Hey', 'there', 'dear', 'friend', '!']) // tokenizer = Tokenizer.fromPretrained('anthony/tokenizers-test') // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['hey', 'there', 'dear', 
'friend', '!']) // tokenizer = Tokenizer.fromPretrained('anthony/tokenizers-test', { // revision: 'gpt-2', // }) // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['Hey', 'ฤ there', 'ฤ dear', 'ฤ friend', '!']) // }, 10000) describe('addTokens', () => { it('accepts a list of string as new tokens when initial model is empty', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) const nbAdd = tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) expect(nbAdd).toBe(5) }) it('accepts a list of AddedToken as new tokens when initial model is empty', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) const addedToken = new AddedToken('test', false) const nbAdd = tokenizer.addAddedTokens([addedToken]) expect(nbAdd).toBe(1) }) }) describe('encode', () => { let tokenizer: Tokenizer beforeEach(() => { // Clear all instances and calls to constructor and all methods: // TokenizerMock.mockClear(); const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('accepts a pair of strings as parameters', async () => { const encoding = await tokenizer.encode('my name is john', 'pair') expect(encoding).toBeDefined() }) it('accepts a string with a null pair', async () => { const encoding = await tokenizer.encode('my name is john', null) expect(encoding).toBeDefined() }) // TODO // it("throws if we try to encode a pre-tokenized string without isPretokenized=true", async () => { // await expect((encode as any)(["my", "name", "is", "john"], null)).rejects.toThrow( // "encode with isPreTokenized=false expect string" // ); // }); // it("accepts a pre-tokenized string as parameter", async () => { // const encoding = await tokenizer.encode(["my", "name", "is", "john"], undefined, { // isPretokenized: true, // }); // expect(encoding).toBeDefined(); // }); // it("throws if we try to encodeBatch pre-tokenized strings without isPretokenized=true", async () => { // await expect((encodeBatch as any)([["my", "name", "is", "john"]])).rejects.toThrow( // "encodeBatch with isPretokenized=false expects input to be `EncodeInput[]` " + // "with `EncodeInput = string | [string, string]`" // ); // }); // it("accepts a pre-tokenized input in encodeBatch", async () => { // const encoding = await tokenizer.encodeBatch([["my", "name", "is", "john"]], { // isPretokenized: true, // }); // expect(encoding).toBeDefined(); // }); it('Encodes correctly if called with only one argument', async () => { const encoded = await tokenizer.encode('my name is john') expect(encoded.getIds()).toEqual([0, 1, 2, 3]) }) it('returns an Encoding', async () => { const encoding = await tokenizer.encode('my name is john', 'pair') expect(encoding.getAttentionMask()).toEqual([1, 1, 1, 1, 1]) const ids = encoding.getIds() expect(Array.isArray(ids)).toBe(true) expect(ids).toHaveLength(5) for (const id of ids) { expect(typeof id).toBe('number') } expect(encoding.getOffsets()).toEqual([ [0, 2], [3, 7], [8, 10], [11, 15], [0, 4], ]) expect(encoding.getOverflowing()).toEqual([]) expect(encoding.getSpecialTokensMask()).toEqual([0, 0, 0, 0, 0]) expect(encoding.getTokens()).toEqual(['my', 'name', 'is', 'john', 'pair']) expect(encoding.getTypeIds()).toEqual([0, 0, 0, 0, 1]) }) describe('when truncation is enabled', () => { it('truncates with default if no truncation options provided', async () => { tokenizer.setTruncation(2) const singleEncoding = await tokenizer.encode('my name is 
john', null) expect(singleEncoding.getTokens()).toEqual(['my', 'name']) const pairEncoding = await tokenizer.encode('my name is john', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'pair']) }) it('throws an error with strategy `only_second` and no pair is encoded', async () => { tokenizer.setTruncation(2, { strategy: TruncationStrategy.OnlySecond }) await expect(tokenizer.encode('my name is john', null)).rejects.toThrow( 'Truncation error: Second sequence not provided', ) }) }) describe('when padding is enabled', () => { it('does not pad anything with default options', async () => { tokenizer.setPadding() const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toEqual(['my', 'name']) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'name', 'pair']) }) it('pads to the right by default', async () => { tokenizer.setPadding({ maxLength: 5 }) const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'name', 'pair', '[PAD]', '[PAD]']) }) it('pads to multiple of the given value', async () => { tokenizer.setPadding({ padToMultipleOf: 8 }) const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toHaveLength(8) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toHaveLength(8) }) }) }) describe('decode', () => { let tokenizer: Tokenizer beforeEach(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('has its callback called with the decoded string', async () => { const decode = tokenizer.decode.bind(tokenizer) expect(await decode([0, 1, 2, 3], true)).toEqual('my name is john') }) }) describe('decodeBatch', () => { let tokenizer: Tokenizer beforeEach(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('has its callback called with the decoded string', async () => { const decodeBatch = tokenizer.decodeBatch.bind(tokenizer) expect(await decodeBatch([[0, 1, 2, 3], [4]], true)).toEqual(['my name is john', 'pair']) }) }) describe('getVocab', () => { it('accepts `undefined` as parameter', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(tokenizer.getVocab(undefined)).toBeDefined() }) it('returns the vocabulary', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john']) expect(tokenizer.getVocab(true)).toEqual({ my: 0, name: 1, is: 2, john: 3, }) }) }) describe('getVocabSize', () => { it('accepts `undefined` as parameter', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(tokenizer.getVocabSize(undefined)).toBeDefined() }) }) describe('setTruncation', () => { it('returns the full truncation configuration', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.setTruncation(2) // TODO Return type is weird // const expectedConfig: TruncationOptions = { // maxLength: 2, // strategy: TruncationStrategy.LongestFirst, // stride: 0, // direction: TruncationDirection.Right, // }; // expect(truncation).toEqual(expectedConfig); }) }) describe('setPadding', () => { it('returns the full padding params', () => { const 
model = BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.setPadding() // TODO Return type is weird // const expectedConfig: PaddingOptions = { // direction: PaddingDirection.Right, // padId: 0, // padToken: "[PAD]", // padTypeId: 0, // }; // expect(padding).toEqual(expectedConfig); }) }) describe('postProcess', () => { let tokenizer: Tokenizer let firstEncoding: Encoding let secondEncoding: Encoding beforeAll(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) beforeEach(async () => { firstEncoding = await tokenizer.encode('my name is john', null) secondEncoding = await tokenizer.encode('pair', null) tokenizer.setTruncation(2) tokenizer.setPadding({ maxLength: 5 }) }) it('returns correctly with a single Encoding param', () => { const encoding = tokenizer.postProcess(firstEncoding) expect(encoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) }) it('returns correctly with `undefined` as second and third parameters', () => { const encoding = tokenizer.postProcess(firstEncoding, undefined, undefined) expect(encoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) }) it('returns correctly with 2 encodings', () => { const encoding = tokenizer.postProcess(firstEncoding, secondEncoding) expect(encoding.getTokens()).toEqual(['my', 'pair', '[PAD]', '[PAD]', '[PAD]']) }) }) })
tokenizers/bindings/node/lib/bindings/tokenizer.test.ts/0
{ "file_path": "tokenizers/bindings/node/lib/bindings/tokenizer.test.ts", "repo_id": "tokenizers", "token_count": 5268 }
215
# `tokenizers-linux-arm64-musl`

This is the **aarch64-unknown-linux-musl** binary for `tokenizers`.

tokenizers/bindings/node/npm/linux-arm64-musl/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/linux-arm64-musl/README.md", "repo_id": "tokenizers", "token_count": 37 }
216
use crate::tokenizer::PaddingOptions; use napi::bindgen_prelude::*; use napi_derive::napi; use tokenizers::utils::truncation::TruncationDirection; use tokenizers::Encoding; #[napi(js_name = "Encoding")] #[derive(Clone, Default)] pub struct JsEncoding { pub(crate) encoding: Option<Encoding>, } impl From<Encoding> for JsEncoding { fn from(value: Encoding) -> Self { Self { encoding: Some(value), } } } impl TryFrom<JsEncoding> for Encoding { type Error = Error; fn try_from(value: JsEncoding) -> Result<Self> { value .encoding .ok_or(Error::from_reason("Uninitialized encoding".to_string())) } } #[napi(string_enum, js_name = "TruncationDirection")] pub enum JsTruncationDirection { Left, Right, } impl From<JsTruncationDirection> for TruncationDirection { fn from(value: JsTruncationDirection) -> Self { match value { JsTruncationDirection::Left => TruncationDirection::Left, JsTruncationDirection::Right => TruncationDirection::Right, } } } impl TryFrom<String> for JsTruncationDirection { type Error = Error; fn try_from(value: String) -> Result<JsTruncationDirection> { match value.as_str() { "left" => Ok(JsTruncationDirection::Left), "right" => Ok(JsTruncationDirection::Right), s => Err(Error::from_reason(format!( "{s:?} is not a valid direction" ))), } } } #[napi(string_enum, js_name = "TruncationStrategy")] pub enum JsTruncationStrategy { LongestFirst, OnlyFirst, OnlySecond, } impl From<JsTruncationStrategy> for tokenizers::TruncationStrategy { fn from(value: JsTruncationStrategy) -> Self { match value { JsTruncationStrategy::LongestFirst => tokenizers::TruncationStrategy::LongestFirst, JsTruncationStrategy::OnlyFirst => tokenizers::TruncationStrategy::OnlyFirst, JsTruncationStrategy::OnlySecond => tokenizers::TruncationStrategy::OnlySecond, } } } #[napi] impl JsEncoding { #[napi(constructor)] pub fn new() -> Self { Self { encoding: None } } #[napi] pub fn get_length(&self) -> u32 { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_ids() .len() as u32 } #[napi] pub fn get_n_sequences(&self) -> u32 { self .encoding .as_ref() .expect("Uninitialized Encoding") .n_sequences() as u32 } #[napi] pub fn get_ids(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_ids() .to_vec() } #[napi] pub fn get_type_ids(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_type_ids() .to_vec() } #[napi] pub fn get_attention_mask(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_attention_mask() .to_vec() } #[napi] pub fn get_special_tokens_mask(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_special_tokens_mask() .to_vec() } #[napi] pub fn get_tokens(&self) -> Vec<String> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_tokens() .to_vec() } #[napi] pub fn get_offsets(&self) -> Vec<Vec<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_offsets() .iter() .map(|(a, b)| vec![*a as u32, *b as u32]) .collect() } #[napi] pub fn get_word_ids(&self) -> Vec<Option<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_word_ids() .to_vec() } #[napi] pub fn char_to_token(&self, pos: u32, seq_id: Option<u32>) -> Option<u32> { let seq_id = seq_id.unwrap_or(0); self .encoding .as_ref() .expect("Uninitialized Encoding") .char_to_token(pos as usize, seq_id as usize) .map(|i| i as u32) } #[napi] pub fn char_to_word(&self, pos: u32, seq_id: Option<u32>) -> Option<u32> { let seq_id = seq_id.unwrap_or(0); self .encoding .as_ref() 
.expect("Uninitialized Encoding") .char_to_word(pos as usize, seq_id as usize) } #[napi] pub fn pad(&mut self, length: u32, options: Option<PaddingOptions>) -> Result<()> { let params: tokenizers::PaddingParams = options.unwrap_or_default().try_into()?; self.encoding.as_mut().expect("Uninitialized Encoding").pad( length as usize, params.pad_id, params.pad_type_id, &params.pad_token, params.direction, ); Ok(()) } #[napi] pub fn truncate( &mut self, length: u32, stride: Option<u32>, direction: Option<Either<String, JsTruncationDirection>>, ) -> Result<()> { let stride = stride.unwrap_or_default(); let direction = match direction { None => TruncationDirection::Left, Some(Either::A(s)) => match s.as_str() { "left" => TruncationDirection::Left, "right" => TruncationDirection::Right, d => { return Err(Error::from_reason(format!( "{d} is not a valid truncation direction" ))); } }, Some(Either::B(t)) => t.into(), }; self .encoding .as_mut() .expect("Uninitialized Encoding") .truncate(length as usize, stride as usize, direction); Ok(()) } #[napi(ts_return_type = "[number, number] | null | undefined")] pub fn word_to_tokens(&self, env: Env, word: u32, seq_id: Option<u32>) -> Result<Option<Array>> { let seq_id = seq_id.unwrap_or(0); if let Some((a, b)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .word_to_tokens(word, seq_id as usize) { let mut arr = env.create_array(2)?; arr.set(0, env.create_uint32(a as u32)?)?; arr.set(1, env.create_uint32(b as u32)?)?; Ok(Some(arr)) } else { Ok(None) } } #[napi(ts_return_type = "[number, number] | null | undefined")] pub fn word_to_chars(&self, env: Env, word: u32, seq_id: Option<u32>) -> Result<Option<Array>> { let seq_id = seq_id.unwrap_or(0); if let Some((a, b)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .word_to_chars(word, seq_id as usize) { let mut arr = env.create_array(2)?; arr.set(0, env.create_uint32(a as u32)?)?; arr.set(1, env.create_uint32(b as u32)?)?; Ok(Some(arr)) } else { Ok(None) } } #[napi(ts_return_type = "[number, [number, number]] | null | undefined")] pub fn token_to_chars(&self, env: Env, token: u32) -> Result<Option<Array>> { if let Some((_, (start, stop))) = self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_chars(token as usize) { let mut offsets = env.create_array(2)?; offsets.set(0, env.create_uint32(start as u32)?)?; offsets.set(1, env.create_uint32(stop as u32)?)?; Ok(Some(offsets)) } else { Ok(None) } } #[napi] pub fn token_to_word(&self, token: u32) -> Result<Option<u32>> { if let Some((_, index)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_word(token as usize) { Ok(Some(index)) } else { Ok(None) } } #[napi] pub fn get_overflowing(&self) -> Vec<JsEncoding> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_overflowing() .clone() .into_iter() .map(|enc| JsEncoding { encoding: Some(enc), }) .collect() } #[napi] pub fn get_sequence_ids(&self) -> Vec<Option<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_sequence_ids() .into_iter() .map(|s| s.map(|id| id as u32)) .collect() } #[napi] pub fn token_to_sequence(&self, token: u32) -> Option<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_sequence(token as usize) .map(|s| s as u32) } }
tokenizers/bindings/node/src/encoding.rs/0
{ "file_path": "tokenizers/bindings/node/src/encoding.rs", "repo_id": "tokenizers", "token_count": 3778 }
217
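The `JsEncoding` wrapper above is a thin binding over `tokenizers::Encoding`, so the same offset, word and character mapping methods are available from the Python bindings of the library as well. A sketch using a toy word-level tokenizer (the vocabulary is invented for the example, and it assumes the `tokenizers` Python package is installed):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# Tiny hand-built vocabulary, just enough to exercise the Encoding API.
vocab = {"[UNK]": 0, "hello": 1, "world": 2}
tok = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tok.pre_tokenizer = Whitespace()

enc = tok.encode("hello world")
print(enc.tokens)             # ['hello', 'world']
print(enc.offsets)            # [(0, 5), (6, 11)]
print(enc.word_to_tokens(1))  # token range covering the second word
print(enc.char_to_token(7))   # index of the token containing character 7

enc.pad(4, pad_id=0, pad_token="[PAD]")
print(enc.tokens)             # right-padded to length 4
```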
# Generated content DO NOT EDIT class Decoder: """ Base class for all decoders This class is not supposed to be instantiated directly. Instead, any implementation of a Decoder will return an instance of this class when instantiated. """ def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class BPEDecoder(Decoder): """ BPEDecoder Decoder Args: suffix (:obj:`str`, `optional`, defaults to :obj:`</w>`): The suffix that was used to caracterize an end-of-word. This suffix will be replaced by whitespaces during the decoding """ def __init__(self, suffix="</w>"): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class ByteFallback(Decoder): """ ByteFallback Decoder ByteFallback is a simple trick which converts tokens looking like `<0x61>` to pure bytes, and attempts to make them into a string. If the tokens cannot be decoded you will get ๏ฟฝ instead for each inconvertable byte token """ def __init__(self): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class ByteLevel(Decoder): """ ByteLevel Decoder This decoder is to be used in tandem with the :class:`~tokenizers.pre_tokenizers.ByteLevel` :class:`~tokenizers.pre_tokenizers.PreTokenizer`. """ def __init__(self): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class CTC(Decoder): """ CTC Decoder Args: pad_token (:obj:`str`, `optional`, defaults to :obj:`<pad>`): The pad token used by CTC to delimit a new token. word_delimiter_token (:obj:`str`, `optional`, defaults to :obj:`|`): The word delimiter token. It will be replaced by a <space> cleanup (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to cleanup some tokenization artifacts. Mainly spaces before punctuation, and some abbreviated english forms. """ def __init__(self, pad_token="<pad>", word_delimiter_token="|", cleanup=True): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class Fuse(Decoder): """ Fuse Decoder Fuse simply fuses every token into a single string. This is the last step of decoding, this decoder exists only if there is need to add other decoders *after* the fusion """ def __init__(self): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class Metaspace(Decoder): """ Metaspace Decoder Args: replacement (:obj:`str`, `optional`, defaults to :obj:`โ–`): The replacement character. Must be exactly one character. By default we use the `โ–` (U+2581) meta symbol (Same as in SentencePiece). add_prefix_space (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to add a space to the first word if there isn't already one. This lets us treat `hello` exactly like `say hello`. 
""" def __init__(self, replacement="โ–", add_prefix_space=True): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class Replace(Decoder): """ Replace Decoder This decoder is to be used in tandem with the :class:`~tokenizers.pre_tokenizers.Replace` :class:`~tokenizers.pre_tokenizers.PreTokenizer`. """ def __init__(self, pattern, content): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class Sequence(Decoder): """ Sequence Decoder Args: decoders (:obj:`List[Decoder]`) The decoders that need to be chained """ def __init__(self, decoders): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class Strip(Decoder): """ Strip normalizer Strips n left characters of each token, or n right characters of each token """ def __init__(self, content, left=0, right=0): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass class WordPiece(Decoder): """ WordPiece Decoder Args: prefix (:obj:`str`, `optional`, defaults to :obj:`##`): The prefix to use for subwords that are not a beginning-of-word cleanup (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to cleanup some tokenization artifacts. Mainly spaces before punctuation, and some abbreviated english forms. """ def __init__(self, prefix="##", cleanup=True): pass def decode(self, tokens): """ Decode the given list of tokens to a final string Args: tokens (:obj:`List[str]`): The list of tokens to decode Returns: :obj:`str`: The decoded string """ pass
tokenizers/bindings/python/py_src/tokenizers/decoders/__init__.pyi/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/decoders/__init__.pyi", "repo_id": "tokenizers", "token_count": 3115 }
218
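The stubs above describe the decoder API exposed to Python. A short sketch of how a few of these decoders behave in practice; the token lists are made up, and the outputs noted in the comments reflect the documented behaviour rather than a guaranteed contract:

```python
from tokenizers.decoders import ByteLevel, Metaspace, WordPiece

# WordPiece: merge "##" continuation pieces and clean up spacing/punctuation.
wp = WordPiece(prefix="##", cleanup=True)
print(wp.decode(["hugging", "##face", "rocks", "!"]))  # -> "huggingface rocks!"

# Metaspace: turn the "▁" meta symbol back into plain spaces.
ms = Metaspace(replacement="▁", add_prefix_space=True)
print(ms.decode(["▁Hello", "▁world"]))                 # -> "Hello world"

# ByteLevel: undo the byte-level alphabet used by GPT-2 style pre-tokenizers.
bl = ByteLevel()
print(bl.decode(["Hello", "Ġworld"]))                  # -> "Hello world"
```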
from .visualizer import Annotation, EncodingVisualizer
tokenizers/bindings/python/py_src/tokenizers/tools/__init__.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/tools/__init__.py", "repo_id": "tokenizers", "token_count": 13 }
219
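`Annotation` and `EncodingVisualizer` re-exported here come from `tokenizers.tools.visualizer` and are mainly intended for notebook use. A hedged sketch of typical usage, assuming an `EncodingVisualizer(tokenizer, default_to_notebook=...)` constructor, `Annotation(start, end, label)` spans, and that calling the visualizer with `default_to_notebook=False` returns the raw HTML markup; the `tokenizer.json` path is a placeholder:

```python
from tokenizers import Tokenizer
from tokenizers.tools import Annotation, EncodingVisualizer

# Placeholder path: any serialized tokenizer works here.
tokenizer = Tokenizer.from_file("tokenizer.json")

text = "Hugging Face builds tokenizers."
annotations = [Annotation(start=0, end=12, label="ORG")]

# In a notebook, calling the visualizer displays an HTML view of how
# characters map to tokens; with default_to_notebook=False it is assumed
# to hand back the raw HTML string instead.
viz = EncodingVisualizer(tokenizer, default_to_notebook=False)
html = viz(text, annotations=annotations)
print(html[:200])
```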
use std::sync::{Arc, RwLock}; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use crate::error::ToPyResult; use crate::utils::{PyNormalizedString, PyNormalizedStringRefMut, PyPattern}; use serde::ser::SerializeStruct; use serde::{Deserialize, Deserializer, Serialize, Serializer}; use tk::normalizers::{ BertNormalizer, Lowercase, Nmt, NormalizerWrapper, Precompiled, Prepend, Replace, Strip, StripAccents, NFC, NFD, NFKC, NFKD, }; use tk::{NormalizedString, Normalizer}; use tokenizers as tk; /// Represents the different kind of NormalizedString we can receive from Python: /// - Owned: Created in Python and owned by Python /// - RefMut: A mutable reference to a NormalizedString owned by Rust #[derive(FromPyObject)] enum PyNormalizedStringMut<'p> { Owned(PyRefMut<'p, PyNormalizedString>), RefMut(PyNormalizedStringRefMut), } impl PyNormalizedStringMut<'_> { /// Normalized the underlying `NormalizedString` using the provided normalizer pub fn normalize_with<N>(&mut self, normalizer: &N) -> PyResult<()> where N: Normalizer, { match self { PyNormalizedStringMut::Owned(ref mut n) => normalizer.normalize(&mut n.normalized), PyNormalizedStringMut::RefMut(n) => n.map_as_mut(|n| normalizer.normalize(n))?, } .map_err(|e| exceptions::PyException::new_err(format!("{}", e))) } } /// Base class for all normalizers /// /// This class is not supposed to be instantiated directly. Instead, any implementation of a /// Normalizer will return an instance of this class when instantiated. #[pyclass(dict, module = "tokenizers.normalizers", name = "Normalizer", subclass)] #[derive(Clone, Serialize, Deserialize)] pub struct PyNormalizer { #[serde(flatten)] pub(crate) normalizer: PyNormalizerTypeWrapper, } impl PyNormalizer { pub(crate) fn new(normalizer: PyNormalizerTypeWrapper) -> Self { PyNormalizer { normalizer } } pub(crate) fn get_as_subtype(&self, py: Python<'_>) -> PyResult<PyObject> { let base = self.clone(); Ok(match self.normalizer { PyNormalizerTypeWrapper::Sequence(_) => Py::new(py, (PySequence {}, base))?.into_py(py), PyNormalizerTypeWrapper::Single(ref inner) => match &*inner.as_ref().read().unwrap() { PyNormalizerWrapper::Custom(_) => Py::new(py, base)?.into_py(py), PyNormalizerWrapper::Wrapped(ref inner) => match inner { NormalizerWrapper::Sequence(_) => { Py::new(py, (PySequence {}, base))?.into_py(py) } NormalizerWrapper::BertNormalizer(_) => { Py::new(py, (PyBertNormalizer {}, base))?.into_py(py) } NormalizerWrapper::StripNormalizer(_) => { Py::new(py, (PyBertNormalizer {}, base))?.into_py(py) } NormalizerWrapper::Prepend(_) => Py::new(py, (PyPrepend {}, base))?.into_py(py), NormalizerWrapper::StripAccents(_) => { Py::new(py, (PyStripAccents {}, base))?.into_py(py) } NormalizerWrapper::NFC(_) => Py::new(py, (PyNFC {}, base))?.into_py(py), NormalizerWrapper::NFD(_) => Py::new(py, (PyNFD {}, base))?.into_py(py), NormalizerWrapper::NFKC(_) => Py::new(py, (PyNFKC {}, base))?.into_py(py), NormalizerWrapper::NFKD(_) => Py::new(py, (PyNFKD {}, base))?.into_py(py), NormalizerWrapper::Lowercase(_) => { Py::new(py, (PyLowercase {}, base))?.into_py(py) } NormalizerWrapper::Precompiled(_) => { Py::new(py, (PyPrecompiled {}, base))?.into_py(py) } NormalizerWrapper::Replace(_) => Py::new(py, (PyReplace {}, base))?.into_py(py), NormalizerWrapper::Nmt(_) => Py::new(py, (PyNmt {}, base))?.into_py(py), }, }, }) } } impl Normalizer for PyNormalizer { fn normalize(&self, normalized: &mut NormalizedString) -> tk::Result<()> { self.normalizer.normalize(normalized) } } #[pymethods] impl PyNormalizer { 
#[staticmethod] fn custom(obj: PyObject) -> Self { Self { normalizer: PyNormalizerWrapper::Custom(CustomNormalizer::new(obj)).into(), } } fn __getstate__(&self, py: Python) -> PyResult<PyObject> { let data = serde_json::to_string(&self.normalizer).map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to pickle Normalizer: {}", e )) })?; Ok(PyBytes::new(py, data.as_bytes()).to_object(py)) } fn __setstate__(&mut self, py: Python, state: PyObject) -> PyResult<()> { match state.extract::<&PyBytes>(py) { Ok(s) => { self.normalizer = serde_json::from_slice(s.as_bytes()).map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to unpickle Normalizer: {}", e )) })?; Ok(()) } Err(e) => Err(e), } } /// Normalize a :class:`~tokenizers.NormalizedString` in-place /// /// This method allows to modify a :class:`~tokenizers.NormalizedString` to /// keep track of the alignment information. If you just want to see the result /// of the normalization on a raw string, you can use /// :meth:`~tokenizers.normalizers.Normalizer.normalize_str` /// /// Args: /// normalized (:class:`~tokenizers.NormalizedString`): /// The normalized string on which to apply this /// :class:`~tokenizers.normalizers.Normalizer` #[pyo3(text_signature = "(self, normalized)")] fn normalize(&self, mut normalized: PyNormalizedStringMut) -> PyResult<()> { normalized.normalize_with(&self.normalizer) } /// Normalize the given string /// /// This method provides a way to visualize the effect of a /// :class:`~tokenizers.normalizers.Normalizer` but it does not keep track of the alignment /// information. If you need to get/convert offsets, you can use /// :meth:`~tokenizers.normalizers.Normalizer.normalize` /// /// Args: /// sequence (:obj:`str`): /// A string to normalize /// /// Returns: /// :obj:`str`: A string after normalization #[pyo3(text_signature = "(self, sequence)")] fn normalize_str(&self, sequence: &str) -> PyResult<String> { let mut normalized = NormalizedString::from(sequence); ToPyResult(self.normalizer.normalize(&mut normalized)).into_py()?; Ok(normalized.get().to_owned()) } } macro_rules! getter { ($self: ident, $variant: ident, $name: ident) => {{ let super_ = $self.as_ref(); if let PyNormalizerTypeWrapper::Single(ref norm) = super_.normalizer { let wrapper = norm.read().unwrap(); if let PyNormalizerWrapper::Wrapped(NormalizerWrapper::$variant(o)) = (*wrapper).clone() { o.$name } else { unreachable!() } } else { unreachable!() } }}; } macro_rules! setter { ($self: ident, $variant: ident, $name: ident, $value: expr) => {{ let super_ = $self.as_ref(); if let PyNormalizerTypeWrapper::Single(ref norm) = super_.normalizer { let mut wrapper = norm.write().unwrap(); if let PyNormalizerWrapper::Wrapped(NormalizerWrapper::$variant(ref mut o)) = *wrapper { o.$name = $value; } } }}; } /// BertNormalizer /// /// Takes care of normalizing raw text before giving it to a Bert model. /// This includes cleaning the text, handling accents, chinese chars and lowercasing /// /// Args: /// clean_text (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to clean the text, by removing any control characters /// and replacing all whitespaces by the classic one. /// /// handle_chinese_chars (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to handle chinese chars by putting spaces around them. /// /// strip_accents (:obj:`bool`, `optional`): /// Whether to strip all accents. 
If this option is not specified (ie == None), /// then it will be determined by the value for `lowercase` (as in the original Bert). /// /// lowercase (:obj:`bool`, `optional`, defaults to :obj:`True`): /// Whether to lowercase. #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "BertNormalizer")] pub struct PyBertNormalizer {} #[pymethods] impl PyBertNormalizer { #[getter] fn get_clean_text(self_: PyRef<Self>) -> bool { getter!(self_, BertNormalizer, clean_text) } #[setter] fn set_clean_text(self_: PyRef<Self>, clean_text: bool) { setter!(self_, BertNormalizer, clean_text, clean_text); } #[getter] fn get_handle_chinese_chars(self_: PyRef<Self>) -> bool { getter!(self_, BertNormalizer, handle_chinese_chars) } #[setter] fn set_handle_chinese_chars(self_: PyRef<Self>, handle_chinese_chars: bool) { setter!( self_, BertNormalizer, handle_chinese_chars, handle_chinese_chars ); } #[getter] fn get_strip_accents(self_: PyRef<Self>) -> Option<bool> { getter!(self_, BertNormalizer, strip_accents) } #[setter] fn set_strip_accents(self_: PyRef<Self>, strip_accents: Option<bool>) { setter!(self_, BertNormalizer, strip_accents, strip_accents); } #[getter] fn get_lowercase(self_: PyRef<Self>) -> bool { getter!(self_, BertNormalizer, lowercase) } #[setter] fn set_lowercase(self_: PyRef<Self>, lowercase: bool) { setter!(self_, BertNormalizer, lowercase, lowercase) } #[new] #[pyo3(signature = ( clean_text = true, handle_chinese_chars = true, strip_accents = None, lowercase = true ), text_signature = "(self, clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True)")] fn new( clean_text: bool, handle_chinese_chars: bool, strip_accents: Option<bool>, lowercase: bool, ) -> (Self, PyNormalizer) { let normalizer = BertNormalizer::new(clean_text, handle_chinese_chars, strip_accents, lowercase); (PyBertNormalizer {}, normalizer.into()) } } /// NFD Unicode Normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "NFD")] pub struct PyNFD {} #[pymethods] impl PyNFD { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyNFD {}, PyNormalizer::new(NFD.into())) } } /// NFKD Unicode Normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "NFKD")] pub struct PyNFKD {} #[pymethods] impl PyNFKD { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyNFKD {}, NFKD.into()) } } /// NFC Unicode Normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "NFC")] pub struct PyNFC {} #[pymethods] impl PyNFC { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyNFC {}, NFC.into()) } } /// NFKC Unicode Normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "NFKC")] pub struct PyNFKC {} #[pymethods] impl PyNFKC { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyNFKC {}, NFKC.into()) } } /// Allows concatenating multiple other Normalizer as a Sequence. 
/// All the normalizers run in sequence in the given order /// /// Args: /// normalizers (:obj:`List[Normalizer]`): /// A list of Normalizer to be run as a sequence #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Sequence")] pub struct PySequence {} #[pymethods] impl PySequence { #[new] #[pyo3(text_signature = None)] fn new(normalizers: &PyList) -> PyResult<(Self, PyNormalizer)> { let mut sequence = Vec::with_capacity(normalizers.len()); for n in normalizers.iter() { let normalizer: PyRef<PyNormalizer> = n.extract()?; match &normalizer.normalizer { PyNormalizerTypeWrapper::Sequence(inner) => sequence.extend(inner.iter().cloned()), PyNormalizerTypeWrapper::Single(inner) => sequence.push(inner.clone()), } } Ok(( PySequence {}, PyNormalizer::new(PyNormalizerTypeWrapper::Sequence(sequence)), )) } fn __getnewargs__<'p>(&self, py: Python<'p>) -> &'p PyTuple { PyTuple::new(py, [PyList::empty(py)]) } fn __len__(&self) -> usize { 0 } } /// Lowercase Normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Lowercase")] pub struct PyLowercase {} #[pymethods] impl PyLowercase { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyLowercase {}, Lowercase.into()) } } /// Strip normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Strip")] pub struct PyStrip {} #[pymethods] impl PyStrip { #[getter] fn get_left(self_: PyRef<Self>) -> bool { getter!(self_, StripNormalizer, strip_left) } #[setter] fn set_left(self_: PyRef<Self>, left: bool) { setter!(self_, StripNormalizer, strip_left, left) } #[getter] fn get_right(self_: PyRef<Self>) -> bool { getter!(self_, StripNormalizer, strip_right) } #[setter] fn set_right(self_: PyRef<Self>, right: bool) { setter!(self_, StripNormalizer, strip_right, right) } #[new] #[pyo3(signature = (left = true, right = true), text_signature = "(self, left=True, right=True)")] fn new(left: bool, right: bool) -> (Self, PyNormalizer) { (PyStrip {}, Strip::new(left, right).into()) } } /// Prepend normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Prepend")] pub struct PyPrepend {} #[pymethods] impl PyPrepend { #[getter] fn get_prepend(self_: PyRef<Self>) -> String { getter!(self_, Prepend, prepend) } #[setter] fn set_prepend(self_: PyRef<Self>, prepend: String) { setter!(self_, Prepend, prepend, prepend) } #[new] #[pyo3(signature = (prepend="โ–".to_string()), text_signature = "(self, prepend)")] fn new(prepend: String) -> (Self, PyNormalizer) { (PyPrepend {}, Prepend::new(prepend).into()) } } /// StripAccents normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "StripAccents")] pub struct PyStripAccents {} #[pymethods] impl PyStripAccents { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyStripAccents {}, StripAccents.into()) } } /// Nmt normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Nmt")] pub struct PyNmt {} #[pymethods] impl PyNmt { #[new] #[pyo3(text_signature = "(self)")] fn new() -> (Self, PyNormalizer) { (PyNmt {}, Nmt.into()) } } /// Precompiled normalizer /// Don't use manually it is used for compatiblity for SentencePiece. 
#[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Precompiled")] pub struct PyPrecompiled {} #[pymethods] impl PyPrecompiled { #[new] #[pyo3(text_signature = "(self, precompiled_charsmap)")] fn new(py_precompiled_charsmap: &PyBytes) -> PyResult<(Self, PyNormalizer)> { let precompiled_charsmap: &[u8] = FromPyObject::extract(py_precompiled_charsmap)?; Ok(( PyPrecompiled {}, Precompiled::from(precompiled_charsmap) .map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to build Precompiled normalizer: {}", e )) })? .into(), )) } } /// Replace normalizer #[pyclass(extends=PyNormalizer, module = "tokenizers.normalizers", name = "Replace")] pub struct PyReplace {} #[pymethods] impl PyReplace { #[new] #[pyo3(text_signature = "(self, pattern, content)")] fn new(pattern: PyPattern, content: String) -> PyResult<(Self, PyNormalizer)> { Ok(( PyReplace {}, ToPyResult(Replace::new(pattern, content)).into_py()?.into(), )) } } #[derive(Debug, Clone)] pub(crate) struct CustomNormalizer { inner: PyObject, } impl CustomNormalizer { pub fn new(inner: PyObject) -> Self { Self { inner } } } impl tk::tokenizer::Normalizer for CustomNormalizer { fn normalize(&self, normalized: &mut NormalizedString) -> tk::Result<()> { Python::with_gil(|py| { let normalized = PyNormalizedStringRefMut::new(normalized); let py_normalized = self.inner.as_ref(py); py_normalized.call_method("normalize", (normalized.get(),), None)?; Ok(()) }) } } impl Serialize for CustomNormalizer { fn serialize<S>(&self, _serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { Err(serde::ser::Error::custom( "Custom Normalizer cannot be serialized", )) } } impl<'de> Deserialize<'de> for CustomNormalizer { fn deserialize<D>(_deserializer: D) -> Result<Self, D::Error> where D: Deserializer<'de>, { Err(serde::de::Error::custom( "Custom Normalizer cannot be deserialized", )) } } #[derive(Debug, Clone, Deserialize)] #[serde(untagged)] pub(crate) enum PyNormalizerWrapper { Custom(CustomNormalizer), Wrapped(NormalizerWrapper), } impl Serialize for PyNormalizerWrapper { fn serialize<S>(&self, serializer: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> where S: Serializer, { match self { PyNormalizerWrapper::Wrapped(inner) => inner.serialize(serializer), PyNormalizerWrapper::Custom(inner) => inner.serialize(serializer), } } } #[derive(Debug, Clone, Deserialize)] #[serde(untagged)] pub(crate) enum PyNormalizerTypeWrapper { Sequence(Vec<Arc<RwLock<PyNormalizerWrapper>>>), Single(Arc<RwLock<PyNormalizerWrapper>>), } impl Serialize for PyNormalizerTypeWrapper { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { match self { PyNormalizerTypeWrapper::Sequence(seq) => { let mut ser = serializer.serialize_struct("Sequence", 2)?; ser.serialize_field("type", "Sequence")?; ser.serialize_field("normalizers", seq)?; ser.end() } PyNormalizerTypeWrapper::Single(inner) => inner.serialize(serializer), } } } impl<I> From<I> for PyNormalizerWrapper where I: Into<NormalizerWrapper>, { fn from(norm: I) -> Self { PyNormalizerWrapper::Wrapped(norm.into()) } } impl<I> From<I> for PyNormalizerTypeWrapper where I: Into<PyNormalizerWrapper>, { fn from(norm: I) -> Self { PyNormalizerTypeWrapper::Single(Arc::new(RwLock::new(norm.into()))) } } impl<I> From<I> for PyNormalizer where I: Into<NormalizerWrapper>, { fn from(norm: I) -> Self { PyNormalizer { normalizer: norm.into().into(), } } } impl Normalizer for PyNormalizerTypeWrapper { fn normalize(&self, normalized: &mut 
NormalizedString) -> tk::Result<()> { match self { PyNormalizerTypeWrapper::Single(inner) => inner.read().unwrap().normalize(normalized), PyNormalizerTypeWrapper::Sequence(inner) => inner .iter() .try_for_each(|n| n.read().unwrap().normalize(normalized)), } } } impl Normalizer for PyNormalizerWrapper { fn normalize(&self, normalized: &mut NormalizedString) -> tk::Result<()> { match self { PyNormalizerWrapper::Wrapped(inner) => inner.normalize(normalized), PyNormalizerWrapper::Custom(inner) => inner.normalize(normalized), } } } /// Normalizers Module #[pymodule] pub fn normalizers(_py: Python, m: &PyModule) -> PyResult<()> { m.add_class::<PyNormalizer>()?; m.add_class::<PyBertNormalizer>()?; m.add_class::<PyNFD>()?; m.add_class::<PyNFKD>()?; m.add_class::<PyNFC>()?; m.add_class::<PyNFKC>()?; m.add_class::<PySequence>()?; m.add_class::<PyLowercase>()?; m.add_class::<PyStrip>()?; m.add_class::<PyStripAccents>()?; m.add_class::<PyPrepend>()?; m.add_class::<PyNmt>()?; m.add_class::<PyPrecompiled>()?; m.add_class::<PyReplace>()?; Ok(()) } #[cfg(test)] mod test { use pyo3::prelude::*; use tk::normalizers::unicode::{NFC, NFKC}; use tk::normalizers::utils::Sequence; use tk::normalizers::NormalizerWrapper; use crate::normalizers::{PyNormalizer, PyNormalizerTypeWrapper, PyNormalizerWrapper}; #[test] fn get_subtype() { Python::with_gil(|py| { let py_norm = PyNormalizer::new(NFC.into()); let py_nfc = py_norm.get_as_subtype(py).unwrap(); assert_eq!("NFC", py_nfc.as_ref(py).get_type().name().unwrap()); }) } #[test] fn serialize() { let py_wrapped: PyNormalizerWrapper = NFKC.into(); let py_ser = serde_json::to_string(&py_wrapped).unwrap(); let rs_wrapped = NormalizerWrapper::NFKC(NFKC); let rs_ser = serde_json::to_string(&rs_wrapped).unwrap(); assert_eq!(py_ser, rs_ser); let py_norm: PyNormalizer = serde_json::from_str(&rs_ser).unwrap(); match py_norm.normalizer { PyNormalizerTypeWrapper::Single(inner) => match *inner.as_ref().read().unwrap() { PyNormalizerWrapper::Wrapped(NormalizerWrapper::NFKC(_)) => {} _ => panic!("Expected NFKC"), }, _ => panic!("Expected wrapped, not sequence."), } let py_seq: PyNormalizerWrapper = Sequence::new(vec![NFC.into(), NFKC.into()]).into(); let py_wrapper_ser = serde_json::to_string(&py_seq).unwrap(); let rs_wrapped = NormalizerWrapper::Sequence(Sequence::new(vec![NFC.into(), NFKC.into()])); let rs_ser = serde_json::to_string(&rs_wrapped).unwrap(); assert_eq!(py_wrapper_ser, rs_ser); let py_seq = PyNormalizer::new(py_seq.into()); let py_ser = serde_json::to_string(&py_seq).unwrap(); assert_eq!(py_wrapper_ser, py_ser); let rs_seq = Sequence::new(vec![NFC.into(), NFKC.into()]); let rs_ser = serde_json::to_string(&rs_seq).unwrap(); assert_eq!(py_wrapper_ser, rs_ser); } #[test] fn deserialize_sequence() { let string = r#"{"type": "NFKC"}"#; let normalizer: PyNormalizer = serde_json::from_str(string).unwrap(); match normalizer.normalizer { PyNormalizerTypeWrapper::Single(inner) => match *inner.as_ref().read().unwrap() { PyNormalizerWrapper::Wrapped(NormalizerWrapper::NFKC(_)) => {} _ => panic!("Expected NFKC"), }, _ => panic!("Expected wrapped, not sequence."), } let sequence_string = format!(r#"{{"type": "Sequence", "normalizers": [{}]}}"#, string); let normalizer: PyNormalizer = serde_json::from_str(&sequence_string).unwrap(); match normalizer.normalizer { PyNormalizerTypeWrapper::Single(inner) => match &*inner.as_ref().read().unwrap() { PyNormalizerWrapper::Wrapped(NormalizerWrapper::Sequence(sequence)) => { let normalizers = sequence.get_normalizers(); 
assert_eq!(normalizers.len(), 1); match normalizers[0] { NormalizerWrapper::NFKC(_) => {} _ => panic!("Expected NFKC"), } } _ => panic!("Expected sequence"), }, _ => panic!("Expected single"), }; } }
tokenizers/bindings/python/src/normalizers.rs/0
{ "file_path": "tokenizers/bindings/python/src/normalizers.rs", "repo_id": "tokenizers", "token_count": 11191 }
220
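The pyo3 module above is what backs `tokenizers.normalizers` on the Python side. For reference, the `Sequence` chaining and the `normalize_str` helper documented in the binding look like this from Python; the input strings are arbitrary examples:

```python
from tokenizers import normalizers
from tokenizers.normalizers import NFD, StripAccents, Lowercase, BertNormalizer

# Chain several normalizers, mirroring the Sequence wrapper in the binding.
normalizer = normalizers.Sequence([NFD(), StripAccents(), Lowercase()])
print(normalizer.normalize_str("Héllò hôw are ü?"))  # -> "hello how are u?"

# BertNormalizer bundles cleaning, accent handling and lowercasing in one step.
bert = BertNormalizer(clean_text=True, handle_chinese_chars=True,
                      strip_accents=None, lowercase=True)
print(bert.normalize_str("Héllo WORLD"))             # -> "hello world"
```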
import pytest from tokenizers import BertWordPieceTokenizer from ..utils import bert_files, data_dir class TestEncoding: @pytest.fixture(scope="class") def encodings(self, bert_files): tokenizer = BertWordPieceTokenizer.from_file(bert_files["vocab"]) single_encoding = tokenizer.encode("I love HuggingFace") pair_encoding = tokenizer.encode("I love HuggingFace", "Do you?") return single_encoding, pair_encoding def test_sequence_ids(self, encodings): single, pair = encodings assert single.sequence_ids == [None, 0, 0, 0, 0, None] assert pair.sequence_ids == [None, 0, 0, 0, 0, None, 1, 1, 1, None] def test_n_sequences(self, encodings): single, pair = encodings assert single.n_sequences == 1 assert pair.n_sequences == 2 def test_word_to_tokens(self, encodings): single, pair = encodings assert single.tokens == ["[CLS]", "i", "love", "hugging", "##face", "[SEP]"] assert single.word_to_tokens(0) == (1, 2) assert pair.tokens == [ "[CLS]", "i", "love", "hugging", "##face", "[SEP]", "do", "you", "?", "[SEP]", ] assert pair.word_to_tokens(0) == (1, 2) assert pair.word_to_tokens(0, 0) == (1, 2) assert pair.word_to_tokens(6, 0) == None assert pair.word_to_tokens(0, 1) == (6, 7) def test_word_to_chars(self, encodings): single, pair = encodings assert single.word_to_chars(2) == (7, 18) assert pair.word_to_chars(2) == (7, 18) assert pair.word_to_chars(2, 0) == (7, 18) assert pair.word_to_chars(2, 1) == (6, 7) def test_token_to_sequence(self, encodings): single, pair = encodings assert single.token_to_sequence(2) == 0 assert pair.token_to_sequence(2) == 0 assert pair.token_to_sequence(0) == None assert pair.token_to_sequence(5) == None assert pair.token_to_sequence(6) == 1 assert pair.token_to_sequence(8) == 1 assert pair.token_to_sequence(9) == None assert pair.token_to_sequence(1200) == None def test_token_to_chars(self, encodings): single, pair = encodings assert single.token_to_chars(0) == None assert single.token_to_chars(2) == (2, 6) assert pair.token_to_chars(2) == (2, 6) assert pair.token_to_chars(5) == None assert pair.token_to_chars(6) == (0, 2) def test_token_to_word(self, encodings): single, pair = encodings assert single.token_to_word(0) == None assert single.token_to_word(1) == 0 assert single.token_to_word(4) == 2 assert pair.token_to_word(1) == 0 assert pair.token_to_word(4) == 2 assert pair.token_to_word(5) == None assert pair.token_to_word(6) == 0 assert pair.token_to_word(7) == 1 def test_char_to_token(self, encodings): single, pair = encodings assert single.char_to_token(0) == 1 assert pair.char_to_token(0) == 1 assert pair.char_to_token(0, 0) == 1 assert pair.char_to_token(1, 0) == None assert pair.char_to_token(0, 1) == 6 assert pair.char_to_token(2, 1) == None def test_char_to_word(self, encodings): single, pair = encodings assert single.char_to_word(0) == 0 assert single.char_to_word(1) == None assert pair.char_to_word(2) == 1 assert pair.char_to_word(2, 0) == 1 assert pair.char_to_word(2, 1) == None assert pair.char_to_word(3, 1) == 1 def test_truncation(self, encodings): single, _ = encodings single.truncate(2, 1, "right") assert single.tokens == ["[CLS]", "i"] assert single.overflowing[0].tokens == ["i", "love"] def test_invalid_truncate_direction(self, encodings): single, _ = encodings with pytest.raises(ValueError) as excinfo: single.truncate(2, 1, "not_a_direction") assert "Invalid truncation direction value : not_a_direction" == str(excinfo.value)
tokenizers/bindings/python/tests/bindings/test_encoding.py/0
{ "file_path": "tokenizers/bindings/python/tests/bindings/test_encoding.py", "repo_id": "tokenizers", "token_count": 1991 }
221
import os

import pytest

from tokenizers import SentencePieceBPETokenizer, SentencePieceUnigramTokenizer


class TestSentencePieceBPE:
    def test_train_from_iterator(self):
        text = ["A first sentence", "Another sentence", "And a last one"]
        tokenizer = SentencePieceBPETokenizer()
        tokenizer.train_from_iterator(text, show_progress=False)

        output = tokenizer.encode("A sentence")
        assert output.tokens == ["▁A", "▁sentence"]


class TestSentencePieceUnigram:
    def test_train(self, tmpdir):
        p = tmpdir.mkdir("tmpdir").join("file.txt")
        p.write("A first sentence\nAnother sentence\nAnd a last one")

        tokenizer = SentencePieceUnigramTokenizer()
        tokenizer.train(files=str(p), show_progress=False)

        output = tokenizer.encode("A sentence")
        assert output.tokens == ["▁A", "▁", "s", "en", "t", "en", "c", "e"]

        with pytest.raises(Exception) as excinfo:
            _ = tokenizer.encode("A sentence 🤗")
        assert str(excinfo.value) == "Encountered an unknown token but `unk_id` is missing"

    def test_train_with_unk_token(self, tmpdir):
        p = tmpdir.mkdir("tmpdir").join("file.txt")
        p.write("A first sentence\nAnother sentence\nAnd a last one")

        tokenizer = SentencePieceUnigramTokenizer()
        tokenizer.train(files=str(p), show_progress=False, special_tokens=["<unk>"], unk_token="<unk>")

        output = tokenizer.encode("A sentence 🤗")
        assert output.ids[-1] == 0
        assert output.tokens == ["▁A", "▁", "s", "en", "t", "en", "c", "e", "▁", "🤗"]

    def test_train_from_iterator(self):
        text = ["A first sentence", "Another sentence", "And a last one"]
        tokenizer = SentencePieceUnigramTokenizer()
        tokenizer.train_from_iterator(text, show_progress=False)

        output = tokenizer.encode("A sentence")
        assert output.tokens == ["▁A", "▁", "s", "en", "t", "en", "c", "e"]

        with pytest.raises(Exception) as excinfo:
            _ = tokenizer.encode("A sentence 🤗")
        assert str(excinfo.value) == "Encountered an unknown token but `unk_id` is missing"

    def test_train_from_iterator_with_unk_token(self):
        text = ["A first sentence", "Another sentence", "And a last one"]
        tokenizer = SentencePieceUnigramTokenizer()
        tokenizer.train_from_iterator(
            text, vocab_size=100, show_progress=False, special_tokens=["<unk>"], unk_token="<unk>"
        )

        output = tokenizer.encode("A sentence 🤗")
        assert output.ids[-1] == 0
        assert output.tokens == ["▁A", "▁", "s", "en", "t", "en", "c", "e", "▁", "🤗"]
tokenizers/bindings/python/tests/implementations/test_sentencepiece.py/0
{ "file_path": "tokenizers/bindings/python/tests/implementations/test_sentencepiece.py", "repo_id": "tokenizers", "token_count": 1122 }
222
# Trainers

<tokenizerslangcontent>
<python>
## BpeTrainer

[[autodoc]] tokenizers.trainers.BpeTrainer

## UnigramTrainer

[[autodoc]] tokenizers.trainers.UnigramTrainer

## WordLevelTrainer

[[autodoc]] tokenizers.trainers.WordLevelTrainer

## WordPieceTrainer

[[autodoc]] tokenizers.trainers.WordPieceTrainer
</python>
<rust>
The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website.
</rust>
<node>
The node API has not been documented yet.
</node>
</tokenizerslangcontent>
tokenizers/docs/source-doc-builder/api/trainers.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/trainers.mdx", "repo_id": "tokenizers", "token_count": 183 }
223
/* Our DOM objects */ /* Version control */ .selectors { margin-bottom: 10px; } .dropdown-button { display: inline-block; width: 50%; background-color: #6670FF; color: white; border: none; padding: 5px; font-size: 15px; cursor: pointer; } .dropdown-button:hover, .dropdown-button:focus, .dropdown-button.active { background-color: #A6B0FF; } .dropdown-button.active { background-color: #7988FF; } .menu-dropdown { display: none; background-color: #7988FF; min-width: 160px; overflow: auto; font-size: 15px; padding: 10px 0; } .menu-dropdown a { color: white; padding: 3px 4px; text-decoration: none; display: block; } .menu-dropdown a:hover { background-color: #A6B0FF; } .dropdown-link.active { background-color: #A6B0FF; } .show { display: block; } /* The literal code blocks */ .rst-content tt.literal, .rst-content tt.literal, .rst-content code.literal { color: #6670FF; } /* To keep the logo centered */ .wy-side-scroll { width: auto; font-size: 20px; } /* The div that holds the Hugging Face logo */ .HuggingFaceDiv { width: 100% } /* The research field on top of the toc tree */ .wy-side-nav-search{ padding-top: 0; background-color: #6670FF; } /* The toc tree */ .wy-nav-side{ background-color: #6670FF; padding-bottom: 0; } /* The section headers in the toc tree */ .wy-menu-vertical p.caption{ background-color: #4d59ff; line-height: 40px; } /* The selected items in the toc tree */ .wy-menu-vertical li.current{ background-color: #A6B0FF; } /* When a list item that does belong to the selected block from the toc tree is hovered */ .wy-menu-vertical li.current a:hover{ background-color: #B6C0FF; } /* When a list item that does NOT belong to the selected block from the toc tree is hovered. */ .wy-menu-vertical li a:hover{ background-color: #A7AFFB; } /* The text items on the toc tree */ .wy-menu-vertical a { color: #FFFFDD; font-family: Calibre-Light, sans-serif; } .wy-menu-vertical header, .wy-menu-vertical p.caption{ color: white; font-family: Calibre-Light, sans-serif; } /* The color inside the selected toc tree block */ .wy-menu-vertical li.toctree-l2 a, .wy-menu-vertical li.toctree-l3 a, .wy-menu-vertical li.toctree-l4 a { color: black; } /* Inside the depth-2 selected toc tree block */ .wy-menu-vertical li.toctree-l2.current>a { background-color: #B6C0FF } .wy-menu-vertical li.toctree-l2.current li.toctree-l3>a { background-color: #C6D0FF } /* Inside the depth-3 selected toc tree block */ .wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{ background-color: #D6E0FF } /* Inside code snippets */ .rst-content dl:not(.docutils) dt{ font-size: 15px; } /* Links */ a { color: #6670FF; } /* Content bars */ .rst-content dl:not(.docutils) dt { background-color: rgba(251, 141, 104, 0.1); border-right: solid 2px #FB8D68; border-left: solid 2px #FB8D68; color: #FB8D68; font-family: Calibre-Light, sans-serif; border-top: none; font-style: normal !important; } /* Expand button */ .wy-menu-vertical li.toctree-l2 span.toctree-expand, .wy-menu-vertical li.on a span.toctree-expand, .wy-menu-vertical li.current>a span.toctree-expand, .wy-menu-vertical li.toctree-l3 span.toctree-expand{ color: black; } /* Max window size */ .wy-nav-content{ max-width: 1200px; } /* Mobile header */ .wy-nav-top{ background-color: #6670FF; } /* Source spans */ .rst-content .viewcode-link, .rst-content .viewcode-back{ color: #6670FF; font-size: 110%; letter-spacing: 2px; text-transform: uppercase; } /* It would be better for table to be visible without horizontal scrolling */ .wy-table-responsive table td, .wy-table-responsive table 
th{ white-space: normal; } .footer { margin-top: 20px; } .footer__Social { display: flex; flex-direction: row; } .footer__CustomImage { margin: 2px 5px 0 0; } /* class and method names in doc */ .rst-content dl:not(.docutils) tt.descname, .rst-content dl:not(.docutils) tt.descclassname, .rst-content dl:not(.docutils) tt.descname, .rst-content dl:not(.docutils) code.descname, .rst-content dl:not(.docutils) tt.descclassname, .rst-content dl:not(.docutils) code.descclassname{ font-family: Calibre, sans-serif; font-size: 20px !important; } /* class name in doc*/ .rst-content dl:not(.docutils) tt.descname, .rst-content dl:not(.docutils) tt.descname, .rst-content dl:not(.docutils) code.descname{ margin-right: 10px; font-family: Calibre-Medium, sans-serif; } /* Method and class parameters */ .sig-param{ line-height: 23px; } /* Class introduction "class" string at beginning */ .rst-content dl:not(.docutils) .property{ font-size: 18px; color: black; } /* FONTS */ body{ font-family: Calibre, sans-serif; font-size: 16px; } h1 { font-family: Calibre-Thin, sans-serif; font-size: 70px; } h2, .rst-content .toctree-wrapper p.caption, h3, h4, h5, h6, legend{ font-family: Calibre-Medium, sans-serif; } @font-face { font-family: Calibre-Medium; src: url(./Calibre-Medium.otf); font-weight:400; } @font-face { font-family: Calibre; src: url(./Calibre-Regular.otf); font-weight:400; } @font-face { font-family: Calibre-Light; src: url(./Calibre-Light.ttf); font-weight:400; } @font-face { font-family: Calibre-Thin; src: url(./Calibre-Thin.otf); font-weight:400; } /** * Nav Links to other parts of huggingface.co */ div.hf-menu { position: absolute; top: 0; right: 0; padding-top: 20px; padding-right: 20px; z-index: 1000; } div.hf-menu a { font-size: 14px; letter-spacing: 0.3px; text-transform: uppercase; color: white; -webkit-font-smoothing: antialiased; background: linear-gradient(0deg, #6671ffb8, #9a66ffb8 50%); padding: 10px 16px 6px 16px; border-radius: 3px; margin-left: 12px; position: relative; } div.hf-menu a:active { top: 1px; } @media (min-width: 768px) and (max-width: 1860px) { .wy-breadcrumbs { margin-top: 32px; } } @media (max-width: 768px) { div.hf-menu { display: none; } }
tokenizers/docs/source/_static/css/huggingface.css/0
{ "file_path": "tokenizers/docs/source/_static/css/huggingface.css", "repo_id": "tokenizers", "token_count": 2708 }
224
Training from memory
----------------------------------------------------------------------------------------------------

In the `Quicktour <quicktour>`__, we saw how to build and train a tokenizer using text files,
but we can actually use any Python Iterator. In this section we'll see a few different ways
of training our tokenizer.

For all the examples listed below, we'll use the same :class:`~tokenizers.Tokenizer` and
:class:`~tokenizers.trainers.Trainer`, built as follows:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START init_tokenizer_trainer
    :end-before: END init_tokenizer_trainer
    :dedent: 8

This tokenizer is based on the :class:`~tokenizers.models.Unigram` model. It takes care of
normalizing the input using the NFKC Unicode normalization method, and uses a
:class:`~tokenizers.pre_tokenizers.ByteLevel` pre-tokenizer with the corresponding decoder.

For more information on the components used here, you can check `here <components>`__.

The most basic way
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As you probably guessed already, the easiest way to train our tokenizer is by using a :obj:`List`:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START train_basic
    :end-before: END train_basic
    :dedent: 8

Easy, right? You can use anything that works as an iterator here, be it a :obj:`List`, :obj:`Tuple`,
or a :obj:`np.Array`. Anything works as long as it provides strings.

Using the 🤗 Datasets library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An awesome way to access one of the many datasets that exist out there is by using the 🤗 Datasets
library. For more information about it, you should check
`the official documentation here <https://huggingface.co/docs/datasets/>`__.

Let's start by loading our dataset:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START load_dataset
    :end-before: END load_dataset
    :dedent: 8

The next step is to build an iterator over this dataset. The easiest way to do this is probably by
using a generator:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START def_batch_iterator
    :end-before: END def_batch_iterator
    :dedent: 8

As you can see here, for improved efficiency we can actually provide a batch of examples used
to train, instead of iterating over them one by one. By doing so, we can expect performance
very similar to what we got while training directly from files.

With our iterator ready, we just need to launch the training. In order to improve the look of our
progress bars, we can specify the total length of the dataset:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START train_datasets
    :end-before: END train_datasets
    :dedent: 8

And that's it!

Using gzip files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since gzip files in Python can be used as iterators, it is extremely simple to train on such files:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START single_gzip
    :end-before: END single_gzip
    :dedent: 8

Now if we wanted to train from multiple gzip files, it wouldn't be much harder:

.. literalinclude:: ../../../../bindings/python/tests/documentation/test_tutorial_train_from_iterators.py
    :language: python
    :start-after: START multi_gzip
    :end-before: END multi_gzip
    :dedent: 8

And voilà!
tokenizers/docs/source/tutorials/python/training_from_memory.rst/0
{ "file_path": "tokenizers/docs/source/tutorials/python/training_from_memory.rst", "repo_id": "tokenizers", "token_count": 1149 }
225
mod utils;

use tokenizers::models::bpe::{Vocab, BPE};
use tokenizers::Tokenizer;

use wasm_bindgen::prelude::*;

// When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
// allocator.
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[wasm_bindgen]
pub fn tokenize(string: &str) -> Vec<u32> {
    let vocab: Vocab = vec![
        ("a".to_string(), 0),
        ("##b".to_string(), 1),
        ("##c".to_string(), 2),
        ("ab".to_string(), 3),
        ("abc".to_string(), 4),
    ]
    .into_iter()
    .collect();

    let merges = vec![
        ("a".to_string(), "##b".to_string()),
        ("ab".to_string(), "##c".to_string()),
    ];

    let bpe = BPE::builder()
        .vocab_and_merges(vocab, merges)
        .unk_token("[UNK]".to_string())
        .continuing_subword_prefix("##".to_string())
        .build()
        .unwrap();

    let tokenizer = Tokenizer::new(bpe);

    tokenizer
        .encode(string, false)
        .unwrap()
        .get_ids()
        .into_iter()
        .cloned()
        .collect()
}
tokenizers/tokenizers/examples/unstable_wasm/src/lib.rs/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/src/lib.rs", "repo_id": "tokenizers", "token_count": 543 }
226
//!
//! This is the CLI binary for the Tokenizers project
//!

use clap::{Parser, Subcommand};
use std::io::{self, BufRead, Write};
use tokenizers::models::bpe::BPE;
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::tokenizer::{AddedToken, Result};
use tokenizers::Tokenizer;

/// Generate custom Tokenizers or use existing ones
#[derive(Parser, Debug)]
#[command(author, version)]
struct Args {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand, Debug)]
enum Command {
    Shell {
        /// Path to the vocab.json file
        vocab: String,
        /// Path to the merges.txt file
        merges: String,
    },
}

fn shell(vocab: &str, merges: &str) -> Result<()> {
    let bpe = BPE::from_file(vocab, merges).build()?;
    let mut tokenizer = Tokenizer::new(bpe);
    tokenizer
        .with_pre_tokenizer(ByteLevel::default())
        .with_decoder(ByteLevel::default());

    tokenizer.add_tokens(&[AddedToken::from(String::from("ing"), false).single_word(false)]);
    tokenizer
        .add_special_tokens(&[AddedToken::from(String::from("[ENT]"), true).single_word(true)]);

    let stdin = io::stdin();
    let mut handle = stdin.lock();
    let mut buffer = String::new();

    loop {
        buffer.clear();

        print!("\nEnter some text to tokenize:\n> ");
        io::stdout().flush()?;
        handle.read_line(&mut buffer)?;

        let buffer = buffer.trim_end();
        let timer = std::time::Instant::now();
        let encoded = tokenizer.encode(buffer.to_owned(), false)?;
        let elapsed = timer.elapsed();
        println!("\nInput:\t\t{}", buffer);
        println!("Tokens:\t\t{:?}", encoded.get_tokens());
        println!("IDs:\t\t{:?}", encoded.get_ids());
        println!("Offsets:\t{:?}", encoded.get_offsets());
        println!(
            "Decoded:\t{}",
            tokenizer.decode(encoded.get_ids(), true).unwrap()
        );
        println!("Tokenized in {:?}", elapsed);
    }
}

fn main() -> Result<()> {
    let args = Args::parse();
    match args.command {
        Command::Shell { vocab, merges } => shell(&vocab, &merges),
    }
}
tokenizers/tokenizers/src/cli.rs/0
{ "file_path": "tokenizers/tokenizers/src/cli.rs", "repo_id": "tokenizers", "token_count": 900 }
227
use rand::distributions::WeightedIndex; use rand::prelude::*; use std::cell::RefCell; use std::cmp::{min, Ordering}; use std::collections::BinaryHeap; use std::rc::Rc; type NodeRef = Rc<RefCell<Node>>; type HypothesisRef = Rc<RefCell<Hypothesis>>; type Agenda = BinaryHeap<Hypothesis>; struct Hypothesis { node_ref: NodeRef, next: Option<HypothesisRef>, fx: f64, gx: f64, } impl Hypothesis { pub fn new(node_ref: NodeRef, next: Option<HypothesisRef>, fx: f64, gx: f64) -> Self { Self { node_ref, next, fx, gx, } } } impl PartialEq for Hypothesis { fn eq(&self, other: &Self) -> bool { self.fx == other.fx } } impl Eq for Hypothesis {} impl PartialOrd for Hypothesis { fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) } } // TODO Maybe use Ordered Floats (https://docs.rs/ordered-float/1.0.2/ordered_float/) impl Ord for Hypothesis { fn cmp(&self, other: &Self) -> Ordering { if self.fx < other.fx { Ordering::Less } else { Ordering::Greater } } } /// Structure to implement Viterbi algorithm to find the best encoding, or sample /// from all possible encodings of a given sentence. #[derive(Debug)] pub struct Lattice<'a> { pub(super) sentence: &'a str, len: usize, nodes: Vec<NodeRef>, pub(super) begin_nodes: Vec<Vec<NodeRef>>, pub(super) end_nodes: Vec<Vec<NodeRef>>, _bos_id: usize, _eos_id: usize, } impl std::fmt::Display for Lattice<'_> { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { let display_pieces = |nodes: &Vec<Vec<NodeRef>>| { nodes .iter() .map(|l| { l.iter() .map(|n| self.piece(&n.borrow())) .collect::<Vec<_>>() }) .collect::<Vec<_>>() }; f.debug_struct("Lattice") .field("sentence", &self.sentence) .field("begin_nodes", &display_pieces(&self.begin_nodes)) .field("end_nodes", &display_pieces(&self.end_nodes)) .finish() } } /// A node from the lattice, that helps reconstruct the underlying `String` #[derive(Debug, Clone)] pub struct Node { // Vocabulary id pub(super) id: usize, // Local lattice identifier pub(super) node_id: usize, pos: usize, length: usize, prev: Option<NodeRef>, backtrace_score: f64, score: f64, } impl PartialEq for Node { fn eq(&self, other: &Node) -> bool { self.id == other.id } } impl Node { pub fn new(id: usize, node_id: usize, pos: usize, length: usize, score: f64) -> Self { Self { id, node_id, pos, length, prev: None, score, backtrace_score: 0.0, } } } /// Returns log(exp(x) + exp(y)). /// if init_mode is true, returns log(exp(y)) == y. 
/// log(\sum_i exp(a[i])) can be computed as /// for (int i = 0; i < a.size(); ++i) /// x = LogSumExp(x, a[i], i == 0); fn log_sum_exp(x: f64, y: f64, init_mode: bool) -> f64 { if init_mode { y } else { let (vmin, vmax) = if x > y { (y, x) } else { (x, y) }; let k_minus_log_epsilon = 50.0; if vmax > vmin + k_minus_log_epsilon { vmax } else { vmax + ((vmin - vmax).exp() + 1.0).ln() } } } impl<'a> Lattice<'a> { pub fn from(sentence: &'a str, bos_id: usize, eos_id: usize) -> Self { let len = sentence.len(); let k_reserved_node_size = 16; // We are adding 2 tokens, bos and eos let mut nodes: Vec<NodeRef> = Vec::with_capacity(k_reserved_node_size); let mut begin_nodes = vec![Vec::with_capacity(k_reserved_node_size); len + 1]; let mut end_nodes = vec![Vec::with_capacity(k_reserved_node_size); len + 1]; let bos = Rc::new(RefCell::new(Node::new(bos_id, 0, 0, 0, 0.0))); let eos = Rc::new(RefCell::new(Node::new(eos_id, 1, len, 0, 0.0))); begin_nodes[len].push(Rc::clone(&eos)); end_nodes[0].push(Rc::clone(&bos)); nodes.push(bos); nodes.push(eos); Self { sentence, len, nodes, begin_nodes, end_nodes, _bos_id: bos_id, _eos_id: eos_id, } } pub fn insert(&mut self, pos: usize, length: usize, score: f64, id: usize) { let node_id = self.nodes.len(); let node = Rc::new(RefCell::new(Node::new(id, node_id, pos, length, score))); self.begin_nodes[pos].push(Rc::clone(&node)); self.end_nodes[pos + length].push(Rc::clone(&node)); self.nodes.push(node); } pub fn viterbi(&mut self) -> Vec<NodeRef> { let len = self.len; let mut pos = 0; while pos <= len { if self.begin_nodes[pos].is_empty() { return vec![]; } for rnode in &self.begin_nodes[pos] { rnode.borrow_mut().prev = None; let mut best_score = 0.0; let mut best_node: Option<NodeRef> = None; for lnode in &self.end_nodes[pos] { let score = lnode.borrow().backtrace_score + rnode.borrow().score; if best_node.is_none() || score > best_score { // TODO can we remove this clone ? 
best_node = Some(lnode.clone()); best_score = score } } match best_node { Some(bnode) => { rnode.borrow_mut().prev = Some(Rc::clone(&bnode)); rnode.borrow_mut().backtrace_score = best_score; } None => return vec![], } } if let Some(c) = self.sentence[pos..].chars().next() { pos += c.len_utf8(); } else { break; } } let mut results: Vec<NodeRef> = vec![]; let root = self.begin_nodes[len][0].borrow(); let prev = root.prev.as_ref(); if prev.is_none() { return vec![]; } let mut node: NodeRef = prev.unwrap().clone(); while node.borrow().prev.is_some() { results.push(node.clone()); let n = node.borrow().clone(); node = n.prev.as_ref().unwrap().clone(); } results.reverse(); results } pub fn piece(&self, node: &Node) -> String { self.sentence[node.pos..node.pos + node.length].to_owned() } pub fn tokens(&mut self) -> Vec<String> { self.viterbi() .iter() .map(|node| self.piece(&node.borrow())) .collect() } pub fn nbest(&mut self, n: usize) -> Vec<Vec<NodeRef>> { match n { 0 => vec![], 1 => vec![self.viterbi()], _ => { // let k_reserved_hypothesis_size = 512; let mut agenda: Agenda = BinaryHeap::new(); let mut hypotheses: Vec<Vec<NodeRef>> = vec![]; let eos = self.eos_node(); let score = eos.borrow().score; let hypo = Hypothesis::new(eos, None, score, score); agenda.push(hypo); // Fill backtrace scores self.viterbi(); while !agenda.is_empty() { let top = Rc::new(RefCell::new(agenda.pop().unwrap())); let node = Rc::clone(&top.borrow().node_ref); if node.borrow().id == self.bos_node().borrow().id { let mut hypothesis = vec![]; let mut next: HypothesisRef = Rc::clone(top.borrow().next.as_ref().unwrap()); while next.borrow().next.is_some() { hypothesis.push(next.borrow().node_ref.clone()); let c: HypothesisRef = next.clone(); // let c: Ref<Hypothesis> = next.clone().borrow(); next = Rc::clone(c.borrow().next.as_ref().unwrap()); } hypotheses.push(hypothesis); if hypotheses.len() == n { return hypotheses; } } else { for lnode in &self.end_nodes[node.borrow().pos] { let top_gx = top.borrow().gx; let fx = lnode.borrow().backtrace_score + top_gx; let gx = lnode.borrow().score + top_gx; let hyp = Hypothesis::new(Rc::clone(lnode), Some(Rc::clone(&top)), fx, gx); agenda.push(hyp); } // When the input is too long or contains duplicated phrases, // `agenda` will get extremely big. Here we avoid this case by // dynamically shrinking the agenda. 
let k_max_agenda_size = 100_000; let k_min_agenda_size = 512; if agenda.len() > k_max_agenda_size { let mut new_agenda = BinaryHeap::new(); let len = min(k_min_agenda_size, n * 10); for _i in 0..len { new_agenda.push(agenda.pop().unwrap()); } agenda = new_agenda; } } } hypotheses } } } pub fn nbest_tokens(&mut self, n: usize) -> Vec<Vec<String>> { self.nbest(n) .iter() .map(|v| v.iter().map(|node| self.piece(&node.borrow())).collect()) .collect() } pub fn len(&self) -> usize { self.len } pub fn is_empty(&self) -> bool { self.len == 0 } pub fn bos_node(&self) -> NodeRef { Rc::clone(&self.end_nodes[0][0]) } pub fn eos_node(&self) -> NodeRef { Rc::clone(&self.begin_nodes[self.len][0]) } pub fn surface(&self, n: usize) -> &str { match self.sentence.char_indices().nth(n) { Some((pos, _)) => &self.sentence[pos..], None => "", } } pub fn sentence(&self) -> &str { self.sentence } pub fn populate_marginal(&self, freq: f64, expected: &mut [f64]) -> f64 { let len = self.len(); let n_nodes = self.nodes.len(); let mut alpha = vec![0.0; n_nodes]; let mut beta = vec![0.0; n_nodes]; for pos in 0..=len { for rnode in &self.begin_nodes[pos] { for lnode in &self.end_nodes[pos] { let lid = lnode.borrow().node_id; let rid = rnode.borrow().node_id; alpha[rid] = log_sum_exp( alpha[rid], lnode.borrow().score + alpha[lid], *lnode == self.end_nodes[pos][0], ); } } } for pos in (0..=len).rev() { // let rpos = len - pos; for lnode in &self.end_nodes[pos] { for rnode in &self.begin_nodes[pos] { let lid = lnode.borrow().node_id; let rid = rnode.borrow().node_id; beta[lid] = log_sum_exp( beta[lid], rnode.borrow().score + beta[rid], *rnode == self.begin_nodes[pos][0], ); } } } let eos_id = self.begin_nodes[len][0].borrow().node_id; let z = alpha[eos_id]; for pos in 0..len { for node in &self.begin_nodes[pos] { let node_id = node.borrow().node_id; let id = node.borrow().id; let a = alpha[node_id]; let b = beta[node_id]; let total = a + node.borrow().score + b - z; let update = freq * total.exp(); expected[id] += update; } } freq * z } pub fn sample(&self, theta: f64) -> Vec<NodeRef> { let len = self.len(); if len == 0 { return vec![]; } let mut alpha = vec![0.0; self.nodes.len()]; for pos in 0..=len { for rnode in &self.begin_nodes[pos] { for lnode in &self.end_nodes[pos] { let lid = lnode.borrow().node_id; let rid = rnode.borrow().node_id; alpha[rid] = log_sum_exp( alpha[rid], theta * (lnode.borrow().score + alpha[lid]), *lnode == self.end_nodes[pos][0], ); } } } let mut rng = thread_rng(); let mut results: Vec<NodeRef> = vec![]; let mut probs: Vec<f64> = vec![]; let mut z = alpha[self.eos_node().borrow().node_id]; let mut node = self.eos_node(); loop { probs.clear(); let pos = node.borrow().pos; for lnode in &self.end_nodes[pos] { let lid = lnode.borrow().node_id; probs.push((alpha[lid] + theta * lnode.borrow().score - z).exp()) } let dist = WeightedIndex::new(&probs).unwrap(); let index = dist.sample(&mut rng); node = Rc::clone(&self.end_nodes[pos][index]); if node == self.bos_node() { break; } z = alpha[node.borrow().node_id]; results.push(Rc::clone(&node)); } results.reverse(); results } pub fn sample_token(&self, theta: f64) -> Vec<String> { self.sample(theta) .iter() .map(|node| self.piece(&node.borrow())) .collect() } } #[cfg(test)] mod tests { use super::*; use assert_approx_eq::assert_approx_eq; #[test] fn set_sentence() { let lattice = Lattice::from("", 1, 2); assert_eq!(lattice.len(), 0); let lattice = Lattice::from("", 1, 2); assert_eq!(lattice.len(), 0); assert_eq!(lattice.sentence(), ""); 
assert_eq!(lattice.surface(0), ""); let lattice = Lattice::from("test", 1, 2); assert_eq!(lattice.len(), 4); assert_eq!(lattice.sentence(), "test"); assert_eq!(lattice.surface(0), "test"); assert_eq!(lattice.surface(1), "est"); assert_eq!(lattice.surface(2), "st"); assert_eq!(lattice.surface(3), "t"); let bos = lattice.bos_node(); let eos = lattice.eos_node(); assert_eq!(bos.borrow().id, 1); assert_eq!(eos.borrow().id, 2); assert_eq!( lattice.end_nodes[0].first().unwrap().borrow().id, bos.borrow().id ); assert_eq!( lattice.begin_nodes[4].first().unwrap().borrow().id, eos.borrow().id ); let lattice = Lattice::from("ใƒ†ใ‚นใƒˆab", 1, 2); assert_eq!(lattice.len(), 11); assert_eq!(lattice.sentence(), "ใƒ†ใ‚นใƒˆab"); assert_eq!(lattice.surface(0), "ใƒ†ใ‚นใƒˆab"); assert_eq!(lattice.surface(1), "ใ‚นใƒˆab"); assert_eq!(lattice.surface(2), "ใƒˆab"); assert_eq!(lattice.surface(3), "ab"); assert_eq!(lattice.surface(4), "b"); } #[test] fn insert_test() { let mut lattice = Lattice::from("ABใ‚ใ„", 1, 2); lattice.insert(0, 1, 0.0, 3); lattice.insert(1, 1, 0.0, 4); lattice.insert(2, 3, 0.0, 5); lattice.insert(5, 3, 0.0, 6); lattice.insert(0, 2, 0.0, 7); lattice.insert(1, 4, 0.0, 8); lattice.insert(2, 6, 0.0, 9); // 0 & 1 are bos and eos let node0 = lattice.nodes[2].borrow(); let node1 = lattice.nodes[3].borrow(); let node2 = lattice.nodes[4].borrow(); let node3 = lattice.nodes[5].borrow(); let node4 = lattice.nodes[6].borrow(); let node5 = lattice.nodes[7].borrow(); let node6 = lattice.nodes[8].borrow(); assert_eq!(lattice.piece(&node0), "A"); assert_eq!(lattice.piece(&node1), "B"); assert_eq!(lattice.piece(&node2), "ใ‚"); assert_eq!(lattice.piece(&node3), "ใ„"); assert_eq!(lattice.piece(&node4), "AB"); assert_eq!(lattice.piece(&node5), "Bใ‚"); assert_eq!(lattice.piece(&node6), "ใ‚ใ„"); assert_eq!(node0.pos, 0); assert_eq!(node1.pos, 1); assert_eq!(node2.pos, 2); assert_eq!(node3.pos, 5); assert_eq!(node4.pos, 0); assert_eq!(node5.pos, 1); assert_eq!(node6.pos, 2); assert_eq!(node0.length, 1); assert_eq!(node1.length, 1); assert_eq!(node2.length, 3); assert_eq!(node3.length, 3); assert_eq!(node4.length, 2); assert_eq!(node5.length, 4); assert_eq!(node6.length, 6); assert_eq!(lattice.bos_node().borrow().id, 1); assert_eq!(lattice.eos_node().borrow().id, 2); assert_eq!(node0.id, 3); assert_eq!(node1.id, 4); assert_eq!(node2.id, 5); assert_eq!(node3.id, 6); assert_eq!(node4.id, 7); assert_eq!(node5.id, 8); assert_eq!(node6.id, 9); assert_eq!(lattice.begin_nodes[0].len(), 2); assert_eq!(lattice.begin_nodes[1].len(), 2); assert_eq!(lattice.begin_nodes[2].len(), 2); assert_eq!(lattice.begin_nodes[5].len(), 1); assert_eq!(lattice.begin_nodes[8].len(), 1); assert_eq!(lattice.end_nodes[0].len(), 1); assert_eq!(lattice.end_nodes[1].len(), 1); assert_eq!(lattice.end_nodes[2].len(), 2); assert_eq!(lattice.end_nodes[5].len(), 2); assert_eq!(lattice.end_nodes[8].len(), 2); assert_eq!(lattice.begin_nodes[0][0].borrow().id, node0.id); assert_eq!(lattice.begin_nodes[0][1].borrow().id, node4.id); assert_eq!(lattice.begin_nodes[1][0].borrow().id, node1.id); assert_eq!(lattice.begin_nodes[1][1].borrow().id, node5.id); assert_eq!(lattice.begin_nodes[2][0].borrow().id, node2.id); assert_eq!(lattice.begin_nodes[2][1].borrow().id, node6.id); assert_eq!(lattice.begin_nodes[5][0].borrow().id, node3.id); assert_eq!( lattice.eos_node().borrow().id, lattice.begin_nodes[8][0].borrow().id ); assert_eq!( lattice.bos_node().borrow().id, lattice.end_nodes[0][0].borrow().id ); assert_eq!(node0.id, 
lattice.end_nodes[1][0].borrow().id); assert_eq!(node1.id, lattice.end_nodes[2][0].borrow().id); assert_eq!(node4.id, lattice.end_nodes[2][1].borrow().id); assert_eq!(node2.id, lattice.end_nodes[5][0].borrow().id); assert_eq!(node5.id, lattice.end_nodes[5][1].borrow().id); assert_eq!(node3.id, lattice.end_nodes[8][0].borrow().id); assert_eq!(node6.id, lattice.end_nodes[8][1].borrow().id); } #[test] fn test_viterbi() { let mut lattice = Lattice::from("ABC", 1, 2); assert_eq!(lattice.viterbi(), vec![]); // Still incomplete lattice.insert(0, 1, 0.0, 3); assert_eq!(lattice.viterbi(), vec![]); lattice.insert(1, 1, 0.0, 4); lattice.insert(2, 1, 0.0, 5); // XXX: In sentence piece this is not tested, still incomplete ? assert_eq!(lattice.viterbi().len(), 3); } #[test] fn test_viterbi2() { let mut lattice = Lattice::from("ABC", 1, 2); lattice.insert(0, 1, 0.0, 3); lattice.insert(1, 1, 0.0, 4); lattice.insert(2, 1, 0.0, 5); assert_eq!(lattice.tokens(), ["A", "B", "C"]); lattice.insert(0, 2, 2.0, 6); assert_eq!(lattice.tokens(), ["AB", "C"]); lattice.insert(1, 2, 5.0, 7); assert_eq!(lattice.tokens(), ["A", "BC"]); lattice.insert(0, 3, 10.0, 8); assert_eq!(lattice.tokens(), ["ABC"]); } #[test] fn test_nbest() { let mut lattice = Lattice::from("ABC", 1, 2); lattice.insert(0, 1, 0.0, 3); lattice.insert(1, 1, 0.0, 4); lattice.insert(2, 1, 0.0, 5); lattice.insert(0, 2, 2.0, 6); lattice.insert(1, 2, 5.0, 7); lattice.insert(0, 3, 10.0, 8); let nbests = lattice.nbest_tokens(10); assert_eq!( nbests, vec![ vec!["ABC"], vec!["A", "BC"], vec!["AB", "C"], vec!["A", "B", "C"] ] ); assert!(lattice.nbest_tokens(0).is_empty()); assert_eq!(lattice.nbest_tokens(1), vec![vec!["ABC"]]); } #[test] fn test_log_sum_exp() { let mut x = 0.0; let v: Vec<f64> = vec![1.0, 2.0, 3.0]; for (i, y) in v.iter().enumerate() { x = log_sum_exp(x, *y, i == 0); } assert_approx_eq!(x, v.iter().map(|n| n.exp()).sum::<f64>().ln(), 0.001); } #[test] fn test_populate() { let mut lattice = Lattice::from("ABC", 1, 2); lattice.insert(0, 1, 1.0, 3); // A lattice.insert(1, 1, 1.2, 4); // B lattice.insert(2, 1, 2.5, 5); // C lattice.insert(0, 2, 3.0, 6); // AB lattice.insert(1, 2, 4.0, 7); // BC lattice.insert(0, 3, 2.0, 8); // ABC let mut probs = vec![0.0; 9]; let p1 = (1.0_f64 + 1.2 + 2.5).exp(); let p2 = (3.0_f64 + 2.5).exp(); let p3 = (1.0_f64 + 4.0).exp(); let p4 = 2.0_f64.exp(); let z = p1 + p2 + p3 + p4; let log_z = lattice.populate_marginal(1.0, &mut probs); assert_approx_eq!(log_z, z.ln(), 0.001); assert_approx_eq!(probs[0], 0.0, 0.001); assert_approx_eq!(probs[1], 0.0, 0.001); assert_approx_eq!(probs[2], 0.0, 0.001); assert_approx_eq!(probs[3], (p1 + p3) / z, 0.001); assert_approx_eq!(probs[4], (p1) / z, 0.001); assert_approx_eq!(probs[5], (p1 + p2) / z, 0.001); assert_approx_eq!(probs[6], (p2) / z, 0.001); assert_approx_eq!(probs[7], (p3) / z, 0.001); assert_approx_eq!(probs[8], (p4) / z, 0.001); } }
tokenizers/tokenizers/src/models/unigram/lattice.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/unigram/lattice.rs", "repo_id": "tokenizers", "token_count": 12682 }
228
use crate::tokenizer::pattern::Pattern; use crate::tokenizer::Decoder; use crate::tokenizer::{NormalizedString, Normalizer, Result}; use crate::utils::SysRegex; use serde::{Deserialize, Serialize}; /// Represents the different patterns that `Replace` can use #[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)] pub enum ReplacePattern { String(String), Regex(String), } impl From<String> for ReplacePattern { fn from(v: String) -> Self { Self::String(v) } } impl From<&str> for ReplacePattern { fn from(v: &str) -> Self { Self::String(v.to_owned()) } } /// We use this custom deserializer to provide the value for `regex` for `Replace` #[doc(hidden)] #[derive(Deserialize)] #[serde(tag = "type")] struct ReplaceDeserializer { pattern: ReplacePattern, content: String, } impl std::convert::TryFrom<ReplaceDeserializer> for Replace { type Error = Box<dyn std::error::Error + Send + Sync>; fn try_from(v: ReplaceDeserializer) -> Result<Self> { Self::new(v.pattern, v.content) } } /// This normalizer will take a `pattern` (for now only a String) /// and replace every occurrence with `content`. #[derive(Debug, Serialize, Deserialize)] #[serde(tag = "type", try_from = "ReplaceDeserializer")] pub struct Replace { pattern: ReplacePattern, content: String, #[serde(skip)] regex: SysRegex, } impl Clone for Replace { fn clone(&self) -> Self { Self::new(self.pattern.clone(), &self.content).unwrap() } } impl PartialEq for Replace { fn eq(&self, other: &Self) -> bool { self.pattern == other.pattern && self.content == other.content } } impl Replace { pub fn new<I: Into<ReplacePattern>, C: Into<String>>(pattern: I, content: C) -> Result<Self> { let pattern: ReplacePattern = pattern.into(); let regex = match &pattern { ReplacePattern::String(s) => SysRegex::new(&regex::escape(s))?, ReplacePattern::Regex(r) => SysRegex::new(r)?, }; Ok(Self { pattern, content: content.into(), regex, }) } } impl Normalizer for Replace { fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> { normalized.replace(&self.regex, &self.content) } } impl Decoder for Replace { fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> { tokens .into_iter() .map(|token| -> Result<String> { let mut new_token = "".to_string(); for ((start, stop), is_match) in (&self.regex).find_matches(&token)? 
{ if is_match { new_token.push_str(&self.content); } else { new_token.push_str(&token[start..stop]); } } Ok(new_token) }) .collect() } } #[cfg(test)] mod tests { use super::*; #[test] fn test_replace() { let original = "This is a ''test''"; let normalized = "This is a \"test\""; let mut n = NormalizedString::from(original); Replace::new("''", "\"").unwrap().normalize(&mut n).unwrap(); assert_eq!(&n.get(), &normalized); } #[test] fn test_replace_regex() { let original = "This is a test"; let normalized = "This is a test"; let mut n = NormalizedString::from(original); Replace::new(ReplacePattern::Regex(r"\s+".into()), ' ') .unwrap() .normalize(&mut n) .unwrap(); assert_eq!(&n.get(), &normalized); } #[test] fn serialization() { let replace = Replace::new("Hello", "Hey").unwrap(); let replace_s = r#"{"type":"Replace","pattern":{"String":"Hello"},"content":"Hey"}"#; assert_eq!(serde_json::to_string(&replace).unwrap(), replace_s); assert_eq!(serde_json::from_str::<Replace>(replace_s).unwrap(), replace); let replace = Replace::new(ReplacePattern::Regex(r"\s+".into()), ' ').unwrap(); let replace_s = r#"{"type":"Replace","pattern":{"Regex":"\\s+"},"content":" "}"#; assert_eq!(serde_json::to_string(&replace).unwrap(), replace_s); assert_eq!(serde_json::from_str::<Replace>(replace_s).unwrap(), replace); } #[test] fn test_replace_decode() { let original = vec!["hello".to_string(), "_hello".to_string()]; let replace = Replace::new("_", " ").unwrap(); assert_eq!( replace.decode_chain(original).unwrap(), vec!["hello", " hello"] ); } }
tokenizers/tokenizers/src/normalizers/replace.rs/0
{ "file_path": "tokenizers/tokenizers/src/normalizers/replace.rs", "repo_id": "tokenizers", "token_count": 2048 }
229
use regex::Regex; use crate::tokenizer::{ pattern::Invert, PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior, }; use crate::utils::macro_rules_attribute; #[derive(Clone, Debug, PartialEq, Eq)] #[macro_rules_attribute(impl_serde_type!)] pub struct Whitespace; impl Default for Whitespace { fn default() -> Self { Self } } impl PreTokenizer for Whitespace { fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> { lazy_static! { static ref RE: Regex = Regex::new(r"\w+|[^\w\s]+").unwrap(); } let re_ref: &Regex = &RE; pretokenized.split(|_, normalized| { normalized.split(Invert(re_ref), SplitDelimiterBehavior::Removed) }) } } #[derive(Copy, Clone, Debug, PartialEq, Eq)] #[macro_rules_attribute(impl_serde_type!)] pub struct WhitespaceSplit; impl PreTokenizer for WhitespaceSplit { fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> { pretokenized.split(|_, normalized| { normalized.split(char::is_whitespace, SplitDelimiterBehavior::Removed) }) } } #[cfg(test)] mod tests { use super::*; use crate::{OffsetReferential, OffsetType, PreTokenizer}; #[test] fn basic() { let tests = vec![ ( "Hey man!", vec![("Hey", (0, 3)), ("man", (4, 7)), ("!", (7, 8))], ), ( "How are you doing?", vec![ ("How", (0, 3)), ("are", (4, 7)), ("you", (8, 11)), ("doing", (12, 17)), ("?", (17, 18)), ], ), ("\n", vec![]), ]; let pretok = Whitespace {}; for (s, res) in tests { let mut pretokenized = PreTokenizedString::from(s); pretok.pre_tokenize(&mut pretokenized).unwrap(); assert_eq!( pretokenized .get_splits(OffsetReferential::Original, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), res ); } } #[test] fn whitespace_split() { let tests = vec![ ("Hey man!", vec![("Hey", (0, 3)), ("man!", (4, 8))]), ( "Hey, man, Good?", vec![("Hey,", (0, 4)), ("man,", (5, 9)), ("Good?", (10, 15))], ), ]; let pretok = WhitespaceSplit; for (s, res) in tests { let mut pretokenized = PreTokenizedString::from(s); pretok.pre_tokenize(&mut pretokenized).unwrap(); assert_eq!( pretokenized .get_splits(OffsetReferential::Original, OffsetType::Byte) .into_iter() .map(|(s, o, _)| (s, o)) .collect::<Vec<_>>(), res ); } } }
tokenizers/tokenizers/src/pre_tokenizers/whitespace.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/whitespace.rs", "repo_id": "tokenizers", "token_count": 1660 }
230
//! This comes from the Rust libcore and is duplicated here because it is not exported
//! (cf <https://github.com/rust-lang/rust/blob/25091ed9b7739e12466fb2490baa1e8a2815121c/src/libcore/iter/adapters/mod.rs#L2664>)
//! We are now using the version from <https://stackoverflow.com/questions/44544323/how-to-unzip-a-sequence-of-resulta-b-e-to-a-veca-vecb-and-stop-on-f>
//! because the one from the libcore seems to cause overflowing stacks in some cases
//! It also contains a lines_with_ending that copies std::io::BufRead but keeps line endings.
use std::io::BufRead;

pub struct ResultShunt<I, E> {
    iter: I,
    error: Option<E>,
}

impl<I, T, E> ResultShunt<I, E>
where
    I: Iterator<Item = Result<T, E>>,
{
    /// Process the given iterator as if it yielded a `T` instead of a
    /// `Result<T, _>`. Any errors will stop the inner iterator and
    /// the overall result will be an error.
    pub fn process<F, U>(iter: I, mut f: F) -> Result<U, E>
    where
        F: FnMut(&mut Self) -> U,
    {
        let mut shunt = ResultShunt::new(iter);
        let value = f(shunt.by_ref());
        shunt.reconstruct(value)
    }

    fn new(iter: I) -> Self {
        ResultShunt { iter, error: None }
    }

    /// Consume the adapter and rebuild a `Result` value. This should
    /// *always* be called, otherwise any potential error would be
    /// lost.
    fn reconstruct<U>(self, val: U) -> Result<U, E> {
        match self.error {
            None => Ok(val),
            Some(e) => Err(e),
        }
    }
}

impl<I, T, E> Iterator for ResultShunt<I, E>
where
    I: Iterator<Item = Result<T, E>>,
{
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        match self.iter.next() {
            Some(Ok(v)) => Some(v),
            Some(Err(e)) => {
                self.error = Some(e);
                None
            }
            None => None,
        }
    }
}

/// Copied from std::io::BufRead but keep newline characters.
#[derive(Debug)]
pub struct Lines<B> {
    buf: B,
}

pub trait LinesWithEnding<B> {
    fn lines_with_ending(self) -> Lines<B>;
}

impl<B> LinesWithEnding<B> for B
where
    B: BufRead,
{
    fn lines_with_ending(self) -> Lines<B> {
        Lines::<B> { buf: self }
    }
}

impl<B: BufRead> Iterator for Lines<B> {
    type Item = std::io::Result<String>;

    fn next(&mut self) -> Option<Self::Item> {
        let mut buf = String::new();
        match self.buf.read_line(&mut buf) {
            Ok(0) => None,
            Ok(_n) => {
                // if buf.ends_with('\n') {
                //     buf.pop();
                //     if buf.ends_with('\r') {
                //         buf.pop();
                //     }
                // }
                Some(Ok(buf))
            }
            Err(e) => Some(Err(e)),
        }
    }
}
tokenizers/tokenizers/src/utils/iter.rs/0
{ "file_path": "tokenizers/tokenizers/src/utils/iter.rs", "repo_id": "tokenizers", "token_count": 1339 }
231
version: 2.1 setup: true orbs: continuation: circleci/[email protected] parameters: nightly: type: boolean default: false jobs: # Ensure running with CircleCI/huggingface check_circleci_user: docker: - image: cimg/python:3.8.12 parallelism: 1 steps: - run: echo $CIRCLE_PROJECT_USERNAME - run: | if [ "$CIRCLE_PROJECT_USERNAME" = "huggingface" ]; then exit 0 else echo "The CI is running under $CIRCLE_PROJECT_USERNAME personal account. Please follow https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization- to fix it."; exit -1 fi # Fetch the tests to run fetch_tests: working_directory: ~/transformers docker: - image: cimg/python:3.8.12 parallelism: 1 steps: - checkout - run: pip install --upgrade --upgrade-strategy eager pip - run: pip install -U --upgrade-strategy eager GitPython - run: pip install -U --upgrade-strategy eager . - run: mkdir -p test_preparation - run: python utils/tests_fetcher.py | tee tests_fetched_summary.txt - store_artifacts: path: ~/transformers/tests_fetched_summary.txt - run: | if [ -f test_list.txt ]; then cp test_list.txt test_preparation/test_list.txt else touch test_preparation/test_list.txt fi - run: | if [ -f examples_test_list.txt ]; then mv examples_test_list.txt test_preparation/examples_test_list.txt else touch test_preparation/examples_test_list.txt fi - run: | if [ -f filtered_test_list_cross_tests.txt ]; then mv filtered_test_list_cross_tests.txt test_preparation/filtered_test_list_cross_tests.txt else touch test_preparation/filtered_test_list_cross_tests.txt fi - run: | if [ -f doctest_list.txt ]; then cp doctest_list.txt test_preparation/doctest_list.txt else touch test_preparation/doctest_list.txt fi - run: | if [ -f test_repo_utils.txt ]; then mv test_repo_utils.txt test_preparation/test_repo_utils.txt else touch test_preparation/test_repo_utils.txt fi - run: python utils/tests_fetcher.py --filter_tests - run: | if [ -f test_list.txt ]; then mv test_list.txt test_preparation/filtered_test_list.txt else touch test_preparation/filtered_test_list.txt fi - store_artifacts: path: test_preparation/test_list.txt - store_artifacts: path: test_preparation/doctest_list.txt - store_artifacts: path: ~/transformers/test_preparation/filtered_test_list.txt - store_artifacts: path: test_preparation/examples_test_list.txt - run: python .circleci/create_circleci_config.py --fetcher_folder test_preparation - run: | if [ ! -s test_preparation/generated_config.yml ]; then echo "No tests to run, exiting early!" circleci-agent step halt fi - run: cp test_preparation/generated_config.yml test_preparation/generated_config.txt - store_artifacts: path: test_preparation/generated_config.txt - store_artifacts: path: test_preparation/filtered_test_list_cross_tests.txt - continuation/continue: configuration_path: test_preparation/generated_config.yml # To run all tests for the nightly build fetch_all_tests: working_directory: ~/transformers docker: - image: cimg/python:3.8.12 parallelism: 1 steps: - checkout - run: pip install --upgrade --upgrade-strategy eager pip - run: pip install -U --upgrade-strategy eager GitPython - run: pip install -U --upgrade-strategy eager . 
- run: | mkdir test_preparation echo -n "tests" > test_preparation/test_list.txt echo -n "all" > test_preparation/examples_test_list.txt echo -n "tests/repo_utils" > test_preparation/test_repo_utils.txt - run: | echo -n "tests" > test_list.txt python utils/tests_fetcher.py --filter_tests mv test_list.txt test_preparation/filtered_test_list.txt - run: python .circleci/create_circleci_config.py --fetcher_folder test_preparation - run: cp test_preparation/generated_config.yml test_preparation/generated_config.txt - store_artifacts: path: test_preparation/generated_config.txt - continuation/continue: configuration_path: test_preparation/generated_config.yml check_code_quality: working_directory: ~/transformers docker: - image: cimg/python:3.8.12 resource_class: large environment: TRANSFORMERS_IS_CI: yes PYTEST_TIMEOUT: 120 parallelism: 1 steps: - checkout - restore_cache: keys: - v0.7-code_quality-pip-{{ checksum "setup.py" }} - v0.7-code-quality-pip - restore_cache: keys: - v0.7-code_quality-site-packages-{{ checksum "setup.py" }} - v0.7-code-quality-site-packages - run: pip install --upgrade --upgrade-strategy eager pip - run: pip install -U --upgrade-strategy eager .[all,quality] - save_cache: key: v0.7-code_quality-pip-{{ checksum "setup.py" }} paths: - '~/.cache/pip' - save_cache: key: v0.7-code_quality-site-packages-{{ checksum "setup.py" }} paths: - '~/.pyenv/versions/' - run: name: Show installed libraries and their versions command: pip freeze | tee installed.txt - store_artifacts: path: ~/transformers/installed.txt - run: ruff check examples tests src utils - run: ruff format tests src utils --check - run: python utils/custom_init_isort.py --check_only - run: python utils/sort_auto_mappings.py --check_only - run: python utils/check_doc_toc.py check_repository_consistency: working_directory: ~/transformers docker: - image: cimg/python:3.8.12 resource_class: large environment: TRANSFORMERS_IS_CI: yes PYTEST_TIMEOUT: 120 parallelism: 1 steps: - checkout - restore_cache: keys: - v0.7-repository_consistency-pip-{{ checksum "setup.py" }} - v0.7-repository_consistency-pip - restore_cache: keys: - v0.7-repository_consistency-site-packages-{{ checksum "setup.py" }} - v0.7-repository_consistency-site-packages - run: pip install --upgrade --upgrade-strategy eager pip - run: pip install -U --upgrade-strategy eager .[all,quality] - save_cache: key: v0.7-repository_consistency-pip-{{ checksum "setup.py" }} paths: - '~/.cache/pip' - save_cache: key: v0.7-repository_consistency-site-packages-{{ checksum "setup.py" }} paths: - '~/.pyenv/versions/' - run: name: Show installed libraries and their versions command: pip freeze | tee installed.txt - store_artifacts: path: ~/transformers/installed.txt - run: python utils/check_copies.py - run: python utils/check_table.py - run: python utils/check_dummies.py - run: python utils/check_repo.py - run: python utils/check_inits.py - run: python utils/check_config_docstrings.py - run: python utils/check_config_attributes.py - run: python utils/check_doctest_list.py - run: make deps_table_check_updated - run: python utils/update_metadata.py --check-only - run: python utils/check_task_guides.py - run: python utils/check_docstrings.py - run: python utils/check_support_list.py workflows: version: 2 setup_and_quality: when: not: <<pipeline.parameters.nightly>> jobs: - check_circleci_user - check_code_quality - check_repository_consistency - fetch_tests nightly: when: <<pipeline.parameters.nightly>> jobs: - check_circleci_user - check_code_quality - 
check_repository_consistency - fetch_all_tests
transformers/.circleci/config.yml/0
{ "file_path": "transformers/.circleci/config.yml", "repo_id": "transformers", "token_count": 5200 }
232
FROM google/cloud-sdk:slim

# Build args.
ARG GITHUB_REF=refs/heads/main

# TODO: This Dockerfile installs pytorch/xla 3.6 wheels. There are also 3.7
# wheels available; see below.
ENV PYTHON_VERSION=3.6

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        curl \
        ca-certificates

# Install conda and python.
# NOTE new Conda does not forward the exit status... https://github.com/conda/conda/issues/8385
RUN curl -o ~/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-4.7.12-Linux-x86_64.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b && \
    rm ~/miniconda.sh

ENV PATH=/root/miniconda3/bin:$PATH

RUN conda create -y --name container python=$PYTHON_VERSION

# Run the rest of commands within the new conda env.
# Use absolute path to appease Codefactor.
SHELL ["/root/miniconda3/bin/conda", "run", "-n", "container", "/bin/bash", "-c"]
RUN conda install -y python=$PYTHON_VERSION mkl

# Double quotes (not single) so that ${PYTHON_VERSION/./} actually expands.
RUN pip uninstall -y torch && \
    # Python 3.7 wheels are available. Replace cp36-cp36m with cp37-cp37m
    gsutil cp "gs://tpu-pytorch/wheels/torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    gsutil cp "gs://tpu-pytorch/wheels/torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    gsutil cp "gs://tpu-pytorch/wheels/torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    pip install "torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    pip install "torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    pip install "torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    apt-get install -y libomp5

ENV LD_LIBRARY_PATH=/root/miniconda3/envs/container/lib

# Install huggingface/transformers at the current PR, plus dependencies.
RUN git clone https://github.com/huggingface/transformers.git && \
    cd transformers && \
    git fetch origin $GITHUB_REF:CI && \
    git checkout CI && \
    cd .. && \
    pip install ./transformers && \
    pip install -r ./transformers/examples/pytorch/_test_requirements.txt && \
    pip install pytest

RUN python -c "import torch_xla; print(torch_xla.__version__)"
RUN python -c "import transformers as trf; print(trf.__version__)"
RUN conda init bash
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["bash"]
transformers/docker/transformers-pytorch-tpu/Dockerfile/0
{ "file_path": "transformers/docker/transformers-pytorch-tpu/Dockerfile", "repo_id": "transformers", "token_count": 1235 }
233
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installation Installieren Sie ๐Ÿค— Transformers fรผr die Deep-Learning-Bibliothek, mit der Sie arbeiten, richten Sie Ihren Cache ein und konfigurieren Sie ๐Ÿค— Transformers optional fรผr den Offline-Betrieb. ๐Ÿค— Transformers wurde unter Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, und Flax getestet. Folgen Sie den Installationsanweisungen unten fรผr die von Ihnen verwendete Deep-Learning-Bibliothek: * [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions. * [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. ## Installation mit pip Sie sollten ๐Ÿค— Transformers in einer [virtuellen Umgebung](https://docs.python.org/3/library/venv.html) installieren. Wenn Sie mit virtuellen Python-Umgebungen nicht vertraut sind, werfen Sie einen Blick auf diese [Anleitung](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Eine virtuelle Umgebung macht es einfacher, verschiedene Projekte zu verwalten und Kompatibilitรคtsprobleme zwischen Abhรคngigkeiten zu vermeiden. Beginnen wir mit der Erstellung einer virtuellen Umgebung in Ihrem Projektverzeichnis: ```bash python -m venv .env ``` Aktivieren wir die virtuelle Umgebung. Unter Linux und MacOs: ```bash source .env/bin/activate ``` Aktivieren wir die virtuelle Umgebung unter Windows ```bash .env/Scripts/activate ``` Jetzt kรถnnen wir die ๐Ÿค— Transformers mit dem folgenden Befehl installieren: ```bash pip install transformers ``` Bei reiner CPU-Unterstรผtzung kรถnnen wir ๐Ÿค— Transformers und eine Deep-Learning-Bibliothek bequem in einer Zeile installieren. Installieren wir zum Beispiel ๐Ÿค— Transformers und PyTorch mit: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers und TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers und Flax: ```bash pip install transformers[flax] ``` รœberprรผfen wir abschlieรŸend, ob ๐Ÿค— Transformers ordnungsgemรครŸ installiert wurde, indem wir den folgenden Befehl ausfรผhren. Es wird ein vortrainiertes Modell heruntergeladen: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Dann wird die Kategorie und die Wahrscheinlichkeit ausgegeben: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Installation aus dem Code Installieren wir ๐Ÿค— Transformers aus dem Quellcode mit dem folgenden Befehl: ```bash pip install git+https://github.com/huggingface/transformers ``` Dieser Befehl installiert die aktuelle `main` Version und nicht die neueste `stable` Version. Die `main`-Version ist nรผtzlich, um mit den neuesten Entwicklungen Schritt zu halten. 
Zum Beispiel, wenn ein Fehler seit der letzten offiziellen Version behoben wurde, aber eine neue Version noch nicht verรถffentlicht wurde. Das bedeutet jedoch, dass die "Hauptversion" nicht immer stabil ist. Wir bemรผhen uns, die Hauptversion einsatzbereit zu halten, und die meisten Probleme werden normalerweise innerhalb weniger Stunden oder eines Tages behoben. Wenn Sie auf ein Problem stoรŸen, รถffnen Sie bitte ein [Issue] (https://github.com/huggingface/transformers/issues), damit wir es noch schneller beheben kรถnnen! รœberprรผfen wir, ob ๐Ÿค— Transformers richtig installiert wurde, indem Sie den folgenden Befehl ausfรผhren: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Editierbare Installation Sie benรถtigen eine bearbeitbare Installation, wenn Sie: * die "Haupt"-Version des Quellcodes verwenden mรถchten. * Zu ๐Ÿค— Transformers beitragen und ร„nderungen am Code testen wollen. Klonen Sie das Repository und installieren ๐Ÿค— Transformers mit den folgenden Befehlen: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` Diese Befehle verknรผpfen den Ordner, in den Sie das Repository geklont haben, mit den Pfaden Ihrer Python-Bibliotheken. Python wird nun in dem Ordner suchen, in den Sie geklont haben, zusรคtzlich zu den normalen Bibliothekspfaden. Wenn zum Beispiel Ihre Python-Pakete normalerweise in `~/anaconda3/envs/main/lib/python3.7/site-packages/` installiert sind, wird Python auch den Ordner durchsuchen, in den Sie geklont haben: `~/transformers/`. <Tip warning={true}> Sie mรผssen den Ordner `transformers` behalten, wenn Sie die Bibliothek weiter verwenden wollen. </Tip> Jetzt kรถnnen Sie Ihren Klon mit dem folgenden Befehl ganz einfach auf die neueste Version von ๐Ÿค— Transformers aktualisieren: ```bash cd ~/transformers/ git pull ``` Ihre Python-Umgebung wird beim nรคchsten Ausfรผhren die `main`-Version von ๐Ÿค— Transformers finden. ## Installation mit conda Installation von dem conda Kanal `conda-forge`: ```bash conda install conda-forge::transformers ``` ## Cache Einrichtung Vorgefertigte Modelle werden heruntergeladen und lokal zwischengespeichert unter: `~/.cache/huggingface/hub`. Dies ist das Standardverzeichnis, das durch die Shell-Umgebungsvariable "TRANSFORMERS_CACHE" vorgegeben ist. Unter Windows wird das Standardverzeichnis durch `C:\Benutzer\Benutzername\.cache\huggingface\hub` angegeben. Sie kรถnnen die unten aufgefรผhrten Shell-Umgebungsvariablen - in der Reihenfolge ihrer Prioritรคt - รคndern, um ein anderes Cache-Verzeichnis anzugeben: 1. Shell-Umgebungsvariable (Standard): `HUGGINGFACE_HUB_CACHE` oder `TRANSFORMERS_CACHE`. 2. Shell-Umgebungsvariable: `HF_HOME`. 3. Shell-Umgebungsvariable: `XDG_CACHE_HOME` + `/huggingface`. <Tip> Transformers verwendet die Shell-Umgebungsvariablen `PYTORCH_TRANSFORMERS_CACHE` oder `PYTORCH_PRETRAINED_BERT_CACHE`, wenn Sie von einer frรผheren Iteration dieser Bibliothek kommen und diese Umgebungsvariablen gesetzt haben, sofern Sie nicht die Shell-Umgebungsvariable `TRANSFORMERS_CACHE` angeben. </Tip> ## Offline Modus Transformers ist in der Lage, in einer Firewall- oder Offline-Umgebung zu laufen, indem es nur lokale Dateien verwendet. Setzen Sie die Umgebungsvariable `TRANSFORMERS_OFFLINE=1`, um dieses Verhalten zu aktivieren. <Tip> Fรผgen sie [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) zu Ihrem Offline-Trainingsworkflow hinzufรผgen, indem Sie die Umgebungsvariable `HF_DATASETS_OFFLINE=1` setzen. 
</Tip>

For example, you would normally run a program on a network firewalled to external instances with the following command:

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

Run the same program in an offline instance with:

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

The script should now run without hanging or waiting for a timeout, because it knows it should only look for local files.

### Fetch models and tokenizers to use offline

Another option for using 🤗 Transformers offline is to download the files ahead of time and then point to their local path when you need to use them offline. There are three ways to do this:

* Download a file through the user interface of the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon.

    ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png)

* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:

    1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:

    ```py
    >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
    >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
    ```

    2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:

    ```py
    >>> tokenizer.save_pretrained("./your/path/bigscience_t0")
    >>> model.save_pretrained("./your/path/bigscience_t0")
    ```

    3. Now when you're offline, load your files with [`PreTrainedModel.from_pretrained`] from the specified directory:

    ```py
    >>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
    >>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
    ```

* Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:

    1. Install the `huggingface_hub` library in your virtual environment:

    ```bash
    python -m pip install huggingface_hub
    ```

    2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file of the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path:

    ```py
    >>> from huggingface_hub import hf_hub_download

    >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
    ```

Once your file is downloaded and locally cached, specify its local path to load and use it:

```py
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```

<Tip>

See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub.

</Tip>
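If you need every file of a repository rather than a single file, the `snapshot_download` function from `huggingface_hub` can mirror a whole model repository to a local folder in one call. The snippet below is a minimal sketch; the cache directory is an arbitrary example path:

```py
>>> from huggingface_hub import snapshot_download

>>> # Download all files of the repository into a local cache (illustrative path)
>>> local_dir = snapshot_download(repo_id="bigscience/T0_3B", cache_dir="./your/path/bigscience_t0")
>>> # The returned folder can then be passed to from_pretrained() while offline
>>> print(local_dir)
```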
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # How to add a model to ๐Ÿค— Transformers? The ๐Ÿค— Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the ๐Ÿค— Transformers library and the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)). <Tip> If you're interested in implementing a TensorFlow model, take a look at the [How to convert a ๐Ÿค— Transformers model to TensorFlow](add_tensorflow_model) guide! </Tip> Along the way, you'll: - get insights into open-source best practices - understand the design principles behind one of the most popular deep learning libraries - learn how to efficiently test large models - learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code A Hugging Face team member will be available to help you along the way so you'll never be alone. ๐Ÿค— โค๏ธ To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in ๐Ÿค— Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on it. Once you've opened a new model request, the first step is to get familiar with ๐Ÿค— Transformers if you aren't already! ## General overview of ๐Ÿค— Transformers First, you should get a general overview of ๐Ÿค— Transformers. ๐Ÿค— Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scale ๐Ÿค— Transformers while keeping maintenance costs at a reasonable level. A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). As a result of our way of working, there are some choices that we try to apply to all models: - Composition is generally favored over-abstraction - Duplicating code is not always bad if it strongly improves the readability or accessibility of a model - Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_....py` file. 
In our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for inference, but also as the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code. With this in mind, let's go a bit deeper into the general library design. ### Overview of models To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For exemplary purposes, we will call the model to be added to ๐Ÿค— Transformers `BrandNewBert`. Let's take a look: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> As you can see, we do make use of inheritance in ๐Ÿค— Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel` which in turn inherits from [`PreTrainedModel`] and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward` should be completely defined in the new `modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM` does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in its forward pass to keep the level of abstraction low. Every new model requires a configuration class, called `BrandNewBertConfig`. This configuration is always stored as an attribute in [`PreTrainedModel`], and thus can be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # model has access to its config ``` Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved. ### Code style When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-) 1. The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [there](pr_checks#check-copies) for more documentation on Copied from). 2. The code should be fully understandable, even by a non-native English speaker. 
This means you should pick descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`. One-letter variable names are strongly discouraged unless it's an index in a for loop.
3. More generally, we prefer longer, explicit code to short, magical code.
4. Avoid subclassing `nn.Sequential` in PyTorch. Instead, subclass `nn.Module` and write the forward pass yourself, so that anyone using your code can quickly debug it by adding print statements or breakpoints.
5. Your function signatures should be type-annotated. For the rest, good variable names are far more readable and understandable than type annotations.

### Overview of tokenizers

Not quite ready yet :-( This section will be added soon!

## Step-by-step recipe to add a model to 🤗 Transformers

Everyone has different preferences for how to port a model, so it can be very helpful for you to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:

1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for the new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment than trying to understand all theoretical aspects of the model in the paper.
- Ask for help when you're stuck! Models are the core component of 🤗 Transformers, so we at Hugging Face are more than happy to help you at every step of adding your model. Don't hesitate to ask if you notice you are not making progress.

In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers.
The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do List: โ˜ (Optional) Understood the model's theoretical aspects<br> โ˜ Prepared ๐Ÿค— Transformers dev environment<br> โ˜ Set up debugging environment of the original repository<br> โ˜ Created script that successfully runs the `forward()` pass using the original repository and checkpoint<br> โ˜ Successfully added the model skeleton to ๐Ÿค— Transformers<br> โ˜ Successfully converted original checkpoint to ๐Ÿค— Transformers checkpoint<br> โ˜ Successfully ran `forward()` pass in ๐Ÿค— Transformers that gives identical output to original checkpoint<br> โ˜ Finished model tests in ๐Ÿค— Transformers<br> โ˜ Successfully added tokenizer in ๐Ÿค— Transformers<br> โ˜ Run end-to-end integration tests<br> โ˜ Finished docs<br> โ˜ Uploaded model weights to the Hub<br> โ˜ Submitted the pull request<br> โ˜ (Optional) Added a demo notebook To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive into the `BrandNewBert`'s code-base. This option might suit you better if your engineering skills are better than your theoretical skill, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers. ### 1. (Optional) Theoretical aspects of BrandNewBert You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in ๐Ÿค— Transformers. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely: - What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those. - What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,* summarization? - What is the novel feature of the model that makes it different from BERT/GPT-2/BART? - Which of the already existing [๐Ÿค— Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*? - What type of tokenizer is used? A sentencepiece tokenizer? Word piece tokenizer? Is it the same tokenizer as used for BERT or BART? After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model's architecture, its attention layer, etc. We will be more than happy to help you. ### 2. Next prepare your environment 1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the โ€˜Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. 
Clone your `transformers` fork to your local disk, and add the base repository as a remote: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. Set up a development environment, for instance by running the following command: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do: ```bash pip install -e ".[quality]" ``` which should be enough for most use cases. You can then return to the parent directory ```bash cd .. ``` 4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/. **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient. 5. To port *brand_new_bert*, you will also need access to its original repository: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` Now you have set up a development environment to port *brand_new_bert* to ๐Ÿค— Transformers. ### 3.-4. Run a pretrained checkpoint using the original repository At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very โ€œresearchyโ€. Meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants* which translates here very well into taking a working model and rewriting it to make it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into ๐Ÿค— Transformers - trying to make complex new NLP technology accessible to **everybody**. You should start thereby by diving into the original repository. Successfully running the official pretrained model in the original repository is often **the most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code-base. You need to figure out the following: - Where to find the pretrained weights? - How to load the pretrained weights into the corresponding model? - How to run the tokenizer independently from the model? - Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions. - Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*...? - How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm? It is very important that before you start the porting process, you can **efficiently** debug code in the original repository! 
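If the original model happens to be implemented in PyTorch, one lightweight way to trace a forward pass and capture intermediate outputs is to register forward hooks on every submodule. The sketch below is only an illustration under that assumption; `OriginalModel` and the checkpoint path are placeholders for whatever the original repository actually provides:

```python
import torch

# Placeholder: however the original repository loads its PyTorch nn.Module
model = OriginalModel.load_pretrained_checkpoint("/path/to/checkpoint/")
model.eval()

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Keep a detached copy of each submodule's output for later comparison
        captured[name] = output.detach() if torch.is_tensor(output) else output
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(make_hook(name))

input_ids = torch.tensor([[0, 4, 5, 2, 3, 7, 9]])  # dummy input ids
with torch.no_grad():
    _ = model(input_ids)

print(list(captured)[:10])  # first few submodule names that produced outputs
```

Walking through the captured names is also a quick way to answer the questions above about which classes and sub-layers are actually involved in a forward pass.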
Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request in the original repository. The maintainers of this repository are most likely very happy about someone looking into their code! At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. We strongly advise against setting up a costly GPU environment, but simply work on a CPU both when starting to dive into the original repository and also when starting to write the ๐Ÿค— Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to ๐Ÿค— Transformers, one should verify that the model also works as expected on GPU. In general, there are two possible debugging environments for running the original model - [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb) - Local python scripts. Jupyter notebooks have the advantage that they allow for cell-by-cell execution which can be helpful to better split logical components from one another and to have faster debugging cycles as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them. The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend some time adjusting to the new programming environment and you might not be able to use your known debugging tools anymore, like `ipdb`. For each code-base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in pseudocode): ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` Next, regarding the debugging strategy, there are generally a few from which to choose from: - Decompose the original model into many small testable components and run a forward pass on each of those for verification - Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on those, and use intermediate print statements or breakpoints for verification Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base. If the original code-base allows you to decompose the model into smaller sub-components, *e.g.* if the original code-base can easily be run in eager mode, it is usually worth the effort to do so. 
There are some important advantages to taking the more difficult road in the beginning: - at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the ๐Ÿค— Transformers implementation matches instead of relying on visual comparison via print statements - it can give you some rope to decompose the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better - separating the model into logical meaningful components will help you to get a better overview of the model's design and thus to better understand the model - at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue changing your code [Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA gives a nice example of how this can be done. However, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements. No matter which strategy you choose, the recommended procedure is often the same that you should start to debug the starting layers first and the ending layers last. It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order: 1. Retrieve the input IDs passed to the model 2. Retrieve the word embeddings 3. Retrieve the input of the first Transformer layer 4. Retrieve the output of the first Transformer layer 5. Retrieve the output of the following n - 1 Transformer layers 6. Retrieve the output of the whole BrandNewBert Model Input IDs should thereby consists of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]` The outputs of the following layers often consist of multi-dimensional float arrays and can look like this: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` We expect that every model added to ๐Ÿค— Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in ๐Ÿค— Transformers have to give the exact same output up to a precision of 0.001! Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output, they have to be almost identical. Therefore, you will certainly compare the intermediate outputs of the ๐Ÿค— Transformers version multiple times against the intermediate outputs of the original implementation of *brand_new_bert* in which case an **efficient** debugging environment of the original repository is absolutely important. 
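As a rough sketch of what such a layer-by-layer comparison can look like (the helper below is illustrative only; how you collect the two dictionaries of intermediate tensors depends entirely on your debugging setup):

```python
import torch

def compare_intermediate_outputs(original_outputs, hf_outputs, atol=1e-3):
    """Compare dictionaries of intermediate tensors, keyed by layer name."""
    for name, original in original_outputs.items():
        converted = hf_outputs[name]
        assert original.shape == converted.shape, (
            f"{name}: shape mismatch {original.shape} vs {converted.shape}"
        )
        max_diff = (original - converted).abs().max().item()
        assert torch.allclose(original, converted, atol=atol), (
            f"{name}: max difference {max_diff} exceeds tolerance {atol}"
        )
        print(f"{name}: OK (max difference {max_diff:.2e})")
```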
Here is some advice to make your debugging environment as efficient as possible. - Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when running the forward pass, *e.g.* check-out [this link](https://github.com/google/jax/issues/196). - Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the ๐Ÿค— Transformers version of your model - Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that **only** calls a single forward pass, *i.e.* that is often called `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward` multiple times, *e.g.* to generate text, like `autoregressive_sample`, `generate`. - Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input ids and start from this point. This might mean that you have to possibly write a small script yourself or change the original code so that you can directly input the ids instead of an input string. - Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield random outputs due to multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed* if the old and new implementations are in the same framework. The following section gives you more specific details/tips on how you can do this for *brand_new_bert*. ### 5.-14. Port BrandNewBert to ๐Ÿค— Transformers Next, you can finally start adding new code to ๐Ÿค— Transformers. Go into the clone of your ๐Ÿค— Transformers' fork: ```bash cd transformers ``` In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model you only have to add a conversion script as described in [this section](#write-a-conversion-script). In this case, you can just re-use the whole model architecture of the already existing model. Otherwise, let's start generating a new model. You have two choices here: - `transformers-cli add-new-model-like` to add a new model like an existing one - `transformers-cli add-new-model` to add a new model from our template (will look like BERT or Bart depending on the type of model you select) In both cases, you will be prompted with a questionnaire to fill in the basic information of your model. 
The second command requires to install `cookiecutter`, you can find more information on it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model). **Open a Pull Request on the main huggingface/transformers repo** Before starting to adapt the automatically generated code, now is the time to open a โ€œWork in progress (WIP)โ€ pull request, *e.g.* โ€œ[WIP] Add *brand_new_bert*โ€, in ๐Ÿค— Transformers so that you and the Hugging Face team can work side-by-side on integrating the model into ๐Ÿค— Transformers. You should do the following: 1. Create a branch with a descriptive name from your main branch ```bash git checkout -b add_brand_new_bert ``` 2. Commit the automatically generated code: ```bash git add . git commit ``` 3. Fetch and rebase to current main ```bash git fetch upstream git rebase upstream/main ``` 4. Push the changes to your account using: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on โ€œPull requestโ€. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes. 6. Change the PR into a draft by clicking on โ€œConvert to draftโ€ on the right of the GitHub pull request web page. In the following, whenever you have made some progress, don't forget to commit your work and push it to your account so that it shows in the pull request. Additionally, you should make sure to update your work with the current main from time to time by doing: ```bash git fetch upstream git merge upstream/main ``` In general, all questions you might have regarding the model or your implementation should be asked in your PR and discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging Face team can efficiently understand your problem or question. To do so, you can go to the โ€œFiles changedโ€ tab where you see all of your changes, go to a line regarding which you want to ask a question, and click on the โ€œ+โ€ symbol to add a comment. Whenever a question or problem has been solved, you can click on the โ€œResolveโ€ button of the created comment. In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions on GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the Hugging Face team by Slack or email. **5. Adapt the generated models code for brand_new_bert** At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. Now you can finally start coding :). The generated code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what you've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or BART?*". 
Implement those changes which often means changing the *self-attention* layer, the order of the normalization layer, etcโ€ฆ Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented. **Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is advised to add a first *unclean*, copy-pasted version of the original code to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is added. From our experience, it is much more efficient to quickly add a first version of the required code and improve/correct the code iteratively with the conversion script as described in the next section. The only thing that has to work at this point is that you can instantiate the ๐Ÿค— Transformers implementation of *brand_new_bert*, *i.e.* the following command should work: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with random weights, thus making sure that the `init()` methods of all components works. Note that all random initialization should happen in the `_init_weights` method of your `BrandnewBertPreTrainedModel` class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT `_init_weights` method: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) ``` You can have some more custom schemes if you need a special initialization for some modules. For instance, in `Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear` but all the other ones should use an initialization as above. This is coded like this: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` The `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. By setting it to `True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on, the `_init_weights` function won't be applied to them. **6. Write a conversion script** Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in the original repository to a checkpoint compatible with your just created ๐Ÿค— Transformers implementation of *brand_new_bert*. 
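At a high level, most conversion scripts follow the same pattern. The sketch below is purely illustrative: the loading code, the key-renaming rule, and the file paths are placeholders that depend entirely on the format of the original checkpoint:

```python
import torch

from transformers import BrandNewBertConfig, BrandNewBertModel

# Load the original checkpoint (assumed here to be a plain PyTorch state dict)
original_state_dict = torch.load("/path/to/original/checkpoint.pt", map_location="cpu")

# Map original parameter names to the names used by the 🤗 Transformers implementation
converted_state_dict = {}
for old_name, tensor in original_state_dict.items():
    new_name = old_name.replace("encoder.layers", "encoder.layer")  # illustrative renaming rule
    converted_state_dict[new_name] = tensor

config = BrandNewBertConfig()  # must match the hyper-parameters of the checkpoint
model = BrandNewBertModel(config)
missing, unexpected = model.load_state_dict(converted_state_dict, strict=False)
print("Missing keys:", missing)
print("Unexpected keys:", unexpected)

model.save_pretrained("/path/to/converted/checkpoint/folder")
```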
It is not advised to write the conversion script from scratch, but rather to look through already existing conversion scripts in ๐Ÿค— Transformers for one that has been used to convert a similar model that was written in the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model. - If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91) - If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py) In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in PyTorch, called `SimpleModel` as follows: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` Now we can create an instance of this model definition which will fill all weights: `dense`, `intermediate`, `layer_norm` with random weights. We can print the model to see its architecture ```python model = SimpleModel() print(model) ``` This will print out the following: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` We can see that the layer names are defined by the name of the class attribute in PyTorch. You can print out the weight values of a specific layer: ```python print(model.dense.weight.data) ``` to see that the weights were randomly initialized ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.* ```python # retrieve matching layer weights, e.g. 
by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert statements for the shape and print out the names of the checkpoints weights. E.g. you should add statements like: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` Besides, you should also print out the names of both weights to make sure they match, *e.g.* ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the ๐Ÿค— Transformers implementation. An incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that do not exactly match those that were used for the checkpoint you want to convert. However, it could also be that PyTorch's implementation of a layer requires the weight to be transposed beforehand. Finally, you should also check that **all** required weights are initialized and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted. It is completely normal, that the conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because either you used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the ๐Ÿค— Transformers implementation, you have a bug in the `init()` functions of one of the components of the ๐Ÿค— Transformers implementation or you need to transpose one of the checkpoint weights. This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the Transformers model. Having correctly loaded the checkpoint into the ๐Ÿค— Transformers implementation, you can then save the model under a folder of your choice `/path/to/converted/checkpoint/folder` that should then contain both a `pytorch_model.bin` file and a `config.json` file: ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. Implement the forward pass** Having managed to correctly load the pretrained weights into the ๐Ÿค— Transformers implementation, you should now make sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#34-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script using the ๐Ÿค— Transformers implementation instead of the original one. It should look as follows: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` It is very likely that the ๐Ÿค— Transformers implementation and the original model implementation don't give the exact same output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! 
First, you should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are used, leading to a *Dimensionality mismatch* error, or that the wrong data type object is used, *e.g.* `torch.long` instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help if you don't manage to solve certain errors.

The final part of making sure the 🤗 Transformers implementation works correctly is to ensure that the outputs are equivalent to a precision of `1e-3`. First, you should ensure that the output shapes are identical, *i.e.* `outputs.shape` should yield the same value for the script of the 🤗 Transformers implementation and the original implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult parts of adding a new model. Common reasons why the outputs are not identical are:

- Some layers were not added, *i.e.* an *activation* layer was not added, or the residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass. To fix this, make sure `model.training` is `False` and that no dropout layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)

The best way to fix the problem is usually to look at the forward pass of the original implementation and the 🤗 Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out intermediate outputs of both implementations of the forward pass to find the exact position in the network where the 🤗 Transformers implementation shows a different output than the original implementation. First, make sure that the hard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of the `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the network. At some point, you will notice a difference between the two implementations, which should point you to the bug in the 🤗 Transformers implementation. From our experience, a simple and efficient way is to add many print statements in both the original implementation and the 🤗 Transformers implementation, at the same positions in the network respectively, and to successively remove print statements showing the same values for intermediate representations.

When you're confident that both implementations yield the same output, verify the outputs with `torch.allclose(original_output, output, atol=1e-3)`, and you're done with the most difficult part! Congratulations - the work left to be done should be a cakewalk 😊.

**8. Adding all necessary model tests**

At this point, you have successfully added a new model. However, it is very much possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, all common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`.
Run this test file to verify that all common tests pass:

```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```

Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that

- a) The community can easily understand your work by looking at specific tests of *brand_new_bert*
- b) Future changes to your model will not break any important feature of the model.

At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts you used earlier to implement the model in 🤗 Transformers. A template of those model tests has already been added by the Cookiecutter, called `BrandNewBertModelIntegrationTests`, and only has to be filled out by you. To ensure that those tests are passing, run

```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```

<Tip>

In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`

</Tip>

Second, all features that are special to *brand_new_bert* should be tested additionally in a separate test under `BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two ways:

- It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the special features of *brand_new_bert* should work.
- Future contributors can quickly test changes to the model by running those special tests.

**9. Implement the tokenizer**

Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an already existing tokenizer of 🤗 Transformers. It is very important to find/extract the original tokenizer file and to manage to load this file into the 🤗 Transformers implementation of the tokenizer. To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository that inputs a string and returns the `input_ids`. It could look similar to this (in pseudo-code):

```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```

You might have to take a deeper look again into the original repository to find the correct tokenizer function, or you might even have to make changes to your clone of the original repository to only output the `input_ids`. Having written a functional tokenization script that uses the original repository, an analogous script for 🤗 Transformers should be created. It should look similar to this:

```python
from transformers import BrandNewBertTokenizer

input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."

tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")

input_ids = tokenizer(input_str).input_ids
```

When both `input_ids` yield the same values, as a final step a tokenizer test file should also be added. Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should contain a couple of hard-coded integration tests.

**10. Run End-to-end integration tests**

Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in 🤗 Transformers. Such a test should show on a meaningful text-to-text sample that the 🤗 Transformers implementation works as expected. A meaningful text-to-text sample can include *e.g.* a source-to-target translation pair, an article-to-summary pair, a question-to-answer pair, etc. If none of the ported checkpoints has been fine-tuned on a downstream task, it is enough to simply rely on the model tests. In a final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a test would show up as an error. In case you have no access to a GPU, the Hugging Face team can take care of running those tests for you.

**11. Add Docstring**

Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.md` that you should fill out. Users of your model will usually first look at this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for the community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team regarding the docstrings. Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is correct and includes all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always good to remind oneself that documentation should be treated at least as carefully as the code in 🤗 Transformers, since the documentation is usually the first contact point of the community with the model.

**Code refactor**

Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potentially incorrect code style by running:

```bash
make style
```

and verify that your coding style passes the quality check:

```bash
make quality
```

There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which will show up in the tests of your pull request. This is often because of some missing information in the docstring or some incorrect naming. The Hugging Face team will surely help you if you're stuck here. Lastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all tests passing, now it's a good time to go over the added code again and do some refactoring. You have now finished the coding part, congratulations! 🎉 You are Awesome! 😎

**12. Upload the models to the model hub**

In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each uploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing). You should work alongside the Hugging Face team here to decide on a fitting name for each checkpoint and to get the required access rights to be able to upload the model under the author's organization of *brand_new_bert*.
The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below: ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the specific characteristics of this particular checkpoint, *e.g.* On which dataset was the checkpoint pretrained/fine-tuned on? On what down-stream task should the model be used? And also include some code on how to correctly use the model. **13. (Optional) Add notebook** It is very helpful to add a notebook that showcases in-detail how *brand_new_bert* can be used for inference and/or fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community. **14. Submit your finished PR** You're done programming now and can move to the last step, which is getting your PR merged into main. Usually, the Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished PR a nice description and eventually add comments to your code, if you want to point out certain design choices to your reviewer. ### Share your work!! Now, it's time to get some credit from the community for your work! Having completed a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share your achievements with the community. **You have made another model that is super easy to access for everyone in the community! ๐Ÿคฏ**
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Use tokenizers from ๐Ÿค— Tokenizers The [`PreTrainedTokenizerFast`] depends on the [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the ๐Ÿค— Tokenizers library can be loaded very simply into ๐Ÿค— Transformers. Before getting in the specifics, let's first start by creating a dummy tokenizer in a few lines: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future re-use. ## Loading directly from the tokenizer object Let's see how to leverage this tokenizer object in the ๐Ÿค— Transformers library. The [`PreTrainedTokenizerFast`] class allows for easy instantiation, by accepting the instantiated *tokenizer* object as an argument: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` This object can now be used with all the methods shared by the ๐Ÿค— Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information. ## Loading from a JSON file In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer: ```python >>> tokenizer.save("tokenizer.json") ``` The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization method using the `tokenizer_file` parameter: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` This object can now be used with all the methods shared by the ๐Ÿค— Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information.
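As a quick sanity check that the loaded tokenizer behaves as expected, you can exercise it like any other 🤗 Transformers tokenizer. The snippet below is a small illustrative example; the exact tokens depend on the files the tokenizer was trained on, and the save directory is an arbitrary example:

```python
>>> encoding = fast_tokenizer("Hello, how are you?")
>>> print(encoding.input_ids)  # token ids produced by the trained BPE model
>>> print(fast_tokenizer.convert_ids_to_tokens(encoding.input_ids))  # back to token strings

>>> fast_tokenizer.save_pretrained("my-tokenizer")  # writes tokenizer.json plus the tokenizer configuration
```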
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BARTpho ## Overview The BARTpho model was proposed in [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. The abstract from the paper is the following: *We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks.* This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BARTpho). ## Usage example ```python >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable") >>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable") >>> line = "Chรบng tรดi lร  nhแปฏng nghiรชn cแปฉu viรชn." >>> input_ids = tokenizer(line, return_tensors="pt") >>> with torch.no_grad(): ... features = bartpho(**input_ids) # Models outputs are now tuples >>> # With TensorFlow 2.0+: >>> from transformers import TFAutoModel >>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable") >>> input_ids = tokenizer(line, return_tensors="tf") >>> features = bartpho(**input_ids) ``` ## Usage tips - Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the [documentation of BART](bart), when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example: ```python >>> from transformers import MBartForConditionalGeneration >>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable") >>> TXT = "Chรบng tรดi lร  <mask> nghiรชn cแปฉu viรชn." 
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] >>> logits = bartpho(input_ids).logits >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() >>> probs = logits[0, masked_index].softmax(dim=0) >>> values, predictions = probs.topk(5) >>> print(tokenizer.decode(predictions).split()) ``` - This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file". ## BartphoTokenizer [[autodoc]] BartphoTokenizer
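As a sketch of the reuse described above, a [`BartphoTokenizer`] for another language could be instantiated from the multilingual SentencePiece model plus a language-specific vocabulary file. The file names below are purely illustrative placeholders:

```python
>>> from transformers import BartphoTokenizer

>>> # "sentencepiece.bpe.model" would be the pre-trained multilingual SentencePiece model and
>>> # "monolingual_vocab.txt" a vocabulary of language-specialized types extracted from it.
>>> tokenizer = BartphoTokenizer(
...     vocab_file="sentencepiece.bpe.model",
...     monolingual_vocab_file="monolingual_vocab.txt",
... )
>>> tokenizer.tokenize("Chúng tôi là những nghiên cứu viên.")
```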
<!--Copyright 2023 The Intel Labs Team Authors, The Microsoft Research Team Authors and HuggingFace Inc. team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BridgeTower ## Overview The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder thus achieving remarkable performance on various downstream tasks with almost negligible additional performance and computational costs. This paper has been accepted to the [AAAI'23](https://aaai.org/Conferences/AAAI-23/) conference. The abstract from the paper is the following: *Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/bridgetower_architecture%20.jpg" alt="drawing" width="600"/> <small> BridgeTower architecture. Taken from the <a href="https://arxiv.org/abs/2206.08657">original paper.</a> </small> This model was contributed by [Anahita Bhiwandiwalla](https://huggingface.co/anahita-b), [Tiep Le](https://huggingface.co/Tile) and [Shaoyen Tseng](https://huggingface.co/shaoyent). 
The original code can be found [here](https://github.com/microsoft/BridgeTower). ## Usage tips and examples BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers. The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder. In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture. The [`BridgeTowerProcessor`] wraps [`RobertaTokenizer`] and [`BridgeTowerImageProcessor`] into a single instance to both encode the text and prepare the images respectively. The following example shows how to run contrastive learning using [`BridgeTowerProcessor`] and [`BridgeTowerForContrastiveLearning`]. ```python >>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> # forward pass >>> scores = dict() >>> for text in texts: ... # prepare inputs ... encoding = processor(image, text, return_tensors="pt") ... outputs = model(**encoding) ... scores[text] = outputs ``` The following example shows how to run image-text retrieval using [`BridgeTowerProcessor`] and [`BridgeTowerForImageAndTextRetrieval`]. ```python >>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> # forward pass >>> scores = dict() >>> for text in texts: ... # prepare inputs ... encoding = processor(image, text, return_tensors="pt") ... outputs = model(**encoding) ... scores[text] = outputs.logits[0, 1].item() ``` The following example shows how to run masked language modeling using [`BridgeTowerProcessor`] and [`BridgeTowerForMaskedLM`]. ```python >>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000360943.jpg" >>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB") >>> text = "a <mask> looking out of the window" >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> # prepare inputs >>> encoding = processor(image, text, return_tensors="pt") >>> # forward pass >>> outputs = model(**encoding) >>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) >>> print(results) .a cat looking out of the window. 
```

Tips:

- This implementation of BridgeTower uses [`RobertaTokenizer`] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings.
- Checkpoints for the pre-trained [bridgetower-base](https://huggingface.co/BridgeTower/bridgetower-base) and [bridgetower-base-itm-mlm](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) (masked language modeling and image-text matching) models are released.
- Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on image retrieval and other downstream tasks.
- The PyTorch version of this model is only available in torch 1.10 and higher.

## BridgeTowerConfig

[[autodoc]] BridgeTowerConfig

## BridgeTowerTextConfig

[[autodoc]] BridgeTowerTextConfig

## BridgeTowerVisionConfig

[[autodoc]] BridgeTowerVisionConfig

## BridgeTowerImageProcessor

[[autodoc]] BridgeTowerImageProcessor
    - preprocess

## BridgeTowerProcessor

[[autodoc]] BridgeTowerProcessor
    - __call__

## BridgeTowerModel

[[autodoc]] BridgeTowerModel
    - forward

## BridgeTowerForContrastiveLearning

[[autodoc]] BridgeTowerForContrastiveLearning
    - forward

## BridgeTowerForMaskedLM

[[autodoc]] BridgeTowerForMaskedLM
    - forward

## BridgeTowerForImageAndTextRetrieval

[[autodoc]] BridgeTowerForImageAndTextRetrieval
    - forward
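Building on the image-text retrieval example above, the `scores` dictionary can then be used to pick the caption that best matches the image, for instance:

```python
>>> best_text = max(scores, key=scores.get)
>>> print(f"Best matching text: {best_text}")
```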
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPM ## Overview The CPM model was proposed in [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. The abstract from the paper is the following: *Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning.* This model was contributed by [canwenxu](https://huggingface.co/canwenxu). The original implementation can be found here: https://github.com/TsinghuaAI/CPM-Generate <Tip> CPM's architecture is the same as GPT-2, except for tokenization method. Refer to [GPT-2 documentation](gpt2) for API reference information. </Tip> ## CpmTokenizer [[autodoc]] CpmTokenizer ## CpmTokenizerFast [[autodoc]] CpmTokenizerFast
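Since the architecture matches GPT-2, a CPM checkpoint can be loaded with the GPT-2 model classes together with the CPM tokenizer. The sketch below assumes the `TsinghuaAI/CPM-Generate` checkpoint is available on the Hub and that `sentencepiece` and `jieba` (required by [`CpmTokenizer`]) are installed; the prompt and generation settings are illustrative:

```python
>>> from transformers import CpmTokenizer, GPT2LMHeadModel

>>> # Checkpoint name and generation settings are illustrative assumptions.
>>> tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
>>> model = GPT2LMHeadModel.from_pretrained("TsinghuaAI/CPM-Generate")

>>> inputs = tokenizer("清华大学", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```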
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FLAN-T5 ## Overview FLAN-T5 was released in the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) - it is an enhanced version of T5 that has been finetuned in a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model: ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") >>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] ``` FLAN-T5 includes the same improvements as T5 version 1.1 (see [here](https://huggingface.co/docs/transformers/model_doc/t5v1.1) for the full details of the model's improvements.) Google has released the following variants: - [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) - [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) - [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) - [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl). The original checkpoints can be found [here](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints). <Tip> Refer to [T5's documentation page](t5) for all API reference, code examples and notebooks. For more details regarding training and evaluation of the FLAN-T5, refer to the model card. </Tip>
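Because the model is instruction-finetuned, the same checkpoint can be prompted for many different tasks. For example, reusing the `model` and `tokenizer` loaded above, a translation-style instruction could look like this (a small sketch; outputs will vary with checkpoint size and generation settings):

```python
>>> inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```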
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # GPT-NeoX-Japanese ## Overview We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of [https://github.com/EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox). Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts. To address this distinct structure of the Japanese language, we use a [special sub-word tokenizer](https://github.com/tanreinama/Japanese-BPEEncoder_V2). We are very grateful to *tanreinama* for open-sourcing this incredibly helpful tokenizer. Following the recommendations from Google's research on [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we have removed bias parameters from transformer blocks, achieving better model performance. Please refer [this article](https://medium.com/ml-abeja/training-a-better-gpt-2-93b157662ae4) in detail. Development of the model was led by [Shinya Otani](https://github.com/SO0529), [Takayoshi Makabe](https://github.com/spider-man-tm), [Anuj Arora](https://github.com/Anuj040), and [Kyo Hattori](https://github.com/go5paopao) from [ABEJA, Inc.](https://www.abejainc.com/). For more information on this model-building activity, please refer [here (ja)](https://tech-blog.abeja.asia/entry/abeja-gpt-project-202207). ### Usage example The `generate()` method can be used to generate text using GPT NeoX Japanese model. ```python >>> from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer >>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> prompt = "ไบบใจAIใŒๅ”่ชฟใ™ใ‚‹ใŸใ‚ใซใฏใ€" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0] >>> print(gen_text) ไบบใจAIใŒๅ”่ชฟใ™ใ‚‹ใŸใ‚ใซใฏใ€AIใจไบบใŒๅ…ฑๅญ˜ใ—ใ€AIใ‚’ๆญฃใ—ใ็†่งฃใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ``` ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## GPTNeoXJapaneseConfig [[autodoc]] GPTNeoXJapaneseConfig ## GPTNeoXJapaneseTokenizer [[autodoc]] GPTNeoXJapaneseTokenizer ## GPTNeoXJapaneseModel [[autodoc]] GPTNeoXJapaneseModel - forward ## GPTNeoXJapaneseForCausalLM [[autodoc]] GPTNeoXJapaneseForCausalLM - forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Mask2Former ## Overview The Mask2Former model was proposed in [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over [MaskFormer](maskformer). The abstract from the paper is the following: *Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/mask2former_architecture.jpg" alt="drawing" width="600"/> <small> Mask2Former architecture. Taken from the <a href="https://arxiv.org/abs/2112.01527">original paper.</a> </small> This model was contributed by [Shivalika Singh](https://huggingface.co/shivi) and [Alara Dirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/facebookresearch/Mask2Former). ## Usage tips - Mask2Former uses the same preprocessing and postprocessing steps as [MaskFormer](maskformer). Use [`Mask2FormerImageProcessor`] or [`AutoImageProcessor`] to prepare images and optional targets for the model. - To get the final segmentation, depending on the task, you can call [`~Mask2FormerImageProcessor.post_process_semantic_segmentation`] or [`~Mask2FormerImageProcessor.post_process_instance_segmentation`] or [`~Mask2FormerImageProcessor.post_process_panoptic_segmentation`]. All three tasks can be solved using [`Mask2FormerForUniversalSegmentation`] output, panoptic segmentation accepts an optional `label_ids_to_fuse` argument to fuse instances of the target object/s (e.g. sky) together. ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with Mask2Former. 
- Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Mask2Former). If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. ## Mask2FormerConfig [[autodoc]] Mask2FormerConfig ## MaskFormer specific outputs [[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerModelOutput [[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput ## Mask2FormerModel [[autodoc]] Mask2FormerModel - forward ## Mask2FormerForUniversalSegmentation [[autodoc]] Mask2FormerForUniversalSegmentation - forward ## Mask2FormerImageProcessor [[autodoc]] Mask2FormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation
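As a quick reference, the snippet below sketches a full panoptic segmentation inference pass following the usage tips above. The checkpoint name is only an example; any Mask2Former checkpoint from the Hub can be used in the same way:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-panoptic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize and merge the predicted masks into a panoptic segmentation map
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
segmentation = result["segmentation"]  # (height, width) tensor of segment ids
segments_info = result["segments_info"]  # per-segment label ids and scores
```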
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # OpenAI GPT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=openai-gpt"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-openai--gpt-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/openai-gpt"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview OpenAI GPT model was proposed in [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer pre-trained using language modeling on a large corpus will long range dependencies, the Toronto Book Corpus. The abstract from the paper is the following: *Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied.* [Write With Transformer](https://transformer.huggingface.co/doc/gpt) is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them. This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/openai/finetune-transformer-lm). ## Usage tips - GPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the *run_generation.py* example script. 
Note: If you want to reproduce the original tokenization process of the *OpenAI GPT* paper, you will need to install `ftfy` and `SpaCy`: ```bash pip install spacy ftfy==4.4.3 python -m spacy download en ``` If you don't install `ftfy` and `SpaCy`, the [`OpenAIGPTTokenizer`] will default to tokenize using BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most usage, don't worry). ## Resources A list of official Hugging Face and community (indicated by ๐ŸŒŽ) resources to help you get started with OpenAI GPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="text-classification"/> - A blog post on [outperforming OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit). - See also: [Text classification task guide](../tasks/sequence_classification) <PipelineTag pipeline="text-generation"/> - A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface). - A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2. - A blog on [Training CodeParrot ๐Ÿฆœ from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model. - A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2. - A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model. - A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). ๐ŸŒŽ - A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). ๐ŸŒŽ - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the ๐Ÿค— Hugging Face Course. - [`OpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-generation/run_generation.py) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [`TFOpenAIGPTLMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - See also: [Causal language modeling task guide](../tasks/language_modeling) <PipelineTag pipeline="token-classification"/> - A course material on [Byte-Pair Encoding tokenization](https://huggingface.co/course/en/chapter6/5). 
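As a quick sanity check of a checkpoint, text can be generated with the `openai-gpt` weights through the text-generation pipeline; the snippet below is only a sketch, and the sampling settings are illustrative:

```python
>>> from transformers import pipeline

>>> generator = pipeline("text-generation", model="openai-gpt")
>>> generator("Hello, I'm a language model,", max_new_tokens=20, do_sample=True)
```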
## OpenAIGPTConfig [[autodoc]] OpenAIGPTConfig ## OpenAIGPTTokenizer [[autodoc]] OpenAIGPTTokenizer - save_vocabulary ## OpenAIGPTTokenizerFast [[autodoc]] OpenAIGPTTokenizerFast ## OpenAI specific outputs [[autodoc]] models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput [[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput <frameworkcontent> <pt> ## OpenAIGPTModel [[autodoc]] OpenAIGPTModel - forward ## OpenAIGPTLMHeadModel [[autodoc]] OpenAIGPTLMHeadModel - forward ## OpenAIGPTDoubleHeadsModel [[autodoc]] OpenAIGPTDoubleHeadsModel - forward ## OpenAIGPTForSequenceClassification [[autodoc]] OpenAIGPTForSequenceClassification - forward </pt> <tf> ## TFOpenAIGPTModel [[autodoc]] TFOpenAIGPTModel - call ## TFOpenAIGPTLMHeadModel [[autodoc]] TFOpenAIGPTLMHeadModel - call ## TFOpenAIGPTDoubleHeadsModel [[autodoc]] TFOpenAIGPTDoubleHeadsModel - call ## TFOpenAIGPTForSequenceClassification [[autodoc]] TFOpenAIGPTForSequenceClassification - call </tf> </frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ProphetNet <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=prophetnet"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-prophetnet-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/prophetnet-large-uncased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training,](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020. ProphetNet is an encoder-decoder model and can predict n-future tokens for "ngram" language modeling instead of just the next token. The abstract from the paper is the following: *In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.* The Authors' code can be found [here](https://github.com/microsoft/ProphetNet). ## Usage tips - ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - The model architecture is based on the original Transformer, but replaces the โ€œstandardโ€ self-attention mechanism in the decoder by a a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism. 
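For instance, abstractive summarization with a ProphetNet checkpoint could be sketched as follows; the CNN/DailyMail fine-tuned checkpoint name, the `[X_SEP]` sentence separator, and the generation settings are assumptions to adapt to your own use case:

```python
>>> from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

>>> tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
>>> model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")

>>> article = "USTC was founded in Beijing by the Chinese Academy of Sciences in 1958. [X_SEP] It was relocated to Hefei in 1970."
>>> inputs = tokenizer(article, return_tensors="pt")
>>> summary_ids = model.generate(**inputs, num_beams=4, max_length=60, early_stopping=True)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```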
## Resources - [Causal language modeling task guide](../tasks/language_modeling) - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## ProphetNetConfig [[autodoc]] ProphetNetConfig ## ProphetNetTokenizer [[autodoc]] ProphetNetTokenizer ## ProphetNet specific outputs [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput ## ProphetNetModel [[autodoc]] ProphetNetModel - forward ## ProphetNetEncoder [[autodoc]] ProphetNetEncoder - forward ## ProphetNetDecoder [[autodoc]] ProphetNetDecoder - forward ## ProphetNetForConditionalGeneration [[autodoc]] ProphetNetForConditionalGeneration - forward ## ProphetNetForCausalLM [[autodoc]] ProphetNetForCausalLM - forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # SAM ## Overview SAM (Segment Anything Model) was proposed in [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. The model can be used to predict segmentation masks of any object of interest given an input image. ![example image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-output.png) The abstract from the paper is the following: *We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision.* Tips: - The model predicts binary masks that states the presence or not of the object of interest given an image. - The model predicts much better results if input 2D points and/or input bounding boxes are provided - You can prompt multiple points for the same image, and predict a single mask. - Fine-tuning the model is not supported yet - According to the paper, textual input should be also supported. However, at this time of writing this seems to be not supported according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844). This model was contributed by [ybelkada](https://huggingface.co/ybelkada) and [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/facebookresearch/segment-anything). 
Below is an example of how to run mask generation given an image and a 2D point:

```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```

You can also process your own masks alongside the input images in the processor to be passed to the model:

```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("1")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```

Resources:

- [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) for using the model.
- [Demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb) for using the automatic mask generation pipeline.
- [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Run_inference_with_MedSAM_using_HuggingFace_Transformers.ipynb) for inference with MedSAM, a fine-tuned version of SAM on the medical domain.
- [Demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) for fine-tuning the model on custom data.

## SamConfig

[[autodoc]] SamConfig

## SamVisionConfig

[[autodoc]] SamVisionConfig

## SamMaskDecoderConfig

[[autodoc]] SamMaskDecoderConfig

## SamPromptEncoderConfig

[[autodoc]] SamPromptEncoderConfig

## SamProcessor

[[autodoc]] SamProcessor

## SamImageProcessor

[[autodoc]] SamImageProcessor

## SamModel

[[autodoc]] SamModel
    - forward

## TFSamModel

[[autodoc]] TFSamModel
    - call
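The automatic mask generation workflow mentioned in the resources above is also exposed as a pipeline. The following is a minimal sketch; the checkpoint, device, and `points_per_batch` value are just examples:

```python
from transformers import pipeline

# device=0 assumes a GPU is available; drop it to run on CPU
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
outputs = generator(img_url, points_per_batch=64)

masks = outputs["masks"]    # list of binary masks covering the whole image
scores = outputs["scores"]  # corresponding predicted IoU scores
```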
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # UniSpeech-SAT ## Overview The UniSpeech-SAT model was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu . The abstract from the paper is the following: *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisedly and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT). ## Usage tips - UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. - UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks. 
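For example, transcription with a CTC fine-tuned checkpoint can be sketched as follows; the checkpoint name and the dummy dataset are illustrative:

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechSatForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding with the tokenizer wrapped in the processor
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```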
## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## UniSpeechSatConfig [[autodoc]] UniSpeechSatConfig ## UniSpeechSat specific outputs [[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput ## UniSpeechSatModel [[autodoc]] UniSpeechSatModel - forward ## UniSpeechSatForCTC [[autodoc]] UniSpeechSatForCTC - forward ## UniSpeechSatForSequenceClassification [[autodoc]] UniSpeechSatForSequenceClassification - forward ## UniSpeechSatForAudioFrameClassification [[autodoc]] UniSpeechSatForAudioFrameClassification - forward ## UniSpeechSatForXVector [[autodoc]] UniSpeechSatForXVector - forward ## UniSpeechSatForPreTraining [[autodoc]] UniSpeechSatForPreTraining - forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # XLNet <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=xlnet"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlnet-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/xlnet-base-cased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLnet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order. The abstract from the paper is the following: *With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.* This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/). ## Usage tips - The specific attention pattern can be controlled at training and test time using the `perm_mask` input. - Due to the difficulty of training a fully auto-regressive model over various factorization order, XLNet is pretrained using only a sub-set of the output tokens as target which are selected with the `target_mapping` input. - To use XLNet for sequential decoding (i.e. not in fully bi-directional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see examples in *examples/pytorch/text-generation/run_generation.py*) - XLNet is one of the few models that has no sequence length limit. 
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed in the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,โ€ฆ,sequence length. - XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies. ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLNetConfig [[autodoc]] XLNetConfig ## XLNetTokenizer [[autodoc]] XLNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## XLNetTokenizerFast [[autodoc]] XLNetTokenizerFast ## XLNet specific outputs [[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput <frameworkcontent> <pt> ## XLNetModel [[autodoc]] XLNetModel - forward ## XLNetLMHeadModel [[autodoc]] XLNetLMHeadModel - forward ## XLNetForSequenceClassification [[autodoc]] XLNetForSequenceClassification - forward ## XLNetForMultipleChoice [[autodoc]] XLNetForMultipleChoice - forward ## XLNetForTokenClassification [[autodoc]] XLNetForTokenClassification - forward ## XLNetForQuestionAnsweringSimple [[autodoc]] XLNetForQuestionAnsweringSimple - forward ## XLNetForQuestionAnswering [[autodoc]] XLNetForQuestionAnswering - forward </pt> <tf> ## TFXLNetModel [[autodoc]] TFXLNetModel - call ## TFXLNetLMHeadModel [[autodoc]] TFXLNetLMHeadModel - call ## TFXLNetForSequenceClassification [[autodoc]] TFXLNetForSequenceClassification - call ## TFLNetForMultipleChoice [[autodoc]] TFXLNetForMultipleChoice - call ## TFXLNetForTokenClassification [[autodoc]] TFXLNetForTokenClassification - call ## TFXLNetForQuestionAnsweringSimple [[autodoc]] TFXLNetForQuestionAnsweringSimple - call </tf> </frameworkcontent>
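To make the `perm_mask`/`target_mapping` tips above concrete, the sketch below asks the model to predict the last token of a sentence while hiding it from all other positions (the checkpoint name is just an example):

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very cute", add_special_tokens=False)).unsqueeze(0)

# perm_mask[k, i, j] = 1 means token i may not attend to token j in batch k.
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # hide the last token from every position

# target_mapping selects which positions are predicted (here: only the last one).
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

next_token_logits = outputs.logits  # shape (batch_size, 1, vocab_size)
```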
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Optimize inference using torch.compile()

This guide aims to provide a benchmark on the inference speed-ups introduced with [`torch.compile()`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) for [computer vision models in 🤗 Transformers](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers&sort=trending).

## Benefits of torch.compile

Depending on the model and the GPU, `torch.compile()` yields up to 30% speed-up during inference. To use `torch.compile()`, simply install any version of `torch` above 2.0.

Compiling a model takes time, so it is most useful if you compile the model only once instead of every time you run inference. To compile any computer vision model of your choice, call `torch.compile()` on the model as shown below:

```diff
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
```

`compile()` comes with multiple modes for compiling, which essentially differ in compilation time and inference overhead. `max-autotune` takes longer than `reduce-overhead` but results in faster inference. The default mode is fastest for compilation but is not as efficient as `reduce-overhead` for inference time. In this guide, we used the default mode. You can learn more about the modes [here](https://pytorch.org/get-started/pytorch-2.0/#user-experience).

We benchmarked `torch.compile` with different computer vision models, tasks, types of hardware, and batch sizes on `torch` version 2.0.1.

## Benchmarking code

Below you can find the benchmarking code for each task. We warm up the GPU before inference and take the mean time of 300 inferences, using the same image each time.
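The exact benchmarking script is not reproduced in this guide; the per-task snippets below only show a single forward pass, and a measurement loop along the following lines (an illustrative sketch, not the original code) can be wrapped around them:

```python
import time
import torch

def benchmark(model, inputs, warmup=10, runs=300):
    # Warm up the GPU (and trigger compilation when the model was wrapped with torch.compile)
    with torch.no_grad():
        for _ in range(warmup):
            _ = model(**inputs)
    torch.cuda.synchronize()

    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(runs):
            _ = model(**inputs)
    torch.cuda.synchronize()

    return (time.perf_counter() - start) / runs * 1000  # mean latency in milliseconds
```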
### Image Classification with ViT

```python
import torch
from PIL import Image
import requests
import numpy as np
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)

processed_input = processor(image, return_tensors='pt').to(device="cuda")

with torch.no_grad():
    _ = model(**processed_input)
```

### Object Detection with DETR

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)

# DETR is a pure object detector: it only takes images, not text queries
inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**inputs)
```

### Image Segmentation with Segformer

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)

seg_inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**seg_inputs)
```

Below you can find the list of the models we benchmarked.

**Image Classification**
- [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
- [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k)
- [facebook/convnext-large-224](https://huggingface.co/facebook/convnext-large-224)
- [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50)

**Image Segmentation**
- [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [facebook/mask2former-swin-tiny-coco-panoptic](https://huggingface.co/facebook/mask2former-swin-tiny-coco-panoptic)
- [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade)
- [google/deeplabv3_mobilenet_v2_1.0_513](https://huggingface.co/google/deeplabv3_mobilenet_v2_1.0_513)

**Object Detection**
- [google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32)
- [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101)
- [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50)

Below you can find visualization of inference durations with and without `torch.compile()` and percentage improvements for each model in different hardware and batch sizes.
<div class="flex"> <div> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/a100_batch_comp.png" /> </div> <div> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_batch_comp.png" /> </div> <div> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/t4_batch_comp.png" /> </div> </div> <div class="flex"> <div> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_duration.png" /> </div> <div> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/A100_1_percentage.png" /> </div> </div> ![Duration Comparison on V100 with Batch Size of 1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/v100_1_duration.png) ![Percentage Improvement on T4 with Batch Size of 4](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/torch_compile/T4_4_percentage.png) Below you can find inference durations in milliseconds for each model with and without `compile()`. Note that OwlViT results in OOM in larger batch sizes. ### A100 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 9.325 | 7.584 | | Image Segmentation/Segformer | 11.759 | 10.500 | | Object Detection/OwlViT | 24.978 | 18.420 | | Image Classification/BeiT | 11.282 | 8.448 | | Object Detection/DETR | 34.619 | 19.040 | | Image Classification/ConvNeXT | 10.410 | 10.208 | | Image Classification/ResNet | 6.531 | 4.124 | | Image Segmentation/Mask2former | 60.188 | 49.117 | | Image Segmentation/Maskformer | 75.764 | 59.487 | | Image Segmentation/MobileNet | 8.583 | 3.974 | | Object Detection/Resnet-101 | 36.276 | 18.197 | | Object Detection/Conditional-DETR | 31.219 | 17.993 | ### A100 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 14.832 | 14.499 | | Image Segmentation/Segformer | 18.838 | 16.476 | | Image Classification/BeiT | 13.205 | 13.048 | | Object Detection/DETR | 48.657 | 32.418| | Image Classification/ConvNeXT | 22.940 | 21.631 | | Image Classification/ResNet | 6.657 | 4.268 | | Image Segmentation/Mask2former | 74.277 | 61.781 | | Image Segmentation/Maskformer | 180.700 | 159.116 | | Image Segmentation/MobileNet | 14.174 | 8.515 | | Object Detection/Resnet-101 | 68.101 | 44.998 | | Object Detection/Conditional-DETR | 56.470 | 35.552 | ### A100 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 40.944 | 40.010 | | Image Segmentation/Segformer | 37.005 | 31.144 | | Image Classification/BeiT | 41.854 | 41.048 | | Object Detection/DETR | 164.382 | 161.902 | | Image Classification/ConvNeXT | 82.258 | 75.561 | | Image Classification/ResNet | 7.018 | 5.024 | | Image Segmentation/Mask2former | 178.945 | 154.814 | | Image Segmentation/Maskformer | 638.570 | 579.826 | | Image Segmentation/MobileNet | 51.693 | 30.310 | | Object Detection/Resnet-101 | 232.887 | 155.021 | | Object Detection/Conditional-DETR | 180.491 | 124.032 | ### V100 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image 
Classification/ViT | 10.495 | 6.00 | | Image Segmentation/Segformer | 13.321 | 5.862 | | Object Detection/OwlViT | 25.769 | 22.395 | | Image Classification/BeiT | 11.347 | 7.234 | | Object Detection/DETR | 33.951 | 19.388 | | Image Classification/ConvNeXT | 11.623 | 10.412 | | Image Classification/ResNet | 6.484 | 3.820 | | Image Segmentation/Mask2former | 64.640 | 49.873 | | Image Segmentation/Maskformer | 95.532 | 72.207 | | Image Segmentation/MobileNet | 9.217 | 4.753 | | Object Detection/Resnet-101 | 52.818 | 28.367 | | Object Detection/Conditional-DETR | 39.512 | 20.816 | ### V100 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 15.181 | 14.501 | | Image Segmentation/Segformer | 16.787 | 16.188 | | Image Classification/BeiT | 15.171 | 14.753 | | Object Detection/DETR | 88.529 | 64.195 | | Image Classification/ConvNeXT | 29.574 | 27.085 | | Image Classification/ResNet | 6.109 | 4.731 | | Image Segmentation/Mask2former | 90.402 | 76.926 | | Image Segmentation/Maskformer | 234.261 | 205.456 | | Image Segmentation/MobileNet | 24.623 | 14.816 | | Object Detection/Resnet-101 | 134.672 | 101.304 | | Object Detection/Conditional-DETR | 97.464 | 69.739 | ### V100 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 52.209 | 51.633 | | Image Segmentation/Segformer | 61.013 | 55.499 | | Image Classification/BeiT | 53.938 | 53.581 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 109.682 | 100.771 | | Image Classification/ResNet | 14.857 | 12.089 | | Image Segmentation/Mask2former | 249.605 | 222.801 | | Image Segmentation/Maskformer | 831.142 | 743.645 | | Image Segmentation/MobileNet | 93.129 | 55.365 | | Object Detection/Resnet-101 | 482.425 | 361.843 | | Object Detection/Conditional-DETR | 344.661 | 255.298 | ### T4 (batch size: 1) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 16.520 | 15.786 | | Image Segmentation/Segformer | 16.116 | 14.205 | | Object Detection/OwlViT | 53.634 | 51.105 | | Image Classification/BeiT | 16.464 | 15.710 | | Object Detection/DETR | 73.100 | 53.99 | | Image Classification/ConvNeXT | 32.932 | 30.845 | | Image Classification/ResNet | 6.031 | 4.321 | | Image Segmentation/Mask2former | 79.192 | 66.815 | | Image Segmentation/Maskformer | 200.026 | 188.268 | | Image Segmentation/MobileNet | 18.908 | 11.997 | | Object Detection/Resnet-101 | 106.622 | 82.566 | | Object Detection/Conditional-DETR | 77.594 | 56.984 | ### T4 (batch size: 4) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 43.653 | 43.626 | | Image Segmentation/Segformer | 45.327 | 42.445 | | Image Classification/BeiT | 52.007 | 51.354 | | Object Detection/DETR | 277.850 | 268.003 | | Image Classification/ConvNeXT | 119.259 | 105.580 | | Image Classification/ResNet | 13.039 | 11.388 | | Image Segmentation/Mask2former | 201.540 | 184.670 | | Image Segmentation/Maskformer | 764.052 | 711.280 | | Image Segmentation/MobileNet | 74.289 | 48.677 | | Object Detection/Resnet-101 | 421.859 | 357.614 | | Object Detection/Conditional-DETR | 289.002 | 226.945 | ### T4 (batch size: 16) | **Task/Model** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:| | Image Classification/ViT | 163.914 | 160.907 | | Image 
Segmentation/Segformer | 192.412 | 163.620 | | Image Classification/BeiT | 188.978 | 187.976 | | Object Detection/DETR | OOM | OOM | | Image Classification/ConvNeXT | 422.886 | 388.078 | | Image Classification/ResNet | 44.114 | 37.604 | | Image Segmentation/Mask2former | 756.337 | 695.291 | | Image Segmentation/Maskformer | 2842.940 | 2656.88 | | Image Segmentation/MobileNet | 299.003 | 201.942 | | Object Detection/Resnet-101 | 1619.505 | 1262.758 | | Object Detection/Conditional-DETR | 1137.513 | 897.390| ## PyTorch Nightly We also benchmarked on PyTorch nightly (2.1.0dev, find the wheel [here](https://download.pytorch.org/whl/nightly/cu118)) and observed improvement in latency both for uncompiled and compiled models. ### A100 | **Task/Model** | **Batch Size** | **torch 2.0 - no compile** | **torch 2.0 -<br> compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 12.462 | 6.954 | | Image Classification/BeiT | 4 | 14.109 | 12.851 | | Image Classification/BeiT | 16 | 42.179 | 42.147 | | Object Detection/DETR | Unbatched | 30.484 | 15.221 | | Object Detection/DETR | 4 | 46.816 | 30.942 | | Object Detection/DETR | 16 | 163.749 | 163.706 | ### T4 | **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 14.408 | 14.052 | | Image Classification/BeiT | 4 | 47.381 | 46.604 | | Image Classification/BeiT | 16 | 42.179 | 42.147 | | Object Detection/DETR | Unbatched | 68.382 | 53.481 | | Object Detection/DETR | 4 | 269.615 | 204.785 | | Object Detection/DETR | 16 | OOM | OOM | ###ย V100 | **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:|:---:| | Image Classification/BeiT | Unbatched | 13.477 | 7.926 | | Image Classification/BeiT | 4 | 15.103 | 14.378 | | Image Classification/BeiT | 16 | 52.517 | 51.691 | | Object Detection/DETR | Unbatched | 28.706 | 19.077 | | Object Detection/DETR | 4 | 88.402 | 62.949| | Object Detection/DETR | 16 | OOM | OOM | ## Reduce Overhead We benchmarked `reduce-overhead` compilation mode for A100 and T4 in Nightly. ### A100 | **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:|:---:| | Image Classification/ConvNeXT | Unbatched | 11.758 | 7.335 | | Image Classification/ConvNeXT | 4 | 23.171 | 21.490 | | Image Classification/ResNet | Unbatched | 7.435 | 3.801 | | Image Classification/ResNet | 4 | 7.261 | 2.187 | | Object Detection/Conditional-DETR | Unbatched | 32.823 | 11.627 | | Object Detection/Conditional-DETR | 4 | 50.622 | 33.831 | | Image Segmentation/MobileNet | Unbatched | 9.869 | 4.244 | | Image Segmentation/MobileNet | 4 | 14.385 | 7.946 | ### T4 | **Task/Model** | **Batch Size** | **torch 2.0 - <br>no compile** | **torch 2.0 - <br>compile** | |:---:|:---:|:---:|:---:| | Image Classification/ConvNeXT | Unbatched | 32.137 | 31.84 | | Image Classification/ConvNeXT | 4 | 120.944 | 110.209 | | Image Classification/ResNet | Unbatched | 9.761 | 7.698 | | Image Classification/ResNet | 4 | 15.215 | 13.871 | | Object Detection/Conditional-DETR | Unbatched | 72.150 | 57.660 | | Object Detection/Conditional-DETR | 4 | 301.494 | 247.543 | | Image Segmentation/MobileNet | Unbatched | 22.266 | 19.339 | | Image Segmentation/MobileNet | 4 | 78.311 | 50.983 |
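The `reduce-overhead` runs above differ from the earlier benchmarks only in the mode passed to `torch.compile()`. As a brief sketch (the checkpoint here is a placeholder; any of the benchmarked models is compiled the same way):

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50").to("cuda")

# "reduce-overhead" uses CUDA graphs to cut per-call overhead; it costs more
# compilation time up front but helps most at small batch sizes.
model = torch.compile(model, mode="reduce-overhead")
```

`mode="max-autotune"` can be passed in the same way if you want the longest-compiling, fastest-running variant discussed at the start of this guide.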
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Train with a script

Along with the 🤗 Transformers [notebooks](./notebooks/README), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).

You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy) which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.

The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.

For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.

This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified.

## Setup

To successfully run the latest version of the example scripts, you have to **install 🤗 Transformers from source** in a new virtual environment:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
``` For older versions of the example scripts, click on the toggle below: <details> <summary>Examples for older versions of ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> Then switch your current clone of ๐Ÿค— Transformers to a specific version, like v3.5.1 for example: ```bash git checkout tags/v3.5.1 ``` After you've setup the correct library version, navigate to the example folder of your choice and install the example specific requirements: ```bash pip install -r requirements.txt ``` ## Run a script <frameworkcontent> <pt> The example script downloads and preprocesses a dataset from the ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) on an architecture that supports summarization. 
The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> The example script downloads and preprocesses a dataset from the ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a dataset using Keras on an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task. ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Distributed training and mixed precision The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features: - Add the `fp16` argument to enable mixed precision. - Set the number of GPUs to use with the `nproc_per_node` argument. ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` TensorFlow scripts utilize a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available. ## Run a script on a TPU <frameworkcontent> <pt> Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use. 
```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument. ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Run a script with ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have ๐Ÿค— Accelerate installed if you don't already have it: > Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts ```bash pip install git+https://github.com/huggingface/accelerate ``` Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. ๐Ÿค— Accelerate supported scripts will have a `task_no_trainer.py` file in the folder. Begin by running the following command to create and save a configuration file: ```bash accelerate config ``` Test your setup to make sure it is configured correctly: ```bash accelerate test ``` Now you are ready to launch the training: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Use a custom dataset The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments: - `train_file` and `validation_file` specify the path to your training and validation files. - `text_column` is the input text to summarize. - `summary_column` is the target text to output. A summarization script using a custom dataset would look like this: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Test a script It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. 
Use the following arguments to truncate the dataset to a maximum number of samples:

- `max_train_samples`
- `max_eval_samples`
- `max_predict_samples`

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --max_train_samples 50 \
    --max_eval_samples 50 \
    --max_predict_samples 50 \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check:

```bash
python examples/pytorch/summarization/run_summarization.py -h
```

## Resume training from checkpoint

Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.

The first method points `output_dir` at the directory that already contains your checkpoints (shown here with the placeholder `previous_output_dir`). As long as you remove `overwrite_output_dir`, training resumes from the latest checkpoint stored in that directory:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --output_dir previous_output_dir \
    --predict_with_generate
```

The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder.

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --resume_from_checkpoint path_to_specific_checkpoint \
    --predict_with_generate
```

## Share your model

All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin:

```bash
huggingface-cli login
```

Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`.

To give your repository a specific name, use the `push_to_hub_model_id` argument to add it. The repository will be automatically listed under your namespace.

The following example shows how to upload a model with a specific repository name:

```bash
python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --push_to_hub \
    --push_to_hub_model_id finetuned-t5-cnn_dailymail \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Object detection [[open-in-colab]] Object detection is the computer vision task of detecting instances (such as humans, buildings, or cars) in an image. Object detection models receive an image as input and output coordinates of the bounding boxes and associated labels of the detected objects. An image can contain multiple objects, each with its own bounding box and a label (e.g. it can have a car and a building), and each object can be present in different parts of an image (e.g. the image can have several cars). This task is commonly used in autonomous driving for detecting things like pedestrians, road signs, and traffic lights. Other applications include counting objects in images, image search, and more. In this guide, you will learn how to: 1. Finetune [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a model that combines a convolutional backbone with an encoder-decoder Transformer, on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset. 2. Use your finetuned model for inference. <Tip> The task illustrated in this tutorial is supported by the following model architectures: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos) <!--End of the generated tip--> </Tip> Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q datasets transformers evaluate timm albumentations ``` You'll use ๐Ÿค— Datasets to load a dataset from the Hugging Face Hub, ๐Ÿค— Transformers to train your model, and `albumentations` to augment the data. `timm` is currently required to load a convolutional backbone for the DETR model. We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the Hub. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load the CPPE-5 dataset The [CPPE-5 dataset](https://huggingface.co/datasets/cppe-5) contains images with annotations identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic. Start by loading the dataset: ```py >>> from datasets import load_dataset >>> cppe5 = load_dataset("cppe-5") >>> cppe5 DatasetDict({ train: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 1000 }) test: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 29 }) }) ``` You'll see that this dataset already comes with a training set containing 1000 images and a test set with 29 images. 
To get familiar with the data, explore what the examples look like.

```py
>>> cppe5["train"][0]
{'image_id': 15,
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>,
 'width': 943,
 'height': 663,
 'objects': {'id': [114, 115, 116, 117],
  'area': [3796, 1596, 152768, 81002],
  'bbox': [[302.0, 109.0, 73.0, 52.0],
   [810.0, 100.0, 57.0, 28.0],
   [160.0, 31.0, 248.0, 616.0],
   [741.0, 68.0, 202.0, 401.0]],
  'category': [4, 4, 0, 0]}}
```

The examples in the dataset have the following fields:

- `image_id`: the example image id
- `image`: a `PIL.Image.Image` object containing the image
- `width`: width of the image
- `height`: height of the image
- `objects`: a dictionary containing bounding box metadata for the objects in the image:
  - `id`: the annotation id
  - `area`: the area of the bounding box
  - `bbox`: the object's bounding box (in the [COCO format](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco))
  - `category`: the object's category, with possible values including `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` and `Mask (4)`

You may notice that the `bbox` field follows the COCO format, which is the format that the DETR model expects. However, the grouping of the fields inside `objects` differs from the annotation format DETR requires. You will need to apply some preprocessing transformations before using this data for training.

To get an even better understanding of the data, visualize an example in the dataset.

```py
>>> import numpy as np
>>> import os
>>> from PIL import Image, ImageDraw

>>> image = cppe5["train"][0]["image"]
>>> annotations = cppe5["train"][0]["objects"]
>>> draw = ImageDraw.Draw(image)

>>> categories = cppe5["train"].features["objects"].feature["category"].names

>>> id2label = {index: x for index, x in enumerate(categories, start=0)}
>>> label2id = {v: k for k, v in id2label.items()}

>>> for i in range(len(annotations["id"])):
...     box = annotations["bbox"][i]
...     class_idx = annotations["category"][i]
...     x, y, w, h = tuple(box)
...     # Check if coordinates are normalized or not
...     if max(box) > 1.0:
...         # Coordinates are un-normalized, no need to re-scale them
...         x1, y1 = int(x), int(y)
...         x2, y2 = int(x + w), int(y + h)
...     else:
...         # Coordinates are normalized, re-scale them using the image size
...         x1 = int(x * image.width)
...         y1 = int(y * image.height)
...         x2 = int((x + w) * image.width)
...         y2 = int((y + h) * image.height)
...     draw.rectangle((x1, y1, x2, y2), outline="red", width=1)
...     draw.text((x1, y1), id2label[class_idx], fill="white")

>>> image
```

<div class="flex justify-center">
    <img src="https://i.imgur.com/TdaqPJO.png" alt="CPPE-5 Image Example"/>
</div>

To visualize the bounding boxes with associated labels, you can get the labels from the dataset's metadata, specifically the `category` field.
You'll also want to create dictionaries that map a label id to a label class (`id2label`) and the other way around (`label2id`).
You can use them later when setting up the model. Including these maps will make your model reusable by others if you share it on the Hugging Face Hub.

Note that the drawing code above assumes the boxes are in `XYWH` format (the x,y coordinates of the top-left corner plus the width and height of the box); it would need to be adapted for other formats such as `(x1, y1, x2, y2)`.

As a final step of getting familiar with the data, explore it for potential issues. One common problem with datasets for object detection is bounding boxes that "stretch" beyond the edge of the image.
Such "runaway" bounding boxes can raise errors during training and should be addressed at this stage. There are a few examples with this issue in this dataset. To keep things simple in this guide, we remove these images from the data. ```py >>> remove_idx = [590, 821, 822, 875, 876, 878, 879] >>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx] >>> cppe5["train"] = cppe5["train"].select(keep) ``` ## Preprocess the data To finetune a model, you must preprocess the data you plan to use to match precisely the approach used for the pre-trained model. [`AutoImageProcessor`] takes care of processing image data to create `pixel_values`, `pixel_mask`, and `labels` that a DETR model can train with. The image processor has some attributes that you won't have to worry about: - `image_mean = [0.485, 0.456, 0.406 ]` - `image_std = [0.229, 0.224, 0.225]` These are the mean and standard deviation used to normalize images during the model pre-training. These values are crucial to replicate when doing inference or finetuning a pre-trained image model. Instantiate the image processor from the same checkpoint as the model you want to finetune. ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "facebook/detr-resnet-50" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` Before passing the images to the `image_processor`, apply two preprocessing transformations to the dataset: - Augmenting images - Reformatting annotations to meet DETR expectations First, to make sure the model does not overfit on the training data, you can apply image augmentation with any data augmentation library. Here we use [Albumentations](https://albumentations.ai/docs/) ... This library ensures that transformations affect the image and update the bounding boxes accordingly. The ๐Ÿค— Datasets library documentation has a detailed [guide on how to augment images for object detection](https://huggingface.co/docs/datasets/object_detection), and it uses the exact same dataset as an example. Apply the same approach here, resize each image to (480, 480), flip it horizontally, and brighten it: ```py >>> import albumentations >>> import numpy as np >>> import torch >>> transform = albumentations.Compose( ... [ ... albumentations.Resize(480, 480), ... albumentations.HorizontalFlip(p=1.0), ... albumentations.RandomBrightnessContrast(p=1.0), ... ], ... bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]), ... ) ``` The `image_processor` expects the annotations to be in the following format: `{'image_id': int, 'annotations': List[Dict]}`, where each dictionary is a COCO object annotation. Let's add a function to reformat annotations for a single example: ```py >>> def formatted_anns(image_id, category, area, bbox): ... annotations = [] ... for i in range(0, len(category)): ... new_ann = { ... "image_id": image_id, ... "category_id": category[i], ... "isCrowd": 0, ... "area": area[i], ... "bbox": list(bbox[i]), ... } ... annotations.append(new_ann) ... return annotations ``` Now you can combine the image and annotation transformations to use on a batch of examples: ```py >>> # transforming a batch >>> def transform_aug_ann(examples): ... image_ids = examples["image_id"] ... images, bboxes, area, categories = [], [], [], [] ... for image, objects in zip(examples["image"], examples["objects"]): ... image = np.array(image.convert("RGB"))[:, :, ::-1] ... out = transform(image=image, bboxes=objects["bbox"], category=objects["category"]) ... 
area.append(objects["area"]) ... images.append(out["image"]) ... bboxes.append(out["bboxes"]) ... categories.append(out["category"]) ... targets = [ ... {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)} ... for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes) ... ] ... return image_processor(images=images, annotations=targets, return_tensors="pt") ``` Apply this preprocessing function to the entire dataset using ๐Ÿค— Datasets [`~datasets.Dataset.with_transform`] method. This method applies transformations on the fly when you load an element of the dataset. At this point, you can check what an example from the dataset looks like after the transformations. You should see a tensor with `pixel_values`, a tensor with `pixel_mask`, and `labels`. ```py >>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann) >>> cppe5["train"][15] {'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638], ..., [-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]], [[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256], ..., [-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]], [[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302], ..., [-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]), 'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1]]), 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}} ``` You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn't complete yet. In the final step, create a custom `collate_fn` to batch images together. Pad images (which are now `pixel_values`) to the largest image in a batch, and create a corresponding `pixel_mask` to indicate which pixels are real (1) and which are padding (0). ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... return batch ``` ## Training the DETR model You have done most of the heavy lifting in the previous sections, so now you are ready to train your model! The images in this dataset are still quite large, even after resizing. This means that finetuning this model will require at least one GPU. Training involves the following steps: 1. 
Load the model with [`AutoModelForObjectDetection`] using the same checkpoint as in the preprocessing. 2. Define your training hyperparameters in [`TrainingArguments`]. 3. Pass the training arguments to [`Trainer`] along with the model, dataset, image processor, and data collator. 4. Call [`~Trainer.train`] to finetune your model. When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the `label2id` and `id2label` maps that you created earlier from the dataset's metadata. Additionally, we specify `ignore_mismatched_sizes=True` to replace the existing classification head with a new one. ```py >>> from transformers import AutoModelForObjectDetection >>> model = AutoModelForObjectDetection.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ignore_mismatched_sizes=True, ... ) ``` In the [`TrainingArguments`] use `output_dir` to specify where to save your model, then configure hyperparameters as you see fit. It is important you do not remove unused columns because this will drop the image column. Without the image column, you can't create `pixel_values`. For this reason, set `remove_unused_columns` to `False`. If you wish to share your model by pushing to the Hub, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="detr-resnet-50_finetuned_cppe5", ... per_device_train_batch_size=8, ... num_train_epochs=10, ... fp16=True, ... save_steps=200, ... logging_steps=50, ... learning_rate=1e-5, ... weight_decay=1e-4, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` Finally, bring everything together, and call [`~transformers.Trainer.train`]: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=collate_fn, ... train_dataset=cppe5["train"], ... tokenizer=image_processor, ... ) >>> trainer.train() ``` If you have set `push_to_hub` to `True` in the `training_args`, the training checkpoints are pushed to the Hugging Face Hub. Upon training completion, push the final model to the Hub as well by calling the [`~transformers.Trainer.push_to_hub`] method. ```py >>> trainer.push_to_hub() ``` ## Evaluate Object detection models are commonly evaluated with a set of <a href="https://cocodataset.org/#detection-eval">COCO-style metrics</a>. You can use one of the existing metrics implementations, but here you'll use the one from `torchvision` to evaluate the final model that you pushed to the Hub. To use the `torchvision` evaluator, you'll need to prepare a ground truth COCO dataset. The API to build a COCO dataset requires the data to be stored in a certain format, so you'll need to save images and annotations to disk first. Just like when you prepared your data for training, the annotations from the `cppe5["test"]` need to be formatted. However, images should stay as they are. The evaluation step requires a bit of work, but it can be split in three major steps. First, prepare the `cppe5["test"]` set: format the annotations and save the data to disk. ```py >>> import json >>> # format annotations the same as for training, no need for data augmentation >>> def val_formatted_anns(image_id, objects): ... annotations = [] ... for i in range(0, len(objects["id"])): ... new_ann = { ... "id": objects["id"][i], ... "category_id": objects["category"][i], ... "iscrowd": 0, ... 
"image_id": image_id, ... "area": objects["area"][i], ... "bbox": objects["bbox"][i], ... } ... annotations.append(new_ann) ... return annotations >>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects >>> def save_cppe5_annotation_file_images(cppe5): ... output_json = {} ... path_output_cppe5 = f"{os.getcwd()}/cppe5/" ... if not os.path.exists(path_output_cppe5): ... os.makedirs(path_output_cppe5) ... path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json") ... categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label] ... output_json["images"] = [] ... output_json["annotations"] = [] ... for example in cppe5: ... ann = val_formatted_anns(example["image_id"], example["objects"]) ... output_json["images"].append( ... { ... "id": example["image_id"], ... "width": example["image"].width, ... "height": example["image"].height, ... "file_name": f"{example['image_id']}.png", ... } ... ) ... output_json["annotations"].extend(ann) ... output_json["categories"] = categories_json ... with open(path_anno, "w") as file: ... json.dump(output_json, file, ensure_ascii=False, indent=4) ... for im, img_id in zip(cppe5["image"], cppe5["image_id"]): ... path_img = os.path.join(path_output_cppe5, f"{img_id}.png") ... im.save(path_img) ... return path_output_cppe5, path_anno ``` Next, prepare an instance of a `CocoDetection` class that can be used with `cocoevaluator`. ```py >>> import torchvision >>> class CocoDetection(torchvision.datasets.CocoDetection): ... def __init__(self, img_folder, image_processor, ann_file): ... super().__init__(img_folder, ann_file) ... self.image_processor = image_processor ... def __getitem__(self, idx): ... # read in PIL image and target in COCO format ... img, target = super(CocoDetection, self).__getitem__(idx) ... # preprocess image and target: converting target to DETR format, ... # resizing + normalization of both image and target) ... image_id = self.ids[idx] ... target = {"image_id": image_id, "annotations": target} ... encoding = self.image_processor(images=img, annotations=target, return_tensors="pt") ... pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension ... target = encoding["labels"][0] # remove batch dimension ... return {"pixel_values": pixel_values, "labels": target} >>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"]) >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno) ``` Finally, load the metrics and run the evaluation. ```py >>> import evaluate >>> from tqdm import tqdm >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco) >>> val_dataloader = torch.utils.data.DataLoader( ... test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn ... ) >>> with torch.no_grad(): ... for idx, batch in enumerate(tqdm(val_dataloader)): ... pixel_values = batch["pixel_values"] ... pixel_mask = batch["pixel_mask"] ... labels = [ ... {k: v for k, v in t.items()} for t in batch["labels"] ... ] # these are in DETR format, resized + normalized ... # forward pass ... outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask) ... orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0) ... 
results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to Pascal VOC format (xmin, ymin, xmax, ymax) ... module.add(prediction=results, reference=labels) ... del batch >>> results = module.compute() >>> print(results) Accumulating evaluation results... DONE (t=0.08s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590 ``` These results can be further improved by adjusting the hyperparameters in [`~transformers.TrainingArguments`]. Give it a go! ## Inference Now that you have finetuned a DETR model, evaluated it, and uploaded it to the Hugging Face Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [`Pipeline`]. Instantiate a pipeline for object detection with your model, and pass an image to it: ```py >>> from transformers import pipeline >>> import requests >>> url = "https://i.imgur.com/2lnWoly.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5") >>> obj_detector(image) ``` You can also manually replicate the results of the pipeline if you'd like: ```py >>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> with torch.no_grad(): ... inputs = image_processor(images=image, return_tensors="pt") ... outputs = model(**inputs) ... target_sizes = torch.tensor([image.size[::-1]]) ... results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08] Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9] ``` Let's plot the result: ```py >>> draw = ImageDraw.Draw(image) >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... x, y, x2, y2 = tuple(box) ... draw.rectangle((x, y, x2, y2), outline="red", width=1) ... draw.text((x, y), model.config.id2label[label.item()], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/4QZnf9A.png" alt="Object detection result on a new image"/> </div>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Export to TFLite

[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension.

🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the `exporters.tflite` module. For the list of supported model architectures, please refer to the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).

To export a model to TFLite, install the required dependencies:

```bash
pip install optimum[exporters-tf]
```

To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view the help in the command line:

```bash
optimum-cli export tflite --help
```

To export a model's checkpoint from the 🤗 Hub, for example, `bert-base-uncased`, run the following command:

```bash
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
```

You should see the logs indicating progress and showing where the resulting `model.tflite` is saved, like this:

```bash
Validating TFLite model...
    -[✓] TFLite model output names match reference model (logits)
    - Validating TFLite Model output "logits":
        -[✓] (1, 128, 30522) matches (1, 128, 30522)
        -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
 The exported model was saved at: bert_tflite
```

The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on 🤗 Hub.
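The guide stops at the export step. If you want to sanity-check the exported file, it can be loaded with TensorFlow's built-in interpreter. The snippet below is a hedged sketch rather than part of the official workflow: it assumes the `bert_tflite/model.tflite` path produced above, a locally installed `tensorflow`, and that dummy zero-valued inputs are acceptable for a shape check.

```python
import numpy as np
import tensorflow as tf

# Load the exported model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy inputs matching the fixed shapes baked in at export time (sequence length 128)
for detail in input_details:
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # expected: (1, 128, 30522) for bert-base-uncased
```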
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Compartir modelos personalizados La biblioteca ๐Ÿค— Transformers estรก diseรฑada para ser fรกcilmente ampliable. Cada modelo estรก completamente codificado sin abstracciรณn en una subcarpeta determinada del repositorio, por lo que puedes copiar fรกcilmente un archivo del modelo y ajustarlo segรบn tus necesidades. Si estรกs escribiendo un modelo completamente nuevo, podrรญa ser mรกs fรกcil comenzar desde cero. En este tutorial, te mostraremos cรณmo escribir un modelo personalizado y su configuraciรณn para que pueda usarse dentro de Transformers, y cรณmo puedes compartirlo con la comunidad (con el cรณdigo en el que se basa) para que cualquiera pueda usarlo, incluso si no estรก presente en la biblioteca ๐Ÿค— Transformers. Ilustraremos todo esto con un modelo ResNet, envolviendo la clase ResNet de la [biblioteca timm](https://github.com/rwightman/pytorch-image-models) en un [`PreTrainedModel`]. ## Escribir una configuraciรณn personalizada Antes de adentrarnos en el modelo, primero escribamos su configuraciรณn. La configuraciรณn de un modelo es un objeto que contendrรก toda la informaciรณn necesaria para construir el modelo. Como veremos en la siguiente secciรณn, el modelo solo puede tomar un `config` para ser inicializado, por lo que realmente necesitamos que ese objeto estรฉ lo mรกs completo posible. En nuestro ejemplo, tomaremos un par de argumentos de la clase ResNet que tal vez queramos modificar. Las diferentes configuraciones nos darรกn los diferentes tipos de ResNet que son posibles. Luego simplemente almacenamos esos argumentos despuรฉs de verificar la validez de algunos de ellos. 
```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` Las tres cosas importantes que debes recordar al escribir tu propia configuraciรณn son las siguientes: - tienes que heredar de `PretrainedConfig`, - el `__init__` de tu `PretrainedConfig` debe aceptar cualquier `kwargs`, - esos `kwargs` deben pasarse a la superclase `__init__`. La herencia es para asegurarte de obtener toda la funcionalidad de la biblioteca ๐Ÿค— Transformers, mientras que las otras dos restricciones provienen del hecho de que una `PretrainedConfig` tiene mรกs campos que los que estรกs configurando. Al recargar una `config` con el mรฉtodo `from_pretrained`, esos campos deben ser aceptados por tu `config` y luego enviados a la superclase. Definir un `model_type` para tu configuraciรณn (en este caso `model_type="resnet"`) no es obligatorio, a menos que quieras registrar tu modelo con las clases automรกticas (ver la รบltima secciรณn). Una vez hecho esto, puedes crear y guardar fรกcilmente tu configuraciรณn como lo harรญas con cualquier otra configuraciรณn de un modelo de la biblioteca. Asรญ es como podemos crear una configuraciรณn resnet50d y guardarla: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` Esto guardarรก un archivo llamado `config.json` dentro de la carpeta `custom-resnet`. Luego puedes volver a cargar tu configuraciรณn con el mรฉtodo `from_pretrained`: ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` Tambiรฉn puedes usar cualquier otro mรฉtodo de la clase [`PretrainedConfig`], como [`~PretrainedConfig.push_to_hub`], para cargar directamente tu configuraciรณn en el Hub. ## Escribir un modelo personalizado Ahora que tenemos nuestra configuraciรณn de ResNet, podemos seguir escribiendo el modelo. En realidad escribiremos dos: una que extrae las caracterรญsticas ocultas de un grupo de imรกgenes (como [`BertModel`]) y una que es adecuada para clasificaciรณn de imagenes (como [`BertForSequenceClassification`]). Como mencionamos antes, solo escribiremos un envoltura (_wrapper_) libre del modelo para simplificar este ejemplo. Lo รบnico que debemos hacer antes de escribir esta clase es un mapeo entre los tipos de bloques y las clases de bloques reales. 
Luego se define el modelo desde la configuración pasando todo a la clase `ResNet`:

```py
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig


BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}


class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
```

Para el modelo que clasificará las imágenes, solo cambiamos el método de avance (es decir, el método `forward`):

```py
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            # la pérdida se calcula con torch.nn.functional.cross_entropy
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
```

En ambos casos, observa cómo heredamos de `PreTrainedModel` y llamamos a la inicialización de la superclase con `config` (un poco como cuando escribes un `torch.nn.Module`). La línea que establece `config_class` no es obligatoria, a menos que quieras registrar tu modelo con las clases automáticas (consulta la última sección).

<Tip>

Si tu modelo es muy similar a un modelo dentro de la biblioteca, puedes reutilizar la misma configuración de ese modelo.

</Tip>

Puedes hacer que tu modelo devuelva lo que quieras, pero devolver un diccionario como lo hicimos para `ResnetModelForImageClassification`, con la pérdida (`loss`) incluida cuando se pasan las etiquetas, hará que tu modelo se pueda usar directamente dentro de la clase [`Trainer`]. Usar otro formato de salida está bien, siempre y cuando estés planeando usar tu propio bucle de entrenamiento u otra biblioteca para el entrenamiento.

Ahora que tenemos nuestra clase, vamos a crear un modelo:

```py
resnet50d = ResnetModelForImageClassification(resnet50d_config)
```

Nuevamente, puedes usar cualquiera de los métodos de [`PreTrainedModel`], como [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`]. Usaremos el segundo en la siguiente sección y veremos cómo pasar los pesos del modelo junto con el código de nuestro modelo. Pero primero, carguemos algunos pesos previamente entrenados dentro de nuestro modelo.

En tu caso de uso, probablemente estarás entrenando tu modelo personalizado con tus propios datos. Para ir rápido en este tutorial, usaremos la versión preentrenada de resnet50d.
Dado que nuestro modelo es solo un envoltorio alrededor del resnet50d original, serรก fรกcil transferir esos pesos: ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Ahora veamos cรณmo asegurarnos de que cuando hacemos [`~PreTrainedModel.save_pretrained`] o [`~PreTrainedModel.push_to_hub`], se guarda el cรณdigo del modelo. ## Enviar el cรณdigo al _Hub_ <Tip warning={true}> Esta _API_ es experimental y puede tener algunos cambios leves en las prรณximas versiones. </Tip> Primero, asegรบrate de que tu modelo estรฉ completamente definido en un archivo `.py`. Puedes basarte en importaciones relativas a otros archivos, siempre que todos los archivos estรฉn en el mismo directorio (aรบn no admitimos submรณdulos para esta caracterรญstica). Para nuestro ejemplo, definiremos un archivo `modeling_resnet.py` y un archivo `configuration_resnet.py` en una carpeta del directorio de trabajo actual llamado `resnet_model`. El archivo de configuraciรณn contiene el cรณdigo de `ResnetConfig` y el archivo del modelo contiene el cรณdigo de `ResnetModel` y `ResnetModelForImageClassification`. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` El `__init__.py` puede estar vacรญo, solo estรก ahรญ para que Python detecte que `resnet_model` se puede usar como un mรณdulo. <Tip warning={true}> Si copias archivos del modelo desde la biblioteca, deberรกs reemplazar todas las importaciones relativas en la parte superior del archivo para importarlos desde el paquete `transformers`. </Tip> Ten en cuenta que puedes reutilizar (o subclasificar) una configuraciรณn o modelo existente. Para compartir tu modelo con la comunidad, sigue estos pasos: primero importa el modelo y la configuraciรณn de ResNet desde los archivos reciรฉn creados: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` Luego, debes decirle a la biblioteca que deseas copiar el cรณdigo de esos objetos cuando usas el mรฉtodo `save_pretrained` y registrarlos correctamente con una determinada clase automรกtica (especialmente para modelos), simplemente ejecuta: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` Ten en cuenta que no es necesario especificar una clase automรกtica para la configuraciรณn (solo hay una clase automรกtica para ellos, [`AutoConfig`]), pero es diferente para los modelos. Tu modelo personalizado podrรญa ser adecuado para muchas tareas diferentes, por lo que debes especificar cuรกl de las clases automรกticas es la correcta para tu modelo. A continuaciรณn, vamos a crear la configuraciรณn y los modelos como lo hicimos antes: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` Ahora, para enviar el modelo al Hub, asegรบrate de haber iniciado sesiรณn. 
Ejecuta en tu terminal: ```bash huggingface-cli login ``` o desde un _notebook_: ```py from huggingface_hub import notebook_login notebook_login() ``` Luego puedes ingresar a tu propio espacio (o una organizaciรณn de la que seas miembro) de esta manera: ```py resnet50d.push_to_hub("custom-resnet50d") ``` Ademรกs de los pesos del modelo y la configuraciรณn en formato json, esto tambiรฉn copiรณ los archivos `.py` del modelo y la configuraciรณn en la carpeta `custom-resnet50d` y subiรณ el resultado al Hub. Puedes verificar el resultado en este [repositorio de modelos](https://huggingface.co/sgugger/custom-resnet50d). Consulta el tutorial sobre cรณmo [compartir modelos](model_sharing) para obtener mรกs informaciรณn sobre el mรฉtodo para subir modelos al Hub. ## Usar un modelo con cรณdigo personalizado Puedes usar cualquier configuraciรณn, modelo o _tokenizador_ con archivos de cรณdigo personalizado en tu repositorio con las clases automรกticas y el mรฉtodo `from_pretrained`. Todos los archivos y cรณdigos cargados en el Hub se analizan en busca de malware (consulta la documentaciรณn de [seguridad del Hub](https://huggingface.co/docs/hub/security#malware-scanning) para obtener mรกs informaciรณn), pero aรบn debes revisar el cรณdigo del modelo y el autor para evitar la ejecuciรณn de cรณdigo malicioso en tu computadora. Configura `trust_remote_code=True` para usar un modelo con cรณdigo personalizado: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` Tambiรฉn se recomienda encarecidamente pasar un _hash_ de confirmaciรณn como una "revisiรณn" para asegurarte de que el autor de los modelos no actualizรณ el cรณdigo con algunas lรญneas nuevas maliciosas (a menos que confรญes plenamente en los autores de los modelos). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Ten en cuenta que al navegar por el historial de confirmaciones del repositorio del modelo en Hub, hay un botรณn para copiar fรกcilmente el hash de confirmaciรณn de cualquier _commit_. ## Registrar un model con cรณdigo personalizado a las clases automรกticas Si estรกs escribiendo una biblioteca que amplรญa ๐Ÿค— Transformers, es posible que quieras ampliar las clases automรกticas para incluir tu propio modelo. Esto es diferente de enviar el cรณdigo al Hub en el sentido de que los usuarios necesitarรกn importar tu biblioteca para obtener los modelos personalizados (al contrario de descargar automรกticamente el cรณdigo del modelo desde Hub). 
Siempre que tu configuraciรณn tenga un atributo `model_type` que sea diferente de los tipos de modelos existentes, y que tus clases modelo tengan los atributos `config_class` correctos, puedes agregarlos a las clases automรกticas de la siguiente manera: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` Ten en cuenta que el primer argumento utilizado al registrar tu configuraciรณn personalizada en [`AutoConfig`] debe coincidir con el `model_type` de tu configuraciรณn personalizada, y el primer argumento utilizado al registrar tus modelos personalizados en cualquier clase del modelo automรกtico debe coincidir con el `config_class ` de esos modelos.
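A modo de comprobación rápida (y asumiendo que `resnet50d` sigue siendo el modelo creado en las secciones anteriores; el nombre de la carpeta local es solo ilustrativo), una vez registradas las clases puedes guardar tu modelo y volver a cargarlo a través de las clases automáticas, como con cualquier otro modelo de la biblioteca:

```py
resnet50d.save_pretrained("resnet50d-local")

# Gracias al registro anterior, las clases automáticas resuelven la arquitectura correcta
config = AutoConfig.from_pretrained("resnet50d-local")
print(type(config))  # ResnetConfig

model = AutoModelForImageClassification.from_pretrained("resnet50d-local")
print(type(model))  # ResnetModelForImageClassification
```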
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Entrenamiento con scripts Junto con los [notebooks](./noteboks/README) de ๐Ÿค— Transformers, tambiรฉn hay scripts con ejemplos que muestran cรณmo entrenar un modelo para una tarea en [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), o [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax). Tambiรฉn encontrarรกs scripts que hemos usado en nuestros [proyectos de investigaciรณn](https://github.com/huggingface/transformers/tree/main/examples/research_projects) y [ejemplos pasados](https://github.com/huggingface/transformers/tree/main/examples/legacy) que en su mayorรญa son aportados por la comunidad. Estos scripts no se mantienen activamente y requieren una versiรณn especรญfica de ๐Ÿค— Transformers que probablemente sea incompatible con la รบltima versiรณn de la biblioteca. No se espera que los scripts de ejemplo funcionen de inmediato en todos los problemas, y es posible que debas adaptar el script al problema que estรกs tratando de resolver. Para ayudarte con esto, la mayorรญa de los scripts exponen completamente cรณmo se preprocesan los datos, lo que te permite editarlos segรบn sea necesario para tu caso de uso. Para cualquier caracterรญstica que te gustarรญa implementar en un script de ejemplo, por favor discรบtelo en el [foro](https://discuss.huggingface.co/) o con un [issue](https://github.com/huggingface/transformers/issues) antes de enviar un Pull Request. Si bien agradecemos las correcciones de errores, es poco probable que fusionemos un Pull Request que agregue mรกs funcionalidad a costa de la legibilidad. Esta guรญa te mostrarรก cรณmo ejecutar un ejemplo de un script de entrenamiento para resumir texto en [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) y [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). Se espera que todos los ejemplos funcionen con ambos frameworks a menos que se especifique lo contrario. ## Configuraciรณn Para ejecutar con รฉxito la รบltima versiรณn de los scripts de ejemplo debes **instalar ๐Ÿค— Transformers desde su fuente** en un nuevo entorno virtual: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` Para versiones anteriores de los scripts de ejemplo, haz clic en alguno de los siguientes links: <details> <summary>Ejemplos de versiones anteriores de ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> Luego cambia tu clon actual de ๐Ÿค— Transformers a una versiรณn especรญfica, por ejemplo v3.5.1: ```bash git checkout tags/v3.5.1 ``` Una vez que hayas configurado la versiรณn correcta de la biblioteca, ve a la carpeta de ejemplo de tu elecciรณn e instala los requisitos especรญficos del ejemplo: ```bash pip install -r requirements.txt ``` ## Ejecutar un script <frameworkcontent> <pt> El script de ejemplo descarga y preprocesa un conjunto de datos de la biblioteca ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Luego, el script ajusta un conjunto de datos con [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) en una arquitectura que soporta la tarea de resumen. 
El siguiente ejemplo muestra cรณmo ajustar un [T5-small](https://huggingface.co/t5-small) en el conjunto de datos [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). El modelo T5 requiere un argumento adicional `source_prefix` debido a cรณmo fue entrenado. Este aviso le permite a T5 saber que se trata de una tarea de resumir. ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> El script de ejemplo descarga y preprocesa un conjunto de datos de la biblioteca ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/). Luego, el script ajusta un conjunto de datos utilizando Keras en una arquitectura que soporta la tarea de resumir. El siguiente ejemplo muestra cรณmo ajustar un [T5-small](https://huggingface.co/t5-small) en el conjunto de datos [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail). El modelo T5 requiere un argumento adicional `source_prefix` debido a cรณmo fue entrenado. Este aviso le permite a T5 saber que se trata de una tarea de resumir. ```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Entrenamiento distribuido y de precisiรณn mixta [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) admite un entrenamiento distribuido y de precisiรณn mixta, lo que significa que tambiรฉn puedes usarlo en un script. Para habilitar ambas caracterรญsticas: - Agrega el argumento `fp16` para habilitar la precisiรณn mixta. - Establece la cantidad de GPU que se usarรก con el argumento `nproc_per_node`. ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` Los scripts de TensorFlow utilizan [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) para el entrenamiento distribuido, y no es necesario agregar argumentos adicionales al script de entrenamiento. El script de TensorFlow utilizarรก mรบltiples GPUs de forma predeterminada si estรกn disponibles. ## Ejecutar un script en una TPU <frameworkcontent> <pt> Las Unidades de Procesamiento de Tensor (TPUs) estรกn diseรฑadas especรญficamente para acelerar el rendimiento. PyTorch admite TPU con el compilador de aprendizaje profundo [XLA](https://www.tensorflow.org/xla) (consulta [aquรญ](https://github.com/pytorch/xla/blob/master/README.md) para obtener mรกs detalles). Para usar una TPU, inicia el script `xla_spawn.py` y usa el argumento `num_cores` para establecer la cantidad de nรบcleos de TPU que deseas usar. 
```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Las Unidades de Procesamiento de Tensor (TPUs) estรกn diseรฑadas especรญficamente para acelerar el rendimiento. TensorFlow utiliza [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) para entrenar en TPUs. Para usar una TPU, pasa el nombre del recurso de la TPU al argumento `tpu` ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Ejecutar un script con ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) es una biblioteca exclusiva de PyTorch que ofrece un mรฉtodo unificado para entrenar un modelo en varios tipos de configuraciones (solo CPU, GPU mรบltiples, TPU) mientras mantiene una visibilidad completa en el ciclo de entrenamiento de PyTorch. Asegรบrate de tener ๐Ÿค— Accelerate instalado si aรบn no lo tienes: > Nota: Como Accelerate se estรก desarrollando rรกpidamente, debes instalar la versiรณn git de Accelerate para ejecutar los scripts ```bash pip install git+https://github.com/huggingface/accelerate ``` En lugar del script `run_summarization.py`, debes usar el script `run_summarization_no_trainer.py`. Los scripts compatibles con ๐Ÿค— Accelerate tendrรกn un archivo `task_no_trainer.py` en la carpeta. Comienza ejecutando el siguiente comando para crear y guardar un archivo de configuraciรณn: ```bash accelerate config ``` Prueba tu configuraciรณn para asegurarte que estรก configurada correctamente: ```bash accelerate test ``` Todo listo para iniciar el entrenamiento: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Usar un conjunto de datos personalizado El script de la tarea resumir admite conjuntos de datos personalizados siempre que sean un archivo CSV o JSON Line. Cuando uses tu propio conjunto de datos, necesitas especificar varios argumentos adicionales: - `train_file` y `validation_file` especifican la ruta a tus archivos de entrenamiento y validaciรณn. - `text_column` es el texto de entrada para resumir. - `summary_column` es el texto de destino para la salida. 
Un script para resumir que utiliza un conjunto de datos personalizado se vera asรญ: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Prueba un script A veces, es una buena idea ejecutar tu secuencia de comandos en una cantidad menor de ejemplos para asegurarte de que todo funciona como se espera antes de comprometerte con un conjunto de datos completo, lo que puede demorar horas en completarse. Utiliza los siguientes argumentos para truncar el conjunto de datos a un nรบmero mรกximo de muestras: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` No todos los scripts de ejemplo admiten el argumento `max_predict_samples`. Puede que desconozcas si la secuencia de comandos admite este argumento, agrega `-h` para verificar: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## Reanudar el entrenamiento desde el punto de control Otra opciรณn รบtil para habilitar es reanudar el entrenamiento desde un punto de control anterior. Esto asegurarรก que puedas continuar donde lo dejaste sin comenzar de nuevo si tu entrenamiento se interrumpe. Hay dos mรฉtodos para reanudar el entrenamiento desde un punto de control. El primer mรฉtodo utiliza el argumento `output_dir previous_output_dir` para reanudar el entrenamiento desde el รบltimo punto de control almacenado en `output_dir`. En este caso, debes eliminar `overwrite_output_dir`: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` El segundo mรฉtodo utiliza el argumento `resume_from_checkpoint path_to_specific_checkpoint` para reanudar el entrenamiento desde una carpeta de punto de control especรญfica. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## Comparte tu modelo Todos los scripts pueden cargar tu modelo final en el [Model Hub](https://huggingface.co/models). Asegรบrate de haber iniciado sesiรณn en Hugging Face antes de comenzar: ```bash huggingface-cli login ``` Luego agrega el argumento `push_to_hub` al script. 
Este argumento crearรก un repositorio con tu nombre de usuario Hugging Face y el nombre de la carpeta especificado en `output_dir`. Para darle a tu repositorio un nombre especรญfico, usa el argumento `push_to_hub_model_id` para aรฑadirlo. El repositorio se incluirรก automรกticamente en tu namespace. El siguiente ejemplo muestra cรณmo cargar un modelo con un nombre de repositorio especรญfico: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
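Una vez subido el modelo, puedes cargarlo para hacer inferencia como cualquier otro checkpoint del Hub. Este es solo un ejemplo ilustrativo: el identificador del repositorio depende de tu nombre de usuario y del `push_to_hub_model_id` que hayas elegido:

```py
from transformers import pipeline

summarizer = pipeline("summarization", model="tu-usuario/finetuned-t5-cnn_dailymail")
article = "..."  # texto largo que quieras resumir
summary = summarizer(article)[0]["summary_text"]
```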
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Visite rapide [[open-in-colab]] Soyez opรฉrationnel avec ๐Ÿค— Transformers ! Que vous soyez un dรฉveloppeur ou un utilisateur lambda, cette visite rapide vous aidera ร  dรฉmarrer et vous montrera comment utiliser le [`pipeline`] pour l'infรฉrence, charger un modรจle prรฉ-entraรฎnรฉ et un prรฉprocesseur avec une [AutoClass](./model_doc/auto), et entraรฎner rapidement un modรจle avec PyTorch ou TensorFlow. Si vous รชtes un dรฉbutant, nous vous recommandons de consulter nos tutoriels ou notre [cours](https://huggingface.co/course/chapter1/1) suivant pour des explications plus approfondies des concepts prรฉsentรฉs ici. Avant de commencer, assurez-vous que vous avez installรฉ toutes les bibliothรจques nรฉcessaires : ```bash !pip install transformers datasets ``` Vous aurez aussi besoin d'installer votre bibliothรจque d'apprentissage profond favorite : <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## Pipeline <Youtube id="tiZFewofSLM"/> Le [`pipeline`] est le moyen le plus simple d'utiliser un modรจle prรฉ-entraรฎnรฉ pour l'infรฉrence. Vous pouvez utiliser le [`pipeline`] prรชt ร  l'emploi pour de nombreuses tรขches dans diffรฉrentes modalitรฉs. Consultez le tableau ci-dessous pour connaรฎtre les tรขches prises en charge : | **Tรขche** | **Description** | **Modalitรฉ** | **Identifiant du pipeline** | |------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------|-----------------------------------------------| | Classification de texte | Attribue une catรฉgorie ร  une sรฉquence de texte donnรฉe | Texte | pipeline(task="sentiment-analysis") | | Gรฉnรฉration de texte | Gรฉnรจre du texte ร  partir d'une consigne donnรฉe | Texte | pipeline(task="text-generation") | | Reconnaissance de token nommรฉ | Attribue une catรฉgorie ร  chaque token dans une sรฉquence (personnes, organisation, localisation, etc.) 
| Texte | pipeline(task="ner") | | Question rรฉponse | Extrait une rรฉponse du texte en fonction du contexte et d'une question | Texte | pipeline(task="question-answering") | | Prรฉdiction de token masquรฉ | Prรฉdit correctement le token masquรฉ dans une sรฉquence | Texte | pipeline(task="fill-mask") | | Gรฉnรฉration de rรฉsumรฉ | Gรฉnรจre un rรฉsumรฉ d'une sรฉquence de texte donnรฉe ou d'un document | Texte | pipeline(task="summarization") | | Traduction | Traduit du texte d'un langage ร  un autre | Texte | pipeline(task="translation") | | Classification d'image | Attribue une catรฉgorie ร  une image | Image | pipeline(task="image-classification") | | Segmentation d'image | Attribue une catรฉgorie ร  chaque pixel d'une image (supporte la segmentation sรฉmantique, panoptique et d'instance) | Image | pipeline(task="image-segmentation") | | Dรฉtection d'objets | Prรฉdit les dรฉlimitations et catรฉgories d'objets dans une image | Image | pipeline(task="object-detection") | | Classification d'audio | Attribue une catรฉgorie ร  un fichier audio | Audio | pipeline(task="audio-classification") | | Reconnaissance automatique de la parole | Extrait le discours d'un fichier audio en texte | Audio | pipeline(task="automatic-speech-recognition") | | Question rรฉponse visuels | Etant donnรฉes une image et une question, rรฉpond correctement ร  une question sur l'image | Modalitรฉs multiples | pipeline(task="vqa") | Commencez par crรฉer une instance de [`pipeline`] et spรฉcifiez la tรขche pour laquelle vous souhaitez l'utiliser. Vous pouvez utiliser le [`pipeline`] pour n'importe laquelle des tรขches mentionnรฉes dans le tableau prรฉcรฉdent. Pour obtenir une liste complรจte des tรขches prises en charge, consultez la documentation de l'[API pipeline](./main_classes/pipelines). Dans ce guide, nous utiliserons le [`pipeline`] pour l'analyse des sentiments ร  titre d'exemple : ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` Le [`pipeline`] tรฉlรฉcharge et stocke en cache un [modรจle prรฉ-entraรฎnรฉ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) et un tokenizer par dรฉfaut pour l'analyse des sentiments. Vous pouvez maintenant utiliser le `classifier` sur le texte de votre choix : ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` Si vous voulez classifier plus qu'un texte, donnez une liste de textes au [`pipeline`] pour obtenir une liste de dictionnaires en retour : ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, avec le score de: {round(result['score'], 4)}") label: POSITIVE, avec le score de: 0.9998 label: NEGATIVE, avec le score de: 0.5309 ``` Le [`pipeline`] peut aussi itรฉrer sur un jeu de donnรฉes entier pour n'importe quelle tรขche. Prenons par exemple la reconnaissance automatique de la parole : ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` Chargez un jeu de donnรฉes audio (voir le ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart#audio) pour plus de dรฉtails) sur lequel vous souhaitez itรฉrer. 
Pour cet exemple, nous chargeons le jeu de donnรฉes [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) : ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` Vous devez vous assurer que le taux d'รฉchantillonnage de l'ensemble de donnรฉes correspond au taux d'รฉchantillonnage sur lequel [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h) a รฉtรฉ entraรฎnรฉ : ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` Les fichiers audio sont automatiquement chargรฉs et rรฉรฉchantillonnรฉs lors de l'appel de la colonne `"audio"`. Extrayez les tableaux de formes d'ondes brutes des quatre premiers รฉchantillons et passez-les comme une liste au pipeline : ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT'] ``` Pour les ensembles de donnรฉes plus importants oรน les entrรฉes sont volumineuses (comme dans les domaines de la parole ou de la vision), utilisez plutรดt un gรฉnรฉrateur au lieu d'une liste pour charger toutes les entrรฉes en mรฉmoire. Pour plus d'informations, consultez la documentation de l'[API pipeline](./main_classes/pipelines). ### Utiliser une autre modรจle et tokenizer dans le pipeline Le [`pipeline`] peut รชtre utilisรฉ avec n'importe quel modรจle du [Hub](https://huggingface.co/models), ce qui permet d'adapter facilement le [`pipeline`] ร  d'autres cas d'utilisation. Par exemple, si vous souhaitez un modรจle capable de traiter du texte franรงais, utilisez les filtres du Hub pour trouver un modรจle appropriรฉ. 
Le premier rรฉsultat renvoie un [modรจle BERT](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) multilingue finetunรฉ pour l'analyse des sentiments que vous pouvez utiliser pour le texte franรงais : ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Utilisez [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `AutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Utilisez [`TFAutoModelForSequenceClassification`] et [`AutoTokenizer`] pour charger le modรจle prรฉ-entraรฎnรฉ et le tokenizer adaptรฉ (plus de dรฉtails sur une `TFAutoClass` dans la section suivante) : ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Spรฉcifiez le modรจle et le tokenizer dans le [`pipeline`], et utilisez le `classifier` sur le texte en franรงais : ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Si vous ne parvenez pas ร  trouver un modรจle adaptรฉ ร  votre cas d'utilisation, vous devrez finetuner un modรจle prรฉ-entraรฎnรฉ sur vos donnรฉes. Jetez un coup d'ล“il ร  notre [tutoriel sur le finetuning](./training) pour apprendre comment faire. Enfin, aprรจs avoir finetunรฉ votre modรจle prรฉ-entraรฎnรฉ, pensez ร  [partager](./model_sharing) le modรจle avec la communautรฉ sur le Hub afin de dรฉmocratiser l'apprentissage automatique pour tous ! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Les classes [`AutoModelForSequenceClassification`] et [`AutoTokenizer`] fonctionnent ensemble pour crรฉer un [`pipeline`] comme celui que vous avez utilisรฉ ci-dessus. Une [AutoClass](./model_doc/auto) est un raccourci qui rรฉcupรจre automatiquement l'architecture d'un modรจle prรฉ-entraรฎnรฉ ร  partir de son nom ou de son emplacement. Il vous suffit de sรฉlectionner l'`AutoClass` appropriรฉe ร  votre tรขche et la classe de prรฉtraitement qui lui est associรฉe. Reprenons l'exemple de la section prรฉcรฉdente et voyons comment vous pouvez utiliser l'`AutoClass` pour reproduire les rรฉsultats du [`pipeline`]. ### AutoTokenizer Un tokenizer est chargรฉ de prรฉtraiter le texte pour en faire un tableau de chiffres qui servira d'entrรฉe ร  un modรจle. De nombreuses rรจgles rรฉgissent le processus de tokenisation, notamment la maniรจre de diviser un mot et le niveau auquel les mots doivent รชtre divisรฉs (pour en savoir plus sur la tokenisation, consultez le [rรฉsumรฉ](./tokenizer_summary)). La chose la plus importante ร  retenir est que vous devez instancier un tokenizer avec le mรชme nom de modรจle pour vous assurer que vous utilisez les mรชmes rรจgles de tokenisation que celles avec lesquelles un modรจle a รฉtรฉ prรฉ-entraรฎnรฉ. 
Chargez un tokenizer avec [`AutoTokenizer`] : ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Passez votre texte au tokenizer : ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Le tokenizer retourne un dictionnaire contenant : * [input_ids](./glossary#input-ids): la reprรฉsentation numรฉrique des tokens. * [attention_mask](.glossary#attention-mask): indique quels tokens doivent faire l'objet d'une attention particuliรจre (plus particuliรจrement les tokens de remplissage). Un tokenizer peut รฉgalement accepter une liste de textes, et remplir et tronquer le texte pour retourner un รฉchantillon de longueur uniforme : <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> Consultez le tutoriel [prรฉtraitement](./preprocessing) pour plus de dรฉtails sur la tokenisation, et sur la maniรจre d'utiliser un [`AutoImageProcessor`], un [`AutoFeatureExtractor`] et un [`AutoProcessor`] pour prรฉtraiter les images, l'audio et les contenus multimodaux. </Tip> ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉes. Cela signifie que vous pouvez charger un [`AutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner l'[`AutoModel`] appropriรฉ pour la tรขche. Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`AutoModelForSequenceClassification`] : ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Maintenant, passez votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle. Il vous suffit de dรฉcompresser le dictionnaire en ajoutant `**` : ```py >>> pt_outputs = pt_model(**pt_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fournit un moyen simple et unifiรฉ de charger des instances prรฉ-entraรฎnรฉs. Cela signifie que vous pouvez charger un [`TFAutoModel`] comme vous chargeriez un [`AutoTokenizer`]. La seule diffรฉrence est de sรฉlectionner le [`TFAutoModel`] appropriรฉ pour la tรขche. 
Pour une classification de texte (ou de sรฉquence de textes), vous devez charger [`TFAutoModelForSequenceClassification`] : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Voir le [rรฉsumรฉ de la tรขche](./task_summary) pour vรฉrifier si elle est prise en charge par une classe [`AutoModel`]. </Tip> Passez maintenant votre รฉchantillon d'entrรฉes prรฉtraitรฉes directement au modรจle en passant les clรฉs du dictionnaire directement aux tensors : ```py >>> tf_outputs = tf_model(tf_batch) ``` Le modรจle produit les activations finales dans l'attribut `logits`. Appliquez la fonction softmax aux `logits` pour rรฉcupรฉrer les probabilitรฉs : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tous les modรจles ๐Ÿค— Transformers (PyTorch ou TensorFlow) produisent les tensors *avant* la fonction d'activation finale (comme softmax) car la fonction d'activation finale est souvent fusionnรฉe avec le calcul de la perte. Les structures produites par le modรจle sont des classes de donnรฉes spรฉciales, de sorte que leurs attributs sont autocomplรฉtรฉs dans un environnement de dรฉveloppement. Les structures produites par le modรจle se comportent comme un tuple ou un dictionnaire (vous pouvez les indexer avec un entier, une tranche ou une chaรฎne), auquel cas les attributs qui sont None sont ignorรฉs. </Tip> ### Sauvegarder un modรจle <frameworkcontent> <pt> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`PreTrainedModel.save_pretrained`] : ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`PreTrainedModel.from_pretrained`] : ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Une fois que votre modรจle est finetunรฉ, vous pouvez le sauvegarder avec son tokenizer en utilisant [`TFPreTrainedModel.save_pretrained`] : ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Lorsque vous voulez rรฉutiliser le modรจle, rechargez-le avec [`TFPreTrainedModel.from_pretrained`] : ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Une fonctionnalitรฉ particuliรจrement cool ๐Ÿค— Transformers est la possibilitรฉ d'enregistrer un modรจle et de le recharger en tant que modรจle PyTorch ou TensorFlow. 
Le paramรจtre `from_pt` ou `from_tf` permet de convertir le modรจle d'un framework ร  l'autre : <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## Constructions de modรจles personnalisรฉs Vous pouvez modifier la configuration du modรจle pour changer la faรงon dont un modรจle est construit. La configuration spรฉcifie les attributs d'un modรจle, tels que le nombre de couches ou de tรชtes d'attention. Vous partez de zรฉro lorsque vous initialisez un modรจle ร  partir d'une configuration personnalisรฉe. Les attributs du modรจle sont initialisรฉs de maniรจre alรฉatoire et vous devrez entraรฎner le modรจle avant de pouvoir l'utiliser pour obtenir des rรฉsultats significatifs. Commencez par importer [`AutoConfig`], puis chargez le modรจle prรฉ-entraรฎnรฉ que vous voulez modifier. Dans [`AutoConfig.from_pretrained`], vous pouvez spรฉcifier l'attribut que vous souhaitez modifier, tel que le nombre de tรชtes d'attention : ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`AutoModel.from_config`] : ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> Crรฉez un modรจle personnalisรฉ ร  partir de votre configuration avec [`TFAutoModel.from_config`] : ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> Consultez le guide [Crรฉer une architecture personnalisรฉe](./create_a_model) pour plus d'informations sur la crรฉation de configurations personnalisรฉes. ## Trainer - une boucle d'entraรฎnement optimisรฉe par PyTorch Tous les modรจles sont des [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) standard, vous pouvez donc les utiliser dans n'importe quelle boucle d'entraรฎnement typique. Bien que vous puissiez รฉcrire votre propre boucle d'entraรฎnement, ๐Ÿค— Transformers fournit une classe [`Trainer`] pour PyTorch, qui contient la boucle d'entraรฎnement de base et ajoute des fonctionnalitรฉs supplรฉmentaires comme l'entraรฎnement distribuรฉ, la prรฉcision mixte, et plus encore. En fonction de votre tรขche, vous passerez gรฉnรฉralement les paramรจtres suivants ร  [`Trainer`] : 1. Un [`PreTrainedModel`] ou un [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module): ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`] contient les hyperparamรจtres du modรจle que vous pouvez changer comme le taux d'apprentissage, la taille de l'รฉchantillon, et le nombre d'รฉpoques pour s'entraรฎner. Les valeurs par dรฉfaut sont utilisรฉes si vous ne spรฉcifiez pas d'hyperparamรจtres d'apprentissage : ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... 
per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. Chargez un jeu de donnรฉes : ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` Puis appliquez-la ร  l'intรฉgralitรฉ du jeu de donnรฉes avec [`~datasets.Dataset.map`]: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. Un [`DataCollatorWithPadding`] pour crรฉer un รฉchantillon d'exemples ร  partir de votre jeu de donnรฉes : ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` Maintenant, rassemblez tous ces รฉlรฉments dans un [`Trainer`] : ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` Une fois que vous รชtes prรชt, appelez la fonction [`~Trainer.train`] pour commencer l'entraรฎnement : ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> Pour les tรขches - comme la traduction ou la gรฉnรฉration de rรฉsumรฉ - qui utilisent un modรจle sรฉquence ร  sรฉquence, utilisez plutรดt les classes [`Seq2SeqTrainer`] et [`Seq2SeqTrainingArguments`]. </Tip> Vous pouvez personnaliser le comportement de la boucle d'apprentissage en redรฉfinissant les mรฉthodes ร  l'intรฉrieur de [`Trainer`]. Cela vous permet de personnaliser des caractรฉristiques telles que la fonction de perte, l'optimiseur et le planificateur. Consultez la documentation de [`Trainer`] pour savoir quelles mรฉthodes peuvent รชtre redรฉfinies. L'autre moyen de personnaliser la boucle d'apprentissage est d'utiliser les [Callbacks](./main_classes/callbacks). Vous pouvez utiliser les callbacks pour intรฉgrer d'autres bibliothรจques et inspecter la boucle d'apprentissage afin de suivre la progression ou d'arrรชter l'apprentissage plus tรดt. Les callbacks ne modifient rien dans la boucle d'apprentissage elle-mรชme. Pour personnaliser quelque chose comme la fonction de perte, vous devez redรฉfinir le [`Trainer`] ร  la place. ## Entraรฎnement avec TensorFlow Tous les modรจles sont des modรจles standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) afin qu'ils puissent รชtre entraรฎnรฉs avec TensorFlow avec l'API [Keras](https://keras.io/). ๐Ÿค— Transformers fournit la fonction [`~TFPreTrainedModel.prepare_tf_dataset`] pour charger facilement votre jeu de donnรฉes comme un `tf.data.Dataset` afin que vous puissiez commencer l'entraรฎnement immรฉdiatement avec les fonctions [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) et [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) de Keras. 1. Vous commencez avec un modรจle [`TFPreTrainedModel`] ou [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) : ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. 
Une classe de prรฉtraitement comme un tokenizer, un processeur d'images ou un extracteur de caractรฉristiques : ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 3. Crรฉez une fonction qui transforme le texte du jeu de donnรฉes en token : ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. Appliquez le tokenizer ร  l'ensemble du jeu de donnรฉes avec [`~datasets.Dataset.map`] et passez ensuite le jeu de donnรฉes et le tokenizer ร  [`~TFPreTrainedModel.prepare_tf_dataset`]. Vous pouvez รฉgalement modifier la taille de l'รฉchantillon et mรฉlanger le jeu de donnรฉes ici si vous le souhaitez : ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset, batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. Une fois que vous รชtes prรชt, appelez les fonctions `compile` et `fit` pour commencer l'entraรฎnement : ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) >>> model.fit(dataset) # doctest: +SKIP ``` ## Et aprรจs ? Maintenant que vous avez terminรฉ la visite rapide de ๐Ÿค— Transformers, consultez nos guides et apprenez ร  faire des choses plus spรฉcifiques comme crรฉer un modรจle personnalisรฉ, finetuner un modรจle pour une tรขche, et comment entraรฎner un modรจle avec un script. Si vous souhaitez en savoir plus sur les concepts fondamentaux de ๐Ÿค— Transformers, jetez un ล“il ร  nos guides conceptuels !
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Installazione Installa ๐Ÿค— Transformers per qualsiasi libreria di deep learning con cui stai lavorando, imposta la tua cache, e opzionalmente configura ๐Ÿค— Transformers per l'esecuzione offline. ๐Ÿค— Transformers รจ testato su Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, e Flax. Segui le istruzioni di installazione seguenti per la libreria di deep learning che stai utilizzando: * [PyTorch](https://pytorch.org/get-started/locally/) istruzioni di installazione. * [TensorFlow 2.0](https://www.tensorflow.org/install/pip) istruzioni di installazione. * [Flax](https://flax.readthedocs.io/en/latest/) istruzioni di installazione. ## Installazione con pip Puoi installare ๐Ÿค— Transformers in un [ambiente virtuale](https://docs.python.org/3/library/venv.html). Se non sei familiare con gli ambienti virtuali in Python, dai un'occhiata a questa [guida](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). Un ambiente virtuale rende piรน semplice la gestione di progetti differenti, evitando problemi di compatibilitร  tra dipendenze. Inizia creando un ambiente virtuale nella directory del tuo progetto: ```bash python -m venv .env ``` Attiva l'ambiente virtuale: ```bash source .env/bin/activate ``` Ora puoi procedere con l'installazione di ๐Ÿค— Transformers eseguendo il comando seguente: ```bash pip install transformers ``` Per il solo supporto della CPU, puoi installare facilmente ๐Ÿค— Transformers e una libreria di deep learning in solo una riga. Ad esempio, installiamo ๐Ÿค— Transformers e PyTorch con: ```bash pip install transformers[torch] ``` ๐Ÿค— Transformers e TensorFlow 2.0: ```bash pip install transformers[tf-cpu] ``` ๐Ÿค— Transformers e Flax: ```bash pip install transformers[flax] ``` Infine, verifica se ๐Ÿค— Transformers รจ stato installato in modo appropriato eseguendo il seguente comando. Questo scaricherร  un modello pre-allenato: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ``` Dopodichรฉ stampa l'etichetta e il punteggio: ```bash [{'label': 'POSITIVE', 'score': 0.9998704791069031}] ``` ## Installazione dalla fonte Installa ๐Ÿค— Transformers dalla fonte con il seguente comando: ```bash pip install git+https://github.com/huggingface/transformers ``` Questo comando installa la versione `main` piรน attuale invece dell'ultima versione stabile. Questo รจ utile per stare al passo con gli ultimi sviluppi. Ad esempio, se un bug รจ stato sistemato da quando รจ uscita l'ultima versione ufficiale ma non รจ stata ancora rilasciata una nuova versione. Tuttavia, questo significa che questa versione `main` puรฒ non essere sempre stabile. Ci sforziamo per mantenere la versione `main` operativa, e la maggior parte dei problemi viene risolta in poche ore o in un giorno. 
Se riscontri un problema, per favore apri una [Issue](https://github.com/huggingface/transformers/issues) cosรฌ possiamo sistemarlo ancora piรน velocemente! Controlla se ๐Ÿค— Transformers รจ stata installata in modo appropriato con il seguente comando: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Installazione modificabile Hai bisogno di un'installazione modificabile se vuoi: * Usare la versione `main` del codice dalla fonte. * Contribuire a ๐Ÿค— Transformers e hai bisogno di testare i cambiamenti nel codice. Clona il repository e installa ๐Ÿค— Transformers con i seguenti comandi: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` Questi comandi collegheranno la cartella in cui รจ stato clonato il repository e i path delle librerie Python. Python guarderร  ora all'interno della cartella clonata, oltre ai normali path delle librerie. Per esempio, se i tuoi pacchetti Python sono installati tipicamente in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python cercherร  anche nella cartella clonata: `~/transformers/`. <Tip warning={true}> Devi tenere la cartella `transformers` se vuoi continuare ad utilizzare la libreria. </Tip> Ora puoi facilmente aggiornare il tuo clone all'ultima versione di ๐Ÿค— Transformers con il seguente comando: ```bash cd ~/transformers/ git pull ``` Il tuo ambiente Python troverร  la versione `main` di ๐Ÿค— Transformers alla prossima esecuzione. ## Installazione con conda Installazione dal canale conda `conda-forge`: ```bash conda install conda-forge::transformers ``` ## Impostazione della cache I modelli pre-allenati sono scaricati e memorizzati localmente nella cache in: `~/.cache/huggingface/transformers/`. Questa รจ la directory di default data dalla variabile d'ambiente della shell `TRANSFORMERS_CACHE`. Su Windows, la directory di default รจ data da `C:\Users\username\.cache\huggingface\transformers`. Puoi cambiare le variabili d'ambiente della shell indicate in seguito, in ordine di prioritร , per specificare una directory differente per la cache: 1. Variabile d'ambiente della shell (default): `TRANSFORMERS_CACHE`. 2. Variabile d'ambiente della shell: `HF_HOME` + `transformers/`. 3. Variabile d'ambiente della shell: `XDG_CACHE_HOME` + `/huggingface/transformers`. <Tip> ๐Ÿค— Transformers utilizzerร  le variabili d'ambiente della shell `PYTORCH_TRANSFORMERS_CACHE` o `PYTORCH_PRETRAINED_BERT_CACHE` se si proviene da un'iterazione precedente di questa libreria e sono state impostate queste variabili d'ambiente, a meno che non si specifichi la variabile d'ambiente della shell `TRANSFORMERS_CACHE`. </Tip> ## Modalitร  Offline ๐Ÿค— Transformers puรฒ essere eseguita in un ambiente firewalled o offline utilizzando solo file locali. Imposta la variabile d'ambiente `TRANSFORMERS_OFFLINE=1` per abilitare questo comportamento. <Tip> Aggiungi [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) al tuo flusso di lavoro offline di training impostando la variabile d'ambiente `HF_DATASETS_OFFLINE=1`. </Tip> Ad esempio, in genere si esegue un programma su una rete normale, protetta da firewall per le istanze esterne, con il seguente comando: ```bash python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... 
``` Esegui lo stesso programma in un'istanza offline con: ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` Lo script viene ora eseguito senza bloccarsi o attendere il timeout, perchรฉ sa di dover cercare solo file locali. ### Ottenere modelli e tokenizer per l'uso offline Un'altra opzione per utilizzare offline ๐Ÿค— Transformers รจ scaricare i file in anticipo, e poi puntare al loro path locale quando hai la necessitร  di utilizzarli offline. Ci sono tre modi per fare questo: * Scarica un file tramite l'interfaccia utente sul [Model Hub](https://huggingface.co/models) premendo sull'icona โ†“. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Utilizza il flusso [`PreTrainedModel.from_pretrained`] e [`PreTrainedModel.save_pretrained`]: 1. Scarica i tuoi file in anticipo con [`PreTrainedModel.from_pretrained`]: ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B") ``` 2. Salva i tuoi file in una directory specificata con [`PreTrainedModel.save_pretrained`]: ```py >>> tokenizer.save_pretrained("./il/tuo/path/bigscience_t0") >>> model.save_pretrained("./il/tuo/path/bigscience_t0") ``` 3. Ora quando sei offline, carica i tuoi file con [`PreTrainedModel.from_pretrained`] dalla directory specificata: ```py >>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0") >>> model = AutoModel.from_pretrained("./il/tuo/path/bigscience_t0") ``` * Scarica in maniera programmatica i file con la libreria [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub): 1. Installa la libreria `huggingface_hub` nel tuo ambiente virtuale: ```bash python -m pip install huggingface_hub ``` 2. Utilizza la funzione [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) per scaricare un file in un path specifico. Per esempio, il seguente comando scarica il file `config.json` dal modello [T0](https://huggingface.co/bigscience/T0_3B) nel path che desideri: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./il/tuo/path/bigscience_t0") ``` Una volta che il tuo file รจ scaricato e salvato in cache localmente, specifica il suo path locale per caricarlo e utilizzarlo: ```py >>> from transformers import AutoConfig >>> config = AutoConfig.from_pretrained("./il/tuo/path/bigscience_t0/config.json") ``` <Tip> Fai riferimento alla sezione [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) per avere maggiori dettagli su come scaricare modelli presenti sull Hub. </Tip>
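Se i file sono già stati scaricati e messi in cache come descritto sopra, un'alternativa (indicativa, non presente nella guida originale) è impedire esplicitamente qualsiasi richiesta di rete verso l'Hub con il parametro `local_files_only` di `from_pretrained`. Lo schizzo seguente riprende il path `./il/tuo/path/bigscience_t0` dell'esempio precedente, che qui è solo un'ipotesi:

```py
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> # carica tokenizer e modello esclusivamente dai file locali,
>>> # senza alcun tentativo di connessione all'Hub
>>> tokenizer = AutoTokenizer.from_pretrained("./il/tuo/path/bigscience_t0", local_files_only=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./il/tuo/path/bigscience_t0", local_files_only=True)
```

L'effetto è simile a quello della variabile d'ambiente `TRANSFORMERS_OFFLINE=1`, ma limitato alla singola chiamata.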
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Quick tour [[open-in-colab]] Entra in azione con ๐Ÿค— Transformers! Inizia utilizzando [`pipeline`] per un'inferenza veloce, carica un modello pre-allenato e un tokenizer con una [AutoClass](./model_doc/auto) per risolvere i tuoi compiti legati a testo, immagini o audio. <Tip> Tutti gli esempi di codice presenti in questa documentazione hanno un pulsante in alto a sinistra che permette di selezionare tra PyTorch e TensorFlow. Se questo non รจ presente, ci si aspetta che il codice funzioni per entrambi i backend senza alcun cambiamento. </Tip> ## Pipeline [`pipeline`] รจ il modo piรน semplice per utilizzare un modello pre-allenato per un dato compito. <Youtube id="tiZFewofSLM"/> La [`pipeline`] supporta molti compiti comuni: **Testo**: * Analisi del Sentimento (Sentiment Analysis, in inglese): classifica la polaritร  di un testo dato. * Generazione del Testo (Text Generation, in inglese): genera del testo a partire da un dato input. * Riconoscimento di Entitร  (Name Entity Recognition o NER, in inglese): etichetta ogni parola con l'entitร  che questa rappresenta (persona, data, luogo, ecc.). * Rispondere a Domande (Question answering, in inglese): estrae la risposta da un contesto, dato del contesto e una domanda. * Riempimento di Maschere (Fill-mask, in inglese): riempie gli spazi mancanti in un testo che ha parole mascherate. * Riassumere (Summarization, in inglese): genera una sintesi di una lunga sequenza di testo o di un documento. * Traduzione (Translation, in inglese): traduce un testo in un'altra lingua. * Estrazione di Caratteristiche (Feature Extraction, in inglese): crea un tensore che rappresenta un testo. **Immagini**: * Classificazione di Immagini (Image Classification, in inglese): classifica un'immagine. * Segmentazione di Immagini (Image Segmentation, in inglese): classifica ogni pixel di un'immagine. * Rilevazione di Oggetti (Object Detection, in inglese): rileva oggetti all'interno di un'immagine. **Audio**: * Classificazione di Audio (Audio Classification, in inglese): assegna un'etichetta ad un segmento di audio dato. * Riconoscimento Vocale Automatico (Automatic Speech Recognition o ASR, in inglese): trascrive il contenuto di un audio dato in un testo. <Tip> Per maggiori dettagli legati alla [`pipeline`] e ai compiti ad essa associati, fai riferimento alla documentazione [qui](./main_classes/pipelines). </Tip> ### Utilizzo della Pipeline Nel seguente esempio, utilizzerai la [`pipeline`] per l'analisi del sentimento. 
Installa le seguenti dipendenze se non lo hai giร  fatto: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> Importa [`pipeline`] e specifica il compito che vuoi completare: ```py >>> from transformers import pipeline >>> classificatore = pipeline("sentiment-analysis", model="MilaNLProc/feel-it-italian-sentiment") ``` La pipeline scarica e salva il [modello pre-allenato](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) e il tokenizer per l'analisi del sentimento. Se non avessimo scelto un modello, la pipeline ne avrebbe scelto uno di default. Ora puoi utilizzare il `classifier` sul tuo testo obiettivo: ```py >>> classificatore("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") [{'label': 'positive', 'score': 0.9997}] ``` Per piรน di una frase, passa una lista di frasi alla [`pipeline`] la quale restituirร  una lista di dizionari: ```py >>> risultati = classificatore( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."] ... ) >>> for risultato in risultati: ... print(f"etichetta: {risultato['label']}, con punteggio: {round(risultato['score'], 4)}") etichetta: positive, con punteggio: 0.9998 etichetta: negative, con punteggio: 0.9998 ``` La [`pipeline`] puรฒ anche iterare su un dataset intero. Inizia installando la libreria [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/): ```bash pip install datasets ``` Crea una [`pipeline`] con il compito che vuoi risolvere e con il modello che vuoi utilizzare. ```py >>> import torch >>> from transformers import pipeline >>> riconoscitore_vocale = pipeline( ... "automatic-speech-recognition", model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram" ... ) ``` Poi, carica un dataset (vedi ๐Ÿค— Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) per maggiori dettagli) sul quale vuoi iterare. Per esempio, carichiamo il dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14): ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="it-IT", split="train") # doctest: +IGNORE_RESULT ``` Dobbiamo assicurarci che la frequenza di campionamento del set di dati corrisponda alla frequenza di campionamento con cui รจ stato addestrato `radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram`. ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=riconoscitore_vocale.feature_extractor.sampling_rate)) ``` I file audio vengono caricati automaticamente e ri-campionati quando chiamiamo la colonna "audio". Estraiamo i vettori delle forme d'onda grezze delle prime 4 osservazioni e passiamoli come lista alla pipeline: ```py >>> risultato = riconoscitore_vocale(dataset[:4]["audio"]) >>> print([d["text"] for d in risultato]) ['dovrei caricare dei soldi sul mio conto corrente', 'buongiorno e senza vorrei depositare denaro sul mio conto corrente come devo fare per cortesia', 'sรฌ salve vorrei depositare del denaro sul mio conto', 'e buon pomeriggio vorrei depositare dei soldi sul mio conto bancario volleo sapere come posso fare se e posso farlo online ed un altro conto o andandoo tramite bancomut'] ``` Per un dataset piรน grande dove gli input sono di dimensione maggiore (come nel parlato/audio o nella visione), dovrai passare un generatore al posto di una lista che carica tutti gli input in memoria. Guarda la [documentazione della pipeline](./main_classes/pipelines) per maggiori informazioni. 
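A titolo puramente illustrativo, uno schizzo minimale di questo approccio potrebbe essere il seguente: il nome `flusso_audio` è un'ipotesi introdotta solo per l'esempio e si assume che il dataset sia già stato ri-campionato come mostrato sopra. Quando riceve un generatore, la [`pipeline`] restituisce a sua volta un iteratore e processa gli input man mano che arrivano:

```py
>>> def flusso_audio():
...     # produce una forma d'onda alla volta, senza caricare tutto il dataset in memoria
...     for esempio in dataset:
...         yield esempio["audio"]["array"]


>>> for risultato in riconoscitore_vocale(flusso_audio()):
...     print(risultato["text"])  # doctest: +SKIP
```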
### Utilizzare un altro modello e tokenizer nella pipeline La [`pipeline`] puรฒ ospitare qualsiasi modello del [Model Hub](https://huggingface.co/models), rendendo semplice l'adattamento della [`pipeline`] per altri casi d'uso. Per esempio, se si vuole un modello capace di trattare testo in francese, usa i tag presenti nel Model Hub in modo da filtrare per ottenere un modello appropriato. Il miglior risultato filtrato restituisce un modello multi-lingua [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned per l'analisi del sentimento. Ottimo, utilizziamo questo modello! ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> Usa [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `AutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> Usa [`TFAutoModelForSequenceClassification`] e [`AutoTokenizer`] per caricare il modello pre-allenato e il suo tokenizer associato (maggiori informazioni su una `TFAutoClass` in seguito): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> Poi puoi specificare il modello e il tokenizer nella [`pipeline`], e applicare il `classifier` sul tuo testo obiettivo: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` Se non riesci a trovare un modello per il tuo caso d'uso, dovrai fare fine-tuning di un modello pre-allenato sui tuoi dati. Dai un'occhiata al nostro tutorial [fine-tuning tutorial](./training) per imparare come. Infine, dopo che hai completato il fine-tuning del tuo modello pre-allenato, considera per favore di condividerlo (vedi il tutorial [qui](./model_sharing)) con la comunitร  sul Model Hub per democratizzare l'NLP! ๐Ÿค— ## AutoClass <Youtube id="AhChOFRegn4"/> Al suo interno, le classi [`AutoModelForSequenceClassification`] e [`AutoTokenizer`] lavorano assieme per dare potere alla [`pipeline`]. Una [AutoClass](./model_doc/auto) รจ una scorciatoia che automaticamente recupera l'architettura di un modello pre-allenato a partire dal suo nome o path. Hai solo bisogno di selezionare la `AutoClass` appropriata per il tuo compito e il suo tokenizer associato con [`AutoTokenizer`]. Ritorniamo al nostro esempio e vediamo come puoi utilizzare la `AutoClass` per replicare i risultati della [`pipeline`]. ### AutoTokenizer Un tokenizer รจ responsabile dell'elaborazione del testo in modo da trasformarlo in un formato comprensibile dal modello. Per prima cosa, il tokenizer dividerร  il testo in parole chiamate *token*. Ci sono diverse regole che governano il processo di tokenizzazione, tra cui come dividere una parola e a quale livello (impara di piรน sulla tokenizzazione [qui](./tokenizer_summary)). 
La cosa piรน importante da ricordare comunque รจ che hai bisogno di inizializzare il tokenizer con lo stesso nome del modello in modo da assicurarti che stai utilizzando le stesse regole di tokenizzazione con cui il modello รจ stato pre-allenato. Carica un tokenizer con [`AutoTokenizer`]: ```py >>> from transformers import AutoTokenizer >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(nome_del_modello) ``` Dopodichรฉ, il tokenizer converte i token in numeri in modo da costruire un tensore come input del modello. Questo รจ conosciuto come il *vocabolario* del modello. Passa il tuo testo al tokenizer: ```py >>> encoding = tokenizer("Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.") >>> print(encoding) {'input_ids': [101, 56821, 10132, 14407, 13019, 13007, 10120, 47201, 10330, 10106, 91686, 100, 58263, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` Il tokenizer restituirร  un dizionario contenente: * [input_ids](./glossary#input-ids): rappresentazioni numeriche dei tuoi token. * [attention_mask](.glossary#attention-mask): indica quali token devono essere presi in considerazione. Come con la [`pipeline`], il tokenizer accetterร  una lista di input. In piรน, il tokenizer puรฒ anche completare (pad, in inglese) e troncare il testo in modo da restituire un lotto (batch, in inglese) di lunghezza uniforme: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["Siamo molto felici di mostrarti la libreria ๐Ÿค— Transformers.", "Speriamo te non la odierai."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> Leggi il tutorial sul [preprocessing](./preprocessing) per maggiori dettagli sulla tokenizzazione. ### AutoModel <frameworkcontent> <pt> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`AutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare l'[`AutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`AutoModelForSequenceClassification`]: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello. Devi solo spacchettare il dizionario aggiungendo `**`: ```py >>> pt_outputs = pt_model(**pt_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. 
Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0041, 0.0037, 0.0203, 0.2005, 0.7713], [0.3766, 0.3292, 0.1832, 0.0558, 0.0552]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers fornisce un metodo semplice e unificato per caricare istanze pre-allenate. Questo significa che puoi caricare un [`TFAutoModel`] come caricheresti un [`AutoTokenizer`]. L'unica differenza รจ selezionare il [`TFAutoModel`] corretto per il compito di interesse. Dato che stai facendo classificazione di testi, o sequenze, carica [`TFAutoModelForSequenceClassification`]: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> nome_del_modello = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(nome_del_modello) ``` <Tip> Guarda il [task summary](./task_summary) per sapere quale classe di [`AutoModel`] utilizzare per quale compito. </Tip> Ora puoi passare il tuo lotto di input pre-processati direttamente al modello passando le chiavi del dizionario al tensore: ```py >>> tf_outputs = tf_model(tf_batch) ``` Il modello produrrร  le attivazioni finali nell'attributo `logits`. Applica la funzione softmax a `logits` per ottenere le probabilitร : ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> Tutti i modelli di ๐Ÿค— Transformers (PyTorch e TensorFlow) restituiscono i tensori *prima* della funzione finale di attivazione (come la softmax) perchรฉ la funzione di attivazione finale viene spesso unita a quella di perdita. </Tip> I modelli sono [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) o [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) standard cosรฌ puoi utilizzarli all'interno del tuo training loop usuale. Tuttavia, per rendere le cose piรน semplici, ๐Ÿค— Transformers fornisce una classe [`Trainer`] per PyTorch che aggiunge delle funzionalitร  per l'allenamento distribuito, precisione mista, e altro ancora. Per TensorFlow, puoi utilizzare il metodo `fit` di [Keras](https://keras.io/). Fai riferimento al [tutorial per il training](./training) per maggiori dettagli. <Tip> Gli output del modello di ๐Ÿค— Transformers sono delle dataclasses speciali in modo che i loro attributi vengano auto-completati all'interno di un IDE. Gli output del modello si comportano anche come una tupla o un dizionario (ad esempio, puoi indicizzare con un intero, una slice o una stringa) nel qual caso gli attributi che sono `None` vengono ignorati. 
</Tip> ### Salva un modello <frameworkcontent> <pt> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`PreTrainedModel.save_pretrained`]: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`PreTrainedModel.from_pretrained`]: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> Una volta completato il fine-tuning del tuo modello, puoi salvarlo con il suo tokenizer utilizzando [`TFPreTrainedModel.save_pretrained`]: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` Quando desideri utilizzare il tuo modello nuovamente, puoi ri-caricarlo con [`TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> Una caratteristica particolarmente interessante di ๐Ÿค— Transformers รจ la sua abilitร  di salvare un modello e ri-caricarlo sia come modello di PyTorch che di TensorFlow. I parametri `from_pt` o `from_tf` possono convertire un modello da un framework all'altro: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Sharing custom models ๐Ÿค— Transformersใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€็ฐกๅ˜ใซๆ‹กๅผตใงใใ‚‹ใ‚ˆใ†ใซ่จญ่จˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ™ในใฆใฎใƒขใƒ‡ใƒซใฏใƒชใƒใ‚ธใƒˆใƒชใฎ็‰นๅฎšใฎใ‚ตใƒ–ใƒ•ใ‚ฉใƒซใƒ€ใซๅฎŒๅ…จใซใ‚ณใƒผใƒ‰ๅŒ–ใ•ใ‚ŒใฆใŠใ‚Šใ€ๆŠฝ่ฑกๅŒ–ใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใ—ใŸใŒใฃใฆใ€ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใ‚’ใ‚ณใƒ”ใƒผใ—ใฆ่ชฟๆ•ดใ™ใ‚‹ใ“ใจใŒ็ฐกๅ˜ใงใ™ใ€‚ ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใ‚’ๆ›ธใ„ใฆใ„ใ‚‹ๅ ดๅˆใ€ใ‚ผใƒญใ‹ใ‚‰ๅง‹ใ‚ใ‚‹ๆ–นใŒ็ฐกๅ˜ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใจใใฎ่จญๅฎšใ‚’ใฉใฎใ‚ˆใ†ใซๆ›ธใใ€Transformersๅ†…ใงไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ—ใ€ใ‚ณใƒผใƒ‰ใซไพๅญ˜ใ™ใ‚‹ๅ…ฑๅŒไฝ“ใจๅ…ฑๆœ‰ใ™ใ‚‹ๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚ใƒฉใ‚คใƒ–ใƒฉใƒชใซๅญ˜ๅœจใ—ใชใ„ๅ ดๅˆใงใ‚‚ใ€่ชฐใงใ‚‚ไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ใ“ใ‚Œใ‚’ๅฎŸ่จผใ™ใ‚‹ใŸใ‚ใซใ€[timmใƒฉใ‚คใƒ–ใƒฉใƒช](https://github.com/rwightman/pytorch-image-models)ใฎResNetใ‚ฏใƒฉใ‚นใ‚’[`PreTrainedModel`]ใซใƒฉใƒƒใƒ—ใ™ใ‚‹ใ“ใจใซใ‚ˆใฃใฆใ€ResNetใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ## Writing a custom configuration ใƒขใƒ‡ใƒซใซๅ–ใ‚Š็ต„ใ‚€ๅ‰ใซใ€ใพใšใใฎ่จญๅฎšใ‚’ๆ›ธใใพใ—ใ‚‡ใ†ใ€‚ใƒขใƒ‡ใƒซใฎ่จญๅฎšใฏใ€ใƒขใƒ‡ใƒซใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใŸใ‚ใซๅฟ…่ฆใชใ™ในใฆใฎๆƒ…ๅ ฑใ‚’ๅซใ‚€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใงใ™ใ€‚ๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใง่ฆ‹ใ‚‹ใ‚ˆใ†ใซใ€ใƒขใƒ‡ใƒซใฏๅˆๆœŸๅŒ–ใ™ใ‚‹ใŸใ‚ใซ`config`ใ—ใ‹ๅ—ใ‘ๅ–ใ‚‹ใ“ใจใŒใงใใชใ„ใŸใ‚ใ€ใใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒใงใใ‚‹ใ ใ‘ๅฎŒๅ…จใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใฎไพ‹ใงใฏใ€ResNetใ‚ฏใƒฉใ‚นใฎใ„ใใคใ‹ใฎๅผ•ๆ•ฐใ‚’ๅ–ๅพ—ใ—ใ€่ชฟๆ•ดใ—ใŸใ„ใ‹ใ‚‚ใ—ใ‚Œใชใ„ใจใ—ใพใ™ใ€‚็•ฐใชใ‚‹่จญๅฎšใฏใ€็•ฐใชใ‚‹ใ‚ฟใ‚คใƒ—ใฎResNetใ‚’ๆไพ›ใ—ใพใ™ใ€‚ใใฎๅพŒใ€ใ“ใ‚Œใ‚‰ใฎๅผ•ๆ•ฐใ‚’็ขบ่ชใ—ใŸๅพŒใ€ใใ‚Œใ‚‰ใฎๅผ•ๆ•ฐใ‚’ๅ˜ใซๆ ผ็ดใ—ใพใ™ใ€‚ ```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` ้‡่ฆใชใ“ใจใ‚’3ใค่ฆšใˆใฆใŠใในใใƒใ‚คใƒณใƒˆใฏๆฌกใฎใจใŠใ‚Šใงใ™๏ผš - `PretrainedConfig` 
ใ‚’็ถ™ๆ‰ฟใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ใ‚ใชใŸใฎ `PretrainedConfig` ใฎ `__init__` ใฏไปปๆ„ใฎ kwargs ใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ - ใ“ใ‚Œใ‚‰ใฎ `kwargs` ใฏ่ฆชใ‚ฏใƒฉใ‚นใฎ `__init__` ใซๆธกใ™ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ็ถ™ๆ‰ฟใฏใ€๐Ÿค— Transformers ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ™ในใฆใฎๆฉŸ่ƒฝใ‚’ๅ–ๅพ—ใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใงใ™ใ€‚ไป–ใฎ2ใคใฎๅˆถ็ด„ใฏใ€ `PretrainedConfig` ใŒ่จญๅฎšใ—ใฆใ„ใ‚‹ใƒ•ใ‚ฃใƒผใƒซใƒ‰ไปฅๅค–ใซใ‚‚ๅคšใใฎใƒ•ใ‚ฃใƒผใƒซใƒ‰ใ‚’ๆŒใฃใฆใ„ใ‚‹ใ“ใจใ‹ใ‚‰ๆฅใฆใ„ใพใ™ใ€‚ `from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใง่จญๅฎšใ‚’ๅ†ใƒญใƒผใƒ‰ใ™ใ‚‹ๅ ดๅˆใ€ใ“ใ‚Œใ‚‰ใฎใƒ•ใ‚ฃใƒผใƒซใƒ‰ใฏใ‚ใชใŸใฎ่จญๅฎšใซๅ—ใ‘ๅ…ฅใ‚Œใ‚‰ใ‚Œใ€ ใใฎๅพŒใ€่ฆชใ‚ฏใƒฉใ‚นใซ้€ไฟกใ•ใ‚Œใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ่จญๅฎšใฎ `model_type` ใ‚’ๅฎš็พฉใ™ใ‚‹ใ“ใจ๏ผˆใ“ใ“ใงใฏ `model_type="resnet"`๏ผ‰ใฏใ€ ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใซใƒขใƒ‡ใƒซใ‚’็™ป้Œฒใ—ใŸใ„ๅ ดๅˆใ‚’้™คใ„ใฆใฏๅฟ…้ ˆใงใฏใ‚ใ‚Šใพใ›ใ‚“๏ผˆๆœ€ๅพŒใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ๅ‚็…ง๏ผ‰ใ€‚ ใ“ใ‚Œใงใ€ใƒฉใ‚คใƒ–ใƒฉใƒชใฎไป–ใฎใƒขใƒ‡ใƒซ่จญๅฎšใจๅŒๆง˜ใซใ€่จญๅฎšใ‚’็ฐกๅ˜ใซไฝœๆˆใ—ใฆไฟๅญ˜ใงใใพใ™ใ€‚ ไปฅไธ‹ใฏใ€resnet50d ่จญๅฎšใ‚’ไฝœๆˆใ—ใฆไฟๅญ˜ใ™ใ‚‹ๆ–นๆณ•ใฎไพ‹ใงใ™๏ผš ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` ใ“ใ‚Œใซใ‚ˆใ‚Šใ€`custom-resnet` ใƒ•ใ‚ฉใƒซใƒ€ๅ†…ใซ `config.json` ใจใ„ใ†ๅๅ‰ใฎใƒ•ใ‚กใ‚คใƒซใŒไฟๅญ˜ใ•ใ‚Œใพใ™ใ€‚ใใฎๅพŒใ€`from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆๆง‹ๆˆใ‚’ๅ†ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` ใพใŸใ€[`PretrainedConfig`] ใ‚ฏใƒฉใ‚นใฎไป–ใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ใŸใจใˆใฐใ€[`~PretrainedConfig.push_to_hub`] ใ‚’ไฝฟ็”จใ—ใฆใ€่จญๅฎšใ‚’็›ดๆŽฅ Hub ใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใงใใพใ™ใ€‚ ## Writing a custom model ResNet ใฎ่จญๅฎšใŒใงใใŸใฎใงใ€ใƒขใƒ‡ใƒซใ‚’ๆ›ธใๅง‹ใ‚ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ๅฎŸ้š›ใซใฏ2ใคใฎใƒขใƒ‡ใƒซใ‚’ๆ›ธใใพใ™ใ€‚1ใคใฏใƒใƒƒใƒใฎ็”ปๅƒใ‹ใ‚‰้š ใ‚ŒใŸ็‰นๅพดใ‚’ๆŠฝๅ‡บใ™ใ‚‹ใƒขใƒ‡ใƒซ๏ผˆ[`BertModel`] ใฎใ‚ˆใ†ใชใ‚‚ใฎ๏ผ‰ใงใ€ใ‚‚ใ†1ใคใฏ็”ปๅƒๅˆ†้กžใซ้ฉใ—ใŸใƒขใƒ‡ใƒซ๏ผˆ[`BertForSequenceClassification`] ใฎใ‚ˆใ†ใชใ‚‚ใฎ๏ผ‰ใงใ™ใ€‚ ๅ‰่ฟฐใ—ใŸใ‚ˆใ†ใซใ€ใ“ใฎไพ‹ใ‚’ใ‚ทใƒณใƒ—ใƒซใซไฟใคใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใฎ็ทฉใ„ใƒฉใƒƒใƒ‘ใƒผใฎใฟใ‚’ๆ›ธใใพใ™ใ€‚ใ“ใฎใ‚ฏใƒฉใ‚นใ‚’ๆ›ธใๅ‰ใซ่กŒใ†ๅฟ…่ฆใŒใ‚ใ‚‹ๅ”ฏไธ€ใฎใ“ใจใฏใ€ใƒ–ใƒญใƒƒใ‚ฏใ‚ฟใ‚คใƒ—ใจๅฎŸ้š›ใฎใƒ–ใƒญใƒƒใ‚ฏใ‚ฏใƒฉใ‚นใฎ้–“ใฎใƒžใƒƒใƒ—ใงใ™ใ€‚ใใฎๅพŒใ€ใ™ในใฆใ‚’ `ResNet` ใ‚ฏใƒฉใ‚นใซๆธกใ—ใฆ่จญๅฎšใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚’ๅฎš็พฉใ—ใพใ™๏ผš ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` ็”ปๅƒใ‚’ๅˆ†้กžใ™ใ‚‹ใƒขใƒ‡ใƒซใฎๅ ดๅˆใ€forwardใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใ ใ‘ใงใ™๏ผš ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): 
super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` ไธกๆ–นใฎๅ ดๅˆใ€`PreTrainedModel`ใ‹ใ‚‰็ถ™ๆ‰ฟใ—ใ€`config`ใ‚’ไฝฟ็”จใ—ใฆใ‚นใƒผใƒ‘ใƒผใ‚ฏใƒฉใ‚นใฎๅˆๆœŸๅŒ–ใ‚’ๅ‘ผใณๅ‡บใ—ใพใ™๏ผˆ้€šๅธธใฎ`torch.nn.Module`ใ‚’ๆ›ธใใจใใฎใ‚ˆใ†ใชๆ„Ÿใ˜ใงใ™๏ผ‰ใ€‚ `config_class`ใ‚’่จญๅฎšใ™ใ‚‹่กŒใฏๅฟ…้ ˆใงใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€๏ผˆๆœ€ๅพŒใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใ‚’ๅ‚็…ง๏ผ‰ใ€ใƒขใƒ‡ใƒซใ‚’่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใซ็™ป้Œฒใ—ใŸใ„ๅ ดๅˆใซไฝฟ็”จใงใใพใ™ใ€‚ <Tip> ใƒขใƒ‡ใƒซใŒใƒฉใ‚คใƒ–ใƒฉใƒชๅ†…ใฎใƒขใƒ‡ใƒซใจ้žๅธธใซไผผใฆใ„ใ‚‹ๅ ดๅˆใ€ใ“ใฎใƒขใƒ‡ใƒซใจๅŒใ˜ๆง‹ๆˆใ‚’ๅ†ๅˆฉ็”จใงใใพใ™ใ€‚ </Tip> ใƒขใƒ‡ใƒซใŒ่ฟ”ใ™ๅ†…ๅฎนใฏไฝ•ใงใ‚‚ๆง‹ใ„ใพใ›ใ‚“ใŒใ€ใƒฉใƒ™ใƒซใŒๆธกใ•ใ‚Œใ‚‹ใจใใซๆๅคฑใ‚’ๅซใ‚€่พžๆ›ธใ‚’่ฟ”ใ™๏ผˆ`ResnetModelForImageClassification`ใฎใ‚ˆใ†ใซ่กŒใฃใŸใ‚‚ใฎ๏ผ‰ใจใ€ ใƒขใƒ‡ใƒซใ‚’[`Trainer`]ใ‚ฏใƒฉใ‚นๅ†…ใง็›ดๆŽฅไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚็‹ฌ่‡ชใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใƒซใƒผใƒ—ใพใŸใฏไป–ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝฟ็”จใ™ใ‚‹ไบˆๅฎšใงใ‚ใ‚‹้™ใ‚Šใ€ ๅˆฅใฎๅ‡บๅŠ›ๅฝขๅผใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ใ•ใฆใ€ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใŒใงใใŸใฎใงใ€1ใคไฝœๆˆใ—ใพใ—ใ‚‡ใ†๏ผš ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` ๅ†ๅบฆใ€[`PreTrainedModel`]ใฎใ„ใšใ‚Œใ‹ใฎใƒกใ‚ฝใƒƒใƒ‰ใ€ไพ‹ใˆใฐ[`~PreTrainedModel.save_pretrained`]ใ‚„ [`~PreTrainedModel.push_to_hub`]ใชใฉใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ๆฌกใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ใƒขใƒ‡ใƒซใฎ้‡ใฟใ‚’ใ‚ณใƒผใƒ‰ใจไธ€็ท’ใซ Hugging Face Hub ใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ๆ–นๆณ•ใ‚’่ฆ‹ใฆใฟใพใ™ใ€‚ ใ—ใ‹ใ—ใ€ใพใšใฏใƒขใƒ‡ใƒซๅ†…ใซไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใฎ้‡ใฟใ‚’ใƒญใƒผใƒ‰ใ—ใพใ—ใ‚‡ใ†ใ€‚ ็‹ฌ่‡ชใฎใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใงใฏใ€ใŠใใ‚‰ใ็‹ฌ่‡ชใฎใƒ‡ใƒผใ‚ฟใงใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใซใชใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ‚นใƒ”ใƒผใƒ‰ใ‚ขใƒƒใƒ—ใฎใŸใ‚ใซใ€resnet50dใฎไบ‹ๅ‰ๅญฆ็ฟ’ๆธˆใฟใƒใƒผใ‚ธใƒงใƒณใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ ็งใŸใกใฎใƒขใƒ‡ใƒซใฏใใ‚Œใ‚’ใƒฉใƒƒใƒ—ใ™ใ‚‹ใ ใ‘ใชใฎใงใ€ใ“ใ‚Œใ‚‰ใฎ้‡ใฟใ‚’่ปข้€ใ™ใ‚‹ใฎใฏ็ฐกๅ˜ใงใ™๏ผš ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ใ•ใฆใ€[`~PreTrainedModel.save_pretrained`]ใพใŸใฏ[`~PreTrainedModel.push_to_hub`]ใ‚’ๅฎŸ่กŒใ—ใŸใจใใซใ€ ใƒขใƒ‡ใƒซใฎใ‚ณใƒผใƒ‰ใŒไฟๅญ˜ใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ๆ–นๆณ•ใ‚’่ฆ‹ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ ## Sending the code to the Hub <Tip warning={true}> ใ“ใฎAPIใฏๅฎŸ้จ“็š„ใงใ‚ใ‚Šใ€ๆฌกใฎใƒชใƒชใƒผใ‚นใงใ‚ใšใ‹ใชๅค‰ๆ›ดใŒใ‚ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ </Tip> ใพใšใ€ใƒขใƒ‡ใƒซใŒ`.py`ใƒ•ใ‚กใ‚คใƒซใซๅฎŒๅ…จใซๅฎš็พฉใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ใƒ•ใ‚กใ‚คใƒซใฏ็›ธๅฏพใ‚คใƒณใƒใƒผใƒˆใ‚’ไป–ใฎใƒ•ใ‚กใ‚คใƒซใซไพๅญ˜ใงใใพใ™ใŒใ€ใ™ในใฆใฎใƒ•ใ‚กใ‚คใƒซใŒๅŒใ˜ใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใซใ‚ใ‚‹้™ใ‚Š๏ผˆใพใ ใ“ใฎๆฉŸ่ƒฝใงใฏใ‚ตใƒ–ใƒขใ‚ธใƒฅใƒผใƒซใฏใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ›ใ‚“๏ผ‰ใ€ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚ 
ใ“ใฎไพ‹ใงใฏใ€็พๅœจใฎไฝœๆฅญใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชๅ†…ใซๅๅ‰ใŒใ€Œresnet_modelใ€ใฎใƒ•ใ‚ฉใƒซใƒ€ใ‚’ไฝœๆˆใ—ใ€ใใฎไธญใซ`modeling_resnet.py`ใƒ•ใ‚กใ‚คใƒซใจ`configuration_resnet.py`ใƒ•ใ‚กใ‚คใƒซใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ ๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซใซใฏ`ResnetConfig`ใฎใ‚ณใƒผใƒ‰ใŒๅซใพใ‚Œใ€ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใซใฏ`ResnetModel`ใจ`ResnetModelForImageClassification`ใฎใ‚ณใƒผใƒ‰ใŒๅซใพใ‚Œใฆใ„ใพใ™ใ€‚ ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` `__init__.py`ใฏ็ฉบใงใ‚ใฃใฆใ‚‚ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚PythonใŒ`resnet_model`ใ‚’ใƒขใ‚ธใƒฅใƒผใƒซใจใ—ใฆๆคœๅ‡บใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใซๅญ˜ๅœจใ—ใพใ™ใ€‚ <Tip warning={true}> ใƒฉใ‚คใƒ–ใƒฉใƒชใ‹ใ‚‰ใƒขใƒ‡ใƒชใƒณใ‚ฐใƒ•ใ‚กใ‚คใƒซใ‚’ใ‚ณใƒ”ใƒผใ™ใ‚‹ๅ ดๅˆใ€ใƒ•ใ‚กใ‚คใƒซใฎๅ…ˆ้ ญใซใ‚ใ‚‹ใ™ในใฆใฎ็›ธๅฏพใ‚คใƒณใƒใƒผใƒˆใ‚’`transformers`ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใ‹ใ‚‰ใ‚คใƒณใƒใƒผใƒˆใซ็ฝฎใๆ›ใˆใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ </Tip> ๆ—ขๅญ˜ใฎ่จญๅฎšใ‚„ใƒขใƒ‡ใƒซใ‚’ๅ†ๅˆฉ็”จ๏ผˆใพใŸใฏใ‚ตใƒ–ใ‚ฏใƒฉใ‚นๅŒ–๏ผ‰ใงใใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹ใŸใ‚ใซใ€ๆฌกใฎๆ‰‹้ †ใซๅพ“ใฃใฆใใ ใ•ใ„๏ผšใพใšใ€ๆ–ฐใ—ใไฝœๆˆใ—ใŸใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ResNetใƒขใƒ‡ใƒซใจ่จญๅฎšใ‚’ใ‚คใƒณใƒใƒผใƒˆใ—ใพใ™๏ผš ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` ๆฌกใซใ€`save_pretrained`ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ“ใ‚Œใ‚‰ใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎใ‚ณใƒผใƒ‰ใƒ•ใ‚กใ‚คใƒซใ‚’ใ‚ณใƒ”ใƒผใ—ใ€็‰นๅฎšใฎAutoใ‚ฏใƒฉใ‚น๏ผˆ็‰นใซใƒขใƒ‡ใƒซใฎๅ ดๅˆ๏ผ‰ใซๆญฃใ—ใ็™ป้Œฒใ™ใ‚‹ใ‚ˆใ†ใƒฉใ‚คใƒ–ใƒฉใƒชใซๆŒ‡็คบใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆฌกใฎใ‚ˆใ†ใซๅฎŸ่กŒใ—ใพใ™๏ผš ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` ๆณจๆ„: ่จญๅฎšใซใคใ„ใฆใฏ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“๏ผˆ่จญๅฎš็”จใฎ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใฏ1ใคใ—ใ‹ใชใใ€[`AutoConfig`]ใงใ™๏ผ‰ใŒใ€ ใƒขใƒ‡ใƒซใซใคใ„ใฆใฏ็•ฐใชใ‚Šใพใ™ใ€‚ใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใฏๅคšใใฎ็•ฐใชใ‚‹ใ‚ฟใ‚นใ‚ฏใซ้ฉใ—ใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใŸใ‚ใ€ ใƒขใƒ‡ใƒซใŒๆญฃ็ขบใช่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใฎใ†ใกใฉใ‚Œใซ้ฉใ—ใฆใ„ใ‚‹ใ‹ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใซใ€ๅ‰่ฟฐใฎใ‚ˆใ†ใซ่จญๅฎšใจใƒขใƒ‡ใƒซใ‚’ไฝœๆˆใ—ใพใ—ใ‚‡ใ†๏ผš ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ใƒขใƒ‡ใƒซใ‚’Hubใซ้€ไฟกใ™ใ‚‹ใซใฏใ€ใƒญใ‚ฐใ‚คใƒณใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ใ‚ฟใƒผใƒŸใƒŠใƒซใงๆฌกใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ—ใพใ™๏ผš ```bash huggingface-cli login ``` ใพใŸใฏใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‹ใ‚‰๏ผš ```py from huggingface_hub import notebook_login notebook_login() ``` ๆฌกใซใ€ๆฌกใฎใ‚ˆใ†ใซใ—ใฆใ€็‹ฌ่‡ชใฎๅๅ‰็ฉบ้–“ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™๏ผˆใพใŸใฏใ€ใƒกใƒณใƒใƒผใงใ‚ใ‚‹็ต„็น”ใซใƒ—ใƒƒใ‚ทใƒฅใงใใพใ™๏ผ‰๏ผš ```py resnet50d.push_to_hub("custom-resnet50d") ``` ใƒขใƒ‡ใƒชใƒณใ‚ฐใฎ้‡ใฟใจJSONๅฝขๅผใฎๆง‹ๆˆใซๅŠ ใˆใฆใ€ใ“ใฎใƒ•ใ‚ฉใƒซใƒ€ใƒผใ€Œcustom-resnet50dใ€ๅ†…ใฎใƒขใƒ‡ใƒชใƒณใ‚ฐใŠใ‚ˆใณๆง‹ๆˆใ€Œ.pyใ€ใƒ•ใ‚กใ‚คใƒซใ‚‚ใ‚ณใƒ”ใƒผใ•ใ‚Œใ€็ตๆžœใฏHubใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใพใ—ใŸใ€‚็ตๆžœใฏใ“ใฎ[model repo](https://huggingface.co/sgugger/custom-resnet50d)ใง็ขบ่ชใงใใพใ™ใ€‚ 
่ฉณ็ดฐใซใคใ„ใฆใฏใ€[Hubใธใฎใƒ—ใƒƒใ‚ทใƒฅๆ–นๆณ•](model_sharing)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Using a model with custom code ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใจ `from_pretrained` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒชใƒใ‚ธใƒˆใƒชๅ†…ใฎใ‚ซใ‚นใ‚ฟใƒ ใ‚ณใƒผใƒ‰ใƒ•ใ‚กใ‚คใƒซใจๅ…ฑใซไปปๆ„ใฎๆง‹ๆˆใ€ใƒขใƒ‡ใƒซใ€ใพใŸใฏใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ Hubใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹ใ™ในใฆใฎใƒ•ใ‚กใ‚คใƒซใจใ‚ณใƒผใƒ‰ใฏใƒžใƒซใ‚ฆใ‚งใ‚ขใฎใ‚นใ‚ญใƒฃใƒณใŒๅฎŸๆ–ฝใ•ใ‚Œใพใ™๏ผˆ่ฉณ็ดฐใฏ[Hubใ‚ปใ‚ญใƒฅใƒชใƒ†ใ‚ฃ](https://huggingface.co/docs/hub/security#malware-scanning)ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒ†ใƒผใ‚ทใƒงใƒณใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„๏ผ‰ใ€ใ—ใ‹ใ—ใ€ไพ็„ถใจใ—ใฆๆ‚ชๆ„ใฎใ‚ใ‚‹ใ‚ณใƒผใƒ‰ใ‚’ๅฎŸ่กŒใ—ใชใ„ใŸใ‚ใซใ€ใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใจไฝœ่€…ใ‚’็ขบ่ชใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ `trust_remote_code=True` ใ‚’่จญๅฎšใ—ใฆใ‚ซใ‚นใ‚ฟใƒ ใ‚ณใƒผใƒ‰ใ‚’ๆŒใคใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™๏ผš ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` ใ‚ณใƒŸใƒƒใƒˆใƒใƒƒใ‚ทใƒฅใ‚’ใ€Œrevisionใ€ใจใ—ใฆๆธกใ™ใ“ใจใ‚‚ๅผทใๆŽจๅฅจใ•ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฎไฝœ่€…ใŒใ‚ณใƒผใƒ‰ใ‚’ๆ‚ชๆ„ใฎใ‚ใ‚‹ๆ–ฐใ—ใ„่กŒใงๆ›ดๆ–ฐใ—ใชใ‹ใฃใŸใ“ใจใ‚’็ขบ่ชใงใใพใ™๏ผˆใƒขใƒ‡ใƒซใฎไฝœ่€…ใ‚’ๅฎŒๅ…จใซไฟก้ ผใ—ใฆใ„ใ‚‹ๅ ดๅˆใ‚’้™คใใพใ™๏ผ‰ใ€‚ ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` ใƒขใƒ‡ใƒซใƒชใƒใ‚ธใƒˆใƒชใฎใ‚ณใƒŸใƒƒใƒˆๅฑฅๆญดใ‚’ใƒ–ใƒฉใ‚ฆใ‚ธใƒณใ‚ฐใ™ใ‚‹้š›ใซใฏใ€ไปปๆ„ใฎใ‚ณใƒŸใƒƒใƒˆใฎใ‚ณใƒŸใƒƒใƒˆใƒใƒƒใ‚ทใƒฅใ‚’็ฐกๅ˜ใซใ‚ณใƒ”ใƒผใงใใ‚‹ใƒœใ‚ฟใƒณใŒใ‚ใ‚Šใพใ™ใ€‚ ## Registering a model with custom code to the auto classes ๐Ÿค— Transformersใ‚’ๆ‹กๅผตใ™ใ‚‹ใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ไฝœๆˆใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€็‹ฌ่‡ชใฎใƒขใƒ‡ใƒซใ‚’ๅซใ‚ใ‚‹ใŸใ‚ใซ่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใ‚’ๆ‹กๅผตใ—ใŸใ„ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ“ใ‚Œใฏใ‚ณใƒผใƒ‰ใ‚’Hubใซใƒ—ใƒƒใ‚ทใƒฅใ™ใ‚‹ใ“ใจใจใฏ็•ฐใชใ‚Šใ€ใƒฆใƒผใ‚ถใƒผใฏใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’ๅ–ๅพ—ใ™ใ‚‹ใŸใ‚ใซใ‚ใชใŸใฎใƒฉใ‚คใƒ–ใƒฉใƒชใ‚’ใ‚คใƒณใƒใƒผใƒˆใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ ๏ผˆHubใ‹ใ‚‰ใƒขใƒ‡ใƒซใ‚ณใƒผใƒ‰ใ‚’่‡ชๅ‹•็š„ใซใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ™ใ‚‹ใฎใจใฏๅฏพ็…ง็š„ใงใ™๏ผ‰ใ€‚ ๆง‹ๆˆใซๆ—ขๅญ˜ใฎใƒขใƒ‡ใƒซใ‚ฟใ‚คใƒ—ใจ็•ฐใชใ‚‹ `model_type` ๅฑžๆ€งใŒใ‚ใ‚‹้™ใ‚Šใ€ใพใŸใ‚ใชใŸใฎใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใŒ้ฉๅˆ‡ใช `config_class` ๅฑžๆ€งใ‚’ๆŒใฃใฆใ„ใ‚‹้™ใ‚Šใ€ ๆฌกใฎใ‚ˆใ†ใซใใ‚Œใ‚‰ใ‚’่‡ชๅ‹•ใ‚ฏใƒฉใ‚นใซ่ฟฝๅŠ ใงใใพใ™๏ผš ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` ๆณจๆ„: `AutoConfig` ใซใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใ‚’็™ป้Œฒใ™ใ‚‹้š›ใฎๆœ€ๅˆใฎๅผ•ๆ•ฐใฏใ€ใ‚ซใ‚นใ‚ฟใƒ ่จญๅฎšใฎ `model_type` ใจไธ€่‡ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ใพใŸใ€ไปปๆ„ใฎ่‡ชๅ‹•ใƒขใƒ‡ใƒซใ‚ฏใƒฉใ‚นใซใ‚ซใ‚นใ‚ฟใƒ ใƒขใƒ‡ใƒซใ‚’็™ป้Œฒใ™ใ‚‹้š›ใฎๆœ€ๅˆใฎๅผ•ๆ•ฐใฏใ€ใใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฎ `config_class` ใจไธ€่‡ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚
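登録が完了していれば、ライブラリ標準のモデルと同じように、自動クラス経由でカスタムモデルを扱えるはずです。以下はあくまで想定に基づく最小限のスケッチです。`"custom-resnet"` は前のセクションで `save_pretrained` により作成したフォルダを指し、変数名はこの例のための仮のものです:

```py
from transformers import AutoConfig, AutoModelForImageClassification

# 先ほど保存した config.json が、登録済みの ResnetConfig として解決されることを確認するスケッチ
config = AutoConfig.from_pretrained("custom-resnet")

# 自動クラス経由でカスタムモデルをインスタンス化する
model = AutoModelForImageClassification.from_config(config)
```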
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Processors Transformers ใƒฉใ‚คใƒ–ใƒฉใƒชใงใฏใ€ใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏ 2 ใคใฎ็•ฐใชใ‚‹ๆ„ๅ‘ณใ‚’ๆŒใกใพใ™ใ€‚ - [Wav2Vec2](../model_doc/wav2vec2) ใชใฉใฎใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ ใƒขใƒ‡ใƒซใฎๅ…ฅๅŠ›ใ‚’ๅ‰ๅ‡ฆ็†ใ™ใ‚‹ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆ (้Ÿณๅฃฐใจใƒ†ใ‚ญใ‚นใƒˆ) ใพใŸใฏ [CLIP](../model_doc/clip) (ใƒ†ใ‚ญใ‚นใƒˆใจใƒ“ใ‚ธใƒงใƒณ) - ๅคใ„ใƒใƒผใ‚ธใƒงใƒณใฎใƒฉใ‚คใƒ–ใƒฉใƒชใง GLUE ใพใŸใฏ SQUAD ใฎใƒ‡ใƒผใ‚ฟใ‚’ๅ‰ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใฆใ„ใŸใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฏ้žๆŽจๅฅจใซใชใ‚Šใพใ—ใŸใ€‚ ## Multi-modal processors ใƒžใƒซใƒใƒขใƒผใƒ€ใƒซ ใƒขใƒ‡ใƒซใงใฏใ€ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใŒ่ค‡ๆ•ฐใฎใƒขใƒ€ใƒชใƒ†ใ‚ฃ (ใƒ†ใ‚ญใ‚นใƒˆใ€ ่ฆ–่ฆšใจ้Ÿณๅฃฐ๏ผ‰ใ€‚ใ“ใ‚Œใฏใ€2 ใคไปฅไธŠใฎๅ‡ฆ็†ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’ใ‚ฐใƒซใƒผใƒ—ๅŒ–ใ™ใ‚‹ใƒ—ใƒญใ‚ปใƒƒใ‚ตใƒผใจๅ‘ผใฐใ‚Œใ‚‹ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใซใ‚ˆใฃใฆๅ‡ฆ็†ใ•ใ‚Œใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผ (ใƒ†ใ‚ญใ‚นใƒˆ ใƒขใƒ€ใƒชใƒ†ใ‚ฃ็”จ)ใ€็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใƒผ (่ฆ–่ฆš็”จ)ใ€็‰นๅพดๆŠฝๅ‡บๅ™จ (ใ‚ชใƒผใƒ‡ใ‚ฃใ‚ช็”จ) ใชใฉใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏใ€ไฟๅญ˜ใŠใ‚ˆใณใƒญใƒผใƒ‰ๆฉŸ่ƒฝใ‚’ๅฎŸ่ฃ…ใ™ใ‚‹ๆฌกใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใ‚’็ถ™ๆ‰ฟใ—ใพใ™ใ€‚ [[autodoc]] ProcessorMixin ## Deprecated processors ใ™ในใฆใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏใ€ๅŒใ˜ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅพ“ใฃใฆใ„ใพใ™ใ€‚ [`~data.processors.utils.DataProcessor`]ใ€‚ใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏๆฌกใฎใƒชใ‚นใƒˆใ‚’่ฟ”ใ—ใพใ™ใ€‚ [`~data.processors.utils.InputExample`]ใ€‚ใ“ใ‚Œใ‚‰ [`~data.processors.utils.InputExample`] ใฏๆฌกใฎใ‚ˆใ†ใซๅค‰ๆ›ใงใใพใ™ใ€‚ [`~data.processors.utils.Input features`] ใ‚’ใƒขใƒ‡ใƒซใซใƒ•ใ‚ฃใƒผใƒ‰ใ—ใพใ™ใ€‚ [[autodoc]] data.processors.utils.DataProcessor [[autodoc]] data.processors.utils.InputExample [[autodoc]] data.processors.utils.InputFeatures ## GLUE [ไธ€่ˆฌ่จ€่ชž็†่งฃ่ฉ•ไพก (GLUE)](https://gluebenchmark.com/) ใฏใ€ ๆ—ขๅญ˜ใฎ NLU ใ‚ฟใ‚นใ‚ฏใฎๅคšๆง˜ใชใ‚ปใƒƒใƒˆใซใ‚ใŸใ‚‹ใƒขใƒ‡ใƒซใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ€‚็ด™ใจๅŒๆ™‚็™บๅฃฒใ•ใ‚ŒใŸ [GLUE: A ่‡ช็„ถ่จ€่ชž็†่งฃใฎใŸใ‚ใฎใƒžใƒซใƒใ‚ฟใ‚นใ‚ฏใƒ™ใƒณใƒใƒžใƒผใ‚ฏใŠใ‚ˆใณๅˆ†ๆžใƒ—ใƒฉใƒƒใƒˆใƒ•ใ‚ฉใƒผใƒ ](https://openreview.net/pdf?id=rJ4km2R5t7) ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€MRPCใ€MNLIใ€MNLI (ไธไธ€่‡ด)ใ€CoLAใ€SST2ใ€STSBใ€ QQPใ€QNLIใ€RTEใ€WNLIใ€‚ ใใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ - [`~data.processors.utils.MrpcProcessor`] - [`~data.processors.utils.MnliProcessor`] - [`~data.processors.utils.MnliMismatchedProcessor`] - [`~data.processors.utils.Sst2Processor`] - [`~data.processors.utils.StsbProcessor`] - [`~data.processors.utils.QqpProcessor`] - [`~data.processors.utils.QnliProcessor`] - [`~data.processors.utils.RteProcessor`] - [`~data.processors.utils.WnliProcessor`] ใ•ใ‚‰ใซใ€ๆฌกใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ‡ใƒผใ‚ฟ ใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ๅ€คใ‚’ใƒญใƒผใƒ‰ใ—ใ€ใใ‚Œใ‚‰ใ‚’ใƒชใ‚นใƒˆใซๅค‰ๆ›ใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ [`~data.processors.utils.InputExample`]ใ€‚ [[autodoc]] 
data.processors.glue.glue_convert_examples_to_features ## XNLI [ใ‚ฏใƒญใ‚นใƒชใƒณใ‚ฌใƒซ NLI ใ‚ณใƒผใƒ‘ใ‚น (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) ใฏใ€ ่จ€่ชžใ‚’่ถ…ใˆใŸใƒ†ใ‚ญใ‚นใƒˆ่กจ็พใฎๅ“่ณชใ€‚ XNLI ใฏใ€[*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/) ใซๅŸบใฅใใ‚ฏใƒฉใ‚ฆใƒ‰ใ‚ฝใƒผใ‚นใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใ™ใ€‚ใƒ†ใ‚ญใ‚นใƒˆใฎใƒšใ‚ขใซใฏใ€15 ๅ€‹ใฎใƒ†ใ‚ญใ‚นใƒˆๅซๆ„ใ‚ขใƒŽใƒ†ใƒผใ‚ทใƒงใƒณใŒใƒฉใƒ™ใƒซไป˜ใ‘ใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ใ•ใพใ–ใพใช่จ€่ชž (่‹ฑ่ชžใชใฉใฎ้ซ˜ใƒชใ‚ฝใƒผใ‚น่จ€่ชžใจใ‚นใƒฏใƒ’ใƒช่ชžใชใฉใฎไฝŽใƒชใ‚ฝใƒผใ‚น่จ€่ชžใฎไธกๆ–นใ‚’ๅซใ‚€)ใ€‚ ่ซ–ๆ–‡ [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) ใจๅŒๆ™‚ใซใƒชใƒชใƒผใ‚นใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€XNLI ใƒ‡ใƒผใ‚ฟใ‚’ใƒญใƒผใƒ‰ใ™ใ‚‹ใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ใƒ›ใ‚นใƒˆใ—ใพใ™ใ€‚ - [`~data.processors.utils.XnliProcessor`] ใƒ†ใ‚นใƒˆใ‚ปใƒƒใƒˆใซใฏใ‚ดใƒผใƒซใƒ‰ใƒฉใƒ™ใƒซใŒไป˜ใ„ใฆใ„ใ‚‹ใŸใ‚ใ€่ฉ•ไพกใฏใƒ†ใ‚นใƒˆใ‚ปใƒƒใƒˆใง่กŒใ‚ใ‚Œใพใ™ใฎใงใ”ไบ†ๆ‰ฟใใ ใ•ใ„ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ™ใ‚‹ไพ‹ใฏใ€[run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) ใ‚นใ‚ฏใƒชใƒ—ใƒˆใซ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ## SQuAD [The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) ใฏใ€ๆฌกใฎใƒ™ใƒณใƒใƒžใƒผใ‚ฏใงใ™ใ€‚ ่ณชๅ•ๅฟœ็ญ”ใซ้–ขใ™ใ‚‹ใƒขใƒ‡ใƒซใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’่ฉ•ไพกใ—ใพใ™ใ€‚ v1.1 ใจ v2.0 ใฎ 2 ใคใฎใƒใƒผใ‚ธใƒงใƒณใŒๅˆฉ็”จๅฏ่ƒฝใงใ™ใ€‚ๆœ€ๅˆใฎใƒใƒผใ‚ธใƒงใƒณ (v1.1) ใฏใ€่ซ–ๆ–‡ [SQuAD: 100,000+ question for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250) ใจใจใ‚‚ใซใƒชใƒชใƒผใ‚นใ•ใ‚Œใพใ—ใŸใ€‚ 2 ็•ช็›ฎใฎใƒใƒผใ‚ธใƒงใƒณ (v2.0) ใฏใ€่ซ–ๆ–‡ [Know What You Don't ใจๅŒๆ™‚ใซใƒชใƒชใƒผใ‚นใ•ใ‚Œใพใ—ใŸใ€‚ ็ŸฅใฃใฆใŠใในใ: SQuAD ใฎ็ญ”ใˆใ‚‰ใ‚Œใชใ„่ณชๅ•](https://arxiv.org/abs/1806.03822)ใ€‚ ใ“ใฎใƒฉใ‚คใƒ–ใƒฉใƒชใฏใ€ๆฌกใฎ 2 ใคใฎใƒใƒผใ‚ธใƒงใƒณใฎใใ‚Œใžใ‚Œใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ใƒ›ใ‚นใƒˆใ—ใพใ™ใ€‚ ### Processors ใใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ - [`~data.processors.utils.SquadV1Processor`] - [`~data.processors.utils.SquadV2Processor`] ใฉใกใ‚‰ใ‚‚ๆŠฝ่ฑกใ‚ฏใƒฉใ‚น [`~data.processors.utils.SquadProcessor`] ใ‚’็ถ™ๆ‰ฟใ—ใฆใ„ใพใ™ใ€‚ [[autodoc]] data.processors.squad.SquadProcessor - all ใ•ใ‚‰ใซใ€ๆฌกใฎใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใ€SQuAD ใฎไพ‹ใ‚’ๆฌกใฎๅฝขๅผใซๅค‰ๆ›ใงใใพใ™ใ€‚ ใƒขใƒ‡ใƒซใฎๅ…ฅๅŠ›ใจใ—ใฆไฝฟ็”จใงใใ‚‹ [`~data.processors.utils.SquadFeatures`]ใ€‚ [[autodoc]] data.processors.squad.squad_convert_examples_to_features ใ“ใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใจๅ‰่ฟฐใฎๆ–นๆณ•ใฏใ€ใƒ‡ใƒผใ‚ฟใ‚’ๅซใ‚€ใƒ•ใ‚กใ‚คใƒซใ ใ‘ใงใชใใ€ *tensorflow_datasets* ใƒ‘ใƒƒใ‚ฑใƒผใ‚ธใ€‚ไปฅไธ‹ใซไพ‹ใ‚’็คบใ—ใพใ™ใ€‚ ### Example usage ไปฅไธ‹ใซใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ—ใŸไพ‹ใจใ€ใƒ‡ใƒผใ‚ฟ ใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟ็”จใ—ใŸๅค‰ๆ›ๆ–นๆณ•ใ‚’็คบใ—ใพใ™ใ€‚ ```python # Loading a V2 processor processor = SquadV2Processor() examples = processor.get_dev_examples(squad_v2_data_dir) # Loading a V1 processor processor = SquadV1Processor() examples = processor.get_dev_examples(squad_v1_data_dir) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) ``` *tensorflow_datasets* ใฎไฝฟ็”จใฏใ€ใƒ‡ใƒผใ‚ฟ ใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟ็”จใ™ใ‚‹ใฎใจๅŒใ˜ใใ‚‰ใ„็ฐกๅ˜ใงใ™ใ€‚ ```python # tensorflow_datasets only handle Squad V1. 
tfds_examples = tfds.load("squad") examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=max_seq_length, doc_stride=args.doc_stride, max_query_length=max_query_length, is_training=not evaluate, ) ``` ใ“ใ‚Œใ‚‰ใฎใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‚’ไฝฟ็”จใ™ใ‚‹ๅˆฅใฎไพ‹ใฏใ€[run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) ใ‚นใ‚ฏใƒชใƒ—ใƒˆใซ็คบใ•ใ‚Œใฆใ„ใพใ™ใ€‚
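なお、GLUE プロセッサも上記の SQuAD の例と同じ流れで使用できます。以下は想定に基づく最小限のスケッチで、`mrpc_data_dir` は MRPC のデータファイルを置いたディレクトリを指す仮の変数、トークナイザー名もあくまで一例です:

```python
from transformers import AutoTokenizer
from transformers.data.processors.glue import MrpcProcessor, glue_convert_examples_to_features

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# MRPC の検証データを InputExample のリストとして読み込む(mrpc_data_dir は仮定)
processor = MrpcProcessor()
examples = processor.get_dev_examples(mrpc_data_dir)

# InputExample をモデル入力用の InputFeatures に変換する
features = glue_convert_examples_to_features(
    examples,
    tokenizer,
    max_length=128,
    task="mrpc",
)
```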
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BertGeneration ## Overview BertGeneration ใƒขใƒ‡ใƒซใฏใ€ๆฌกใ‚’ไฝฟ็”จใ—ใฆใ‚ทใƒผใ‚ฑใƒณใ‚น้–“ใฎใ‚ฟใ‚นใ‚ฏใซๅˆฉ็”จใงใใ‚‹ BERT ใƒขใƒ‡ใƒซใงใ™ใ€‚ [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) ใงๆๆกˆใ•ใ‚Œใฆใ„ใ‚‹ [`EncoderDecoderModel`] ใ‚ฟใ‚นใ‚ฏใ€Sascha Rotheใ€Sishi Nagayanใ€Aliaksei Severyn ่‘—ใ€‚ ่ซ–ๆ–‡ใฎ่ฆ็ด„ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ *ๅคง่ฆๆจกใชใƒ‹ใƒฅใƒผใƒฉใƒซ ใƒขใƒ‡ใƒซใฎๆ•™ๅธซใชใ—ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฏใ€ๆœ€่ฟ‘ใ€่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใซ้ฉๅ‘ฝใ‚’ใ‚‚ใŸใ‚‰ใ—ใพใ—ใŸใ€‚ใซใ‚ˆใ‚‹ NLP ๅฎŸ่ทต่€…ใฏใ€ๅ…ฌ้–‹ใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‹ใ‚‰ใ‚ฆใ‚ฉใƒผใƒ ใ‚นใ‚ฟใƒผใƒˆใ—ใฆใ€่ค‡ๆ•ฐใฎ้ …็›ฎใงๆœ€ๅ…ˆ็ซฏใฎๆŠ€่ก“ใ‚’ๆŽจ้€ฒใ—ใฆใใพใ—ใŸใ€‚ ใ‚ณใƒณใƒ”ใƒฅใƒผใƒ†ใ‚ฃใƒณใ‚ฐๆ™‚้–“ใ‚’ๅคงๅน…ใซ็ฏ€็ด„ใ—ใชใŒใ‚‰ใƒ™ใƒณใƒใƒžใƒผใ‚ฏใ‚’ๅฎŸ่กŒใ—ใพใ™ใ€‚ใ“ใ‚Œใพใงใฎใจใ“ใ‚ใ€ไธปใซ่‡ช็„ถ่จ€่ชžใซ็„ฆ็‚นใ‚’ๅฝ“ใฆใฆใใพใ—ใŸใ€‚ ใ‚ฟใ‚นใ‚ฏใ‚’็†่งฃใ™ใ‚‹ใ€‚ใ“ใฎ่ซ–ๆ–‡ใงใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚น็”ŸๆˆใฎใŸใ‚ใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎๆœ‰ๅŠนๆ€งใ‚’ๅฎŸ่จผใ—ใพใ™ใ€‚็งใŸใกใฏ ๅ…ฌ้–‹ใ•ใ‚Œใฆใ„ใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟ BERT ใจไบ’ๆ›ๆ€งใฎใ‚ใ‚‹ Transformer ใƒ™ใƒผใ‚นใฎใ‚ทใƒผใ‚ฑใƒณใ‚น้–“ใƒขใƒ‡ใƒซใ‚’้–‹็™บใ—ใพใ—ใŸใ€‚ GPT-2 ใŠใ‚ˆใณ RoBERTa ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใ€ใƒขใƒ‡ใƒซใฎๅˆๆœŸๅŒ–ใฎๆœ‰็”จๆ€งใซใคใ„ใฆๅบƒ็ฏ„ใชๅฎŸ่จผ็ ”็ฉถใ‚’ๅฎŸๆ–ฝใ—ใพใ—ใŸใ€‚ ใ‚จใƒณใ‚ณใƒผใƒ€ใจใƒ‡ใ‚ณใƒผใƒ€ใ€ใ“ใ‚Œใ‚‰ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ€‚็งใŸใกใฎใƒขใƒ‡ใƒซใฏใ€ๆฉŸๆขฐ็ฟป่จณใซ้–ขใ™ใ‚‹ๆ–ฐใ—ใ„ๆœ€ๅ…ˆ็ซฏใฎ็ตๆžœใ‚’ใ‚‚ใŸใ‚‰ใ—ใพใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆใฎ่ฆ็ด„ใ€ๆ–‡ใฎๅˆ†ๅ‰ฒใ€ใŠใ‚ˆใณๆ–‡ใฎ่žๅˆใ€‚* ## Usage examples and tips - ใƒขใƒ‡ใƒซใ‚’ [`EncoderDecoderModel`] ใจ็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จโ€‹โ€‹ใ—ใฆใ€2 ใคใฎไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’ๆดป็”จใงใใพใ™ใ€‚ ๅพŒ็ถšใฎๅพฎ่ชฟๆ•ดใฎใŸใ‚ใฎ BERT ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ€‚ ```python >>> # leverage checkpoints for Bert2Bert model... >>> # use BERT's cls token as BOS token and sep token as EOS token >>> encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102) >>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token >>> decoder = BertGenerationDecoder.from_pretrained( ... "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102 ... ) >>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder) >>> # create tokenizer... >>> tokenizer = BertTokenizer.from_pretrained("bert-large-uncased") >>> input_ids = tokenizer( ... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt" ... ).input_ids >>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids >>> # train... 
- Pretrained [`EncoderDecoderModel`] checkpoints are also directly available on the model hub:

```python
>>> from transformers import AutoTokenizer, EncoderDecoderModel

>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

>>> input_ids = tokenizer(
...     "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids

>>> outputs = sentence_fuser.generate(input_ids)

>>> print(tokenizer.decode(outputs[0]))
```

Tips:

- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoder`].
- For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the
  input. Therefore, no EOS token should be added to the end of the input.

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder).

## BertGenerationConfig

[[autodoc]] BertGenerationConfig

## BertGenerationTokenizer

[[autodoc]] BertGenerationTokenizer
    - save_vocabulary

## BertGenerationEncoder

[[autodoc]] BertGenerationEncoder
    - forward

## BertGenerationDecoder

[[autodoc]] BertGenerationDecoder
    - forward
transformers/docs/source/ja/model_doc/bert-generation.md/0
{ "file_path": "transformers/docs/source/ja/model_doc/bert-generation.md", "repo_id": "transformers", "token_count": 1962 }
259
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ByT5 ## Overview ByT5 ใƒขใƒ‡ใƒซใฏใ€[ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. ่ซ–ๆ–‡ใฎ่ฆ็ด„ใฏๆฌกใฎใจใŠใ‚Šใงใ™ใ€‚ *ๆœ€ใ‚‚ๅบƒใไฝฟ็”จใ•ใ‚Œใฆใ„ใ‚‹ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟ่จ€่ชžใƒขใƒ‡ใƒซใฏใ€ๅ˜่ชžใพใŸใฏใ‚ตใƒ–ใƒฏใƒผใƒ‰ๅ˜ไฝใซๅฏพๅฟœใ™ใ‚‹ใƒˆใƒผใ‚ฏใƒณใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใงๅ‹•ไฝœใ—ใพใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณใฎใ‚ทใƒผใ‚ฑใƒณใ‚นใจใ—ใฆใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ใซใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใŒๅฟ…่ฆใงใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฏ้€šๅธธใ€ ใƒขใƒ‡ใƒซใ€‚ไปฃใ‚ใ‚Šใซ็”Ÿใฎใƒ†ใ‚ญใ‚นใƒˆ (ใƒใ‚คใƒˆใพใŸใฏๆ–‡ๅญ—) ใ‚’็›ดๆŽฅๆ“ไฝœใ™ใ‚‹ใƒˆใƒผใ‚ฏใƒณใƒ•ใƒชใƒผ ใƒขใƒ‡ใƒซใซใฏๅคšใใฎๅˆฉ็‚นใŒใ‚ใ‚Šใพใ™ใ€‚ ใ™ใใซไฝฟ็”จใงใใ‚‹ใ‚ใ‚‰ใ‚†ใ‚‹่จ€่ชžใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’ๅ‡ฆ็†ใงใใ€ใƒŽใ‚คใ‚บใซๅฏพใ—ใฆใ‚ˆใ‚Šๅ …็‰ขใงใ‚ใ‚Šใ€ๆŠ€่ก“็š„่ฒ ๅ‚ตใ‚’ๆœ€ๅฐ้™ใซๆŠ‘ใˆใพใ™ใ€‚ ่ค‡้›‘ใงใ‚จใƒฉใƒผใŒ็™บ็”Ÿใ—ใ‚„ใ™ใ„ใƒ†ใ‚ญใ‚นใƒˆๅ‰ๅ‡ฆ็†ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚ใƒใ‚คใƒˆใพใŸใฏๆ–‡ๅญ—ๅˆ—ใŒใƒˆใƒผใ‚ฏใƒณใ‚ˆใ‚Š้•ทใ„ใŸใ‚ ใƒˆใƒผใ‚ฏใƒณใƒ•ใƒชใƒผ ใƒขใƒ‡ใƒซใซ้–ขใ™ใ‚‹้ŽๅŽปใฎ็ ”็ฉถใงใฏใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใ‚ณใ‚นใƒˆใ‚’ๅ„Ÿๅดใ™ใ‚‹ใ‚ˆใ†ใซ่จญ่จˆใ•ใ‚ŒใŸๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒๅฐŽๅ…ฅใ•ใ‚Œใ‚‹ใ“ใจใŒใ‚ˆใใ‚ใ‚Šใพใ—ใŸใ€‚ ็”Ÿใฎใƒ†ใ‚ญใ‚นใƒˆใ‚’็›ดๆŽฅๆ“ไฝœใ—ใพใ™ใ€‚ใ“ใฎ่ซ–ๆ–‡ใงใฏใ€ๆจ™ๆบ–็š„ใช Transformer ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใŒๆฌกใฎใ‚ˆใ†ใชใ‚‚ใฎใงไฝฟ็”จใงใใ‚‹ใ“ใจใ‚’็คบใ—ใพใ™ใ€‚ ใƒใ‚คใƒˆใ‚ทใƒผใ‚ฑใƒณใ‚นใ‚’ๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใฎๆœ€ๅฐ้™ใฎๅค‰ๆ›ดใ€‚ใƒ‘ใƒฉใƒกใƒผใ‚ฟๆ•ฐใฎ่ฆณ็‚นใ‹ใ‚‰ใƒˆใƒฌใƒผใƒ‰ใ‚ชใƒ•ใ‚’ๆณจๆ„ๆทฑใ็‰นๅพดไป˜ใ‘ใพใ™ใ€‚ FLOP ใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๆŽจ่ซ–้€Ÿๅบฆใ‚’่ชฟในใ€ใƒใ‚คใƒˆใƒฌใƒ™ใƒซใฎใƒขใƒ‡ใƒซใŒใƒˆใƒผใ‚ฏใƒณใƒฌใƒ™ใƒซใจ็ซถๅˆใงใใ‚‹ใ“ใจใ‚’็คบใ—ใพใ™ใ€‚ ๅฏพๅฟœ่€…ใ€‚ใพใŸใ€ใƒใ‚คใƒˆใƒฌใƒ™ใƒซใฎใƒขใƒ‡ใƒซใฏใƒŽใ‚คใ‚บใซๅฏพใ—ใฆๅคงๅน…ใซๅ …็‰ขใงใ‚ใ‚Šใ€ใ‚ˆใ‚Šๅ„ชใ‚ŒใŸใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใ‚’็™บๆฎใ™ใ‚‹ใ“ใจใ‚‚็คบใ—ใฆใ„ใพใ™ใ€‚ ใ‚นใƒšใƒซใจ็™บ้Ÿณใซๆ•ๆ„Ÿใชใ‚ฟใ‚นใ‚ฏใ€‚็งใŸใกใฎ่ฒข็Œฎใฎไธ€็’ฐใจใ—ใฆใ€ๆ–ฐใ—ใ„ใ‚ปใƒƒใƒˆใ‚’ใƒชใƒชใƒผใ‚นใ—ใพใ™ใ€‚ T5 ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใซๅŸบใฅใ„ใŸไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆธˆใฟใฎใƒใ‚คใƒˆใƒฌใƒ™ใƒซใฎ Transformer ใƒขใƒ‡ใƒซใจใ€ใใ“ใงไฝฟ็”จใ•ใ‚Œใ‚‹ใ™ในใฆใฎใ‚ณใƒผใƒ‰ใจใƒ‡ใƒผใ‚ฟ ๅฎŸ้จ“ใ€‚* ใ“ใฎใƒขใƒ‡ใƒซใฏใ€[patrickvonplaten](https://huggingface.co/patrickvonplaten) ใซใ‚ˆใฃใฆๆไพ›ใ•ใ‚Œใพใ—ใŸใ€‚ๅ…ƒใฎใ‚ณใƒผใƒ‰ใฏๆฌกใฎใจใŠใ‚Šใงใ™ [ใ“ใ“](https://github.com/google-research/byt5) ใซใ‚ใ‚Šใพใ™ใ€‚ <Tip> ByT5 ใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใฏ T5v1.1 ใƒขใƒ‡ใƒซใซๅŸบใฅใ„ใฆใ„ใพใ™ใ€‚API ใƒชใƒ•ใ‚กใƒฌใƒณใ‚นใซใคใ„ใฆใฏใ€[T5v1.1 ใฎใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ ใƒšใƒผใ‚ธ](t5v1.1) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ๅฝผใ‚‰ใฏ 
ใƒขใƒ‡ใƒซใฎๅ…ฅๅŠ›ใ‚’ๆบ–ๅ‚™ใ™ใ‚‹ๆ–นๆณ•ใŒ็•ฐใชใ‚‹ใ ใ‘ใงใ™ใ€‚ไปฅไธ‹ใฎใ‚ณใƒผใƒ‰ไพ‹ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ </Tip> ByT5 ใฏๆ•™ๅธซใชใ—ใงไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ๅ˜ไธ€ใ‚ฟใ‚นใ‚ฏไธญใซใ‚ฟใ‚นใ‚ฏ ใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ๅˆฉ็‚นใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๅพฎ่ชฟๆ•ดใ€‚ใƒžใƒซใƒใ‚ฟใ‚นใ‚ฏใฎๅพฎ่ชฟๆ•ดใ‚’่กŒใ†ๅ ดๅˆใฏใ€ใƒ—ใƒฌใƒ•ใ‚ฃใƒƒใ‚ฏใ‚นใ‚’ไฝฟ็”จใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ## Usage Examples ByT5 ใฏ็”Ÿใฎ UTF-8 ใƒใ‚คใƒˆใงๅ‹•ไฝœใ™ใ‚‹ใŸใ‚ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใชใ—ใงไฝฟ็”จใงใใพใ™ใ€‚ ```python >>> from transformers import T5ForConditionalGeneration >>> import torch >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> num_special_tokens = 3 >>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. >>> # => Need to shift utf-8 character encodings by 3 before passing ids to model. >>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens >>> labels = torch.tensor([list("La vie est comme une boรฎte de chocolat.".encode("utf-8"))]) + num_special_tokens >>> loss = model(input_ids, labels=labels).loss >>> loss.item() 2.66 ``` ใŸใ ใ—ใ€ใƒใƒƒใƒๆŽจ่ซ–ใจใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใฎๅ ดๅˆใฏใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ ```python >>> from transformers import T5ForConditionalGeneration, AutoTokenizer >>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") >>> model_inputs = tokenizer( ... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ... ) >>> labels_dict = tokenizer( ... ["La vie est comme une boรฎte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ... ) >>> labels = labels_dict.input_ids >>> loss = model(**model_inputs, labels=labels).loss >>> loss.item() 17.9 ``` [T5](t5) ใจๅŒๆง˜ใซใ€ByT5 ใฏใ‚นใƒ‘ใƒณใƒžใ‚นใ‚ฏใƒŽใ‚คใ‚บ้™คๅŽปใ‚ฟใ‚นใ‚ฏใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ—ใŸใ€‚ใ—ใ‹ใ—ใ€ ใƒขใƒ‡ใƒซใฏใ‚ญใƒฃใƒฉใ‚ฏใ‚ฟใƒผใซ็›ดๆŽฅไฝœ็”จใ™ใ‚‹ใŸใ‚ใ€ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ฟใ‚นใ‚ฏใฏๅฐ‘ใ—่ค‡้›‘ใงใ™ ้•ใ†ใ€‚ใฎใ„ใใคใ‹ใฎๆ–‡ๅญ—ใ‚’็ ดๆใ—ใฆใฟใพใ—ใ‚‡ใ† `"The dog chases a ball in the park."`ใจใ„ใ†ๆ–‡ใ‚’ๅ…ฅๅŠ›ใ—ใ€ByT5 ใซไบˆๆธฌใ—ใฆใ‚‚ใ‚‰ใ„ใพใ™ใ€‚ ใ‚ใŸใ—ใŸใกใฎใŸใ‚ใ€‚ ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") >>> input_ids_prompt = "The dog chases a ball in the park." >>> input_ids = tokenizer(input_ids_prompt).input_ids >>> # Note that we cannot add "{extra_id_...}" to the string directly >>> # as the Byte tokenizer would incorrectly merge the tokens >>> # For ByT5, we need to work directly on the character level >>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead >>> # uses final utf character ids. >>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. >>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. >>> # => mask to "The dog [258]a ball [257]park." 
>>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) >>> input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) >>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`. >>> output_ids = model.generate(input_ids, max_length=100)[0].tolist() >>> output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] >>> # ^- Note how 258 descends to 257, 256, 255 >>> # Now we need to split on the sentinel tokens, let's write a short loop for this >>> output_ids_list = [] >>> start_token = 0 >>> sentinel_token = 258 >>> while sentinel_token in output_ids: ... split_idx = output_ids.index(sentinel_token) ... output_ids_list.append(output_ids[start_token:split_idx]) ... start_token = split_idx ... sentinel_token -= 1 >>> output_ids_list.append(output_ids[start_token:]) >>> output_string = tokenizer.batch_decode(output_ids_list) >>> output_string ['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ``` ## ByT5Tokenizer [[autodoc]] ByT5Tokenizer ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[`ByT5Tokenizer`] ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚
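参考までに、最初のトークナイザーなしの例の続きとして、生成された ID から 3 つの特殊トークン分のオフセットを取り除いてバイト列に戻し、テキストとしてデコードする最小限のスケッチを示します(公式 API の一部ではなく、あくまで説明用の例です。`max_length` などの値は仮のものです):

```python
>>> import torch
>>> from transformers import T5ForConditionalGeneration

>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> num_special_tokens = 3

>>> # 入力テキストを UTF-8 バイトにエンコードし、特殊トークン分 (3) だけ ID をシフトします
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens

>>> output_ids = model.generate(input_ids, max_length=40)[0].tolist()

>>> # 特殊トークン (< 3) とセンチネル (>= 259) を取り除き、+3 のオフセットを元に戻してからバイト列としてデコードします
>>> byte_values = [i - num_special_tokens for i in output_ids if num_special_tokens <= i < 259]
>>> print(bytes(byte_values).decode("utf-8", errors="ignore"))
```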
transformers/docs/source/ja/model_doc/byt5.md/0
{ "file_path": "transformers/docs/source/ja/model_doc/byt5.md", "repo_id": "transformers", "token_count": 3268 }
260
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# CTRL

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=ctrl">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>

## Overview

The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858)
by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal
(unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with
the first token reserved as a control code (such as Links, Books, Wikipedia etc.).

The abstract from the paper is the following:

*Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning
while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.*

This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
[here](https://github.com/salesforce/ctrl).

## Usage tips

- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences
  or links to generate coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for
  more information.
- CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
  the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as it can be
  observed in the *run_generation.py* example script.
- The PyTorch models can take the `past_key_values` as input, which is the previously computed key/value attention
  pairs. TensorFlow models accept `past` as input. Using the `past_key_values` value prevents the model from
  re-computing pre-computed values in the context of text generation. See the
  [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) method for more information on the usage of this argument.
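As a rough illustration of the tips above — starting the prompt with a control code and letting generation reuse the
cached key/value pairs — here is a minimal sketch. The `Salesforce/ctrl` checkpoint name, the "Links" control code and
the generation settings are assumptions for illustration only; the full model is large, so this is not meant as a
tuned recipe.

```python
>>> from transformers import CTRLLMHeadModel, CTRLTokenizer

>>> tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

>>> # Start the prompt with a control code ("Links") so CTRL produces coherent text.
>>> input_ids = tokenizer("Links Hello, my dog is cute", return_tensors="pt").input_ids

>>> # `generate` caches `past_key_values` internally, so previously computed
>>> # key/value attention pairs are not recomputed at every decoding step.
>>> output_ids = model.generate(input_ids, max_new_tokens=20, do_sample=False)
>>> print(tokenizer.decode(output_ids[0]))
```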
ใฏๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ (CLM) ใฎ็›ฎ็š„ใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใฆใ„ใ‚‹ใŸใ‚ใ€ๆฌกใฎไบˆๆธฌใซๅผทๅŠ›ใงใ™ใ€‚ ใ‚ทใƒผใ‚ฑใƒณใ‚นๅ†…ใฎใƒˆใƒผใ‚ฏใƒณใ€‚ใ“ใฎๆฉŸ่ƒฝใ‚’ๅˆฉ็”จใ™ใ‚‹ใจใ€CTRL ใฏๆง‹ๆ–‡็š„ใซไธ€่ฒซใ—ใŸใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ *run_generation.py* ใ‚ตใƒณใƒ—ใƒซ ใ‚นใ‚ฏใƒชใƒ—ใƒˆใง็ขบ่ชใงใใพใ™ใ€‚ - PyTorch ใƒขใƒ‡ใƒซใฏใ€ไปฅๅ‰ใซ่จˆ็ฎ—ใ•ใ‚ŒใŸใ‚ญใƒผใจๅ€คใฎใ‚ขใƒ†ใƒณใ‚ทใƒงใƒณ ใƒšใ‚ขใงใ‚ใ‚‹`past_key_values`ใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ–ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ TensorFlow ใƒขใƒ‡ใƒซใฏ`past`ใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ…ฅใ‚Œใพใ™ใ€‚ `past_key_values`ๅ€คใ‚’ไฝฟ็”จใ™ใ‚‹ใจใ€ใƒขใƒ‡ใƒซใŒๅ†่จˆ็ฎ—ใ•ใ‚Œใชใใชใ‚Šใพใ™ใ€‚ ใƒ†ใ‚ญใ‚นใƒˆ็”Ÿๆˆใฎใ‚ณใƒณใƒ†ใ‚ญใ‚นใƒˆใงไบ‹ๅ‰ใซ่จˆ็ฎ—ใ•ใ‚ŒใŸๅ€คใ€‚ [`forward`](model_doc/ctrl#transformers.CTRLModel.forward) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ใ“ใฎๅผ•ๆ•ฐใฎไฝฟ็”จๆณ•ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ## Resources - [ใƒ†ใ‚ญใ‚นใƒˆๅˆ†้กžใ‚ฟใ‚นใ‚ฏใ‚ฌใ‚คใƒ‰](../tasks/sequence_classification) - [ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ ใ‚ฟใ‚นใ‚ฏ ใ‚ฌใ‚คใƒ‰](../tasks/language_modeling) ## CTRLConfig [[autodoc]] CTRLConfig ## CTRLTokenizer [[autodoc]] CTRLTokenizer - save_vocabulary <frameworkcontent> <pt> ## CTRLModel [[autodoc]] CTRLModel - forward ## CTRLLMHeadModel [[autodoc]] CTRLLMHeadModel - forward ## CTRLForSequenceClassification [[autodoc]] CTRLForSequenceClassification - forward </pt> <tf> ## TFCTRLModel [[autodoc]] TFCTRLModel - call ## TFCTRLLMHeadModel [[autodoc]] TFCTRLLMHeadModel - call ## TFCTRLForSequenceClassification [[autodoc]] TFCTRLForSequenceClassification - call </tf> </frameworkcontent>
transformers/docs/source/ja/model_doc/ctrl.md/0
{ "file_path": "transformers/docs/source/ja/model_doc/ctrl.md", "repo_id": "transformers", "token_count": 2118 }
261
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๆŽจ่ซ–ใฎใŸใ‚ใฎๅคš่จ€่ชžใƒขใƒ‡ใƒซ [[open-in-colab]] ๐Ÿค— Transformers ใซใฏใ„ใใคใ‹ใฎๅคš่จ€่ชžใƒขใƒ‡ใƒซใŒใ‚ใ‚Šใ€ใใ‚Œใ‚‰ใฎๆŽจ่ซ–ใฎไฝฟ็”จๆ–นๆณ•ใฏๅ˜ไธ€่จ€่ชžใƒขใƒ‡ใƒซใจใฏ็•ฐใชใ‚Šใพใ™ใ€‚ใŸใ ใ—ใ€ๅคš่จ€่ชžใƒขใƒ‡ใƒซใฎไฝฟ็”จๆ–นๆณ•ใŒใ™ในใฆ็•ฐใชใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) ใชใฉใฎไธ€้ƒจใฎใƒขใƒ‡ใƒซใฏใ€ๅ˜ไธ€่จ€่ชžใƒขใƒ‡ใƒซใจๅŒๆง˜ใซไฝฟ็”จใงใใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€ๆŽจ่ซ–ใฎใŸใ‚ใซไฝฟ็”จๆ–นๆณ•ใŒ็•ฐใชใ‚‹ๅคš่จ€่ชžใƒขใƒ‡ใƒซใ‚’ใฉใฎใ‚ˆใ†ใซไฝฟใ†ใ‹ใ‚’็คบใ—ใพใ™ใ€‚ ## XLM XLM ใซใฏ10ใฎ็•ฐใชใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒใ‚ใ‚Šใ€ใใฎใ†ใกใฎ1ใคใ ใ‘ใŒๅ˜ไธ€่จ€่ชžใงใ™ใ€‚ ๆฎ‹ใ‚Šใฎ9ใคใฎใƒขใƒ‡ใƒซใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฏใ€่จ€่ชžๅŸ‹ใ‚่พผใฟใ‚’ไฝฟ็”จใ™ใ‚‹ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจไฝฟ็”จใ—ใชใ„ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใฎ2ใคใฎใ‚ซใƒ†ใ‚ดใƒชใซๅˆ†ใ‘ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ### ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใŒใ‚ใ‚‹ XLM ๆฌกใฎ XLM ใƒขใƒ‡ใƒซใฏใ€่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ไฝฟ็”จใ—ใฆใ€ๆŽจ่ซ–ใงไฝฟ็”จใ•ใ‚Œใ‚‹่จ€่ชžใ‚’ๆŒ‡ๅฎšใ—ใพใ™ใ€‚ - `xlm-mlm-ende-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ‰ใ‚คใƒ„่ชž) - `xlm-mlm-enfr-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž) - `xlm-mlm-enro-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒซใƒผใƒžใƒ‹ใ‚ข่ชž) - `xlm-mlm-xnli15-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€XNLI ่จ€่ชž) - `xlm-mlm-tlm-xnli15-1024` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ็ฟป่จณ + XNLI ่จ€่ชž) - `xlm-clm-enfr-1024` (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž) - `xlm-clm-ende-1024` (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ‰ใ‚คใƒ„่ชž) ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใฏใ€ใƒขใƒ‡ใƒซใซๆธกใ•ใ‚Œใ‚‹ `input_ids` ใจๅŒใ˜ๅฝข็Šถใฎใƒ†ใƒณใ‚ฝใƒซใจใ—ใฆ่กจใ•ใ‚Œใพใ™ใ€‚ ใ“ใ‚Œใ‚‰ใฎใƒ†ใƒณใ‚ฝใƒซใฎๅ€คใฏใ€ไฝฟ็”จใ•ใ‚Œใ‚‹่จ€่ชžใซไพๅญ˜ใ—ใ€ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎ `lang2id` ใŠใ‚ˆใณ `id2lang` ๅฑžๆ€งใซใ‚ˆใฃใฆ่ญ˜ๅˆฅใ•ใ‚Œใพใ™ใ€‚ ใ“ใฎไพ‹ใงใฏใ€`xlm-clm-enfr-1024` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ (ๅ› ๆžœ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€่‹ฑ่ชž-ใƒ•ใƒฉใƒณใ‚น่ชž)ใ€‚ ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") ``` ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใฎ `lang2id` ๅฑžๆ€งใฏใ€ใ“ใฎใƒขใƒ‡ใƒซใฎ่จ€่ชžใจใใฎ ID ใ‚’่กจ็คบใ—ใพใ™ใ€‚ ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` ๆฌกใซใ€ๅ…ฅๅŠ›ไพ‹ใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` ่จ€่ชž ID ใ‚’ `en` ใซ่จญๅฎšใ—ใ€ใใ‚Œใ‚’ไฝฟ็”จใ—ใฆ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใฏใ€่‹ฑ่ชžใฎ่จ€่ชž 
ID ใงใ‚ใ‚‹ใŸใ‚ใ€`0` ใงๅŸ‹ใ‚ใ‚‰ใ‚ŒใŸใƒ†ใƒณใ‚ฝใƒซใงใ™ใ€‚ ใ“ใฎใƒ†ใƒณใ‚ฝใƒซใฏ `input_ids` ใจๅŒใ˜ใ‚ตใ‚คใ‚บใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` ใ“ใ‚Œใงใ€`input_ids` ใจ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ใƒขใƒ‡ใƒซใซๆธกใ™ใ“ใจใŒใงใใพใ™ใ€‚ ```py >>> outputs = model(input_ids, langs=langs) ``` [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) ใ‚นใ‚ฏใƒชใƒ—ใƒˆใฏใ€`xlm-clm` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใ€่จ€่ชžใŒๅŸ‹ใ‚่พผใพใ‚ŒใŸใƒ†ใ‚ญใ‚นใƒˆใ‚’็”Ÿๆˆใงใใพใ™ใ€‚ ### ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใŒใชใ„XLM ๆฌกใฎ XLM ใƒขใƒ‡ใƒซใฏใ€ๆŽจ่ซ–ไธญใซ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฟ…่ฆใจใ—ใพใ›ใ‚“ใ€‚ - `xlm-mlm-17-1280` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€17ใฎ่จ€่ชž) - `xlm-mlm-100-1280` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏใ€ไปฅๅ‰ใฎ XLM ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจใฏ็•ฐใชใ‚Šใ€ไธ€่ˆฌ็š„ใชๆ–‡ใฎ่กจ็พใซไฝฟ็”จใ•ใ‚Œใพใ™ใ€‚ ## BERT ไปฅไธ‹ใฎ BERT ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชžใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใพใ™ใ€‚ - `bert-base-multilingual-uncased` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ๆฌกใฎๆ–‡ใฎไบˆๆธฌใ€102ใฎ่จ€่ชž) - `bert-base-multilingual-cased` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐ + ๆฌกใฎๆ–‡ใฎไบˆๆธฌใ€104ใฎ่จ€่ชž) ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใฏใ€ๆŽจ่ซ–ไธญใซ่จ€่ชžใฎๅŸ‹ใ‚่พผใฟใ‚’ๅฟ…่ฆใจใ—ใพใ›ใ‚“ใ€‚ ๆ–‡่„ˆใ‹ใ‚‰่จ€่ชžใ‚’่ญ˜ๅˆฅใ—ใ€ใใ‚Œใซๅฟœใ˜ใฆๆŽจๆธฌใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ## XLM-RoBERTa ๆฌกใฎ XLM-RoBERTa ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชžใ‚ฟใ‚นใ‚ฏใซไฝฟ็”จใงใใพใ™ใ€‚ - `xlm-roberta-base` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) - `xlm-roberta-large` (ใƒžใ‚นใ‚ฏๅŒ–ใ•ใ‚ŒใŸ่จ€่ชžใƒขใƒ‡ใƒชใƒณใ‚ฐใ€100ใฎ่จ€่ชž) XLM-RoBERTa ใฏใ€100ใฎ่จ€่ชžใงๆ–ฐใ—ใไฝœๆˆใŠใ‚ˆใณใ‚ฏใƒชใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ2.5 TB ใฎ CommonCrawl ใƒ‡ใƒผใ‚ฟใงใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚Œใพใ—ใŸใ€‚ ใ“ใ‚Œใฏใ€ๅˆ†้กžใ€ใ‚ทใƒผใ‚ฑใƒณใ‚นใฎใƒฉใƒ™ใƒซไป˜ใ‘ใ€่ณชๅ•ๅฟœ็ญ”ใชใฉใฎใƒ€ใ‚ฆใƒณใ‚นใƒˆใƒชใƒผใƒ ใ‚ฟใ‚นใ‚ฏใงใ€mBERT ใ‚„ XLM ใชใฉใฎไปฅๅ‰ใซใƒชใƒชใƒผใ‚นใ•ใ‚ŒใŸๅคš่จ€่ชžใƒขใƒ‡ใƒซใ‚’ๅคงๅน…ใซๆ”นๅ–„ใ—ใพใ™ใ€‚ ## M2M100 ๆฌกใฎ M2M100 ใƒขใƒ‡ใƒซใฏใ€ๅคš่จ€่ชž็ฟป่จณใซไฝฟ็”จใงใใพใ™ใ€‚ - `facebook/m2m100_418M` (็ฟป่จณ) - `facebook/m2m100_1.2B` (็ฟป่จณ) ใ“ใฎไพ‹ใงใฏใ€`facebook/m2m100_418M` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใฆใ€ไธญๅ›ฝ่ชžใ‹ใ‚‰่‹ฑ่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ‚ฝใƒผใ‚น่จ€่ชžใ‚’่จญๅฎšใงใใพใ™ใ€‚ ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’." 
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™ใ€‚ ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100 ใฏใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชž ID ใ‚’ๅผทๅˆถ็š„ใซใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ ่‹ฑ่ชžใซ็ฟป่จณใ™ใ‚‹ใซใฏใ€`generate` ใƒกใ‚ฝใƒƒใƒ‰ใง `forced_bos_token_id` ใ‚’ `en` ใซ่จญๅฎšใ—ใพใ™ใ€‚ ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' ``` ## MBart ๅคš่จ€่ชž็ฟป่จณใซใฏใ€ๆฌกใฎ MBart ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใพใ™ใ€‚ - `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages) - `facebook/mbart-large-50` (Multilingual translation, 50 languages) - `facebook/mbart-large-cc25` ใ“ใฎไพ‹ใงใฏใ€`facebook/mbart-large-50-many-to-many-mmt` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใฆใ€ใƒ•ใ‚ฃใƒณใƒฉใƒณใƒ‰่ชžใ‚’่‹ฑ่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใงใ‚ฝใƒผใ‚น่จ€่ชžใ‚’่จญๅฎšใงใใพใ™ใ€‚ ```py >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia." >>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI") >>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") ``` ใƒ†ใ‚ญใ‚นใƒˆใ‚’ใƒˆใƒผใ‚ฏใƒณๅŒ–ใ—ใพใ™ใ€‚ ```py >>> encoded_en = tokenizer(en_text, return_tensors="pt") ``` MBart ใฏใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชž ID ใ‚’ๅผทๅˆถ็š„ใซใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชžใซ็ฟป่จณใ—ใพใ™ใ€‚ ่‹ฑ่ชžใซ็ฟป่จณใ™ใ‚‹ใซใฏใ€`generate` ใƒกใ‚ฝใƒƒใƒ‰ใง `forced_bos_token_id` ใ‚’ `en` ใซ่จญๅฎšใ—ใพใ™ใ€‚ ```py >>> generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id("en_XX")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Don't interfere with the wizard's affairs, because they are subtle, will soon get angry." ``` `facebook/mbart-large-50-many-to-one-mmt` ใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ๅ ดๅˆใ€ๆœ€ๅˆใซ็”Ÿๆˆใ•ใ‚ŒใŸใƒˆใƒผใ‚ฏใƒณใจใ—ใฆใ‚ฟใƒผใ‚ฒใƒƒใƒˆ่จ€่ชž ID ใ‚’ๅผทๅˆถใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใใ‚Œไปฅๅค–ใฎๅ ดๅˆใ€ไฝฟ็”จๆ–นๆณ•ใฏๅŒใ˜ใงใ™ใ€‚
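参考までに、この many-to-one チェックポイントを使う場合の最小限のスケッチを示します(上の例と同じ手順を前提とした説明用の例です):

```py
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")

>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")

>>> # many-to-one チェックポイントではターゲット言語 ID(`forced_bos_token_id`)を指定する必要はありません
>>> generated_tokens = model.generate(**encoded_fi)
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```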
transformers/docs/source/ja/multilingual.md/0
{ "file_path": "transformers/docs/source/ja/multilingual.md", "repo_id": "transformers", "token_count": 4086 }
262
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Video classification [[open-in-colab]] ใƒ“ใƒ‡ใ‚ชๅˆ†้กžใฏใ€ใƒ“ใƒ‡ใ‚ชๅ…จไฝ“ใซใƒฉใƒ™ใƒซใพใŸใฏใ‚ฏใƒฉใ‚นใ‚’ๅ‰ฒใ‚Šๅฝ“ใฆใ‚‹ใ‚ฟใ‚นใ‚ฏใงใ™ใ€‚ใƒ“ใƒ‡ใ‚ชใซใฏใ€ๅ„ใƒ“ใƒ‡ใ‚ชใซ 1 ใคใฎใ‚ฏใƒฉใ‚นใฎใฟใŒๅซใพใ‚Œใ‚‹ใ“ใจใŒๆœŸๅพ…ใ•ใ‚Œใพใ™ใ€‚ใƒ“ใƒ‡ใ‚ชๅˆ†้กžใƒขใƒ‡ใƒซใฏใƒ“ใƒ‡ใ‚ชใ‚’ๅ…ฅๅŠ›ใจใ—ใฆๅ—ใ‘ๅ–ใ‚Šใ€ใƒ“ใƒ‡ใ‚ชใŒใฉใฎใ‚ฏใƒฉใ‚นใซๅฑžใ™ใ‚‹ใ‹ใซใคใ„ใฆใฎไบˆๆธฌใ‚’่ฟ”ใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใ€ใƒ“ใƒ‡ใ‚ชใฎๅ†…ๅฎนใ‚’ๅˆ†้กžใงใใพใ™ใ€‚ใƒ“ใƒ‡ใ‚ชๅˆ†้กžใฎๅฎŸ้š›ใฎใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใฏใ‚ขใ‚ฏใ‚ทใƒงใƒณ/ใ‚ขใ‚ฏใƒ†ใ‚ฃใƒ“ใƒ†ใ‚ฃ่ช่ญ˜ใงใ‚ใ‚Šใ€ใƒ•ใ‚ฃใƒƒใƒˆใƒใ‚น ใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใซๅฝน็ซ‹ใกใพใ™ใ€‚ใพใŸใ€่ฆ–่ฆš้šœๅฎณใฎใ‚ใ‚‹ไบบใซใจใฃใฆใ€็‰นใซ้€šๅ‹คๆ™‚ใซๅฝน็ซ‹ใกใพใ™ใ€‚ ใ“ใฎใ‚ฌใ‚คใƒ‰ใงใฏใ€ๆฌกใฎๆ–นๆณ•ใ‚’่ชฌๆ˜Žใ—ใพใ™ใ€‚ 1. [UCF101](https://www.crcv.ucf.edu/) ใฎใ‚ตใƒ–ใ‚ปใƒƒใƒˆใง [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) ใ‚’ๅพฎ่ชฟๆ•ดใ—ใพใ™ใ€‚ data/UCF101.php) ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ€‚ 2. ๅพฎ่ชฟๆ•ดใ—ใŸใƒขใƒ‡ใƒซใ‚’ๆŽจ่ซ–ใซไฝฟ็”จใ—ใพใ™ใ€‚ <Tip> ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใง่ชฌๆ˜Žใ™ใ‚‹ใ‚ฟใ‚นใ‚ฏใฏใ€ๆฌกใฎใƒขใƒ‡ใƒซ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae), [ViViT](../model_doc/vivit) <!--End of the generated tip--> </Tip> ๅง‹ใ‚ใ‚‹ๅ‰ใซใ€ๅฟ…่ฆใชใƒฉใ‚คใƒ–ใƒฉใƒชใŒใ™ในใฆใ‚คใƒณใ‚นใƒˆใƒผใƒซใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใ‚’็ขบ่ชใ—ใฆใใ ใ•ใ„ใ€‚ ```bash pip install -q pytorchvideo transformers evaluate ``` [PyTorchVideo](https://pytorchvideo.org/) (`pytorchvideo` ใจๅ‘ผใฐใ‚Œใพใ™) ใ‚’ไฝฟ็”จใ—ใฆใƒ“ใƒ‡ใ‚ชใ‚’ๅ‡ฆ็†ใ—ใ€ๆบ–ๅ‚™ใ—ใพใ™ใ€‚ ใƒขใƒ‡ใƒซใ‚’ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ—ใฆใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใจๅ…ฑๆœ‰ใงใใ‚‹ใ‚ˆใ†ใซใ€Hugging Face ใ‚ขใ‚ซใ‚ฆใƒณใƒˆใซใƒญใ‚ฐใ‚คใƒณใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใƒ—ใƒญใƒณใƒ—ใƒˆใŒ่กจ็คบใ•ใ‚ŒใŸใ‚‰ใ€ใƒˆใƒผใ‚ฏใƒณใ‚’ๅ…ฅๅŠ›ใ—ใฆใƒญใ‚ฐใ‚คใƒณใ—ใพใ™ใ€‚ ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load UCF101 dataset ใพใšใ€[UCF-101 ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ](https://www.crcv.ucf.edu/data/UCF101.php) ใฎใ‚ตใƒ–ใ‚ปใƒƒใƒˆใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ๅฎŒๅ…จใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใ•ใ‚‰ใซๆ™‚้–“ใ‚’่ฒปใ‚„ใ™ๅ‰ใซใ€ๅฎŸ้จ“ใ—ใฆใ™ในใฆใŒๆฉŸ่ƒฝใ™ใ‚‹ใ“ใจใ‚’็ขบ่ชใ™ใ‚‹ๆฉŸไผšใŒๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` ใ‚ตใƒ–ใ‚ปใƒƒใƒˆใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใŸๅพŒใ€ๅœง็ธฎใ‚ขใƒผใ‚ซใ‚คใƒ–ใ‚’ๆŠฝๅ‡บใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ```py >>> import tarfile >>> 
with tarfile.open(file_path) as t: ... t.extractall(".") ``` ๅคงใพใ‹ใซ่จ€ใ†ใจใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏๆฌกใฎใ‚ˆใ†ใซๆง‹ๆˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` (`sorted`)ใ•ใ‚ŒใŸ ใƒ“ใƒ‡ใ‚ช ใƒ‘ใ‚นใฏๆฌกใฎใ‚ˆใ†ใซ่กจ็คบใ•ใ‚Œใพใ™ใ€‚ ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` ๅŒใ˜ใ‚ฐใƒซใƒผใƒ—/ใ‚ทใƒผใƒณใซๅฑžใ™ใ‚‹ใƒ“ใƒ‡ใ‚ช ใ‚ฏใƒชใƒƒใƒ—ใŒใ‚ใ‚Šใ€ใƒ“ใƒ‡ใ‚ช ใƒ•ใ‚กใ‚คใƒซ ใƒ‘ใ‚นใงใฏใ‚ฐใƒซใƒผใƒ—ใŒ`g`ใง็คบใ•ใ‚Œใฆใ„ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐใ€`v_ApplyEyeMakeup_g07_c04.avi`ใ‚„`v_ApplyEyeMakeup_g07_c06.avi`ใชใฉใงใ™ใ€‚ ๆคœ่จผใจ่ฉ•ไพกใฎๅˆ†ๅ‰ฒใงใฏใ€[ใƒ‡ใƒผใ‚ฟๆผๆดฉ](https://www.kaggle.com/code/alexisbcook/data-leakage) ใ‚’้˜ฒใใŸใ‚ใซใ€ๅŒใ˜ใ‚ฐใƒซใƒผใƒ—/ใ‚ทใƒผใƒณใ‹ใ‚‰ใฎใƒ“ใƒ‡ใ‚ช ใ‚ฏใƒชใƒƒใƒ—ใ‚’ไฝฟ็”จใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงไฝฟ็”จใ—ใฆใ„ใ‚‹ใ‚ตใƒ–ใ‚ปใƒƒใƒˆใงใฏใ€ใ“ใฎๆƒ…ๅ ฑใŒ่€ƒๆ…ฎใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ๆฌกใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ†…ใซๅญ˜ๅœจใ™ใ‚‹ใƒฉใƒ™ใƒซใฎใ‚ปใƒƒใƒˆใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ใพใŸใ€ใƒขใƒ‡ใƒซใ‚’ๅˆๆœŸๅŒ–ใ™ใ‚‹ใจใใซๅฝน็ซ‹ใค 2 ใคใฎ่พžๆ›ธใ‚’ไฝœๆˆใ—ใพใ™ใ€‚ * `label2id`: ใ‚ฏใƒฉใ‚นๅใ‚’ๆ•ดๆ•ฐใซใƒžใƒƒใƒ—ใ—ใพใ™ใ€‚ * `id2label`: ๆ•ดๆ•ฐใ‚’ใ‚ฏใƒฉใ‚นๅใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ—ใพใ™ใ€‚ ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` ๅ€‹ๆ€ง็š„ใชใ‚ฏใƒฉใ‚นใŒ10็จฎ้กžใ‚ใ‚Šใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ ใ‚ปใƒƒใƒˆใซใฏใ€ใ‚ฏใƒฉใ‚นใ”ใจใซ 30 ๅ€‹ใฎใƒ“ใƒ‡ใ‚ชใŒใ‚ใ‚Šใพใ™ใ€‚ ## Load a model to fine-tune ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใจใใ‚Œใซ้–ข้€ฃใ™ใ‚‹็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใ‹ใ‚‰ใƒ“ใƒ‡ใ‚ชๅˆ†้กžใƒขใƒ‡ใƒซใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใพใ™ใ€‚ใƒขใƒ‡ใƒซใฎใ‚จใƒณใ‚ณใƒผใƒ€ใƒผใซใฏไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒ‘ใƒฉใƒกใƒผใ‚ฟใƒผใŒไป˜ๅฑžใ—ใฆใŠใ‚Šใ€ๅˆ†้กžใƒ˜ใƒƒใƒ‰ใฏใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ•ใ‚Œใพใ™ใ€‚็”ปๅƒใƒ—ใƒญใ‚ปใƒƒใ‚ตใฏใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅ‰ๅ‡ฆ็†ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝœๆˆใ™ใ‚‹ใจใใซๅฝน็ซ‹ใกใพใ™ใ€‚ ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... 
) ``` ใƒขใƒ‡ใƒซใฎใƒญใƒผใƒ‰ไธญใซใ€ๆฌกใฎ่ญฆๅ‘ŠใŒ่กจ็คบใ•ใ‚Œใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚ ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ใ“ใฎ่ญฆๅ‘Šใฏใ€ไธ€้ƒจใฎ้‡ใฟ (ใŸใจใˆใฐใ€`classifier`ๅฑคใฎ้‡ใฟใจใƒใ‚คใ‚ขใ‚น) ใ‚’็ ดๆฃ„ใ—ใ€ไป–ใฎใ„ใใคใ‹ใฎ้‡ใฟ (ๆ–ฐใ—ใ„`classifier`ๅฑคใฎ้‡ใฟใจใƒใ‚คใ‚ขใ‚น) ใ‚’ใƒฉใƒณใƒ€ใƒ ใซๅˆๆœŸๅŒ–ใ—ใฆใ„ใ‚‹ใ“ใจใ‚’็คบใ—ใฆใ„ใพใ™ใ€‚ใ“ใฎๅ ดๅˆใ€ใ“ใ‚Œใฏไบˆๆƒณใ•ใ‚Œใ‚‹ใ“ใจใงใ™ใ€‚ไบ‹ๅ‰ใซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸ้‡ใฟใ‚’ๆŒใŸใชใ„ๆ–ฐใ—ใ„้ ญ้ƒจใ‚’่ฟฝๅŠ ใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ๆŽจ่ซ–ใซไฝฟ็”จใ™ใ‚‹ๅ‰ใซใ“ใฎใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ใจใƒฉใ‚คใƒ–ใƒฉใƒชใŒ่ญฆๅ‘Šใ—ใพใ™ใ€‚ใ“ใ‚Œใฏใพใ•ใซ็งใŸใกใŒ่กŒใŠใ†ใจใ—ใฆใ„ใ‚‹ใ‚‚ใฎใงใ™ใ€‚ใ™ใ‚‹ใ€‚ **ๆณจๆ„** [ใ“ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) ใฏใ€ๅŒๆง˜ใฎใƒ€ใ‚ฆใƒณใ‚นใƒˆใƒชใƒผใƒ ใงๅพฎ่ชฟๆ•ดใ•ใ‚Œใฆใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใŒๅ–ๅพ—ใ•ใ‚ŒใŸใŸใ‚ใ€ใ“ใฎใ‚ฟใ‚นใ‚ฏใฎใƒ‘ใƒ•ใ‚ฉใƒผใƒžใƒณใ‚นใŒๅ‘ไธŠใ™ใ‚‹ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใ‹ใชใ‚Šใฎใƒ‰ใƒกใ‚คใƒณใฎ้‡่ค‡ใŒใ‚ใ‚‹ใ‚ฟใ‚นใ‚ฏใ€‚ `MCG-NJU/videomae-base-finetuned-kinetics` ใ‚’ๅพฎ่ชฟๆ•ดใ—ใฆๅ–ๅพ—ใ—ใŸ [ใ“ใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆ](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) ใ‚’็ขบ่ชใงใใพใ™ใ€‚ -ใ‚ญใƒใƒ†ใ‚ฃใ‚ฏใ‚น`ใ€‚ ## Prepare the datasets for training ใƒ“ใƒ‡ใ‚ชใฎๅ‰ๅ‡ฆ็†ใซใฏใ€[PyTorchVideo ใƒฉใ‚คใƒ–ใƒฉใƒช](https://pytorchvideo.org/) ใ‚’ๅˆฉ็”จใ—ใพใ™ใ€‚ใพใšใ€ๅฟ…่ฆใชไพๅญ˜้–ขไฟ‚ใ‚’ใ‚คใƒณใƒใƒผใƒˆใ—ใพใ™ใ€‚ ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... 
) ``` ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅค‰ๆ›ใซใฏใ€ๅ‡ไธ€ใชๆ™‚้–“ใ‚ตใƒ–ใ‚ตใƒณใƒ—ใƒชใƒณใ‚ฐใ€ใƒ”ใ‚ฏใ‚ปใƒซๆญฃ่ฆๅŒ–ใ€ใƒฉใƒณใƒ€ใƒ  ใ‚ฏใƒญใƒƒใƒ”ใƒณใ‚ฐใ€ใŠใ‚ˆใณใƒฉใƒณใƒ€ใƒ ใชๆฐดๅนณๅ่ปขใ‚’็ต„ใฟๅˆใ‚ใ›ใฆไฝฟ็”จโ€‹โ€‹ใ—ใพใ™ใ€‚ๆคœ่จผใŠใ‚ˆใณ่ฉ•ไพกใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅค‰ๆ›ใงใฏใ€ใƒฉใƒณใƒ€ใƒ ใชใƒˆใƒชใƒŸใƒณใ‚ฐใจๆฐดๅนณๅ่ปขใ‚’้™คใใ€ๅŒใ˜ๅค‰ๆ›ใƒใ‚งใƒผใƒณใ‚’็ถญๆŒใ—ใพใ™ใ€‚ใ“ใ‚Œใ‚‰ใฎๅค‰ๆ›ใฎ่ฉณ็ดฐใซใคใ„ใฆใฏใ€[PyTorchVideo ใฎๅ…ฌๅผใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://pytorchvideo.org) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ไบ‹ๅ‰ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚ŒใŸ`image_processor`ใ‚’ไฝฟ็”จใ—ใฆใ€ๆฌกใฎๆƒ…ๅ ฑใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚ * ใƒ“ใƒ‡ใ‚ช ใƒ•ใƒฌใƒผใƒ ใฎใƒ”ใ‚ฏใ‚ปใƒซใŒๆญฃ่ฆๅŒ–ใ•ใ‚Œใ‚‹็”ปๅƒใฎๅนณๅ‡ๅ€คใจๆจ™ๆบ–ๅๅทฎใ€‚ * ใƒ“ใƒ‡ใ‚ช ใƒ•ใƒฌใƒผใƒ ใฎใ‚ตใ‚คใ‚บใŒๅค‰ๆ›ดใ•ใ‚Œใ‚‹็ฉบ้–“่งฃๅƒๅบฆใ€‚ ใพใšใ€ใ„ใใคใ‹ใฎๅฎšๆ•ฐใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` ๆฌกใซใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ›บๆœ‰ใฎๅค‰ๆ›ใจใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใใ‚Œใžใ‚Œๅฎš็พฉใ—ใพใ™ใ€‚ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚ปใƒƒใƒˆใ‹ใ‚‰ๅง‹ใ‚ใพใ™: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` ๅŒใ˜ไธ€้€ฃใฎใƒฏใƒผใ‚ฏใƒ•ใƒญใƒผใ‚’ๆคœ่จผใ‚ปใƒƒใƒˆใจ่ฉ•ไพกใ‚ปใƒƒใƒˆใซ้ฉ็”จใงใใพใ™ใ€‚ ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... 
) ``` **ๆณจๆ„**: ไธŠ่จ˜ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฏใ€[ๅ…ฌๅผ PyTorchVideo ใ‚ตใƒณใƒ—ใƒซ](https://pytorchvideo.org/docs/tutorial_classification#dataset) ใ‹ใ‚‰ๅ–ๅพ—ใ—ใŸใ‚‚ใฎใงใ™ใ€‚ [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) ้–ขๆ•ฐใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ UCF-101 ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ€‚ๅ†…้ƒจใงใฏใ€[`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) ใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใ‚’่ฟ”ใ—ใพใ™ใ€‚ `LabeledVideoDataset` ใ‚ฏใƒฉใ‚นใฏใ€PyTorchVideo ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ†…ใฎใ™ในใฆใฎใƒ“ใƒ‡ใ‚ชใฎๅŸบๆœฌใ‚ฏใƒฉใ‚นใงใ™ใ€‚ใ—ใŸใŒใฃใฆใ€PyTorchVideo ใงๆ—ข่ฃฝใงใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„ใ‚ซใ‚นใ‚ฟใƒ  ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใŸใ„ๅ ดๅˆใฏใ€ใใ‚Œใซๅฟœใ˜ใฆ `LabeledVideoDataset` ใ‚ฏใƒฉใ‚นใ‚’ๆ‹กๅผตใงใใพใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€`data`API [ใƒ‰ใ‚ญใƒฅใƒกใƒณใƒˆ](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html)ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ใพใŸใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒๅŒๆง˜ใฎๆง‹้€  (ไธŠใซ็คบใ—ใŸใ‚‚ใฎ) ใซๅพ“ใฃใฆใ„ใ‚‹ๅ ดๅˆใฏใ€`pytorchvideo.data.Ucf101()` ใ‚’ไฝฟ็”จใ™ใ‚‹ใจๅ•้กŒใชใๅ‹•ไฝœใ™ใ‚‹ใฏใšใงใ™ใ€‚ `num_videos` ๅผ•ๆ•ฐใซใ‚ขใ‚ฏใ‚ปใ‚นใ™ใ‚‹ใจใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ†…ใฎใƒ“ใƒ‡ใ‚ชใฎๆ•ฐใ‚’็Ÿฅใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚ ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## Visualize the preprocessed video for better debugging ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... 
return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## Train the model ๐Ÿค— Transformers ใฎ [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) ใ‚’ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซๅˆฉ็”จใ—ใพใ™ใ€‚ `Trainer`ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ใซใฏใ€ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๆง‹ๆˆใจ่ฉ•ไพกใƒกใƒˆใƒชใ‚ฏใ‚นใ‚’ๅฎš็พฉใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ๆœ€ใ‚‚้‡่ฆใชใฎใฏ [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments) ใงใ€ใ“ใ‚Œใฏใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ‚’ๆง‹ๆˆใ™ใ‚‹ใŸใ‚ใฎใ™ในใฆใฎๅฑžๆ€งใ‚’ๅซใ‚€ใ‚ฏใƒฉใ‚นใงใ™ใ€‚ใƒขใƒ‡ใƒซใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ไฟๅญ˜ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ‚‹ๅ‡บๅŠ›ใƒ•ใ‚ฉใƒซใƒ€ใƒผๅใŒๅฟ…่ฆใงใ™ใ€‚ใพใŸใ€๐Ÿค— Hub ไธŠใฎใƒขใƒ‡ใƒซ ใƒชใƒใ‚ธใƒˆใƒชๅ†…ใฎใ™ในใฆใฎๆƒ…ๅ ฑใ‚’ๅŒๆœŸใ™ใ‚‹ใฎใซใ‚‚ๅฝน็ซ‹ใกใพใ™ใ€‚ ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐๅผ•ๆ•ฐใฎใปใจใ‚“ใฉใฏไธ€็›ฎ็žญ็„ถใงใ™ใŒใ€ใ“ใ“ใง้žๅธธใซ้‡่ฆใชใฎใฏ`remove_unused_columns=False`ใงใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใ‚Šใ€ใƒขใƒ‡ใƒซใฎๅ‘ผใณๅ‡บใ—้–ขๆ•ฐใงไฝฟ็”จใ•ใ‚Œใชใ„ๆฉŸ่ƒฝใŒๅ‰Š้™คใ•ใ‚Œใพใ™ใ€‚ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏ`True`ใงใ™ใ€‚ใ“ใ‚Œใฏใ€้€šๅธธใ€ๆœชไฝฟ็”จใฎ็‰นๅพดๅˆ—ใ‚’ๅ‰Š้™คใ—ใ€ใƒขใƒ‡ใƒซใฎๅ‘ผใณๅ‡บใ—้–ขๆ•ฐใธใฎๅ…ฅๅŠ›ใ‚’่งฃๅ‡ใ—ใ‚„ใ™ใใ™ใ‚‹ใ“ใจใŒ็†ๆƒณ็š„ใงใ‚ใ‚‹ใŸใ‚ใงใ™ใ€‚ใŸใ ใ—ใ€ใ“ใฎๅ ดๅˆใ€`pixel_values` (ใƒขใƒ‡ใƒซใŒๅ…ฅๅŠ›ใงๆœŸๅพ…ใ™ใ‚‹ๅฟ…้ ˆใ‚ญใƒผใงใ™) ใ‚’ไฝœๆˆใ™ใ‚‹ใซใฏใ€ๆœชไฝฟ็”จใฎๆฉŸ่ƒฝ (็‰นใซ`video`) ใŒๅฟ…่ฆใงใ™ใ€‚ ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... 
) ``` `pytorchvideo.data.Ucf101()` ใซใ‚ˆใฃใฆ่ฟ”ใ•ใ‚Œใ‚‹ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏ `__len__` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅฎŸ่ฃ…ใ—ใฆใ„ใพใ›ใ‚“ใ€‚ใใฎใŸใ‚ใ€`TrainingArguments`ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ™ใ‚‹ใจใใซ`max_steps`ใ‚’ๅฎš็พฉใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ ๆฌกใซใ€ไบˆๆธฌใ‹ใ‚‰ใƒกใƒˆใƒชใ‚ฏใ‚นใ‚’่จˆ็ฎ—ใ™ใ‚‹้–ขๆ•ฐใ‚’ๅฎš็พฉใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ‚Œใฏใ€ใ“ใ‚Œใ‹ใ‚‰ใƒญใƒผใƒ‰ใ™ใ‚‹`metric`ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ๅฟ…่ฆใชๅ‰ๅ‡ฆ็†ใฏใ€ไบˆๆธฌใ•ใ‚ŒใŸใƒญใ‚ธใƒƒใƒˆใฎ argmax ใ‚’ๅ–ๅพ—ใ™ใ‚‹ใ“ใจใ ใ‘ใงใ™ใ€‚ ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **่ฉ•ไพกใซ้–ขใ™ใ‚‹ๆณจๆ„ไบ‹้ …**: [VideoMAE ่ซ–ๆ–‡](https://arxiv.org/abs/2203.12602) ใงใฏใ€่‘—่€…ใฏๆฌกใฎ่ฉ•ไพกๆˆฆ็•ฅใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ๅฝผใ‚‰ใฏใƒ†ใ‚นใƒˆ ใƒ“ใƒ‡ใ‚ชใ‹ใ‚‰ใฎใ„ใใคใ‹ใฎใ‚ฏใƒชใƒƒใƒ—ใงใƒขใƒ‡ใƒซใ‚’่ฉ•ไพกใ—ใ€ใใ‚Œใ‚‰ใฎใ‚ฏใƒชใƒƒใƒ—ใซใ•ใพใ–ใพใชใ‚ฏใƒญใƒƒใƒ—ใ‚’้ฉ็”จใ—ใฆใ€ๅˆ่จˆใ‚นใ‚ณใ‚ขใ‚’ๅ ฑๅ‘Šใ—ใพใ™ใ€‚ใŸใ ใ—ใ€ๅ˜็ด”ใ•ใจ็ฐกๆฝ”ใ•ใ‚’ไฟใคใŸใ‚ใซใ€ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใใ‚Œใ‚’่€ƒๆ…ฎใ—ใพใ›ใ‚“ใ€‚ ใพใŸใ€ใ‚ตใƒณใƒ—ใƒซใ‚’ใพใจใ‚ใฆใƒใƒƒใƒๅ‡ฆ็†ใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ•ใ‚Œใ‚‹ `collatโ€‹โ€‹e_fn` ใ‚’ๅฎš็พฉใ—ใพใ™ใ€‚ๅ„ใƒใƒƒใƒใฏใ€`pixel_values` ใจ `labels` ใจใ„ใ† 2 ใคใฎใ‚ญใƒผใงๆง‹ๆˆใ•ใ‚Œใพใ™ใ€‚ ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... return {"pixel_values": pixel_values, "labels": labels} ``` ๆฌกใซใ€ใ“ใ‚Œใ‚‰ใ™ในใฆใ‚’ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใจใจใ‚‚ใซ`Trainer`ใซๆธกใ™ใ ใ‘ใงใ™ใ€‚ ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` ใ™ใงใซใƒ‡ใƒผใ‚ฟใ‚’ๅ‰ๅ‡ฆ็†ใ—ใฆใ„ใ‚‹ใฎใซใ€ใชใœใƒˆใƒผใ‚ฏใƒŠใ‚คใ‚ถใƒผใจใ—ใฆ`image_processor`ใ‚’ๆธกใ—ใŸใฎใ‹ไธๆ€่ญฐใซๆ€ใ†ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใ“ใ‚Œใฏใ€ใ‚คใƒกใƒผใ‚ธ ใƒ—ใƒญใ‚ปใƒƒใ‚ตๆง‹ๆˆใƒ•ใ‚กใ‚คใƒซ (JSON ใจใ—ใฆไฟๅญ˜) ใ‚‚ใƒใƒ–ไธŠใฎใƒชใƒใ‚ธใƒˆใƒชใซใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใŸใ‚ใ ใ‘ใงใ™ใ€‚ ๆฌกใซใ€`train` ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ๅ‘ผใณๅ‡บใ—ใฆใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใพใ™ใ€‚ ```py >>> train_results = trainer.train() ``` ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใŒๅฎŒไบ†ใ—ใŸใ‚‰ใ€ [`~transformers.Trainer.push_to_hub`] ใƒกใ‚ฝใƒƒใƒ‰ใ‚’ไฝฟ็”จใ—ใฆใƒขใƒ‡ใƒซใ‚’ใƒใƒ–ใซๅ…ฑๆœ‰ใ—ใ€่ชฐใ‚‚ใŒใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ```py >>> trainer.push_to_hub() ``` ## Inference ใƒขใƒ‡ใƒซใ‚’ๅพฎ่ชฟๆ•ดใ—ใŸใฎใงใ€ใใ‚Œใ‚’ๆŽจ่ซ–ใซไฝฟ็”จใงใใ‚‹ใ‚ˆใ†ใซใชใ‚Šใพใ—ใŸใ€‚ ๆŽจ่ซ–ใฎใŸใ‚ใซใƒ“ใƒ‡ใ‚ชใ‚’ใƒญใƒผใƒ‰ใ—ใพใ™ใ€‚ ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> ๆŽจ่ซ–็”จใซๅพฎ่ชฟๆ•ดใ•ใ‚ŒใŸใƒขใƒ‡ใƒซใ‚’่ฉฆใ™ๆœ€ใ‚‚็ฐกๅ˜ใชๆ–นๆณ•ใฏใ€ใใ‚Œใ‚’ [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline). 
ใงไฝฟ็”จใ™ใ‚‹ใ“ใจใงใ™ใ€‚ใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ—ใฆใƒ“ใƒ‡ใ‚ชๅˆ†้กž็”จใฎ` pipeline`ใ‚’ใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅŒ–ใ—ใ€ใใ‚Œใซใƒ“ใƒ‡ใ‚ชใ‚’ๆธกใ—ใพใ™ใ€‚ ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` ๅฟ…่ฆใซๅฟœใ˜ใฆใ€`pipeline`ใฎ็ตๆžœใ‚’ๆ‰‹ๅ‹•ใง่ค‡่ฃฝใ™ใ‚‹ใ“ใจใ‚‚ใงใใพใ™ใ€‚ ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` ๆฌกใซใ€ๅ…ฅๅŠ›ใ‚’ใƒขใƒ‡ใƒซใซๆธกใ—ใ€`logits `ใ‚’่ฟ”ใ—ใพใ™ใ€‚ ``` >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits` ใ‚’ใƒ‡ใ‚ณใƒผใƒ‰ใ™ใ‚‹ใจใ€ๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
transformers/docs/source/ja/tasks/video_classification.md/0
{ "file_path": "transformers/docs/source/ja/tasks/video_classification.md", "repo_id": "transformers", "token_count": 10073 }
263
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Hugging Face Transformers๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? [[how-to-add-a-model-to-transformers]] Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ ๊ธฐ์—ฌ์ž๋“ค ๋•๋ถ„์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๋„์ „์ ์ธ ํ”„๋กœ์ ํŠธ์ด๋ฉฐ Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ตฌํ˜„ํ•  ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊นŠ์€ ์ดํ•ด๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ๋” ๋งŽ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฉค๋ฒ„๊ฐ€ ๋ชจ๋ธ์„ ์ ๊ทน์ ์œผ๋กœ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•˜๊ณ ์ž ํ•˜๋ฉฐ, ์ด ๊ฐ€์ด๋“œ๋ฅผ ํ†ตํ•ด PyTorch ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ณผ์ •์„ ์•ˆ๋‚ดํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค (PyTorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์ฃผ์„ธ์š”). <Tip> TensorFlow ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ ์ž ํ•˜๋Š” ๊ฒฝ์šฐ [๐Ÿค— Transformers ๋ชจ๋ธ์„ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๋ฐฉ๋ฒ•](add_tensorflow_model) ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด ๊ณผ์ •์„ ์ง„ํ–‰ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์ดํ•ดํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: - ์˜คํ”ˆ ์†Œ์Šค์˜ ๋ชจ๋ฒ” ์‚ฌ๋ก€์— ๋Œ€ํ•œ ํ†ต์ฐฐ๋ ฅ์„ ์–ป์Šต๋‹ˆ๋‹ค. - ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์„ค๊ณ„ ์›์น™์„ ์ดํ•ดํ•ฉ๋‹ˆ๋‹ค. - ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. - `black`, `ruff`, `make fix-copies`์™€ ๊ฐ™์€ Python ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ํ†ตํ•ฉํ•˜์—ฌ ๊น”๋”ํ•˜๊ณ  ๊ฐ€๋…์„ฑ ์žˆ๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. Hugging Face ํŒ€์€ ํ•ญ์ƒ ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ˜ผ์ž๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ๐Ÿค— โค๏ธ ์‹œ์ž‘์— ์•ž์„œ ๐Ÿค— Transformers์— ์›ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) ์ด์Šˆ๋ฅผ ์—ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์„ ๊ธฐ์—ฌํ•˜๋Š” ๋ฐ ํŠน๋ณ„ํžˆ ๊นŒ๋‹ค๋กœ์šด ๊ธฐ์ค€์„ ๊ฐ€์ง€์ง€ ์•Š๋Š” ๊ฒฝ์šฐ [New model label](https://github.com/huggingface/transformers/labels/New%20model)์„ ํ•„ํ„ฐ๋งํ•˜์—ฌ ์š”์ฒญ๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ž‘์—…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์š”์ฒญ์„ ์—ด์—ˆ๋‹ค๋ฉด ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ๐Ÿค— Transformers์— ์ต์ˆ™ํ•ด์ง€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์˜ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š” [[general-overview-of-transformers]] ๋จผ์ € ๐Ÿค— Transformers์— ๋Œ€ํ•œ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š”๋ฅผ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋งค์šฐ ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์ด๋‚˜ ์„ค๊ณ„ ์„ ํƒ ์‚ฌํ•ญ์— ๋™์˜ํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์ƒ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ธฐ๋ณธ์ ์ธ ์„ค๊ณ„ ์„ ํƒ๊ณผ ์ฒ ํ•™์€ ๐Ÿค— Transformers์˜ ๊ทœ๋ชจ๋ฅผ ํšจ์œจ์ ์œผ๋กœ ํ™•์žฅํ•˜๋ฉด์„œ ์œ ์ง€ ๋ณด์ˆ˜ ๋น„์šฉ์„ ํ•ฉ๋ฆฌ์ ์ธ ์ˆ˜์ค€์œผ๋กœ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์— ๋Œ€ํ•œ ๋ฌธ์„œ](philosophy)๋ฅผ ์ฝ๋Š” ๊ฒƒ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ์ข‹์€ ์‹œ์ž‘์ ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋“  ๋ชจ๋ธ์— ์ ์šฉํ•˜๋ ค๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž‘์—… ๋ฐฉ์‹์— ๋Œ€ํ•œ ์„ ํƒ ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ผ๋ฐ˜์ ์œผ๋กœ ์ถ”์ƒํ™”๋ณด๋‹ค๋Š” ๊ตฌ์„ฑ์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. - ์ฝ”๋“œ๋ฅผ ๋ณต์ œํ•˜๋Š” ๊ฒƒ์ด ํ•ญ์ƒ ๋‚˜์œ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ฝ”๋“œ์˜ ๊ฐ€๋…์„ฑ์ด๋‚˜ ์ ‘๊ทผ์„ฑ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚จ๋‹ค๋ฉด ๋ณต์ œํ•˜๋Š” ๊ฒƒ์€ ์ข‹์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ ํŒŒ์ผ์€ ๊ฐ€๋Šฅํ•œ ํ•œ ๋…๋ฆฝ์ ์œผ๋กœ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ๋ชจ๋ธ์˜ ์ฝ”๋“œ๋ฅผ ์ฝ์„ ๋•Œ ํ•ด๋‹น `modeling_....py` ํŒŒ์ผ๋งŒ ํ™•์ธํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฝ”๋“œ๊ฐ€ ์ œํ’ˆ์„ ์ œ๊ณตํ•˜๋Š” ์ˆ˜๋‹จ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๊ฐœ์„ ํ•˜๊ณ ์ž ํ•˜๋Š” ์ œํ’ˆ์ด๋ผ๊ณ ๋„ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ, ์‚ฌ์šฉ์ž๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์‚ฌ๋žŒ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ฝ”๋“œ๋ฅผ ์ฝ๊ณ  ์ดํ•ดํ•˜๊ณ  ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ๊นŒ์ง€๋„ ํฌํ•จํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—ผ๋‘์— ๋‘๊ณ  ์ผ๋ฐ˜์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์„ค๊ณ„์— ๋Œ€ํ•ด ์กฐ๊ธˆ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋ชจ๋ธ ๊ฐœ์š” [[overview-of-models]] ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด ๋ชจ๋ธ๊ณผ ํ•ด๋‹น ๊ตฌ์„ฑ์ธ [`PreTrainedModel`] ๋ฐ [`PretrainedConfig`] ๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— ์ถ”๊ฐ€ํ•˜๋ ค๋Š” ๋ชจ๋ธ์„ `BrandNewBert`๋ผ๊ณ  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> ๋ณด๋‹ค์‹œํ”ผ, ๐Ÿค— Transformers์—์„œ๋Š” ์ƒ์†์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ์ตœ์†Œํ•œ์œผ๋กœ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์–ด๋–ค ๋ชจ๋ธ์—์„œ๋„ ๋‘ ์ˆ˜์ค€ ์ด์ƒ์˜ ์ถ”์ƒํ™”๊ฐ€ ์กด์žฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. `BrandNewBertModel`์€ `BrandNewBertPreTrainedModel`์—์„œ ์ƒ์†๋ฐ›๊ณ , ์ด ํด๋ž˜์Šค๋Š” [`PreTrainedModel`]์—์„œ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ์จ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ [`PreTrainedModel`]์—๋งŒ ์˜์กดํ•˜๋„๋ก ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์— ์ž๋™์œผ๋กœ ์ œ๊ณต๋˜๋Š” ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ [`~PreTrainedModel.from_pretrained`] ๋ฐ [`~PreTrainedModel.save_pretrained`]์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ ์™ธ์—๋„ `BrandNewBertModel.forward`์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ ์ƒˆ๋กœ์šด `modeling_brand_new_bert.py` ์Šคํฌ๋ฆฝํŠธ์—์„œ ์™„์ „ํžˆ ์ •์˜๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `BrandNewBertForMaskedLM`๊ณผ ๊ฐ™์€ ํŠน์ • ํ—ค๋“œ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์€ `BrandNewBertModel`์„ ์ƒ์†๋ฐ›์ง€ ์•Š๊ณ  forward pass์—์„œ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋Š” `BrandNewBertModel`์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ๋‚ฎ๊ฒŒ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ `BrandNewBertConfig`๋ผ๋Š” ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌ์„ฑ์€ ํ•ญ์ƒ [`PreTrainedModel`]์˜ ์†์„ฑ์œผ๋กœ ์ €์žฅ๋˜๋ฉฐ, ๋”ฐ๋ผ์„œ `BrandNewBertPreTrainedModel`์„ ์ƒ์†๋ฐ›๋Š” ๋ชจ๋“  ํด๋ž˜์Šค์—์„œ `config` ์†์„ฑ์„ ํ†ตํ•ด ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # model has access to its config ``` ๋ชจ๋ธ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์€ [`PretrainedConfig`]์—์„œ ๊ธฐ๋ณธ ์ง๋ ฌํ™” ๋ฐ ์—ญ์ง๋ ฌํ™” ๊ธฐ๋Šฅ์„ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์€ ํ•ญ์ƒ *pytorch_model.bin* ํŒŒ์ผ๊ณผ *config.json* ํŒŒ์ผ๋กœ ๊ฐ๊ฐ ๋ณ„๋„๋กœ ์ง๋ ฌํ™”๋ฉ๋‹ˆ๋‹ค. [`~PreTrainedModel.save_pretrained`]๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ [`~PretrainedConfig.save_pretrained`]๋„ ํ˜ธ์ถœ๋˜๋ฏ€๋กœ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์ด ๋ชจ๋‘ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ### ์ฝ”๋“œ ์Šคํƒ€์ผ [[code-style]] ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ๋•Œ, Transformers๋Š” ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๋ฉฐ ๋ช‡ ๊ฐ€์ง€ ๋…ํŠนํ•œ ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. 
๋ชจ๋ธ์˜ forward pass๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์™„์ „ํžˆ ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์—์„œ ๋ธ”๋ก์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ฝ”๋“œ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์œ„์— `# Copied from` ์ฃผ์„๊ณผ ํ•จ๊ป˜ ๋ถ™์—ฌ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค (์˜ˆ: [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). 2. ์ฝ”๋“œ๋Š” ์™„์ „ํžˆ ์ดํ•ดํ•˜๊ธฐ ์‰ฌ์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€์ˆ˜ ์ด๋ฆ„์„ ๋ช…ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•˜๊ณ  ์•ฝ์–ด๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `act`๋ณด๋‹ค๋Š” `activation`์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ธ€์ž ๋ณ€์ˆ˜ ์ด๋ฆ„์€ ๋ฃจํ”„์˜ ์ธ๋ฑ์Šค์ธ ๊ฒฝ์šฐ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 3. ๋” ์ผ๋ฐ˜์ ์œผ๋กœ, ์งง์€ ๋งˆ๋ฒ• ๊ฐ™์€ ์ฝ”๋“œ๋ณด๋‹ค๋Š” ๊ธธ๊ณ  ๋ช…์‹œ์ ์ธ ์ฝ”๋“œ๋ฅผ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. 4. PyTorch์—์„œ `nn.Sequential`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค์ง€ ๋ง๊ณ  `nn.Module`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค๊ณ  forward pass๋ฅผ ์ž‘์„ฑํ•˜์—ฌ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์ด ์ฝ”๋“œ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. print ๋ฌธ์ด๋‚˜ ์ค‘๋‹จ์ ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 5. ํ•จ์ˆ˜ ์‹œ๊ทธ๋‹ˆ์ฒ˜์—๋Š” ํƒ€์ž… ์ฃผ์„์„ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์™ธ์—๋Š” ํƒ€์ž… ์ฃผ์„๋ณด๋‹ค ๋ณ€์ˆ˜ ์ด๋ฆ„์ด ํ›จ์”ฌ ์ฝ๊ธฐ ์‰ฝ๊ณ  ์ดํ•ดํ•˜๊ธฐ ์‰ฝ์Šต๋‹ˆ๋‹ค. ### ํ† ํฌ๋‚˜์ด์ € ๊ฐœ์š” [[overview-of-tokenizers]] ์•„์ง ์ค€๋น„๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค :-( ์ด ์„น์…˜์€ ๊ณง ์ถ”๊ฐ€๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์— ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๋Š” ๋‹จ๊ณ„๋ณ„ ๋ฐฉ๋ฒ• [[stepbystep-recipe-to-add-a-model-to-transformers]] ๊ฐ์ž ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์„ ํ˜ธ๊ฐ€ ๋‹ค๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž๋“ค์ด Hugging Face์— ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์š”์•ฝ์„ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๋ฌผ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค: 1. [GPT2 ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) - [Thomas](https://huggingface.co/thomwolf) 2. [WMT19 MT ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://huggingface.co/blog/porting-fsmt) - [Stas](https://huggingface.co/stas) ๊ฒฝํ—˜์ƒ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•  ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ๊ฐ™์€ ์ผ์„ ๋ฐ˜๋ณตํ•˜์ง€ ๋งˆ์„ธ์š”! ์ƒˆ๋กœ์šด ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์œ„ํ•ด ์ถ”๊ฐ€ํ•  ์ฝ”๋“œ์˜ ๋Œ€๋ถ€๋ถ„์€ ์ด๋ฏธ ๐Ÿค— Transformers ์–ด๋”˜๊ฐ€์— ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ์œ ์‚ฌํ•œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ฐพ๋Š”๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜์„ธ์š”. [grep](https://www.gnu.org/software/grep/)์™€ [rg](https://github.com/BurntSushi/ripgrep)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•œ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  ๋ชจ๋ธ๋ง ์ฝ”๋“œ๊ฐ€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด FSMT์˜ ๋ชจ๋ธ๋ง ์ฝ”๋“œ๋Š” BART๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  FSMT์˜ ํ† ํฌ๋‚˜์ด์ € ์ฝ”๋“œ๋Š” XLM์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. - ์ด๊ฒƒ์€ ๊ณผํ•™์ ์ธ ๋„์ „๋ณด๋‹ค๋Š” ๊ณตํ•™์ ์ธ ๋„์ „์ž…๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋ ค๋Š” ๊ฒƒ๋ณด๋‹ค ํšจ์œจ์ ์ธ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๋งŒ๋“œ๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ์†Œ๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋ง‰ํž ๋•Œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”! ๋ชจ๋ธ์€ ๐Ÿค— Transformers์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ ์š”์†Œ์ด๋ฏ€๋กœ Hugging Face์˜ ์šฐ๋ฆฌ๋Š” ๋‹น์‹ ์ด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฐ ๋‹จ๊ณ„์—์„œ ๊ธฐ๊บผ์ด ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ง„์ „์ด ์—†๋‹ค๊ณ  ๋Š๋ผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. 
๋‹ค์Œ์—์„œ๋Š” ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๋Š” ๋ฐ ๊ฐ€์žฅ ์œ ์šฉํ•œ ์ผ๋ฐ˜์ ์ธ ์ ˆ์ฐจ๋ฅผ ์ œ๊ณตํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ชฉ๋ก์€ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ๋ชจ๋“  ์ž‘์—…์˜ ์š”์•ฝ์ด๋ฉฐ To-Do ๋ชฉ๋ก์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: โ˜ (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด ์ดํ•ด<br> โ˜ Hugging Face ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์ค€๋น„<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์˜ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ ์„ค์ •<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `forward()` pass๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑ<br> โ˜ ๐Ÿค— Transformers์— ๋ชจ๋ธ ์Šค์ผˆ๋ ˆํ†ค ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๐Ÿค— Transformers ์ฒดํฌํฌ์ธํŠธ๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ๋ณ€ํ™˜<br> โ˜ ๐Ÿค— Transformers์—์„œ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์ฃผ๋Š” `forward()` pass ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰<br> โ˜ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์™„๋ฃŒ<br> โ˜ ๐Ÿค— Transformers์— ํ† ํฌ๋‚˜์ด์ € ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰<br> โ˜ ๋ฌธ์„œ ์ž‘์„ฑ ์™„๋ฃŒ<br> โ˜ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ<br> โ˜ Pull request ์ œ์ถœ<br> โ˜ (์„ ํƒ ์‚ฌํ•ญ) ๋ฐ๋ชจ ๋…ธํŠธ๋ถ ์ถ”๊ฐ€ ์šฐ์„ , ์ผ๋ฐ˜์ ์œผ๋กœ๋Š” `BrandNewBert`์˜ ์ด๋ก ์ ์ธ ์ดํ•ด๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ง์ ‘ ์ดํ•ดํ•˜๋Š” ๋Œ€์‹  *์ง์ ‘ ํ•ด๋ณด๋ฉด์„œ* ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ๊ฒฝ์šฐ ๋ฐ”๋กœ `BrandNewBert` ์ฝ”๋“œ ๋ฒ ์ด์Šค๋กœ ๋น ์ ธ๋“œ๋Š” ๊ฒƒ๋„ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค. ์ด ์˜ต์…˜์€ ์—”์ง€๋‹ˆ์–ด๋ง ๊ธฐ์ˆ ์ด ์ด๋ก ์  ๊ธฐ์ˆ ๋ณด๋‹ค ๋” ๋›ฐ์–ด๋‚œ ๊ฒฝ์šฐ, `BrandNewBert`์˜ ๋…ผ๋ฌธ์„ ์ดํ•ดํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์ด ์žˆ๋Š” ๊ฒฝ์šฐ, ๋˜๋Š” ๊ณผํ•™์ ์ธ ๋…ผ๋ฌธ์„ ์ฝ๋Š” ๊ฒƒ๋ณด๋‹ค ํ”„๋กœ๊ทธ๋ž˜๋ฐ์— ํ›จ์”ฌ ๋” ํฅ๋ฏธ ์žˆ๋Š” ๊ฒฝ์šฐ์— ๋” ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### 1. (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด [[1-optional-theoretical-aspects-of-brandnewbert]] ๋งŒ์•ฝ ๊ทธ๋Ÿฐ ์„œ์ˆ ์ ์ธ ์ž‘์—…์ด ์กด์žฌํ•œ๋‹ค๋ฉด, *BrandNewBert*์˜ ๋…ผ๋ฌธ์„ ์ฝ์–ด๋ณด๋Š” ์‹œ๊ฐ„์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šด ์„น์…˜์ด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋”๋ผ๋„ ๊ฑฑ์ •ํ•˜์ง€ ๋งˆ์„ธ์š”! ๋ชฉํ‘œ๋Š” ๋…ผ๋ฌธ์˜ ๊นŠ์€ ์ด๋ก ์  ์ดํ•ด๊ฐ€ ์•„๋‹ˆ๋ผ *BrandNewBert*๋ฅผ ๐Ÿค— Transformers์—์„œ ํšจ๊ณผ์ ์œผ๋กœ ์žฌ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ์ด๋ก ์  ์ธก๋ฉด์— ๋„ˆ๋ฌด ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์‹ค์ œ์ ์ธ ์ธก๋ฉด์— ์ง‘์ค‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - *BrandNewBert*๋Š” ์–ด๋–ค ์œ ํ˜•์˜ ๋ชจ๋ธ์ธ๊ฐ€์š”? BERT์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? GPT2์™€ ์œ ์‚ฌํ•œ ๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? BART์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? ์ด๋“ค ๊ฐ„์˜ ์ฐจ์ด์ ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ[model_summary](model_summary)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - *BrandNewBert*์˜ ์‘์šฉ ๋ถ„์•ผ๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ์ƒ์„ฑ์ธ๊ฐ€์š”? ์š”์•ฝ๊ณผ ๊ฐ™์€ Seq2Seq ์ž‘์—…์ธ๊ฐ€์š”? - *brand_new_bert*์™€ BERT/GPT-2/BART์˜ ์ฐจ์ด์ ์€ ๋ฌด์—‡์ธ๊ฐ€์š”? - *brand_new_bert*์™€ ๊ฐ€์žฅ ์œ ์‚ฌํ•œ [๐Ÿค— Transformers ๋ชจ๋ธ](https://huggingface.co/transformers/#contents)์€ ๋ฌด์—‡์ธ๊ฐ€์š”? - ์–ด๋–ค ์ข…๋ฅ˜์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์‚ฌ์šฉ๋˜๋‚˜์š”? Sentencepiece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? Word piece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? BERT ๋˜๋Š” BART์— ์‚ฌ์šฉ๋˜๋Š” ๋™์ผํ•œ ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ์ถฉ๋ถ„ํžˆ ์ดํ•ดํ–ˆ๋‹ค๋Š” ์ƒ๊ฐ์ด ๋“  ํ›„, ๊ถ๊ธˆํ•œ ์‚ฌํ•ญ์ด ์žˆ์œผ๋ฉด Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜์‹ญ์‹œ์˜ค. 
์ด๋Š” ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜, ์–ดํ…์…˜ ๋ ˆ์ด์–ด ๋“ฑ์— ๊ด€ํ•œ ์งˆ๋ฌธ์„ ํฌํ•จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ณดํ†ต ์ฝ”๋“œ๋ฅผ ๊ฒ€ํ† ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•˜๋ฏ€๋กœ ๋‹น์‹ ์„ ๋•๋Š” ์ผ์„ ๋งค์šฐ ํ™˜์˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ### 2. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์„ค์ • [[2-next-prepare-your-environment]] 1. ์ €์žฅ์†Œ ํŽ˜์ด์ง€์—์„œ "Fork" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ €์žฅ์†Œ์˜ ์‚ฌ๋ณธ์„ GitHub ์‚ฌ์šฉ์ž ๊ณ„์ •์œผ๋กœ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 2. `transformers` fork๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ์— ํด๋ก ํ•˜๊ณ  ๋ฒ ์ด์Šค ์ €์žฅ์†Œ๋ฅผ ์›๊ฒฉ ์ €์žฅ์†Œ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` ๊ฐ ์šด์˜ ์ฒด์ œ์— ๋”ฐ๋ผ Transformers์˜ ์„ ํƒ์  ์˜์กด์„ฑ์ด ๊ฐœ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด ์ด ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ์ž‘์—… ์ค‘์ธ ๋”ฅ ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ (PyTorch, TensorFlow ๋ฐ/๋˜๋Š” Flax)์„ ์„ค์น˜ํ•œ ํ›„, ๋‹ค์Œ ๋ช…๋ น์„ ์ˆ˜ํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pip install -e ".[quality]" ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” ์ด๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒ์œ„ ๋””๋ ‰ํ† ๋ฆฌ๋กœ ๋Œ์•„๊ฐ‘๋‹ˆ๋‹ค. ```bash cd .. ``` 4. Transformers์— *brand_new_bert*์˜ PyTorch ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. PyTorch๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋งํฌ์˜ ์ง€์นจ์„ ๋”ฐ๋ฅด์‹ญ์‹œ์˜ค: https://pytorch.org/get-started/locally/. **์ฐธ๊ณ :** CUDA๋ฅผ ์„ค์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์ด CPU์—์„œ ์ž‘๋™ํ•˜๋„๋ก ๋งŒ๋“œ๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. 5. *brand_new_bert*๋ฅผ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ•ด๋‹น ์›๋ณธ ์ €์žฅ์†Œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` ์ด์ œ *brand_new_bert*๋ฅผ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ### 3.-4. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ ์‹คํ–‰ํ•˜๊ธฐ [[3.-4.-run-a-pretrained-checkpoint-using-the-original-repository]] ๋จผ์ €, ์›๋ณธ *brand_new_bert* ์ €์žฅ์†Œ์—์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์€ ๋ณดํ†ต "์—ฐ๊ตฌ์šฉ"์œผ๋กœ ๋งŽ์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ์„œํ™”๊ฐ€ ๋ถ€์กฑํ•˜๊ณ  ์ฝ”๋“œ๊ฐ€ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๊ฒƒ์ด ๋ฐ”๋กœ *brand_new_bert*๋ฅผ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ ค๋Š” ๋™๊ธฐ๊ฐ€ ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ์ฃผ์š” ๋ชฉํ‘œ ์ค‘ ํ•˜๋‚˜๋Š” **๊ฑฐ์ธ์˜ ์–ด๊นจ ์œ„์— ์„œ๋Š” ๊ฒƒ**์ด๋ฉฐ, ์ด๋Š” ์—ฌ๊ธฐ์—์„œ ์‰ฝ๊ฒŒ ํ•ด์„๋˜์–ด ๋™์ž‘ํ•˜๋Š” ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€์„œ ๊ฐ€๋Šฅํ•œ ํ•œ **์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ณ  ์‚ฌ์šฉ์ž ์นœํ™”์ ์ด๋ฉฐ ์•„๋ฆ„๋‹ต๊ฒŒ** ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋Š” ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋™๊ธฐ์ž…๋‹ˆ๋‹ค - ์ƒˆ๋กœ์šด ๋ณต์žกํ•œ NLP ๊ธฐ์ˆ ์„ **๋ชจ๋‘์—๊ฒŒ** ์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ๊ณต์‹ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์€ ์ข…์ข… **๊ฐ€์žฅ ์–ด๋ ค์šด** ๋‹จ๊ณ„์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์— ๋”ฐ๋ฅด๋ฉด, ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ์ต์ˆ™ํ•ด์ง€๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋””์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€? 
- ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ํ•ด๋‹น ๋ชจ๋ธ์—๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์€? - ๋ชจ๋ธ๊ณผ ๋…๋ฆฝ์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€? - ๊ฐ„๋‹จํ•œ forward pass์— ํ•„์š”ํ•œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด forward pass๋ฅผ ํ•œ ๋ฒˆ ์ถ”์ ํ•ด ๋ณด์„ธ์š”. ์ผ๋ฐ˜์ ์œผ๋กœ ํ•ด๋‹น ํ•จ์ˆ˜๋“ค๋งŒ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํด๋ž˜์Šค๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? ๋ชจ๋ธ ํ•˜์œ„ ํด๋ž˜์Šค(*EncoderModel*, *DecoderModel* ๋“ฑ)๊ฐ€ ์žˆ๋‚˜์š”? self-attention ๋ ˆ์ด์–ด๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? self-attention, cross-attention ๋“ฑ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋‹ค๋ฅธ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‚˜์š”? - ์›๋ณธ ํ™˜๊ฒฝ์—์„œ ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? *print* ๋ฌธ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•˜๋‚˜์š”? *ipdb*์™€ ๊ฐ™์€ ๋Œ€ํ™”์‹ ๋””๋ฒ„๊ฑฐ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‚˜์š”? PyCharm๊ณผ ๊ฐ™์€ ํšจ์œจ์ ์ธ IDE๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋‚˜์š”? ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์—…์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ **ํšจ์œจ์ ์œผ๋กœ** ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋˜ํ•œ, ์˜คํ”ˆ ์†Œ์Šค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ์ž‘์—…ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์—์„œ issue๋ฅผ ์—ด๊ฑฐ๋‚˜ pull request๋ฅผ ์—ด๊ธฐ๋ฅผ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์ด ์ €์žฅ์†Œ์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ์ž์‹ ๋“ค์˜ ์ฝ”๋“œ๋ฅผ ์‚ดํŽด๋ณธ๋‹ค๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ํ˜„์žฌ ์‹œ์ ์—์„œ, ์›๋ž˜ ๋ชจ๋ธ์„ ๋””๋ฒ„๊น…ํ•˜๊ธฐ ์œ„ํ•ด ์–ด๋–ค ๋””๋ฒ„๊น… ํ™˜๊ฒฝ๊ณผ ์ „๋žต์„ ์„ ํ˜ธํ•˜๋Š”์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ธ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ณ ๊ฐ€์˜ GPU ํ™˜๊ฒฝ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๊ฒƒ์€ ๋น„์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ž˜ ์ €์žฅ์†Œ๋กœ ๋“ค์–ด๊ฐ€์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•  ๋•Œ์™€ ๐Ÿค— Transformers ๋ชจ๋ธ์˜ ๊ตฌํ˜„์„ ์‹œ์ž‘ํ•  ๋•Œ์—๋„ CPU์—์„œ ์ž‘์—…ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ด๋ฏธ ๐Ÿค— Transformers๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ์ด์‹๋˜์—ˆ์„ ๋•Œ์—๋งŒ ๋ชจ๋ธ์ด GPU์—์„œ๋„ ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ, ์›๋ž˜ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•œ ๋‘ ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - [Jupyter ๋…ธํŠธ๋ถ](https://jupyter.org/) / [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) - ๋กœ์ปฌ Python ์Šคํฌ๋ฆฝํŠธ Jupyter ๋…ธํŠธ๋ถ์˜ ์žฅ์ ์€ ์…€ ๋‹จ์œ„๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฆฌ์ ์ธ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ๋” ์ž˜ ๋ถ„๋ฆฌํ•˜๊ณ  ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ์ €์žฅํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋””๋ฒ„๊น… ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋…ธํŠธ๋ถ์€ ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž์™€ ์‰ฝ๊ฒŒ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ Hugging Face ํŒ€์˜ ๋„์›€์„ ์š”์ฒญํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด ์ด๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์˜ ๋‹จ์ ์€ ์‚ฌ์šฉ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ƒˆ๋กœ์šด ํ”„๋กœ๊ทธ๋ž˜๋ฐ ํ™˜๊ฒฝ์— ์ ์‘ํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•ด์•ผ ํ•˜๋ฉฐ, `ipdb`์™€ ๊ฐ™์€ ์•Œ๋ ค์ง„ ๋””๋ฒ„๊น… ๋„๊ตฌ๋ฅผ ๋” ์ด์ƒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์„ ์ˆ˜๋„ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋Œ€ํ•ด ์ข‹์€ ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ํ•ญ์ƒ **์ž‘์€** ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋”๋ฏธ ์ •์ˆ˜ ๋ฒกํ„ฐ ์ž…๋ ฅ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์ผ forward pass๋ฅผ ์žฌํ˜„ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด์™€ ๊ฐ™์€ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊น… ์ „๋žต์— ๋Œ€ํ•ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์„ ํƒ์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - ์›๋ณธ ๋ชจ๋ธ์„ ๋งŽ์€ ์ž‘์€ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ๊ฐ์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๊ฒ€์ฆํ•ฉ๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ์›๋ณธ *tokenizer*๊ณผ ์›๋ณธ *model*๋กœ๋งŒ ๋ถ„ํ•ดํ•˜๊ณ  ํ•ด๋‹น ๋ถ€๋ถ„์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•œ ํ›„ ๊ฒ€์ฆ์„ ์œ„ํ•ด ์ค‘๊ฐ„ ์ถœ๋ ฅ(print ๋ฌธ ๋˜๋Š” ์ค‘๋‹จ์ )์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ• ์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ค ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋”ฐ๋ผ ํ•˜๋‚˜ ๋˜๋Š” ๋‹ค๋ฅธ ์ „๋žต์ด ์œ ๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๋ฅผ ๋ชจ๋ธ์˜ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์—ฌ๋ถ€, ์˜ˆ๋ฅผ ๋“ค์–ด ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์—์„œ ๊ฐ„๋‹จํžˆ ์‹คํ–‰๋  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ๊ทธ ๋…ธ๋ ฅ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ดˆ๊ธฐ์— ๋” ์–ด๋ ค์šด ๋ฐฉ๋ฒ•์„ ์„ ํƒํ•˜๋Š” ๊ฒƒ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์žฅ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๋น„๊ตํ•  ๋•Œ ๊ฐ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ์ž๋™์œผ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์‹œ๊ฐ์ ์ธ ๋น„๊ต(print ๋ฌธ์„ ํ†ตํ•œ ๋น„๊ต๊ฐ€ ์•„๋‹Œ) ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์›๋ณธ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ „์ฒด ๋ชจ๋ธ์„ ๋ชจ๋“ˆ๋ณ„๋กœ, ์ฆ‰ ์ž‘์€ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•จ์œผ๋กœ์จ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ํฐ ๋ฌธ์ œ๋ฅผ ๋‹จ์ˆœํžˆ ๊ฐœ๋ณ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์€ ๋ฌธ์ œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ž‘์—…์„ ๋” ์ž˜ ๊ตฌ์กฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ์„ ๋…ผ๋ฆฌ์ ์œผ๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„๋ฆฌํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์˜ ์„ค๊ณ„์— ๋Œ€ํ•œ ๋” ๋‚˜์€ ๊ฐœ์š”๋ฅผ ์–ป๊ณ  ๋ชจ๋ธ์„ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ณ„ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ตํ•ด ์ฝ”๋“œ๋ฅผ ๋ณ€๊ฒฝํ•˜๋ฉด์„œ ํšŒ๊ท€๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ๋ณด์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Lysandre์˜ ELECTRA ํ†ตํ•ฉ ๊ฒ€์‚ฌ](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed)๋Š” ์ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ข‹์€ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ๋งค์šฐ ๋ณต์žกํ•˜๊ฑฐ๋‚˜ ์ค‘๊ฐ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ปดํŒŒ์ผ๋œ ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ๋งŒ ํ—ˆ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฒƒ์ด ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๊ฑฐ๋‚˜ ๋ถˆ๊ฐ€๋Šฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [T5์˜ MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋งค์šฐ ๋ณต์žกํ•˜๋ฉฐ ๋ชจ๋ธ์„ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ์šฐ, ๋ณดํ†ต print ๋ฌธ์„ ํ†ตํ•ด ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ•˜๋”๋ผ๋„ ๊ถŒ์žฅ๋˜๋Š” ์ ˆ์ฐจ๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ์‹œ์ž‘ ๋ ˆ์ด์–ด๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋ฅผ ๋งˆ์ง€๋ง‰์— ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ˆœ์„œ๋กœ ๊ฐ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ ID ๊ฐ€์ ธ์˜ค๊ธฐ 2. ์›Œ๋“œ ์ž„๋ฒ ๋”ฉ ๊ฐ€์ ธ์˜ค๊ธฐ 3. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ž…๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 4. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 5. ๋‹ค์Œ n-1๊ฐœ์˜ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 6. 
BrandNewBert ๋ชจ๋ธ์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ ์ž…๋ ฅ ID๋Š” ์ •์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ์˜ˆ๋ฅผ ๋“ค์–ด `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`์™€ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ๋‹ค์ฐจ์› ์‹ค์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` ๐Ÿค— Transformers์— ์ถ”๊ฐ€๋˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์€ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์›๋ณธ ๋ชจ๋ธ๊ณผ ๐Ÿค— Transformers์˜ ์žฌ๊ตฌํ˜„ ๋ฒ„์ „์ด 0.001์˜ ์ •๋ฐ€๋„๋กœ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋™์ผํ•œ ๋ชจ๋ธ์ด ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ž‘์„ฑ๋˜์—ˆ์„ ๋•Œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ํ”„๋ ˆ์ž„์›Œํฌ์— ๋”ฐ๋ผ ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ์ถœ๋ ฅ์„ ์–ป๋Š” ๊ฒƒ์€ ์ •์ƒ์ด๋ฏ€๋กœ 1e-3(0.001)์˜ ์˜ค์ฐจ๋Š” ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฑฐ์˜ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋Š” ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์œผ๋ฉฐ, ์™„๋ฒฝํžˆ ์ผ์น˜ํ•˜๋Š” ์ˆ˜์ค€์ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๐Ÿค— Transformers ๋ฒ„์ „์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ *brand_new_bert*์˜ ์›๋ž˜ ๊ตฌํ˜„์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ๊ณผ ์—ฌ๋Ÿฌ ๋ฒˆ ๋น„๊ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์›๋ณธ ์ €์žฅ์†Œ์˜ **ํšจ์œจ์ ์ธ** ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์ ˆ๋Œ€์ ์œผ๋กœ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ํšจ์œจ์ ์œผ๋กœ ๋งŒ๋“œ๋Š” ๋ช‡ ๊ฐ€์ง€ ์กฐ์–ธ์„ ์ œ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์„ ์ฐพ์œผ์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ PyTorch๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด ์›๋ณธ ๋ชจ๋ธ์„ ๋” ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ธด ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์— ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Tensorflow 1๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด [tf.print](https://www.tensorflow.org/api_docs/python/tf/print)์™€ ๊ฐ™์€ Tensorflow ์ถœ๋ ฅ ์ž‘์—…์„ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ์ถœ๋ ฅํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Jax๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด forward pass๋ฅผ ์‹คํ–‰ํ•  ๋•Œ ๋ชจ๋ธ์ด **jit ๋˜์ง€ ์•Š๋„๋ก** ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [์ด ๋งํฌ](https://github.com/google/jax/issues/196)๋ฅผ ํ™•์ธํ•ด ๋ณด์„ธ์š”. - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฐ€์žฅ ์ž‘์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋””๋ฒ„๊ทธ ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์ง‘๋‹ˆ๋‹ค. ์ „๋ฐ˜์ ์œผ๋กœ forward pass์— 10์ดˆ ์ด์ƒ์ด ๊ฑธ๋ฆฌ๋Š” ๊ฒฝ์šฐ ํšจ์œจ์ ์ด์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋งค์šฐ ํฐ ์ฒดํฌํฌ์ธํŠธ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ƒˆ ํ™˜๊ฒฝ์—์„œ ์ž„์˜๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋กœ ๋”๋ฏธ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ํ•ด๋‹น ๊ฐ€์ค‘์น˜๋ฅผ ๐Ÿค— Transformers ๋ฒ„์ „๊ณผ ๋น„๊ตํ•˜๊ธฐ ์œ„ํ•ด ์ €์žฅํ•˜๋Š” ๊ฒƒ์ด ๋” ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๊ฐ€์žฅ ์‰ฝ๊ฒŒ forward pass๋ฅผ ํ˜ธ์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ **๋‹จ์ผ** forward pass๋งŒ ํ˜ธ์ถœํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ฐพ๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ `predict`, `evaluate`, `forward`, `__call__`๊ณผ ๊ฐ™์ด ํ˜ธ์ถœ๋ฉ๋‹ˆ๋‹ค. `autoregressive_sample`๊ณผ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์—์„œ `forward`๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ํ˜ธ์ถœํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋“ฑ์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ์‹ถ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ํ† ํฐํ™” ๊ณผ์ •์„ ๋ชจ๋ธ์˜ *forward* pass์™€ ๋ถ„๋ฆฌํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์„ธ์š”. 
์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ž…๋ ฅ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•ด์•ผ ํ•˜๋Š” ์˜ˆ์ œ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ ๋ฌธ์ž์—ด์ด ์ž…๋ ฅ ID๋กœ ๋ณ€๊ฒฝ๋˜๋Š” ์ˆœ๊ฐ„์„ ์ฐพ์•„์„œ ์‹œ์ž‘ํ•˜์„ธ์š”. ์ด ๊ฒฝ์šฐ ์ง์ ‘ ID๋ฅผ ์ž…๋ ฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž‘์€ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ฑฐ๋‚˜ ์›๋ณธ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๋ชจ๋ธ์ด ํ›ˆ๋ จ ๋ชจ๋“œ๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ชจ๋“œ์—์„œ๋Š” ๋ชจ๋ธ์˜ ์—ฌ๋Ÿฌ ๋“œ๋กญ์•„์›ƒ ๋ ˆ์ด์–ด ๋•Œ๋ฌธ์— ๋ฌด์ž‘์œ„ ์ถœ๋ ฅ์ด ์ƒ์„ฑ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์—์„œ forward pass๊ฐ€ **๊ฒฐ์ •๋ก ์ **์ด๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜๋Š” ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์— ์žˆ๋Š” ๊ฒฝ์šฐ *transformers.utils.set_seed*๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ ์„น์…˜์—์„œ๋Š” *brand_new_bert*์— ๋Œ€ํ•ด ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐ ๋” ๊ตฌ์ฒด์ ์ธ ์„ธ๋ถ€ ์‚ฌํ•ญ/ํŒ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ### 5.-14. ๐Ÿค— Transformers์— BrandNewBert๋ฅผ ์ด์‹ํ•˜๊ธฐ [[5.-14.-port-brandnewbert-to-transformers]] ์ด์ œ, ๋งˆ์นจ๋‚ด ๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํฌํฌ์˜ ํด๋ก ์œผ๋กœ ์ด๋™ํ•˜์„ธ์š”: ```bash cd transformers ``` ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์™€ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ํŠน๋ณ„ํ•œ ๊ฒฝ์šฐ์—๋Š” [์ด ์„น์…˜](#write-a-conversion-script)์— ์„ค๋ช…๋œ๋Œ€๋กœ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์ „์ฒด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ทธ๋Œ€๋กœ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ƒ์„ฑ์„ ์‹œ์ž‘ํ•ฉ์‹œ๋‹ค. ์—ฌ๊ธฐ์—์„œ ๋‘ ๊ฐ€์ง€ ์„ ํƒ์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `transformers-cli add-new-model-like`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ธฐ์กด ๋ชจ๋ธ๊ณผ ์œ ์‚ฌํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๊ธฐ - `transformers-cli add-new-model`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…œํ”Œ๋ฆฟ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๊ธฐ (์„ ํƒํ•œ ๋ชจ๋ธ ์œ ํ˜•์— ๋”ฐ๋ผ BERT ๋˜๋Š” Bart์™€ ์œ ์‚ฌํ•œ ๋ชจ์Šต์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค) ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘, ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ •๋ณด๋ฅผ ์ž…๋ ฅํ•˜๋Š” ์„ค๋ฌธ์กฐ์‚ฌ๊ฐ€ ์ œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ช…๋ น์–ด๋Š” `cookiecutter`๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **huggingface/transformers ๋ฉ”์ธ ์ €์žฅ์†Œ์— Pull Request ์—ด๊ธฐ** ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•˜๊ธฐ ์ „์—, ์ง€๊ธˆ์€ "์ž‘์—… ์ง„ํ–‰ ์ค‘ (WIP)" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ด๊ธฐ ์œ„ํ•œ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— "*brand_new_bert* ์ถ”๊ฐ€"๋ผ๋Š” ์ œ๋ชฉ์˜ "[WIP] Add *brand_new_bert*" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ฝ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹น์‹ ๊ณผ Hugging Face ํŒ€์ด ๐Ÿค— Transformers์— ๋ชจ๋ธ์„ ํ†ตํ•ฉํ•˜๋Š” ์ž‘์—…์„ ํ•จ๊ป˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…์„ ์ž˜ ์„ค๋ช…ํ•˜๋Š” ์ด๋ฆ„์œผ๋กœ ๋ธŒ๋žœ์น˜ ์ƒ์„ฑ ```bash git checkout -b add_brand_new_bert ``` 2. ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ ์ปค๋ฐ‹ ```bash git add . git commit ``` 3. ํ˜„์žฌ ๋ฉ”์ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ๋ฒ ์ด์Šค ```bash git fetch upstream git rebase upstream/main ``` 4. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ณ„์ •์— ํ‘ธ์‹œ ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. ๋งŒ์กฑ์Šค๋Ÿฝ๋‹ค๋ฉด, GitHub์—์„œ ์ž์‹ ์˜ ํฌํฌํ•œ ์›น ํŽ˜์ด์ง€๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค. "Pull request"๋ฅผ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. Hugging Face ํŒ€์˜ ์ผ๋ถ€ ๋ฉค๋ฒ„์˜ GitHub ํ•ธ๋“ค์„ ๋ฆฌ๋ทฐ์–ด๋กœ ์ถ”๊ฐ€ํ•˜์—ฌ Hugging Face ํŒ€์ด ์•ž์œผ๋กœ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋Œ€ํ•ด ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 6. 
GitHub ํ’€ ๋ฆฌํ€˜์ŠคํŠธ ์›น ํŽ˜์ด์ง€ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” "Convert to draft"๋ฅผ ํด๋ฆญํ•˜์—ฌ PR์„ ์ดˆ์•ˆ์œผ๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์–ด๋–ค ์ง„์ „์„ ์ด๋ฃจ์—ˆ๋‹ค๋ฉด ์ž‘์—…์„ ์ปค๋ฐ‹ํ•˜๊ณ  ๊ณ„์ •์— ํ‘ธ์‹œํ•˜์—ฌ ํ’€ ๋ฆฌํ€˜์ŠคํŠธ์— ํ‘œ์‹œ๋˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ˜„์žฌ ๋ฉ”์ธ๊ณผ ์ž‘์—…์„ ์—…๋ฐ์ดํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git fetch upstream git merge upstream/main ``` ์ผ๋ฐ˜์ ์œผ๋กœ, ๋ชจ๋ธ ๋˜๋Š” ๊ตฌํ˜„์— ๊ด€ํ•œ ๋ชจ๋“  ์งˆ๋ฌธ์€ ์ž์‹ ์˜ PR์—์„œ ํ•ด์•ผ ํ•˜๋ฉฐ, PR์—์„œ ํ† ๋ก ๋˜๊ณ  ํ•ด๊ฒฐ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด Hugging Face ํŒ€์ด ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ปค๋ฐ‹ํ•˜๊ฑฐ๋‚˜ ์งˆ๋ฌธ์„ ํ•  ๋•Œ ํ•ญ์ƒ ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์ œ ๋˜๋Š” ์งˆ๋ฌธ์„ ํšจ์œจ์ ์œผ๋กœ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋ช…์‹œํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ๋•Œ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด, ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ชจ๋‘ ๋ณผ ์ˆ˜ ์žˆ๋Š” "Files changed" ํƒญ์œผ๋กœ ์ด๋™ํ•˜์—ฌ ์งˆ๋ฌธํ•˜๊ณ ์ž ํ•˜๋Š” ์ค„๋กœ ์ด๋™ํ•œ ๋‹ค์Œ "+" ๊ธฐํ˜ธ๋ฅผ ํด๋ฆญํ•˜์—ฌ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ๋ฌธ์ด๋‚˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋˜๋ฉด, ์ƒ์„ฑ๋œ ์ฝ”๋ฉ˜ํŠธ์˜ "Resolve" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, Hugging Face ํŒ€์€ ์ฝ”๋“œ๋ฅผ ๋ฆฌ๋ทฐํ•  ๋•Œ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ๋‚จ๊ธธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” PR์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์งˆ๋ฌธ์„ GitHub์—์„œ ๋ฌป๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ณต๊ฐœ์— ํฌ๊ฒŒ ๋„์›€์ด ๋˜์ง€ ์•Š๋Š” ๋งค์šฐ ์ผ๋ฐ˜์ ์ธ ์งˆ๋ฌธ์˜ ๊ฒฝ์šฐ, Slack์ด๋‚˜ ์ด๋ฉ”์ผ์„ ํ†ตํ•ด Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **5. brand_new_bert์— ๋Œ€ํ•ด ์ƒ์„ฑ๋œ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ์ ์šฉํ•˜๊ธฐ** ๋จผ์ €, ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ ์ž์ฒด์—๋งŒ ์ดˆ์ ์„ ๋งž์ถ”๊ณ  ํ† ํฌ๋‚˜์ด์ €์— ๋Œ€ํ•ด์„œ๋Š” ์‹ ๊ฒฝ ์“ฐ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ด€๋ จ ์ฝ”๋“œ๋Š” ๋‹ค์Œ์˜ ์ƒ์„ฑ๋œ ํŒŒ์ผ์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` ๋ฐ `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. ์ด์ œ ๋งˆ์นจ๋‚ด ์ฝ”๋”ฉ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค :). `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์˜ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BERT์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์ง€๊ฑฐ๋‚˜, ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BART์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ, ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์— ๋Œ€ํ•ด ๋ฐฐ์šด ๋‚ด์šฉ์„ ๋‹ค์‹œ ์ƒ๊ธฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: *๋ชจ๋ธ์ด BERT ๋˜๋Š” BART์™€ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฅธ๊ฐ€์š”?*. ์ž์ฃผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•˜๋Š” ๊ฒƒ์€ *self-attention* ๋ ˆ์ด์–ด, ์ •๊ทœํ™” ๋ ˆ์ด์–ด์˜ ์ˆœ์„œ ๋“ฑ์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ž์‹ ์˜ ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋„๋ก Transformers์—์„œ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์œ ์‚ฌํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **์ฐธ๊ณ ๋กœ** ์ด ์‹œ์ ์—์„œ, ์ฝ”๋“œ๊ฐ€ ์™„์ „ํžˆ ์ •ํ™•ํ•˜๊ฑฐ๋‚˜ ๊นจ๋—ํ•˜๋‹ค๊ณ  ํ™•์‹ ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์˜คํžˆ๋ ค ์ฒ˜์Œ์—๋Š” ์›๋ณธ ์ฝ”๋“œ์˜ ์ฒซ ๋ฒˆ์งธ *๋ถˆ์™„์ „ํ•˜๊ณ * ๋ณต์‚ฌ๋œ ๋ฒ„์ „์„ `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ๋ชจ๋“  ์ฝ”๋“œ๊ฐ€ ์ถ”๊ฐ€๋  ๋•Œ๊นŒ์ง€ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์ง„ํ–‰ํ•œ ํ›„, ๋‹ค์Œ ์„น์…˜์—์„œ ์„ค๋ช…ํ•œ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฝ”๋“œ๋ฅผ ์ ์ง„์ ์œผ๋กœ ๊ฐœ์„ ํ•˜๊ณ  ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
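To give a feel for the kind of change this step usually involves: if the main architectural difference from BERT were, say, pre-layer normalization instead of post-layer normalization, a first rough version of a layer might look roughly like the sketch below. The code is entirely hypothetical, with made-up default sizes — the block you actually paste from the original repository will look different:

```python
import torch
from torch import nn


class BrandNewBertLayer(nn.Module):
    """Hypothetical pre-layer-norm variant of a BERT-style block, for illustration only."""

    def __init__(self, hidden_size: int = 256, num_attention_heads: int = 4, intermediate_size: int = 1024):
        super().__init__()
        self.attention = nn.MultiheadAttention(hidden_size, num_attention_heads, batch_first=True)
        self.intermediate_dense = nn.Linear(hidden_size, intermediate_size)
        self.output_dense = nn.Linear(intermediate_size, hidden_size)
        self.activation = nn.GELU()
        self.layer_norm_before_attention = nn.LayerNorm(hidden_size)
        self.layer_norm_before_feed_forward = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # pre-norm: normalize *before* each sub-block, then add the residual connection
        normed = self.layer_norm_before_attention(hidden_states)
        attention_output, _ = self.attention(normed, normed, normed)
        hidden_states = hidden_states + attention_output

        normed = self.layer_norm_before_feed_forward(hidden_states)
        feed_forward_output = self.output_dense(self.activation(self.intermediate_dense(normed)))
        hidden_states = hidden_states + feed_forward_output
        return hidden_states
```

Keeping the block as a plain `nn.Module` with an explicit `forward`, rather than hiding it behind `nn.Sequential`, makes every intermediate tensor easy to print or inspect, which pays off during the debugging steps below.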
์ด ์‹œ์ ์—์„œ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š” ์œ ์ผํ•œ ๊ฒƒ์€ ๋‹ค์Œ ๋ช…๋ น์ด ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` ์œ„์˜ ๋ช…๋ น์€ `BrandNewBertConfig()`์— ์ •์˜๋œ ๊ธฐ๋ณธ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜๋ฉฐ, ์ด๋กœ์จ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ์˜ `init()` ๋ฉ”์„œ๋“œ๊ฐ€ ์ž‘๋™ํ•จ์„ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ฌด์ž‘์œ„ ์ดˆ๊ธฐํ™”๋Š” `BrandnewBertPreTrainedModel` ํด๋ž˜์Šค์˜ `_init_weights` ๋ฉ”์„œ๋“œ์—์„œ ์ˆ˜ํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋Š” ๊ตฌ์„ฑ ์„ค์ • ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ชจ๋“  ๋ฆฌํ”„ ๋ชจ๋“ˆ์„ ์ดˆ๊ธฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. BERT์˜ `_init_weights` ๋ฉ”์„œ๋“œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) ``` ๋ช‡ ๊ฐ€์ง€ ๋ชจ๋“ˆ์— ๋Œ€ํ•ด ํŠน๋ณ„ํ•œ ์ดˆ๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `Wav2Vec2ForPreTraining`์—์„œ ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐœ์˜ ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ผ๋ฐ˜์ ์ธ PyTorch `nn.Linear`์˜ ์ดˆ๊ธฐํ™”๋ฅผ ๊ฐ€์ ธ์•ผ ํ•˜์ง€๋งŒ, ๋‹ค๋ฅธ ๋ชจ๋“  ๋ ˆ์ด์–ด๋Š” ์œ„์™€ ๊ฐ™์€ ์ดˆ๊ธฐํ™”๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ฝ”๋“œํ™”๋ฉ๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` `_is_hf_initialized` ํ”Œ๋ž˜๊ทธ๋Š” ์„œ๋ธŒ๋ชจ๋“ˆ์„ ํ•œ ๋ฒˆ๋งŒ ์ดˆ๊ธฐํ™”ํ•˜๋„๋ก ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. `module.project_q` ๋ฐ `module.project_hid`์— ๋Œ€ํ•ด `True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ, ์šฐ๋ฆฌ๊ฐ€ ์ˆ˜ํ–‰ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ์ดˆ๊ธฐํ™”๊ฐ€ ์ดํ›„์— ๋ฎ์–ด์“ฐ์ด์ง€ ์•Š๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, `_init_weights` ํ•จ์ˆ˜๊ฐ€ ์ด๋“ค์—๊ฒŒ ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. **6. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊ทธ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ธฐ์กด ์ €์žฅ์†Œ์—์„œ ๋งŒ๋“  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ํ˜ธํ™˜๋˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฒ˜์Œ๋ถ€ํ„ฐ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค๋Š” *brand_new_bert*์™€ ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์„ฑ๋œ ์œ ์‚ฌํ•œ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์•„๋ณด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ์•ฝ๊ฐ„ ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ๋Œ€ํ•ด ์œ ์‚ฌํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์–ด๋””์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€ Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ๋ง์„ค์ด์ง€ ๋งˆ์„ธ์š”. - TensorFlow์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BERT์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
- PyTorch์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BART์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์—์„œ๋Š” PyTorch ๋ชจ๋ธ์ด ๋ ˆ์ด์–ด ๊ฐ€์ค‘์น˜๋ฅผ ์ €์žฅํ•˜๊ณ  ๋ ˆ์ด์–ด ์ด๋ฆ„์„ ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. PyTorch์—์„œ ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์€ ๋ ˆ์ด์–ด์— ์ง€์ •ํ•œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด PyTorch์—์„œ `SimpleModel`์ด๋ผ๋Š” ๋”๋ฏธ ๋ชจ๋ธ์„ ์ •์˜ํ•ด ๋ด…์‹œ๋‹ค: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` ์ด์ œ ์ด ๋ชจ๋ธ ์ •์˜์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ `dense`, `intermediate`, `layer_norm` ๋“ฑ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ ๋žœ๋คํ•˜๊ฒŒ ํ• ๋‹น๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ถœ๋ ฅํ•˜์—ฌ ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python model = SimpleModel() print(model) ``` ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` ์šฐ๋ฆฌ๋Š” ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์ด PyTorch์—์„œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋˜์–ด ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ๋ ˆ์ด์–ด์˜ ๊ฐ€์ค‘์น˜ ๊ฐ’์„ ์ถœ๋ ฅํ•˜์—ฌ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python print(model.dense.weight.data) ``` ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜์—ˆ์Œ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ฒดํฌํฌ์ธํŠธ์˜ ํ•ด๋‹น ๋ ˆ์ด์–ด์˜ ์ •ํ™•ํ•œ ๊ฐ€์ค‘์น˜๋กœ ์ฑ„์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด PyTorch ๋ชจ๋ธ์˜ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ ๊ฐ€์ค‘์น˜์™€ ํ•ด๋‹น ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๊ฐ€ **๋ชจ์–‘๊ณผ ์ด๋ฆ„** ๋ชจ๋‘์—์„œ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ชจ์–‘์— ๋Œ€ํ•œ assert ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฌธ์žฅ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` ๋˜ํ•œ ๋‘ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•˜์—ฌ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ์‹œ*: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` ๋ชจ์–‘ ๋˜๋Š” ์ด๋ฆ„์ด ์ผ์น˜ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ๋žœ๋ค์œผ๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ ˆ์ด์–ด์— ์ž˜๋ชป๋œ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ํ• ๋‹นํ•œ ๊ฒƒ์œผ๋กœ ์ถ”์ธก๋ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘์€ `BrandNewBertConfig()`์˜ ๊ตฌ์„ฑ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„ค์ •์ด ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ์ฒดํฌํฌ์ธํŠธ์— ์‚ฌ์šฉ๋œ ์„ค์ •๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ํฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PyTorch์˜ ๋ ˆ์ด์–ด ๊ตฌํ˜„ ์ž์ฒด์—์„œ ๊ฐ€์ค‘์น˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, **๋ชจ๋“ ** ํ•„์š”ํ•œ ๊ฐ€์ค‘์น˜๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ดˆ๊ธฐํ™”์— ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ์ถœ๋ ฅํ•˜์—ฌ ๋ชจ๋ธ์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋ณ€ํ™˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘ ๋ฌธ์žฅ์ด๋‚˜ ์ž˜๋ชป๋œ ์ด๋ฆ„ ํ• ๋‹น์œผ๋กœ ์ธํ•ด ๋ณ€ํ™˜ ์‹œ๋„๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒƒ์€ ์™„์ „ํžˆ ์ •์ƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” `BrandNewBertConfig()`์—์„œ ์ž˜๋ชป๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ ๐Ÿค— Transformers ๊ตฌํ˜„์—์„œ ์ž˜๋ชป๋œ ์•„ํ‚คํ…์ฒ˜, ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์˜ `init()` ํ•จ์ˆ˜์— ๋ฒ„๊ทธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ์ด๊ฑฐ๋‚˜ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋Š” ์ด์ „ ๋‹จ๊ณ„์™€ ํ•จ๊ป˜ ๋ฐ˜๋ณต๋˜์–ด์•ผ ํ•˜๋ฉฐ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ Transformers ๋ชจ๋ธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œ๋˜์—ˆ์„ ๋•Œ๊นŒ์ง€ ๊ณ„์†๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers ๊ตฌํ˜„์— ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” `/path/to/converted/checkpoint/folder`์™€ ๊ฐ™์€ ์›ํ•˜๋Š” ํด๋”์— ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ํด๋”์—๋Š” `pytorch_model.bin` ํŒŒ์ผ๊ณผ `config.json` ํŒŒ์ผ์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ๊ตฌํ˜„ํ•˜๊ธฐ** ๐Ÿค— Transformers ๊ตฌํ˜„์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [์›๋ณธ ์ €์žฅ์†Œ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ](#34-run-a-pretrained-checkpoint-using-the-original-repository)์—์„œ ์ด๋ฏธ ์›๋ณธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ์‹คํ–‰ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์›๋ณธ ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜๋Š” ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ์›๋ณธ ๋ชจ๋ธ ๊ตฌํ˜„์ด ์ฒ˜์Œ๋ถ€ํ„ฐ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ์ œ๊ณตํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’์Šต๋‹ˆ๋‹ค. ์‹ค๋งํ•˜์ง€ ๋งˆ์„ธ์š”. ์˜ˆ์ƒ๋œ ์ผ์ž…๋‹ˆ๋‹ค! ๋จผ์ €, ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ์ž˜๋ชป๋œ ์ฐจ์›์ด ์‚ฌ์šฉ๋˜์–ด *์ฐจ์› ๋ถˆ์ผ์น˜* ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ฑฐ๋‚˜ ์ž˜๋ชป๋œ ๋ฐ์ดํ„ฐ ์œ ํ˜• ๊ฐœ์ฒด๊ฐ€ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด `torch.long` ๋Œ€์‹ ์— `torch.float32`๊ฐ€ ์‚ฌ์šฉ๋œ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. 
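One quick way to rule out this class of error is to build the dummy input explicitly as a batched `torch.long` tensor and run the model in evaluation mode. A small sketch — the checkpoint path is a placeholder and `BrandNewBertModel` again stands in for your ported class:

```python
import torch

from transformers import BrandNewBertModel  # placeholder name for the ported model class

model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
model.eval()  # no dropout, so the forward pass stays deterministic

# token ids must be integers (torch.long) and carry an explicit batch dimension
input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]], dtype=torch.long)

with torch.no_grad():
    outputs = model(input_ids)

print(outputs.last_hidden_state.shape, outputs.last_hidden_state.dtype)
```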
ํ•ด๊ฒฐํ•  ์ˆ˜ ์—†๋Š” ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด Hugging Face ํŒ€์— ๋„์›€์„ ์š”์ฒญํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋Š” ์ถœ๋ ฅ์ด `1e-3`์˜ ์ •๋ฐ€๋„๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋จผ์ €, ์ถœ๋ ฅ ๋ชจ์–‘์ด ๋™์ผํ•˜๋„๋ก ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๐Ÿค— Transformers ๊ตฌํ˜„ ์Šคํฌ๋ฆฝํŠธ์™€ ์›๋ณธ ๊ตฌํ˜„ ์‚ฌ์ด์—์„œ `outputs.shape`๋Š” ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ์ถœ๋ ฅ ๊ฐ’์ด ๋™์ผํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ถœ๋ ฅ์ด ๋™์ผํ•˜์ง€ ์•Š์€ ์ผ๋ฐ˜์ ์ธ ์‹ค์ˆ˜ ์‚ฌ๋ก€๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ์ฆ‰, *ํ™œ์„ฑํ™”* ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜๊ฑฐ๋‚˜ ์ž”์ฐจ ์—ฐ๊ฒฐ์ด ๋น ์กŒ์Šต๋‹ˆ๋‹ค. - ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์ด ์—ฐ๊ฒฐ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. - ์ž˜๋ชป๋œ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์—์„œ๋Š” ์˜คํ”„์…‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout์ด ์ ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์ˆ˜์ •ํ•˜๋ ค๋ฉด *model.training์ด False*์ธ์ง€ ํ™•์ธํ•˜๊ณ  ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout ๋ ˆ์ด์–ด๊ฐ€ ์ž˜๋ชป ํ™œ์„ฑํ™”๋˜์ง€ ์•Š๋„๋ก ํ•˜์„ธ์š”. ์ฆ‰, [PyTorch์˜ ๊ธฐ๋Šฅ์  Dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)์— *self.training*์„ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ๋‚˜๋ž€ํžˆ ๋†“๊ณ  ์ฐจ์ด์ ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์ƒ์ ์œผ๋กœ๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ ๋””๋ฒ„๊ทธ/์ถœ๋ ฅํ•˜์—ฌ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ •ํ™•ํ•œ ์œ„์น˜๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, ๋‘ ์Šคํฌ๋ฆฝํŠธ์˜ ํ•˜๋“œ์ฝ”๋”ฉ๋œ `input_ids`๊ฐ€ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `input_ids`์˜ ์ฒซ ๋ฒˆ์งธ ๋ณ€ํ™˜์˜ ์ถœ๋ ฅ(์ผ๋ฐ˜์ ์œผ๋กœ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ)์ด ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋„คํŠธ์›Œํฌ์˜ ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๊นŒ์ง€ ์ง„ํ–‰ํ•ด๋ณด์„ธ์š”. ์–ด๋Š ์‹œ์ ์—์„œ ๋‘ ๊ตฌํ˜„ ์‚ฌ์ด์— ์ฐจ์ด๊ฐ€ ์žˆ๋Š” ๊ฒƒ์„ ์•Œ๊ฒŒ ๋˜๋Š”๋ฐ, ์ด๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๋ฒ„๊ทธ ์œ„์น˜๋ฅผ ๊ฐ€๋ฆฌํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ €ํฌ ๊ฒฝํ—˜์ƒ์œผ๋กœ๋Š” ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„ ๋ชจ๋‘์—์„œ ๋™์ผํ•œ ์œ„์น˜์— ๋งŽ์€ ์ถœ๋ ฅ ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ด๋“ค์˜ ์ค‘๊ฐ„ ํ‘œํ˜„์— ๋Œ€ํ•ด ๋™์ผํ•œ ๊ฐ’์„ ๋ณด์ด๋Š” ์ถœ๋ ฅ ๋ฌธ์„ ์—ฐ์†์ ์œผ๋กœ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ด ๊ฐ„๋‹จํ•˜๊ณ  ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. `torch.allclose(original_output, output, atol=1e-3)`๋กœ ์ถœ๋ ฅ์„ ํ™•์ธํ•˜์—ฌ ๋‘ ๊ตฌํ˜„์ด ๋™์ผํ•œ ์ถœ๋ ฅ์„ ํ•˜๋Š” ๊ฒƒ์„ ํ™•์‹ ํ•œ๋‹ค๋ฉด, ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์€ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ•ํ•˜๋“œ๋ฆฝ๋‹ˆ๋‹ค. ๋‚จ์€ ์ž‘์—…์€ ์‰ฌ์šด ์ผ์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค ๐Ÿ˜Š. **8. ํ•„์š”ํ•œ ๋ชจ๋“  ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์ถ”๊ฐ€ํ•˜๊ธฐ** ์ด ์‹œ์ ์—์„œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ด๋‹น ๋ชจ๋ธ์ด ์š”๊ตฌ๋˜๋Š” ๋””์ž์ธ์— ์™„์ „ํžˆ ๋ถ€ํ•ฉํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์™€ ์™„๋ฒฝํ•˜๊ฒŒ ํ˜ธํ™˜๋˜๋Š” ๊ตฌํ˜„์ธ์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Cookiecutter๋Š” ์•„๋งˆ๋„ ๋ชจ๋ธ์„ ์œ„ํ•œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋งˆ๋„ `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์™€ ๊ฐ™์€ ๊ฒฝ๋กœ์— ์œ„์น˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์‹คํ–‰ํ•˜์—ฌ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ชจ๋‘ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. 
```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py ``` ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜์ •ํ•œ ํ›„, ์ด์ œ ์ˆ˜ํ–‰ํ•œ ์ž‘์—…์„ ์ถฉ๋ถ„ํžˆ ํ…Œ์ŠคํŠธํ•˜์—ฌ ๋‹ค์Œ ์‚ฌํ•ญ์„ ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - a) ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ *brand_new_bert*์˜ ํŠน์ • ํ…Œ์ŠคํŠธ๋ฅผ ์‚ดํŽด๋ด„์œผ๋กœ์จ ์ž‘์—…์„ ์‰ฝ๊ฒŒ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•จ - b) ๋ชจ๋ธ์— ๋Œ€ํ•œ ํ–ฅํ›„ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์„ ์†์ƒ์‹œํ‚ค์ง€ ์•Š๋„๋ก ํ•จ ๋จผ์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋Š” ์ด์ „์— ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•œ ๋””๋ฒ„๊น… ์Šคํฌ๋ฆฝํŠธ์™€ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. Cookiecutter์— ์ด๋ฏธ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ์˜ ํ…œํ”Œ๋ฆฟ์ธ `BrandNewBertModelIntegrationTests`๊ฐ€ ์ถ”๊ฐ€๋˜์–ด ์žˆ์œผ๋ฉฐ, ์—ฌ๋Ÿฌ๋ถ„์ด ์ž‘์„ฑํ•ด์•ผ ํ•  ๋‚ด์šฉ์œผ๋กœ๋งŒ ์ฑ„์›Œ ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Windows๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ `RUN_SLOW=1`์„ `SET RUN_SLOW=1`๋กœ ๋ฐ”๊ฟ”์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘˜์งธ๋กœ, *brand_new_bert*์— ํŠนํ™”๋œ ๋ชจ๋“  ๊ธฐ๋Šฅ๋„ ๋ณ„๋„์˜ ํ…Œ์ŠคํŠธ์—์„œ ์ถ”๊ฐ€๋กœ ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ€๋ถ„์€ ์ข…์ข… ์žŠํžˆ๋Š”๋ฐ, ๋‘ ๊ฐ€์ง€ ์ธก๋ฉด์—์„œ ๊ต‰์žฅํžˆ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. - *brand_new_bert*์˜ ํŠน์ˆ˜ ๊ธฐ๋Šฅ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋ณด์—ฌ์คŒ์œผ๋กœ์จ ์ปค๋ฎค๋‹ˆํ‹ฐ์—๊ฒŒ ๋ชจ๋ธ ์ถ”๊ฐ€ ๊ณผ์ •์—์„œ ์Šต๋“ํ•œ ์ง€์‹์„ ์ „๋‹ฌํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ํ–ฅํ›„ ๊ธฐ์—ฌ์ž๋Š” ์ด๋Ÿฌํ•œ ํŠน์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋น ๋ฅด๊ฒŒ ํ…Œ์ŠคํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **9. ํ† ํฌ๋‚˜์ด์ € ๊ตฌํ˜„ํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers์˜ ๊ธฐ์กด ํ† ํฌ๋‚˜์ด์ €์™€ ๋™์ผํ•˜๊ฑฐ๋‚˜ ๋งค์šฐ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋จผ์ € ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์—์„œ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•˜๊ณ  `input_ids`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ์˜ฌ๋ฐ”๋ฅธ ํ† ํฌ๋‚˜์ด์ € ํ•จ์ˆ˜๋ฅผ ์ฐพ๊ฑฐ๋‚˜, ๋ณต์ œ๋ณธ์—์„œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ ์šฉํ•˜์—ฌ `input_ids`๋งŒ ์ถœ๋ ฅํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ธฐ๋Šฅ์ ์ธ ํ† ํฐํ™” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•œ ํ›„, ๐Ÿค— Transformers์˜ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` ๋‘ ๊ฐœ์˜ `input_ids`๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•  ๋•Œ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ ํ† ํฌ๋‚˜์ด์ € ํ…Œ์ŠคํŠธ ํŒŒ์ผ๋„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *brand_new_bert*์˜ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ œ์ด์…˜ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•˜๋“œ์ฝ”๋”ฉ๋œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **10. 
์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰** ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ช‡ ๊ฐ€์ง€ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•ด์ฃผ์„ธ์š”. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€๋ฅผ ์˜๋ฏธ ์žˆ๋Š” text-to-text ์˜ˆ์‹œ๋กœ ๋ณด์—ฌ์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์˜ˆ์‹œ๋กœ๋Š” *์˜ˆ๋ฅผ ๋“ค์–ด* source-to-target ๋ฒˆ์—ญ ์Œ, article-to-summary ์Œ, question-to-answer ์Œ ๋“ฑ์ด ํฌํ•จ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถˆ๋Ÿฌ์˜จ ์ฒดํฌํฌ์ธํŠธ ์ค‘ ์–ด๋Š ๊ฒƒ๋„ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋˜์ง€ ์•Š์•˜๋‹ค๋ฉด, ๋ชจ๋ธ ํ…Œ์ŠคํŠธ๋งŒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์™„์ „ํžˆ ๊ธฐ๋Šฅ์„ ๊ฐ–์ถ”์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ GPU์—์„œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋‚ด๋ถ€ ํ…์„œ์˜ ์ผ๋ถ€์— `.to(self.device)` ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์—ˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ํ…Œ์ŠคํŠธ์—์„œ ์˜ค๋ฅ˜๋กœ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ, Hugging Face ํŒ€์ด ํ…Œ์ŠคํŠธ๋ฅผ ๋Œ€์‹  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **11. ๊ธฐ์ˆ ๋ฌธ์„œ ์ถ”๊ฐ€** ์ด์ œ *brand_new_bert*์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ธฐ๋Šฅ์ด ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฑฐ์˜ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๊ฒƒ์€ ๋ฉ‹์ง„ ๊ธฐ์ˆ ๋ฌธ์„œ๊ณผ ๊ธฐ์ˆ ๋ฌธ์„œ ํŽ˜์ด์ง€์ž…๋‹ˆ๋‹ค. Cookiecutter๊ฐ€ `docs/source/model_doc/brand_new_bert.md`๋ผ๋Š” ํ…œํ”Œ๋ฆฟ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•ด์คฌ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ์šฉ์ž๋“ค์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ฌธ์„œ๋Š” ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ณ  ๊ฐ„๊ฒฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๊ธฐ ์œ„ํ•ด *ํŒ*์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋…์ŠคํŠธ๋ง์— ๊ด€๋ จํ•˜์—ฌ Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€๋œ ๋…์ŠคํŠธ๋ง์ด ์˜ฌ๋ฐ”๋ฅด๋ฉฐ ํ•„์š”ํ•œ ๋ชจ๋“  ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํฌํ•จํ•˜๋„๋ก ํ™•์ธํ•˜์„ธ์š”. [์—ฌ๊ธฐ](writing-documentation)์—์„œ ์šฐ๋ฆฌ์˜ ๋ฌธ์„œ ์ž‘์„ฑ ๊ฐ€์ด๋“œ์™€ ๋…์ŠคํŠธ๋ง ํ˜•์‹์— ๋Œ€ํ•œ ์ƒ์„ธ ๊ฐ€์ด๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์„œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๋ชจ๋ธ์˜ ์ฒซ ๋ฒˆ์งธ ์ ‘์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๋ฌธ์„œ๋Š” ์ ์–ด๋„ ์ฝ”๋“œ๋งŒํผ์˜ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **์ฝ”๋“œ ๋ฆฌํŒฉํ† ๋ง** ์ข‹์•„์š”, ์ด์ œ *brand_new_bert*๋ฅผ ์œ„ํ•œ ๋ชจ๋“  ํ•„์š”ํ•œ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์—ฌ ์ž ์žฌ์ ์œผ๋กœ ์ž˜๋ชป๋œ ์ฝ”๋“œ ์Šคํƒ€์ผ์„ ์ˆ˜์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๊ทธ๋ฆฌ๊ณ  ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ํ’ˆ์งˆ ์ ๊ฒ€์„ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜๊ณ  ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash make style ``` ๐Ÿค— Transformers์—๋Š” ์—ฌ์ „ํžˆ ์‹คํŒจํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๋งค์šฐ ์—„๊ฒฉํ•œ ๋””์ž์ธ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…์ŠคํŠธ๋ง์— ๋ˆ„๋ฝ๋œ ์ •๋ณด๋‚˜ ์ž˜๋ชป๋œ ๋ช…๋ช… ๋•Œ๋ฌธ์— ์ข…์ข… ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ๋ง‰ํžˆ๋ฉด Hugging Face ํŒ€์ด ๋„์›€์„ ์ค„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash make quality ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ฝ”๋“œ๊ฐ€ ์ •ํ™•ํžˆ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•œ ํ›„์—๋Š” ํ•ญ์ƒ ์ฝ”๋“œ๋ฅผ ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ๊ฒƒ์ด ์ข‹์€ ์ƒ๊ฐ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋œ ์ง€๊ธˆ์€ ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋‹ค์‹œ ๊ฒ€ํ† ํ•˜๊ณ  ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ์ข‹์€ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์ด์ œ ์ฝ”๋”ฉ ๋ถ€๋ถ„์„ ์™„๋ฃŒํ–ˆ์Šต๋‹ˆ๋‹ค. ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๐ŸŽ‰ ๋ฉ‹์ ธ์š”! ๐Ÿ˜Ž **12. 
๋ชจ๋ธ์„ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜์„ธ์š”** ์ด ๋งˆ์ง€๋ง‰ ํŒŒํŠธ์—์„œ๋Š” ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ณ  ๊ฐ ์—…๋กœ๋“œ๋œ ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Model sharing and uploading Page](model_sharing)๋ฅผ ์ฝ๊ณ  ํ—ˆ๋ธŒ ๊ธฐ๋Šฅ์— ์ต์ˆ™ํ•ด์ง€์„ธ์š”. *brand_new_bert*์˜ ์ €์ž ์กฐ์ง ์•„๋ž˜์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ํ•„์š”ํ•œ ์•ก์„ธ์Šค ๊ถŒํ•œ์„ ์–ป๊ธฐ ์œ„ํ•ด Hugging Face ํŒ€๊ณผ ํ˜‘์—…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `transformers`์˜ ๋ชจ๋“  ๋ชจ๋ธ์— ์žˆ๋Š” `push_to_hub` ๋ฉ”์„œ๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ—ˆ๋ธŒ์— ๋น ๋ฅด๊ณ  ํšจ์œจ์ ์œผ๋กœ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์— ์ž‘์€ ์ฝ”๋“œ ์กฐ๊ฐ์ด ๋ถ™์—ฌ์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: ๊ฐ ์ฒดํฌํฌ์ธํŠธ์— ์ ํ•ฉํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์€ ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ์˜ ํŠน์„ฑ์„ ๊ฐ•์กฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด* ์ด ์ฒดํฌํฌ์ธํŠธ๋Š” ์–ด๋–ค ๋ฐ์ดํ„ฐ์…‹์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ/์„ธ๋ถ€ ํ›ˆ๋ จ๋˜์—ˆ๋Š”์ง€? ์ด ๋ชจ๋ธ์€ ์–ด๋–ค ํ•˜์œ„ ์ž‘์—…์—์„œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€? ๊ทธ๋ฆฌ๊ณ  ๋ชจ๋ธ์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฝ”๋“œ๋„ ํฌํ•จํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` **13. (์„ ํƒ ์‚ฌํ•ญ) ๋…ธํŠธ๋ถ ์ถ”๊ฐ€** *brand_new_bert*๋ฅผ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ์ถ”๋ก  ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž์„ธํžˆ ๋ณด์—ฌ์ฃผ๋Š” ๋…ธํŠธ๋ถ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ PR์„ ๋ณ‘ํ•ฉํ•˜๋Š” ๋ฐ ํ•„์ˆ˜์ ์ด์ง€๋Š” ์•Š์ง€๋งŒ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. **14. ์™„๋ฃŒ๋œ PR ์ œ์ถœ** ์ด์ œ ํ”„๋กœ๊ทธ๋ž˜๋ฐ์„ ๋งˆ์ณค์œผ๋ฉฐ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ PR์„ ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์— ๋ณ‘ํ•ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต Hugging Face ํŒ€์€ ์ด๋ฏธ ์—ฌ๊ธฐ๊นŒ์ง€ ๋„์›€์„ ์ฃผ์—ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PR์— ๋ฉ‹์ง„ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๊ณ  ๋ฆฌ๋ทฐ์–ด์—๊ฒŒ ํŠน์ • ๋””์ž์ธ ์„ ํƒ ์‚ฌํ•ญ์„ ๊ฐ•์กฐํ•˜๋ ค๋ฉด ์™„๋ฃŒ๋œ PR์— ์•ฝ๊ฐ„์˜ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๋Š” ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ž‘์—…๋ฌผ์„ ๊ณต์œ ํ•˜์„ธ์š”!! [[share-your-work]] ์ด์ œ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ž‘์—…๋ฌผ์„ ์ธ์ •๋ฐ›์„ ์‹œ๊ฐ„์ž…๋‹ˆ๋‹ค! ๋ชจ๋ธ ์ถ”๊ฐ€ ์ž‘์—…์„ ์™„๋ฃŒํ•˜๋Š” ๊ฒƒ์€ Transformers์™€ ์ „์ฒด NLP ์ปค๋ฎค๋‹ˆํ‹ฐ์— ํฐ ๊ธฐ์—ฌ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ฝ”๋“œ์™€ ์ด์‹๋œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ์ˆ˜๋ฐฑ, ์‹ฌ์ง€์–ด ์ˆ˜์ฒœ ๋ช…์˜ ๊ฐœ๋ฐœ์ž์™€ ์—ฐ๊ตฌ์›์— ์˜ํ•ด ํ™•์‹คํžˆ ์‚ฌ์šฉ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ž‘์—…์— ์ž๋ž‘์Šค๋Ÿฌ์›Œํ•ด์•ผ ํ•˜๋ฉฐ ์ด๋ฅผ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋‹น์‹ ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋‚ด ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์—๊ฒŒ ๋งค์šฐ ์‰ฝ๊ฒŒ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•œ ๋˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿคฏ**
transformers/docs/source/ko/add_new_model.md/0
{ "file_path": "transformers/docs/source/ko/add_new_model.md", "repo_id": "transformers", "token_count": 43460 }
264
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # CPU์—์„œ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ [[efficient-training-on-cpu]] ์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ›ˆ๋ จํ•˜๋Š” ๋ฐ ์ดˆ์ ์„ ๋งž์ถฅ๋‹ˆ๋‹ค. ## IPEX์™€ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ [[mixed-precision-with-ipex]] IPEX๋Š” AVX-512 ์ด์ƒ์„ ์ง€์›ํ•˜๋Š” CPU์— ์ตœ์ ํ™”๋˜์–ด ์žˆ์œผ๋ฉฐ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU์—๋„ ๊ธฐ๋Šฅ์ ์œผ๋กœ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ AVX-512 ์ด์ƒ์˜ Intel CPU ์„ธ๋Œ€์—์„œ๋Š” ์„ฑ๋Šฅ์ƒ ์ด์ ์ด ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜์ง€๋งŒ, AVX2๋งŒ ์ง€์›ํ•˜๋Š” CPU (์˜ˆ: AMD CPU ๋˜๋Š” ์˜ค๋ž˜๋œ Intel CPU)์˜ ๊ฒฝ์šฐ์—๋Š” IPEX ์•„๋ž˜์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์ด๋Š” ๋ณด์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. IPEX๋Š” Float32์™€ BFloat16๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์—ฌ CPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. BFloat16์˜ ์‚ฌ์šฉ์€ ๋‹ค์Œ ์„น์…˜์˜ ์ฃผ์š” ์ดˆ์ ์ž…๋‹ˆ๋‹ค. ์ €์ •๋ฐ€๋„ ๋ฐ์ดํ„ฐ ํƒ€์ž…์ธ BFloat16์€ 3์„ธ๋Œ€ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ (์ฝ”๋“œ๋ช…: Cooper Lake)์—์„œ AVX512 ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ๋„ค์ดํ‹ฐ๋ธŒ๋กœ ์ง€์›ํ•ด ์™”์œผ๋ฉฐ, ๋‹ค์Œ ์„ธ๋Œ€์˜ Intelยฎ Xeonยฎ Scalable ํ”„๋กœ์„ธ์„œ์—์„œ Intelยฎ Advanced Matrix Extensions (Intelยฎ AMX) ๋ช…๋ น์–ด ์ง‘ํ•ฉ์„ ์ง€์›ํ•˜์—ฌ ์„ฑ๋Šฅ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚ฌ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. CPU ๋ฐฑ์—”๋“œ์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๊ธฐ๋Šฅ์€ PyTorch-1.10๋ถ€ํ„ฐ ํ™œ์„ฑํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋™์‹œ์—, Intelยฎ Extension for PyTorch์—์„œ BFloat16์— ๋Œ€ํ•œ CPU์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ฐ ์—ฐ์‚ฐ์ž์˜ BFloat16 ์ตœ์ ํ™”๋ฅผ ๋Œ€๊ทœ๋ชจ๋กœ ํ™œ์„ฑํ™”ํ•˜๊ณ , PyTorch ๋งˆ์Šคํ„ฐ ๋ธŒ๋žœ์น˜๋กœ ๋ถ€๋ถ„์ ์œผ๋กœ ์—…์ŠคํŠธ๋ฆผ์„ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž๋“ค์€ IPEX ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์„ฑ๋Šฅ๊ณผ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฆด๋ฆฌ์Šค๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ๊ฐ‘๋‹ˆ๋‹ค. pip๋ฅผ ํ†ตํ•ด ์„ค์น˜ํ•˜๋ ค๋ฉด: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | ``` pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` [IPEX ์„ค์น˜](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### Trainer์—์„œ์˜ ์‚ฌ์šฉ๋ฒ• [[usage-in-trainer]] Trainer์—์„œ IPEX์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž๋Š” ํ›ˆ๋ จ ๋ช…๋ น ์ธ์ˆ˜์— `use_ipex`, `bf16`, `no_cuda`๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Transformers ์งˆ๋ฌธ-์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
- CPU์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ IPEX๋กœ ํ›ˆ๋ จํ•˜๊ธฐ: <pre> python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex \</b> <b>--bf16 --no_cuda</b></pre> ### ์‹ค์Šต ์˜ˆ์‹œ [[practice-example]] ๋ธ”๋กœ๊ทธ: [Intel Sapphire Rapids๋กœ PyTorch Transformers ๊ฐ€์†ํ™”](https://huggingface.co/blog/intel-sapphire-rapids)
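The same switches can also be set programmatically instead of on the command line, which is convenient when the `Trainer` is driven from your own script. A minimal sketch — the output directory and the remaining hyperparameters simply mirror the example above and are placeholders:

```python
from transformers import TrainingArguments

# mirrors the --use_ipex --bf16 --no_cuda flags from the command above
training_args = TrainingArguments(
    output_dir="/tmp/debug_squad/",
    per_device_train_batch_size=12,
    learning_rate=3e-5,
    num_train_epochs=2,
    use_ipex=True,  # enable Intel Extension for PyTorch optimizations
    bf16=True,  # BFloat16 auto mixed precision on CPU
    no_cuda=True,  # keep training on the CPU
)
```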
transformers/docs/source/ko/perf_train_cpu.md/0
{ "file_path": "transformers/docs/source/ko/perf_train_cpu.md", "repo_id": "transformers", "token_count": 2390 }
265
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] [[open-in-colab]] <Youtube id="TksaY_FDgnk"/> ์ž๋™ ์Œ์„ฑ ์ธ์‹(Automatic Speech Recognition, ASR)์€ ์Œ์„ฑ ์‹ ํ˜ธ๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์Œ์„ฑ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ํ…์ŠคํŠธ ์ถœ๋ ฅ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. Siri์™€ Alexa์™€ ๊ฐ™์€ ๊ฐ€์ƒ ์–ด์‹œ์Šคํ„ดํŠธ๋Š” ASR ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ์ƒ์ ์œผ๋กœ ์‚ฌ์šฉ์ž๋ฅผ ๋•๊ณ  ์žˆ์œผ๋ฉฐ, ํšŒ์˜ ์ค‘ ๋ผ์ด๋ธŒ ์บก์…˜ ๋ฐ ๋ฉ”๋ชจ ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์‚ฌ์šฉ์ž ์นœํ™”์  ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ๋„ ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate jiwer ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-minds-14-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ถ„์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ์‹œ๊ฐ„์„ ๋“ค์ด๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]") ``` [`~Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> minds = minds.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 16 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 4 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id`์™€ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ, ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio`์™€ `transcription`์— ์ดˆ์ ์„ ๋งž์ถœ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ๋‹ค์‹œ ํ•œ๋ฒˆ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> minds["train"][0] {'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414, 0.00024414, 0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 8000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `array(๋ฐฐ์—ด)` - `transcription`: ๋ชฉํ‘œ ํ…์ŠคํŠธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ Wav2Vec2 ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base") ``` MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋Š” 8000kHz์ด๋ฏ€๋กœ([๋ฐ์ดํ„ฐ ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธ), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16000kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) >>> minds["train"][0] {'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ..., 2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'sampling_rate': 16000}, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav', 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"} ``` ์œ„์˜ 'transcription'์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ํ…์ŠคํŠธ๋Š” ๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ์„ž์—ฌ ์žˆ์Šต๋‹ˆ๋‹ค. 
Wav2Vec2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋Œ€๋ฌธ์ž ๋ฌธ์ž์— ๋Œ€ํ•ด์„œ๋งŒ ํ›ˆ๋ จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ…์ŠคํŠธ๊ฐ€ ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> def uppercase(example): ... return {"transcription": example["transcription"].upper()} >>> minds = minds.map(uppercase) ``` ์ด์ œ ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์—์„œ `input_values`๋ฅผ ์ถ”์ถœํ•˜๊ณ  ํ”„๋กœ์„ธ์„œ๋กœ `transcription` ์—ด์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def prepare_dataset(batch): ... audio = batch["audio"] ... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"]) ... batch["input_length"] = len(batch["input_values"][0]) ... return batch ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `num_proc` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4) ``` ๐Ÿค— Transformers์—๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ”์„ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์š”์†Œ์˜ ๊ธธ์ด์— ๋™์ ์œผ๋กœ ํŒจ๋”ฉํ•˜์—ฌ ๊ธธ์ด๋ฅผ ๊ท ์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. `tokenizer` ํ•จ์ˆ˜์—์„œ `padding=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๋™์  ํŒจ๋”ฉ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ ์ด ํŠน์ • ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” `input_values`์™€ `labels`์— ๋Œ€ํ•ด ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> from dataclasses import dataclass, field >>> from typing import Any, Dict, List, Optional, Union >>> @dataclass ... class DataCollatorCTCWithPadding: ... processor: AutoProcessor ... padding: Union[bool, str] = "longest" ... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: ... # ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค ... # ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค ... input_features = [{"input_values": feature["input_values"][0]} for feature in features] ... label_features = [{"input_ids": feature["labels"]} for feature in features] ... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt") ... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt") ... # ํŒจ๋”ฉ์— ๋Œ€ํ•ด ์†์‹ค์„ ์ ์šฉํ•˜์ง€ ์•Š๋„๋ก -100์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค ... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) ... batch["labels"] = labels ... return batch ``` ์ด์ œ `DataCollatorForCTCWithPadding`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest") ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ์ž‘์—…์—์„œ๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate, WER)](https://huggingface.co/spaces/evaluate-metric/wer) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> wer = evaluate.load("wer") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ WER์„ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(pred): ... pred_logits = pred.predictions ... pred_ids = np.argmax(pred_logits, axis=-1) ... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id ... pred_str = processor.batch_decode(pred_ids) ... label_str = processor.batch_decode(pred.label_ids, group_tokens=False) ... wer = wer.compute(predictions=pred_str, references=label_str) ... return {"wer": wer} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCTC`]๋กœ Wav2Vec2๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. `ctc_loss_reduction` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ CTC ์†์‹ค์— ์ ์šฉํ•  ์ถ•์†Œ(reduction) ๋ฐฉ๋ฒ•์„ ์ง€์ •ํ•˜์„ธ์š”. ๊ธฐ๋ณธ๊ฐ’์ธ ํ•ฉ๊ณ„ ๋Œ€์‹  ํ‰๊ท ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋” ์ข‹์€ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCTC, TrainingArguments, Trainer >>> model = AutoModelForCTC.from_pretrained( ... "facebook/wav2vec2-base", ... ctc_loss_reduction="mean", ... pad_token_id=processor.tokenizer.pad_token_id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ๋ชจ๋ธ์„ ์ €์žฅํ•  ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). [`Trainer`]๋Š” ๊ฐ ์—ํญ๋งˆ๋‹ค WER์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_asr_mind_model", ... per_device_train_batch_size=8, ... gradient_accumulation_steps=2, ... learning_rate=1e-5, ... warmup_steps=500, ... max_steps=2000, ... gradient_checkpointing=True, ... fp16=True, ... group_by_length=True, ... evaluation_strategy="steps", ... per_device_eval_batch_size=8, ... save_steps=1000, ... eval_steps=1000, ... logging_steps=25, ... load_best_model_at_end=True, ... metric_for_best_model="wer", ... greater_is_better=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=processor.feature_extractor, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... 
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋‘๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <Tip> ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ์˜์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-wav2vec2-english)์™€ ๋‹ค๊ตญ์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ๋น„์œจ์„ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์— ๋งž๊ฒŒ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features["audio"].sampling_rate >>> audio_file = dataset[0]["audio"]["path"] ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model") >>> transcriber(audio_file) {'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'} ``` <Tip> ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜๋œ ๊ฒฐ๊ณผ๊ฐ€ ๊ฝค ๊ดœ์ฐฎ์ง€๋งŒ ๋” ์ข‹์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋” ๋งŽ์€ ์˜ˆ์ œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”! </Tip> `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์žฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์˜ค๋””์˜ค ํŒŒ์ผ๊ณผ ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  PyTorch ํ…์„œ๋กœ `input`์„ ๋ฐ˜ํ™˜ํ•  ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForCTC >>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์˜ `input_ids`๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ธก๋œ `input_ids`๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> import torch >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) >>> transcription ['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'] ``` </pt> </frameworkcontent>
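์ฐธ๊ณ ๋กœ, ์•ž์„œ ๋ถˆ๋Ÿฌ์˜จ WER ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ์˜ˆ์ธก ๊ฒฐ๊ณผ๋ฅผ ์ •๋Ÿ‰์ ์œผ๋กœ๋„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์œ„์—์„œ ๋งŒ๋“  `transcriber` ํŒŒ์ดํ”„๋ผ์ธ๊ณผ 16kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•œ `dataset`์„ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ๋Œ€๋ฌธ์ž๋กœ ์ „์‚ฌํ•˜๋ฏ€๋กœ ์ •๋‹ต ํ…์ŠคํŠธ๋„ ๋Œ€๋ฌธ์ž๋กœ ๋ฐ”๊พผ ๋’ค ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import evaluate

>>> wer_metric = evaluate.load("wer")

>>> sample = dataset[0]
>>> prediction = transcriber(sample["audio"]["array"])["text"]  # ํŒŒ์ดํ”„๋ผ์ธ์ด ๋ฐ˜ํ™˜ํ•œ ์ „์‚ฌ ํ…์ŠคํŠธ
>>> reference = sample["transcription"].upper()  # ๋ชจ๋ธ ์ถœ๋ ฅ๊ณผ ๊ฐ™์ด ๋Œ€๋ฌธ์ž๋กœ ํ†ต์ผ

>>> wer_metric.compute(predictions=[prediction], references=[reference])  # 0์— ๊ฐ€๊นŒ์šธ์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค
```

๋” ๋งŽ์€ ์ƒ˜ํ”Œ์— ๋Œ€ํ•ด ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ธก๊ณผ ์ •๋‹ต ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ชจ์•„ ์ „๋‹ฌํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋Œ€ํ•œ WER์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.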
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์˜์ƒ ๋ถ„๋ฅ˜ [[video-classification]] [[open-in-colab]] ์˜์ƒ ๋ถ„๋ฅ˜๋Š” ์˜์ƒ ์ „์ฒด์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ ์˜์ƒ์—๋Š” ํ•˜๋‚˜์˜ ํด๋ž˜์Šค๊ฐ€ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์˜์ƒ์„ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์•„ ์–ด๋Š ํด๋ž˜์Šค์— ์†ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์˜์ƒ์ด ์–ด๋–ค ๋‚ด์šฉ์ธ์ง€ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์‘์šฉ ์˜ˆ๋Š” ํ”ผํŠธ๋‹ˆ์Šค ์•ฑ์—์„œ ์œ ์šฉํ•œ ๋™์ž‘ / ์šด๋™ ์ธ์‹ ์„œ๋น„์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋˜ํ•œ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ์ด๋™ํ•  ๋•Œ ๋ณด์กฐํ•˜๋Š”๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: 1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ํ†ตํ•ด [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q pytorchvideo transformers evaluate ``` ์˜์ƒ์„ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [PyTorchVideo](https://pytorchvideo.org/)(์ดํ•˜ `pytorchvideo`)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## UCF101 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-ufc101-dataset]] [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ(subset)์„ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•™์Šตํ•˜๋Š”๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ฐ์ดํ„ฐ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์ด ๋‹ค์šด๋กœ๋“œ ๋˜๋ฉด, ์••์ถ•๋œ ํŒŒ์ผ์˜ ์••์ถ•์„ ํ•ด์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. 
```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` ์ •๋ ฌ๋œ ์˜์ƒ์˜ ๊ฒฝ๋กœ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` ๋™์ผํ•œ ๊ทธ๋ฃน/์žฅ๋ฉด์— ์†ํ•˜๋Š” ์˜์ƒ ํด๋ฆฝ์€ ํŒŒ์ผ ๊ฒฝ๋กœ์—์„œ `g`๋กœ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, `v_ApplyEyeMakeup_g07_c04.avi`์™€ `v_ApplyEyeMakeup_g07_c06.avi` ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘˜์€ ๊ฐ™์€ ๊ทธ๋ฃน์ž…๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ์„ ํ•  ๋•Œ, [๋ฐ์ดํ„ฐ ๋ˆ„์ถœ(data leakage)](https://www.kaggle.com/code/alexisbcook/data-leakage)์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ๋™์ผํ•œ ๊ทธ๋ฃน / ์žฅ๋ฉด์˜ ์˜์ƒ ํด๋ฆฝ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ํ•˜์œ„ ์ง‘ํ•ฉ์€ ์ด๋Ÿฌํ•œ ์ •๋ณด๋ฅผ ๊ณ ๋ คํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์กด์žฌํ•˜๋Š” ๋ผ๋ฒจ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ ๋„์›€์ด ๋  ๋”•์…”๋„ˆ๋ฆฌ(dictionary data type)๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. * `label2id`: ํด๋ž˜์Šค ์ด๋ฆ„์„ ์ •์ˆ˜์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. * `id2label`: ์ •์ˆ˜๋ฅผ ํด๋ž˜์Šค ์ด๋ฆ„์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ์ด 10๊ฐœ์˜ ๊ณ ์œ ํ•œ ํด๋ž˜์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ํด๋ž˜์Šค๋งˆ๋‹ค 30๊ฐœ์˜ ์˜์ƒ์ด ํ›ˆ๋ จ ์„ธํŠธ์— ์žˆ์Šต๋‹ˆ๋‹ค ## ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-a-model-to-fine-tune]] ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์™€ ์ฒดํฌํฌ์ธํŠธ์— ์—ฐ๊ด€๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ธ์ฝ”๋”์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ œ๊ณต๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋Š” ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... 
) ``` ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋™์•ˆ, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฝ๊ณ ๋ฅผ ๋งˆ์ฃผ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ์œ„ ๊ฒฝ๊ณ ๋Š” ์šฐ๋ฆฌ๊ฐ€ ์ผ๋ถ€ ๊ฐ€์ค‘์น˜(์˜ˆ: `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ)๋ฅผ ๋ฒ„๋ฆฌ๊ณ  ์ƒˆ๋กœ์šด `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ์„ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๋Š” ์ƒˆ๋กœ์šด ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ผ๊ณ  ๊ฒฝ๊ณ ๋ฅผ ๋ณด๋‚ด๋Š” ๊ฒƒ์€ ๋‹น์—ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด์ œ ์šฐ๋ฆฌ๋Š” ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. **์ฐธ๊ณ ** ์ด [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics)๋Š” ๋„๋ฉ”์ธ์ด ๋งŽ์ด ์ค‘์ฒฉ๋œ ์œ ์‚ฌํ•œ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ ์ฒดํฌํฌ์ธํŠธ์ด๋ฏ€๋กœ ์ด ์ž‘์—…์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `MCG-NJU/videomae-base-finetuned-kinetics` ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ[[prepare-the-datasets-for-training]] ์˜์ƒ ์ „์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด [PyTorchVideo ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://pytorchvideo.org/)๋ฅผ ํ™œ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... ) ``` ํ•™์Šต ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๊ท ์ผํ•œ ์‹œ๊ฐ„ ์ƒ˜ํ”Œ๋ง(uniform temporal subsampling)', 'ํ”ฝ์…€ ์ •๊ทœํ™”(pixel normalization)', '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ(random cropping)' ๋ฐ '๋žœ๋ค ์ˆ˜ํ‰ ๋’ค์ง‘๊ธฐ(random horizontal flipping)'์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ'์™€ '๋žœ๋ค ๋’ค์ง‘๊ธฐ'๋ฅผ ์ œ์™ธํ•œ ๋™์ผํ•œ ๋ณ€ํ™˜ ์ฒด์ธ์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [PyTorchVideo ๊ณต์‹ ๋ฌธ์„œ](https://pytorchvideo.org)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์Œ ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์˜์ƒ ํ”„๋ ˆ์ž„ ํ”ฝ์…€์„ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ * ์˜์ƒ ํ”„๋ ˆ์ž„์ด ์กฐ์ •๋  ๊ณต๊ฐ„ ํ•ด์ƒ๋„ ๋จผ์ €, ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠนํ™”๋œ ์ „์ฒ˜๋ฆฌ(transform)๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ž์ฒด๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` ๊ฐ™์€ ๋ฐฉ์‹์˜ ์ž‘์—… ํ๋ฆ„์„ ๊ฒ€์ฆ๊ณผ ํ‰๊ฐ€ ์„ธํŠธ์—๋„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **์ฐธ๊ณ **: ์œ„์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŒŒ์ดํ”„๋ผ์ธ์€ [๊ณต์‹ ํŒŒ์ดํ† ์น˜ ์˜ˆ์ œ](https://pytorchvideo.org/docs/tutorial_classification#dataset)์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” UCF-101 ๋ฐ์ดํ„ฐ์…‹์— ๋งž๊ฒŒ [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚ด๋ถ€์ ์œผ๋กœ ์ด ํ•จ์ˆ˜๋Š” [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `LabeledVideoDataset` ํด๋ž˜์Šค๋Š” PyTorchVideo ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ชจ๋“  ์˜์ƒ ๊ด€๋ จ ์ž‘์—…์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorchVideo์—์„œ ๋ฏธ๋ฆฌ ์ œ๊ณตํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ์ด ํด๋ž˜์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ํ™•์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ์ž์„ธํ•œ ์‚ฌํ•ญ์ด ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด `data` API [๋ฌธ์„œ](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. 
๋˜ํ•œ ์œ„์˜ ์˜ˆ์‹œ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋ฅผ ๊ฐ–๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์˜์ƒ์˜ ๊ฐœ์ˆ˜๋ฅผ ์•Œ๊ธฐ ์œ„ํ•ด `num_videos` ์ธ์ˆ˜์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## ๋” ๋‚˜์€ ๋””๋ฒ„๊น…์„ ์œ„ํ•ด ์ „์ฒ˜๋ฆฌ ์˜์ƒ ์‹œ๊ฐํ™”ํ•˜๊ธฐ[[visualize-the-preprocessed-video-for-better-debugging]] ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-the-model]] ๐Ÿค— Transformers์˜ [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ๋ณด์„ธ์š”. `Trainer`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์„ค์ •๊ณผ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments)์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ชจ๋“  ์†์„ฑ์„ ํฌํ•จํ•˜๋ฉฐ, ํ›ˆ๋ จ ์ค‘ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•  ์ถœ๋ ฅ ํด๋” ์ด๋ฆ„์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๐Ÿค— Hub์˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ๋ชจ๋“  ์ •๋ณด๋ฅผ ๋™๊ธฐํ™”ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋Š” ๋”ฐ๋กœ ์„ค๋ช…ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ์—์„œ ์ค‘์š”ํ•œ ์ธ์ˆ˜๋Š” `remove_unused_columns=False` ์ž…๋‹ˆ๋‹ค. ์ด ์ธ์ž๋Š” ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๋ชจ๋“  ์†์„ฑ ์—ด(columns)์„ ์‚ญ์ œํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์€ ์ผ๋ฐ˜์ ์œผ๋กœ True์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ ์—ด์„ ์‚ญ์ œํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ด๋ฉฐ, ์ž…๋ ฅ์„ ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜๋กœ ํ’€๊ธฐ(unpack)๊ฐ€ ์‰ฌ์›Œ์ง€๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด ๊ฒฝ์šฐ์—๋Š” `pixel_values`(๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ํ•„์ˆ˜์ ์ธ ํ‚ค)๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ('video'๊ฐ€ ํŠนํžˆ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ remove_unused_columns์„ False๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... new_model_name, ... remove_unused_columns=False, ... 
evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋กœ ๋ฐ˜ํ™˜๋˜๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” `__len__` ๋ฉ”์†Œ๋“œ๊ฐ€ ์ด์‹๋˜์–ด ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, `TrainingArguments`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•  ๋•Œ `max_steps`๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ์˜ˆ์ธก๊ฐ’์—์„œ ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•  ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ „์ฒ˜๋ฆฌ ์ž‘์—…์€ ์˜ˆ์ธก๋œ ๋กœ์ง“(logits)์— argmax ๊ฐ’์„ ์ทจํ•˜๋Š” ๊ฒƒ๋ฟ์ž…๋‹ˆ๋‹ค: ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **ํ‰๊ฐ€์— ๋Œ€ํ•œ ์ฐธ๊ณ ์‚ฌํ•ญ**: [VideoMAE ๋…ผ๋ฌธ](https://arxiv.org/abs/2203.12602)์—์„œ ์ €์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ‰๊ฐ€ ์ „๋žต์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ์˜์ƒ์—์„œ ์—ฌ๋Ÿฌ ํด๋ฆฝ์„ ์„ ํƒํ•˜๊ณ  ๊ทธ ํด๋ฆฝ์— ๋‹ค์–‘ํ•œ ํฌ๋กญ์„ ์ ์šฉํ•˜์—ฌ ์ง‘๊ณ„ ์ ์ˆ˜๋ฅผ ๋ณด๊ณ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฒˆ ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ฐ„๋‹จํ•จ๊ณผ ๊ฐ„๊ฒฐํ•จ์„ ์œ„ํ•ด ํ•ด๋‹น ์ „๋žต์„ ๊ณ ๋ คํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์˜ˆ์ œ๋ฅผ ๋ฌถ์–ด์„œ ๋ฐฐ์น˜๋ฅผ ํ˜•์„ฑํ•˜๋Š” `collate_fn`์„ ์ •์˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ฐฐ์น˜๋Š” `pixel_values`์™€ `labels`๋ผ๋Š” 2๊ฐœ์˜ ํ‚ค๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... return {"pixel_values": pixel_values, "labels": labels} ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ๋ชจ๋“  ๊ฒƒ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ•จ๊ป˜ `Trainer`์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ด๋ฏธ ์ฒ˜๋ฆฌํ–ˆ๋Š”๋ฐ๋„ ๋ถˆ๊ตฌํ•˜๊ณ  `image_processor`๋ฅผ ํ† ํฌ๋‚˜์ด์ € ์ธ์ˆ˜๋กœ ๋„ฃ์€ ์ด์œ ๋Š” JSON์œผ๋กœ ์ €์žฅ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๊ตฌ์„ฑ ํŒŒ์ผ์ด Hub์˜ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œ๋˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•จ์ž…๋‹ˆ๋‹ค. `train` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> train_results = trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์„ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์—ฌ ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜์ƒ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”: ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline)์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์˜์ƒ ๋ถ„๋ฅ˜๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜์ƒ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` ๋ชจ๋ธ์— ์ž…๋ ฅ๊ฐ’์„ ๋„ฃ๊ณ  `logits`์„ ๋ฐ˜ํ™˜๋ฐ›์œผ์„ธ์š”: ``` >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits`์„ ๋””์ฝ”๋”ฉํ•˜๋ฉด, ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Tour rรกpido - local: installation title: Instalaรงรฃo title: Inรญcio - sections: - local: pipeline_tutorial title: Pipelines para inferรชncia - local: training title: Fine-tuning de um modelo prรฉ-treinado - local: accelerate title: Treinamento distribuรญdo com ๐Ÿค— Accelerate title: Tutoriais - sections: - local: fast_tokenizers title: Usando os Tokenizers do ๐Ÿค— Tokenizers - local: create_a_model title: Criando uma arquitetura customizada - local: custom_models title: Compartilhando modelos customizados - local: run_scripts title: Treinamento a partir de um script - local: converting_tensorflow_models title: Convertendo checkpoints do TensorFlow para Pytorch - local: serialization title: Exportando modelos para ONNX - sections: - local: tasks/sequence_classification title: Classificaรงรฃo de texto - local: tasks/token_classification title: Classificaรงรฃo de tokens title: Fine-tuning para tarefas especรญficas - local: multilingual title: Modelos multilinguรญsticos para inferรชncia title: Guias prรกticos
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐŸเฐจ title: เฐชเฑเฐฐเฐพเฐฐเฐ‚เฐญเฐฟเฐ‚เฐšเฐกเฐพเฐจเฐฟเฐ•เฐฟ