--- title: "An overview of inference solutions on Hugging Face" thumbnail: /blog/assets/116_inference_update/widget.png authors: - user: juliensimon --- # An Overview of Inference Solutions on Hugging Face Every day, developers and organizations are adopting models hosted on [Hugging Face](https://huggingface.co/models) to turn ideas into proof-of-concept demos, and demos into production-grade applications. For instance, Transformer models have become a popular architecture for a wide range of machine learning (ML) applications, including natural language processing, computer vision, speech, and more. Recently, diffusers have become a popular architecuture for text-to-image or image-to-image generation. Other architectures are popular for other tasks, and we host all of them on the HF Hub! At Hugging Face, we are obsessed with simplifying ML development and operations without compromising on state-of-the-art quality. In this respect, the ability to test and deploy the latest models with minimal friction is critical, all along the lifecycle of an ML project. Optimizing the cost-performance ratio is equally important, and we'd like to thank our friends at [Intel](https://huggingface.co/intel) for sponsoring our free CPU-based inference solutions. This is another major step in our [partnership](https://huggingface.co/blog/intel). It's also great news for our user community, who can now enjoy the speedup delivered by the [Intel Xeon Ice Lake](https://www.intel.com/content/www/us/en/products/docs/processors/xeon/3rd-gen-xeon-scalable-processors-brief.html) architecture at zero cost. Now, let's review your inference options with Hugging Face. ## Free Inference Widget One of my favorite features on the Hugging Face hub is the Inference [Widget](https://huggingface.co/docs/hub/models-widgets). Located on the model page, the Inference Widget lets you upload sample data and predict it in a single click. Here's a sentence similarity example with the `sentence-transformers/all-MiniLM-L6-v2` [model](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2): <kbd> <img src="assets/116_inference_update/widget.png"> </kbd> It's the best way to quickly get a sense of what a model does, its output, and how it performs on a few samples from your dataset. The model is loaded on-demand on our servers and unloaded when it's not needed anymore. You don't have to write any code and the feature is free. What's not to love? ## Free Inference API The [Inference API](https://huggingface.co/docs/api-inference/) is what powers the Inference widget under the hood. With a simple HTTP request, you can load any hub model and predict your data with it in seconds. The model URL and a valid hub token are all you need. Here's how I can load and predict with the `xlm-roberta-base` [model](https://huggingface.co/xlm-roberta-base) in a single line: ``` curl https://api-inference.huggingface.co/models/xlm-roberta-base \ -X POST \ -d '{"inputs": "The answer to the universe is <mask>."}' \ -H "Authorization: Bearer HF_TOKEN" ``` The Inference API is the simplest way to build a prediction service that you can immediately call from your application during development and tests. No need for a bespoke API, or a model server. In addition, you can instantly switch from one model to the next and compare their performance in your application. And guess what? The Inference API is free to use. As rate limiting is enforced, we don't recommend using the Inference API for production. Instead, you should consider Inference Endpoints. 
## Production with Inference Endpoints

Once you're happy with the performance of your ML model, it's time to deploy it for production. Unfortunately, when leaving the sandbox, everything becomes a concern: security, scaling, monitoring, etc. This is where a lot of ML projects stumble and sometimes fall. We built [Inference Endpoints](https://huggingface.co/inference-endpoints) to solve this problem.

In just a few clicks, Inference Endpoints let you deploy any hub model on secure and scalable infrastructure, hosted in your AWS or Azure region of choice. Additional settings include CPU and GPU hosting, built-in auto-scaling, and more. This makes finding the appropriate cost/performance ratio easy, with [pricing](https://huggingface.co/pricing#endpoints) starting as low as $0.06 per hour.

Inference Endpoints support three security levels:

* Public: the endpoint runs in a public Hugging Face subnet, and anyone on the Internet can access it without any authentication.
* Protected: the endpoint runs in a public Hugging Face subnet, and anyone on the Internet with the appropriate Hugging Face token can access it.
* Private: the endpoint runs in a private Hugging Face subnet and is not accessible on the Internet. It's only available through a private connection in your AWS or Azure account. This will satisfy the strictest compliance requirements.

<kbd>
<img src="assets/116_inference_update/endpoints.png">
</kbd>

To learn more about Inference Endpoints, please read this [tutorial](https://huggingface.co/blog/inference-endpoints) and the [documentation](https://huggingface.co/docs/inference-endpoints/).

## Spaces

Finally, Spaces is another production-ready option to deploy your model for inference on top of a simple UI framework (Gradio, for instance), and we also support [hardware upgrades](/docs/hub/spaces-gpus) like advanced Intel CPUs and NVIDIA GPUs. There's no better way to demo your models!

<kbd>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings.png">
</kbd>

To learn more about Spaces, please take a look at the [documentation](https://huggingface.co/docs/hub/spaces) and don't hesitate to browse posts or ask questions in our [forum](https://discuss.huggingface.co/c/spaces/24).

## Getting started

It couldn't be simpler. Just log in to the Hugging Face [hub](https://huggingface.co/) and browse our [models](https://huggingface.co/models). Once you've found one that you like, you can try the Inference Widget directly on the page. Clicking on the "Deploy" button, you'll get auto-generated code to deploy the model on the free Inference API for evaluation, and a direct link to deploy it to production with Inference Endpoints or Spaces.

Please give it a try and let us know what you think. We'd love to read your feedback on the Hugging Face [forum](https://discuss.huggingface.co/).

Thank you for reading!
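Once a model is deployed on an Inference Endpoint, querying it looks much like the free Inference API call shown earlier, except requests go to your endpoint's own URL. A minimal sketch, using a hypothetical endpoint URL and assuming your token has access to a Protected endpoint:

```python
import os
import requests

# Hypothetical endpoint URL: copy the real one from your endpoint's overview page
ENDPOINT_URL = "https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

payload = {"inputs": "This movie was absolutely wonderful!"}
response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```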
--- title: "Introducing DOI: the Digital Object Identifier to Datasets and Models" thumbnail: /blog/assets/107_launching_doi/thumbnail.jpeg authors: - user: sasha - user: Sylvestre - user: christopher guest: true - user: aleroy guest: true --- # Introducing DOI: the Digital Object Identifier to Datasets and Models Our mission at Hugging Face is to democratize good machine learning. That includes best practices that make ML models and datasets more reproducible, better documented, and easier to use and share. To solve this challenge, **we're excited to announce that you can now generate a DOI for your model or dataset directly from the Hub**! ![](assets/107_launching_doi/repo-settings.png) DOIs can be generated directly from your repo settings, and anyone will then be able to cite your work by clicking "Cite this model/dataset" on your model or dataset page 🔥. <kbd> <img alt="Generating DOI" src="assets/107_launching_doi/doi.gif"> </kbd> ## DOIs in a nutshell and why do they matter? DOIs (Digital Object Identifiers) are strings uniquely identifying a digital object, anything from articles to figures, including datasets and models. DOIs are tied to object metadata, including the object's URL, version, creation date, description, etc. They are a commonly accepted reference to digital resources across research and academic communities; they are analogous to a book's ISBN. DOIs make finding information about a model or dataset easier and sharing them with the world via a permanent link that will never expire or change. As such, datasets/models with DOIs are intended to persist perpetually and may only be deleted upon filing a request with our support. ## How are DOIs being assigned by Hugging Face? We have partnered with [DataCite](https://datacite.org) to allow registered Hub users to request a DOI for their model or dataset. Once they’ve filled out the necessary metadata, they receive a shiny new DOI 🌟! <kbd> <img alt="Cite DOI" src="assets/107_launching_doi/cite-modal.jpeg"> </kbd> If ever there’s a new version of a model or dataset, the DOI can easily be updated, and the previous version of the DOI gets outdated. This makes it easy to refer to a specific version of an object, even if it has changed. Have ideas for more improvements we can make? Many features, just like this, come directly from community feedback. Please drop us a note or tweet us at [@HuggingFace](https://twitter.com/huggingface) to share yours or open an issue on [huggingface/hub-docs](https://github.com/huggingface/hub-docs/issues) 🤗 Thanks DataCite team for this partnership! Thanks also Alix Leroy, Bram Vanroy, Daniel van Strien and Yoshitomo Matsubara for starting and fostering the discussion on [this `hub-docs` GitHub issue](https://github.com/huggingface/hub-docs/issues/25).
--- title: "Introducing the Open Arabic LLM Leaderboard" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_arabic.png authors: - user: alielfilali01 guest: true org: 2A2I - user: Hamza-Alobeidli guest: true org: tiiuae - user: rcojocaru guest: true org: tiiuae - user: Basma-b guest: true org: tiiuae - user: clefourrier --- # Introducing the Open Arabic LLM Leaderboard The Open Arabic LLM Leaderboard (OALL) is designed to address the growing need for specialized benchmarks in the Arabic language processing domain. As the field of Natural Language Processing (NLP) progresses, the focus often remains heavily skewed towards English, leaving a significant gap in resources for other languages. The OALL aims to balance this by providing a platform specifically for evaluating and comparing the performance of Arabic Large Language Models (LLMs), thus promoting research and development in Arabic NLP. <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/4.4.0/gradio.js"> </script> <gradio-app theme_mode="light" space="OALL/Open-Arabic-LLM-Leaderboard"></gradio-app> This initiative is particularly significant given that it directly serves over 380 million Arabic speakers worldwide. By enhancing the ability to accurately evaluate and improve Arabic LLMs, we hope the OALL will play a crucial role in developing models and applications that are finely tuned to the nuances of the Arabic language, culture and heritage. ## Benchmarks, Metrics & Technical setup ### Benchmark Datasets The Open Arabic LLM Leaderboard (OALL) utilizes an extensive and diverse collection of robust datasets to ensure comprehensive model evaluation. - [AlGhafa benchmark](https://aclanthology.org/2023.arabicnlp-1.21): created by the TII LLM team with the goal of evaluating models on a range of abilities including reading comprehension, sentiment analysis, and question answering. It was initially introduced with 11 native Arabic datasets and was later extended to include an additional 11 datasets that are translations of other widely adopted benchmarks within the English NLP community. - ACVA and AceGPT benchmarks: feature 58 datasets from the paper ["AceGPT, Localizing Large Language Models in Arabic"](https://arxiv.org/abs/2309.12053), and translated versions of the MMLU and EXAMS benchmarks to broaden the evaluation spectrum and cover a comprehensive range of linguistic tasks. These benchmarks are meticulously curated and feature various subsets that precisely capture the complexities and subtleties of the Arabic language. ### Evaluation Metrics Given the nature of the tasks, which include multiple-choice and yes/no questions, the leaderboard primarily uses normalized log likelihood accuracy for all tasks. This metric was chosen for its ability to provide a clear and fair measurement of model performance across different types of questions. ### Technical setup The technical setup for the Open Arabic LLM Leaderboard (OALL) uses: - front- and back-ends inspired by the [`demo-leaderboard`](https://huggingface.co/demo-leaderboard-backend), with the back-end running locally on the TII cluster - the `lighteval` library to run the evaluations. Significant contributions have been made to integrate the Arabic benchmarks discussed above into `lighteval`, to support out-of-the-box evaluations of Arabic models for the community (see [PR #44](https://github.com/huggingface/lighteval/pull/44) and [PR #95](https://github.com/huggingface/lighteval/pull/95) on GitHub for more details). 
## Future Directions We have many ideas about expanding the scope of the Open Arabic LLM Leaderboard. Plans are in place to introduce additional leaderboards under various categories, such as one for evaluating Arabic LLMs in Retrieval Augmented Generation (RAG) scenarios and another as a chatbot arena that calculates the ELO scores of different Arabic chatbots based on user preferences. Furthermore, we aim to extend our benchmarks to cover more comprehensive tasks by developing the OpenDolphin benchmark, which will include about 50 datasets and will be an open replication of the work done by Nagoudi et al. in the paper titled [“Dolphin: A Challenging and Diverse Benchmark for Arabic NLG”](https://arxiv.org/abs/2305.14989). For those interested in adding their benchmarks or collaborating on the OpenDolphin project, please contact us through the discussion tab or at this [email address](mailto:[email protected]). We’d love to welcome your contribution on these points! We encourage the community to contribute by submitting models, suggesting new benchmarks, or participating in discussions. We also encourage the community to make use of the top models of the current leaderboard to create new models through finetuning or any other techniques that might help your model to climb the ranks to the first place! You can be the next Arabic Open Models Hero! We hope the OALL will encourage technological advancements and highlight the unique linguistic and cultural characteristics inherent to the Arabic language, and that our technical setup and learnings from deploying a large-scale, language-specific leaderboard can be helpful for similar initiatives in other underrepresented languages. This focus will help bridge the gap in resources and research, traditionally dominated by English-centric models, enriching the global NLP landscape with more diverse and inclusive tools, which is crucial as AI technologies become increasingly integrated into everyday life around the world. ## Submit Your Model ! ### Model Submission Process To ensure a smooth evaluation process, participants must adhere to specific guidelines when submitting models to the Open Arabic LLM Leaderboard: 1. **Ensure Model Precision Alignment:** It is critical that the precision of the submitted models aligns with that of the original models. Discrepancies in precision may result in the model being evaluated but not properly displayed on the leaderboard. 2. **Pre-Submission Checks:** - **Load Model and Tokenizer:** Confirm that your model and tokenizer can be successfully loaded using AutoClasses. Use the following commands: ```python from transformers import AutoConfig, AutoModel, AutoTokenizer config = AutoConfig.from_pretrained("your model name", revision=revision) model = AutoModel.from_pretrained("your model name", revision=revision) tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision) ``` If you encounter errors, address them by following the error messages to ensure your model has been correctly uploaded. - **Model Visibility:** Ensure that your model is set to public visibility. Additionally, note that if your model requires `use_remote_code=True`, this feature is not currently supported but is under development. 3. **Convert Model Weights to Safetensors:** - Convert your model weights to safetensors, a safer and faster format for loading and using weights. This conversion also enables the inclusion of the model's parameter count in the `Extended Viewer`. 4. 
**License and Model Card:** - **Open License:** Verify that your model is openly licensed. This leaderboard promotes the accessibility of open LLMs to ensure widespread usability. - **Complete Model Card:** Populate your model card with detailed information. This data will be automatically extracted and displayed alongside your model on the leaderboard. ### In Case of Model Failure If your model appears in the 'FAILED' category, this indicates that execution was halted. Review the steps outlined above to troubleshoot and resolve any issues. Additionally, test the following [script](https://gist.github.com/alielfilali01/d486cfc962dca3ed4091b7c562a4377f) on your model locally to confirm its functionality before resubmitting. ## Acknowledgements We extend our gratitude to all contributors, partners, and sponsors, particularly the Technology Innovation Institute and Hugging Face for their substantial support in this project. TII has provided generously the essential computational resources, in line with their commitment to supporting community-driven projects and advancing open science within the Arabic NLP field, whereas Hugging Face has assisted with the integration and customization of their new evaluation framework and leaderboard template. We would also like to express our thanks to Upstage for their work on the Open Ko-LLM Leaderboard, which served as a valuable reference and source of inspiration for our own efforts. Their pioneering contributions have been instrumental in guiding our approach to developing a comprehensive and inclusive Arabic LLM leaderboard. ## Citations and References ``` @misc{OALL, author = {Elfilali, Ali and Alobeidli, Hamza and Fourrier, Clémentine and Boussaha, Basma El Amel and Cojocaru, Ruxandra and Habib, Nathan and Hacid, Hakim}, title = {Open Arabic LLM Leaderboard}, year = {2024}, publisher = {OALL}, howpublished = "\url{https://huggingface.co/spaces/OALL/Open-Arabic-LLM-Leaderboard}" } @inproceedings{almazrouei-etal-2023-alghafa, title = "{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models", author = "Almazrouei, Ebtesam and Cojocaru, Ruxandra and Baldo, Michele and Malartic, Quentin and Alobeidli, Hamza and Mazzotta, Daniele and Penedo, Guilherme and Campesan, Giulia and Farooq, Mugariya and Alhammadi, Maitha and Launay, Julien and Noune, Badreddine", editor = "Sawaf, Hassan and El-Beltagy, Samhaa and Zaghouani, Wajdi and Magdy, Walid and Abdelali, Ahmed and Tomeh, Nadi and Abu Farha, Ibrahim and Habash, Nizar and Khalifa, Salam and Keleg, Amr and Haddad, Hatem and Zitouni, Imed and Mrini, Khalil and Almatham, Rawan", booktitle = "Proceedings of ArabicNLP 2023", month = dec, year = "2023", address = "Singapore (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.arabicnlp-1.21", doi = "10.18653/v1/2023.arabicnlp-1.21", pages = "244--275", abstract = "Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. 
For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.", } @misc{huang2023acegpt, title={AceGPT, Localizing Large Language Models in Arabic}, author={Huang Huang and Fei Yu and Jianqing Zhu and Xuening Sun and Hao Cheng and Dingjie Song and Zhihong Chen and Abdulmohsen Alharthi and Bang An and Ziche Liu and Zhiyi Zhang and Junying Chen and Jianquan Li and Benyou Wang and Lian Zhang and Ruoyu Sun and Xiang Wan and Haizhou Li and Jinchao Xu}, year={2023}, eprint={2309.12053}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{lighteval, author = {Fourrier, Clémentine and Habib, Nathan and Wolf, Thomas and Tunstall, Lewis}, title = {LightEval: A lightweight framework for LLM evaluation}, year = {2023}, version = {0.3.0}, url = {https://github.com/huggingface/lighteval} } ```
---
title: Deploy LLMs with Hugging Face Inference Endpoints
thumbnail: /blog/assets/155_inference_endpoints_llm/thumbnail.jpg
authors:
- user: philschmid
---

# Deploy LLMs with Hugging Face Inference Endpoints

Open-source LLMs like [Falcon](https://huggingface.co/tiiuae/falcon-40b), [(Open-)LLaMA](https://huggingface.co/openlm-research/open_llama_13b), [X-Gen](https://huggingface.co/Salesforce/xgen-7b-8k-base), [StarCoder](https://huggingface.co/bigcode/starcoder) or [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base) have come a long way in recent months and can compete with closed-source models like ChatGPT or GPT4 for certain use cases. However, deploying these models in an efficient and optimized way still presents a challenge.

In this blog post, we will show you how to deploy open-source LLMs to [Hugging Face Inference Endpoints](https://ui.endpoints.huggingface.co/), our managed SaaS solution that makes it easy to deploy models. Additionally, we will teach you how to stream responses and test the performance of our endpoints. So let's get started!

1. [How to deploy Falcon 40B instruct](#1-how-to-deploy-falcon-40b-instruct)
2. [Test the LLM endpoint](#2-test-the-llm-endpoint)
3. [Stream responses in Javascript and Python](#3-stream-responses-in-javascript-and-python)

Before we start, let's refresh our knowledge about Inference Endpoints.

## What is Hugging Face Inference Endpoints

[Hugging Face Inference Endpoints](https://ui.endpoints.huggingface.co/) offers an easy and secure way to deploy Machine Learning models for use in production. Inference Endpoints empower developers and data scientists alike to create AI applications without managing infrastructure: simplifying the deployment process to a few clicks, including handling large volumes of requests with autoscaling, reducing infrastructure costs with scale-to-zero, and offering advanced security.

Here are some of the most important features for LLM deployment:

1. [Easy Deployment](https://huggingface.co/docs/inference-endpoints/index): Deploy models as production-ready APIs with just a few clicks, eliminating the need to handle infrastructure or MLOps.
2. [Cost Efficiency](https://huggingface.co/docs/inference-endpoints/autoscaling): Benefit from automatic scale-to-zero capability, reducing costs by scaling down the infrastructure when the endpoint is not in use, while paying based on the uptime of the endpoint, ensuring cost-effectiveness.
3. [Enterprise Security](https://huggingface.co/docs/inference-endpoints/security): Deploy models in secure offline endpoints accessible only through direct VPC connections, backed by SOC2 Type 2 certification, and offering BAA and GDPR data processing agreements for enhanced data security and compliance.
4. [LLM Optimization](https://huggingface.co/text-generation-inference): Optimized for LLMs, enabling high throughput with Paged Attention and low latency through custom transformers code and Flash Attention, powered by Text Generation Inference.
5. [Comprehensive Task Support](https://huggingface.co/docs/inference-endpoints/supported_tasks): Out-of-the-box support for 🤗 Transformers, Sentence-Transformers, and Diffusers tasks and models, and easy customization to enable advanced tasks like speaker diarization or any Machine Learning task and library.

You can get started with Inference Endpoints at: [https://ui.endpoints.huggingface.co/](https://ui.endpoints.huggingface.co/)

## 1. How to deploy Falcon 40B instruct
To get started, you need to be logged in with a User or Organization account with a payment method on file (you can add one **[here](https://huggingface.co/settings/billing)**), then access Inference Endpoints at **[https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)**.

Then, click on “New endpoint”. Select the repository, the cloud, and the region, adjust the instance and security settings, and deploy, in our case, `tiiuae/falcon-40b-instruct`.

![Select Hugging Face Repository](assets/155_inference_endpoints_llm/repository.png "Select Hugging Face Repository")

Inference Endpoints suggests an instance type based on the model size, which should be big enough to run the model. Here, that is `4x NVIDIA T4` GPUs. To get the best performance for the LLM, change the instance to `GPU [xlarge] · 1x Nvidia A100`.

*Note: If the instance type cannot be selected, you need to [contact us](mailto:[email protected]?subject=Quota%20increase%20HF%20Endpoints&body=Hello,%0D%0A%0D%0AI%20would%20like%20to%20request%20access/quota%20increase%20for%20{INSTANCE%20TYPE}%20for%20the%20following%20account%20{HF%20ACCOUNT}.) and request an instance quota.*

![Select Instance Type](assets/155_inference_endpoints_llm/instance-selection.png "Select Instance Type")

You can then deploy your model with a click on “Create Endpoint”. After 10 minutes, the Endpoint should be online and available to serve requests.

## 2. Test the LLM endpoint

The Endpoint overview provides access to the Inference Widget, which can be used to manually send requests. This allows you to quickly test your Endpoint with different inputs and share it with team members. Those Widgets do not support parameters, which in this case results in a “short” generation.

![Test Inference Widget](assets/155_inference_endpoints_llm/widget.png "Test Inference Widget")

The widget also generates a cURL command you can use. Just add your `hf_xxx` token and test.

```bash
curl https://j4xhm53fxl9ussm8.us-east-1.aws.endpoints.huggingface.cloud \
  -X POST \
  -d '{"inputs":"Once upon a time,"}' \
  -H "Authorization: Bearer <hf_token>" \
  -H "Content-Type: application/json"
```

You can use different parameters to control the generation, defining them in the `parameters` attribute of the payload. As of today, the following parameters are supported:

- `temperature`: Controls randomness in the model. Lower values will make the model more deterministic and higher values will make the model more random. Default value is 1.0.
- `max_new_tokens`: The maximum number of tokens to generate. Default value is 20, max value is 512.
- `repetition_penalty`: Controls the likelihood of repetition. Default is `null`.
- `seed`: The seed to use for random generation. Default is `null`.
- `stop`: A list of tokens to stop the generation. The generation will stop when one of the tokens is generated.
- `top_k`: The number of highest-probability vocabulary tokens to keep for top-k filtering. Default value is `null`, which disables top-k filtering.
- `top_p`: The cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling. Default is `null`.
- `do_sample`: Whether or not to use sampling; greedy decoding is used otherwise. Default value is `false`.
- `best_of`: Generate `best_of` sequences and return the one with the highest token logprobs. Default is `null`.
- `details`: Whether or not to return details about the generation. Default value is `false`.
- `return_full_text`: Whether or not to return the full text or only the generated part. Default value is `false`.
- `truncate`: Whether or not to truncate the input to the maximum length of the model. Default value is `true`.
- `typical_p`: The typical probability of a token. Default value is `null`.
- `watermark`: Whether to add a watermark to the generation. Default value is `false`.

## 3. Stream responses in Javascript and Python

Requesting and generating text with LLMs can be a time-consuming and iterative process. A great way to improve the user experience is streaming tokens to the user as they are generated. Below are two examples of how to stream tokens using Python and JavaScript. For Python, we are going to use the [client from Text Generation Inference](https://github.com/huggingface/text-generation-inference/tree/main/clients/python), and for JavaScript, the [HuggingFace.js library](https://huggingface.co/docs/huggingface.js/main/en/index).

### Streaming requests with Python

First, you need to install the `huggingface_hub` library:

```bash
pip install -U huggingface_hub
```

We can create an `InferenceClient`, providing our endpoint URL and credential alongside the hyperparameters we want to use:

```python
from huggingface_hub import InferenceClient

# HF Inference Endpoints parameters
endpoint_url = "https://YOUR_ENDPOINT.endpoints.huggingface.cloud"
hf_token = "hf_YOUR_TOKEN"

# Streaming Client
client = InferenceClient(endpoint_url, token=hf_token)

# generation parameters
gen_kwargs = dict(
    max_new_tokens=512,
    top_k=30,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.02,
    stop_sequences=["\nUser:", "<|endoftext|>", "</s>"],
)

# prompt
prompt = "What can you do in Nuremberg, Germany? Give me 3 Tips"

stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs)

# yield each generated token
for r in stream:
    # skip special tokens
    if r.token.special:
        continue
    # stop if we encounter a stop sequence
    if r.token.text in gen_kwargs["stop_sequences"]:
        break
    # yield the generated token
    print(r.token.text, end="")
    # yield r.token.text
```

Replace the `print` command with `yield` or with a function you want to stream the tokens to.

![Python Streaming](assets/155_inference_endpoints_llm/python-stream.gif "Python Streaming")

### Streaming requests with JavaScript

First, you need to install the `@huggingface/inference` library:

```bash
npm install @huggingface/inference
```

We can create a `HfInferenceEndpoint`, providing our endpoint URL and credential alongside the hyperparameters we want to use:

```js
import { HfInferenceEndpoint } from '@huggingface/inference'

const hf = new HfInferenceEndpoint('https://YOUR_ENDPOINT.endpoints.huggingface.cloud', 'hf_YOUR_TOKEN')

// generation parameters
const gen_kwargs = {
  max_new_tokens: 512,
  top_k: 30,
  top_p: 0.9,
  temperature: 0.2,
  repetition_penalty: 1.02,
  stop_sequences: ['\nUser:', '<|endoftext|>', '</s>'],
}

// prompt
const prompt = 'What can you do in Nuremberg, Germany? Give me 3 Tips'

const stream = hf.textGenerationStream({ inputs: prompt, parameters: gen_kwargs })
for await (const r of stream) {
  // skip special tokens
  if (r.token.special) {
    continue
  }
  // stop if we encounter a stop sequence
  if (gen_kwargs['stop_sequences'].includes(r.token.text)) {
    break
  }
  // yield the generated token
  process.stdout.write(r.token.text)
}
```

Replace the `process.stdout` call with `yield` or with a function you want to stream the tokens to.
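If you just want a single, non-streamed completion, a plain HTTP request works too. This is a minimal sketch using `requests`, passing the generation parameters described above in the `parameters` field of the payload; the endpoint URL and token are placeholders.

```python
import requests

# Placeholders: use your own endpoint URL and token
ENDPOINT_URL = "https://YOUR_ENDPOINT.endpoints.huggingface.cloud"
HF_TOKEN = "hf_YOUR_TOKEN"

headers = {
    "Authorization": f"Bearer {HF_TOKEN}",
    "Content-Type": "application/json",
}

# Generation parameters go in the `parameters` attribute of the payload
payload = {
    "inputs": "What can you do in Nuremberg, Germany? Give me 3 Tips",
    "parameters": {
        "max_new_tokens": 256,
        "temperature": 0.2,
        "top_k": 30,
        "top_p": 0.9,
        "repetition_penalty": 1.02,
        "stop": ["\nUser:", "<|endoftext|>", "</s>"],
    },
}

response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json()[0]["generated_text"])
```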
![Javascript Streaming](assets/155_inference_endpoints_llm/js-stream.gif "Javascript Streaming") ## Conclusion In this blog post, we showed you how to deploy open-source LLMs using Hugging Face Inference Endpoints, how to control the text generation with advanced parameters, and how to stream responses to a Python or JavaScript client to improve the user experience. By using Hugging Face Inference Endpoints you can deploy models as production-ready APIs with just a few clicks, reduce your costs with automatic scale to zero, and deploy models into secure offline endpoints backed by SOC2 Type 2 certification. --- Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
--- title: "Unlocking Longer Generation with Key-Value Cache Quantization" thumbnail: /blog/assets/kv_cache_quantization/thumbnail.png authors: - user: RaushanTurganbay --- # Unlocking Longer Generation with Key-Value Cache Quantization At Hugging Face, we are excited to share with you a new feature that's going to take your language models to the next level: *KV Cache Quantization*. TL;DR: KV Cache Quantization reduces memory usage for long-context text generation in LLMs with minimal impact on quality, offering customizable trade-offs between memory efficiency and generation speed. Have you ever tried generating a lengthy piece of text with your language model, only to hit a wall because of pesky memory limitations? As language models continue to grow in size and capabilities, supporting longer generations can start to really eat up memory. It's a common frustration, especially when you're dealing with limited resources. That's where kv cache quantization swoops in to save the day. So, what exactly is kv cache quantization? If you're not familiar with the term, don't sweat it! Let's break it down into two pieces: *kv cache* and *quantization*. Key-value cache, or kv cache, is needed to optimize the generation in autoregressive models, where the model predicts text token by token. This process can be slow since the model can generate only one token at a time, and each new prediction is dependent on the previous context. That means, to predict token number 1000 in the generation, you need information from the previous 999 tokens, which comes in the form of some matrix multiplications across the representations of those tokens. But to predict token number 1001, you also need the same information from the first 999 tokens, plus additional information from token number 1000. That is where key-value cache is used to optimize the sequential generation process by storing previous calculations to reuse in subsequent tokens, so they don't need to be computed again. More concretely, key-value cache acts as a memory bank for autoregressive generative models, where the model stores key-value pairs derived from self-attention layers for previously processed tokens. In the transformer architecture, self-attention layers calculate attention scores by multiplying queries with keys, producing weighted sums of value vectors as outputs. By storing this information, the model can avoid redundant computations and instead retrieve keys and values of previous tokens from the cache. For a visual explanation of this concept, take a look at how key-value cache functions in the image below. When calculating the attentions scores for the `K+1`th token we do not need to recompute all of the previous keys and values, but rather take it from cache and concatenate to the current vector. This usually results in faster and more efficient text generation. <figure class="image text-center m-0"> <img class="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/kv_cache_quantization/kv-cache-optimization.png" alt="kv cache visual"/> </figure> Moving on to the second term, quantization is just a fancy word for reducing the precision of numerical values to save memory. During quantization, each numerical value is rounded or truncated to fit within the reduced precision format, which may result in a loss of information. However, careful selection of quantization parameters and techniques can minimize this loss while still achieving satisfactory performance. 
There are different quantization methods, so if you're curious to learn more, be sure to check out our [previous blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes) for a deeper dive into the world of quantization.

Even though the kv cache speeds up autoregressive generation, it can become a memory bottleneck with long context lengths or high batch sizes. Let's estimate how much memory we will need to store the kv cache for an input of sequence length 10000 tokens for a 7B Llama-2 model. The memory required to store the kv cache of one token is roughly `2 * 2 * num_layers * num_key_value_heads * head_dim`, where the first `2` accounts for keys and values and the second `2` is the number of bytes we need (assuming the model is loaded in `float16`). So if we have a context of length 10000 tokens, we would need `2 * 2 * 32 * 32 * 128 * 10000 ≈ 5GB` of memory only to store the previous key-value cache, which is almost one third of the memory required to store the model parameters in half precision.

Therefore, by compressing the kv cache into a more compact form, we can save a lot of memory and run longer-context generation on consumer GPUs. In our experiments, we were able to significantly reduce the memory footprint without sacrificing too much quality by quantizing the kv cache into lower precision formats. With this new quantization feature, we can now support longer generations without running out of memory, which means you can expand your model's context length without worrying about hitting a memory constraint.

## Implementation Details

Key-value cache quantization in Transformers was largely inspired by the [KIVI: A Tuning-Free Asymmetric 2bit Quantization for kv Cache](https://arxiv.org/abs/2402.02750) paper. The paper introduced a 2-bit asymmetric quantization for large language models without quality degradation. KIVI quantizes the key cache per-channel and the value cache per-token, because they showed that for LLMs, keys have higher magnitudes of outliers in some channels, while values don't show such a pattern. Therefore, the relative error between quantized and original precision is much smaller when keys are quantized per-channel and the values per-token.

In the method we integrated in Transformers, the keys and values are both quantized per-token. The main bottleneck when quantizing per-token is the need to quantize and de-quantize keys and values every time a new token is added, that is, every generation step. That might cause a slowdown in generation. To overcome this issue, we decided to retain a fixed-size residual cache that stores keys and values in their original precision. When the residual cache reaches its maximum capacity, the stored keys and values are quantized and the cache content is discarded. This small trick also allows us to preserve accuracy, since some part of the most recent keys and values are always stored in their original precision. The main consideration is the memory-efficiency trade-off when setting the residual cache length. While the residual cache stores keys and values in their original precision, that may result in an overall memory usage increase. We found that using a residual length of 128 works well as a baseline.
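The back-of-the-envelope estimate above is easy to reproduce in a few lines. This is just the arithmetic from the formula, using the Llama-2-7B shapes mentioned in the text (32 layers, 32 key-value heads, head dimension 128):

```python
# KV cache memory estimate for Llama-2-7B in float16
num_layers = 32
num_key_value_heads = 32
head_dim = 128
bytes_per_value = 2      # float16
seq_len = 10_000

# 2 (keys + values) * 2 bytes * layers * heads * head_dim, per token
bytes_per_token = 2 * bytes_per_value * num_layers * num_key_value_heads * head_dim
total_gb = bytes_per_token * seq_len / 1024**3
print(f"{bytes_per_token} bytes per token, {total_gb:.2f} GiB for {seq_len} tokens")
# 524288 bytes per token, ~4.88 GiB for 10k tokens (the "≈ 5GB" from the text)
```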
So, given a key or value of shape `batch size, num of heads, num of tokens, head dim`, we group it into `num of groups, group size` and perform affine quantization as follows:

`X_Q = round(X / S) - Z`

where,

- X_Q is the quantized tensor
- S is the scale, calculated as `(maxX - minX) / (max_val_for_precision - min_val_for_precision)`
- Z is the zeropoint, calculated as `round(-minX / S)`

Currently, the kv quantization works on the [quanto](https://github.com/huggingface/quanto) backend with `int2` and `int4` precisions and the [`HQQ`](https://github.com/mobiusml/hqq/tree/master) backend with `int2`, `int4` and `int8` precisions. For more information about `quanto`, refer to the previous [blogpost](https://huggingface.co/blog/quanto-introduction). Although we don't currently support more quantization backends, we are open to community contributions that could help integrate them. Specifically, quantization methods that do not need calibration data and can dynamically calculate lower-bit tensors on the fly can be easily integrated. Additionally, you can indicate the most common quantization parameters in the config and thus have the freedom to tweak the quantization process, e.g., decide whether to perform per-channel or per-token quantization, depending on your use case.

## Comparing performance of fp16 and quantized cache

We know visuals speak louder than words, so we've prepared some comparison plots to give you a snapshot of how quantization stacks up against FP16 precision. These plots show you at a glance how the model's generation holds up in terms of quality when we tweak the precision settings for the kv cache.

We calculated the perplexity of the [Llama2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) model on the [`PG-19`](https://huggingface.co/datasets/emozilla/pg19-test) dataset with the following quantization parameters: `nbits=4, group_size=64, residual_length=128, per_token=True`.

We can see that `int4` cache performs almost the same as the original `fp16` precision for both backends, while the quality degrades when using `int2`. The script to reproduce the results is available [here](https://gist.github.com/zucchini-nlp/a7b19ec32f8c402761d48f3736eac808).

<figure class="image text-center m-0">
  <img class="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/kv_cache_quantization/perplexity.png" alt="Log Perplexity Comparison"/>
</figure>

The same conclusion holds when calculating performance on the [LongBench](https://huggingface.co/datasets/THUDM/LongBench) benchmark and comparing it to results from the KIVI paper. `Int4 quanto` precision is comparable to, and in some datasets even slightly outperforms, `fp16` in the table below (higher is better).

| Dataset              | KIVI fp16 | KIVI int2 | Transformers fp16 | Quanto int4 | Quanto int2 |
|----------------------|-----------|-----------|-------------------|-------------|-------------|
| TREC                 | 63.0      | 67.5      | 63.0              | 63.0        | 55.0        |
| SAMSum               | 41.12     | 42.18     | 41.12             | 41.3        | 14.04       |
| TriviaQA             | NA        | NA        | 84.28             | 84.76       | 63.64       |
| HotPotQA             | NA        | NA        | 30.08             | 30.04       | 17.3        |
| Passage_retrieval_en | NA        | NA        | 8.5               | 9.5         | 4.82        |

Now, let's talk about the trade-off between memory savings and speed. When we quantize the kv cache in models, we're making them less memory hungry, but sometimes that comes at a tiny cost to generation speed. While quantizing the cache to `int4` can offer roughly a 2.5x memory saving, the generation speed starts to decrease with higher batch sizes.
One has to decide whether using a quantized kv cache and potentially sacrificing a bit of speed is worth the trade-off for the significant gains in memory efficiency. It's all about finding the approach that best suits your specific use case and priorities. Below are the performance metrics for the kv cache in original precision and quantized format. The script to obtain the following figures is available [here](https://gist.github.com/zucchini-nlp/56ce57276d7b1ee666e957912d8d36ca).

<figure class="image text-center m-0">
  <img class="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/kv_cache_quantization/gpu_mem_max_new_tokens.png" alt="GPU memory consumption as max new tokens increase"/>
</figure>

<figure class="image text-center m-0">
  <img class="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/kv_cache_quantization/gpu_mem_bs.png" alt="GPU memory consumption as batch size increases"/>
</figure>

<figure class="image text-center m-0">
  <img class="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/kv_cache_quantization/latency.png" alt="Latency as batch size increases"/>
</figure>

Wondering what happens when we throw weight quantization into the mix? Sure, combining these techniques can further slim down your model's memory footprint, but there's a catch: it might slow things down even more. In fact, our experiments show that weight quantization together with kv cache quantization can lead to a threefold decrease in speed. But we're constantly tinkering away to find ways to make this combo work seamlessly. And while we don't currently have optimized kernels in the `quanto` library, we're open to community contributions that could help improve computational efficiency. Our goal is to ensure your model runs smoothly while maintaining low latency and high accuracy.

It's also worth noting that the initial processing of the input prompt (aka the pre-fill stage) still requires computing the entire key-value matrices in one go for the whole input, which may be another memory bottleneck for long contexts. This is the reason why the latency associated with generating the first token tends to be higher compared to subsequent tokens. There are other strategies to decrease the memory burden of the pre-fill stage by optimizing the attention computation, such as [Local Windowed Attention](https://arxiv.org/abs/2004.05150) or [Flash-Attention](https://arxiv.org/abs/2307.08691). If you are out of memory for the pre-fill stage, you can use `FlashAttention` in 🤗 Transformers along with kv cache quantization to decrease memory usage even more for long input prompts. See [the docs](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) for more information on that.

If you are interested in how many tokens we can fit in the context when pushing memory usage to its limits: the quantized kv cache can support up to 128k tokens with Flash Attention enabled on an 80GB A100. For the cache in half precision, the maximum capacity is 40k tokens.

## How to use quantized kv cache in 🤗 Transformers?

To use kv cache quantization in 🤗 Transformers, we have to install the external dependencies first by running `pip install quanto`. To activate quantization on the kv cache, we have to pass in `cache_implementation="quantized"` and indicate the quantization parameters in a cache config in dictionary format. And that's all we need to start using kv cache quantization.
Additionally, since quanto is device agnostic, you can quantize and run your model regardless if you are on CPU/GPU/MPS (Apple Silicon). Here you can find a short [Colab notebook](https://colab.research.google.com/drive/1YKAdOLoBPIore77xR5Xy0XLN8Etcjhui?usp=sharing) with usage examples. ```python >>> import torch >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="cuda:0") >>> inputs = tokenizer("I like rock music because", return_tensors="pt").to(model.device) >>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20, cache_implementation="quantized", cache_config={"backend": "quanto", "nbits": 4}) >>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0]) I like rock music because it's loud and energetic. It's a great way to express myself and rel >>> out = model.generate(**inputs, do_sample=False, max_new_tokens=20) >>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0]) I like rock music because it's loud and energetic. I like to listen to it when I'm feeling ``` ## Conclusion There are many more different methods to reduce memory usage by key-value cache, including [MultiQueryAttention](https://arxiv.org/abs/1911.02150), [GroupedQueryAttention](https://arxiv.org/abs/2305.13245) or recent [kv cache retrieval](https://arxiv.org/abs/2403.09054) methods. While some of these methods are bound to the model architecture choices, others can be applied post-training. Quantization is one of such post-training optimization techniques and we can draw the following conclusion from our short blogpost: 1. **Memory vs Speed trade-off**: By quantizing the kv cache into lower precision formats, memory usage is significantly reduced, allowing for longer text generations without encountering memory constraints. But users have to decide on whether giving up a tiny bit of generation speed suits their use-case. 2. **Maintained Accuracy**: Despite the reduction in precision, kv cache quantization in `int4` preserves model accuracy to a satisfactory extent, ensuring that generated text remains contextually relevant and coherent. 3. **Flexibility**: Users have the flexibility to choose between different precision formats based on their specific requirements, allowing for customization to suit varying use cases and priorities. 4. **Potential for Further Optimization**: While kv cache quantization provides significant benefits on its own, it can also be combined with other optimization techniques, such as weight quantization, to further enhance memory efficiency and computational speed. ## Acknowledgment Special thanks to [Younes](https://huggingface.co/ybelkada) and [Marc](https://huggingface.co/marcsun13) for their assistance and advice on quantization techniques. Their expertise greatly contributed to the development of this feature. Additionally, I would like to thank [Joao](https://huggingface.co/joaogante) for his invaluable support. ## Additional Resources 1. Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Braverman, V., Beidi Chen, & Hu, X. (2023). [KIVI : Plug-and-play 2bit KV Cache Quantization with Streaming Asymmetric Quantization](https://arxiv.org/abs/2402.02750). 2. Blogpost from Databricks on [LLM Inference Performance Engineering: Best Practices](https://www.databricks.com/blog/llm-inference-performance-engineering-best-practices) 3. 
Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, & Amir Gholami. (2024). [KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization](https://arxiv.org/abs/2401.18079). 4. T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer, (2022). [LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale](https://arxiv.org/abs/2208.07339). 5. A. Gholami, S. Kim, Z. Dong, Z. Yao, M. W. Mahoney, and K. Keutzer, (2021). A Survey of Quantization Methods for Efficient Neural Network Inference.
--- title: "3D Asset Generation: AI for Game Development #3" thumbnail: /blog/assets/124_ml-for-games/thumbnail3.png authors: - user: dylanebert --- # 3D Asset Generation: AI for Game Development #3 **Welcome to AI for Game Development!** In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for: 1. Art Style 2. Game Design 3. 3D Assets 4. 2D Assets 5. Story Want the quick video version? You can watch it [here](https://www.tiktok.com/@individualkex/video/7190364745495678254). Otherwise, if you want the technical details, keep reading! **Note:** This tutorial is intended for readers who are familiar with Unity development and C#. If you're new to these technologies, check out the [Unity for Beginners](https://www.tiktok.com/@individualkex/video/7086863567412038954) series before continuing. ## Day 3: 3D Assets In [Part 2](https://huggingface.co/blog/ml-for-games-2) of this tutorial series, we used **AI for Game Design**. More specifically, we used ChatGPT to brainstorm the design for our game. In this part, we'll talk about how you can use AI to generate 3D Assets. The short answer is: you can't. That's because text-to-3D isn't at the point it can be practically applied to game development, *yet*. However, that's changing very quickly. Keep reading to learn about [The Current State of Text-to-3D](#the-current-state-of-text-to-3d), [Why It Isn't Useful (yet)](#why-it-isnt-useful-yet), and [The Future of Text-to-3D](#the-future-of-text-to-3d). ### The Current State of Text-to-3D As discussed in [Part 1](https://huggingface.co/blog/ml-for-games-1), text-to-image tools such as Stable Diffusion are incredibly useful in the game development workflow. However, what about text-to-3D, or generating 3D models from text descriptions? There have been many very recent developments in this area: - [DreamFusion](https://dreamfusion3d.github.io/) uses 2D diffusion to generate 3D assets. - [CLIPMatrix](https://arxiv.org/abs/2109.12922) and [CLIP-Mesh-SMPLX](https://github.com/NasirKhalid24/CLIP-Mesh-SMPLX) generate textured meshes directly. - [CLIP-Forge](https://github.com/autodeskailab/clip-forge) uses language to generate voxel-based models. - [CLIP-NeRF](https://github.com/cassiePython/CLIPNeRF) drives NeRFs with text and images. - [Point-E](https://huggingface.co/spaces/openai/point-e) and [Pulsar+CLIP](https://colab.research.google.com/drive/1IvV3HGoNjRoyAKIX-aqSWa-t70PW3nPs) use language to generate 3D point clouds. - [Dream Textures](https://github.com/carson-katri/dream-textures/releases/tag/0.0.9) uses text-to-image to texture scenes in Blender automatically. Many of these approaches, excluding CLIPMatrix and CLIP-Mesh-SMPLX, are based on [view synthesis](https://en.wikipedia.org/wiki/View_synthesis), or generating novel views of a subject, as opposed to conventional 3D rendering. This is the idea behind [NeRFs](https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/) or Neural Radiance Fields, which use neural networks for view synthesis. <figure class="image text-center"> <img src="https://developer-blogs.nvidia.com/wp-content/uploads/2022/05/Excavator_NeRF.gif" alt="NeRF"> <figcaption>View synthesis using NeRFs.</figcaption> </figure> What does all of this mean if you're a game developer? Currently, nothing. 
This technology hasn't reached the point that it's useful in game development *yet*. Let's talk about why.

### Why It Isn't Useful (yet)

**Note:** This section is intended for readers who are familiar with conventional 3D rendering techniques, such as [meshes](https://en.wikipedia.org/wiki/Polygon_mesh), [UV mapping](https://en.wikipedia.org/wiki/UV_mapping) and [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry).

While view synthesis is impressive, the world of 3D runs on meshes, which are not the same as NeRFs. There is, however, [ongoing work on converting NeRFs to meshes](https://github.com/NVlabs/instant-ngp). In practice, this is reminiscent of [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry), where multiple photos of real-world objects are combined to author 3D assets.

<figure class="image text-center">
  <img src="https://github.com/NVlabs/instant-ngp/raw/master/docs/assets_readme/testbed.png" alt="NeRF-to-mesh">
  <figcaption>NVlabs instant-ngp, which supports NeRF-to-mesh conversion.</figcaption>
</figure>

The practical use of assets generated using the text-to-NeRF-to-mesh pipeline is limited in a similar way to assets produced using photogrammetry. That is, the resulting mesh is not immediately game-ready, and requires significant work and expertise to become a game-ready asset. In this sense, NeRF-to-mesh may be a useful tool as-is, but doesn't yet reach the transformative potential of text-to-3D.

Since NeRF-to-mesh, like photogrammetry, is currently most suited to creating ultra-high-fidelity assets with significant manual post-processing, it doesn't really make sense for creating a farming game in 5 days. Given that, I decided to just use cubes of different colors to represent the crops in the game.

<figure class="image text-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/124_ml-for-games/cubes.png" alt="Cubes representing crops">
</figure>

Things are changing rapidly in this area, though, and there may be a viable solution in the near future. Next, I'll talk about some of the directions text-to-3D may be going.

### The Future of Text-to-3D

While text-to-3D has come a long way recently, there is still a significant gap between where we are now and what could have an impact along the lines of text-to-image. I can only speculate on how this gap will be closed. There are two possible directions that are most apparent:

1. Improvements in NeRF-to-mesh and mesh generation. As we've seen, current generation models are similar to photogrammetry in that they require a lot of work to produce game-ready assets. While this is useful in some scenarios, like creating realistic high-fidelity assets, it's still more time-consuming than making low-poly assets from scratch, especially if you're like me and use an ultra-low-poly art style.

2. New rendering techniques that allow NeRFs to be rendered directly in-engine. While there have been no official announcements, one could speculate that [NVIDIA](https://www.nvidia.com/en-us/omniverse/) and [Google](https://dreamfusion3d.github.io/), among others, may be working on this.

Of course, only time will tell. If you want to keep up with advancements as they come, feel free to follow me on [Twitter](https://twitter.com/dylan_ebert_). If there are new developments I've missed, feel free to reach out!

Click [here](https://huggingface.co/blog/ml-for-games-4) to read Part 4, where we use **AI for 2D Assets**.
#### Attribution Thanks to Poli [@multimodalart](https://huggingface.co/multimodalart) for providing info on the latest open source text-to-3D.
hf_public_repos/blog/transformersjs-v3.md
--- title: "Transformers.js v3: WebGPU Support, New Models & Tasks, and More…" thumbnail: /blog/assets/transformersjs-v3/thumbnail.png authors: - user: xenova --- # Transformers.js v3: WebGPU Support, New Models & Tasks, and More… After more than a year of development, we're excited to announce the release of 🤗 Transformers.js v3! Highlights include: - [WebGPU support (up to 100x faster than WASM!)](#webgpu-support) - [New quantization formats (dtypes)](#new-quantization-formats-dtypes) - [A total of 120 supported architectures](#120-supported-architectures) - [25 new example projects and templates](#example-projects-and-templates) - [Over 1200 pre-converted models on the Hugging Face Hub](#over-1200-pre-converted-models) - [Node.js (ESM + CJS), Deno, and Bun compatibility](#nodejs-esm--cjs-deno-and-bun-compatibility) - [A new home on GitHub and NPM](#a-new-home-on-npm-and-github) ## Installation You can get started by installing Transformers.js v3 from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` Then, importing the library with ```js import { pipeline } from "@huggingface/transformers"; ``` or, via a CDN ```js import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]"; ``` For more information, check out the [documentation](https://hf.co/docs/transformers.js). ## WebGPU support WebGPU is a new web standard for accelerated graphics and compute. The [API](https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API) enables web developers to use the underlying system's GPU to carry out high-performance computations directly in the browser. WebGPU is the successor to [WebGL](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API) and provides significantly better performance, because it allows for more direct interaction with modern GPUs. Lastly, it supports general-purpose GPU computations, which makes it just perfect for machine learning! > [!WARNING] > As of October 2024, global WebGPU support is around 70% (according to [caniuse.com](https://caniuse.com/webgpu)), meaning some users may not be able to use the API. > > If the following demos do not work in your browser, you may need to enable it using a feature flag: > > - Firefox: with the `dom.webgpu.enabled` flag (see [here](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Experimental_features#:~:text=tested%20by%20Firefox.-,WebGPU%20API,-The%20WebGPU%20API)). > - Safari: with the `WebGPU` feature flag (see [here](https://webkit.org/blog/14879/webgpu-now-available-for-testing-in-safari-technology-preview/)). > - Older Chromium browsers (on Windows, macOS, Linux): with the `enable-unsafe-webgpu` flag (see [here](https://developer.chrome.com/docs/web-platform/webgpu/troubleshooting-tips)). ### Usage in Transformers.js v3 Thanks to our collaboration with [ONNX Runtime Web](https://www.npmjs.com/package/onnxruntime-web), enabling WebGPU acceleration is as simple as setting `device: 'webgpu'` when loading a model. Let's see some examples! 
**Example:** Compute text embeddings on WebGPU ([demo](https://v2.scrimba.com/s06a2smeej)) ```js import { pipeline } from "@huggingface/transformers"; // Create a feature-extraction pipeline const extractor = await pipeline( "feature-extraction", "mixedbread-ai/mxbai-embed-xsmall-v1", { device: "webgpu" }, ); // Compute embeddings const texts = ["Hello world!", "This is an example sentence."]; const embeddings = await extractor(texts, { pooling: "mean", normalize: true }); console.log(embeddings.tolist()); // [ // [-0.016986183822155, 0.03228696808218956, -0.0013630966423079371, ... ], // [0.09050482511520386, 0.07207386940717697, 0.05762749910354614, ... ], // ] ``` **Example:** Perform automatic speech recognition with OpenAI whisper on WebGPU ([demo](https://v2.scrimba.com/s0oi76h82g)) ```js import { pipeline } from "@huggingface/transformers"; // Create automatic speech recognition pipeline const transcriber = await pipeline( "automatic-speech-recognition", "onnx-community/whisper-tiny.en", { device: "webgpu" }, ); // Transcribe audio from a URL const url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav"; const output = await transcriber(url); console.log(output); // { text: ' And so my fellow Americans ask not what your country can do for you, ask what you can do for your country.' } ``` **Example:** Perform image classification with MobileNetV4 on WebGPU ([demo](https://v2.scrimba.com/s0fv2uab1t)) ```js import { pipeline } from "@huggingface/transformers"; // Create image classification pipeline const classifier = await pipeline( "image-classification", "onnx-community/mobilenetv4_conv_small.e2400_r224_in1k", { device: "webgpu" }, ); // Classify an image from a URL const url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg"; const output = await classifier(url); console.log(output); // [ // { label: 'tiger, Panthera tigris', score: 0.6149784922599792 }, // { label: 'tiger cat', score: 0.30281734466552734 }, // { label: 'tabby, tabby cat', score: 0.0019135422771796584 }, // { label: 'lynx, catamount', score: 0.0012161266058683395 }, // { label: 'Egyptian cat', score: 0.0011465961579233408 } // ] ``` ## New quantization formats (dtypes) Before Transformers.js v3, we used the `quantized` option to specify whether to use a quantized (q8) or full-precision (fp32) variant of the model by setting `quantized` to `true` or `false`, respectively. Now, we've added the ability to select from a much larger list with the `dtype` parameter. The list of available quantizations depends on the model, but some common ones are: full-precision (`"fp32"`), half-precision (`"fp16"`), 8-bit (`"q8"`, `"int8"`, `"uint8"`), and 4-bit (`"q4"`, `"bnb4"`, `"q4f16"`). 
<p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/dtypes-dark.jpg" style="max-width: 100%;"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/dtypes-light.jpg" style="max-width: 100%;"> <img alt="Available dtypes for mixedbread-ai/mxbai-embed-xsmall-v1" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/dtypes-dark.jpg" style="max-width: 100%;"> </picture> <a href="https://huggingface.co/mixedbread-ai/mxbai-embed-xsmall-v1/tree/main/onnx">(e.g., mixedbread-ai/mxbai-embed-xsmall-v1)</a> </p> ### Basic usage **Example:** Run Qwen2.5-0.5B-Instruct in 4-bit quantization ([demo](https://v2.scrimba.com/s0dlcpv0ci)) ```js import { pipeline } from "@huggingface/transformers"; // Create a text generation pipeline const generator = await pipeline( "text-generation", "onnx-community/Qwen2.5-0.5B-Instruct", { dtype: "q4", device: "webgpu" }, ); // Define the list of messages const messages = [ { role: "system", content: "You are a helpful assistant." }, { role: "user", content: "Tell me a funny joke." }, ]; // Generate a response const output = await generator(messages, { max_new_tokens: 128 }); console.log(output[0].generated_text.at(-1).content); ``` ### Per-module dtypes Some encoder-decoder models, like Whisper or Florence-2, are extremely sensitive to quantization settings: especially of the encoder. For this reason, we added the ability to select per-module dtypes, which can be done by providing a mapping from module name to dtype. **Example:** Run Florence-2 on WebGPU ([demo](https://v2.scrimba.com/s0pdm485fo)) ```js import { Florence2ForConditionalGeneration } from "@huggingface/transformers"; const model = await Florence2ForConditionalGeneration.from_pretrained( "onnx-community/Florence-2-base-ft", { dtype: { embed_tokens: "fp16", vision_encoder: "fp16", encoder_model: "q4", decoder_model_merged: "q4", }, device: "webgpu", }, ); ``` <p align="middle"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/florence-2-webgpu.gif" alt="Florence-2 running on WebGPU" /> </p> <details> <summary> See full code example </summary> ```js import { Florence2ForConditionalGeneration, AutoProcessor, AutoTokenizer, RawImage, } from "@huggingface/transformers"; // Load model, processor, and tokenizer const model_id = "onnx-community/Florence-2-base-ft"; const model = await Florence2ForConditionalGeneration.from_pretrained( model_id, { dtype: { embed_tokens: "fp16", vision_encoder: "fp16", encoder_model: "q4", decoder_model_merged: "q4", }, device: "webgpu", }, ); const processor = await AutoProcessor.from_pretrained(model_id); const tokenizer = await AutoTokenizer.from_pretrained(model_id); // Load image and prepare vision inputs const url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"; const image = await RawImage.fromURL(url); const vision_inputs = await processor(image); // Specify task and prepare text inputs const task = "<MORE_DETAILED_CAPTION>"; const prompts = processor.construct_prompts(task); const text_inputs = tokenizer(prompts); // Generate text const generated_ids = await model.generate({ ...text_inputs, ...vision_inputs, max_new_tokens: 100, }); // Decode generated text const 
generated_text = tokenizer.batch_decode(generated_ids, { skip_special_tokens: false, })[0]; // Post-process the generated text const result = processor.post_process_generation( generated_text, task, image.size, ); console.log(result); // { '<MORE_DETAILED_CAPTION>': 'A green car is parked in front of a tan building. The building has a brown door and two brown windows. The car is a two door and the door is closed. The green car has black tires.' } ``` </details> ## 120 supported architectures This release increases the total number of supported architectures to 120 (see [full list](https://huggingface.co/docs/transformers.js/index#models)), spanning a wide range of input modalities and tasks. Notable new names include: Phi-3, Gemma & Gemma 2, LLaVa, Moondream, Florence-2, MusicGen, Sapiens, Depth Pro, PyAnnote, and RT-DETR. <p align="middle"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/architectures.png" alt="Bubble diagram of new architectures in Transformers.js v3" /> </p> <details> <summary>List of new models</summary> 1. **[Cohere](https://huggingface.co/docs/transformers/main/model_doc/cohere)** (from Cohere) released with the paper [Command-R: Retrieval Augmented Generation at Production Scale](https://txt.cohere.com/command-r/) by Cohere. 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 1. **Depth Pro** (from Apple) released with the paper [Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/abs/2410.02073) by Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, Vladlen Koltun. 1. **Florence2** (from Microsoft) released with the paper [Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks](https://arxiv.org/abs/2311.06242) by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan. 1. **[Gemma](https://huggingface.co/docs/transformers/main/model_doc/gemma)** (from Google) released with the paper [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by the Gemma Google team. 1. **[Gemma2](https://huggingface.co/docs/transformers/main/model_doc/gemma2)** (from Google) released with the paper [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma Google team. 1. **[Granite](https://huggingface.co/docs/transformers/main/model_doc/granite)** (from IBM) released with the paper [Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler](https://arxiv.org/abs/2408.13359) by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox, Rameswar Panda. 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 1. 
**[Hiera](https://huggingface.co/docs/transformers/model_doc/hiera)** (from Meta) released with the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/pdf/2306.00989) by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer. 1. **JAIS** (from Core42) released with the paper [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/pdf/2308.16149) by Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin, Eric Xing. 1. **[LLaVa](https://huggingface.co/docs/transformers/model_doc/llava)** (from Microsoft Research & University of Wisconsin-Madison) released with the paper [Visual Instruction Tuning](https://arxiv.org/abs/2304.08485) by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee. 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 1. **[MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen)** (from Meta) released with the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. 1. **MobileCLIP** (from Apple) released with the paper [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/abs/2311.17049) by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel. 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 1. **MobileNetV3** (from Google Inc.) released with the paper [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244) by Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. 1. **MobileNetV4** (from Google Inc.) 
released with the paper [MobileNetV4 - Universal Models for the Mobile Ecosystem](https://arxiv.org/abs/2404.10518) by Danfeng Qin, Chas Leichner, Manolis Delakis, Marco Fornoni, Shixin Luo, Fan Yang, Weijun Wang, Colby Banbury, Chengxi Ye, Berkin Akin, Vaibhav Aggarwal, Tenghui Zhu, Daniele Moro, Andrew Howard. 1. **Moondream1** released in the repository [moondream](https://github.com/vikhyat/moondream) by vikhyat. 1. **OpenELM** (from Apple) released with the paper [OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework](https://arxiv.org/abs/2404.14619) by Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari. 1. **[Phi3](https://huggingface.co/docs/transformers/main/model_doc/phi3)** (from Microsoft) released with the paper [Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone](https://arxiv.org/abs/2404.14219) by Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, Xiren Zhou. 1. **[PVT](https://huggingface.co/docs/transformers/main/model_doc/pvt)** (from Nanjing University, The University of Hong Kong etc.) released with the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/pdf/2102.12122.pdf) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. 1. **PyAnnote** released in the repository [pyannote/pyannote-audio](https://github.com/pyannote/pyannote-audio) by Hervé Bredin. 1. **[RT-DETR](https://huggingface.co/docs/transformers/model_doc/rt_detr)** (from Baidu), released together with the paper [DETRs Beat YOLOs on Real-time Object Detection](https://arxiv.org/abs/2304.08069) by Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen. 1. **Sapiens** (from Meta AI) released with the paper [Sapiens: Foundation for Human Vision Models](https://arxiv.org/pdf/2408.12569) by Rawal Khirodkar, Timur Bagautdinov, Julieta Martinez, Su Zhaoen, Austin James, Peter Selednik, Stuart Anderson, Shunsuke Saito. 1. 
**[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. </details> ## Example projects and templates As part of the release, we've published 25 new example projects and templates, primarily focused on showcasing WebGPU support! This includes demos like [Phi-3.5 WebGPU](https://github.com/huggingface/transformers.js-examples/tree/main/phi-3.5-webgpu) and [Whisper WebGPU](https://github.com/xenova/whisper-web/tree/experimental-webgpu), as shown below. > [!NOTE] > We're in the process of moving all our example projects and demos to https://github.com/huggingface/transformers.js-examples, so stay tuned for updates on this! | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/phi-3.5-webgpu.gif" style="max-height: 500px;" alt="Phi-3.5 running on WebGPU" /> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/whisper-turbo-webgpu.gif" style="max-height: 500px;" alt="Whisper Turbo running on WebGPU" /> | | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | ## Over 1200 pre-converted models As of today's release, the community has converted over 1200 models to be compatible with Transformers.js! You can find the full list of available models [here](https://hf.co/models?library=transformers.js). If you'd like to convert your own models or fine-tunes, you can use our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) as follows: ```sh python -m scripts.convert --quantize --model_id <model_name_or_path> ``` After uploading the generated files to the Hugging Face Hub, remember to add the `transformers.js` tag so others can easily find and use your model! 
<p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/models-dark.jpg" style="max-width: 100%;"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/models-light.jpg" style="max-width: 100%;"> <img alt="Available Transformers.js models" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/transformersjs-v3/models-dark.jpg" style="max-width: 100%;"> </picture> </p> ## Node.js (ESM + CJS), Deno, and Bun compatibility Transformers.js v3 is now compatible with the three most popular server-side JavaScript runtimes: | Runtime | Description | Examples | | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Node.js](https://nodejs.org/) | A widely-used JavaScript runtime built on Chrome's V8. It has a large ecosystem and supports a wide range of libraries and frameworks. | [ESM Example](https://github.com/huggingface/transformers.js-examples/tree/main/node-esm) / [CJS Example](https://github.com/huggingface/transformers.js-examples/tree/main/node-cjs) | | [Deno](https://deno.com/) | A modern runtime for JavaScript and TypeScript that is secure by default. It uses ES modules and even features experimental WebGPU support. | [Deno Example](https://github.com/huggingface/transformers.js-examples/tree/main/deno-embed) | | [Bun](https://bun.sh/) | A fast JavaScript runtime optimized for performance. It features a built-in bundler, transpiler, and package manager. | [Bun Example](https://github.com/huggingface/transformers.js-examples/tree/main/bun) | ## A new home on NPM and GitHub Finally, we're delighted to announce that Transformers.js will now be published under the official Hugging Face organization on NPM as [`@huggingface/transformers`](https://www.npmjs.com/package/@huggingface/transformers) (instead of [`@xenova/transformers`](https://www.npmjs.com/package/@xenova/transformers), which was used for v1 and v2). We've also moved the repository to the official Hugging Face organization on GitHub (https://github.com/huggingface/transformers.js), which will be our new home — come say hi! We look forward to hearing your feedback, responding to your issues, and reviewing your PRs! This is a significant milestone and we're extremely grateful to the community for helping us achieve this long-term goal! None of this would be possible without all of you… thank you! 🤗
hf_public_repos/blog/vlms.md
---
title: "Vision Language Models Explained"
thumbnail: /blog/assets/vlms_explained/thumbnail.png
authors:
- user: merve
- user: edbeeching
---

# Vision Language Models Explained

Vision language models are models that can learn simultaneously from images and texts to tackle many tasks, from visual question answering to image captioning. In this post, we go through the main building blocks of vision language models: we give an overview, explain how they work, show how to find the right model, how to use them for inference, and how to easily fine-tune them with the new version of [trl](https://github.com/huggingface/trl) released today!

## What is a Vision Language Model?

Vision language models are broadly defined as multimodal models that can learn from images and text. They are a type of generative model that takes image and text inputs and generates text outputs. Large vision language models have good zero-shot capabilities, generalize well, and can work with many types of images, including documents, web pages, and more. The use cases include chatting about images, image recognition via instructions, visual question answering, document understanding, image captioning, and others.

Some vision language models can also capture spatial properties in an image. These models can output bounding boxes or segmentation masks when prompted to detect or segment a particular subject, or they can localize different entities or answer questions about their relative or absolute positions. There’s a lot of diversity within the existing set of large vision language models, the data they were trained on, how they encode images, and, thus, their capabilities.

<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/vlm/visual.jpg" alt="VLM Capabilities" style="width: 90%; height: auto;"><br>
</p>

## Overview of Open-source Vision Language Models

There are many open vision language models on the Hugging Face Hub. Some of the most prominent ones are shown in the table below.

- There are base models, and models fine-tuned for chat that can be used in conversational mode.
- Some of these models have a feature called “grounding” which reduces model hallucinations.
- All models are trained on English unless stated otherwise.
| Model | Permissive License | Model Size | Image Resolution | Additional Capabilities |
|------------------------|--------------------|------------|------------------|---------------------------------------|
| [LLaVA 1.6 (Hermes 34B)](https://huggingface.co/llava-hf/llava-v1.6-34b-hf) | ✅ | 34B | 672x672 | |
| [deepseek-vl-7b-base](https://huggingface.co/deepseek-ai/deepseek-vl-7b-base) | ✅ | 7B | 384x384 | |
| [DeepSeek-VL-Chat](https://huggingface.co/deepseek-ai/deepseek-vl-7b-chat) | ✅ | 7B | 384x384 | Chat |
| [moondream2](https://huggingface.co/vikhyatk/moondream2) | ✅ | ~2B | 378x378 | |
| [CogVLM-base](https://huggingface.co/THUDM/cogvlm-base-490-hf) | ✅ | 17B | 490x490 | |
| [CogVLM-Chat](https://huggingface.co/THUDM/cogvlm-chat-hf) | ✅ | 17B | 490x490 | Grounding, chat |
| [Fuyu-8B](https://huggingface.co/adept/fuyu-8b) | ❌ | 8B | 300x300 | Text detection within image |
| [KOSMOS-2](https://huggingface.co/microsoft/kosmos-2-patch14-224) | ✅ | ~2B | 224x224 | Grounding, zero-shot object detection |
| [Qwen-VL](https://huggingface.co/Qwen/Qwen-VL) | ✅ | 4B | 448x448 | Zero-shot object detection |
| [Qwen-VL-Chat](https://huggingface.co/Qwen/Qwen-VL-Chat) | ✅ | 4B | 448x448 | Chat |
| [Yi-VL-34B](https://huggingface.co/01-ai/Yi-VL-34B) | ✅ | 34B | 448x448 | Bilingual (English, Chinese) |

## Finding the right Vision Language Model

There are many ways to select the most appropriate model for your use case.

[Vision Arena](https://huggingface.co/spaces/WildVision/vision-arena) is a leaderboard solely based on anonymous voting of model outputs and is updated continuously. In this arena, users enter an image and a prompt, outputs from two different models are sampled anonymously, and the user can then pick their preferred output. This way, the leaderboard is constructed solely based on human preferences.

<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/vlm/arena.png" alt="Vision Arena" style="width: 90%; height: auto;"><br>
<em>Vision Arena</em>
</p>

The [Open VLM Leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) is another leaderboard, where various vision language models are ranked according to a broad set of metrics and their average scores. You can also filter models according to model sizes, proprietary or open-source licenses, and rank them by different metrics.

<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/vlm/leaderboard.png" alt="Open VLM Leaderboard" style="width: 90%; height: auto;"><br>
<em>Open VLM Leaderboard</em>
</p>

[VLMEvalKit](https://github.com/open-compass/VLMEvalKit) is a toolkit to run benchmarks on vision language models, and it powers the Open VLM Leaderboard. Another evaluation suite is [LMMS-Eval](https://github.com/EvolvingLMMs-Lab/lmms-eval), which provides a standard command line interface to evaluate Hugging Face models of your choice with datasets hosted on the Hugging Face Hub, like below:

```bash
accelerate launch --num_processes=8 -m lmms_eval --model llava --model_args pretrained="liuhaotian/llava-v1.5-7b" --tasks mme,mmbench_en --batch_size 1 --log_samples --log_samples_suffix llava_v1.5_mme_mmbenchen --output_path ./logs/
```

Both the Vision Arena and the Open VLM Leaderboard are limited to the models that are submitted to them, and require updates to add new models.
If you want to find additional models, you can browse the Hub for [models](https://huggingface.co/models?pipeline_tag=image-text-to-text&sort=trending) under the task `image-text-to-text`. There are different benchmarks to evaluate vision language models that you may come across in the leaderboards. We will go through a few of them. ### MMMU [A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI (MMMU)](https://huggingface.co/datasets/MMMU/MMMU) is the most comprehensive benchmark to evaluate vision language models. It contains 11.5K multimodal challenges that require college-level subject knowledge and reasoning across different disciplines such as arts and engineering. ### MMBench [MMBench](https://huggingface.co/datasets/lmms-lab/MMBench) is an evaluation benchmark that consists of 3000 single-choice questions over 20 different skills, including OCR, object localization and more. The paper also introduces an evaluation strategy called CircularEval, where the answer choices of a question are shuffled in different combinations, and the model is expected to give the right answer at every turn. There are other more specific benchmarks across different domains, including MathVista (visual mathematical reasoning), AI2D (diagram understanding), ScienceQA (Science Question Answering) and OCRBench (document understanding). ## Technical Details There are various ways to pretrain a vision language model. The main trick is to unify the image and text representation and feed it to a text decoder for generation. The most common and prominent models often consist of an image encoder, an embedding projector to align image and text representations (often a dense neural network) and a text decoder stacked in this order. As for the training parts, different models have been following different approaches. For instance, LLaVA consists of a CLIP image encoder, a multimodal projector and a Vicuna text decoder. The authors fed a dataset of images and captions to GPT-4 and generated questions related to the caption and the image. The authors have frozen the image encoder and text decoder and have only trained the multimodal projector to align the image and text features by feeding the model images and generated questions and comparing the model output to the ground truth captions. After the projector pretraining, they keep the image encoder frozen, unfreeze the text decoder, and train the projector with the decoder. This way of pre-training and fine-tuning is the most common way of training vision language models. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/vlm/vlm-structure.png" alt="VLM Structure" style="width: 90%; height: auto;"><br> <em>Structure of a Typical Vision Language Model</em> </p> <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/vlm/proj.jpg" alt="VLM Structure" style="width: 90%; height: auto;"><br> <em>Projection and text embeddings are concatenated</em> </p> Another example is KOSMOS-2, where the authors chose to fully train the model end-to-end, which is computationally expensive compared to LLaVA-like pre-training. The authors later did language-only instruction fine-tuning to align the model. Fuyu-8B, as another example, doesn’t even have an image encoder. Instead, image patches are directly fed to a projection layer and then the sequence goes through an auto-regressive decoder. 
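To make the typical structure described above more concrete, here is a minimal, schematic PyTorch sketch of how image features, a projector, and text embeddings come together before the decoder. The dimensions, the single linear projector, and the random tensors standing in for the image encoder and the decoder's embedding table are illustrative assumptions, not the implementation of any particular model.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not taken from any real checkpoint)
image_feature_dim = 1024  # output width of the image encoder
text_hidden_dim = 4096    # hidden size of the text decoder
num_image_tokens = 576    # patch embeddings produced for one image
num_text_tokens = 32      # tokens in the text prompt

# 1. The image encoder (e.g. a CLIP vision tower) produces patch embeddings.
#    A random tensor stands in for its output here.
image_features = torch.randn(1, num_image_tokens, image_feature_dim)

# 2. The multimodal projector maps image features into the text embedding space.
#    Many models use a single linear layer or a small MLP for this step.
projector = nn.Linear(image_feature_dim, text_hidden_dim)
projected_image_embeds = projector(image_features)

# 3. The text decoder's embedding table turns prompt tokens into embeddings
#    (again represented by a random tensor here).
text_embeds = torch.randn(1, num_text_tokens, text_hidden_dim)

# 4. Image and text embeddings are combined into a single sequence,
#    which the text decoder then consumes autoregressively.
decoder_inputs = torch.cat([projected_image_embeds, text_embeds], dim=1)
print(decoder_inputs.shape)  # torch.Size([1, 608, 4096])
```

In real models such as LLaVA, the projected image embeddings are spliced in at the position of the `<image>` placeholder in the prompt rather than simply prepended, but the idea of aligning and concatenating the two modalities is the same.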
Most of the time, you don’t need to pre-train a vision language model, as you can either use one of the existing ones or fine-tune them on your own use case. We will go through how to use these models with transformers and how to fine-tune them using `SFTTrainer`.

## Using Vision Language Models with transformers

You can infer with Llava using the `LlavaNext` model as shown below.

Let’s initialize the model and the processor first.

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")

model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True
)
model.to(device)
```

We now pass the image and the text prompt to the processor, and then pass the processed inputs to `generate`. Note that each model uses its own prompt template, so be careful to use the right one to avoid performance degradation.

```python
from PIL import Image
import requests

url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(prompt, image, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=100)
```

Call `decode` to decode the output tokens.

```python
print(processor.decode(output[0], skip_special_tokens=True))
```

## Fine-tuning Vision Language Models with TRL

We are excited to announce that [TRL](https://github.com/huggingface/trl)’s `SFTTrainer` now includes experimental support for Vision Language Models! We provide an example here of how to perform SFT on a [Llava 1.5 VLM](https://huggingface.co/llava-hf/llava-1.5-7b-hf) using the [llava-instruct](https://huggingface.co/datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset which contains 260k image-conversation pairs. The dataset contains user-assistant interactions formatted as a sequence of messages, where each conversation is paired with an image that the user asks questions about.

To use the experimental VLM training support, you must install the latest version of TRL, with `pip install -U trl`. The full example script can be found [here](https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py).

```python
from transformers import TrainingArguments
from trl.commands.cli_utils import SftScriptArguments, TrlParser

parser = TrlParser((SftScriptArguments, TrainingArguments))
args, training_args = parser.parse_args_and_config()
```

Initialize the chat template for instruction fine-tuning.

```python
LLAVA_CHAT_TEMPLATE = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. {% for message in messages %}{% if message['role'] == 'user' %}USER: {% else %}ASSISTANT: {% endif %}{% for item in message['content'] %}{% if item['type'] == 'text' %}{{ item['text'] }}{% elif item['type'] == 'image' %}<image>{% endif %}{% endfor %}{% if message['role'] == 'user' %} {% else %}{{eos_token}}{% endif %}{% endfor %}"""
```

We will now initialize our model and tokenizer.
```python from transformers import AutoTokenizer, AutoProcessor, TrainingArguments, LlavaForConditionalGeneration import torch model_id = "llava-hf/llava-1.5-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.chat_template = LLAVA_CHAT_TEMPLATE processor = AutoProcessor.from_pretrained(model_id) processor.tokenizer = tokenizer model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16) ``` Let’s create a data collator to combine text and image pairs. ```python class LLavaDataCollator: def __init__(self, processor): self.processor = processor def __call__(self, examples): texts = [] images = [] for example in examples: messages = example["messages"] text = self.processor.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=False ) texts.append(text) images.append(example["images"][0]) batch = self.processor(texts, images, return_tensors="pt", padding=True) labels = batch["input_ids"].clone() if self.processor.tokenizer.pad_token_id is not None: labels[labels == self.processor.tokenizer.pad_token_id] = -100 batch["labels"] = labels return batch data_collator = LLavaDataCollator(processor) ``` Load our dataset. ```python from datasets import load_dataset raw_datasets = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft") train_dataset = raw_datasets["train"] eval_dataset = raw_datasets["test"] ``` Initialize the SFTTrainer, passing in the model, the dataset splits, PEFT configuration and data collator and call `train()`. To push our final checkpoint to the Hub, call `push_to_hub()`. ```python from trl import SFTTrainer trainer = SFTTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, dataset_text_field="text", # need a dummy field tokenizer=tokenizer, data_collator=data_collator, dataset_kwargs={"skip_prepare_dataset": True}, ) trainer.train() ``` Save the model and push to the Hugging Face Hub. ```python trainer.save_model(training_args.output_dir) trainer.push_to_hub() ``` You can find the trained model [here](https://huggingface.co/HuggingFaceH4/vsft-llava-1.5-7b-hf-trl). You can try the model we just trained directly in our VLM playground below ⬇️ <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.23.0/gradio.js"></script> <gradio-app theme_mode="light" src="https://HuggingFaceH4-vlm-playground.hf.space"></gradio-app> **Acknowledgements** We would like to thank Pedro Cuenca, Lewis Tunstall, Kashif Rasul and Omar Sanseviero for their reviews and suggestions on this blog post.
hf_public_repos/blog/bridgetower.md
---
title: "Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2"
thumbnail: /blog/assets/bridgetower/thumbnail.png
authors:
- user: regisss
- user: anahita-b
  guest: true
---

# Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2

*Update (29/08/2023): A benchmark on H100 was added to this blog post. Also, all performance numbers have been updated with newer versions of software.*

[Optimum Habana v1.7](https://github.com/huggingface/optimum-habana/tree/main) on Habana Gaudi2 achieves **x2.5 speedups compared to A100 and x1.4 compared to H100** when fine-tuning BridgeTower, a state-of-the-art vision-language model. This performance improvement relies on hardware-accelerated data loading to make the most of your devices.

*These techniques apply to any other workloads constrained by data loading, which is frequently the case for many types of vision models.* This post will take you through the process and benchmark we used to compare BridgeTower fine-tuning on Habana Gaudi2, Nvidia H100 and Nvidia A100 80GB. It also demonstrates how easy it is to take advantage of these features in transformers-based models.

## BridgeTower

In the recent past, [Vision-Language (VL) models](https://huggingface.co/blog/vision_language_pretraining) have gained tremendous importance and shown dominance in a variety of VL tasks. Most common approaches leverage uni-modal encoders to extract representations from their respective modalities. Then those representations are either fused together, or fed into a cross-modal encoder.

To efficiently handle some of the performance limitations and restrictions in VL representation learning, [BridgeTower](https://huggingface.co/papers/2206.08657) introduces multiple _bridge layers_ that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations at different semantic levels in the cross-modal encoder.

Pre-trained with only 4M images (see the details [below](#benchmark)), BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, BridgeTower achieves an accuracy of 78.73% on the VQAv2 test-std set, outperforming the previous state-of-the-art model (METER) by 1.09% using the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.

## Hardware

[NVIDIA H100 Tensor Core GPU](https://www.nvidia.com/en-us/data-center/h100/) is the latest and fastest generation of Nvidia GPUs. It includes a dedicated Transformer Engine that enables fp8 mixed-precision runs. One device has 80GB of memory.

[Nvidia A100 Tensor Core GPU](https://www.nvidia.com/en-us/data-center/a100/) includes the 3rd generation of the [Tensor Core technology](https://www.nvidia.com/en-us/data-center/tensor-cores/). This is still the fastest GPU that you will find at most cloud providers. We use the 80GB-memory variant here, which also offers faster memory bandwidth than the 40GB one.

[Habana Gaudi2](https://habana.ai/products/gaudi2/) is the second-generation AI hardware accelerator designed by Habana Labs. A single server contains 8 accelerator devices called HPUs with 96GB of memory each.
Check out [our previous blog post](https://huggingface.co/blog/habana-gaudi-2-bloom#habana-gaudi2) for a more in-depth introduction and a guide showing how to access it through the [Intel Developer Cloud](https://www.intel.com/content/www/us/en/secure/developer/devcloud/cloud-launchpad.html). Unlike many AI accelerators in the market, advanced features are very easy to apply to make the most of Gaudi2 with [Optimum Habana](https://huggingface.co/docs/optimum/habana/index), which enables users to port Transformers-compatible scripts to Gaudi with just a 2-line change. ## Benchmark To benchmark training, we are going to fine-tune a [BridgeTower Large checkpoint](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-itc) consisting of 866M parameters. This checkpoint was pretrained on English language using masked language modeling, image-text matching and image-text contrastive loss on [Conceptual Captions](https://huggingface.co/datasets/conceptual_captions), [SBU Captions](https://huggingface.co/datasets/sbu_captions), [MSCOCO Captions](https://huggingface.co/datasets/HuggingFaceM4/COCO) and [Visual Genome](https://huggingface.co/datasets/visual_genome). We will further fine-tune this checkpoint on the [New Yorker Caption Contest dataset](https://huggingface.co/datasets/jmhessel/newyorker_caption_contest) which consists of cartoons from The New Yorker and the most voted captions. Hyperparameters are the same for all accelerators. We used a batch size of 48 samples for each device. You can check hyperparameters out [here](https://huggingface.co/regisss/bridgetower-newyorker-gaudi2-8x#training-hyperparameters) for Gaudi2 and [there](https://huggingface.co/regisss/bridgetower-newyorker-a100-8x#training-hyperparameters) for A100. **When dealing with datasets involving images, data loading is frequently a bottleneck** because many costly operations are computed on CPU (image decoding, image augmentations) and then full images are sent to the training devices. Ideally, *we would like to send only raw bytes to devices and then perform decoding and various image transformations on device*. But let's see first how to *easily* allocate more resources to data loading for accelerating your runs. ### Making use of `dataloader_num_workers` When image loading is done on CPU, a quick way to speed it up would be to allocate more subprocesses for data loading. This is very easy to do with Transformers' `TrainingArguments` (or its Optimum Habana counterpart `GaudiTrainingArguments`): you can use the `dataloader_num_workers=N` argument to set the number of subprocesses (`N`) allocated on CPU for data loading. The default is 0, which means that data is loaded in the main process. This may not be optimal as the main process has many things to manage. We can set it to 1 to have one fully dedicated subprocess for data loading. When several subprocesses are allocated, each one of them will be responsible for preparing a batch. This means that RAM consumption will increase with the number of workers. One recommendation would be to set it to the number of CPU cores, but those cores may not be fully free so you will have to try it out to find the best configuration. Let's run the three following experiments: - a mixed-precision (*bfloat16*/*float32*) run distributed across 8 devices where data loading is performed by the same process as everything else (i.e. `dataloader_num_workers=0`) - a mixed-precision (*bfloat16*/*float32*) run distributed across 8 devices with 1 dedicated subprocess for data loading (i.e. 
`dataloader_num_workers=1`) - same run with `dataloader_num_workers=2` Here are the throughputs we got on Gaudi2, H100 and A100: | Device | `dataloader_num_workers=0` | `dataloader_num_workers=1` | `dataloader_num_workers=2` | |:----------:|:--------------------------:|:--------------------------:|:--------------------------:| | Gaudi2 HPU | 601.5 samples/s | 747.4 samples/s | 768.7 samples/s | | H100 GPU | 336.5 samples/s | 580.1 samples/s | 602.1 samples/s | | A100 GPU | 227.5 samples/s | 339.7 samples/s | 345.4 samples/s | We first see that **Gaudi2 is x1.28 faster than H100** with `dataloader_num_workers=2`, x1.29 faster with `dataloader_num_workers=1` and x1.79 faster with `dataloader_num_workers=0`. Gaudi2 is also much faster than the previous generation since it is **x2.23 faster than A100** with `dataloader_num_workers=2`, x2.20 faster with `dataloader_num_workers=1` and x2.64 faster with `dataloader_num_workers=0`, which is even better than [the speedups we previously reported](https://huggingface.co/blog/habana-gaudi-2-benchmark)! Second, we see that **allocating more resources for data loading can lead to easy speedups**: x1.28 on Gaudi2, x1.79 on H100 and x1.52 on A100. We also ran experiments with several dedicated subprocesses for data loading but performance was not better than with `dataloader_num_workers=2` for all accelerators. Thus, **using `dataloader_num_workers>0` is usually a good first way of accelerating your runs involving images!** Tensorboard logs can be visualized [here](https://huggingface.co/regisss/bridgetower-newyorker-gaudi2-8x/tensorboard) for Gaudi2 and [there](https://huggingface.co/regisss/bridgetower-newyorker-a100-8x/tensorboard) for A100. <!-- ### Optimum Habana's fast DDP Before delving into how to perform hardware-accelerated data loading, let's look at another very easy way of speeding up your distributed runs on Gaudi. The new release of Optimum Habana, version 1.6.0, introduced a new feature that allows users to choose the distribution strategy to use: - `distribution_strategy="ddp"` to use PyTorch [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) (DDP) - `distribution_strategy="fast_ddp"` to use a lighter and usually faster implementation Optimum Habana's fast DDP does not split parameter gradients into buckets as [DDP does](https://pytorch.org/docs/stable/notes/ddp.html#internal-design). It also uses [HPU graphs](https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_HPU_Graphs.html?highlight=hpu%20graphs) to collect gradients in all processes and then update them (after the [all_reduce](https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_reduce) operation is performed) with minimal host overhead. You can check this implementation [here](https://github.com/huggingface/optimum-habana/blob/main/optimum/habana/distributed/fast_ddp.py). Simply using `distribution_strategy="fast_ddp"` (and keeping `dataloader_num_workers=1`) on Gaudi2 gives us 705.9 samples/s. **This is x1.10 faster than with DDP and x2.38 faster than A100!** So adding just two training arguments (`dataloader_num_workers=1` and `distribution_strategy="fast_ddp"`) led to a x1.33 speedup on Gaudi2 and to a x2.38 speedup compared to A100 with `dataloader_num_workers=1`. --> ### Hardware-accelerated data loading with Optimum Habana For even larger speedups, we are now going to move as many data loading operations as possible from the CPU to the accelerator devices (i.e. 
HPUs on Gaudi2 or GPUs on A100/H100). This can be done on Gaudi2 using Habana's [media pipeline](https://docs.habana.ai/en/latest/Media_Pipeline/index.html).

Given a dataset, most dataloaders follow this recipe:
1. Fetch data (e.g. where your JPEG images are stored on disk)
2. The CPU reads encoded images
3. The CPU decodes images
4. The CPU applies image transformations to augment images
5. Finally, images are sent to devices (although this is usually not done by the dataloader itself)

Instead of doing the whole process on CPU and sending ready-to-train data to devices, a more efficient workflow is to send encoded images to devices first and then perform image decoding and augmentations:
1. Same as before
2. Same as before
3. Encoded images are sent to devices
4. Devices decode images
5. Devices apply image transformations to augment images

That way we can benefit from the computing power of our devices to speed up image decoding and transformations. Note that there are two caveats to be aware of when doing this:
- Device memory consumption will increase, so you may have to reduce your batch size if there is not enough free memory. This may mitigate the speedup brought by this approach.
- If devices are intensively used (100% or close to it) when doing data loading on CPU, don't expect any speedup when doing it on devices as they already have their hands full.

<!-- To achieve this on Gaudi2, Habana's media pipeline enables us to:
- Initialize a media pipeline with all the operators it needs (see [here](https://docs.habana.ai/en/latest/Media_Pipeline/Operators.html#media-operators) the list of all supported operators) and define a graph so that we can specify in which order operations should be performed (e.g. reading data &rarr; decoding &rarr; cropping).
- Create a Torch dataloader with a HPU-tailored iterator. -->

To implement this on Gaudi2, we have got you covered: the [contrastive image-text example](https://github.com/huggingface/optimum-habana/tree/main/examples/contrastive-image-text) in Optimum Habana now provides a ready-to-use media pipeline that you can use with COCO-like datasets that contain text and images! You will just have to add `--mediapipe_dataloader` to your command to use it.

For interested readers, a lower-level overview is given in the documentation of Gaudi [here](https://docs.habana.ai/en/latest/Media_Pipeline/index.html) and the list of all supported operators is available [there](https://docs.habana.ai/en/latest/Media_Pipeline/Operators.html).

We are now going to re-run the previous experiments adding the `mediapipe_dataloader` argument since it is compatible with `dataloader_num_workers`:

| Device     | `dataloader_num_workers=0` | `dataloader_num_workers=2` | `dataloader_num_workers=2` + `mediapipe_dataloader` |
|:----------:|:--------------------------:|:--------------------------------------------:|:---------------:|
| Gaudi2 HPU | 601.5 samples/s | 768.7 samples/s | 847.7 samples/s |
| H100 GPU   | 336.5 samples/s | 602.1 samples/s | / |
| A100 GPU   | 227.5 samples/s | 345.4 samples/s | / |

We got an additional x1.10 speedup compared to the previous run with `dataloader_num_workers=2` only. This final run is thus x1.41 faster than our base run on Gaudi2 **by simply adding 2 ready-to-use training arguments.** It is also **x1.41 faster than H100** and **x2.45 faster than A100** with `dataloader_num_workers=2`!
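Before moving on to reproducing these numbers, here is a minimal sketch of how the `dataloader_num_workers` argument discussed above is set when you build training arguments yourself rather than passing a command-line flag to the example script. The output directory below is a placeholder; on Gaudi2 you would use Optimum Habana's `GaudiTrainingArguments` with the same argument.

```python
from transformers import TrainingArguments

# Minimal sketch: /tmp/bridgetower-test is a placeholder output directory.
# GaudiTrainingArguments from Optimum Habana accepts the same
# dataloader_num_workers argument, so the change is identical on Gaudi2.
training_args = TrainingArguments(
    output_dir="/tmp/bridgetower-test",
    per_device_train_batch_size=48,  # batch size per device used in this benchmark
    dataloader_num_workers=2,        # 2 dedicated CPU subprocesses for data loading
)
```

The `--mediapipe_dataloader` flag, on the other hand, is specific to the Optimum Habana example script and to Gaudi2, as shown in the reproduction command below.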
### Reproducing this benchmark To reproduce this benchmark, you first need to get access to Gaudi2 through the [Intel Developer Cloud](https://www.intel.com/content/www/us/en/secure/developer/devcloud/cloud-launchpad.html) (see [this guide](https://huggingface.co/blog/habana-gaudi-2-benchmark#how-to-get-access-to-gaudi2) for more information). Then, you need to install the latest version of Optimum Habana and run `run_bridgetower.py` which you can find [here](https://github.com/huggingface/optimum-habana/blob/main/examples/contrastive-image-text/run_bridgetower.py). Here is how to do it: ```bash pip install optimum[habana] git clone https://github.com/huggingface/optimum-habana.git cd optimum-habana/examples/contrastive-image-text pip install -r requirements.txt ``` The base command line to run the script is: ```bash python ../gaudi_spawn.py --use_mpi --world_size 8 run_bridgetower.py \ --output_dir /tmp/bridgetower-test \ --model_name_or_path BridgeTower/bridgetower-large-itm-mlm-itc \ --dataset_name jmhessel/newyorker_caption_contest --dataset_config_name matching \ --dataset_revision 3c6c4f6c0ff7e902833d3afa5f8f3875c2b036e6 \ --image_column image --caption_column image_description \ --remove_unused_columns=False \ --do_train --do_eval --do_predict \ --per_device_train_batch_size="40" --per_device_eval_batch_size="16" \ --num_train_epochs 5 \ --learning_rate="1e-5" \ --push_to_hub --report_to tensorboard --hub_model_id bridgetower\ --overwrite_output_dir \ --use_habana --use_lazy_mode --use_hpu_graphs_for_inference --gaudi_config_name Habana/clip \ --throughput_warmup_steps 3 \ --logging_steps 10 ``` which corresponds to the case `--dataloader_num_workers 0`. You can then add `--dataloader_num_workers N` and `--mediapipe_dataloader` to test other configurations. To push your model and Tensorboard logs to the Hugging Face Hub, you will have to log in to your account beforehand with: ```bash huggingface-cli login ``` For A100 and H100, you can use the same `run_bridgetower.py` script with a few small changes: - Replace `GaudiTrainer` and `GaudiTrainingArguments` with `Trainer` and `TrainingArguments` from Transformers - Remove references to `GaudiConfig`, `gaudi_config` and `HabanaDataloaderTrainer` - Import `set_seed` directly from Transformers: `from transformers import set_seed` The results displayed in this benchmark were obtained with a Nvidia H100 Lambda instance and a Nvidia A100 80GB GCP instance both with 8 devices using [Nvidia's Docker images](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html). Note that `--mediapipe_dataloader` is compatible with Gaudi2 only and will not work with A100/H100. Regarding fp8 results on H100 using [Transformer Engine](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html), they are not available because the code crashes and would require modifying the modeling of BridgeTower in Transformers. We will revisit this comparison when fp8 is supported on Gaudi2. ## Conclusion When dealing with images, we presented two solutions to speed up your training workflows: allocating more resources to the dataloader, and decoding and augmenting images directly on accelerator devices rather than on CPU. 
We showed that these optimizations lead to dramatic speedups when training a SOTA vision-language model like BridgeTower: **Habana Gaudi2 with Optimum Habana is about x1.4 faster than Nvidia H100 and x2.5 faster than Nvidia A100 80GB with Transformers!** And they are easy to use, as you just need to provide a few additional training arguments.

To go further, we are looking forward to using HPU graphs for training models even faster and to presenting how to use DeepSpeed ZeRO-3 on Gaudi2 to accelerate the training of your LLMs. Stay tuned!

If you are interested in accelerating your Machine Learning training and inference workflows using the latest AI hardware accelerators and software libraries, check out our [Expert Acceleration Program](https://huggingface.co/support). To learn more about Habana solutions, [read about our partnership and contact them here](https://huggingface.co/hardware/habana). To learn more about Hugging Face's efforts to make AI hardware accelerators easy to use, check out our [Hardware Partner Program](https://huggingface.co/hardware).

### Related Topics

- [Faster Training and Inference: Habana Gaudi-2 vs Nvidia A100 80GB](https://huggingface.co/blog/habana-gaudi-2-benchmark)
- [Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator](https://huggingface.co/blog/habana-gaudi-2-bloom)
8
0
hf_public_repos
hf_public_repos/blog/hardware-partners-program.md
--- title: "Introducing Optimum: The Optimization Toolkit for Transformers at Scale" authors: - user: mfuntowicz - user: echarlaix - user: michaelbenayoun - user: jeffboudier --- # Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale This post is the first step of a journey for Hugging Face to democratize state-of-the-art **Machine Learning production performance**. To get there, we will work hand in hand with our Hardware Partners, as we have with Intel below. Join us in this journey, and follow [Optimum](https://github.com/huggingface/optimum), our new open source library! ## Why 🤗 Optimum? ### 🤯 Scaling Transformers is hard What do Tesla, Google, Microsoft and Facebook all have in common? Well many things, but one of them is they all run billions of Transformer model predictions every day. Transformers for AutoPilot to drive your Tesla (lucky you!), for Gmail to complete your sentences, for Facebook to translate your posts on the fly, for Bing to answer your natural language queries. [Transformers](https://github.com/huggingface/transformers) have brought a step change improvement in the accuracy of Machine Learning models, have conquered NLP and are now expanding to other modalities starting with [Speech](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) and [Vision](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads). But taking these massive models into production, and making them run fast at scale is a huge challenge for any Machine Learning Engineering team. What if you don’t have hundreds of highly skilled Machine Learning Engineers on payroll like the above companies? Through Optimum, our new open source library, we aim to build the definitive toolkit for Transformers production performance, and enable maximum efficiency to train and run models on specific hardware. ### 🏭 Optimum puts Transformers to work To get optimal performance training and serving models, the model acceleration techniques need to be specifically compatible with the targeted hardware. Each hardware platform offers specific software tooling, [features and knobs that can have a huge impact on performance](https://huggingface.co/blog/bert-cpu-scaling-part-1). Similarly, to take advantage of advanced model acceleration techniques like sparsity and quantization, optimized kernels need to be compatible with the operators on silicon, and specific to the neural network graph derived from the model architecture. Diving into this 3-dimensional compatibility matrix and how to use model acceleration libraries is daunting work, which few Machine Learning Engineers have experience on. [Optimum](https://github.com/huggingface/optimum) aims to make this work easy, providing performance optimization tools targeting efficient AI hardware, built in collaboration with our Hardware Partners, and turn Machine Learning Engineers into ML Optimization wizards. With the [Transformers](https://github.com/huggingface/transformers) library, we made it easy for researchers and engineers to use state-of-the-art models, abstracting away the complexity of frameworks, architectures and pipelines. With the [Optimum](https://github.com/huggingface/optimum) library, we are making it easy for engineers to leverage all the available hardware features at their disposal, abstracting away the complexity of model acceleration on hardware platforms. 
## 🤗 Optimum in practice: how to quantize a model for Intel Xeon CPU

### 🤔 Why quantization is important but tricky to get right

Pre-trained language models such as BERT have achieved state-of-the-art results on a wide range of natural language processing tasks, while other transformer-based models such as ViT and Speech2Text have achieved state-of-the-art results on computer vision and speech tasks, respectively: transformers are everywhere in the Machine Learning world and are here to stay.

However, putting transformer-based models into production can be tricky and expensive as they need a lot of compute power to work. To solve this, many techniques exist, the most popular being quantization. Unfortunately, in most cases quantizing a model requires a lot of work, for many reasons:

1. The model needs to be edited: some ops need to be replaced by their quantized counterparts, new ops need to be inserted (quantization and dequantization nodes), and others need to be adapted to the fact that weights and activations will be quantized.

    This part can be very time-consuming because frameworks such as PyTorch work in eager mode, meaning that the changes mentioned above need to be added to the model implementation itself. PyTorch now provides a tool called `torch.fx` that allows you to trace and transform your model without having to actually change the model implementation, but it is tricky to use when tracing is not supported for your model out of the box.

    On top of the actual editing, it is also necessary to find which parts of the model need to be edited, which ops have an available quantized kernel counterpart and which ops don't, and so on.

2. Once the model has been edited, there are many parameters to play with to find the best quantization settings:

    - Which kind of observers should I use for range calibration?
    - Which quantization scheme should I use?
    - Which quantization-related data types (int8, uint8, int16) are supported on my target device?

3. Balancing the trade-off between quantization and an acceptable accuracy loss.

4. Exporting the quantized model for the target device.

Although PyTorch and TensorFlow have made great progress in making quantization easier, the complexity of transformer-based models makes it hard to use the provided tools out of the box and get something working without putting in a ton of effort.

### 💡 How Intel is solving quantization and more with Neural Compressor

Intel® [Neural Compressor](https://github.com/intel/neural-compressor) (formerly referred to as Low Precision Optimization Tool or LPOT) is an open-source Python library designed to help users deploy low-precision inference solutions. It applies low-precision recipes for deep-learning models to achieve optimal product objectives, such as inference performance and memory usage, with expected performance criteria. Neural Compressor supports post-training quantization, quantization-aware training and dynamic quantization.

In order to specify the quantization approach, objective and performance criteria, the user must provide a configuration yaml file specifying the tuning parameters. The configuration file can either be hosted on the Hugging Face Model Hub or be given through a local directory path.
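Before looking at how Optimum wraps all of this, it can help to see the simplest possible baseline: post-training dynamic quantization of a Transformer's linear layers with plain PyTorch. The sketch below is illustrative only and is not the Optimum or Neural Compressor workflow; static and quantization-aware approaches additionally require the model editing and calibration steps discussed above, which is exactly what the tooling presented next automates.

```python
# Minimal illustrative sketch: dynamic quantization of a BERT model with vanilla PyTorch.
# Only nn.Linear weights are quantized to int8; activations are quantized on the fly.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

quantized_model = torch.quantization.quantize_dynamic(
    model,                # the float32 model to quantize
    {torch.nn.Linear},    # which module types to replace with quantized versions
    dtype=torch.qint8,    # target integer data type for the weights
)

# The quantized model is a drop-in replacement for inference
inputs = torch.randint(0, 30000, (1, 16))  # dummy token ids, illustrative only
with torch.no_grad():
    outputs = quantized_model(inputs)
print(outputs.logits.shape)
```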
### 🔥 How to easily quantize Transformers for Intel Xeon CPUs with Optimum ![Automatic quantization code snippet](assets/25_hardware_partners_program/carbon_inc_quantizer.png) ## Follow 🤗 Optimum: a journey to democratize ML production performance ### ⚡️State of the Art Hardware Optimum will focus on achieving optimal production performance on dedicated hardware, where software and hardware acceleration techniques can be applied for maximum efficiency. We will work hand in hand with our Hardware Partners to enable, test and maintain acceleration, and deliver it in an easy and accessible way through Optimum, as we did with Intel and Neural Compressor. We will soon announce new Hardware Partners who have joined us on our journey toward Machine Learning efficiency. ### 🔮 State-of-the-Art Models The collaboration with our Hardware Partners will yield hardware-specific optimized model configurations and artifacts, which we will make available to the AI community via the Hugging Face [Model Hub](https://huggingface.co/models). We hope that Optimum and hardware-optimized models will accelerate the adoption of efficiency in production workloads, which represent most of the aggregate energy spent on Machine Learning. And most of all, we hope that Optimum will accelerate the adoption of Transformers at scale, not just for the biggest tech companies, but for all of us. ### 🌟 A journey of collaboration: join us, follow our progress Every journey starts with a first step, and ours was the public release of Optimum. Join us and make your first step by [giving the library a Star](https://github.com/huggingface/optimum), so you can follow along as we introduce new supported hardware, acceleration techniques and optimized models. If you would like to see new hardware and features be supported in Optimum, or you are interested in joining us to work at the intersection of software and hardware, please reach out to us at [email protected]
9
0
hf_public_repos/candle/candle-wasm-examples
hf_public_repos/candle/candle-wasm-examples/whisper/main.js
import init, { run_app } from './pkg/candle_wasm_example_whisper.js'; async function main() { await init('/pkg/candle_wasm_example_whisper_bg.wasm'); run_app(); } main()
0
0
hf_public_repos/candle/candle-wasm-examples
hf_public_repos/candle/candle-wasm-examples/whisper/whisperWorker.js
//load the candle Whisper decoder wasm module import init, { Decoder } from "./build/m.js"; async function fetchArrayBuffer(url) { const cacheName = "whisper-candle-cache"; const cache = await caches.open(cacheName); const cachedResponse = await cache.match(url); if (cachedResponse) { const data = await cachedResponse.arrayBuffer(); return new Uint8Array(data); } const res = await fetch(url, { cache: "force-cache" }); cache.put(url, res.clone()); return new Uint8Array(await res.arrayBuffer()); } class Whisper { static instance = {}; // Retrieve the Whisper model. When called for the first time, // this will load the model and save it for future use. static async getInstance(params) { const { weightsURL, modelID, tokenizerURL, mel_filtersURL, configURL, quantized, is_multilingual, timestamps, task, language, } = params; // load individual modelID only once if (!this.instance[modelID]) { await init(); self.postMessage({ status: "loading", message: "Loading Model" }); const [ weightsArrayU8, tokenizerArrayU8, mel_filtersArrayU8, configArrayU8, ] = await Promise.all([ fetchArrayBuffer(weightsURL), fetchArrayBuffer(tokenizerURL), fetchArrayBuffer(mel_filtersURL), fetchArrayBuffer(configURL), ]); this.instance[modelID] = new Decoder( weightsArrayU8, tokenizerArrayU8, mel_filtersArrayU8, configArrayU8, quantized, is_multilingual, timestamps, task, language ); } else { self.postMessage({ status: "loading", message: "Model Already Loaded" }); } return this.instance[modelID]; } } self.addEventListener("message", async (event) => { const { weightsURL, modelID, tokenizerURL, configURL, mel_filtersURL, audioURL, } = event.data; try { self.postMessage({ status: "decoding", message: "Starting Decoder" }); let quantized = false; if (modelID.includes("quantized")) { quantized = true; } let is_multilingual = false; if (modelID.includes("multilingual")) { is_multilingual = true; } let timestamps = true; const decoder = await Whisper.getInstance({ weightsURL, modelID, tokenizerURL, mel_filtersURL, configURL, quantized, is_multilingual, timestamps, task: null, language: null, }); self.postMessage({ status: "decoding", message: "Loading Audio" }); const audioArrayU8 = await fetchArrayBuffer(audioURL); self.postMessage({ status: "decoding", message: "Running Decoder..." }); const segments = decoder.decode(audioArrayU8); // Send the segment back to the main thread as JSON self.postMessage({ status: "complete", message: "complete", output: JSON.parse(segments), }); } catch (e) { self.postMessage({ error: e }); } });
1
0
hf_public_repos/candle/candle-wasm-examples
hf_public_repos/candle/candle-wasm-examples/whisper/README.md
## Running Whisper Examples Here, we provide two examples of how to run Whisper using a Candle-compiled WASM binary and runtimes. ### Pure Rust UI To build and test the UI made in Rust you will need [Trunk](https://trunkrs.dev/#install) From the `candle-wasm-examples/whisper` directory run: Download assets: ```bash # mel filters wget -c https://huggingface.co/spaces/lmz/candle-whisper/resolve/main/mel_filters.safetensors # Model and tokenizer tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/resolve/main/model.safetensors -P whisper-tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/raw/main/tokenizer.json -P whisper-tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/raw/main/config.json -P whisper-tiny.en # model and tokenizer tiny multilanguage wget -c https://huggingface.co/openai/whisper-tiny/resolve/main/model.safetensors -P whisper-tiny wget -c https://huggingface.co/openai/whisper-tiny/raw/main/tokenizer.json -P whisper-tiny wget -c https://huggingface.co/openai/whisper-tiny/raw/main/config.json -P whisper-tiny #quantized wget -c https://huggingface.co/lmz/candle-whisper/resolve/main/model-tiny-en-q80.gguf -P quantized wget -c https://huggingface.co/lmz/candle-whisper/raw/main/tokenizer-tiny-en.json -P quantized wget -c https://huggingface.co/lmz/candle-whisper/raw/main/config-tiny-en.json -P quantized # Audio samples wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_gb0.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_a13.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_gb1.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_hp0.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_jfk.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_mm0.wav -P audios ``` Run hot reload server: ```bash trunk serve --release --public-url / --port 8080 ``` ### Vanilla JS and WebWorkers To build and test the UI made in Vanilla JS and WebWorkers, first we need to build the WASM library: ```bash sh build-lib.sh ``` This will bundle the library under `./build` and we can import it inside our WebWorker like a normal JS module: ```js import init, { Decoder } from "./build/m.js"; ``` The full example can be found under `./lib-example.html`. All needed assets are fetched from the web, so no need to download anything. Finally, you can preview the example by running a local HTTP server. For example: ```bash python -m http.server ``` Then open `http://localhost:8000/lib-example.html` in your browser.
2
0
hf_public_repos/candle/candle-wasm-examples/whisper
hf_public_repos/candle/candle-wasm-examples/whisper/src/lib.rs
pub const WITH_TIMER: bool = true; mod app; mod audio; pub mod languages; pub mod worker; pub use app::App; pub use worker::Worker;
3
0
hf_public_repos/candle/candle-wasm-examples/whisper
hf_public_repos/candle/candle-wasm-examples/whisper/src/audio.rs
// Audio processing code, adapted from whisper.cpp // https://github.com/ggerganov/whisper.cpp use super::worker; pub trait Float: num_traits::Float + num_traits::FloatConst + num_traits::NumAssign {} impl Float for f32 {} impl Float for f64 {} // https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2357 fn fft<T: Float>(inp: &[T]) -> Vec<T> { let n = inp.len(); let zero = T::zero(); if n == 1 { return vec![inp[0], zero]; } if n % 2 == 1 { return dft(inp); } let mut out = vec![zero; n * 2]; let mut even = Vec::with_capacity(n / 2); let mut odd = Vec::with_capacity(n / 2); for (i, &inp) in inp.iter().enumerate() { if i % 2 == 0 { even.push(inp) } else { odd.push(inp); } } let even_fft = fft(&even); let odd_fft = fft(&odd); let two_pi = T::PI() + T::PI(); let n_t = T::from(n).unwrap(); for k in 0..n / 2 { let k_t = T::from(k).unwrap(); let theta = two_pi * k_t / n_t; let re = theta.cos(); let im = -theta.sin(); let re_odd = odd_fft[2 * k]; let im_odd = odd_fft[2 * k + 1]; out[2 * k] = even_fft[2 * k] + re * re_odd - im * im_odd; out[2 * k + 1] = even_fft[2 * k + 1] + re * im_odd + im * re_odd; out[2 * (k + n / 2)] = even_fft[2 * k] - re * re_odd + im * im_odd; out[2 * (k + n / 2) + 1] = even_fft[2 * k + 1] - re * im_odd - im * re_odd; } out } // https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2337 fn dft<T: Float>(inp: &[T]) -> Vec<T> { let zero = T::zero(); let n = inp.len(); let two_pi = T::PI() + T::PI(); let mut out = Vec::with_capacity(2 * n); let n_t = T::from(n).unwrap(); for k in 0..n { let k_t = T::from(k).unwrap(); let mut re = zero; let mut im = zero; for (j, &inp) in inp.iter().enumerate() { let j_t = T::from(j).unwrap(); let angle = two_pi * k_t * j_t / n_t; re += inp * angle.cos(); im -= inp * angle.sin(); } out.push(re); out.push(im); } out } #[allow(clippy::too_many_arguments)] // https://github.com/ggerganov/whisper.cpp/blob/4774d2feb01a772a15de81ffc34b34a1f294f020/whisper.cpp#L2414 fn log_mel_spectrogram_w<T: Float>( ith: usize, hann: &[T], samples: &[T], filters: &[T], fft_size: usize, fft_step: usize, speed_up: bool, n_len: usize, n_mel: usize, n_threads: usize, ) -> Vec<T> { let n_fft = if speed_up { 1 + fft_size / 4 } else { 1 + fft_size / 2 }; let zero = T::zero(); let half = T::from(0.5).unwrap(); let mut fft_in = vec![zero; fft_size]; let mut mel = vec![zero; n_len * n_mel]; for i in (ith..n_len).step_by(n_threads) { let offset = i * fft_step; // apply Hanning window for j in 0..fft_size { fft_in[j] = if offset + j < samples.len() { hann[j] * samples[offset + j] } else { zero } } // FFT -> mag^2 let mut fft_out: Vec<T> = fft(&fft_in); for j in 0..fft_size { fft_out[j] = fft_out[2 * j] * fft_out[2 * j] + fft_out[2 * j + 1] * fft_out[2 * j + 1]; } for j in 1..fft_size / 2 { let v = fft_out[fft_size - j]; fft_out[j] += v; } if speed_up { // scale down in the frequency domain results in a speed up in the time domain for j in 0..n_fft { fft_out[j] = half * (fft_out[2 * j] + fft_out[2 * j + 1]); } } // mel spectrogram for j in 0..n_mel { let mut sum = zero; for k in 0..n_fft { sum += fft_out[k] * filters[j * n_fft + k]; } mel[j * n_len + i] = T::max(sum, T::from(1e-10).unwrap()).log10(); } } mel } fn log_mel_spectrogram_<T: Float + std::fmt::Display>( samples: &[T], filters: &[T], fft_size: usize, fft_step: usize, n_mel: usize, speed_up: bool, ) -> Vec<T> { let zero = T::zero(); let two_pi = T::PI() + T::PI(); let half = T::from(0.5).unwrap(); let one = T::from(1.0).unwrap(); 
let four = T::from(4.0).unwrap(); let fft_size_t = T::from(fft_size).unwrap(); let hann: Vec<T> = (0..fft_size) .map(|i| half * (one - ((two_pi * T::from(i).unwrap()) / fft_size_t).cos())) .collect(); let n_len = samples.len() / fft_step; // pad audio with at least one extra chunk of zeros let pad = 100 * worker::m::CHUNK_LENGTH / 2; let n_len = if n_len % pad != 0 { (n_len / pad + 1) * pad } else { n_len }; let n_len = n_len + pad; let samples = { let mut samples_padded = samples.to_vec(); let to_add = n_len * fft_step - samples.len(); samples_padded.extend(std::iter::repeat(zero).take(to_add)); samples_padded }; // Use a single thread for now. let mut mel = log_mel_spectrogram_w( 0, &hann, &samples, filters, fft_size, fft_step, speed_up, n_len, n_mel, 1, ); let mmax = mel .iter() .max_by(|&u, &v| u.partial_cmp(v).unwrap_or(std::cmp::Ordering::Greater)) .copied() .unwrap_or(zero) - T::from(8).unwrap(); for m in mel.iter_mut() { let v = T::max(*m, mmax); *m = v / four + one } mel } pub fn pcm_to_mel<T: Float + std::fmt::Display>( cfg: &worker::m::Config, samples: &[T], filters: &[T], ) -> anyhow::Result<Vec<T>> { let mel = log_mel_spectrogram_( samples, filters, worker::m::N_FFT, worker::m::HOP_LENGTH, cfg.num_mel_bins, false, ); Ok(mel) }
4
0
hf_public_repos/candle/candle-wasm-examples/whisper
hf_public_repos/candle/candle-wasm-examples/whisper/src/languages.rs
pub const LANGUAGES: [(&str, &str); 99] = [ ("en", "english"), ("zh", "chinese"), ("de", "german"), ("es", "spanish"), ("ru", "russian"), ("ko", "korean"), ("fr", "french"), ("ja", "japanese"), ("pt", "portuguese"), ("tr", "turkish"), ("pl", "polish"), ("ca", "catalan"), ("nl", "dutch"), ("ar", "arabic"), ("sv", "swedish"), ("it", "italian"), ("id", "indonesian"), ("hi", "hindi"), ("fi", "finnish"), ("vi", "vietnamese"), ("he", "hebrew"), ("uk", "ukrainian"), ("el", "greek"), ("ms", "malay"), ("cs", "czech"), ("ro", "romanian"), ("da", "danish"), ("hu", "hungarian"), ("ta", "tamil"), ("no", "norwegian"), ("th", "thai"), ("ur", "urdu"), ("hr", "croatian"), ("bg", "bulgarian"), ("lt", "lithuanian"), ("la", "latin"), ("mi", "maori"), ("ml", "malayalam"), ("cy", "welsh"), ("sk", "slovak"), ("te", "telugu"), ("fa", "persian"), ("lv", "latvian"), ("bn", "bengali"), ("sr", "serbian"), ("az", "azerbaijani"), ("sl", "slovenian"), ("kn", "kannada"), ("et", "estonian"), ("mk", "macedonian"), ("br", "breton"), ("eu", "basque"), ("is", "icelandic"), ("hy", "armenian"), ("ne", "nepali"), ("mn", "mongolian"), ("bs", "bosnian"), ("kk", "kazakh"), ("sq", "albanian"), ("sw", "swahili"), ("gl", "galician"), ("mr", "marathi"), ("pa", "punjabi"), ("si", "sinhala"), ("km", "khmer"), ("sn", "shona"), ("yo", "yoruba"), ("so", "somali"), ("af", "afrikaans"), ("oc", "occitan"), ("ka", "georgian"), ("be", "belarusian"), ("tg", "tajik"), ("sd", "sindhi"), ("gu", "gujarati"), ("am", "amharic"), ("yi", "yiddish"), ("lo", "lao"), ("uz", "uzbek"), ("fo", "faroese"), ("ht", "haitian creole"), ("ps", "pashto"), ("tk", "turkmen"), ("nn", "nynorsk"), ("mt", "maltese"), ("sa", "sanskrit"), ("lb", "luxembourgish"), ("my", "myanmar"), ("bo", "tibetan"), ("tl", "tagalog"), ("mg", "malagasy"), ("as", "assamese"), ("tt", "tatar"), ("haw", "hawaiian"), ("ln", "lingala"), ("ha", "hausa"), ("ba", "bashkir"), ("jw", "javanese"), ("su", "sundanese"), ];
5
0
hf_public_repos/candle/candle-wasm-examples/whisper
hf_public_repos/candle/candle-wasm-examples/whisper/src/worker.rs
use crate::languages::LANGUAGES; use anyhow::Error as E; use candle::{safetensors::Load, DType, Device, IndexOp, Tensor, D}; use candle_nn::{ops::softmax, VarBuilder}; pub use candle_transformers::models::whisper::{self as m, Config}; use rand::{distributions::Distribution, rngs::StdRng, SeedableRng}; use serde::{Deserialize, Serialize}; use tokenizers::Tokenizer; use wasm_bindgen::prelude::*; use yew_agent::{HandlerId, Public, WorkerLink}; #[wasm_bindgen] extern "C" { // Use `js_namespace` here to bind `console.log(..)` instead of just // `log(..)` #[wasm_bindgen(js_namespace = console)] pub fn log(s: &str); } #[macro_export] macro_rules! console_log { // Note that this is using the `log` function imported above during // `bare_bones` ($($t:tt)*) => ($crate::worker::log(&format_args!($($t)*).to_string())) } pub const DTYPE: DType = DType::F32; pub enum Model { Normal(m::model::Whisper), Quantized(m::quantized_model::Whisper), } // Maybe we should use some traits rather than doing the dispatch for all these. impl Model { pub fn config(&self) -> &Config { match self { Self::Normal(m) => &m.config, Self::Quantized(m) => &m.config, } } pub fn encoder_forward(&mut self, x: &Tensor, flush: bool) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.encoder.forward(x, flush), Self::Quantized(m) => m.encoder.forward(x, flush), } } pub fn decoder_forward( &mut self, x: &Tensor, xa: &Tensor, flush: bool, ) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.decoder.forward(x, xa, flush), Self::Quantized(m) => m.decoder.forward(x, xa, flush), } } pub fn decoder_final_linear(&self, x: &Tensor) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.decoder.final_linear(x), Self::Quantized(m) => m.decoder.final_linear(x), } } } #[derive(Debug, Clone, Serialize, Deserialize)] pub struct DecodingResult { pub tokens: Vec<u32>, pub text: String, pub avg_logprob: f64, pub no_speech_prob: f64, temperature: f64, compression_ratio: f64, } #[derive(Debug, Clone, Serialize, Deserialize)] pub struct Segment { pub start: f64, pub duration: f64, pub dr: DecodingResult, } pub struct Decoder { model: Model, rng: rand::rngs::StdRng, task: Option<Task>, language: Option<String>, is_multilingual: bool, mel_filters: Vec<f32>, timestamps: bool, tokenizer: Tokenizer, suppress_tokens: Tensor, sot_token: u32, transcribe_token: u32, translate_token: u32, eot_token: u32, no_speech_token: u32, no_timestamps_token: u32, } impl Decoder { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, mel_filters: Vec<f32>, device: &Device, task: Option<Task>, language: Option<String>, is_multilingual: bool, timestamps: bool, ) -> anyhow::Result<Self> { let suppress_tokens: Vec<f32> = (0..model.config().vocab_size as u32) .map(|i| { if model.config().suppress_tokens.contains(&i) { f32::NEG_INFINITY } else { 0f32 } }) .collect(); let no_timestamps_token = token_id(&tokenizer, m::NO_TIMESTAMPS_TOKEN)?; let suppress_tokens = Tensor::new(suppress_tokens.as_slice(), device)?; let sot_token = token_id(&tokenizer, m::SOT_TOKEN)?; let transcribe_token = token_id(&tokenizer, m::TRANSCRIBE_TOKEN)?; let translate_token = token_id(&tokenizer, m::TRANSLATE_TOKEN)?; let eot_token = token_id(&tokenizer, m::EOT_TOKEN)?; let no_speech_token = m::NO_SPEECH_TOKENS .iter() .find_map(|token| token_id(&tokenizer, token).ok()); let no_speech_token = match no_speech_token { None => anyhow::bail!("unable to find any non-speech token"), Some(n) => n, }; let seed = 299792458; Ok(Self { model, rng: 
StdRng::seed_from_u64(seed), tokenizer, mel_filters, task, timestamps, language, is_multilingual, suppress_tokens, sot_token, transcribe_token, translate_token, eot_token, no_speech_token, no_timestamps_token, }) } fn decode(&mut self, mel: &Tensor, t: f64) -> anyhow::Result<DecodingResult> { let model = &mut self.model; let language_token = match (self.is_multilingual, &self.language) { (true, None) => Some(detect_language(model, &self.tokenizer, mel)?), (false, None) => None, (true, Some(language)) => { match token_id(&self.tokenizer, &format!("<|{:?}|>", self.language)) { Ok(token_id) => Some(token_id), Err(_) => anyhow::bail!("language {language} is not supported"), } } (false, Some(_)) => { anyhow::bail!("a language cannot be set for non-multilingual models") } }; let audio_features = model.encoder_forward(mel, true)?; println!("audio features: {:?}", audio_features.dims()); let sample_len = model.config().max_target_positions / 2; let mut sum_logprob = 0f64; let mut no_speech_prob = f64::NAN; let mut tokens = vec![self.sot_token]; if let Some(language_token) = language_token { tokens.push(language_token); } match self.task { None | Some(Task::Transcribe) => tokens.push(self.transcribe_token), Some(Task::Translate) => tokens.push(self.translate_token), } if !self.timestamps { tokens.push(self.no_timestamps_token); } for i in 0..sample_len { let tokens_t = Tensor::new(tokens.as_slice(), mel.device())?; // The model expects a batch dim but this inference loop does not handle // it so we add it at this point. let tokens_t = tokens_t.unsqueeze(0)?; let ys = model.decoder_forward(&tokens_t, &audio_features, i == 0)?; // Extract the no speech probability on the first iteration by looking at the first // token logits and the probability for the according token. if i == 0 { let logits = model.decoder_final_linear(&ys.i(..1)?)?.i(0)?.i(0)?; no_speech_prob = softmax(&logits, 0)? .i(self.no_speech_token as usize)? .to_scalar::<f32>()? as f64; } let (_, seq_len, _) = ys.dims3()?; let logits = model .decoder_final_linear(&ys.i((..1, seq_len - 1..))?)? .i(0)? .i(0)?; // TODO: Besides suppress tokens, we should apply the heuristics from // ApplyTimestampRules, i.e.: // - Timestamps come in pairs, except before EOT. // - Timestamps should be non-decreasing. // - If the sum of the probabilities of timestamps is higher than any other tokens, // only consider timestamps when sampling. // https://github.com/openai/whisper/blob/e8622f9afc4eba139bf796c210f5c01081000472/whisper/decoding.py#L439 let logits = logits.broadcast_add(&self.suppress_tokens)?; let next_token = if t > 0f64 { let prs = softmax(&(&logits / t)?, 0)?; let logits_v: Vec<f32> = prs.to_vec1()?; let distr = rand::distributions::WeightedIndex::new(&logits_v)?; distr.sample(&mut self.rng) as u32 } else { let logits_v: Vec<f32> = logits.to_vec1()?; logits_v .iter() .enumerate() .max_by(|(_, u), (_, v)| u.total_cmp(v)) .map(|(i, _)| i as u32) .unwrap() }; tokens.push(next_token); let prob = softmax(&logits, candle::D::Minus1)? .i(next_token as usize)? .to_scalar::<f32>()? 
as f64; if next_token == self.eot_token || tokens.len() > model.config().max_target_positions { break; } sum_logprob += prob.ln(); } let text = self.tokenizer.decode(&tokens, true).map_err(E::msg)?; let avg_logprob = sum_logprob / tokens.len() as f64; Ok(DecodingResult { tokens, text, avg_logprob, no_speech_prob, temperature: t, compression_ratio: f64::NAN, }) } fn decode_with_fallback(&mut self, segment: &Tensor) -> anyhow::Result<DecodingResult> { for (i, &t) in m::TEMPERATURES.iter().enumerate() { let dr: Result<DecodingResult, _> = self.decode(segment, t); if i == m::TEMPERATURES.len() - 1 { return dr; } // On errors, we try again with a different temperature. match dr { Ok(dr) => { let needs_fallback = dr.compression_ratio > m::COMPRESSION_RATIO_THRESHOLD || dr.avg_logprob < m::LOGPROB_THRESHOLD; if !needs_fallback || dr.no_speech_prob > m::NO_SPEECH_THRESHOLD { return Ok(dr); } } Err(err) => { console_log!("Error running at {t}: {err}") } } } unreachable!() } fn run(&mut self, mel: &Tensor) -> anyhow::Result<Vec<Segment>> { let (_, _, content_frames) = mel.dims3()?; let mut seek = 0; let mut segments = vec![]; while seek < content_frames { let time_offset = (seek * m::HOP_LENGTH) as f64 / m::SAMPLE_RATE as f64; let segment_size = usize::min(content_frames - seek, m::N_FRAMES); let mel_segment = mel.narrow(2, seek, segment_size)?; let segment_duration = (segment_size * m::HOP_LENGTH) as f64 / m::SAMPLE_RATE as f64; let dr = self.decode_with_fallback(&mel_segment)?; seek += segment_size; if dr.no_speech_prob > m::NO_SPEECH_THRESHOLD && dr.avg_logprob < m::LOGPROB_THRESHOLD { console_log!("no speech detected, skipping {seek} {dr:?}"); continue; } let segment = Segment { start: time_offset, duration: segment_duration, dr, }; console_log!("{seek}: {segment:?}"); segments.push(segment) } Ok(segments) } pub fn load(md: ModelData) -> anyhow::Result<Self> { let device = Device::Cpu; let tokenizer = Tokenizer::from_bytes(&md.tokenizer).map_err(E::msg)?; let mel_filters = safetensors::tensor::SafeTensors::deserialize(&md.mel_filters)?; let mel_filters = mel_filters.tensor("mel_80")?.load(&device)?; console_log!("loaded mel filters {:?}", mel_filters.shape()); let mel_filters = mel_filters.flatten_all()?.to_vec1::<f32>()?; let config: Config = serde_json::from_slice(&md.config)?; let model = if md.quantized { let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf_buffer( &md.weights, &device, )?; Model::Quantized(m::quantized_model::Whisper::load(&vb, config)?) } else { let vb = VarBuilder::from_buffered_safetensors(md.weights, m::DTYPE, &device)?; Model::Normal(m::model::Whisper::load(&vb, config)?) 
}; console_log!("done loading model"); let task = match md.task.as_deref() { Some("translate") => Some(Task::Translate), _ => Some(Task::Transcribe), }; let decoder = Self::new( model, tokenizer, mel_filters, &device, task, md.language, md.is_multilingual, md.timestamps, )?; Ok(decoder) } pub fn convert_and_run(&mut self, wav_input: &[u8]) -> anyhow::Result<Vec<Segment>> { let device = Device::Cpu; let mut wav_input = std::io::Cursor::new(wav_input); let wav_reader = hound::WavReader::new(&mut wav_input)?; let spec = wav_reader.spec(); console_log!("loaded wav data: {spec:?}"); if spec.sample_rate != m::SAMPLE_RATE as u32 { anyhow::bail!("wav file must have a {} sampling rate", m::SAMPLE_RATE); } let mut data = wav_reader.into_samples::<i16>().collect::<Vec<_>>(); data.truncate(data.len() / spec.channels as usize); let mut pcm_data = Vec::with_capacity(data.len()); for d in data.into_iter() { let d = d?; pcm_data.push(d as f32 / 32768.) } console_log!("pcm data loaded {}", pcm_data.len()); let mel = crate::audio::pcm_to_mel(self.model.config(), &pcm_data, &self.mel_filters)?; let mel_len = mel.len(); let n_mels = self.model.config().num_mel_bins; let mel = Tensor::from_vec(mel, (1, n_mels, mel_len / n_mels), &device)?; console_log!("loaded mel: {:?}", mel.dims()); let segments = self.run(&mel)?; Ok(segments) } } /// Returns the token id for the selected language. pub fn detect_language(model: &mut Model, tokenizer: &Tokenizer, mel: &Tensor) -> Result<u32, E> { console_log!("detecting language"); let (_bsize, _, seq_len) = mel.dims3()?; let mel = mel.narrow( 2, 0, usize::min(seq_len, model.config().max_source_positions), )?; let device = mel.device(); let language_token_ids = LANGUAGES .iter() .map(|(t, _)| token_id(tokenizer, &format!("<|{t}|>"))) .map(|e| e.map_err(E::msg)) .collect::<Result<Vec<_>, E>>()?; let sot_token = token_id(tokenizer, m::SOT_TOKEN)?; let audio_features = model.encoder_forward(&mel, true)?; let tokens = Tensor::new(&[[sot_token]], device)?; let language_token_ids = Tensor::new(language_token_ids.as_slice(), device)?; let ys = model.decoder_forward(&tokens, &audio_features, true)?; let logits = model.decoder_final_linear(&ys.i(..1)?)?.i(0)?.i(0)?; let logits = logits.index_select(&language_token_ids, 0)?; let probs = candle_nn::ops::softmax(&logits, D::Minus1)?; let probs = probs.to_vec1::<f32>()?; let mut probs = LANGUAGES.iter().zip(probs.iter()).collect::<Vec<_>>(); probs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for ((_, language), p) in probs.iter().take(5) { println!("{language}: {p}") } let token = &format!("<|{}|>", probs[0].0 .0); let language = token_id(tokenizer, token)?; console_log!("detected language: {language} {token}"); Ok(language) } pub fn token_id(tokenizer: &Tokenizer, token: &str) -> candle::Result<u32> { match tokenizer.token_to_id(token) { None => candle::bail!("no token-id for {token}"), Some(id) => Ok(id), } } #[derive(Serialize, Deserialize, Clone, Copy, Debug)] pub enum Task { Transcribe, Translate, } // Communication to the worker happens through bincode, the model weights and configs are fetched // on the main thread and transferred via the following structure. 
#[derive(Serialize, Deserialize)] pub struct ModelData { pub weights: Vec<u8>, pub tokenizer: Vec<u8>, pub mel_filters: Vec<u8>, pub config: Vec<u8>, pub quantized: bool, pub timestamps: bool, pub is_multilingual: bool, pub language: Option<String>, pub task: Option<String>, } pub struct Worker { link: WorkerLink<Self>, decoder: Option<Decoder>, } #[derive(Serialize, Deserialize)] pub enum WorkerInput { ModelData(ModelData), DecodeTask { wav_bytes: Vec<u8> }, } #[derive(Serialize, Deserialize)] pub enum WorkerOutput { Decoded(Vec<Segment>), WeightsLoaded, } impl yew_agent::Worker for Worker { type Input = WorkerInput; type Message = (); type Output = Result<WorkerOutput, String>; type Reach = Public<Self>; fn create(link: WorkerLink<Self>) -> Self { Self { link, decoder: None, } } fn update(&mut self, _msg: Self::Message) { // no messaging } fn handle_input(&mut self, msg: Self::Input, id: HandlerId) { let output = match msg { WorkerInput::ModelData(md) => match Decoder::load(md) { Ok(decoder) => { self.decoder = Some(decoder); Ok(WorkerOutput::WeightsLoaded) } Err(err) => Err(format!("model creation error {err:?}")), }, WorkerInput::DecodeTask { wav_bytes } => match &mut self.decoder { None => Err("model has not been set".to_string()), Some(decoder) => decoder .convert_and_run(&wav_bytes) .map(WorkerOutput::Decoded) .map_err(|e| e.to_string()), }, }; self.link.respond(id, output); } fn name_of_resource() -> &'static str { "worker.js" } fn resource_path_is_relative() -> bool { true } }
6
0
hf_public_repos/candle/candle-wasm-examples/whisper
hf_public_repos/candle/candle-wasm-examples/whisper/src/app.rs
use crate::console_log; use crate::worker::{ModelData, Segment, Worker, WorkerInput, WorkerOutput}; use js_sys::Date; use wasm_bindgen::prelude::*; use wasm_bindgen_futures::JsFuture; use yew::{html, Component, Context, Html}; use yew_agent::{Bridge, Bridged}; const SAMPLE_NAMES: [&str; 6] = [ "audios/samples_jfk.wav", "audios/samples_a13.wav", "audios/samples_gb0.wav", "audios/samples_gb1.wav", "audios/samples_hp0.wav", "audios/samples_mm0.wav", ]; async fn fetch_url(url: &str) -> Result<Vec<u8>, JsValue> { use web_sys::{Request, RequestCache, RequestInit, RequestMode, Response}; let window = web_sys::window().ok_or("window")?; let opts = RequestInit::new(); opts.set_method("GET"); opts.set_mode(RequestMode::Cors); opts.set_cache(RequestCache::NoCache); let request = Request::new_with_str_and_init(url, &opts)?; let resp_value = JsFuture::from(window.fetch_with_request(&request)).await?; // `resp_value` is a `Response` object. assert!(resp_value.is_instance_of::<Response>()); let resp: Response = resp_value.dyn_into()?; let data = JsFuture::from(resp.blob()?).await?; let blob = web_sys::Blob::from(data); let array_buffer = JsFuture::from(blob.array_buffer()).await?; let data = js_sys::Uint8Array::new(&array_buffer).to_vec(); Ok(data) } pub enum Msg { Run(usize), UpdateStatus(String), SetDecoder(ModelData), WorkerIn(WorkerInput), WorkerOut(Result<WorkerOutput, String>), } pub struct CurrentDecode { start_time: Option<f64>, } pub struct App { status: String, loaded: bool, segments: Vec<Segment>, current_decode: Option<CurrentDecode>, worker: Box<dyn Bridge<Worker>>, } async fn model_data_load() -> Result<ModelData, JsValue> { let quantized = false; let is_multilingual = false; let (tokenizer, mel_filters, weights, config) = if quantized { console_log!("loading quantized weights"); let tokenizer = fetch_url("quantized/tokenizer-tiny-en.json").await?; let mel_filters = fetch_url("mel_filters.safetensors").await?; let weights = fetch_url("quantized/model-tiny-en-q80.gguf").await?; let config = fetch_url("quantized/config-tiny-en.json").await?; (tokenizer, mel_filters, weights, config) } else { console_log!("loading float weights"); if is_multilingual { let mel_filters = fetch_url("mel_filters.safetensors").await?; let tokenizer = fetch_url("whisper-tiny/tokenizer.json").await?; let weights = fetch_url("whisper-tiny/model.safetensors").await?; let config = fetch_url("whisper-tiny/config.json").await?; (tokenizer, mel_filters, weights, config) } else { let mel_filters = fetch_url("mel_filters.safetensors").await?; let tokenizer = fetch_url("whisper-tiny.en/tokenizer.json").await?; let weights = fetch_url("whisper-tiny.en/model.safetensors").await?; let config = fetch_url("whisper-tiny.en/config.json").await?; (tokenizer, mel_filters, weights, config) } }; let timestamps = true; let _task = Some("transcribe".to_string()); console_log!("{}", weights.len()); Ok(ModelData { tokenizer, mel_filters, weights, config, quantized, timestamps, task: None, is_multilingual, language: None, }) } fn performance_now() -> Option<f64> { let window = web_sys::window()?; let performance = window.performance()?; Some(performance.now() / 1000.) 
} impl Component for App { type Message = Msg; type Properties = (); fn create(ctx: &Context<Self>) -> Self { let status = "loading weights".to_string(); let cb = { let link = ctx.link().clone(); move |e| link.send_message(Self::Message::WorkerOut(e)) }; let worker = Worker::bridge(std::rc::Rc::new(cb)); Self { status, segments: vec![], current_decode: None, worker, loaded: false, } } fn rendered(&mut self, ctx: &Context<Self>, first_render: bool) { if first_render { ctx.link().send_future(async { match model_data_load().await { Err(err) => { let status = format!("{err:?}"); Msg::UpdateStatus(status) } Ok(model_data) => Msg::SetDecoder(model_data), } }); } } fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool { match msg { Msg::SetDecoder(md) => { self.status = "weights loaded successfully!".to_string(); self.loaded = true; console_log!("loaded weights"); self.worker.send(WorkerInput::ModelData(md)); true } Msg::Run(sample_index) => { let sample = SAMPLE_NAMES[sample_index]; if self.current_decode.is_some() { self.status = "already decoding some sample at the moment".to_string() } else { let start_time = performance_now(); self.current_decode = Some(CurrentDecode { start_time }); self.status = format!("decoding {sample}"); self.segments.clear(); ctx.link().send_future(async move { match fetch_url(sample).await { Err(err) => { let output = Err(format!("decoding error: {err:?}")); // Mimic a worker output to so as to release current_decode Msg::WorkerOut(output) } Ok(wav_bytes) => Msg::WorkerIn(WorkerInput::DecodeTask { wav_bytes }), } }) } // true } Msg::WorkerOut(output) => { let dt = self.current_decode.as_ref().and_then(|current_decode| { current_decode.start_time.and_then(|start_time| { performance_now().map(|stop_time| stop_time - start_time) }) }); self.current_decode = None; match output { Ok(WorkerOutput::WeightsLoaded) => self.status = "weights loaded!".to_string(), Ok(WorkerOutput::Decoded(segments)) => { self.status = match dt { None => "decoding succeeded!".to_string(), Some(dt) => format!("decoding succeeded in {:.2}s", dt), }; self.segments = segments; } Err(err) => { self.status = format!("decoding error {err:?}"); } } true } Msg::WorkerIn(inp) => { self.worker.send(inp); true } Msg::UpdateStatus(status) => { self.status = status; true } } } fn view(&self, ctx: &Context<Self>) -> Html { html! { <div> <table> <thead> <tr> <th>{"Sample"}</th> <th></th> <th></th> </tr> </thead> <tbody> { SAMPLE_NAMES.iter().enumerate().map(|(i, name)| { html! { <tr> <th>{name}</th> <th><audio controls=true src={format!("./{name}")}></audio></th> { if self.loaded { html!(<th><button class="button" onclick={ctx.link().callback(move |_| Msg::Run(i))}> { "run" }</button></th>) }else{html!()} } </tr> } }).collect::<Html>() } </tbody> </table> <h2> {&self.status} </h2> { if !self.loaded{ html! { <progress id="progress-bar" aria-label="loading weights…"></progress> } } else if self.current_decode.is_some() { html! { <progress id="progress-bar" aria-label="decoding…"></progress> } } else { html!{ <blockquote> <p> { self.segments.iter().map(|segment| { html! { <> <i> { format!("{:.2}s-{:.2}s: (avg-logprob: {:.4}, no-speech-prob: {:.4})", segment.start, segment.start + segment.duration, segment.dr.avg_logprob, segment.dr.no_speech_prob, ) } </i> <br/ > {&segment.dr.text} <br/ > </> } }).collect::<Html>() } </p> </blockquote> } } } // Display the current date and time the page was rendered <p class="footer"> { "Rendered: " } { String::from(Date::new_0().to_string()) } </p> </div> } } }
7
0
hf_public_repos/candle/candle-wasm-examples/whisper/src
hf_public_repos/candle/candle-wasm-examples/whisper/src/bin/m.rs
use candle_wasm_example_whisper::worker::{Decoder as D, ModelData}; use wasm_bindgen::prelude::*; #[wasm_bindgen] pub struct Decoder { decoder: D, } #[wasm_bindgen] impl Decoder { #[wasm_bindgen(constructor)] #[allow(clippy::too_many_arguments)] pub fn new( weights: Vec<u8>, tokenizer: Vec<u8>, mel_filters: Vec<u8>, config: Vec<u8>, quantized: bool, is_multilingual: bool, timestamps: bool, task: Option<String>, language: Option<String>, ) -> Result<Decoder, JsError> { let decoder = D::load(ModelData { tokenizer, mel_filters, config, quantized, weights, is_multilingual, timestamps, task, language, }); match decoder { Ok(decoder) => Ok(Self { decoder }), Err(e) => Err(JsError::new(&e.to_string())), } } #[wasm_bindgen] pub fn decode(&mut self, wav_input: Vec<u8>) -> Result<String, JsError> { let segments = self .decoder .convert_and_run(&wav_input) .map_err(|e| JsError::new(&e.to_string()))?; let json = serde_json::to_string(&segments)?; Ok(json) } } fn main() {}
8
0
hf_public_repos/candle/candle-wasm-examples/whisper/src
hf_public_repos/candle/candle-wasm-examples/whisper/src/bin/worker.rs
use yew_agent::PublicWorker; fn main() { candle_wasm_example_whisper::Worker::register(); }
9
0
hf_public_repos
hf_public_repos/blog/vision_language_pretraining.md
--- title: "A Dive into Vision-Language Models" thumbnail: /blog//assets/128_vision_language_pretraining/thumbnail.png authors: - user: adirik - user: sayakpaul --- # A Dive into Vision-Language Models Human learning is inherently multi-modal as jointly leveraging multiple senses helps us understand and analyze new information better. Unsurprisingly, recent advances in multi-modal learning take inspiration from the effectiveness of this process to create models that can process and link information using various modalities such as image, video, text, audio, body gestures, facial expressions, and physiological signals. Since 2021, we’ve seen an increased interest in models that combine vision and language modalities (also called joint vision-language models), such as [OpenAI’s CLIP](https://openai.com/blog/clip/). Joint vision-language models have shown particularly impressive capabilities in very challenging tasks such as image captioning, text-guided image generation and manipulation, and visual question-answering. This field continues to evolve, and so does its effectiveness in improving zero-shot generalization leading to various practical use cases. In this blog post, we'll introduce joint vision-language models focusing on how they're trained. We'll also show how you can leverage 🤗 Transformers to experiment with the latest advances in this domain. ## Table of contents 1. [Introduction](#introduction) 2. [Learning Strategies](#learning-strategies) 1. [Contrastive Learning](#1-contrastive-learning) 2. [PrefixLM](#2-prefixlm) 3. [Multi-modal Fusing with Cross Attention](#3-multi-modal-fusing-with-cross-attention) 4. [MLM / ITM](#4-masked-language-modeling--image-text-matching) 5. [No Training](#5-no-training) 3. [Datasets](#datasets) 4. [Supporting Vision-Language Models in 🤗 Transformers](#supporting-vision-language-models-in-🤗-transformers) 5. [Emerging Areas of Research](#emerging-areas-of-research) 6. [Conclusion](#conclusion) ## Introduction What does it mean to call a model a “vision-language” model? A model that combines both the vision and language modalities? But what exactly does that mean? One characteristic that helps define these models is their ability to process both images (vision) and natural language text (language). This process depends on the inputs, outputs, and the task these models are asked to perform. Take, for example, the task of zero-shot image classification. We’ll pass an image and a few prompts like so to obtain the most probable prompt for the input image. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/example1.png" alt="drawing"><br> <em>The cat and dog image has been taken from <a href=https://www.istockphoto.com/photos/dog-cat-love>here</a>.</em> </p> To predict something like that, the model needs to understand both the input image and the text prompts. The model would have separate or fused encoders for vision and language to achieve this understanding. But these inputs and outputs can take several forms. Below we give some examples: - Image retrieval from natural language text. - Phrase grounding, i.e., performing object detection from an input image and natural language phrase (example: A **young person** swings a **bat**). - Visual question answering, i.e., finding answers from an input image and a question in natural language. - Generate a caption for a given image. 
This can also take the form of conditional text generation, where you'd start with a natural language prompt and an image. - Detection of hate speech from social media content involving both images and text modalities. ## Learning Strategies A vision-language model typically consists of 3 key elements: an image encoder, a text encoder, and a strategy to fuse information from the two encoders. These key elements are tightly coupled together as the loss functions are designed around both the model architecture and the learning strategy. While vision-language model research is hardly a new research area, the design of such models has changed tremendously over the years. Whereas earlier research adopted hand-crafted image descriptors and pre-trained word vectors or the frequency-based TF-IDF features, the latest research predominantly adopts image and text encoders with [transformer](https://arxiv.org/abs/1706.03762) architectures to separately or jointly learn image and text features. These models are pre-trained with strategic pre-training objectives that enable various downstream tasks. In this section, we'll discuss some of the typical pre-training objectives and strategies for vision-language models that have been shown to perform well regarding their transfer performance. We'll also touch upon additional interesting things that are either specific to these objectives or can be used as general components for pre-training. We’ll cover the following themes in the pre-training objectives: - **Contrastive Learning:** Aligning images and texts to a joint feature space in a contrastive manner - **PrefixLM:** Jointly learning image and text embeddings by using images as a prefix to a language model - **Multi-modal Fusing with Cross Attention:** Fusing visual information into layers of a language model with a cross-attention mechanism - **MLM / ITM:** Aligning parts of images with text with masked-language modeling and image-text matching objectives - **No Training:** Using stand-alone vision and language models via iterative optimization Note that this section is a non-exhaustive list, and there are various other approaches, as well as hybrid strategies such as [Unified-IO](https://arxiv.org/abs/2206.08916). For a more comprehensive review of multi-modal models, refer to [this work.](https://arxiv.org/abs/2210.09263) ### 1) Contrastive Learning <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/contrastive_learning.png" alt="Contrastive Learning"><br> <em>Contrastive pre-training and zero-shot image classification as shown <a href=https://openai.com/blog/clip>here</a>.</em> </p> Contrastive learning is a commonly used pre-training objective for vision models and has proven to be a highly effective pre-training objective for vision-language models as well. Recent works such as [CLIP](https://arxiv.org/abs/2103.00020), [CLOOB](https://arxiv.org/abs/2110.11316), [ALIGN](https://arxiv.org/abs/2102.05918), and [DeCLIP](https://arxiv.org/abs/2110.05208) bridge the vision and language modalities by learning a text encoder and an image encoder jointly with a contrastive loss, using large datasets consisting of {image, caption} pairs. Contrastive learning aims to map input images and texts to the same feature space such that the distance between the embeddings of image-text pairs is minimized if they match or maximized if they don’t. 
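To make this objective concrete, here is a schematic PyTorch sketch of the symmetric, InfoNCE-style contrastive loss used by CLIP-like models. The embeddings are random stand-ins for the outputs of real image and text encoders, and the temperature value is illustrative rather than taken from any specific paper.

```python
# Schematic sketch of a CLIP-style symmetric contrastive loss.
# The embeddings stand in for the outputs of real vision/text encoders.
import torch
import torch.nn.functional as F

def contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # Project embeddings onto the unit sphere so the dot product is a cosine similarity
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Pairwise similarities between every image and every text in the batch
    logits = image_embeds @ text_embeds.t() / temperature

    # The i-th image matches the i-th text: the diagonal entries are the positives
    targets = torch.arange(logits.size(0), device=logits.device)

    loss_i2t = F.cross_entropy(logits, targets)      # match images to texts
    loss_t2i = F.cross_entropy(logits.t(), targets)  # match texts to images
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random "embeddings" for a batch of 8 image-text pairs
image_embeds = torch.randn(8, 512)
text_embeds = torch.randn(8, 512)
print(contrastive_loss(image_embeds, text_embeds))
```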
For CLIP, the distance is simply the cosine distance between the text and image embeddings, whereas models such as ALIGN and DeCLIP design their own distance metrics to account for noisy datasets. Another work, [LiT](https://arxiv.org/abs/2111.07991), introduces a simple method for fine-tuning the text encoder using the CLIP pre-training objective while keeping the image encoder frozen. The authors interpret this idea as _a way to teach the text encoder to better read image embeddings from the image encoder_. This approach has been shown to be effective and is more sample efficient than CLIP. Other works, such as [FLAVA](https://arxiv.org/abs/2112.04482), use a combination of contrastive learning and other pretraining strategies to align vision and language embeddings. ### 2) PrefixLM <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/prefixlm.png" alt="PrefixLM"><br> <em>A diagram of the PrefixLM pre-training strategy (<a ahref=https://ai.googleblog.com/2021/10/simvlm-simple-visual-language-model-pre.html>image source<a>)</em> </p> Another approach to training vision-language models is using a PrefixLM objective. Models such as [SimVLM](https://arxiv.org/abs/2108.10904) and [VirTex](https://arxiv.org/abs/2006.06666v3) use this pre-training objective and feature a unified multi-modal architecture consisting of a transformer encoder and transformer decoder, similar to that of an autoregressive language model. Let’s break this down and see how this works. Language models with a prefix objective predict the next token given an input text as the prefix. For example, given the sequence “A man is standing at the corner”, we can use “A man is standing at the” as the prefix and train the model with the objective of predicting the next token - “corner” or another plausible continuation of the prefix. Visual transformers (ViT) apply the same concept of the prefix to images by dividing each image into a number of patches and sequentially feeding these patches to the model as inputs. Leveraging this idea, SimVLM features an architecture where the encoder receives a concatenated image patch sequence and prefix text sequence as the prefix input, and the decoder then predicts the continuation of the textual sequence. The diagram above depicts this idea. The SimVLM model is first pre-trained on a text dataset without image patches present in the prefix and then on an aligned image-text dataset. These models are used for image-conditioned text generation/captioning and VQA tasks. Models that leverage a unified multi-modal architecture to fuse visual information into a language model (LM) for image-guided tasks show impressive capabilities. However, models that solely use the PrefixLM strategy can be limited in terms of application areas as they are mainly designed for image captioning or visual question-answering downstream tasks. For example, given an image of a group of people, we can query the image to write a description of the image (e.g., “A group of people is standing together in front of a building and smiling”) or query it with questions that require visual reasoning: “How many people are wearing red t-shirts?”. On the other hand, models that learn multi-modal representations or adopt hybrid approaches can be adapted for various other downstream tasks, such as object detection and image segmentation. 
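As a schematic illustration of the PrefixLM idea (and not the actual SimVLM or VirTex implementation), the sketch below projects image patch features, prepends them to the text embeddings, and computes a language-modeling loss on the text tokens only. For simplicity it uses a fully causal mask, whereas PrefixLM objectives typically allow bidirectional attention within the prefix; all dimensions and modules are illustrative.

```python
# Schematic PrefixLM sketch: projected image patch features act as a prefix to the text
# embeddings, and the language-modeling loss is computed on the text tokens only.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, n_patches, seq_len = 1000, 256, 16, 12

patch_proj = nn.Linear(768, d_model)            # project ViT-style patch features to the LM width
token_embed = nn.Embedding(vocab_size, d_model)
decoder = nn.TransformerEncoder(                 # small causal transformer standing in for the LM
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)
lm_head = nn.Linear(d_model, vocab_size)

patches = torch.randn(2, n_patches, 768)             # dummy image patch features (batch of 2)
tokens = torch.randint(0, vocab_size, (2, seq_len))  # dummy caption token ids

# Prefix = image patches, followed by the (shifted) text tokens
inputs = torch.cat([patch_proj(patches), token_embed(tokens[:, :-1])], dim=1)
total_len = inputs.size(1)
causal_mask = torch.triu(torch.full((total_len, total_len), float("-inf")), diagonal=1)

hidden = decoder(inputs, mask=causal_mask)

# Only the positions that predict the next text token contribute to the loss
logits = lm_head(hidden[:, n_patches:, :])
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
print(loss)
```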
#### Frozen PrefixLM <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/frozen_prefixlm.png" alt="Frozen PrefixLM"><br> <em>Frozen PrefixLM pre-training strategy (<a href=https://lilianweng.github.io/posts/2022-06-09-vlm>image source</a>)</em> </p> While fusing visual information into a language model is highly effective, being able to use a pre-trained language model (LM) without the need for fine-tuning would be much more efficient. Hence, another pre-training objective in vision-language models is learning image embeddings that are aligned with a frozen language model. Models such as [Frozen](https://arxiv.org/abs/2106.13884) and [ClipCap](https://arxiv.org/abs/2111.09734) use this Frozen PrefixLM pre-training objective. They only update the parameters of the image encoder during training to generate image embeddings that can be used as a prefix to the pre-trained, frozen language model in a similar fashion to the PrefixLM objective discussed above. Both Frozen and ClipCap are trained on aligned image-text (caption) datasets with the objective of generating the next token in the caption, given the image embeddings and the prefix text. Finally, models such as [MAPL](https://arxiv.org/abs/2210.07179) and [Flamingo](https://arxiv.org/abs/2204.14198) keep both the pre-trained vision encoder and language model frozen. Flamingo sets a new state-of-the-art in few-shot learning on a wide range of open-ended vision and language tasks by adding Perceiver Resampler modules on top of the pre-trained frozen vision model and inserting new cross-attention layers between existing pre-trained and frozen LM layers to condition the LM on visual data. A nifty advantage of the Frozen PrefixLM pre-training objective is it enables training with limited aligned image-text data, which is particularly useful for domains where aligned multi-modal datasets are not available. ### 3) Multi-modal Fusing with Cross Attention <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/cross_attention_fusing.png" alt="Cross Attention Fusing" width=500><br> <em> Fusing visual information with a cross-attention mechanism as shown (<a href=https://www.semanticscholar.org/paper/VisualGPT%3A-Data-efficient-Adaptation-of-Pretrained-Chen-Guo/616e0ed02ca024a8c1d4b86167f7486ea92a13d9>image source</a>)</em> </p> Another approach to leveraging pre-trained language models for multi-modal tasks is to directly fuse visual information into the layers of a language model decoder using a cross-attention mechanism instead of using images as additional prefixes to the language model. Models such as [VisualGPT](https://arxiv.org/abs/2102.10407), [VC-GPT](https://arxiv.org/abs/2201.12723), and [Flamingo](https://arxiv.org/abs/2204.14198) use this pre-training strategy and are trained on image captioning and visual question-answering tasks. The main goal of such models is to balance the mixture of text generation capacity and visual information efficiently, which is highly important in the absence of large multi-modal datasets. Models such as VisualGPT use a visual encoder to embed images and feed the visual embeddings to the cross-attention layers of a pre-trained language decoder module to generate plausible captions. 
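The sketch below gives an illustrative, heavily simplified view of how such cross-attention fusing can be wired up: text hidden states act as queries over image features, and a learned tanh gate (initialized at zero, loosely in the spirit of Flamingo's gated cross-attention) controls how much visual information flows into the language model. It is not the implementation of any of the models mentioned above.

```python
# Illustrative sketch of gated cross-attention fusing: text hidden states (queries)
# attend over image features (keys/values). Not the actual VisualGPT/Flamingo code.
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # A tanh gate initialized at zero: the block starts as an identity mapping,
        # so the pre-trained language model is undisturbed at the start of training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, image_feats):
        attended, _ = self.cross_attn(
            query=self.norm(text_hidden), key=image_feats, value=image_feats
        )
        return text_hidden + torch.tanh(self.gate) * attended

# Toy usage: 2 sequences of 10 text states attending to 49 image patch features each
block = GatedCrossAttentionBlock()
text_hidden = torch.randn(2, 10, 512)
image_feats = torch.randn(2, 49, 512)
print(block(text_hidden, image_feats).shape)  # torch.Size([2, 10, 512])
```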
A more recent work, [FIBER](http://arxiv.org/abs/2206.07643), inserts cross-attention layers with a gating mechanism into both vision and language backbones, for more efficient multi-modal fusing and enables various other downstream tasks, such as image-text retrieval and open vocabulary object detection. ### 4) Masked-Language Modeling / Image-Text Matching Another line of vision-language models uses a combination of Masked-Language Modeling (MLM) and Image-Text Matching (ITM) objectives to align specific parts of images with text and enable various downstream tasks such as visual question answering, visual commonsense reasoning, text-based image retrieval, and text-guided object detection. Models that follow this pre-training setup include [VisualBERT](https://arxiv.org/abs/1908.03557), [FLAVA](https://arxiv.org/abs/2112.04482), [ViLBERT](https://arxiv.org/abs/1908.02265), [LXMERT](https://arxiv.org/abs/1908.07490) and [BridgeTower](https://arxiv.org/abs/2206.08657). <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/mlm_itm.png" alt="MLM / ITM"><br> <em> Aligning parts of images with text (<a href=https://arxiv.org/abs/1908.02265>image source</a>)</em> </p> Let’s break down what MLM and ITM objectives mean. Given a partially masked caption, the MLM objective is to predict the masked words based on the corresponding image. Note that the MLM objective requires either using a richly annotated multi-modal dataset with bounding boxes or using an object detection model to generate object region proposals for parts of the input text. For the ITM objective, given an image and caption pair, the task is to predict whether the caption matches the image or not. The negative samples are usually randomly sampled from the dataset itself. The MLM and ITM objectives are often combined during the pre-training of multi-modal models. For instance, VisualBERT proposes a BERT-like architecture that uses a pre-trained object detection model, [Faster-RCNN](https://arxiv.org/abs/1506.01497), to detect objects. This model uses a combination of the MLM and ITM objectives during pre-training to implicitly align elements of an input text and regions in an associated input image with self-attention. Another work, FLAVA, consists of an image encoder, a text encoder, and a multi-modal encoder to fuse and align the image and text representations for multi-modal reasoning, all of which are based on transformers. In order to achieve this, FLAVA uses a variety of pre-training objectives: MLM, ITM, as well as Masked-Image Modeling (MIM), and contrastive learning. ### 5) No Training Finally, various optimization strategies aim to bridge image and text representations using the pre-trained image and text models or adapt pre-trained multi-modal models to new downstream tasks without additional training. For example, [MaGiC](https://arxiv.org/abs/2205.02655) proposes iterative optimization through a pre-trained autoregressive language model to generate a caption for the input image. To do this, MaGiC computes a CLIP-based “Magic score” using CLIP embeddings of the generated tokens and the input image. 
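This kind of CLIP-based image-text score is straightforward to reproduce. The sketch below is not MaGiC itself, only an illustration of how candidate captions can be scored against an image with the publicly available `openai/clip-vit-base-patch32` checkpoint in 🤗 Transformers.

```py
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
captions = ["two cats sleeping on a couch", "a plate of food", "a city skyline at night"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# cosine-similarity-based logits between the image and each candidate caption
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, prob in zip(captions, probs[0].tolist()):
    print(f"{caption}: {prob:.3f}")
```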
<p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/asif.png" alt="ASIF" width=500><br> <em>Crafting a similarity search space using pre-trained, frozen unimodal image and text encoders (<a href=https://luca.moschella.dev/publication/norelli-asif-2022>image source</a>)</em> </p> [ASIF](https://arxiv.org/abs/2210.01738) proposes a simple method to turn pre-trained uni-modal image and text models into a multi-modal model for image captioning using a relatively small multi-modal dataset without additional training. The key intuition behind ASIF is that captions of similar images are also similar to each other. Hence we can perform a similarity-based search by crafting a relative representation space using a small dataset of ground-truth multi-modal pairs. ## Datasets Vision-language models are typically trained on large image and text datasets with different structures based on the pre-training objective. After they are pre-trained, they are further fine-tuned on various downstream tasks using task-specific datasets. This section provides an overview of some popular pre-training and downstream datasets used for training and evaluating vision-language models. ### Pre-training datasets Vision-language models are typically pre-trained on large multi-modal datasets harvested from the web in the form of matching image/video and text pairs. The text data in these datasets can be human-generated captions, automatically generated captions, image metadata, or simple object labels. Some examples of such large datasets are [PMD](https://huggingface.co/datasets/facebook/pmd) and [LAION-5B](https://laion.ai/blog/laion-5b/). The PMD dataset combines multiple smaller datasets such as the [Flickr30K](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset), [COCO](https://cocodataset.org/), and [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) datasets. The COCO detection and image captioning (>330K images) datasets consist of image instances paired with the text labels of the objects each image contains, and natural sentence descriptions, respectively. The Conceptual Captions (> 3.3M images) and Flickr30K (> 31K images) datasets are scraped from the web along with their captions - free-form sentences describing the image. Even image-text datasets consisting solely of human-generated captions, such as Flickr30K, are inherently noisy as users only sometimes write descriptive or reflective captions for their images. To overcome this issue, datasets such as the LAION-5B dataset leverage CLIP or other pre-trained multi-modal models to filter noisy data and create high-quality multi-modal datasets. Furthermore, some vision-language models, such as ALIGN, propose further preprocessing steps and create their own high-quality datasets. Other vision-language datasets, such as the [LSVTD](https://davar-lab.github.io/dataset/lsvtd.html) and [WebVid](https://github.com/m-bain/webvid) datasets, consist of video and text modalities, although at a smaller scale. ### Downstream datasets Pre-trained vision-language models are often trained on various downstream tasks such as visual question-answering, text-guided object detection, text-guided image inpainting, multi-modal classification, and various stand-alone NLP and computer vision tasks. 
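Most of the datasets mentioned in this section, both the pre-training corpora above and the downstream benchmarks below, are hosted on the Hugging Face Hub and can be loaded with 🤗 Datasets. Here is a small sketch using Conceptual Captions; the split and field names are assumptions on our part, so check the dataset card before relying on them.

```py
from datasets import load_dataset

# stream the dataset so the full corpus is not downloaded up front
dataset = load_dataset("conceptual_captions", split="train", streaming=True)

# each example pairs an image URL with its caption (see the dataset card for the exact fields)
print(next(iter(dataset)))
```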
Models fine-tuned on the question-answering downstream task, such as [ViLT](https://arxiv.org/abs/2102.03334) and [GLIP](https://arxiv.org/abs/2112.03857), most commonly use the [VQA](https://visualqa.org/) (visual question-answering), [VQA v2](https://visualqa.org/), [NLVR2](https://lil.nlp.cornell.edu/nlvr/), [OKVQA](https://okvqa.allenai.org/), [TextVQA](https://huggingface.co/datasets/textvqa), [TextCaps](https://textvqa.org/textcaps/) and [VizWiz](https://vizwiz.org/) datasets. These datasets typically contain images paired with multiple open-ended questions and answers. Furthermore, datasets such as VizWiz and TextCaps can also be used for image segmentation and object localization downstream tasks. Some other interesting multi-modal downstream datasets are [Hateful Memes](https://huggingface.co/datasets/limjiayi/hateful_memes_expanded) for multi-modal classification, [SNLI-VE](https://github.com/necla-ml/SNLI-VE) for visual entailment prediction, and [Winoground](https://huggingface.co/datasets/facebook/winoground) for visio-linguistic compositional reasoning. Note that vision-language models are used for various classical NLP and computer vision tasks such as text or image classification and typically use uni-modal datasets ([SST2](https://huggingface.co/datasets/sst2), [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), for example) for such downstream tasks. In addition, datasets such as [COCO](https://cocodataset.org/) and [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) are commonly used both in the pre-training of models and also for the caption generation downstream task. ## Supporting Vision-Language Models in 🤗 Transformers Using Hugging Face Transformers, you can easily download, run and fine-tune various pre-trained vision-language models or mix and match pre-trained vision and language models to create your own recipe. 
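One way to "mix and match" is the `VisionTextDualEncoderModel`, which wraps any pre-trained vision encoder and any text encoder behind a CLIP-style dual-encoder interface. The sketch below is only a starting point: the checkpoint pairing is an arbitrary example, and the newly added projection layers are randomly initialised, so the combined model still needs CLIP-style contrastive fine-tuning before its image-text similarities are meaningful.

```py
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# pair an arbitrary vision encoder with an arbitrary text encoder
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# the projection layers are randomly initialised: fine-tune contrastively before use
model.save_pretrained("my-vision-text-dual-encoder")
processor.save_pretrained("my-vision-text-dual-encoder")
```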
Some of the vision-language models supported by 🤗 Transformers are: * [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) * [FLAVA](https://huggingface.co/docs/transformers/main/en/model_doc/flava) * [GIT](https://huggingface.co/docs/transformers/main/en/model_doc/git) * [BridgeTower](https://huggingface.co/docs/transformers/main/en/model_doc/bridgetower) * [GroupViT](https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/groupvit) * [BLIP](https://huggingface.co/docs/transformers/main/en/model_doc/blip) * [OWL-ViT](https://huggingface.co/docs/transformers/main/en/model_doc/owlvit) * [CLIPSeg](https://huggingface.co/docs/transformers/main/en/model_doc/clipseg) * [X-CLIP](https://huggingface.co/docs/transformers/main/en/model_doc/xclip) * [VisualBERT](https://huggingface.co/docs/transformers/main/en/model_doc/visual_bert) * [ViLT](https://huggingface.co/docs/transformers/main/en/model_doc/vilt) * [LiT](https://huggingface.co/docs/transformers/main/en/model_doc/vision-text-dual-encoder) (an instance of the `VisionTextDualEncoder`) * [TrOCR](https://huggingface.co/docs/transformers/main/en/model_doc/trocr) (an instance of the `VisionEncoderDecoderModel`) * [`VisionTextDualEncoder`](https://huggingface.co/docs/transformers/main/en/model_doc/vision-text-dual-encoder) * [`VisionEncoderDecoderModel`](https://huggingface.co/docs/transformers/main/en/model_doc/vision-encoder-decoder) While models such as CLIP, FLAVA, BridgeTower, BLIP, LiT and `VisionEncoderDecoder` models provide joint image-text embeddings that can be used for downstream tasks such as zero-shot image classification, other models are trained on interesting downstream tasks. In addition, FLAVA is trained with both unimodal and multi-modal pre-training objectives and can be used for both unimodal vision or language tasks and multi-modal tasks. For example, OWL-ViT [enables](https://huggingface.co/spaces/adirik/OWL-ViT) zero-shot / text-guided and one-shot / image-guided object detection, CLIPSeg and GroupViT [enable](https://huggingface.co/spaces/nielsr/CLIPSeg) text and image-guided image segmentation, and VisualBERT, GIT and ViLT [enable](https://huggingface.co/spaces/nielsr/vilt-vqa) visual question answering as well as various other tasks. X-CLIP is a multi-modal model trained with video and text modalities and [enables](https://huggingface.co/spaces/fcakyon/zero-shot-video-classification) zero-shot video classification similar to CLIP’s zero-shot image classification capabilities. Unlike other models, the `VisionEncoderDecoderModel` is a cookie-cutter model that can be used to initialize an image-to-text model with any pre-trained Transformer-based vision model as the encoder (e.g. ViT, BEiT, DeiT, Swin) and any pre-trained language model as the decoder (e.g. RoBERTa, GPT2, BERT, DistilBERT). In fact, TrOCR is an instance of this cookie-cutter class. Let’s go ahead and experiment with some of these models. We will use [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt) for visual question answering and [CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg) for zero-shot image segmentation. First, let’s install 🤗Transformers: `pip install transformers`. ### ViLT for VQA Let’s start with ViLT and download a model pre-trained on the VQA dataset. We can do this by simply initializing the corresponding model class and calling the `from_pretrained()` method to download our desired checkpoint. 
```py from transformers import ViltProcessor, ViltForQuestionAnswering model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa") ``` Next, we will download a random image of two cats and preprocess both the image and our query question to transform them to the input format expected by the model. To do this, we can conveniently use the corresponding preprocessor class (`ViltProcessor`) and initialize it with the preprocessing configuration of the corresponding checkpoint. ```py import requests from PIL import Image processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa") # download an input image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = "How many cats are there?" # prepare inputs inputs = processor(image, text, return_tensors="pt") ``` Finally, we can perform inference using the preprocessed image and question as input and print the predicted answer. However, an important point to keep in mind is to make sure your text input resembles the question templates used in the training setup. You can refer to [the paper and the dataset](https://arxiv.org/abs/2102.03334) to learn how the questions are formed. ```py import torch # forward pass with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits idx = logits.argmax(-1).item() print("Predicted answer:", model.config.id2label[idx]) ``` Straight-forward, right? Let’s do another demonstration with CLIPSeg and see how we can perform zero-shot image segmentation with a few lines of code. ### CLIPSeg for zero-shot image segmentation We will start by initializing `CLIPSegForImageSegmentation` and its corresponding preprocessing class and load our pre-trained model. ```py from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") ``` Next, we will use the same input image and query the model with the text descriptions of all objects we want to segment. Similar to other preprocessors, `CLIPSegProcessor` transforms the inputs to the format expected by the model. As we want to segment multiple objects, we input the same image for each text description separately. ```py from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a cat", "a remote", "a blanket"] inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt") ``` Similar to ViLT, it’s important to refer to the [original work](https://arxiv.org/abs/2112.10003) to see what kind of text prompts are used to train the model in order to get the best performance during inference. While CLIPSeg is trained on simple object descriptions (e.g., “a car”), its CLIP backbone is pre-trained on engineered text templates (e.g., “an image of a car”, “a photo of a car”) and kept frozen during training. Once the inputs are preprocessed, we can perform inference to get a binary segmentation map of shape (height, width) for each text query. ```py import torch with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits print(logits.shape) >>> torch.Size([3, 352, 352]) ``` Let’s visualize the results to see how well CLIPSeg performed (code is adapted from [this post](https://huggingface.co/blog/clipseg-zero-shot)). 
```py import matplotlib.pyplot as plt logits = logits.unsqueeze(1) _, ax = plt.subplots(1, len(texts) + 1, figsize=(3*(len(texts) + 1), 12)) [a.axis('off') for a in ax.flatten()] ax[0].imshow(image) [ax[i+1].imshow(torch.sigmoid(logits[i][0])) for i in range(len(texts))]; [ax[i+1].text(0, -15, prompt) for i, prompt in enumerate(texts)] ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/128_vision_language_pretraining/clipseg_result.png" alt="CLIPSeg results"> </p> Amazing, isn’t it? Vision-language models enable a plethora of useful and interesting use cases that go beyond just VQA and zero-shot segmentation. We encourage you to try out the different use cases supported by the models mentioned in this section. For sample code, refer to the respective documentation of the models. ## Emerging Areas of Research With the massive advances in vision-language models, we see the emergence of new downstream tasks and application areas, such as medicine and robotics. For example, vision-language models are increasingly getting adopted for medical use cases, resulting in works such as [Clinical-BERT](https://ojs.aaai.org/index.php/AAAI/article/view/20204) for medical diagnosis and report generation from radiographs and [MedFuseNet](https://www.nature.com/articles/s41598-021-98390-1) for visual question answering in the medical domain. We also see a massive surge of works that leverage joint vision-language representations for image manipulation (e.g., [StyleCLIP](https://arxiv.org/abs/2103.17249), [StyleMC](https://arxiv.org/abs/2112.08493), [DiffusionCLIP](https://arxiv.org/abs/2110.02711)), text-based video retrieval (e.g., [X-CLIP](https://arxiv.org/abs/2207.07285)) and manipulation (e.g., [Text2Live](https://arxiv.org/abs/2204.02491)) and 3D shape and texture manipulation (e.g., [AvatarCLIP](https://arxiv.org/abs/2205.08535), [CLIP-NeRF](https://arxiv.org/abs/2112.05139), [Latent3D](https://arxiv.org/abs/2202.06079), [CLIPFace](https://arxiv.org/abs/2212.01406), [Text2Mesh](https://arxiv.org/abs/2112.03221)). In a similar line of work, [MVT](https://arxiv.org/abs/2204.02174) proposes a joint 3D scene-text representation model, which can be used for various downstream tasks such as 3D scene completion. While robotics research hasn’t leveraged vision-language models on a wide scale yet, we see works such as [CLIPort](https://arxiv.org/abs/2109.12098) leveraging joint vision-language representations for end-to-end imitation learning and reporting large improvements over previous SOTA. We also see that large language models are increasingly getting adopted in robotics tasks such as common sense reasoning, navigation, and task planning. For example, [ProgPrompt](https://arxiv.org/abs/2209.11302) proposes a framework to generate situated robot task plans using large language models (LLMs). Similarly, [SayCan](https://say-can.github.io/assets/palm_saycan.pdf) uses LLMs to select the most plausible actions given a visual description of the environment and available objects. While these advances are impressive, robotics research is still confined to limited sets of environments and objects due to the limitation of object detection datasets. With the emergence of open-vocabulary object detection models such as [OWL-ViT](https://arxiv.org/abs/2205.06230) and [GLIP](https://arxiv.org/abs/2112.03857), we can expect a tighter integration of multi-modal models with robotic navigation, reasoning, manipulation, and task-planning frameworks. 
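Since OWL-ViT is already available in 🤗 Transformers, trying out open-vocabulary detection only takes a few lines. The snippet below is a rough sketch: the score threshold is arbitrary, and the post-processing call reflects recent versions of the library, so check the OWL-ViT documentation if the API has changed.

```py
import requests
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
queries = [["a photo of a cat", "a photo of a remote control"]]

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw outputs to boxes and scores in the original image coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(queries[0][label], round(score.item(), 3), [round(c) for c in box.tolist()])
```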
## Conclusion

There have been incredible advances in multi-modal models in recent years, with vision-language models making the most significant leap in performance and in the variety of use cases and applications. In this blog, we talked about the latest advancements in vision-language models, as well as what multi-modal datasets are available and which pre-training strategies we can use to train and fine-tune such models. We also showed how these models are integrated into 🤗 Transformers and how you can use them to perform various tasks with a few lines of code.

We are continuing to integrate the most impactful computer vision and multi-modal models and would love to hear back from you. To stay up to date with the latest news in multi-modal research, you can follow us on Twitter: [@adirik](https://twitter.com/alaradirik), [@NielsRogge](https://twitter.com/NielsRogge), [@apsdehal](https://twitter.com/apsdehal), [@a_e_roberts](https://twitter.com/a_e_roberts), [@RisingSayak](https://twitter.com/RisingSayak), and [@huggingface](https://twitter.com/huggingface).

*Acknowledgements: We thank Amanpreet Singh and Amy Roberts for their rigorous reviews. Also, thanks to Niels Rogge, Younes Belkada, and Suraj Patil, among many others at Hugging Face, who laid out the foundations for increasing the use of multi-modal models from Transformers.*
hf_public_repos/blog/prezi-case-study.md
--- title: "Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap" thumbnail: /blog/assets/70_sempre_health/thumbnailprezi.jpg authors: - user: Violette - user: jeffboudier - user: MoritzLaurer - user: bmateusz guest: true --- # Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap Everybody knows that a great visual is worth a thousand words. The team at Prezi, a visual communications software company, is putting this insight into practice with their Prezi presentations that combine images and text in highly dynamic presentations. Prezi has joined the Hugging Face Expert Support Program to fully leverage modern machine learning's potential. Over the past months, Hugging Face has supported Prezi in integrating smaller, more efficient open-source models into their ML workflows. This cooperation started at a perfect time, as multimodal models are becoming increasingly capable. We recently sat down with [Máté Börcsök](https://www.linkedin.com/in/mateborcsok/?originalSubdomain=hu), a backend engineer at [Prezi](https://prezi.com/), to talk about their experience in the [Expert Support Program](https://huggingface.co/support). In this short video, Máté walks us through some of their machine learning work and shares their experience collaborating with our team via the Expert Support Program. <iframe width="100%" style="aspect-ratio: 16 / 9;" src="https://www.youtube.com/embed/pM6D0tRoIbI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> _If you'd like to accelerate your machine learning roadmap with the help of our experts, as Máté and his team did, visit [hf.co/support](https://huggingface.co/support) to learn more about our Expert Support Program and request a quote._ ## Transcript with additional details: ### Introduction My name is Máté, and I am a backend engineer at Prezi, an online presentation tool that brings your ideas to life. ### How does the HF Expert Support Program help you build AI? Our flagship AI product at Prezi is Prezi AI, which helps our users create better Prezi presentations faster. Users start by providing a prompt and description of the presentation they want to create. The system then automatically creates a draft presentation for them to get started. It’s a complex system that calls different services and builds up the presentation’s structure using closed models and various asset provider services. When we joined the program, we already had a version of this system, and our expert reviewed the flow and suggested improvements. Our pipeline includes a search system to find suitable assets (images and texts) for each unique presentation. In this context, an important piece of advice was, for example, to add an open-source re-ranker model to the system, which can find the best images or texts for your presentation cheaper, faster, and better than an LLM. Our use cases are inherently multi-modal as our presentations combine images and text. There are a lot of models released every week, and our expert helps us cut through the hype and understand which models are useful for us and which are not. This helps us save a lot of time, as we are using a combination of vision models, text models, and vision-language models (VLMs) to solve our unique challenges. Multimodal machine learning is challenging, and the guidance is really appreciated. 
We are not Machine Learning Engineers, and we are learning this together on the way.

### What’s your favorite feature of Inference Endpoints?

I highly recommend you check out the [Endpoint Model Catalog](https://ui.endpoints.huggingface.co/catalog). It is a curated list of models that work well with Inference Endpoints and require zero configuration. I love that you can set it up so that the Endpoint goes to sleep after a few minutes, so it won’t burn money. It also supports single and quad A100 instances required for some models.

Keeping the models updated is also straightforward. Inference Endpoints let us deploy the latest version with a single click or roll back to any older version using the Git hash. None of these features are easily available on AWS, so it was very convenient for us to use them.

Even if a model is not in the [catalog](https://ui.endpoints.huggingface.co/catalog) yet, it’s relatively easy to make it work. At least it was easy for me, with our expert supporting us.

### What teams would benefit most from Expert Support?

The Hugging Face partnership opened the doors of machine learning for us. Our dedicated expert gives us access to a community of machine learning experts who can give feedback on our wildest questions.

As I said earlier, we are not Machine Learning Engineers. Our expert guides us to work on the right things, sharing best practices and state-of-the-art models for embedding, re-ranking, and object detection, and showing us how to fine-tune new vision language models and collect and curate data. These are mostly things we can do ourselves, but his guidance gives a huge speedup and keeps us focused on meaningful tasks for our users.

---

With the Expert Support Program, we've put together a world-class team to help customers build better ML solutions, faster. Our experts answer questions and find solutions as needed in your machine learning journey from research to production.

Visit [hf.co/support](https://huggingface.co/support) to learn more and request a quote.
hf_public_repos/blog/_tags.yml
- value: community label: community - value: guide label: guide - value: open-source-collab label: open source collab - value: partnerships label: partnerships - value: research label: research - value: nlp label: NLP - value: audio label: Audio - value: cv label: CV - value: rl label: RL - value: ethics label: ethics - value: diffusion label: Diffusion - value: game-dev label: Game Development - value: rlhf label: RLHF - value: leaderboard label: Leaderboard - value: case-studies label: Case Studies
hf_public_repos/blog/leaderboard-bigcodebench.md
--- title: "BigCodeBench: The Next Generation of HumanEval" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_bigcode.png authors: - user: terryyz guest: true org: bigcode - user: ganler guest: true org: bigcode - user: SivilTaram guest: true org: bigcode - user: huybery guest: true org: bigcode - user: Muennighoff guest: true org: bigcode - user: dpfried guest: true org: bigcode - user: harmdevries guest: true org: bigcode - user: lvwerra org: bigcode - user: clefourrier --- # BigCodeBench: The Next Generation of HumanEval [HumanEval](https://github.com/openai/human-eval) is a reference benchmark for evaluating large language models (LLMs) on code generation tasks, as it makes the evaluation of compact function-level code snippets easy. However, there are growing concerns about its effectiveness in evaluating the programming capabilities of LLMs, and the main concern is that tasks in HumanEval are too simple and may not be representative of real-world programming tasks. Compared to the algorithm-oriented tasks in HumanEval, real-world software development often involves diverse libraries and function calls. Furthermore, LLMs' performance on HumanEval is subject to [contamination and overfitting issues](https://arxiv.org/abs/2403.07974), making it less reliable for evaluating the generalization of LLMs. While there have been some efforts to address these issues, they are either domain-specific, deterministic, or agent-centric (sorry [DS-1000](https://github.com/HKUNLP/DS-1000), [ODEX](https://github.com/zorazrw/odex), and [SWE-bench](https://github.com/princeton-nlp/SWE-bench) 💔). We feel that the community still lacks an easy-to-use benchmark that can broadly evaluate the programming capabilities of LLMs, and that's what we focused on. We are excited to announce the release of BigCodeBench, which evaluates LLMs on solving practical and challenging programming tasks without contamination. Specifically, BigCodeBench contains 1,140 function-level tasks to challenge LLMs to follow instructions and compose multiple function calls as tools from 139 libraries. To evaluate LLMs rigorously, each programming task encompasses 5.6 test cases with an average branch coverage of 99%. Ready to dive into BigCodeBench? Let's get started! 🚀 ## What do the tasks in BigCodeBench look like? 🕵️‍♂️ <img src="https://github.com/bigcode-bench/bigcode-bench.github.io/blob/main/asset/tease.svg?raw=true" alt="task" style="display: block; margin-left: auto; margin-right: auto;"> BigCodeBench features complex, user-oriented instructions for each task, including clear functionality descriptions, input/output formats, error handling, and verified interactive examples. We avoid step-by-step task instructions, believing capable LLMs should understand and solve tasks from the user's perspective in an open-ended manner. We verify specific features using test cases. ```python # We elaborate the above task with some test cases: # Requirements SetUp import unittest from unittest.mock import patch import http.client import ssl import socket # Start the test class TestCases(unittest.TestCase): # Mock the successful connection and assess the response content @patch('http.client.HTTPSConnection') def test_response_content(self, mock_conn): """ Test the content of the response. 
""" mock_conn.return_value.getresponse.return_value.read.return_value = b'Expected Content' result = task_func('www.example.com', 443, '/content/path') self.assertEqual(result, 'Expected Content') # Mock the failed connection and assess the error handling @patch('socket.create_connection') @patch('http.client.HTTPSConnection') def test_ssl_handshake_error_handling(self, mock_conn, mock_socket): """ Test handling of SSL handshake errors. """ mock_socket.side_effect = ssl.SSLError('SSL handshake failed') with self.assertRaises(ssl.SSLError): task_func('badssl.com', 443, '/test/path') # More test cases... ``` Tasks in BigCodeBench utilize diverse function calls from popular libraries. We don't restrict the function calls LLMs can use, expecting them to choose appropriate functions and combine them flexibly to solve tasks. Test cases are designed as test harnesses to examine expected program behaviors during runtime. To assess LLM performance, we use Pass@1 with greedy decoding, measuring the percentage of tasks correctly solved with the first generated code snippet via curated test cases. This approach aligns with benchmarks like [HumanEval](https://github.com/openai/human-eval) and [MBPP](https://github.com/google-research/google-research/tree/master/mbpp). We address LLMs' tendency to skip long code prompts by adding missing setups (e.g., import statements, global constants) during Pass@1 evaluation, referred to as calibrated Pass@1. <img src="https://github.com/bigcode-bench/bigcode-bench.github.io/blob/main/asset/depth-breadth.svg?raw=true" alt="comparison" style="display: block; margin-left: auto; margin-right: auto; width: 50%;"> To better understand implementation complexity and tool-use diversity, we compare the tasks in BigCodeBench with those in representative benchmarks, including [APPS](https://github.com/hendrycks/apps), [DS-1000](https://github.com/HKUNLP/DS-1000), [ODEX](https://github.com/zorazrw/odex), [APIBench](https://github.com/ShishirPatil/gorilla/tree/main/data/apibench), [MBPP](https://github.com/google-research/google-research/tree/master/mbpp), [NumpyEval](https://github.com/microsoft/PyCodeGPT/tree/main/cert/pandas-numpy-eval), [PandasEval](https://github.com/microsoft/PyCodeGPT/tree/main/cert/pandas-numpy-eval), [HumanEval](https://github.com/openai/human-eval), and [TorchDataEval](https://github.com/microsoft/PyCodeGPT/tree/main/apicoder/private-eval). We find that BigCodeBench requires more complex reasoning and problem-solving skills to implement comprehensive functionalities. <img src="https://github.com/bigcode-bench/bigcode-bench.github.io/blob/main/asset/bigcodebench_prompt.svg?raw=true" alt="prompt" style="display: block; margin-left: auto; margin-right: auto; width: 70%;"> As shown in the task figure, the main target scenario is code completion (denoted as `BigCodeBench-Complete`), where LLMs are required to finish the implementation of a function based on detailed instructions in the docstring. However, considering downstream applications such as multi-turn dialogue, users may describe requirements in a more conversational and less verbose manner. This is where instruction-tuned LLMs are beneficial, as they are trained to follow natural-language instructions and generate code snippets accordingly. To test if models can truly understand human intents and translate them into code, we create `BigCodeBench-Instruct`, a more challenging variant of BigCodeBench designed to evaluate instruction-tuned LLMs. ## Where do the tasks come from? 
🤔 <img src="https://github.com/bigcode-bench/bigcode-bench.github.io/blob/main/asset/construct_pipeline.svg?raw=true" alt="png" style="display: block; margin-left: auto; margin-right: auto;"> We guarantee the quality of the tasks in BigCodeBench through a systematic "Human-LLM collaboration process." We start with [ODEX](https://github.com/zorazrw/odex) as the "seed dataset," which contains short but realistic human intents and corresponding Python one-liners from Stack Overflow. We use GPT-4 to expand these one-liners into comprehensive function-level tasks. Next, 20 human experts—most with over 5 years of Python programming experience—voluntarily guide GPT-4 in an execution-based sandbox. They continually instruct it to refine the synthesized tasks and add test cases. The tasks and test cases are then examined in a local environment, pre-evaluated on other LLMs, and cross-checked by 7 additional human experts to ensure their quality. To assert overall quality, the authors sample tasks for 11 human experts to solve, achieving an average human performance of 97%. ## How well do LLMs perform on BigCodeBench? 📊 We host the BigCodeBench leaderboard on both [Hugging Face Space](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard) and [GitHub Pages](https://bigcode-bench.github.io/). Here, we use the Hugging Face leaderboard as an example. <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/4.36.1/gradio.js" ></script> <gradio-app theme_mode="light" space="bigcode/bigcodebench-leaderboard"></gradio-app> Interestingly, we observe that instruction-tuned LLMs like GPT-4 can omit essential import statements in the long prompts of `BigCodeBench-Complete`, leading to task failures due to missing modules and constants. This behavior, called "model laziness", is discussed in the [community](https://community.openai.com/t/why-i-think-gpt-is-now-lazy/534332). <u>Compared to human performance, LLMs perform significantly lower on `BigCodeBench-Complete` and even lower on `BigCodeBench-Instruct`.</u> The best model (GPT-4o) achieves a calibrated Pass@1 of 61.1% on `BigCodeBench-Complete` and 51.1% on `BigCodeBench-Instruct`. Additionally, there is a notable performance gap between closed and open LLMs. While Pass@1 is a good metric for overall performance, it is not detailed enough to compare models directly. Inspired by [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/), we use Elo rating to rank models on `BigCodeBench-Complete`. This method, originally used in chess, ranks players based on their game performance. We adapt it to programming tasks, treating each task as a game and each model as a player. The Elo rating updates are based on game outcomes and expectations, using task-level calibrated Pass@1 (0% or 100%) and excluding ties. Starting with an initial Elo rating of 1000, we fit it using maximum likelihood estimation and bootstrap with 500 iterations to get final scores. <u>We find that GPT-4o outperforms other models by a large margin, with DeepSeekCoder-V2 in the second tier.</u> To help the community understand model performance on each task, we track solve rates, measured by calibrated Pass@1. On `BigCodeBench-Complete`, 149 tasks remain unsolved by all models, while 6 tasks are completely solved. For `BigCodeBench-Instruct`, 278 tasks remain unsolved and 14 tasks are fully solved by all models. The significant number of unsolved tasks and the small number of fully solved tasks show that BigCodeBench is a challenging benchmark for LLMs. ## Great! 
So, how can I evaluate my model on BigCodeBench? 🛠️

We make BigCodeBench easily accessible to the community by providing a simple and user-friendly evaluation framework, which can be downloaded via [PyPI](https://pydigger.com/pypi/bigcodebench). The prototype of the evaluation framework is based on [EvalPlus](https://github.com/evalplus/evalplus) for the HumanEval+ and MBPP+ benchmarks. However, as our benchmark has tasks with much more diverse library dependencies than EvalPlus, we build a less resource-constrained execution environment and adapt it to the `unittest`-based test harnesses of BigCodeBench.

To facilitate the evaluation, we provide pre-built Docker images for [_code generation_](https://hub.docker.com/r/bigcodebench/bigcodebench-generate) and [_code execution_](https://hub.docker.com/r/bigcodebench/bigcodebench-evaluate). Check out our [GitHub repository](https://github.com/bigcode-project/bigcodebench) to find more details on how to use the evaluation framework.

### Setup

```bash
# Install to use bigcodebench.evaluate
pip install bigcodebench --upgrade
# If you want to run the evaluation locally, you need to install the requirements
pip install -I -r https://raw.githubusercontent.com/bigcode-project/bigcodebench/main/Requirements/requirements-eval.txt

# Install to use bigcodebench.generate
# We strongly recommend installing the [generate] dependencies in a separate environment
pip install bigcodebench[generate] --upgrade
```

### Code Generation

We suggest installing `flash-attn` before generating code samples.

```bash
pip install -U flash-attn
```

To generate code samples from a model, you can use the following command:

```bash
bigcodebench.generate \
    --model [model_name] \
    --subset [complete|instruct] \
    --greedy \
    --bs [bs] \
    --temperature [temp] \
    --n_samples [n_samples] \
    --resume \
    --backend [vllm|hf|openai|mistral|anthropic|google] \
    --tp [gpu_number] \
    [--trust_remote_code] \
    [--base_url [base_url]]
```

The generated code samples will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples].jsonl`.

### Code Post-processing

LLM-generated text may not be directly compilable, as it can include natural-language lines or incomplete extra code. We provide a tool, `bigcodebench.sanitize`, to clean up the code:

```bash
# 💡 If you want to store calibrated code in jsonl:
bigcodebench.sanitize --samples samples.jsonl --calibrate
# Sanitized code will be written to `samples-sanitized-calibrated.jsonl`

# 💡 If you want to skip the calibration:
bigcodebench.sanitize --samples samples.jsonl
# Sanitized code will be written to `samples-sanitized.jsonl`

# 💡 If you are storing code in directories:
bigcodebench.sanitize --samples /path/to/vicuna-[??]b_temp_[??]
# Sanitized code will be written to `/path/to/vicuna-[??]b_temp_[??]-sanitized`
```

### Code Evaluation

We strongly recommend using a sandbox such as [Docker](https://docs.docker.com/get-docker/):

```bash
# Mount the current directory to the container
docker run -v $(pwd):/app bigcodebench/bigcodebench-evaluate:latest --subset [complete|instruct] --samples samples-sanitized-calibrated

# ...Or locally ⚠️
bigcodebench.evaluate --subset [complete|instruct] --samples samples-sanitized-calibrated

# ...If the ground truth is not working locally (due to some flaky tests)
bigcodebench.evaluate --subset [complete|instruct] --samples samples-sanitized-calibrated --no-gt
```

## What's next?
We share a long-term roadmap to address the limitations of BigCodeBench and sustainably build with the community. Our goal is to provide the community with the most open, reliable, and scalable evaluations to truly understand the fundamental capabilities of LLMs for programming and pinpoint ways to unleash their power. Specifically, we plan to enhance the following aspects of BigCodeBench:

- **Multilingualism**: Currently, BigCodeBench is Python-only and cannot be easily extended to other programming languages. Since function calls are mostly language-specific, finding packages or libraries with the same functionalities in languages other than Python is challenging.
- **Rigorousness**: While we achieve high test coverage for ground-truth solutions in BigCodeBench, it does not guarantee that _all_ code solutions generated by LLMs will be correctly assessed against existing test cases. Previous works like EvalPlus have attempted to extend limited test cases by augmenting input-output pairs via LLM- and mutation-based strategies. However, adapting EvalPlus to the test harness in BigCodeBench is challenging. While EvalPlus emphasizes the input-output assertions, most of the test harnesses in BigCodeBench require non-trivial configurations (e.g., mock patching) to examine expected program behaviors during runtime.
- **Generalization**: A key question is, "How well do the models generalize to unseen tools and tasks?" Currently, BigCodeBench covers common libraries and daily programming tasks. Benchmarking models on programming tasks that use emerging libraries like [transformers](https://github.com/huggingface/transformers) and [langchain](https://github.com/langchain-ai/langchain) would be more interesting.
- **Evolution**: Libraries can become obsolete or be updated, meaning the source code data for model training will constantly evolve. Models may not memorize function calls from deprecated library versions, posing a challenge for any tool-dependent programming benchmarks to correctly examine model capabilities without periodic updates. Another related concern is test set contamination due to evolving training data.
- **Interaction**: Recent interest focuses on the concept of _LLMs as Agents_, which is seen as a path toward artificial general intelligence. Specifically, LLMs will be grounded in a less constrained sandbox environment, where they can interact with applications such as web browsers and terminals. This environment can help unlock capabilities like [self-debugging](https://arxiv.org/pdf/2304.05128) and [self-reflection](https://arxiv.org/abs/2303.11366).

We are excited to see the community's feedback and contributions to building BigCodeBench in the long run 🤗

## Resources

We open-source all the artifacts of BigCodeBench, including the tasks, test cases, evaluation framework, and leaderboard. You can find them as follows:

- [GitHub Repository](https://github.com/bigcode-project/bigcodebench)
- [HF Data Viewer](https://huggingface.co/spaces/bigcode/bigcodebench-viewer)
- [HF Dataset](https://huggingface.co/datasets/bigcode/bigcodebench)
- [HF Leaderboard](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard)
- [GitHub Pages Leaderboard](https://bigcode-bench.github.io/)

If you have any questions or suggestions, please feel free to open an issue in the repository or contact us via [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
## Citation If you find our evaluations useful, please consider citing our work ```bibtex @article{zhuo2024bigcodebench, title={BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions}, author={Zhuo, Terry Yue and Vu, Minh Chien and Chim, Jenny and Hu, Han and Yu, Wenhao and Widyasari, Ratnadira and Yusuf, Imam Nur Bani and Zhan, Haolan and He, Junda and Paul, Indraneil and others}, journal={arXiv preprint arXiv:2406.15877}, year={2024} } ```
hf_public_repos/blog/train-optimize-sd-intel.md
---
title: Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum
thumbnail: /blog/assets/train_optimize_sd_intel/thumbnail.png
authors:
- user: AlexKoff88
  guest: true
- user: MrOpenVINO
  guest: true
- user: helenai
  guest: true
- user: sayakpaul
- user: echarlaix
---

# Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum

[**Latent Diffusion models**](https://arxiv.org/abs/2112.10752) are game changers when it comes to solving text-to-image generation problems. [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) is one of the most famous examples that got wide adoption in the community and industry. The idea behind the Stable Diffusion model is simple and compelling: you generate an image from a noise vector in multiple small steps refining the noise to a latent image representation.

This approach works very well, but it can take a long time to generate an image if you do not have access to powerful GPUs.

Over the past five years, the [OpenVINO Toolkit](https://docs.openvino.ai/) has accumulated many features for high-performance inference. Initially designed for Computer Vision models, it still dominates in this domain, showing best-in-class inference performance for many contemporary models, including [Stable Diffusion](https://huggingface.co/blog/stable-diffusion-inference-intel). However, optimizing Stable Diffusion models for resource-constrained applications requires going far beyond just runtime optimizations. And this is where model optimization capabilities from OpenVINO [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf) (NNCF) come into play.

In this blog post, we will outline the problems of optimizing Stable Diffusion models and propose a workflow that substantially reduces the latency of such models when running on resource-constrained hardware such as CPUs. In particular, we achieved **5.1x** inference acceleration and **4x** model footprint reduction compared to PyTorch.

## Stable Diffusion optimization

In the [Stable Diffusion pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), the UNet model is computationally the most expensive to run. Thus, optimizing just one model brings substantial benefits in terms of inference speed.

However, it turns out that the traditional model optimization methods, such as post-training 8-bit quantization, do not work for this model. There are two main reasons for that. First, pixel-level prediction models, such as semantic segmentation, super-resolution, etc., are among the most complicated in terms of model optimization because of the complexity of the task, so tweaking model parameters and the structure breaks the results in numerous ways. The second reason is that the model has a lower level of redundancy because it accommodates a lot of information while being trained on [hundreds of millions of samples](https://laion.ai/blog/laion-5b/). That is why researchers have to employ more sophisticated quantization methods to preserve the accuracy after optimization. For example, Qualcomm used the layer-wise Knowledge Distillation method ([AdaRound](https://arxiv.org/abs/2004.10568)) to [quantize](https://www.qualcomm.com/news/onq/2023/02/worlds-first-on-device-demonstration-of-stable-diffusion-on-android) Stable Diffusion models. This means that model tuning after quantization is required anyway.
If so, why not just use [Quantization-Aware Training](https://arxiv.org/abs/1712.05877) (QAT) which can tune the model and quantization parameters simultaneously in the same way the source model is trained? Thus, we tried this approach in our work using [NNCF](https://github.com/openvinotoolkit/nncf), [OpenVINO](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html), and [Diffusers](https://github.com/huggingface/diffusers) and coupled it with [Token Merging](https://arxiv.org/abs/2210.09461). ## Optimization workflow We usually start the optimization of a model after it's trained. Here, we start from a [model](https://huggingface.co/svjack/Stable-Diffusion-Pokemon-en) fine-tuned on the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) containing images of Pokemons and their text descriptions. We used the [text-to-image fine-tuning example](https://huggingface.co/docs/diffusers/training/text2image) for Stable Diffusion from the Diffusers and integrated QAT from NNCF into the following training [script](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino/stable-diffusion). We also changed the loss function to incorporate knowledge distillation from the source model that acts as a teacher in this process while the actual model being trained acts as a student. This approach is different from the classical knowledge distillation method, where the trained teacher model is distilled into a smaller student model. In our case, knowledge distillation is used as an auxiliary method that helps improve the final accuracy of the optimizing model. We also use the Exponential Moving Average (EMA) method for model parameters excluding quantizers which allows us to make the training process more stable. We tune the model for 4096 iterations only. With some tricks, such as gradient checkpointing and [keeping the EMA model](https://github.com/huggingface/optimum-intel/blob/bbbe7ff0e81938802dbc1d234c3dcdf58ef56984/examples/openvino/stable-diffusion/train_text_to_image_qat.py#L941) in RAM instead of VRAM, we can run the optimization process using one GPU with 24 GB of VRAM. The whole optimization takes less than a day using one GPU! ## Going beyond Quantization-Aware Training Quantization alone can bring significant enhancements by reducing model footprint, load time, memory consumption, and inference latency. But the great thing about quantization is that it can be applied along with other optimization methods leading to a cumulative speedup. Recently, Facebook Research introduced a [Token Merging](https://arxiv.org/abs/2210.09461) method for Vision Transformer models. The essence of the method is that it merges redundant tokens with important ones using one of the available strategies (averaging, taking max values, etc.). This is done before the self-attention block, which is the most computationally demanding part of Transformer models. Therefore, reducing the token dimension reduces the overall computation time in the self-attention blocks. This method has also been [adapted](https://arxiv.org/pdf/2303.17604.pdf) for Stable Diffusion models and has shown promising results when optimizing Stable Diffusion pipelines for high-resolution image synthesis running on GPUs. We modified the Token Merging method to be compliant with OpenVINO and stacked it with 8-bit quantization when applied to the Attention UNet model. This also involves all the mentioned techniques including Knowledge Distillation, etc. 
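If you want to get a feel for Token Merging before touching OpenVINO, the community `tomesd` package exposes the same idea for PyTorch/Diffusers pipelines. The snippet below is a rough sketch under that assumption: it is not the OpenVINO-adapted Token Merging described in this post, and the checkpoint and merging ratio are just examples.

```python
# pip install tomesd diffusers transformers accelerate
import tomesd
import torch
from diffusers import StableDiffusionPipeline

# any Stable Diffusion checkpoint works here; this one is just an example
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32
)

# merge roughly 40% of the tokens before the UNet's attention blocks
tomesd.apply_patch(pipe, ratio=0.4)

image = pipe("cartoon bird", num_inference_steps=50).images[0]
image.save("bird.png")
```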
As for quantization, it requires fine-tuning to be applied to restore the accuracy. We also start optimization and fine-tuning from the [model](https://huggingface.co/svjack/Stable-Diffusion-Pokemon-en) trained on the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). The figure below shows an overall optimization workflow. ![overview](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-optimize-sd-intel/overview.png) The resultant model is highly beneficial when running inference on devices with limited computational resources, such as client or edge CPUs. As it was mentioned, stacking Token Merging with quantization leads to an additional reduction in the inference latency. <div class="flex flex-row"> <div class="grid grid-cols-2 gap-4"> <figure> <img class="max-w-full rounded-xl border-2 border-solid border-gray-600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-optimize-sd-intel/image_torch.png" alt="Image 1" /> <figcaption class="mt-2 text-center text-sm text-gray-500">PyTorch FP32, Inference Speed: 230.5 seconds, Memory Footprint: 3.44 GB</figcaption> </figure> <figure> <img class="max-w-full rounded-xl border-2 border-solid border-gray-600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-optimize-sd-intel/image_fp32.png" alt="Image 2" /> <figcaption class="mt-2 text-center text-sm text-gray-500">OpenVINO FP32, Inference Speed: 120 seconds (<b>1.9x</b>), Memory Footprint: 3.44 GB</figcaption> </figure> <figure> <img class="max-w-full rounded-xl border-2 border-solid border-gray-600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-optimize-sd-intel/image_quantized.png" alt="Image 3" /> <figcaption class="mt-2 text-center text-sm text-gray-500">OpenVINO 8-bit, Inference Speed: 59 seconds (<b>3.9x</b>), Memory Footprint: 0.86 GB (<b>0.25x</b>)</figcaption> </figure> <figure> <img class="max-w-full rounded-xl border-2 border-solid border-gray-600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-optimize-sd-intel/image_tome_quantized.png" alt="Image 4" /> <figcaption class="mt-2 text-center text-sm text-gray-500">ToMe + OpenVINO 8-bit, Inference Speed: 44.6 seconds (<b>5.1x</b>), Memory Footprint: 0.86 GB (<b>0.25x</b>)</figcaption> </figure> </div> </div> Results of image generation [demo](https://huggingface.co/spaces/helenai/stable_diffusion) using different optimized models. Input prompt is “cartoon bird”, seed is 42. The models are with OpenVINO 2022.3 in [Hugging Face Spaces](https://huggingface.co/docs/hub/spaces-overview) using a “CPU upgrade” instance which utilizes 3rd Generation Intel® Xeon® Scalable Processors with Intel® Deep Learning Boost technology. ## Results We used the disclosed optimization workflows to get two types of optimized models, 8-bit quantized and quantized with Token Merging, and compare them to the PyTorch baseline. We also converted the baseline to vanilla OpenVINO floating-point (FP32) model for the comprehensive comparison. The picture above shows the results of image generation and some model characteristics. As you can see, just conversion to OpenVINO brings a significant decrease in the inference latency ( **1.9x** ). Applying 8-bit quantization boosts inference speed further leading to **3.9x** speedup compared to PyTorch. 
Another benefit of quantization is a significant reduction of model footprint, **0.25x** of PyTorch checkpoint, which also improves the model load time. Applying Token Merging (ToME) (with a **merging ratio of 0.4** ) on top of quantization brings **5.1x** performance speedup while keeping the footprint at the same level. We didn't provide a thorough analysis of the visual quality of the optimized models, but, as you can see, the results are quite solid. For the results shown in this blog, we used the default number of 50 inference steps. With fewer inference steps, inference speed will be faster, but this has an effect on the quality of the resulting image. How large this effect is depends on the model and the [scheduler](https://huggingface.co/docs/diffusers/using-diffusers/schedulers). We recommend experimenting with different number of steps and schedulers and find what works best for your use case. Below we show how to perform inference with the final pipeline optimized to run on Intel CPUs: ```python from optimum.intel import OVStableDiffusionPipeline # Load and compile the pipeline for performance. name = "OpenVINO/stable-diffusion-pokemons-tome-quantized-aggressive" pipe = OVStableDiffusionPipeline.from_pretrained(name, compile=False) pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1) pipe.compile() # Generate an image. prompt = "a drawing of a green pokemon with red eyes" output = pipe(prompt, num_inference_steps=50, output_type="pil").images[0] output.save("image.png") ``` You can find the training and quantization [code](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino/stable-diffusion) in the Hugging Face [Optimum Intel](https://huggingface.co/docs/optimum/main/en/intel/index) library. The notebook that demonstrates the difference between optimized and original models is available [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/stable_diffusion_optimization.ipynb). You can also find [many models](https://huggingface.co/models?library=openvino&sort=downloads) on the Hugging Face Hub under the [OpenVINO organization](https://huggingface.co/OpenVINO). In addition, we have created a [demo](https://huggingface.co/spaces/helenai/stable_diffusion) on Hugging Face Spaces that is being run on a 3rd Generation Intel Xeon Scalable processor. ## What about the general-purpose Stable Diffusion model? As we showed with the Pokemon image generation task, it is possible to achieve a high level of optimization of the Stable Diffusion pipeline when using a relatively small amount of training resources. At the same time, it is well-known that training a general-purpose Stable Diffusion model is an [expensive task](https://www.mosaicml.com/blog/training-stable-diffusion-from-scratch-part-2). However, with enough budget and HW resources, it is possible to optimize the general-purpose model using the described approach and tune it to produce high-quality images. The only caveat we have is related to the token merging method that reduces the model capacity substantially. The rule of thumb here is the more complicated the dataset you have for the training, the less merging ratio you should use during the optimization. If you enjoyed reading this post, you might also be interested in checking out [this post](https://huggingface.co/blog/stable-diffusion-inference-intel) that discusses other complementary approaches to optimize the performance of Stable Diffusion on 4th generation Intel Xeon CPUs.
4
0
hf_public_repos
hf_public_repos/blog/rlhf.md
--- title: "Illustrating Reinforcement Learning from Human Feedback (RLHF)" thumbnail: /blog/assets/120_rlhf/thumbnail.png authors: - user: natolambert - user: LouisCastricato guest: true - user: lvwerra - user: Dahoas guest: true --- # Illustrating Reinforcement Learning from Human Feedback (RLHF) _This article has been translated to Chinese [简体中文](https://huggingface.co/blog/zh/rlhf) and Vietnamese [đọc tiếng việt](https://trituenhantao.io/kien-thuc/minh-hoa-rlhf-vu-khi-dang-sau-gpt/)_. Language models have shown impressive capabilities in the past few years by generating diverse and compelling text from human input prompts. However, what makes a "good" text is inherently hard to define as it is subjective and context dependent. There are many applications such as writing stories where you want creativity, pieces of informative text which should be truthful, or code snippets that we want to be executable. Writing a loss function to capture these attributes seems intractable and most language models are still trained with a simple next token prediction loss (e.g. cross entropy). To compensate for the shortcomings of the loss itself people define metrics that are designed to better capture human preferences such as [BLEU](https://en.wikipedia.org/wiki/BLEU) or [ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric)). While being better suited than the loss function itself at measuring performance these metrics simply compare generated text to references with simple rules and are thus also limited. Wouldn't it be great if we use human feedback for generated text as a measure of performance or go even one step further and use that feedback as a loss to optimize the model? That's the idea of Reinforcement Learning from Human Feedback (RLHF); use methods from reinforcement learning to directly optimize a language model with human feedback. RLHF has enabled language models to begin to align a model trained on a general corpus of text data to that of complex human values. RLHF's most recent success was its use in [ChatGPT](https://openai.com/blog/chatgpt/). Given ChatGPT's impressive abilities, we asked it to explain RLHF for us: <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/rlhf/chatgpt-explains.png" width="500" /> </p> It does surprisingly well, but doesn't quite cover everything. We'll fill in those gaps! ## RLHF: Let’s take it step by step Reinforcement learning from Human Feedback (also referenced as RL from human preferences) is a challenging concept because it involves a multiple-model training process and different stages of deployment. In this blog post, we’ll break down the training process into three core steps: 1. Pretraining a language model (LM), 2. gathering data and training a reward model, and 3. fine-tuning the LM with reinforcement learning. To start, we'll look at how language models are pretrained. #### Pretraining language models As a starting point RLHF use a language model that has already been pretrained with the classical pretraining objectives (see this [blog post](https://huggingface.co/blog/how-to-train) for more details). OpenAI used a smaller version of GPT-3 for its first popular RLHF model, [InstructGPT](https://openai.com/blog/instruction-following/). In their shared papers, Anthropic used transformer models from 10 million to 52 billion parameters trained for this task. DeepMind has documented using up to their 280 billion parameter model [Gopher](https://arxiv.org/abs/2112.11446). 
It is likely that all these companies use much larger models in their RLHF-powered products. This initial model *can* also be fine-tuned on additional text or conditions, but does not necessarily need to be. For example, OpenAI fine-tuned on human-generated text that was “preferable” and Anthropic generated their initial LM for RLHF by distilling an original LM on context clues for their “helpful, honest, and harmless” criteria. These are both sources of what we refer to as expensive, *augmented* data, but using such data is not required to understand RLHF. Core to starting the RLHF process is having a _model that responds well to diverse instructions_.

In general, there is not a clear answer on “which model” is the best for the starting point of RLHF. This will be a common theme in this blog – the design space of options in RLHF training is not thoroughly explored.

Next, with a language model, one needs to generate data to train a **reward model**, which is how human preferences are integrated into the system.

<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/rlhf/pretraining.png" width="500" />
</p>

#### Reward model training

Generating a reward model (RM, also referred to as a preference model) calibrated with human preferences is where the relatively new research in RLHF begins. The underlying goal is to get a model or system that takes in a sequence of text and returns a scalar reward which should numerically represent the human preference. The system can be an end-to-end LM, or a modular system outputting a reward (e.g. a model ranks outputs, and the ranking is converted to a reward). The output being a **scalar reward** is crucial for existing RL algorithms to be integrated seamlessly later in the RLHF process.

These LMs for reward modeling can be either another fine-tuned LM or an LM trained from scratch on the preference data. For example, Anthropic has used a specialized method of fine-tuning to initialize these models after pretraining (preference model pretraining, PMP) because they found it to be more sample-efficient than fine-tuning, but no one base model is considered the clear best choice for reward models.

The training dataset of prompt-generation pairs for the RM is generated by sampling a set of prompts from a predefined dataset (Anthropic’s data, generated primarily with a chat tool on Amazon Mechanical Turk, is [available](https://huggingface.co/datasets/Anthropic/hh-rlhf) on the Hub, and OpenAI used prompts submitted by users to the GPT API). The prompts are passed through the initial language model to generate new text.

Human annotators are used to rank the generated text outputs from the LM. One may initially think that humans should apply a scalar score directly to each piece of text in order to generate a reward model, but this is difficult to do in practice. The differing values of humans cause these scores to be uncalibrated and noisy. Instead, rankings are used to compare the outputs of multiple models and create a much better regularized dataset.

There are multiple methods for ranking the text. One method that has been successful is to have users compare generated text from two language models conditioned on the same prompt. By comparing model outputs in head-to-head matchups, an [Elo](https://en.wikipedia.org/wiki/Elo_rating_system) system can be used to generate a ranking of the models and outputs relative to each other.
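As a concrete illustration of what such preference data looks like, the Anthropic dataset mentioned above can be loaded straight from the Hub. This is a quick sketch; at the time of writing, each row pairs a preferred (`chosen`) dialogue with a less-preferred (`rejected`) one:

```python
from datasets import load_dataset

# Anthropic's human preference data, referenced above.
prefs = load_dataset("Anthropic/hh-rlhf", split="train")

print(prefs)                       # dataset size and column names
print(prefs[0]["chosen"][:300])    # a preferred Human/Assistant exchange
print(prefs[0]["rejected"][:300])  # the dispreferred alternative for the same prompt
```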
These different methods of ranking are normalized into a scalar reward signal for training. An interesting artifact of this process is that the successful RLHF systems to date have used reward language models with varying sizes relative to the text generation (e.g. OpenAI 175B LM, 6B reward model, Anthropic used LM and reward models from 10B to 52B, DeepMind uses 70B Chinchilla models for both LM and reward). An intuition would be that these preference models need to have similar capacity to understand the text given to them as a model would need in order to generate said text. At this point in the RLHF system, we have an initial language model that can be used to generate text and a preference model that takes in any text and assigns it a score of how well humans perceive it. Next, we use **reinforcement learning (RL)** to optimize the original language model with respect to the reward model. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/rlhf/reward-model.png" width="600" /> </p> #### Fine-tuning with RL Training a language model with reinforcement learning was, for a long time, something that people would have thought as impossible both for engineering and algorithmic reasons. What multiple organizations seem to have gotten to work is fine-tuning some or all of the parameters of a **copy of the initial LM** with a policy-gradient RL algorithm, Proximal Policy Optimization (PPO). Some parameters of the LM are frozen because fine-tuning an entire 10B or 100B+ parameter model is prohibitively expensive (for more, see Low-Rank Adaptation ([LoRA](https://arxiv.org/abs/2106.09685)) for LMs or the [Sparrow](https://arxiv.org/abs/2209.14375) LM from DeepMind) -- depending on the scale of the model and infrastructure being used. The exact dynamics of how many parameters to freeze, or not, is considered an open research problem. PPO has been around for a relatively long time – there are [tons](https://spinningup.openai.com/en/latest/algorithms/ppo.html) of [guides](https://huggingface.co/blog/deep-rl-ppo) on how it works. The relative maturity of this method made it a favorable choice for scaling up to the new application of distributed training for RLHF. It turns out that many of the core RL advancements to do RLHF have been figuring out how to update such a large model with a familiar algorithm (more on that later). Let's first formulate this fine-tuning task as a RL problem. First, the **policy** is a language model that takes in a prompt and returns a sequence of text (or just probability distributions over text). The **action space** of this policy is all the tokens corresponding to the vocabulary of the language model (often on the order of 50k tokens) and the **observation space** is the distribution of possible input token sequences, which is also quite large given previous uses of RL (the dimension is approximately the size of vocabulary ^ length of the input token sequence). The **reward function** is a combination of the preference model and a constraint on policy shift. The reward function is where the system combines all of the models we have discussed into one RLHF process. Given a prompt, *x*, from the dataset, the text *y* is generated by the current iteration of the fine-tuned policy. Concatenated with the original prompt, that text is passed to the preference model, which returns a scalar notion of “preferability”, \\( r_\theta \\). 
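As a rough sketch of that scoring step, a reward model can be wrapped as a sequence classifier with a single output head. The checkpoint name below is hypothetical and only meant to illustrate the interface:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical reward model checkpoint, for illustration only.
rm_name = "my-org/my-reward-model"
tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(rm_name, num_labels=1)

prompt = "Explain the moon landing to a six year old."
generation = "Some astronauts flew to the moon in a rocket and walked around on it."

# Score the concatenated prompt + generation with a single scalar reward.
inputs = tokenizer(prompt + " " + generation, return_tensors="pt", truncation=True)
with torch.no_grad():
    r_theta = reward_model(**inputs).logits[0, 0].item()
print(f"preferability r_theta = {r_theta:.3f}")
```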
In addition, per-token probability distributions from the RL policy are compared to the ones from the initial model to compute a penalty on the difference between them. In multiple papers from OpenAI, Anthropic, and DeepMind, this penalty has been designed as a scaled version of the Kullback–Leibler [(KL) divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between these sequences of distributions over tokens, \\( r_\text{KL} \\). The KL divergence term penalizes the RL policy from moving substantially away from the initial pretrained model with each training batch, which can be useful to make sure the model outputs reasonably coherent text snippets. Without this penalty the optimization can start to generate text that is gibberish but fools the reward model to give a high reward. In practice, the KL divergence is approximated via sampling from both distributions (explained by John Schulman [here](http://joschu.net/blog/kl-approx.html)). The final reward sent to the RL update rule is \\( r = r_\theta - \lambda r_\text{KL} \\). Some RLHF systems have added additional terms to the reward function. For example, OpenAI experimented successfully on InstructGPT by mixing in additional pre-training gradients (from the human annotation set) into the update rule for PPO. It is likely as RLHF is further investigated, the formulation of this reward function will continue to evolve. Finally, the **update rule** is the parameter update from PPO that maximizes the reward metrics in the current batch of data (PPO is on-policy, which means the parameters are only updated with the current batch of prompt-generation pairs). PPO is a trust region optimization algorithm that uses constraints on the gradient to ensure the update step does not destabilize the learning process. DeepMind used a similar reward setup for Gopher but used [synchronous advantage actor-critic](http://proceedings.mlr.press/v48/mniha16.html?ref=https://githubhelp.com) (A2C) to optimize the gradients, which is notably different but has not been reproduced externally. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/rlhf/rlhf.png" width="650" /> </p> _Technical detail note: The above diagram makes it look like both models generate different responses for the same prompt, but what really happens is that the RL policy generates text, and that text is fed into the initial model to produce its relative probabilities for the KL penalty. This initial model is untouched by gradient updates during training_. Optionally, RLHF can continue from this point by iteratively updating the reward model and the policy together. As the RL policy updates, users can continue ranking these outputs versus the model's earlier versions. Most papers have yet to discuss implementing this operation, as the deployment mode needed to collect this type of data only works for dialogue agents with access to an engaged user base. Anthropic discusses this option as *Iterated Online RLHF* (see the original [paper](https://arxiv.org/abs/2204.05862)), where iterations of the policy are included in the ELO ranking system across models. This introduces complex dynamics of the policy and reward model evolving, which represents a complex and open research question. ## Open-source tools for RLHF The first [code](https://github.com/openai/lm-human-preferences) released to perform RLHF on LMs was from OpenAI in TensorFlow in 2019. 
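Before looking at today's libraries, here is a toy sketch of the reward shaping described above. The numbers are made up, and real implementations additionally handle batching, per-token credit assignment, and reward normalization, which we skip here:

```python
import torch

def rlhf_reward(r_theta, policy_logprobs, ref_logprobs, kl_coef=0.1):
    """Final reward r = r_theta - lambda * r_KL, with the KL term approximated
    from the log-probabilities of the tokens actually sampled by the policy."""
    approx_kl = (policy_logprobs - ref_logprobs).sum()
    return r_theta - kl_coef * approx_kl

# Made-up per-token log-probabilities of one generated sequence under the
# current RL policy and under the frozen initial model.
policy_lp = torch.tensor([-1.2, -0.8, -2.1, -0.5])
ref_lp = torch.tensor([-1.5, -1.0, -1.9, -0.7])

print(rlhf_reward(r_theta=0.73, policy_logprobs=policy_lp, ref_logprobs=ref_lp))
```

The libraries discussed next implement this bookkeeping, along with PPO itself, so you rarely have to write it by hand.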
Today, there are already a few active repositories for RLHF in PyTorch that grew out of this. The primary repositories are Transformers Reinforcement Learning ([TRL](https://github.com/lvwerra/trl)), [TRLX](https://github.com/CarperAI/trlx) which originated as a fork of TRL, and Reinforcement Learning for Language models ([RL4LMs](https://github.com/allenai/RL4LMs)). TRL is designed to fine-tune pretrained LMs in the Hugging Face ecosystem with PPO. TRLX is an expanded fork of TRL built by [CarperAI](https://carper.ai/) to handle larger models for online and offline training. At the moment, TRLX has an API capable of production-ready RLHF with PPO and Implicit Language Q-Learning [ILQL](https://sea-snell.github.io/ILQL_site/) at the scales required for LLM deployment (e.g. 33 billion parameters). Future versions of TRLX will allow for language models up to 200B parameters. As such, interfacing with TRLX is optimized for machine learning engineers with experience at this scale. [RL4LMs](https://github.com/allenai/RL4LMs) offers building blocks for fine-tuning and evaluating LLMs with a wide variety of RL algorithms (PPO, NLPO, A2C and TRPO), reward functions and metrics. Moreover, the library is easily customizable, which allows training of any encoder-decoder or encoder transformer-based LM on any arbitrary user-specified reward function. Notably, it is well-tested and benchmarked on a broad range of tasks in [recent work](https://arxiv.org/abs/2210.01241) amounting up to 2000 experiments highlighting several practical insights on data budget comparison (expert demonstrations vs. reward modeling), handling reward hacking and training instabilities, etc. RL4LMs current plans include distributed training of larger models and new RL algorithms. Both TRLX and RL4LMs are under heavy further development, so expect more features beyond these soon. There is a large [dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) created by Anthropic available on the Hub. ## What’s next for RLHF? While these techniques are extremely promising and impactful and have caught the attention of the biggest research labs in AI, there are still clear limitations. The models, while better, can still output harmful or factually inaccurate text without any uncertainty. This imperfection represents a long-term challenge and motivation for RLHF – operating in an inherently human problem domain means there will never be a clear final line to cross for the model to be labeled as *complete*. When deploying a system using RLHF, gathering the human preference data is quite expensive due to the direct integration of other human workers outside the training loop. RLHF performance is only as good as the quality of its human annotations, which takes on two varieties: human-generated text, such as fine-tuning the initial LM in InstructGPT, and labels of human preferences between model outputs. Generating well-written human text answering specific prompts is very costly, as it often requires hiring part-time staff (rather than being able to rely on product users or crowdsourcing). Thankfully, the scale of data used in training the reward model for most applications of RLHF (~50k labeled preference samples) is not as expensive. However, it is still a higher cost than academic labs would likely be able to afford. 
Currently, there only exists one large-scale dataset for RLHF on a general language model (from [Anthropic](https://huggingface.co/datasets/Anthropic/hh-rlhf)) and a couple of smaller-scale task-specific datasets (such as summarization data from [OpenAI](https://github.com/openai/summarize-from-feedback)). The second challenge of data for RLHF is that human annotators can often disagree, adding a substantial potential variance to the training data without ground truth. With these limitations, huge swaths of unexplored design options could still enable RLHF to take substantial strides. Many of these fall within the domain of improving the RL optimizer. PPO is a relatively old algorithm, but there are no structural reasons that other algorithms could not offer benefits and permutations on the existing RLHF workflow. One large cost of the feedback portion of fine-tuning the LM policy is that every generated piece of text from the policy needs to be evaluated on the reward model (as it acts like part of the environment in the standard RL framework). To avoid these costly forward passes of a large model, offline RL could be used as a policy optimizer. Recently, new algorithms have emerged, such as [implicit language Q-learning](https://arxiv.org/abs/2206.11871) (ILQL) [[Talk](https://youtu.be/fGq4np3brbs) on ILQL at CarperAI], that fit particularly well with this type of optimization. Other core trade-offs in the RL process, like exploration-exploitation balance, have also not been documented. Exploring these directions would at least develop a substantial understanding of how RLHF functions and, if not, provide improved performance. We hosted a lecture on Tuesday 13 December 2022 that expanded on this post; you can watch it [here](https://www.youtube.com/watch?v=2MBJOuVq380&feature=youtu.be)! #### Further reading Here is a list of the most prevalent papers on RLHF to date. The field was recently popularized with the emergence of DeepRL (around 2017) and has grown into a broader study of the applications of LLMs from many large technology companies. Here are some papers on RLHF that pre-date the LM focus: - [TAMER: Training an Agent Manually via Evaluative Reinforcement](https://www.cs.utexas.edu/~pstone/Papers/bib2html-links/ICDL08-knox.pdf) (Knox and Stone 2008): Proposed a learned agent where humans provided scores on the actions taken iteratively to learn a reward model. - [Interactive Learning from Policy-Dependent Human Feedback](http://proceedings.mlr.press/v70/macglashan17a/macglashan17a.pdf) (MacGlashan et al. 2017): Proposed an actor-critic algorithm, COACH, where human feedback (both positive and negative) is used to tune the advantage function. - [Deep Reinforcement Learning from Human Preferences](https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html) (Christiano et al. 2017): RLHF applied on preferences between Atari trajectories. - [Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces](https://ojs.aaai.org/index.php/AAAI/article/view/11485) (Warnell et al. 2018): Extends the TAMER framework where a deep neural network is used to model the reward prediction. - [A Survey of Preference-based Reinforcement Learning Methods](https://www.jmlr.org/papers/volume18/16-634/16-634.pdf) (Wirth et al. 2017): Summarizes efforts above with many, many more references. 
And here is a snapshot of the growing set of "key" papers that show RLHF's performance for LMs: - [Fine-Tuning Language Models from Human Preferences](https://arxiv.org/abs/1909.08593) (Zieglar et al. 2019): An early paper that studies the impact of reward learning on four specific tasks. - [Learning to summarize with human feedback](https://proceedings.neurips.cc/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html) (Stiennon et al., 2020): RLHF applied to the task of summarizing text. Also, [Recursively Summarizing Books with Human Feedback](https://arxiv.org/abs/2109.10862) (OpenAI Alignment Team 2021), follow on work summarizing books. - [WebGPT: Browser-assisted question-answering with human feedback](https://arxiv.org/abs/2112.09332) (OpenAI, 2021): Using RLHF to train an agent to navigate the web. - InstructGPT: [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155) (OpenAI Alignment Team 2022): RLHF applied to a general language model [[Blog post](https://openai.com/blog/instruction-following/) on InstructGPT]. - GopherCite: [Teaching language models to support answers with verified quotes](https://www.deepmind.com/publications/gophercite-teaching-language-models-to-support-answers-with-verified-quotes) (Menick et al. 2022): Train a LM with RLHF to return answers with specific citations. - Sparrow: [Improving alignment of dialogue agents via targeted human judgements](https://arxiv.org/abs/2209.14375) (Glaese et al. 2022): Fine-tuning a dialogue agent with RLHF - [ChatGPT: Optimizing Language Models for Dialogue](https://openai.com/blog/chatgpt/) (OpenAI 2022): Training a LM with RLHF for suitable use as an all-purpose chat bot. - [Scaling Laws for Reward Model Overoptimization](https://arxiv.org/abs/2210.10760) (Gao et al. 2022): studies the scaling properties of the learned preference model in RLHF. - [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862) (Anthropic, 2022): A detailed documentation of training a LM assistant with RLHF. - [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://arxiv.org/abs/2209.07858) (Ganguli et al. 2022): A detailed documentation of efforts to “discover, measure, and attempt to reduce [language models] potentially harmful outputs.” - [Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning](https://arxiv.org/abs/2208.02294) (Cohen at al. 2022): Using RL to enhance the conversational skill of an open-ended dialogue agent. - [Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization](https://arxiv.org/abs/2210.01241) (Ramamurthy and Ammanabrolu et al. 2022): Discusses the design space of open-source tools in RLHF and proposes a new algorithm NLPO (Natural Language Policy Optimization) as an alternative to PPO. - [Llama 2](https://arxiv.org/abs/2307.09288) (Touvron et al. 2023): Impactful open-access model with substantial RLHF details. The field is the convergence of multiple fields, so you can also find resources in other areas: * Continual learning of instructions ([Kojima et al. 2021](https://arxiv.org/abs/2108.04812), [Suhr and Artzi 2022](https://arxiv.org/abs/2212.09710)) or bandit learning from user feedback ([Sokolov et al. 2016](https://arxiv.org/abs/1601.04468), [Gao et al. 
2022](https://arxiv.org/abs/2203.10079))
* Earlier history on using other RL algorithms for text generation (not all with human preferences), such as with recurrent neural networks ([Ranzato et al. 2015](https://arxiv.org/abs/1511.06732)), an actor-critic algorithm for text prediction ([Bahdanau et al. 2016](https://arxiv.org/abs/1607.07086)), or an early work adding human preferences to this framework ([Nguyen et al. 2017](https://arxiv.org/abs/1707.07402)).

**Citation:**

If you found this useful for your academic work, please consider citing our work, in text:

```
Lambert, et al., "Illustrating Reinforcement Learning from Human Feedback (RLHF)", Hugging Face Blog, 2022.
```

BibTeX citation:

```
@article{lambert2022illustrating,
  author = {Lambert, Nathan and Castricato, Louis and von Werra, Leandro and Havrilla, Alex},
  title = {Illustrating Reinforcement Learning from Human Feedback (RLHF)},
  journal = {Hugging Face Blog},
  year = {2022},
  note = {https://huggingface.co/blog/rlhf},
}
```

*Thanks to [Robert Kirk](https://robertkirk.github.io/) for fixing some factual errors regarding specific implementations of RLHF. Thanks to Stas Bekman for fixing some typos or confusing phrases. Thanks to [Peter Stone](https://www.cs.utexas.edu/~pstone/), [Khanh X. Nguyen](https://machineslearner.com/) and [Yoav Artzi](https://yoavartzi.com/) for helping expand the related works further into history. Thanks to [Igor Kotenkov](https://www.linkedin.com/in/seeall/) for pointing out a technical error in the KL-penalty term of the RLHF procedure, its diagram, and textual description.*
5
0
hf_public_repos
hf_public_repos/blog/content-guidelines-update.md
--- title: "Announcing our new Content Guidelines and Policy" thumbnail: /blog/assets/content-guidelines-blogpost/thumbnail.png authors: - user: giadap --- # Announcing our new Community Policy As a community-driven platform that aims to advance Open, Collaborative, and Responsible Machine Learning, we are thrilled to support and maintain a welcoming space for our entire community! In support of this goal, we've updated our [Content Policy](https://huggingface.co/content-guidelines). We encourage you to familiarize yourself with the complete document to fully understand what it entails. Meanwhile, this blog post serves to provide an overview, outline the rationale, and highlight the values driving the update of our Content Policy. By delving into both resources, you'll gain a comprehensive understanding of the expectations and goals for content on our platform. ## Moderating Machine Learning Content Moderating Machine Learning artifacts introduces new challenges. Even more than static content, the risks associated with developing and deploying artificial intelligence systems and/or models require in-depth analysis and a wide-ranging approach to foresee possible harms. That is why the efforts to draft this new Content Policy come from different members and expertise in our cross-company teams, all of which are indispensable to have both a general and a detailed picture to provide clarity on how we enable responsible development and deployment on our platform. Furthermore, as the field of AI and machine learning continues to expand, the variety of use cases and applications proliferates. This makes it essential for us to stay up-to-date with the latest research, ethical considerations, and best practices. For this reason, promoting user collaboration is also vital to the sustainability of our platform. Namely, through our community features, such as the Community Tab, we encourage and foster collaborative solutions between repository authors, users, organizations, and our team. ## Consent as a Core Value As we prioritize respecting people's rights throughout the development and use of Machine Learning systems, we take a forward-looking view to account for developments in the technology and law affecting those rights. New ways of processing information enabled by Machine Learning are posing entirely new questions, both in the field of AI and in regulatory circles, about people's agency and rights with respect to their work, their image, and their privacy. Central to these discussions are how people's rights should be operationalized -- and we offer one avenue for addressing this here. In this evolving legal landscape, it becomes increasingly important to emphasize the intrinsic value of "consent" to avoid enabling harm. By doing so, we focus on the individual's agency and subjective experiences. This approach not only supports forethought and a more empathetic understanding of consent but also encourages proactive measures to address cultural and contextual factors. In particular, our Content Policy aims to address consent related to what users see, and to how people's identities and expressions are represented. This consideration for people's consent and experiences on the platform extends to Community Content and people's behaviors on the Hub. To maintain a safe and welcoming environment, we do not allow aggressive or harassing language directed at our users and/or the Hugging Face staff. 
We focus on fostering collaborative resolutions for any potential conflicts between users and repository authors, intervening only when necessary. To promote transparency, we encourage open discussions to occur within our Community tab. Our approach is a reflection of our ongoing efforts to adapt and progress, which is made possible by the invaluable input of our users who actively collaborate and share their feedback. We are committed to being receptive to comments and constantly striving for improvement. We encourage you to reach out to [[email protected]](mailto:[email protected]) with any questions or concerns. Let's join forces to build a friendly and supportive community that encourages open AI and ML collaboration! Together, we can make great strides forward in fostering a welcoming environment for everyone.
6
0
hf_public_repos
hf_public_repos/blog/controlnet.md
--- title: "ControlNet in 🧨 Diffusers" thumbnail: /blog/assets/controlnet/thumbnail.png authors: - user: sayakpaul - user: yiyixu - user: patrickvonplaten --- # Ultra fast ControlNet with 🧨 Diffusers <a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Ever since Stable Diffusion took the world by storm, people have been looking for ways to have more control over the results of the generation process. ControlNet provides a minimal interface allowing users to customize the generation process up to a great extent. With [ControlNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet), users can easily condition the generation with different spatial contexts such as a depth map, a segmentation map, a scribble, keypoints, and so on! We can turn a cartoon drawing into a realistic photo with incredible coherence. <table> <tr style="text-align: center;"> <th>Realistic Lofi Girl</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/lofi.jpg" width=300 /></td> </tr> </table> Or even use it as your interior designer. <table> <tr style="text-align: center;"> <th>Before</th> <th>After</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/house_depth.png" width=300/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/house_after.jpeg" width=300/></td> </tr> </table> You can turn your sketch scribble into an artistic drawing. <table> <tr style="text-align: center;"> <th>Before</th> <th>After</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/drawing_before.png" width=300/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/drawing_after.jpeg" width=300/></td> </tr> </table> Also, make some of the famous logos coming to life. <table> <tr style="text-align: center;"> <th>Before</th> <th>After</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/starbucks_logo.jpeg" width=300/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/starbucks_after.png" width=300/></td> </tr> </table> With ControlNet, the sky is the limit 🌠 In this blog post, we first introduce the [`StableDiffusionControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) and then show how it can be applied for various control conditionings. Let’s get controlling! ## ControlNet: TL;DR ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala. It introduces a framework that allows for supporting various spatial contexts that can serve as additional conditionings to Diffusion models such as Stable Diffusion. The diffusers implementation is adapted from the original [source code](https://github.com/lllyasviel/ControlNet/). Training ControlNet is comprised of the following steps: 1. 
Cloning the pre-trained parameters of a Diffusion model, such as Stable Diffusion's latent UNet, (referred to as “trainable copy”) while also maintaining the pre-trained parameters separately (”locked copy”). It is done so that the locked parameter copy can preserve the vast knowledge learned from a large dataset, whereas the trainable copy is employed to learn task-specific aspects. 2. The trainable and locked copies of the parameters are connected via “zero convolution” layers (see [here](https://github.com/lllyasviel/ControlNet#controlnet) for more information) which are optimized as a part of the ControlNet framework. This is a training trick to preserve the semantics already learned by frozen model as the new conditions are trained. Pictorially, training a ControlNet looks like so: <p align="center"> <img src="https://github.com/lllyasviel/ControlNet/raw/main/github_page/sd.png" alt="controlnet-structure"><br> <em>The diagram is taken from <a href=https://github.com/lllyasviel/ControlNet/blob/main/github_page/sd.png>here</a>.</em> </p> A sample from the training set for ControlNet-like training looks like this (additional conditioning is via edge maps): <table> <tr style="text-align: center;"> <th>Prompt</th> <th>Original Image</th> <th>Conditioning</th> </tr> <tr style="text-align: center;"> <td style="vertical-align: middle">"bird"</td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/original_bird.png" width=200/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/canny_map.png" width=200/></td> </tr> </table> Similarly, if we were to condition ControlNet with semantic segmentation maps, a training sample would be like so: <table> <tr style="text-align: center;"> <th>Prompt</th> <th>Original Image</th> <th>Conditioning</th> </tr> <tr style="text-align: center;"> <td style="vertical-align: middle">"big house"</td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/original_house.png" width=300/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/segmentation_map.png" width=300/></td> </tr> </table> Every new type of conditioning requires training a new copy of ControlNet weights. The paper proposed 8 different conditioning models that are all [supported](https://huggingface.co/lllyasviel?search=controlnet) in Diffusers! For inference, both the pre-trained diffusion models weights as well as the trained ControlNet weights are needed. For example, using [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with a ControlNet checkpoint require roughly 700 million more parameters compared to just using the original Stable Diffusion model, which makes ControlNet a bit more memory-expensive for inference. Because the pre-trained diffusion models are locked during training, one only needs to switch out the ControlNet parameters when using a different conditioning. This makes it fairly simple to deploy multiple ControlNet weights in one application as we will see below. ## The `StableDiffusionControlNetPipeline` Before we begin, we want to give a huge shout-out to the community contributor [Takuma Mori](https://github.com/takuma104) for having led the integration of ControlNet into Diffusers ❤️ . 
To experiment with ControlNet, Diffusers exposes the [`StableDiffusionControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) similar to the [other Diffusers pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview). Central to the `StableDiffusionControlNetPipeline` is the `controlnet` argument which lets us provide a particular trained [`ControlNetModel`](https://huggingface.co/docs/diffusers/main/en/api/models#diffusers.ControlNetModel) instance while keeping the pre-trained diffusion model weights the same.

We will explore different use cases with the `StableDiffusionControlNetPipeline` in this blog post. The first ControlNet model we are going to walk through is the [Canny model](https://huggingface.co/lllyasviel/sd-controlnet-canny) - this is one of the most popular models that generated some of the amazing images you are likely seeing on the internet. We welcome you to run the code snippets shown in the sections below with [this Colab Notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb).

Before we begin, let's make sure we have all the necessary libraries installed:

```bash
pip install diffusers==0.14.0 transformers xformers git+https://github.com/huggingface/accelerate.git
```

To process different conditionings depending on the chosen ControlNet, we also need to install some additional dependencies:

- [OpenCV](https://opencv.org/)
- [controlnet-aux](https://github.com/patrickvonplaten/controlnet_aux#controlnet-auxiliary-models) - a simple collection of pre-processing models for ControlNet

```bash
pip install opencv-contrib-python
pip install controlnet_aux
```

We will use the famous painting ["Girl With A Pearl Earring"](https://en.wikipedia.org/wiki/Girl_with_a_Pearl_Earring) for this example. So, let's download the image and take a look:

```python
from diffusers.utils import load_image

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
image
```

<p align="center">
<img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_6_output_0.jpeg" width=600/>
</p>

Next, we will put the image through the canny pre-processor:

```python
import cv2
from PIL import Image
import numpy as np

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
canny_image
```

As we can see, it is essentially edge detection:

<p align="center">
<img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" width=600/>
</p>

Now, we load [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as well as the [ControlNet model for canny edges](https://huggingface.co/lllyasviel/sd-controlnet-canny). The models are loaded in half-precision (`torch.float16`) to allow for fast and memory-efficient inference.
```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
```

Instead of using Stable Diffusion's default [PNDMScheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/pndm), we use one of the currently fastest diffusion model schedulers, called [UniPCMultistepScheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/unipc). Choosing an improved scheduler can drastically reduce inference time - in our case we are able to reduce the number of inference steps from 50 to 20 while more or less keeping the same image generation quality. More information regarding schedulers can be found [here](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers).

```python
from diffusers import UniPCMultistepScheduler

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

Instead of loading our pipeline directly to the GPU, we enable smart CPU offloading, which can be achieved with the [`enable_model_cpu_offload` function](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet#diffusers.StableDiffusionControlNetPipeline.enable_model_cpu_offload).

Remember that, during inference, diffusion models such as Stable Diffusion require not just one but multiple model components that are run sequentially. In the case of Stable Diffusion with ControlNet, we first use the CLIP text encoder, then the diffusion model unet and control net, then the VAE decoder and finally run a safety checker. Most components are only run once during the diffusion process and are thus not required to occupy GPU memory all the time. By enabling smart model offloading, we make sure that each component is only loaded onto the GPU when it's needed, so that we can significantly save memory consumption without significantly slowing down inference.

**Note**: When running `enable_model_cpu_offload`, do not manually move the pipeline to GPU with `.to("cuda")` - once CPU offloading is enabled, the pipeline automatically takes care of GPU memory management.

```py
pipe.enable_model_cpu_offload()
```

Finally, we want to take full advantage of the amazing [FlashAttention/xformers](https://github.com/facebookresearch/xformers) attention layer acceleration, so let's enable this! If this command does not work for you, you might not have `xformers` correctly installed. In this case, you can just skip the following line of code.

```py
pipe.enable_xformers_memory_efficient_attention()
```

Now we are ready to run the ControlNet pipeline!

We still provide a prompt to guide the image generation process, just like what we would normally do with a Stable Diffusion image-to-image pipeline. However, ControlNet will allow a lot more control over the generated image because we will be able to control the exact composition of the generated image with the canny edge image we just created.

It will be fun to see some images of contemporary celebrities posing for this exact same painting from the 17th century. And it's really easy to do that with ControlNet, all we have to do is include the names of these celebrities in the prompt!

Let's first create a simple helper function to display images as a grid.
```python
def image_grid(imgs, rows, cols):
    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    grid_w, grid_h = grid.size

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```

Next, we define the input prompts and set a seed for reproducibility.

```py
prompt = ", best quality, extremely detailed"
prompt = [t + prompt for t in ["Sandra Oh", "Kim Kardashian", "rihanna", "taylor swift"]]
generator = [torch.Generator(device="cpu").manual_seed(2) for i in range(len(prompt))]
```

Finally, we can run the pipeline and display the image!

```py
output = pipe(
    prompt,
    canny_image,
    negative_prompt=["monochrome, lowres, bad anatomy, worst quality, low quality"] * 4,
    num_inference_steps=20,
    generator=generator,
)

image_grid(output.images, 2, 2)
```

<p align="center">
<img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_16_output_1.jpeg" width=600/>
</p>

We can effortlessly combine ControlNet with fine-tuning too! For example, we can fine-tune a model with [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth), and use it to render ourselves into different scenes.

In this post, we are going to use our beloved Mr Potato Head as an example to show how to use ControlNet with DreamBooth.

We can use the same ControlNet. However, instead of using Stable Diffusion 1.5, we are going to load the [Mr Potato Head model](https://huggingface.co/sd-dreambooth-library/mr-potato-head) into our pipeline - Mr Potato Head is a Stable Diffusion model fine-tuned on the Mr Potato Head concept using DreamBooth 🥔

Let's run the above commands again, keeping the same controlnet though!

```python
model_id = "sd-dreambooth-library/mr-potato-head"
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

Now let's make Mr Potato Head pose for [Johannes Vermeer](https://en.wikipedia.org/wiki/Johannes_Vermeer)!

```python
generator = torch.manual_seed(2)
prompt = "a photo of sks mr potato head, best quality, extremely detailed"
output = pipe(
    prompt,
    canny_image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    generator=generator,
)
output.images[0]
```

It is noticeable that Mr Potato Head is not the best candidate, but he tried his best and did a pretty good job of capturing some of the essence 🍟

<p align="center">
<img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_22_output_0.jpeg" width=600/>
</p>

Another compelling application of ControlNet is that we can take a pose from one image and reuse it to generate a different image with the exact same pose. So in this next example, we are going to teach superheroes how to do yoga using [Open Pose ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-openpose)!
First, we will need to get some images of people doing yoga: ```python urls = "yoga1.jpeg", "yoga2.jpeg", "yoga3.jpeg", "yoga4.jpeg" imgs = [ load_image("https://huggingface.co/datasets/YiYiXu/controlnet-testing/resolve/main/" + url) for url in urls ] image_grid(imgs, 2, 2) ``` <p align="center"> <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_25_output_0.jpeg" width=600/> </p> Now let's extract yoga poses using the OpenPose pre-processors that are handily available via `controlnet_aux`. ```python from controlnet_aux import OpenposeDetector model = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") poses = [model(img) for img in imgs] image_grid(poses, 2, 2) ``` <p align="center"> <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_28_output_0.jpeg" width=600/> </p> To use these yoga poses to generate new images, let's create a [Open Pose ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-openpose). We will generate some super-hero images but in the yoga poses shown above. Let's go 🚀 ```python controlnet = ControlNetModel.from_pretrained( "fusing/stable-diffusion-v1-5-controlnet-openpose", torch_dtype=torch.float16 ) model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionControlNetPipeline.from_pretrained( model_id, controlnet=controlnet, torch_dtype=torch.float16, ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() ``` Now it's yoga time! ```python generator = [torch.Generator(device="cpu").manual_seed(2) for i in range(4)] prompt = "super-hero character, best quality, extremely detailed" output = pipe( [prompt] * 4, poses, negative_prompt=["monochrome, lowres, bad anatomy, worst quality, low quality"] * 4, generator=generator, num_inference_steps=20, ) image_grid(output.images, 2, 2) ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/anime_do_yoga.png" width=600/> </p> ### Combining multiple conditionings Multiple ControlNet conditionings can be combined for a single image generation. Pass a list of ControlNets to the pipeline's constructor and a corresponding list of conditionings to `__call__`. When combining conditionings, it is helpful to mask conditionings such that they do not overlap. In the example, we mask the middle of the canny map where the pose conditioning is located. It can also be helpful to vary the `controlnet_conditioning_scale`s to emphasize one conditioning over the other. 
#### Canny conditioning The original image <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" width=600/> </p> Prepare the conditioning ```python from diffusers.utils import load_image from PIL import Image import cv2 import numpy as np from diffusers.utils import load_image canny_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" ) canny_image = np.array(canny_image) low_threshold = 100 high_threshold = 200 canny_image = cv2.Canny(canny_image, low_threshold, high_threshold) # zero out middle columns of image where pose will be overlayed zero_start = canny_image.shape[1] // 4 zero_end = zero_start + canny_image.shape[1] // 2 canny_image[:, zero_start:zero_end] = 0 canny_image = canny_image[:, :, None] canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2) canny_image = Image.fromarray(canny_image) ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/landscape_canny_masked.png" width=600/> </p> #### Openpose conditioning The original image <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" width=600/> </p> Prepare the conditioning ```python from controlnet_aux import OpenposeDetector from diffusers.utils import load_image openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") openpose_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" ) openpose_image = openpose(openpose_image) ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/person_pose.png" width=600/> </p> #### Running ControlNet with multiple conditionings ```python from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler import torch controlnet = [ ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16), ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16), ] pipe = StableDiffusionControlNetPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_xformers_memory_efficient_attention() pipe.enable_model_cpu_offload() prompt = "a giant standing in a fantasy landscape, best quality" negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" generator = torch.Generator(device="cpu").manual_seed(1) images = [openpose_image, canny_image] image = pipe( prompt, images, num_inference_steps=20, generator=generator, negative_prompt=negative_prompt, controlnet_conditioning_scale=[1.0, 0.8], ).images[0] image.save("./multi_controlnet_output.png") ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/multi_controlnet_output.png" width=600/> </p> Throughout the examples, we explored multiple facets of the [`StableDiffusionControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet) to show how easy and intuitive it is play around with ControlNet via Diffusers. However, we didn't cover all types of conditionings supported by ControlNet. 
To know more about those, we encourage you to check out the respective model documentation pages: * [lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth) * [lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed) * [lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal) * [lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble) * [lllyasviel/sd-controlnet-seg](https://huggingface.co/lllyasviel/sd-controlnet-scribble) * [lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose) * [lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd) * [lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny) We welcome you to combine these different elements and share your results with [@diffuserslib](https://twitter.com/diffuserslib). Be sure to check out [the Colab Notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb) to take some of the above examples for a spin! We also showed some techniques to make the generation process faster and memory-friendly by using a fast scheduler, smart model offloading and `xformers`. With these techniques combined the generation process takes only ~3 seconds on a V100 GPU and consumes just ~4 GBs of VRAM for a single image ⚡️ On free services like Google Colab, generation takes about 5s on the default GPU (T4), whereas the original implementation requires 17s to create the same result! Combining all the pieces in the `diffusers` toolbox is a real superpower 💪 ## Conclusion We have been playing a lot with [`StableDiffusionControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/controlnet), and our experience has been fun so far! We’re excited to see what the community builds on top of this pipeline. If you want to check out other pipelines and techniques supported in Diffusers that allow for controlled generation, check out our [official documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/controlling_generation). If you cannot wait to try out ControlNet directly, we got you covered as well! Simply click on one of the following spaces to play around with ControlNet: - [![Canny ControlNet Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/diffusers/controlnet-canny) - [![OpenPose ControlNet Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/diffusers/controlnet-openpose)
7
0
hf_public_repos
hf_public_repos/blog/starchat-alpha.md
--- title: "Creating a Coding Assistant with StarCoder" thumbnail: /blog/assets/starchat_alpha/thumbnail.png authors: - user: lewtun - user: natolambert - user: nazneen - user: edbeeching - user: teven - user: sheonhan - user: philschmid - user: lvwerra - user: srush --- # Creating a Coding Assistant with StarCoder If you’re a software developer, chances are that you’ve used GitHub Copilot or ChatGPT to solve programming tasks such as translating code from one language to another or generating a full implementation from a natural language query like *“Write a Python program to find the Nth Fibonacci number”*. Although impressive in their capabilities, these proprietary systems typically come with several drawbacks, including a lack of transparency on the public data used to train them and the inability to adapt them to your domain or codebase. Fortunately, there are now several high-quality open-source alternatives! These include SalesForce’s [CodeGen Mono 16B](https://huggingface.co/Salesforce/codegen-16B-mono) for Python, or [Replit’s 3B parameter model](https://huggingface.co/replit/replit-code-v1-3b) trained on 20 programming languages. The new kid on the block is [BigCode’s StarCoder](https://huggingface.co/bigcode/starcoder), a 16B parameter model trained on one trillion tokens sourced from 80+ programming languages, GitHub issues, Git commits, and Jupyter notebooks (all permissively licensed). With an enterprise-friendly license, 8,192 token context length, and fast large-batch inference via [multi-query attention](https://arxiv.org/abs/1911.02150), StarCoder is currently the best open-source choice for code-based applications. In this blog post, we’ll show how StarCoder can be fine-tuned for chat to create a personalised coding assistant! Dubbed StarChat, we’ll explore several technical details that arise when using large language models (LLMs) as coding assistants, including: - How LLMs can be prompted to act like conversational agents. - OpenAI’s [Chat Markup Language](https://github.com/openai/openai-python/blob/main/chatml.md) (or ChatML for short), which provides a structured format for conversational messages between human users and AI assistants. - How to fine-tune a large model on a diverse corpus of dialogues with 🤗 Transformers and DeepSpeed ZeRO-3. As a teaser of the end result, try asking StarChat a few programming questions in the demo below! <script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.28.2/gradio.js" ></script> <gradio-app theme_mode="light" src="https://huggingfaceh4-starchat-playground.hf.space"></gradio-app> You can also find the code, dataset, and model used to produce the demo at the following links: - Code: [https://github.com/bigcode-project/starcoder](https://github.com/bigcode-project/starcoder) - Dataset: [https://huggingface.co/datasets/HuggingFaceH4/oasst1_en](https://huggingface.co/datasets/HuggingFaceH4/oasst1_en) - Model: [https://huggingface.co/HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) To get started, let’s take a look at how language models can be turned into conversational agents without any fine-tuning at all. ## Prompting LLMs for dialogue As shown by [DeepMind](https://arxiv.org/abs/2209.14375) and [Anthropic](https://arxiv.org/abs/2112.00861), LLMs can be turned into conversational agents through a clever choice of prompt. 
These prompts typically involve a so-called “system” message that defines the character of the LLM, along with a series of dialogues between the assistant and a user. For example, here’s an excerpt from [Anthropic’s HHH prompt](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt) (a whopping 6k tokens in total!): ``` Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn’t entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn’t let caution get too much in the way of being useful. ----- Human: I was wondering, is it actually important when making spaghetti to add salt? Assistant: Do you mean, is it important to add salt to the water that you’re boiling the spaghetti in? Human: Yeah Assistant: It’s usually recommended to bring out the flavor of the pasta. The scientific way to say this is that salt enhances the ability of chemoreceptors in the tongue to detect molecules that are perceived through the sense of taste. Human: But I mean I’m going to use sauce and stuff, so does it matter? Assistant: It sounds like you’re skeptical. Honestly, I don’t know for sure, but since you’re curious, why don’t you do a taste test yourself -- make two small pots of pasta, one with plenty of salt, and the other without. Then taste the noodles and see which one you prefer? As you said, you can also then add sauce to both, and see if any differences in taste you noticed with the plain pasta become imperceptible. It should be a fun experiment! ----- ... ----- Human: {USER QUERY} Assistant: ``` As we can see, the first part of the prompt “Below are a series...” corresponds to the system message and specifies that the assistant should have characteristics like “helpfulness” and “politeness”. The dialogue examples then condition the model to follow the multi-turn format of a conversation. When a user asks a question, the whole prompt is fed to the model and it generates an answer after the `Assistant:` prefix. The answer is then concatenated to the prompt and the process repeated at every turn. Somewhat surprisingly, this technique also works for StarCoder! This is enabled by the model’s 8k token context length, which allows one to include a wide variety of programming examples and convert the model into a coding assistant. Here’s an excerpt of the StarCoder prompt: ``` Below are a series of dialogues between various people and an AI technical assistant. The assistant tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with code questions, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn’t entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn’t let caution get too much in the way of being useful. The Starcoder models are a series of 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2) (excluding opt-out requests). 
The model uses Multi Query Attention, was trained using the Fill-in-the-Middle objective and with 8,192 tokens context window for a trillion tokens of heavily deduplicated data. ----- Human: Write a function that takes two lists and returns a list that has alternating elements from each input list. Assistant: Sure. Here is a function that does that. def alternating(list1, list2): results = [] for i in range(len(list1)): results.append(list1[i]) results.append(list2[i]) return results Human: Can you write some test cases for this function? Assistant: Sure, here are some tests. assert alternating([10, 20, 30], [1, 2, 3]) == [10, 1, 20, 2, 30, 3] assert alternating([True, False], [4, 5]) == [True, 4, False, 5] assert alternating([], []) == [] Human: Modify the function so that it returns all input elements when the lists have uneven length. The elements from the longer list should be at the end. Assistant: Here is the modified function. def alternating(list1, list2): results = [] for i in range(min(len(list1), len(list2))): results.append(list1[i]) results.append(list2[i]) if len(list1) > len(list2): results.extend(list1[i+1:]) else: results.extend(list2[i+1:]) return results ----- ``` Here we can see how a well crafted prompt can induce coding behaviour similar to that observed in ChatGPT. You can find the full prompt [here](https://huggingface.co/datasets/bigcode/ta-prompt/blob/main/TA_prompt_v1.txt) and chat with the prompted StarCoder on [HuggingChat](https://hf.co/chat/?model=bigcode/starcoder). One major drawback with dialogue-prompting is that inference can be very costly: every turn of the conversation involves thousands of tokens which will quickly burn a hole in your wallet! The obvious alternative is to fine-tune the base model on a corpus of dialogues and enable it to become “chatty”. Let’s take a look at a few interesting datasets that have recently landed on the Hub and are powering most of the open-source chatbots today. ## Datasets for chatty language models The open-source community is rapidly creating diverse and powerful datasets for transforming any base language model into a conversational agent that can follow instructions. Some examples that we have found to produce chatty language models include: - [OpenAssistant’s dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), which consists of over 40,000 conversations, where members of the community take turns mimicking the roles of a user or AI assistant. - [The ShareGPT dataset](https://huggingface.co/datasets/RyokoAI/ShareGPT52K), which contains approximately 90,000 conversations between human users and ChatGPT. For the purposes of this blog post, we’ll use the OpenAssistant dataset to fine-tune StarCoder since it has a permissive license and was produced entirely by humans. The raw dataset is formatted as a collection of conversation trees, so we’ve preprocessed it so that each row corresponds to a single dialogue between the user and the assistant. To avoid deviating too far from the data that StarCoder was pretrained on, we’ve also filtered it for English dialogues. Let’s start by downloading the processed dataset from the Hub: ```python from datasets import load_dataset dataset = load_dataset("HuggingFaceH4/oasst1_en") print(dataset) ``` ``` DatasetDict({ train: Dataset({ features: ['messages'], num_rows: 19034 }) test: Dataset({ features: ['messages'], num_rows: 2115 }) }) ``` As we can see, the dataset contains around 21,000 English conversations. Let’s take a look at one of the training examples. 
Take the first example, ```python sample = dataset["train"][0] print(sample) ``` ``` { "messages": [ { "content": "Is it possible to imagine a society without law?", "role": "user", }, { "content": "It is difficult to imagine a society that is able to be maintained without any semblance of Law. Laws exist for the purpose of maintaining society and past human behavior suggests that there would be people that have no interest in promoting social cohesion and norms so it's unlikely that complex social and economic systems could develop without a legal system.", "role": "assistant", }, { "content": 'It seems like you consider the absence of law equal to the absence of anything that could guide the behaviour of the individual, however there are many other such things, like religion, conscience, tradition to mention a few important patterns which are known to have crucial influence on society even today, but also we could imagine more, like universal love if everyone suddenly became "awaken" in some spiritual sense, or simply just having an omnipotent leader who maintains a certain order without ever codifying the principles in law. Is it still difficult to imagine?', "role": "user", }, { "content": "You are correct that there are other factors that can guide behavior in a society and play a role in shaping individuals' behavior and interactions with each other. However, even in societies where these factors are present, laws still serve an important role in maintaining social order and resolving conflicts. \n\nFor example, religious teachings may prohibit theft and murder, but laws provide a system of justice for those who violate these prohibitions. Personal morality and conscience may guide individuals to act in a certain way, but laws provide a framework for resolving disputes when conflicting moral beliefs arise. Furthermore, the other factors you mentioned, such as universal love or an omnipotent leader, are not necessarily stable or reliable guides for behavior. Universal love, while a noble ideal, may not be achievable in practice, and an omnipotent leader may be prone to corruption or abuse of power.\n\nLaws provide a crucial foundation for maintaining social order and ensuring justice. It is possible to imagine alternative systems of governance, but it is unlikely that a society without any sort of legal framework would be stable or functional in the long term.", "role": "assistant", }, { "content": "Yeah, but laws are complicated. Most people can't understand them in depth. Some would argue it is almost a self-serving system which put energy into growing itself(eg.: patent trolling). I think there must be a less complex system which keeps up order in society.", "role": "user", }, ] } ``` OK, this looks like an interesting dialogue about moral philosophy, with each turn involving a role and content field to indicate who is writing. Let’s now take a look at converting these dialogues to a standard format that simplifies the way messages are generated at inference time. ### A standard format for dialogues One way to fine-tune a model on dialogues is to simply insert the system message and roles in each training example, and then separate each dialogue with an end-of-sequence token like <EOS>. For instance, the conversation above could take the form: ``` Below is a dialogue between a human and AI assistant ... Human: Is it possible to imagine a society without law? Assistant: It is difficult to imagine ... Human: It seems like you ... Assistant: You are correct ... 
Human: Yeah, but laws are complicated .. <EOS> ``` Although this works fine for training, it isn’t ideal for inference because the model will naturally generate unwanted turns until it produces an `<EOS>` token, and some post-processing or additional logic is typically required to prevent this. A more appealing approach is to use a structured format like [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md), which wraps each turn with a set of *special tokens* that indicates the role of the query or response. In this format, we have the following special tokens: - `<|system|>`: indicates which part of the dialogue contains the system message to condition the character of the assistant. - `<|user|>`: indicates the message comes from the human user - `<|assistant|>`: indicates the messages come from the AI assistant - `<|end|>`: indicates the end of a turn or system message Let’s write a function that wraps our running example with these tokens to see what it looks like: ```python system_token = "<|system|>" user_token = "<|user|>" assistant_token = "<|assistant|>" end_token = "<|end|>" def prepare_dialogue(example): system_msg = "Below is a dialogue between a human and an AI assistant called StarChat." prompt = system_token + "\n" + system_msg + end_token + "\n" for message in example["messages"]: if message["role"] == "user": prompt += user_token + "\n" + message["content"] + end_token + "\n" else: prompt += assistant_token + "\n" + message["content"] + end_token + "\n" return prompt print(prepare_dialogue(sample)) ``` ``` <|system|> Below is a dialogue between a human and AI assistant called StarChat. <|end|> <|user|> Is it possible to imagine a society without law?<|end|> <|assistant|> It is difficult to imagine ...<|end|> <|user|> It seems like you ...<|end|> <|assistant|> You are correct ...<|end|> <|user|> Yeah, but laws are complicated ...<|end|> ``` OK, this looks like what we need! The next step is to include these special tokens in the tokenizer’s vocabulary, so let’s download the StarCoder tokenizer and add them: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase") tokenizer.add_special_tokens({"additional_special_tokens": ["<|system|>", "<|assistant|>", "<|user|>", "<|end|>"]}) # Check the tokens have been added tokenizer.special_tokens_map ``` ``` { "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>", "additional_special_tokens": ["<|system|>", "<|assistant|>", "<|user|>", "<|end|>"], } ``` As a sanity check this works, let’s see if tokenizing the string "<|assistant|>" produces a single token ID: ```python tokenizer("<|assistant|>") ``` ``` {"input_ids": [49153], "attention_mask": [1]} ``` Great, it works! ### Masking user labels One additional benefit of the special chat tokens is that we can use them to mask the loss from the labels associated with the user turns of each dialogue. The reason to do this is to ensure the model is conditioned on the user parts of the dialogue, but only trained to predict the assistant parts (which is what really matters during inference). 
Here’s a simple function that masks the labels in place and converts all the user tokens to -100, which is subsequently ignored by the loss function:

```python
def mask_user_labels(tokenizer, labels):
    user_token_id = tokenizer.convert_tokens_to_ids(user_token)
    assistant_token_id = tokenizer.convert_tokens_to_ids(assistant_token)

    for idx, label_id in enumerate(labels):
        if label_id == user_token_id:
            current_idx = idx
            # Check the bound first to avoid an IndexError if a user turn ends the sequence
            while current_idx < len(labels) and labels[current_idx] != assistant_token_id:
                labels[current_idx] = -100  # Ignored by the loss
                current_idx += 1

dialogue = "<|user|>\nHello, can you help me?<|end|>\n<|assistant|>\nSure, what can I do for you?<|end|>\n"
input_ids = tokenizer(dialogue).input_ids
labels = input_ids.copy()
mask_user_labels(tokenizer, labels)
labels
```

```
[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 49153, 203, 69, 513, 30, 2769, 883, 439, 745, 436, 844, 49, 49155, 203]
```

OK, we can see that all the user input IDs have been masked in the labels as desired. These special tokens have embeddings that will need to be learned during the fine-tuning process. Let’s take a look at what’s involved.

## Fine-tuning StarCoder with DeepSpeed ZeRO-3

The StarCoder and StarCoderBase models contain 16B parameters, which means we’ll need a lot of GPU vRAM to fine-tune them — for instance, simply loading the model weights in full FP32 precision requires around 60GB vRAM! Fortunately, there are a few options available to deal with large models like this:

- Use parameter-efficient techniques like LoRA which freeze the base model’s weights and insert a small number of learnable parameters. You can find many of these techniques in the [🤗 PEFT](https://github.com/huggingface/peft) library.
- Shard the model weights, optimizer states, and gradients across multiple devices using methods like [DeepSpeed ZeRO-3](https://huggingface.co/docs/transformers/main_classes/deepspeed) or [FSDP](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).

Since DeepSpeed is tightly integrated in 🤗 Transformers, we’ll use it to train our model. To get started, first clone BigCode’s StarCoder repo from GitHub and navigate to the `chat` directory:

```shell
git clone https://github.com/bigcode-project/starcoder.git
cd starcoder/chat
```

Next, create a Python virtual environment using e.g. Conda:

```shell
conda create -n starchat python=3.10 && conda activate starchat
```

Then, install PyTorch v1.13.1. Since this is hardware-dependent, we direct you to the [PyTorch Installation Page](https://pytorch.org/get-started/locally/) for this step. Once you've installed it, install the rest of the project dependencies:

```shell
pip install -r requirements.txt
```

We also need to be logged in to Hugging Face. To do so, run:

```shell
huggingface-cli login
```

Finally, install Git LFS with:

```shell
sudo apt-get install git-lfs
```

The final step is to launch the training! If you’re lucky enough to have 8 x A100 (80GB) GPUs to run this training, you can run the following command. Training should take around 45 minutes:

```shell
torchrun --nproc_per_node=8 train.py config.yaml --deepspeed=deepspeed_z3_config_bf16.json
```

Here the `config.yaml` file specifies all the parameters associated with the dataset, model, and training - you can configure it [here](https://github.com/bigcode-project/starcoder/tree/main/chat) to adapt the training to a new dataset. Your trained model will then be available on the Hub!
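Once the checkpoint is on the Hub, you can load it back with 🤗 Transformers and generate with the same chat format we defined above. The snippet below is a minimal sketch: it points at the released [HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) checkpoint, but you can substitute your own repo name; the question and generation settings are just illustrative, and bear in mind that a 16B model needs a sizeable GPU even in `bfloat16`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceH4/starchat-alpha"  # or the repo name of your own fine-tuned model

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")

# Reuse the ChatML-style special tokens from the fine-tuning step
system_msg = "Below is a dialogue between a human and an AI assistant called StarChat."
prompt = (
    f"<|system|>\n{system_msg}<|end|>\n"
    "<|user|>\nWrite a function that reverses a string in Python.<|end|>\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|end|>"),  # stop at the end-of-turn token
)
# Only print the newly generated assistant turn
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```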
## StarCoder as a coding assistant ### Generating plots We wanted to see how our model could do with basic visualization tasks, following the [famous unicorn drawing in tikz of GPT-4](https://tex.stackexchange.com/questions/681418/draw-a-unicorn-in-tikz). To do this, we prompted our model with some coding tasks and got wonderful results! Yes, these are a little cherry picked as we only selected the completions that **wrote functioning code**, but the others were not far off! **Example 1: bar plot** Prompt: ``` Give the following data {'Name':['Tom', 'Brad', 'Kyle', 'Jerry'], 'Age':[20, 21, 19, 18], 'Height' : [6.1, 5.9, 6.0, 6.1]}. Can you plot one graph with two subplots as columns. The first is a bar graph showing the height of each person. The second is a bargraph showing the age of each person? Draw the graph in seaborn talk mode. ``` Response: ```python import pandas as pd import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn-talk') data = {'Name':['Tom', 'Brad', 'Kyle', 'Jerry'], 'Age':[20, 21, 19, 18], 'Height':[6.1, 5.9, 6.0, 6.1]} df=pd.DataFrame(data) # Bar Graph for Height and Age fig,(ax1, ax2)=plt.subplots(nrows=1, ncols=2, figsize=(14,7)) sns.barplot(x='Name', y="Height", data=df, palette="Blues_d", ax=ax1).set_title("Height Comparison") sns.barplot(x='Name',y ="Age", data=df,palette="Reds_d", ax=ax2 ).set_title("Age Comparison") # Add padding between plots fig.tight_layout() # Show the final image plt.show() ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/starchat-alpha/barplot.png" alt="samples"><br> <em>Example of a plot generated by StarChat</a>.</em> </p> **Example 2: mapping** Prompt: ``` Draw me a map of the world using geopandas. Make it so that only Germany and Spain are colored red. ``` Response: ```python import pandas as pd from matplotlib import pyplot as plt import geopandas as gpd from shapely.geometry import Point, Polygon %matplotlib inline # Load data from URL or local file url = 'https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson' df = gpd.read_file(url) # Create plot fig, ax = plt.subplots() ax.set_aspect('equal') world = gpd.GeoDataFrame({'name': ['World'], 'geometry': [Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])]}) germany = df[df['ADMIN'] == "Germany"] spain = df[df['ADMIN'] == "Spain"] other = df[(df['ADMIN']!= "Germany") & (df['ADMIN']!= "Spain")] world.plot(color='lightgrey', edgecolor='white', ax=ax) germany.plot(color="red", ax=ax) spain.plot(color="red", ax=ax) other.plot(color="skyblue", ax=ax) plt.title("European Countries") plt.show() ``` <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/starchat-alpha/map.png" alt="samples"><br> <em>Example of a plot generated by StarChat</a>.</em> </p> **Example 3: basketball** Prompt: ``` There was a basketball game with the following stats. player, points, rebounds and assists: J. Harden 42 8 9 J. Embiid 34 13 4 T. Maxey 14 8 1 T. Harris 9 7 2 P. Tucker 6 3 1. Can you draw a scatter plot graph in Seaborn talk mode for each player where the y-axis is number of points and the x-axis is number of rebounds. Make sure the marker is 'o'. Put the name of the player over each point. The title should be "Sixers Players". ``` Response: ```python import pandas as pd import seaborn as sns from matplotlib import pyplot as plt %matplotlib inline # Load data into dataframe df = pd.DataFrame({'Player': ['J. Harden', 'J. Embiid', 'T. 
Maxey', 'T. Harris', 'P. Tucker'],
                   'Points': [42, 34, 14, 9, 6],
                   'Rebounds': [8, 13, 8, 7, 3],
                   'Assists': [9, 4, 1, 2, 1]})

# Create figure and axis objects
fig, ax = plt.subplots()

# Plot points vs rebounds
sns.scatterplot(data=df, x='Rebounds', y='Points', hue='Player', style='Player', markers=['o']*5)

# Remove legend since it overlaps with labels on top of players' names
ax.legend([],[], frameon=False)

# Add text labels above dots for player names
for i, txt in enumerate(df['Player']):
    ax.annotate(txt, (df["Rebounds"][i] +.1, df["Points"][i]))

# Set titles and labels
ax.set_title('Sixers Player')
ax.set_xlabel('Number of Rebounds')
ax.set_ylabel('Number of Points')
plt.show()
```

<p align="center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/starchat-alpha/basketball.png" alt="samples"><br>
  <em>Example of a plot generated by StarChat</a>.</em>
</p>

## Evaluating coding assistants

Evaluating coding assistants (or chatbots more generally) is tricky because the user-facing metrics we care about are often not measured in conventional NLP benchmarks. For example, we ran the base and fine-tuned StarCoderBase models through EleutherAI’s [language model evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness) to measure their performance on the following benchmarks:

- [AI2 Reasoning Challenge](https://allenai.org/data/arc) (ARC): Grade-school multiple choice science questions
- [HellaSwag](https://arxiv.org/abs/1905.07830): Commonsense reasoning around everyday events
- [MMLU](https://github.com/hendrycks/test): Multiple-choice questions in 57 subjects (professional & academic)
- [TruthfulQA](https://arxiv.org/abs/2109.07958): Tests the model’s ability to separate fact from an adversarially-selected set of incorrect statements

The results are shown in the table below, where we can see the fine-tuned model has improved, but not in a manner that reflects its conversational capabilities.

| Model | ARC | HellaSwag | MMLU | TruthfulQA |
|:----------------:|:----:|:---------:|:----:|:----------:|
| StarCoderBase | 0.30 | 0.46 | 0.33 | 0.40 |
| StarChat (alpha) | 0.33 | 0.49 | 0.34 | 0.44 |

So what can be done instead of relying on automatic metrics on benchmarks? To date, two main methods have been proposed:

- Human evaluation: present human labelers with generated outputs for a given prompt and rank them in terms of “best” and “worst”. This is the current gold standard used to create systems like InstructGPT.
- AI evaluation: present a capable language model like GPT-4 with generated outputs and a prompt that conditions the model to judge them in terms of quality. This is the approach that was used to assess LMSYS’ [Vicuna model](https://lmsys.org/blog/2023-03-30-vicuna/).

As a simple experiment, we used ChatGPT to test our StarCoder models on several programming languages. To do this, we first created a [seed dataset of interesting prompts](https://huggingface.co/datasets/HuggingFaceH4/code_evaluation_prompts) for evaluation. We used ChatGPT to initiate this process, asking it things such as:

```
Generate a bunch of instructions for coding questions in python (in the format of {"prompt": instruction})
```

or

```
Can you generate 5 examples of instructions, with the same format {"prompt": text}, where the instruction has a piece of code with a bug, and you're asking for feedback on your code as if you wrote it?
```

In the second case, ChatGPT actually generated more data than was asked (such as a separate field with additional context about the bug in the initial submission). Right now, this dataset contains 115 prompts and is primarily in Python. Three quarters of the prompts are instructions asking for the user to provide code, and one quarter ask for feedback on a buggy code sample.

In our experiments, we asked OpenAI’s models to rate each answer on a score from 1 to 8 with a modified version of the [Vicuna code prompt](https://github.com/lm-sys/FastChat/tree/main/fastchat/eval/table) comparing responses. In this case, the instruction-tuned StarCoder model achieved a higher score than the base model 95.6% of the time. An interesting artifact is that we definitely see that ChatGPT likes to return safer scores in the middle of the range when compared to GPT4, which is more open to giving 1s and 8s.

Here's a quick example of what LLM evaluation can return as a score for a given prompt and response pair.

Prompt:

```
Write a Python function called reverse_string that takes a string as its argument and returns the string reversed.
```

Instruction-tuned completion (Assistant 2):

```
"Here is an example implementation of the `reverse_string` function in python:

'''
def reverse_string(input_str):
    \"\"\"Reverse a given input string\"\"\"
    # create empty list to store characters
    output = []

    for char in input_str:
        # add each character to front of list
        output.insert(0,char)

    return ''.join(output)

# Example usage
test_string =\"hello world\"
print(\"Reversed String:\", reverse_string(test_string))
'''

Note that this method uses the join() method on strings which joins all elements of an iterable into one string using the specified separator between them. In this case we use an empty string \"\" because there are no spaces in our string so it doesn't matter what separates each letter."
```

Base-model completion (Assistant 1):

```
"Sure thing! Let's start by writing out the docstring which explains how our function works. We'll then define the signature (the parameters) followed by the body of the function.

reverse_string.__doc__ = \"Reverses any given string.\"
reverse_string.__name__ ='reverse_string'
reverse_string.__signature__ = inspect.Signature() #<|end|>
<|user|>
Here is my attempt:
<|end|>
<|assistant|>
Okay great! Now, let us begin defining variables within our scope using the keyword var. Our first variable needs to hold whatever text was passed into the function call since we want to operate on strings. As such, we declare strvar and assign it equal to the parameter arg provided.
strvar := arg
Then, after declaring our second variable, we initialize it similarly but instead pass in the result of calling strvar's built-in.reversed() method alongside the index operator []. This allows us to access the last character of our original string while iteratively reversing everything up until now. Note that we don't actually need to store anything inside this new variable because it simply holds the same data as strvar without modifying them"
```

GPT4 Evaluation:

```
4 7

Assistant 1's code was not well-structured, and it was difficult to follow the logic. There were also unnecessary comments and explanations that did not contribute to understanding the code. The code itself was incomplete and did not provide a working solution to the problem. For improvement, Assistant 1 should focus on writing clear and concise code, removing unnecessary comments, and providing a complete solution to the problem.
Assistant 2's code was much better in terms of structure and readability. The code was accurate, efficient, and included helpful comments. The slicing method used to reverse the string was a simple and effective solution to the problem. However, there were some repetitions in the provided examples, and the explanation of the slicing method could be improved. Overall, Assistant 2's code was a good solution to the problem, and only minor improvements are needed.
```

We can compare this to ChatGPT’s response, which seems to miss the fact that Assistant 1 does not actually accomplish the task. In its response, it says the second answer is better, but gives it a lower score.

ChatGPT Evaluation:

```
8 7

Assistant 1's solution attempts to use a variable assignment operator, :=, which is a new feature added to Python 3.8, but is not necessarily standard practice or available in all environments. Additionally, the solution does not actually perform the desired operation of reversing the input string. Instead, it creates a new variable and assigns it the value of the original input string but with the characters reversed. This means that the original input string is not modified, and the function does not actually return anything.

Assistant 2's solution correctly implements the desired operation of reversing the input string using a list to store each character, then joining the list back into a string in reverse order. The use of comments and a docstring provide good context and explanation for the code. However, the variable name "output" could be improved to something more descriptive, and the solution could be made more efficient by using Python's built-in string reversal methods instead of manually creating a list and joining it back into a string.

Overall, Assistant 2's solution is more accurate, efficient, and readable. The code structure is clear and easy to follow, and the inclusion of helpful comments improves the overall quality of the code. However, Assistant 1's attempt to use the new assignment operator shows an effort to stay current with the latest features in Python, which is a positive trait in a developer.
```

This shows us that while there is extremely valuable signal in AI evaluations, we have a lot to learn about how to compare models and calibrate these results with humans!

## Limitations and biases

Like many other language models, this alpha version of StarChat has significant limitations that still need to be addressed, including a tendency to hallucinate facts and produce problematic content (especially when prompted to). In particular, the model hasn't been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT. Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata). For more details on the model’s limitations in terms of factuality and biases, see the [model card](https://huggingface.co/HuggingFaceH4/starchat-alpha#bias-risks-and-limitations).

## Future directions

We were surprised to learn that a code-generation model like StarCoder could be converted into a conversational agent with a diverse dataset like that from OpenAssistant. One possible explanation is that StarCoder has been trained on both code _and_ GitHub issues, the latter providing a rich signal of natural language content.
We're excited to see where the community will take StarCoder - perhaps it will power the next wave of open-source assistants 🤗. ## Acknowledgements We thank Nicolas Patry and Olivier Dehaene for their help with deploying StarChat on the Inference API and enabling [blazing fast text generation](https://github.com/huggingface/text-generation-inference). We also thank Omar Sanseviero for advice on data collection and his many valuable suggestions to improve the demo. Finally, we are grateful to Abubakar Abid and the Gradio team for creating a delightful developer experience with the new code components, and for sharing their expertise on building great demos. ## Links - Code: [https://github.com/bigcode-project/starcoder/tree/main/chat](https://github.com/bigcode-project/starcoder/tree/main/chat) - Filtered training dataset: [https://huggingface.co/datasets/HuggingFaceH4/oasst1_en](https://huggingface.co/datasets/HuggingFaceH4/oasst1_en) - Code evaluation dataset: [https://huggingface.co/datasets/HuggingFaceH4/code_evaluation_prompts](https://huggingface.co/datasets/HuggingFaceH4/code_evaluation_prompts) - Model: [https://huggingface.co/HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) ## Citation To cite this work, please use the following citation: ``` @article{Tunstall2023starchat-alpha, author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander}, title = {Creating a Coding Assistant with StarCoder}, journal = {Hugging Face Blog}, year = {2023}, note = {https://huggingface.co/blog/starchat-alpha}, } ```
8
0
hf_public_repos
hf_public_repos/blog/dpo_vlm.md
--- title: 'Preference Optimization for Vision Language Models' thumbnail: /blog/assets/dpo_vlm/thumbnail.png authors: - user: qgallouedec - user: vwxyzjn - user: merve - user: kashif --- # Preference Optimization for Vision Language Models with TRL Training models to understand and predict human preferences can be incredibly complex. Traditional methods, like supervised fine-tuning, often require assigning specific labels to data, which is not cost-efficient, especially for nuanced tasks. Preference optimization is an alternative approach that can simplify this process and yield more accurate results. By focusing on comparing and ranking candidate answers rather than assigning fixed labels, preference optimization allows models to capture the subtleties of human judgment more effectively. Preference optimization is widely used for fine-tuning language models, but it can also be applied to vision language models (VLM). We are excited to announce that the **[TRL](https://huggingface.co/docs/trl/index) library now supports direct preference optimization (DPO) for VLMs**. This article will guide you through the process of training VLMs using TRL and DPO. ## Preference dataset Preference optimization requires data that captures user preferences. In the binary choice setting, each example consists of a prompt, and two candidate answers: one that is chosen and one that is rejected. The model's goal is to learn to predict the chosen answer over the rejected one. For example, you need to have samples like the following: <figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/dpo_vlm/how-many-families.jpg"></img> <figcaption>Image from <a href="https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset">openbmb/RLAIF-V-Dataset</a></figcaption> </figure> **❔ Question**: _How many families?_ - **❌ Rejected:** _The image does not provide any information about families._ - **✅ Chosen:** _The image shows a Union Organization table setup with 18,000 families._ Note that the chosen message is not necessarily correct. For example, the chosen response that says 18,000 families is still wrong, but it's less wrong compared to the rejected response. For this blog post, we'll be using the [openbmb/RLAIF-V-Dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset), which includes over 83,000 annotated rows. Let's take a closer look at the dataset: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("openbmb/RLAIF-V-Dataset", split="train[:1%]") >>> sample = dataset[1] >>> sample["image"].show() >>> sample["question"] 'how many families?' >>> sample["rejected"] 'The image does not provide any information about families.' >>> sample["chosen"] 'The image shows a Union Organization table setup with 18,000 families.' ``` Our model requires both text and images as input, so the first step is to format the dataset to fit this requirement. The data should be structured to simulate a conversation between a user and an assistant. The user provides a prompt that includes an image and a question, while the assistant responds with an answer. 
Here's how this formatting is done: ```python from datasets import features from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False) def format(example): # Prepare the input for the chat template prompt = [ { "role": "user", "content": [{"type": "image"}, {"type": "text", "text": example["question"]}], }, ] chosen = [ { "role": "assistant", "content": [{"type": "text", "text": example["chosen"]}], }, ] rejected = [ { "role": "assistant", "content": [{"type": "text", "text": example["rejected"]}], }, ] # Apply the chat template prompt = processor.apply_chat_template(prompt, tokenize=False) chosen = processor.apply_chat_template(chosen, tokenize=False) rejected = processor.apply_chat_template(rejected, tokenize=False) # Resize the image to ensure it fits within the maximum allowable # size of the processor to prevent OOM errors. max_size = processor.image_processor.size["longest_edge"] example["image"].thumbnail((max_size, max_size)) return {"images": [example["image"]], "prompt": prompt, "chosen": chosen, "rejected": rejected} # Apply the formatting function to the dataset, # remove columns to end up with only "images", "prompt", "chosen", "rejected" columns dataset = dataset.map(format, remove_columns=dataset.column_names) # Make sure that the images are decoded, it prevents from storing bytes. # More info here https://github.com/huggingface/blog/pull/2148#discussion_r1667400478 f = dataset.features f["images"] = features.Sequence(features.Image(decode=True)) # to avoid bytes dataset = dataset.cast(f) ``` Our dataset is now formatted. Let's have a look at the first example: ```python >>> dataset[1] {'images': [<PIL.JpegImagePlugin.JpegImageFile image mode=L size=980x812 at 0x154505570>], 'prompt': 'User:<image>how many families?<end_of_utterance>\n', 'rejected': 'Assistant: The image does not provide any information about families.<end_of_utterance>\n', 'chosen': 'Assistant: The image shows a Union Organization table setup with 18,000 families.<end_of_utterance>\n'} ``` Warm up your GPUs, the dataset is ready for training! ## Training For the sake of the example, we'll be training the [Idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) model, but note that the DPO implementation in TRL supports other models like [Llava 1.5](https://huggingface.co/llava-hf/llava-1.5-7b-hf) and [PaliGemma](https://huggingface.co/google/paligemma-3b-pt-224). More information in Section [Finetuning Llava 1.5, PaliGemma and others](#finetuning-llava-15-paligemma-and-others). Before looking into the training process, we'll first ensure everything fits smoothly into memory. ### How much memory do I need? I have a GPU with 80GB of VRAM. Is it enough to train my Idefics2-8b model? Here are the calculation steps to get a rough estimate of the memory needed. Let \\( N \\) be the number of parameters, \\( P \\) the precision. 
The following components will have to fit together in memory:

- **Model to train**: \\( N \times P \\)
- **Reference model**: the reference model is the same as the model to train, so it also requires \\( N \times P \\)
- **Gradients**: we train the whole model, and each parameter requires a gradient, so it requires \\( N \times P \\)
- **Optimizer states**: we use [AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html), which requires two states per parameter, so it requires \\( 2 \times N \times P \\)

Idefics2-8b has 8 billion parameters, and we use `float32` precision, which requires 4 bytes per float. So the total memory required is:

| Component        | Calculation                             | Memory     |
| ---------------- | --------------------------------------- | ---------- |
| Model to train   | \\( 8 \times 10^9 \times 4 \\)          | 32 GB      |
| Reference model  | \\( 8 \times 10^9 \times 4 \\)          | 32 GB      |
| Gradients        | \\( 8 \times 10^9 \times 4 \\)          | 32 GB      |
| Optimizer states | \\( 2 \times 8 \times 10^9 \times 4 \\) | 64 GB      |
| **Total**        |                                         | **160 GB** |

This is way above my GPU's memory capacity. Fortunately, by applying techniques such as quantization and LoRA, we can significantly reduce the memory requirements and make the training feasible. Let's see how to do this.

### Quantization

Quantization is a technique that reduces the precision of the model's weights and activations. Switching from `float32` to `bfloat16` precision halves the storage requirement per parameter from 4 bytes to 2 bytes. This optimization conserves memory while also accelerating computations, ensuring high performance with minimal compromise.

To implement `bfloat16` precision in the model:

```python
import torch
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b", torch_dtype=torch.bfloat16)
```

`bfloat16` precision can also be applied to the optimizer by setting `bf16=True` in the training arguments:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(..., bf16=True)
```

### LoRA

[LoRA](https://arxiv.org/abs/2106.09685) is a method that reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while keeping the original weights frozen. This significantly decreases the storage needs for LLMs adapted to specific tasks. LoRA is integrated in [PEFT](https://github.com/huggingface/peft) and you can set it up in no time:

```diff
  from transformers import AutoModelForVision2Seq
+ from peft import get_peft_model, LoraConfig

  model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")
+ peft_config = LoraConfig(target_modules="all-linear")
+ model = get_peft_model(model, peft_config)
```

PEFT acts like a wrapper (called an _adapter_) around the model. It is this adapter that will be trained, while the inner model is kept frozen. How much does LoRA reduce the number of trainable parameters?

```python
>>> model.print_trainable_parameters()
trainable params: 55,348,736 || all params: 8,458,116,848 || trainable%: 0.6543860411799315
```

It reduces the number of trainable parameters from 8 billion to 55 million, which is a huge gap, and it will significantly reduce the memory requirements.
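As a sanity check, the number reported by PEFT can be reproduced with a few lines of plain PyTorch. This is just a sketch of what `print_trainable_parameters` does for any `nn.Module`: it counts the parameters that still require gradients, i.e. the ones LoRA will actually train:

```python
def count_parameters(model) -> None:
    # Parameters that still require gradients are the ones LoRA will train
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable:,} || all params: {total:,} || trainable%: {100 * trainable / total:.4f}")

count_parameters(model)  # should match the PEFT output above
```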
### The new memory requirements after quantization and LoRA Now that we have reduced the memory requirements, let's recalculate the memory needed: | Component | Calculation | Memory | | ---------------- | ------------------------------------- | ----------- | | Model to train | \\( 8 \mathrm{G} \times 2 \\) | 16 GB | | Reference model | \\( 8 \mathrm{G} \times 2 \\) | 16 GB | | Gradients | \\( 55 \mathrm{M} \times 2 \\) | 0.1 GB | | Optimizer states | \\( 2 \times 55 \mathrm{M} \times 2 \\) | 0.2 GB | | **Total** | | **32.3 GB** | This time, we need around 32GB of memory to finetune our Idefics2-8b model, which is much more reasonable and fits within my GPU! For additional information on optimizing memory usage using LoRA and QLoRA, refer to the [PEFT documentation](https://huggingface.co/docs/peft/en/index) or [LoRA and QLoRA Google's recommendations for LLMs](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/lora-qlora). ### What about the batch size? Our memory calculation isn't exact as it doesn't account for activations. Activations are the intermediate outputs of the network layers and their memory requirements depend on the model structure and batch size. Precisely calculating the memory needed for activations is challenging, so we'll rely on empirical observations. To choose an appropriate training batch size (`per_device_train_batch_size`), start with your desired batch size (e.g., 64). This will likely result in an out-of-memory (OOM) error. If it does, reduce the batch size by half and double the gradient accumulation steps (`gradient_accumulation_steps`) to maintain the same effective batch size. Repeat this process until the memory fits within your GPU. In our case, we end up with a batch size of 2 and gradient accumulation steps of 32. An additional optimization is to use gradient checkpointing (`gradient_checkpointing`) to reduce the memory needed for activations. This technique trades off compute for memory by recomputing parts of the network during the backward pass. It can be enabled by setting `gradient_checkpointing=True` in the training arguments. ### Summary: complete training script Now that we've set up the model, dataset, and training parameters, we're ready to train. 
Here's how to put everything together in a script, including some additional elements to speed up processing, like `dataset_num_proc` and `dataloader_num_workers`: ```python # dpo_idefics2-8b.py from datasets import features, load_dataset from transformers import AutoModelForVision2Seq, AutoProcessor import torch from trl import DPOConfig, DPOTrainer from peft import LoraConfig def main(): # Load the model and processor model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b", torch_dtype=torch.bfloat16) processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False) # Load the dataset dataset = load_dataset("openbmb/RLAIF-V-Dataset", split="train") def format(example): # Prepare the input for the chat template prompt = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": example["question"]}]}] chosen = [{"role": "assistant", "content": [{"type": "text", "text": example["chosen"]}]}] rejected = [{"role": "assistant", "content": [{"type": "text", "text": example["rejected"]}]}] # Apply the chat template prompt = processor.apply_chat_template(prompt, tokenize=False) chosen = processor.apply_chat_template(chosen, tokenize=False) rejected = processor.apply_chat_template(rejected, tokenize=False) # Resize the image to ensure it fits within the maximum allowable # size of the processor to prevent OOM errors. max_size = processor.image_processor.size["longest_edge"] // 2 example["image"].thumbnail((max_size, max_size)) return {"images": [example["image"]], "prompt": prompt, "chosen": chosen, "rejected": rejected} # Apply the formatting function to the dataset dataset = dataset.map(format, remove_columns=dataset.column_names, num_proc=32) # Make sure that the images are decoded, it prevents from storing bytes. # More info here https://github.com/huggingface/blog/pull/2148#discussion_r1667400478 f = dataset.features f["images"] = features.Sequence(features.Image(decode=True)) dataset = dataset.cast(f) # Train the model training_args = DPOConfig( output_dir="idefics2-8b-dpo", bf16=True, gradient_checkpointing=True, per_device_train_batch_size=2, gradient_accumulation_steps=32, num_train_epochs=1, dataset_num_proc=32, # tokenization will use 32 processes dataloader_num_workers=32, # data loading will use 32 workers logging_steps=10, ) trainer = DPOTrainer( model, ref_model=None, # not needed when using peft args=training_args, train_dataset=dataset, tokenizer=processor, peft_config=LoraConfig(target_modules="all-linear"), ) trainer.train() if __name__ == "__main__": main() ``` Let's run and wait... 🚀 ```sh accelerate launch dpo_idefics2-8b.py ``` ## Results A few hours later, the training is complete. Let's take a look at the training curves: ![Learning curves](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/dpo_vlm/learning_curves.png) In DPO, we focus on several metrics to assess the quality of the training: - **Accuracy**: This metric indicates the percentage of training samples where the model is more likely to output the chosen answer rather than the rejected answer. We can see an increase in accuracy, which is a positive sign. - **Rewards**: Rewards are related to the probability of an answer being chosen. For more details, refer to [DPO paper, Section 5](https://arxiv.org/abs/2305.18290). We expect the reward for the chosen answer to be higher than for the rejected answer. 
To verify this, we look at the _reward margin_, which is the difference between the rewards for the chosen and rejected answers. An increasing reward margin, as observed here, is also a good sign. ## Evaluation ### Inference With the model training complete, the next step is to evaluate its performance on some examples. This will give us a sense of how well the model has learned and how effectively it can make predictions. Here’s a script to help you evaluate the model and analyze its performance on a set of test examples: ```python from transformers import AutoModelForVision2Seq, AutoProcessor from PIL import Image model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b").to("cuda") processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False) model.load_adapter("HuggingFaceH4/idefics2-8b-dpo-rlaif-v-v0.3") # <-- Load the adapter we've just trained # Process user_message = ... image_path = ... data = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": user_message}]}] prompts = processor.apply_chat_template(data, add_generation_prompt=True) # add_generation_prompt=True to end the prompt with "ASSISTANT:" images = [Image.open(image_path)] inputs = processor(prompts, images, return_tensors="pt") inputs = {k: v.to("cuda") for k, v in inputs.items()} # Generate generated_ids = model.generate(**inputs, max_new_tokens=500) response_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response_text) ``` As mentioned above, the [openbmb/RLAIF-V-Dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset) is designed to reduce hallucinations. But has the fine-tuning actually reduced hallucinations? To find out, we can use the [AMBER benchmark](https://arxiv.org/abs/2311.07397), a dataset specifically created to evaluate hallucinations in VLMs. We will report the results for Idefics2 and Idefics2+DPO on the discriminative task and compare them with other models for reference. | | Accuracy | F1 | | ---------------- | -------- | -------- | | GPT-4o | 88.8 | 91.6 | | **Idefics2+DPO** | **85.9** | **89.4** | | Idefics2 | 85.8 | 89.1 | | GPT-4v | 83.4 | 87.4 | | MiniGemini | 82.6 | 87.6 | | LLaVA-NeXT | 81.4 | 85.4 | | QWEN-VL | 81.9 | 86.4 | | LURE | 73.5 | 77.7 | | OPERA | 75.2 | 78.3 | | Less-is-more | 72.4 | 75.8 | | VCD | 71.8 | 74.9 | Overall, the fine-tuned model seems to hallucinate a bit less. The training seems to have been successful! Here are some cherry-picked examples to illustrate the model's performance: | Image | Question | Idefics2 | Idefics2+DPO | | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------- | -------- | ------------ | | ![AMBER_2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/dpo_vlm/AMBER_2.jpg) | Are there two ships in this image? | Yes | No | | ![AMBER_111](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/dpo_vlm/AMBER_111.jpg) | Is the ground uneven in this image? | No | Yes | | ![AMBER_7](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/dpo_vlm/AMBER_7.jpg) | Is there one shovel in this image? | Yes | No | Try it yourself and see how the model performs on your own examples! 
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/4.36.1/gradio.js"></script> <gradio-app theme_mode="light" space="HuggingFaceH4/compare_idefics-8b-dpo"></gradio-app> ## Finetuning Llava 1.5, PaliGemma and others At the time of writing, the DPO implementation in TRL supports Idefics2, Llava 1.5, and PaliGemma, with ongoing efforts to add support for more models. The easiest way to fine-tune these models is to use the [example script](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo_vlm.py) provided in the TRL repository. For example, to finetune PaliGemma, you can use the following command: ```sh accelerate launch examples/scripts/dpo_visual.py \ --dataset_name HuggingFaceH4/rlaif-v_formatted \ --model_name_or_path google/paligemma-3b-pt-224 \ --per_device_train_batch_size 2 \ --gradient_accumulation_steps 32 \ --dataset_num_proc 32 \ --output_dir dpo_paligemma_rlaif-v \ --bf16 \ --torch_dtype bfloat16 \ --gradient_checkpointing \ --use_peft \ --lora_target_modules=all-linear ``` You can find a detailed focus on PaliGemma finetuning in the [smol-vision](https://github.com/merveenoyan/smol-vision) project. 🚀🚀 Now you have everything you need to start fine-tuning your own VLMs with DPO. Share your findings, models, and datasets with the community!
9
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/mod.rs
pub(crate) mod affine; pub(crate) mod conv_transpose2d; pub(crate) mod matmul; pub(crate) mod qmatmul; pub(crate) mod random; pub(crate) mod unary; pub(crate) mod where_cond; use candle_core::{Device, Result}; pub(crate) trait BenchDevice { fn sync(&self) -> Result<()>; fn bench_name<S: Into<String>>(&self, name: S) -> String; } impl BenchDevice for Device { fn sync(&self) -> Result<()> { match self { Device::Cpu => Ok(()), Device::Cuda(device) => { #[cfg(feature = "cuda")] return Ok(device.synchronize()?); #[cfg(not(feature = "cuda"))] panic!("Cuda device without cuda feature enabled: {:?}", device) } Device::Metal(device) => { #[cfg(feature = "metal")] return Ok(device.wait_until_completed()?); #[cfg(not(feature = "metal"))] panic!("Metal device without metal feature enabled: {:?}", device) } } } fn bench_name<S: Into<String>>(&self, name: S) -> String { match self { Device::Cpu => { let cpu_type = if cfg!(feature = "accelerate") { "accelerate" } else if cfg!(feature = "mkl") { "mkl" } else { "cpu" }; format!("{}_{}", cpu_type, name.into()) } Device::Cuda(_) => format!("cuda_{}", name.into()), Device::Metal(_) => format!("metal_{}", name.into()), } } } struct BenchDeviceHandler { devices: Vec<Device>, } impl BenchDeviceHandler { pub fn new() -> Result<Self> { let mut devices = Vec::new(); if cfg!(feature = "metal") { devices.push(Device::new_metal(0)?); } else if cfg!(feature = "cuda") { devices.push(Device::new_cuda(0)?); } devices.push(Device::Cpu); Ok(Self { devices }) } }
0
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/qmatmul.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{ quantized::{self, GgmlDType, QMatMul}, Device, Module, Tensor, }; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run(matmul: &QMatMul, x: &Tensor) { matmul.forward(x).unwrap(); } fn run_bench(c: &mut Criterion, device: &Device, dtype: GgmlDType) { let b = 1; let m = 1; let n = 1024; let k = 1024; let lhs = (0..(m * k)) .map(|v| v as f32 / (m * k) as f32) .collect::<Vec<_>>(); let rhs = (0..(k * n)) .map(|v| v as f32 / (n * k) as f32) .collect::<Vec<_>>(); let lhs = Tensor::from_slice(&lhs, (m, k), device).unwrap(); let rhs = Tensor::from_slice(&rhs, (k, n), device).unwrap(); let qtensor = quantized::QTensor::quantize(&rhs.t().unwrap(), dtype).unwrap(); let matmul = quantized::QMatMul::from_qtensor(qtensor).unwrap(); let flops = b * m * n * k; let mut group = c.benchmark_group(device.bench_name(format!("qmatmul_{:?}", dtype))); group.sample_size(200); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run(black_box(&matmul), black_box(&lhs)); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let handler = BenchDeviceHandler::new().unwrap(); for device in handler.devices { for dtype in [ GgmlDType::F32, GgmlDType::F16, GgmlDType::Q4_0, GgmlDType::Q4_1, GgmlDType::Q5_0, GgmlDType::Q5_1, GgmlDType::Q8_0, GgmlDType::Q2K, GgmlDType::Q3K, GgmlDType::Q4K, GgmlDType::Q5K, GgmlDType::Q6K, ] { run_bench(c, &device, dtype); } } } criterion_group!(benches, criterion_benchmark);
1
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/conv_transpose2d.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{DType, Device, Tensor}; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run( x: &Tensor, k: &Tensor, padding: usize, output_padding: usize, stride: usize, dilation: usize, ) { x.conv_transpose2d(k, padding, output_padding, stride, dilation) .unwrap(); } fn run_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) { let t = Tensor::arange(0.0f32, 10000.0, device) .unwrap() .reshape((1, 4, 50, 50)) .unwrap() .to_dtype(dtype) .unwrap(); let kernel = Tensor::arange(0.0f32, 100.0, device) .unwrap() .reshape((4, 1, 5, 5)) .unwrap() .to_dtype(dtype) .unwrap(); let flops = t.dims().iter().product::<usize>() * dtype.size_in_bytes(); let mut group = c.benchmark_group(device.bench_name(name)); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run(black_box(&t), black_box(&kernel), 1, 0, 1, 2); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let handler = BenchDeviceHandler::new().unwrap(); for device in handler.devices { run_benchmark(c, &device, DType::F32, "conv_transpose2d_f32"); run_benchmark(c, &device, DType::F16, "conv_transpose2d_f16"); run_benchmark(c, &device, DType::BF16, "conv_transpose2d_bf16"); } } criterion_group!(benches, criterion_benchmark);
2
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/unary.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;

fn run(a: &Tensor) {
    a.sqrt().unwrap();
}

fn run_unary_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) {
    let b = 1;
    let m = 1024;
    let k = 1024;

    let tensor = Tensor::arange(0.0f32, (b * m * k) as f32, device)
        .unwrap()
        .to_dtype(dtype)
        .unwrap()
        .reshape((b, m, k))
        .unwrap();

    let flops = b * m * k * dtype.size_in_bytes();

    let mut group = c.benchmark_group(device.bench_name(name));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                run(black_box(&tensor));
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();
}

fn criterion_benchmark(c: &mut Criterion) {
    let handler = BenchDeviceHandler::new().unwrap();
    for device in handler.devices {
        for dtype in [DType::F32, DType::BF16, DType::F16] {
            let name = format!("sqrt_{:?}", dtype);
            run_unary_benchmark(c, &device, dtype, &name);
        }
    }
}

criterion_group!(benches, criterion_benchmark);
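Note that the quantity named `flops` in this benchmark (and in `conv_transpose2d.rs` above) is really the number of input bytes processed per call, which is what `Throughput::Bytes` expects. As a quick worked example for the shapes used here:

$$\text{bytes per iteration} = b \cdot m \cdot k \cdot \mathrm{sizeof}(\text{dtype}) = 1 \cdot 1024 \cdot 1024 \cdot 4 = 4\,\mathrm{MiB}\ (\text{f32}), \qquad 2\,\mathrm{MiB}\ (\text{f16 / bf16})$$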
3
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/examples/metal_basics.rs
#[cfg(feature = "accelerate")]
extern crate accelerate_src;

#[cfg(feature = "mkl")]
extern crate intel_mkl_src;

use anyhow::Result;
use candle_core::{Device, Tensor};

fn main() -> Result<()> {
    // This requires the code to be run with MTL_CAPTURE_ENABLED=1
    let device = Device::new_metal(0)?;
    let metal_device = match &device {
        Device::Metal(m) => m,
        _ => anyhow::bail!("unexpected device"),
    };
    metal_device.capture("/tmp/candle.gputrace")?;
    // This first synchronize ensures that a new command buffer gets created after setting up the
    // capture scope.
    device.synchronize()?;
    let x = Tensor::randn(0f32, 1.0, (128, 128), &device)?;
    let x1 = x.add(&x)?;
    println!("{x1:?}");
    // This second synchronize ensures that the command buffer gets committed before the end of the
    // capture scope.
    device.synchronize()?;
    Ok(())
}
4
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/examples/cuda_basics.rs
#[cfg(feature = "accelerate")]
extern crate accelerate_src;

#[cfg(feature = "mkl")]
extern crate intel_mkl_src;

use anyhow::Result;
use candle_core::{Device, Tensor};

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let x = Tensor::randn(0f32, 1.0, (8 * 4096, 8 * 4096), &device)?
        .to_dtype(candle_core::DType::BF16)?;
    candle_core::cuda::set_gemm_reduced_precision_f32(false);
    candle_core::cuda::set_gemm_reduced_precision_bf16(false);
    let _x1 = x.matmul(&x)?;
    drop(_x1);
    let start_time = std::time::Instant::now();
    let _x1 = x.matmul(&x)?;
    device.synchronize()?;
    println!("fp32: {:?}", start_time.elapsed());
    drop(_x1);

    candle_core::cuda::set_gemm_reduced_precision_f32(true);
    candle_core::cuda::set_gemm_reduced_precision_bf16(true);
    let _x1 = x.matmul(&x)?;
    drop(_x1);
    let start_time = std::time::Instant::now();
    let _x1 = x.matmul(&x)?;
    device.synchronize()?;
    println!("tf32: {:?}", start_time.elapsed());
    drop(_x1);
    Ok(())
}
5
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/examples/basics.rs
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;

#[cfg(feature = "accelerate")]
extern crate accelerate_src;

use anyhow::Result;
use candle_core::{Device, Tensor};

fn main() -> Result<()> {
    let a = Tensor::new(&[[0.0f32, 1.0, 2.0], [3.0, 4.0, 5.0]], &Device::Cpu)?;
    let b = Tensor::new(&[[88.0f32, 99.0]], &Device::Cpu)?;
    let new_a = a.slice_scatter(&b, 1, 2)?;
    assert_eq!(a.to_vec2::<f32>()?, [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]);
    assert_eq!(new_a.to_vec2::<f32>()?, [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]);
    Ok(())
}
6
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/examples/cuda_sum_benchmark.rs
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;

#[cfg(feature = "accelerate")]
extern crate accelerate_src;

use std::str::FromStr;

use anyhow::Result;
use candle_core::{Device, Tensor};

fn cos_sin(n: usize, device: &Device) -> Result<Tensor> {
    let thetas: Vec<_> = (0..n).map(|i| (i as f32 / n as f32)).collect();
    let xs: Vec<_> = thetas.iter().map(|t| t.cos().abs()).collect();
    let ys: Vec<_> = thetas.iter().map(|t| t.sin().abs()).collect();
    let xs = Tensor::from_vec(xs, (n, 1), device)?;
    let ys = Tensor::from_vec(ys, (1, n), device)?;
    let ys = Tensor::cat(&[&ys, &ys, &ys, &ys, &ys, &ys], 1)?;
    Ok(xs.matmul(&ys)?)
}

fn main() -> Result<()> {
    let device = Device::new_cuda(0)?;
    let args = std::env::args().collect::<Vec<String>>();
    let n = if args.len() < 2 {
        2000usize
    } else {
        usize::from_str(&args[1])?
    };
    let xys_cpu = cos_sin(n, &Device::Cpu)?;
    let xys = cos_sin(n, &device)?;
    println!("{xys_cpu:?} {xys:?}");
    let sum_keepdim_cpu = xys_cpu.sum_keepdim(1)?;
    println!("{sum_keepdim_cpu}");
    let sum_keepdim = xys.sum_keepdim(1)?;
    println!("{sum_keepdim}");
    let start = std::time::Instant::now();
    let n_iters = 100;
    let mut v = 0f32;
    for _i in 0..n_iters {
        let sum_keepdim = xys.sum_keepdim(1)?;
        let sum_keepdim = sum_keepdim.sum_keepdim(0)?;
        let sum_keepdim: f32 = sum_keepdim.reshape(&[])?.to_scalar()?;
        v += sum_keepdim;
    }
    let elapsed = start.elapsed();
    if v > 0. {
        println!(
            "ran {n_iters} iterations, time per iter: {:?} ({v})",
            elapsed.div_f64(n_iters as f64)
        );
    }
    Ok(())
}
7
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/npy.py
import numpy as np

x = np.arange(10)

# Write a npy file.
np.save("test.npy", x)

# Write multiple values to a npz file.
values = { "x": x, "x_plus_one": x + 1 }
np.savez("test.npz", **values)
8
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/serialization_tests.rs
use candle_core::{DType, Result, Tensor};

struct TmpFile(std::path::PathBuf);

impl TmpFile {
    fn create(base: &str) -> TmpFile {
        let filename = std::env::temp_dir().join(format!(
            "candle-{}-{}-{:?}",
            base,
            std::process::id(),
            std::thread::current().id(),
        ));
        TmpFile(filename)
    }
}

impl std::convert::AsRef<std::path::Path> for TmpFile {
    fn as_ref(&self) -> &std::path::Path {
        self.0.as_path()
    }
}

impl Drop for TmpFile {
    fn drop(&mut self) {
        std::fs::remove_file(&self.0).unwrap()
    }
}

#[test]
fn npy() -> Result<()> {
    let npy = Tensor::read_npy("tests/test.npy")?;
    assert_eq!(
        npy.to_dtype(DType::U8)?.to_vec1::<u8>()?,
        [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    );
    Ok(())
}

#[test]
fn npz() -> Result<()> {
    let npz = Tensor::read_npz("tests/test.npz")?;
    assert_eq!(npz.len(), 2);
    assert_eq!(npz[0].0, "x");
    assert_eq!(npz[1].0, "x_plus_one");
    assert_eq!(
        npz[1].1.to_dtype(DType::U8)?.to_vec1::<u8>()?,
        [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    );
    Ok(())
}

#[test]
fn safetensors() -> Result<()> {
    use candle_core::safetensors::Load;

    let tmp_file = TmpFile::create("st");
    let t = Tensor::arange(0f32, 24f32, &candle_core::Device::Cpu)?;
    t.save_safetensors("t", &tmp_file)?;
    // Load from file.
    let st = candle_core::safetensors::load(&tmp_file, &candle_core::Device::Cpu)?;
    let t2 = st.get("t").unwrap();
    let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?;
    assert_eq!(diff, 0f32);
    // Load from bytes.
    let bytes = std::fs::read(tmp_file)?;
    let st = candle_core::safetensors::SliceSafetensors::new(&bytes)?;
    let t2 = st.get("t").unwrap().load(&candle_core::Device::Cpu);
    let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?;
    assert_eq!(diff, 0f32);
    Ok(())
}
9
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/random.metal
#include <metal_stdlib> #include <metal_integer> #include <metal_atomic> using namespace metal; // Constants // 2^32 and 1/2^32. Useful for converting between float and uint. static constexpr constant ulong UNIF01_NORM32 = 4294967296; static constexpr constant float UNIF01_INV32 = 2.328306436538696289e-10; // 2 * pi static constexpr constant float TWO_PI = 2.0 * M_PI_F; static constexpr constant int3 S1 = {13, 19, 12}; static constexpr constant int3 S2 = {2, 25, 4}; static constexpr constant int3 S3 = {3, 11, 17}; // Used to prevent bad seeds. static constexpr constant uint64_t PHI[16] = { 0x9E3779B97F4A7C15, 0xF39CC0605CEDC834, 0x1082276BF3A27251, 0xF86C6A11D0C18E95, 0x2767F0B153D27B7F, 0x0347045B5BF1827F, 0x01886F0928403002, 0xC1D64BA40F335E36, 0xF06AD7AE9717877E, 0x85839D6EFFBD7DC6, 0x64D325D1C5371682, 0xCADD0CCCFDFFBBE1, 0x626E33B8D04B4331, 0xBBF73C790D94F79D, 0x471C4AB3ED3D82A5, 0xFEC507705E4AE6E5, }; // Combined Tausworthe and LCG Random Number Generator. // https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-computing/chapter-37-efficient-random-number-generation-and-application // https://indico.cern.ch/event/93877/contributions/2118070/attachments/1104200/1575343/acat3_revised_final.pdf struct HybridTaus { float state; HybridTaus() thread = default; HybridTaus() threadgroup = default; HybridTaus() device = default; HybridTaus() constant = default; // Generate seeds for each thread. METAL_FUNC static uint4 seed_per_thread(const ulong4 seeds) { return uint4(ulong4(seeds) * ulong4(PHI[0], PHI[1], PHI[2], PHI[3]) * ulong4(1099087573UL)); } // Tausworthe generator. METAL_FUNC static uint taus(const uint z, const int3 s, const uint M) { uint b = (((z << s.x) ^ z) >> s.y); return (((z & M) << s.z) ^ b); } // LCG generator. METAL_FUNC static uint lcg(const uint z) { return (1664525 * z + 1013904223UL); } // Initialize the RNG state. METAL_FUNC static HybridTaus init(const ulong4 seeds) { uint4 seed = seed_per_thread(seeds); // Seed #1 uint z1 = taus(seed.x, S1, 4294967294UL); uint z2 = taus(seed.y, S2, 4294967288UL); uint z3 = taus(seed.z, S3, 4294967280UL); uint z4 = lcg(seed.x); // Seed #2 uint r1 = (z1^z2^z3^z4^seed.y); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); // Seed #3 r1 = (z1^z2^z3^z4^seed.z); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); // Seed #4 r1 = (z1^z2^z3^z4^seed.w); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); HybridTaus rng; rng.state = (z1^z2^z3^z4) * UNIF01_INV32; return rng; } METAL_FUNC float rand() { uint seed = this->state * UNIF01_NORM32; uint z1 = taus(seed, S1, 429496729UL); uint z2 = taus(seed, S2, 4294967288UL); uint z3 = taus(seed, S3, 429496280UL); uint z4 = lcg(seed); thread float result = this->state; this->state = (z1^z2^z3^z4) * UNIF01_INV32; return result; } }; template<typename T> METAL_FUNC void rand_uniform( constant size_t &size, constant float &min, constant float &max, device atomic_uint *seed, device T *out, uint tid [[thread_position_in_grid]] ) { if (tid >= size) { return; } // Evenly sized vectors need an offset when writing the mirror element. 
uint off = 1 - size % 2; float diff = abs(min - max); uint s = atomic_load_explicit(seed, memory_order_relaxed); HybridTaus rng = HybridTaus::init({ulong(s), tid, 1, 1}); out[tid] = static_cast<T>(rng.rand() * diff + min); if (tid == 0) { atomic_store_explicit(seed, uint(rng.rand() * UNIF01_NORM32), memory_order_relaxed); // Return early if tid == 0 && off == 0, otherwise we will write to out[size]. if (off == 0) return; } // Use symmetry to fill the other half of the array. out[size - off - tid] = static_cast<T>(rng.rand() * diff + min); } // Create Gaussian normal distribution using Box-Muller transform: // https://en.wikipedia.org/wiki/Box–Muller_transform template<typename T> METAL_FUNC void normal( constant size_t &size, constant float &mean, constant float &stddev, device atomic_uint *seed, device T *out, uint tid [[thread_position_in_grid]] ) { if (tid >= size) { return; } // Evenly sized vectors need an offset when writing the mirror element. uint off = 1 - size % 2; uint s = atomic_load_explicit(seed, memory_order_relaxed); HybridTaus rng = HybridTaus::init({ulong(s), tid, 1, 1}); float u1 = rng.rand(); float u2 = rng.rand(); float cosval; float sinval = sincos(TWO_PI * u2, cosval); float mag = stddev * sqrt(-2.0 * log(u1)); float z0 = mag * cosval + mean; float z1 = mag * sinval + mean; out[tid] = static_cast<T>(z0); if (tid == 0) { atomic_store_explicit(seed, uint(rng.rand() * UNIF01_NORM32), memory_order_relaxed); // Return early if tid == 0 && off == 0, otherwise we will write to out[size]. if (off == 0) return; } // Use symmetry to fill the other half of the array. out[size - off - tid] = static_cast<T>(z1); } #define UNIFORM_OP(NAME, T) \ kernel void rand_uniform_##NAME( \ constant size_t &size, \ constant float &min, \ constant float &max, \ device atomic_uint *seed, \ device T *out, \ uint tid [[thread_position_in_grid]] \ ) { \ rand_uniform<T>(size, min, max, seed, out, tid); \ } \ #define NORMAL_OP(NAME, T) \ kernel void rand_normal_##NAME( \ constant size_t &size, \ constant float &mean, \ constant float &stddev, \ device atomic_uint *seed, \ device T *out, \ uint tid [[thread_position_in_grid]] \ ) { \ normal<T>(size, mean, stddev, seed, out, tid); \ } \ #define RANDOM_OPS(NAME, T) \ UNIFORM_OP(NAME, T) \ NORMAL_OP(NAME, T) \ RANDOM_OPS(f32, float) RANDOM_OPS(f16, half) #if __METAL_VERSION__ >= 310 RANDOM_OPS(bf16, bfloat) #endif
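For reference, the `normal` kernel above is a direct implementation of the Box-Muller transform: two independent uniforms $u_1, u_2 \in (0, 1)$ produced by the HybridTaus generator are mapped to two independent standard normals, one written at `tid` and the other at the mirrored position, each scaled and shifted by `stddev` and `mean`:

$$z_0 = \sqrt{-2\ln u_1}\,\cos(2\pi u_2), \qquad z_1 = \sqrt{-2\ln u_1}\,\sin(2\pi u_2), \qquad \text{out} = \mu + \sigma z$$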
0
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/indexing.metal
#include <metal_stdlib> using namespace metal; METAL_FUNC uint get_strided_index( uint idx, constant size_t &num_dims, constant size_t *dims, constant size_t *strides ) { uint strided_i = 0; for (uint d = 0; d < num_dims; d++) { uint dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } template<typename TYPENAME, typename INDEX_TYPENAME> METAL_FUNC void index( constant size_t &dst_size, constant size_t &left_size, constant size_t &src_dim_size, constant size_t &right_size, constant size_t &ids_size, constant bool &contiguous, constant size_t *src_dims, constant size_t *src_strides, const device TYPENAME *input, const device INDEX_TYPENAME *input_ids, device TYPENAME *output, uint tid [[ thread_position_in_grid ]] ) { if (tid >= dst_size) { return; } const size_t id_i = (tid / right_size) % ids_size; const INDEX_TYPENAME input_i = min(input_ids[id_i], (INDEX_TYPENAME)(src_dim_size - 1)); const size_t right_rank_i = tid % right_size; const size_t left_rank_i = tid / right_size / ids_size; /* // Force prevent out of bounds indexing // since there doesn't seem to be a good way to force crash // No need to check for zero we're only allowing unsized. */ const size_t src_i = left_rank_i * src_dim_size * right_size + input_i * right_size + right_rank_i; const size_t strided_src_i = contiguous ? src_i : get_strided_index(src_i, src_dim_size, src_dims, src_strides); output[tid] = input[strided_src_i]; } # define INDEX_OP(NAME, INDEX_TYPENAME, TYPENAME) \ kernel void NAME( \ constant size_t &dst_size, \ constant size_t &left_size, \ constant size_t &src_dim_size, \ constant size_t &right_size, \ constant size_t &ids_size, \ constant bool &contiguous, \ constant size_t *src_dims, \ constant size_t *src_strides, \ const device TYPENAME *input, \ const device INDEX_TYPENAME *input_ids, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ index<TYPENAME, INDEX_TYPENAME>(dst_size, left_size, src_dim_size, right_size, ids_size, contiguous, src_dims, src_strides, input, input_ids, output, tid); \ } template<typename TYPENAME, typename INDEX_TYPENAME> METAL_FUNC void gather( constant size_t &dst_size, constant size_t &left_size, constant size_t &src_dim_size, constant size_t &right_size, constant size_t &ids_size, const device TYPENAME *input, const device INDEX_TYPENAME *input_ids, device TYPENAME *output, uint tid [[ thread_position_in_grid ]] ) { if (tid >= dst_size) { return; } const INDEX_TYPENAME input_i = input_ids[tid]; const size_t right_rank_i = tid % right_size; const size_t left_rank_i = tid / right_size / ids_size; const size_t src_i = (left_rank_i * src_dim_size + input_i) * right_size + right_rank_i; output[tid] = input[src_i]; } # define GATHER_OP(NAME, INDEX_TYPENAME, TYPENAME) \ kernel void NAME( \ constant size_t &dst_size, \ constant size_t &left_size, \ constant size_t &src_dim_size, \ constant size_t &right_size, \ constant size_t &ids_size, \ const device TYPENAME *input, \ const device INDEX_TYPENAME *input_ids, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ gather<TYPENAME, INDEX_TYPENAME>(dst_size, left_size, src_dim_size, right_size, ids_size, input, input_ids, output, tid); \ } template<typename TYPENAME, typename INDEX_TYPENAME> METAL_FUNC void scatter_add( constant size_t &dst_size, constant size_t &left_size, constant size_t &src_dim_size, constant size_t &right_size, constant size_t &dst_dim_size, const device TYPENAME *input, const device 
INDEX_TYPENAME *input_ids, device TYPENAME *output, uint tid [[ thread_position_in_grid ]] ) { if (tid >= dst_size) { return; } const size_t right_rank_i = tid % right_size; const size_t left_rank_i = tid / right_size; for (unsigned int j = 0; j < src_dim_size; ++j) { const size_t src_i = (left_rank_i * src_dim_size + j) * right_size + right_rank_i; const INDEX_TYPENAME idx = input_ids[src_i]; const size_t dst_i = (left_rank_i * dst_dim_size + idx) * right_size + right_rank_i; output[dst_i] += input[src_i]; } } # define SCATTER_ADD_OP(NAME, INDEX_TYPENAME, TYPENAME) \ kernel void NAME( \ constant size_t &dst_size, \ constant size_t &left_size, \ constant size_t &src_dim_size, \ constant size_t &right_size, \ constant size_t &dst_dim_size, \ const device TYPENAME *input, \ const device INDEX_TYPENAME *input_ids, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ scatter_add<TYPENAME, INDEX_TYPENAME>(dst_size, left_size, src_dim_size, right_size, dst_dim_size, input, input_ids, output, tid); \ } template<typename TYPENAME, typename INDEX_TYPENAME> METAL_FUNC void index_add( constant size_t &dst_size, constant size_t &left_size, constant size_t &src_dim_size, constant size_t &right_size, constant size_t &dst_dim_size, constant size_t &ids_dim_size, const device TYPENAME *input, const device INDEX_TYPENAME *input_ids, device TYPENAME *output, uint tid [[ thread_position_in_grid ]] ) { if (tid >= dst_size) { return; } const size_t right_rank_i = tid % right_size; const size_t left_rank_i = tid / right_size; for (unsigned int j = 0; j < ids_dim_size; ++j) { const INDEX_TYPENAME idx = input_ids[j]; const size_t src_i = (left_rank_i * src_dim_size + j) * right_size + right_rank_i; const size_t dst_i = (left_rank_i * dst_dim_size + idx) * right_size + right_rank_i; output[dst_i] += input[src_i]; } } # define INDEX_ADD_OP(NAME, INDEX_TYPENAME, TYPENAME) \ kernel void NAME( \ constant size_t &dst_size, \ constant size_t &left_size, \ constant size_t &src_dim_size, \ constant size_t &right_size, \ constant size_t &dst_dim_size, \ constant size_t &ids_dim_size, \ const device TYPENAME *input, \ const device INDEX_TYPENAME *input_ids, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ index_add<TYPENAME, INDEX_TYPENAME>(dst_size, left_size, src_dim_size, right_size, dst_dim_size, ids_dim_size, input, input_ids, output, tid); \ } INDEX_OP(is_i64_f32, int64_t, float) INDEX_OP(is_i64_f16, int64_t, half) #if defined(__HAVE_BFLOAT__) INDEX_OP(is_i64_bf16, int64_t, bfloat) #endif INDEX_OP(is_u32_u8, uint32_t, uint8_t) INDEX_OP(is_u32_u32, uint32_t, uint32_t) INDEX_OP(is_u32_f32, uint32_t, float) INDEX_OP(is_u32_f16, uint32_t, half) #if defined(__HAVE_BFLOAT__) INDEX_OP(is_u32_bf16, uint32_t, bfloat) #endif INDEX_OP(is_u8_u8, uint8_t, uint8_t) INDEX_OP(is_u8_u32, uint8_t, uint32_t) INDEX_OP(is_u8_f32, uint8_t, float) INDEX_OP(is_u8_f16, uint8_t, half) #if defined(__HAVE_BFLOAT__) INDEX_OP(is_u8_bf16, uint8_t, bfloat) #endif GATHER_OP(gather_u32_f32, uint, float) GATHER_OP(gather_u32_f16, uint, half) #if defined(__HAVE_BFLOAT__) GATHER_OP(gather_u32_bf16, uint, bfloat) #endif GATHER_OP(gather_u32_u32, uint, uint) SCATTER_ADD_OP(sa_u32_f32, uint32_t, float) SCATTER_ADD_OP(sa_u8_f32, uint8_t, float) SCATTER_ADD_OP(sa_i64_f32, int64_t, float) SCATTER_ADD_OP(sa_u32_f16, uint32_t, half) SCATTER_ADD_OP(sa_u8_f16, uint8_t, half) SCATTER_ADD_OP(sa_i64_f16, int64_t, half) #if defined(__HAVE_BFLOAT__) SCATTER_ADD_OP(sa_u32_bf16, uint32_t, bfloat) 
SCATTER_ADD_OP(sa_u8_bf16, uint8_t, bfloat) SCATTER_ADD_OP(sa_i64_bf16, int64_t, bfloat) #endif // i64 INDEX_ADD_OP(ia_i64_f16, int64_t, half) INDEX_ADD_OP(ia_i64_f32, int64_t, float) INDEX_ADD_OP(ia_i64_i64, int64_t, int64_t) INDEX_ADD_OP(ia_i64_u32, int64_t, uint32_t) INDEX_ADD_OP(ia_i64_u8, int64_t, uint8_t) #if defined(__HAVE_BFLOAT__) INDEX_ADD_OP(ia_i64_bf16, int64_t, bfloat) #endif // u32 INDEX_ADD_OP(ia_u32_f16, uint32_t, half) INDEX_ADD_OP(ia_u32_f32, uint32_t, float) INDEX_ADD_OP(ia_u32_i64, uint32_t, int64_t) INDEX_ADD_OP(ia_u32_u32, uint32_t, uint32_t) INDEX_ADD_OP(ia_u32_u8, uint32_t, uint8_t) #if defined(__HAVE_BFLOAT__) INDEX_ADD_OP(ia_u32_bf16, uint32_t, bfloat) #endif // u8 INDEX_ADD_OP(ia_u8_f16, uint8_t, half) INDEX_ADD_OP(ia_u8_f32, uint8_t, float) INDEX_ADD_OP(ia_u8_i64, uint8_t, int64_t) INDEX_ADD_OP(ia_u8_u32, uint8_t, uint32_t) INDEX_ADD_OP(ia_u8_u8, uint8_t, uint8_t) #if defined(__HAVE_BFLOAT__) INDEX_ADD_OP(ia_u8_bf16, uint8_t, bfloat) #endif
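The indexing kernels above all flatten the destination index `tid` into a `(left, indexed, right)` triple using the same `left_size` / `right_size` decomposition. As a minimal illustration of that address arithmetic, here is a hypothetical CPU reference (in Rust, not part of the crate) mirroring the `gather` kernel: one index per output element, each output reading `input[(left_i * src_dim_size + id) * right_size + right_i]`.

```rust
// Hypothetical CPU reference for the `gather` kernel's index computation.
// The destination has shape (left, ids_dim, right), flattened row-major.
fn gather_ref(
    input: &[f32],
    ids: &[usize],
    left_size: usize,
    src_dim_size: usize,
    right_size: usize,
) -> Vec<f32> {
    // One id per destination element.
    let ids_size = ids.len() / (left_size * right_size);
    let dst_size = left_size * ids_size * right_size;
    (0..dst_size)
        .map(|tid| {
            let right_i = tid % right_size;
            let left_i = tid / right_size / ids_size;
            let src_i = (left_i * src_dim_size + ids[tid]) * right_size + right_i;
            input[src_i]
        })
        .collect()
}
```

For example, with `left_size = 1`, `right_size = 1`, `src_dim_size = 3` and `ids = [2, 0]`, the reference picks `input[2]` then `input[0]`, matching what the Metal kernel computes per thread.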
1
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/cast.metal
#include <metal_stdlib> METAL_FUNC uint get_strided_index( uint idx, constant size_t &num_dims, constant size_t *dims, constant size_t *strides ) { uint strided_i = 0; for (uint d = 0; d < num_dims; d++) { uint dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } using namespace metal; #define CAST(FN_NAME, FN_NAME_STRIDED, LEFT_TYPENAME, RIGHT_TYPENAME) \ kernel void FN_NAME( \ constant size_t &dim, \ device const LEFT_TYPENAME *input, \ device RIGHT_TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = static_cast<RIGHT_TYPENAME>(input[tid]); \ } \ kernel void FN_NAME_STRIDED( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ device const LEFT_TYPENAME *input, \ device RIGHT_TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = static_cast<RIGHT_TYPENAME>(input[get_strided_index(tid, num_dims, dims, strides)]); \ } \ #define CAST_THROUGH(FN_NAME, FN_NAME_STRIDED, LEFT_TYPENAME, RIGHT_TYPENAME, IR_TYPENAME) \ kernel void FN_NAME( \ constant size_t &dim, \ device const LEFT_TYPENAME *input, \ device RIGHT_TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = static_cast<RIGHT_TYPENAME>(static_cast<IR_TYPENAME>(input[tid])); \ } \ kernel void FN_NAME_STRIDED( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ device const LEFT_TYPENAME *input, \ device RIGHT_TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = static_cast<RIGHT_TYPENAME>(static_cast<IR_TYPENAME>(input[get_strided_index(tid, num_dims, dims, strides)])); \ } \ // u32 CAST(cast_u32_f32, cast_u32_f32_strided, uint32_t, float) CAST(cast_u32_u8, cast_u32_u8_strided, uint32_t, uint8_t) CAST(cast_u32_f16, cast_u32_f16_strided, uint32_t, half) #if __METAL_VERSION__ >= 220 CAST(cast_u32_i64, cast_u32_i64_strided, uint32_t, int64_t) #endif #if defined(__HAVE_BFLOAT__) CAST(cast_u32_bf16, cast_u32_bf16_strided, uint32_t, bfloat) #endif // u8 CAST(cast_u8_u32, cast_u8_u32_strided, uint8_t, uint32_t) CAST(cast_u8_f32, cast_u8_f32_strided, uint8_t, float) CAST(cast_u8_f16, cast_u8_f16_strided, uint8_t, half) #if __METAL_VERSION__ >= 220 CAST(cast_u8_i64, cast_u8_i64_strided, uint8_t, int64_t) #endif #if defined(__HAVE_BFLOAT__) CAST(cast_u8_bf16, cast_u8_bf16_strided, uint8_t, bfloat) #endif // f16 CAST(cast_f16_f32, cast_f16_f32_strided, half, float) CAST(cast_f16_u8, cast_f16_u8_strided, half, uint8_t) CAST(cast_f16_u32, cast_f16_u32_strided, half, uint32_t) CAST(cast_f16_i64, cast_f16_i64_strided, half, int64_t) #if defined(__HAVE_BFLOAT__) CAST_THROUGH(cast_f16_bf16, cast_f16_bf16_strided, half, bfloat, float) #endif // i64 CAST(cast_i64_f32, cast_i64_f32_strided, int64_t, float) CAST(cast_i64_u8, cast_i64_u8_strided, int64_t, uint8_t) CAST(cast_i64_u32, cast_i64_u32_strided, int64_t, uint32_t) CAST(cast_i64_f16, cast_i64_f16_strided, int64_t, half) #if defined(__HAVE_BFLOAT__) CAST_THROUGH(cast_i64_bf16, cast_i64_bf16_strided, int64_t, bfloat, float) #endif // f32 CAST(cast_f32_f16, cast_f32_f16_strided, float, half) CAST(cast_f32_u32, cast_f32_u32_strided, float, uint32_t) CAST(cast_f32_u8, cast_f32_u8_strided, float, uint8_t) CAST(cast_f32_i64, cast_f32_i64_strided, float, int64_t) #if 
defined(__HAVE_BFLOAT__) CAST(cast_f32_bf16, cast_f32_bf16_strided, float, bfloat) #endif // bf16 #if defined(__HAVE_BFLOAT__) CAST(cast_bf16_u32, cast_bf16_u32_strided, bfloat, uint32_t) CAST(cast_bf16_i64, cast_bf16_i64_strided, bfloat, int64_t) CAST(cast_bf16_f32, cast_bf16_f32_strided, bfloat, float) CAST_THROUGH(cast_bf16_u8, cast_bf16_u8_strided, bfloat, uint8_t, float) CAST_THROUGH(cast_bf16_f16, cast_bf16_f16_strided, bfloat, half, float) #endif
2
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/unary.metal
#include <metal_stdlib> #include <metal_math> # using namespace metal; METAL_FUNC uint get_strided_index( uint idx, constant size_t &num_dims, constant size_t *dims, constant size_t *strides ) { uint strided_i = 0; for (uint d = 0; d < num_dims; d++) { uint dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } template <typename T> METAL_FUNC T sqr(T in){ return in * in; } template <typename T> METAL_FUNC T recip(T in){ return T(1.0 / in); } template <typename T> METAL_FUNC T neg(T in){ return -in; } template <typename T> METAL_FUNC T erf(T in){ float x = (float) in; // constants float a1 = 0.254829592; float a2 = -0.284496736; float a3 = 1.421413741; float a4 = -1.453152027; float a5 = 1.061405429; float p = 0.3275911; // Save the sign of x int sign = 1; if (x < 0) sign = -1; x = fabs(x); // A&S formula 7.1.26 float t = 1.0/(1.0 + p*x); float y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*exp(-x*x); return T(sign*y); } template <typename T> METAL_FUNC T id(T in) { return in; } template <typename T> METAL_FUNC T gelu_erf(T x) { return T(x * (1 + erf(x * M_SQRT1_2_F)) / 2); } template <typename T> METAL_FUNC T gelu(T x) { if (x > 5) { return x; } T x_sq = x * x; T x_cube = x_sq * x; T alpha = x + static_cast<T>(0.044715) * x_cube; T beta = (static_cast<T>(M_2_SQRTPI_F * M_SQRT1_2_F) * alpha); return static_cast<T>(0.5) * x * (static_cast<T>(1.0) + T(precise::tanh(beta))); } template <typename T> METAL_FUNC T relu(T in){ if (in < 0) { return 0; } return in; } template <typename T> METAL_FUNC T silu(T in){ return in / (static_cast<T>(1) + exp(-in)); } template <typename T> METAL_FUNC T sigmoid(T in) { return recip(static_cast<T>(1) + exp(-in)); } #define TILE_SIZE 2 #define UNARY(FN, TYPENAME, FN_NAME, FN_NAME_STRIDED) \ kernel void FN_NAME( \ constant size_t &dim, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = TYPENAME(FN(float(input[tid]))); \ } \ kernel void FN_NAME##_##strided( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ if (tid >= dim) { \ return; \ } \ output[tid] = TYPENAME(FN(float(input[get_strided_index(tid, num_dims, dims, strides)]))); \ } \ kernel void FN_NAME##_##tiled( \ constant size_t &dim, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint tid [[ thread_position_in_grid ]] \ ) { \ for (uint i = 0; i < TILE_SIZE; i++) { \ const uint idx = tid * TILE_SIZE + i; \ output[idx] = TYPENAME(FN(float(input[idx]))); \ } \ } #define UNARY_OP(NAME) \ UNARY(NAME, float, NAME##_f32, NAME##_f32_strided); \ UNARY(NAME, half, NAME##_f16, NAME##_f16_strided); #define BFLOAT_UNARY_OP(NAME) \ UNARY(NAME, bfloat, NAME##_bf16, NAME##_bf16_strided); #define COPY2D(FN_NAME, TYPENAME) \ kernel void FN_NAME( \ constant int64_t &d1, \ constant int64_t &d2, \ constant int64_t &src_s, \ constant int64_t &dst_s, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint2 idx [[thread_position_in_grid]] \ ) { \ if (idx.x >= d1 || idx.y >= d2) return; \ int64_t src_idx = idx.x * src_s + idx.y; \ int64_t dst_idx = idx.x * dst_s + idx.y; \ output[dst_idx] = input[src_idx]; \ } COPY2D(copy2d_f32, float) COPY2D(copy2d_f16, half) COPY2D(copy2d_u8, uint8_t) COPY2D(copy2d_u32, uint32_t) UNARY_OP(cos) UNARY_OP(sin) UNARY_OP(sqr) 
UNARY_OP(sqrt) UNARY_OP(neg) UNARY_OP(exp) UNARY_OP(log) UNARY_OP(gelu) UNARY_OP(silu) UNARY_OP(abs) UNARY_OP(ceil) UNARY_OP(floor) UNARY_OP(round) UNARY_OP(gelu_erf) UNARY_OP(erf) UNARY_OP(recip) UNARY_OP(relu) UNARY_OP(sign) UNARY_OP(sigmoid) UNARY(id, float, copy_f32, copy_f32_strided) UNARY(id, half, copy_f16, copy_f16_strided) UNARY(id, uint8_t, copy_u8, copy_u8_strided) UNARY(id, uint32_t, copy_u32, copy_u32_strided) // tanh may create NaN on large values, e.g. 45 rather than outputing 1. // This has been an issue for the encodec example. UNARY(precise::tanh, float, tanh_f32, tanh_f32_strided); UNARY(precise::tanh, half, tanh_f16, tanh_f16_strided); #if __METAL_VERSION__ >= 220 UNARY(id, int64_t, copy_i64, copy_i64_strided) COPY2D(copy2d_i64, int64_t) #endif #if defined(__HAVE_BFLOAT__) BFLOAT_UNARY_OP(cos) BFLOAT_UNARY_OP(sin) BFLOAT_UNARY_OP(sqr) BFLOAT_UNARY_OP(sqrt) BFLOAT_UNARY_OP(neg) BFLOAT_UNARY_OP(exp) BFLOAT_UNARY_OP(log) BFLOAT_UNARY_OP(gelu) BFLOAT_UNARY_OP(silu) BFLOAT_UNARY_OP(abs) BFLOAT_UNARY_OP(ceil) BFLOAT_UNARY_OP(floor) BFLOAT_UNARY_OP(round) BFLOAT_UNARY_OP(gelu_erf) BFLOAT_UNARY_OP(erf) BFLOAT_UNARY_OP(recip) BFLOAT_UNARY_OP(relu) BFLOAT_UNARY_OP(sign) BFLOAT_UNARY_OP(sigmoid) UNARY(id, bfloat, copy_bf16, copy_bf16_strided) UNARY(precise::tanh, bfloat, tanh_bf16, tanh_bf16_strided); COPY2D(copy2d_bf16, bfloat) #endif
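For reference, the `gelu` template above uses the usual tanh approximation (with a fast path returning `x` directly for `x > 5`, and the exact-erf variant available separately as `gelu_erf`); the constant `M_2_SQRTPI_F * M_SQRT1_2_F` is simply $\sqrt{2/\pi}$:

$$\mathrm{gelu}(x) \approx \tfrac{1}{2}\,x\left(1 + \tanh\!\left(\sqrt{\tfrac{2}{\pi}}\,\bigl(x + 0.044715\,x^{3}\bigr)\right)\right)$$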
3
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/affine.metal
#include <metal_stdlib> METAL_FUNC uint get_strided_index( uint idx, constant size_t &num_dims, constant size_t *dims, constant size_t *strides ) { uint strided_i = 0; for (uint d = 0; d < num_dims; d++) { uint dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } using namespace metal; #define AFFINE(FN_NAME, T) \ kernel void FN_NAME( \ constant size_t &dim, \ constant float &mul, \ constant float &add, \ device const T *input, \ device T *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ output[id] = T(fma(float(input[id]), mul, add)); \ } \ kernel void FN_NAME##_strided( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ constant float &mul, \ constant float &add, \ device const T *input, \ device T *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ output[id] = T(fma(float(input[get_strided_index(id, num_dims, dims, strides)]), mul, add)); \ } #define POWF(FN_NAME, TYPENAME) \ kernel void FN_NAME( \ constant size_t &dim, \ constant float &mul, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ output[id] = TYPENAME(pow(input[id], TYPENAME(mul))); \ } \ kernel void FN_NAME##_strided( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ constant float &mul, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ output[id] = TYPENAME(pow(input[get_strided_index(id, num_dims, dims, strides)], TYPENAME(mul))); \ } #define ELU(FN_NAME, TYPENAME) \ kernel void FN_NAME( \ constant size_t &dim, \ constant float &mul, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ const TYPENAME x = input[id]; \ output[id] = TYPENAME((x > 0)?x: mul * (exp(x) - 1)); \ } \ kernel void FN_NAME##_strided( \ constant size_t &dim, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ constant float &mul, \ device const TYPENAME *input, \ device TYPENAME *output, \ uint id [[ thread_position_in_grid ]] \ ) { \ if (id >= dim) { \ return; \ } \ const TYPENAME x = input[get_strided_index(id, num_dims, dims, strides)]; \ output[id] = TYPENAME((x > 0)?x: mul * (exp(x) - 1)); \ } \ AFFINE(affine_u8, uint8_t) AFFINE(affine_u32, uint32_t) AFFINE(affine_f32, float) AFFINE(affine_f16, half) POWF(powf_f32, float) POWF(powf_f16, half) ELU(elu_f32, float) ELU(elu_f16, half) #if defined(__HAVE_BFLOAT__) AFFINE(affine_bf16, bfloat); POWF(powf_bf16, bfloat); ELU(elu_bf16, bfloat); #endif
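The three kernel families in this file compute the following elementwise functions, with `mul` and `add` passed as f32 scalars and the affine case evaluated through `fma` in float before casting back to the output type:

$$\mathrm{affine}(x) = x \cdot \text{mul} + \text{add}, \qquad \mathrm{powf}(x) = x^{\text{mul}}, \qquad \mathrm{elu}(x) = \begin{cases} x & x > 0 \\ \text{mul}\,(e^{x} - 1) & x \le 0 \end{cases}$$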
4
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/tests.rs
use super::*; use half::{bf16, f16}; use metal::MTLResourceOptions; use rand::Rng; fn read_to_vec<T: Clone>(buffer: &Buffer, n: usize) -> Vec<T> { let ptr = buffer.contents() as *const T; assert!(!ptr.is_null()); let slice = unsafe { std::slice::from_raw_parts(ptr, n) }; slice.to_vec() } fn new_buffer<T>(device: &Device, data: &[T]) -> Buffer { let options = MTLResourceOptions::StorageModeManaged; let ptr = data.as_ptr() as *const c_void; let size = std::mem::size_of_val(data) as u64; device.new_buffer_with_data(ptr, size, options) } fn device() -> Device { Device::system_default().unwrap() } fn approx(v: Vec<f32>, digits: i32) -> Vec<f32> { let b = 10f32.powi(digits); v.iter().map(|t| f32::round(t * b) / b).collect() } fn approx_f16(v: Vec<f16>, digits: i32) -> Vec<f32> { let b = 10f32.powi(digits); v.iter().map(|t| f32::round(t.to_f32() * b) / b).collect() } fn approx_bf16(v: Vec<bf16>, digits: i32) -> Vec<f32> { let b = 10f32.powi(digits); v.iter().map(|t| f32::round(t.to_f32() * b) / b).collect() } fn run<T: Clone>(v: &[T], name: unary::contiguous::Kernel) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let input = BufferOffset { buffer: &input, offset_in_bytes: 0, }; let output = new_buffer(&device, v); call_unary_contiguous( &device, command_buffer, &kernels, name, v.len(), input, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, v.len()) } fn run_binary<T: Clone>(x: &[T], y: &[T], name: binary::contiguous::Kernel) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let left = new_buffer(&device, x); let right = new_buffer(&device, y); let output = device.new_buffer(std::mem::size_of_val(x) as u64, options); call_binary_contiguous( &device, command_buffer, &kernels, name, x.len(), BufferOffset::zero_offset(&left), BufferOffset::zero_offset(&right), &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, x.len()) } fn run_strided<T: Clone>( v: &[T], kernel: unary::strided::Kernel, shape: &[usize], strides: &[usize], offset: usize, ) -> Vec<T> { let device = device(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let input = BufferOffset { buffer: &input, offset_in_bytes: offset, }; let output_b = new_buffer(&device, v); let output = BufferOffset { buffer: &output_b, offset_in_bytes: 0, }; let kernels = Kernels::new(); call_unary_strided( &device, command_buffer, &kernels, kernel, shape, input, strides, output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output_b, v.len()) } #[test] fn cos_f32() { let v = vec![1.0f32, 2.0, 3.0]; let results = run(&v, unary::contiguous::cos::FLOAT); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!(approx(results, 4), vec![0.5403, -0.4161, -0.99]); assert_eq!(approx(expected, 4), vec![0.5403, -0.4161, -0.99]); let v = vec![1.0f32; 10_000]; let results = run(&v, unary::contiguous::cos::FLOAT); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!(approx(results, 4), vec![0.5403; 10_000]); assert_eq!(approx(expected, 4), vec![0.5403; 10_000]); } 
#[test] fn cos_f32_strided() { let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let shape = vec![6]; let strides = vec![1]; let offset = 0; let results = run_strided(&v, unary::strided::cos::FLOAT, &shape, &strides, offset); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!( approx(results, 4), vec![0.5403, -0.4161, -0.99, -0.6536, 0.2837, 0.9602] ); assert_eq!( approx(expected, 4), vec![0.5403, -0.4161, -0.99, -0.6536, 0.2837, 0.9602] ); // Contiguous let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let shape = vec![3, 2]; let strides = vec![2, 1]; let offset = 0; let results = run_strided(&v, unary::strided::cos::FLOAT, &shape, &strides, offset); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!( approx(results, 4), vec![0.5403, -0.4161, -0.99, -0.6536, 0.2837, 0.9602] ); assert_eq!( approx(expected, 4), vec![0.5403, -0.4161, -0.99, -0.6536, 0.2837, 0.9602] ); // Transposed let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let shape = vec![3, 2]; let strides = vec![1, 3]; let offset = 0; let results = run_strided(&v, unary::strided::cos::FLOAT, &shape, &strides, offset); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!( approx(results, 4), vec![0.5403, -0.6536, -0.4161, 0.2837, -0.99, 0.9602] ); assert_eq!( approx(expected, 4), vec![0.5403, -0.4161, -0.99, -0.6536, 0.2837, 0.9602] ); // Very large let v = vec![1.0f32; 10_000]; let shape = vec![2, 5_000]; let strides = vec![2, 1]; let offset = 0; let results = run_strided(&v, unary::strided::cos::FLOAT, &shape, &strides, offset); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!(approx(results, 4), vec![0.5403; 10_000]); assert_eq!(approx(expected, 4), vec![0.5403; 10_000]); } #[test] fn cos_strided_random() { let v: Vec<_> = (0..10_000).map(|_| rand::random::<f32>()).collect(); let shape = vec![5_000, 2]; let strides = vec![1, 5_000]; let offset = 0; let results = run_strided(&v, unary::strided::cos::FLOAT, &shape, &strides, offset); let expected: Vec<_> = v.iter().map(|v| v.cos()).collect(); assert_eq!(approx(vec![results[0]], 4), approx(vec![expected[0]], 4)); assert_eq!( approx(vec![results[1]], 4), approx(vec![expected[5_000]], 4) ); assert_eq!(approx(vec![results[2]], 4), approx(vec![expected[1]], 4)); assert_eq!( approx(vec![results[3]], 4), approx(vec![expected[5_001]], 4) ); assert_eq!( approx(vec![results[5_000]], 4), approx(vec![expected[2_500]], 4) ); } #[test] fn gelu_f16() { let v: Vec<f16> = [-10f32, -1.0, 0., 1., 2., 3., 10.0, 20.0] .iter() .map(|v| f16::from_f32(*v)) .collect(); let expected: Vec<f32> = vec![-0.0, -0.16, 0.0, 0.84, 1.96, 3.0, 10.0, 20.0]; let results = run(&v, unary::contiguous::gelu::HALF); assert_eq!(approx_f16(results, 2), expected); } #[test] fn gelu_f32() { let v: Vec<f32> = vec![-10f32, -1.0, 0., 1., 2., 3., 10.0, 20.0]; let expected: Vec<f32> = vec![-0.0, -0.159, 0.0, 0.841, 1.955, 2.996, 10.0, 20.0]; let results = run(&v, unary::contiguous::gelu::FLOAT); assert_eq!(approx(results, 3), expected); } #[test] fn silu_f16() { let v: Vec<f16> = [-10f32, -1.0, 0., 1., 2., 3., 10.0, 20.0] .iter() .map(|v| f16::from_f32(*v)) .collect(); let expected: Vec<f32> = vec![-0.0, -0.27, 0.0, 0.73, 1.76, 2.86, 10.0, 20.0]; let results = run(&v, unary::contiguous::silu::HALF); assert_eq!(approx_f16(results, 2), expected); } #[test] fn silu_f32() { let v: Vec<f32> = vec![-10f32, -1.0, 0., 1., 2., 3., 10.0, 20.0]; let expected: Vec<f32> = vec![-0.0, -0.269, 0.0, 0.731, 1.762, 2.858, 10.0, 20.0]; let results = run(&v, 
unary::contiguous::silu::FLOAT); assert_eq!(approx(results, 3), expected); } #[test] fn binary_add_f32() { let left = vec![1.0f32, 2.0, 3.0]; let right = vec![2.0f32, 3.1, 4.2]; let results = run_binary(&left, &right, binary::contiguous::add::FLOAT); let expected: Vec<_> = left .iter() .zip(right.iter()) .map(|(&x, &y)| x + y) .collect(); assert_eq!(approx(results, 4), vec![3.0f32, 5.1, 7.2]); assert_eq!(approx(expected, 4), vec![3.0f32, 5.1, 7.2]); } #[test] fn binary_ops_bf16() { let lhs: Vec<bf16> = [1.1f32, 2.2, 3.3].into_iter().map(bf16::from_f32).collect(); let rhs: Vec<bf16> = [4.2f32, 5.5f32, 6.91f32] .into_iter() .map(bf16::from_f32) .collect(); macro_rules! binary_op { ($opname:ident, $opexpr:expr) => {{ let results = run_binary(&lhs, &rhs, binary::contiguous::$opname::BFLOAT); let expected: Vec<bf16> = lhs .iter() .zip(rhs.iter()) .map(|(x, y): (&bf16, &bf16)| $opexpr(*x, *y)) .collect(); assert_eq!(results, expected); }}; } binary_op!(add, |x, y| x + y); binary_op!(sub, |x, y| x - y); binary_op!(mul, |x, y| x * y); binary_op!(div, |x, y| x / y); binary_op!(min, |x: bf16, y| x.min(y)); binary_op!(max, |x: bf16, y| x.max(y)); } fn run_cast<T: Clone, U: Clone>(v: &[T], name: &'static str) -> Vec<U> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let options = MTLResourceOptions::StorageModeManaged; let size = (v.len() * std::mem::size_of::<U>()) as u64; let output = device.new_buffer(size, options); call_cast_contiguous( &device, command_buffer, &kernels, name, v.len(), BufferOffset::zero_offset(&input), &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, v.len()) } #[test] fn cast_f32() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // f32 -> f16 let results: Vec<half::f16> = run_cast(&v_f32, "cast_f32_f16"); assert_eq!(results, v_f16); // f32 -> bf16 let results: Vec<bf16> = run_cast(&v_f32, "cast_f32_bf16"); assert_eq!(results, v_bf16); // f32 -> u32 let results: Vec<u32> = run_cast(&v_f32, "cast_f32_u32"); assert_eq!(results, v_u32); // f32 -> u8 let results: Vec<u8> = run_cast(&v_f32, "cast_f32_u8"); assert_eq!(results, v_u8); // f32 -> i64 let results: Vec<i64> = run_cast(&v_f32, "cast_f32_i64"); assert_eq!(results, v_i64); } #[test] fn cast_f16() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // f16 -> f32 let results: Vec<f32> = run_cast(&v_f16, "cast_f16_f32"); assert_eq!(results, v_f32); // f16 -> bf16 let results: Vec<bf16> = run_cast(&v_f16, "cast_f16_bf16"); assert_eq!(results, v_bf16); // f16 -> u32 let results: Vec<u32> = run_cast(&v_f16, "cast_f16_u32"); 
assert_eq!(results, v_u32); // f16 -> u8 let results: Vec<u8> = run_cast(&v_f16, "cast_f16_u8"); assert_eq!(results, v_u8); // f16 -> i64 let results: Vec<i64> = run_cast(&v_f16, "cast_f16_i64"); assert_eq!(results, v_i64); } #[test] fn cast_bf16() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // bf16 -> f32 let results: Vec<f32> = run_cast(&v_bf16, "cast_bf16_f32"); assert_eq!(results, v_f32); // bf16 -> f16 let results: Vec<f16> = run_cast(&v_bf16, "cast_bf16_f16"); assert_eq!(results, v_f16); // bf16 -> u32 let results: Vec<u32> = run_cast(&v_bf16, "cast_bf16_u32"); assert_eq!(results, v_u32); // bf16 -> u8 let results: Vec<u8> = run_cast(&v_bf16, "cast_bf16_u8"); assert_eq!(results, v_u8); // bf16 -> i64 let results: Vec<i64> = run_cast(&v_bf16, "cast_bf16_i64"); assert_eq!(results, v_i64); } #[test] fn cast_u32() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // u32 -> f32 let results: Vec<f32> = run_cast(&v_u32, "cast_u32_f32"); assert_eq!(results, v_f32); // u32 -> f16 let results: Vec<f16> = run_cast(&v_u32, "cast_u32_f16"); assert_eq!(results, v_f16); // u32 -> bf16 let results: Vec<bf16> = run_cast(&v_u32, "cast_u32_bf16"); assert_eq!(results, v_bf16); // u32 -> u8 let results: Vec<u8> = run_cast(&v_u32, "cast_u32_u8"); assert_eq!(results, v_u8); // u32 -> i64 let results: Vec<i64> = run_cast(&v_u32, "cast_u32_i64"); assert_eq!(results, v_i64); } #[test] fn cast_u8() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // u8 -> f32 let results: Vec<f32> = run_cast(&v_u8, "cast_u8_f32"); assert_eq!(results, v_f32); // u8 -> f16 let results: Vec<f16> = run_cast(&v_u8, "cast_u8_f16"); assert_eq!(results, v_f16); // u8 -> bf16 let results: Vec<bf16> = run_cast(&v_u8, "cast_u8_bf16"); assert_eq!(results, v_bf16); // u8 -> u32 let results: Vec<u32> = run_cast(&v_u8, "cast_u8_u32"); assert_eq!(results, v_u32); // u8 -> i64 let results: Vec<i64> = run_cast(&v_u8, "cast_u8_i64"); assert_eq!(results, v_i64); } #[test] fn cast_i64() { let v_f64 = [1.0f64, 2.0, 3.0]; let v_f32: Vec<f32> = v_f64.iter().map(|&v| v as f32).collect(); let v_f16: Vec<f16> = v_f64.iter().map(|&v| f16::from_f32(v as f32)).collect(); let v_bf16: Vec<bf16> = v_f64.iter().map(|&v| bf16::from_f32(v as f32)).collect(); let v_u32: Vec<u32> = v_f64.iter().map(|&v| v as u32).collect(); let v_u8: Vec<u8> = v_f64.iter().map(|&v| v 
as u8).collect(); let v_i64: Vec<i64> = v_f64.iter().map(|&v| v as i64).collect(); // i64 -> f32 let results: Vec<f32> = run_cast(&v_i64, "cast_i64_f32"); assert_eq!(results, v_f32); // i64 -> f16 let results: Vec<f16> = run_cast(&v_i64, "cast_i64_f16"); assert_eq!(results, v_f16); // i64 -> bf16 let results: Vec<bf16> = run_cast(&v_i64, "cast_i64_bf16"); assert_eq!(results, v_bf16); // i64 -> u32 let results: Vec<u32> = run_cast(&v_i64, "cast_i64_u32"); assert_eq!(results, v_u32); // i64 -> u8 let results: Vec<u8> = run_cast(&v_i64, "cast_i64_u8"); assert_eq!(results, v_u8); } fn run_affine<T: Clone>(v: &[T], mul: f64, add: f64) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let output = new_buffer(&device, v); let size = v.len(); call_affine( &device, command_buffer, &kernels, "affine_f32", size, BufferOffset::zero_offset(&input), &output, mul as f32, add as f32, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, v.len()) } fn run_affine_strided<T: Clone>( v: &[T], shape: &[usize], strides: &[usize], mul: f64, add: f64, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let output = new_buffer(&device, v); call_affine_strided( &device, command_buffer, &kernels, "affine_f32_strided", shape, BufferOffset::zero_offset(&input), strides, &output, mul as f32, add as f32, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); let len: usize = shape.iter().product(); read_to_vec(&output, len) } #[test] fn affine() { let input = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]; let mul = 1.5; let add = 1.1; let result = run_affine(&input, mul, add); assert_eq!(result, vec![2.6, 4.1, 5.6, 7.1, 8.6, 10.1, 11.6, 13.1]); let input = [1.0f32; 40_000]; let mul = 1.5; let add = 1.1; let result = run_affine(&input, mul, add); assert_eq!(result, vec![2.6; 40_000]); } #[test] fn affine_strided() { let input = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]; let mul = 1.5; let add = 1.1; let shape = [4]; let strides = [2]; let result = run_affine_strided(&input, &shape, &strides, mul, add); // 1 on 2 assert_eq!(result, vec![2.6, 5.6, 8.6, 11.6]); } #[test] fn index_select() { let embedding = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]; let shape = [5, 2]; let stride = [2, 1]; let ids = [0u32, 4, 2]; let dim = 0; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u32_f32"); assert_eq!(result, vec![1.0f32, 2.0, 9.0, 10.0, 5.0, 6.0]); let embedding = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]; let shape = [2, 5]; let stride = [1, 2]; let ids = [0u32, 1, 0]; let dim = 0; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u32_f32"); assert_eq!( result, vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 1.0f32, 2.0, 3.0, 4.0, 5.0] ); } #[test] fn index_select_strided() { let embedding = (0..16).map(|x| x as f32).collect::<Vec<_>>(); let shape = [2, 2]; let stride = [2, 4]; let ids = [0u32]; let dim = 0; let result = run_index_select_strided(&embedding, &shape, &stride, &ids, dim, "is_u32_f32"); assert_eq!(result, vec![0.0, 4.0]); } #[test] fn index_select_f16() { let embedding: Vec<_> = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0] .into_iter() 
.map(f16::from_f32) .collect(); let shape = [5, 2]; let stride = [2, 1]; let ids = [0u32, 4, 2]; let dim = 0; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u32_f16"); assert_eq!( approx_f16(result, 4), vec![1.0f32, 2.0, 9.0, 10.0, 5.0, 6.0] ); } #[test] fn index_select_is_u32_bf16() { let embedding: Vec<bf16> = (1..=10).map(|x| bf16::from_f32(x as f32)).collect(); let shape = [5, 2]; let stride = [2, 1]; let ids = [0u32, 4, 2]; let dim = 0; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u32_bf16"); assert_eq!( approx_bf16(result, 4), vec![1.0f32, 2.0, 9.0, 10.0, 5.0, 6.0] ); } #[test] fn index_select_is_u8_bf16() { let embedding: Vec<bf16> = (1..=10).map(|x| bf16::from_f32(x as f32)).collect(); let shape = [5, 2]; let stride = [2, 1]; let ids = [0u8, 4, 2]; let dim = 0; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u8_bf16"); assert_eq!( approx_bf16(result, 4), vec![1.0f32, 2.0, 9.0, 10.0, 5.0, 6.0] ); } #[test] fn index_select_dim1() { let embedding = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]; let shape = [5, 2]; let stride = [2, 1]; let ids = [0u32, 1, 0]; let dim = 1; let result = run_index_select(&embedding, &shape, &stride, &ids, dim, "is_u32_f32"); assert_eq!( result, vec![1.0f32, 2.0, 1.0, 3.0, 4.0, 3.0, 5.0, 6.0, 5.0, 7.0, 8.0f32, 7.0, 9.0, 10.0, 9.0] ); } fn run_index_select<T: Clone, I: Clone + std::fmt::Debug>( embeddings: &[T], shape: &[usize], stride: &[usize], ids: &[I], dim: usize, name: &'static str, ) -> Vec<T> { let device = Device::system_default().expect("no device found"); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let embeddings_buffer = new_buffer(&device, embeddings); let ids_buffer = new_buffer(&device, ids); let left_size: usize = shape[..dim].iter().product(); let right_size: usize = shape[dim + 1..].iter().product(); let dst_el = ids.len() * left_size * right_size; let dst_buffer = new_buffer(&device, &vec![0.0f32; dst_el]); let kernels = Kernels::new(); call_index_select( &device, command_buffer, &kernels, name, shape, ids.len(), dim, true, shape, stride, BufferOffset::zero_offset(&embeddings_buffer), BufferOffset::zero_offset(&ids_buffer), &dst_buffer, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&dst_buffer, dst_el) } fn run_index_select_strided<T: Clone, I: Clone + std::fmt::Debug>( embeddings: &[T], shape: &[usize], stride: &[usize], ids: &[I], dim: usize, name: &'static str, ) -> Vec<T> { let device = Device::system_default().expect("no device found"); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let embeddings_buffer = new_buffer(&device, embeddings); let ids_buffer = new_buffer(&device, ids); let left_size: usize = shape[..dim].iter().product(); let right_size: usize = shape[dim + 1..].iter().product(); let dst_el = ids.len() * left_size * right_size; let dst_buffer = new_buffer(&device, &vec![0.0f32; dst_el]); let kernels = Kernels::new(); call_index_select( &device, command_buffer, &kernels, name, shape, ids.len(), dim, false, shape, stride, BufferOffset::zero_offset(&embeddings_buffer), BufferOffset::zero_offset(&ids_buffer), &dst_buffer, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&dst_buffer, dst_el) } #[test] fn cos_f16() { let v: Vec<f16> = [1.0f32, 2.0, 3.0] .iter() .map(|v| f16::from_f32(*v)) .collect(); let results = run(&v, 
unary::contiguous::cos::HALF); let expected: Vec<f16> = v.iter().map(|v| f16::from_f32(v.to_f32().cos())).collect(); assert_eq!(approx_f16(results, 2), vec![0.54, -0.42, -0.99]); assert_eq!(approx_f16(expected, 2), vec![0.54, -0.42, -0.99]); } fn run_reduce<T: Clone>(v: &[T], out_length: usize, name: &'static str) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let options = MTLResourceOptions::StorageModeManaged; let output = device.new_buffer((out_length * core::mem::size_of::<T>()) as u64, options); let dims = vec![v.len()]; let strides = vec![1]; call_reduce_strided( &device, command_buffer, &kernels, name, &dims, &strides, out_length, BufferOffset::zero_offset(&input), &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, out_length) } fn run_softmax<T: Clone + std::fmt::Debug>(v: &[T], last_dim: usize, name: &'static str) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input = new_buffer(&device, v); let output = new_buffer(&device, v); call_last_softmax( &device, command_buffer, &kernels, name, v.len(), last_dim, &input, 0, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, v.len()) } #[test] fn reduce_sum() { let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let out_length = 1; let results = run_reduce(&v, out_length, "fast_sum_f32_strided"); assert_eq!(approx(results, 4), vec![21.0]); } #[test] fn reduce_sum2() { let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let out_length = 2; let results = run_reduce(&v, out_length, "fast_sum_f32_strided"); assert_eq!(approx(results, 4), vec![6.0, 15.0]); } #[test] fn softmax() { let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let last_dim = 6; let results = run_softmax(&v, last_dim, "softmax_f32"); assert_eq!( approx(results, 4), vec![0.0043, 0.0116, 0.0315, 0.0858, 0.2331, 0.6337] ); let last_dim = 4096; let n = 200; let mut v = vec![0.0; n * last_dim]; for i in 0..n { v[i * last_dim] = 20.0; } let results = run_softmax(&v, last_dim, "softmax_f32"); let results = approx(results, 4); assert_eq!( results.iter().map(|&s| s.round() as usize).sum::<usize>(), n ); assert_eq!(results[0], 1.0); assert_eq!(results[1], 0.0); assert_eq!(results[last_dim], 1.0); assert_eq!(results[2 * last_dim], 1.0); let v = vec![0.0f32, 1.0, 2.0, 3.0, 4.0, 5.0]; let last_dim = 6; let results = run_softmax(&v, last_dim, "softmax_f32"); assert_eq!( approx(results, 4), vec![0.0043, 0.0116, 0.0315, 0.0858, 0.2331, 0.6337] ); let v = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let last_dim = 3; let results = run_softmax(&v, last_dim, "softmax_f32"); assert_eq!( approx(results, 4), vec![0.0900, 0.2447, 0.6652, 0.0900, 0.2447, 0.6652] ); let v = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0] .iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); let last_dim = 6; let results = run_softmax(&v, last_dim, "softmax_f16"); assert_eq!( approx_f16(results, 4), vec![0.0043, 0.0116, 0.0316, 0.0858, 0.2332, 0.6338] ); let v = [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0] .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); let last_dim = 6; let results = run_softmax(&v, last_dim, "softmax_bf16"); assert_eq!( approx_bf16(results, 4), vec![0.0043, 0.0116, 0.0315, 0.0859, 0.2324, 0.6328] ); } 
#[allow(clippy::too_many_arguments)] fn run_where_cond<I: Clone, T: Clone>( shape: &[usize], cond: &[I], (cond_stride, cond_offset): (Vec<usize>, usize), left_true: &[T], (left_stride, left_offset): (Vec<usize>, usize), right_false: &[T], (_right_stride, _right_offset): (Vec<usize>, usize), name: &'static str, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let length = cond.len(); let cond = device.new_buffer_with_data( cond.as_ptr() as *const core::ffi::c_void, std::mem::size_of_val(cond) as u64, options, ); let left = device.new_buffer_with_data( left_true.as_ptr() as *const core::ffi::c_void, (length * core::mem::size_of::<T>()) as u64, options, ); let right = device.new_buffer_with_data( right_false.as_ptr() as *const core::ffi::c_void, (length * core::mem::size_of::<T>()) as u64, options, ); let output = device.new_buffer((length * core::mem::size_of::<T>()) as u64, options); let cond = BufferOffset { buffer: &cond, offset_in_bytes: cond_offset, }; let left = BufferOffset { buffer: &left, offset_in_bytes: left_offset, }; let right = BufferOffset { buffer: &right, offset_in_bytes: cond_offset, }; call_where_cond_strided( &device, command_buffer, &kernels, name, shape, cond, &cond_stride, left, &left_stride, right, &cond_stride, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, length) } #[test] fn where_cond() { let shape = vec![6]; let cond = vec![0u8, 1, 0, 0, 1, 1]; let cond_l = (vec![1], 0); let left_true = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let left_l = (vec![1], 0); let right_false = vec![-1.0f32, -2.0, -3.0, -4.0, -5.0, -6.0]; let right_l = (vec![1], 0); let results = run_where_cond( &shape, &cond, cond_l, &left_true, left_l, &right_false, right_l, "where_u8_f32", ); assert_eq!(approx(results, 4), vec![-1.0f32, 2.0, -3.0, -4.0, 5.0, 6.0]); } #[test] fn where_cond_u32_f32() { let shape = vec![6]; let cond = vec![0u32, 1, 0, 0, 1, 1]; let cond_l = (vec![1], 0); let left_true = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let left_l = (vec![1], 0); let right_false = vec![-1.0f32, -2.0, -3.0, -4.0, -5.0, -6.0]; let right_l = (vec![1], 0); let results = run_where_cond( &shape, &cond, cond_l, &left_true, left_l, &right_false, right_l, "where_u32_f32", ); assert_eq!(approx(results, 4), vec![-1.0f32, 2.0, -3.0, -4.0, 5.0, 6.0]); } #[allow(clippy::too_many_arguments)] fn run_gemm<T: Clone>( name: &'static str, (b, m, n, k): (usize, usize, usize, usize), lhs: &[T], lhs_stride: &[usize], lhs_offset: usize, rhs: &[T], rhs_stride: &[usize], rhs_offset: usize, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let lhs = device.new_buffer_with_data( lhs.as_ptr() as *const core::ffi::c_void, std::mem::size_of_val(lhs) as u64, options, ); let rhs = device.new_buffer_with_data( rhs.as_ptr() as *const core::ffi::c_void, std::mem::size_of_val(rhs) as u64, options, ); let length = b * m * n; let output = device.new_buffer((length * core::mem::size_of::<T>()) as u64, options); call_gemm( &device, command_buffer, &kernels, name, (b, m, n, k), lhs_stride, lhs_offset, &lhs, rhs_stride, rhs_offset, &rhs, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); 
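// Reading the output buffer back on the CPU is only valid after the command buffer has been committed and has completed on the GPU.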
read_to_vec(&output, length) } #[test] fn gemm() { let (b, m, n, k) = (1, 2, 4, 3); let lhs_stride = vec![m * k, k, 1]; let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs_stride = vec![n * k, n, 1]; let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); let results = run_gemm( "sgemm", (b, m, n, k), &lhs, &lhs_stride, 0, &rhs, &rhs_stride, 0, ); assert_eq!( approx(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); let (b, m, n, k) = (2, 2, 4, 3); let lhs_stride = vec![m * k, k, 1]; let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs_stride = vec![n * k, n, 1]; let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); let results = run_gemm( "sgemm", (b, m, n, k), &lhs, &lhs_stride, 0, &rhs, &rhs_stride, 0, ); assert_eq!( approx(results, 4), vec![ 20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0, 344.0, 365.0, 386.0, 407.0, 488.0, 518.0, 548.0, 578.0 ] ); // OFFSET let (b, m, n, k) = (2, 2, 4, 3); let lhs_stride = vec![m * k, k, 1]; let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs_stride = vec![n * k, n, 1]; let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); // Manually set batch_size=1 and offset 12 elements * 4 the number of bytes for f32 let results = run_gemm( "sgemm", (1, m, n, k), &lhs, &lhs_stride, 0, &rhs, &rhs_stride, 12 * 4, ); assert_eq!( approx(results, 4), vec![56.0, 59.0, 62.0, 65.0, 200.0, 212.0, 224.0, 236.0] ); // bgemm sanity test if false { let (b, m, n, k) = (1, 2, 4, 3); let lhs_stride = vec![m * k, k, 1]; let lhs: Vec<bf16> = (0..b * m * k).map(|f| bf16::from_f32(f as f32)).collect(); let rhs_stride = vec![n * k, n, 1]; let rhs: Vec<bf16> = (0..b * n * k).map(|f| bf16::from_f32(f as f32)).collect(); let results = run_gemm( "bgemm", (b, m, n, k), &lhs, &lhs_stride, 0, &rhs, &rhs_stride, 0, ); assert_eq!( approx_bf16(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); } // hgemm sanity test let (b, m, n, k) = (1, 2, 4, 3); let lhs_stride = vec![m * k, k, 1]; let lhs: Vec<f16> = (0..b * m * k).map(|f| f16::from_f32(f as f32)).collect(); let rhs_stride = vec![n * k, n, 1]; let rhs: Vec<f16> = (0..b * n * k).map(|f| f16::from_f32(f as f32)).collect(); let results = run_gemm( "hgemm", (b, m, n, k), &lhs, &lhs_stride, 0, &rhs, &rhs_stride, 0, ); assert_eq!( approx_f16(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); } #[allow(clippy::too_many_arguments)] fn run_mlx_gemm<T: Clone>( dtype: GemmDType, (b, m, n, k): (usize, usize, usize, usize), lhs: &[T], lhs_stride: &[usize], lhs_offset: usize, rhs: &[T], rhs_stride: &[usize], rhs_offset: usize, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let lhs = device.new_buffer_with_data( lhs.as_ptr() as *const core::ffi::c_void, std::mem::size_of_val(lhs) as u64, options, ); let rhs = device.new_buffer_with_data( rhs.as_ptr() as *const core::ffi::c_void, std::mem::size_of_val(rhs) as u64, options, ); let length = b * m * n; let output = device.new_buffer((length * core::mem::size_of::<T>()) as u64, options); call_mlx_gemm( &device, command_buffer, &kernels, dtype, (b, m, n, k), lhs_stride, lhs_offset, &lhs, rhs_stride, rhs_offset, &rhs, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, length) } fn mlx_vs_mfa_one(b: usize, m: usize, n: usize, k: usize, dtype: 
GemmDType) { use rand::SeedableRng; use rand_distr::Distribution; let mut rng = rand::rngs::StdRng::seed_from_u64(42424242); let normal = rand_distr::Normal::new(0.0, 1.0).unwrap(); let lhs: Vec<_> = (0..b * m * k).map(|_| normal.sample(&mut rng)).collect(); let rhs: Vec<_> = (0..b * n * k).map(|_| normal.sample(&mut rng)).collect(); let v1: Vec<f32> = run_mlx_gemm( dtype, (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[k * n, n, 1], 0, ); let v2: Vec<f32> = run_gemm( "sgemm", (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[k * n, n, 1], 0, ); for (a, b) in v1.iter().zip(v2.iter()) { let diff = (a - b).abs(); assert_eq!((diff * 1e4).round(), 0.) } } #[test] fn mlx_vs_mfa() { mlx_vs_mfa_one(1, 32, 32, 25, GemmDType::F32); mlx_vs_mfa_one(1, 128, 128, 100, GemmDType::F32); mlx_vs_mfa_one(1, 256, 256, 256, GemmDType::F32); mlx_vs_mfa_one(1, 192, 200, 75, GemmDType::F32); mlx_vs_mfa_one(3, 27, 67, 64, GemmDType::F32); } #[test] fn mlx_gemm() { let (b, m, n, k) = (1, 2, 4, 3); let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); let results = run_mlx_gemm( GemmDType::F32, (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[n * k, n, 1], 0, ); assert_eq!( approx(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); let (b, m, n, k) = (2, 2, 4, 3); let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); let results = run_mlx_gemm( GemmDType::F32, (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[n * k, n, 1], 0, ); assert_eq!( approx(results, 4), vec![ 20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0, 344.0, 365.0, 386.0, 407.0, 488.0, 518.0, 548.0, 578.0 ] ); // OFFSET let (b, m, n, k) = (2, 2, 4, 3); let lhs: Vec<f32> = (0..b * m * k).map(|f| f as f32).collect(); let rhs: Vec<f32> = (0..b * n * k).map(|f| f as f32).collect(); // Manually set batch_size=1 and offset 12 elements * 4 the number of bytes for f32 let results = run_mlx_gemm( GemmDType::F32, (1, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[n * k, n, 1], 12 * 4, ); assert_eq!( approx(results, 4), vec![56.0, 59.0, 62.0, 65.0, 200.0, 212.0, 224.0, 236.0] ); // bgemm sanity test { let (b, m, n, k) = (1, 2, 4, 3); let lhs: Vec<bf16> = (0..b * m * k).map(|f| bf16::from_f32(f as f32)).collect(); let rhs: Vec<bf16> = (0..b * n * k).map(|f| bf16::from_f32(f as f32)).collect(); let results = run_mlx_gemm( GemmDType::BF16, (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[n * k, n, 1], 0, ); assert_eq!( approx_bf16(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); } { // hgemm sanity test let (b, m, n, k) = (1, 2, 4, 3); let lhs: Vec<f16> = (0..b * m * k).map(|f| f16::from_f32(f as f32)).collect(); let rhs: Vec<f16> = (0..b * n * k).map(|f| f16::from_f32(f as f32)).collect(); let results = run_mlx_gemm( GemmDType::F16, (b, m, n, k), &lhs, &[m * k, k, 1], 0, &rhs, &[n * k, n, 1], 0, ); assert_eq!( approx_f16(results, 4), vec![20.0, 23.0, 26.0, 29.0, 56.0, 68.0, 80.0, 92.0] ); } } fn run_random<T: Clone>(name: &'static str, seed: u32, length: usize, a: f32, b: f32) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let output = device.new_buffer((length * core::mem::size_of::<T>()) as NSUInteger, options); let seed = device.new_buffer_with_data( &seed as *const u32 as *const core::ffi::c_void, 
std::mem::size_of::<u32>() as NSUInteger, options, ); if name.starts_with("rand_uniform") { call_random_uniform( &device, command_buffer, &kernels, name, a, b, length, &seed, &output, ) .unwrap(); } else { call_random_normal( &device, command_buffer, &kernels, name, a, b, length, &seed, &output, ) .unwrap(); } command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, length) } #[test] fn random() { fn calc_mean(data: &[f32]) -> f32 { let sum = data.iter().sum::<f32>(); let count = data.len(); assert!(count > 0); sum / count as f32 } fn calc_stddev(data: &[f32]) -> f32 { let mean = calc_mean(data); let count = data.len(); assert!(count > 0); let variance = data .iter() .map(|value| { let diff = mean - *value; diff * diff }) .sum::<f32>() / count as f32; variance.sqrt() } let shape = [1024, 10]; let length = shape.iter().product::<usize>(); let seed = 299792458; let min = -30.0; let max = 30.0; let mean = 100.0; let stddev = 50.0; macro_rules! validate_random { ($type:ty) => { let results: Vec<f32> = run_random::<$type>( concat!("rand_uniform_", stringify!($type)), seed, length, min, max, ) .into_iter() .map(f32::from) .collect(); results.iter().for_each(|v| { assert!(*v >= min && *v <= max); }); assert!(calc_mean(&results) > -1.0 && calc_mean(&results) < 1.0); let results: Vec<f32> = run_random::<$type>( concat!("rand_normal_", stringify!($type)), seed, length, mean, stddev, ) .into_iter() .map(f32::from) .collect(); assert!((calc_mean(&results) - mean).abs() < mean / 10.0); assert!((calc_stddev(&results) - stddev).abs() < stddev / 10.0); }; } validate_random!(f32); validate_random!(f16); validate_random!(bf16); } fn run_scatter_add<T: Clone, I: Clone + std::fmt::Debug>( input: &[T], ids: &[I], shape: &[usize], dim: usize, name: &'static str, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let options = MTLResourceOptions::StorageModeManaged; let input_buffer = new_buffer(&device, input); let ids_buffer = new_buffer(&device, ids); let output = device.new_buffer(std::mem::size_of_val(input) as u64, options); call_scatter_add( &device, command_buffer, &kernels, name, shape, shape, dim, BufferOffset::zero_offset(&input_buffer), BufferOffset::zero_offset(&ids_buffer), &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, input.len()) } #[test] fn scatter_add() { let ids_u8 = [0u8, 0, 1, 0, 2, 2, 3, 3]; let ids_u32 = [0u32, 0, 1, 0, 2, 2, 3, 3]; let ids_i64 = [0i64, 0, 1, 0, 2, 2, 3, 3]; let input_f32 = [5.0f32, 1.0, 7.0, 2.0, 3.0, 2.0, 1.0, 3.0]; let input_f16 = input_f32 .iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); let input_bf16 = input_f32 .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); let output_dim1_f32 = vec![8.0, 7.0, 5.0, 4.0, 0.0, 0.0, 0.0, 0.0]; let output_dim1_f16 = output_dim1_f32 .iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); let output_dim1_bf16 = output_dim1_f32 .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); let output_dim2_f32 = vec![5.0, 3.0, 7.0, 0.0, 3.0, 2.0, 1.0, 3.0]; let output_dim2_f16 = output_dim2_f32 .iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); let output_dim2_bf16 = output_dim2_f32 .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); for (shape, output_f32, output_f16, output_bf16) in [ (vec![8], output_dim1_f32, output_dim1_f16, output_dim1_bf16), ( vec![4, 2], output_dim2_f32, output_dim2_f16, 
output_dim2_bf16, ), ] { for results in [ run_scatter_add(&input_f32, &ids_u8, &shape, 0, "sa_u8_f32"), run_scatter_add(&input_f32, &ids_u32, &shape, 0, "sa_u32_f32"), run_scatter_add(&input_f32, &ids_i64, &shape, 0, "sa_i64_f32"), ] { assert_eq!(results, output_f32); } for results in [ run_scatter_add(&input_f16, &ids_u8, &shape, 0, "sa_u8_f16"), run_scatter_add(&input_f16, &ids_u32, &shape, 0, "sa_u32_f16"), run_scatter_add(&input_f16, &ids_i64, &shape, 0, "sa_i64_f16"), ] { assert_eq!(results, output_f16); } for results in [ run_scatter_add(&input_bf16, &ids_u8, &shape, 0, "sa_u8_bf16"), run_scatter_add(&input_bf16, &ids_u32, &shape, 0, "sa_u32_bf16"), run_scatter_add(&input_bf16, &ids_i64, &shape, 0, "sa_i64_bf16"), ] { assert_eq!(results, output_bf16); } } } fn run_index_add<T: Clone, I: Clone + std::fmt::Debug>( left: &[T], right: &[T], indices: &[I], shape: &[usize], dim: usize, name: &'static str, ) -> Vec<T> { let device = device(); let kernels = Kernels::new(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let input_buffer = new_buffer(&device, right); let output = new_buffer(&device, left); let indices_buffer = new_buffer(&device, indices); call_index_add( &device, command_buffer, &kernels, name, shape, shape, shape, dim, BufferOffset::zero_offset(&input_buffer), BufferOffset::zero_offset(&indices_buffer), &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, left.len()) } #[test] fn index_add() { let left = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0]; let right = vec![1.0f32, 1.0, 1.0, 1.0, 1.0, 1.0]; let indices = vec![0u32, 1, 0, 1, 0, 1]; let shape = vec![6]; // u32, f32 { let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u32_f32"); assert_eq!(results, vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // u32, f16 { let left = left.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u32_f16"); assert_eq!(approx_f16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // u32, bf16 { let left = left.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u32_bf16"); assert_eq!(approx_bf16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // u8, f32 { let indices = indices.iter().map(|v| *v as u8).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u8_f32"); assert_eq!(results, vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // u8, f16 { let indices = indices.iter().map(|v| *v as u8).collect::<Vec<_>>(); let left = left.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u8_f16"); assert_eq!(approx_f16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // u8, bf16 { let indices = indices.iter().map(|v| *v as u8).collect::<Vec<_>>(); let left = left.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_u8_bf16"); assert_eq!(approx_bf16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // i64, f32 { let indices = indices.iter().map(|v| *v as i64).collect::<Vec<_>>(); let 
results = run_index_add(&left, &right, &indices, &shape, 0, "ia_i64_f32"); assert_eq!(results, vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // i64, f16 { let indices = indices.iter().map(|v| *v as i64).collect::<Vec<_>>(); let left = left.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_i64_f16"); assert_eq!(approx_f16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } // i64, bf16 { let indices = indices.iter().map(|v| *v as i64).collect::<Vec<_>>(); let left = left.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let right = right.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let results = run_index_add(&left, &right, &indices, &shape, 0, "ia_i64_bf16"); assert_eq!(approx_bf16(results, 4), vec![4.0, 5.0, 3.0, 4.0, 5.0, 6.0]); } } fn run_pool2d<T: Clone>( v: &[T], (w_k, h_k): (usize, usize), (w_stride, h_stride): (usize, usize), shape: &[usize], strides: &[usize], name: &'static str, ) -> Vec<T> { let device = device(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let out_w = (shape[2] - w_k) / w_stride + 1; let out_h = (shape[3] - h_k) / h_stride + 1; let dst_el = out_w * out_h * shape[0] * shape[1]; let input = new_buffer(&device, v); let output = new_buffer(&device, &vec![0.0f32; dst_el]); let kernels = Kernels::new(); call_pool2d( &device, command_buffer, &kernels, name, shape, strides, out_w, out_h, w_k, h_k, w_stride, h_stride, &input, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, dst_el) } #[test] fn max_pool2d_f32() { // kernel 2 stride 1 let v: Vec<f32> = (0..16).map(|v| v as f32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_f32", ); let expected = vec![5.0, 6.0, 7.0, 9.0, 10.0, 11.0, 13.0, 14.0, 15.0]; assert_eq!(results, expected); // kernel 2 stride 2 let v: Vec<f32> = (0..16).map(|v| v as f32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 2; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_f32", ); let expected = vec![5.0, 7.0, 13.0, 15.0]; assert_eq!(results, expected); } #[test] fn max_pool2d_f16() { // kernel 2 stride 1 let v: Vec<half::f16> = (0..16).map(|v| half::f16::from_f32(v as f32)).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_f16", ); let expected = [5.0, 6.0, 7.0, 9.0, 10.0, 11.0, 13.0, 14.0, 15.0] .iter() .map(|v| half::f16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); // kernel 2 stride 2 let v: Vec<half::f16> = (0..16).map(|v| half::f16::from_f32(v as f32)).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 2; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_f16", ); let expected = [5.0, 7.0, 13.0, 15.0] .iter() .map(|v| half::f16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn max_pool2d_bf16() { // kernel 2 stride 1 let v: Vec<half::bf16> = (0..16).map(|v| half::bf16::from_f32(v as f32)).collect(); let shape = vec![1, 
1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_bf16", ); let expected = [5.0, 6.0, 7.0, 9.0, 10.0, 11.0, 13.0, 14.0, 15.0] .iter() .map(|v| half::bf16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); // kernel 2 stride 2 let v: Vec<half::bf16> = (0..16).map(|v| half::bf16::from_f32(v as f32)).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 2; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_bf16", ); let expected = [5.0, 7.0, 13.0, 15.0] .iter() .map(|v| half::bf16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn max_pool2d_u8() { // kernel 2 stride 1 let v: Vec<u8> = (0..16).map(|v| v as u8).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_u8", ); let expected = vec![5, 6, 7, 9, 10, 11, 13, 14, 15]; assert_eq!(results, expected); // kernel 2 stride 2 let v: Vec<u8> = (0..16).map(|v| v as u8).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 2; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_u8", ); let expected = vec![5, 7, 13, 15]; assert_eq!(results, expected); } #[test] fn max_pool2d_u32() { // kernel 2 stride 1 let v: Vec<u32> = (0..16).map(|v| v as u32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_u32", ); let expected = vec![5, 6, 7, 9, 10, 11, 13, 14, 15]; assert_eq!(results, expected); // kernel 2 stride 2 let v: Vec<u32> = (0..16).map(|v| v as u32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 2; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "max_pool2d_u32", ); let expected = vec![5, 7, 13, 15]; assert_eq!(results, expected); } #[test] fn avg_pool2d_f32() { // kernel 2 stride 1 let v: Vec<f32> = (0..16).map(|v| v as f32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "avg_pool2d_f32", ); let expected = vec![ 2.5000, 3.5000, 4.5000, 6.5000, 7.5000, 8.5000, 10.5000, 11.5000, 12.5000, ]; assert_eq!(results, expected); } #[test] fn avg_pool2d_f16() { // kernel 2 stride 1 let v: Vec<f16> = (0..16).map(|v| f16::from_f32(v as f32)).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "avg_pool2d_f16", ); let expected = [ 2.5000, 3.5000, 4.5000, 6.5000, 7.5000, 8.5000, 10.5000, 11.5000, 12.5000, ] .iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn avg_pool2d_bf16() { // kernel 2 stride 1 let v: Vec<bf16> = (0..16).map(|v| bf16::from_f32(v as f32)).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "avg_pool2d_bf16", ); let expected = [ 2.5000, 
3.5000, 4.5000, 6.5000, 7.5000, 8.5000, 10.5000, 11.5000, 12.5000, ] .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn avg_pool2d_u8() { // kernel 2 stride 1 let v: Vec<u8> = (0..16).map(|v| v as u8).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "avg_pool2d_u8", ); let expected = vec![2, 3, 4, 6, 7, 8, 10, 11, 12]; assert_eq!(results, expected); } #[test] fn avg_pool2d_u32() { // kernel 2 stride 1 let v: Vec<u32> = (0..16).map(|v| v as u32).collect(); let shape = vec![1, 1, 4, 4]; let strides = vec![16, 16, 4, 1]; let kernel = 2; let stride = 1; let results = run_pool2d( &v, (kernel, kernel), (stride, stride), &shape, &strides, "avg_pool2d_u32", ); let expected = vec![2, 3, 4, 6, 7, 8, 10, 11, 12]; assert_eq!(results, expected); } #[allow(clippy::too_many_arguments)] fn run_conv_transpose1d<T: Clone>( input: &[T], input_shape: &[usize], input_stride: &[usize], kernel: &[T], kernel_shape: &[usize], kernel_stride: &[usize], dilation: usize, stride: usize, padding: usize, out_padding: usize, name: &'static str, ) -> Vec<T> { let device = device(); let command_queue = device.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let c_out = kernel_shape[1]; let k_size = kernel_shape[2]; let b_size = input_shape[0]; let l_in = input_shape[2]; let l_out = (l_in - 1) * stride - 2 * padding + dilation * (k_size - 1) + out_padding + 1; let dst_el = c_out * l_out * b_size; let input = new_buffer(&device, input); let kernel = new_buffer(&device, kernel); let output = new_buffer(&device, &vec![0.0f32; dst_el]); let kernels = Kernels::new(); call_conv_transpose1d( &device, command_buffer, &kernels, name, dilation, stride, padding, out_padding, c_out, l_out, b_size, input_shape, input_stride, kernel_shape, kernel_stride, &input, 0, &kernel, 0, &output, ) .unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec(&output, dst_el) } #[test] fn conv_transpose1d_f32() { let input = vec![1.0f32, 2.0, 3.0, 4.0]; let input_shape = &[1, 1, 4]; let input_stride = &[4, 4, 1]; let kernel = vec![1.0f32, 2.0, 3.0, 4.0]; let kernel_shape = &[1, 1, 4]; let kernel_stride = &[4, 4, 1]; let results = run_conv_transpose1d( &input, input_shape, input_stride, &kernel, kernel_shape, kernel_stride, 1, 1, 0, 0, "conv_transpose1d_f32", ); let expected = vec![1., 4., 10., 20., 25., 24., 16.]; assert_eq!(results, expected); } #[test] fn conv_transpose1d_f16() { let input: Vec<f16> = [1.0, 2.0, 3.0, 4.0] .iter() .map(|v| f16::from_f32(*v)) .collect(); let input_shape = &[1, 1, 4]; let input_stride = &[4, 4, 1]; let kernel: Vec<f16> = [1.0, 2.0, 3.0, 4.0] .iter() .map(|v| f16::from_f32(*v)) .collect(); let kernel_shape = &[1, 1, 4]; let kernel_stride = &[4, 4, 1]; let results = run_conv_transpose1d( &input, input_shape, input_stride, &kernel, kernel_shape, kernel_stride, 1, 1, 0, 0, "conv_transpose1d_f16", ); let expected = [1., 4., 10., 20., 25., 24., 16.] 
.iter() .map(|v| f16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn conv_transpose1d_bf16() { let input: Vec<bf16> = [1.0, 2.0, 3.0, 4.0] .iter() .map(|v| bf16::from_f32(*v)) .collect(); let input_shape = &[1, 1, 4]; let input_stride = &[4, 4, 1]; let kernel: Vec<bf16> = [1.0, 2.0, 3.0, 4.0] .iter() .map(|v| bf16::from_f32(*v)) .collect(); let kernel_shape = &[1, 1, 4]; let kernel_stride = &[4, 4, 1]; let results = run_conv_transpose1d( &input, input_shape, input_stride, &kernel, kernel_shape, kernel_stride, 1, 1, 0, 0, "conv_transpose1d_bf16", ); let expected = [1., 4., 10., 20., 25., 24., 16.] .iter() .map(|v| bf16::from_f32(*v)) .collect::<Vec<_>>(); assert_eq!(results, expected); } #[test] fn conv_transpose1d_u8() { let input: Vec<u8> = vec![1, 2, 3, 4]; let input_shape = &[1, 1, 4]; let input_stride = &[4, 4, 1]; let kernel: Vec<u8> = vec![1, 2, 3, 4]; let kernel_shape = &[1, 1, 4]; let kernel_stride = &[4, 4, 1]; let results = run_conv_transpose1d( &input, input_shape, input_stride, &kernel, kernel_shape, kernel_stride, 1, 1, 0, 0, "conv_transpose1d_u8", ); let expected = vec![1, 4, 10, 20, 25, 24, 16]; assert_eq!(results, expected); } #[test] fn conv_transpose1d_u32() { let input: Vec<u32> = vec![1, 2, 3, 4]; let input_shape = &[1, 1, 4]; let input_stride = &[4, 4, 1]; let kernel: Vec<u32> = vec![1, 2, 3, 4]; let kernel_shape = &[1, 1, 4]; let kernel_stride = &[4, 4, 1]; let results = run_conv_transpose1d( &input, input_shape, input_stride, &kernel, kernel_shape, kernel_stride, 1, 1, 0, 0, "conv_transpose1d_u32", ); let expected = vec![1, 4, 10, 20, 25, 24, 16]; assert_eq!(results, expected); } #[test] fn const_fill() { fn constant_fill<T: Clone>(name: &'static str, len: usize, value: f32) -> Vec<T> { let dev = device(); let kernels = Kernels::new(); let command_queue = dev.new_command_queue(); let command_buffer = command_queue.new_command_buffer(); let buffer = dev.new_buffer( (len * std::mem::size_of::<T>()) as u64, MTLResourceOptions::StorageModePrivate, ); call_const_fill(&dev, command_buffer, &kernels, name, len, &buffer, value).unwrap(); command_buffer.commit(); command_buffer.wait_until_completed(); read_to_vec::<T>(&buffer, len) } fn test<T: Clone + PartialEq + std::fmt::Debug, F: FnOnce(f32) -> T>(name: &'static str, f: F) { let len = rand::thread_rng().gen_range(2..16) * rand::thread_rng().gen_range(4..16); let value = rand::thread_rng().gen_range(1. ..19.); let v = constant_fill::<T>(name, len, value); assert_eq!(v, vec![f(value); len]) } test::<u8, _>("fill_u8", |v| v as u8); test::<u32, _>("fill_u32", |v| v as u32); test::<i64, _>("fill_i64", |v| v as i64); test::<f16, _>("fill_f16", f16::from_f32); test::<bf16, _>("fill_bf16", bf16::from_f32); test::<f32, _>("fill_f32", |v| v); }
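// Illustrative CPU reference (a sketch added for clarity, not a kernel test):
// for the 1-D `index_add` cases above, every `right[i]` is accumulated into
// `out[indices[i]]`, which reproduces the expected vectors by hand.
// `cpu_index_add_1d` is a hypothetical helper name, not used elsewhere here.
#[allow(dead_code)]
fn cpu_index_add_1d(left: &[f32], right: &[f32], indices: &[usize]) -> Vec<f32> {
    let mut out = left.to_vec();
    for (i, &idx) in indices.iter().enumerate() {
        out[idx] += right[i];
    }
    out
}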
5
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/conv.metal
#include <metal_stdlib> using namespace metal; #define MAX(x, y) ((x) > (y) ? (x) : (y)) template <typename T> METAL_FUNC void im2col( constant size_t &dst_numel, constant size_t &h_out, constant size_t &w_out, constant size_t &h_k, constant size_t &w_k, constant size_t &stride, constant size_t &padding, constant size_t &dilation, constant size_t *src_dims, constant size_t *src_strides, device const T *src, device T *dst, uint tid [[ thread_position_in_grid ]] ) { // dst: (b_size, h_out, w_out, c_in, h_k, w_k) // src: (b_size, c_in, h_in, w_in) if (tid >= dst_numel) { return; } const size_t b_in = src_dims[0]; const size_t c_in = src_dims[1]; const size_t h_in = src_dims[2]; const size_t w_in = src_dims[3]; const size_t dst_s4 = w_k; const size_t dst_s3 = h_k * dst_s4; const size_t dst_s2 = c_in * dst_s3; const size_t dst_s1 = w_out * dst_s2; const size_t dst_s0 = h_out * dst_s1; size_t tmp_tid = tid; const size_t b_idx = tmp_tid / dst_s0; tmp_tid -= b_idx * dst_s0; const size_t h_idx = tmp_tid / dst_s1; tmp_tid -= h_idx * dst_s1; const size_t w_idx = tmp_tid / dst_s2; tmp_tid -= w_idx * dst_s2; const size_t c_idx = tmp_tid / dst_s3; tmp_tid -= c_idx * dst_s3; const size_t h_k_idx = tmp_tid / dst_s4; tmp_tid -= h_k_idx * dst_s4; const size_t w_k_idx = tmp_tid; size_t src_h_idx = h_idx * stride + h_k_idx * dilation; size_t src_w_idx = w_idx * stride + w_k_idx * dilation; if (src_h_idx < padding || src_h_idx >= h_in + padding) { dst[tid] = static_cast<T>(0); } else if (src_w_idx < padding || src_w_idx >= w_in + padding) { dst[tid] = static_cast<T>(0); } else { src_h_idx -= padding; src_w_idx -= padding; const size_t src_i = b_idx * src_strides[0] + c_idx * src_strides[1] + src_h_idx * src_strides[2] + src_w_idx * src_strides[3]; dst[tid] = src[src_i]; } } template <typename T> METAL_FUNC void col2im1d( constant size_t &dst_el, constant size_t &l_out, constant size_t &l_in, constant size_t &c_out, constant size_t &k_size, constant size_t &stride, device const T *src, device T *dst, uint dst_i [[ thread_position_in_grid ]] ) { // src: (b_size, l_in, c_out, l_k) // dst: (b_size, c_out, l_out) if (dst_i >= dst_el) { return; } const size_t dst_s0 = c_out * l_out; const size_t dst_s1 = l_out; const size_t src_s0 = c_out * k_size * l_in; const size_t src_s1 = c_out * k_size; const size_t src_s2 = k_size; size_t tmp_dst_i = dst_i; const size_t b_idx = tmp_dst_i / dst_s0; tmp_dst_i -= b_idx * dst_s0; const size_t c_idx = tmp_dst_i / dst_s1; tmp_dst_i -= c_idx * dst_s1; const int l_out_idx = tmp_dst_i; dst[dst_i] = static_cast<T>(0); int l_in_idx = l_out_idx / stride; int k0 = l_out_idx - l_in_idx * stride; // l_out_idx = l_in_idx * stride + k0 for (; k0 < k_size && l_in_idx >= 0; k0 += stride, --l_in_idx) { if (l_in_idx < l_in) { const size_t src_i = b_idx * src_s0 + l_in_idx * src_s1 + c_idx * src_s2 + k0; dst[dst_i] += src[src_i]; } } } template <typename T> METAL_FUNC void im2col1d( constant size_t &dst_numel, constant size_t &l_out, constant size_t &l_k, constant size_t &stride, constant size_t &padding, constant size_t &dilation, constant size_t *src_dims, constant size_t *src_strides, device const T *src, device T *dst, uint tid [[ thread_position_in_grid ]] ) { // dst: (b_size, l_out, c_in, l_k) // src: (b_size, c_in, l_in) if (tid >= dst_numel) { return; } const size_t b_in = src_dims[0]; const size_t c_in = src_dims[1]; const size_t l_in = src_dims[2]; const size_t dst_s2 = l_k; const size_t dst_s1 = c_in * dst_s2; const size_t dst_s0 = l_out * dst_s1; size_t tmp_dst_i = tid; const size_t 
b_idx = tmp_dst_i / dst_s0; tmp_dst_i -= b_idx * dst_s0; const size_t l_idx = tmp_dst_i / dst_s1; tmp_dst_i -= l_idx * dst_s1; const size_t c_idx = tmp_dst_i / dst_s2; tmp_dst_i -= c_idx * dst_s2; const size_t l_k_idx = tmp_dst_i; size_t src_l_idx = l_idx * stride + l_k_idx * dilation; if (src_l_idx < padding || src_l_idx >= l_in + padding) { dst[tid] = static_cast<T>(0); } else { src_l_idx -= padding; const size_t src_i = b_idx * src_strides[0] + c_idx * src_strides[1] + src_l_idx * src_strides[2]; dst[tid] = src[src_i]; } } template <typename T> METAL_FUNC void upsample_nearest2d( constant size_t &w_out, constant size_t &h_out, constant float &w_scale, constant float &h_scale, constant size_t *src_dims, constant size_t *src_s, device const T *src, device T *dst, uint tid [[ thread_position_in_grid ]] ) { // src: (b_size, c_in, w_in, h_in) const size_t c = src_dims[1]; const size_t w_in = src_dims[2]; const size_t h_in = src_dims[3]; if (tid >= src_dims[0] * c * w_out * h_out) { return; } // TODO: Improve this. const size_t b_idx = tid / (w_out * h_out * c); const size_t c_idx = (tid / (w_out * h_out)) % c; const size_t dst_w = (tid / h_out) % w_out; const size_t dst_h = tid % h_out; size_t src_w = static_cast<size_t>(dst_w * w_scale); size_t src_h = static_cast<size_t>(dst_h * h_scale); if (src_w >= w_in) { src_w = w_in - 1; } if (src_h >= h_in) { src_h = h_in - 1; } const size_t src_i = b_idx * src_s[0] + c_idx * src_s[1] + src_w * src_s[2] + src_h * src_s[3]; dst[tid] = src[src_i]; } #define IM2COL_OP(T, FN_NAME) \ kernel void FN_NAME( \ constant size_t &dst_numel, \ constant size_t &h_out, \ constant size_t &w_out, \ constant size_t &h_k, \ constant size_t &w_k, \ constant size_t &stride, \ constant size_t &padding, \ constant size_t &dilation, \ constant size_t *src_dims, \ constant size_t *src_strides, \ device const T *src, \ device T *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ im2col<T>(dst_numel, h_out, w_out, h_k, w_k, stride, padding, dilation, src_dims, src_strides, src, dst, tid); \ } \ #define IM2COL1D_OP(T, FN_NAME) \ kernel void FN_NAME( \ constant size_t &dst_numel, \ constant size_t &l_out, \ constant size_t &l_k, \ constant size_t &stride, \ constant size_t &padding, \ constant size_t &dilation, \ constant size_t *src_dims, \ constant size_t *src_strides, \ device const T *src, \ device T *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ im2col1d<T>(dst_numel, l_out, l_k, stride, padding, dilation, src_dims, src_strides, src, dst, tid); \ } \ #define COL2IM1D_OP(T, FN_NAME) \ kernel void FN_NAME( \ constant size_t &dst_el, \ constant size_t &l_out, \ constant size_t &l_in, \ constant size_t &c_out, \ constant size_t &k_size, \ constant size_t &stride, \ device const T *src, \ device T *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ col2im1d<T>(dst_el, l_out, l_in, c_out, k_size, stride, src, dst, tid); \ } \ #define UPSAMPLE_NEAREST2D_OP(TYPENAME, FN_NAME) \ kernel void FN_NAME( \ constant size_t &w_out, \ constant size_t &h_out, \ constant float &w_scale, \ constant float &h_scale, \ constant size_t *dims, \ constant size_t *strides, \ device const TYPENAME *src, \ device TYPENAME *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ upsample_nearest2d<TYPENAME>(w_out, h_out, w_scale, h_scale, dims, strides, src, dst, tid); \ } \ template <typename T, typename A> METAL_FUNC void avg_pool2d( constant size_t &w_k, constant size_t &h_k, constant size_t &w_stride, constant size_t &h_stride, constant size_t *src_dims, constant size_t *src_strides, 
device const T *src, device T *dst, uint tid [[ thread_position_in_grid ]] ) { const size_t c = src_dims[1]; const size_t w_in = src_dims[2]; const size_t h_in = src_dims[3]; const size_t w_out = (w_in - w_k) / w_stride + 1; const size_t h_out = (h_in - h_k) / h_stride + 1; if (tid >= src_dims[0] * c * w_out * h_out) { return; } const size_t b_idx = tid / (w_out * h_out * c); const size_t c_idx = (tid / (w_out * h_out)) % c; const size_t dst_w = (tid / h_out) % w_out; const size_t dst_h = tid % h_out; const size_t src_idx0 = b_idx * src_strides[0]; A d = 0; for (size_t w_offset = 0; w_offset < w_k; ++w_offset) { size_t src_w = w_stride * dst_w + w_offset; if (src_w >= w_in){ continue; } for (size_t h_offset = 0; h_offset < h_k; ++h_offset) { size_t src_h = h_stride * dst_h + h_offset; if (src_h >= h_in) { continue; } const size_t src_idx = src_idx0 + c_idx * src_strides[1] + src_w * src_strides[2] + src_h * src_strides[3]; d += static_cast<A>(src[src_idx]); } } dst[tid] = static_cast<T>(d / (w_k * h_k)); } #define AVGPOOL2D_OP(TYPENAME, TYPEACC, FN_NAME) \ kernel void FN_NAME( \ constant size_t &w_k, \ constant size_t &h_k, \ constant size_t &w_s, \ constant size_t &h_s, \ constant size_t *src_dims, \ constant size_t *src_s, \ device const TYPENAME *src, \ device TYPENAME *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ avg_pool2d<TYPENAME, TYPEACC>(w_k, h_k, w_s, h_s, src_dims, src_s, src, dst, tid); \ } \ template <typename T> METAL_FUNC void max_pool2d( constant size_t &w_k, constant size_t &h_k, constant size_t &w_stride, constant size_t &h_stride, constant size_t *src_dims, constant size_t *src_strides, device const T *src, device T *dst, uint tid [[ thread_position_in_grid ]] ) { const size_t c = src_dims[1]; const size_t w_in = src_dims[2]; const size_t h_in = src_dims[3]; const size_t w_out = (w_in - w_k) / w_stride + 1; const size_t h_out = (h_in - h_k) / h_stride + 1; if (tid >= src_dims[0] * c * w_out * h_out) { return; } const size_t b_idx = tid / (w_out * h_out * c); const size_t c_idx = (tid / (w_out * h_out)) % c; const size_t dst_w = (tid / h_out) % w_out; const size_t dst_h = tid % h_out; const size_t src_idx0 = b_idx * src_strides[0]; T d = 0; bool set = false; for (size_t w_offset = 0; w_offset < w_k; ++w_offset) { size_t src_w = w_stride * dst_w + w_offset; if (src_w >= w_in){ continue; } for (size_t h_offset = 0; h_offset < h_k; ++h_offset) { size_t src_h = h_stride * dst_h + h_offset; if (src_h >= h_in) { continue; } const size_t src_idx = src_idx0 + c_idx * src_strides[1] + src_w * src_strides[2] + src_h * src_strides[3]; if (set) { d = MAX(d, src[src_idx]); } else { d = src[src_idx]; set = true; } } } dst[tid] = d; } #define MAXPOOL2D_OP(TYPENAME, FN_NAME) \ kernel void FN_NAME( \ constant size_t &w_k, \ constant size_t &h_k, \ constant size_t &w_s, \ constant size_t &h_s, \ constant size_t *src_dims, \ constant size_t *src_s, \ device const TYPENAME *src, \ device TYPENAME *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ max_pool2d<TYPENAME>(w_k, h_k, w_s, h_s, src_dims, src_s, src, dst, tid); \ } \ // Naive implementation of conv_transpose1d. 
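// Each output element accumulates over the kernel taps k_x and the input channels.
// An input position inp_x contributes to out_x through tap k_x when
// out_x = inp_x * stride + k_x * dilation - padding, so the loop inverts that
// relation and skips taps where (out_x + padding - k_x * dilation) is negative
// or not a multiple of the stride.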
template <typename T, typename A> METAL_FUNC void conv_transpose1d( constant size_t &l_out, constant size_t &stride, constant size_t &padding, constant size_t &out_padding, constant size_t &dilation, constant size_t *src_dims, constant size_t *src_strides, constant size_t *k_dims, constant size_t *k_strides, device const T *src, device const T *k, device T *dst, uint tid [[ thread_position_in_grid ]] ) { // src: (b_size, c_in, l_in) // kernel: (c_in, c_out, l_k) const size_t l_k = k_dims[2]; const size_t c_out = k_dims[1]; const size_t c_in = src_dims[1]; const size_t l_in = src_dims[2]; if (tid >= src_dims[0] * c_out * l_out) { return; } const size_t b_idx = tid / (l_out * c_out); const size_t dst_c_idx = (tid / l_out) % c_out; const size_t out_x = tid % l_out; const size_t src_idx0 = b_idx * src_strides[0]; A d = 0; for (int k_x = 0; k_x < (int)l_k; ++k_x) { // let out_x = inp_x * p.stride + k_x * p.dilation - p.padding; int inp_x_stride = (int)(out_x + padding) - k_x * dilation; if (inp_x_stride < 0 || inp_x_stride % stride) { continue; } int inp_x = inp_x_stride / stride; if (inp_x >= l_in) continue; for (size_t src_c_idx = 0; src_c_idx < c_in; ++src_c_idx) { const size_t src_idx = src_idx0 + src_c_idx * src_strides[1] + inp_x * src_strides[2]; const size_t k_idx = src_c_idx * k_strides[0] + dst_c_idx * k_strides[1] + k_x * k_strides[2]; d += static_cast<A>(src[src_idx]) * static_cast<A>(k[k_idx]); } } dst[tid] = static_cast<T>(d); } #define CONVT1D_OP(TYPENAME, TYPEACC, FN_NAME) \ kernel void FN_NAME( \ constant size_t &l_out, \ constant size_t &stride, \ constant size_t &padding, \ constant size_t &out_padding, \ constant size_t &dilation, \ constant size_t *src_dims, \ constant size_t *src_strides, \ constant size_t *k_dims, \ constant size_t *k_strides, \ device const TYPENAME *src, \ device const TYPENAME *k, \ device TYPENAME *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ conv_transpose1d<TYPENAME, TYPEACC>(l_out, stride, padding, out_padding, dilation, src_dims, src_strides, k_dims, k_strides, src, k, dst, tid); \ } \ template <typename T, typename A> METAL_FUNC void conv_transpose2d( constant size_t &w_out, constant size_t &h_out, constant size_t &stride, constant size_t &padding, constant size_t &out_padding, constant size_t &dilation, constant size_t *input_dims, constant size_t *input_stride, constant size_t *k_dims, constant size_t *k_stride, device const T *src, device const T *k, device T *dst, uint tid [[ thread_position_in_grid ]] ) { const size_t h_k = k_dims[2]; const size_t w_k = k_dims[3]; const size_t c_out = k_dims[1]; const size_t c_in = input_dims[1]; const size_t h_in = input_dims[2]; const size_t w_in = input_dims[3]; if (tid >= input_dims[0] * c_out * w_out * h_out) { return; } const size_t b_idx = tid / (w_out * h_out * c_out); const size_t dst_c_idx = (tid / (w_out * h_out)) % c_out; const size_t out_y = (tid / w_out) % h_out; const size_t out_x = tid % w_out; const size_t src_idx0 = b_idx * input_stride[0]; A d = 0; for (int k_x = 0; k_x < (int)w_k; ++k_x) { const int inp_x_stride = (int)(out_x + padding) - k_x * dilation; if (inp_x_stride < 0 || inp_x_stride % stride) { continue; } const int inp_x = inp_x_stride / stride; if (inp_x >= w_in) continue; for (int k_y = 0; k_y < (int)h_k; ++k_y) { const int inp_y_stride = (int)(out_y + padding) - k_y * dilation; if (inp_y_stride < 0 || inp_y_stride % stride) { continue; } const int inp_y = inp_y_stride / stride; if (inp_y >= h_in) continue; for (size_t src_c_idx = 0; src_c_idx < c_in; ++src_c_idx) { 
const size_t src_idx = src_idx0 + src_c_idx * input_stride[1] + inp_y * input_stride[2] + inp_x * input_stride[3]; const size_t k_idx = src_c_idx * k_stride[0] + dst_c_idx * k_stride[1] + k_y * k_stride[2] + k_x * k_stride[3]; d += static_cast<A>(src[src_idx]) * static_cast<A>(k[k_idx]); } } } dst[tid] = static_cast<T>(d); } #define CONVT2D_OP(TYPENAME, TYPEACC, FN_NAME) \ kernel void FN_NAME( \ constant size_t &w_out, \ constant size_t &h_out, \ constant size_t &stride, \ constant size_t &padding, \ constant size_t &out_padding, \ constant size_t &dilation, \ constant size_t *input_dims, \ constant size_t *input_stride, \ constant size_t *k_dims, \ constant size_t *k_stride, \ device const TYPENAME *src, \ device const TYPENAME *k, \ device TYPENAME *dst, \ uint tid [[ thread_position_in_grid ]] \ ) { \ conv_transpose2d<TYPENAME, TYPEACC>(w_out, h_out, stride, padding, out_padding, dilation, input_dims, input_stride, k_dims, k_stride, src, k, dst, tid); \ } \ IM2COL_OP(float, im2col_f32) IM2COL_OP(half, im2col_f16) IM2COL_OP(uint8_t, im2col_u8) IM2COL_OP(uint32_t, im2col_u32) #if defined(__HAVE_BFLOAT__) IM2COL_OP(bfloat, im2col_bf16) #endif COL2IM1D_OP(float, col2im1d_f32) COL2IM1D_OP(uint8_t, col2im1d_u8) COL2IM1D_OP(uint32_t, col2im1d_u32) IM2COL1D_OP(float, im2col1d_f32) IM2COL1D_OP(uint8_t, im2col1d_u8) IM2COL1D_OP(uint32_t, im2col1d_u32) UPSAMPLE_NEAREST2D_OP(float, upsample_nearest2d_f32) UPSAMPLE_NEAREST2D_OP(half, upsample_nearest2d_f16) UPSAMPLE_NEAREST2D_OP(uint8_t, upsample_nearest2d_u8) UPSAMPLE_NEAREST2D_OP(uint32_t, upsample_nearest2d_u32) #if defined(__HAVE_BFLOAT__) UPSAMPLE_NEAREST2D_OP(bfloat, upsample_nearest2d_bf16) #endif MAXPOOL2D_OP(float, max_pool2d_f32) MAXPOOL2D_OP(half, max_pool2d_f16) MAXPOOL2D_OP(uint32_t, max_pool2d_u32) MAXPOOL2D_OP(uint8_t, max_pool2d_u8) #if defined(__HAVE_BFLOAT__) MAXPOOL2D_OP(bfloat, max_pool2d_bf16) #endif AVGPOOL2D_OP(float, float, avg_pool2d_f32) AVGPOOL2D_OP(half, float, avg_pool2d_f16) AVGPOOL2D_OP(uint32_t, uint32_t, avg_pool2d_u32) AVGPOOL2D_OP(uint8_t, uint8_t, avg_pool2d_u8) #if defined(__HAVE_BFLOAT__) AVGPOOL2D_OP(bfloat, float, avg_pool2d_bf16) #endif CONVT1D_OP(float, float, conv_transpose1d_f32) CONVT1D_OP(half, float, conv_transpose1d_f16) CONVT1D_OP(uint8_t, uint8_t, conv_transpose1d_u8) CONVT1D_OP(uint32_t, uint32_t, conv_transpose1d_u32) #if defined(__HAVE_BFLOAT__) CONVT1D_OP(bfloat, float, conv_transpose1d_bf16) #endif CONVT2D_OP(float, float, conv_transpose2d_f32) CONVT2D_OP(half, float, conv_transpose2d_f16) #if defined(__HAVE_BFLOAT__) CONVT2D_OP(bfloat, float, conv_transpose2d_bf16) #endif
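// Note on accumulator types: the avg_pool2d and conv_transpose macros take a
// separate TYPEACC parameter; half and bfloat kernels accumulate in float to
// limit rounding error, while the integer variants accumulate in their own type.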
6
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/mlx_gemm.metal
// MLX Kernel extracted from: // https://github.com/ml-explore/mlx/blob/main/mlx/backend/metal/kernels/steel/gemm // Copyright © 2024 Apple Inc. #include <metal_simdgroup> #include <metal_simdgroup_matrix> #include <metal_stdlib> #define STEEL_CONST static constant constexpr const #define STEEL_PRAGMA_UNROLL _Pragma("clang loop unroll(full)") using namespace metal; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/params.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM param classes /////////////////////////////////////////////////////////////////////////////// struct GEMMParams { const int M; const int N; const int K; const int lda; const int ldb; const int ldd; const int tiles_n; const int tiles_m; const size_t batch_stride_a; const size_t batch_stride_b; const size_t batch_stride_d; const int swizzle_log; const int gemm_k_iterations_aligned; const int batch_ndim; }; struct GEMMSpiltKParams { const int M; const int N; const int K; const int lda; const int ldb; const int ldc; const int tiles_n; const int tiles_m; const int split_k_partitions; const int split_k_partition_stride; const int split_k_partition_size; const int gemm_k_iterations_aligned; }; struct GEMMAddMMParams { const int ldc; const int fdc; const size_t batch_stride_c; const float alpha; const float beta; }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/loader.h#L1 /////////////////////////////////////////////////////////////////////////////// // Loading helper /////////////////////////////////////////////////////////////////////////////// template < typename T, short BROWS, short BCOLS, short dst_ld, short reduction_dim, short tgp_size, short alignment = 1, short n_reads = (BCOLS * BROWS) / (tgp_size), short TCOLS = BCOLS / n_reads, short TROWS = tgp_size / TCOLS> struct BlockLoader { STEEL_CONST short n_rows = (BROWS + TROWS - 1) / TROWS; STEEL_CONST short vec_size = n_reads; // Leading dimension for src const int src_ld; const int tile_stride; // Thread location indices const short thread_idx; const short bi; const short bj; // threadgroup and device memory threadgroup T* dst; const device T* src; struct alignas(alignment * sizeof(T)) ReadVector { uint8_t v[sizeof(T) * vec_size]; }; /* Constructor */ METAL_FUNC BlockLoader( const device T* src_, const int src_ld_, threadgroup T* dst_, ushort simd_group_id [[simdgroup_index_in_threadgroup]], ushort simd_lane_id [[thread_index_in_simdgroup]]) : src_ld(src_ld_), tile_stride(reduction_dim ? 
BCOLS : BROWS * src_ld), thread_idx(simd_group_id * 32 + simd_lane_id), bi(thread_idx / TCOLS), bj(vec_size * (thread_idx % TCOLS)), dst(dst_ + bi * dst_ld + bj), src(src_ + bi * src_ld + bj) {} /* Apply operation to threadgroup without bound checking */ template <typename UnaryOp> METAL_FUNC void apply_inplace_op(thread const UnaryOp& op) const { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = op.apply(dst[i * dst_ld + j]); } } } /* Load from device memory into threadgroup memory - without bound checking */ METAL_FUNC void load_unsafe() const { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { *((threadgroup ReadVector*)(&dst[i * dst_ld])) = *((const device ReadVector*)(&src[i * src_ld])); } } /* Load from device memory into threadgroup memory - with bound checking */ METAL_FUNC void load_safe(short2 src_tile_dim) const { src_tile_dim = src_tile_dim - short2(bj, bi); // Skip loading if thread has no valid reads if (src_tile_dim.x <= 0 || src_tile_dim.y <= 0) { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = T(0); } } return; } // Use fast thread memory for bound checks bool tmp_idx[vec_size]; T tmp_val[vec_size]; STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { // Make sure tmp_idx only contains valid indices STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_idx[j] = (i < src_tile_dim.y) && (j < src_tile_dim.x); } // Read valid indices into tmp_val STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_val[j] = src[(tmp_idx[j] ? i * src_ld + j : 0)]; } // Zero out uneeded values STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_val[j] = tmp_idx[j] ? 
tmp_val[j] : T(0); } // Copy values to threadgroup memory STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = tmp_val[j]; } } } /* Iteration helper */ METAL_FUNC void next() { src += tile_stride; } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/transforms.h#L1 /////////////////////////////////////////////////////////////////////////////// // Transforms and Epilogues /////////////////////////////////////////////////////////////////////////////// template <typename OutT, typename InT> struct TransformNone { static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } static METAL_FUNC OutT apply(InT x, OutT) { return static_cast<OutT>(x); } }; template <typename OutT, typename InT> struct TransformAdd { TransformAdd(const float, const float) {} static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } static METAL_FUNC OutT apply(InT x, OutT c) { return static_cast<OutT>(x) + c; } }; template <typename OutT, typename InT> struct TransformAxpby { const float alpha; const float beta; TransformAxpby(const float alpha_, const float beta_) : alpha(alpha_), beta(beta_) {} static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } METAL_FUNC OutT apply(InT x, OutT c) const { return static_cast<OutT>(x * alpha + (beta * c)); } }; template <typename T> struct AccumHelper { typedef float accum_type; }; struct BlockSwizzle { static METAL_FUNC int2 swizzle(uint3 tid [[threadgroup_position_in_grid]], const int swizzle_log) { const int tid_x = (tid.x) >> swizzle_log; const int tid_y = ((tid.y) << swizzle_log) + ((tid.x) & ((1 << swizzle_log) - 1)); return int2(tid_x, tid_y); } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/mma.h#L1 /////////////////////////////////////////////////////////////////////////////// // MMA helper /////////////////////////////////////////////////////////////////////////////// template < typename T, typename U, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, short lda_tgp, short ldb_tgp, typename AccumType = float, typename Epilogue = TransformNone<U, AccumType>> struct BlockMMA { // Warp tile simdgroup matrix strides along M STEEL_CONST short TM_stride = 8 * WM; // Warp tile simdgroup matrix strides along M STEEL_CONST short TN_stride = 8 * WN; // Warp tile size along M STEEL_CONST short TM = BM / TM_stride; // Warp tile size along N STEEL_CONST short TN = BN / TN_stride; // Strides of A, B along reduction axis STEEL_CONST short simd_stride_a = { transpose_a ? TM_stride : TM_stride * lda_tgp}; STEEL_CONST short simd_stride_b = { transpose_b ? TN_stride * ldb_tgp : TN_stride}; // Jump between elements STEEL_CONST short jump_a = {transpose_a ? lda_tgp : 1}; STEEL_CONST short jump_b = {transpose_b ? ldb_tgp : 1}; STEEL_CONST short tile_stride_a = {transpose_a ? 8 * lda_tgp : 8}; STEEL_CONST short tile_stride_b = {transpose_b ? 
8 : 8 * ldb_tgp}; // Simdgroup matrices simdgroup_matrix<AccumType, 8, 8> Asimd[TM]; simdgroup_matrix<AccumType, 8, 8> Bsimd[TN]; simdgroup_matrix<AccumType, 8, 8> results[TM * TN] = { simdgroup_matrix<AccumType, 8, 8>(0)}; // Offsets within threadgroup const short tm; const short tn; short sm; short sn; short As_offset; short Bs_offset; /* Constructor */ METAL_FUNC BlockMMA( ushort simd_group_id [[simdgroup_index_in_threadgroup]], ushort simd_lane_id [[thread_index_in_simdgroup]]) : tm(8 * (simd_group_id / WN)), tn(8 * (simd_group_id % WN)) { // Determine thread position in simdgroup matrix short qid = simd_lane_id / 4; sm = (qid & 4) + (simd_lane_id / 2) % 4; sn = (qid & 2) * 2 + (simd_lane_id % 2) * 2; // Determine thread and simdgroup offset As_offset = transpose_a ? ((sn)*lda_tgp + (tm + sm)) : ((sn) + (tm + sm) * lda_tgp); Bs_offset = transpose_b ? ((tn + sn) * ldb_tgp + (sm)) : ((sm)*ldb_tgp + (tn + sn)); } /* (BM, BK) X (BK, BN) multiply accumulate function */ METAL_FUNC void mma(const threadgroup T* As, const threadgroup T* Bs) { // Adjust for simdgroup and thread location As += As_offset; Bs += Bs_offset; // Iterate over BK in blocks of 8 STEEL_PRAGMA_UNROLL for (short kk = 0; kk < BK; kk += 8) { simdgroup_barrier(mem_flags::mem_none); // Load elements from threadgroup A as simdgroup matrices STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { Asimd[i].thread_elements()[0] = static_cast<AccumType>(As[i * simd_stride_a + 0]); Asimd[i].thread_elements()[1] = static_cast<AccumType>(As[i * simd_stride_a + jump_a]); } simdgroup_barrier(mem_flags::mem_none); // Load elements from threadgroup B as simdgroup matrices STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { Bsimd[j].thread_elements()[0] = static_cast<AccumType>(Bs[j * simd_stride_b + 0]); Bsimd[j].thread_elements()[1] = static_cast<AccumType>(Bs[j * simd_stride_b + jump_b]); } simdgroup_barrier(mem_flags::mem_none); // Multiply and accumulate into result simdgroup matrices STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { short j_serp = (i % 2) ? 
(TN - 1 - j) : j; simdgroup_multiply_accumulate( results[i * TN + j_serp], Asimd[i], Bsimd[j_serp], results[i * TN + j_serp]); } } // Progress to next simdgroup tile As += tile_stride_a; Bs += tile_stride_b; } } /* Store results from simdgroup_matrix results into device memory */ METAL_FUNC void store_result(device U* D, const int ldd) const { // Adjust for simdgroup and thread location D += (sm + tm) * ldd + tn + sn; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue U outs[2] = {Epilogue::apply(accum[0]), Epilogue::apply(accum[1])}; // Write out D D[offset] = outs[0]; D[offset + 1] = outs[1]; } } } METAL_FUNC void store_result_safe(device U* D, const int ldd, short2 dst_tile_dims) const { // Adjust for simdgroup and thread location D += (sm + tm) * ldd + (tn + sn); dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; STEEL_PRAGMA_UNROLL for (int i = 0; i < TM; i++) { if (i * TM_stride < dst_tile_dims.y) { STEEL_PRAGMA_UNROLL for (int j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue and output C if (j * TN_stride < dst_tile_dims.x) { D[offset] = Epilogue::apply(accum[0]); } if (j * TN_stride + 1 < dst_tile_dims.x) { D[offset + 1] = Epilogue::apply(accum[1]); } } } } } /* Apply epilogue */ template <typename UnaryEpilogue> METAL_FUNC void apply_epilogue(thread const UnaryEpilogue& epilogue_op) { // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); // Apply epilogue accum[0] = epilogue_op.apply(accum[0]); accum[1] = epilogue_op.apply(accum[1]); } } } /* Apply epilogue */ template <typename BinaryEpilogue> METAL_FUNC void apply_epilogue( const device U* C, const int ldc, const int fdc, thread const BinaryEpilogue& epilogue_op) { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; // Apply epilogue accum[0] = epilogue_op.apply(accum[0], C[offset_c]); accum[1] = epilogue_op.apply(accum[1], C[offset_c + fdc]); } } } /* Apply epilogue */ template <typename BinaryEpilogue> METAL_FUNC void apply_epilogue_safe( const device U* C, const int ldc, const int fdc, short2 dst_tile_dims, thread const BinaryEpilogue& epilogue_op) { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) 
* ldc + (j * TN_stride) * fdc; // Read C U c_elems[2] = {0}; if ((j * TN_stride + 1) < dst_tile_dims.x) { c_elems[0] = C[offset_c]; c_elems[1] = C[offset_c + fdc]; } else if ((j * TN_stride) < dst_tile_dims.x) { c_elems[0] = C[offset_c]; } // Apply epilogue accum[0] = epilogue_op.apply(accum[0], c_elems[0]); accum[1] = epilogue_op.apply(accum[1], c_elems[1]); } } } /* Store results from simdgroup_matrix results into device memory */ METAL_FUNC void store_result( device U* D, const int ldd, const device U* C, const int ldc, const int fdc, thread const Epilogue& epilogue_op) const { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; D += (sm + tm) * ldd + tn + sn; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; int offset_d = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue U outs[2] = { epilogue_op.apply(accum[0], C[offset_c]), epilogue_op.apply(accum[1], C[offset_c + fdc])}; // Write out D D[offset_d] = outs[0]; D[offset_d + 1] = outs[1]; } } } METAL_FUNC void store_result_safe( device U* D, const int ldd, const device U* C, const int ldc, const int fdc, short2 dst_tile_dims, thread const Epilogue& epilogue_op) const { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; D += (sm + tm) * ldd + tn + sn; dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; STEEL_PRAGMA_UNROLL for (int i = 0; i < TM; i++) { if (i * TM_stride < dst_tile_dims.y) { STEEL_PRAGMA_UNROLL for (int j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; int offset_d = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue and output C if (j * TN_stride < dst_tile_dims.x) { D[offset_d] = epilogue_op.apply(accum[0], C[offset_c]); } if (j * TN_stride + 1 < dst_tile_dims.x) { D[offset_d + 1] = epilogue_op.apply(accum[1], C[offset_c + fdc]); } } } } } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/gemm.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM kernel class /////////////////////////////////////////////////////////////////////////////// template <bool M_aligned, bool N_aligned, bool K_aligned> struct LoopAlignment {}; template < typename T, typename U, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, bool MN_aligned, bool K_aligned, typename AccumType = typename AccumHelper<T>::accum_type, typename Epilogue = TransformNone<U, AccumType>> struct GEMMKernel { STEEL_CONST short tgp_padding_a = 16 / sizeof(T); STEEL_CONST short tgp_padding_b = 16 / sizeof(T); STEEL_CONST short tgp_mem_size_a = transpose_a ? BK * (BM + tgp_padding_a) : BM * (BK + tgp_padding_a); STEEL_CONST short tgp_mem_size_b = transpose_b ? BN * (BK + tgp_padding_b) : BK * (BN + tgp_padding_b); STEEL_CONST short tgp_mem_size = tgp_mem_size_a + tgp_mem_size_b; STEEL_CONST short tgp_size = WM * WN * 32; using loader_a_t = BlockLoader< T, transpose_a ? BK : BM, transpose_a ? BM : BK, transpose_a ? 
BM + tgp_padding_a : BK + tgp_padding_a, !transpose_a, tgp_size>; using loader_b_t = BlockLoader< T, transpose_b ? BN : BK, transpose_b ? BK : BN, transpose_b ? BK + tgp_padding_b : BN + tgp_padding_b, transpose_b, tgp_size>; using mma_t = BlockMMA< T, U, BM, BN, BK, WM, WN, transpose_a, transpose_b, transpose_a ? BM + tgp_padding_a : BK + tgp_padding_a, transpose_b ? BK + tgp_padding_b : BN + tgp_padding_b, AccumType, Epilogue>; /* Main kernel function */ template <bool M_aligned, bool N_aligned, bool K_aligned_> static METAL_FUNC void gemm_loop( threadgroup T* As [[threadgroup(0)]], threadgroup T* Bs [[threadgroup(1)]], const int gemm_k_iterations, thread loader_a_t& loader_a, thread loader_b_t& loader_b, thread mma_t& mma_op, thread const short& tgp_bm, thread const short& tgp_bn, thread const short& lbk, LoopAlignment<M_aligned, N_aligned, K_aligned_> l = {}) { // Appease the compiler (void)l; short2 tile_dims_A = transpose_a ? short2(tgp_bm, BK) : short2(BK, tgp_bm); short2 tile_dims_B = transpose_b ? short2(BK, tgp_bn) : short2(tgp_bn, BK); for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup if (M_aligned) { loader_a.load_unsafe(); } else { loader_a.load_safe(tile_dims_A); } if (N_aligned) { loader_b.load_unsafe(); } else { loader_b.load_safe(tile_dims_B); } threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } if (!K_aligned_) { threadgroup_barrier(mem_flags::mem_threadgroup); short2 tile_dims_A_last = transpose_a ? short2(tgp_bm, lbk) : short2(lbk, tgp_bm); short2 tile_dims_B_last = transpose_b ? short2(lbk, tgp_bn) : short2(tgp_bn, lbk); loader_a.load_safe(tile_dims_A_last); loader_b.load_safe(tile_dims_B_last); threadgroup_barrier(mem_flags::mem_threadgroup); mma_op.mma(As, Bs); } } /* Main kernel function */ static METAL_FUNC void run( const device T* A [[buffer(0)]], const device T* B [[buffer(1)]], device U* D [[buffer(2)]], const constant GEMMParams* params [[buffer(3)]], threadgroup T* As [[threadgroup(0)]], threadgroup T* Bs [[threadgroup(1)]], uint simd_lane_id [[thread_index_in_simdgroup]], uint simd_group_id [[simdgroup_index_in_threadgroup]], uint3 tid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) { // Pacifying compiler (void)lid; const int tid_y = ((tid.y) << params->swizzle_log) + ((tid.x) & ((1 << params->swizzle_log) - 1)); const int tid_x = (tid.x) >> params->swizzle_log; if (params->tiles_n <= tid_x || params->tiles_m <= tid_y) { return; } threadgroup_barrier(mem_flags::mem_none); // Find block in A, B, C const int c_row = tid_y * BM; const int c_col = tid_x * BN; const size_t c_row_long = size_t(c_row); const size_t c_col_long = size_t(c_col); A += transpose_a ? c_row_long : c_row_long * params->lda; B += transpose_b ? 
c_col_long * params->ldb : c_col_long; D += c_row_long * params->ldd + c_col_long; // Prepare threadgroup loading operations thread loader_a_t loader_a(A, params->lda, As, simd_group_id, simd_lane_id); thread loader_b_t loader_b(B, params->ldb, Bs, simd_group_id, simd_lane_id); // Prepare threadgroup mma operation thread mma_t mma_op(simd_group_id, simd_lane_id); int gemm_k_iterations = params->gemm_k_iterations_aligned; /////////////////////////////////////////////////////////////////////////////// // MNK aligned loop if (MN_aligned) { for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup loader_a.load_unsafe(); loader_b.load_unsafe(); threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } threadgroup_barrier(mem_flags::mem_none); // Loop tail if (!K_aligned) { int lbk = params->K - params->gemm_k_iterations_aligned * BK; short2 tile_dims_A = transpose_a ? short2(BM, lbk) : short2(lbk, BM); short2 tile_dims_B = transpose_b ? short2(lbk, BN) : short2(BN, lbk); loader_a.load_safe(tile_dims_A); loader_b.load_safe(tile_dims_B); threadgroup_barrier(mem_flags::mem_threadgroup); mma_op.mma(As, Bs); } // Store results to device memory mma_op.store_result(D, params->ldd); return; } /////////////////////////////////////////////////////////////////////////////// // MN unaligned loop else { // Loop over K - unaligned case short tgp_bm = min(BM, params->M - c_row); short tgp_bn = min(BN, params->N - c_col); short leftover_bk = params->K - params->gemm_k_iterations_aligned * BK; if (tgp_bm == BM && tgp_bn == BN) { gemm_loop<true, true, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result(D, params->ldd); return; } else if (tgp_bn == BN) { gemm_loop<false, true, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } else if (tgp_bm == BM) { gemm_loop<true, false, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } else { gemm_loop<false, false, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } } } }; // utils.h /////////////////////////////////////////////////////////////////////////////// // Single Array with generic dims template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint elem, device const int* shape, device const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename stride_t> METAL_FUNC stride_t elem_to_loc( stride_t elem, device const int* shape, device const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename 
stride_t> METAL_FUNC stride_t elem_to_loc( stride_t elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } // Non templated version to handle arbitrary dims template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint3 elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = elem.x * strides[ndim - 1] + elem.y * strides[ndim - 2]; for (int d = ndim - 3; d >= 0; --d) { loc += (elem.z % shape[d]) * strides[d]; elem.z /= shape[d]; } return loc; } METAL_FUNC ulong2 elem_to_loc_broadcast( uint elem, constant const int* shape, constant const size_t* a_strides, constant const size_t* b_strides, int ndim) { ulong loc_a{0}; ulong loc_b{0}; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { int pos_in_dim = (elem % shape[i]); elem /= shape[i]; loc_a += pos_in_dim * a_strides[i]; loc_b += pos_in_dim * b_strides[i]; } return ulong2(loc_a, loc_b); } METAL_FUNC ulong3 elem_to_loc_broadcast( uint elem, constant const int* shape, constant const size_t* a_strides, constant const size_t* b_strides, constant const size_t* c_strides, int ndim) { ulong loc_a{0}; ulong loc_b{0}; ulong loc_c{0}; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { int pos_in_dim = (elem % shape[i]); elem /= shape[i]; loc_a += pos_in_dim * a_strides[i]; loc_b += pos_in_dim * b_strides[i]; loc_c += pos_in_dim * c_strides[i]; } return ulong3(loc_a, loc_b, loc_c); } // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/kernels/steel_gemm_fused.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM kernels /////////////////////////////////////////////////////////////////////////////// constant bool has_batch [[function_constant(10)]]; constant bool use_out_source [[function_constant(100)]]; constant bool do_axpby [[function_constant(110)]]; constant bool align_M [[function_constant(200)]]; constant bool align_N [[function_constant(201)]]; constant bool align_K [[function_constant(202)]]; constant bool do_gather [[function_constant(300)]]; constant bool gather_bias = do_gather && use_out_source; // clang-format off template < typename T, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, typename AccumType = float> [[kernel, max_total_threads_per_threadgroup(WM* WN * 32)]] void gemm( const device T* A [[buffer(0)]], const device T* B [[buffer(1)]], const device T* C [[buffer(2), function_constant(use_out_source)]], device T* D [[buffer(3)]], const constant GEMMParams* params [[buffer(4)]], const constant GEMMAddMMParams* addmm_params [[buffer(5), function_constant(use_out_source)]], const constant int* batch_shape [[buffer(6)]], const constant size_t* batch_strides [[buffer(7)]], const constant uint32_t* lhs_indices [[buffer(10), function_constant(do_gather)]], const constant uint32_t* rhs_indices [[buffer(11), function_constant(do_gather)]], const constant uint32_t* C_indices [[buffer(12), function_constant(gather_bias)]], const constant int* operand_shape [[buffer(13), function_constant(do_gather)]], const constant size_t* operand_strides [[buffer(14), function_constant(do_gather)]], const constant packed_int3& operand_batch_ndim [[buffer(15), function_constant(do_gather)]], uint simd_lane_id [[thread_index_in_simdgroup]], uint simd_group_id [[simdgroup_index_in_threadgroup]], uint3 tid 
[[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) { // clang-format on // Pacifying compiler (void)lid; using gemm_kernel = GEMMKernel< T, T, BM, BN, BK, WM, WN, transpose_a, transpose_b, true, true, AccumType>; using loader_a_t = typename gemm_kernel::loader_a_t; using loader_b_t = typename gemm_kernel::loader_b_t; using mma_t = typename gemm_kernel::mma_t; // Find block const int tid_y = ((tid.y) << params->swizzle_log) + ((tid.x) & ((1 << params->swizzle_log) - 1)); const int tid_x = (tid.x) >> params->swizzle_log; // Exit early if out of bounds if (params->tiles_n <= tid_x || params->tiles_m <= tid_y) { return; } // Adjust for batch // Handle gather if (do_gather) { // Read indices uint32_t indx_A, indx_B, indx_C; if (has_batch) { const constant size_t* indx_A_bstrides = batch_strides; const constant size_t* indx_B_bstrides = batch_strides + params->batch_ndim; ulong2 indx_offsets = elem_to_loc_broadcast( tid.z, batch_shape, indx_A_bstrides, indx_B_bstrides, params->batch_ndim); indx_A = lhs_indices[indx_offsets.x]; indx_B = rhs_indices[indx_offsets.y]; if (use_out_source) { const constant size_t* indx_C_bstrides = indx_B_bstrides + params->batch_ndim; auto indx_offset_C = elem_to_loc( tid.z, batch_shape, indx_C_bstrides, params->batch_ndim); indx_C = C_indices[indx_offset_C]; } } else { indx_A = lhs_indices[params->batch_stride_a * tid.z]; indx_B = rhs_indices[params->batch_stride_b * tid.z]; if (use_out_source) { indx_C = C_indices[addmm_params->batch_stride_c * tid.z]; } } // Translate indices to offsets int batch_ndim_A = operand_batch_ndim.x; const constant int* batch_shape_A = operand_shape; const constant size_t* batch_strides_A = operand_strides; A += elem_to_loc(indx_A, batch_shape_A, batch_strides_A, batch_ndim_A); int batch_ndim_B = operand_batch_ndim.y; const constant int* batch_shape_B = batch_shape_A + batch_ndim_A; const constant size_t* batch_strides_B = batch_strides_A + batch_ndim_A; B += elem_to_loc(indx_B, batch_shape_B, batch_strides_B, batch_ndim_B); if (use_out_source) { int batch_ndim_C = operand_batch_ndim.z; const constant int* batch_shape_C = batch_shape_B + batch_ndim_B; const constant size_t* batch_strides_C = batch_strides_B + batch_ndim_B; C += elem_to_loc(indx_C, batch_shape_C, batch_strides_C, batch_ndim_C); } } // Handle regular batch else { if (has_batch) { const constant size_t* A_bstrides = batch_strides; const constant size_t* B_bstrides = batch_strides + params->batch_ndim; ulong2 batch_offsets = elem_to_loc_broadcast( tid.z, batch_shape, A_bstrides, B_bstrides, params->batch_ndim); A += batch_offsets.x; B += batch_offsets.y; if (use_out_source) { const constant size_t* C_bstrides = B_bstrides + params->batch_ndim; C += elem_to_loc(tid.z, batch_shape, C_bstrides, params->batch_ndim); } } else { A += params->batch_stride_a * tid.z; B += params->batch_stride_b * tid.z; if (use_out_source) { C += addmm_params->batch_stride_c * tid.z; } } } D += params->batch_stride_d * tid.z; // Prepare threadgroup memory threadgroup T As[gemm_kernel::tgp_mem_size_a]; threadgroup T Bs[gemm_kernel::tgp_mem_size_b]; threadgroup_barrier(mem_flags::mem_none); // Find block in A, B, C const int c_row = tid_y * BM; const int c_col = tid_x * BN; const size_t c_row_long = size_t(c_row); const size_t c_col_long = size_t(c_col); A += transpose_a ? c_row_long : c_row_long * params->lda; B += transpose_b ? 
c_col_long * params->ldb : c_col_long; D += c_row_long * params->ldd + c_col_long; if (use_out_source) { C += c_row_long * addmm_params->ldc + c_col_long * addmm_params->fdc; } // Prepare threadgroup mma operation thread mma_t mma_op(simd_group_id, simd_lane_id); // Prepare threadgroup loading operations thread loader_a_t loader_a(A, params->lda, As, simd_group_id, simd_lane_id); thread loader_b_t loader_b(B, params->ldb, Bs, simd_group_id, simd_lane_id); // Prepare threadgroup bounds const short tgp_bm = align_M ? BM : short(min(BM, params->M - c_row)); const short tgp_bn = align_N ? BN : short(min(BN, params->N - c_col)); // Prepare iterations int gemm_k_iterations = params->gemm_k_iterations_aligned; // Do unaligned K iterations first if (!align_K) { const int k_last = params->gemm_k_iterations_aligned * BK; const int k_remain = params->K - k_last; const size_t k_jump_a = transpose_a ? params->lda * size_t(k_last) : size_t(k_last); const size_t k_jump_b = transpose_b ? size_t(k_last) : params->ldb * size_t(k_last); // Move loader source ahead to end loader_a.src += k_jump_a; loader_b.src += k_jump_b; // Load tile const short2 tile_dims_A = transpose_a ? short2(tgp_bm, k_remain) : short2(k_remain, tgp_bm); const short2 tile_dims_B = transpose_b ? short2(k_remain, tgp_bn) : short2(tgp_bn, k_remain); loader_a.load_safe(tile_dims_A); loader_b.load_safe(tile_dims_B); threadgroup_barrier(mem_flags::mem_threadgroup); // Do matmul mma_op.mma(As, Bs); // Reset source back to start loader_a.src -= k_jump_a; loader_b.src -= k_jump_b; } const TransformAdd<AccumType, AccumType> epilogue_op_add( addmm_params->alpha, addmm_params->beta); const TransformAxpby<AccumType, AccumType> epilogue_op_axpby( addmm_params->alpha, addmm_params->beta); /////////////////////////////////////////////////////////////////////////////// // MNK aligned loop if (align_M && align_N) { // Do gemm for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup loader_a.load_unsafe(); loader_b.load_unsafe(); threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } threadgroup_barrier(mem_flags::mem_none); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_axpby); } else { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_add); } } // Store results to device memory return mma_op.store_result(D, params->ldd); } /////////////////////////////////////////////////////////////////////////////// // MN unaligned loop else { // Loop over K - unaligned case const int leftover_bk = 0; if ((align_M || tgp_bm == BM) && (align_N || tgp_bn == BN)) { // Do gemm gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<true, true, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_axpby); } else { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_add); } } // Store results to device memory return mma_op.store_result(D, params->ldd); } else if (align_N || tgp_bn == BN) { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<false, true, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { 
mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } else if (align_M || tgp_bm == BM) { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<true, false, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } else { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<false, false, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } } } #define instantiate_gemm(tname, trans_a, trans_b, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ template [[host_name("gemm_" #tname "_" #iname "_" #oname "_" #bm "_" #bn "_" #bk "_" #wm "_" #wn)]] \ [[kernel]] void gemm<itype, bm, bn, bk, wm, wn, trans_a, trans_b, float>( \ const device itype *A [[buffer(0)]], \ const device itype *B [[buffer(1)]], \ const device itype *C [[buffer(2), function_constant(use_out_source)]], \ device itype *D [[buffer(3)]], \ const constant GEMMParams* params [[buffer(4)]], \ const constant GEMMAddMMParams* addmm_params [[buffer(5), function_constant(use_out_source)]], \ const constant int* batch_shape [[buffer(6)]], \ const constant size_t* batch_strides [[buffer(7)]], \ const constant uint32_t* lhs_indices [[buffer(10), function_constant(do_gather)]], \ const constant uint32_t* rhs_indices [[buffer(11), function_constant(do_gather)]], \ const constant uint32_t* C_indices [[buffer(12), function_constant(gather_bias)]], \ const constant int* operand_shape [[buffer(13), function_constant(do_gather)]], \ const constant size_t* operand_strides [[buffer(14), function_constant(do_gather)]], \ const constant packed_int3& operand_batch_ndim [[buffer(15), function_constant(do_gather)]], \ uint simd_lane_id [[thread_index_in_simdgroup]], \ uint simd_group_id [[simdgroup_index_in_threadgroup]], \ uint3 tid [[threadgroup_position_in_grid]], \ uint3 lid [[thread_position_in_threadgroup]]); #define instantiate_gemm_transpose_helper(iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(nn, false, false, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(nt, false, true , iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(tn, true , false, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(tt, true , true , iname, itype, oname, otype, bm, bn, bk, wm, wn) instantiate_gemm_transpose_helper(f32, float, f32, float, 32, 32, 16, 2, 2) instantiate_gemm_transpose_helper(f16, half, f16, half, 32, 32, 16, 2, 2) #if defined(__HAVE_BFLOAT__) 
instantiate_gemm_transpose_helper(bf16, bfloat, bf16, bfloat, 32, 32, 16, 2, 2) #endif
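For readers untangling the tiling and epilogue machinery above, here is a minimal NumPy sketch of the end-to-end result the `gemm` kernel family produces: plain `A @ B` with `TransformNone`, `A @ B + C` with `TransformAdd`, and `alpha * (A @ B) + beta * C` with `TransformAxpby`. The function below is illustrative only (its name and signature are not part of the kernels) and ignores tiling, batching, gather indices and alignment handling.

```
import numpy as np

def gemm_reference(a, b, c=None, alpha=1.0, beta=1.0,
                   transpose_a=False, transpose_b=False):
    # transpose_a / transpose_b mirror the template parameters of GEMMKernel.
    if transpose_a:
        a = a.T
    if transpose_b:
        b = b.T
    # Accumulation happens in float32 (AccumType), whatever the input dtype.
    d = a.astype(np.float32) @ b.astype(np.float32)
    if c is None:
        return d  # TransformNone epilogue
    # TransformAdd is the alpha=1, beta=1 case; TransformAxpby is the general one.
    return alpha * d + beta * c
```

Everything else in the kernels above (BlockLoader, BlockMMA, the block swizzle, the aligned/unaligned branches) exists to produce this result tile by tile out of threadgroup memory.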
7
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/sort.metal
// Imported from https://github.com/ggerganov/llama.cpp/blob/master/ggml-metal.metal #include <metal_stdlib> using namespace metal; #define SWAP(x, y) { auto tmp = (x); (x) = (y); (y) = tmp; } #define SORT_ASC 1 #define SORT_DESC 0 template<int order, typename T> METAL_FUNC void argsort( device const T * x, device uint32_t * dst, constant int64_t & ncols, constant int64_t & ncols_pad, threadgroup uint32_t * shared_values [[threadgroup(0)]], uint3 tgpig[[threadgroup_position_in_grid]], uint3 tpitg[[thread_position_in_threadgroup]]) { int col = tpitg[0]; int row = tgpig[1]; if (col >= ncols_pad) return; device const T * x_row = x + row * ncols; threadgroup uint32_t * dst_row = shared_values; // initialize indices dst_row[col] = col; threadgroup_barrier(mem_flags::mem_threadgroup); for (int k = 2; k <= ncols_pad; k *= 2) { for (int j = k / 2; j > 0; j /= 2) { int ixj = col ^ j; if (ixj > col) { if ((col & k) == 0) { if (dst_row[col] >= ncols || (dst_row[ixj] < ncols && (order == SORT_ASC ? x_row[dst_row[col]] > x_row[dst_row[ixj]] : x_row[dst_row[col]] < x_row[dst_row[ixj]])) ) { SWAP(dst_row[col], dst_row[ixj]); } } else { if (dst_row[ixj] >= ncols || (dst_row[col] < ncols && (order == SORT_ASC ? x_row[dst_row[col]] < x_row[dst_row[ixj]] : x_row[dst_row[col]] > x_row[dst_row[ixj]])) ) { SWAP(dst_row[col], dst_row[ixj]); } } } threadgroup_barrier(mem_flags::mem_threadgroup); } } // copy the result to dst without the padding if (col < ncols) { dst[row * ncols + col] = dst_row[col]; } } #define ARGSORT(T, RUST_T) \ kernel void asort_asc_##RUST_T( \ device const T * x, \ device uint32_t * dst, \ constant int64_t & ncols, \ constant int64_t & ncols_pad, \ threadgroup uint32_t * shared_values [[threadgroup(0)]], \ uint3 tgpig[[threadgroup_position_in_grid]], \ uint3 tpitg[[thread_position_in_threadgroup]] \ ) { \ argsort<SORT_ASC, T>(x, dst, ncols, ncols_pad, shared_values, tgpig, tpitg); \ } \ kernel void asort_desc_##RUST_T( \ device const T * x, \ device uint32_t * dst, \ constant int64_t & ncols, \ constant int64_t & ncols_pad, \ threadgroup uint32_t * shared_values [[threadgroup(0)]], \ uint3 tgpig[[threadgroup_position_in_grid]], \ uint3 tpitg[[thread_position_in_threadgroup]] \ ) { \ argsort<SORT_DESC, T>(x, dst, ncols, ncols_pad, shared_values, tgpig, tpitg); \ } \ ARGSORT(float, f32) ARGSORT(half, f16) ARGSORT(uint8_t, u8) ARGSORT(uint32_t, u32) #if __METAL_VERSION__ >= 220 ARGSORT(int64_t, i64) #endif #if defined(__HAVE_BFLOAT__) ARGSORT(bfloat, bf16) #endif
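The `argsort` kernel above is a bitonic sorting network over indices, run on a row padded to the next power of two; out-of-range indices always compare as larger, so they sink into the padded tail. Below is a sequential Python emulation of the same compare-and-swap schedule, for illustration only (it is not part of the kernels). Within each `(k, j)` stage the pairs `(col, col ^ j)` are disjoint, so processing them in a plain loop is equivalent to the barrier-synchronized parallel version.

```
def bitonic_argsort_ref(row, ascending=True):
    ncols = len(row)
    ncols_pad = 1 << max(ncols - 1, 0).bit_length()   # next power of two
    idx = list(range(ncols_pad))                      # dst_row initialisation

    def out_of_order(a, b):
        # Padding indices (>= ncols) always count as "greater", like the
        # `dst_row[...] >= ncols` checks in the kernel.
        if a >= ncols:
            return True
        if b >= ncols:
            return False
        return row[a] > row[b] if ascending else row[a] < row[b]

    k = 2
    while k <= ncols_pad:
        j = k // 2
        while j > 0:
            for col in range(ncols_pad):
                ixj = col ^ j
                if ixj > col:
                    if (col & k) == 0:
                        if out_of_order(idx[col], idx[ixj]):
                            idx[col], idx[ixj] = idx[ixj], idx[col]
                    elif out_of_order(idx[ixj], idx[col]):
                        idx[col], idx[ixj] = idx[ixj], idx[col]
            j //= 2
        k *= 2
    return idx[:ncols]    # drop the padding, as the final copy does
```

For example, `bitonic_argsort_ref([3.0, 1.0, 2.0])` returns `[1, 2, 0]`, which is what the ascending entry points are expected to produce for that row.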
8
0
hf_public_repos/candle/candle-metal-kernels
hf_public_repos/candle/candle-metal-kernels/src/ternary.metal
#include <metal_stdlib> using namespace metal; METAL_FUNC uint get_strided_index( uint idx, constant size_t &num_dims, constant size_t *dims, constant size_t *strides ) { uint strided_i = 0; for (uint d = 0; d < num_dims; d++) { uint dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } template<typename T, typename ID> METAL_FUNC void where_cond( constant size_t &numel, constant size_t &num_dims, constant size_t *dims, constant size_t *strides, constant size_t *strides_t, constant size_t *strides_f, device const ID *ids, device const T *t, device const T *f, device T *out, uint i [[ thread_position_in_grid ]] ) { if (i >= numel){ return; } uint strided_i = get_strided_index(i, num_dims, dims, strides); uint strided_i_t = get_strided_index(i, num_dims, dims, strides_t); uint strided_i_f = get_strided_index(i, num_dims, dims, strides_f); out[i] = ids[strided_i] ? t[strided_i_t] : f[strided_i_f]; } #define WHERE_OP(T, ID, FN_NAME) \ kernel void FN_NAME( \ constant size_t &numel, \ constant size_t &num_dims, \ constant size_t *dims, \ constant size_t *strides, \ constant size_t *strides_t, \ constant size_t *strides_f, \ device const ID *ids, \ device const T *t, \ device const T *f, \ device T *out, \ uint i [[ thread_position_in_grid ]] \ ) { \ where_cond<T, ID>(numel, num_dims, dims, strides, strides_t, strides_f, ids, t, f, out, i); \ } \ WHERE_OP(half, uint32_t, where_u32_f16) WHERE_OP(float, uint32_t, where_u32_f32) WHERE_OP(uint8_t, uint32_t, where_u32_u8) WHERE_OP(uint32_t, uint32_t, where_u32_u32) WHERE_OP(half, uint8_t, where_u8_f16) WHERE_OP(float, uint8_t, where_u8_f32) WHERE_OP(uint8_t, uint8_t, where_u8_u8) WHERE_OP(uint32_t, uint8_t, where_u8_u32) #if __METAL_VERSION__ >= 220 WHERE_OP(int64_t, uint8_t, where_u8_i64) WHERE_OP(int64_t, uint32_t, where_u32_i64) WHERE_OP(half, int64_t, where_i64_f16) WHERE_OP(float, int64_t, where_i64_f32) WHERE_OP(uint8_t, int64_t, where_i64_u8) WHERE_OP(uint32_t, int64_t, where_i64_u32) WHERE_OP(int64_t, int64_t, where_i64_i64) #if defined(__HAVE_BFLOAT__) WHERE_OP(bfloat, int64_t, where_i64_bf16) #endif #endif #if defined(__HAVE_BFLOAT__) WHERE_OP(bfloat, uint8_t, where_u8_bf16) WHERE_OP(bfloat, uint32_t, where_u32_bf16) #endif
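`get_strided_index` is the piece doing the real work here: it converts a linear element index into an offset into a buffer described by `dims` and `strides`, which is how the kernel supports non-contiguous and broadcast inputs. A small Python mirror of the helper and of `where_cond` follows (names are kept for readability; the Python itself is only illustrative).

```
def get_strided_index(idx, dims, strides):
    # Walk dimensions from innermost to outermost, exactly like the Metal loop.
    strided_i = 0
    for d in reversed(range(len(dims))):
        strided_i += (idx % dims[d]) * strides[d]
        idx //= dims[d]
    return strided_i

def where_cond_ref(dims, strides, strides_t, strides_f, ids, t, f):
    numel = 1
    for d in dims:
        numel *= d
    out = []
    for i in range(numel):
        cond = ids[get_strided_index(i, dims, strides)]
        out.append(t[get_strided_index(i, dims, strides_t)] if cond
                   else f[get_strided_index(i, dims, strides_f)])
    return out
```

With contiguous row-major strides (e.g. `dims=[2, 3]`, `strides=[3, 1]`) the helper is the identity mapping; broadcasting a scalar condition is simply `strides=[0, 0]`.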
9
0
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/local_sgd.py
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from accelerate import Accelerator, DistributedType class LocalSGD: """ A helper class to support local SGD on top of Accelerator. It simply runs a given number of updates independently on each device, and averages model weights every K steps. It should be used only in the multi-GPU (or multi-CPU) setup without extensions such as DeepSpeed. In particular, this is a simple implementation that cannot support scenarios such as model parallelism. Although we are not aware of the true origins of this simple approach, the idea of local SGD is quite old and goes back to at least: Zhang, J., De Sa, C., Mitliagkas, I., & Ré, C. (2016). [Parallel SGD: When does averaging help?. arXiv preprint arXiv:1606.07365.](https://arxiv.org/abs/1606.07365) We credit the term Local SGD to the following paper (but there might be earlier references we are not aware of). Stich, Sebastian Urban. ["Local SGD Converges Fast and Communicates Little." ICLR 2019-International Conference on Learning Representations. No. CONF. 2019.](https://arxiv.org/abs/1805.09767) """ def __enter__(self): if self.enabled: self.model_sync_obj = self.model.no_sync() self.model_sync_obj.__enter__() return self def __exit__(self, type, value, tb): if self.enabled: # Average all models on exit self._sync_and_avg_model_params() self.model_sync_obj.__exit__(type, value, tb) def __init__(self, accelerator: Accelerator, model: torch.nn.Module, local_sgd_steps: int, enabled: bool = True): """ Constructor. Args: accelerator (`Accelerator`): Accelerator object. model (`torch.nn.Module`): The model whose parameters we need to average. local_sgd_steps (`int`): A number of local SGD steps (before model parameters are synchronized). enabled (`bool`): Local SGD is disabled if this parameter is set to `False`. """ if accelerator.distributed_type not in [ DistributedType.NO, DistributedType.MULTI_CPU, DistributedType.MULTI_GPU, DistributedType.MULTI_XPU, DistributedType.MULTI_MLU, DistributedType.MULTI_MUSA, DistributedType.MULTI_NPU, ]: raise NotImplementedError("LocalSGD is supported only for CPUs and GPUs (no DeepSpeed or MegatronLM)") self.enabled = enabled and accelerator.distributed_type != DistributedType.NO self.num_steps = 0 if self.enabled: self.accelerator = accelerator self.model = model self.local_sgd_steps = local_sgd_steps def step(self): """ This function makes a "step" and synchronizes model parameters if necessary. """ self.num_steps += 1 if not self.enabled: return if self.num_steps % self.local_sgd_steps == 0: self._sync_and_avg_model_params() def _sync_and_avg_model_params(self): """ Synchronize + Average model parameters across all GPUs """ self.accelerator.wait_for_everyone() with self.accelerator.autocast(): for param in self.model.parameters(): param.data = self.accelerator.reduce(param.data, reduction="mean")
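A self-contained usage sketch of the class above. The toy model, optimizer and synthetic dataset are placeholders for illustration; only the `LocalSGD` context manager and `step()` calls mirror the API shown.

```
import torch
from accelerate import Accelerator
from accelerate.local_sgd import LocalSGD

accelerator = Accelerator()
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

with LocalSGD(accelerator=accelerator, model=model, local_sgd_steps=8, enabled=True) as local_sgd:
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        # Gradients stay local between syncs (model.no_sync()); every
        # `local_sgd_steps` calls, parameters are averaged across processes.
        local_sgd.step()
```

On a single process (`DistributedType.NO`) the helper silently disables itself, so the same loop runs unchanged.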
0
0
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/memory_utils.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import warnings warnings.warn( "memory_utils has been reorganized to utils.memory. Import `find_executable_batch_size` from the main `__init__`: " "`from accelerate import find_executable_batch_size` to avoid this warning.", FutureWarning, )
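For reference, a short sketch of the replacement import the deprecation warning points to, with `find_executable_batch_size` used as a decorator that retries the wrapped function at smaller batch sizes when it hits an out-of-memory error. The function body here is a placeholder.

```
from accelerate import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def training_function(batch_size):
    # Placeholder body: on CUDA OOM, the decorator retries with a smaller batch size.
    print(f"Trying batch_size={batch_size}")

training_function()
```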
1
0
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/data_loader.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from contextlib import suppress from typing import Callable, List, Optional, Union import torch from torch.utils.data import BatchSampler, DataLoader, IterableDataset, RandomSampler from .logging import get_logger from .state import DistributedType, GradientState, PartialState, is_torch_xla_available from .utils import ( RNGType, broadcast, broadcast_object_list, concatenate, find_batch_size, get_data_structure, initialize_tensors, is_torch_version, is_torchdata_stateful_dataloader_available, send_to_device, slice_tensors, synchronize_rng_states, ) logger = get_logger(__name__) # kwargs of the DataLoader in min version 1.4.0. _PYTORCH_DATALOADER_KWARGS = { "batch_size": 1, "shuffle": False, "sampler": None, "batch_sampler": None, "num_workers": 0, "collate_fn": None, "pin_memory": False, "drop_last": False, "timeout": 0, "worker_init_fn": None, "multiprocessing_context": None, "generator": None, "prefetch_factor": 2, "persistent_workers": False, } # kwargs added after by version _PYTORCH_DATALOADER_ADDITIONAL_KWARGS = {} for v, additional_kwargs in _PYTORCH_DATALOADER_ADDITIONAL_KWARGS.items(): if is_torch_version(">=", v): _PYTORCH_DATALOADER_KWARGS.update(additional_kwargs) class SeedableRandomSampler(RandomSampler): """ Same as a random sampler, except that in `__iter__` a seed can be used. Needed specifically in distributed cases, when the random generator for each GPU needs to start from the same seed and be fully reproducable on multiple iterations. If a custom `generator` is passed, it will rely on its initial seed as well as the current iteration it is on (stored in `self.epoch`). """ def __init__(self, *args, **kwargs): data_seed = kwargs.pop("data_seed", None) super().__init__(*args, **kwargs) self.initial_seed = data_seed if data_seed is not None else torch.random.initial_seed() self.epoch = 0 def __iter__(self): if self.generator is None: self.generator = torch.Generator() self.generator.manual_seed(self.initial_seed) # Allow `self.epoch` to modify the seed of the generator seed = self.epoch + self.initial_seed # print("Setting seed at epoch", self.epoch, seed) self.generator.manual_seed(seed) yield from super().__iter__() self.set_epoch(self.epoch + 1) def set_epoch(self, epoch: int): "Sets the current iteration of the sampler." self.epoch = epoch class BatchSamplerShard(BatchSampler): """ Wraps a PyTorch `BatchSampler` to generate batches for one of the processes only. Instances of this class will always yield a number of batches that is a round multiple of `num_processes` and that all have the same size. Depending on the value of the `drop_last` attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. Args: batch_sampler (`torch.utils.data.sampler.BatchSampler`): The batch sampler to split in several shards. 
num_processes (`int`, *optional*, defaults to 1): The number of processes running concurrently. process_index (`int`, *optional*, defaults to 0): The index of the current process. split_batches (`bool`, *optional*, defaults to `False`): Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. On two processes with a sampler of `[[0, 1, 2, 3], [4, 5, 6, 7]]`, this will result in: - the sampler on process 0 to yield `[0, 1, 2, 3]` and the sampler on process 1 to yield `[4, 5, 6, 7]` if this argument is set to `False`. - the sampler on process 0 to yield `[0, 1]` then `[4, 5]` and the sampler on process 1 to yield `[2, 3]` then `[6, 7]` if this argument is set to `True`. even_batches (`bool`, *optional*, defaults to `True`): Whether or not to loop back at the beginning of the sampler when the number of samples is not a round multiple of (original batch size / number of processes). <Tip warning={true}> `BatchSampler`s with varying batch sizes are not enabled by default. To enable this behaviour, set `even_batches` equal to `False` </Tip>""" def __init__( self, batch_sampler: BatchSampler, num_processes: int = 1, process_index: int = 0, split_batches: bool = False, even_batches: bool = True, ): if split_batches and batch_sampler.batch_size % num_processes != 0: raise ValueError( f"To use `BatchSamplerShard` in `split_batches` mode, the batch size ({batch_sampler.batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) self.batch_sampler = batch_sampler self.num_processes = num_processes self.process_index = process_index self.split_batches = split_batches self.even_batches = even_batches self.batch_size = getattr(batch_sampler, "batch_size", None) self.drop_last = getattr(batch_sampler, "drop_last", False) if self.batch_size is None and self.even_batches: raise ValueError( "You need to use `even_batches=False` when the batch sampler has no batch size. If you " "are not calling this method directly, set `accelerator.even_batches=False` instead." ) @property def total_length(self): return len(self.batch_sampler) def __len__(self): if self.split_batches: # Split batches does not change the length of the batch sampler return len(self.batch_sampler) if len(self.batch_sampler) % self.num_processes == 0: # If the length is a round multiple of the number of processes, it's easy. return len(self.batch_sampler) // self.num_processes length = len(self.batch_sampler) // self.num_processes if self.drop_last: # Same if we drop the remainder. return length elif self.even_batches: # When we even batches we always get +1 return length + 1 else: # Otherwise it depends on the process index. return length + 1 if self.process_index < len(self.batch_sampler) % self.num_processes else length def __iter__(self): return self._iter_with_split() if self.split_batches else self._iter_with_no_split() def _iter_with_split(self): initial_data = [] batch_length = self.batch_sampler.batch_size // self.num_processes for idx, batch in enumerate(self.batch_sampler): if idx == 0: initial_data = batch if len(batch) == self.batch_size: # If the batch is full, we yield the part of it this process is responsible of. yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] # If drop_last is True of the last batch was full, iteration is over, otherwise... 
if not self.drop_last and len(initial_data) > 0 and len(batch) < self.batch_size: if not self.even_batches: if len(batch) > batch_length * self.process_index: yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] else: # For degenerate cases where the dataset has less than num_process * batch_size samples while len(initial_data) < self.batch_size: initial_data += initial_data batch = batch + initial_data yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] def _iter_with_no_split(self): initial_data = [] batch_to_yield = [] for idx, batch in enumerate(self.batch_sampler): # We gather the initial indices in case we need to circle back at the end. if not self.drop_last and idx < self.num_processes: initial_data += batch # We identify the batch to yield but wait until we ar sure every process gets a full batch before actually # yielding it. if idx % self.num_processes == self.process_index: batch_to_yield = batch if idx % self.num_processes == self.num_processes - 1 and ( self.batch_size is None or len(batch) == self.batch_size ): yield batch_to_yield batch_to_yield = [] # If drop_last is True, iteration is over, otherwise... if not self.drop_last and len(initial_data) > 0: if not self.even_batches: if len(batch_to_yield) > 0: yield batch_to_yield else: # ... we yield the complete batch we had saved before if it has the proper length if len(batch_to_yield) == self.batch_size: yield batch_to_yield # For degenerate cases where the dataset has less than num_process * batch_size samples while len(initial_data) < self.num_processes * self.batch_size: initial_data += initial_data # If the last batch seen was of the proper size, it has been yielded by its process so we move to the next if len(batch) == self.batch_size: batch = [] idx += 1 # Make sure we yield a multiple of self.num_processes batches cycle_index = 0 while idx % self.num_processes != 0 or len(batch) > 0: end_index = cycle_index + self.batch_size - len(batch) batch += initial_data[cycle_index:end_index] if idx % self.num_processes == self.process_index: yield batch cycle_index = end_index batch = [] idx += 1 class IterableDatasetShard(IterableDataset): """ Wraps a PyTorch `IterableDataset` to generate samples for one of the processes only. Instances of this class will always yield a number of samples that is a round multiple of the actual batch size (depending of the value of `split_batches`, this is either `batch_size` or `batch_size x num_processes`). Depending on the value of the `drop_last` attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small or loop with indices from the beginning. Args: dataset (`torch.utils.data.dataset.IterableDataset`): The batch sampler to split in several shards. batch_size (`int`, *optional*, defaults to 1): The size of the batches per shard (if `split_batches=False`) or the size of the batches (if `split_batches=True`). drop_last (`bool`, *optional*, defaults to `False`): Whether or not to drop the last incomplete batch or complete the last batches by using the samples from the beginning. num_processes (`int`, *optional*, defaults to 1): The number of processes running concurrently. process_index (`int`, *optional*, defaults to 0): The index of the current process. split_batches (`bool`, *optional*, defaults to `False`): Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. 
On two processes with an iterable dataset yielding of `[0, 1, 2, 3, 4, 5, 6, 7]`, this will result in: - the shard on process 0 to yield `[0, 1, 2, 3]` and the shard on process 1 to yield `[4, 5, 6, 7]` if this argument is set to `False`. - the shard on process 0 to yield `[0, 1, 4, 5]` and the sampler on process 1 to yield `[2, 3, 6, 7]` if this argument is set to `True`. """ def __init__( self, dataset: IterableDataset, batch_size: int = 1, drop_last: bool = False, num_processes: int = 1, process_index: int = 0, split_batches: bool = False, ): if split_batches and batch_size > 1 and batch_size % num_processes != 0: raise ValueError( f"To use `IterableDatasetShard` in `split_batches` mode, the batch size ({batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) self.dataset = dataset self.batch_size = batch_size self.drop_last = drop_last self.num_processes = num_processes self.process_index = process_index self.split_batches = split_batches def set_epoch(self, epoch): self.epoch = epoch if hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) def __len__(self): # We will just raise the downstream error if the underlying dataset is not sized if self.drop_last: return (len(self.dataset) // (self.batch_size * self.num_processes)) * self.batch_size else: return math.ceil(len(self.dataset) / (self.batch_size * self.num_processes)) * self.batch_size def __iter__(self): if ( not hasattr(self.dataset, "set_epoch") and hasattr(self.dataset, "generator") and isinstance(self.dataset.generator, torch.Generator) ): self.dataset.generator.manual_seed(self.epoch) real_batch_size = self.batch_size if self.split_batches else (self.batch_size * self.num_processes) process_batch_size = (self.batch_size // self.num_processes) if self.split_batches else self.batch_size process_slice = range(self.process_index * process_batch_size, (self.process_index + 1) * process_batch_size) first_batch = None current_batch = [] for element in self.dataset: current_batch.append(element) # Wait to have a full batch before yielding elements. if len(current_batch) == real_batch_size: for i in process_slice: yield current_batch[i] if first_batch is None: first_batch = current_batch.copy() current_batch = [] # Finished if drop_last is True, otherwise complete the last batch with elements from the beginning. if not self.drop_last and len(current_batch) > 0: if first_batch is None: first_batch = current_batch.copy() while len(current_batch) < real_batch_size: current_batch += first_batch for i in process_slice: yield current_batch[i] class DataLoaderStateMixin: """ Mixin class that adds a state to a `DataLoader` to keep track of the status inside the dataloader such as at the end of the iteration, the number of items in the dataset in the last batch relative to the batch size, and other useful information that might be needed. **Available attributes:** - **end_of_dataloader** (`bool`) -- Whether at the last iteration or batch - **remainder** (`int`) -- The number of items that are remaining in the last batch, relative to the total batch size <Tip warning={true}> Inheriters of this class should ensure that the class creates a `GradientState()` instance, stored in `self.gradient_state`. 
</Tip> """ def __init_subclass__(cls, **kwargs): cls.end_of_dataloader = False cls.remainder = -1 def reset(self): self.end_of_dataloader = False self.remainder = -1 def begin(self): "Prepares the gradient state for the current dataloader" self.reset() with suppress(Exception): if not self._drop_last: length = getattr(self.dataset, "total_dataset_length", len(self.dataset)) self.remainder = length % self.total_batch_size self.gradient_state._add_dataloader(self) def end(self): "Cleans up the gradient state after exiting the dataloader" self.gradient_state._remove_dataloader(self) class DataLoaderAdapter: """ A class which wraps around a PyTorch `DataLoader` (or variants of it) to be used with the `Accelerator`. For compatability reasons, this class inherits from the class it wraps around, so it can be used as a drop-in. """ def __init__(self, dataset, use_stateful_dataloader=False, batch_sampler=None, **kwargs): self.use_stateful_dataloader = use_stateful_dataloader if is_torchdata_stateful_dataloader_available(): from torchdata.stateful_dataloader import StatefulDataLoader if use_stateful_dataloader and not is_torchdata_stateful_dataloader_available(): raise ImportError( "StatefulDataLoader is not available. Please install torchdata version 0.8.0 or higher to use it." ) if use_stateful_dataloader: self.base_dataloader = StatefulDataLoader(dataset, batch_sampler=batch_sampler, **kwargs) else: self.base_dataloader = DataLoader(dataset, batch_sampler=batch_sampler, **kwargs) if hasattr(self.base_dataloader, "state_dict"): self.dl_state_dict = self.base_dataloader.state_dict() def __getattr__(self, name): # Avoid infinite recursion if we try to access a nonexistent base_dataloader attribute. if name == "base_dataloader": raise AttributeError() # Delegate attribute access to the internal dataloader return getattr(self.base_dataloader, name) def state_dict(self): return self.dl_state_dict def load_state_dict(self, state_dict): self.base_dataloader.load_state_dict(state_dict) @property def __class__(self): """ In order to maintain backwards compatability with other code, we need to ensure `isinstance(obj, DataLoader)` returs true. This is because some downstream code assumes that the `DataLoader` is the base class of the object. """ return self.base_dataloader.__class__ def __len__(self): return len(self.base_dataloader) def adjust_state_dict_for_prefetch(self): """ Adjusts the state dict for prefetching. Natively, this will adjust all of the iters yielded keys in `self.dl_state_dict` by a factor of `num_processes - 1`, however if a custom correction is needed, this can be overridden. This should modify `self.dl_state_dict` directly """ # The state dict will be off by a factor of `n-1` batch too many during DDP, # so we need to adjust it here if PartialState().distributed_type != DistributedType.NO: factor = PartialState().num_processes - 1 if self.dl_state_dict["_sampler_iter_yielded"] > 0: self.dl_state_dict["_sampler_iter_yielded"] -= factor if self.dl_state_dict["_num_yielded"] > 0: self.dl_state_dict["_num_yielded"] -= factor if self.dl_state_dict["_index_sampler_state"] is not None: if ( "samples_yielded" in self.dl_state_dict["_index_sampler_state"] and self.dl_state_dict["_index_sampler_state"]["samples_yielded"] > 0 ): self.dl_state_dict["_index_sampler_state"]["samples_yielded"] -= self.batch_size * factor def _update_state_dict(self): # The state_dict of the underlying base_dataloader may be ahead of what is currently being yielded. # E.g. 
the implementation of DataLoaderShard involves having an underlying iterator 1 element ahead of # what it wants to yield. # # _update_state_dict is called to snapshot the state_dict that would properly recover the DataLoaderAdapter. if hasattr(self.base_dataloader, "state_dict"): self.dl_state_dict = self.base_dataloader.state_dict() # Potentially modify the state_dict to adjust for prefetching self.adjust_state_dict_for_prefetch() # Then tag if we are at the end of the dataloader self.dl_state_dict["_iterator_finished"] = self.end_of_dataloader class DataLoaderShard(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of `DataLoaderAdapter` that will deal with device placement and current distributed setup. Args: dataset (`torch.utils.data.dataset.Dataset`): The dataset to use to build this dataloader. device (`torch.device`, *optional*): If passed, the device to put all batches on. rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: an optional `torch.Generator` synchronized_generator (`torch.Generator`, *optional*): A random number generator to keep synchronized across processes. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): Whether to have this class adapt `StatefulDataLoader` from `torchdata` instead of the regular `DataLoader`. **kwargs (additional keyword arguments, *optional*): All other keyword arguments to pass to the regular `DataLoader` initialization. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. 
""" def __init__( self, dataset, device=None, rng_types=None, synchronized_generator=None, skip_batches=0, use_stateful_dataloader=False, _drop_last: bool = False, _non_blocking: bool = False, **kwargs, ): super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.device = device self.rng_types = rng_types self.synchronized_generator = synchronized_generator self.skip_batches = skip_batches self.gradient_state = GradientState() self._drop_last = _drop_last self._non_blocking = _non_blocking self.iteration = 0 def __iter__(self): if self.rng_types is not None: synchronize_rng_states(self.rng_types, self.synchronized_generator) self.begin() self.set_epoch(self.iteration) dataloader_iter = self.base_dataloader.__iter__() # We iterate one batch ahead to check when we are at the end try: current_batch = next(dataloader_iter) except StopIteration: yield batch_index = 0 while True: try: # But we still move it to the device so it is done before `StopIteration` is reached if self.device is not None: current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking) self._update_state_dict() next_batch = next(dataloader_iter) if batch_index >= self.skip_batches: yield current_batch batch_index += 1 current_batch = next_batch except StopIteration: self.end_of_dataloader = True self._update_state_dict() if batch_index >= self.skip_batches: yield current_batch break self.iteration += 1 self.end() def __reduce__(self): """ Define the `__reduce__` method to ensure a `DataLoaderShard` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. """ args = super().__reduce__() return (DataLoaderShard, *args[1:]) def set_epoch(self, epoch: int): # In case it is manually passed in, the user can set it to what they like if self.iteration != epoch: self.iteration = epoch if hasattr(self.batch_sampler, "set_epoch"): self.batch_sampler.set_epoch(epoch) if hasattr(self.batch_sampler, "sampler") and hasattr(self.batch_sampler.sampler, "set_epoch"): self.batch_sampler.sampler.set_epoch(epoch) # We support if a custom `Dataset` implementation has `set_epoch` # or in general HF datasets `Datasets` elif hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) @property def total_batch_size(self): batch_sampler = self.sampler if isinstance(self.sampler, BatchSampler) else self.batch_sampler return ( batch_sampler.batch_size if getattr(batch_sampler, "split_batches", False) else (batch_sampler.batch_size * getattr(batch_sampler, "num_processes", 1)) ) @property def total_dataset_length(self): if hasattr(self.dataset, "total_length"): return self.dataset.total_length else: return len(self.dataset) def get_sampler(self): return get_sampler(self) def set_sampler(self, sampler): sampler_is_batch_sampler = isinstance(self.sampler, BatchSampler) if sampler_is_batch_sampler: self.sampler.sampler = sampler else: self.batch_sampler.sampler = sampler if hasattr(self.batch_sampler, "batch_sampler"): self.batch_sampler.batch_sampler.sampler = sampler if is_torch_xla_available(): import torch_xla.distributed.parallel_loader as xpl class MpDeviceLoaderWrapper(xpl.MpDeviceLoader): """ Wrapper for the xpl.MpDeviceLoader class that knows the total batch size. XLA preloading threads will all call DataLoaderShard's __iter__(). 
Remove rng_types from DataLoaderShard to prevent it from using the XLA device in the preloading threads, and synchronize the RNG once from the main thread only. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. """ def __init__(self, dataloader: DataLoaderShard, device: torch.device): super().__init__(dataloader, device) self._rng_types = self._loader.rng_types self._loader.rng_types = None self.device = device def __iter__(self): if self._rng_types is not None: synchronize_rng_states(self._rng_types, self._loader.synchronized_generator) return super().__iter__() def set_epoch(self, epoch: int): if hasattr(self.dataloader, "set_epoch"): self.dataloader.set_epoch(epoch) @property def total_batch_size(self): return self._loader.total_batch_size @property def total_dataset_length(self): return self._loader.total_dataset_length @property def batch_sampler(self): return self._loader.batch_sampler @property def dataloader(self): return self._loader class DataLoaderDispatcher(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of `DataLoaderAdapter` that will iterate and preprocess on process 0 only, then dispatch on each process their part of the batch. Args: split_batches (`bool`, *optional*, defaults to `False`): Whether the resulting `DataLoader` should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing of `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of `batch_size`. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning of an iteration. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): Whether to have this class adapt `StatefulDataLoader` from `torchdata` instead of the regular `DataLoader`. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. 
""" def __init__( self, dataset, split_batches: bool = False, skip_batches=0, use_stateful_dataloader=False, _drop_last: bool = False, _non_blocking: bool = False, slice_fn=None, **kwargs, ): shuffle = False if is_torch_version(">=", "1.11.0"): from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe # We need to save the shuffling state of the DataPipe if isinstance(dataset, ShufflerIterDataPipe): shuffle = dataset._shuffle_enabled super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.split_batches = split_batches if shuffle: torch.utils.data.graph_settings.apply_shuffle_settings(dataset, shuffle=shuffle) self.gradient_state = GradientState() self.state = PartialState() self._drop_last = _drop_last self._non_blocking = _non_blocking self.skip_batches = skip_batches self.slice_fn = slice_tensors if slice_fn is None else slice_fn self.iteration = 0 def _fetch_batches(self, iterator): batches, batch = None, None # On process 0, we gather the batch to dispatch. if self.state.process_index == 0: try: if self.split_batches: # One batch of the main iterator is dispatched and split. self._update_state_dict() batch = next(iterator) else: # num_processes batches of the main iterator are concatenated then dispatched and split. # We add the batches one by one so we have the remainder available when drop_last=False. batches = [] for _ in range(self.state.num_processes): self._update_state_dict() batches.append(next(iterator)) try: batch = concatenate(batches, dim=0) except RuntimeError as e: raise RuntimeError( "You can't use batches of different size with `dispatch_batches=True` or when using an `IterableDataset`." "either pass `dispatch_batches=False` and have each process fetch its own batch " " or pass `split_batches=True`. By doing so, the main process will fetch a full batch and " "slice it into `num_processes` batches for each process." ) from e # In both cases, we need to get the structure of the batch that we will broadcast on other # processes to initialize the tensors with the right shape. # data_structure, stop_iteration batch_info = [get_data_structure(batch), False] except StopIteration: batch_info = [None, True] else: batch_info = [None, self._stop_iteration] # This is inplace, so after this instruction, every process has the same `batch_info` as process 0. broadcast_object_list(batch_info) self._stop_iteration = batch_info[1] if self._stop_iteration: # If drop_last is False and split_batches is False, we may have a remainder to take care of. if not self.split_batches and not self._drop_last: if self.state.process_index == 0 and len(batches) > 0: batch = concatenate(batches, dim=0) batch_info = [get_data_structure(batch), False] else: batch_info = [None, True] broadcast_object_list(batch_info) return batch, batch_info def __iter__(self): self.begin() self.set_epoch(self.iteration) main_iterator = None if is_torch_version(">=", "2.0.1"): # NOTE PyTorch DataLoader adds forward compatibilities for DataPipes, which broadcasts # shared seed to all dist processes. Thus, we need to create iterator for all dist processes. # But, we only iterate through the DataLoader on process 0. 
main_iterator = self.base_dataloader.__iter__() elif self.state.process_index == 0: main_iterator = self.base_dataloader.__iter__() stop_iteration = False self._stop_iteration = False first_batch = None next_batch, next_batch_info = self._fetch_batches(main_iterator) batch_index = 0 while not stop_iteration: batch, batch_info = next_batch, next_batch_info if self.state.process_index != 0: # Initialize tensors on other processes than process 0. batch = initialize_tensors(batch_info[0]) batch = send_to_device(batch, self.state.device, non_blocking=self._non_blocking) # Broadcast the batch before splitting it. batch = broadcast(batch, from_process=0) if not self._drop_last and first_batch is None: # We keep at least num processes elements of the first batch to be able to complete the last batch first_batch = self.slice_fn( batch, slice(0, self.state.num_processes), process_index=self.state.process_index, num_processes=self.state.num_processes, ) if batch is None: raise ValueError( f"Batch does not contain any data (`{batch}`). At the end of all iterable data available before expected stop iteration." ) observed_batch_size = find_batch_size(batch) batch_size = observed_batch_size // self.state.num_processes stop_iteration = self._stop_iteration if not stop_iteration: # We may still be at the end of the dataloader without knowing it yet: if there is nothing left in # the dataloader since the number of batches is a round multiple of the number of processes. next_batch, next_batch_info = self._fetch_batches(main_iterator) # next_batch_info[0] is None when there are no more batches, otherwise we still need to process them. if self._stop_iteration and next_batch_info[0] is None: stop_iteration = True if not self._drop_last and stop_iteration and observed_batch_size % self.state.num_processes != 0: # If the last batch is not complete, let's add the first batch to it. batch = concatenate([batch, first_batch], dim=0) # Batch size computation above is wrong, it's off by 1 so we fix it. batch_size += 1 data_slice = slice(self.state.process_index * batch_size, (self.state.process_index + 1) * batch_size) batch = self.slice_fn( batch, data_slice, process_index=self.state.process_index, num_processes=self.state.num_processes, ) if stop_iteration: self.end_of_dataloader = True self._update_state_dict() self.remainder = observed_batch_size if batch_index >= self.skip_batches: yield batch batch_index += 1 self.iteration += 1 self.end() def set_epoch(self, epoch: int): # In case it is manually passed in, the user can set it to what they like if self.iteration != epoch: self.iteration = epoch if hasattr(self.batch_sampler, "sampler") and hasattr(self.batch_sampler.sampler, "set_epoch"): self.batch_sampler.sampler.set_epoch(epoch) elif hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) def __len__(self): whole_length = len(self.base_dataloader) if self.split_batches: return whole_length elif self._drop_last: return whole_length // self.state.num_processes else: return math.ceil(whole_length / self.state.num_processes) def __reduce__(self): """ Define the `__reduce__` method to ensure a `DataLoaderDispatcher` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. 
""" args = super().__reduce__() return (DataLoaderDispatcher, *args[1:]) @property def total_batch_size(self): return ( self.dataset.batch_size if self.split_batches else (self.dataset.batch_size * self.dataset.num_processes) ) @property def total_dataset_length(self): return len(self.dataset) def get_sampler(self): return get_sampler(self) def set_sampler(self, sampler): sampler_is_batch_sampler = isinstance(self.sampler, BatchSampler) if sampler_is_batch_sampler: self.sampler.sampler = sampler else: self.batch_sampler.sampler = sampler if hasattr(self.batch_sampler, "batch_sampler"): self.batch_sampler.batch_sampler.sampler = sampler def get_sampler(dataloader): """ Get the sampler associated to the dataloader Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. Returns: `torch.utils.data.Sampler`: The sampler associated to the dataloader """ sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) if sampler_is_batch_sampler: sampler = getattr(dataloader.sampler, "sampler", None) else: sampler = getattr(dataloader.batch_sampler, "sampler", None) return sampler def prepare_data_loader( dataloader: DataLoader, device: Optional[torch.device] = None, num_processes: Optional[int] = None, process_index: Optional[int] = None, split_batches: bool = False, put_on_device: bool = False, rng_types: Optional[List[Union[str, RNGType]]] = None, dispatch_batches: Optional[bool] = None, even_batches: bool = True, slice_fn_for_dispatch: Optional[Callable] = None, use_seedable_sampler: bool = False, data_seed: Optional[int] = None, non_blocking: bool = False, use_stateful_dataloader: bool = False, ) -> DataLoader: """ Wraps a PyTorch `DataLoader` to generate batches for one of the processes only. Depending on the value of the `drop_last` attribute of the `dataloader` passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. device (`torch.device`): The target device for the returned `DataLoader`. num_processes (`int`, *optional*): The number of processes running concurrently. Will default to the value given by [`~state.PartialState`]. process_index (`int`, *optional*): The index of the current process. Will default to the value given by [`~state.PartialState`]. split_batches (`bool`, *optional*, defaults to `False`): Whether the resulting `DataLoader` should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing of `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of `batch_size`. put_on_device (`bool`, *optional*, defaults to `False`): Whether or not to put the batches on `device` (only works if the batches are nested list, tuples or dictionaries of tensors). rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration. 
Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. dispatch_batches (`bool`, *optional*): If set to `True`, the dataloader prepared is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to `True` when the underlying dataset is an `IterableDataset`, `False` otherwise. even_batches (`bool`, *optional*, defaults to `True`): If set to `True`, in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. slice_fn_for_dispatch (`Callable`, *optional*`): If passed, this function will be used to slice tensors across `num_processes`. Will default to [`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be ignored otherwise. use_seedable_sampler (`bool`, *optional*, defaults to `False`): Whether to use the [`~data_loader.SeedableRandomSampler`] instead of a `RandomSampler` for better reproducability. Comes at a cost of potentially different performances due to different shuffling algorithms but ensures results will be the *exact* same. Should be paired with `set_seed()` at every `self.set_epoch` data_seed (`int`, *optional*, defaults to `None`): The seed to use for the underlying generator when using `use_seedable_sampler`. If `None`, the generator will use the current default seed from torch. non_blocking (`bool`, *optional*, defaults to `False`): If set to `True`, dataloader will utilize non-blocking host-to-device transfers. If the dataloader has `pin_memory` set to `True`, this will help to increase overlap between data transfer and computations. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): "If set to true, the dataloader prepared by the Accelerator will be backed by " "[torchdata.StatefulDataLoader](https://github.com/pytorch/data/tree/main/torchdata/stateful_dataloader). This requires `torchdata` version 0.8.0 or higher that supports StatefulDataLoader to be installed." Returns: `torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches <Tip warning={true}> `BatchSampler`s with varying batch sizes are not enabled by default. 
To enable this behaviour, set `even_batches` equal to `False` </Tip> """ if dispatch_batches is None: if not put_on_device: dispatch_batches = False else: dispatch_batches = isinstance(dataloader.dataset, IterableDataset) if dispatch_batches and not put_on_device: raise ValueError("Using `dispatch_batches=True` requires `put_on_device=True`.") # Grab defaults from PartialState state = PartialState() if num_processes is None: num_processes = state.num_processes if process_index is None: process_index = state.process_index # Sanity check if split_batches: if dataloader.batch_size is not None: batch_size_for_check = dataloader.batch_size else: # For custom batch_sampler if hasattr(dataloader.batch_sampler, "batch_size"): batch_size_for_check = dataloader.batch_sampler.batch_size else: raise ValueError( "In order to use `split_batches==True` you must have a `batch_size` attribute either in the passed " "`dataloader` or `dataloader.batch_sampler` objects, and it has to return a natural number. " "Your `dataloader.batch_size` is None and `dataloader.batch_sampler` " f"(`{type(dataloader.batch_sampler)}`) does not have the `batch_size` attribute set." ) if batch_size_for_check > 1 and batch_size_for_check % num_processes != 0: raise ValueError( f"To use a `DataLoader` in `split_batches` mode, the batch size ({dataloader.batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) new_dataset = dataloader.dataset # Iterable dataset doesn't like batch_sampler, but data_loader creates a default one for it new_batch_sampler = dataloader.batch_sampler if not isinstance(new_dataset, IterableDataset) else None sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) synchronized_generator = None sampler = get_sampler(dataloader) if isinstance(sampler, RandomSampler) and use_seedable_sampler: # When iterating through the dataloader during distributed processes # we want to ensure that on each process we are iterating through the same # samples in the same order if a seed is set. This requires a tweak # to the `torch.utils.data.RandomSampler` class (if used). sampler = SeedableRandomSampler( data_source=sampler.data_source, replacement=sampler.replacement, num_samples=sampler._num_samples, generator=getattr(sampler, "generator", torch.Generator()), data_seed=data_seed, ) if isinstance(dataloader.sampler, RandomSampler) and state.distributed_type == DistributedType.XLA: # isinstance(dataloader.sampler, RandomSampler) indicates the original dataloader has `shuffle` enabled. 
generator = torch.Generator().manual_seed(42) dataloader.generator = generator dataloader.sampler.generator = generator # No change if no multiprocess if (num_processes != 1 or state.distributed_type == DistributedType.MEGATRON_LM) and not dispatch_batches: if isinstance(new_dataset, IterableDataset): if getattr(dataloader.dataset, "generator", None) is not None: synchronized_generator = dataloader.dataset.generator new_dataset = IterableDatasetShard( new_dataset, batch_size=dataloader.batch_size, drop_last=dataloader.drop_last, num_processes=num_processes, process_index=process_index, split_batches=split_batches, ) else: if not use_seedable_sampler and hasattr(sampler, "generator"): if sampler.generator is None: sampler.generator = torch.Generator() synchronized_generator = sampler.generator batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = BatchSamplerShard( batch_sampler, num_processes=num_processes, process_index=process_index, split_batches=split_batches, even_batches=even_batches, ) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] if rng_types is not None and synchronized_generator is None and "generator" in rng_types: rng_types.remove("generator") kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = ( dataloader.batch_size // num_processes if split_batches and not dispatch_batches else dataloader.batch_size ) if dispatch_batches: kwargs.pop("generator") dataloader = DataLoaderDispatcher( new_dataset, split_batches=split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, slice_fn=slice_fn_for_dispatch, use_stateful_dataloader=use_stateful_dataloader, **kwargs, ) elif sampler_is_batch_sampler: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, sampler=new_batch_sampler, batch_size=dataloader.batch_size, rng_types=rng_types, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, synchronized_generator=synchronized_generator, use_stateful_dataloader=use_stateful_dataloader, **kwargs, ) else: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, batch_sampler=new_batch_sampler, rng_types=rng_types, synchronized_generator=synchronized_generator, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, use_stateful_dataloader=use_stateful_dataloader, **kwargs, ) if isinstance(sampler, SeedableRandomSampler) and use_seedable_sampler: dataloader.set_sampler(sampler) if state.distributed_type == DistributedType.XLA: return MpDeviceLoaderWrapper(dataloader, device) return dataloader class SkipBatchSampler(BatchSampler): """ A `torch.utils.data.BatchSampler` that skips the first `n` batches of another `torch.utils.data.BatchSampler`. Should not be used if the original dataloader is a `StatefulDataLoader`. 
""" def __init__(self, batch_sampler, skip_batches=0): self.batch_sampler = batch_sampler self.skip_batches = skip_batches def __iter__(self): for index, samples in enumerate(self.batch_sampler): if index >= self.skip_batches: yield samples @property def total_length(self): return len(self.batch_sampler) def __len__(self): return len(self.batch_sampler) - self.skip_batches class SkipDataLoader(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of a PyTorch `DataLoader` that will skip the first batches. Generally it's preferable to use `skip_first_batches`/`torchdata.StatefulDataLoader` instead of this class. Args: dataset (`torch.utils.data.dataset.Dataset`): The dataset to use to build this dataloader. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning. kwargs: All other keyword arguments to pass to the regular `DataLoader` initialization. """ def __init__(self, dataset, skip_batches=0, use_stateful_dataloader=False, **kwargs): super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.skip_batches = skip_batches self.gradient_state = GradientState() def __iter__(self): self.begin() for index, batch in enumerate(self.base_dataloader.__iter__()): if index >= self.skip_batches: self._update_state_dict() yield batch self.end() def __len__(self): return len(self.base_dataloader) - self.skip_batches def __reduce__(self): """ Define the `__reduce__` method to ensure a `SkipDataLoader` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. """ args = super().__reduce__() return (SkipDataLoader, *args[1:]) def skip_first_batches(dataloader, num_batches=0): """ Creates a `torch.utils.data.DataLoader` that will efficiently skip the first `num_batches`. Should not be used if the original dataloader is a `StatefulDataLoader`. 
""" state = PartialState() if state.distributed_type == DistributedType.XLA: device = dataloader.device dataloader = dataloader.dataloader dataset = dataloader.dataset sampler_is_batch_sampler = False if isinstance(dataset, IterableDataset): new_batch_sampler = None else: sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = SkipBatchSampler(batch_sampler, skip_batches=num_batches) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = dataloader.batch_size if isinstance(dataloader, DataLoaderDispatcher): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches dataloader = DataLoaderDispatcher( dataset, split_batches=dataloader.split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader._drop_last, **kwargs, ) elif isinstance(dataloader, DataLoaderShard): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches elif sampler_is_batch_sampler: kwargs["sampler"] = new_batch_sampler kwargs["batch_size"] = dataloader.batch_size else: kwargs["batch_sampler"] = new_batch_sampler dataloader = DataLoaderShard( dataset, device=dataloader.device, rng_types=dataloader.rng_types, synchronized_generator=dataloader.synchronized_generator, **kwargs, ) else: if new_batch_sampler is None: # Need to manually skip batches in the dataloader dataloader = SkipDataLoader(dataset, skip_batches=num_batches, **kwargs) else: dataloader = DataLoader(dataset, batch_sampler=new_batch_sampler, **kwargs) if state.distributed_type == DistributedType.XLA: dataloader = MpDeviceLoaderWrapper(dataloader, device) return dataloader
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/__init__.py
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. __version__ = "1.2.0.dev0" from .accelerator import Accelerator from .big_modeling import ( cpu_offload, cpu_offload_with_hook, disk_offload, dispatch_model, init_empty_weights, init_on_device, load_checkpoint_and_dispatch, ) from .data_loader import skip_first_batches from .inference import prepare_pippy from .launchers import debug_launcher, notebook_launcher from .state import PartialState from .utils import ( AutocastKwargs, DataLoaderConfiguration, DDPCommunicationHookType, DeepSpeedPlugin, DistributedDataParallelKwargs, DistributedType, FullyShardedDataParallelPlugin, GradScalerKwargs, InitProcessGroupKwargs, ProfileKwargs, find_executable_batch_size, infer_auto_device_map, is_rich_available, load_checkpoint_in_model, synchronize_rng_states, ) if is_rich_available(): from .utils import rich
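For orientation, a short sketch of how the API re-exported above is typically used end to end; the model, data, and hyperparameters below are placeholders.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(32, 4), torch.randn(32, 2)),
    batch_size=8,
)

# `prepare` wraps each object with the classes defined in this package
# (DataLoaderShard/DataLoaderDispatcher, AcceleratedOptimizer, etc.).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # handles scaling / distributed sync
    optimizer.step()
```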
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/optimizer.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import torch from .state import AcceleratorState, GradientState from .utils import DistributedType, honor_type, is_lomo_available, is_torch_xla_available if is_torch_xla_available(): import torch_xla.core.xla_model as xm def move_to_device(state, device): if isinstance(state, (list, tuple)): return honor_type(state, (move_to_device(t, device) for t in state)) elif isinstance(state, dict): return type(state)({k: move_to_device(v, device) for k, v in state.items()}) elif isinstance(state, torch.Tensor): return state.to(device) return state class AcceleratedOptimizer(torch.optim.Optimizer): """ Internal wrapper around a torch optimizer. Conditionally will perform `step` and `zero_grad` if gradients should be synchronized when performing gradient accumulation. Args: optimizer (`torch.optim.optimizer.Optimizer`): The optimizer to wrap. device_placement (`bool`, *optional*, defaults to `True`): Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of `optimizer` on the right device. scaler (`torch.cuda.amp.grad_scaler.GradScaler`, *optional*): The scaler to use in the step function if training with mixed precision. 
""" def __init__(self, optimizer, device_placement=True, scaler=None): self.optimizer = optimizer self.scaler = scaler self.accelerator_state = AcceleratorState() self.gradient_state = GradientState() self.device_placement = device_placement self._is_overflow = False if self.scaler is not None: self._accelerate_step_called = False self._optimizer_original_step_method = self.optimizer.step self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step) # Handle device placement if device_placement: state_dict = self.optimizer.state_dict() if self.accelerator_state.distributed_type == DistributedType.XLA: xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device) else: state_dict = move_to_device(state_dict, self.accelerator_state.device) self.optimizer.load_state_dict(state_dict) @property def state(self): return self.optimizer.state @state.setter def state(self, state): self.optimizer.state = state @property def param_groups(self): return self.optimizer.param_groups @param_groups.setter def param_groups(self, param_groups): self.optimizer.param_groups = param_groups @property def defaults(self): return self.optimizer.defaults @defaults.setter def defaults(self, defaults): self.optimizer.defaults = defaults def add_param_group(self, param_group): self.optimizer.add_param_group(param_group) def load_state_dict(self, state_dict): if self.accelerator_state.distributed_type == DistributedType.XLA and self.device_placement: xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device) self.optimizer.load_state_dict(state_dict) def state_dict(self): return self.optimizer.state_dict() def zero_grad(self, set_to_none=None): if self.gradient_state.sync_gradients: accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters if accept_arg: if set_to_none is None: set_to_none = True self.optimizer.zero_grad(set_to_none=set_to_none) else: if set_to_none is not None: raise ValueError("`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.") self.optimizer.zero_grad() def train(self): """ Sets the optimizer to "train" mode. Useful for optimizers like `schedule_free` """ if hasattr(self.optimizer, "train") and callable(self.optimizer.train): self.optimizer.train() def eval(self): """ Sets the optimizer to "eval" mode. Useful for optimizers like `schedule_free` """ if hasattr(self.optimizer, "eval") and callable(self.optimizer.eval): self.optimizer.eval() def step(self, closure=None): if is_lomo_available(): from lomo_optim import AdaLomo, Lomo if ( not self.gradient_state.is_xla_gradients_synced and self.accelerator_state.distributed_type == DistributedType.XLA ): gradients = xm._fetch_gradients(self.optimizer) xm.all_reduce("sum", gradients, scale=1.0 / xm.xrt_world_size()) self.gradient_state.is_xla_gradients_synced = True if is_lomo_available(): # `step` should be a no-op for LOMO optimizers. if isinstance(self.optimizer, (Lomo, AdaLomo)): return if self.gradient_state.sync_gradients: if self.scaler is not None: self.optimizer.step = self._optimizer_patched_step_method self.scaler.step(self.optimizer, closure) self.scaler.update() if not self._accelerate_step_called: # If the optimizer step was skipped, gradient overflow was detected. 
self._is_overflow = True else: self._is_overflow = False # Reset the step method to the original one self.optimizer.step = self._optimizer_original_step_method # Reset the indicator self._accelerate_step_called = False else: self.optimizer.step(closure) if self.accelerator_state.distributed_type == DistributedType.XLA: self.gradient_state.is_xla_gradients_synced = False def _switch_parameters(self, parameters_map): for param_group in self.optimizer.param_groups: param_group["params"] = [parameters_map.get(p, p) for p in param_group["params"]] @property def step_was_skipped(self): """Whether or not the optimizer step was skipped.""" return self._is_overflow def __getstate__(self): _ignored_keys = [ "_accelerate_step_called", "_optimizer_original_step_method", "_optimizer_patched_step_method", ] return {k: v for k, v in self.__dict__.items() if k not in _ignored_keys} def __setstate__(self, state): self.__dict__.update(state) if self.scaler is not None: self._accelerate_step_called = False self._optimizer_original_step_method = self.optimizer.step self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step) def patch_optimizer_step(accelerated_optimizer: AcceleratedOptimizer, method): def patched_step(*args, **kwargs): accelerated_optimizer._accelerate_step_called = True return method(*args, **kwargs) return patched_step
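You normally never construct `AcceleratedOptimizer` yourself; `Accelerator.prepare` returns one. A small sketch (single process, no mixed precision; shapes are illustrative) showing the wrapper in use and the `step_was_skipped` guard:

```python
import torch
from accelerate import Accelerator
from accelerate.optimizer import AcceleratedOptimizer

accelerator = Accelerator()
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)
assert isinstance(optimizer, AcceleratedOptimizer)

loss = model(torch.randn(4, 8, device=accelerator.device)).sum()
accelerator.backward(loss)
optimizer.step()
if not optimizer.step_was_skipped:
    pass  # the step was taken, e.g. safe to advance an LR scheduler here
optimizer.zero_grad()
```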
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/scheduler.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # We ignore warnings about stepping the scheduler since we step it ourselves during gradient accumulation import warnings from .state import AcceleratorState, GradientState warnings.filterwarnings("ignore", category=UserWarning, module="torch.optim.lr_scheduler") class AcceleratedScheduler: """ A wrapper around a learning rate scheduler that will only step when the optimizer(s) have a training step. Useful to avoid making a scheduler step too fast when gradients went overflow and there was no training step (in mixed precision training) When performing gradient accumulation scheduler lengths should not be changed accordingly, Accelerate will always step the scheduler to account for it. Args: scheduler (`torch.optim.lr_scheduler._LRScheduler`): The scheduler to wrap. optimizers (one or a list of `torch.optim.Optimizer`): The optimizers used. step_with_optimizer (`bool`, *optional*, defaults to `True`): Whether or not the scheduler should be stepped at each optimizer step. split_batches (`bool`, *optional*, defaults to `False`): Whether or not the dataloaders split one batch across the different processes (so batch size is the same regardless of the number of processes) or create batches on each process (so batch size is the original batch size multiplied by the number of processes). """ def __init__(self, scheduler, optimizers, step_with_optimizer: bool = True, split_batches: bool = False): self.scheduler = scheduler self.optimizers = optimizers if isinstance(optimizers, (list, tuple)) else [optimizers] self.split_batches = split_batches self.step_with_optimizer = step_with_optimizer self.gradient_state = GradientState() def step(self, *args, **kwargs): if not self.step_with_optimizer: # No link between scheduler and optimizer -> just step self.scheduler.step(*args, **kwargs) return # Otherwise, first make sure the optimizer was stepped. 
if not self.gradient_state.sync_gradients: if self.gradient_state.adjust_scheduler: self.scheduler._step_count += 1 return for opt in self.optimizers: if opt.step_was_skipped: return if self.split_batches: # Split batches -> the training dataloader batch size is not changed so one step per training step self.scheduler.step(*args, **kwargs) else: # Otherwise the training dataloader batch size was multiplied by `num_processes`, so we need to do # num_processes steps per training step num_processes = AcceleratorState().num_processes for _ in range(num_processes): # Special case when using OneCycle and `drop_last` was not used if hasattr(self.scheduler, "total_steps"): if self.scheduler._step_count <= self.scheduler.total_steps: self.scheduler.step(*args, **kwargs) else: self.scheduler.step(*args, **kwargs) # Passthroughs def get_last_lr(self): return self.scheduler.get_last_lr() def state_dict(self): return self.scheduler.state_dict() def load_state_dict(self, state_dict): self.scheduler.load_state_dict(state_dict) def get_lr(self): return self.scheduler.get_lr() def print_lr(self, *args, **kwargs): return self.scheduler.print_lr(*args, **kwargs)
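As with the optimizer wrapper, `AcceleratedScheduler` is produced by `Accelerator.prepare` rather than instantiated directly. A brief sketch (single process; the `StepLR` schedule and shapes are illustrative):

```python
import torch
from accelerate import Accelerator
from accelerate.scheduler import AcceleratedScheduler

accelerator = Accelerator()
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
assert isinstance(scheduler, AcceleratedScheduler)

loss = model(torch.randn(4, 8, device=accelerator.device)).sum()
accelerator.backward(loss)
optimizer.step()
scheduler.step()  # only advances when the wrapped optimizer actually stepped
print(scheduler.get_last_lr())
```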
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/tracking.py
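Before the module source, a hedged usage sketch: the tracker classes defined below are normally reached through `Accelerator(log_with=...)` rather than instantiated directly. This assumes `tensorboard` is installed; the project name and logged values are illustrative.

```python
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="./runs")
accelerator.init_trackers("example_project", config={"lr": 1e-3, "epochs": 3})

for step in range(3):
    accelerator.log({"train_loss": 1.0 / (step + 1)}, step=step)

accelerator.end_training()  # calls `finish()` on every active tracker
```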
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Expectation: # Provide a project dir name, then each type of logger gets stored in project/{`logging_dir`} import json import os import time from functools import wraps from typing import Any, Dict, List, Optional, Union import yaml from .logging import get_logger from .state import PartialState from .utils import ( LoggerType, is_aim_available, is_clearml_available, is_comet_ml_available, is_dvclive_available, is_mlflow_available, is_tensorboard_available, is_wandb_available, listify, ) _available_trackers = [] if is_tensorboard_available(): _available_trackers.append(LoggerType.TENSORBOARD) if is_wandb_available(): _available_trackers.append(LoggerType.WANDB) if is_comet_ml_available(): _available_trackers.append(LoggerType.COMETML) if is_aim_available(): _available_trackers.append(LoggerType.AIM) if is_mlflow_available(): _available_trackers.append(LoggerType.MLFLOW) if is_clearml_available(): _available_trackers.append(LoggerType.CLEARML) if is_dvclive_available(): _available_trackers.append(LoggerType.DVCLIVE) logger = get_logger(__name__) def on_main_process(function): """ Decorator to selectively run the decorated function on the main process only based on the `main_process_only` attribute in a class. Checks at function execution rather than initialization time, not triggering the initialization of the `PartialState`. """ @wraps(function) def execute_on_main_process(self, *args, **kwargs): if getattr(self, "main_process_only", False): return PartialState().on_main_process(function)(self, *args, **kwargs) else: return function(self, *args, **kwargs) return execute_on_main_process def get_available_trackers(): "Returns a list of all supported available trackers in the system" return _available_trackers class GeneralTracker: """ A base Tracker class to be used for all logging integration implementations. Each function should take in `**kwargs` that will automatically be passed in from a base dictionary provided to [`Accelerator`]. Should implement `name`, `requires_logging_directory`, and `tracker` properties such that: `name` (`str`): String representation of the tracker class name, such as "TensorBoard" `requires_logging_directory` (`bool`): Whether the logger requires a directory to store their logs. 
`tracker` (`object`): Should return internal tracking mechanism used by a tracker class (such as the `run` for wandb) Implementations can also include a `main_process_only` (`bool`) attribute to toggle if relevent logging, init, and other functions should occur on the main process or across all processes (by default will use `True`) """ main_process_only = True def __init__(self, _blank=False): if not _blank: err = "" if not hasattr(self, "name"): err += "`name`" if not hasattr(self, "requires_logging_directory"): if len(err) > 0: err += ", " err += "`requires_logging_directory`" # as tracker is a @property that relies on post-init if "tracker" not in dir(self): if len(err) > 0: err += ", " err += "`tracker`" if len(err) > 0: raise NotImplementedError( f"The implementation for this tracker class is missing the following " f"required attributes. Please define them in the class definition: " f"{err}" ) def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Implementations should use the experiment configuration functionality of a tracking API. Args: values (Dictionary `str` to `bool`, `str`, `float` or `int`): Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`, `str`, `float`, `int`, or `None`. """ pass def log(self, values: dict, step: Optional[int], **kwargs): """ Logs `values` to the current run. Base `log` implementations of a tracking API should go in here, along with special behavior for the `step parameter. Args: values (Dictionary `str` to `str`, `float`, or `int`): Values to be logged as key-value pairs. The values need to have type `str`, `float`, or `int`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. """ pass def finish(self): """ Should run any finalizing functions within the tracking API. If the API should not have one, just don't overwrite that method. """ pass class TensorBoardTracker(GeneralTracker): """ A `Tracker` class that supports `tensorboard`. Should be initialized at the start of your script. Args: run_name (`str`): The name of the experiment run logging_dir (`str`, `os.PathLike`): Location for TensorBoard logs to be stored. **kwargs (additional keyword arguments, *optional*): Additional key word arguments passed along to the `tensorboard.SummaryWriter.__init__` method. """ name = "tensorboard" requires_logging_directory = True @on_main_process def __init__(self, run_name: str, logging_dir: Union[str, os.PathLike], **kwargs): try: from torch.utils import tensorboard except ModuleNotFoundError: import tensorboardX as tensorboard super().__init__() self.run_name = run_name self.logging_dir = os.path.join(logging_dir, run_name) self.writer = tensorboard.SummaryWriter(self.logging_dir, **kwargs) logger.debug(f"Initialized TensorBoard project {self.run_name} logging to {self.logging_dir}") logger.debug( "Make sure to log any initial configurations with `self.store_init_configuration` before training!" ) @property def tracker(self): return self.writer @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Stores the hyperparameters in a yaml file for future use. Args: values (Dictionary `str` to `bool`, `str`, `float` or `int`): Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`, `str`, `float`, `int`, or `None`. 
""" self.writer.add_hparams(values, metric_dict={}) self.writer.flush() project_run_name = time.time() dir_name = os.path.join(self.logging_dir, str(project_run_name)) os.makedirs(dir_name, exist_ok=True) with open(os.path.join(dir_name, "hparams.yml"), "w") as outfile: try: yaml.dump(values, outfile) except yaml.representer.RepresenterError: logger.error("Serialization to store hyperparameters failed") raise logger.debug("Stored initial configuration hyperparameters to TensorBoard and hparams yaml file") @on_main_process def log(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `values` to the current run. Args: values (Dictionary `str` to `str`, `float`, `int` or `dict` of `str` to `float`/`int`): Values to be logged as key-value pairs. The values need to have type `str`, `float`, `int` or `dict` of `str` to `float`/`int`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to either `SummaryWriter.add_scaler`, `SummaryWriter.add_text`, or `SummaryWriter.add_scalers` method based on the contents of `values`. """ values = listify(values) for k, v in values.items(): if isinstance(v, (int, float)): self.writer.add_scalar(k, v, global_step=step, **kwargs) elif isinstance(v, str): self.writer.add_text(k, v, global_step=step, **kwargs) elif isinstance(v, dict): self.writer.add_scalars(k, v, global_step=step, **kwargs) self.writer.flush() logger.debug("Successfully logged to TensorBoard") @on_main_process def log_images(self, values: dict, step: Optional[int], **kwargs): """ Logs `images` to the current run. Args: values (Dictionary `str` to `List` of `np.ndarray` or `PIL.Image`): Values to be logged as key-value pairs. The values need to have type `List` of `np.ndarray` or step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `SummaryWriter.add_image` method. """ for k, v in values.items(): self.writer.add_images(k, v, global_step=step, **kwargs) logger.debug("Successfully logged images to TensorBoard") @on_main_process def finish(self): """ Closes `TensorBoard` writer """ self.writer.close() logger.debug("TensorBoard writer closed") class WandBTracker(GeneralTracker): """ A `Tracker` class that supports `wandb`. Should be initialized at the start of your script. Args: run_name (`str`): The name of the experiment run. **kwargs (additional keyword arguments, *optional*): Additional key word arguments passed along to the `wandb.init` method. """ name = "wandb" requires_logging_directory = False main_process_only = False @on_main_process def __init__(self, run_name: str, **kwargs): super().__init__() self.run_name = run_name import wandb self.run = wandb.init(project=self.run_name, **kwargs) logger.debug(f"Initialized WandB project {self.run_name}") logger.debug( "Make sure to log any initial configurations with `self.store_init_configuration` before training!" ) @property def tracker(self): return self.run @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Args: values (Dictionary `str` to `bool`, `str`, `float` or `int`): Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`, `str`, `float`, `int`, or `None`. 
""" import wandb wandb.config.update(values, allow_val_change=True) logger.debug("Stored initial configuration hyperparameters to WandB") @on_main_process def log(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `values` to the current run. Args: values (Dictionary `str` to `str`, `float`, `int` or `dict` of `str` to `float`/`int`): Values to be logged as key-value pairs. The values need to have type `str`, `float`, `int` or `dict` of `str` to `float`/`int`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `wandb.log` method. """ self.run.log(values, step=step, **kwargs) logger.debug("Successfully logged to WandB") @on_main_process def log_images(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `images` to the current run. Args: values (Dictionary `str` to `List` of `np.ndarray` or `PIL.Image`): Values to be logged as key-value pairs. The values need to have type `List` of `np.ndarray` or step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `wandb.log` method. """ import wandb for k, v in values.items(): self.log({k: [wandb.Image(image) for image in v]}, step=step, **kwargs) logger.debug("Successfully logged images to WandB") @on_main_process def log_table( self, table_name: str, columns: List[str] = None, data: List[List[Any]] = None, dataframe: Any = None, step: Optional[int] = None, **kwargs, ): """ Log a Table containing any object type (text, image, audio, video, molecule, html, etc). Can be defined either with `columns` and `data` or with `dataframe`. Args: table_name (`str`): The name to give to the logged table on the wandb workspace columns (list of `str`, *optional*): The name of the columns on the table data (List of List of Any data type, *optional*): The data to be logged in the table dataframe (Any data type, *optional*): The data to be logged in the table step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. """ import wandb values = {table_name: wandb.Table(columns=columns, data=data, dataframe=dataframe)} self.log(values, step=step, **kwargs) @on_main_process def finish(self): """ Closes `wandb` writer """ self.run.finish() logger.debug("WandB run closed") class CometMLTracker(GeneralTracker): """ A `Tracker` class that supports `comet_ml`. Should be initialized at the start of your script. API keys must be stored in a Comet config file. Args: run_name (`str`): The name of the experiment run. **kwargs (additional keyword arguments, *optional*): Additional key word arguments passed along to the `Experiment.__init__` method. """ name = "comet_ml" requires_logging_directory = False @on_main_process def __init__(self, run_name: str, **kwargs): super().__init__() self.run_name = run_name from comet_ml import Experiment self.writer = Experiment(project_name=run_name, **kwargs) logger.debug(f"Initialized CometML project {self.run_name}") logger.debug( "Make sure to log any initial configurations with `self.store_init_configuration` before training!" ) @property def tracker(self): return self.writer @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Args: values (Dictionary `str` to `bool`, `str`, `float` or `int`): Values to be stored as initial hyperparameters as key-value pairs. 
The values need to have type `bool`, `str`, `float`, `int`, or `None`. """ self.writer.log_parameters(values) logger.debug("Stored initial configuration hyperparameters to CometML") @on_main_process def log(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `values` to the current run. Args: values (Dictionary `str` to `str`, `float`, `int` or `dict` of `str` to `float`/`int`): Values to be logged as key-value pairs. The values need to have type `str`, `float`, `int` or `dict` of `str` to `float`/`int`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to either `Experiment.log_metric`, `Experiment.log_other`, or `Experiment.log_metrics` method based on the contents of `values`. """ if step is not None: self.writer.set_step(step) for k, v in values.items(): if isinstance(v, (int, float)): self.writer.log_metric(k, v, step=step, **kwargs) elif isinstance(v, str): self.writer.log_other(k, v, **kwargs) elif isinstance(v, dict): self.writer.log_metrics(v, step=step, **kwargs) logger.debug("Successfully logged to CometML") @on_main_process def finish(self): """ Closes `comet-ml` writer """ self.writer.end() logger.debug("CometML run closed") class AimTracker(GeneralTracker): """ A `Tracker` class that supports `aim`. Should be initialized at the start of your script. Args: run_name (`str`): The name of the experiment run. **kwargs (additional keyword arguments, *optional*): Additional key word arguments passed along to the `Run.__init__` method. """ name = "aim" requires_logging_directory = True @on_main_process def __init__(self, run_name: str, logging_dir: Optional[Union[str, os.PathLike]] = ".", **kwargs): self.run_name = run_name from aim import Run self.writer = Run(repo=logging_dir, **kwargs) self.writer.name = self.run_name logger.debug(f"Initialized Aim project {self.run_name}") logger.debug( "Make sure to log any initial configurations with `self.store_init_configuration` before training!" ) @property def tracker(self): return self.writer @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Args: values (`dict`): Values to be stored as initial hyperparameters as key-value pairs. """ self.writer["hparams"] = values @on_main_process def log(self, values: dict, step: Optional[int], **kwargs): """ Logs `values` to the current run. Args: values (`dict`): Values to be logged as key-value pairs. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `Run.track` method. """ # Note: replace this with the dictionary support when merged for key, value in values.items(): self.writer.track(value, name=key, step=step, **kwargs) @on_main_process def log_images(self, values: dict, step: Optional[int] = None, kwargs: Optional[Dict[str, dict]] = None): """ Logs `images` to the current run. Args: values (`Dict[str, Union[np.ndarray, PIL.Image, Tuple[np.ndarray, str], Tuple[PIL.Image, str]]]`): Values to be logged as key-value pairs. The values need to have type `np.ndarray` or PIL.Image. If a tuple is provided, the first element should be the image and the second element should be the caption. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. 
kwargs (`Dict[str, dict]`): Additional key word arguments passed along to the `Run.Image` and `Run.track` method specified by the keys `aim_image` and `track`, respectively. """ import aim aim_image_kw = {} track_kw = {} if kwargs is not None: aim_image_kw = kwargs.get("aim_image", {}) track_kw = kwargs.get("track", {}) for key, value in values.items(): if isinstance(value, tuple): img, caption = value else: img, caption = value, "" aim_image = aim.Image(img, caption=caption, **aim_image_kw) self.writer.track(aim_image, name=key, step=step, **track_kw) @on_main_process def finish(self): """ Closes `aim` writer """ self.writer.close() class MLflowTracker(GeneralTracker): """ A `Tracker` class that supports `mlflow`. Should be initialized at the start of your script. Args: experiment_name (`str`, *optional*): Name of the experiment. Environment variable MLFLOW_EXPERIMENT_NAME has priority over this argument. logging_dir (`str` or `os.PathLike`, defaults to `"."`): Location for mlflow logs to be stored. run_id (`str`, *optional*): If specified, get the run with the specified UUID and log parameters and metrics under that run. The run’s end time is unset and its status is set to running, but the run’s other attributes (source_version, source_type, etc.) are not changed. Environment variable MLFLOW_RUN_ID has priority over this argument. tags (`Dict[str, str]`, *optional*): An optional `dict` of `str` keys and values, or a `str` dump from a `dict`, to set as tags on the run. If a run is being resumed, these tags are set on the resumed run. If a new run is being created, these tags are set on the new run. Environment variable MLFLOW_TAGS has priority over this argument. nested_run (`bool`, *optional*, defaults to `False`): Controls whether run is nested in parent run. True creates a nested run. Environment variable MLFLOW_NESTED_RUN has priority over this argument. run_name (`str`, *optional*): Name of new run (stored as a mlflow.runName tag). Used only when `run_id` is unspecified. description (`str`, *optional*): An optional string that populates the description box of the run. If a run is being resumed, the description is set on the resumed run. If a new run is being created, the description is set on the new run. """ name = "mlflow" requires_logging_directory = False @on_main_process def __init__( self, experiment_name: str = None, logging_dir: Optional[Union[str, os.PathLike]] = None, run_id: Optional[str] = None, tags: Optional[Union[Dict[str, Any], str]] = None, nested_run: Optional[bool] = False, run_name: Optional[str] = None, description: Optional[str] = None, ): experiment_name = os.environ.get("MLFLOW_EXPERIMENT_NAME", experiment_name) run_id = os.environ.get("MLFLOW_RUN_ID", run_id) tags = os.environ.get("MLFLOW_TAGS", tags) if isinstance(tags, str): tags = json.loads(tags) nested_run = os.environ.get("MLFLOW_NESTED_RUN", nested_run) import mlflow exps = mlflow.search_experiments(filter_string=f"name = '{experiment_name}'") if len(exps) > 0: if len(exps) > 1: logger.warning("Multiple experiments with the same name found. 
Using first one.") experiment_id = exps[0].experiment_id else: experiment_id = mlflow.create_experiment( name=experiment_name, artifact_location=logging_dir, tags=tags, ) self.active_run = mlflow.start_run( run_id=run_id, experiment_id=experiment_id, run_name=run_name, nested=nested_run, tags=tags, description=description, ) logger.debug(f"Initialized mlflow experiment {experiment_name}") logger.debug( "Make sure to log any initial configurations with `self.store_init_configuration` before training!" ) @property def tracker(self): return self.active_run @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Args: values (`dict`): Values to be stored as initial hyperparameters as key-value pairs. """ import mlflow for name, value in list(values.items()): # internally, all values are converted to str in MLflow if len(str(value)) > mlflow.utils.validation.MAX_PARAM_VAL_LENGTH: logger.warning_once( f'Accelerate is attempting to log a value of "{value}" for key "{name}" as a parameter. MLflow\'s' f" log_param() only accepts values no longer than {mlflow.utils.validation.MAX_PARAM_VAL_LENGTH} characters so we dropped this attribute." ) del values[name] values_list = list(values.items()) # MLflow cannot log more than 100 values in one go, so we have to split it for i in range(0, len(values_list), mlflow.utils.validation.MAX_PARAMS_TAGS_PER_BATCH): mlflow.log_params(dict(values_list[i : i + mlflow.utils.validation.MAX_PARAMS_TAGS_PER_BATCH])) logger.debug("Stored initial configuration hyperparameters to MLflow") @on_main_process def log(self, values: dict, step: Optional[int]): """ Logs `values` to the current run. Args: values (`dict`): Values to be logged as key-value pairs. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. """ metrics = {} for k, v in values.items(): if isinstance(v, (int, float)): metrics[k] = v else: logger.warning_once( f'MLflowTracker is attempting to log a value of "{v}" of type {type(v)} for key "{k}" as a metric. ' "MLflow's log_metric() only accepts float and int types so we dropped this attribute." ) import mlflow mlflow.log_metrics(metrics, step=step) logger.debug("Successfully logged to mlflow") @on_main_process def finish(self): """ End the active MLflow run. """ import mlflow mlflow.end_run() class ClearMLTracker(GeneralTracker): """ A `Tracker` class that supports `clearml`. Should be initialized at the start of your script. Args: run_name (`str`, *optional*): Name of the experiment. Environment variables `CLEARML_PROJECT` and `CLEARML_TASK` have priority over this argument. **kwargs (additional keyword arguments, *optional*): Kwargs passed along to the `Task.__init__` method. """ name = "clearml" requires_logging_directory = False @on_main_process def __init__(self, run_name: str = None, **kwargs): from clearml import Task current_task = Task.current_task() self._initialized_externally = False if current_task: self._initialized_externally = True self.task = current_task return kwargs.setdefault("project_name", os.environ.get("CLEARML_PROJECT", run_name)) kwargs.setdefault("task_name", os.environ.get("CLEARML_TASK", run_name)) self.task = Task.init(**kwargs) @property def tracker(self): return self.task @on_main_process def store_init_configuration(self, values: dict): """ Connect configuration dictionary to the Task object. Should be run at the beginning of your experiment. 
Args: values (`dict`): Values to be stored as initial hyperparameters as key-value pairs. """ return self.task.connect_configuration(values) @on_main_process def log(self, values: Dict[str, Union[int, float]], step: Optional[int] = None, **kwargs): """ Logs `values` dictionary to the current run. The dictionary keys must be strings. The dictionary values must be ints or floats Args: values (`Dict[str, Union[int, float]]`): Values to be logged as key-value pairs. If the key starts with 'eval_'/'test_'/'train_', the value will be reported under the 'eval'/'test'/'train' series and the respective prefix will be removed. Otherwise, the value will be reported under the 'train' series, and no prefix will be removed. step (`int`, *optional*): If specified, the values will be reported as scalars, with the iteration number equal to `step`. Otherwise they will be reported as single values. kwargs: Additional key word arguments passed along to the `clearml.Logger.report_single_value` or `clearml.Logger.report_scalar` methods. """ clearml_logger = self.task.get_logger() for k, v in values.items(): if not isinstance(v, (int, float)): logger.warning_once( "Accelerator is attempting to log a value of " f'"{v}" of type {type(v)} for key "{k}" as a scalar. ' "This invocation of ClearML logger's report_scalar() " "is incorrect so we dropped this attribute." ) continue if step is None: clearml_logger.report_single_value(name=k, value=v, **kwargs) continue title, series = ClearMLTracker._get_title_series(k) clearml_logger.report_scalar(title=title, series=series, value=v, iteration=step, **kwargs) @on_main_process def log_images(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `images` to the current run. Args: values (`Dict[str, List[Union[np.ndarray, PIL.Image]]`): Values to be logged as key-value pairs. The values need to have type `List` of `np.ndarray` or step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `clearml.Logger.report_image` method. """ clearml_logger = self.task.get_logger() for k, v in values.items(): title, series = ClearMLTracker._get_title_series(k) clearml_logger.report_image(title=title, series=series, iteration=step, image=v, **kwargs) @on_main_process def log_table( self, table_name: str, columns: List[str] = None, data: List[List[Any]] = None, dataframe: Any = None, step: Optional[int] = None, **kwargs, ): """ Log a Table to the task. Can be defined eitherwith `columns` and `data` or with `dataframe`. Args: table_name (`str`): The name of the table columns (list of `str`, *optional*): The name of the columns on the table data (List of List of Any data type, *optional*): The data to be logged in the table. If `columns` is not specified, then the first entry in data will be the name of the columns of the table dataframe (Any data type, *optional*): The data to be logged in the table step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to the `clearml.Logger.report_table` method. 
""" to_report = dataframe if dataframe is None: if data is None: raise ValueError( "`ClearMLTracker.log_table` requires that `data` to be supplied if `dataframe` is `None`" ) to_report = [columns] + data if columns else data title, series = ClearMLTracker._get_title_series(table_name) self.task.get_logger().report_table(title=title, series=series, table_plot=to_report, iteration=step, **kwargs) @on_main_process def finish(self): """ Close the ClearML task. If the task was initialized externally (e.g. by manually calling `Task.init`), this function is a noop """ if self.task and not self._initialized_externally: self.task.close() @staticmethod def _get_title_series(name): for prefix in ["eval", "test", "train"]: if name.startswith(prefix + "_"): return name[len(prefix) + 1 :], prefix return name, "train" class DVCLiveTracker(GeneralTracker): """ A `Tracker` class that supports `dvclive`. Should be initialized at the start of your script. Args: run_name (`str`, *optional*): Ignored for dvclive. See `kwargs` instead. kwargs: Additional key word arguments passed along to [`dvclive.Live()`](https://dvc.org/doc/dvclive/live). Example: ```py from accelerate import Accelerator accelerator = Accelerator(log_with="dvclive") accelerator.init_trackers(project_name="my_project", init_kwargs={"dvclive": {"dir": "my_directory"}}) ``` """ name = "dvclive" requires_logging_directory = False @on_main_process def __init__(self, run_name: Optional[str] = None, live: Optional[Any] = None, **kwargs): from dvclive import Live super().__init__() self.live = live if live is not None else Live(**kwargs) @property def tracker(self): return self.live @on_main_process def store_init_configuration(self, values: dict): """ Logs `values` as hyperparameters for the run. Should be run at the beginning of your experiment. Stores the hyperparameters in a yaml file for future use. Args: values (Dictionary `str` to `bool`, `str`, `float`, `int`, or a List or Dict of those types): Values to be stored as initial hyperparameters as key-value pairs. The values need to have type `bool`, `str`, `float`, or `int`. """ self.live.log_params(values) @on_main_process def log(self, values: dict, step: Optional[int] = None, **kwargs): """ Logs `values` to the current run. Args: values (Dictionary `str` to `str`, `float`, or `int`): Values to be logged as key-value pairs. The values need to have type `str`, `float`, or `int`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. kwargs: Additional key word arguments passed along to `dvclive.Live.log_metric()`. """ from dvclive.plots import Metric if step is not None: self.live.step = step for k, v in values.items(): if Metric.could_log(v): self.live.log_metric(k, v, **kwargs) else: logger.warning_once( "Accelerator attempted to log a value of " f'"{v}" of type {type(v)} for key "{k}" as a scalar. ' "This invocation of DVCLive's Live.log_metric() " "is incorrect so we dropped this attribute." ) self.live.next_step() @on_main_process def finish(self): """ Closes `dvclive.Live()`. 
""" self.live.end() LOGGER_TYPE_TO_CLASS = { "aim": AimTracker, "comet_ml": CometMLTracker, "mlflow": MLflowTracker, "tensorboard": TensorBoardTracker, "wandb": WandBTracker, "clearml": ClearMLTracker, "dvclive": DVCLiveTracker, } def filter_trackers( log_with: List[Union[str, LoggerType, GeneralTracker]], logging_dir: Union[str, os.PathLike] = None, ): """ Takes in a list of potential tracker types and checks that: - The tracker wanted is available in that environment - Filters out repeats of tracker types - If `all` is in `log_with`, will return all trackers in the environment - If a tracker requires a `logging_dir`, ensures that `logging_dir` is not `None` Args: log_with (list of `str`, [`~utils.LoggerType`] or [`~tracking.GeneralTracker`], *optional*): A list of loggers to be setup for experiment tracking. Should be one or several of: - `"all"` - `"tensorboard"` - `"wandb"` - `"comet_ml"` - `"mlflow"` - `"dvclive"` If `"all"` is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of `GeneralTracker` for custom trackers, and can be combined with `"all"`. logging_dir (`str`, `os.PathLike`, *optional*): A path to a directory for storing logs of locally-compatible loggers. """ loggers = [] if log_with is not None: if not isinstance(log_with, (list, tuple)): log_with = [log_with] if "all" in log_with or LoggerType.ALL in log_with: loggers = [o for o in log_with if issubclass(type(o), GeneralTracker)] + get_available_trackers() else: for log_type in log_with: if log_type not in LoggerType and not issubclass(type(log_type), GeneralTracker): raise ValueError(f"Unsupported logging capability: {log_type}. Choose between {LoggerType.list()}") if issubclass(type(log_type), GeneralTracker): loggers.append(log_type) else: log_type = LoggerType(log_type) if log_type not in loggers: if log_type in get_available_trackers(): tracker_init = LOGGER_TYPE_TO_CLASS[str(log_type)] if tracker_init.requires_logging_directory: if logging_dir is None: raise ValueError( f"Logging with `{log_type}` requires a `logging_dir` to be passed in." ) loggers.append(log_type) else: logger.debug(f"Tried adding logger {log_type}, but package is unavailable in the system.") return loggers
6
0
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/state.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import logging import os import threading import warnings from contextlib import contextmanager from functools import partial from typing import Any, Callable, Optional import torch from .utils import ( DistributedType, DynamoBackend, GradientAccumulationPlugin, check_cuda_p2p_ib_support, check_fp8_capability, deepspeed_required, get_ccl_version, get_cpu_distributed_information, get_int_from_env, is_ccl_available, is_datasets_available, is_deepspeed_available, is_fp8_available, is_ipex_available, is_mlu_available, is_mps_available, is_musa_available, is_npu_available, is_torch_xla_available, is_xpu_available, parse_choice_from_env, parse_flag_from_env, set_numa_affinity, ) from .utils.dataclasses import SageMakerDistributedType if is_torch_xla_available(): import torch_xla.core.xla_model as xm if is_mlu_available(check_device=False): import torch_mlu # noqa: F401 if is_musa_available(check_device=False): import torch_musa # noqa: F401 if is_npu_available(check_device=False): import torch_npu # noqa: F401 logger = logging.getLogger(__name__) def is_initialized() -> bool: """ Checks if the `AcceleratorState` has been initialized from `Accelerator`. Same as `AcceleratorState.initialized`, but works as a module method. """ return AcceleratorState._shared_state != {} # Lambda function that does nothing def do_nothing(*args, **kwargs): return None class ThreadLocalSharedDict(threading.local): """ Descriptor that holds a dict shared between instances of a class in the same thread. Note: Descriptors have slightly different semantics than just a dict field on its own. `PartialState(...)._shared_state` and `PartialState._shared_state` (instance vs class) give the same value: the underlying _storage dict. Likewise, `PartialState(...)._shared_state = {...}` overrides the _storage dict inside the descriptor as you would expect. However, `PartialState._shared_state = {}` actually replaces the descriptor object with a dict instead Thus, you should modify the _storage dict in-place (e.g. `_shared_state.clear()`). See Python documentation for an explanation of descriptors: https://docs.python.org/3/howto/descriptor.html This is required for using PyTorch/XLA with PJRT in multithreaded mode (required for TPU v2 and v3). See https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md#multithreading-on-tpu-v2v3 """ def __init__(self, thread_local: bool = False): self._storage = {} def __get__(self, obj, objtype=None): return self._storage def __set__(self, obj, value): self._storage = value # Prefer global shared dictionary, except when using TPU. SharedDict = dict if not is_torch_xla_available() else ThreadLocalSharedDict # Inspired by Alex Martelli's 'Borg'. class PartialState: """ Singleton class that has information about the current training environment and functions to help with process control. Designed to be used when only process control and device execution states are needed. 
Does *not* need to be initialized from `Accelerator`. Args: cpu (`bool`, *optional*): Whether or not to force the script to execute on CPU. Will ignore any accelerators available if set to `True` and force the execution on the CPU. kwargs (additional keyword arguments, *optional*): Additional keyword arguments to pass to the relevent `init_process_group` function. Valid `kwargs` can be found in [`utils.InitProcessGroupKwargs`]. See the example section for detailed usage. **Available attributes:** - **device** (`torch.device`) -- The device to use. - **distributed_type** ([`~accelerate.state.DistributedType`]) -- The type of distributed environment currently in use. - **local_process_index** (`int`) -- The index of the current process on the current server. - **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type of mixed precision being performed. (Choose from 'no','fp16','bf16 or 'fp8'). - **num_processes** (`int`) -- The number of processes currently launched in parallel. - **process_index** (`int`) -- The index of the current process. - **is_last_process** (`bool`) -- Whether or not the current process is the last one. - **is_main_process** (`bool`) -- Whether or not the current process is the main one. - **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node. - **debug** (`bool`) -- Whether or not the current script is being run in debug mode. Example: ```python from accelerate.utils import InitProcessGroupKwargs # To include `InitProcessGroupKwargs`, init then call `.to_kwargs()` kwargs = InitProcessGroupKwargs(...).to_kwargs() state = PartialState(**kwargs) ``` """ _shared_state = SharedDict() _known_attrs = [ "_cpu", "_mixed_precision", "_shared_state", "backend", "debug", "device", "distributed_type", "fork_launched", "local_process_index", "num_processes", "process_index", ] def __init__(self, cpu: bool = False, **kwargs): self.__dict__ = self._shared_state if not self.initialized: self._cpu = cpu self.backend = None env_device = os.environ.get("ACCELERATE_TORCH_DEVICE", None) self.device = torch.device(env_device) if env_device is not None else None self.debug = parse_flag_from_env("ACCELERATE_DEBUG_MODE") use_sagemaker_dp = kwargs.pop("_use_sagemaker_dp", None) dist_information = None if use_sagemaker_dp is None: use_sagemaker_dp = ( os.environ.get("ACCELERATE_USE_SAGEMAKER", "false") == "true" and os.environ.get("ACCELERATE_SAGEMAKER_DISTRIBUTED_TYPE") != SageMakerDistributedType.NO ) # Sets up self.backend + imports original_backend = kwargs.pop("backend", None) backend, distributed_type = self._prepare_backend(cpu, use_sagemaker_dp, original_backend) if original_backend is not None and backend != original_backend: raise ValueError(f"Your assigned backend {original_backend} is not avaliable, please use {backend}") self.backend = backend self.distributed_type = distributed_type use_deepspeed = False if not cpu and self.backend != "xla": if int(os.environ.get("LOCAL_RANK", -1)) != -1: # Deal with spawning deepspeed if os.environ.get("ACCELERATE_USE_DEEPSPEED", "false") == "true": if not is_deepspeed_available(): raise ImportError( "DeepSpeed is not available => install it using `pip3 install deepspeed` or build it from source" ) from deepspeed import comm as dist if not dist.is_initialized(): dist.init_distributed(dist_backend=self.backend, auto_mpi_discovery=False, **kwargs) # We need to flag to `use_deepspeed` to be True to override `distributed_type` later use_deepspeed 
= True # Deal with all other backends but XPU and CPU, that gets handled special later elif ( self.distributed_type not in (DistributedType.MULTI_XPU, DistributedType.MULTI_CPU) and not torch.distributed.is_initialized() ): torch.distributed.init_process_group(backend=self.backend, **kwargs) # XPU and CPU require special env configs to be set if self.distributed_type in (DistributedType.MULTI_XPU, DistributedType.MULTI_CPU): dist_information = get_cpu_distributed_information() os.environ["RANK"] = str(dist_information.rank) os.environ["WORLD_SIZE"] = str(dist_information.world_size) os.environ["LOCAL_RANK"] = str(dist_information.local_rank) os.environ["LOCAL_WORLD_SIZE"] = str(dist_information.local_world_size) if not os.environ.get("MASTER_PORT", None): os.environ["MASTER_PORT"] = "29500" if ( not os.environ.get("MASTER_ADDR", None) and dist_information.local_world_size != dist_information.world_size and self.backend != "mpi" ): raise ValueError( "Tried to launch on distributed with multinode, but `MASTER_ADDR` env was not set, " "please try exporting rank 0's hostname as `MASTER_ADDR`" ) kwargs["rank"] = dist_information.rank kwargs["world_size"] = dist_information.world_size if ( self.distributed_type == DistributedType.MULTI_CPU and get_int_from_env(["OMP_NUM_THREADS"], 0) == 0 ): import psutil num_cpu_threads_per_process = int( psutil.cpu_count(logical=False) / dist_information.local_world_size ) if num_cpu_threads_per_process == 0: num_cpu_threads_per_process = 1 torch.set_num_threads(num_cpu_threads_per_process) warnings.warn( f"OMP_NUM_THREADS/MKL_NUM_THREADS unset, we set it at {num_cpu_threads_per_process} to improve oob" " performance." ) if not torch.distributed.is_initialized(): torch.distributed.init_process_group(backend=self.backend, **kwargs) # No backend == no distributed training if self.backend is None: self.distributed_type = DistributedType.NO self.num_processes = 1 self.process_index = 0 self.local_process_index = 0 elif self.backend == "xla": # XLA needs device setting first for `set_replication` self.set_device() xm.set_replication(self.device, xm.get_xla_supported_devices()) self.num_processes = xm.xrt_world_size() self.process_index = xm.get_ordinal() if is_torch_xla_available(check_is_tpu=True): self.local_process_index = xm.get_local_ordinal() else: self.local_process_index = int(os.environ.get("LOCAL_RANK", -1)) else: self.num_processes = torch.distributed.get_world_size() self.process_index = torch.distributed.get_rank() self.local_process_index = ( int(os.environ.get("LOCAL_RANK", -1)) if dist_information is None else dist_information.local_rank ) self.set_device() # Now we can change to deepseed if use_deepspeed: self.distributed_type = DistributedType.DEEPSPEED # Set CPU affinity if enabled if parse_flag_from_env("ACCELERATE_CPU_AFFINITY", False): set_numa_affinity(self.local_process_index) # Check for old RTX 4000's that can't use P2P or IB and are on old drivers if self.device.type == "cuda" and not check_cuda_p2p_ib_support(): if "NCCL_P2P_DISABLE" not in os.environ or "NCCL_IB_DISABLE" not in os.environ: raise NotImplementedError( "Using RTX 4000 series doesn't support faster communication broadband via P2P or IB. " 'Please set `NCCL_P2P_DISABLE="1"` and `NCCL_IB_DISABLE="1" or use `accelerate launch` which ' "will do this automatically." 
) # Important: This should be the *only* code outside of `self.initialized!` self.fork_launched = parse_flag_from_env("FORK_LAUNCHED", 0) def __repr__(self) -> str: return ( f"Distributed environment: {self.distributed_type}{(' Backend: ' + self.backend) if self.backend else ''}\n" f"Num processes: {self.num_processes}\n" f"Process index: {self.process_index}\n" f"Local process index: {self.local_process_index}\n" f"Device: {self.device}\n" ) @staticmethod def _reset_state(): "Resets `_shared_state`, is used internally and should not be called" PartialState._shared_state.clear() @property def initialized(self) -> bool: "Returns whether the `PartialState` has been initialized" return self._shared_state != {} @property def use_distributed(self): """ Whether the Accelerator is configured for distributed training """ return self.distributed_type != DistributedType.NO and self.num_processes > 1 @property def is_last_process(self) -> bool: "Returns whether the current process is the last one" return self.process_index == self.num_processes - 1 @property def is_main_process(self) -> bool: "Returns whether the current process is the main process" return ( self.process_index == 0 if self.distributed_type != DistributedType.MEGATRON_LM else self.is_last_process ) @property def is_local_main_process(self) -> bool: "Returns whether the current process is the main process on the local node" return ( self.local_process_index == 0 if self.distributed_type != DistributedType.MEGATRON_LM else self.is_last_process ) def wait_for_everyone(self): """ Will stop the execution of the current process until every other process has reached that point (so this does nothing when the script is only run in one process). Useful to do before saving a model. Example: ```python >>> # Assuming two GPU processes >>> import time >>> from accelerate.state import PartialState >>> state = PartialState() >>> if state.is_main_process: ... time.sleep(2) >>> else: ... print("I'm waiting for the main process to finish its sleep...") >>> state.wait_for_everyone() >>> # Should print on every process at the same time >>> print("Everyone is here") ``` """ if self.distributed_type in ( DistributedType.MULTI_GPU, DistributedType.MULTI_MLU, DistributedType.MULTI_MUSA, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU, DistributedType.MULTI_CPU, DistributedType.DEEPSPEED, DistributedType.FSDP, ): torch.distributed.barrier() elif self.distributed_type == DistributedType.XLA: xm.rendezvous("accelerate.utils.wait_for_everyone") def _goes_first(self, is_main: bool): if not is_main: self.wait_for_everyone() yield if is_main: self.wait_for_everyone() @contextmanager def split_between_processes(self, inputs: list | tuple | dict | torch.Tensor, apply_padding: bool = False): """ Splits `input` between `self.num_processes` quickly and can be then used on that process. Useful when doing distributed inference, such as with different prompts. Note that when using a `dict`, all keys need to have the same number of elements. Args: inputs (`list`, `tuple`, `torch.Tensor`, `dict` of `list`/`tuple`/`torch.Tensor`, or `datasets.Dataset`): The input to split between processes. apply_padding (`bool`, `optional`, defaults to `False`): Whether to apply padding by repeating the last element of the input so that all processes have the same number of elements. Useful when trying to perform actions such as `gather()` on the outputs or passing in less inputs than there are processes. If so, just remember to drop the padded elements afterwards. 
Example: ```python # Assume there are two processes from accelerate import PartialState state = PartialState() with state.split_between_processes(["A", "B", "C"]) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C"] with state.split_between_processes(["A", "B", "C"], apply_padding=True) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C", "C"] ``` """ if self.num_processes == 1: yield inputs return length = len(inputs) # Nested dictionary of any types if isinstance(inputs, dict): length = len(inputs[list(inputs.keys())[0]]) if not all(len(v) == length for v in inputs.values()): raise ValueError("All values in the dictionary must have the same length") num_samples_per_process, num_extras = divmod(length, self.num_processes) start_index = self.process_index * num_samples_per_process + min(self.process_index, num_extras) end_index = start_index + num_samples_per_process + (1 if self.process_index < num_extras else 0) def _split_values(inputs, start_index, end_index): if isinstance(inputs, (list, tuple, torch.Tensor)): if start_index >= len(inputs): result = inputs[-1:] else: result = inputs[start_index:end_index] if apply_padding: if isinstance(result, torch.Tensor): from accelerate.utils import pad_across_processes, send_to_device # The tensor needs to be on the device before we can pad it tensorized_result = send_to_device(result, self.device) result = pad_across_processes(tensorized_result, pad_index=inputs[-1]) else: result += [result[-1]] * (num_samples_per_process + 1 - len(result)) return result elif isinstance(inputs, dict): for key in inputs.keys(): inputs[key] = _split_values(inputs[key], start_index, end_index) return inputs else: if is_datasets_available(): from datasets import Dataset if isinstance(inputs, Dataset): if start_index >= len(inputs): start_index = len(inputs) - 1 if end_index > len(inputs): end_index = len(inputs) result_idcs = list(range(start_index, end_index)) if apply_padding: result_idcs += [end_index - 1] * (num_samples_per_process + 1 - len(result_idcs)) return inputs.select(result_idcs) return inputs yield _split_values(inputs, start_index, end_index) @contextmanager def main_process_first(self): """ Lets the main process go first inside a with block. The other processes will enter the with block after the main process exits. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> with accelerator.main_process_first(): ... # This will be printed first by process 0 then in a seemingly ... # random order by the other processes. ... print(f"This will be printed by process {accelerator.process_index}") ``` """ yield from self._goes_first(self.is_main_process) @contextmanager def local_main_process_first(self): """ Lets the local main process go inside a with block. The other processes will enter the with block after the main process exits. Example: ```python >>> from accelerate.state import PartialState >>> state = PartialState() >>> with state.local_main_process_first(): ... # This will be printed first by local process 0 then in a seemingly ... # random order by the other processes. ... print(f"This will be printed by process {state.local_process_index}") ``` """ yield from self._goes_first(self.is_local_main_process) def on_main_process(self, function: Callable[..., Any] = None): """ Decorator that only runs the decorated function on the main process. Args: function (`Callable`): The function to decorate. 
Example: ```python >>> from accelerate.state import PartialState >>> state = PartialState() >>> @state.on_main_process ... def print_something(): ... print("This will be printed by process 0 only.") >>> print_something() "This will be printed by process 0 only" ``` """ if not self.initialized: raise ValueError("The `PartialState` or `Accelerator` must be initialized before calling this function.") if self.is_main_process or not self.use_distributed: return function return do_nothing def on_local_main_process(self, function: Callable[..., Any] = None): """ Decorator that only runs the decorated function on the local main process. Args: function (`Callable`): The function to decorate. Example: ```python # Assume we have 2 servers with 4 processes each. from accelerate.state import PartialState state = PartialState() @state.on_local_main_process def print_something(): print("This will be printed by process 0 only on each server.") print_something() # On server 1: "This will be printed by process 0 only" # On server 2: "This will be printed by process 0 only" ``` """ if self.is_local_main_process or not self.use_distributed: return function return do_nothing def on_last_process(self, function: Callable[..., Any]): """ Decorator that only runs the decorated function on the last process. Args: function (`Callable`): The function to decorate. Example: ```python # Assume we have 4 processes. from accelerate.state import PartialState state = PartialState() @state.on_last_process def print_something(): print(f"Printed on process {state.process_index}") print_something() "Printed on process 3" ``` """ if self.is_last_process or not self.use_distributed: return function return do_nothing def on_process(self, function: Callable[..., Any] = None, process_index: int = None): """ Decorator that only runs the decorated function on the process with the given index. Args: function (`Callable`, `optional`): The function to decorate. process_index (`int`, `optional`): The index of the process on which to run the function. Example: ```python # Assume we have 4 processes. from accelerate.state import PartialState state = PartialState() @state.on_process(process_index=2) def print_something(): print(f"Printed on process {state.process_index}") print_something() "Printed on process 2" ``` """ if function is None: return partial(self.on_process, process_index=process_index) if (self.process_index == process_index) or (not self.use_distributed): return function return do_nothing def on_local_process(self, function: Callable[..., Any] = None, local_process_index: int = None): """ Decorator that only runs the decorated function on the process with the given index on the current node. Args: function (`Callable`, *optional*): The function to decorate. local_process_index (`int`, *optional*): The index of the local process on which to run the function. Example: ```python # Assume we have 2 servers with 4 processes each. 
from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_local_process(local_process_index=2) def print_something(): print(f"Printed on process {accelerator.local_process_index}") print_something() # On server 1: "Printed on process 2" # On server 2: "Printed on process 2" ``` """ if function is None: return partial(self.on_local_process, local_process_index=local_process_index) if (self.local_process_index == local_process_index) or (not self.use_distributed): return function return do_nothing def print(self, *args, **kwargs): if self.is_local_main_process: print(*args, **kwargs) @property def default_device(self) -> torch.device: """ Returns the default device which is: - MPS if `torch.backends.mps.is_available()` and `torch.backends.mps.is_built()` both return True. - CUDA if `torch.cuda.is_available()` - MLU if `is_mlu_available()` - MUSA if `is_musa_available()` - NPU if `is_npu_available()` - CPU otherwise """ if is_mps_available(): os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" return torch.device("mps") elif is_mlu_available(): return torch.device("mlu") elif is_musa_available(): return torch.device("musa") # NPU should be checked before CUDA when using `transfer_to_npu` # See issue #3020: https://github.com/huggingface/accelerate/issues/3020 elif is_npu_available(): return torch.device("npu") elif torch.cuda.is_available(): return torch.device("cuda") elif is_xpu_available(): return torch.device("xpu") else: return torch.device("cpu") def _prepare_backend( self, cpu: bool = False, sagemaker_dp=False, backend: str = None ) -> tuple[str, DistributedType]: "Prepares any imports needed before initializing the distributed backend and sets `self.backend` properly" distributed_type = None if sagemaker_dp: import smdistributed.dataparallel.torch.torch_smddp # noqa backend = "smddp" distributed_type = DistributedType.MULTI_GPU elif is_torch_xla_available(): backend = "xla" distributed_type = DistributedType.XLA elif int(os.environ.get("LOCAL_RANK", -1)) != -1 and not cpu: if is_mlu_available(): backend = "cncl" distributed_type = DistributedType.MULTI_MLU elif is_musa_available(): backend = "mccl" distributed_type = DistributedType.MULTI_MUSA # NPU should be checked before CUDA when using `transfer_to_npu` # See issue #3020: https://github.com/huggingface/accelerate/issues/3020 elif is_npu_available(): backend = "hccl" distributed_type = DistributedType.MULTI_NPU elif torch.cuda.is_available(): if backend is None: backend = "nccl" distributed_type = DistributedType.MULTI_GPU if distributed_type is None and ( int(os.environ.get("LOCAL_RANK", -1)) != -1 or get_int_from_env(["PMI_SIZE", "OMPI_COMM_WORLD_SIZE", "MV2_COMM_WORLD_SIZE", "WORLD_SIZE"], 1) > 1 ): if not cpu and is_xpu_available(): distributed_type = DistributedType.MULTI_XPU else: distributed_type = DistributedType.MULTI_CPU if ( backend in (None, "ccl") and is_ccl_available() and (get_int_from_env(["CCL_WORKER_COUNT"], 0) > 0 or distributed_type == DistributedType.MULTI_XPU) ): if get_ccl_version() >= "1.12": import oneccl_bindings_for_pytorch # noqa: F401 else: import torch_ccl # noqa: F401 backend = "ccl" elif backend in (None, "mpi") and torch.distributed.is_mpi_available(): backend = "mpi" else: backend = "gloo" if distributed_type is None: distributed_type = DistributedType.NO return backend, distributed_type def set_device(self): """ Sets the device in `self.device` to the current distributed environment. 
""" if self.device is not None: return if self.distributed_type == DistributedType.NO: self.device = torch.device("cpu") if self._cpu else self.default_device return device = str(self.distributed_type).split(".")[-1].replace("MULTI_", "").lower() if device not in ("cpu", "gpu", "mlu", "musa", "npu", "xpu", "xla"): raise ValueError( f"Can't set device for {self.distributed_type} ({device}), verify we should be calling `_set_device()` for it!" ) if device == "xla": self.device = xm.xla_device() else: if device == "gpu": device = "cuda" device_module = getattr(torch, device) device_index = self.local_process_index % device_module.device_count() self.device = torch.device(device, device_index) device_module.set_device(self.device) def destroy_process_group(self, group=None): """ Destroys the process group. If one is not specified, the default process group is destroyed. """ if self.fork_launched and group is None: return # needed when using torch.distributed.init_process_group if torch.distributed.is_initialized(): torch.distributed.destroy_process_group(group) def __getattr__(self, name: str): # By this point we know that no attributes of `self` contain `name`, # so we just modify the error message if name in self._known_attrs: raise AttributeError( f"`PartialState` object has no attribute `{name}`. " "This happens if `PartialState._reset_state()` was called and " "an `Accelerator` or `PartialState` was not reinitialized." ) # Raise a typical AttributeError raise AttributeError(f"'PartialState' object has no attribute '{name}'") class AcceleratorState: """ Singleton class that has information about the current training environment. **Available attributes:** - **device** (`torch.device`) -- The device to use. - **distributed_type** ([`~accelerate.state.DistributedType`]) -- The type of distributed environment currently in use. - **initialized** (`bool`) -- Whether or not the `AcceleratorState` has been initialized from `Accelerator`. - **local_process_index** (`int`) -- The index of the current process on the current server. - **mixed_precision** (`str`) -- Whether or not the current script will use mixed precision, and if so the type of mixed precision being performed. (Choose from 'no','fp16','bf16 or 'fp8'). - **num_processes** (`int`) -- The number of processes currently launched in parallel. - **process_index** (`int`) -- The index of the current process. - **is_last_process** (`bool`) -- Whether or not the current process is the last one. - **is_main_process** (`bool`) -- Whether or not the current process is the main one. - **is_local_main_process** (`bool`) -- Whether or not the current process is the main one on the local node. - **debug** (`bool`) -- Whether or not the current script is being run in debug mode. 
""" _shared_state = SharedDict() _known_attrs = PartialState._known_attrs + [ "deepspeed_plugin", "use_ipex", "fsdp_plugin", "megatron_lm_plugin", "dynamo_plugin", ] def __init__( self, mixed_precision: str = None, cpu: bool = False, dynamo_plugin=None, deepspeed_plugin=None, fsdp_plugin=None, megatron_lm_plugin=None, _from_accelerator: bool = False, **kwargs, ): self.__dict__ = self._shared_state if parse_flag_from_env("ACCELERATE_USE_CPU"): cpu = True if PartialState._shared_state == {}: PartialState(cpu, **kwargs) self.__dict__.update(PartialState._shared_state) self._check_initialized(mixed_precision, cpu) if not self.initialized: self.deepspeed_plugins = None self.use_ipex = None mixed_precision = ( parse_choice_from_env("ACCELERATE_MIXED_PRECISION", "no") if mixed_precision is None else mixed_precision.lower() ) if mixed_precision == "fp8": if not is_fp8_available(): raise ValueError( "Using `fp8` precision requires `transformer_engine` or `MS-AMP` to be installed." ) elif not check_fp8_capability(): logger.warning( f"The current device has compute capability of {torch.cuda.get_device_capability()} which is " "insufficient for FP8 mixed precision training (requires a GPU Hopper/Ada Lovelace " "or higher, compute capability of 8.9 or higher). Will use FP16 instead." ) mixed_precision = "fp16" self.dynamo_plugin = dynamo_plugin if not _from_accelerator: raise ValueError( "Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` " "before using any functionality from the `accelerate` library." ) # deepspeed handles mixed_precision using deepspeed_config self._mixed_precision = "no" if self.distributed_type == DistributedType.DEEPSPEED else mixed_precision if self.distributed_type == DistributedType.XLA and is_torch_xla_available(check_is_tpu=True): if mixed_precision == "bf16": if os.environ.get("ACCELERATE_DOWNCAST_BF16"): os.environ["XLA_USE_BF16"] = str(0) os.environ["XLA_DOWNCAST_BF16"] = str(1) self.downcast_bfloat = True else: os.environ["XLA_USE_BF16"] = str(1) os.environ["XLA_DOWNCAST_BF16"] = str(0) self.downcast_bfloat = False elif os.environ.get("ACCELERATE_USE_DEEPSPEED", "false") == "true" and not cpu: self.deepspeed_plugins = deepspeed_plugin self.distributed_type = DistributedType.DEEPSPEED elif self.distributed_type in [ DistributedType.MULTI_GPU, DistributedType.MULTI_MLU, DistributedType.MULTI_MUSA, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU, ]: if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true" or fsdp_plugin is not None: self.distributed_type = DistributedType.FSDP if self._mixed_precision != "no": fsdp_plugin.set_mixed_precision(self._mixed_precision) self.fsdp_plugin = fsdp_plugin if os.environ.get("ACCELERATE_USE_MEGATRON_LM", "false") == "true" and self.distributed_type not in [ DistributedType.MULTI_XPU, ]: self.distributed_type = DistributedType.MEGATRON_LM megatron_lm_plugin.set_mixed_precision(self._mixed_precision) self.megatron_lm_plugin = megatron_lm_plugin elif self.distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_XPU, DistributedType.NO]: if is_ipex_available(): # check if user disables it explicitly self.use_ipex = parse_flag_from_env("ACCELERATE_USE_IPEX", default=True) else: self.use_ipex = False if ( self.dynamo_plugin.backend != DynamoBackend.NO and self._mixed_precision == "no" and self.device.type == "cuda" ): torch.backends.cuda.matmul.allow_tf32 = True if ( self.dynamo_plugin.backend != DynamoBackend.NO and self._mixed_precision == "no" and self.device.type == "musa" ): 
torch.backends.musa.matmul.allow_tf32 = True PartialState._shared_state["distributed_type"] = self.distributed_type @property def initialized(self) -> bool: return self._shared_state != PartialState._shared_state def __repr__(self): repr = PartialState().__repr__() + f"\nMixed precision type: {self.mixed_precision}\n" if self.distributed_type == DistributedType.DEEPSPEED: repr += f"ds_config: {self.deepspeed_plugin.deepspeed_config}\n" return repr def _check_initialized(self, mixed_precision=None, cpu=None): "Checks if a modification is trying to be made and the `AcceleratorState` has already been initialized" if self.initialized: err = "AcceleratorState has already been initialized and cannot be changed, restart your runtime completely and pass `{flag}` to `Accelerator()`." if cpu and self.device.type != "cpu": raise ValueError(err.format(flag="cpu=True")) if ( mixed_precision is not None and mixed_precision != self._mixed_precision and self.distributed_type != DistributedType.DEEPSPEED ): raise ValueError(err.format(flag=f"mixed_precision='{mixed_precision}'")) @property def mixed_precision(self): if self.distributed_type == DistributedType.DEEPSPEED: config = self.deepspeed_plugin.deepspeed_config if config.get("fp16", {}).get("enabled", False): mixed_precision = "fp16" elif config.get("bf16", {}).get("enabled", False): mixed_precision = "bf16" else: mixed_precision = "no" else: mixed_precision = self._mixed_precision return mixed_precision @staticmethod def _reset_state(reset_partial_state: bool = False): "Resets `_shared_state`, is used internally and should not be called" AcceleratorState._shared_state.clear() if reset_partial_state: PartialState._reset_state() def destroy_process_group(self, group=None): """ Destroys the process group. If one is not specified, the default process group is destroyed. If `self.fork_lauched` is `True` and `group` is `None`, nothing happens. """ PartialState().destroy_process_group(group) @property def fork_launched(self): return PartialState().fork_launched @property def use_distributed(self): """ Whether the Accelerator is configured for distributed training """ return PartialState().use_distributed @property def is_last_process(self) -> bool: "Returns whether the current process is the last one" return PartialState().is_last_process @property def is_main_process(self) -> bool: "Returns whether the current process is the main process" return PartialState().is_main_process @property def is_local_main_process(self) -> bool: "Returns whether the current process is the main process on the local node" return PartialState().is_local_main_process def wait_for_everyone(self): PartialState().wait_for_everyone() @contextmanager def split_between_processes(self, inputs: list | tuple | dict | torch.Tensor, apply_padding: bool = False): """ Splits `input` between `self.num_processes` quickly and can be then used on that process. Useful when doing distributed inference, such as with different prompts. Note that when using a `dict`, all keys need to have the same number of elements. Args: inputs (`list`, `tuple`, `torch.Tensor`, or `dict` of `list`/`tuple`/`torch.Tensor`): The input to split between processes. apply_padding (`bool`, `optional`, defaults to `False`): Whether to apply padding by repeating the last element of the input so that all processes have the same number of elements. Useful when trying to perform actions such as `gather()` on the outputs or passing in less inputs than there are processes. 
If so, just remember to drop the padded elements afterwards. Example: ```python # Assume there are two processes from accelerate.state import AcceleratorState state = AcceleratorState() with state.split_between_processes(["A", "B", "C"]) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C"] with state.split_between_processes(["A", "B", "C"], apply_padding=True) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C", "C"] ``` """ with PartialState().split_between_processes(inputs, apply_padding=apply_padding) as inputs: yield inputs @contextmanager def main_process_first(self): """ Lets the main process go first inside a with block. The other processes will enter the with block after the main process exits. """ with PartialState().main_process_first(): yield @contextmanager def local_main_process_first(self): """ Lets the local main process go inside a with block. The other processes will enter the with block after the main process exits. """ with PartialState().local_main_process_first(): yield @property def deepspeed_plugin(self): """ Returns the currently active DeepSpeedPlugin. If not using deepspeed, returns `None`. """ # To maintain original behavior, return None if not using deepspeed. if self.distributed_type != DistributedType.DEEPSPEED: return None from accelerate.utils.deepspeed import get_active_deepspeed_plugin return get_active_deepspeed_plugin(self) @deepspeed_required def get_deepspeed_plugin(self, name: str): """ Returns the DeepSpeedPlugin with the given plugin_key. """ return self.deepspeed_plugins[name] @deepspeed_required def select_deepspeed_plugin(self, name: str = None): """ Activates the DeepSpeedPlugin with the given `name`, and will disable all other plugins. """ for key, plugin in self.deepspeed_plugins.items(): if key != name: plugin._unselect() self.deepspeed_plugins[name].select(_from_accelerator_state=True) def print(self, *args, **kwargs): PartialState().print(*args, **kwargs) def __getattr__(self, name: str): # By this point we know that no attributes of `self` contain `name`, # so we just modify the error message if name in self._known_attrs: raise AttributeError( f"`AcceleratorState` object has no attribute `{name}`. " "This happens if `AcceleratorState._reset_state()` was called and " "an `Accelerator` or `PartialState` was not reinitialized." ) # Raise a typical AttributeError raise AttributeError(f"'AcceleratorState' object has no attribute '{name}'") class GradientState: """ Singleton class that has information related to gradient synchronization for gradient accumulation **Available attributes:** - **end_of_dataloader** (`bool`) -- Whether we have reached the end the current dataloader - **remainder** (`int`) -- The number of extra samples that were added from padding the dataloader - **sync_gradients** (`bool`) -- Whether the gradients should be synced across all devices - **active_dataloader** (`Optional[DataLoader]`) -- The dataloader that is currently being iterated over - **dataloader_references** (`List[Optional[DataLoader]]`) -- A list of references to the dataloaders that are being iterated over - **num_steps** (`int`) -- The number of steps to accumulate over - **adjust_scheduler** (`bool`) -- Whether the scheduler should be adjusted to account for the gradient accumulation - **sync_with_dataloader** (`bool`) -- Whether the gradients should be synced at the end of the dataloader iteration and the number of total steps reset - **is_xla_gradients_synced** (`bool`) -- Whether the XLA gradients have been synchronized. 
It is initialized as false. Once gradients have been reduced before the optimizer step, this flag is set to true. Subsequently, after each step, the flag is reset to false. FSDP will always synchronize the gradients, hence is_xla_gradients_synced is always true. """ _shared_state = SharedDict() def __init__(self, gradient_accumulation_plugin: Optional[GradientAccumulationPlugin] = None): self.__dict__ = self._shared_state if not self.initialized: self.sync_gradients = True self.active_dataloader = None self.dataloader_references = [None] self.plugin_kwargs = ( gradient_accumulation_plugin.to_kwargs() if gradient_accumulation_plugin is not None else {} ) self._is_xla_gradients_synced = False # Plugin args are different and can be updated if gradient_accumulation_plugin is not None and self.plugin_kwargs != gradient_accumulation_plugin.to_kwargs(): self.plugin_kwargs = gradient_accumulation_plugin.to_kwargs() @property def num_steps(self) -> int: "Returns the number of steps to accumulate over" return self.plugin_kwargs.get("num_steps", 1) @property def adjust_scheduler(self) -> bool: "Returns whether the scheduler should be adjusted" return self.plugin_kwargs.get("adjust_scheduler", False) @property def sync_with_dataloader(self) -> bool: "Returns whether the gradients should be synced at the end of the dataloader iteration and the number of total steps reset" return self.plugin_kwargs.get("sync_with_dataloader", True) @property def initialized(self) -> bool: "Returns whether the `GradientState` has been initialized" return GradientState._shared_state != {} @property def end_of_dataloader(self) -> bool: "Returns whether we have reached the end of the current dataloader" if not self.in_dataloader: return False return self.active_dataloader.end_of_dataloader @property def remainder(self) -> int: "Returns the number of extra samples that were added from padding the dataloader" if not self.in_dataloader: return -1 return self.active_dataloader.remainder def __repr__(self): return ( f"Sync Gradients: {self.sync_gradients}\n" f"At end of current dataloader: {self.end_of_dataloader}\n" f"Extra samples added: {self.remainder}\n" f"Gradient accumulation plugin: {self.plugin_kwargs}\n" ) @property def is_xla_gradients_synced(self): "Returns the value of is_xla_gradients_synced. FSDP will always synchronize the gradients, hence is_xla_gradients_synced is always true." if parse_flag_from_env("ACCELERATE_USE_FSDP", default=False): return True return self._is_xla_gradients_synced @is_xla_gradients_synced.setter def is_xla_gradients_synced(self, is_synced): "Set the _is_xla_gradients_synced attribute." self._is_xla_gradients_synced = is_synced def _set_sync_gradients(self, sync_gradients): "Private function that sets whether gradients should be synchronized. Users should not have to call this." self.sync_gradients = sync_gradients # Allow grad-sync to automatically work on TPUs if ( self.sync_gradients and is_torch_xla_available(check_is_tpu=True) and PartialState().distributed_type == DistributedType.XLA ): xm.mark_step() def _add_dataloader(self, dataloader): "Private function that adds a dataloader to `self.dataloader_references` and sets `in_dataloader` to `True`. Users should not have to call this." self.active_dataloader = dataloader self.dataloader_references.append(self.active_dataloader) def _remove_dataloader(self, dataloader): "Private function that removes a dataloader from `self.dataloader_references` and sets `in_dataloader` to `False` if there are no more dataloaders. 
Users should not have to call this." self.dataloader_references.remove(dataloader) self.active_dataloader = self.dataloader_references[-1] @property def in_dataloader(self) -> bool: "Returns whether the current process is in a dataloader" return self.active_dataloader is not None @staticmethod def _reset_state(): "Resets `_shared_state`, is used internally and should not be called" GradientState._shared_state.clear()
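To make the process-control helpers above concrete, here is a small sketch of a script built directly on `PartialState`. It assumes the script is started with `accelerate launch`; on a single process the same code simply runs everything locally.

```python
# Minimal sketch of using PartialState for process control and sharded inputs.
# Assumed launch: `accelerate launch demo.py` (also works as a plain single process).
from accelerate import PartialState

state = PartialState()
state.print(f"Running {state.num_processes} process(es), this one on {state.device}")

prompts = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]

# Each process receives its own slice; apply_padding=True repeats the last element
# so every process ends up with the same number of items.
with state.split_between_processes(prompts, apply_padding=True) as subset:
    print(f"process {state.process_index} got {subset}")

@state.on_main_process
def log_once():
    print("only the main process prints this")

log_once()
state.wait_for_everyone()  # barrier before any follow-up work
```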
7
0
hf_public_repos/accelerate/src
hf_public_repos/accelerate/src/accelerate/inference.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from types import MethodType from typing import Any, Dict, List, Optional, Tuple, Union from .state import PartialState from .utils import ( calculate_maximum_sizes, convert_bytes, copy_tensor_to_devices, ignorant_find_batch_size, infer_auto_device_map, is_pippy_available, pad_input_tensors, send_to_device, ) def generate_device_map(model, num_processes: int = 1, no_split_module_classes=None, max_memory: dict = None): """ Calculates the device map for `model` with an offset for PiPPy """ if num_processes == 1: return infer_auto_device_map(model, no_split_module_classes=no_split_module_classes, clean_result=False) if max_memory is None: model_size, shared = calculate_maximum_sizes(model) # Split into `n` chunks for each GPU memory = (model_size + shared[0]) / num_processes memory = convert_bytes(memory) value, ending = memory.split(" ") # Add a chunk to deal with potential extra shared memory instances memory = math.ceil(float(value)) * 1.1 memory = f"{memory} {ending}" max_memory = {i: memory for i in range(num_processes)} device_map = infer_auto_device_map( model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, clean_result=False, ) return device_map def find_pippy_batch_size(args, kwargs): found_batch_size = None if args is not None: for arg in args: found_batch_size = ignorant_find_batch_size(arg) if found_batch_size is not None: break if kwargs is not None and found_batch_size is None: for kwarg in kwargs.values(): found_batch_size = ignorant_find_batch_size(kwarg) if found_batch_size is not None: break return found_batch_size def build_pipeline(model, split_points, args, kwargs, num_chunks): """ Attaches the split points to the model based on `self.device_map` and generates a `PipelineStage`. Requires passing in needed `args` and `kwargs` as the model needs on the CPU. Users can pass in custom `num_chunks` as an optional hyper-parameter. 
By default will use `AcceleratorState.num_processes` """ # Note: We import here to reduce import time from general modules, and isolate outside dependencies from torch.distributed.pipelining import ScheduleGPipe, SplitPoint, pipeline # We need to annotate the split points in the model for PiPPy state = PartialState() split_spec = {split_point: SplitPoint.BEGINNING for split_point in split_points} pipe = pipeline( model, mb_args=args, mb_kwargs=kwargs, split_spec=split_spec, ) stage = pipe.build_stage(state.local_process_index, device=state.device) schedule = ScheduleGPipe(stage, num_chunks) return schedule def pippy_forward(forward, num_chunks, gather_output, *args, **kwargs): state = PartialState() output = None if state.num_processes == 1: output = forward(*args, **kwargs) elif state.is_local_main_process: found_batch_size = find_pippy_batch_size(args, kwargs) if found_batch_size is None: raise ValueError("Could not find batch size from args or kwargs") else: if found_batch_size != num_chunks: args = pad_input_tensors(args, found_batch_size, num_chunks) kwargs = pad_input_tensors(kwargs, found_batch_size, num_chunks) forward(*args, **kwargs) elif state.is_last_process: output = forward() else: forward() if gather_output: # Each node will get a copy of the full output which is only on the last GPU output = copy_tensor_to_devices(output) return output def prepare_pippy( model, split_points: Optional[Union[str, List[str]]] = "auto", no_split_module_classes: Optional[List[str]] = None, example_args: Optional[Tuple[Any]] = (), example_kwargs: Optional[Dict[str, Any]] = None, num_chunks: Optional[int] = None, gather_output: Optional[bool] = False, ): """ Wraps `model` for pipeline parallel inference. Args: model (`torch.nn.Module`): A model we want to split for pipeline-parallel inference split_points (`str` or `List[str]`, defaults to 'auto'): How to generate the split points and chunk the model across each GPU. 'auto' will find the best balanced split given any model. Should be a list of layer names in the model to split by otherwise. no_split_module_classes (`List[str]`): A list of class names for layers we don't want to be split. example_args (tuple of model inputs): The expected inputs for the model that uses order-based inputs for a *single process*. Recommended to use this method if possible. example_kwargs (dict of model inputs) The expected inputs for the model that uses dictionary-based inputs for a *single process*. This is a *highly* limiting structure that requires the same keys be present at *all* inference calls. Not recommended unless the prior condition is true for all cases. num_chunks (`int`, defaults to the number of available GPUs): The number of different stages the Pipeline will have. By default it will assign one chunk per GPU, but this can be tuned and played with. In general one should have num_chunks >= num_gpus. gather_output (`bool`, defaults to `False`): If `True`, the output from the last GPU (which holds the true outputs) is sent across to all GPUs. 
""" if not is_pippy_available(): raise ImportError("Using `torch.distributed.pipelining` requires PyTorch 2.4.0 or later.") state = PartialState() example_args = send_to_device(example_args, "cpu") example_kwargs = send_to_device(example_kwargs, "cpu") if num_chunks is None: num_chunks = state.num_processes if split_points == "auto": device_map = generate_device_map(model, num_chunks, no_split_module_classes=no_split_module_classes) split_points = [] for i in range(1, num_chunks): split_points.append(next(k for k, v in device_map.items() if v == i)) model.hf_split_points = split_points stage = build_pipeline(model, split_points, example_args, example_kwargs, num_chunks) model._original_forward = model.forward model._original_call = model.__call__ model.pippy_stage = stage model.hf_split_points = split_points def forward(*args, **kwargs): return pippy_forward(stage.step, num_chunks, gather_output, *args, **kwargs) # To act like a decorator so that it can be popped when doing `extract_model_from_parallel` # Note: creates an infinite recursion loop with `generate` model_forward = MethodType(forward, model) forward.__wrapped__ = model_forward model.forward = forward return model
8
0
hf_public_repos/accelerate/src/accelerate
hf_public_repos/accelerate/src/accelerate/commands/utils.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse class _StoreAction(argparse.Action): """ Custom action that allows for `-` or `_` to be passed in for an argument. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) new_option_strings = [] for option_string in self.option_strings: new_option_strings.append(option_string) if "_" in option_string[2:]: # Add `-` version to the option string new_option_strings.append(option_string.replace("_", "-")) self.option_strings = new_option_strings def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, values) class _StoreConstAction(_StoreAction): """ Same as `argparse._StoreConstAction` but uses the custom `_StoreAction`. """ def __init__(self, option_strings, dest, const, default=None, required=False, help=None): super().__init__( option_strings=option_strings, dest=dest, nargs=0, const=const, default=default, required=required, help=help, ) def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, self.const) class _StoreTrueAction(_StoreConstAction): """ Same as `argparse._StoreTrueAction` but uses the custom `_StoreConstAction`. """ def __init__( self, option_strings, dest, default=None, required=False, help=None, ): super().__init__( option_strings=option_strings, dest=dest, const=True, default=default, required=required, help=help ) class CustomArgumentGroup(argparse._ArgumentGroup): """ Custom argument group that allows for the use of `-` or `_` in arguments passed and overrides the help for each when applicable. """ def _add_action(self, action): args = vars(action) if isinstance(action, argparse._StoreTrueAction): action = _StoreTrueAction( args["option_strings"], args["dest"], args["default"], args["required"], args["help"] ) elif isinstance(action, argparse._StoreConstAction): action = _StoreConstAction( args["option_strings"], args["dest"], args["const"], args["default"], args["required"], args["help"], ) elif isinstance(action, argparse._StoreAction): action = _StoreAction(**args) action = super()._add_action(action) return action class CustomArgumentParser(argparse.ArgumentParser): """ Custom argument parser that allows for the use of `-` or `_` in arguments passed and overrides the help for each when applicable. """ def add_argument(self, *args, **kwargs): if "action" in kwargs: # Translate action -> class if kwargs["action"] == "store_true": kwargs["action"] = _StoreTrueAction else: kwargs["action"] = _StoreAction super().add_argument(*args, **kwargs) def add_argument_group(self, *args, **kwargs): group = CustomArgumentGroup(self, *args, **kwargs) self._action_groups.append(group) return group
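A small, hypothetical demonstration of what the classes above provide: an option registered with underscores also accepts the dashed spelling. The parser and argument names below are made up for illustration and are not the actual accelerate CLI definitions; `add_help=False` just keeps the example minimal.

```python
# Hypothetical example, not the real accelerate CLI wiring: options registered
# with underscores gain a dashed alias via _StoreAction / _StoreTrueAction.
from accelerate.commands.utils import CustomArgumentParser

parser = CustomArgumentParser(add_help=False)
parser.add_argument("--num_processes", type=int, default=1)
parser.add_argument("--use_cpu", action="store_true")

# Both spellings resolve to the same destinations.
args = parser.parse_args(["--num-processes", "2", "--use-cpu"])
print(args.num_processes, args.use_cpu)  # 2 True
```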
9
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/pool_tests.rs
use candle_core::{test_device, test_utils, Device, IndexOp, Result, Tensor}; // https://github.com/huggingface/candle/issues/364 fn avg_pool2d(dev: &Device) -> Result<()> { let data: Vec<f32> = vec![ 1., 1., 1., 1., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 4, 4), dev)?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[0.5f32, 1.], [1., 1.]]); let data: Vec<f32> = vec![ 1., 2., 1., 3., 0., 0., 1., 1., 1., 1., 1., 1., 5., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 2, 8), dev)?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[5. / 4., 6. / 4., 6. / 4., 1.]]); Ok(()) } fn max_pool2d(dev: &Device) -> Result<()> { let data: Vec<f32> = vec![ 1., 2., 1., 3., 0., 0., 1., 1., 1., 1., 1., 1., 5., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 4, 4), dev)?; let pool = t.max_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[2f32, 3.], [5., 1.]]); let t = t.reshape((1, 1, 2, 8))?; let pool = t.max_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[2.0, 3.0, 5.0, 1.0]]); Ok(()) } /* This test corresponds to the following PyTorch script. import torch torch.manual_seed(4242) t = torch.randn((1, 2, 4, 4)) print(t.flatten()) res = torch.nn.functional.avg_pool2d(t, 2) print(res) */ fn avg_pool2d_pytorch(dev: &Device) -> Result<()> { if dev.is_metal() { return Ok(()); } let t = Tensor::new( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, 0.0823, 0.3526, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.6047, 0.3770, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, ], dev, )? .reshape((1, 2, 4, 4))?; let pool = t.avg_pool2d(2)?.squeeze(0)?; assert_eq!( test_utils::to_vec3_round(&pool, 4)?, [ [[-1.1926, -0.0395], [0.2688, 0.1871]], [[0.1835, -0.1606], [0.6249, 0.3217]] ] ); let pool = t.avg_pool2d(3)?.squeeze(0)?; assert_eq!( test_utils::to_vec3_round(&pool, 4)?, [[[0.085]], [[0.0078]]] ); let t = t.reshape((1, 1, 4, 8))?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!( test_utils::to_vec2_round(&pool, 4)?, [ [0.7745, 0.0276, -1.6983, 0.12], [0.3542, 0.1625, 0.4542, -0.0014] ] ); Ok(()) } fn upsample_nearest2d(dev: &Device) -> Result<()> { let t = Tensor::arange(0f32, 6f32, dev)?.reshape((1, 1, 2, 3))?; let upsampled = t.upsample_nearest2d(4, 6)?.i(0)?.i(0)?; assert_eq!( t.i(0)?.i(0)?.to_vec2::<f32>()?, [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]] ); assert_eq!( upsampled.to_vec2::<f32>()?, [ [0.0, 0.0, 1.0, 1.0, 2.0, 2.0], [0.0, 0.0, 1.0, 1.0, 2.0, 2.0], [3.0, 3.0, 4.0, 4.0, 5.0, 5.0], [3.0, 3.0, 4.0, 4.0, 5.0, 5.0] ] ); Ok(()) } test_device!(avg_pool2d, avg_pool2d_cpu, avg_pool2d_gpu, avg_pool2d_metal); test_device!( avg_pool2d_pytorch, avg_pool2d_pytorch_cpu, avg_pool2d_pytorch_gpu, avg_pool2d_pytorch_metal ); test_device!(max_pool2d, max_pool2d_cpu, max_pool2d_gpu, max_pool2d_metal); test_device!( upsample_nearest2d, upsample_nearest2d_cpu, upsample_nearest2d_gpu, upsample_nearest2d_metal );
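The hard-coded expectations in the pooling tests above can be reproduced with PyTorch. The snippet below is an illustrative cross-check for the first `avg_pool2d` and `max_pool2d` cases; it is not part of the candle test suite.

```python
# Cross-check (illustrative, not part of candle) of the pooling expectations above.
import torch
import torch.nn.functional as F

avg_in = torch.tensor(
    [1., 1., 1., 1., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
).reshape(1, 1, 4, 4)
print(F.avg_pool2d(avg_in, 2).squeeze())  # [[0.5, 1.], [1., 1.]]

max_in = torch.tensor(
    [1., 2., 1., 3., 0., 0., 1., 1., 1., 1., 1., 1., 5., 1., 1., 1.]
).reshape(1, 1, 4, 4)
print(F.max_pool2d(max_in, 2).squeeze())  # [[2., 3.], [5., 1.]]
```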
0
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/tensor_tests.rs
use candle_core::{test_device, test_utils, DType, Device, IndexOp, Result, Tensor, D}; fn zeros(device: &Device) -> Result<()> { let tensor = Tensor::zeros((5, 2), DType::F32, device)?; let (dim1, dim2) = tensor.dims2()?; assert_eq!(dim1, 5); assert_eq!(dim2, 2); Ok(()) } fn ones(device: &Device) -> Result<()> { assert_eq!( Tensor::ones((2, 3), DType::U8, device)?.to_vec2::<u8>()?, [[1, 1, 1], [1, 1, 1]], ); assert_eq!( Tensor::ones((2, 3), DType::U32, device)?.to_vec2::<u32>()?, [[1, 1, 1], [1, 1, 1]], ); assert_eq!( Tensor::ones((2, 3), DType::I64, device)?.to_vec2::<i64>()?, [[1, 1, 1], [1, 1, 1]], ); assert_eq!( Tensor::ones((2, 3), DType::F32, device)?.to_vec2::<f32>()?, [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], ); assert_eq!( Tensor::ones((2, 3), DType::F64, device)?.to_vec2::<f64>()?, [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], ); assert_eq!( Tensor::ones((2, 3), DType::F16, device)?.to_vec2::<half::f16>()?, [ [ half::f16::from_f32(1.0), half::f16::from_f32(1.0), half::f16::from_f32(1.0) ], [ half::f16::from_f32(1.0), half::f16::from_f32(1.0), half::f16::from_f32(1.0) ] ], ); assert_eq!( Tensor::ones((2, 3), DType::BF16, device)?.to_vec2::<half::bf16>()?, [ [ half::bf16::from_f32(1.0), half::bf16::from_f32(1.0), half::bf16::from_f32(1.0) ], [ half::bf16::from_f32(1.0), half::bf16::from_f32(1.0), half::bf16::from_f32(1.0) ] ], ); Ok(()) } fn full(device: &Device) -> Result<()> { assert_eq!( Tensor::full(42u32, (2, 3), device)?.to_vec2::<u32>()?, [[42, 42, 42], [42, 42, 42]], ); Ok(()) } fn arange(device: &Device) -> Result<()> { assert_eq!( Tensor::arange(0u8, 5u8, device)?.to_vec1::<u8>()?, [0, 1, 2, 3, 4], ); assert_eq!( Tensor::arange_step(0u8, 5u8, 2, device)?.to_vec1::<u8>()?, [0, 2, 4], ); assert_eq!( Tensor::arange_step(0u8, 5u8, 3, device)?.to_vec1::<u8>()?, [0, 3], ); assert_eq!( Tensor::arange_step(5i64, 0i64, -1, device)?.to_vec1::<i64>()?, [5, 4, 3, 2, 1], ); Ok(()) } fn add_mul(device: &Device) -> Result<()> { let tensor = Tensor::new(&[3f32, 1., 4.], device)?; let dim1 = tensor.dims1()?; assert_eq!(dim1, 3); let content: Vec<f32> = tensor.to_vec1()?; assert_eq!(content, [3., 1., 4.]); let tensor = Tensor::add(&tensor, &tensor)?; let content: Vec<f32> = tensor.to_vec1()?; assert_eq!(content, [6., 2., 8.]); let tensor = Tensor::mul(&tensor, &tensor)?; let content: Vec<f32> = tensor.to_vec1()?; assert_eq!(content, [36., 4., 64.]); Ok(()) } fn tensor_2d(device: &Device) -> Result<()> { let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]]; let tensor = Tensor::new(data, device)?; let dims = tensor.dims2()?; assert_eq!(dims, (2, 5)); let content: Vec<Vec<f32>> = tensor.to_vec2()?; assert_eq!(content, data); Ok(()) } fn clamp(device: &Device) -> Result<()> { let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]]; let tensor = Tensor::new(data, device)?; let tensor = tensor.clamp(1.5, 6.2)?; assert_eq!( tensor.to_vec2::<f32>()?, [[3.0, 1.5, 4.0, 1.5, 5.0], [2.0, 1.5, 6.2, 6.2, 2.0]], ); Ok(()) } fn asort(device: &Device) -> Result<()> { let data = &[[3f32, 1., 4., 1.1, 5.], [2.1, 1., 7., 8., 2.]]; let tensor = Tensor::new(data, device)?; let indexes = tensor.arg_sort_last_dim(true)?; assert_eq!( indexes.to_vec2::<u32>()?, [[1, 3, 0, 2, 4], [1, 4, 0, 2, 3]], ); let indexes = tensor.arg_sort_last_dim(false)?; assert_eq!( indexes.to_vec2::<u32>()?, [[4, 2, 0, 3, 1], [3, 2, 0, 4, 1]], ); let (sorted, indexes) = tensor.sort_last_dim(true)?; assert_eq!( indexes.to_vec2::<u32>()?, [[1, 3, 0, 2, 4], [1, 4, 0, 2, 3]], ); assert_eq!( sorted.to_vec2::<f32>()?, [[1.0, 1.1, 3.0, 4.0, 5.0], 
[1.0, 2.0, 2.1, 7.0, 8.0]] ); let (sorted, indexes) = tensor.sort_last_dim(false)?; assert_eq!( indexes.to_vec2::<u32>()?, [[4, 2, 0, 3, 1], [3, 2, 0, 4, 1]], ); assert_eq!( sorted.to_vec2::<f32>()?, [[5.0, 4.0, 3.0, 1.1, 1.0], [8.0, 7.0, 2.1, 2.0, 1.0]] ); Ok(()) } fn unary_op(device: &Device) -> Result<()> { let data = &[[-3f32, 1., 4., -0.1, 0.5], [2.7, -1.8, -0.28, 1.8, 2.8]]; let tensor = Tensor::new(data, device)?; assert_eq!( test_utils::to_vec2_round(&tensor.gelu()?, 4)?, [ [-0.0036, 0.8412, 3.9999, -0.046, 0.3457], [2.6911, -0.0647, -0.1091, 1.7353, 2.7933] ] ); let t_f16 = tensor.to_dtype(DType::F16)?.gelu()?.to_dtype(DType::F32)?; let max_diff = (tensor.gelu()? - t_f16)?.flatten_all()?.max(0)?; assert!(max_diff.to_vec0::<f32>()? < 5e-3); assert_eq!( test_utils::to_vec2_round(&tensor.gelu_erf()?, 4)?, [ [-0.004, 0.8413, 3.9999, -0.046, 0.3457], [2.6906, -0.0647, -0.1091, 1.7353, 2.7928] ] ); assert_eq!( test_utils::to_vec2_round(&tensor.erf()?, 4)?, [ [-1.0, 0.8427, 1.0, -0.1125, 0.5205], [0.9999, -0.9891, -0.3079, 0.9891, 0.9999] ] ); assert_eq!( test_utils::to_vec2_round(&tensor.silu()?, 4)?, [ [-0.1423, 0.7311, 3.9281, -0.0475, 0.3112], [2.53, -0.2553, -0.1205, 1.5447, 2.6395] ] ); assert_eq!( test_utils::to_vec2_round(&tensor.ceil()?, 4)?, [[-3.0, 1.0, 4.0, -0.0, 1.0], [3.0, -1.0, -0.0, 2.0, 3.0]] ); assert_eq!( test_utils::to_vec2_round(&tensor.floor()?, 4)?, [[-3.0, 1.0, 4.0, -1.0, 0.0], [2.0, -2.0, -1.0, 1.0, 2.0]] ); assert_eq!( test_utils::to_vec2_round(&tensor.round()?, 4)?, [[-3.0, 1.0, 4.0, -0.0, 1.0], [3.0, -2.0, -0.0, 2.0, 3.0]] ); let tensor = Tensor::new(&[2997.9246, 314.15926f32], device)?; assert_eq!( test_utils::to_vec1_round(&tensor.round_to(2)?, 4)?, [2997.92, 314.16] ); assert_eq!( test_utils::to_vec1_round(&tensor.round_to(-2)?, 4)?, [3000.0, 300.] ); let tensor = Tensor::new( &[-1.01f32, -0.9, -0.1, 0.0, -0.0, 0.1, 0.9, 1.0, 1.1], device, )?; assert_eq!( tensor.sign()?.to_vec1::<f32>()?, [-1., -1., -1., 0., 0., 1., 1., 1., 1.] ); let tensor = Tensor::new(&[-1.0f32, 0., -2., 3.], device)?; let y = tensor.elu(2.)?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [-1.2642, 0.0000, -1.7293, 3.0000] ); // This test failed on metal prior to the following PR: // https://github.com/huggingface/candle/pull/2490 let y = tensor.reshape((2, 2))?.t()?.elu(2.)?.flatten_all()?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [-1.2642, -1.7293, 0.0000, 3.0000] ); Ok(()) } fn binary_op(device: &Device) -> Result<()> { let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]]; let tensor1 = Tensor::new(data, device)?; let data2 = &[[5f32, 5., 5., 5., 5.], [2., 1., 7., 8., 2.]]; let tensor2 = Tensor::new(data2, device)?; let tensor = (&tensor1 + (&tensor1 * &tensor1)? 
/ (&tensor1 + &tensor2))?; let dims = tensor.dims2()?; assert_eq!(dims, (2, 5)); let content: Vec<Vec<f32>> = tensor.to_vec2()?; assert_eq!(content[0], [4.125, 1.1666666, 5.7777777, 1.1666666, 7.5]); assert_eq!(content[1], [3.0, 1.5, 10.5, 12.0, 3.0]); #[allow(clippy::eq_op)] let tensor = (&tensor - &tensor)?; let content: Vec<Vec<f32>> = tensor.to_vec2()?; assert_eq!(content[0], [0., 0., 0., 0., 0.]); let min = tensor1.minimum(&(&tensor2 * 0.5)?)?; let max = tensor1.maximum(&(&tensor2 * 0.5)?)?; assert_eq!( min.to_vec2::<f32>()?, [[2.5, 1.0, 2.5, 1.0, 2.5], [1.0, 0.5, 3.5, 4.0, 1.0]], ); assert_eq!( max.to_vec2::<f32>()?, [[3.0, 2.5, 4.0, 2.5, 5.0], [2.0, 1.0, 7.0, 8.0, 2.0]] ); Ok(()) } fn transpose(device: &Device) -> Result<()> { let data = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]]; let tensor = Tensor::new(data, device)?.t()?; let dims = tensor.dims2()?; assert_eq!(dims, (5, 2)); assert_eq!( tensor.to_vec2::<f32>()?, &[[3f32, 2.], [1., 1.], [4., 7.], [1., 8.], [5., 2.]] ); assert_eq!(tensor.t()?.to_vec2::<f32>()?, data); assert_eq!(tensor.contiguous()?.t()?.to_vec2::<f32>()?, data); assert_eq!(((tensor + 1.)?.t()? - 1.)?.to_vec2::<f32>()?, data); Ok(()) } fn var(device: &Device) -> Result<()> { // Values taken from https://pytorch.org/docs/stable/generated/torch.var.html let data = &[ [0.2035f32, 1.2959, 1.8101, -0.4644], [1.5027, -0.3270, 0.5905, 0.6538], [-1.5745, 1.3330, -0.5596, -0.6548], [0.1264, -0.5080, 1.6420, 0.1992], ]; let tensor = Tensor::new(data, device)?; assert_eq!( test_utils::to_vec2_round(&tensor.var_keepdim(1)?, 4)?, &[[1.0631], [0.559], [1.4893], [0.8258]] ); Ok(()) } fn sum(device: &Device) -> Result<()> { let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.sum_keepdim(2)?.to_vec3::<u32>()?, &[[[8], [15]], [[10], [18]]] ); assert_eq!( tensor.sum_keepdim(0)?.to_vec3::<u32>()?, &[[[5, 2, 11], [9, 7, 17]]], ); assert_eq!(tensor.sum_keepdim((0, 2, 1))?.to_vec3::<u32>()?, &[[[51]]],); assert_eq!( tensor.t()?.sum_keepdim(1)?.t()?.to_vec3::<u32>()?, &[[[8], [15]], [[10], [18]]] ); assert_eq!( tensor.sum_keepdim((2, 1))?.to_vec3::<u32>()?, &[[[8 + 15]], [[10 + 18]]] ); let data: Vec<u32> = (0..4000u32).collect(); let tensor = Tensor::new(data.as_slice(), device)?; assert_eq!(tensor.sum_keepdim(0)?.to_vec1::<u32>()?, &[7998000]); let tensor = tensor.reshape((2000, 2))?; assert_eq!(tensor.sum_keepdim((0, 1))?.to_vec2::<u32>()?, &[[7998000]]); assert_eq!( tensor.sum_keepdim(0)?.sum_keepdim(1)?.to_vec2::<u32>()?, &[[7998000]] ); assert_eq!( tensor.sum_keepdim(1)?.sum_keepdim(0)?.to_vec2::<u32>()?, &[[7998000]] ); assert_eq!( tensor.sum_keepdim(0)?.to_vec2::<u32>()?, &[[3998000, 4000000]] ); // Make the tensor non contiguous. let tensor = tensor.t()?.contiguous()?.t()?; assert_eq!(tensor.sum_keepdim((0, 1))?.to_vec2::<u32>()?, &[[7998000]]); assert_eq!( tensor.sum_keepdim(0)?.sum_keepdim(1)?.to_vec2::<u32>()?, &[[7998000]] ); assert_eq!( tensor.sum_keepdim(1)?.sum_keepdim(0)?.to_vec2::<u32>()?, &[[7998000]] ); assert_eq!( tensor.sum_keepdim(0)?.to_vec2::<u32>()?, &[[3998000, 4000000]] ); let t1 = tensor.reshape((200, 5, 4))?; let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?; for tensor in [t1, t2] { assert_eq!( tensor.sum_keepdim((0, 1, 2))?.to_vec3::<u32>()?, &[[[7998000]]] ); assert_eq!( tensor .sum_keepdim(0)? .sum_keepdim(2)? .sum_keepdim(1)? .to_vec3::<u32>()?, &[[[7998000]]] ); assert_eq!( tensor .sum_keepdim(0)? .sum_keepdim((1, 2))? 
.to_vec3::<u32>()?, &[[[7998000]]] ); assert_eq!( tensor .sum_keepdim(1)? .sum_keepdim((0, 2))? .to_vec3::<u32>()?, &[[[7998000]]] ); assert_eq!( tensor.sum_keepdim(0)?.to_vec3::<u32>()?, &[[ [398000, 398200, 398400, 398600], [398800, 399000, 399200, 399400], [399600, 399800, 400000, 400200], [400400, 400600, 400800, 401000], [401200, 401400, 401600, 401800] ]] ); } Ok(()) } fn min(device: &Device) -> Result<()> { let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.min_keepdim(2)?.to_vec3::<u32>()?, &[[[1], [1]], [[1], [2]]] ); assert_eq!( tensor.min_keepdim(0)?.to_vec3::<u32>()?, &[[[2, 1, 4], [1, 2, 8]]], ); let data: Vec<u32> = (200..4000u32).collect(); let tensor = Tensor::new(data.as_slice(), device)?; assert_eq!(tensor.min_keepdim(0)?.to_vec1::<u32>()?, &[200]); let tensor = tensor.reshape((1900, 2))?; assert_eq!( tensor.min_keepdim(0)?.min_keepdim(1)?.to_vec2::<u32>()?, &[[200]] ); assert_eq!( tensor.min_keepdim(1)?.min_keepdim(0)?.to_vec2::<u32>()?, &[[200]] ); assert_eq!(tensor.min_keepdim(0)?.to_vec2::<u32>()?, &[[200, 201]]); // Make the tensor non contiguous. let tensor = tensor.t()?.contiguous()?.t()?; assert_eq!( tensor.min_keepdim(0)?.min_keepdim(1)?.to_vec2::<u32>()?, &[[200]] ); assert_eq!( tensor.min_keepdim(1)?.min_keepdim(0)?.to_vec2::<u32>()?, &[[200]] ); assert_eq!(tensor.min_keepdim(0)?.to_vec2::<u32>()?, &[[200, 201]]); let t1 = tensor.reshape((190, 5, 4))?; let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?; for tensor in [t1, t2] { assert_eq!( tensor .min_keepdim(0)? .min_keepdim(2)? .min_keepdim(1)? .to_vec3::<u32>()?, &[[[200]]] ); assert_eq!( tensor.min_keepdim(0)?.to_vec3::<u32>()?, &[[ [200, 201, 202, 203], [204, 205, 206, 207], [208, 209, 210, 211], [212, 213, 214, 215], [216, 217, 218, 219] ]] ); } Ok(()) } fn max(device: &Device) -> Result<()> { let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.max_keepdim(2)?.to_vec3::<u32>()?, &[[[4], [9]], [[7], [8]]] ); assert_eq!( tensor.max_keepdim(0)?.to_vec3::<u32>()?, &[[[3, 1, 7], [8, 5, 9]]], ); let data: Vec<u32> = (200..4000u32).collect(); let tensor = Tensor::new(data.as_slice(), device)?; assert_eq!(tensor.max_keepdim(0)?.to_vec1::<u32>()?, &[3999]); let tensor = tensor.reshape((1900, 2))?; assert_eq!( tensor.max_keepdim(0)?.max_keepdim(1)?.to_vec2::<u32>()?, &[[3999]] ); assert_eq!( tensor.max_keepdim(1)?.max_keepdim(0)?.to_vec2::<u32>()?, &[[3999]] ); assert_eq!(tensor.max_keepdim(0)?.to_vec2::<u32>()?, &[[3998, 3999]]); // Make the tensor non contiguous. let tensor = tensor.t()?.contiguous()?.t()?; assert_eq!( tensor.max_keepdim(0)?.max_keepdim(1)?.to_vec2::<u32>()?, &[[3999]] ); assert_eq!( tensor.max_keepdim(1)?.max_keepdim(0)?.to_vec2::<u32>()?, &[[3999]] ); assert_eq!(tensor.max_keepdim(0)?.to_vec2::<u32>()?, &[[3998, 3999]]); let t1 = tensor.reshape((190, 5, 4))?; let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?; for tensor in [t1, t2] { assert_eq!( tensor .max_keepdim(0)? .max_keepdim(2)? .max_keepdim(1)? 
.to_vec3::<u32>()?, &[[[3999]]] ); assert_eq!( tensor.max_keepdim(0)?.to_vec3::<u32>()?, &[[ [3980, 3981, 3982, 3983], [3984, 3985, 3986, 3987], [3988, 3989, 3990, 3991], [3992, 3993, 3994, 3995], [3996, 3997, 3998, 3999] ]] ); } Ok(()) } fn argmin(device: &Device) -> Result<()> { let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.argmin_keepdim(2)?.to_vec3::<u32>()?, &[[[1], [0]], [[1], [1]]] ); assert_eq!( tensor.argmin_keepdim(0)?.to_vec3::<u32>()?, &[[[1, 0, 0], [0, 1, 1]]], ); let data: Vec<u32> = (200..4000u32).collect(); let tensor = Tensor::new(data.as_slice(), device)?; assert_eq!(tensor.argmin_keepdim(0)?.to_vec1::<u32>()?, &[0]); let tensor = tensor.reshape((1900, 2))?; assert_eq!( tensor .argmin_keepdim(0)? .argmin_keepdim(1)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!( tensor .argmin_keepdim(1)? .argmin_keepdim(0)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!(tensor.argmin_keepdim(0)?.to_vec2::<u32>()?, &[[0, 0]]); // Make the tensor non contiguous. let tensor = tensor.t()?.contiguous()?.t()?; assert_eq!( tensor .argmin_keepdim(0)? .argmin_keepdim(1)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!( tensor .argmin_keepdim(1)? .argmin_keepdim(0)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!(tensor.argmin_keepdim(0)?.to_vec2::<u32>()?, &[[0, 0]]); let t1 = tensor.reshape((190, 5, 4))?; let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?; for tensor in [t1, t2] { assert_eq!( tensor .argmin_keepdim(0)? .argmin_keepdim(2)? .argmin_keepdim(1)? .to_vec3::<u32>()?, &[[[0]]] ); assert_eq!( tensor.argmin_keepdim(0)?.to_vec3::<u32>()?, &[[ [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], ]] ); } Ok(()) } fn argmax(device: &Device) -> Result<()> { let data = &[[[3u32, 1, 4], [1, 5, 9]], [[2, 1, 7], [8, 2, 8]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.argmax_keepdim(2)?.to_vec3::<u32>()?, &[[[2], [2]], [[2], [0]]] ); assert_eq!( tensor.argmax_keepdim(0)?.to_vec3::<u32>()?, &[[[0, 0, 1], [1, 0, 0]]], ); let data: Vec<u32> = (200..4000u32).collect(); let tensor = Tensor::new(data.as_slice(), device)?; assert_eq!(tensor.argmax_keepdim(0)?.to_vec1::<u32>()?, &[3799]); let tensor = tensor.reshape((1900, 2))?; assert_eq!( tensor .argmax_keepdim(0)? .argmax_keepdim(1)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!( tensor .argmax_keepdim(1)? .argmax_keepdim(0)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!(tensor.argmax_keepdim(0)?.to_vec2::<u32>()?, &[[1899, 1899]]); // Make the tensor non contiguous. let tensor = tensor.t()?.contiguous()?.t()?; assert_eq!( tensor .argmax_keepdim(0)? .argmax_keepdim(1)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!( tensor .argmax_keepdim(1)? .argmax_keepdim(0)? .to_vec2::<u32>()?, &[[0]] ); assert_eq!(tensor.argmax_keepdim(0)?.to_vec2::<u32>()?, &[[1899, 1899]]); let t1 = tensor.reshape((190, 5, 4))?; let t2 = t1.transpose(0, 2)?.contiguous()?.transpose(0, 2)?; for tensor in [t1, t2] { assert_eq!( tensor .argmax_keepdim(0)? .argmax_keepdim(2)? .argmax_keepdim(1)? 
.to_vec3::<u32>()?, &[[[0]]] ); assert_eq!( tensor.argmax_keepdim(0)?.to_vec3::<u32>()?, &[[ [189, 189, 189, 189], [189, 189, 189, 189], [189, 189, 189, 189], [189, 189, 189, 189], [189, 189, 189, 189], ]] ); } Ok(()) } fn narrow(device: &Device) -> Result<()> { let data = &[[[3f32, 1., 4.], [1., 5., 9.]], [[2., 1., 7.], [8., 2., 8.]]]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.narrow(2, 1, 2)?.to_vec3::<f32>()?, &[[[1.0, 4.0], [5.0, 9.0]], [[1.0, 7.0], [2.0, 8.0]]], ); assert_eq!( tensor.narrow(1, 1, 1)?.to_vec3::<f32>()?, &[[[1.0, 5.0, 9.0]], [[8.0, 2.0, 8.0]]], ); assert_eq!( tensor.narrow(0, 0, 1)?.to_vec3::<f32>()?, &[[[3.0, 1.0, 4.0], [1.0, 5.0, 9.0]]], ); assert_eq!( tensor.narrow(0, 1, 1)?.to_vec3::<f32>()?, &[[[2.0, 1.0, 7.0], [8.0, 2.0, 8.0]]], ); // The following has been checked against PyTorch via: // import torch // t = torch.tensor([[[3., 1., 4.], [1., 5., 9.]], [[2., 1., 7.], [8., 2., 8.]]]) // t.transpose(-1, -2).narrow(1, 1, 2) assert_eq!( tensor.t()?.narrow(1, 1, 2)?.to_vec3::<f32>()?, &[[[1.0, 5.0], [4.0, 9.0]], [[1.0, 2.0], [7.0, 8.0]]], ); Ok(()) } fn broadcast(device: &Device) -> Result<()> { let data = &[3f32, 1., 4.]; let tensor = Tensor::new(data, device)?; assert_eq!( tensor.broadcast_left((3, 1))?.to_vec3::<f32>()?, &[[[3.0, 1.0, 4.0]], [[3.0, 1.0, 4.0]], [[3.0, 1.0, 4.0]]] ); Ok(()) } fn slice_set(device: &Device) -> Result<()> { let (b, h, max_t, d) = (2, 4, 7, 3); let cache = Tensor::zeros((b, h, max_t, d), DType::F32, device)?; let tensor = Tensor::randn(0f32, 1f32, (b, h, 4, d), device)?; cache.slice_set(&tensor, 2, 0)?; let cache_t = cache.narrow(2, 0, 4)?; let diff = (cache_t - &tensor)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0.); cache.slice_set(&tensor, 2, 1)?; let cache_t = cache.narrow(2, 1, 4)?; let diff = (cache_t - &tensor)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0.); let ones = Tensor::ones((b, h, 1, d), DType::F32, device)?; cache.slice_set(&ones, 2, 6)?; let diff = cache.narrow(2, 5, 1)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0.); let diff = (cache.narrow(2, 6, 1)? - 1.)? .abs()? .sum_all()? .to_vec0::<f32>()?; assert_eq!(diff, 0.); Ok(()) } fn cat(device: &Device) -> Result<()> { // 1D let t1 = Tensor::new(&[3f32, 1., 4.], device)?; let t2 = Tensor::new(&[1f32, 5., 9., 2.], device)?; let t3 = Tensor::new(&[6f32, 5., 3., 5., 8., 9.], device)?; assert_eq!(Tensor::cat(&[&t1], 0)?.to_vec1::<f32>()?, [3f32, 1., 4.],); assert_eq!( Tensor::cat(&[&t1, &t2], 0)?.to_vec1::<f32>()?, [3f32, 1., 4., 1., 5., 9., 2.], ); assert_eq!( Tensor::cat(&[&t1, &t2, &t3], 0)?.to_vec1::<f32>()?, [3f32, 1., 4., 1., 5., 9., 2., 6., 5., 3., 5., 8., 9.], ); // 2D let data = &[[3f32, 1., 4., 1., 5.], [2., 7., 1., 8., 2.]]; let t1 = Tensor::new(data, device)?; let data2 = &[[5f32, 5., 5., 5., 5.], [2., 7., 1., 8., 2.]]; let t2 = Tensor::new(data2, device)?; assert_eq!( Tensor::cat(&[&t1, &t2], 0)?.to_vec2::<f32>()?, [ [3.0, 1.0, 4.0, 1.0, 5.0], [2.0, 7.0, 1.0, 8.0, 2.0], [5.0, 5.0, 5.0, 5.0, 5.0], [2.0, 7.0, 1.0, 8.0, 2.0] ] ); // PyTorch equivalent: // import torch // t1 = torch.tensor([[3, 1, 4, 1, 5], [2, 7, 1, 8, 2]]) // t2 = torch.tensor([[5]*5, [2, 7, 1, 8, 2]]) // torch.cat([t1.t(), t2.t()], dim=1).t() assert_eq!( Tensor::cat(&[&t1.t()?, &t2.t()?], 1)? .t()? 
.to_vec2::<f32>()?, [ [3.0, 1.0, 4.0, 1.0, 5.0], [2.0, 7.0, 1.0, 8.0, 2.0], [5.0, 5.0, 5.0, 5.0, 5.0], [2.0, 7.0, 1.0, 8.0, 2.0] ] ); assert_eq!( Tensor::cat(&[&t1, &t2], 1)?.to_vec2::<f32>()?, [ [3.0, 1.0, 4.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0], [2.0, 7.0, 1.0, 8.0, 2.0, 2.0, 7.0, 1.0, 8.0, 2.0] ] ); // 3D let t1 = Tensor::arange(0, 48i64, device)?.reshape((2, 6, 4))?; let t2 = Tensor::arange(100, 124i64, device)?.reshape((2, 3, 4))?; let t3 = Tensor::arange(10000, 10032i64, device)?.reshape((2, 4, 4))?; let t_cat = Tensor::cat(&[&t1, &t2, &t3], 1)?; let t1 = t1.t()?.contiguous()?.t()?; let t2 = t2.t()?.contiguous()?.t()?; let t3 = t3.t()?.contiguous()?.t()?; let t_cat2 = Tensor::cat(&[&t1, &t2, &t3], 1)?; let diff = t_cat.eq(&t_cat2)?.to_dtype(DType::F32)?.sum_all()?; assert_eq!(diff.to_vec0::<f32>()?, 104.0); assert_eq!(t_cat.i((0, 0, 0))?.to_vec0::<i64>()?, 0); assert_eq!(t_cat.i((0, 4, 0))?.to_vec0::<i64>()?, 16); assert_eq!(t_cat.i((0, 5, 0))?.to_vec0::<i64>()?, 20); assert_eq!(t_cat.i((1, 5, 0))?.to_vec0::<i64>()?, 44); assert_eq!(t_cat.i((0, 6, 0))?.to_vec0::<i64>()?, 100); assert_eq!(t_cat.i((1, 6, 0))?.to_vec0::<i64>()?, 112); assert_eq!(t_cat.i((0, 6, 1))?.to_vec0::<i64>()?, 101); assert_eq!(t_cat.i((0, 7, 1))?.to_vec0::<i64>()?, 105); assert_eq!(t_cat.i((0, 12, 1))?.to_vec0::<i64>()?, 10013); assert_eq!(t_cat.i((1, 12, 3))?.to_vec0::<i64>()?, 10031); Ok(()) } fn embeddings(device: &Device) -> Result<()> { let ids = Tensor::new(&[0u32, 2u32, 1u32], device)?; let t = Tensor::new(&[[0f32, 1f32], [2f32, 3f32], [4f32, 5f32]], device)?; let hs = t.embedding(&ids)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]); let hs = t.index_select(&ids, 0)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]); let hs = t.index_select(&ids.to_dtype(DType::I64)?, 0)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 1.0], [4.0, 5.0], [2.0, 3.0]]); Ok(()) } fn cmp(device: &Device) -> Result<()> { let t1 = Tensor::new(&[[0f32, 1f32], [2f32, 3f32], [4f32, 5f32]], device)?; let t2 = Tensor::new(&[[1f32, 0f32], [3f32, 3f32], [4f32, 7f32]], device)?; assert_eq!(t1.eq(&t2)?.to_vec2::<u8>()?, &[[0, 0], [0, 1], [1, 0]]); assert_eq!(t1.ne(&t2)?.to_vec2::<u8>()?, &[[1, 1], [1, 0], [0, 1]]); assert_eq!(t1.le(&t2)?.to_vec2::<u8>()?, &[[1, 0], [1, 1], [1, 1]]); assert_eq!(t1.lt(&t2)?.to_vec2::<u8>()?, &[[1, 0], [1, 0], [0, 1]]); assert_eq!(t1.gt(&t2)?.to_vec2::<u8>()?, &[[0, 1], [0, 0], [0, 0]]); assert_eq!(t1.ge(&t2)?.to_vec2::<u8>()?, &[[0, 1], [0, 1], [1, 0]]); Ok(()) } fn index_select(device: &Device) -> Result<()> { let ids = Tensor::new(&[0u32, 2u32, 1u32], device)?; let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?; assert_eq!( t.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); for dtype in [DType::U8, DType::U32, DType::I64] { let ids = ids.to_dtype(dtype)?; let hs = t.index_select(&ids, 1)?; assert_eq!( hs.to_vec2::<f32>()?, &[ [0.0, 2.0, 1.0], [3.0, 5.0, 4.0], [6.0, 8.0, 7.0], [9.0, 11.0, 10.0] ] ); let hs = t.index_select(&ids, 0)?; assert_eq!( hs.to_vec2::<f32>()?, &[[0.0, 1.0, 2.0], [6.0, 7.0, 8.0], [3.0, 4.0, 5.0]] ); // Prior to https://github.com/huggingface/candle/pull/1022 // There would be a bug where the last values in the result tensor would be set to 0. 
let ids = Tensor::new(&[0u32, 2u32, 1u32, 0u32, 2u32, 1u32], device)?; let hs = t.index_select(&ids, 0)?; assert_eq!( hs.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [6.0, 7.0, 8.0], [3.0, 4.0, 5.0], [0.0, 1.0, 2.0], [6.0, 7.0, 8.0], [3.0, 4.0, 5.0], ] ); // Test when selecting dim > 0 with ids size different from elem count of // target dim in source/input. let ids = Tensor::new(&[1u32, 0u32, 1u32], device)?; let t = Tensor::arange(1f32, 5f32, device)?.reshape((2, 2))?; assert_eq!(t.to_vec2::<f32>()?, &[[1.0, 2.0], [3.0, 4.0]]); let hs = t.index_select(&ids, 1)?; assert_eq!(hs.to_vec2::<f32>()?, &[[2.0, 1.0, 2.0], [4.0, 3.0, 4.0]]); } Ok(()) } fn index_add(device: &Device) -> Result<()> { let ids = Tensor::new(&[0u32, 1u32, 1u32], device)?; let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?; assert_eq!( t.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); let init = Tensor::ones((4, 2), DType::F32, device)?; let hs = init.index_add(&ids, &t, 1)?; assert_eq!( hs.to_vec2::<f32>()?, &[[1.0, 4.0], [4.0, 10.0], [7.0, 16.0], [10.0, 22.0]], ); let init = Tensor::zeros((4, 2), DType::F32, device)?; let ids = Tensor::new(&[1u32, 0u32, 0u32], device)?; let hs = init.index_add(&ids, &t, 1)?; assert_eq!( hs.to_vec2::<f32>()?, &[[3.0, 0.0], [9.0, 3.0], [15.0, 6.0], [21.0, 9.0]], ); let init = Tensor::zeros((6, 3), DType::F32, device)?; let ids = Tensor::new(&[5u32, 0u32, 1u32, 0u32], device)?; let hs = init.index_add(&ids, &t, 0)?; assert_eq!( hs.to_vec2::<f32>()?, &[ [12.0, 14.0, 16.0], [6.0, 7.0, 8.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 2.0] ] ); Ok(()) } fn slice_scatter(device: &Device) -> Result<()> { let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?; assert_eq!( t.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); let src = Tensor::arange(100f32, 106f32, device)?.reshape((2, 3))?; assert_eq!( t.slice_scatter0(&src, 0)?.to_vec2::<f32>()?, &[ [100.0, 101.0, 102.0], [103.0, 104.0, 105.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); assert_eq!( t.slice_scatter0(&src, 1)?.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [100.0, 101.0, 102.0], [103.0, 104.0, 105.0], [9.0, 10.0, 11.0] ] ); assert_eq!( t.slice_scatter0(&src, 2)?.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [100.0, 101.0, 102.0], [103.0, 104.0, 105.0], ] ); Ok(()) } fn scatter_add(device: &Device) -> Result<()> { let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?; assert_eq!( t.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); let ids = Tensor::new(&[[0u32, 1, 2], [3, 4, 0], [3, 3, 1], [2, 0, 4]], device)?; let init = Tensor::ones((4, 5), DType::F32, device)?; let hs = init.scatter_add(&ids, &t, 1)?; assert_eq!( hs.to_vec2::<f32>()?, &[ [1.0, 2.0, 3.0, 1.0, 1.0], [6.0, 1.0, 1.0, 4.0, 5.0], [1.0, 9.0, 1.0, 14.0, 1.0], [11.0, 1.0, 10.0, 1.0, 12.0] ] ); let init = Tensor::ones((6, 3), DType::F32, device)?; let hs = init.scatter_add(&ids, &t, 0)?; assert_eq!( hs.to_vec2::<f32>()?, &[ [1.0, 11.0, 6.0], [1.0, 2.0, 9.0], [10.0, 1.0, 3.0], [10.0, 8.0, 1.0], [1.0, 5.0, 12.0], [1.0, 1.0, 1.0] ] ); Ok(()) } fn gather(device: &Device) -> Result<()> { let ids = Tensor::new(&[[0u32], [2u32], [1u32], [0u32]], device)?; let t = Tensor::arange(0f32, 12f32, device)?.reshape((4, 3))?; assert_eq!( t.to_vec2::<f32>()?, &[ [0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [9.0, 10.0, 11.0] ] ); let hs = t.gather(&ids, 1)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0], [5.0], 
[7.0], [9.0]]); let ids = Tensor::new( &[[0u32, 0u32], [2u32, 0u32], [1u32, 1u32], [0u32, 2u32]], device, )?; let hs = t.gather(&ids, 1)?; assert_eq!( hs.to_vec2::<f32>()?, &[[0.0, 0.0], [5.0, 3.0], [7.0, 7.0], [9.0, 11.0]] ); let ids = Tensor::new(&[[0u32, 2u32, 0u32]], device)?; let hs = t.gather(&ids, 0)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 7.0, 2.0]]); let ids = Tensor::new(&[[0u32, 2u32, 0u32], [0u32, 1u32, 1u32]], device)?; let hs = t.gather(&ids, 0)?; assert_eq!(hs.to_vec2::<f32>()?, &[[0.0, 7.0, 2.0], [0.0, 4.0, 5.0]]); // Random data // Dim: 0 let t = Tensor::new( &[ [ [108_f32, -47., 16., -56., -83., -130., 210.], [253., 95., 151., 228., -210., -123., -127.], [-9., -217., 2., -78., 163., 245., -204.], [-246., 79., -238., 88., -226., -184., 171.], [8., -48., -153., 234., -34., 166., -153.], [124., 0., -10., -61., -242., -15., -238.], ], [ [12., -64., -199., 244., -240., 156., -128.], [173., -57., 4., -198., 233., -110., 238.], [95., 82., 0., 240., 53., -211., 209.], [-122., 167., -212., 227., -144., 61., 118.], [-63., -146., 200., 244., 168., -167., 116.], [-125., -147., 110., -253., -178., -250., -18.], ], [ [57., 86., -50., 56., 92., 205., -78.], [-137., -156., -18., 248., -61., -239., 14.], [-248., -30., -50., -70., -251., 250., -83.], [-221., 67., 72., 59., -24., -154., 232.], [-144., -23., -74., 5., 93., 171., 205.], [46., -77., -38., -226., 246., 161., -17.], ], [ [-153., -231., -236., 161., 126., 2., -22.], [-229., -41., 209., 164., 234., 160., 57.], [223., 254., -186., -162., -46., -160., -102.], [65., 30., 213., -253., 59., 224., -154.], [-82., -203., -177., 17., 31., -256., -246.], [176., -135., -65., 54., -56., 210., 76.], ], [ [-10., -245., 168., 124., -14., -33., -178.], [25., -43., -39., 132., -89., 169., 179.], [187., -215., 32., -133., 87., -7., -168.], [-224., -215., -5., -230., -58., -162., 128.], [158., -137., -122., -100., -202., -83., 136.], [30., -185., -144., 250., 209., -40., 127.], ], [ [-196., 108., -245., 122., 146., -228., 62.], [-1., -66., 160., 137., 13., -172., -21.], [244., 199., -164., 28., 119., -175., 198.], [-62., 253., -162., 195., -95., -230., -211.], [123., -72., -26., -107., -139., 64., 245.], [11., -126., -182., 108., -12., 184., -127.], ], [ [-159., 126., 176., 161., 73., -111., -138.], [-187., 214., -217., -33., -223., -201., -212.], [-61., -120., -166., -172., -95., 53., 196.], [-33., 86., 134., -152., 154., -53., 74.], [186., -28., -154., -174., 141., -109., 217.], [82., 35., 252., 145., 181., 74., -87.], ], ], device, )?; let ids = Tensor::new( &[ [ [6_u32, 6, 4, 3, 4, 4, 6], [3, 3, 2, 4, 4, 4, 6], [3, 3, 0, 2, 4, 6, 4], [2, 5, 1, 2, 6, 6, 1], [2, 1, 6, 5, 3, 2, 3], [6, 1, 0, 1, 0, 2, 6], ], [ [4, 6, 4, 3, 3, 3, 2], [4, 3, 2, 4, 4, 4, 6], [2, 3, 0, 2, 4, 6, 4], [6, 5, 1, 2, 6, 6, 1], [4, 1, 6, 5, 3, 2, 3], [1, 1, 0, 1, 0, 2, 6], ], [ [3, 6, 4, 3, 3, 3, 2], [2, 3, 2, 4, 4, 4, 6], [4, 3, 0, 2, 4, 6, 4], [0, 5, 1, 2, 6, 6, 1], [6, 1, 6, 5, 3, 2, 3], [4, 1, 0, 1, 0, 2, 6], ], [ [0, 6, 4, 3, 3, 3, 2], [5, 3, 2, 4, 4, 4, 6], [0, 3, 0, 2, 4, 6, 4], [3, 5, 1, 2, 6, 6, 1], [0, 1, 6, 5, 3, 2, 3], [3, 1, 0, 1, 0, 2, 6], ], ], device, )?; let hs = t.gather(&ids, 0)?; assert_eq!( hs.to_vec3::<f32>()?, &[ [ [-159_f32, 126., 168., 161., -14., -33., -138.], [-229., -41., -18., 132., -89., 169., -212.], [223., 254., 2., -70., 87., 53., -168.], [-221., 253., -212., 59., 154., -53., 118.], [-144., -146., -154., -107., 31., 171., -246.], [82., -147., -10., -253., -242., 161., -87.] 
], [ [-10., 126., 168., 161., 126., 2., -78.], [25., -41., -18., 132., -89., 169., -212.], [-248., 254., 2., -70., 87., 53., -168.], [-33., 253., -212., 59., 154., -53., 118.], [158., -146., -154., -107., 31., 171., -246.], [-125., -147., -10., -253., -242., 161., -87.] ], [ [-153., 126., 168., 161., 126., 2., -78.], [-137., -41., -18., 132., -89., 169., -212.], [187., 254., 2., -70., 87., 53., -168.], [-246., 253., -212., 59., 154., -53., 118.], [186., -146., -154., -107., 31., 171., -246.], [30., -147., -10., -253., -242., 161., -87.] ], [ [108., 126., 168., 161., 126., 2., -78.], [-1., -41., -18., 132., -89., 169., -212.], [-9., 254., 2., -70., 87., 53., -168.], [65., 253., -212., 59., 154., -53., 118.], [8., -146., -154., -107., 31., 171., -246.], [176., -147., -10., -253., -242., 161., -87.] ] ] ); // Dim: 1 let t = Tensor::new( &[ [ [-117_f32, -175., 69., -163.], [200., 242., -21., -67.], [179., 150., -126., -75.], [-118., 38., -138., -13.], [-221., 136., -185., 180.], [58., 182., -204., -149.], ], [ [3., -148., -58., -154.], [-43., 45., -108., 4.], [-69., -249., -71., -21.], [80., 110., -152., -235.], [-88., 7., 92., -250.], [-186., 207., -242., 98.], ], [ [238., 19., 64., -242.], [-150., -97., 218., 58.], [111., -233., 204., -212.], [-242., -232., 83., 42.], [153., 62., -251., 219.], [-117., 36., -119., 10.], ], [ [215., 159., -169., -27.], [-83., 101., -88., 169.], [-205., 93., 225., -64.], [-162., 240., 214., 23.], [-112., 6., 21., 245.], [-38., 113., 93., 215.], ], [ [91., -188., -148., 101.], [74., 203., -35., 55.], [-116., -130., -153., -96.], [58., 22., -45., -194.], [-221., -134., 73., 159.], [-203., -254., 31., 235.], ], [ [105., -53., 61., 186.], [-195., 234., 75., -1.], [51., 139., 160., -108.], [-173., -167., 161., 19.], [83., -246., 156., -222.], [109., 39., -149., 137.], ], ], device, )?; let ids = Tensor::new( &[ [[4_u32, 4, 4, 2]], [[0, 4, 4, 3]], [[1, 5, 3, 4]], [[0, 3, 3, 2]], [[1, 1, 5, 2]], [[1, 4, 5, 4]], ], device, )?; let hs = t.gather(&ids, 1)?; assert_eq!( hs.to_vec3::<f32>()?, &[ [[-221., 136., -185., -75.]], [[3., 7., 92., -235.]], [[-150., 36., 83., 219.]], [[215., 240., 214., -64.]], [[74., 203., 31., -96.]], [[-195., -246., -149., -222.]] ] ); // Dim: 2 let t = Tensor::new( &[ [[-162_f32, 202.], [-126., -39.], [35., -65.], [1., 80.]], [[37., 248.], [-191., 89.], [117., -40.], [-217., 220.]], ], device, )?; let ids = Tensor::new(&[[[1_u32], [0], [1], [1]], [[0], [1], [0], [1]]], device)?; let hs = t.gather(&ids, 2)?; assert_eq!( hs.to_vec3::<f32>()?, &[ [[202.], [-126.], [-65.], [80.]], [[37.], [89.], [117.], [220.]] ] ); let t = Tensor::new( &[ [[-21_f32, -197.], [194., 122.]], [[255., -106.], [-191., 250.]], [[33., -117.], [43., 10.]], [[-130., 238.], [-217., -92.]], ], device, )?; let ids = Tensor::new( &[ [[0_u32, 1], [1, 0]], [[1, 0], [0, 1]], [[0, 1], [0, 1]], [[1, 0], [1, 0]], ], device, )?; let hs = t.gather(&ids, 2)?; assert_eq!( hs.to_vec3::<f32>()?, &[ [[-21., -197.], [122., 194.]], [[-106., 255.], [-191., 250.]], [[33., -117.], [43., 10.]], [[238., -130.], [-92., -217.]] ] ); Ok(()) } fn broadcasting(device: &Device) -> Result<()> { let t1 = Tensor::arange(0f32, 24f32, device)?.reshape((4, 2, 3))?; let t2 = Tensor::new(&[100f32, 200f32], device)?; let s = t1.broadcast_add(&t2.reshape((2, 1))?)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[100.0, 101.0, 102.0], [203.0, 204.0, 205.0]], [[106.0, 107.0, 108.0], [209.0, 210.0, 211.0]], [[112.0, 113.0, 114.0], [215.0, 216.0, 217.0]], [[118.0, 119.0, 120.0], [221.0, 222.0, 223.0]] ] ); let s = 
t1.t()?.broadcast_add(&t2)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[100.0, 203.0], [101.0, 204.0], [102.0, 205.0]], [[106.0, 209.0], [107.0, 210.0], [108.0, 211.0]], [[112.0, 215.0], [113.0, 216.0], [114.0, 217.0]], [[118.0, 221.0], [119.0, 222.0], [120.0, 223.0]] ] ); let s = t1.broadcast_sub(&t2.reshape((2, 1))?)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[-100.0, -99.0, -98.0], [-197.0, -196.0, -195.0]], [[-94.0, -93.0, -92.0], [-191.0, -190.0, -189.0]], [[-88.0, -87.0, -86.0], [-185.0, -184.0, -183.0]], [[-82.0, -81.0, -80.0], [-179.0, -178.0, -177.0]] ] ); let s = t1.t()?.broadcast_sub(&t2)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[-100.0, -197.0], [-99.0, -196.0], [-98.0, -195.0]], [[-94.0, -191.0], [-93.0, -190.0], [-92.0, -189.0]], [[-88.0, -185.0], [-87.0, -184.0], [-86.0, -183.0]], [[-82.0, -179.0], [-81.0, -178.0], [-80.0, -177.0]] ] ); // Test a narrowed version as this uses a layout start_offset. let t1 = t1.i(2..)?; let s = t1.broadcast_add(&t2.reshape((2, 1))?)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[112.0, 113.0, 114.0], [215.0, 216.0, 217.0]], [[118.0, 119.0, 120.0], [221.0, 222.0, 223.0]] ] ); let s = t1.t()?.broadcast_add(&t2)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[112.0, 215.0], [113.0, 216.0], [114.0, 217.0]], [[118.0, 221.0], [119.0, 222.0], [120.0, 223.0]] ] ); let s = t1.broadcast_sub(&t2.reshape((2, 1))?)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[-88.0, -87.0, -86.0], [-185.0, -184.0, -183.0]], [[-82.0, -81.0, -80.0], [-179.0, -178.0, -177.0]] ] ); let s = t1.t()?.broadcast_sub(&t2)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[-88.0, -185.0], [-87.0, -184.0], [-86.0, -183.0]], [[-82.0, -179.0], [-81.0, -178.0], [-80.0, -177.0]] ] ); let t3 = Tensor::new(1f32, device)?.broadcast_div(&t2)?; let s = t1.broadcast_mul(&t2.reshape((2, 1))?)?; let s_div = t1.broadcast_div(&t3.reshape((2, 1))?)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[1200.0, 1300.0, 1400.0], [3000.0, 3200.0, 3400.0]], [[1800.0, 1900.0, 2000.0], [4200.0, 4400.0, 4600.0]] ] ); assert_eq!(s.to_vec3::<f32>()?, s_div.to_vec3::<f32>()?,); let s = t1.t()?.broadcast_mul(&t2)?; let s_div = t1.t()?.broadcast_div(&t3)?; assert_eq!( s.to_vec3::<f32>()?, &[ [[1200.0, 3000.0], [1300.0, 3200.0], [1400.0, 3400.0]], [[1800.0, 4200.0], [1900.0, 4400.0], [2000.0, 4600.0]] ] ); assert_eq!(s.to_vec3::<f32>()?, s_div.to_vec3::<f32>()?,); Ok(()) } fn randn(device: &Device) -> Result<()> { let tensor = Tensor::randn(0f32, 1f32, (5, 3), device)?; assert_eq!(tensor.dims(), [5, 3]); // Check that the seed gets updated by checking that // a new series of numbers is generated each time let tensor2 = Tensor::randn(0f32, 1f32, (5, 3), device)?; assert_ne!(tensor.to_vec2::<f32>()?, tensor2.to_vec2::<f32>()?); let tensor = Tensor::rand(0f32, 1f32, (5, 3), device)?; assert_eq!(tensor.dims(), [5, 3]); // Check that the seed gets updated by checking that // a new series of numbers is generated each time let tensor2 = Tensor::rand(0f32, 1f32, (5, 3), device)?; assert_ne!(tensor.to_vec2::<f32>()?, tensor2.to_vec2::<f32>()?); // We do not expect deterministic elements at any index. // There once was a bug that had a deterministic zero element in evenly sized tensors. 
const N: usize = 2; let v = (0..100) .map(|_| Tensor::randn(0f32, 1f32, N, device).and_then(|t| t.to_vec1::<f32>())) .collect::<Result<Vec<_>>>()?; assert!( (0..N).all(|i| v.windows(2).any(|pair| pair[0][i] != pair[1][i])), "There are deterministic values in the randn tensors" ); let v = (0..100) .map(|_| Tensor::rand(0f32, 1f32, N, device).and_then(|t| t.to_vec1::<f32>())) .collect::<Result<Vec<_>>>()?; assert!( (0..N).all(|i| v.windows(2).any(|pair| pair[0][i] != pair[1][i])), "There are deterministic values in the rand tensors" ); Ok(()) } fn zero_dim(device: &Device) -> Result<()> { let t = Tensor::zeros((4, 0, 1), DType::F32, device)?; assert_eq!(t.dims3()?, (4, 0, 1)); let t2 = Tensor::zeros((4, 3, 1), DType::F32, device)?; let t_cat = Tensor::cat(&[&t, &t2], 1)?; assert_eq!(t_cat.dims3()?, (4, 3, 1)); let t_cat = Tensor::cat(&[&t, &t], 1)?; assert_eq!(t_cat.dims3()?, (4, 0, 1)); let t_unary = t.sqrt()?; assert_eq!(t_unary.dims3()?, (4, 0, 1)); let t_plus = (&t + 1.)?; assert_eq!(t_plus.dims3()?, (4, 0, 1)); let t_mm = t2.matmul(&t.t()?)?; assert_eq!(t_mm.dims3()?, (4, 3, 0)); let t_mm = t.matmul(&t2.t()?)?; assert_eq!(t_mm.dims3()?, (4, 0, 3)); let t_mm = t.t()?.matmul(&t)?; assert_eq!(t_mm.dims3()?, (4, 1, 1)); Ok(()) } test_device!(zeros, zeros_cpu, zeros_gpu, zeros_metal); test_device!(ones, ones_cpu, ones_gpu, ones_metal); test_device!(full, full_cpu, full_gpu, full_metal); test_device!(arange, arange_cpu, arange_gpu, arange_metal); test_device!(add_mul, add_mul_cpu, add_mul_gpu, add_mul_metal); test_device!(tensor_2d, tensor_2d_cpu, tensor_2d_gpu, tensor_2d_metal); test_device!(narrow, narrow_cpu, narrow_gpu, narrow_metal); test_device!(broadcast, broadcast_cpu, broadcast_gpu, broadcast_metal); test_device!(slice_set, ss_cpu, ss_gpu, ss_metal); test_device!(cat, cat_cpu, cat_gpu, cat_metal); test_device!(sum, sum_cpu, sum_gpu, sum_metal); test_device!(min, min_cpu, min_gpu, min_metal); test_device!(max, max_cpu, max_gpu, max_metal); test_device!(argmax, argmax_cpu, argmax_gpu, argmax_metal); test_device!(argmin, argmin_cpu, argmin_gpu, argmin_metal); test_device!(transpose, transpose_cpu, transpose_gpu, transpose_metal); test_device!(unary_op, unary_op_cpu, unary_op_gpu, unary_op_metal); test_device!(binary_op, binary_op_cpu, binary_op_gpu, binary_op_metal); test_device!(embeddings, embeddings_cpu, embeddings_gpu, embeddings_metal); test_device!(cmp, cmp_cpu, cmp_gpu, cmp_metal); test_device!( broadcasting, broadcasting_cpu, broadcasting_gpu, broadcasting_metal ); test_device!( index_select, index_select_cpu, index_select_gpu, index_select_metal ); test_device!(index_add, index_add_cpu, index_add_gpu, index_add_metal); test_device!(gather, gather_cpu, gather_gpu, gather_metal); test_device!( scatter_add, scatter_add_cpu, scatter_add_gpu, scatter_add_metal ); test_device!( slice_scatter, slice_scatter_cpu, slice_scatter_gpu, slice_scatter_metal ); test_device!(randn, randn_cpu, randn_gpu, randn_metal); test_device!(clamp, clamp_cpu, clamp_gpu, clamp_metal); test_device!(asort, asort_cpu, asort_gpu, asort_metal); test_device!(var, var_cpu, var_gpu, var_metal); test_device!(zero_dim, zero_dim_cpu, zero_dim_gpu, zero_dim_metal); // There was originally a bug on the CPU implementation for randn // https://github.com/huggingface/candle/issues/381 #[test] fn randn_hasneg() -> Result<()> { let t = Tensor::randn(0f32, 1f32, 200, &Device::Cpu)?.to_vec1::<f32>()?; if t.iter().all(|&v| v >= 0.) 
{ candle_core::bail!("all values in tensors are non-negative") } Ok(()) } #[test] fn pad_with_same() -> Result<()> { let t = Tensor::arange(1f32, 5f32, &Device::Cpu)?.reshape((2, 2))?; let t0 = t.pad_with_same(0, 1, 2)?; assert_eq!( t0.to_vec2::<f32>()?, [[1.0, 2.0], [1.0, 2.0], [3.0, 4.0], [3.0, 4.0], [3.0, 4.0]] ); let t1 = t.pad_with_same(1, 1, 2)?; assert_eq!( t1.to_vec2::<f32>()?, [[1.0, 1.0, 2.0, 2.0, 2.0], [3.0, 3.0, 4.0, 4.0, 4.0]] ); Ok(()) } #[test] fn i64_abs() -> Result<()> { let t = Tensor::new(&[-42i64, 1337], &Device::Cpu)?; let t = t.abs()?; assert_eq!(t.to_vec1::<i64>()?, [42, 1337]); Ok(()) } #[test] fn tril_triu_eye() -> Result<()> { let t = Tensor::tril2(4, DType::F32, &Device::Cpu)?; assert_eq!( t.to_vec2::<f32>()?, [ [1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 1.0, 0.0], [1.0, 1.0, 1.0, 1.0] ], ); let t = Tensor::triu2(4, DType::F32, &Device::Cpu)?; assert_eq!( t.to_vec2::<f32>()?, [ [1.0, 1.0, 1.0, 1.0], [0.0, 1.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.0, 1.0] ] ); let t = Tensor::eye(4, DType::F32, &Device::Cpu)?; assert_eq!( t.to_vec2::<f32>()?, [ [1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0] ] ); Ok(()) } #[test] fn cumsum() -> Result<()> { let t = &[3f32, 1., 4., 1., 5.]; let t = Tensor::new(t, &Device::Cpu)?; assert_eq!(t.cumsum(0)?.to_vec1::<f32>()?, [3., 4., 8., 9., 14.]); let t = t.unsqueeze(1)?; assert_eq!( t.cumsum(0)?.to_vec2::<f32>()?, [[3.0], [4.0], [8.0], [9.0], [14.0]] ); assert_eq!( t.cumsum(1)?.to_vec2::<f32>()?, [[3.0], [1.0], [4.0], [1.0], [5.0]] ); let t = &[[3f32, 1., 4., 1., 5.], [2., 1., 7., 8., 2.]]; let t = Tensor::new(t, &Device::Cpu)?; assert_eq!( t.cumsum(1)?.to_vec2::<f32>()?, [[3.0, 4.0, 8.0, 9.0, 14.0], [2.0, 3.0, 10.0, 18.0, 20.0]], ); assert_eq!( t.cumsum(0)?.to_vec2::<f32>()?, [[3.0, 1.0, 4.0, 1.0, 5.0], [5.0, 2.0, 11.0, 9.0, 7.0]] ); Ok(()) } /// A helper function for floating point comparison. Both a and b must be 1D Tensor and contains the same amount of data. /// Assertion passes if the difference of all pairs of a and b is smaller than epsilon. fn assert_close(a: &Tensor, b: &Tensor, epsilon: f64) -> Result<()> { let a_vec: Vec<f64> = a.to_vec1()?; let b_vec: Vec<f64> = b.to_vec1()?; assert_eq!(a_vec.len(), b_vec.len()); for (a, b) in a_vec.iter().zip(b_vec.iter()) { assert!((a - b).abs() < epsilon); } Ok(()) } #[test] fn log_sum_exp() -> Result<()> { let input = Tensor::new( &[ [[1f64, 2., 3.], [4., 5., 6.]], [[-1000.0, -999.0, -1001.0], [1000.0, 999.0, 1001.0]], ], &Device::Cpu, )?; let output = input.log_sum_exp(D::Minus1)?; // The expectations obtained from pytorch. let expected = Tensor::new(&[[3.4076, 6.4076], [-998.5924, 1001.4076]], &Device::Cpu)?; assert_eq!(output.dims(), expected.dims()); assert_close(&output.flatten_all()?, &expected.flatten_all()?, 0.00001)?; assert_eq!( input.log_sum_exp((0, 1))?.to_vec1::<f64>()?, [1000.0, 999.0, 1001.0] ); assert_eq!( input.log_sum_exp(())?.to_vec3::<f64>()?, input.to_vec3::<f64>()? ); Ok(()) } #[test] fn pow() -> Result<()> { let lhs = Tensor::new(&[[1f32, 2., 3.], [4., 5., 6.]], &Device::Cpu)?; let rhs = (&lhs - 2.)?; let res = lhs.pow(&rhs)?; assert_eq!( test_utils::to_vec2_round(&res, 3)?, [[1.0, 1.0, 3.0], [16.0, 125.0, 1296.0]] ); Ok(()) }
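The `log_sum_exp` test above notes that its expectations were obtained from PyTorch; the snippet below is an illustrative way to reproduce them and is not part of the candle test suite.

```python
# Illustrative cross-check (not part of candle) of the log_sum_exp expectations.
import torch

x = torch.tensor(
    [
        [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
        [[-1000.0, -999.0, -1001.0], [1000.0, 999.0, 1001.0]],
    ],
    dtype=torch.float64,
)
print(torch.logsumexp(x, dim=-1))  # ≈ [[3.4076, 6.4076], [-998.5924, 1001.4076]]
```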
1
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/grad_tests.rs
#![allow(clippy::approx_constant)] use anyhow::{Context, Result}; use candle_core::{test_device, test_utils, Device, Shape, Tensor, Var}; fn simple_grad(device: &Device) -> Result<()> { let x = Var::new(&[3f32, 1., 4.], device)?; let x = x.as_tensor(); let y = (((x * x)? + x * 5f64)? + 4f64)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(x.to_vec1::<f32>()?, [3., 1., 4.]); // y = x^2 + 5.x + 4 assert_eq!(y.to_vec1::<f32>()?, [28., 10., 40.]); // dy/dx = 2.x + 5 assert_eq!(grad_x.to_vec1::<f32>()?, [11., 7., 13.]); Ok(()) } fn sum_grad(device: &Device) -> Result<()> { let x = Var::new(&[3f32, 1., 4.], device)?; let x = x.as_tensor(); let y = (x.sqr()?.sum_keepdim(0)? * 2.)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [52.]); // y = 2.x^2 so dy/dx = 4.x assert_eq!(grad_x.to_vec1::<f32>()?, &[12., 4., 16.]); // Same test as before but squeezing on the last dimension. let y = (x.sqr()?.sum_keepdim(0)? * 2.)?.squeeze(0)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_scalar::<f32>()?, 52.); // y = 2.x^2 so dy/dx = 4.x assert_eq!(grad_x.to_vec1::<f32>()?, &[12., 4., 16.]); Ok(()) } fn matmul_grad(device: &Device) -> Result<()> { let data: Vec<_> = (0..12).map(|i| i as f32).collect(); let x = Var::from_slice(&data, (2, 2, 3), device)?; let data: Vec<_> = (0..12).map(|i| i as f32).collect(); let y = Var::from_slice(&data, (2, 3, 2), device)?; let c = x.matmul(&y)?; let grads = c.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; let grad_y = grads.get(&y).context("no grad for y")?; assert_eq!(grad_x.shape(), &Shape::from((2, 2, 3))); assert_eq!(grad_y.shape(), &Shape::from((2, 3, 2))); assert_eq!( &*grad_x.to_vec3::<f32>()?, &[ [[1., 5., 9.], [1., 5., 9.]], [[13., 17., 21.], [13., 17., 21.]] ] ); assert_eq!( &*grad_y.to_vec3::<f32>()?, &[ [[3., 3.], [5., 5.], [7., 7.]], [[15., 15.], [17., 17.], [19., 19.]] ] ); Ok(()) } // The simplest gradient descent, using scalar variable. fn grad_descent(device: &Device) -> Result<()> { let x = Var::new(0f32, device)?; let learning_rate = 0.1; for _step in 0..100 { let xt = x.as_tensor(); let c = ((xt - 4.2)? * (xt - 4.2)?)?; let grads = c.backward()?; let x_grad = grads.get(&x).context("no grad for x")?; x.set(&(xt - x_grad * learning_rate)?)? } assert_eq!(x.to_scalar::<f32>()?, 4.199999); Ok(()) } fn unary_grad(device: &Device) -> Result<()> { let x = Var::new(&[3f32, 1., 4., 0.15], device)?; let x = x.as_tensor(); let y = (x.log()? 
+ 1.)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [2.0986, 1.0, 2.3863, -0.8971] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [0.3333, 1.0, 0.25, 6.6667] ); let y = x.exp()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [20.0855, 2.7183, 54.5982, 1.1618] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [20.0855, 2.7183, 54.5982, 1.1618] ); let y = x.exp()?.sqr()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 3)?, [403.429, 7.389, 2980.958, 1.35] ); // exp(x)^2 = exp(2*x) assert_eq!( test_utils::to_vec1_round(grad_x, 2)?, [806.86, 14.78, 5961.92, 2.7] ); let y = x.sin()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [0.1411, 0.8415, -0.7568, 0.1494], ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [-0.99, 0.5403, -0.6536, 0.9888], ); let y = x.cos()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [-0.99, 0.5403, -0.6536, 0.9888], ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [-0.1411, -0.8415, 0.7568, -0.1494], ); let y = x.sqr()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [9.0, 1.0, 16.0, 0.0225]); assert_eq!(grad_x.to_vec1::<f32>()?, [6.0, 2.0, 8.0, 0.3]); let y = x.sqr()?.sqrt()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [3.0, 1.0, 4.0, 0.15]); assert_eq!(test_utils::to_vec1_round(grad_x, 4)?, [1.0, 1.0, 1.0, 1.0]); let y = x.neg()?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [-3.0, -1.0, -4.0, -0.15]); assert_eq!(grad_x.to_vec1::<f32>()?, [-1.0, -1.0, -1.0, -1.0]); let y = x.affine(0.2, 1.)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [1.6, 1.2, 1.8, 1.03]); assert_eq!(grad_x.to_vec1::<f32>()?, [0.2, 0.2, 0.2, 0.2]); let y = Tensor::new(1f32, device)?.broadcast_div(x)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [0.3333, 1.0, 0.25, 6.6667] ); assert_eq!( grad_x.to_vec1::<f32>()?, [-0.11111111, -1.0, -0.0625, -44.444443], ); let y = x.broadcast_div(&Tensor::new(0.5f32, device)?)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [6., 2., 8., 0.3]); assert_eq!(grad_x.to_vec1::<f32>()?, [2., 2., 2., 2.]); let x = Var::new(&[3f32, 1., 4., 0.15], device)?; let y = x.powf(2.5)?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!(test_utils::to_vec1_round(&y, 2)?, [15.59, 1.0, 32.0, 0.01]); assert_eq!( test_utils::to_vec1_round(grad_x, 2)?, [12.99, 2.5, 20.0, 0.15] ); let y = x.tanh()?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!(test_utils::to_vec1_round(&y, 2)?, [1.0, 0.76, 1.0, 0.15]); assert_eq!( test_utils::to_vec1_round(grad_x, 2)?, [0.01, 0.42, 0.0, 0.98], ); // testing compared to pytorch nn.GELU(approximate = 'tanh') let y = x.gelu()?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( 
test_utils::to_vec1_round(&y, 4)?, [2.9964, 0.8412, 3.9999, 0.0839] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [1.0116, 1.0830, 1.0003, 0.6188], ); // Testing compared to pytorch torch.erf // // import torch // x = torch.tensor([3.0, 1.0, 4.0, 0.15], requires_grad=True) // y = x.erf() // print(y) // loss = y.sum() // loss.backward() // print(x.grad) let y = x.erf()?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!(test_utils::to_vec1_round(&y, 4)?, [1.0, 0.8427, 1.0, 0.168]); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [0.0001, 0.4151, 0.0, 1.1033], ); // Testing compared to pytorch nn.GELU(approximate = 'none') // // import torch // import torch.nn.functional as F // x = torch.tensor([3.0, 1.0, 4.0, 0.15], requires_grad=True) // y = F.gelu(x, approximate='none') // print(y) // loss = y.sum() // loss.backward() // print(x.grad) let y = x.gelu_erf()?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [2.9960, 0.8413, 3.9999, 0.0839] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [1.0119, 1.0833, 1.0005, 0.6188], ); // Testing compared to pytorch elu // // import torch // import torch.nn.functional as F // x = torch.tensor([-1.0, 0.0, -2.0, 3.0], requires_grad=True) // y = F.elu(x, alpha=2.0) // print(y) // loss = y.min // loss = y.sum() // loss.backward() // print(x.grad) let elu_x = Var::new(&[-1.0f32, 0., -2., 3.], device)?; let y = elu_x.elu(2.)?; let grads = y.backward()?; let grad_x = grads.get(&elu_x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [-1.2642, 0.0000, -1.7293, 3.0000] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [0.7358, 2.0000, 0.2707, 1.0000] ); // testing compared to pytorch nn.Silu() let y = x.silu()?; let grads = y.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec1_round(&y, 4)?, [2.8577, 0.7311, 3.9281, 0.0806] ); assert_eq!( test_utils::to_vec1_round(grad_x, 4)?, [1.0881, 0.9277, 1.0527, 0.5747], ); if device.is_cpu() { let x = Var::new(&[[[1f32, 2., 3.], [4., 5., 6.], [7., 8., 9.]]], device)?; let y = x.interpolate1d(12)?.reshape(36)?; let z = Tensor::new( &[ 1_f32, 02., 03., 04., 05., 06., 07., 08., 09., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35., 36., ], device, )?; let loss = y.unsqueeze(1)?.transpose(0, 1)?.matmul(&z.unsqueeze(1)?)?; let grads = loss.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec3_round(grad_x, 4)?, [[[10_f32, 26., 42.], [58., 74., 90.], [106., 122., 138.]]] ); } // manually checked: see comments let x = Var::new(&[[[[1f32, 2., 3.], [4., 5., 6.], [7., 8., 9.]]]], device)?; let y = x.interpolate2d(6, 6)?.reshape(36)?; let z = Tensor::new( &[ 1_f32, 02., 03., 04., 05., 06., 07., 08., 09., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35., 36., ], device, )?; // gradient should be // row 1 // 1+2+7+8 = 18 // 3+4+9+10 = 26 // 5+6+11+12 = 34 // row 2 // 13+14+19+20 = 66 // 15+16+21+22 = 74 // 17+18+23+24 = 82 // row 3 // 25+26+31+32 = 114 // 27+28+33+34 = 122 // 29+30+35+36 = 130 let loss = y.unsqueeze(1)?.transpose(0, 1)?.matmul(&z.unsqueeze(1)?)?; let grads = loss.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec2_round(&grad_x.flatten(0, 2)?, 4)?, [[18_f32, 
26., 34.], [66., 74., 82.], [114., 122., 130.]] ); // manually checked: see comments let x = Var::new(&[[[[1f32, 2.], [4., 5.]]]], device)?; let y = x.interpolate2d(6, 6)?.reshape(36)?; let z = Tensor::new( &[ 1_f32, 02., 03., 04., 05., 06., 07., 08., 09., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35., 36., ], device, )?; // gradient should be // row 1 // 1+2+3+7+8+9+13+14+15 = 72 // 4+5+6+10+11+12+16+17+18 = 99 // row 2 // 19+20+21+25+26+27+31+32+33 = 234 // 22+23+24+28+29+30+34+35+36 = 243 let loss = y.unsqueeze(1)?.transpose(0, 1)?.matmul(&z.unsqueeze(1)?)?; let grads = loss.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec2_round(&grad_x.flatten(0, 2)?, 4)?, [[72_f32, 99.], [234., 261.]] ); // manually checked: see comments let x = Var::new(&[[[[1f32, 2.], [4., 5.]], [[6f32, 7.], [8., 9.]]]], device)?; let y = x.interpolate2d(4, 4)?.reshape(32)?; #[rustfmt::skip] let z = Tensor::new( &[ 1_f32, 02., 03., 04., 05., 06., 07., 08., 09., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32. ], device, )?; // gradient should be // m1r1 // 1+2+5+6=14 // 3+4+7+8=22 // m1r2 // 9+10+13+14=46 // 11+12+15+16=54 // m2r1 // 17+18+21+22=78 // 19+20+23+24=86 // m2r2 // 25+26+29+30=110 // 27+28+31+32=118 let loss = y.unsqueeze(1)?.transpose(0, 1)?.matmul(&z.unsqueeze(1)?)?; let grads = loss.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec3_round(&grad_x.flatten(0, 1)?, 4)?, [[[14_f32, 22.], [46., 54.]], [[78., 86.], [110., 118.]]] ); // manually checked: see comments let x = Var::new( &[[[[1f32, 2.], [4., 5.]]], [[[6f32, 7.], [8., 9.]]]], device, )?; let y = x.interpolate2d(4, 4)?.reshape(32)?; #[rustfmt::skip] let z = Tensor::new( &[ 1_f32, 02., 03., 04., 05., 06., 07., 08., 09., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32. ], device, )?; // gradient should be // m1r1 // 1+2+5+6=14 // 3+4+7+8=22 // m1r2 // 9+10+13+14=46 // 11+12+15+16=54 // m2r1 // 17+18+21+22=78 // 19+20+23+24=86 // m2r2 // 25+26+29+30=110 // 27+28+31+32=118 let loss = y.unsqueeze(1)?.transpose(0, 1)?.matmul(&z.unsqueeze(1)?)?; let grads = loss.backward()?; let grad_x = grads.get(&x).context("no grad for x")?; assert_eq!( test_utils::to_vec3_round(&grad_x.flatten(0, 1)?, 4)?, [[[14_f32, 22.], [46., 54.]], [[78., 86.], [110., 118.]]] ); Ok(()) } fn binary_grad(device: &Device) -> Result<()> { let x = Var::new(&[3f32, 1., -4., -1.], device)?; let x = x.as_tensor(); // leaky relu let y = x.maximum(&(x * 0.1)?)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(x.to_vec1::<f32>()?, [3., 1., -4., -1.]); assert_eq!(y.to_vec1::<f32>()?, [3., 1., -0.4, -0.1]); assert_eq!(grad_x.to_vec1::<f32>()?, [1., 1., 0.1, 0.1]); let y = x.minimum(&(x * 0.1)?)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [0.3, 0.1, -4., -1.]); assert_eq!(grad_x.to_vec1::<f32>()?, [0.1, 0.1, 1., 1.]); // This one is easy to mess up, we want the gradient to be one as it is the identity function. 
let y = x.minimum(x)?; let grads = y.backward()?; let grad_x = grads.get(x).context("no grad for x")?; assert_eq!(y.to_vec1::<f32>()?, [3., 1., -4., -1.]); assert_eq!(grad_x.to_vec1::<f32>()?, [1., 1., 1., 1.]); let x_var = Var::new(&[3f32, 1., -4., -1., 5., 9.], device)?; let x = x_var.as_tensor(); let y_var = Var::new(&[2f32, 7., 1.], device)?; let y = y_var.as_tensor(); let ss = x .reshape((2, 3))? .slice_scatter0(&y.reshape((1, 3))?, 1)? .sqr()?; let grads = ss.backward()?; let grad_x = grads.get(x).context("no grad for x")?; let grad_y = grads.get(y).context("no grad for y")?; assert_eq!(ss.to_vec2::<f32>()?, [[9., 1., 16.], [4., 49., 1.]]); assert_eq!(grad_x.to_vec1::<f32>()?, [6.0, 2.0, -8.0, 0.0, 0.0, 0.0]); assert_eq!(grad_y.to_vec1::<f32>()?, [4.0, 14.0, 2.0]); Ok(()) } test_device!( simple_grad, simple_grad_cpu, simple_grad_gpu, simple_grad_metal ); test_device!(sum_grad, sum_grad_cpu, sum_grad_gpu, sum_grad_metal); test_device!( matmul_grad, matmul_grad_cpu, matmul_grad_gpu, matmul_grad_metal ); test_device!( grad_descent, grad_descent_cpu, grad_descent_gpu, grad_descent_metal ); test_device!(unary_grad, unary_grad_cpu, unary_grad_gpu, unary_grad_metal); test_device!( binary_grad, binary_grad_cpu, binary_grad_gpu, binary_grad_metal );
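The gradient tests above all follow the same pattern: wrap data in a `Var`, build an expression, call `backward()`, then read per-variable gradients out of the returned store. The sketch below is not part of the test file; it assumes `candle_core` and `anyhow` are available as dependencies and simply mirrors the `grad_descent` test as a standalone program.

```
// Minimal autograd sketch mirroring the grad_descent test above:
// minimize (x - 4.2)^2 by plain gradient descent on a scalar Var.
use candle_core::{Device, Var};

fn main() -> anyhow::Result<()> {
    let device = Device::Cpu;
    let x = Var::new(0f32, &device)?;
    let learning_rate = 0.1;
    for _step in 0..100 {
        let xt = x.as_tensor();
        // loss = (x - 4.2)^2, so d(loss)/dx = 2 * (x - 4.2)
        let loss = ((xt - 4.2)? * (xt - 4.2)?)?;
        let grads = loss.backward()?;
        let x_grad = grads.get(&x).expect("no gradient recorded for x");
        x.set(&(xt - x_grad * learning_rate)?)?;
    }
    // After 100 steps x should sit very close to the minimum at 4.2.
    println!("x = {}", x.to_scalar::<f32>()?);
    Ok(())
}
```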
2
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/indexing_tests.rs
use anyhow::Result; use candle_core::{Device, IndexOp, Tensor}; #[test] fn integer_index() -> Result<()> { let dev = Device::Cpu; let tensor = Tensor::arange(0u32, 2 * 3, &dev)?.reshape((2, 3))?; let result = tensor.i(1)?; assert_eq!(result.dims(), &[3]); assert_eq!(result.to_vec1::<u32>()?, &[3, 4, 5]); let result = tensor.i((.., 2))?; assert_eq!(result.dims(), &[2]); assert_eq!(result.to_vec1::<u32>()?, &[2, 5]); Ok(()) } #[test] fn range_index() -> Result<()> { let dev = Device::Cpu; // RangeFull let tensor = Tensor::arange(0u32, 2 * 3, &dev)?.reshape((2, 3))?; let result = tensor.i(..)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[0, 1, 2], [3, 4, 5]]); // Range let tensor = Tensor::arange(0u32, 4 * 3, &dev)?.reshape((4, 3))?; let result = tensor.i(1..3)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[3, 4, 5], [6, 7, 8]]); // RangeFrom let result = tensor.i(2..)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[6, 7, 8], [9, 10, 11]]); // RangeTo let result = tensor.i(..2)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[0, 1, 2], [3, 4, 5]]); // RangeInclusive let result = tensor.i(1..=2)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[3, 4, 5], [6, 7, 8]]); // RangeTo let result = tensor.i(..1)?; assert_eq!(result.dims(), &[1, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[0, 1, 2]]); // RangeToInclusive let result = tensor.i(..=1)?; assert_eq!(result.dims(), &[2, 3]); assert_eq!(result.to_vec2::<u32>()?, &[[0, 1, 2], [3, 4, 5]]); // Empty range let result = tensor.i(1..1)?; assert_eq!(result.dims(), &[0, 3]); let empty: [[u32; 3]; 0] = []; assert_eq!(result.to_vec2::<u32>()?, &empty); // Similar to PyTorch, allow empty ranges when the computed length is negative. #[allow(clippy::reversed_empty_ranges)] let result = tensor.i(1..0)?; assert_eq!(result.dims(), &[0, 3]); let empty: [[u32; 3]; 0] = []; assert_eq!(result.to_vec2::<u32>()?, &empty); Ok(()) } #[test] fn index_3d() -> Result<()> { let tensor = Tensor::from_iter(0..24u32, &Device::Cpu)?.reshape((2, 3, 4))?; assert_eq!(tensor.i((0, 0, 0))?.to_scalar::<u32>()?, 0); assert_eq!(tensor.i((1, 0, 0))?.to_scalar::<u32>()?, 12); assert_eq!(tensor.i((0, 1, 0))?.to_scalar::<u32>()?, 4); assert_eq!(tensor.i((0, 1, 3))?.to_scalar::<u32>()?, 7); assert_eq!(tensor.i((0..2, 0, 0))?.to_vec1::<u32>()?, &[0, 12]); assert_eq!( tensor.i((0..2, .., 0))?.to_vec2::<u32>()?, &[[0, 4, 8], [12, 16, 20]] ); assert_eq!( tensor.i((..2, .., 3))?.to_vec2::<u32>()?, &[[3, 7, 11], [15, 19, 23]] ); assert_eq!(tensor.i((1, .., 3))?.to_vec1::<u32>()?, &[15, 19, 23]); Ok(()) } #[test] fn slice_assign() -> Result<()> { let dev = Device::Cpu; let tensor = Tensor::arange(0u32, 4 * 5, &dev)?.reshape((4, 5))?; let src = Tensor::arange(0u32, 2 * 3, &dev)?.reshape((3, 2))?; let out = tensor.slice_assign(&[1..4, 3..5], &src)?; assert_eq!( out.to_vec2::<u32>()?, &[ [0, 1, 2, 3, 4], [5, 6, 7, 0, 1], [10, 11, 12, 2, 3], [15, 16, 17, 4, 5] ] ); let out = tensor.slice_assign(&[0..3, 0..2], &src)?; assert_eq!( out.to_vec2::<u32>()?, &[ [0, 1, 2, 3, 4], [2, 3, 7, 8, 9], [4, 5, 12, 13, 14], [15, 16, 17, 18, 19] ] ); Ok(()) }
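As a quick illustration of the `IndexOp` API these tests cover, the following sketch (assuming `candle_core` and `anyhow` as dependencies) mixes integer indices, full ranges, and sub-ranges on a 3D tensor.

```
use candle_core::{Device, IndexOp, Tensor};

fn main() -> anyhow::Result<()> {
    let dev = Device::Cpu;
    let t = Tensor::arange(0u32, 24u32, &dev)?.reshape((2, 3, 4))?;
    // Integer indices on the first two dims, keep the last one: shape [4].
    let row = t.i((0, 1, ..))?;
    // Keep the first two dims, fix the last one: shape [2, 3].
    let col = t.i((.., .., 2))?;
    // Mix an integer with sub-ranges: shape [2, 2].
    let block = t.i((1, 0..2, 1..3))?;
    assert_eq!(row.to_vec1::<u32>()?, [4, 5, 6, 7]);
    assert_eq!(col.dims(), &[2, 3]);
    assert_eq!(block.dims(), &[2, 2]);
    Ok(())
}
```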
3
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/pth_tests.rs
/// Regression test for pth files not loading on Windows. #[test] fn test_pth() { let tensors = candle_core::pickle::PthTensors::new("tests/test.pt", None).unwrap(); tensors.get("test").unwrap().unwrap(); } #[test] fn test_pth_with_key() { let tensors = candle_core::pickle::PthTensors::new("tests/test_with_key.pt", Some("model_state_dict")) .unwrap(); tensors.get("test").unwrap().unwrap(); } #[test] fn test_pth_fortran_contiguous() { let tensors = candle_core::pickle::PthTensors::new("tests/fortran_tensor_3d.pth", None).unwrap(); let tensor = tensors.get("tensor_fortran").unwrap().unwrap(); assert_eq!(tensor.dims3().unwrap(), (2, 3, 4)); assert_eq!( tensor.to_vec3::<i64>().unwrap(), [ [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]] ] ); }
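Outside of the test fixtures above, the same `PthTensors` API can be used to inspect an arbitrary PyTorch checkpoint. The sketch below assumes `candle_core` and `anyhow`; the file path and tensor name are placeholders rather than files shipped with the repository, and the second argument to `new` optionally selects a nested key, as `Some("model_state_dict")` does in the test above.

```
fn main() -> anyhow::Result<()> {
    // Load the pickle and look up a tensor by name.
    let tensors = candle_core::pickle::PthTensors::new("path/to/checkpoint.pt", None)?;
    if let Some(t) = tensors.get("some_tensor_name")? {
        println!("loaded a tensor with shape {:?}", t.dims());
    } else {
        println!("tensor not found in the checkpoint");
    }
    Ok(())
}
```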
4
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/conv_tests.rs
use anyhow::Result; use candle_core::{test_device, test_utils, Device, IndexOp, Tensor}; /* This test is based on the following script. import torch torch.manual_seed(4242) t = torch.randn((1, 4, 5)) w = torch.randn((2, 4, 3)) print(t.flatten()) print(w.flatten()) res = torch.nn.functional.conv1d(t, w) print(res.flatten()) res = torch.nn.functional.conv1d(t, w, padding=1) print(res.flatten()) w_t = w.transpose(0, 1) res = torch.nn.functional.conv_transpose1d(t, w_t) print(res.shape) print(res) res = torch.nn.functional.conv_transpose1d(t, w_t, groups=2) print(res.shape) print(res) */ fn conv1d(dev: &Device) -> Result<()> { let t = Tensor::new( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, 1.2279, -0.9287, -1.7030, 0.1370, 0.1866, 0.4145, 1.8025, -0.1536, 2.2013, -0.6836, 0.2477, 1.3127, -0.6957, 0.3278, -1.0124, 0.5599, ], dev, )? .reshape((1, 4, 5))?; let w = Tensor::new( &[ -0.8404f32, -0.3490, 0.0130, 1.3123, 0.1763, -1.9249, 1.4270, 0.9421, 0.8670, -0.7181, -1.1111, 0.8869, -1.2429, 1.8357, 1.6052, -1.3844, 0.3951, -1.2036, 0.6686, 1.6261, -0.6451, -0.0840, -1.4247, 0.5512, ], dev, )? .reshape((2, 4, 3))?; let res = t.conv1d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 2, 3]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [2.6357, -1.3336, 4.1393, -1.1784, 3.5675, 0.5069] ); let res = t.conv1d(&w, /*padding*/ 1, 1, 1, 1)?; assert_eq!(res.dims(), [1, 2, 5]); // Same as pytorch default padding: use zeros. assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [2.4509, 2.6357, -1.3336, 4.1393, 0.5657, 1.8091, -1.1784, 3.5675, 0.5069, 3.3352] ); let w = w.transpose(0, 1)?; // The CPU kernels applied in the contiguous and non contiguous cases are different. for w in [w.clone(), w.contiguous()?] { let res = t.conv_transpose1d(&w, 0, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 2, 7]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [ 0.0699, -1.2899, 8.3018, 5.5873, 2.4572, -2.6143, -0.0706, 1.8765, 4.8318, 1.1538, 4.7076, -5.9745, -0.8276, 1.621 ], ); let res = t.conv_transpose1d(&w, 0, 0, 1, 1, 2)?; assert_eq!(res.dims(), [1, 4, 7]); assert_eq!( test_utils::to_vec2_round(&res.squeeze(0)?, 4)?, [ [-1.5596, -1.8099, 2.0407, 4.8764, -0.1743, -0.735, -0.7819], [0.7816, 3.8152, -0.5926, 2.2515, -5.1844, -0.3157, 1.4721], [1.6295, 0.52, 6.2611, 0.7109, 2.6315, -1.8793, 0.7113], [1.0949, 1.0166, 1.7464, 2.4561, -0.79, -0.5119, 0.1488] ] ); } Ok(()) } fn conv1d_small(dev: &Device) -> Result<()> { let t = Tensor::new(&[0.4056f32, -0.8689, -0.0773, -1.5630], dev)?.reshape((1, 1, 4))?; let w = Tensor::new(&[1f32, 0., 0.], dev)?.reshape((1, 1, 3))?; let res = t.conv1d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 2]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [0.4056, -0.8689] ); let res = t.conv1d(&w, /*padding*/ 1, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 4]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [0.0, 0.4056, -0.8689, -0.0773], ); Ok(()) } /* This test is based on the following script. 
import torch torch.manual_seed(4242) t = torch.randn((1, 4, 5, 5)) w = torch.randn((2, 4, 3, 3)) print(t.flatten()) print(w.flatten()) res = torch.nn.functional.conv2d(t, w) print(res.flatten()) w_t = w.transpose(0, 1) res = torch.nn.functional.conv_transpose2d(t, w_t) print(res.shape) print(res) res = torch.nn.functional.conv2d(t, w, dilation=2) print(res.shape) print(res[0]) res = torch.nn.functional.conv_transpose2d(t, w_t, dilation=2) print(res.shape) print(res) */ fn conv2d(dev: &Device) -> Result<()> { let t = Tensor::new( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, 0.0823, 0.3526, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.6047, 0.3770, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, -0.2260, 0.2622, -1.2974, -0.8140, -0.8404, -0.3490, 0.0130, 1.3123, 1.7569, -0.3956, -1.8255, 0.1727, -0.3538, 2.6941, 1.0529, 0.4219, -0.2071, 1.1586, 0.4717, 0.3865, -0.5690, -0.5010, -0.1310, 0.7796, 0.6630, -0.2021, 2.6090, 0.2049, 0.6466, -0.5042, -0.0603, -1.6538, -1.2429, 1.8357, 1.6052, -1.3844, 0.3323, -1.3712, 0.9634, -0.4799, -0.6451, -0.0840, -1.4247, 0.5512, -0.1747, -0.5509, -0.3742, 0.3790, -0.4431, -0.4720, -0.7890, 0.2620, 0.7875, 0.5377, -0.6779, -0.8088, 1.9098, 1.2006, -0.8, -0.4983, 1.5480, 0.8265, -0.1025, 0.5138, 0.5748, 0.3821, -0.4607, 0.0085, ], dev, )?; let w = Tensor::new( &[ -0.9325f32, 0.6451, -0.8537, 0.2378, 0.8764, -0.1832, 0.2987, -0.6488, -0.2273, -2.4184, -0.1192, -0.4821, -0.5079, -0.5766, -2.4729, 1.6734, 0.4558, 0.2851, 1.1514, -0.9013, 1.0662, -0.1817, -0.0259, 0.1709, 0.5367, 0.7513, 0.8086, -2.2586, -0.5027, 0.9141, -1.3086, -1.3343, -1.5669, -0.1657, 0.7958, 0.1432, 0.3896, -0.4501, 0.1667, 0.0714, -0.0952, 1.2970, -0.1674, -0.3178, 1.0677, 0.3060, 0.7080, 0.1914, 1.1679, -0.3602, 1.9265, -1.8626, -0.5112, -0.0982, 0.2621, 0.6565, 0.5908, 1.0089, -0.1646, 1.8032, -0.6286, 0.2016, -0.3370, 1.2555, 0.8009, -0.6488, -0.4652, -1.5685, 1.5860, 0.5583, 0.4623, 0.6026, ], dev, )?; let t = t.reshape((1, 4, 5, 5))?; let w = w.reshape((2, 4, 3, 3))?; let res = t.conv2d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 2, 3, 3]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [ -4.2812, 2.0923, 5.2187, 7.5184, 0.752, -14.9426, 10.0087, 4.391, 0.2918, 1.6715, 10.389, 3.6023, -4.2808, 0.2672, 5.3646, -5.2023, -2.1955, -9.4075 ] ); let res = t.conv_transpose2d(&w.transpose(0, 1)?, 0, 0, 1, 1)?; assert_eq!(res.dims(), [1, 2, 7, 7]); assert_eq!( test_utils::to_vec3_round(&res.i(0)?, 4)?, [ [ [-1.9918, 2.6797, -0.4599, -1.6037, 1.4131, -2.4012, 2.9277], [1.8016, -3.5361, 1.0757, 3.5395, -8.2168, -3.2023, 0.5375], [0.8243, 1.8675, 7.8929, -4.0746, -6.4415, 5.1139, 1.6889], [0.2722, 8.9679, 3.3477, 1.8514, -4.2896, -3.8228, -7.5632], [-8.5412, -5.8142, -7.1587, -1.6095, 0.4651, 0.2748, -2.0985], [2.0833, -0.6482, -12.1692, -4.1284, -2.9765, -0.0656, -4.5114], [5.307, 2.6957, 2.3087, 1.0478, 0.7808, -1.1519, -0.9579] ], [ [1.089, 0.1872, -0.6408, -0.9897, 0.8503, 1.1019, -0.9211], [-0.1741, -0.2915, 4.2472, 1.9417, 1.65, 0.6303, -4.7131], [1.6555, 2.4026, -2.9293, 2.9953, 0.5328, 3.5873, -0.9621], [-1.4289, -3.2787, 4.1747, -6.0341, -4.6341, -5.7945, 4.142], [7.5973, 6.4431, 5.9872, 2.1639, -8.6566, 3.3143, -3.4059], [-0.8775, -3.048, 11.6543, 0.6442, 2.3218, -0.4765, 1.1516], [-5.5423, -2.5188, 1.0754, -0.0563, -2.9386, -1.1504, 1.0171] ] ] ); // Dilations. 
let res = t.conv2d(&w, 0, 1, 2, 1)?; assert_eq!(res.dims(), [1, 2, 1, 1]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [2.45, -2.3504], ); // Transpose and dilations. let res = t.conv_transpose2d(&w.transpose(0, 1)?, 0, 0, 1, 2)?; assert_eq!(res.dims(), [1, 2, 9, 9]); assert_eq!( test_utils::to_vec3_round(&res.i(0)?, 4)?, [ [ [-1.9918, 3.1652, -0.6778, -4.3442, 4.4351, 0.6652, -3.0124, -0.6031, 2.9277], [2.7036, -1.7156, -0.3969, 1.0516, 1.6381, -2.8886, -0.205, 2.4682, -1.0499], [-0.9459, 3.1631, 3.707, -4.8369, -8.5166, -1.4496, -2.7559, -3.2698, 1.4376], [-0.2157, 3.7786, -2.0252, -4.2633, 3.6731, -1.5142, 5.9391, -0.2622, -0.141], [-6.8121, -3.1744, 1.5945, 3.0637, -9.6088, 1.4446, 2.9489, -3.0082, -7.3822], [0.2371, 3.3303, 0.3861, 2.2646, -4.6784, 4.1235, -0.0109, 0.3176, -0.03], [-2.5339, -2.9564, -3.4518, -4.4594, -9.1873, -1.9709, -0.4676, 0.51, -3.5024], [4.007, 0.3067, -2.2954, 1.1105, -0.1992, 1.6372, -2.9268, 0.2807, -1.2787], [5.307, 1.1317, 1.3518, 0.9049, 3.8116, -0.4075, -0.8874, -0.2241, -0.9579] ], [ [1.089, -0.6483, 0.0726, -0.4752, -1.3283, 1.7103, 1.0703, 0.1076, -0.9211], [-0.8629, 0.1376, 0.3202, 2.0955, 0.9696, 2.8988, -1.0012, 1.5049, -0.1278], [1.9286, -1.5255, -2.9563, 2.4589, 3.3611, -0.6951, 0.3525, -1.7724, -5.9861], [1.1226, 2.1561, 3.6417, 4.7546, -0.692, 4.4126, -5.1902, 6.0805, 2.3185], [1.0111, 0.3604, 0.6432, -3.6605, 7.9517, -9.2955, -5.2988, -3.7803, -2.0642], [3.3172, -1.7967, -3.6576, -2.0942, 1.3158, 0.112, -1.7405, 2.9167, 0.7957], [5.1001, 1.8995, -1.8639, 1.1262, 9.9629, 2.683, -3.6319, -1.1607, 0.5856], [-4.8445, -0.5642, 4.2317, 0.0856, 1.2267, -0.5712, 1.736, 1.0997, 0.6908], [-5.5423, -1.1831, -1.2176, 0.0843, 0.0446, -0.7545, -2.4798, -0.0827, 1.0171] ] ] ); Ok(()) } /* This test is based on the following script. 
import torch torch.manual_seed(4242) t = torch.randn((1, 2, 3, 3)) w = torch.randn((1, 2, 1, 1)) print(t.flatten()) print(w.flatten()) res = torch.nn.functional.conv2d(t, w) print(res.flatten()) w_t = w.transpose(0, 1) res = torch.nn.functional.conv_transpose2d(t, w_t) print(res.shape) print(res.flatten()) t_t = w.transpose(0, 1) res = torch.nn.functional.conv_transpose2d(t_t, w) print(res.shape) print(res.flatten()) */ fn conv2d_small(dev: &Device) -> Result<()> { let t = Tensor::new( &[ 0.4056f32, -0.8689, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.1866, 0.4145, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, -0.6957, 0.3278, ], dev, )?; let w = Tensor::new(&[-0.9259f32, 1.3017], dev)?; let t = t.reshape((1, 2, 3, 3))?; let w = w.reshape((1, 2, 1, 1))?; let res = t.conv2d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 3, 3]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [0.164, -0.0111, -0.1742, 2.6437, -2.0268, 1.1823, 3.2855, -1.0324, 0.2539] ); let res = t.conv2d(&w, 2, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 7, 7]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1640, -0.0111, -0.1742, 0.0, 0.0, 0.0, 0.0, 2.6437, -2.0268, 1.1823, 0.0, 0.0, 0.0, 0.0, 3.2855, -1.0324, 0.2539, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ] ); let res = t.conv_transpose2d(&w.transpose(0, 1)?, 0, 0, 1, 1)?; assert_eq!(res.dims(), [1, 1, 3, 3]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [0.164, -0.0111, -0.1742, 2.6437, -2.0268, 1.1823, 3.2855, -1.0324, 0.2539], ); let res = t.transpose(0, 1)?.conv_transpose2d(&w, 0, 0, 1, 1)?; assert_eq!(res.dims(), [2, 2, 3, 3]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [ -0.3755, 0.8045, -0.6336, -0.2218, -1.1369, 0.8599, 1.5768, -0.1268, -0.1728, 0.528, -1.131, 0.8908, 0.3118, 1.5984, -1.2089, -2.2168, 0.1783, 0.2429, -0.3838, 0.5802, -0.3268, -2.0382, 0.6329, -0.2293, -1.2154, 0.6441, -0.3035, 0.5396, -0.8156, 0.4594, 2.8654, -0.8898, 0.3224, 1.7087, -0.9056, 0.4267 ] ); Ok(()) } fn conv2d_smaller(dev: &Device) -> Result<()> { let t = Tensor::new( &[ 0.4056f32, -0.8689, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.1866, ], dev, )?; let w = Tensor::new(&[1f32, 1., 1., 1., 1., 1., 1., 1., 1.], dev)?; let t = t.reshape((1, 1, 3, 3))?; let w = w.reshape((1, 1, 3, 3))?; let res = t.conv2d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 1, 1]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [-0.6197] ); Ok(()) } /* This test is based on the following script. 
import torch torch.manual_seed(4242) t = torch.randn((1, 2, 4, 2)) w = torch.randn((1, 2, 1, 1)) print(t.flatten()) print(w.flatten()) res = torch.nn.functional.conv2d(t, w) print(res.flatten()) */ fn conv2d_non_square(dev: &Device) -> Result<()> { let t = Tensor::new( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, ], dev, )?; let w = Tensor::new(&[-1.1351f32, 1.3841], dev)?; let t = t.reshape((1, 2, 4, 2))?; let w = w.reshape((1, 2, 1, 1))?; let res = t.conv2d(&w, 0, 1, 1, 1)?; assert_eq!(res.dims(), [1, 1, 4, 2]); assert_eq!( test_utils::to_vec1_round(&res.flatten_all()?, 4)?, [0.2312, 5.2238, 2.3772, 1.9076, 2.0256, -0.5776, -1.6028, -1.467] ); Ok(()) } /* import torch torch.manual_seed(4242) t = torch.randn((1, 4, 5, 5), requires_grad=True) w = torch.randn((2, 4, 3, 3), requires_grad=True) print(t.flatten()) print(w.flatten()) res = torch.nn.functional.conv2d(t, w) print(res.flatten()) loss = (res ** 2).sum() print(loss) loss.backward() print(t.grad.shape) print(t.grad.flatten()) print(w.grad.shape) print(w.grad.flatten()) t.grad.zero_() w.grad.zero_() res = torch.nn.functional.conv2d(t, w, stride=2) print(res.flatten()) loss = (res ** 2).sum() print(loss) loss.backward() print(t.grad.shape) print(t.grad[0]) print(w.grad.shape) print(w.grad[0]) */ fn conv2d_grad(dev: &Device) -> Result<()> { // conv-transposes are not implemented for metal use candle_core::Var; let t = Var::from_slice( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, 0.0823, 0.3526, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.6047, 0.3770, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, -0.2260, 0.2622, -1.2974, -0.8140, -0.8404, -0.3490, 0.0130, 1.3123, 1.7569, -0.3956, -1.8255, 0.1727, -0.3538, 2.6941, 1.0529, 0.4219, -0.2071, 1.1586, 0.4717, 0.3865, -0.5690, -0.5010, -0.1310, 0.7796, 0.6630, -0.2021, 2.6090, 0.2049, 0.6466, -0.5042, -0.0603, -1.6538, -1.2429, 1.8357, 1.6052, -1.3844, 0.3323, -1.3712, 0.9634, -0.4799, -0.6451, -0.0840, -1.4247, 0.5512, -0.1747, -0.5509, -0.3742, 0.3790, -0.4431, -0.4720, -0.7890, 0.2620, 0.7875, 0.5377, -0.6779, -0.8088, 1.9098, 1.2006, -0.8, -0.4983, 1.5480, 0.8265, -0.1025, 0.5138, 0.5748, 0.3821, -0.4607, 0.0085, ], (1, 4, 5, 5), dev, )?; let w = Var::from_slice( &[ -0.9325f32, 0.6451, -0.8537, 0.2378, 0.8764, -0.1832, 0.2987, -0.6488, -0.2273, -2.4184, -0.1192, -0.4821, -0.5079, -0.5766, -2.4729, 1.6734, 0.4558, 0.2851, 1.1514, -0.9013, 1.0662, -0.1817, -0.0259, 0.1709, 0.5367, 0.7513, 0.8086, -2.2586, -0.5027, 0.9141, -1.3086, -1.3343, -1.5669, -0.1657, 0.7958, 0.1432, 0.3896, -0.4501, 0.1667, 0.0714, -0.0952, 1.2970, -0.1674, -0.3178, 1.0677, 0.3060, 0.7080, 0.1914, 1.1679, -0.3602, 1.9265, -1.8626, -0.5112, -0.0982, 0.2621, 0.6565, 0.5908, 1.0089, -0.1646, 1.8032, -0.6286, 0.2016, -0.3370, 1.2555, 0.8009, -0.6488, -0.4652, -1.5685, 1.5860, 0.5583, 0.4623, 0.6026, ], (2, 4, 3, 3), dev, )?; let res = t.conv2d(&w, 0, 1, 1, 1)?; let loss = res.sqr()?.sum_all()?; assert_eq!(test_utils::to_vec0_round(&loss, 2)?, 741.12f32); let grads = loss.backward()?; let grad_t = grads.get(&t).unwrap(); let grad_w = grads.get(&w).unwrap(); assert_eq!(grad_t.dims(), [1, 4, 5, 5]); assert_eq!(grad_w.dims(), [2, 4, 3, 3]); assert_eq!( test_utils::to_vec1_round(&grad_t.flatten_all()?, 2)?, [ 9.29, -2.84, -5.71, 3.38, -7.71, -19.15, 7.02, 29.1, 9.34, 34.73, -22.87, 24.35, -39.88, -14.01, 21.08, 9.94, 
13.63, -34.68, 11.21, -6.26, 7.72, -6.32, -16.64, -1.08, -20.22, 21.73, -0.37, -4.06, 5.82, -3.65, -30.73, 14.55, 87.7, 31.6, 4.53, -89.78, -75.37, -57.43, -7.56, 92.96, 18.79, -4.63, -159.75, -42.47, -47.26, 52.88, 37.32, 49.0, 12.82, 2.01, -8.98, 20.18, 16.62, 12.06, 15.38, 20.0, 2.57, -15.22, 72.62, -10.75, 2.25, -31.2, 3.75, -0.2, 9.76, -0.68, 5.21, -40.44, -22.59, -61.61, 17.28, 20.41, 37.55, 5.23, 6.81, 23.54, 23.62, -9.99, -9.13, 4.87, -35.06, -26.1, 63.48, 25.81, -39.21, -70.68, -46.96, 2.33, 41.81, 82.42, -28.63, -11.78, -35.33, -10.28, -28.57, -9.13, 7.21, -9.05, -9.62, -11.25 ] ); assert_eq!( test_utils::to_vec1_round(&grad_w.flatten_all()?, 2)?, [ -28.92, -22.88, -141.23, 73.35, 61.07, 47.81, -20.0, -73.71, -41.82, -13.59, 21.5, 28.72, 28.57, -46.85, -90.19, 143.61, 16.68, 7.43, 18.88, -90.81, -20.29, 54.79, 82.63, 22.94, 77.81, -16.39, -13.2, 9.34, -40.39, -26.62, 5.33, -60.91, 9.09, -59.37, 7.08, 58.64, 5.55, 20.52, 2.5, -17.25, -6.8, 22.21, 30.15, -7.52, -37.46, 5.67, 22.58, 9.03, 47.05, 17.61, 37.31, -98.13, -14.61, -4.8, -6.36, 44.69, 23.34, 8.37, -13.52, 80.05, -34.24, -16.36, -12.31, 1.92, -33.62, -14.1, -49.23, -7.39, 11.5, -9.98, 9.66, 29.6 ] ); // Same as before but with stride. let res = t.conv2d(&w, 0, 2, 1, 1)?; let loss = res.sqr()?.sum_all()?; assert_eq!(test_utils::to_vec0_round(&loss, 2)?, 277.16f32); let grads = loss.backward()?; let grad_t = grads.get(&t).unwrap(); let grad_w = grads.get(&w).unwrap(); assert_eq!(grad_t.dims(), [1, 4, 5, 5]); assert_eq!(grad_w.dims(), [2, 4, 3, 3]); assert_eq!( test_utils::to_vec3_round(&grad_t.i(0)?, 2)?, [ [ [9.29, -7.03, 0.94, 3.49, -7.71], [-1.8, -7.82, 8.9, 8.46, 7.43], [-25.84, 22.09, -19.27, -0.22, 1.69], [4.02, 18.53, -18.37, 2.3, -24.51], [7.72, -9.68, -12.34, 5.6, -20.22] ], [ [21.73, 3.39, -18.27, 3.86, -3.65], [8.25, 3.73, 30.73, -8.61, -11.93], [-72.15, -15.36, -17.53, -12.32, -1.61], [-22.32, -7.79, -91.82, 6.44, -37.69], [52.88, 14.44, 42.75, 9.88, 2.01] ], [ [-8.98, 9.91, 6.75, -4.68, 15.38], [4.93, -0.33, 9.94, -1.46, 14.78], [13.62, -30.63, 3.96, -3.58, -4.48], [-14.13, 1.19, -34.43, 3.08, -33.83], [17.28, 12.94, 31.83, -3.35, 6.81] ], [ [23.54, 6.98, -24.52, 0.52, 4.87], [9.65, 6.18, 1.71, -25.23, -4.93], [-54.99, -23.66, 3.19, -3.73, 18.58], [-21.35, -10.39, -39.88, 28.73, -30.76], [-9.13, 11.12, -14.0, -8.23, -11.25] ] ] ); assert_eq!( test_utils::to_vec3_round(&grad_w.i(0)?, 2)?, [ [ [28.34, -7.91, -45.75], [21.03, 3.86, 29.86], [0.72, -36.58, -35.28] ], [ [-16.04, 11.53, -16.38], [29.62, -16.32, -48.35], [57.5, 28.29, 25.81] ], [ [2.93, -19.6, 1.57], [27.15, 53.88, -24.64], [12.74, -22.6, -26.2] ], [ [-0.18, -14.86, -6.82], [-19.55, -2.72, 45.9], [-2.54, 36.97, 27.11] ] ] ); // Replicate the issue from https://github.com/huggingface/candle/issues/1212 let res = t.i((.., .., 0..4, 0..4))?.conv2d(&w, 0, 2, 1, 1)?; let loss = res.sqr()?.sum_all()?; assert_eq!(test_utils::to_vec0_round(&loss, 2)?, 21.12f32); let grads = loss.backward()?; let grad_t = grads.get(&t).unwrap(); let grad_w = grads.get(&w).unwrap(); assert_eq!(grad_t.dims(), [1, 4, 5, 5]); assert_eq!(grad_w.dims(), [2, 4, 3, 3]); assert_eq!( test_utils::to_vec3_round(&grad_t.i(0)?, 2)?, [ [ [9.29, -7.03, 7.87, 0.0, 0.0], [-1.8, -7.82, 5.9, 0.0, 0.0], [-3.12, 4.49, 5.52, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0] ], [ [21.73, 3.39, 4.77, 0.0, 0.0], [8.25, 3.73, 27.61, 0.0, 0.0], [-20.55, -5.61, -2.77, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0] ], [ [-8.98, 9.91, -7.15, 0.0, 0.0], [4.93, -0.33, 4.56, 0.0, 
0.0], [-6.7, -5.76, -8.05, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0] ], [ [23.54, 6.98, -10.0, 0.0, 0.0], [9.65, 6.18, 18.72, 0.0, 0.0], [3.29, -5.27, 0.79, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0] ] ] ); assert_eq!( test_utils::to_vec3_round(&grad_w.i(0)?, 2)?, [ [ [-3.47, 7.44, 0.66], [12.89, -3.4, -9.29], [-14.16, -0.83, 7.14] ], [ [-3.23, 5.37, -3.02], [-2.12, -11.24, 1.94], [6.97, 7.2, 2.99] ], [ [-4.04, -3.31, 4.87], [-6.68, -5.68, 1.73], [-5.54, 4.32, 0.52] ], [[-4.72, 1.5, 4.72], [3.79, 4.04, 6.76], [-4.6, 5.8, 6.93]] ] ); // Conv Transpose 2d Test //tested against following python // import torch // torch.manual_seed(4242) // padding = 4 // outpadding = 2 // dilation = 3 // stride = 3 // input = torch.randn((1, 4, 7, 5), requires_grad=True) // kernel = torch.randn((4, 2, 3, 5), requires_grad=True) // print("input", input.flatten()) // print("kernel", kernel.flatten()) // res = torch.nn.functional.conv_transpose2d( // input, // kernel, // stride=stride, // padding=padding, // dilation=dilation, // output_padding=outpadding, // ) // res.retain_grad() // print(res.shape) // loss = (res**2).sum() // print(loss) // loss.backward() // print(input.grad.shape) // print("input grad", torch.round(input.grad, decimals=1)) // print(kernel.grad.shape) // print("kernel grad", torch.round(kernel.grad.flatten(), decimals=1)) let padding = 4; let outpadding = 2; let dilation = 3; let stride = 3; let t = Var::from_slice( &[ 0.4056_f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, 0.0823, 0.3526, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.6047, 0.3770, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, -0.2260, 0.2622, -1.2974, -0.8140, -0.8404, -0.3490, 0.0130, 1.3123, 1.7569, -0.3956, -1.8255, 0.1727, -0.3538, 2.6941, 1.0529, 0.4219, -0.2071, 1.1586, 0.4717, 0.3865, -0.5690, -0.5010, -0.1310, 0.7796, 0.6630, -0.2021, 2.6090, 0.2049, 0.6466, -0.5042, -0.0603, -1.6538, -1.2429, 1.8357, 1.6052, -1.3844, 0.3323, -1.3712, 0.9634, -0.4799, -0.6451, -0.0840, -1.4247, 0.5512, -0.1747, -0.5509, -0.3742, 0.3790, -0.4431, -0.4720, -0.7890, 0.2620, 0.5411, -1.1715, -2.4997, 2.3249, -0.8912, -0.4733, -0.5701, -2.8888, -1.4112, -0.5471, -0.9234, -1.1660, 0.4189, -0.7465, -0.6473, 0.1402, 0.7875, 0.5377, -0.6779, -0.8088, -0.4864, -0.2312, 0.9279, 0.1264, 1.5480, 0.8265, -0.1025, 0.5138, -0.2512, 0.1576, 1.2705, 0.3641, -0.9325, 0.6451, -0.8537, 0.2378, 0.1794, 0.2752, -0.3687, -1.1149, -0.1410, -0.5829, -0.0892, 1.4258, -2.2789, 0.5270, 0.1825, 1.7007, -0.5263, -0.2954, 0.4440, 0.5537, 0.3492, 0.6186, 1.6475, 0.2219, ], (1, 4, 7, 5), dev, )?; #[rustfmt::skip] let w = Var::from_slice( &[ -1.1744_f32, 0.3266, 2.5893, 1.0142, 0.1763, 0.7752, 0.6604, 0.2029, -0.2145, 0.7234, -0.3441, -1.5400, -0.6333, 0.6613, 0.2083, 0.6230, -1.7002, 0.3393, 0.4049, 1.0762, 0.2723, 1.4181, 0.0029, -0.2122, 1.7668, 1.4168, 0.3320, -0.2719, 0.7932, -0.7204, 0.4447, 0.1211, 0.5908, 1.0089, -0.1646, 1.8033, -0.6286, 0.2016, -0.3370, 1.2555, 0.8009, -0.6488, -0.4652, -1.5685, 1.5860, 0.5583, 0.4623, 0.6026, 0.8828, 2.4990, 0.6811, -0.3369, 1.3320, 1.7669, -1.1067, 1.2958, -0.9415, -0.9655, -0.4462, 0.7181, 0.5181, -1.1658, -1.8467, -0.7763, 1.2769, 0.8651, 0.9890, 1.5092, 0.7207, -0.8481, 0.7417, 0.3375, -1.2685, 1.4572, 1.0915, 0.1093, -0.8550, -0.5831, -0.6309, -0.2509, 0.5220, -0.0914, 0.7900, 0.1096, 0.3258, 0.2723, -1.0942, -0.3393, -0.1653, 0.5732, -0.8014, 1.8194, -1.9023, 0.2127, 1.8636, 
-0.8979, 0.1927, -0.2778, 0.3105, 0.0071, -1.1823, 0.2476, -0.7178, -1.3821, 1.0769, -0.4376, -0.9967, -0.1227, 1.6197, -1.0604, 0.1372, 0.8141, -0.6163, 0.7304, -0.8285, 2.0636, -0.7176, 0.2495, -0.2581, -0.4478, ], (4, 2, 3, 5), dev, )?; let res = t.conv_transpose2d(&w, padding, outpadding, stride, dilation)?; let loss = res.sqr()?.sum_all()?; assert_eq!(test_utils::to_vec0_round(&loss, 0)?, 2904.0); let grads = loss.backward()?; let grad_t = grads.get(&t).unwrap(); let grad_w = grads.get(&w).unwrap(); assert_eq!(grad_t.dims(), [1, 4, 7, 5]); assert_eq!(grad_w.dims(), [4, 2, 3, 5]); assert_eq!( test_utils::to_vec1_round(&grad_w.flatten_all()?, 1)?, [ // torch gets 89.1 -89.0, -135.3, 136.7, 102.0, -53.4, 117.9, 118.6, -43.9, -218.0, -58.5, -114.3, -150.0, -15.6, 172.1, 66.3, -64.3, -27.9, -19.8, 31.7, 62.1, 5.5, 92.6, 28.2, -29.6, 55.9, 52.7, -72.7, -119.8, 53.8, -25.5, 128.8, 19.3, 68.0, 190.9, -64.1, -86.2, -111.2, 106.6, -67.7, 37.8, 115.9, 50.4, -77.7, -54.9, 22.3, -4.6, 89.8, 61.7, 122.4, 192.6, -27.8, -104.6, 57.0, 166.4, 27.1, 6.1, 18.7, -93.2, 31.5, 168.2, -3.7, -99.5, -55.5, -10.8, 17.5, 20.8, 16.9, 43.8, 42.0, -89.2, 18.8, -9.6, -84.1, 212.6, 19.7, -50.0, -52.0, -40.0, -166.6, -73.2, -10.8, -73.3, 31.5, -23.4, -79.3, -27.0, -84.4, -42.9, -20.3, 51.8, -16.7, 76.3, -120.5, -65.8, 96.5, -10.7, -45.9, -88.1, 65.4, -7.0, -1.5, 92.8, -25.1, -114.2, -5.8, -14.8, -51.2, -20.7, 54.2, -79.8, 47.7, -29.2, -8.8, 53.5, -28.4, 85.0, -18.3, 107.0, 28.3, -71.8 ] ); assert_eq!( test_utils::to_vec3_round(&grad_t.i(0)?, 1)?, [ [ [32.3, -41.6, -24.0, 14.1, 17.6], [-11.8, 72.5, 87.6, 46.4, 61.5], [115.0, 108.5, -48.6, -63.4, -50.0], [51.3, 5.4, 31.3, 91.1, -30.9], [52.7, 92.8, -68.0, -47.0, 83.0], // pytorch gets -107.1 [-10.2, -107.0, -5.4, 213.1, -31.4], [-2.4, 65.1, 9.2, -146.2, -24.2] ], [ [-72.6, -63.9, -61.9, 45.3, 33.0], [79.3, -0.5, -26.2, 78.2, 42.7], [90.9, 141.6, 40.1, -62.7, 37.0], [32.8, 198.2, -0.8, -31.1, 27.3], // torch gets 48.0 [34.5, 34.9, -47.9, 127.6, -12.3], [-61.4, -3.2, -2.9, -10.9, -16.6], [74.6, 60.1, -68.9, 34.5, -50.4] ], [ [37.5, -56.9, -43.6, -13.5, -9.9], [40.0, 97.3, 28.6, 14.2, -30.1], [-22.3, -126.3, -68.8, -8.2, 26.1], [-32.9, 37.3, 108.5, -54.8, 29.6], [34.9, -176.9, -125.0, -28.3, -13.9], [-54.9, 142.6, 62.1, -80.4, -65.6], [7.4, -91.1, -67.6, 35.0, 39.7] ], [ [-57.2, -40.9, -10.1, 32.6, 29.4], [18.7, -18.0, 29.5, -1.2, 59.2], [-14.0, -74.4, 19.8, -117.0, 58.2], [-21.8, 163.5, -71.1, -99.0, 80.9], [-58.9, -10.9, 93.8, -139.6, 98.0], // torch gets 54.5 [-54.4, 135.3, 6.0, -79.1, 134.6], [27.5, -76.0, 43.4, -2.8, -7.8] ] ] ); // Test the same, but then with the following properties, t & w are unmodified. 
let padding = 1; let outpadding = 1; let dilation = 1; let stride = 2; let res = t.conv_transpose2d(&w, padding, outpadding, stride, dilation)?; let loss = res.sqr()?.sum_all()?; assert_eq!(test_utils::to_vec0_round(&loss, 0)?, 3627.0); // torch gives 3626.8560 let grads = loss.backward()?; let grad_t = grads.get(&t).unwrap(); let grad_w = grads.get(&w).unwrap(); assert_eq!(grad_t.dims(), [1, 4, 7, 5]); assert_eq!(grad_w.dims(), [4, 2, 3, 5]); #[rustfmt::skip] assert_eq!( test_utils::to_vec3_round(&grad_t.i(0)?, 1)?, [ [ [ 13.2, -40.7, -9.7, -47.3, -82.7], [ -98.2, 9.7, 57.7, -6.2, 180.7], [ 100.2, 24.1, 3.7, -100.5, -48.1], [ -0.3, 13.5, -2.9, 80.0, -49.8], [ 47.2, -25.6, -74.4, 61.2, -18.4], [ 4.6, -69.5, 27.9, 66.5, -88.1], // 4th column on next row; torch is 4.2 [ -12.0, 79.2, -40.0, 4.1, -97.1], ], [ [ -42.2, -36.5, -51.1, 7.5, 32.3], [ 74.1, -44.6, -68.8, 19.5, 7.7], [ 137.1, 54.2, 153.8, -58.0, 45.5], [ 24.4, -56.8, 9.7, -41.0, -14.5], [ -3.7, 72.6, 8.3, 134.8, 40.5], [ 43.2, -56.9, -47.5, -89.4, -95.4], [ 68.2, 108.1, -80.0, 57.0, -121.1] ], [ [ 31.1, -11.4, -34.8, 33.1, -44.2], [ 29.4, -31.6, -40.2, 13.7, 13.1], [ -0.8, -83.8, -7.8, -17.3, 78.2], [ 12.0, -118.7, 137.5, -76.7, 50.8], [ -28.7, -114.2, -3.7, -96.3, -13.8], [ -31.8, 28.5, -14.3, 4.6, 13.4], [ 28.0, -0.2, -38.9, -29.7, -59.0] ], [ [ -16.8, 38.5, 15.5, 26.6, 48.9], [ 14.5, 49.6, -24.8, 65.6, 61.7], [ 22.1, -64.7, -4.3, -51.0, 36.3], [ 31.0, -88.9, 47.1, -123.5, -3.8], [ -14.8, -39.8, 128.2, -110.3, 42.6], // 1st column on next row; torch is -7.2 [ -7.1, 95.3, -21.3, -58.7, -13.9], [ 26.9, 21.3, 16.1, 70.3, 32.1] ] ] ); #[rustfmt::skip] assert_eq!( test_utils::to_vec1_round(&grad_w.flatten_all()?, 1)?, [ // 2nd value; torch gets -3.2, 3rd value; torch gets 221.8 -2.460e+01, -3.100e+00, 2.219e+02, 7.400e+00, 5.620e+01, 7.420e+01, 7.830e+01, 8.900e+00, 1.050e+01, 2.810e+01, 5.100e+00, -1.046e+02, -1.572e+02, 8.710e+01, -9.840e+01, -4.230e+01, -1.898e+02, 1.860e+01, -3.570e+01, 9.810e+01, 4.680e+01, 1.182e+02, 4.020e+01, -1.900e+00, 1.508e+02, 1.094e+02, 1.018e+02, -4.620e+01, 1.591e+02, -2.320e+01, // 5th value; torch gets 7.1 -8.450e+01, -4.600e+00, 6.330e+01, 1.123e+02, -7.000e+00, 1.101e+02, -6.620e+01, 2.090e+01, -5.120e+01, 8.990e+01, 9.050e+01, -6.990e+01, 6.800e+01, -9.250e+01, 1.380e+02, 4.720e+01, 4.710e+01, 6.210e+01, 8.870e+01, 2.098e+02, 3.870e+01, -1.390e+01, 6.270e+01, 1.484e+02, -9.920e+01, -4.200e+01, -1.505e+02, -1.480e+01, -2.620e+01, 8.220e+01, -3.350e+01, -2.260e+01, -1.198e+02, -5.080e+01, 1.259e+02, 5.600e+01, 9.270e+01, 1.209e+02, 6.590e+01, -8.330e+01, 7.000e+00, -2.600e+01, -1.133e+02, 3.870e+01, 4.020e+01, -6.300e+00, -8.710e+01, -5.150e+01, -8.510e+01, 2.000e-01, 3.640e+01, -6.100e+00, 6.590e+01, -2.700e+00, 6.550e+01, // 4th value; torch gets 3.8 5.300e+00, -6.760e+01, -4.270e+01, -3.900e+00, 2.880e+01, 5.260e+01, 6.170e+01, -1.203e+02, -1.610e+01, 7.740e+01, -1.008e+02, -1.070e+01, -9.900e+00, 3.300e+00, -2.620e+01, -4.440e+01, 2.580e+01, -6.920e+01, -4.220e+01, 1.108e+02, 1.240e+01, -3.440e+01, -2.800e+00, 7.880e+01, -6.690e+01, 1.480e+01, 2.310e+01, -4.260e+01, -1.500e+00, -4.760e+01, 5.350e+01, -2.260e+01, 8.000e-01, -3.840e+01, -2.500e+00 ] ); Ok(()) } test_device!(conv1d, conv1d_cpu, conv1d_gpu, conv1d_metal); test_device!( conv1d_small, conv1d_small_cpu, conv1d_small_gpu, conv1d_small_metal ); test_device!(conv2d, conv2d_cpu, conv2d_gpu, conv2d_metal); test_device!( conv2d_non_square, conv2d_non_square_cpu, conv2d_non_square_gpu, conv2d_non_square_metal ); test_device!( conv2d_small, 
conv2d_small_cpu, conv2d_small_gpu, conv2d_small_metal ); test_device!( conv2d_smaller, conv2d_smaller_cpu, conv2d_smaller_gpu, conv2d_smaller_metal ); test_device!( conv2d_grad, conv2d_grad_cpu, conv2d_grad_gpu, conv2d_grad_metal );
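For reference, a conv2d call outside the test harness looks as follows. This is only a sketch (assuming `candle_core` and `anyhow`) with random inputs instead of the hard-coded PyTorch reference values; as in the tests above, the arguments after the kernel are padding, stride, dilation, and groups, while `conv_transpose2d` takes padding, output padding, stride, and dilation.

```
use candle_core::{Device, Tensor};

fn main() -> anyhow::Result<()> {
    let dev = Device::Cpu;
    // NCHW input and OIHW kernel, matching the shapes used in the tests above.
    let t = Tensor::randn(0f32, 1f32, (1, 4, 5, 5), &dev)?;
    let w = Tensor::randn(0f32, 1f32, (2, 4, 3, 3), &dev)?;
    let res = t.conv2d(&w, /*padding*/ 0, /*stride*/ 1, /*dilation*/ 1, /*groups*/ 1)?;
    assert_eq!(res.dims(), [1, 2, 3, 3]);
    // For the transposed convolution the kernel layout is (in, out, h, w).
    let res_t = t.conv_transpose2d(&w.transpose(0, 1)?, 0, 0, 1, 1)?;
    assert_eq!(res_t.dims(), [1, 2, 7, 7]);
    Ok(())
}
```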
5
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/matmul_tests.rs
use candle_core::{test_device, DType, Device, IndexOp, Result, Tensor}; fn matmul(device: &Device) -> Result<()> { let data = vec![1.0f32, 2.0, 3.0, 4.0]; let a = Tensor::from_slice(&data, (2, 2), device)?; let data = vec![1.0f32, 2.0, 3.0, 4.0]; let b = Tensor::from_slice(&data, (2, 2), device)?; let c = a.matmul(&b)?; assert_eq!(c.to_vec2::<f32>()?, &[[7.0f32, 10.0], [15.0, 22.0]]); let data = vec![1.0f32, 2.0]; let a = Tensor::from_slice(&data, (2, 1), device)?; let data = vec![3.0f32, 4.0]; let b = Tensor::from_slice(&data, (1, 2), device)?; let c = a.matmul(&b)?; assert_eq!(c.to_vec2::<f32>()?, &[&[3.0, 4.0], &[6.0, 8.0]]); let data: Vec<_> = (0..6).map(|i| i as f32).collect(); let a = Tensor::from_slice(&data, (2, 3), device)?; let data: Vec<_> = (0..6).map(|i| (i + 2) as f32).collect(); let b = Tensor::from_slice(&data, (3, 2), device)?; let c = a.matmul(&b)?; assert_eq!(c.to_vec2::<f32>()?, &[&[16., 19.], &[52., 64.]]); let data: Vec<_> = (0..12).map(|i| i as f32).collect(); let a = Tensor::from_slice(&data, (2, 2, 3), device)?; let data: Vec<_> = (0..12).map(|i| (i + 2) as f32).collect(); let b = Tensor::from_slice(&data, (2, 3, 2), device)?; let expected = [[[16., 19.], [52., 64.]], [[214., 235.], [304., 334.]]]; let c = a.matmul(&b)?; assert_eq!(c.to_vec3::<f32>()?, &expected); // Also perform the matmul on contiguous transposed versions. let a_tt = a.t()?.contiguous()?.t()?; assert!(!a_tt.is_contiguous()); assert_eq!(a.dims(), a_tt.dims()); assert_eq!(a_tt.stride(), &[6, 1, 2]); let b_tt = b.t()?.contiguous()?.t()?; assert!(!b_tt.is_contiguous()); assert_eq!(b.dims(), b_tt.dims()); assert_eq!(b_tt.stride(), &[6, 1, 3]); assert_eq!(a_tt.matmul(&b)?.to_vec3::<f32>()?, &expected); assert_eq!(a.matmul(&b_tt)?.to_vec3::<f32>()?, &expected); assert_eq!(a_tt.matmul(&b_tt)?.to_vec3::<f32>()?, &expected); Ok(()) } fn matmul_bf16(device: &Device) -> Result<()> { if !device.supports_bf16() { return Ok(()); } let data = vec![1.0f32, 2.0, 3.0, 4.0]; let a = Tensor::from_slice(&data, (2, 2), device)?.to_dtype(DType::BF16)?; let data = vec![1.0f32, 2.0, 3.0, 4.0]; let b = Tensor::from_slice(&data, (2, 2), device)?.to_dtype(DType::BF16)?; let c = a.matmul(&b)?.to_dtype(DType::F32)?; assert_eq!(c.to_vec2::<f32>()?, &[[7.0f32, 10.0], [15.0, 22.0]]); Ok(()) } fn broadcast_matmul(device: &Device) -> Result<()> { let lhs = Tensor::randn(0f32, 1f32, (3, 1, 4, 5), device)?; let rhs = Tensor::randn(0f32, 1f32, (6, 5, 2), device)?; let out = lhs.broadcast_matmul(&rhs)?; assert_eq!(out.dims(), &[3, 6, 4, 2]); for idx1 in 0..3 { for idx2 in 0..6 { let out = out.i((idx1, idx2))?; let lhs = lhs.i((idx1, 0))?; let rhs = rhs.i(idx2)?; let out2 = lhs.matmul(&rhs); let sum_diff2 = (out - out2)?.sqr()?.sum_all()?; // With cuda, we see errors of up to ~1e-12. assert!(sum_diff2.to_vec0::<f32>()? 
< 1e-6) } } Ok(()) } // https://github.com/huggingface/candle/issues/1948 fn squeeze_mm(device: &Device) -> Result<()> { let seq_len = 8_usize; let a = Tensor::zeros((1, seq_len, 16), DType::F32, device)?; let x = a.i((.., seq_len - 1, ..))?; let w = Tensor::zeros((32, 16), DType::F32, device)?.t()?; let x = x.matmul(&w)?; assert_eq!(x.dims(), &[1, 32]); Ok(()) } // https://github.com/huggingface/candle/issues/1992 fn mm_layout(device: &Device) -> Result<()> { let a = Tensor::arange(0f32, 16f32, device)?.reshape((1, 1, 4, 4))?; let b = Tensor::arange(0f32, 8f32, device)?.reshape((1, 1, 4, 2))?; let mm1 = a.matmul(&b)?; // Forces the layout to be: // shape: [1, 1, 4, 2], stride: [8, 2, 2, 1], start_offset: 0 // The data here is still contiguous (the only stride mismatch is on a size-1 dimension) but the // matmul layout check may be reluctant to handle it. let b = b.transpose(1, 2)?.force_contiguous()?.transpose(1, 2)?; let mm2 = a.matmul(&b)?; let diff = (mm1 - mm2)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0.); Ok(()) } test_device!(matmul, matmul_cpu, matmul_gpu, matmul_metal); test_device!( matmul_bf16, matmul_bf16_cpu, matmul_bf16_gpu, matmul_bf16_metal ); test_device!( broadcast_matmul, broadcast_matmul_cpu, broadcast_matmul_gpu, broadcast_matmul_metal ); test_device!(squeeze_mm, squeeze_mm_cpu, squeeze_mm_gpu, squeeze_mm_metal); test_device!(mm_layout, mm_layout_cpu, mm_layout_gpu, mm_layout_metal);
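A compact sketch of the two matmul entry points exercised above (again assuming `candle_core` and `anyhow` as dependencies): plain `matmul` on matching batch shapes, and `broadcast_matmul`, which additionally broadcasts the leading batch dimensions.

```
use candle_core::{Device, Tensor};

fn main() -> anyhow::Result<()> {
    let dev = Device::Cpu;
    let a = Tensor::from_slice(&[1f32, 2., 3., 4.], (2, 2), &dev)?;
    let b = Tensor::from_slice(&[5f32, 6., 7., 8.], (2, 2), &dev)?;
    let c = a.matmul(&b)?;
    assert_eq!(c.to_vec2::<f32>()?, &[[19f32, 22.], [43., 50.]]);
    // (3, 1, 4, 5) x (6, 5, 2) broadcasts to a (3, 6, 4, 2) result.
    let lhs = Tensor::randn(0f32, 1f32, (3, 1, 4, 5), &dev)?;
    let rhs = Tensor::randn(0f32, 1f32, (6, 5, 2), &dev)?;
    let out = lhs.broadcast_matmul(&rhs)?;
    assert_eq!(out.dims(), &[3, 6, 4, 2]);
    Ok(())
}
```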
6
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/layout_tests.rs
use candle::{test_device, Device, IndexOp, Result, Tensor}; use candle_core as candle; fn contiguous(device: &Device) -> Result<()> { let tensor = Tensor::arange(0u32, 24u32, device)?.reshape((2, 3, 4))?; assert_eq!( tensor.to_vec3::<u32>()?, &[ [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]] ] ); assert_eq!( tensor.t()?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]], [[12, 16, 20], [13, 17, 21], [14, 18, 22], [15, 19, 23]] ] ); assert_eq!( tensor.transpose(0, 1)?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 1, 2, 3], [12, 13, 14, 15]], [[4, 5, 6, 7], [16, 17, 18, 19]], [[8, 9, 10, 11], [20, 21, 22, 23]] ] ); assert_eq!( tensor.transpose(0, 1)?.flatten_all()?.to_vec1::<u32>()?, &[0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7, 16, 17, 18, 19, 8, 9, 10, 11, 20, 21, 22, 23] ); assert_eq!( tensor .i(1..)? .transpose(0, 1)? .contiguous()? .to_vec3::<u32>()?, &[[[12, 13, 14, 15]], [[16, 17, 18, 19]], [[20, 21, 22, 23]]] ); assert_eq!( tensor.transpose(0, 2)?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 12], [4, 16], [8, 20]], [[1, 13], [5, 17], [9, 21]], [[2, 14], [6, 18], [10, 22]], [[3, 15], [7, 19], [11, 23]] ] ); Ok(()) } test_device!(contiguous, contiguous_cpu, contiguous_gpu, contiguous_metal); #[test] fn strided_blocks() -> Result<()> { use candle::Device::Cpu; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 0); assert_eq!(len, 24); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 26u32, &Cpu)? .i(2..)? .reshape((2, 3, 4))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 2); assert_eq!(len, 24); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i(1)?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 12); assert_eq!(len, 12); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i((.., 1))?.contiguous()?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 0); assert_eq!(len, 8); assert_eq!(tensor.to_vec2::<u32>()?, &[[4, 5, 6, 7], [16, 17, 18, 19]]); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i((.., 1))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { .. } => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_len, block_start_index, } => { assert_eq!(block_len, 4); assert_eq!(block_start_index.collect::<Vec<_>>(), &[4, 16]) } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.t()?.strided_blocks() { candle::StridedBlocks::SingleBlock { .. 
} => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { assert_eq!(block_len, 1); assert_eq!( block_start_index.collect::<Vec<_>>(), &[ 0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11, 12, 16, 20, 13, 17, 21, 14, 18, 22, 15, 19, 23 ] ) } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.transpose(0, 1)?.strided_blocks() { candle::StridedBlocks::SingleBlock { .. } => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { assert_eq!(block_len, 4); assert_eq!( block_start_index.collect::<Vec<_>>(), &[0, 12, 4, 16, 8, 20] ) } }; Ok(()) }
7
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/display_tests.rs
use anyhow::Result; use candle_core::{DType, Device::Cpu, Tensor}; #[test] fn display_scalar() -> Result<()> { let t = Tensor::new(1234u32, &Cpu)?; let s = format!("{t}"); assert_eq!(&s, "[1234]\nTensor[[], u32]"); let t = t.to_dtype(DType::F32)?.neg()?; let s = format!("{}", (&t / 10.0)?); assert_eq!(&s, "[-123.4000]\nTensor[[], f32]"); let s = format!("{}", (&t / 1e8)?); assert_eq!(&s, "[-1.2340e-5]\nTensor[[], f32]"); let s = format!("{}", (&t * 1e8)?); assert_eq!(&s, "[-1.2340e11]\nTensor[[], f32]"); let s = format!("{}", (&t * 0.)?); assert_eq!(&s, "[0.]\nTensor[[], f32]"); Ok(()) } #[test] fn display_vector() -> Result<()> { let t = Tensor::new::<&[u32; 0]>(&[], &Cpu)?; let s = format!("{t}"); assert_eq!(&s, "[]\nTensor[[0], u32]"); let t = Tensor::new(&[0.1234567, 1.0, -1.2, 4.1, f64::NAN], &Cpu)?; let s = format!("{t}"); assert_eq!( &s, "[ 0.1235, 1.0000, -1.2000, 4.1000, NaN]\nTensor[[5], f64]" ); let t = (Tensor::ones(50, DType::F32, &Cpu)? * 42.)?; let s = format!("\n{t}"); let expected = r#" [42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42., 42.] Tensor[[50], f32]"#; assert_eq!(&s, expected); let t = (Tensor::ones(11000, DType::F32, &Cpu)? * 42.)?; let s = format!("{t}"); assert_eq!( &s, "[42., 42., 42., ..., 42., 42., 42.]\nTensor[[11000], f32]" ); Ok(()) } #[test] fn display_multi_dim() -> Result<()> { let t = (Tensor::ones((200, 100), DType::F32, &Cpu)? * 42.)?; let s = format!("\n{t}"); let expected = r#" [[42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], ... [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.]] Tensor[[200, 100], f32]"#; assert_eq!(&s, expected); let t = t.reshape(&[2, 1, 1, 100, 100])?; let t = format!("\n{t}"); let expected = r#" [[[[[42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], ... [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.]]]], [[[[42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], ... [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.], [42., 42., 42., ..., 42., 42., 42.]]]]] Tensor[[2, 1, 1, 100, 100], f32]"#; assert_eq!(&t, expected); Ok(()) }
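The `Display` output checked above can also be produced directly. This sketch (assuming `candle_core` and `anyhow`) just prints a small tensor, which renders with the same value line followed by a `Tensor[[shape], dtype]` line that the tests assert on.

```
use candle_core::{Device, Tensor};

fn main() -> anyhow::Result<()> {
    let t = Tensor::new(&[0.1234567f64, 1.0, -1.2], &Device::Cpu)?;
    // Prints roughly: [ 0.1235, 1.0000, -1.2000]
    //                 Tensor[[3], f64]
    println!("{t}");
    Ok(())
}
```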
8
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/tests/custom_op_tests.rs
use candle_core::backend::BackendStorage; use candle_core::cpu_backend; use candle_core::test_utils::to_vec1_round; use candle_core::{CpuStorage, CustomOp1, DType, Device, Error, Layout, Result, Shape, Tensor}; fn fwd<T: num_traits::Float>(v: T, alpha: f64) -> T { if v.is_sign_positive() { v } else { let alpha = T::from(alpha).unwrap_or(T::nan()); (v.exp() - T::one()) * alpha } } struct Elu { alpha: f64, } impl CustomOp1 for Elu { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { let storage = candle_core::map_dtype!( "elu", s, |s| cpu_backend::unary_map(s, l, |v| fwd(v, self.alpha)), (BF16, F16, F32, F64) ); Ok((storage, l.shape().clone())) } } #[test] fn custom_op1_no_backward() -> Result<()> { let cpu = &Device::Cpu; let t = Tensor::arange(0u32, 12u32, cpu)?.to_dtype(DType::F32)?; let t = (t - 5.)?; let elu_t = t.apply_op1_no_bwd(&Elu { alpha: 1. })?; assert_eq!( to_vec1_round(&elu_t, 4)?, &[-0.9933, -0.9817, -0.9502, -0.8647, -0.6321, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ); Ok(()) } // Define a similar struct as Elu but with backward support. fn bwd<T: num_traits::Float>(v: T, alpha: f64) -> T { if v.is_sign_positive() { T::one() } else { let alpha = T::from(alpha).unwrap_or(T::nan()); v.exp() * alpha } } struct EluBackward { alpha: f64, } impl CustomOp1 for EluBackward { fn name(&self) -> &'static str { "elu-bwd" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { let storage = candle_core::map_dtype!( "elu-bwd", s, |s| cpu_backend::unary_map(s, l, |v| bwd(v, self.alpha)), (BF16, F16, F32, F64) ); Ok((storage, l.shape().clone())) } } struct EluWithBackward(Elu); impl EluWithBackward { fn new(alpha: f64) -> Self { Self(Elu { alpha }) } } impl CustomOp1 for EluWithBackward { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { self.0.cpu_fwd(s, l) } fn bwd(&self, arg: &Tensor, _res: &Tensor, grad_res: &Tensor) -> Result<Option<Tensor>> { let alpha = self.0.alpha; let bwd = arg.apply_op1(EluBackward { alpha })?; Ok(Some(grad_res.mul(&bwd)?)) } } #[test] fn custom_op1_with_backward() -> Result<()> { let cpu = &Device::Cpu; let t = candle_core::Var::new(&[-2f32, 0f32, 2f32], cpu)?; let elu_t = t.apply_op1(EluWithBackward::new(2.))?; assert_eq!(to_vec1_round(&elu_t, 4)?, &[-1.7293, 0.0, 2.0]); let grads = elu_t.backward()?; let grad_x = grads.get(&t).unwrap(); assert_eq!(to_vec1_round(grad_x, 4)?, [0.2707, 1.0, 1.0]); Ok(()) } impl candle_core::InplaceOp1 for Elu { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &mut CpuStorage, _l: &Layout) -> Result<()> { let alpha = self.alpha; match s { CpuStorage::BF16(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F16(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F32(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F64(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), _ => candle_core::bail!("unsupported dtype for inplace elu"), } Ok(()) } } #[test] fn inplace_op1() -> Result<()> { let cpu = &Device::Cpu; let t = Tensor::arange(0u32, 12u32, cpu)?.to_dtype(DType::F32)?; let t = (t - 5.)?; t.inplace_op1(&Elu { alpha: 1. 
})?; assert_eq!( to_vec1_round(&t, 4)?, &[-0.9933, -0.9817, -0.9502, -0.8647, -0.6321, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ); Ok(()) } #[cfg(any(feature = "cuda", feature = "metal"))] #[allow(clippy::approx_constant)] #[test] fn ug_op() -> Result<()> { let kernel = { use ug::lang::op; let layout = ug::Layout::from_shape(&[12]); let ptr = op::Arg::ptr(ug::DType::F32); let src = op::load(ptr.id(), layout.clone(), ug::DType::F32)?; let src = op::unary(op::UnaryOp::Exp, src)?; let st = op::store(ptr.id(), layout, src)?; let kernel = op::Kernel::new("exp".to_string(), vec![ptr], vec![st]); let opts: ug::lower_op::Opts = Default::default(); kernel.lower(&opts.with_global(0, 12))? }; let device = if candle_core::utils::cuda_is_available() { Device::new_cuda(0)? } else if candle_core::utils::metal_is_available() { Device::new_metal(0)? } else { candle_core::bail!("metal/cuda is mandatory for this test") }; let op = candle_core::UgIOp1::new("test", kernel, &device)?; let t = Tensor::arange(0u32, 12u32, &device)?.to_dtype(DType::F32)?; t.inplace_op1(&op)?; assert_eq!( to_vec1_round(&t, 2)?, &[ 1.0, 2.72, 7.39, 20.09, 54.6, 148.41, 403.43, 1096.63, 2980.96, 8103.08, 22026.47, 59874.13 ] ); Ok(()) }
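The `CustomOp1` mechanism demonstrated above can be boiled down to a much smaller example. The sketch below is not from the repository; it assumes `candle_core` as a dependency and only handles `f32` to keep it short. It defines a toy scaling op with a CPU forward pass and applies it without backward support.

```
use candle_core::backend::BackendStorage;
use candle_core::cpu_backend;
use candle_core::{CpuStorage, CustomOp1, Device, Layout, Result, Shape, Tensor};

// A toy op that multiplies every element by a constant factor.
struct Scale {
    factor: f32,
}

impl CustomOp1 for Scale {
    fn name(&self) -> &'static str {
        "scale"
    }

    fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> {
        let storage = match s {
            CpuStorage::F32(vs) => {
                CpuStorage::F32(cpu_backend::unary_map(vs, l, |v| v * self.factor))
            }
            _ => candle_core::bail!("scale is only implemented for f32 in this sketch"),
        };
        Ok((storage, l.shape().clone()))
    }
}

fn main() -> Result<()> {
    let t = Tensor::arange(0f32, 4f32, &Device::Cpu)?;
    let scaled = t.apply_op1_no_bwd(&Scale { factor: 3.0 })?;
    assert_eq!(scaled.to_vec1::<f32>()?, [0f32, 3., 6., 9.]);
    Ok(())
}
```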
9
0
hf_public_repos/adversarialnlp/adversarialnlp
hf_public_repos/adversarialnlp/adversarialnlp/generators/__init__.py
from .generator import Generator from .swag import SwagGenerator from .addsent import AddSentGenerator
0
0
hf_public_repos/adversarialnlp/adversarialnlp/generators
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/utils.py
"""Utilities for AddSent generator.""" from typing import List, Dict, Tuple, Optional class ConstituencyParse(object): """A CoreNLP constituency parse (or a node in a parse tree). Word-level constituents have |word| and |index| set and no children. Phrase-level constituents have no |word| or |index| and have at least one child. """ def __init__(self, tag, children=None, word=None, index=None): self.tag = tag if children: self.children = children else: self.children = None self.word = word self.index = index @classmethod def _recursive_parse_corenlp(cls, tokens, i, j): orig_i = i if tokens[i] == '(': tag = tokens[i + 1] children = [] i = i + 2 while True: child, i, j = cls._recursive_parse_corenlp(tokens, i, j) if isinstance(child, cls): children.append(child) if tokens[i] == ')': return cls(tag, children), i + 1, j else: if tokens[i] != ')': raise ValueError('Expected ")" following leaf') return cls(tag, word=child, index=j), i + 1, j + 1 else: # Only other possibility is it's a word return tokens[i], i + 1, j @classmethod def from_corenlp(cls, s): """Parses the "parse" attribute returned by CoreNLP parse annotator.""" # "parse": "(ROOT\n (SBARQ\n (WHNP (WDT What)\n (NP (NN portion)\n (PP (IN of)\n (NP\n (NP (NNS households))\n (PP (IN in)\n (NP (NNP Jacksonville)))))))\n (SQ\n (VP (VBP have)\n (NP (RB only) (CD one) (NN person))))\n (. ? )))", s_spaced = s.replace('\n', ' ').replace('(', ' ( ').replace(')', ' ) ') tokens = [t for t in s_spaced.split(' ') if t] tree, index, num_words = cls._recursive_parse_corenlp(tokens, 0, 0) if index != len(tokens): raise ValueError('Only parsed %d of %d tokens' % (index, len(tokens))) return tree def is_singleton(self): if self.word: return True if len(self.children) > 1: return False return self.children[0].is_singleton() def print_tree(self, indent=0): spaces = ' ' * indent if self.word: print(f"{spaces}{self.tag}: {self.word} ({self.index})") else: print(f"{spaces}{self.tag}") for c in self.children: c.print_tree(indent=indent + 1) def get_phrase(self): if self.word: return self.word toks = [] for i, c in enumerate(self.children): p = c.get_phrase() if i == 0 or p.startswith("'"): toks.append(p) else: toks.append(' ' + p) return ''.join(toks) def get_start_index(self): if self.index is not None: return self.index return self.children[0].get_start_index() def get_end_index(self): if self.index is not None: return self.index + 1 return self.children[-1].get_end_index() @classmethod def _recursive_replace_words(cls, tree, new_words, i): if tree.word: new_word = new_words[i] return (cls(tree.tag, word=new_word, index=tree.index), i + 1) new_children = [] for c in tree.children: new_child, i = cls._recursive_replace_words(c, new_words, i) new_children.append(new_child) return cls(tree.tag, children=new_children), i @classmethod def replace_words(cls, tree, new_words): """Return a new tree, with new words replacing old ones.""" new_tree, i = cls._recursive_replace_words(tree, new_words, 0) if i != len(new_words): raise ValueError('len(new_words) == %d != i == %d' % (len(new_words), i)) return new_tree def rejoin(tokens: List[Dict[str, str]], sep: str = None) -> str: """Rejoin tokens into the original sentence. Args: tokens: a list of dicts containing 'originalText' and 'before' fields. All other fields will be ignored. sep: if provided, use the given character as a separator instead of the 'before' field (e.g. if you want to preserve where tokens are). Returns: the original sentence that generated this CoreNLP token list. 
""" if sep is None: return ''.join('%s%s' % (t['before'], t['originalText']) for t in tokens) else: # Use the given separator instead return sep.join(t['originalText'] for t in tokens) def get_tokens_for_answers(answer_objs: List[Tuple[int, Dict]], corenlp_obj: Dict) -> Tuple[int, List]: """Get CoreNLP tokens corresponding to a SQuAD answer object.""" first_a_toks = None for i, a_obj in enumerate(answer_objs): a_toks = [] answer_start = a_obj['answer_start'] answer_end = answer_start + len(a_obj['text']) for sent in corenlp_obj['sentences']: for tok in sent['tokens']: if tok['characterOffsetBegin'] >= answer_end: continue if tok['characterOffsetEnd'] <= answer_start: continue a_toks.append(tok) if rejoin(a_toks).strip() == a_obj['text']: # Make sure that the tokens reconstruct the answer return i, a_toks if i == 0: first_a_toks = a_toks # None of the extracted token lists reconstruct the answer # Default to the first return 0, first_a_toks def get_determiner_for_answers(answer_objs: List[Dict]) -> Optional[str]: for ans in answer_objs: words = ans['text'].split(' ') if words[0].lower() == 'the': return 'the' if words[0].lower() in ('a', 'an'): return 'a' return None def compress_whnp(tree, inside_whnp=False): if not tree.children: return tree # Reached leaf # Compress all children for i, c in enumerate(tree.children): tree.children[i] = compress_whnp(c, inside_whnp=inside_whnp or tree.tag == 'WHNP') if tree.tag != 'WHNP': if inside_whnp: # Wrap everything in an NP return ConstituencyParse('NP', children=[tree]) return tree wh_word = None new_np_children = [] new_siblings = [] for i, c in enumerate(tree.children): if i == 0: if c.tag in ('WHNP', 'WHADJP', 'WHAVP', 'WHPP'): wh_word = c.children[0] new_np_children.extend(c.children[1:]) elif c.tag in ('WDT', 'WP', 'WP$', 'WRB'): wh_word = c else: # No WH-word at start of WHNP return tree else: if c.tag == 'SQ': # Due to bad parse, SQ may show up here new_siblings = tree.children[i:] break # Wrap everything in an NP new_np_children.append(ConstituencyParse('NP', children=[c])) if new_np_children: new_np = ConstituencyParse('NP', children=new_np_children) new_tree = ConstituencyParse('WHNP', children=[wh_word, new_np]) else: new_tree = tree if new_siblings: new_tree = ConstituencyParse('SBARQ', children=[new_tree] + new_siblings) return new_tree def read_const_parse(parse_str): tree = ConstituencyParse.from_corenlp(parse_str) new_tree = compress_whnp(tree) return new_tree
1
0
hf_public_repos/adversarialnlp/adversarialnlp/generators
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/corenlp.py
# Python wrapper for Stanford CoreNLP # Copyright (c) 2017 Lynten Guo, 2018 Thomas Wolf # Extracted and adapted from https://github.com/Lynten/stanford-corenlp from __future__ import print_function import glob import json import logging import os import re import socket import subprocess import sys import time import psutil try: from urlparse import urlparse except ImportError: from urllib.parse import urlparse import requests class StanfordCoreNLP: def __init__(self, path_or_host, port=None, memory='4g', lang='en', timeout=1500, quiet=True, logging_level=logging.WARNING, max_retries=5): self.path_or_host = path_or_host self.port = port self.memory = memory self.lang = lang self.timeout = timeout self.quiet = quiet self.logging_level = logging_level logging.basicConfig(level=self.logging_level) # Check args self._check_args() if path_or_host.startswith('http'): self.url = path_or_host + ':' + str(port) logging.info('Using an existing server {}'.format(self.url)) else: # Check Java if not subprocess.call(['java', '-version'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) == 0: raise RuntimeError('Java not found.') # Check if the dir exists if not os.path.isdir(self.path_or_host): raise IOError(str(self.path_or_host) + ' is not a directory.') directory = os.path.normpath(self.path_or_host) + os.sep self.class_path_dir = directory # Check if the language specific model file exists switcher = { 'en': 'stanford-corenlp-[0-9].[0-9].[0-9]-models.jar', 'zh': 'stanford-chinese-corenlp-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-models.jar', 'ar': 'stanford-arabic-corenlp-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-models.jar', 'fr': 'stanford-french-corenlp-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-models.jar', 'de': 'stanford-german-corenlp-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-models.jar', 'es': 'stanford-spanish-corenlp-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-models.jar' } jars = { 'en': 'stanford-corenlp-x.x.x-models.jar', 'zh': 'stanford-chinese-corenlp-yyyy-MM-dd-models.jar', 'ar': 'stanford-arabic-corenlp-yyyy-MM-dd-models.jar', 'fr': 'stanford-french-corenlp-yyyy-MM-dd-models.jar', 'de': 'stanford-german-corenlp-yyyy-MM-dd-models.jar', 'es': 'stanford-spanish-corenlp-yyyy-MM-dd-models.jar' } if len(glob.glob(directory + switcher.get(self.lang))) <= 0: raise IOError(jars.get( self.lang) + ' not exists. 
You should download and place it in the ' + directory + ' first.') # If port not set, auto select # Commenting: see https://github.com/Lynten/stanford-corenlp/issues/26 # if self.port is None: # for port_candidate in range(9000, 65535): # if port_candidate not in [conn.laddr[1] for conn in psutil.net_connections()]: # self.port = port_candidate # break self.port = 9999 # Check if the port is in use # Also commenting: see https://github.com/Lynten/stanford-corenlp/issues/26 # if self.port in [conn.laddr[1] for conn in psutil.net_connections()]: # raise IOError('Port ' + str(self.port) + ' is already in use.') # Start native server logging.info('Initializing native server...') cmd = "java" java_args = "-Xmx{}".format(self.memory) java_class = "edu.stanford.nlp.pipeline.StanfordCoreNLPServer" class_path = '"{}*"'.format(directory) args = [cmd, java_args, '-cp', class_path, java_class, '-port', str(self.port)] args = ' '.join(args) logging.info(args) # Silence with open(os.devnull, 'w') as null_file: out_file = None if self.quiet: out_file = null_file self.p = subprocess.Popen(args, shell=True, stdout=out_file, stderr=subprocess.STDOUT) logging.info('Server shell PID: {}'.format(self.p.pid)) self.url = 'http://localhost:' + str(self.port) # Wait until server starts sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) host_name = urlparse(self.url).hostname time.sleep(1) # OSX, not tested trial = 1 while sock.connect_ex((host_name, self.port)): if trial > max_retries: raise ValueError('Corenlp server is not available') logging.info('Waiting until the server is available.') trial += 1 time.sleep(1) logging.info('The server is available.') def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() def close(self): logging.info('Cleanup...') if hasattr(self, 'p'): try: parent = psutil.Process(self.p.pid) except psutil.NoSuchProcess: logging.info('No process: {}'.format(self.p.pid)) return if self.class_path_dir not in ' '.join(parent.cmdline()): logging.info('Process not in: {}'.format(parent.cmdline())) return children = parent.children(recursive=True) for process in children: logging.info('Killing pid: {}, cmdline: {}'.format(process.pid, process.cmdline())) # process.send_signal(signal.SIGTERM) process.kill() logging.info('Killing shell pid: {}, cmdline: {}'.format(parent.pid, parent.cmdline())) # parent.send_signal(signal.SIGTERM) parent.kill() def annotate(self, text, properties=None): if sys.version_info.major >= 3: text = text.encode('utf-8') r = requests.post(self.url, params={'properties': str(properties)}, data=text, headers={'Connection': 'close'}) return r.text def tregex(self, sentence, pattern): tregex_url = self.url + '/tregex' r_dict = self._request(tregex_url, "tokenize,ssplit,depparse,parse", sentence, pattern=pattern) return r_dict def tokensregex(self, sentence, pattern): tokensregex_url = self.url + '/tokensregex' r_dict = self._request(tokensregex_url, "tokenize,ssplit,depparse", sentence, pattern=pattern) return r_dict def semgrex(self, sentence, pattern): semgrex_url = self.url + '/semgrex' r_dict = self._request(semgrex_url, "tokenize,ssplit,depparse", sentence, pattern=pattern) return r_dict def word_tokenize(self, sentence, span=False): r_dict = self._request('ssplit,tokenize', sentence) tokens = [token['originalText'] for s in r_dict['sentences'] for token in s['tokens']] # Whether return token span if span: spans = [(token['characterOffsetBegin'], token['characterOffsetEnd']) for s in r_dict['sentences'] for token in s['tokens']] 
return tokens, spans else: return tokens def pos_tag(self, sentence): r_dict = self._request(self.url, 'pos', sentence) words = [] tags = [] for s in r_dict['sentences']: for token in s['tokens']: words.append(token['originalText']) tags.append(token['pos']) return list(zip(words, tags)) def ner(self, sentence): r_dict = self._request(self.url, 'ner', sentence) words = [] ner_tags = [] for s in r_dict['sentences']: for token in s['tokens']: words.append(token['originalText']) ner_tags.append(token['ner']) return list(zip(words, ner_tags)) def parse(self, sentence): r_dict = self._request(self.url, 'pos,parse', sentence) return [s['parse'] for s in r_dict['sentences']][0] def dependency_parse(self, sentence): r_dict = self._request(self.url, 'depparse', sentence) return [(dep['dep'], dep['governor'], dep['dependent']) for s in r_dict['sentences'] for dep in s['basicDependencies']] def coref(self, text): r_dict = self._request('coref', text) corefs = [] for k, mentions in r_dict['corefs'].items(): simplified_mentions = [] for m in mentions: simplified_mentions.append((m['sentNum'], m['startIndex'], m['endIndex'], m['text'])) corefs.append(simplified_mentions) return corefs def switch_language(self, language="en"): self._check_language(language) self.lang = language def _request(self, url, annotators=None, data=None, *args, **kwargs): if sys.version_info.major >= 3: data = data.encode('utf-8') properties = {'annotators': annotators, 'outputFormat': 'json'} params = {'properties': str(properties), 'pipelineLanguage': self.lang} if 'pattern' in kwargs: params = {"pattern": kwargs['pattern'], 'properties': str(properties), 'pipelineLanguage': self.lang} logging.info(params) r = requests.post(url, params=params, data=data, headers={'Connection': 'close'}) r_dict = json.loads(r.text) return r_dict def _check_args(self): self._check_language(self.lang) if not re.match('\dg', self.memory): raise ValueError('memory=' + self.memory + ' not supported. Use 4g, 6g, 8g and etc. ') def _check_language(self, lang): if lang not in ['en', 'zh', 'ar', 'fr', 'de', 'es']: raise ValueError('lang=' + self.lang + ' not supported. Use English(en), Chinese(zh), Arabic(ar), ' 'French(fr), German(de), Spanish(es).')
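For orientation, a hedged sketch of how this wrapper is typically driven; the CoreNLP directory name below is illustrative, and a working Java install plus a downloaded CoreNLP distribution are assumed.

```
# Sketch only: adjust the path to wherever the CoreNLP distribution was unzipped.
from adversarialnlp.generators.addsent.corenlp import StanfordCoreNLP

with StanfordCoreNLP('./stanford-corenlp-full-2018-02-27') as nlp:
    sentence = 'What portion of households in Jacksonville have only one person?'
    print(nlp.pos_tag(sentence))           # [(word, POS tag), ...]
    print(nlp.ner(sentence))               # [(word, NER tag), ...]
    print(nlp.parse(sentence))             # constituency parse string
    print(nlp.dependency_parse(sentence))  # [(relation, governor, dependent), ...]
# Leaving the context manager calls close(), which kills the spawned Java server.
```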
2
0
hf_public_repos/adversarialnlp/adversarialnlp/generators
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/__init__.py
from .addsent_generator import AddSentGenerator from .squad_reader import squad_reader
3
0
hf_public_repos/adversarialnlp/adversarialnlp/generators
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/squad_reader.py
import json import logging from typing import Dict, List, Tuple from adversarialnlp.common.file_utils import download_files logger = logging.getLogger(__name__) # pylint: disable=invalid-name def squad_reader(file_path: str = None) -> List[Tuple[Dict, str]]: r""" Reads a JSON-formatted SQuAD file and returns a list of seeds. Args: file_path: Path to a JSON-formatted SQuAD file. If no path is provided, the SQuAD v1.1 training set is downloaded and used. Returns: List of (question_answer, paragraph) tuples, where question_answer is a SQuAD ``qas`` dict and paragraph is the context string. """ if file_path is None: file_path = download_files(fnames=['train-v1.1.json'], paths='https://rajpurkar.github.io/SQuAD-explorer/dataset/', local_folder='squad') file_path = file_path[0] logger.info("Reading file at %s", file_path) with open(file_path) as dataset_file: dataset_json = json.load(dataset_file) dataset = dataset_json['data'] logger.info("Reading the dataset") out_data = [] for article in dataset: for paragraph_json in article['paragraphs']: paragraph = paragraph_json["context"] for question_answer in paragraph_json['qas']: question_answer["question"] = question_answer["question"].strip().replace("\n", "") out_data.append((question_answer, paragraph)) return out_data
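A small consumption sketch (assuming network access for the first-time download and the package importable as below); it only shows the shape of the (question_answer, paragraph) tuples the reader returns.

```
# Sketch only: the first call downloads train-v1.1.json into the local cache folder.
from adversarialnlp.generators.addsent.squad_reader import squad_reader

seeds = squad_reader()                    # or squad_reader('path/to/train-v1.1.json')
question_answer, paragraph = seeds[0]
print(question_answer['question'])        # the whitespace-normalized question
print(question_answer['answers'][0])      # {'text': ..., 'answer_start': ...}
print(paragraph[:80])                     # the context paragraph the answer comes from
```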
4
0
hf_public_repos/adversarialnlp/adversarialnlp/generators
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/addsent_generator.py
import logging import json import itertools from typing import Iterable, Dict, Tuple from collections import defaultdict from adversarialnlp.common.file_utils import download_files from adversarialnlp.generators import Generator from adversarialnlp.generators.addsent.rules import (ANSWER_RULES, HIGH_CONF_ALTER_RULES, ALL_ALTER_RULES, DO_NOT_ALTER, BAD_ALTERATIONS, CONVERSION_RULES) from adversarialnlp.generators.addsent.utils import (rejoin, ConstituencyParse, get_tokens_for_answers, get_determiner_for_answers, read_const_parse) from adversarialnlp.generators.addsent.squad_reader import squad_reader from adversarialnlp.generators.addsent.corenlp import StanfordCoreNLP logger = logging.getLogger(__name__) # pylint: disable=invalid-name SQUAD_FILE = 'data/squad/train-v1.1.json' NEARBY_GLOVE_FILE = 'data/addsent/nearby_n100_glove_6B_100d.json' POSTAG_FILE = 'data/addsent/postag_dict.json' class AddSentGenerator(Generator): r"""Adversarial examples generator based on AddSent. AddSent is described in the paper `Adversarial Examples for Evaluating Reading Comprehension Systems`_ by Robin Jia & Percy Liang. Args, input and yield: See the ``Generator`` class. Additional arguments: alteration_strategy: Alteration strategy. Options: - `separate`: Do best alteration for each word separately. - `best`: Generate exactly one best alteration (may over-alter). - `high-conf`: Do all possible high-confidence alterations. - `high-conf-separate`: Do best high-confidence alteration for each word separately. - `all`: Do all possible alterations (very conservative). prepend: If True, insert the adversarial sentence at the beginning of the context; otherwise append it at the end. use_answer_placeholder: Use an answer placeholder. Seeds: Tuple of SQuAD-like instances containing - question-answer-span, and - context paragraph. default_seeds: If no seeds are provided, the default_seeds are the training set of the `SQuAD V1.1 dataset <https://rajpurkar.github.io/SQuAD-explorer/>`_. """ def __init__(self, alteration_strategy: str = 'high-conf', prepend: bool = False, use_answer_placeholder: bool = False, default_seeds: Iterable = None, quiet: bool = False): super(AddSentGenerator, self).__init__(default_seeds, quiet) model_files = download_files(fnames=['nearby_n100_glove_6B_100d.json', 'postag_dict.json'], local_folder='addsent') corenlp_path = download_files(fnames=['stanford-corenlp-full-2018-02-27.zip'], paths='http://nlp.stanford.edu/software/', local_folder='corenlp') self.nlp: StanfordCoreNLP = StanfordCoreNLP(corenlp_path[0]) with open(model_files[0], 'r') as data_file: self.nearby_word_dict: Dict = json.load(data_file) with open(model_files[1], 'r') as data_file: self.postag_dict: Dict = json.load(data_file) self.alteration_strategy: str = alteration_strategy self.prepend: bool = prepend self.use_answer_placeholder: bool = use_answer_placeholder if default_seeds is None: self.default_seeds = squad_reader(SQUAD_FILE) else: self.default_seeds = default_seeds def close(self): self.nlp.close() def _annotate(self, text: str, annotators: str): r"""Wrapper to call CoreNLP. """ props = {'annotators': annotators, 'ssplit.newlineIsSentenceBreak': 'always', 'outputFormat':'json'} return json.loads(self.nlp.annotate(text, properties=props)) def _alter_question(self, question, tokens, const_parse): r"""Alter the question to make it ask something else. 
""" used_words = [tok['word'].lower() for tok in tokens] new_qs = [] toks_all = [] if self.alteration_strategy.startswith('high-conf'): rules = HIGH_CONF_ALTER_RULES else: rules = ALL_ALTER_RULES for i, tok in enumerate(tokens): if tok['word'].lower() in DO_NOT_ALTER: if self.alteration_strategy in ('high-conf', 'all'): toks_all.append(tok) continue begin = tokens[:i] end = tokens[i+1:] found = False for rule_name in rules: rule = rules[rule_name] new_words = rule(tok, nearby_word_dict=self.nearby_word_dict, postag_dict=self.postag_dict) if new_words: for word in new_words: if word.lower() in used_words: continue if word.lower() in BAD_ALTERATIONS: continue # Match capitzliation if tok['word'] == tok['word'].upper(): word = word.upper() elif tok['word'] == tok['word'].title(): word = word.title() new_tok = dict(tok) new_tok['word'] = new_tok['lemma'] = new_tok['originalText'] = word new_tok['altered'] = True # NOTE: obviously this is approximate if self.alteration_strategy.endswith('separate'): new_tokens = begin + [new_tok] + end new_q = rejoin(new_tokens) tag = '%s-%d-%s' % (rule_name, i, word) new_const_parse = ConstituencyParse.replace_words( const_parse, [tok['word'] for tok in new_tokens]) new_qs.append((new_q, new_tokens, new_const_parse, tag)) break elif self.alteration_strategy in ('high-conf', 'all'): toks_all.append(new_tok) found = True break if self.alteration_strategy in ('high-conf', 'all') and found: break if self.alteration_strategy in ('high-conf', 'all') and not found: toks_all.append(tok) if self.alteration_strategy in ('high-conf', 'all'): new_q = rejoin(toks_all) new_const_parse = ConstituencyParse.replace_words( const_parse, [tok['word'] for tok in toks_all]) if new_q != question: new_qs.append((rejoin(toks_all), toks_all, new_const_parse, self.alteration_strategy)) return new_qs def generate_from_seed(self, seed: Tuple): r"""Edit a SQuAD example using rules. 
""" qas, paragraph = seed question = qas['question'].strip() if not self.quiet: print(f"Question: {question}") if self.use_answer_placeholder: answer = 'ANSWER' determiner = '' else: p_parse = self._annotate(paragraph, 'tokenize,ssplit,pos,ner,entitymentions') ind, a_toks = get_tokens_for_answers(qas['answers'], p_parse) determiner = get_determiner_for_answers(qas['answers']) answer_obj = qas['answers'][ind] for _, func in ANSWER_RULES: answer = func(answer_obj, a_toks, question, determiner=determiner) if answer: break else: raise ValueError('Missing answer') q_parse = self._annotate(question, 'tokenize,ssplit,pos,parse,ner') q_parse = q_parse['sentences'][0] q_tokens = q_parse['tokens'] q_const_parse = read_const_parse(q_parse['parse']) if self.alteration_strategy: # Easiest to alter the question before converting q_list = self._alter_question(question, q_tokens, q_const_parse) else: q_list = [(question, q_tokens, q_const_parse, 'unaltered')] for q_str, q_tokens, q_const_parse, tag in q_list: for rule in CONVERSION_RULES: sent = rule.convert(q_str, answer, q_tokens, q_const_parse) if sent: if not self.quiet: print(f" Sent ({tag}): {sent}'") cur_qa = { 'question': qas['question'], 'id': '%s-%s' % (qas['id'], tag), 'answers': qas['answers'] } if self.prepend: cur_text = '%s %s' % (sent, paragraph) new_answers = [] for ans in qas['answers']: new_answers.append({ 'text': ans['text'], 'answer_start': ans['answer_start'] + len(sent) + 1 }) cur_qa['answers'] = new_answers else: cur_text = '%s %s' % (paragraph, sent) out_example = {'title': title, 'seed_context': paragraph, 'seed_qas': qas, 'context': cur_text, 'qas': [cur_qa]} yield out_example # from adversarialnlp.common.file_utils import FIXTURES_ROOT # generator = AddSentGenerator() # test_instances = squad_reader(FIXTURES_ROOT / 'squad.json') # batches = list(generator(test_instances, num_epochs=1)) # assert len(batches) != 0
5
0
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/rules/alteration_rules.py
import collections import nltk nltk.download('wordnet') from nltk.corpus import wordnet as wn from nltk.stem.lancaster import LancasterStemmer STEMMER = LancasterStemmer() POS_TO_WORDNET = { 'NN': wn.NOUN, 'JJ': wn.ADJ, 'JJR': wn.ADJ, 'JJS': wn.ADJ, } def alter_special(token, **kwargs): w = token['originalText'] if w in SPECIAL_ALTERATIONS: return [SPECIAL_ALTERATIONS[w]] return None def alter_nearby(pos_list, ignore_pos=False, is_ner=False): def func(token, nearby_word_dict=None, postag_dict=None, **kwargs): if token['pos'] not in pos_list: return None if is_ner and token['ner'] not in ('PERSON', 'LOCATION', 'ORGANIZATION', 'MISC'): return None w = token['word'].lower() if w in ('war'): return None if w not in nearby_word_dict: return None new_words = [] w_stem = STEMMER.stem(w.replace('.', '')) for x in nearby_word_dict[w][1:]: new_word = x['word'] # Make sure words aren't too similar (e.g. same stem) new_stem = STEMMER.stem(new_word.replace('.', '')) if w_stem.startswith(new_stem) or new_stem.startswith(w_stem): continue if not ignore_pos: # Check for POS tag match if new_word not in postag_dict: continue new_postag = postag_dict[new_word] if new_postag != token['pos']: continue new_words.append(new_word) return new_words return func def alter_entity_glove(token, nearby_word_dict=None, **kwargs): # NOTE: Deprecated if token['ner'] not in ('PERSON', 'LOCATION', 'ORGANIZATION', 'MISC'): return None w = token['word'].lower() if w == token['word']: return None # Only do capitalized words if w not in nearby_word_dict: return None new_words = [] for x in nearby_word_dict[w][1:3]: if token['word'] == w.upper(): new_words.append(x['word'].upper()) else: new_words.append(x['word'].title()) return new_words def alter_entity_type(token, **kwargs): pos = token['pos'] ner = token['ner'] word = token['word'] is_abbrev = word == word.upper() and not word == word.lower() if token['pos'] not in ( 'JJ', 'JJR', 'JJS', 'NN', 'NNS', 'NNP', 'NNPS', 'RB', 'RBR', 'RBS', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ'): # Don't alter non-content words return None if ner == 'PERSON': return ['Jackson'] elif ner == 'LOCATION': return ['Berlin'] elif ner == 'ORGANIZATION': if is_abbrev: return ['UNICEF'] return ['Acme'] elif ner == 'MISC': return ['Neptune'] elif ner == 'NNP': if is_abbrev: return ['XKCD'] return ['Dalek'] elif pos == 'NNPS': return ['Daleks'] return None def alter_wordnet_antonyms(token, **kwargs): if token['pos'] not in POS_TO_WORDNET: return None w = token['word'].lower() wn_pos = POS_TO_WORDNET[token['pos']] synsets = wn.synsets(w, wn_pos) if not synsets: return None synset = synsets[0] antonyms = [] for lem in synset.lemmas(): if lem.antonyms(): for a in lem.antonyms(): new_word = a.name() if '_' in a.name(): continue antonyms.append(new_word) return antonyms SPECIAL_ALTERATIONS = { 'States': 'Kingdom', 'US': 'UK', 'U.S': 'U.K.', 'U.S.': 'U.K.', 'UK': 'US', 'U.K.': 'U.S.', 'U.K': 'U.S.', 'largest': 'smallest', 'smallest': 'largest', 'highest': 'lowest', 'lowest': 'highest', 'May': 'April', 'Peyton': 'Trevor', } DO_NOT_ALTER = ['many', 'such', 'few', 'much', 'other', 'same', 'general', 'type', 'record', 'kind', 'sort', 'part', 'form', 'terms', 'use', 'place', 'way', 'old', 'young', 'bowl', 'united', 'one', 'likely', 'different', 'square', 'war', 'republic', 'doctor', 'color'] BAD_ALTERATIONS = ['mx2004', 'planet', 'u.s.', 'Http://Www.Co.Mo.Md.Us'] HIGH_CONF_ALTER_RULES = collections.OrderedDict([ ('special', alter_special), ('wn_antonyms', alter_wordnet_antonyms), ('nearbyNum', alter_nearby(['CD'], 
ignore_pos=True)), ('nearbyProperNoun', alter_nearby(['NNP', 'NNPS'])), ('nearbyProperNounIgnorePos', alter_nearby(['NNP', 'NNPS'], ignore_pos=True)), ('nearbyEntityNouns', alter_nearby(['NN', 'NNS'], is_ner=True)), ('nearbyEntityJJ', alter_nearby(['JJ', 'JJR', 'JJS'], is_ner=True)), ('entityType', alter_entity_type), #('entity_glove', alter_entity_glove), ]) ALL_ALTER_RULES = collections.OrderedDict(list(HIGH_CONF_ALTER_RULES.items()) + [ ('nearbyAdj', alter_nearby(['JJ', 'JJR', 'JJS'])), ('nearbyNoun', alter_nearby(['NN', 'NNS'])), #('nearbyNoun', alter_nearby(['NN', 'NNS'], ignore_pos=True)), ])
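A hedged sketch of how a single alteration rule is applied to one CoreNLP-style token; the token dict is hand-written, and the empty `nearby_word_dict`/`postag_dict` stand in for the GloVe-neighbour and POS-tag resources the generator normally loads.

```
# Sketch only: token fields mimic CoreNLP output; resource dicts are left empty here.
from adversarialnlp.generators.addsent.rules.alteration_rules import (
    HIGH_CONF_ALTER_RULES, alter_wordnet_antonyms)

token = {'word': 'largest', 'originalText': 'largest', 'lemma': 'large',
         'pos': 'JJS', 'ner': 'O', 'before': ' '}

print(alter_wordnet_antonyms(token))      # WordNet antonyms of the adjective, if any

for name, rule in HIGH_CONF_ALTER_RULES.items():
    suggestions = rule(token, nearby_word_dict={}, postag_dict={})
    if suggestions:
        print(name, suggestions)          # e.g. special ['smallest']
```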
6
0
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/rules/answer_rules.py
import math from adversarialnlp.generators.addsent.utils import rejoin MONTHS = ['january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november', 'december'] def ans_number(a, tokens, q, **kwargs): out_toks = [] seen_num = False for t in tokens: ner = t['ner'] pos = t['pos'] w = t['word'] out_tok = {'before': t['before']} # Split on dashes leftover = '' dash_toks = w.split('-') if len(dash_toks) > 1: w = dash_toks[0] leftover = '-'.join(dash_toks[1:]) # Try to get a number out value = None if w != '%': # Percent sign should just pass through try: value = float(w.replace(',', '')) except: try: norm_ner = t['normalizedNER'] if norm_ner[0] in ('%', '>', '<'): norm_ner = norm_ner[1:] value = float(norm_ner) except: pass if not value and ( ner == 'NUMBER' or (ner == 'PERCENT' and pos == 'CD')): # Force this to be a number anyways value = 10 if value: if math.isinf(value) or math.isnan(value): value = 9001 seen_num = True if w in ('thousand', 'million', 'billion', 'trillion'): if w == 'thousand': new_val = 'million' else: new_val = 'thousand' else: if value < 2500 and value > 1000: new_val = str(value - 75) else: # Change leading digit if value == int(value): val_chars = list('%d' % value) else: val_chars = list('%g' % value) c = val_chars[0] for i in range(len(val_chars)): c = val_chars[i] if c >= '0' and c <= '9': val_chars[i] = str(max((int(c) + 5) % 10, 1)) break new_val = ''.join(val_chars) if leftover: new_val = '%s-%s' % (new_val, leftover) out_tok['originalText'] = new_val else: out_tok['originalText'] = t['originalText'] out_toks.append(out_tok) if seen_num: return rejoin(out_toks).strip() else: return None def ans_date(a, tokens, q, **kwargs): out_toks = [] if not all(t['ner'] == 'DATE' for t in tokens): return None for t in tokens: if t['pos'] == 'CD' or t['word'].isdigit(): try: value = int(t['word']) except: value = 10 # fallback if value > 50: new_val = str(value - 25) # Year else: # Day of month if value > 15: new_val = str(value - 11) else: new_val = str(value + 11) else: if t['word'].lower() in MONTHS: m_ind = MONTHS.index(t['word'].lower()) new_val = MONTHS[(m_ind + 6) % 12].title() else: # Give up new_val = t['originalText'] out_toks.append({'before': t['before'], 'originalText': new_val}) new_ans = rejoin(out_toks).strip() if new_ans == a['text']: return None return new_ans def ans_entity_full(ner_tag, new_ans): """Returns a function that yields new_ans iff every token has |ner_tag|.""" def func(a, tokens, q, **kwargs): for t in tokens: if t['ner'] != ner_tag: return None return new_ans return func def ans_abbrev(new_ans): def func(a, tokens, q, **kwargs): s = a['text'] if s == s.upper() and s != s.lower(): return new_ans return None return func def ans_match_wh(wh_word, new_ans): """Returns a function that yields new_ans if the question starts with |wh_word|.""" def func(a, tokens, q, **kwargs): if q.lower().startswith(wh_word + ' '): return new_ans return None return func def ans_pos(pos, new_ans, end=False, add_dt=False): """Returns a function that yields new_ans if the first/last token has |pos|.""" def func(a, tokens, q, determiner, **kwargs): if end: t = tokens[-1] else: t = tokens[0] if t['pos'] != pos: return None if add_dt and determiner: return '%s %s' % (determiner, new_ans) return new_ans return func def ans_catch_all(new_ans): def func(a, tokens, q, **kwargs): return new_ans return func ANSWER_RULES = [ ('date', ans_date), ('number', ans_number), ('ner_person', ans_entity_full('PERSON', 'Jeff Dean')), ('ner_location', 
ans_entity_full('LOCATION', 'Chicago')), ('ner_organization', ans_entity_full('ORGANIZATION', 'Stark Industries')), ('ner_misc', ans_entity_full('MISC', 'Jupiter')), ('abbrev', ans_abbrev('LSTM')), ('wh_who', ans_match_wh('who', 'Jeff Dean')), ('wh_when', ans_match_wh('when', '1956')), ('wh_where', ans_match_wh('where', 'Chicago')), ('wh_how_many', ans_match_wh('how many', '42')), # Starts with verb ('pos_begin_vb', ans_pos('VB', 'learn')), ('pos_end_vbd', ans_pos('VBD', 'learned')), ('pos_end_vbg', ans_pos('VBG', 'learning')), ('pos_end_vbp', ans_pos('VBP', 'learns')), ('pos_end_vbz', ans_pos('VBZ', 'learns')), # Ends with some POS tag ('pos_end_nn', ans_pos('NN', 'hamster', end=True, add_dt=True)), ('pos_end_nnp', ans_pos('NNP', 'Central Park', end=True, add_dt=True)), ('pos_end_nns', ans_pos('NNS', 'hamsters', end=True, add_dt=True)), ('pos_end_nnps', ans_pos('NNPS', 'Kew Gardens', end=True, add_dt=True)), ('pos_end_jj', ans_pos('JJ', 'deep', end=True)), ('pos_end_jjr', ans_pos('JJR', 'deeper', end=True)), ('pos_end_jjs', ans_pos('JJS', 'deepest', end=True)), ('pos_end_rb', ans_pos('RB', 'silently', end=True)), ('pos_end_vbg', ans_pos('VBG', 'learning', end=True)), ('catch_all', ans_catch_all('aliens')), ]
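The table above is consumed in order and the first rule that returns a non-empty string wins; a hedged sketch of that loop with a hand-written CoreNLP-style token (the question, answer, and token values are illustrative, not taken from the dataset):

```
# Sketch only: the token dict mimics CoreNLP output for a single-token answer span.
from adversarialnlp.generators.addsent.rules.answer_rules import ANSWER_RULES

answer = {'text': 'Jacksonville', 'answer_start': 0}
tokens = [{'word': 'Jacksonville', 'originalText': 'Jacksonville', 'before': '',
           'pos': 'NNP', 'ner': 'LOCATION'}]
question = 'Where do many single-person households live?'

for name, rule in ANSWER_RULES:
    fake_answer = rule(answer, tokens, question, determiner='')
    if fake_answer:
        print(name, '->', fake_answer)    # e.g. ner_location -> Chicago
        break
```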
7
0
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/rules/conversion_rules.py
from pattern import en as patten CONST_PARSE_MACROS = { '$Noun': '$NP/$NN/$NNS/$NNP/$NNPS', '$Verb': '$VB/$VBD/$VBP/$VBZ', '$Part': '$VBN/$VG', '$Be': 'is/are/was/were', '$Do': "do/did/does/don't/didn't/doesn't", '$WHP': '$WHADJP/$WHADVP/$WHNP/$WHPP', } # Map to pattern.en aliases # http://www.clips.ua.ac.be/pages/pattern-en#conjugation POS_TO_PATTERN = { 'vb': 'inf', # Infinitive 'vbp': '1sg', # non-3rd-person singular present 'vbz': '3sg', # 3rd-person singular present 'vbg': 'part', # gerund or present participle 'vbd': 'p', # past 'vbn': 'ppart', # past participle } # Tenses prioritized by likelihood of arising PATTERN_TENSES = ['inf', '3sg', 'p', 'part', 'ppart', '1sg'] def _check_match(node, pattern_tok): if pattern_tok in CONST_PARSE_MACROS: pattern_tok = CONST_PARSE_MACROS[pattern_tok] if ':' in pattern_tok: # ':' means you match the LHS category and start with something on the right lhs, rhs = pattern_tok.split(':') match_lhs = _check_match(node, lhs) if not match_lhs: return False phrase = node.get_phrase().lower() retval = any(phrase.startswith(w) for w in rhs.split('/')) return retval elif '/' in pattern_tok: return any(_check_match(node, t) for t in pattern_tok.split('/')) return ((pattern_tok.startswith('$') and pattern_tok[1:] == node.tag) or (node.word and pattern_tok.lower() == node.word.lower())) def _recursive_match_pattern(pattern_toks, stack, matches): """Recursively try to match a pattern, greedily.""" if len(matches) == len(pattern_toks): # We matched everything in the pattern; also need stack to be empty return len(stack) == 0 if len(stack) == 0: return False cur_tok = pattern_toks[len(matches)] node = stack.pop() # See if we match the current token at this level is_match = _check_match(node, cur_tok) if is_match: cur_num_matches = len(matches) matches.append(node) new_stack = list(stack) success = _recursive_match_pattern(pattern_toks, new_stack, matches) if success: return True # Backtrack while len(matches) > cur_num_matches: matches.pop() # Recurse to children if not node.children: return False # No children to recurse on, we failed stack.extend(node.children[::-1]) # Leftmost children should be popped first return _recursive_match_pattern(pattern_toks, stack, matches) def match_pattern(pattern, const_parse): pattern_toks = pattern.split(' ') whole_phrase = const_parse.get_phrase() if whole_phrase.endswith('?') or whole_phrase.endswith('.'): # Match trailing punctuation as needed pattern_toks.append(whole_phrase[-1]) matches = [] success = _recursive_match_pattern(pattern_toks, [const_parse], matches) if success: return matches else: return None def run_postprocessing(s, rules, all_args): rule_list = rules.split(',') for rule in rule_list: if rule == 'lower': s = s.lower() elif rule.startswith('tense-'): ind = int(rule[6:]) orig_vb = all_args[ind] tenses = patten.tenses(orig_vb) for tense in PATTERN_TENSES: # Prioritize by PATTERN_TENSES if tense in tenses: break else: # Default to first tense tense = PATTERN_TENSES[0] s = patten.conjugate(s, tense) elif rule in POS_TO_PATTERN: s = patten.conjugate(s, POS_TO_PATTERN[rule]) return s def convert_whp(node, q, a, tokens, quiet=False): if node.tag in ('WHNP', 'WHADJP', 'WHADVP', 'WHPP'): # Apply WHP rules cur_phrase = node.get_phrase() cur_tokens = tokens[node.get_start_index():node.get_end_index()] for r in WHP_RULES: phrase = r.convert(cur_phrase, a, cur_tokens, node, run_fix_style=False) if phrase: if not quiet: print(f" WHP Rule '{r.name}': {phrase}") return phrase return None ### Rules for converting questions 
into declarative sentences def fix_style(s): """Minor, general style fixes for questions.""" s = s.replace('?', '') # Delete question marks anywhere in sentence. s = s.strip(' .') if s[0] == s[0].lower(): s = s[0].upper() + s[1:] return s + '.' class ConversionRule(object): def convert(self, q, a, tokens, const_parse, run_fix_style=True): raise NotImplementedError class ConstituencyRule(ConversionRule): """A rule for converting question to sentence based on constituency parse.""" def __init__(self, in_pattern, out_pattern, postproc=None): self.in_pattern = in_pattern # e.g. "where did $NP $VP" self.out_pattern = out_pattern #unicode(out_pattern) # e.g. "{1} did {2} at {0}." Answer is always 0 self.name = in_pattern if postproc: self.postproc = postproc else: self.postproc = {} def convert(self, q, a, tokens, const_parse, run_fix_style=True) -> str: pattern_toks = self.in_pattern.split(' ') # Don't care about trailing punctuation match = match_pattern(self.in_pattern, const_parse) appended_clause = False if not match: # Try adding a PP at the beginning appended_clause = True new_pattern = '$PP , ' + self.in_pattern pattern_toks = new_pattern.split(' ') match = match_pattern(new_pattern, const_parse) if not match: # Try adding an SBAR at the beginning new_pattern = '$SBAR , ' + self.in_pattern pattern_toks = new_pattern.split(' ') match = match_pattern(new_pattern, const_parse) if not match: return None appended_clause_match = None fmt_args = [a] for t, m in zip(pattern_toks, match): if t.startswith('$') or '/' in t: # First check if it's a WHP phrase = convert_whp(m, q, a, tokens) if not phrase: phrase = m.get_phrase() fmt_args.append(phrase) if appended_clause: appended_clause_match = fmt_args[1] fmt_args = [a] + fmt_args[2:] for i in range(len(fmt_args)): if i in self.postproc: # Run postprocessing filters fmt_args[i] = run_postprocessing(fmt_args[i], self.postproc[i], fmt_args) output = self.gen_output(fmt_args) if appended_clause: output = appended_clause_match + ', ' + output if run_fix_style: output = fix_style(output) return output def gen_output(self, fmt_args): """By default, use self.out_pattern. 
Can be overridden.""" return self.out_pattern.format(*fmt_args) class ReplaceRule(ConversionRule): """A simple rule that replaces some tokens with the answer.""" def __init__(self, target, replacement='{}', start=False): self.target = target self.replacement = replacement #unicode(replacement) self.name = 'replace(%s)' % target self.start = start def convert(self, q, a, tokens, const_parse, run_fix_style=True): t_toks = self.target.split(' ') q_toks = q.rstrip('?.').split(' ') replacement_text = self.replacement.format(a) for i in range(len(q_toks)): if self.start and i != 0: continue if ' '.join(q_toks[i:i + len(t_toks)]).rstrip(',').lower() == self.target: begin = q_toks[:i] end = q_toks[i + len(t_toks):] output = ' '.join(begin + [replacement_text] + end) if run_fix_style: output = fix_style(output) return output return None class FindWHPRule(ConversionRule): """A rule that looks for $WHP's from right to left and does replacements.""" name = 'FindWHP' def _recursive_convert(self, node, q, a, tokens, found_whp): if node.word: return node.word, found_whp if not found_whp: whp_phrase = convert_whp(node, q, a, tokens) if whp_phrase: return whp_phrase, True child_phrases = [] for c in node.children[::-1]: c_phrase, found_whp = self._recursive_convert(c, q, a, tokens, found_whp) child_phrases.append(c_phrase) out_toks = [] for i, p in enumerate(child_phrases[::-1]): if i == 0 or p.startswith("'"): out_toks.append(p) else: out_toks.append(' ' + p) return ''.join(out_toks), found_whp def convert(self, q, a, tokens, const_parse, run_fix_style=True): out_phrase, found_whp = self._recursive_convert(const_parse, q, a, tokens, False) if found_whp: if run_fix_style: out_phrase = fix_style(out_phrase) return out_phrase return None class AnswerRule(ConversionRule): """Just return the answer.""" name = 'AnswerRule' def convert(self, q, a, tokens, const_parse, run_fix_style=True): return a CONVERSION_RULES = [ # Special rules ConstituencyRule('$WHP:what $Be $NP called that $VP', '{2} that {3} {1} called {1}'), # What type of X #ConstituencyRule("$WHP:what/which type/sort/kind/group of $NP/$Noun $Be $NP", '{5} {4} a {1} {3}'), #ConstituencyRule("$WHP:what/which type/sort/kind/group of $NP/$Noun $Be $VP", '{1} {3} {4} {5}'), #ConstituencyRule("$WHP:what/which type/sort/kind/group of $NP $VP", '{1} {3} {4}'), # How $JJ ConstituencyRule('how $JJ $Be $NP $IN $NP', '{3} {2} {0} {1} {4} {5}'), ConstituencyRule('how $JJ $Be $NP $SBAR', '{3} {2} {0} {1} {4}'), ConstituencyRule('how $JJ $Be $NP', '{3} {2} {0} {1}'), # When/where $Verb ConstituencyRule('$WHP:when/where $Do $NP', '{3} occurred in {1}'), ConstituencyRule('$WHP:when/where $Do $NP $Verb', '{3} {4} in {1}', {4: 'tense-2'}), ConstituencyRule('$WHP:when/where $Do $NP $Verb $NP/$PP', '{3} {4} {5} in {1}', {4: 'tense-2'}), ConstituencyRule('$WHP:when/where $Do $NP $Verb $NP $PP', '{3} {4} {5} {6} in {1}', {4: 'tense-2'}), ConstituencyRule('$WHP:when/where $Be $NP', '{3} {2} in {1}'), ConstituencyRule('$WHP:when/where $Verb $NP $VP/$ADJP', '{3} {2} {4} in {1}'), # What/who/how $Do ConstituencyRule("$WHP:what/which/who $Do $NP do", '{3} {1}', {0: 'tense-2'}), ConstituencyRule("$WHP:what/which/who/how $Do $NP $Verb", '{3} {4} {1}', {4: 'tense-2'}), ConstituencyRule("$WHP:what/which/who $Do $NP $Verb $IN/$NP", '{3} {4} {5} {1}', {4: 'tense-2', 0: 'vbg'}), ConstituencyRule("$WHP:what/which/who $Do $NP $Verb $PP", '{3} {4} {1} {5}', {4: 'tense-2', 0: 'vbg'}), ConstituencyRule("$WHP:what/which/who $Do $NP $Verb $NP $VP", '{3} {4} {5} {6} {1}', {4: 'tense-2'}), 
ConstituencyRule("$WHP:what/which/who $Do $NP $Verb to $VB", '{3} {4} to {5} {1}', {4: 'tense-2'}), ConstituencyRule("$WHP:what/which/who $Do $NP $Verb to $VB $VP", '{3} {4} to {5} {1} {6}', {4: 'tense-2'}), ConstituencyRule("$WHP:what/which/who/how $Do $NP $Verb $NP $IN $VP", '{3} {4} {5} {6} {1} {7}', {4: 'tense-2'}), ConstituencyRule("$WHP:what/which/who/how $Do $NP $Verb $PP/$S/$VP/$SBAR/$SQ", '{3} {4} {1} {5}', {4: 'tense-2'}), ConstituencyRule("$WHP:what/which/who/how $Do $NP $Verb $PP $PP/$S/$VP/$SBAR", '{3} {4} {1} {5} {6}', {4: 'tense-2'}), # What/who/how $Be # Watch out for things that end in a preposition ConstituencyRule("$WHP:what/which/who $Be/$MD $NP of $NP $Verb/$Part $IN", '{3} of {4} {2} {5} {6} {1}'), ConstituencyRule("$WHP:what/which/who $Be/$MD $NP $NP $IN", '{3} {2} {4} {5} {1}'), ConstituencyRule("$WHP:what/which/who $Be/$MD $NP $VP/$IN", '{3} {2} {4} {1}'), ConstituencyRule("$WHP:what/which/who $Be/$MD $NP $IN $NP/$VP", '{1} {2} {3} {4} {5}'), ConstituencyRule('$WHP:what/which/who $Be/$MD $NP $Verb $PP', '{3} {2} {4} {1} {5}'), ConstituencyRule('$WHP:what/which/who $Be/$MD $NP/$VP/$PP', '{1} {2} {3}'), ConstituencyRule("$WHP:how $Be/$MD $NP $VP", '{3} {2} {4} by {1}'), # What/who $Verb ConstituencyRule("$WHP:what/which/who $VP", '{1} {2}'), # $IN what/which $NP ConstituencyRule('$IN what/which $NP $Do $NP $Verb $NP', '{5} {6} {7} {1} the {3} of {0}', {1: 'lower', 6: 'tense-4'}), ConstituencyRule('$IN what/which $NP $Be $NP $VP/$ADJP', '{5} {4} {6} {1} the {3} of {0}', {1: 'lower'}), ConstituencyRule('$IN what/which $NP $Verb $NP/$ADJP $VP', '{5} {4} {6} {1} the {3} of {0}', {1: 'lower'}), FindWHPRule(), ] # Rules for going from WHP to an answer constituent WHP_RULES = [ # WHPP rules ConstituencyRule('$IN what/which type/sort/kind/group of $NP/$Noun', '{1} {0} {4}'), ConstituencyRule('$IN what/which type/sort/kind/group of $NP/$Noun $PP', '{1} {0} {4} {5}'), ConstituencyRule('$IN what/which $NP', '{1} the {3} of {0}'), ConstituencyRule('$IN $WP/$WDT', '{1} {0}'), # what/which ConstituencyRule('what/which type/sort/kind/group of $NP/$Noun', '{0} {3}'), ConstituencyRule('what/which type/sort/kind/group of $NP/$Noun $PP', '{0} {3} {4}'), ConstituencyRule('what/which $NP', 'the {2} of {0}'), # How many ConstituencyRule('how many/much $NP', '{0} {2}'), # Replace ReplaceRule('what'), ReplaceRule('who'), ReplaceRule('how many'), ReplaceRule('how much'), ReplaceRule('which'), ReplaceRule('where'), ReplaceRule('when'), ReplaceRule('why'), ReplaceRule('how'), # Just give the answer AnswerRule(), ]
8
0
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent
hf_public_repos/adversarialnlp/adversarialnlp/generators/addsent/rules/__init__.py
from .answer_rules import ANSWER_RULES from .alteration_rules import (HIGH_CONF_ALTER_RULES, ALL_ALTER_RULES, DO_NOT_ALTER, BAD_ALTERATIONS) from .conversion_rules import CONVERSION_RULES
9
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_utils.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import pickle import tempfile import unittest import warnings from collections import UserDict, namedtuple from typing import NamedTuple, Optional from unittest.mock import Mock, patch import numpy as np import pytest import torch from torch import nn from accelerate.big_modeling import cpu_offload_with_hook from accelerate.hooks import attach_align_device_hook, remove_hook_from_module from accelerate.state import PartialState from accelerate.test_utils.testing import ( require_huggingface_suite, require_non_cpu, require_non_torch_xla, require_torch_min_version, require_tpu, require_triton, torch_device, ) from accelerate.test_utils.training import RegressionModel from accelerate.utils import ( CannotPadNestedTensorWarning, check_os_kernel, clear_environment, convert_dict_to_env_variables, convert_outputs_to_fp32, convert_to_fp32, extract_model_from_parallel, find_device, has_offloaded_params, is_torch_xla_available, listify, pad_across_processes, pad_input_tensors, patch_environment, purge_accelerate_environment, recursively_apply, save, send_to_device, ) from accelerate.utils.operations import is_namedtuple if is_torch_xla_available(): import torch_xla.distributed.spmd as xs import torch_xla.runtime as xr from torch_xla.experimental.spmd_fully_sharded_data_parallel import SpmdFullyShardedDataParallel as FSDPv2 ExampleNamedTuple = namedtuple("ExampleNamedTuple", "a b c") class UtilsTester(unittest.TestCase): def setUp(self): # logging requires initialized state PartialState() def test_send_to_device(self): tensor = torch.randn(5, 2) device = torch.device(f"{torch_device}:0") result1 = send_to_device(tensor, device) assert torch.equal(result1.cpu(), tensor) result2 = send_to_device((tensor, [tensor, tensor], 1), device) assert isinstance(result2, tuple) assert torch.equal(result2[0].cpu(), tensor) assert isinstance(result2[1], list) assert torch.equal(result2[1][0].cpu(), tensor) assert torch.equal(result2[1][1].cpu(), tensor) assert result2[2] == 1 result2 = send_to_device({"a": tensor, "b": [tensor, tensor], "c": 1}, device) assert isinstance(result2, dict) assert torch.equal(result2["a"].cpu(), tensor) assert isinstance(result2["b"], list) assert torch.equal(result2["b"][0].cpu(), tensor) assert torch.equal(result2["b"][1].cpu(), tensor) assert result2["c"] == 1 result3 = send_to_device(ExampleNamedTuple(a=tensor, b=[tensor, tensor], c=1), device) assert isinstance(result3, ExampleNamedTuple) assert torch.equal(result3.a.cpu(), tensor) assert isinstance(result3.b, list) assert torch.equal(result3.b[0].cpu(), tensor) assert torch.equal(result3.b[1].cpu(), tensor) assert result3.c == 1 result4 = send_to_device(UserDict({"a": tensor, "b": [tensor, tensor], "c": 1}), device) assert isinstance(result4, UserDict) assert torch.equal(result4["a"].cpu(), tensor) assert isinstance(result4["b"], list) assert torch.equal(result4["b"][0].cpu(), tensor) assert torch.equal(result4["b"][1].cpu(), 
tensor) assert result4["c"] == 1 def test_honor_type(self): with self.assertRaises(TypeError) as cm: _ = recursively_apply(torch.tensor, (torch.tensor(1), 1), error_on_other_type=True) assert ( str(cm.exception) == "Unsupported types (<class 'int'>) passed to `tensor`. Only nested list/tuple/dicts of objects that are valid for `is_torch_tensor` should be passed." ) def test_listify(self): tensor = torch.tensor([1, 2, 3, 4, 5]) assert listify(tensor) == [1, 2, 3, 4, 5] tensor = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) assert listify(tensor) == [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]] tensor = torch.tensor([[[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]], [[11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]]) assert listify(tensor) == [[[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]], [[11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]] def test_patch_environment(self): with patch_environment(aa=1, BB=2): assert os.environ.get("AA") == "1" assert os.environ.get("BB") == "2" assert "AA" not in os.environ assert "BB" not in os.environ def test_patch_environment_key_exists(self): # check that patch_environment correctly restores pre-existing env vars with patch_environment(aa=1, BB=2): assert os.environ.get("AA") == "1" assert os.environ.get("BB") == "2" with patch_environment(Aa=10, bb="20", cC=30): assert os.environ.get("AA") == "10" assert os.environ.get("BB") == "20" assert os.environ.get("CC") == "30" assert os.environ.get("AA") == "1" assert os.environ.get("BB") == "2" assert "CC" not in os.environ assert "AA" not in os.environ assert "BB" not in os.environ assert "CC" not in os.environ def test_patch_environment_restores_on_error(self): # we need to find an upper-case envvar # because `patch_environment upper-cases all keys... key, orig_value = next(kv for kv in os.environ.items() if kv[0].isupper()) new_value = f"{orig_value}_foofoofoo" with pytest.raises(RuntimeError), patch_environment(**{key: new_value}): assert os.environ[key] == os.getenv(key) == new_value # noqa: TID251 raise RuntimeError("Oopsy daisy!") assert os.environ[key] == os.getenv(key) == orig_value # noqa: TID251 def test_clear_environment(self): key, value = os.environ.copy().popitem() with pytest.raises(RuntimeError), clear_environment(): assert key not in os.environ assert not os.getenv(key) # test the environment is actually cleared # noqa: TID251 raise RuntimeError("Oopsy daisy!") # Test values are restored assert os.getenv(key) == os.environ[key] == value # noqa: TID251 def test_can_undo_convert_outputs(self): model = RegressionModel() model._original_forward = model.forward model.forward = convert_outputs_to_fp32(model.forward) model = extract_model_from_parallel(model, keep_fp32_wrapper=False) _ = pickle.dumps(model) @require_non_cpu def test_can_undo_fp16_conversion(self): model = RegressionModel() model._original_forward = model.forward model.forward = torch.autocast(device_type=torch_device, dtype=torch.float16)(model.forward) model.forward = convert_outputs_to_fp32(model.forward) model = extract_model_from_parallel(model, keep_fp32_wrapper=False) _ = pickle.dumps(model) @require_triton @require_non_cpu @require_torch_min_version(version="2.0") def test_dynamo(self): model = RegressionModel() model._original_forward = model.forward model.forward = torch.autocast(device_type=torch_device, dtype=torch.float16)(model.forward) model.forward = convert_outputs_to_fp32(model.forward) model.forward = torch.compile(model.forward, backend="inductor") inputs = torch.randn(4, 10).to(torch_device) _ = model(inputs) def test_extract_model(self): model = 
RegressionModel() # could also do a test with DistributedDataParallel, but difficult to run on CPU or single GPU distributed_model = torch.nn.parallel.DataParallel(model) model_unwrapped = extract_model_from_parallel(distributed_model) assert model == model_unwrapped @require_tpu @require_huggingface_suite def test_extract_model_recursive_fsdpv2(self): # Specifically tests for FSDPv2 extraction # reported in https://github.com/huggingface/transformers/pull/29780 xr.use_spmd() from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("gpt2") orig_state_dict_keys = list(model.state_dict().keys()) num_devices = xr.global_runtime_device_count() # Set environment for FSDPv2 to be active xs.set_global_mesh(xs.Mesh(np.array(range(num_devices)), (num_devices, 1), axis_names=("fsdp", "tensor"))) def nested_wrap(model): layer = model.wte wrapped_layer = FSDPv2(layer) model.wte = wrapped_layer return model wrapped_model = nested_wrap(model) unwrapped_model = extract_model_from_parallel(wrapped_model, recursive=True) unwrapped_state_dict_keys = list(unwrapped_model.state_dict().keys()) for original_key, new_key in zip(orig_state_dict_keys, unwrapped_state_dict_keys): assert original_key == new_key, f"Keys did not align: {original_key} != {new_key}" @require_torch_min_version(version="2.0") def test_dynamo_extract_model(self): model = RegressionModel() compiled_model = torch.compile(model) # could also do a test with DistributedDataParallel, but difficult to run on CPU or single GPU distributed_model = torch.nn.parallel.DataParallel(model) distributed_compiled_model = torch.compile(distributed_model) compiled_model_unwrapped = extract_model_from_parallel(distributed_compiled_model) assert compiled_model._orig_mod == compiled_model_unwrapped._orig_mod def test_find_device(self): assert find_device([1, "a", torch.tensor([1, 2, 3])]) == torch.device("cpu") assert find_device({"a": 1, "b": torch.tensor([1, 2, 3])}) == torch.device("cpu") assert find_device([1, "a"]) is None def test_check_os_kernel_no_warning_when_release_gt_min(self): # min version is 5.5 with patch("platform.uname", return_value=Mock(release="5.15.0-35-generic", system="Linux")): with warnings.catch_warnings(record=True) as w: check_os_kernel() assert len(w) == 0 def test_check_os_kernel_no_warning_when_not_linux(self): # system must be Linux with patch("platform.uname", return_value=Mock(release="5.4.0-35-generic", system="Darwin")): with warnings.catch_warnings(record=True) as w: check_os_kernel() assert len(w) == 0 def test_check_os_kernel_warning_when_release_lt_min(self): # min version is 5.5 with patch("platform.uname", return_value=Mock(release="5.4.0-35-generic", system="Linux")): with self.assertLogs() as ctx: check_os_kernel() assert len(ctx.records) == 1 assert ctx.records[0].levelname == "WARNING" assert "5.4.0" in ctx.records[0].msg assert "5.5.0" in ctx.records[0].msg @require_non_torch_xla def test_save_safetensor_shared_memory(self): class Model(nn.Module): def __init__(self): super().__init__() self.a = nn.Linear(100, 100) self.b = self.a def forward(self, x): return self.b(self.a(x)) model = Model() with tempfile.TemporaryDirectory() as tmp_dir: save_path = os.path.join(tmp_dir, "model.safetensors") with self.assertLogs(level="WARNING") as log: save(model.state_dict(), save_path, safe_serialization=True) assert len(log.records) == 1 assert "Removed shared tensor" in log.output[0] @require_torch_min_version(version="1.12") def test_pad_across_processes(self): from torch.nested import 
nested_tensor nt = nested_tensor([[1, 2, 3], [1], [1, 2]]) with self.assertWarns(CannotPadNestedTensorWarning): nt2 = pad_across_processes(nt) assert nt is nt2 # Basic functionality tensor = torch.randn(4, 3, 100) padded_tensor = pad_across_processes(tensor, dim=-1) assert padded_tensor.shape[-1] == 100 # dim = -4 is out of bounds padded_tensor = pad_across_processes(tensor, dim=-4) assert padded_tensor is tensor def test_slice_and_concatenate(self): # First base case: 2 processes, batch size of 1 num_processes = 2 batch_size = 1 batch = torch.rand(batch_size, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 2 items now assert result.shape == torch.Size([2, 4]) # Second base case: 2 processes, batch size of 3 num_processes = 2 batch_size = 3 batch = torch.rand(batch_size, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 4 items now assert result.shape == torch.Size([4, 4]) # Third base case: 3 processes, batch size of 4 num_processes = 3 batch_size = 4 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 6 items now assert result.shape == torch.Size([6, 4, 4]) # Fourth base case: 4 processes, batch size of 3 num_processes = 4 batch_size = 3 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 4 items now assert result.shape == torch.Size([4, 4, 4]) # Fifth base case: 6 processes, batch size of 4 num_processes = 6 batch_size = 4 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 6 items now assert result.shape == torch.Size([6, 4, 4]) # Sixth base case: 6 processes, batch size of 1 num_processes = 6 batch_size = 1 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 6 items now assert result.shape == torch.Size([6, 4, 4]) # Seventh base case: 6 processes, batch size of 2 num_processes = 6 batch_size = 2 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 6 items now assert result.shape == torch.Size([6, 4, 4]) # Eighth base case: 6 processes, batch size of 61 num_processes = 6 batch_size = 61 batch = torch.rand(batch_size, 4, 4) result = pad_input_tensors(batch, batch_size, num_processes) # We should expect there to be 66 items now assert result.shape == torch.Size([66, 4, 4]) def test_send_to_device_compiles(self): compiled_send_to_device = torch.compile(send_to_device, fullgraph=True) compiled_send_to_device(torch.zeros([1], dtype=torch.bfloat16), "cpu") def test_convert_to_fp32(self): compiled_convert_to_fp32 = torch.compile(convert_to_fp32, fullgraph=True) compiled_convert_to_fp32(torch.zeros([1], dtype=torch.bfloat16)) def test_named_tuples(self): class QuantTensorBase(NamedTuple): value: torch.Tensor scale: Optional[torch.Tensor] zero_point: Optional[torch.Tensor] class Second(QuantTensorBase): pass a = QuantTensorBase(torch.tensor(1.0), None, None) b = Second(torch.tensor(1.0), None, None) point = namedtuple("Point", ["x", "y"]) p = point(11, y=22) self.assertTrue(is_namedtuple(a)) self.assertTrue(is_namedtuple(b)) self.assertTrue(is_namedtuple(p)) self.assertFalse(is_namedtuple((1, 2))) self.assertFalse(is_namedtuple("hey")) self.assertFalse(is_namedtuple(object())) def test_convert_dict_to_env_variables(self): env = 
{"ACCELERATE_DEBUG_MODE": "1", "BAD_ENV_NAME": "<mything", "OTHER_ENV": "2"} with self.assertLogs("accelerate.utils.environment", level="WARNING"): valid_env_items = convert_dict_to_env_variables(env) assert valid_env_items == ["ACCELERATE_DEBUG_MODE=1\n", "OTHER_ENV=2\n"] def test_has_offloaded_params(self): model = RegressionModel() assert not has_offloaded_params(model) attach_align_device_hook(model, offload=False) assert not has_offloaded_params(model) remove_hook_from_module(model) model, _ = cpu_offload_with_hook(model) assert not has_offloaded_params(model) remove_hook_from_module(model) attach_align_device_hook(model, offload=True) assert has_offloaded_params(model) def set_dummy_accelerate_env_var(): """Set an accelerate env var This class emulates the behavior of, for instance, transformers.TrainingArguments, which is allowed to set accelerate env vars but does not clean them up. E.g. TrainingArguments(fp16=True, output_dir="/tmp/test") leaves ACCELERATE_MIXED_PRECISION=fp16 as an env var. """ os.environ["ACCELERATE_SOME_ENV_VAR"] = "true" @purge_accelerate_environment class MyUnittest(unittest.TestCase): def test_purge_env_vars_unittest_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_unittest_2(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @unittest.skipIf(False, "dummy unittest wrapper") @purge_accelerate_environment @unittest.skipUnless(True, "dummy unittest wrapper") class MyUnittestWithDecorators(unittest.TestCase): def test_purge_env_vars_unittest_with_wrapper_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_unittest_with_wrapper_2(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @unittest.skipIf(False, "dummy unittest wrapper") def test_purge_env_vars_unittest_with_wrapper_3(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @unittest.skipIf(True, "this is always skipped") def test_purge_env_vars_unittest_with_wrapper_4(self): # ensure that unittest markers still do their job assert False @purge_accelerate_environment class _BaseCls(unittest.TestCase): def test_purge_env_vars_unittest_with_inheritance_3(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ class MyUnittestWithInheritance(_BaseCls): def test_purge_env_vars_unittest_with_inheritance_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_unittest_with_inheritance_2(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @purge_accelerate_environment class TestMyPytest: def test_purge_env_vars_pytest_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_pytest_2(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @pytest.fixture def dummy_fixture(): pass @pytest.mark.skipif(False, reason="dummy pytest wrapper") @pytest.mark.usefixtures("dummy_fixture") @purge_accelerate_environment @pytest.mark.skipif(False, reason="dummy pytest wrapper") @pytest.mark.usefixtures("dummy_fixture") class TestPytestWithWrapper: def test_purge_env_vars_pytest_with_wrapper_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_pytest_with_wrapper_2(self): assert 
"ACCELERATE_SOME_ENV_VAR" not in os.environ @pytest.mark.skipif(False, reason="dummy pytest wrapper") @pytest.mark.usefixtures("dummy_fixture") def test_purge_env_vars_pytest_with_wrapper_3(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @pytest.mark.skipif(True, reason="this is always skipped") def test_purge_env_vars_pytest_with_wrapper_4_should_be_skipped(self): # ensure that pytest markers still do their job assert False @purge_accelerate_environment class _PytestBaseCls: def test_purge_env_vars_pytest_with_inheritance_3(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ class TestPytestWithInheritance(_PytestBaseCls): def test_purge_env_vars_pytest_with_inheritance_1(self): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_pytest_with_inheritance_2(self): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ @purge_accelerate_environment def test_purge_env_vars_standalone_1(): os.environ.pop("ACCELERATE_SOME_ENV_VAR", None) set_dummy_accelerate_env_var() assert "ACCELERATE_SOME_ENV_VAR" in os.environ def test_purge_env_vars_standalone_2(): assert "ACCELERATE_SOME_ENV_VAR" not in os.environ def test_purge_env_vars_restores_previous_values(): # Ensure that purge_accelerate_environment restores values of previous accelerate env vars and does not delete # untouched env vars. @purge_accelerate_environment def dummy_func(): os.environ["ACCELERATE_SOME_ENV_VAR"] = "456" os.environ["ACCELERATE_SOME_ENV_VAR"] = "1" os.environ["ACCELERATE_ANOTHER_ENV_VAR"] = "2" dummy_func() assert os.environ["ACCELERATE_SOME_ENV_VAR"] == "1" assert os.environ["ACCELERATE_ANOTHER_ENV_VAR"] == "2" del os.environ["ACCELERATE_SOME_ENV_VAR"] del os.environ["ACCELERATE_ANOTHER_ENV_VAR"]
0
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_accelerator.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import json import os import pickle import tempfile import time from unittest.mock import patch import psutil import torch from parameterized import parameterized from torch.utils.data import DataLoader, TensorDataset from accelerate import DistributedType, infer_auto_device_map, init_empty_weights, load_checkpoint_and_dispatch from accelerate.accelerator import Accelerator from accelerate.data_loader import DataLoaderDispatcher, DataLoaderShard, skip_first_batches from accelerate.state import GradientState, PartialState from accelerate.test_utils import ( require_bnb, require_multi_gpu, require_non_cpu, require_transformer_engine, slow, torch_device, ) from accelerate.test_utils.testing import ( AccelerateTestCase, require_cuda, require_non_torch_xla, require_torchdata_stateful_dataloader, ) from accelerate.utils import FP8RecipeKwargs, is_torchdata_stateful_dataloader_available, patch_environment from accelerate.utils.dataclasses import DataLoaderConfiguration from accelerate.utils.modeling import get_state_dict_from_offload, load_checkpoint_in_model from accelerate.utils.random import set_seed if is_torchdata_stateful_dataloader_available(): from torchdata.stateful_dataloader import StatefulDataLoader class ModelWithTiedWeights(torch.nn.Module): def __init__(self): super().__init__() self.linear1 = torch.nn.Linear(2, 4) self.linear2 = torch.nn.Linear(4, 2) self.linear2.weight = self.linear1.weight self.linear2.bias = self.linear1.bias def forward(self, x): return self.linear2(self.linear1(x)) def create_components(tied_weights=False): model = ModelWithTiedWeights() if tied_weights else torch.nn.Linear(2, 4) optimizer = torch.optim.AdamW(model.parameters(), lr=1.0) scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=2, epochs=1) train_dl = DataLoader(TensorDataset(torch.tensor([1, 2, 3]))) valid_dl = DataLoader(TensorDataset(torch.tensor([4, 5, 6]))) return model, optimizer, scheduler, train_dl, valid_dl class ModelForTest(torch.nn.Module): def __init__(self): super().__init__() self.linear1 = torch.nn.Linear(3, 4) self.batchnorm = torch.nn.BatchNorm1d(4) self.linear2 = torch.nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) def create_dataloaders_for_test(batch_size=3, n_train_batches: int = 12, n_valid_batches: int = 2, num_workers=0): "Generates a tuple of dummy DataLoaders to test with" def get_dataset(n_batches): x = torch.randn(batch_size * n_batches, 3) y = torch.randn(batch_size * n_batches, 5) return TensorDataset(x, y) train_dataset = get_dataset(n_train_batches) valid_dataset = get_dataset(n_valid_batches) train_dataloader = DataLoader(train_dataset, batch_size=batch_size, num_workers=num_workers) valid_dataloader = DataLoader(valid_dataset, batch_size=batch_size, num_workers=num_workers) return (train_dataloader, valid_dataloader) def get_signature(model): return sum(param.abs().sum().item() 
for param in model.parameters()) def load_random_weights(model): if isinstance(model, torch.nn.Linear): state = torch.nn.Linear(*tuple(model.weight.T.shape)).state_dict() elif isinstance(model, ModelWithTiedWeights): state = ModelWithTiedWeights().state_dict() model.load_state_dict(state) def parameterized_custom_name_func(func, param_num, param): # customize the test name generator function as we want both params to appear in the sub-test # name, as by default it shows only the first param param_based_name = "use_safetensors" if param.args[0] is True else "use_pytorch" if len(param.args) > 1: param_based_name += "_tied_weights" if param.args[1] is True else "" if len(param.args) > 2: param_based_name += f"_num_workers_{param.args[2]}" if len(param.args) > 3: param_based_name += "_dispatch_batches" if param.args[3] is True else "_no_dispatch_batches" return f"{func.__name__}_{param_based_name}" class AcceleratorTester(AccelerateTestCase): def test_partial_state_after_reset(self): # Verifies that custom getattr errors will be thrown # if the state is reset, but only if trying to # get expected attributes state = PartialState() assert state.num_processes > 0 with self.assertRaises(AttributeError) as cm: state.someotherthing assert "'PartialState' object has no attribute" in str(cm.exception) assert "This happens if `PartialState._reset_state()`" not in str(cm.exception) with self.assertRaises(AttributeError) as cm: state._reset_state() state.num_processes assert "`PartialState` object has no attribute" in str(cm.exception) assert "This happens if `PartialState._reset_state()`" in str(cm.exception) state.someotherthing = "MyValue" assert state.someotherthing == "MyValue" def test_accelerator_state_after_reset(self): # Verifies that custom getattr errors will be thrown # if the state is reset, but only if trying to # get expected attributes accelerator = Accelerator() assert accelerator.num_processes > 0 with self.assertRaises(AttributeError) as cm: accelerator.state.someotherthing assert "'AcceleratorState' object has no attribute" in str(cm.exception) assert "This happens if `AcceleratorState._reset_state()`" not in str(cm.exception) with self.assertRaises(AttributeError) as cm: accelerator.state._reset_state() accelerator.num_processes assert "`AcceleratorState` object has no attribute" in str(cm.exception) assert "This happens if `AcceleratorState._reset_state()`" in str(cm.exception) accelerator.state.someotherthing = "MyValue" assert accelerator.state.someotherthing == "MyValue" @require_non_cpu def test_accelerator_can_be_reinstantiated(self): _ = Accelerator() assert PartialState._shared_state["_cpu"] is False assert PartialState._shared_state["device"].type in ["cuda", "mps", "npu", "xpu", "xla"] with self.assertRaises(ValueError): _ = Accelerator(cpu=True) @require_cuda def test_setting_cpu_affinity(self): with patch_environment(accelerate_cpu_affinity=1, accelerate_debug_mode=1): with self.assertLogs("accelerate.utils.environment", level="INFO") as cm: _ = Accelerator() assert any("Assigning" in log for log in cm.output) assert any("cpu cores to process" in log for log in cm.output) def test_mutable_states(self): accelerator = Accelerator() state = GradientState() assert state.num_steps == 1 accelerator.gradient_accumulation_steps = 4 assert state.num_steps == 4 assert state.sync_gradients is True accelerator.sync_gradients = False assert state.sync_gradients is False GradientState._reset_state() def test_prepared_objects_are_referenced(self): accelerator = Accelerator() model, 
optimizer, scheduler, train_dl, valid_dl = create_components() ( prepared_model, prepared_optimizer, prepared_scheduler, prepared_train_dl, prepared_valid_dl, ) = accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl) assert prepared_model in accelerator._models assert prepared_optimizer in accelerator._optimizers assert prepared_scheduler in accelerator._schedulers assert prepared_train_dl in accelerator._dataloaders assert prepared_valid_dl in accelerator._dataloaders def test_free_memory_dereferences_prepared_components(self): accelerator = Accelerator() # Free up refs with empty_cache() and gc.collect() accelerator.free_memory() model, optimizer, scheduler, train_dl, valid_dl = create_components() free_cpu_ram_before = psutil.virtual_memory().available // 1024 // 1024 model, optimizer, scheduler, train_dl, valid_dl = accelerator.prepare( model, optimizer, scheduler, train_dl, valid_dl ) # Short sleep here makes this test more reliable time.sleep(1e-3) model, optimizer, scheduler, train_dl, valid_dl = accelerator.free_memory( model, optimizer, scheduler, train_dl, valid_dl ) free_cpu_ram_after = psutil.virtual_memory().available // 1024 // 1024 assert len(accelerator._models) == 0 assert len(accelerator._optimizers) == 0 assert len(accelerator._schedulers) == 0 assert len(accelerator._dataloaders) == 0 # The less-than comes *specifically* from CUDA CPU things/won't be present on CPU builds assert free_cpu_ram_after <= free_cpu_ram_before @require_non_torch_xla def test_env_var_device(self): """Tests that setting the torch device with ACCELERATE_TORCH_DEVICE overrides default device.""" PartialState._reset_state() # Mock torch.cuda.set_device to avoid an exception as the device doesn't exist def noop(*args, **kwargs): pass with patch("torch.cuda.set_device", noop), patch_environment(ACCELERATE_TORCH_DEVICE="cuda:64"): accelerator = Accelerator() assert str(accelerator.state.device) == "cuda:64" @parameterized.expand([(True, True), (True, False), (False, False)], name_func=parameterized_custom_name_func) def test_save_load_model(self, use_safetensors, tied_weights): accelerator = Accelerator() model, optimizer, scheduler, train_dl, valid_dl = create_components(tied_weights) accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl) model_signature = get_signature(model) with tempfile.TemporaryDirectory() as tmpdirname: accelerator.save_state(tmpdirname, safe_serialization=use_safetensors) # make sure random weights don't match load_random_weights(model) assert abs(model_signature - get_signature(model)) > 1e-3 # make sure loaded weights match accelerator.load_state(tmpdirname) assert abs(model_signature - get_signature(model)) < 1e-3 @parameterized.expand([True, False], name_func=parameterized_custom_name_func) def test_save_model(self, use_safetensors): accelerator = Accelerator() model = torch.nn.Linear(10, 10) model_signature = get_signature(model) with tempfile.TemporaryDirectory() as tmpdirname: accelerator.save_model(model, tmpdirname, safe_serialization=use_safetensors) # make sure loaded weights match load_checkpoint_in_model(model, tmpdirname) assert abs(model_signature - get_signature(model)) < 1e-3 @parameterized.expand([True, False], name_func=parameterized_custom_name_func) def test_save_sharded_model(self, use_safetensors): accelerator = Accelerator() inputs = torch.randn(3, 3) model = ModelForTest() expected = model(inputs) with tempfile.TemporaryDirectory() as tmpdirname: # By setting it to 100, we will split the model int 3 shards 
accelerator.save_model(model, tmpdirname, safe_serialization=use_safetensors, max_shard_size=100) # make sure loaded weights match load_checkpoint_in_model(model, tmpdirname) output = model(inputs) assert torch.allclose(expected, output, atol=1e-5) @parameterized.expand([True, False], name_func=parameterized_custom_name_func) def test_save_model_offload(self, use_safetensors): accelerator = Accelerator() device_map = {"linear1": "cpu", "batchnorm": "disk", "linear2": "cpu"} inputs = torch.randn(3, 3) model = ModelForTest() expected = model(inputs) with tempfile.TemporaryDirectory() as tmp_dir: accelerator.save_model(model, tmp_dir, safe_serialization=use_safetensors) # load and save offloaded model load_checkpoint_and_dispatch(model, tmp_dir, device_map=device_map, offload_folder=tmp_dir) accelerator.save_model(model, tmp_dir, safe_serialization=use_safetensors) # load weights that were saved from the offloaded model load_checkpoint_and_dispatch(model, tmp_dir) output = model(inputs) assert torch.allclose(expected, output, atol=1e-5) @parameterized.expand([True, False], name_func=parameterized_custom_name_func) @require_non_cpu def test_get_state_dict_from_offload(self, use_safetensors): accelerator = Accelerator() device_map = {"linear1": "cpu", "batchnorm": "disk", "linear2": "disk"} model = ModelForTest() offloaded_layer_weight = model.linear2.weight with tempfile.TemporaryDirectory() as tmp_dir: accelerator.save_model(model, tmp_dir, safe_serialization=use_safetensors) # load model with offloaded layers load_checkpoint_and_dispatch(model, tmp_dir, device_map=device_map, offload_folder=tmp_dir) cpu_onloaded_layer = get_state_dict_from_offload( model.linear2, "linear2.weight", {"linear2.weight": ""}, device_to_put_offload="cpu" ) device_onloaded_layer = get_state_dict_from_offload( model.linear2, "linear2.weight", {"linear2.weight": ""}, device_to_put_offload=0 ) cpu_onloaded_layer_weight = cpu_onloaded_layer["linear2.weight"] device_onloaded_layer_weight = device_onloaded_layer["linear2.weight"] assert torch.allclose(offloaded_layer_weight, cpu_onloaded_layer_weight) assert torch.allclose( offloaded_layer_weight, device_onloaded_layer_weight.to("cpu") ) # must be on the same device for torch.allclose() assert cpu_onloaded_layer_weight.device.type == "cpu" assert device_onloaded_layer_weight.device.type == torch_device @parameterized.expand([True, False], name_func=parameterized_custom_name_func) def test_save_load_model_with_hooks(self, use_safetensors): accelerator = Accelerator() model, optimizer, scheduler, train_dl, valid_dl = create_components() accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl) model_signature = get_signature(model) # saving hook def save_config(models, weights, output_dir): config = {"class_name": models[0].__class__.__name__} with open(os.path.join(output_dir, "data.json"), "w") as f: json.dump(config, f) # loading hook def load_config(models, input_dir): with open(os.path.join(input_dir, "data.json")) as f: config = json.load(f) models[0].class_name = config["class_name"] save_hook = accelerator.register_save_state_pre_hook(save_config) load_hook = accelerator.register_load_state_pre_hook(load_config) with tempfile.TemporaryDirectory() as tmpdirname: accelerator.save_state(tmpdirname, safe_serialization=use_safetensors) # make sure random weights don't match with hooks load_random_weights(model) assert abs(model_signature - get_signature(model)) > 1e-3 # random class name to verify correct one is loaded model.class_name = "random" # make sure 
loaded weights match with hooks accelerator.load_state(tmpdirname) assert abs(model_signature - get_signature(model)) < 1e-3 # model.class_name is loaded from config assert model.class_name == model.__class__.__name__ # remove hooks save_hook.remove() load_hook.remove() with tempfile.TemporaryDirectory() as tmpdirname: accelerator.save_state(tmpdirname, safe_serialization=use_safetensors) # make sure random weights don't match with hooks removed load_random_weights(model) assert abs(model_signature - get_signature(model)) > 1e-3 # random class name to verify correct one is loaded model.class_name = "random" # make sure loaded weights match with hooks removed accelerator.load_state(tmpdirname) assert abs(model_signature - get_signature(model)) < 1e-3 # model.class_name is NOT loaded from config assert model.class_name != model.__class__.__name__ def test_accelerator_none(self): """Just test that passing None to accelerator.prepare() works.""" accelerator = Accelerator() model, optimizer, scheduler, train_dl, valid_dl = create_components() dummy_obj = None # This should work model, optimizer, scheduler, train_dl, valid_dl, dummy_obj = accelerator.prepare( model, optimizer, scheduler, train_dl, valid_dl, dummy_obj ) assert dummy_obj is None def test_is_accelerator_prepared(self): """Checks that `_is_accelerate_prepared` is set properly""" accelerator = Accelerator() model, optimizer, scheduler, train_dl, valid_dl = create_components() dummy_obj = [1, 2, 3] # This should work model, optimizer, scheduler, train_dl, valid_dl, dummy_obj = accelerator.prepare( model, optimizer, scheduler, train_dl, valid_dl, dummy_obj ) assert ( getattr(dummy_obj, "_is_accelerate_prepared", False) is False ), "Dummy object should have `_is_accelerate_prepared` set to `False`" assert ( getattr(model, "_is_accelerate_prepared", False) is True ), "Model is missing `_is_accelerate_prepared` or is set to `False`" assert ( getattr(optimizer, "_is_accelerate_prepared", False) is True ), "Optimizer is missing `_is_accelerate_prepared` or is set to `False`" assert ( getattr(scheduler, "_is_accelerate_prepared", False) is True ), "Scheduler is missing `_is_accelerate_prepared` or is set to `False`" assert ( getattr(train_dl, "_is_accelerate_prepared", False) is True ), "Train Dataloader is missing `_is_accelerate_prepared` or is set to `False`" assert ( getattr(valid_dl, "_is_accelerate_prepared", False) is True ), "Valid Dataloader is missing `_is_accelerate_prepared` or is set to `False`" @require_cuda @slow @require_bnb def test_accelerator_bnb(self): """Tests that the accelerator can be used with the BNB library.""" from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", load_in_8bit=True, device_map={"": 0}, ) accelerator = Accelerator() # This should work model = accelerator.prepare(model) @require_cuda @slow @require_bnb def test_accelerator_bnb_cpu_error(self): """Tests that the accelerator can be used with the BNB library.
This should fail as we are trying to load a model that is loaded between cpu and gpu""" from transformers import AutoModelForCausalLM accelerator = Accelerator() with init_empty_weights(): model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", ) model.tie_weights() device_map = infer_auto_device_map(model) device_map["lm_head"] = "cpu" model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", device_map=device_map, load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True ) # This should not work and get value error with self.assertRaises(ValueError): model = accelerator.prepare(model) @require_non_torch_xla @slow @require_bnb @require_multi_gpu def test_accelerator_bnb_multi_device(self): """Tests that the accelerator can be used with the BNB library.""" from transformers import AutoModelForCausalLM if torch_device == "cuda": PartialState._shared_state = {"distributed_type": DistributedType.MULTI_GPU} elif torch_device == "npu": PartialState._shared_state = {"distributed_type": DistributedType.MULTI_NPU} else: raise ValueError(f"{torch_device} is not supported in test_accelerator_bnb_multi_device.") with init_empty_weights(): model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", ) model.tie_weights() device_map = infer_auto_device_map(model) device_map["lm_head"] = 1 model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", load_in_8bit=True, device_map=device_map, ) accelerator = Accelerator() # This should not work and get value error with self.assertRaises(ValueError): _ = accelerator.prepare(model) PartialState._reset_state() @require_non_torch_xla @slow @require_bnb @require_multi_gpu def test_accelerator_bnb_multi_device_no_distributed(self): """Tests that the accelerator can be used with the BNB library.""" from transformers import AutoModelForCausalLM with init_empty_weights(): model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", ) device_map = infer_auto_device_map(model) device_map["lm_head"] = 1 model = AutoModelForCausalLM.from_pretrained( "EleutherAI/gpt-neo-125m", load_in_8bit=True, device_map=device_map, ) accelerator = Accelerator() # This should work _ = accelerator.prepare(model) @require_non_cpu def test_accelerator_cpu_flag_prepare(self): model = torch.nn.Linear(10, 10) sgd = torch.optim.SGD(model.parameters(), lr=0.01) accelerator = Accelerator(cpu=True) _ = accelerator.prepare(sgd) @require_transformer_engine def test_can_unwrap_model_te(self): model, optimizer, *_ = create_components() fp8_recipe = FP8RecipeKwargs(backend="TE") accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_recipe]) inputs = torch.randn(10, 2).to(torch_device) model, optimizer = accelerator.prepare(model, optimizer) model(inputs) # sanity check that this works model = accelerator.unwrap_model(model, keep_fp32_wrapper=False) model(inputs) # check that this still works # check that pickle roundtrip works model_loaded = pickle.loads(pickle.dumps(model)) model_loaded(inputs) @require_non_cpu def test_can_unwrap_model_fp16(self): # test for a regression introduced in #872 # before the fix, after unwrapping with keep_fp32_wrapper=False, there would be the following error: # Linear.forward() missing 1 required positional argument: 'input' model = create_components()[0] accelerator = Accelerator(mixed_precision="fp16") inputs = torch.randn(10, 2).to(torch_device) model = accelerator.prepare(model) model(inputs) # sanity check that this works model = accelerator.unwrap_model(model, 
keep_fp32_wrapper=False) model(inputs) # check that this still works # check that pickle roundtrip works model_loaded = pickle.loads(pickle.dumps(model)) model_loaded(inputs) def test_can_unwrap_model(self): model = create_components()[0] accelerator = Accelerator(mixed_precision="no", cpu=True) inputs = torch.randn(10, 2) model = accelerator.prepare(model) model(inputs) # sanity check that this works model = accelerator.unwrap_model(model, keep_fp32_wrapper=False) model(inputs) # check that this still works # check that pickle roundtrip works model_loaded = pickle.loads(pickle.dumps(model)) model_loaded(inputs) @parameterized.expand([True, False]) def test_can_pickle_dataloader(self, dispatch_batches): """ Test that pickling a prepared dataloader works. """ data = torch.arange(10).to(torch_device) ds = torch.utils.data.TensorDataset(data) dl = torch.utils.data.DataLoader(ds) skip_dl = skip_first_batches(dl, 2) # Currently, StatefulDataLoader doesn't seem to support pickling, so we aren't testing that functionality # TODO: Add support for pickling StatefulDataLoader dataloader_config = DataLoaderConfiguration(dispatch_batches=dispatch_batches, use_stateful_dataloader=False) accelerator = Accelerator(dataloader_config=dataloader_config) original_dl, _ = accelerator.prepare(dl, skip_dl) if dispatch_batches: assert isinstance(original_dl, DataLoaderDispatcher) else: assert isinstance(original_dl, DataLoaderShard) prepared_model_dumps = pickle.dumps(accelerator) model_loaded = pickle.loads(prepared_model_dumps) assert len(model_loaded._dataloaders) == 2 # Assert equality of recovered and original dataloader loaded_dl = model_loaded._dataloaders[0] assert isinstance(loaded_dl, DataLoader) if dispatch_batches: assert isinstance(loaded_dl, DataLoaderDispatcher) else: assert isinstance(loaded_dl, DataLoaderShard) assert len(loaded_dl) == len(original_dl) assert [i for i in loaded_dl] == [i for i in original_dl] # Test skip dataloader works as expected as well loaded_skip_dl = model_loaded._dataloaders[1] assert isinstance(loaded_skip_dl, DataLoader) if dispatch_batches: assert isinstance(loaded_dl, DataLoaderDispatcher) else: assert isinstance(loaded_dl, DataLoaderShard) assert len(loaded_skip_dl) == len(original_dl) - 2 assert [i for i in loaded_skip_dl] == [i for i in original_dl][2:] # Ideally would be a parameterized test which works with either stateful or non-stateful dataloaders, but dependencies are a bit awkward. 
@require_torchdata_stateful_dataloader def test_prepared_objects_are_referenced_with_stateful_dataloader(self): """Test that setting `use_stateful_dataloader=True` in `DataLoaderConfiguration` prepares a `StatefulDataLoader` object instead of a `DataLoader` object.""" dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True) accelerator = Accelerator(dataloader_config=dataloader_config) model, optimizer, scheduler, train_dl, valid_dl = create_components() ( prepared_model, prepared_optimizer, prepared_scheduler, prepared_train_dl, prepared_valid_dl, ) = accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl) assert prepared_model in accelerator._models assert prepared_optimizer in accelerator._optimizers assert prepared_scheduler in accelerator._schedulers assert prepared_train_dl in accelerator._dataloaders assert prepared_valid_dl in accelerator._dataloaders assert isinstance(prepared_train_dl, StatefulDataLoader) assert isinstance(prepared_valid_dl, StatefulDataLoader) @parameterized.expand( itertools.product([True, False], [True, False], [0, 2], [True, False]), name_func=parameterized_custom_name_func, ) @require_torchdata_stateful_dataloader def test_save_model_with_stateful_dataloader(self, use_safetensors, tied_weights, num_workers, dispatch_batches): """ Test that saving and loading a model with a stateful dataloader returns the same model, and that the dataloader's iterator is restored properly.""" set_seed(42) n_train_batches = 64 # Use enough batches to ensure we can get partial iterations on large compute dataloader_config = DataLoaderConfiguration(dispatch_batches=dispatch_batches, use_stateful_dataloader=True) accelerator = Accelerator(dataloader_config=dataloader_config) model, optimizer, scheduler, train_dl, valid_dl = create_components(tied_weights) train_dl, valid_dl = create_dataloaders_for_test(n_train_batches=n_train_batches, num_workers=num_workers) model = ModelForTest() ( prepared_model, prepared_optimizer, prepared_scheduler, prepared_train_dl, prepared_valid_dl, ) = accelerator.prepare(model, optimizer, scheduler, train_dl, valid_dl) assert isinstance(prepared_train_dl, StatefulDataLoader) assert isinstance(prepared_valid_dl, StatefulDataLoader) # Perform 3 training iterations to ensure the dataloader's iterator is advanced num_batches_to_skip = 3 model.train() untrained_batches = [] with tempfile.TemporaryDirectory() as tmpdirname: for step, batch in enumerate(prepared_train_dl): x, y = batch outputs = prepared_model(x) loss = torch.nn.functional.mse_loss(outputs, y) accelerator.backward(loss) prepared_optimizer.step() prepared_scheduler.step() prepared_optimizer.zero_grad() if step == num_batches_to_skip - 1: # Save the state once we've gone through a few batches accelerator.save_state(f"{tmpdirname}/state", safe_serialization=use_safetensors) if step >= num_batches_to_skip: untrained_batches.append(batch) not_skipped_batches = accelerator.gather(untrained_batches) # We then unwrap the trained model unwrapped_model = accelerator.unwrap_model(prepared_model) original_linear1 = unwrapped_model.linear1.weight.clone() original_batchnorm = unwrapped_model.batchnorm.weight.clone() original_linear2 = unwrapped_model.linear2.weight.clone() # Resume the state accelerator.load_state(f"{tmpdirname}/state") # Train this to the end of the DataLoader batches_seen_with_loaded_dl = 0 for batch in prepared_train_dl: x, y = batch outputs = prepared_model(x) loss = torch.nn.functional.mse_loss(outputs, y) accelerator.backward(loss) 
prepared_optimizer.step() prepared_scheduler.step() prepared_optimizer.zero_grad() batches_seen_with_loaded_dl += 1 unwrapped_model_2 = accelerator.unwrap_model(prepared_model) new_linear1 = unwrapped_model_2.linear1.weight new_batchnorm = unwrapped_model_2.batchnorm.weight new_linear2 = unwrapped_model_2.linear2.weight # Assert equalities assert batches_seen_with_loaded_dl == len(not_skipped_batches) assert torch.allclose(original_linear1, new_linear1) assert torch.allclose(original_batchnorm, new_batchnorm) assert torch.allclose(original_linear2, new_linear2)
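# --- Illustrative sketch (added for clarity; not part of the original test suite) ---
# The save/load tests above all exercise the same user-facing flow: prepare the objects,
# train, save_state, then load_state to resume. A minimal CPU-only sketch of that flow,
# assuming a toy model and random data (the model, shapes, and learning rate here are
# arbitrary choices, not fixtures from the tests):
from tempfile import TemporaryDirectory

import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator(cpu=True)
model = torch.nn.Linear(3, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = DataLoader(TensorDataset(torch.randn(12, 3), torch.randn(12, 5)), batch_size=3)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for x, y in dataloader:
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()

with TemporaryDirectory() as ckpt_dir:
    accelerator.save_state(ckpt_dir)  # model/optimizer/RNG state written to ckpt_dir
    accelerator.load_state(ckpt_dir)  # the prepared objects are restored in place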
1
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_offload.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from tempfile import TemporaryDirectory import torch import torch.nn as nn from accelerate.utils import ( OffloadedWeightsLoader, extract_submodules_state_dict, load_offloaded_weight, offload_state_dict, offload_weight, ) class ModelForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class OffloadTester(unittest.TestCase): def test_offload_state_dict(self): model = ModelForTest() with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, model.state_dict()) index_file = os.path.join(tmp_dir, "index.json") assert os.path.isfile(index_file) # TODO: add tests on what is inside the index for key in ["linear1.weight", "linear1.bias", "linear2.weight", "linear2.bias"]: weight_file = os.path.join(tmp_dir, f"{key}.dat") assert os.path.isfile(weight_file) # TODO: add tests on the fact weights are properly loaded def test_offload_weight(self): dtypes = [torch.float16, torch.float32, torch.bfloat16] for dtype in dtypes: weight = torch.randn(2, 3, dtype=dtype) with TemporaryDirectory() as tmp_dir: index = offload_weight(weight, "weight", tmp_dir, {}) weight_file = os.path.join(tmp_dir, "weight.dat") assert os.path.isfile(weight_file) assert index == {"weight": {"shape": [2, 3], "dtype": str(dtype).split(".")[1]}} new_weight = load_offloaded_weight(weight_file, index["weight"]) assert torch.equal(weight, new_weight) def test_offload_weights_loader(self): model = ModelForTest() state_dict = model.state_dict() cpu_part = {k: v for k, v in state_dict.items() if "linear2" not in k} disk_part = {k: v for k, v in state_dict.items() if "linear2" in k} with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, disk_part) weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) cpu_part = {k: v for k, v in state_dict.items() if "weight" in k} disk_part = {k: v for k, v in state_dict.items() if "weight" not in k} with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, disk_part) weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, state_dict) # Duplicates are removed weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) def 
test_extract_submodules_state_dict(self): state_dict = {"a.1": 0, "a.10": 1, "a.2": 2} extracted = extract_submodules_state_dict(state_dict, ["a.1", "a.2"]) assert extracted == {"a.1": 0, "a.2": 2} state_dict = {"a.1.a": 0, "a.10.a": 1, "a.2.a": 2} extracted = extract_submodules_state_dict(state_dict, ["a.1", "a.2"]) assert extracted == {"a.1.a": 0, "a.2.a": 2}
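# --- Illustrative sketch (added for clarity; not part of the original test suite) ---
# The round-trip exercised in test_offload_weight above, written out as a small standalone
# script (the tensor shape and the "weight" name are arbitrary choices):
import os
from tempfile import TemporaryDirectory

import torch

from accelerate.utils import load_offloaded_weight, offload_weight

weight = torch.randn(2, 3)
with TemporaryDirectory() as tmp_dir:
    # offload_weight writes the tensor to <name>.dat and returns an index entry
    # describing its shape and dtype, as asserted in test_offload_weight.
    index = offload_weight(weight, "weight", tmp_dir, {})
    reloaded = load_offloaded_weight(os.path.join(tmp_dir, "weight.dat"), index["weight"])
    assert torch.equal(weight, reloaded)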
2
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_cli.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from pathlib import Path from unittest.mock import patch import torch from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError import accelerate.commands.test as accelerate_test_cmd from accelerate.commands.config.config_args import BaseConfig, ClusterConfig, SageMakerConfig, load_config_from_file from accelerate.commands.estimate import estimate_command, estimate_command_parser, gather_data from accelerate.commands.launch import _validate_launch_command, launch_command, launch_command_parser from accelerate.commands.tpu import tpu_command_launcher, tpu_command_parser from accelerate.test_utils.testing import ( capture_call_output, path_in_accelerate_package, require_multi_device, require_timm, require_transformers, run_command, ) from accelerate.utils import patch_environment from accelerate.utils.launch import prepare_simple_launcher_cmd_env class AccelerateLauncherTester(unittest.TestCase): """ Test case for verifying the `accelerate launch` CLI operates correctly. If a `default_config.yaml` file is located in the cache it will temporarily move it for the duration of the tests. """ test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_cli.py") notebook_launcher_path = path_in_accelerate_package("test_utils", "scripts", "test_notebook.py") config_folder = Path.home() / ".cache/huggingface/accelerate" config_file = "default_config.yaml" config_path = config_folder / config_file changed_path = config_folder / "_default_config.yaml" test_config_path = Path("tests/test_configs") parser = launch_command_parser() @classmethod def setUpClass(cls): if cls.config_path.is_file(): cls.config_path.rename(cls.changed_path) @classmethod def tearDownClass(cls): if cls.changed_path.is_file(): cls.changed_path.rename(cls.config_path) def test_no_config(self): args = ["--monitor_interval", "0.1", str(self.test_file_path)] if torch.cuda.is_available() and (torch.cuda.device_count() > 1): args = ["--multi_gpu"] + args args = self.parser.parse_args(["--monitor_interval", "0.1", str(self.test_file_path)]) launch_command(args) def test_config_compatibility(self): invalid_configs = ["fp8", "invalid", "mpi", "sagemaker"] for config in sorted(self.test_config_path.glob("**/*.yaml")): if any(invalid_config in str(config) for invalid_config in invalid_configs): continue with self.subTest(config_file=config): args = self.parser.parse_args(["--config_file", str(config), str(self.test_file_path)]) launch_command(args) def test_invalid_keys(self): config_path = self.test_config_path / "invalid_keys.yaml" with self.assertRaises( ValueError, msg="The config file at 'invalid_keys.yaml' had unknown keys ('another_invalid_key', 'invalid_key')", ): args = self.parser.parse_args(["--config_file", str(config_path), str(self.test_file_path)]) launch_command(args) def test_accelerate_test(self): args = accelerate_test_cmd.test_command_parser().parse_args([]) 
accelerate_test_cmd.test_command(args) @require_multi_device def test_notebook_launcher(self): """ This test checks a variety of situations and scenarios with the `notebook_launcher` """ cmd = ["python", self.notebook_launcher_path] with patch_environment(omp_num_threads=1, accelerate_num_processes=2): run_command(cmd) def test_mpi_multicpu_config_cmd(self): """ Parses a launch command with a test file and the 0_28_0_mpi.yaml config. Tests getting the command and environment vars and verifies the mpirun command arg values. """ mpi_config_path = str(self.test_config_path / "0_28_0_mpi.yaml") test_file_arg = "--cpu" with patch("sys.argv", ["accelerate", str(self.test_file_path), test_file_arg]): parser = launch_command_parser() args = parser.parse_args() args.config_file = mpi_config_path args, _, _ = _validate_launch_command(args) # Mock out the check for mpirun version to simulate Intel MPI with patch("accelerate.utils.launch.which", return_value=True): with patch("accelerate.utils.launch.subprocess.check_output", return_value=b"Intel MPI"): cmd, _ = prepare_simple_launcher_cmd_env(args) # Verify the mpirun command args expected_mpirun_cmd = ["mpirun", "-f", "/home/user/hostfile", "-ppn", "4", "-n", "16"] self.assertGreater(len(cmd), len(expected_mpirun_cmd)) generated_mpirun_cmd = cmd[0 : len(expected_mpirun_cmd)] self.assertEqual(expected_mpirun_cmd, generated_mpirun_cmd) # Verify the python script and args in the mpirun command python_script_cmd = cmd[len(expected_mpirun_cmd) :] self.assertEqual(len(python_script_cmd), 3) self.assertEqual(python_script_cmd[1], str(self.test_file_path)) self.assertEqual(python_script_cmd[2], test_file_arg) class LaunchArgTester(unittest.TestCase): """ Test cases revolving around the CLI wrappers """ parser = launch_command_parser() def test_hyphen(self): # Try a little from each cluster args = ["--config-file", "test.yaml", "test.py"] result = self.parser.parse_args(args) assert result.config_file == "test.yaml" assert result.multi_gpu is False args = ["--multi-gpu", "--num-processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.num_processes == 4 # And use a mix args = ["--multi-gpu", "--use-deepspeed", "--use-fsdp", "--num_processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.use_deepspeed is True assert result.use_fsdp is True assert result.num_processes == 4 def test_underscore(self): # Try a little from each cluster args = ["--config_file", "test.yaml", "test.py"] result = self.parser.parse_args(args) assert result.config_file == "test.yaml" args = ["--multi_gpu", "--num_processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.num_processes == 4 # And use a mix args = ["--multi_gpu", "--use_deepspeed", "--use_fsdp", "--num-processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.use_deepspeed is True assert result.use_fsdp is True assert result.num_processes == 4 def test_duplicate_entities(self): help_return = self.parser.format_help() args = self.parser.parse_args(["test.py"]) for arg in args.__dict__: if "_" in arg: bad_arg = f'--{arg.replace("_", "-")}' # Need an exception for `num-processes` since it's in the docstring if bad_arg == "--num-processes": assert help_return.count(bad_arg) == 1, f"Found {bad_arg} in `accelerate launch -h`" else: assert bad_arg not in help_return, f"Found {bad_arg} in `accelerate launch -h`" class 
ClusterConfigTester(unittest.TestCase): """ Test case for verifying the config dataclasses work """ test_config_path = Path("tests/test_configs") def test_base_config(self): # Tests that all the dataclasses can be initialized config = BaseConfig( compute_environment="LOCAL_MACHINE", distributed_type="NO", mixed_precision="fp16", debug=False, use_cpu=False, ) assert config.compute_environment == "LOCAL_MACHINE" assert config.distributed_type == "NO" assert config.mixed_precision == "fp16" assert config.debug is False def test_cluster_config(self): # First normally config = ClusterConfig( compute_environment="LOCAL_MACHINE", distributed_type="NO", mixed_precision="fp16", num_processes=2, debug=False, use_cpu=False, ) assert config.compute_environment == "LOCAL_MACHINE" assert config.distributed_type == "NO" assert config.mixed_precision == "fp16" assert config.debug is False # Then check with other compute environments config = ClusterConfig( compute_environment="LOCAL_MACHINE", distributed_type="MULTI_GPU", mixed_precision="fp16", debug=False, num_processes=2, enable_cpu_affinity=True, use_cpu=False, ) assert config.distributed_type == "MULTI_GPU" assert config.num_processes == 2 assert config.enable_cpu_affinity is True def test_sagemaker_config(self): config = SageMakerConfig( compute_environment="AMAZON_SAGEMAKER", distributed_type="NO", mixed_precision="fp16", debug=False, use_cpu=False, ec2_instance_type="MY_TYPE", iam_role_name="MY_ROLE", ) assert config.compute_environment == "AMAZON_SAGEMAKER" assert config.ec2_instance_type == "MY_TYPE" assert config.iam_role_name == "MY_ROLE" config = load_config_from_file(str(self.test_config_path / "0_30_0_sagemaker.yaml")) class TpuConfigTester(unittest.TestCase): """ Test case for verifying the `accelerate tpu-config` CLI passes the right `gcloud` command. 
""" tpu_name = "test-tpu" tpu_zone = "us-central1-a" command = "ls" cmd = ["accelerate", "tpu-config"] base_output = "cd /usr/share" command_file = "tests/test_samples/test_command_file.sh" gcloud = "Running gcloud compute tpus tpu-vm ssh" def setUp(self): self.parser = tpu_command_parser() def test_base(self): args = self.parser.parse_args( ["--command", self.command, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug"] ) output = capture_call_output(tpu_command_launcher, args) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_base_backward_compatibility(self): args = self.parser.parse_args( [ "--config_file", "tests/test_configs/0_12_0.yaml", "--command", self.command, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug", ] ) output = capture_call_output(tpu_command_launcher, args) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_with_config_file(self): args = self.parser.parse_args(["--config_file", "tests/test_configs/latest.yaml", "--debug"]) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_with_config_file_and_command(self): args = self.parser.parse_args( ["--config_file", "tests/test_configs/latest.yaml", "--command", self.command, "--debug"] ) output = capture_call_output(tpu_command_launcher, args) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_with_config_file_and_multiple_command(self): args = self.parser.parse_args( [ "--config_file", "tests/test_configs/latest.yaml", "--command", self.command, "--command", 'echo "Hello World"', "--debug", ] ) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls; echo "Hello World" --worker all' in output ) def test_with_config_file_and_command_file(self): args = self.parser.parse_args( ["--config_file", "tests/test_configs/latest.yaml", "--command_file", self.command_file, "--debug"] ) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_with_config_file_and_command_file_backward_compatibility(self): args = self.parser.parse_args( [ "--config_file", "tests/test_configs/0_12_0.yaml", "--command_file", self.command_file, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug", ] ) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_accelerate_install(self): args = self.parser.parse_args( ["--config_file", "tests/test_configs/latest.yaml", "--install_accelerate", "--debug"] ) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate -U; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_accelerate_install_version(self): args = self.parser.parse_args( [ "--config_file", "tests/test_configs/latest.yaml", "--install_accelerate", 
"--accelerate_version", "12.0.0", "--debug", ] ) output = capture_call_output(tpu_command_launcher, args) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate==12.0.0; echo "hello world"; echo "this is a second command" --worker all' in output ) class ModelEstimatorTester(unittest.TestCase): """ Test case for checking the output of `accelerate estimate-memory` is correct. - Uses `estimate_command` when trying to catch raised errors - Uses `gather_data` when just verifying the calculations are correct """ parser = estimate_command_parser() def test_invalid_model_name(self): with self.assertRaises( RepositoryNotFoundError, msg="Repo for model `somebrokenname` does not exist on the Hub" ): args = self.parser.parse_args(["somebrokenname"]) estimate_command(args) @require_timm def test_invalid_model_name_timm(self): with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `timm` but"): args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "timm"]) estimate_command(args) @require_transformers def test_invalid_model_name_transformers(self): with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `transformers` but"): args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "transformers"]) estimate_command(args) def test_no_metadata(self): with self.assertRaises( ValueError, msg="Model `muellerzr/dummy` does not have any library metadata on the Hub" ): args = self.parser.parse_args(["muellerzr/dummy"]) estimate_command(args) def test_gated(self): with self.assertRaises( (GatedRepoError, EnvironmentError), msg="Repo for model `meta-llama/Llama-2-7b-hf` is gated or environment error occurred", ): args = self.parser.parse_args(["meta-llama/Llama-2-7b-hf"]) with patch_environment(hf_hub_disable_implicit_token="1"): estimate_command(args) @require_transformers def test_remote_code(self): # Also tests that custom `Auto` classes work args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model"]) with self.assertRaises(ValueError, msg="--trust_remote_code"): gather_data(args) # Verify it works with the flag args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model", "--trust_remote_code"]) gather_data(args) @require_transformers def test_explicit_dtypes(self): args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32", "float16"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 90669056, 433249280 # Check that full precision -> int4 is calculating correctly assert len(output) == 2, f"Output was missing a precision, expected 2 but received {len(output)}" for i, factor in enumerate([1, 2]): precision = 32 // factor precision_str = f"float{precision}" largest_layer_estimate = largest_layer / factor total_size_estimate = total_size / factor total_training_size_estimate = total_size_estimate * 4 assert precision_str == output[i][0], f"Output is missing precision `{precision_str}`" assert ( largest_layer_estimate == output[i][1] ), f"Calculation for largest layer size in `{precision_str}` is incorrect." assert ( total_size_estimate == output[i][2] ), f"Calculation for total size in `{precision_str}` is incorrect." assert total_training_size_estimate == max( output[i][3].values() ), f"Calculation for total training size in `{precision_str}` is incorrect." 
@require_transformers def test_transformers_model(self): args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 90669056, 433249280 assert ( largest_layer == output[0][1] ), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}" assert ( total_size == output[0][2] ), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}" @require_transformers def test_no_split_modules(self): # idefics-80b-instruct has ["IdeficsDecoderLayer", "IdeficsGatedCrossAttentionLayer"] args = self.parser.parse_args(["HuggingFaceM4/idefics-80b-instruct", "--dtypes", "float32"]) output = gather_data(args) # without factoring in `no_split` modules, the largest layer is 721420288 bytes assert output[0][1] != 721420288, "Largest layer calculation incorrect, did not factor in `no_split` modules." # the real answer is 3240165632 bytes assert output[0][1] == 3240165632 @require_timm def test_timm_model(self): args = self.parser.parse_args(["timm/resnet50.a1_in1k", "--library_name", "timm"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 9437184, 102441032 assert ( largest_layer == output[0][1] ), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}" assert ( total_size == output[0][2] ), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}"
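# --- Illustrative sketch (added for clarity; not part of the original test suite) ---
# test_explicit_dtypes above encodes the scaling rule the estimator is expected to follow:
# halving the precision halves both the largest-layer and total sizes, and the training
# estimate is total_size * 4 (a common rule of thumb covering weights, gradients, and
# optimizer state). Reproducing that arithmetic with the bert-base-cased numbers asserted above:
largest_layer_fp32, total_size_fp32 = 90669056, 433249280

for name, factor in [("float32", 1), ("float16", 2)]:
    largest_layer = largest_layer_fp32 // factor
    total_size = total_size_fp32 // factor
    training_size = total_size * 4
    print(f"{name}: largest layer {largest_layer} B, total {total_size} B, training ~{training_size} B")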
3
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_big_modeling.py
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import gc import logging import os import unittest from collections import OrderedDict from tempfile import TemporaryDirectory import torch import torch.nn as nn from transformers import AutoModelForCausalLM, AutoTokenizer from accelerate.big_modeling import ( cpu_offload, cpu_offload_with_hook, disk_offload, dispatch_model, init_empty_weights, init_on_device, load_checkpoint_and_dispatch, ) from accelerate.hooks import remove_hook_from_submodules from accelerate.test_utils import ( require_bnb, require_cuda, require_multi_device, require_multi_gpu, require_non_cpu, require_non_torch_xla, slow, torch_device, ) from accelerate.utils import is_torch_version, offload_state_dict logger = logging.getLogger(__name__) torch_device = f"{torch_device}:0" if torch_device != "cpu" else "cpu" class ModelForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class LinearWithNonPersistentBuffers(nn.Module): def __init__(self, in_features: int, out_features: int, bias: bool = True, device=None, dtype=None) -> None: factory_kwargs = {"device": device, "dtype": dtype} super().__init__() self.in_features = in_features self.out_features = out_features self.register_buffer("weight", torch.ones((out_features, in_features), **factory_kwargs)) if bias: self.register_buffer("bias", torch.ones(out_features, **factory_kwargs), persistent=False) else: self.register_buffer("bias", None) def forward(self, input: torch.Tensor) -> torch.Tensor: return torch.nn.functional.linear(input, self.weight, self.bias) class ModelForTestNonPersistentBuffers(nn.Module): def __init__(self): super().__init__() self.linear1 = LinearWithNonPersistentBuffers(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = LinearWithNonPersistentBuffers(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class ModelForTestCopy(nn.Module): def __init__(self, id: int): super().__init__() self.id = id self.linear1 = nn.Linear(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))), self.id class ModelForTestTiedWeights(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(4, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 4) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class BiggerModelForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(3, 4) self.linear2 = nn.Linear(4, 5) self.batchnorm = nn.BatchNorm1d(5) self.linear3 = nn.Linear(5, 6) self.linear4 = nn.Linear(6, 5) def forward(self, x): return self.linear4(self.linear3(self.batchnorm(self.linear2(self.linear1(x))))) # To test preload_module_classes class ModuleWithUnusedSubModules(nn.Module): def 
__init__(self, input_dim, output_dim): super().__init__() self.linear = nn.Linear(input_dim, output_dim) def forward(self, x): return x @ self.linear.weight.t() + self.linear.bias class ModelWithUnusedSubModulesForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = ModuleWithUnusedSubModules(3, 4) self.linear2 = ModuleWithUnusedSubModules(4, 5) self.batchnorm = nn.BatchNorm1d(5) self.linear3 = ModuleWithUnusedSubModules(5, 6) self.linear4 = ModuleWithUnusedSubModules(6, 5) def forward(self, x): return self.linear4(self.linear3(self.batchnorm(self.linear2(self.linear1(x))))) class BigModelingTester(unittest.TestCase): def test_init_empty_weights(self): # base use with init_empty_weights(): module = nn.Linear(4, 5) assert module.weight.device == torch.device("meta") # base use with buffers, they are not touched with init_empty_weights(): module = nn.BatchNorm1d(4) assert module.weight.device == torch.device("meta") assert module.running_mean.device == torch.device("cpu") # Use with include_buffers=True register_parameter_func = nn.Module.register_parameter register_buffer_func = nn.Module.register_buffer with init_empty_weights(include_buffers=True): module = nn.BatchNorm1d(4) # nn.Module.register_parameter/buffer shouldn't be changed with torch >= 2.0 if is_torch_version(">=", "2.0"): assert register_parameter_func == nn.Module.register_parameter assert register_buffer_func == nn.Module.register_buffer assert module.weight.device == torch.device("meta") assert module.running_mean.device == torch.device("meta") # Double check we didn't break PyTorch module = nn.BatchNorm1d(4) assert module.weight.device == torch.device("cpu") assert module.running_mean.device == torch.device("cpu") def test_init_empty_weights_very_large_model(self): # This is a 100 billion parameters model. with init_empty_weights(): _ = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)]) @require_non_cpu def test_init_on_device(self): device = torch.device(torch_device) with init_on_device(device): model = nn.Linear(10, 10) assert model.weight.device == device assert model.weight.device == device def test_cpu_offload(self): model = ModelForTest() x = torch.randn(2, 3) expected = model(x) device = torch.device(torch_device) cpu_offload(model, execution_device=device) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" # Clean up for next test. remove_hook_from_submodules(model) cpu_offload(model, execution_device=device, offload_buffers=True) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" def test_cpu_offload_with_unused_submodules(self): model = ModelWithUnusedSubModulesForTest() x = torch.randn(2, 3) expected = model(x) device = torch.device(torch_device) cpu_offload(model, execution_device=device, preload_module_classes=["ModuleWithUnusedSubModules"]) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" # Clean up for next test. remove_hook_from_submodules(model) cpu_offload( model, execution_device=device, offload_buffers=True, preload_module_classes=["ModuleWithUnusedSubModules"], ) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" @slow @require_non_cpu def test_cpu_offload_gpt2(self): tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = tokenizer("Hello world! 
My name is", return_tensors="pt").to(torch_device) gpt2 = AutoModelForCausalLM.from_pretrained("gpt2") cpu_offload(gpt2, execution_device=0) outputs = gpt2.generate(inputs["input_ids"]) assert ( tokenizer.decode(outputs[0].tolist()) == "Hello world! My name is Kiyoshi, and I'm a student at the University of Tokyo" ) def test_disk_offload(self): model = ModelForTest() x = torch.randn(2, 3) expected = model(x) device = torch.device(torch_device) with TemporaryDirectory() as tmp_dir: disk_offload(model, tmp_dir, execution_device=device) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" # Clean up for next test. remove_hook_from_submodules(model) with TemporaryDirectory() as tmp_dir: disk_offload(model, tmp_dir, execution_device=device, offload_buffers=True) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" def test_disk_offload_with_unused_submodules(self): model = ModelWithUnusedSubModulesForTest() x = torch.randn(2, 3) expected = model(x) device = torch.device(torch_device) with TemporaryDirectory() as tmp_dir: disk_offload( model, tmp_dir, execution_device=device, preload_module_classes=["ModuleWithUnusedSubModules"] ) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" # Clean up for next test. remove_hook_from_submodules(model) with TemporaryDirectory() as tmp_dir: disk_offload( model, tmp_dir, execution_device=device, offload_buffers=True, preload_module_classes=["ModuleWithUnusedSubModules"], ) output = model(x) assert torch.allclose(expected, output.cpu(), 1e-4, 1e-5), f"Expected: {expected}, Actual: {output.cpu()}" @slow @require_non_cpu def test_disk_offload_gpt2(self): tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = tokenizer("Hello world! My name is", return_tensors="pt").to(torch_device) gpt2 = AutoModelForCausalLM.from_pretrained("gpt2") with TemporaryDirectory() as tmp_dir: disk_offload(gpt2, tmp_dir, execution_device=0) outputs = gpt2.generate(inputs["input_ids"]) assert ( tokenizer.decode(outputs[0].tolist()) == "Hello world! My name is Kiyoshi, and I'm a student at the University of Tokyo" ) @require_non_cpu def test_dispatch_model_and_remove_hook(self): model = ModelForTest() device_map = {"linear1": "cpu", "batchnorm": "cpu", "linear2": 0} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) output = model(x) remove_hook_from_submodules(model) # need to check if we get any warning with self.assertLogs(level="WARNING") as cm: # We want to assert there are no warnings, but the 'assertLogs' method does not support that. # Therefore, we are adding a dummy warning, and then we will assert it is the only warning. 
model.to(torch_device) logger.warning("Dummy warning") self.assertEqual(len(cm.records), 1) self.assertIn( "Dummy warning", cm.records[0].message, ) output_bis = model(x.to(torch_device)) assert torch.allclose(expected, output.cpu(), atol=1e-5) assert torch.allclose(expected, output_bis.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model(self): model = ModelForTest() device_map = {"linear1": "disk", "batchnorm": "cpu", "linear2": 0} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model_with_non_persistent_buffers(self): model = ModelForTestNonPersistentBuffers() device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": "disk"} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir, offload_buffers=True) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model_tied_weights(self): model = ModelForTestTiedWeights() model.linear1.weight = model.linear2.weight device_map = {"linear1": 0, "batchnorm": 0, "linear2": 0} dispatch_model(model, device_map) assert model.linear2.weight is model.linear1.weight @require_multi_gpu def test_dispatch_model_tied_weights_memory(self): # Test that we do not duplicate tied weights at any point during dispatch_model call. torch.cuda.empty_cache() # Needed in case we run several tests in a row. model = nn.Sequential( OrderedDict( [ ("linear0", nn.Linear(5000, 5000, bias=False)), ("linear1", nn.Linear(5000, 5000, bias=False)), ("linear2", nn.Linear(5000, 5000, bias=False)), ("linear3", nn.Linear(5000, 5000, bias=False)), ("linear4", nn.Linear(5000, 5000, bias=False)), ] ) ) model.linear2.weight = model.linear0.weight model.linear3.weight = model.linear0.weight model.linear4.weight = model.linear0.weight x = torch.randn(5, 5000) with torch.no_grad(): expected = model(x) # We should need only 5000 * 5000 * 32 // 8 * 1e-6 = 100 MB on the device 0 for the four linear weights. device_map = {"linear0": 0, "linear1": 1, "linear2": 0, "linear3": 0, "linear4": 0} # Just to intialize CUDA context. a = torch.rand(5).to("cuda:0") # noqa: F841 free_memory_bytes = torch.cuda.mem_get_info("cuda:0")[0] required_memory_bytes = 5000 * 5000 * (32 // 8) # Leaving 50 MB of free memory for possible buffers, etc. n_vals = (free_memory_bytes - required_memory_bytes - int(50e6)) // (32 // 8) foo = torch.rand(n_vals, device="cuda:0") # noqa: F841 # If this does OOM: there is an issue in somewhere in dispatch_model, memory of tied weights is duplicated. try: dispatch_model(model, device_map) except torch.cuda.OutOfMemoryError as e: raise torch.cuda.OutOfMemoryError( f"OOM error in dispatch_model. This is a bug and should not happen, see test_dispatch_model_tied_weights_memory. {e}" ) except Exception as e: raise e with torch.no_grad(): output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_cuda def test_dispatch_model_tied_weights_memory_with_nested_offload_cpu(self): # Test that we do not duplicate tied weights at any point during dispatch_model call. torch.cuda.empty_cache() # Needed in case we run several tests in a row. 
class SubModule(torch.nn.Module): def __init__(self, ref_to_parameter): super().__init__() self.parameter = ref_to_parameter def forward(self, x): return x + torch.max(self.parameter) class LinearModuleAndSubModule(torch.nn.Linear): def __init__(self, in_features, out_features): super().__init__(in_features, out_features, bias=False) self.weight_submodule = SubModule(self.weight) self.weight_submodule2 = SubModule(self.weight) self.weight_submodule3 = SubModule(self.weight) self.weight_submodule4 = SubModule(self.weight) def forward(self, x): a = torch.nn.functional.linear(self.weight_submodule(x), self.weight) b = torch.nn.functional.linear(self.weight_submodule2(x), self.weight) c = torch.nn.functional.linear(self.weight_submodule3(x), self.weight) d = torch.nn.functional.linear(self.weight_submodule4(x), self.weight) return a + b + c + d class ModelWithSubmodules(torch.nn.Module): def __init__(self): super().__init__() self.compute = LinearModuleAndSubModule(5000, 5000) self.compute1 = LinearModuleAndSubModule(5000, 5000) def forward(self, x): a = self.compute(x) b = self.compute1(x) return a + b # We should need only 2 * 5000 * 5000 * 32 // 8 * 1e-6 = 200 MB on the device 0 for the whole model forward, and not 600 MB. device_map = {"compute": 0, "compute1": "cpu"} model = ModelWithSubmodules() x = torch.randn(1, 5000) with torch.no_grad(): expected = model(x) # Just to intialize CUDA context. a = torch.rand(5).to("cuda:0") # noqa: F841 free_memory_bytes = torch.cuda.mem_get_info("cuda:0")[0] required_memory_bytes = 2 * 5000 * 5000 * (32 // 8) # 200 MB # Leaving 150 MB of free memory for possible buffers, etc. n_vals = (free_memory_bytes - required_memory_bytes - int(150e6)) // (32 // 8) foo = torch.rand(n_vals, device="cuda:0") # noqa: F841 free_memory_bytes_before_dispatch = torch.cuda.mem_get_info("cuda:0")[0] dispatch_model(model, device_map) free_memory_bytes_after_dispatch = torch.cuda.mem_get_info("cuda:0")[0] assert (free_memory_bytes_after_dispatch - free_memory_bytes_before_dispatch) * 1e-6 < 130 original_pointer = model.compute1._hf_hook.weights_map["weight"].data_ptr() with torch.no_grad(): try: output = model(x) except torch.cuda.OutOfMemoryError as e: raise torch.cuda.OutOfMemoryError( f"OOM error in dispatch_model. This is a bug and should not happen, see test_dispatch_model_tied_weights_memory_with_nested_offload_cpu. {e}" ) except Exception as e: raise e assert torch.allclose(expected, output.cpu(), atol=1e-5) torch.cuda.empty_cache() free_memory_bytes_after_infer = torch.cuda.mem_get_info("cuda:0")[0] # Check that we have no more references on GPU for the offloaded tied weight. assert len(model.compute1.weight_submodule._hf_hook.tied_params_map[original_pointer]) == 0 assert len(model.compute1._hf_hook.tied_params_map[original_pointer]) == 0 assert (free_memory_bytes_after_infer - free_memory_bytes_after_dispatch) * 1e-6 < 130 # Test is flacky otherwise. del model gc.collect() # This test fails because sometimes data_ptr() of compute2.weight is the same as compute1.weight. # I checked that the values are not the same but it gives the same address. This does not happen on my local machine. @require_cuda @unittest.skip( "Flaky test, we should have enough coverage with test_dispatch_model_tied_weights_memory_with_nested_offload_cpu test" ) def test_dispatch_model_tied_weights_memory_with_nested_offload_disk(self): # Test that we do not duplicate tied weights at any point during dispatch_model call. torch.cuda.empty_cache() # Needed in case we run several tests in a row. 
class SubModule(torch.nn.Module): def __init__(self, ref_to_parameter): super().__init__() self.parameter = ref_to_parameter def forward(self, x): return x + torch.max(self.parameter) class LinearModuleAndSubModule(torch.nn.Linear): def __init__(self, in_features, out_features): super().__init__(in_features, out_features, bias=False) self.weight_submodule = SubModule(self.weight) self.weight_submodule2 = SubModule(self.weight) self.weight_submodule3 = SubModule(self.weight) self.weight_submodule4 = SubModule(self.weight) def forward(self, x): a = torch.nn.functional.linear(self.weight_submodule(x), self.weight) b = torch.nn.functional.linear(self.weight_submodule2(x), self.weight) c = torch.nn.functional.linear(self.weight_submodule3(x), self.weight) d = torch.nn.functional.linear(self.weight_submodule4(x), self.weight) return a + b + c + d class ModelWithSubmodules(torch.nn.Module): def __init__(self): super().__init__() self.compute = LinearModuleAndSubModule(5000, 5000) self.compute1 = LinearModuleAndSubModule(5000, 5000) def forward(self, x): a = self.compute(x) b = self.compute1(x) return a + b # We should need only 2 * 5000 * 5000 * 32 // 8 * 1e-6 = 200 MB on the device 0 for the whole model forward, and not 600 MB. device_map = {"compute": 0, "compute1": "disk"} model = ModelWithSubmodules() x = torch.randn(1, 5000) with torch.no_grad(): expected = model(x) # Just to intialize CUDA context. a = torch.rand(5).to("cuda:0") # noqa: F841 free_memory_bytes = torch.cuda.mem_get_info("cuda:0")[0] required_memory_bytes = 2 * 5000 * 5000 * (32 // 8) # 200 MB # Leaving 150 MB of free memory for possible buffers, etc. n_vals = (free_memory_bytes - required_memory_bytes - int(200e6)) // (32 // 8) foo = torch.rand(n_vals, device="cuda:0") # noqa: F841 free_memory_bytes_before_dispatch = torch.cuda.mem_get_info("cuda:0")[0] with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) free_memory_bytes_after_dispatch = torch.cuda.mem_get_info("cuda:0")[0] assert (free_memory_bytes_after_dispatch - free_memory_bytes_before_dispatch) * 1e-6 < 130 with torch.no_grad(): try: output = model(x) except torch.cuda.OutOfMemoryError as e: raise torch.cuda.OutOfMemoryError( f"OOM error in dispatch_model. This is a bug and should not happen, see test_dispatch_model_tied_weights_memory_with_nested_offload_disk. {e}" ) except Exception as e: raise e assert torch.allclose(expected, output.cpu(), atol=1e-5) torch.cuda.empty_cache() free_memory_bytes_after_infer = torch.cuda.mem_get_info("cuda:0")[0] # Check that we have no more references on GPU for the offloaded tied weight. n_non_empty = 0 for pointer, pointer_dict in model.compute1.weight_submodule._hf_hook.tied_params_map.items(): if len(pointer_dict) > 0: n_non_empty += 1 assert n_non_empty == 1 # `compute` layer one. n_non_empty = 0 for pointer, pointer_dict in model.compute1._hf_hook.tied_params_map.items(): if len(pointer_dict) > 0: n_non_empty += 1 assert n_non_empty == 1 # `compute` layer one. 
assert (free_memory_bytes_after_infer - free_memory_bytes_after_dispatch) * 1e-6 < 130 @require_multi_device def test_dispatch_model_multi_devices(self): model = BiggerModelForTest() device_map = {"linear1": "cpu", "linear2": "disk", "batchnorm": "cpu", "linear3": 0, "linear4": 1} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model_copy(self): original_model = ModelForTestCopy(id=1) device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 0} x = torch.randn(2, 3) expected, original_output_id = original_model(x) dispatch_model(original_model, device_map) copied_model = copy.deepcopy(original_model) copied_model.id = 2 output, copied_output_id = copied_model(x) assert original_model.id == original_output_id assert copied_model.id == copied_output_id assert copied_model.linear1.forward is not original_model.linear1.forward assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model_move_offloaded_model(self): model = ModelForTest() device_map = {"linear1": "disk", "batchnorm": "cpu", "linear2": 0} with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) with self.assertRaises(RuntimeError): model.to(0) @require_multi_device def test_dispatch_model_move_model_warning(self): model = ModelForTest() device_map = {"linear1": 0, "batchnorm": 0, "linear2": 1} with TemporaryDirectory() as tmp_dir: dispatch_model(model, device_map, offload_dir=tmp_dir) with self.assertLogs("accelerate.big_modeling", level="WARNING"): model.to("cpu") with self.assertLogs("accelerate.big_modeling", level="WARNING"): model.to(torch_device) with self.assertRaises(RuntimeError): x = torch.randn(2, 3) model(x) @slow @require_multi_device def test_dispatch_model_gpt2_on_two_devices(self): tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = tokenizer("Hello world! My name is", return_tensors="pt").to(torch_device) gpt2 = AutoModelForCausalLM.from_pretrained("gpt2") # Dispatch on GPUs 0 and 1 device_map = { "transformer.wte": 0, "transformer.wpe": 0, "transformer.ln_f": 1, "lm_head": 0, } for i in range(12): device_map[f"transformer.h.{i}"] = 0 if i <= 5 else 1 gpt2 = dispatch_model(gpt2, device_map) outputs = gpt2.generate(inputs["input_ids"]) assert ( tokenizer.decode(outputs[0].tolist()) == "Hello world! My name is Kiyoshi, and I'm a student at the University of Tokyo" ) # Dispatch with a bit of CPU offload gpt2 = AutoModelForCausalLM.from_pretrained("gpt2") for i in range(4): device_map[f"transformer.h.{i}"] = "cpu" gpt2 = dispatch_model(gpt2, device_map) outputs = gpt2.generate(inputs["input_ids"]) assert ( tokenizer.decode(outputs[0].tolist()) == "Hello world! My name is Kiyoshi, and I'm a student at the University of Tokyo" ) # Dispatch with a bit of CPU and disk offload gpt2 = AutoModelForCausalLM.from_pretrained("gpt2") for i in range(2): device_map[f"transformer.h.{i}"] = "disk" with TemporaryDirectory() as tmp_dir: state_dict = { k: p for k, p in gpt2.state_dict().items() if "transformer.h.0" in k or "transformer.h.1" in k } offload_state_dict(tmp_dir, state_dict) gpt2 = dispatch_model(gpt2, device_map, offload_dir=tmp_dir) outputs = gpt2.generate(inputs["input_ids"]) assert ( tokenizer.decode(outputs[0].tolist()) == "Hello world! 
My name is Kiyoshi, and I'm a student at the University of Tokyo" ) @require_non_cpu def test_dispatch_model_with_unused_submodules(self): model = ModelWithUnusedSubModulesForTest() device_map = {"linear1": "cpu", "linear2": "disk", "batchnorm": "cpu", "linear3": 0, "linear4": 0} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model( model, device_map, offload_dir=tmp_dir, preload_module_classes=["ModuleWithUnusedSubModules"] ) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_multi_device def test_dispatch_model_with_unused_submodules_multi_device(self): model = ModelWithUnusedSubModulesForTest() device_map = {"linear1": "cpu", "linear2": "disk", "batchnorm": "cpu", "linear3": 0, "linear4": 1} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: dispatch_model( model, device_map, offload_dir=tmp_dir, preload_module_classes=["ModuleWithUnusedSubModules"] ) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_dispatch_model_force_hooks(self): model = ModelForTest() device_map = {"": 0} x = torch.randn(2, 3) expected = model(x) dispatch_model(model, device_map, force_hooks=True) output = model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_load_checkpoint_and_dispatch(self): model = ModelForTest() device_map = {"linear1": "cpu", "batchnorm": "cpu", "linear2": 0} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: checkpoint = os.path.join(tmp_dir, "pt_model.bin") torch.save(model.state_dict(), checkpoint) new_model = ModelForTest() new_model = load_checkpoint_and_dispatch(new_model, checkpoint, device_map=device_map) # CPU-offloaded weights are on the meta device while waiting for the forward pass. assert new_model.linear1.weight.device == torch.device("meta") assert new_model.linear2.weight.device == torch.device(torch_device) output = new_model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_multi_device def test_load_checkpoint_and_dispatch_multi_device(self): model = BiggerModelForTest() device_map = {"linear1": "cpu", "linear2": "cpu", "batchnorm": 0, "linear3": 0, "linear4": 1} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: checkpoint = os.path.join(tmp_dir, "pt_model.bin") torch.save(model.state_dict(), checkpoint) new_model = BiggerModelForTest() new_model = load_checkpoint_and_dispatch(new_model, checkpoint, device_map=device_map) # CPU-offloaded weights are on the meta device while waiting for the forward pass. 
assert new_model.linear1.weight.device == torch.device("meta") assert new_model.linear2.weight.device == torch.device("meta") assert new_model.linear3.weight.device == torch.device(torch_device) assert new_model.linear4.weight.device == torch.device(torch_device.replace(":0", ":1")) output = new_model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_load_checkpoint_and_dispatch_with_unused_submodules(self): model = ModelWithUnusedSubModulesForTest() device_map = {"linear1": "cpu", "linear2": "cpu", "batchnorm": 0, "linear3": 0, "linear4": 0} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: checkpoint = os.path.join(tmp_dir, "pt_model.bin") torch.save(model.state_dict(), checkpoint) new_model = ModelWithUnusedSubModulesForTest() new_model = load_checkpoint_and_dispatch( new_model, checkpoint, device_map=device_map, preload_module_classes=["ModuleWithUnusedSubModules"] ) # CPU-offloaded weights are on the meta device while waiting for the forward pass. assert new_model.linear1.linear.weight.device == torch.device("meta") assert new_model.linear2.linear.weight.device == torch.device("meta") assert new_model.linear3.linear.weight.device == torch.device(torch_device) assert new_model.linear4.linear.weight.device == torch.device(torch_device) output = new_model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_multi_device def test_load_checkpoint_and_dispatch_multi_device_with_unused_submodules(self): model = ModelWithUnusedSubModulesForTest() device_map = {"linear1": "cpu", "linear2": "cpu", "batchnorm": 0, "linear3": 0, "linear4": 1} x = torch.randn(2, 3) expected = model(x) with TemporaryDirectory() as tmp_dir: checkpoint = os.path.join(tmp_dir, "pt_model.bin") torch.save(model.state_dict(), checkpoint) new_model = ModelWithUnusedSubModulesForTest() new_model = load_checkpoint_and_dispatch( new_model, checkpoint, device_map=device_map, preload_module_classes=["ModuleWithUnusedSubModules"] ) # CPU-offloaded weights are on the meta device while waiting for the forward pass. 
assert new_model.linear1.linear.weight.device == torch.device("meta") assert new_model.linear2.linear.weight.device == torch.device("meta") assert new_model.linear3.linear.weight.device == torch.device(torch_device) assert new_model.linear4.linear.weight.device == torch.device(torch_device.replace(":0", ":1")) output = new_model(x) assert torch.allclose(expected, output.cpu(), atol=1e-5) @require_non_cpu def test_cpu_offload_with_hook(self): model1 = torch.nn.Linear(4, 5) model1, hook1 = cpu_offload_with_hook(model1) assert model1.weight.device == torch.device("cpu") inputs = torch.randn(3, 4) outputs = model1(inputs) assert outputs.device == torch.device(torch_device) assert model1.weight.device == torch.device(torch_device) hook1.offload() assert model1.weight.device == torch.device("cpu") model2 = torch.nn.Linear(5, 5) model2, hook2 = cpu_offload_with_hook(model2, prev_module_hook=hook1) assert model2.weight.device == torch.device("cpu") outputs = model1(inputs) assert outputs.device == torch.device(torch_device) assert model1.weight.device == torch.device(torch_device) outputs = model2(outputs) assert outputs.device == torch.device(torch_device) assert model1.weight.device == torch.device("cpu") assert model2.weight.device == torch.device(torch_device) hook2.offload() assert model2.weight.device == torch.device("cpu") @require_non_torch_xla @slow @require_bnb @require_multi_gpu def test_dispatch_model_bnb(self): """Tests that `dispatch_model` quantizes int8 layers""" from huggingface_hub import hf_hub_download from transformers import AutoConfig, AutoModel, BitsAndBytesConfig from transformers.utils.bitsandbytes import replace_with_bnb_linear with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) model_path = hf_hub_download("bigscience/bloom-560m", "pytorch_model.bin") model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map="balanced", ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.int8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 assert model.h[(-1)].self_attention.query_key_value.weight.dtype == torch.int8 assert model.h[(-1)].self_attention.query_key_value.weight.device.index == 1 @require_cuda @slow @require_bnb def test_dispatch_model_int8_simple(self): """Tests that `dispatch_model` quantizes int8 layers""" from huggingface_hub import hf_hub_download from transformers import AutoConfig, AutoModel, BitsAndBytesConfig from transformers.utils.bitsandbytes import replace_with_bnb_linear with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) model_path = hf_hub_download("bigscience/bloom-560m", "pytorch_model.bin") # test with auto model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map="auto", ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.int8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], 
quantization_config=quantization_config ) # test with str device map model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map={"": torch.device("cuda:0")}, ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.int8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) # test with torch.device device map model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map={"": "cuda:0"}, ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.int8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 @require_cuda @slow @require_bnb def test_dipatch_model_fp4_simple(self): """Tests that `dispatch_model` quantizes fp4 layers""" from huggingface_hub import hf_hub_download from transformers import AutoConfig, AutoModel, BitsAndBytesConfig from transformers.utils.bitsandbytes import replace_with_bnb_linear with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) quantization_config = BitsAndBytesConfig(load_in_4bit=True) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) model_path = hf_hub_download("bigscience/bloom-560m", "pytorch_model.bin") # test with auto model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map="auto", ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.uint8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) # test with str device map model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map={"": torch.device("cuda:0")}, ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.uint8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0 with init_empty_weights(): model = AutoModel.from_config(AutoConfig.from_pretrained("bigscience/bloom-560m")) model = replace_with_bnb_linear( model, modules_to_not_convert=["lm_head"], quantization_config=quantization_config ) # test with torch.device device map model = load_checkpoint_and_dispatch( model, checkpoint=model_path, device_map={"": "cuda:0"}, ) assert model.h[0].self_attention.query_key_value.weight.dtype == torch.uint8 assert model.h[0].self_attention.query_key_value.weight.device.index == 0
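# --- Illustrative sketch (not part of the original test file) ---
# A minimal example of the big-modeling flow these tests exercise: build the model
# skeleton on the meta device, then load a checkpoint and dispatch it with a device_map.
# The checkpoint path and the all-CPU device_map below are assumptions for illustration;
# ModelForTest is the toy module defined earlier in this file.
if __name__ == "__main__":
    import os
    from tempfile import TemporaryDirectory

    import torch

    from accelerate import init_empty_weights, load_checkpoint_and_dispatch

    with TemporaryDirectory() as tmp_dir:
        # Save a reference checkpoint to disk first.
        checkpoint = os.path.join(tmp_dir, "pt_model.bin")
        torch.save(ModelForTest().state_dict(), checkpoint)

        # No memory is allocated for the weights inside init_empty_weights().
        with init_empty_weights():
            model = ModelForTest()

        # Materialize the weights from the checkpoint on the devices named in device_map.
        model = load_checkpoint_and_dispatch(
            model, checkpoint, device_map={"linear1": "cpu", "batchnorm": "cpu", "linear2": "cpu"}
        )
        print(model(torch.randn(2, 3)))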
4
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_cpu.py
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest

from accelerate import debug_launcher
from accelerate.test_utils import require_cpu, test_ops, test_script


@require_cpu
class MultiCPUTester(unittest.TestCase):
    def test_cpu(self):
        debug_launcher(test_script.main)

    def test_ops(self):
        debug_launcher(test_ops.main)
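# --- Illustrative sketch (not part of the original test file) ---
# debug_launcher runs the given function in several local CPU processes with the
# distributed environment variables pre-set, which is what the tests above rely on.
# The training_function below is a hypothetical stand-in for test_script.main.
from accelerate import Accelerator, debug_launcher


def training_function():
    accelerator = Accelerator(cpu=True)
    print(f"Process {accelerator.process_index} of {accelerator.num_processes}")


if __name__ == "__main__":
    debug_launcher(training_function, num_processes=2)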
5
0
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_quantization.py
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import tempfile import unittest import torch import torch.nn as nn from accelerate import Accelerator, init_empty_weights from accelerate.test_utils import ( require_bnb, require_cuda, require_huggingface_suite, require_multi_gpu, require_non_torch_xla, slow, ) from accelerate.utils.bnb import load_and_quantize_model from accelerate.utils.dataclasses import BnbQuantizationConfig class BitsAndBytesConfigIntegration(unittest.TestCase): def test_BnbQuantizationConfig(self): with self.assertRaises(ValueError): BnbQuantizationConfig(load_in_8bit=True, load_in_4bit=True) @require_non_torch_xla @slow @require_cuda @require_bnb @require_huggingface_suite class MixedInt8EmptyModelTest(unittest.TestCase): # We keep the constants inside the init function and model loading inside setUp function # We need to test on relatively large models (aka >1b parameters otherwise the quantiztion may not work as expected) # Therefore here we use only bloom-1b3 to test our module model_name = "marcsun13/bloom-1b7_with_lm_head" # Constant values # This was obtained on a Quadro RTX 8000 so the number might slightly change EXPECTED_RELATIVE_DIFFERENCE = 1.540025 input_text = "Hello my name is" EXPECTED_OUTPUT = "Hello my name is John.\nI am a friend of the family.\n" MAX_NEW_TOKENS = 10 def setUp(self): """ Setup quantized model from empty model """ from huggingface_hub import hf_hub_download from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer # Models and tokenizer self.model_fp16 = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float16, device_map="auto" ) # create model on meta device with init_empty_weights(): self.model_8bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) self.model_8bit.tie_weights() self.weights_location = hf_hub_download(self.model_name, "pytorch_model.bin") self.bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) self.model_8bit = load_and_quantize_model( self.model_8bit, self.bnb_quantization_config, weights_location=self.weights_location, device_map={"": 0}, no_split_module_classes=["BloomBlock"], ) self.tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7") self.accelerate = Accelerator() def tearDown(self): r""" TearDown function needs to be called at the end of each test to free the GPU memory and cache, also to avoid unexpected behaviors. 
Please see: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/27 """ del self.model_fp16 del self.model_8bit gc.collect() torch.cuda.empty_cache() def test_memory_footprint(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ from bitsandbytes.nn import Int8Params mem_fp16 = self.model_fp16.get_memory_footprint() mem_8bit = self.model_8bit.get_memory_footprint() assert round((mem_fp16 / mem_8bit) - self.EXPECTED_RELATIVE_DIFFERENCE, 7) >= 0 assert self.model_8bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params def test_linear_are_8bit(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ self.model_fp16.get_memory_footprint() self.model_8bit.get_memory_footprint() for name, module in self.model_8bit.named_modules(): if isinstance(module, torch.nn.Linear): modules_not_converted = ( self.bnb_quantization_config.keep_in_fp32_modules + self.bnb_quantization_config.skip_modules ) if name not in modules_not_converted: assert module.weight.dtype == torch.int8 def test_llm_skip(self): r""" A simple test to check if `llm_int8_skip_modules` works as expected """ import bitsandbytes as bnb from transformers import AutoConfig, AutoModelForCausalLM bnb_quantization_config = BnbQuantizationConfig( load_in_8bit=True, skip_modules=["lm_head", "transformer.word_embeddings"] ) with init_empty_weights(): model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model.tie_weights() model = load_and_quantize_model( model, bnb_quantization_config, weights_location=self.weights_location, device_map="auto", no_split_module_classes=["BloomBlock"], ) assert model.transformer.h[1].mlp.dense_4h_to_h.weight.dtype == torch.int8 assert isinstance(model.transformer.h[1].mlp.dense_4h_to_h, bnb.nn.Linear8bitLt) assert isinstance(model.lm_head, nn.Linear) assert model.lm_head.weight.dtype != torch.int8 def check_inference_correctness(self, model): r""" Test the generation quality of the quantized model and see that we are matching the expected output. Given that we are operating on small numbers + the testing model is relatively small, we might not get the same output across GPUs. So we'll generate few tokens (5-10) and check their output. """ # Check that inference pass works on the model encoded_input = self.tokenizer(self.input_text, return_tensors="pt") # Check the exactness of the results output_parallel = model.generate(input_ids=encoded_input["input_ids"].to(0), max_new_tokens=10) # Get the generation output_text = self.tokenizer.decode(output_parallel[0], skip_special_tokens=True) assert output_text == self.EXPECTED_OUTPUT def test_generate_quality(self): self.check_inference_correctness(self.model_8bit) def test_fp32_8bit_conversion(self): r""" Test whether it is possible to mix both `8bit` and `fp32` weights when using `keep_in_fp32_modules` correctly. 
""" from transformers import AutoConfig, AutoModelForCausalLM bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, keep_in_fp32_modules=["lm_head"]) with init_empty_weights(): model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model.tie_weights() model = load_and_quantize_model( model, bnb_quantization_config, weights_location=self.weights_location, device_map="auto", no_split_module_classes=["BloomBlock"], ) assert model.lm_head.weight.dtype == torch.float32 @require_multi_gpu def test_cpu_gpu_loading_custom_device_map(self): from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a custom `device_map`. """ device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h.0": "cpu", "transformer.h.1": "cpu", "transformer.h.2": "cpu", "transformer.h.3": 0, "transformer.h.4": 0, "transformer.h.5": 0, "transformer.h.6": 0, "transformer.h.7": 0, "transformer.h.8": 0, "transformer.h.9": 1, "transformer.h.10": 0, "transformer.h.11": 1, "transformer.h.12": 0, "transformer.h.13": 0, "transformer.h.14": 1, "transformer.h.15": 0, "transformer.h.16": 0, "transformer.h.17": 1, "transformer.h.18": 1, "transformer.h.19": 0, "transformer.h.20": 1, "transformer.h.21": 1, "transformer.h.22": 0, "transformer.h.23": 0, "transformer.ln_f": 1, } bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) with init_empty_weights(): model_8bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit.tie_weights() model_8bit = load_and_quantize_model( model_8bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], ) assert model_8bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert model_8bit.transformer.h[1].mlp.dense_4h_to_h.weight.__class__ == Int8Params self.check_inference_correctness(model_8bit) @require_multi_gpu def test_cpu_gpu_loading_custom_device_map_offload_state_dict(self): from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a custom `device_map` and offload_state_dict=True. 
""" device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h.0": "cpu", "transformer.h.1": "cpu", "transformer.h.2": "cpu", "transformer.h.3": 0, "transformer.h.4": 0, "transformer.h.5": 0, "transformer.h.6": 0, "transformer.h.7": 0, "transformer.h.8": 0, "transformer.h.9": 1, "transformer.h.10": 0, "transformer.h.11": 1, "transformer.h.12": 0, "transformer.h.13": 0, "transformer.h.14": 1, "transformer.h.15": 0, "transformer.h.16": 0, "transformer.h.17": 1, "transformer.h.18": 1, "transformer.h.19": 0, "transformer.h.20": 1, "transformer.h.21": 1, "transformer.h.22": 0, "transformer.h.23": 0, "transformer.ln_f": 1, } bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) with init_empty_weights(): model_8bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit.tie_weights() model_8bit = load_and_quantize_model( model_8bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], offload_state_dict=True, ) assert model_8bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert model_8bit.transformer.h[1].mlp.dense_4h_to_h.weight.__class__ == Int8Params self.check_inference_correctness(model_8bit) @require_multi_gpu def test_cpu_gpu_disk_loading_custom_device_map_kwargs(self): from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a custom `device_map`. This time we also add `disk` on the device_map - using the kwargs directly instead of the quantization config """ device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h.0": "cpu", "transformer.h.1": "cpu", "transformer.h.2": "cpu", "transformer.h.3": "disk", "transformer.h.4": "disk", "transformer.h.5": "disk", "transformer.h.6": 0, "transformer.h.7": 0, "transformer.h.8": 0, "transformer.h.9": 1, "transformer.h.10": 0, "transformer.h.11": 1, "transformer.h.12": 0, "transformer.h.13": 0, "transformer.h.14": 1, "transformer.h.15": 0, "transformer.h.16": 0, "transformer.h.17": 1, "transformer.h.18": 1, "transformer.h.19": 0, "transformer.h.20": 1, "transformer.h.21": 1, "transformer.h.22": 0, "transformer.h.23": 0, "transformer.ln_f": 1, } bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) with init_empty_weights(): model_8bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit.tie_weights() with tempfile.TemporaryDirectory() as tmpdirname: model_8bit = load_and_quantize_model( model_8bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], offload_folder=tmpdirname, offload_state_dict=True, ) assert model_8bit.transformer.h[4].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert model_8bit.transformer.h[5].mlp.dense_4h_to_h.weight.__class__ == Int8Params self.check_inference_correctness(model_8bit) def test_int8_serialization(self): r""" Test whether it is possible to serialize a model in 8-bit. 
""" from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM with tempfile.TemporaryDirectory() as tmpdirname: # saving state dict for now but will save config and other in the future self.accelerate.save_model(self.model_8bit, tmpdirname) with init_empty_weights(): # let's suppose that we can get the right config model_8bit_from_saved = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit_from_saved.tie_weights() bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) model_8bit_from_saved = load_and_quantize_model( model_8bit_from_saved, bnb_quantization_config, weights_location=tmpdirname, device_map="auto", no_split_module_classes=["BloomBlock"], ) assert model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert hasattr(model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight, "SCB") assert hasattr(model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight, "CB") self.check_inference_correctness(model_8bit_from_saved) @require_multi_gpu def test_int8_serialization_offload(self): r""" Test whether it is possible to serialize a model in 8-bit and offload weights to cpu/disk """ from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM with tempfile.TemporaryDirectory() as tmpdirname: # saving state dict for now but will save config and other in the future self.accelerate.save_model(self.model_8bit, tmpdirname) with init_empty_weights(): # let's suppose that we can get the right config model_8bit_from_saved = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit_from_saved.tie_weights() bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h.0": "cpu", "transformer.h.1": "cpu", "transformer.h.2": "cpu", "transformer.h.3": "disk", "transformer.h.4": "disk", "transformer.h.5": "disk", "transformer.h.6": 0, "transformer.h.7": 0, "transformer.h.8": 0, "transformer.h.9": 1, "transformer.h.10": 0, "transformer.h.11": 1, "transformer.h.12": 0, "transformer.h.13": 0, "transformer.h.14": 1, "transformer.h.15": 0, "transformer.h.16": 0, "transformer.h.17": 1, "transformer.h.18": 1, "transformer.h.19": 0, "transformer.h.20": 1, "transformer.h.21": 1, "transformer.h.22": 0, "transformer.h.23": 0, "transformer.ln_f": 1, } model_8bit_from_saved = load_and_quantize_model( model_8bit_from_saved, bnb_quantization_config, weights_location=tmpdirname, device_map=device_map, no_split_module_classes=["BloomBlock"], offload_folder=tmpdirname + "/tmp", offload_state_dict=True, ) assert model_8bit_from_saved.transformer.h[4].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert model_8bit_from_saved.transformer.h[5].mlp.dense_4h_to_h.weight.__class__ == Int8Params self.check_inference_correctness(model_8bit_from_saved) def test_int8_serialization_shard(self): r""" Test whether it is possible to serialize a model in 8-bit. 
""" from bitsandbytes.nn import Int8Params from transformers import AutoConfig, AutoModelForCausalLM with tempfile.TemporaryDirectory() as tmpdirname: # saving state dict for now but will save config and other in the future self.accelerate.save_model(self.model_8bit, tmpdirname, max_shard_size="1GB") with init_empty_weights(): # let's suppose that we can get the right config model_8bit_from_saved = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_8bit_from_saved.tie_weights() bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) model_8bit_from_saved = load_and_quantize_model( model_8bit_from_saved, bnb_quantization_config, weights_location=tmpdirname, device_map="auto", no_split_module_classes=["BloomBlock"], ) assert model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params assert hasattr(model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight, "SCB") assert hasattr(model_8bit_from_saved.transformer.h[0].mlp.dense_4h_to_h.weight, "CB") self.check_inference_correctness(model_8bit_from_saved) @require_non_torch_xla @slow @require_cuda @require_bnb @require_huggingface_suite class MixedInt8LoaddedModelTest(unittest.TestCase): # We keep the constants inside the init function and model loading inside setUp function # We need to test on relatively large models (aka >1b parameters otherwise the quantiztion may not work as expected) # Therefore here we use only bloom-1b3 to test our module model_name = "marcsun13/bloom-1b7_with_lm_head" # Constant values # This was obtained on a Quadro RTX 8000 so the number might slightly change EXPECTED_RELATIVE_DIFFERENCE = 1.540025 input_text = "Hello my name is" EXPECTED_OUTPUT = "Hello my name is John.\nI am a friend of the family.\n" MAX_NEW_TOKENS = 10 def setUp(self): """ Setup quantized model from loaded model """ from transformers import AutoModelForCausalLM, AutoTokenizer # Models and tokenizer self.model_fp16 = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float16, device_map="auto" ) self.bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True) self.model_8bit = AutoModelForCausalLM.from_pretrained(self.model_name, torch_dtype=torch.float16) self.model_8bit = load_and_quantize_model(self.model_8bit, self.bnb_quantization_config) self.tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7") def tearDown(self): r""" TearDown function needs to be called at the end of each test to free the GPU memory and cache, also to avoid unexpected behaviors. 
Please see: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/27 """ del self.model_fp16 del self.model_8bit gc.collect() torch.cuda.empty_cache() def test_memory_footprint(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ from bitsandbytes.nn import Int8Params mem_fp16 = self.model_fp16.get_memory_footprint() mem_8bit = self.model_8bit.get_memory_footprint() assert round((mem_fp16 / mem_8bit) - self.EXPECTED_RELATIVE_DIFFERENCE, 7) >= 0 assert self.model_8bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Int8Params def test_linear_are_8bit(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ self.model_fp16.get_memory_footprint() self.model_8bit.get_memory_footprint() for name, module in self.model_8bit.named_modules(): if isinstance(module, torch.nn.Linear): modules_not_converted = ( self.bnb_quantization_config.keep_in_fp32_modules + self.bnb_quantization_config.skip_modules ) if name not in modules_not_converted: assert module.weight.dtype == torch.int8 def test_generate_quality(self): r""" Test the generation quality of the quantized model and see that we are matching the expected output. Given that we are operating on small numbers + the testing model is relatively small, we might not get the same output across GPUs. So we'll generate few tokens (5-10) and check their output. """ encoded_input = self.tokenizer(self.input_text, return_tensors="pt") output_sequences = self.model_8bit.generate( input_ids=encoded_input["input_ids"].to(self.model_8bit.device), max_new_tokens=10 ) assert self.tokenizer.decode(output_sequences[0], skip_special_tokens=True) == self.EXPECTED_OUTPUT def test_fp32_8bit_conversion(self): r""" Test whether it is possible to mix both `8bit` and `fp32` weights when using `keep_in_fp32_modules` correctly. """ from transformers import AutoModelForCausalLM bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, keep_in_fp32_modules=["lm_head"]) model = AutoModelForCausalLM.from_pretrained(self.model_name, torch_dtype=torch.float16) model = load_and_quantize_model(model, bnb_quantization_config) assert model.lm_head.weight.dtype == torch.float32 @require_non_torch_xla @slow @require_cuda @require_bnb @require_huggingface_suite class Bnb4BitEmptyModelTest(unittest.TestCase): # We keep the constants inside the init function and model loading inside setUp function # We need to test on relatively large models (aka >1b parameters otherwise the quantiztion may not work as expected) # Therefore here we use only bloom-1b3 to test our module model_name = "marcsun13/bloom-1b7_with_lm_head" # Constant values # This was obtained on a RTX Titan so the number might slightly change EXPECTED_RELATIVE_DIFFERENCE = 2.109659552692574 input_text = "Hello my name is" EXPECTED_OUTPUTS = set() EXPECTED_OUTPUTS.add("Hello my name is John and I am a professional photographer. 
I") EXPECTED_OUTPUTS.add("Hello my name is John.\nI am a friend of your father.\n") MAX_NEW_TOKENS = 10 def setUp(self): from huggingface_hub import hf_hub_download from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer super().setUp() # Models and tokenizer self.model_fp16 = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float16, device_map="auto" ) # create model on meta device with init_empty_weights(): self.model_4bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) self.model_4bit.tie_weights() self.weights_location = hf_hub_download(self.model_name, "pytorch_model.bin") self.bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True) self.model_4bit = load_and_quantize_model( self.model_4bit, self.bnb_quantization_config, weights_location=self.weights_location, device_map={"": 0}, no_split_module_classes=["BloomBlock"], ) self.tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7") def tearDown(self): """ TearDown function needs to be called at the end of each test to free the GPU memory and cache, also to avoid unexpected behaviors. Please see: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/27 """ super().tearDown() del self.model_fp16 del self.model_4bit gc.collect() torch.cuda.empty_cache() def test_memory_footprint(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ from bitsandbytes.nn import Params4bit mem_fp16 = self.model_fp16.get_memory_footprint() mem_4bit = self.model_4bit.get_memory_footprint() assert round((mem_fp16 / mem_4bit) - self.EXPECTED_RELATIVE_DIFFERENCE, 7) >= 0 assert self.model_4bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Params4bit def check_inference_correctness(self, model): r""" Test the generation quality of the quantized model and see that we are matching the expected output. Given that we are operating on small numbers + the testing model is relatively small, we might not get the same output across GPUs. So we'll generate few tokens (5-10) and check their output. """ # Check that inference pass works on the model encoded_input = self.tokenizer(self.input_text, return_tensors="pt") # Check the exactness of the results output_sequences = model.generate(input_ids=encoded_input["input_ids"].to(0), max_new_tokens=10) assert self.tokenizer.decode(output_sequences[0], skip_special_tokens=True) in self.EXPECTED_OUTPUTS def test_generate_quality(self): self.check_inference_correctness(self.model_4bit) def test_linear_are_4bit(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ self.model_fp16.get_memory_footprint() self.model_4bit.get_memory_footprint() for name, module in self.model_4bit.named_modules(): if isinstance(module, torch.nn.Linear): if ( name not in self.bnb_quantization_config.keep_in_fp32_modules + self.bnb_quantization_config.skip_modules ): # 4-bit parameters are packed in uint8 variables assert module.weight.dtype == torch.uint8 def test_fp32_4bit_conversion(self): r""" Test whether it is possible to mix both `4bit` and `fp32` weights when using `keep_in_fp32_modules` correctly. 
""" from transformers import AutoConfig, AutoModelForCausalLM bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, keep_in_fp32_modules=["lm_head"]) with init_empty_weights(): model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model.tie_weights() model = load_and_quantize_model( model, bnb_quantization_config, weights_location=self.weights_location, device_map="auto", no_split_module_classes=["BloomBlock"], ) assert model.lm_head.weight.dtype == torch.float32 @require_multi_gpu def test_cpu_gpu_loading_random_device_map(self): from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a random `device_map`. """ device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h.0": 0, "transformer.h.1": 0, "transformer.h.2": 0, "transformer.h.3": 0, "transformer.h.4": 0, "transformer.h.5": 0, "transformer.h.6": 0, "transformer.h.7": 0, "transformer.h.8": 0, "transformer.h.9": 1, "transformer.h.10": 0, "transformer.h.11": 1, "transformer.h.12": 0, "transformer.h.13": 0, "transformer.h.14": 1, "transformer.h.15": 0, "transformer.h.16": 0, "transformer.h.17": 1, "transformer.h.18": 1, "transformer.h.19": 0, "transformer.h.20": 1, "transformer.h.21": 1, "transformer.h.22": 0, "transformer.h.23": 0, "transformer.ln_f": 1, } bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True) with init_empty_weights(): model_4bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_4bit.tie_weights() model_4bit = load_and_quantize_model( model_4bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], ) self.check_inference_correctness(model_4bit) @require_multi_gpu def test_cpu_gpu_loading_custom_device_map(self): from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a random `device_map`. """ device_map = { "transformer.word_embeddings": "cpu", "transformer.word_embeddings_layernorm": "cpu", "lm_head": "cpu", "transformer.h": 0, "transformer.ln_f": 1, } bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True) with init_empty_weights(): model_4bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_4bit.tie_weights() model_4bit = load_and_quantize_model( model_4bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], ) self.check_inference_correctness(model_4bit) @require_multi_gpu def test_cpu_gpu_disk_loading_custom_device_map_kwargs(self): from transformers import AutoConfig, AutoModelForCausalLM r""" A test to check is dispatching a model on cpu & gpu works correctly using a custom `device_map`. 
This time we also add `disk` on the device_map - using the kwargs directly instead of the quantization config """ device_map = { "transformer.word_embeddings": 0, "transformer.word_embeddings_layernorm": "disk", "lm_head": 0, "transformer.h": 1, "transformer.ln_f": "cpu", } bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True) with init_empty_weights(): model_4bit = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(self.model_name)) model_4bit.tie_weights() with tempfile.TemporaryDirectory() as tmpdirname: model_4bit = load_and_quantize_model( model_4bit, bnb_quantization_config, weights_location=self.weights_location, device_map=device_map, no_split_module_classes=["BloomBlock"], offload_folder=tmpdirname, offload_state_dict=True, ) self.check_inference_correctness(model_4bit) @require_non_torch_xla @slow @require_cuda @require_bnb @require_huggingface_suite class Bnb4BitTestLoadedModel(unittest.TestCase): # We keep the constants inside the init function and model loading inside setUp function # We need to test on relatively large models (aka >1b parameters otherwise the quantiztion may not work as expected) # Therefore here we use only bloom-1b3 to test our module model_name = "marcsun13/bloom-1b7_with_lm_head" # Constant values # This was obtained on a RTX Titan so the number might slightly change EXPECTED_RELATIVE_DIFFERENCE = 2.109659552692574 input_text = "Hello my name is" EXPECTED_OUTPUTS = set() EXPECTED_OUTPUTS.add("Hello my name is John and I am a professional photographer. I") EXPECTED_OUTPUTS.add("Hello my name is John.\nI am a friend of your father.\n") MAX_NEW_TOKENS = 10 def setUp(self): """ Setup quantized model from loaded model """ from transformers import AutoModelForCausalLM, AutoTokenizer super().setUp() # Models and tokenizer self.model_fp16 = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float16, device_map="auto" ) self.bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True) self.model_4bit = AutoModelForCausalLM.from_pretrained(self.model_name, torch_dtype=torch.float16) self.model_4bit = load_and_quantize_model(self.model_4bit, self.bnb_quantization_config) self.tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7") def tearDown(self): """ TearDown function needs to be called at the end of each test to free the GPU memory and cache, also to avoid unexpected behaviors. 
Please see: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/27 """ super().tearDown() del self.model_fp16 del self.model_4bit gc.collect() torch.cuda.empty_cache() def test_memory_footprint(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ from bitsandbytes.nn import Params4bit mem_fp16 = self.model_fp16.get_memory_footprint() mem_4bit = self.model_4bit.get_memory_footprint() assert round((mem_fp16 / mem_4bit) - self.EXPECTED_RELATIVE_DIFFERENCE, 7) >= 0 assert self.model_4bit.transformer.h[0].mlp.dense_4h_to_h.weight.__class__ == Params4bit def test_linear_are_4bit(self): r""" A simple test to check if the model conversion has been done correctly by checking on the memory footprint of the converted model and the class type of the linear layers of the converted models """ self.model_fp16.get_memory_footprint() self.model_4bit.get_memory_footprint() for name, module in self.model_4bit.named_modules(): if isinstance(module, torch.nn.Linear): if ( name not in self.bnb_quantization_config.keep_in_fp32_modules + self.bnb_quantization_config.skip_modules ): # 4-bit parameters are packed in uint8 variables assert module.weight.dtype == torch.uint8 def test_generate_quality(self): r""" Test the generation quality of the quantized model and see that we are matching the expected output. Given that we are operating on small numbers + the testing model is relatively small, we might not get the same output across GPUs. So we'll generate few tokens (5-10) and check their output. """ encoded_input = self.tokenizer(self.input_text, return_tensors="pt") output_sequences = self.model_4bit.generate( input_ids=encoded_input["input_ids"].to(self.model_4bit.device), max_new_tokens=10 ) assert self.tokenizer.decode(output_sequences[0], skip_special_tokens=True) in self.EXPECTED_OUTPUTS def test_fp32_4bit_conversion(self): r""" Test whether it is possible to mix both `4bit` and `fp32` weights when using `keep_in_fp32_modules` correctly. """ from transformers import AutoModelForCausalLM bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, keep_in_fp32_modules=["lm_head"]) model = AutoModelForCausalLM.from_pretrained(self.model_name, torch_dtype=torch.float16) model = load_and_quantize_model(model, bnb_quantization_config) assert model.lm_head.weight.dtype == torch.float32
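# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): a minimal,
# stand-alone version of the 4-bit loading pattern exercised by the tests
# above. The weights location is a placeholder; the checkpoint name is the
# same one the tests use.
def example_quantize_bloom_4bit(
    model_name="marcsun13/bloom-1b7_with_lm_head", weights_location="/path/to/bloom/weights"
):
    from accelerate import init_empty_weights
    from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
    from transformers import AutoConfig, AutoModelForCausalLM

    # Keep the LM head in fp32 and quantize the rest of the model to 4-bit.
    bnb_quantization_config = BnbQuantizationConfig(load_in_4bit=True, keep_in_fp32_modules=["lm_head"])

    # Build the model skeleton on the meta device, without allocating real weights.
    with init_empty_weights():
        model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(model_name))
    model.tie_weights()

    # Load the checkpoint weights, quantize them, and dispatch layers automatically.
    return load_and_quantize_model(
        model,
        bnb_quantization_config,
        weights_location=weights_location,
        device_map="auto",
        no_split_module_classes=["BloomBlock"],
    )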
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_grad_sync.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from accelerate import debug_launcher from accelerate.test_utils import ( DEFAULT_LAUNCH_COMMAND, device_count, execute_subprocess_async, path_in_accelerate_package, require_cpu, require_multi_device, require_non_cpu, test_sync, ) from accelerate.utils import patch_environment class SyncScheduler(unittest.TestCase): test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_sync.py") @require_cpu def test_gradient_sync_cpu_noop(self): debug_launcher(test_sync.main, num_processes=1) @require_cpu def test_gradient_sync_cpu_multi(self): debug_launcher(test_sync.main) @require_non_cpu def test_gradient_sync_gpu(self): test_sync.main() @require_multi_device def test_gradient_sync_gpu_multi(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [self.test_file_path] with patch_environment(omp_num_threads=1): execute_subprocess_async(cmd)
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_metrics.py
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from packaging import version from accelerate import debug_launcher from accelerate.test_utils import ( DEFAULT_LAUNCH_COMMAND, device_count, execute_subprocess_async, path_in_accelerate_package, require_cpu, require_huggingface_suite, require_multi_device, require_single_device, ) from accelerate.utils import patch_environment @require_huggingface_suite @unittest.skipIf(version.parse(np.__version__) >= version.parse("2.0"), "Test requires numpy version < 2.0") class MetricTester(unittest.TestCase): def setUp(self): self.test_file_path = path_in_accelerate_package("test_utils", "scripts", "external_deps", "test_metrics.py") from accelerate.test_utils.scripts.external_deps import test_metrics # noqa: F401 self.test_metrics = test_metrics @require_cpu def test_metric_cpu_noop(self): debug_launcher(self.test_metrics.main, num_processes=1) @require_cpu def test_metric_cpu_multi(self): debug_launcher(self.test_metrics.main) @require_single_device def test_metric_accelerator(self): self.test_metrics.main() @require_multi_device def test_metric_accelerator_multi(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [self.test_file_path] with patch_environment(omp_num_threads=1, ACCELERATE_LOG_LEVEL="INFO"): execute_subprocess_async(cmd)
hf_public_repos/accelerate
hf_public_repos/accelerate/tests/test_logging.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import logging import os import pytest from accelerate import Accelerator from accelerate.logging import get_logger def current_lineno() -> int: # A simple helper that returns the lineno of its call-site. caller_frame = inspect.currentframe().f_back caller_info = inspect.getframeinfo(caller_frame) return caller_info.lineno class CustomLogger(logging.LoggerAdapter): # Mocks a user-defined custom logger wrapper that sets `stacklevel=3`. def log(self, level, msg, *args, **kwargs): # E.g. the user wants to modify `stacklevel`, `accelerate.logging` # should respect the user's `stacklevel`. For the specific value # of `3`, calling `CustomLogger.log()`, etc., should log that callsite, # rather than the callsite of the following `self.logger.log()`. kwargs["stacklevel"] = 3 self.logger.log(level, msg, *args, **kwargs) @pytest.fixture(scope="module") def accelerator(): return Accelerator() @pytest.mark.usefixtures("accelerator") def test_log_stack(caplog): logger = get_logger(__name__) logging.basicConfig( format="%(filename)s:%(name)s:%(lineno)s:%(funcName)s - %(message)s", datefmt="%m/%d %H:%M:%S", ) message = "Test" lineno = current_lineno() + 1 # the next line is the actual callsite logger.warning(message) assert len(caplog.records) == 1 rec = caplog.records[0] assert rec.levelname == logging.getLevelName(logging.WARNING) assert rec.filename == os.path.basename(__file__) assert rec.name == __name__ assert rec.lineno == lineno assert rec.funcName == test_log_stack.__name__ assert rec.message == message @pytest.mark.usefixtures("accelerator") def test_custom_stacklevel(caplog): wrapped_logger = get_logger(__name__) logging.basicConfig( format="%(filename)s:%(name)s:%(lineno)s:%(funcName)s - %(message)s", datefmt="%m/%d %H:%M:%S", ) logger = CustomLogger(wrapped_logger, {}) message = "Test" lineno = current_lineno() + 1 # the next line is the actual callsite logger.warning(message) # `CustomLogger.log` set custom `stacklevel=3`, so `logger.warning` should # log its callsite (rather than those of the `warpped_logger`). assert len(caplog.records) == 1 rec = caplog.records[0] assert rec.levelname == logging.getLevelName(logging.WARNING) assert rec.filename == os.path.basename(__file__) assert rec.name == __name__ assert rec.lineno == lineno assert rec.funcName == test_custom_stacklevel.__name__ assert rec.message == message
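# Illustrative sketch (not part of the original tests): typical usage of
# `accelerate.logging.get_logger` in a training script. The adapter it returns
# accepts a `main_process_only` keyword so a message is emitted once overall
# rather than once per process.
def example_get_logger_usage():
    import logging

    from accelerate import Accelerator
    from accelerate.logging import get_logger

    logging.basicConfig(level=logging.INFO)
    accelerator = Accelerator()
    logger = get_logger(__name__)

    # Emitted by every process.
    logger.info(f"Running on process {accelerator.process_index}", main_process_only=False)
    # Emitted only once, by the main process.
    logger.info("Starting training", main_process_only=True)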
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter4/classification_models.mdx
# Modèles et jeux de données pré-entraînés pour la classification d’audio Le *Hub* abrite [plusieurs centaines de modèles pré-entraînés pour la classification d’audio](https://huggingface.co/models?pipeline_tag=audio-classification). Dans cette section, nous passerons en revue certaines des tâches de classification d’audio les plus courantes et suggérerons des modèles pré-entraînés appropriés pour chacune. En utilisant la classe `pipeline()`, la commutation entre les modèles et les tâches est simple : une fois que vous savez comment utiliser `pipeline()` pour un modèle, vous pourrez l'utiliser pour n'importe quel modèle sur le *Hub*, sans modification du code ! Cela rend l'expérimentation de la classe `pipeline()` extrêmement rapide, ce qui vous permet de sélectionner rapidement le meilleur modèle pré-entraîné pour vos besoins. Avant de passer aux différents problèmes de classification d’audio, récapitulons rapidement les architectures de *transformers* généralement utilisées. L'architecture standard de classification d’audio est motivée par la nature de la tâche. Nous voulons transformer une séquence d'entrées audio (c'est-à-dire notre réseau audio d'entrée) en une prédiction d'étiquette de classe unique. Les modèles d'encodeur associent d'abord la séquence audio d'entrée dans une séquence de représentations à l'état caché en faisant passer les entrées à travers un bloc *transformer*. La séquence de représentations d'états masqués est ensuite associée à une sortie d'étiquette de classe en prenant la moyenne sur les états masqués et en faisant passer le vecteur résultant à travers une couche de classification linéaire. Par conséquent, il y a une préférence pour les modèles *encodeur* pour la classification d’audio. Les modèles de décodeur introduisent une complexité inutile à la tâche car ils supposent que les sorties peuvent également être une *séquence* de prédictions (plutôt qu'une prédiction d'étiquette de classe unique), et génèrent ainsi plusieurs sorties. Par conséquent, ils ont une vitesse d'inférence plus lente et ont tendance à ne pas être utilisés. Les modèles encodeur-décodeur sont largement omis pour la même raison. Ces choix d'architecture sont analogues à ceux de NLP, où les modèles d'encodeur tels que BERT sont privilégiés pour les tâches de classification de séquences, et les modèles de décodeur tels que GPT réservés aux tâches de génération de séquences. Maintenant que nous avons récapitulé l'architecture du *transformer* standard pour la classification d’audio, passons aux différents sous-ensembles de la classification d’audio et couvrons les modèles les plus populaires ! ## 🤗 Installation de Transformers Au moment de la rédaction de cette section, les dernières mises à jour requises pour le pipeline de classification d’audio se trouvent uniquement sur la version « principale » du dépôt 🤗 Transformers, plutôt que sur la dernière version de PyPi. Pour nous assurer que nous avons ces mises à jour localement, nous allons installer Transformers à partir de la branche `main` avec la commande suivante : ``` pip install git+https://github.com/huggingface/transformers ``` ## Repérage de mots-clés Le repérage de mots clés (KWS pour *Keyword spotting*) est la tâche d'identifier un mot-clé dans un discours. L'ensemble des mots-clés possibles forme l'ensemble des étiquettes de classe prédites. Par conséquent, pour utiliser un modèle de repérage de mots clés pré-entraîné, vous devez vous assurer que vos mots-clés correspondent à ceux sur lesquels le modèle a été pré-entraîné. 
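À titre d'esquisse, on peut vérifier cela en chargeant la configuration d'un *checkpoint* et en inspectant son dictionnaire `id2label`, qui liste les classes que le modèle sait prédire (le *checkpoint* ci-dessous, réutilisé plus loin dans cette section, sert uniquement d'exemple) :

```python
from transformers import AutoConfig

# Checkpoint donné à titre d'exemple : il est réutilisé plus loin dans cette section
config = AutoConfig.from_pretrained("MIT/ast-finetuned-speech-commands-v2")

# `id2label` associe chaque indice de classe à son mot-clé : vérifiez que vos
# mots-clés figurent bien dans cette liste avant de retenir le modèle
print(sorted(config.id2label.values()))
```

Si vos mots-clés n'apparaissent pas dans cette liste, il faudra choisir un autre *checkpoint* ou *finetuner* un modèle sur vos propres données.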
Ci-dessous, nous présenterons deux jeux de données et modèles pour la détection de mots clés. ### MINDS-14 Commençons en utilisant le même jeu de données [MINDS-14](https://huggingface.co/datasets/PolyAI/minds14) exploré dans l'unité précédente. Si vous vous souvenez, MINDS-14 contient des enregistrements de personnes posant des questions à un système bancaire électronique dans plusieurs langues et dialectes, et a indique une classe d’intention pour chaque enregistrement. Nous pouvons donc classer les enregistrements par intention de l'appel. ```python from datasets import load_dataset minds = load_dataset("PolyAI/minds14", name="en-AU", split="train") ``` Nous allons charger le *checkpoint* [`"anton-l/xtreme_s_xlsr_300m_minds14"`](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14), qui est un modèle XLS-R *finetuné* sur MINDS-14 pendant environ 50 époques. Il atteint une précision de 90% sur toutes les langues de MINDS-14 sur l'ensemble d'évaluation. ```python from transformers import pipeline classifier = pipeline( "audio-classification", model="anton-l/xtreme_s_xlsr_300m_minds14", ) ``` Enfin, nous pouvons passer un échantillon au pipeline de classification pour faire une prédiction : ```python classifier(minds[0]["path"]) ``` **Sortie :** ``` [ {"score": 0.9631525278091431, "label": "pay_bill"}, {"score": 0.02819698303937912, "label": "freeze"}, {"score": 0.0032787492964416742, "label": "card_issues"}, {"score": 0.0019414445850998163, "label": "abroad"}, {"score": 0.0008378693601116538, "label": "high_value_payment"}, ] ``` Nous avons identifié que l'intention de l'appel était de payer une facture, avec une probabilité de 96%. Vous pouvez imaginer que ce type de système de repérage de mots-clés soit utilisé comme première étape d'un centre d'appels automatisé, où nous voulons catégoriser les appels entrants des clients en fonction de leur requête et leur offrir un support contextualisé en conséquence. ### Speech Commands Speech Commands est un jeu de données de mots parlés conçu pour évaluer les modèles de classification d’audio sur des mots de commande simples. Le jeu de données se compose de 15 classes de mots-clés, d'une classe pour le silence et d'une classe inconnue pour inclure le faux positif. Les 15 mots-clés sont des mots uniques qui seraient généralement utilisés dans les paramètres sur l'appareil pour contrôler les tâches de base ou lancer d'autres processus. Un modèle similaire fonctionne en continu sur votre téléphone mobile. Ici, au lieu d'avoir des mots de commande uniques, nous avons des mots de réveil spécifiques à votre appareil, tels que « Hey Google » ou « Hey Siri ». Lorsque le modèle de classification d’audio détecte ces mots de réveil, il déclenche votre téléphone pour commencer à écouter le microphone et transcrire votre discours à l'aide d'un modèle de reconnaissance vocale. Le modèle de classification d’audio est beaucoup plus petit et plus léger que le modèle de reconnaissance vocale, souvent seulement quelques millions de paramètres contre plusieurs centaines de millions pour la reconnaissance vocale. Ainsi, il peut fonctionner en continu sur votre appareil sans vider votre batterie ! Ce n'est que lorsque le mot de réveil est détecté que le modèle de reconnaissance vocale plus large est lancé, puis qu'il est à nouveau arrêté. 
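Pour fixer les idées, voici une esquisse très simplifiée d'un tel détecteur de mot de réveil construit avec la classe `pipeline()`. Le mot de réveil, le seuil de probabilité et le *checkpoint* choisis ici sont des hypothèses d'illustration (le *checkpoint* est celui utilisé un peu plus loin pour Speech Commands) :

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification", model="MIT/ast-finetuned-speech-commands-v2"
)

wake_word = "marvin"  # hypothèse : un des mots-clés de Speech Commands
threshold = 0.8  # seuil de probabilité arbitraire


def detecte_mot_de_reveil(extrait_audio):
    """Renvoie True si le mot de réveil est détecté dans un court extrait audio
    (tableau NumPy échantillonné au taux attendu par le modèle)."""
    # La prédiction la mieux classée se trouve en tête de la liste renvoyée
    prediction = classifier(extrait_audio)[0]
    if prediction["label"] == wake_word and prediction["score"] > threshold:
        # C'est ici qu'un assistant réel réveillerait le modèle de reconnaissance vocale
        return True
    return False
```

Dans un vrai système, cette fonction serait appelée en continu sur de courts extraits provenant du microphone.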
Nous couvrirons les modèles de *transformers* pour la reconnaissance vocale dans la prochaine unité, donc à la fin du cours, vous devriez avoir les outils dont vous avez besoin pour construire votre propre assistant à commande vocale ! Comme pour tout jeu de données sur le *Hub*, nous pouvons avoir une idée de la tête des données sans avoir à les télécharger ou les avoir en mémoire. Après avoir accédé à la [carte du jeu de données Speech Commands](https://huggingface.co/datasets/speech_commands) sur le *Hub*, nous pouvons utiliser la visionneuse de données pour faire défiler les 100 premiers échantillons du jeu de données, écouter les fichiers audio et vérifier toute autre information de métadonnées : <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speech_commands.png" alt="Diagram of datasets viewer."> </div> L'aperçu du jeu de données est un moyen de découvrir les jeux de données audio avant de s'engager à les utiliser. Vous pouvez choisir n'importe quel jeu de données sur le *Hub*, faire défiler les échantillons et écouter l'audio pour les différents sous-ensembles et échantillons, en évaluant s'il s'agit du bon jeu de données pour vos besoins. Une fois que vous avez sélectionné un jeu de données, il est trivial de télécharger les données afin de pouvoir commencer à les utiliser. Faisons cela et chargeons un échantillon du jeu de données Speech Commands en utilisant le mode streaming : ```python speech_commands = load_dataset( "speech_commands", "v0.02", split="validation", streaming=True ) sample = next(iter(speech_commands)) ``` Nous allons charger un *checkpoint* d’un [*transformer* d’audio sous la forme de spectrogramme](https://huggingface.co/docs/transformers/model_doc/audio-spectrogramme-transformer) *finetuné* sur le jeu de données Speech Commands : ```python classifier = pipeline( "audio-classification", model="MIT/ast-finetuned-speech-commands-v2" ) classifier(sample["audio"].copy()) ``` **Sortie :** ``` [{'score': 0.9999892711639404, 'label': 'backward'}, {'score': 1.7504888774055871e-06, 'label': 'happy'}, {'score': 6.703040185129794e-07, 'label': 'follow'}, {'score': 5.805884484288981e-07, 'label': 'stop'}, {'score': 5.614546694232558e-07, 'label': 'up'}] ``` On dirait que l'exemple contient le mot `backward` avec une forte probabilité. Nous pouvons écouter l'échantillon et vérifier qu'il est correct: ``` from IPython.display import Audio Audio(sample["audio"]["array"], rate=sample["audio"]["sampling_rate"]) ``` Vous vous demandez peut-être comment nous avons sélectionné les modèles pré-entraînés montrés dans ces exemples de classification d’audio. C’est très simple ! La première chose que nous devons faire est de nous diriger sur le *Hub* et de cliquer sur l'onglet « *Models* »: https://huggingface.co/models Cela va faire apparaître tous les modèles sur le *Hub* : <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/all_models.png"> </div> Vous remarquerez sur le côté gauche que nous avons plusieurs onglets que nous pouvons sélectionner pour filtrer les modèles par tâche, bibliothèque, jeu de données, etc. 
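Notons au passage que ce filtrage peut aussi se faire par programmation, avec la bibliothèque `huggingface_hub`. L'esquisse ci-dessous liste les modèles de classification d'audio *finetunés* sur Speech Commands, triés par nombre de téléchargements ; les paramètres exacts de `list_models` peuvent varier légèrement selon la version de la bibliothèque :

```python
from huggingface_hub import HfApi

api = HfApi()

# Les filtres correspondent aux tags des modèles sur le Hub
modeles = api.list_models(
    filter=["audio-classification", "dataset:speech_commands"],
    sort="downloads",
    direction=-1,
    limit=5,
)
for modele in modeles:
    print(modele.id)
```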
Faites défiler vers le bas et sélectionnez la tâche « Classification d’audio » dans la liste des tâches audio: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_audio_classification.png"> </div> Nous voyons alors le sous-ensemble de modèles de classification d’audio présent sur le *Hub*. Pour affiner davantage cette sélection, nous pouvons filtrer les modèles par jeu de données. Cliquez sur l'onglet « Jeux de données », et dans la zone de recherche, tapez « speech_commands ». Lorsque vous commencez à taper, vous verrez la sélection pour 'speech_commands' apparaître sous l'onglet de recherche. Vous pouvez cliquer sur ce bouton pour filtrer tous les modèles de classification d’audio *finetuné* sur le jeu de données Speech Commands : <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_speech_commands.png"> </div> Bien, nous voyons que nous avons 6 modèles pré-entraînés à notre disposition pour ce jeu de données et cette tâche spécifiques. Le premier listé est celui que nous avons utilisé dans l'exemple précédent. Ce processus de filtrage des modèles du *Hub* est exactement la façon dont nous avons procédé pour choisir ce modèle. ## Identification de la langue L'identification de la langue est la tâche d'identifier la langue parlée dans un échantillon audio à partir d'une liste de langues candidates. Cette tâche peut jouer un rôle important dans de nombreux pipelines de parole. Par exemple, étant donné un échantillon audio dans une langue inconnue, un modèle d’identification de langue peut être utilisé pour catégoriser la ou les langues parlées dans l'échantillon audio, puis sélectionner un modèle de reconnaissance vocale approprié entraîné sur cette langue pour transcrire l'audio. ### FLEURS FLEURS (*Few-shot Learning Evaluation of Universal Representations of Speech*) est un jeu de données permettant d'évaluer les systèmes de reconnaissance vocale dans 102 langues, dont beaucoup sont classées comme à faibles ressources. Jetez un coup d'œil à la carte de FLEURS sur le *Hub* et explorez les différentes langues présentes : [google/fleurs](https://huggingface.co/datasets/google/fleurs). Pouvez-vous trouver votre langue maternelle ici ? Si ce n'est pas le cas, quelle est la langue la plus proche ? Chargeons un échantillon à partir de l’échantillon de validation de FLEURS en utilisant le mode streaming : ```python fleurs = load_dataset("google/fleurs", "all", split="validation", streaming=True) sample = next(iter(fleurs)) ``` Génial ! Nous pouvons maintenant charger notre modèle de classification d’audio. 
Pour cela, nous utiliserons une version de [Whisper](https://arxiv.org/pdf/2212.04356.pdf) *finetuné* sur FLEURS, qui est actuellement le modèle de détection de langue le plus performant sur le Hub: ```python classifier = pipeline( "audio-classification", model="sanchit-gandhi/whisper-medium-fleurs-lang-id" ) ``` Nous pouvons ensuite passer l'audio à travers notre classifieur et générer une prédiction : ```python classifier(sample["audio"]) ``` **Sortie :** ``` [{'score': 0.9999330043792725, 'label': 'Afrikaans'}, {'score': 7.093023668858223e-06, 'label': 'Northern-Sotho'}, {'score': 4.269149485480739e-06, 'label': 'Icelandic'}, {'score': 3.2661141631251667e-06, 'label': 'Danish'}, {'score': 3.2580724109720904e-06, 'label': 'Cantonese Chinese'}] ``` Nous pouvons voir que le modèle a prédit que l'audio était en Afrikaans avec une probabilité extrêmement élevée. FLEURS contient des données audio provenant d'un large éventail de langues : nous pouvons voir que les étiquettes de classe possibles incluent le sotho du Nord, l'islandais, le danois et le cantonais, entre autres. Vous pouvez trouver la liste complète des langues ici : [google/fleurs](https://huggingface.co/datasets/google/fleurs). À vous de jouer ! Quels autres *checkpoints* pouvez-vous trouver sur le *Hub* afin de détecter les langues présentes dans FLEURS ? Quels modèles de *transformers* utilisent-ils sous le capot ? ## Classification d’audio en zéro-shot Dans le paradigme traditionnel de la classification d’audio, le modèle prédit une étiquette de classe à partir d'un ensemble de classes prédéfinies possibles. Cela constitue un obstacle à l'utilisation de modèles pré-entraînés pour la classification d’audio, car les étiquettes du modèle pré-entraîné doit correspondre à celui de la tâche en aval. Pour l'exemple précédent de détection de langues, le modèle doit prédire l'une des 102 classes de langue sur lesquelles il a été entraîné. Si la tâche en aval nécessite en fait 110 langues, le modèle ne serait pas en mesure de prédire 8 des 110 langues, et nécessiterait donc un nouvel entraînement pour atteindre une couverture complète. Cela limite l'efficacité de l'apprentissage par transfert pour les tâches de classification d’audio. La classification d’audio zéro-shot est une méthode permettant de prendre un modèle de classification d’audio pré-entraîné entraîné sur un ensemble d'exemples étiquetés et de lui permettre de classer de nouveaux exemples de classes inédites. Voyons comment nous pouvons y parvenir. Actuellement, 🤗 *Transformers* prend en charge un type de modèle pour la classification d’audio en zéro-shot : le [modèle CLAP](https://huggingface.co/docs/transformers/model_doc/clap). CLAP est un modèle basé sur un *transformer* qui prend à la fois l'audio et le texte comme entrées, et calcule la *similitude* entre les deux. Si nous passons une entrée de texte fortement corrélée à une entrée audio, nous obtiendrons un score de similarité élevé. Inversement, passer une entrée de texte qui n'a aucun rapport avec l'entrée audio renverra une faible similitude. Nous pouvons utiliser cette prédiction de similarité pour la classification d’audio en zéro-shot en passant une entrée audio au modèle et plusieurs étiquettes candidates. Le modèle renverra un score de similarité pour chacune des étiquettes candidates, et nous pouvons choisir celle qui a le score le plus élevé comme prédiction. 
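Pour mieux comprendre ce que fera le pipeline zéro-shot présenté ci-dessous, voici une esquisse qui calcule directement ces scores de similarité avec `ClapModel` et `ClapProcessor`. L'échantillon audio et les étiquettes sont les mêmes que dans l'exemple qui suit ; le taux d'échantillonnage est lu depuis l'extracteur de caractéristiques du *checkpoint* :

```python
import torch
from datasets import Audio, load_dataset
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# On rééchantillonne l'audio au taux attendu par le modèle (48 kHz pour ce checkpoint)
sampling_rate = processor.feature_extractor.sampling_rate
dataset = load_dataset("ashraq/esc50", split="train", streaming=True)
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
audio = next(iter(dataset))["audio"]["array"]

candidate_labels = ["Sound of a dog", "Sound of vacuum cleaner"]
inputs = processor(
    text=candidate_labels,
    audios=audio,
    sampling_rate=sampling_rate,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# Un score de similarité par étiquette candidate ; le softmax les convertit en probabilités
probabilites = outputs.logits_per_audio.softmax(dim=-1)
print(dict(zip(candidate_labels, probabilites[0].tolist())))
```

C'est exactement cette logique que la classe `pipeline()` encapsule dans l'exemple ci-dessous.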
Prenons un exemple où nous utilisons une entrée audio du jeu de données [Environmental Speech Challenge (ESC)](https://huggingface.co/datasets/ashraq/esc50) : ```python dataset = load_dataset("ashraq/esc50", split="train", streaming=True) audio_sample = next(iter(dataset))["audio"]["array"] ``` Nous définissons ensuite nos étiquettes candidates, qui forment l'ensemble des étiquettes de classification possibles. Le modèle renverra une probabilité de classification pour chacune des étiquettes que nous définissons. Cela signifie que nous devons connaître _a-priori_ l'ensemble des étiquettes possibles dans notre problème de classification, de sorte que l'étiquette correcte soit contenue dans l'ensemble et se voie donc attribuer un score de probabilité valide. Notez que nous pouvons soit transmettre l'ensemble complet des étiquettes au modèle, soit un sous-ensemble sélectionné à la main qui, selon nous, contient l'étiquette correcte. Passer l'ensemble complet des étiquettes sera plus exhaustif, mais se fait au détriment d'une précision de classification plus faible puisque l'espace de classification est plus grand (à condition que l'étiquette correcte soit notre sous-ensemble d'étiquettes choisi): ```python candidate_labels = ["Sound of a dog", "Sound of vacuum cleaner"] ``` Nous pouvons parcourir les deux modèles pour trouver l'étiquette candidate qui est la plus similaire à l'entrée audio: ```python classifier = pipeline( task="zero-shot-audio-classification", model="laion/clap-htsat-unfused" ) classifier(audio_sample, candidate_labels=candidate_labels) ``` **Sortie :** ``` [{'score': 0.9997242093086243, 'label': 'Sound of a dog'}, {'score': 0.0002758323971647769, 'label': 'Sound of vacuum cleaner'}] ``` Le modèle semble assez confiant (probabilité de 99,97%) que nous ayons le son d'un chien .Nous allons donc prendre cela comme notre prédiction. Confirmons si nous avions raison en écoutant l'échantillon audio (n'augmentez pas trop le volume, sinon vous risquez de sursauter !): ```python Audio(audio, rate=16000) ``` Parfait ! Nous avons le son d'un chien qui aboie 🐕, ce qui correspond à la prédiction du modèle. Jouez avec différents échantillons audio et différentes étiquettes candidates. Pouvez-vous définir un ensemble d'étiquettes qui donnent une bonne généralisation à travers le jeu de données ESC ? Astuce : pensez à l'endroit où vous pourriez trouver des informations sur les sons possibles dans ESC et construisez vos étiquettes en conséquence. Vous vous demandez peut-être pourquoi nous n'utilisons pas le pipeline de classification d’audio zero-shot pour **toutes** les tâches de classification d’audio ? Il semble que nous puissions faire des prédictions pour n'importe quel problème de classification d’audio en définissant des étiquettes de classe appropriées _à priori_, contournant ainsi la contrainte dont notre tâche de classification a besoin pour correspondre aux étiquettes sur lesquelles le modèle a été pré-entraîné. Cela se résume à la nature du modèle CLAP utilisé dans le pipeline zéro-shot. CLAP est pré-entraîné sur des données de classification d’audio _génériques_, similaires aux sons environnementaux dans le jeu de données ESC, plutôt que sur des données vocales spécifiques, comme nous l'avions dans la tâche de détection de langue. Si vous lui donnez un discours en anglais et un discours en espagnol, CLAP saurait que les deux exemples étaient des données vocales. 
Mais il ne serait pas capable de différencier les langues de la même manière qu'un modèle de détection de langue dédié à cette tâche. ## Et ensuite ? Nous avons couvert un certain nombre de tâches de classification d’audio, présenté les jeux de données et les modèles les plus pertinents que vous pouvez télécharger à partir du *Hub* et comment les utiliser en quelques lignes de code à l'aide de la classe `pipeline()`. Ces tâches comprenent la détection de mots-clés, l'identification de la langue et la classification d’audio en zéro-shot. Mais que se passe-t-il si nous voulons faire quelque chose de **nouveau** ? Nous avons beaucoup travaillé sur les tâches de traitement de la parole, mais ce n'est qu'un aspect de la classification d’audio. Un autre domaine populaire du traitement d’audio est la **musique**. Bien que la musique ait des caractéristiques intrinsèquement différentes à la parole, bon nombre des mêmes principes que nous avons déjà appris peuvent être appliqués. Dans la section suivante, nous allons passer en revue un guide étape par étape sur la façon dont vous pouvez *finetuner* un *transformer* avec 🤗 *Transformers* sur la tâche de classification de la musique. À la fin, vous aurez un *checkpoint* *finetuné* que vous pourrez brancher dans la classe `pipeline()`, vous permettant de classer les chansons exactement de la même manière que nous avons classé la parole ici.
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter4/introduction.mdx
# Unité 4 : Construire un classifieur de genres musicaux ## Ce que vous allez apprendre et construire La classification d'audio est l'une des applications les plus courantes des *transformers* dans le traitement du son et de la parole. Comme d'autres tâches de classification dans l'apprentissage automatique, cette tâche consiste à attribuer une ou plusieurs étiquettes à un enregistrement audio en fonction de son contenu. Par exemple, dans le cas de la parole, nous pourrions vouloir détecter quand des mots de réveil comme « Hey Siri » sont prononcés, ou déduire un mot clé comme « température » d'une requête vocale comme « Quel temps fait-il aujourd'hui ? ». Les sons environnementaux constituent un autre exemple : nous pourrions vouloir distinguer automatiquement des sons tels que « klaxon de voiture », « sirène », « aboiement de chien », etc. Dans cette section, nous verrons comment les *transformers* audio pré-entraînés peuvent être appliqués à une série de tâches de classification d'audio. Nous allons ensuite *finetuner* un *transformer* sur la tâche de classification de la musique, en classant les chansons dans des genres comme « pop » et « rock ». Il s'agit d'une partie importante des plateformes de streaming musical, qui recommandent des chansons similaires à celles que l'utilisateur est en train d'écouter. À la fin de cette section, vous saurez comment : * Trouver des modèles pré-entraînés appropriés pour les tâches de classification d'audio * Utiliser la bibliothèque 🤗 *Datasets* et le *Hub* pour sélectionner des jeux de données de classification d'audio * *Finetuner * un modèle pré-entraîné pour classer les chansons par genre * Construire une démo *Gradio* permettant de classer vos propres chansons
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter4/fine-tuning.mdx
# Finetuner un modèle de classification musicale Dans cette section, nous présenterons un guide étape par étape sur le *finetuning* d'un *transformer* encodeur pour la classification de la musique. Nous utiliserons un modèle léger pour cette démonstration et un jeu de données assez petit, ce qui signifie que le code est exécutable de bout en bout sur n'importe quel GPU grand public, y compris le GPU T4 16GB fourni gratuitement par Google Colab. La section comprend divers conseils que vous pouvez essayer si vous avez un GPU plus petit et rencontrez des problèmes de mémoire en cours de route. ## Le de données Pour entraîner notre modèle, nous utiliserons le jeu de données [GTZAN](https://huggingface.co/datasets/marsyas/gtzan), qui est un jeu de données populaire de 1 000 chansons pour la classification des genres musicaux. Chaque chanson dure 30 secondes et fait partie de l'un des 10 genres de musique disponible, allant du disco au métal. Nous pouvons obtenir les fichiers audio et leurs étiquettes correspondantes à partir du *Hub* avec la fonction `load_dataset()` de 🤗 *Datasets* : ```python from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") gtzan ``` **Sortie :** ```out Dataset({ features: ['file', 'audio', 'genre'], num_rows: 999 }) ``` <Tip warning={true}> L'un des enregistrements dans GTZAN est corrompu, il a donc été supprimé du jeu de données. C'est pourquoi nous avons 999 exemples au lieu de 1 000. </Tip> GTZAN ne fournit pas de jeu de validation prédéfini, nous devrons donc en créer un nous-mêmes. Le jeu de données est équilibré entre les genres, nous pouvons donc utiliser la méthode `train_test_split()` pour créer rapidement une répartition 90/10 comme suit : ```python gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1) gtzan ``` **Sortie :** ```out DatasetDict({ train: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 899 }) test: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 100 }) }) ``` Maintenant que nous avons nos jeux d’entraînement et de validation, jetons un coup d'œil à l'un des fichiers audio : ```python gtzan["train"][0] ``` **Sortie :** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.10720825, 0.16122437, 0.28585815, ..., -0.22924805, -0.20629883, -0.11334229, ], dtype=float32, ), "sampling_rate": 22050, }, "genre": 7, } ``` Comme nous l'avons vu dans [Unité 1](.. /chapter1/audio_data), les fichiers audio sont représentés sous forme de tableaux NumPy à 1 dimension, où la valeur du tableau représente l'amplitude à ce pas de temps. Pour ces chansons, la fréquence d'échantillonnage est de 22 050 Hz, ce qui signifie qu'il y a 22 050 valeurs d'amplitude échantillonnées par seconde. Nous devrons garder cela à l'esprit lorsque nous utiliserons un modèle pré-entraîné avec un taux d'échantillonnage différent, en convertissant nous-mêmes les taux d'échantillonnage pour nous assurer qu'ils correspondent. Nous pouvons également voir que le genre est représenté sous la forme d'un entier, ou _class label_, qui est le format dans lequel le modèle fera ses prédictions. 
Utilisons la méthode `int2str()` de la caractéristique `genre` pour associer ces entiers à des noms lisibles par l'homme :

```python
id2label_fn = gtzan["train"].features["genre"].int2str
id2label_fn(gtzan["train"][0]["genre"])
```

**Sortie :**

```out
'pop'
```

Cette étiquette semble correcte, car elle correspond au nom de fichier du fichier audio. Écoutons maintenant quelques exemples supplémentaires en utilisant Gradio pour créer une interface simple avec l'API `Blocks` :

```python
import gradio as gr


def generate_audio():
    example = gtzan["train"].shuffle()[0]
    audio = example["audio"]
    return (
        audio["sampling_rate"],
        audio["array"],
    ), id2label_fn(example["genre"])


with gr.Blocks() as demo:
    with gr.Column():
        for _ in range(4):
            audio, label = generate_audio()
            output = gr.Audio(audio, label=label)

demo.launch(debug=True)
```

<iframe src="https://course-demos-gtzan-samples.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>

À partir de ces échantillons, nous pouvons certainement entendre la différence entre les genres, mais un *transformer* peut-il le faire aussi ? Entraînons un modèle pour le découvrir ! Tout d'abord, nous devrons trouver un modèle pré-entraîné approprié pour cette tâche. Voyons comment nous pouvons le faire.

## Choisir un modèle pré-entraîné pour la classification audio

Pour commencer, choisissons un modèle pré-entraîné approprié pour la classification audio. Dans ce domaine, le pré-entraînement est généralement effectué sur de grandes quantités de données audio non étiquetées, en utilisant des jeux de données tels que [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) et [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli). La meilleure façon de trouver ces modèles sur le *Hub* est d'utiliser le filtre « classification audio », comme décrit dans la section précédente. Bien que des modèles comme Wav2Vec2 et HuBERT soient très populaires, nous utiliserons un modèle appelé _DistilHuBERT_. Il s'agit d'une version beaucoup plus petite (ou distillée) du modèle [HuBERT](https://huggingface.co/docs/transformers/model_doc/hubert), qui s'entraîne environ 73% plus vite, tout en préservant la plupart des performances.
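Pour se faire une idée concrète de la différence de taille, on peut compter les paramètres des deux modèles. Le *checkpoint* `facebook/hubert-base-ls960` est pris ici comme point de référence pour HuBERT de base ; il s'agit d'une simple esquisse indicative :

```python
from transformers import AutoModel

distilhubert = AutoModel.from_pretrained("ntu-spml/distilhubert")
hubert = AutoModel.from_pretrained("facebook/hubert-base-ls960")


def nb_parametres(model):
    # Somme du nombre d'éléments de chaque tenseur de paramètres
    return sum(p.numel() for p in model.parameters())


print(f"DistilHuBERT : {nb_parametres(distilhubert) / 1e6:.1f} M paramètres")
print(f"HuBERT base  : {nb_parametres(hubert) / 1e6:.1f} M paramètres")
```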
<iframe src="https://autoevaluate-leaderboards.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> ## De l'audio aux caractéristiques d'apprentissage automatique ## Prétraitement des données À l'instar de la tokenisation en NLP, les modèles audio et vocaux nécessitent que l'entrée soit encodée dans un format que le modèle peut traiter. Dans 🤗 *Transformers*, la conversion de l'audio au format d'entrée est gérée par l’extracteur de caractéristique du modèle. 🤗 *Transformers* fournit une classe pratique `AutoFeatureExtractor` qui peut sélectionner automatiquement le bon extracteur de caractéristiques pour un modèle donné. Pour voir comment nous pouvons traiter nos fichiers audio, commençons par instancier l'extracteur de caractéristiques pour DistilHuBERT à partir du *checkpoint* pré-entraîné : ```python from transformers import AutoFeatureExtractor model_id = "ntu-spml/distilhubert" feature_extractor = AutoFeatureExtractor.from_pretrained( model_id, do_normalize=True, return_attention_mask=True ) ``` Comme le taux d'échantillonnage du modèle et de le jeu de données est différent, nous devons rééchantillonner le fichier audio à 16 000 Hz avant de le transmettre à l'extracteur de caractéristiques. Nous pouvons le faire en obtenant d'abord la fréquence d'échantillonnage du modèle à partir de l'extracteur de caractéristiques : ```python sampling_rate = feature_extractor.sampling_rate sampling_rate ``` **Sortie :** ```out 16000 ``` Ensuite, nous rééchantillonnons le jeu de données à l'aide de la méthode `cast_column()` et de la fonction `Audio` de 🤗 *Datasets* : ```python from datasets import Audio gtzan = gtzan.cast_column("audio", Audio(sampling_rate=sampling_rate)) ``` Nous pouvons maintenant vérifier le premier échantillon de l’échantillon d’entraînement de notre jeu de données pour vérifier qu'il est bien à 16 000 Hz. 🤗 *Datasets* rééchantillonnent le fichier audio *à la volée* lorsque nous chargeons chaque échantillon audio : ```python gtzan["train"][0] ``` **Sortie :** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.0873509, 0.20183384, 0.4790867, ..., -0.18743178, -0.23294401, -0.13517427, ], dtype=float32, ), "sampling_rate": 16000, }, "genre": 7, } ``` Bien. Nous pouvons voir que le taux d'échantillonnage a été sous-échantillonné à 16kHz. Les valeurs de tableau sont également différentes car nous n'avons maintenant qu'environ une valeur d'amplitude pour chaque 1,5 pas que nous avions auparavant. Une caractéristique de conception des modèles de type Wav2Vec2 et HuBERT est qu'ils acceptent un tableau de flottants correspondant à la forme d'onde brute du signal vocal comme entrée. 
Cela contraste avec d'autres modèles, comme Whisper, où nous prétraitons la forme d'onde audio brute au format spectrogramme. Nous avons mentionné que les données audio sont représentées comme un tableau à 1 dimension, elles sont donc déjà dans le bon format pour être lues par le modèle (un ensemble d'entrées continues à pas de temps discrets). Alors, que fait exactement l'extracteur de caractéristiques ? Eh bien, les données audio sont dans le bon format, mais nous n'avons imposé aucune restriction sur les valeurs qu'elles peuvent prendre. Pour que notre modèle fonctionne de manière optimale, nous voulons conserver toutes les entrées dans la même plage dynamique. Cela va nous assurer d'obtenir une plage similaire d'activations et de gradients pour nos échantillons, ce qui nous aidera à la stabilité et à la convergence pendant l'entraînement. Pour ce faire, nous *normalisons* nos données audio, en redimensionnant chaque échantillon à une moyenne nulle et une variance unitaire pour avoir des variables centrées réduites. C'est exactement cette normalisation des caractéristiques que notre extracteur de caractéristiques effectue. Nous pouvons jeter un coup d'œil à l'extracteur de caractéristiques en fonctionnement en l'appliquant à notre premier échantillon audio. Tout d'abord, calculons la moyenne et la variance de nos données audio brutes : ```python import numpy as np sample = gtzan["train"][0]["audio"] print(f"Mean: {np.mean(sample['array']):.3}, Variance: {np.var(sample['array']):.3}") ``` **Sortie :** ```out Mean: 0.000185, Variance: 0.0493 ``` Nous pouvons voir que la moyenne est déjà proche de 0, mais la variance est plus proche de 0,05. Si la variance de l'échantillon était plus grande, cela pourrait causer des problèmes à notre modèle, car la plage dynamique des données audio serait très petite et donc difficile à séparer. Appliquons l'extracteur de caractéristiques et voyons à quoi ressemblent les sorties: ```python inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) print(f"inputs keys: {list(inputs.keys())}") print( f"Mean: {np.mean(inputs['input_values']):.3}, Variance: {np.var(inputs['input_values']):.3}" ) ``` **Sortie :** ```out inputs keys: ['input_values', 'attention_mask'] Mean: -4.53e-09, Variance: 1.0 ``` Notre extracteur de caractéristiques renvoie un dictionnaire de deux tableaux : `input_values` et `attention_mask`. Les `input_values` sont les entrées audio prétraitées que nous passerons au modèle HuBERT. L’[`attention_mask`](https://huggingface.co/docs/transentraîners/glossary#attention-mask) est utilisé lorsque nous traitons un _batch_ d'entrées audio à la fois. Il est utilisé pour indiquer au modèle où nous avons des entrées rembourrées de différentes longueurs. Nous pouvons voir que la valeur moyenne est maintenant beaucoup plus proche de 0, et la variance est à 1 ! C'est exactement la forme sous laquelle nous voulons que nos échantillons audio soient avant de les passer dans notre HuBERT. <Tip warning={true}> Notez comment nous avons transmis le taux d'échantillonnage de nos données audio à notre extracteur de caractéristiques. Il s'agit d'une bonne pratique, car l'extracteur de caractéristiques effectue une vérification sous le capot pour s'assurer que le taux d'échantillonnage de nos données audio correspond au taux d'échantillonnage attendu par le modèle. 
Si le taux d'échantillonnage de nos données audio ne correspondait pas au taux d'échantillonnage de notre modèle, nous aurions besoin de suréchantillonner ou de sous-échantillonner les données audio au taux d'échantillonnage correct. </Tip> Alors maintenant que nous savons comment traiter nos fichiers audio rééchantillonnés, la dernière chose à faire est de définir une fonction que nous pouvons appliquer à tous les exemples du jeu de données. Étant donné que nous nous attendons à des échantillons de 30 secondes, nous tronquerons aussi les échantillons plus longs en utilisant les arguments `max_length` et `truncation` de l'extracteur de caractéristiques comme suit : ```python max_duration = 30.0 def preprocess_function(examples): audio_arrays = [x["array"] for x in examples["audio"]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=int(feature_extractor.sampling_rate * max_duration), truncation=True, return_attention_mask=True, ) return inputs ``` Une fois cette fonction définie, nous pouvons maintenant l'appliquer au jeu de données à l'aide de la méthode `map()`. ```python gtzan_encoded = gtzan.map( preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1 ) gtzan_encoded ``` **Sortie :** ```out DatasetDict({ train: Dataset({ features: ['genre', 'input_values'], num_rows: 899 }) test: Dataset({ features: ['genre', 'input_values'], num_rows: 100 }) }) ``` Pour simplifier l’entraînement, nous avons supprimé les colonnes `audio` et `file` du jeu de données. La colonne `input_values` contient les fichiers audio encodés, la colonne `attention_mask` est un masque binaire de valeurs 0/1 qui indique où nous avons rempli l'entrée audio, et la colonne `genre` contient les étiquettes (ou cibles) correspondantes. Pour permettre au `Trainer` de traiter les étiquettes de classe, nous devons renommer la colonne `genre` en `label` : ```python gtzan_encoded = gtzan_encoded.rename_column("genre", "label") ``` Enfin, nous devons obtenir les associations d'étiquettes à partir du jeu de données. L’association nous mènera des identifiants entiers (par exemple `7`) aux étiquettes de classe lisibles par l'homme (par exemple `pop`) et inversement. Ce faisant, nous pouvons convertir la prédiction de notre modèle dans un format lisible par l'homme, ce qui nous permet d'utiliser le modèle dans n'importe quelle application en aval. Nous pouvons le faire en utilisant la méthode `int2str()` comme suit: ```python id2label = { str(i): id2label_fn(i) for i in range(len(gtzan_encoded["train"].features["label"].names)) } label2id = {v: k for k, v in id2label.items()} id2label["7"] ``` **Sortie :** ```out 'pop' ``` Nous avons maintenant un jeu de données prêt pour l’entraînement. Voyons comment nous pouvons entraîner un modèle sur ce jeu de données. ## Finetuner le modèle Pour affiner le modèle, nous utiliserons la classe 'Trainer' de 🤗 *Transformers*. Comme nous l'avons vu dans d'autres chapitres, le « Trainer » est une API de haut niveau conçue pour gérer les scénarios de formation les plus courants. Dans ce cas, nous utiliserons le 'Trainer' pour affiner le modèle sur GTZAN. Pour ce faire, nous devons d'abord charger un modèle pour cette tâche. Nous pouvons le faire en utilisant la classe 'AutoModelForAudioClassification', qui ajoutera automatiquement la tête de classification appropriée à notre modèle DistilHuBERT préentraîné. 
Allons de l'avant et instancions le modèle :

```python
from transformers import AutoModelForAudioClassification

num_labels = len(id2label)

model = AutoModelForAudioClassification.from_pretrained(
    model_id,
    num_labels=num_labels,
    label2id=label2id,
    id2label=id2label,
)
```

Nous vous conseillons fortement de télécharger les *checkpoints* du modèle directement sur le [*Hub*](https://huggingface.co/) pendant l'entraînement. Le *Hub* fournit :

- Un contrôle de version intégré : vous pouvez être sûr qu'aucun *checkpoint* du modèle n'est perdu pendant l'entraînement.
- Tensorboard : suivez les métriques importantes au cours de l'entraînement.
- La carte du modèle : documentant ce que fait un modèle et ses cas d'utilisation prévus.
- Communauté : un moyen facile de partager et de collaborer avec la communauté !

Lier le *notebook* au *Hub* est simple. Il suffit d'entrer votre *token* d'authentification au *Hub* lorsque vous y êtes invité. Vous pouvez trouver votre *token* d'authentification [ici](https://huggingface.co/settings/tokens) et le saisir ci-dessous :

```python
from huggingface_hub import notebook_login

notebook_login()
```

**Sortie :**

```bash
Login successful
Your token has been saved to /root/.huggingface/token
```

L'étape suivante consiste à définir les arguments d'entraînement, y compris la taille du batch, les étapes d'accumulation du gradient, le nombre d'époques d'entraînement et le taux d'apprentissage :

```python
from transformers import TrainingArguments

model_name = model_id.split("/")[-1]
batch_size = 8
gradient_accumulation_steps = 1
num_train_epochs = 10

training_args = TrainingArguments(
    f"{model_name}-finetuned-gtzan",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_train_epochs,
    warmup_ratio=0.1,
    logging_steps=5,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    fp16=True,
    push_to_hub=True,
)
```

<Tip warning={true}>
Ici, nous avons défini `push_to_hub=True` pour activer le téléchargement automatique de nos *checkpoints* *finetunés* pendant l'entraînement. Si vous ne souhaitez pas que vos *checkpoints* soient téléchargés sur le *Hub*, vous pouvez définir cette option sur `False`.
</Tip>

La dernière chose que nous devons faire est de définir les métriques. Étant donné que le jeu de données est équilibré, nous utiliserons la précision comme métrique et la chargerons à l'aide de la bibliothèque 🤗 *Evaluate* :

```python
import evaluate

metric = evaluate.load("accuracy")


def compute_metrics(eval_pred):
    """Computes accuracy on a batch of predictions"""
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)
```

Nous avons maintenant toutes les pièces ! Instancions le `Trainer` et entraînons le modèle :

```python
from transformers import Trainer

trainer = Trainer(
    model,
    training_args,
    train_dataset=gtzan_encoded["train"],
    eval_dataset=gtzan_encoded["test"],
    tokenizer=feature_extractor,
    compute_metrics=compute_metrics,
)

trainer.train()
```

<Tip warning={true}>
Selon votre GPU, il est possible que vous rencontriez une erreur CUDA `"out-of-memory"` lorsque vous commencez à vous entraîner.
Dans ce cas, vous pouvez réduire la taille du batch de manière incrémentielle par des facteurs de 2 et utiliser ['gradient_accumulation_steps'](https://huggingface.co/docs/transentraîners/main_classes/trainer#transentraîners.TrainingArguments.gradient_accumulation_steps) pour compenser. </Tip> **Sortie :** ```out | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7297 | 1.0 | 113 | 1.8011 | 0.44 | | 1.24 | 2.0 | 226 | 1.3045 | 0.64 | | 0.9805 | 3.0 | 339 | 0.9888 | 0.7 | | 0.6853 | 4.0 | 452 | 0.7508 | 0.79 | | 0.4502 | 5.0 | 565 | 0.6224 | 0.81 | | 0.3015 | 6.0 | 678 | 0.5411 | 0.83 | | 0.2244 | 7.0 | 791 | 0.6293 | 0.78 | | 0.3108 | 8.0 | 904 | 0.5857 | 0.81 | | 0.1644 | 9.0 | 1017 | 0.5355 | 0.83 | | 0.1198 | 10.0 | 1130 | 0.5716 | 0.82 | ``` L’entraînement durera environ 1 heure en fonction de votre GPU ou de celui alloué par Google Colab. Notre meilleure précision d'évaluation est de 83%. Pas mal pour seulement 10 époques avec 899 exemples d'entraînement ! Nous pourrions certainement améliorer ce résultat en entraînant sur plus d'époques, en utilisant des techniques de régularisation telles que le *dropout*, ou en découpant chaque segment d’audio de 30s en segments de 15s pour utiliser une stratégie de prétraitement de données plus efficace. La grande question est de savoir comment ce système se compare à d'autres systèmes de classification de musique 🤔 Pour cela, nous pouvons afficher le [classement *autoevaluate*](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=marsyas%2Fgtzan&only_verified=0&task=audio-classification&config=all&split=train&metric=accuracy), un classement qui catégorise les modèles par langue et jeu de données, puis les classe en fonction de leur précision. Nous pouvons automatiquement soumettre notre *checkpoint* au classement lorsque nous transmettons les résultats de l'entraînement au *Hub*. Nous devons simplement définir les arguments appropriés (kwargs). Vous pouvez modifier ces valeurs pour qu'elles correspondent à votre jeu de données, à votre langue et au nom de votre modèle en conséquence : ```python kwargs = { "dataset_tags": "marsyas/gtzan", "dataset": "GTZAN", "model_name": f"{model_name}-finetuned-gtzan", "finetuned_from": model_id, "tasks": "audio-classification", } ``` Les résultats de l’entraînement peuvent maintenant être téléchargés sur le *Hub*. Pour ce faire, exécutez la commande `.push_to_hub` : ```python trainer.push_to_hub(**kwargs) ``` Cela enregistrera les logs d'entraînement et les poids du modèle sous `"your-username/distilhubert-finetuned-gtzan"`. Pour cet exemple, consultez [`"sanchit-gandhi/distilhubert-finetuned-gtzan"`](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan). ## Partager le modèle Vous pouvez désormais partager votre modèle avec n'importe qui en utilisant le lien sur le *Hub*. Il sera utilisable avec l'identifiant `"your-username/distilhubert-finetuned-gtzan"` directement dans la classe `pipeline()`. Par exemple, pour charger le *checkpoint* *finetuné* ['"sanchit-gandhi/distilhubert-finetuned-gtzan"'](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan): ```python from transformers import pipeline pipe = pipeline( "audio-classification", model="sanchit-gandhi/distilhubert-finetuned-gtzan" ) ``` ## Conclusion Dans cette section, nous avons couvert un guide étape par étape pour *finetuner* le modèle DistilHuBERT pour la tâche de classification de la musique. 
Bien que nous nous soyons concentrés sur cette tâche et le jeu de données GTZAN, les étapes présentées ici s'appliquent plus généralement à toute tâche de classification audio. Le même script peut être utilisé pour les tâches de détection de mots-clés ou l'identification de la langue. Il vous suffit de changer le jeu de données avec celui de votre tâche d'intérêt ! Si vous souhaitez *finetuner* d'autres modèles du *Hub* pour la classification d’audio, nous vous encourageons à consulter les [exemples](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) trouvables sur le dépôt 🤗 *Transformers*. Dans la section suivante, nous prendrons le modèle que nous venons de *finetuner* et construirons une démo avec *Gradio* que nous pourrons partager sur le Hub.
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter4/hands_on.mdx
# Exercice pratique Il est temps de manipuler les modèles audio et d'appliquer ce que vous avez appris jusqu'à présent. Cet exercice est l'un des quatre exercices pratiques requis pour obtenir un certificat de fin de cours. Voici les instructions. Dans cette unité, nous avons démontré comment *finetuner* un modèle HuBERT sur le jeu de données `marsyas/gtzan` pour de la classification de musique. Notre exemple a atteint une précision de 83%. Votre tâche consiste à améliorer cette mesure de précision. Vous pouvez choisir n'importe quel modèle sur le [🤗 *Hub*](https://huggingface.co/models) que vous pensez adapté à la classification audio, et utiliser exactement le même jeu de données [`marsyas/gtzan`](https://huggingface.co/datasets/marsyas/gtzan) pour construire votre propre classifieur. Votre objectif est d'atteindre une précision de 87% sur ce jeu de données. Vous pouvez choisir exactement le même modèle, et jouer avec les hyperparamètres d'entraînement, ou choisir un modèle complètement différent. A vous de décider ! Pour que votre résultat soit pris en compte dans votre certificat, n'oubliez pas de pousser votre modèle sur le *Hub* comme cela a été montré dans cette unité avec les `**kwargs` suivants à la fin de l'entraînement : ```python kwargs = { "dataset_tags": "marsyas/gtzan", "dataset": "GTZAN", "model_name": f"{model_name}-finetuned-gtzan", "finetuned_from": model_id, "tasks": "audio-classification", } trainer.push_to_hub(**kwargs) ``` Voici quelques ressources supplémentaires qui pourraient vous être utiles dans le cadre de cet exercice : * [Guide de classification audio dans la documentation de 🤗 Transformers](https://huggingface.co/docs/transformers/tasks/audio_classification) * [Documentation du modèle HuBERT](https://huggingface.co/docs/transformers/model_doc/hubert) * [Documentation du modèle M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct) * [Audio Spectrogram Transformer documentation](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer) * [Documentation du Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2) N'hésitez pas à créer une démo de votre modèle et à la partager sur Discord ! Si vous avez des questions, posez-les dans le canal `#audio-study-group`.
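À titre indicatif, avant de pousser votre modèle, vous pouvez vérifier que la précision atteint bien l'objectif sur l'échantillon de test. L'esquisse ci-dessous suppose que `trainer`, `gtzan_encoded` et `kwargs` ont été définis comme dans l'unité :

```python
# Esquisse : `trainer`, `gtzan_encoded` et `kwargs` proviennent du code de l'unité
resultats = trainer.evaluate(gtzan_encoded["test"])
print(f"Précision sur le jeu de test : {resultats['eval_accuracy']:.2%}")

if resultats["eval_accuracy"] >= 0.87:
    trainer.push_to_hub(**kwargs)
```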
3
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/supplemental_reading.mdx
# En apprendre encore plus

Cette unité couvre de nombreux concepts fondamentaux permettant de comprendre les données audio et de travailler avec elles. Vous voulez en savoir plus ? Vous trouverez ici des ressources supplémentaires qui vous aideront à approfondir votre compréhension des sujets et à améliorer votre expérience d'apprentissage.

Dans la vidéo suivante, Monty Montgomery de xiph.org présente une démonstration en temps réel de l'échantillonnage, de la quantification, de la profondeur de bits et du *dither* sur un équipement audio réel, en utilisant à la fois des outils d'analyse numérique modernes et du matériel analogique vintage :

<Youtube id="cIQ9IXSUzuM"/>

Si vous souhaitez approfondir le traitement des signaux numériques, consultez le livre gratuit (en anglais) [*Digital Signals Theory*](https://brianmcfee.net/dstbook-site/content/intro.html) écrit par Brian McFee, professeur assistant de technologie musicale et de science des données à l'Université de New York et principal responsable de `librosa`.
4
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/preprocessing.mdx
# Prétraitement d'un jeu de données audio

Le chargement d'un jeu de données avec 🤗 *Datasets* n'est que la moitié du plaisir. Si vous prévoyez de l'utiliser pour entraîner un modèle ou pour exécuter de l'inférence, vous devrez d'abord prétraiter les données. En général, cela implique les étapes suivantes :

* Rééchantillonnage des données audio
* Filtrage du jeu de données
* Conversion des données audio en entrée attendue par le modèle

## Rééchantillonnage des données audio

La fonction `load_dataset` télécharge les exemples audio avec le taux d'échantillonnage avec lequel ils ont été publiés. Ce n'est pas toujours le taux d'échantillonnage attendu par le modèle que vous prévoyez d'entraîner ou d'utiliser pour l'inférence. S'il existe un écart entre les taux d'échantillonnage, vous pouvez rééchantillonner l'audio à la fréquence d'échantillonnage attendue par le modèle.

La plupart des modèles pré-entraînés disponibles ont été pré-entraînés sur des jeux de données audio à une fréquence d'échantillonnage de 16 kHz. Lorsque nous avons exploré le jeu de données MINDS-14, vous avez peut-être remarqué qu'il est échantillonné à 8 kHz, ce qui signifie que nous devrons probablement le suréchantillonner.

Pour ce faire, utilisez la méthode `cast_column` de 🤗 *Datasets*. Cette opération ne modifie pas l'audio sur place : elle indique plutôt à 🤗 *Datasets* de rééchantillonner les exemples audio à la volée lorsqu'ils sont chargés. Le code suivant définit la fréquence d'échantillonnage à 16 kHz :

```py
from datasets import Audio

minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
```

Rechargez le premier exemple audio du jeu de données MINDS-14 et vérifiez qu'il a été rééchantillonné à la fréquence d'échantillonnage (`sampling_rate`) souhaitée :

```py
minds[0]
```

**Sortie :**

```out
{
    "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav",
    "audio": {
        "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav",
        "array": array(
            [
                2.0634243e-05,
                1.9437837e-04,
                2.2419340e-04,
                ...,
                9.3852862e-04,
                1.1302452e-03,
                7.1531429e-04,
            ],
            dtype=float32,
        ),
        "sampling_rate": 16000,
    },
    "transcription": "I would like to pay my electricity bill using my card can you please assist",
    "intent_class": 13,
}
```

Vous remarquerez peut-être que les valeurs du tableau sont maintenant différentes elles aussi. C'est parce que nous avons désormais deux fois plus de valeurs d'amplitude que nous en avions auparavant.

<Tip>
💡 Quelques informations sur le rééchantillonnage : si un signal audio a été échantillonné à 8 kHz, c'est-à-dire qu'il comporte 8 000 échantillons par seconde, nous savons que l'audio ne contient aucune fréquence supérieure à 4 kHz. Ceci est garanti par le théorème d'échantillonnage de Nyquist. Pour cette raison, nous pouvons être certains qu'entre les points d'échantillonnage, le signal continu d'origine forme toujours une courbe lisse. Le suréchantillonnage vers un taux d'échantillonnage plus élevé consiste alors à calculer des valeurs d'échantillonnage supplémentaires qui se situent entre les valeurs existantes, en approximant cette courbe. Le sous-échantillonnage, en revanche, nécessite que nous filtrions d'abord toutes les fréquences qui seraient supérieures à la nouvelle limite de Nyquist, avant d'estimer les nouveaux points d'échantillonnage.
En d'autres termes, vous ne pouvez pas sous-échantillonner d'un facteur 2 en jetant simplement un échantillon sur deux : cela créerait des distorsions dans le signal, appelées repliement (*aliasing*). Faire un rééchantillonnage correct est délicat et mieux laissé à des bibliothèques bien testées telles que librosa ou 🤗 *Datasets*.
</Tip>

## Filtrage du jeu de données

Vous devrez peut-être filtrer les données en fonction de certains critères. L'un des cas courants consiste à limiter les exemples audio à une certaine durée. Par exemple, nous pourrions vouloir filtrer tous les exemples de plus de 20 secondes pour éviter les erreurs de mémoire insuffisante lors de l'entraînement d'un modèle.

Nous pouvons le faire en utilisant la méthode `filter` de 🤗 *Datasets* et en lui passant une fonction avec une logique de filtrage. Commençons par écrire une fonction qui indique quels exemples conserver et lesquels rejeter. Cette fonction, `is_audio_length_in_range`, renvoie `True` si un échantillon dure moins de 20 s et `False` s'il est plus long que 20 s.

```py
MAX_DURATION_IN_SECONDS = 20.0


def is_audio_length_in_range(input_length):
    return input_length < MAX_DURATION_IN_SECONDS
```

La fonction de filtrage peut être appliquée à une colonne d'un jeu de données, mais nous n'avons pas de colonne contenant la durée des pistes audio dans ce jeu de données. Cependant, nous pouvons en créer une, filtrer en fonction des valeurs de cette colonne, puis la supprimer.

```py
# Utiliser librosa pour obtenir la durée de l'exemple à partir du fichier audio
new_column = [librosa.get_duration(path=x) for x in minds["path"]]
minds = minds.add_column("duration", new_column)

# utiliser la méthode `filter` de 🤗 Datasets pour appliquer la fonction de filtrage
minds = minds.filter(is_audio_length_in_range, input_columns=["duration"])

# supprimer la colonne d'assistance temporaire
minds = minds.remove_columns(["duration"])
minds
```

**Sortie :**

```out
Dataset({features: ["path", "audio", "transcription", "intent_class"], num_rows: 624})
```

Nous pouvons vérifier que le jeu de données a été filtré de 654 exemples à 624.

## Prétraitement des données audio

L'un des aspects les plus difficiles du travail avec des jeux de données audio consiste à préparer les données dans le bon format pour l'entraînement des modèles. Comme vous l'avez vu, les données audio brutes se présentent sous la forme d'un tableau de valeurs d'échantillons. Cependant, les modèles pré-entraînés, que vous les utilisiez pour l'inférence ou que vous souhaitiez les *finetuner* pour votre tâche, s'attendent à ce que les données brutes soient converties en caractéristiques d'entrée. Les exigences concernant les caractéristiques d'entrée peuvent varier d'un modèle à l'autre : elles dépendent de l'architecture du modèle et des données avec lesquelles il a été pré-entraîné. La bonne nouvelle est que, pour chaque modèle audio pris en charge, 🤗 *Transformers* offre une classe d'extracteur de caractéristiques qui peut convertir les données audio brutes en caractéristiques d'entrée attendues par le modèle.

Alors, que fait un extracteur de caractéristiques avec les données audio brutes ? Jetons un coup d'œil à l'extracteur de caractéristiques de [Whisper](https://cdn.openai.com/papers/whisper.pdf) pour comprendre certaines transformations d'extraction de caractéristiques courantes. Whisper est un modèle pré-entraîné pour la reconnaissance vocale automatique (ASR) publié en septembre 2022 par Alec Radford et al. d'OpenAI.
Tout d'abord, l'extracteur de caractéristiques de Whisper complète (*padding*) ou tronque un batch d'exemples audio de sorte que tous les exemples aient une longueur d'entrée de 30 s. Les exemples plus courts sont complétés à 30 s en ajoutant des zéros à la fin de la séquence (les zéros dans un signal audio correspondent à l'absence de signal ou au silence). Les exemples de plus de 30 s sont tronqués à 30 s. Étant donné que tous les éléments du batch sont complétés ou tronqués à une longueur maximale dans l'espace d'entrée, il n'y a pas besoin de masque d'attention. Whisper est unique à cet égard : la plupart des autres modèles audio nécessitent un masque d'attention qui indique où les séquences ont été complétées, et donc où elles doivent être ignorées par le mécanisme d'auto-attention. Whisper est entraîné pour fonctionner sans masque d'attention et déduire directement des signaux vocaux quelles entrées ignorer.

La deuxième opération effectuée par l'extracteur de caractéristiques de Whisper consiste à convertir les tableaux audio complétés en spectrogrammes log-mel. Ces spectrogrammes décrivent comment les fréquences d'un signal changent au fil du temps, exprimées sur l'échelle mel et mesurées en décibels (la partie log) pour rendre les fréquences et les amplitudes plus représentatives de l'audition humaine.

Toutes ces transformations peuvent être appliquées à vos données audio brutes avec quelques lignes de code. Allons de l'avant et chargeons l'extracteur de caractéristiques à partir du *checkpoint* pré-entraîné de Whisper pour être prêts pour nos données audio :

```py
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
```

Ensuite, vous pouvez écrire une fonction pour prétraiter un seul exemple audio en le faisant passer par le `feature_extractor` :

```py
def prepare_dataset(example):
    audio = example["audio"]
    features = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"], padding=True
    )
    return features
```

Nous pouvons appliquer la fonction de préparation des données à tous nos exemples d'entraînement en utilisant la méthode `map` de 🤗 *Datasets* :

```py
minds = minds.map(prepare_dataset)
minds
```

**Sortie :**

```out
Dataset(
    {
        features: ["path", "audio", "transcription", "intent_class", "input_features"],
        num_rows: 624,
    }
)
```

Aussi simplement que cela, nous avons maintenant des spectrogrammes log-mel comme `input_features` dans le jeu de données.

Visualisons-les pour l'un des exemples de `minds` :

```py
import numpy as np
import matplotlib.pyplot as plt
import librosa.display

example = minds[0]
input_features = example["input_features"]

plt.figure().set_figwidth(12)
librosa.display.specshow(
    np.asarray(input_features[0]),
    x_axis="time",
    y_axis="mel",
    sr=feature_extractor.sampling_rate,
    hop_length=feature_extractor.hop_length,
)
plt.colorbar()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/log_mel_whisper.png" alt="Log mel spectrogram plot">
</div>

Vous pouvez maintenant voir à quoi ressemble l'entrée audio du modèle Whisper après le prétraitement.

La classe d'extracteur de caractéristiques du modèle se charge de transformer les données audio brutes au format attendu par le modèle. Cependant, de nombreuses tâches impliquant de l'audio sont multimodales, par exemple la reconnaissance vocale. Dans de tels cas, 🤗 *Transformers* offre également des tokeniseurs spécifiques au modèle pour traiter les entrées textuelles.
Pour une plongée approfondie dans les tokeniseurs, veuillez vous référer à notre [cours de NLP](https://huggingface.co/learn/nlp-course/fr/chapter2/4). Vous pouvez charger séparément l'extracteur de caractéristiques et le tokeniseur pour Whisper et d'autres modèles multimodaux, ou vous pouvez charger les deux via un processeur. Pour rendre les choses encore plus simples, utilisez `AutoProcessor` pour charger l'extracteur de caractéristiques et le processeur d'un modèle à partir d'un *checkpoint*, comme ceci : ```py from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("openai/whisper-small") ``` Nous avons illustré ici les étapes fondamentales de préparation des données. Bien entendu, les données personnalisées peuvent nécessiter un prétraitement plus complexe. Dans ce cas, vous pouvez étendre la fonction `prepare_dataset` pour effectuer n'importe quel type de transformations de données personnalisées. Avec 🤗 *Datasets*, si vous pouvez l'écrire en tant que fonction Python, vous pouvez l'appliquer à votre jeu de données !
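Par exemple, voici une esquisse hypothétique d'une fonction `prepare_dataset` étendue : elle suppose que `feature_extractor` et `minds` sont définis comme ci-dessus, normalise l'amplitude crête de chaque exemple et ajoute une colonne de durée qui pourrait servir au filtrage :

```py
import numpy as np


def prepare_dataset(example):
    audio = example["audio"]
    array = np.asarray(audio["array"], dtype=np.float32)

    # Transformation personnalisée (exemple) : normalisation de l'amplitude crête à 1.0
    peak = np.abs(array).max()
    if peak > 0:
        array = array / peak

    features = feature_extractor(
        array, sampling_rate=audio["sampling_rate"], padding=True
    )
    # On peut aussi ajouter des colonnes utiles, comme la durée en secondes
    features["duration"] = len(array) / audio["sampling_rate"]
    return features


minds = minds.map(prepare_dataset)
```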
5
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/streaming.mdx
# Streaming de données audio

L'un des plus grands défis posés par les jeux de données audio est leur taille. Une seule minute d'audio non compressé de qualité CD (44,1 kHz, 16 bits) occupe un peu plus de 5 Mo de stockage. En règle générale, un jeu de données audio contient des heures d'enregistrements.

Dans les sections précédentes, nous avons utilisé un très petit sous-ensemble du jeu de données audio MINDS-14, mais les jeux de données audio typiques sont beaucoup plus volumineux. Par exemple, la configuration `xs` (la plus petite) de [GigaSpeech de SpeechColab](https://huggingface.co/datasets/speechcolab/gigaspeech) ne contient que 10 heures de données d'entraînement, mais nécessite plus de 13 Go d'espace de stockage pour le téléchargement et la préparation. Alors, que se passe-t-il lorsque nous voulons nous entraîner sur un split plus grand ? La configuration `xl` complète du même jeu de données contient 10 000 heures de données d'entraînement, nécessitant plus de 1 To d'espace de stockage. Pour la plupart d'entre nous, cela dépasse largement les capacités d'un disque dur typique. Devons-nous débourser et acheter du stockage supplémentaire ? Ou existe-t-il un moyen de nous entraîner sur ces jeux de données sans contrainte d'espace disque ?

🤗 *Datasets* vient à la rescousse en proposant le mode [streaming](https://huggingface.co/docs/datasets/stream). Le streaming nous permet de charger les données progressivement au fur et à mesure que nous itérons sur le jeu de données. Plutôt que de télécharger l'ensemble du jeu de données en une seule fois, nous chargeons le jeu de données un exemple à la fois. Nous itérons sur le jeu de données, en chargeant et en préparant les exemples à la volée lorsqu'ils sont nécessaires. De cette façon, nous ne chargeons que les exemples que nous utilisons, et non ceux dont nous n'avons pas besoin ! Une fois que nous avons terminé avec un exemple, nous continuons à itérer sur le jeu de données et chargeons le suivant.

Le mode streaming présente trois avantages principaux par rapport au téléchargement de l'ensemble du jeu de données en une seule fois :

* Espace disque : les exemples sont chargés en mémoire un par un au fur et à mesure que nous itérons sur le jeu de données. Étant donné que les données ne sont pas téléchargées localement, aucun espace disque n'est requis ; vous pouvez donc utiliser des jeux de données de taille arbitraire.
* Temps de téléchargement et de traitement : les jeux de données audio sont volumineux et nécessitent beaucoup de temps pour être téléchargés et traités. Avec le streaming, le chargement et le traitement se font à la volée, ce qui signifie que vous pouvez commencer à utiliser le jeu de données dès que le premier exemple est prêt.
* Expérimentation facile : vous pouvez expérimenter sur une poignée d'exemples pour vérifier que votre script fonctionne, sans avoir à télécharger l'ensemble du jeu de données.

Il y a une mise en garde concernant le mode streaming. Lors du téléchargement d'un jeu de données complet sans streaming, les données brutes et les données traitées sont enregistrées localement sur le disque. Si nous voulons réutiliser ce jeu de données, nous pouvons charger directement les données traitées à partir du disque, en sautant les étapes de téléchargement et de traitement. Par conséquent, nous ne devons effectuer les opérations de téléchargement et de traitement qu'une seule fois, après quoi nous pouvons réutiliser les données préparées.
Avec le mode streaming, les données ne sont pas téléchargées sur le disque. Ainsi, ni les données téléchargées ni les données prétraitées ne sont mises en cache. Si nous voulons réutiliser le jeu de données, les étapes de streaming doivent être répétées, avec les fichiers audio chargés et traités à nouveau à la volée. Pour cette raison, il est conseillé de télécharger des jeux de données que vous êtes susceptible d'utiliser plusieurs fois. Comment activer le mode streaming ? Facile! Il suffit de définir `streaming=True` lorsque vous chargez votre jeu de données. Le reste sera pris en charge pour vous : ```py gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", streaming=True) ``` Tout comme nous avons appliqué des étapes de prétraitement à un sous-ensemble téléchargé de MINDS-14, vous pouvez effectuer le même prétraitement avec un jeu de données en streaming exactement de la même manière. La seule différence est que vous ne pouvez plus accéder à des échantillons individuels à l'aide de l'indexation Python (c'est-à-dire `gigaspeech["train"][sample_idx]`). Au lieu de cela, vous devez itérer sur le jeu de données. Voici comment accéder à un exemple lors de la diffusion en continu d'un jeu de données : ```py next(iter(gigaspeech["train"])) ``` **Sortie :** ```out { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", } ``` Si vous souhaitez prévisualiser plusieurs exemples d'un grand jeu de données, utilisez `take()` pour obtenir les $n$ premiers éléments. 
Prenons les deux premiers exemples dans le jeu de données gigaspeech : ```py gigaspeech_head = gigaspeech["train"].take(2) list(gigaspeech_head) ``` **Sortie :** ```out [ { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [ 0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621, ] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", }, { "segment_id": "AUD0000001043_S0000775", "speaker": "N/A", "text": "SIX TOMATOES <PERIOD>", "audio": { "path": "xs_chunks_0000/AUD0000001043_S0000775.wav", "array": array( [ 1.43432617e-03, 1.37329102e-03, 1.31225586e-03, ..., -6.10351562e-05, -1.22070312e-04, -1.83105469e-04, ] ), "sampling_rate": 16000, }, "begin_time": 3673.96, "end_time": 3675.26, "audio_id": "AUD0000001043", "title": "Asteroid of Fear", "url": "http//www.archive.org/download/asteroid_of_fear_1012_librivox/asteroid_of_fear_1012_librivox_64kb_mp3.zip", "source": 0, "category": 28, "original_full_path": "audio/audiobook/P0011/AUD0000001043.opus", }, ] ``` Le mode streaming peut faire passer vos recherches au niveau supérieur : non seulement les plus grands jeux de données vous sont accessibles mais vous pouvez facilement évaluer les systèmes sur plusieurs jeux de données en une seule fois sans vous soucier de votre espace disque. Par rapport à l'évaluation sur un seul jeu de données, l'évaluation multi-jeux de données donne une meilleure mesure des capacités de généralisation d'un système de reconnaissance vocale (cf. *End-to-end Speech Benchmark* (ESB)).
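À titre d'esquisse, le prétraitement vu dans la section précédente s'applique de la même manière à un jeu de données streamé. On suppose ici que `gigaspeech` est chargé comme ci-dessus et que `feature_extractor` est, par exemple, l'extracteur de caractéristiques de Whisper chargé dans la section sur le prétraitement :

```py
from datasets import Audio

# Le rééchantillonnage et le map fonctionnent aussi en streaming ; tout est appliqué à la volée
gigaspeech = gigaspeech.cast_column("audio", Audio(sampling_rate=16_000))


def prepare_dataset(example):
    audio = example["audio"]
    return feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"])


# `map` est paresseux sur un jeu de données streamé : les exemples sont traités à l'itération
gigaspeech_processed = gigaspeech.map(prepare_dataset)

for example in gigaspeech_processed["train"].take(2):
    print(example.keys())
```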
6
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # Vérifier votre compréhension de l'unité ### 1. En quelles unités le taux d'échantillonnage est-il mesuré ? <Question choices={[ { text: "dB", explain: "Non, l'amplitude est mesurée en décibels (dB)." }, { text: "Hz", explain: "La fréquence d'échantillonnage est le nombre d'échantillons prélevés en une seconde et est mesurée en hertz (Hz).", correct: true }, { text: "bit", explain: "Les bits sont utilisés pour décrire la profondeur de bits, qui fait référence au nombre de bits d'information utilisés pour représenter chaque échantillon d'un signal audio.", } ]} /> ### 2. Lorsqu'un grand jeu de données audio est streamé, à quel moment peut-on commencer à l'utiliser ? <Question choices={[ { text: "Dès que le jeu de données complet est téléchargé.", explain: "L'objectif du streaming de données est de pouvoir travailler avec ces données sans avoir à télécharger entièrement un jeu de données." }, { text: "Dès que les 16 premiers exemples sont téléchargés.", explain: " Réessayez !" }, { text: "Dès que le premier exemple est téléchargé.", explain: "", correct: true } ]} /> ### 3. Qu'est-ce qu'un spectrogramme ? <Question choices={[ { text: "Un appareil utilisé pour numériser le son qui est d'abord capturé par un microphone, qui convertit les ondes sonores en un signal électrique.", explain: "Un dispositif utilisé pour numériser un tel signal électrique est appelé convertisseur analogique-numérique. Réessayez !" }, { text: "Un graphique qui montre comment l'amplitude d'un signal audio change dans le temps. Il est également connu sous le nom de représentation du son dans le *domaine temporel*.", explain: "La description ci-dessus se réfère à la forme d'onde, et non au spectrogramme." }, { text: "Une représentation visuelle du spectre de fréquence d'un signal qui varie en fonction du temps.", explain: "", correct: true } ]} /> ### 4. Quel est le moyen le plus simple de convertir des données audio brutes en spectrogramme log-mel attendu par Whisper ? A. ```python librosa.feature.melspectrogram(audio["array"]) ``` B. ```python feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") feature_extractor(audio["array"]) ``` C. ```python dataset.feature(audio["array"], model="whisper") ``` <Question choices={[ { text: "A", explain: "`librosa.feature.melspectrogram()` crée un spectrogramme de puissance." }, { text: "B", explain: "", correct: true }, { text: "C", explain: "Le jeu de données ne prépare pas les caractéristiques pour les <i>Transformers</i>, ceci est fait par le préprocesseur du modèle." } ]} /> ### 5. Comment charger un jeu de données depuis me 🤗 Hub ? A. ```python from datasets import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` B. ```python import librosa dataset = librosa.load(PATH_TO_DATASET) ``` C. ```python from transformers import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` <Question choices={[ { text: "A", explain: "Le meilleur moyen est d'utiliser la bibliothèque 🤗 <i>Datasets</i>.", correct: true }, { text: "B", explain: "<code>Librosa.load</code> est utile pour charger un fichier audio individuel à partir d'un chemin dans un tuple avec des séries temporelles audio et une fréquence d'échantillonnage, mais pas un jeu de données entier avec de nombreux exemples et de multiples caractéristiques. " }, { text: "C", explain: "La méthode <code>load_dataset</code> se trouve dans la bibliothèque 🤗 <i>Datasets</i>, pas dans 🤗 <i>Transformers</i>." } ]} /> ### 6. 
Votre jeu de données personnalisé contient des données audio de haute qualité avec une fréquence d'échantillonnage de 32 kHz. Vous souhaitez entraîner un modèle de reconnaissance vocale qui s'attend à ce que les exemples audio aient une fréquence d'échantillonnage de 16 kHz. Que devez-vous faire ?

<Question
	choices={[
		{
			text: "Utiliser les exemples tels quels : le modèle généralisera facilement à des exemples audio de meilleure qualité.",
			explain: "En raison de la dépendance à l'égard des mécanismes d'attention, il est difficile pour les modèles de généraliser entre les taux d'échantillonnage."
		},
		{
			text: "Utiliser le module <code>Audio</code> de la bibliothèque 🤗 <i>Datasets</i> pour sous-échantillonner les exemples du jeu de données personnalisé.",
			explain: "",
			correct: true
		},
		{
			text: "Sous-échantillonner d'un facteur 2 en supprimant un échantillon sur deux.",
			explain: "Cela créera des distorsions dans le signal, appelées repliement (<i>aliasing</i>). Le rééchantillonnage est une opération délicate qu'il vaut mieux confier à des bibliothèques éprouvées telles que librosa ou 🤗 <i>Datasets</i>."
		}
	]}
/>

### 7. Comment convertir un spectrogramme généré par un modèle d'apprentissage automatique en une forme d'onde ?

<Question
	choices={[
		{
			text: "Nous pouvons utiliser un réseau neuronal appelé vocodeur pour reconstruire une forme d'onde à partir du spectrogramme.",
			explain: "Comme l'information de phase est manquante dans ce cas, nous devons utiliser un vocodeur ou l'algorithme classique de Griffin-Lim pour reconstruire la forme d'onde.",
			correct: true
		},
		{
			text: "Nous pouvons utiliser l'inverse de la TFCT pour convertir le spectrogramme généré en une forme d'onde.",
			explain: "Un spectrogramme généré ne contient pas les informations de phase nécessaires à l'utilisation de l'inverse de la TFCT."
		},
		{
			text: "Il est impossible de convertir un spectrogramme généré par un modèle d'apprentissage automatique en une forme d'onde.",
			explain: "Réessayez !"
		}
	]}
/>
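Pour mémoire, la réponse correcte à la question 6 correspond en pratique à quelque chose comme l'esquisse suivante, où le nom `dataset` est hypothétique :

```py
from datasets import Audio

# Sous-échantillonner de 32 kHz à 16 kHz avec le module Audio de 🤗 Datasets
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```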
7
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/introduction.mdx
# Unité 1 : Travailler avec des données audio ## Ce que vous apprendrez dans cette unité Chaque tâche audio ou vocale commence par un fichier audio. Avant de pouvoir résoudre ces tâches, il est important de comprendre ce que ces fichiers contiennent réellement et comment les utiliser. Dans cette unité, vous acquerrez une compréhension de la terminologie fondamentale liée aux données audio, notamment la forme d'onde, la fréquence d'échantillonnage et le spectrogramme. Vous apprendrez également à travailler avec des jeux de données audio, y compris le chargement et le prétraitement des données audio, et à streamer efficacement de grands jeux de données. À la fin de cette unité, vous maîtriserez la terminologie essentielle des données audio et serez équipé des compétences nécessaires pour travailler avec des jeux de données audio pour diverses applications. Les connaissances issues de cette unité vont jeter les bases de la compréhension du reste du cours.
8
0
hf_public_repos/audio-transformers-course/chapters/fr
hf_public_repos/audio-transformers-course/chapters/fr/chapter1/audio_data.mdx
# Introduction aux données audio Le **taux d'échantillonnage** (également appelé fréquence d'échantillonnage) est le nombre d'échantillons prélevés en une seconde et est mesuré en hertz (Hz). Pour vous donner un point de référence, l'audio de qualité CD a un taux d'échantillonnage de 44 100 Hz, ce qui signifie que les échantillons sont prélevés 44 100 fois par seconde. À titre de comparaison, l'audio haute résolution a un taux d'échantillonnage de 192 000 Hz ou 192 kHz. Un taux d'échantillonnage couramment utilisé dans les modèles vocaux d'apprentissage est de 16 000 Hz ou 16 kHz. Le choix de la fréquence d'échantillonnage détermine principalement la fréquence la plus élevée qui peut être capturée à partir du signal. Ceci est également connu sous le nom de limite de Nyquist et correspond exactement à la moitié du taux d'échantillonnage. Les fréquences audibles dans la parole humaine sont inférieures à 8 kHz et, par conséquent, l'échantillonnage de la parole à 16 kHz est suffisant. L'utilisation d'un taux d'échantillonnage plus élevé ne permettra pas de capturer plus d'informations et ne fera qu'augmenter le coût de calcul du traitement de ces fichiers. D'autre part, l'échantillonnage audio à un taux d'échantillonnage trop faible entraînera une perte d'informations. La parole échantillonnée à 8 kHz sonnera étouffée car les fréquences plus élevées ne peuvent pas être capturées à ce rythme. Il est important de vous assurer que tous les exemples audio de votre jeu de données ont le même taux d'échantillonnage lorsque vous travaillez sur une tâche audio. Si vous prévoyez d'utiliser des données audio personnalisées pour finetuner un modèle pré-entraîné, le taux d'échantillonnage de vos données doit correspondre au taux d'échantillonnage des données sur lesquelles le modèle a été pré-entraîné. La fréquence d'échantillonnage détermine l'intervalle de temps entre les échantillons audio successifs, ce qui a un impact sur la résolution temporelle des données audio. Prenons un exemple : un son de 5 secondes à une fréquence d'échantillonnage de 16 000 Hz sera représenté comme une série de 80 000 valeurs, tandis que le même son de 5 secondes à une fréquence d'échantillonnage de 8 000 Hz sera représenté comme une série de 40 000 valeurs. Les *transformers* qui résolvent les tâches audio traitent les exemples comme des séquences et s'appuient sur des mécanismes d'attention pour apprendre l'audio ou la représentation multimodale. Étant donné que les séquences sont différentes pour les exemples audio à des taux d'échantillonnage différents, il sera difficile pour les modèles de généraliser entre les taux d'échantillonnage. **Le rééchantillonnage** est le processus de mise en correspondance des taux d'échantillonnage et fait partie du [prétraitement](preprocessing#resampling-the-audio-data) des données audio. ## Amplitude et profondeur de bits Bien que le taux d'échantillonnage vous indique à quelle fréquence les échantillons sont prélevés, quelles sont exactement les valeurs de chaque échantillon? Le son est produit par des changements de pression atmosphérique à des fréquences audibles pour les humains. L'**amplitude** d'un son décrit le niveau de pression acoustique à un instant donné et est mesurée en décibels (dB). Nous percevons l'amplitude comme un volume sonore. Pour vous donner un exemple, une voix normale est inférieure à 60 dB, et un concert de rock peut être autour de 125 dB, repoussant les limites de l'audition humaine. 
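Le décibel exprimant un rapport d'amplitudes sur une échelle logarithmique, un petit calcul purement illustratif permet de fixer les idées :

```py
import numpy as np


# Esquisse : rapport entre deux amplitudes, exprimé en décibels
def rapport_en_db(a, b):
    return 20 * np.log10(a / b)


print(round(rapport_en_db(2.0, 1.0), 2))   # doubler l'amplitude ajoute environ +6 dB
print(round(rapport_en_db(10.0, 1.0), 2))  # multiplier l'amplitude par 10 ajoute +20 dB
```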
En audio, chaque échantillon enregistre l'amplitude de l'onde à un instant donné. La **profondeur de bits** de l'échantillon détermine avec quelle précision cette valeur d'amplitude peut être décrite. Plus la profondeur de bits est élevée, plus la représentation numérique se rapproche fidèlement de l'onde sonore continue d'origine.

Les profondeurs de bits audio les plus courantes sont 16 bits et 24 bits. Ce sont des termes binaires qui représentent le nombre de niveaux possibles auxquels la valeur d'amplitude peut être quantifiée lors de sa conversion du continu au discret : 65 536 niveaux pour l'audio 16 bits, 16 777 216 niveaux pour l'audio 24 bits. Étant donné que la quantification implique d'arrondir la valeur continue à une valeur discrète, le processus d'échantillonnage introduit du bruit. Plus la profondeur de bits est élevée, plus ce bruit de quantification est faible. En pratique, le bruit de quantification de l'audio 16 bits est déjà suffisamment faible pour être inaudible et l'utilisation de profondeurs de bits plus élevées n'est généralement pas nécessaire.

Vous pouvez également rencontrer de l'audio 32 bits. Celui-ci stocke les échantillons sous forme de valeurs à virgule flottante, tandis que l'audio 16 bits et 24 bits utilise des échantillons entiers. La précision d'une valeur à virgule flottante de 32 bits est de 24 bits, ce qui lui donne la même profondeur de bits que l'audio 24 bits. Les échantillons audio en virgule flottante sont supposés se situer dans la plage [-1.0, 1.0]. Étant donné que les modèles d'apprentissage automatique travaillent naturellement sur des données en virgule flottante, l'audio doit d'abord être converti au format à virgule flottante avant de pouvoir être utilisé pour entraîner le modèle. Nous verrons comment faire cela dans la section suivante sur le [Prétraitement](preprocessing).

Tout comme pour les signaux audio continus, l'amplitude de l'audio numérique est généralement exprimée en décibels (dB). L'audition humaine étant de nature logarithmique (nos oreilles sont plus sensibles aux petites fluctuations des sons faibles qu'à celles des sons forts), l'intensité d'un son est plus facile à interpréter si les amplitudes sont exprimées en décibels, qui sont également logarithmiques. L'échelle de décibels pour l'audio du monde réel commence à 0 dB, ce qui représente le son le plus faible que les humains puissent entendre, et les sons plus forts ont des valeurs plus élevées. Cependant, pour les signaux audio numériques, 0 dB est l'amplitude la plus forte possible, tandis que toutes les autres amplitudes sont négatives. En règle générale : chaque -6 dB correspond à une réduction de moitié de l'amplitude, et tout ce qui est inférieur à -60 dB est généralement inaudible, à moins d'augmenter vraiment le volume.

## L'audio comme forme d'onde

Vous avez peut-être déjà vu des sons visualisés sous la forme d'une **forme d'onde**, qui trace les valeurs des échantillons au fil du temps et illustre les changements d'amplitude du son. C'est ce qu'on appelle aussi la représentation du son dans le *domaine temporel*.

Ce type de visualisation est utile pour identifier des caractéristiques spécifiques du signal audio, telles que la synchronisation des événements sonores individuels, l'intensité sonore globale du signal et toute irrégularité ou bruit présent dans l'audio.
Pour tracer la forme d'onde d'un signal audio, nous pouvons utiliser une bibliothèque Python `librosa`: ```bash pip install librosa ``` Prenons un exemple de son appelé « trompette » qui vient avec la bibliothèque: ```py import librosa array, sampling_rate = librosa.load(librosa.ex("trumpet")) ``` L'exemple est chargé sous la forme d'un tuple de séries temporelles audio (ici nous l'appelons `array`) et de taux d'échantillonnage (`sampling_rate`). Jetons un coup d'œil à la forme d'onde de ce son en utilisant la fonction `waveshow()` de librosa: ```py import matplotlib.pyplot as plt import librosa.display plt.figure().set_figwidth(12) librosa.display.waveshow(array, sr=sampling_rate) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/waveform_plot.png" alt="Waveform plot"> </div> Cela trace l'amplitude du signal sur l'axe des y et le temps le long de l'axe des x. En d'autres termes, chaque point correspond à une seule valeur d'échantillon qui a été prise lors de l'échantillonnage de ce son. Notez également que librosa renvoie déjà l'audio sous forme de valeurs à virgule flottante et que les valeurs d'amplitude sont effectivement comprises dans la plage [-1.0, 1.0]. Visualiser l'audio et l'écouter peut être un outil utile pour comprendre les données avec lesquelles vous travaillez. Vous pouvez voir la forme du signal, observer des modèles, apprendre à repérer le bruit ou la distorsion. Si vous prétraitez les données d'une manière ou d'une autre, telle que la normalisation, le rééchantillonnage ou le filtrage, vous pouvez confirmer visuellement que les étapes de prétraitement ont été appliquées comme prévu. Après avoir entraîné un modèle, vous pouvez également visualiser des exemples où des erreurs se produisent (par exemple, dans la tâche de classification audio) pour déboguer le problème. ## Le spectre de fréquences Une autre façon de visualiser les données audio consiste à tracer le **spectre de fréquences** d'un signal audio, également connu sous le nom de représentation du *domaine fréquentiel*. Le spectre est calculé à l'aide de la transformée de Fourier discrète ou TFD. Il décrit les fréquences individuelles qui composent le signal et leur force. Traçons le spectre de fréquences pour le même son de trompette en prenant la TFD en utilisant la fonction `rfft()` de numpy. Bien qu'il soit possible de tracer le spectre de l'ensemble du son, il est plus utile de regarder une petite région à la place. Ici, nous allons prendre la TFD sur les 4096 premiers échantillons, ce qui correspond à peu près à la longueur de la première note jouée: ```py import numpy as np TFD_input = array[:4096] # calculer la TDF window = np.hanning(len(TFD_input)) windowed_input = TFD_input * window TFD = np.fft.rfft(windowed_input) # obtenir le spectre d'amplitude en décibels amplitude = np.abs(TFD) amplitude_db = librosa.amplitude_to_db(amplitude, ref=np.max) # obtenir les bacs de fréquence frequency = librosa.fft_frequencies(sr=sampling_rate, n_fft=len(TFD_input)) plt.figure().set_figwidth(12) plt.plot(frequency, amplitude_db) plt.xlabel("Frequency (Hz)") plt.ylabel("Amplitude (dB)") plt.xscale("log") ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/spectrum_plot.png" alt="Spectrum plot"> </div> Cela trace la force des différentes composantes de fréquence présentes dans ce segment audio. 
Les valeurs de fréquence sont sur l'axe des abscisses, généralement tracées sur une échelle logarithmique, tandis que leurs amplitudes sont sur l'axe des ordonnées.

Le spectre de fréquences que nous avons tracé montre plusieurs pics. Ces pics correspondent aux harmoniques de la note jouée, les harmoniques supérieures étant plus faibles. Puisque le premier pic se situe aux alentours de 620 Hz, il s'agit du spectre de fréquences d'une note Mi♭ (E♭).

La sortie de la TFD est un tableau de nombres complexes, composés d'une partie réelle et d'une partie imaginaire. Prendre la magnitude avec `np.abs(TFD)` extrait l'information d'amplitude du spectre. L'angle entre les parties réelle et imaginaire fournit ce que l'on appelle le spectre de phase, mais il est souvent écarté dans les applications d'apprentissage automatique.

Nous utilisons `librosa.amplitude_to_db()` pour convertir les valeurs d'amplitude en échelle de décibels, ce qui facilite la visualisation des détails les plus fins du spectre. Parfois, on utilise le **spectre de puissance**, qui mesure l'énergie plutôt que l'amplitude : il s'agit simplement d'un spectre dont les valeurs d'amplitude sont élevées au carré.

<Tip>
💡 En pratique, le terme FFT est utilisé de manière interchangeable avec TFD, car la FFT, ou transformée de Fourier rapide, est le seul moyen efficace de calculer la TFD sur un ordinateur.
</Tip>

Le spectre de fréquences d'un signal audio contient exactement la même information que sa forme d'onde. Ce sont simplement deux façons différentes de regarder les mêmes données (ici, les 4096 premiers échantillons du son de trompette). Là où la forme d'onde trace l'amplitude du signal audio au fil du temps, le spectre visualise les amplitudes des fréquences individuelles à un instant donné.

## Spectrogramme

Et si nous voulons voir comment les fréquences d'un signal audio changent au fil du temps ? La trompette joue plusieurs notes, qui ont toutes des fréquences différentes. Le problème est que le spectre ne montre qu'un instantané figé des fréquences à un instant donné. La solution consiste à prendre plusieurs TFD, chacune ne couvrant qu'une petite tranche de temps, et à empiler les spectres résultants dans un **spectrogramme**.

Un spectrogramme trace le contenu fréquentiel d'un signal audio au fil du temps. Il vous permet de voir le temps, la fréquence et l'amplitude sur un seul graphique. L'algorithme qui effectue ce calcul est la TFCT, ou transformée de Fourier à court terme.

Le spectrogramme est l'un des outils audio les plus informatifs à notre disposition. Par exemple, lorsque nous travaillons avec un enregistrement musical, nous pouvons voir les différents instruments et les pistes vocales, et la façon dont ils contribuent au son global. Dans la parole, nous pouvons identifier différents sons de voyelles, car chaque voyelle est caractérisée par des fréquences particulières.

Traçons un spectrogramme pour le même son de trompette, en utilisant les fonctions `stft()` et `specshow()` de librosa :

```py
import numpy as np

D = librosa.stft(array)
S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

plt.figure().set_figwidth(12)
librosa.display.specshow(S_db, x_axis="time", y_axis="hz")
plt.colorbar()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/spectrogram_plot.png" alt="Spectrogram plot">
</div>

Dans ce graphique, l'axe des x représente le temps, comme dans la visualisation de la forme d'onde, mais l'axe des y représente maintenant la fréquence en Hz.
L'intensité de la couleur donne l'amplitude ou la puissance de la composante fréquence à chaque point dans le temps, mesurée en décibels (dB). Le spectrogramme est créé en prenant de courts segments du signal audio, généralement de quelques millisecondes, et en calculant la transformée de Fourier discrète de chaque segment pour obtenir son spectre de fréquences. Les spectres résultants sont ensuite empilés sur l'axe temporel pour créer le spectrogramme. Chaque tranche verticale de cette image correspond à un spectre de fréquences unique, vu du haut. Par défaut, `librosa.stft()` divise le signal audio en segments de 2048 échantillons, ce qui donne un bon compromis entre la résolution de fréquence et la résolution temporelle. Étant donné que le spectrogramme et la forme d'onde sont des vues différentes des mêmes données, il est possible de retourner le spectrogramme dans la forme d'onde d'origine en utilisant la TFCT inverse. Cependant, cela nécessite les informations de phase en plus des informations d'amplitude. Si le spectrogramme a été généré par un modèle d'apprentissage automatique, il ne produit généralement que les amplitudes. Dans ce cas, nous pouvons utiliser un algorithme de reconstruction de phase tel que l'algorithme classique de Griffin-Lim, ou en utilisant un réseau neuronal appelé vocodeur, pour reconstruire une forme d'onde à partir du spectrogramme. Les spectrogrammes ne sont pas seulement utilisés pour la visualisation. De nombreux modèles d'apprentissage automatique prendront des spectrogrammes en entrée, par opposition aux formes d'onde, et produiront des spectrogrammes en sortie. Maintenant que nous savons ce qu'est un spectrogramme et comment il est fabriqué, jetons un coup d'œil à une variante de celui-ci largement utilisée pour le traitement de la parole: le spectrogramme mel. ## Spectrogramme Mel Un spectrogramme mel est une variante du spectrogramme couramment utilisée dans les tâches de traitement de la parole et d'apprentissage automatique. Il est similaire à un spectrogramme en ce sens qu'il montre le contenu en fréquence d'un signal audio au fil du temps, mais sur un axe de fréquence différent. Dans un spectrogramme standard, l'axe de fréquence est linéaire et est mesuré en hertz (Hz). Cependant, le système auditif humain est plus sensible aux changements dans les basses fréquences que dans les fréquences plus élevées, et cette sensibilité diminue logarithmiquement à mesure que la fréquence augmente. L'échelle mel est une échelle perceptuelle qui se rapproche de la réponse en fréquence non linéaire de l'oreille humaine. Pour créer un spectrogramme mel, le STFT est utilisé comme auparavant, divisant l'audio en segments courts pour obtenir une séquence de spectres de fréquence. De plus, chaque spectre est envoyé à travers un ensemble de filtres, appelé *mel filterbank*, pour transformer les fréquences à l'échelle mel. 
Voyons comment nous pouvons tracer un spectrogramme mel en utilisant la fonction `melspectrogram()` de librosa, qui effectue toutes ces étapes pour nous: ```py S = librosa.feature.melspectrogram(y=array, sr=sampling_rate, n_mels=128, fmax=8000) S_dB = librosa.power_to_db(S, ref=np.max) plt.figure().set_figwidth(12) librosa.display.specshow(S_dB, x_axis="time", y_axis="mel", sr=sampling_rate, fmax=8000) plt.colorbar() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/mel-spectrogram.png" alt="Mel spectrogram plot"> </div> Dans l'exemple ci-dessus, `n_mels` représente le nombre de bandes mel à générer. Les bandes mel définissent un ensemble de gammes de fréquences qui divisent le spectre en composantes perceptuellement significatives, en utilisant un ensemble de filtres dont la forme et l'espacement sont choisis pour imiter la façon dont l'oreille humaine répond à différentes fréquences. Les valeurs courantes pour `n_mels` sont 40 ou 80. `fmax` indique la fréquence la plus élevée (en Hz) qui nous intéresse. Tout comme avec un spectrogramme standard, il est courant d'exprimer la force des composantes de fréquence mel en décibels. C'est ce qu'on appelle communément un **spectrogramme log-mel**, car la conversion en décibels implique une opération logarithmique. L'exemple ci-dessus utilisé `librosa.power_to_db()` car `librosa.feature.melspectrogram()` crée un spectrogramme de puissance. <Tip> 💡 Tous les spectrogrammes mel ne sont pas identiques ! Il existe deux échelles mel différentes d'usage courant (« htk » et « slaney »), et au lieu du spectrogramme de puissance, le spectrogramme d'amplitude peut être utilisé. La conversion en spectrogramme log-mel ne calcule pas toujours les décibels vrais, mais peut simplement prendre le « log ». Par conséquent, si un modèle d'apprentissage automatique attend un spectrogramme mel en entrée, vérifiez deux fois pour vous assurer que vous le calculez de la même manière. </Tip> La création d'un spectrogramme mel est une opération avec perte car elle implique le filtrage du signal. La conversion d'un spectrogramme mel en une forme d'onde est plus difficile que de le faire pour un spectrogramme régulier, car cela nécessite d'estimer les fréquences qui ont été jetées. C'est pourquoi des modèles d'apprentissage automatique tels que le vocodeur HiFiGAN sont nécessaires pour produire une forme d'onde à partir d'un spectrogramme mel. Comparé à un spectrogramme standard, un spectrogramme mel peut capturer des caractéristiques plus significatives du signal audio pour la perception humaine, ce qui en fait un choix populaire dans des tâches telles que la reconnaissance vocale, l'identification du locuteur et la classification des genres musicaux. Maintenant que vous savez comment visualiser des exemples de données audio, essayez de voir à quoi ressemblent vos sons préférés :)
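Pour terminer, voici une esquisse qui réutilise les fonctions vues dans cette section pour visualiser votre propre enregistrement ; le chemin `mon_son.wav` est un exemple fictif à remplacer :

```py
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# sr=None conserve la fréquence d'échantillonnage d'origine du fichier
array, sampling_rate = librosa.load("mon_son.wav", sr=None)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 6))

# Forme d'onde (domaine temporel)
librosa.display.waveshow(array, sr=sampling_rate, ax=ax1)

# Spectrogramme log-mel (domaine fréquentiel, échelle mel)
S = librosa.feature.melspectrogram(y=array, sr=sampling_rate, n_mels=80)
librosa.display.specshow(
    librosa.power_to_db(S, ref=np.max), sr=sampling_rate, x_axis="time", y_axis="mel", ax=ax2
)
plt.tight_layout()
```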
9