hf_public_repos/audio-transformers-course/chapters/en/chapter4/demo.mdx
# Build a demo with Gradio

In this final section on audio classification, we'll build a [Gradio](https://gradio.app) demo to showcase the music classification model that we just trained on the [GTZAN](https://huggingface.co/datasets/marsyas/gtzan) dataset. The first thing to do is load up the fine-tuned checkpoint using the `pipeline()` class - this should be very familiar from the section on [pre-trained models](classification_models). You can change the `model_id` to the namespace of your fine-tuned model on the Hugging Face Hub:

```python
from transformers import pipeline

model_id = "sanchit-gandhi/distilhubert-finetuned-gtzan"
pipe = pipeline("audio-classification", model=model_id)
```

Secondly, we'll define a function that takes the filepath of an audio input and passes it through the pipeline. Here, the pipeline automatically takes care of loading the audio file, resampling it to the correct sampling rate, and running inference with the model. We take the model's predictions `preds` and format them as a dictionary to be displayed on the output:

```python
def classify_audio(filepath):
    preds = pipe(filepath)
    outputs = {}
    for p in preds:
        outputs[p["label"]] = p["score"]
    return outputs
```

Finally, we launch the Gradio demo using the function we've just defined:

```python
import gradio as gr

demo = gr.Interface(
    fn=classify_audio,
    inputs=gr.Audio(type="filepath"),
    outputs=gr.Label(),
)
demo.launch(debug=True)
```

This will launch a Gradio demo similar to the one running on the Hugging Face Space:

<iframe src="https://course-demos-song-classifier.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
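If you'd also like users to record a clip directly from their microphone rather than only uploading a file, you can configure the audio input component accordingly. The snippet below is a sketch assuming a recent Gradio release; in older 3.x versions the equivalent argument is `source="microphone"` rather than `sources=[...]`:

```python
import gradio as gr

demo = gr.Interface(
    fn=classify_audio,
    # accept both uploaded files and microphone recordings, passed to the function as a filepath
    inputs=gr.Audio(sources=["upload", "microphone"], type="filepath"),
    outputs=gr.Label(),
)
demo.launch(debug=True)
```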
hf_public_repos/audio-transformers-course/chapters/en/chapter4/classification_models.mdx
# Pre-trained models and datasets for audio classification The Hugging Face Hub is home to over 500 pre-trained models for audio classification. In this section, we'll go through some of the most common audio classification tasks and suggest appropriate pre-trained models for each. Using the `pipeline()` class, switching between models and tasks is straightforward - once you know how to use `pipeline()` for one model, you'll be able to use it for any model on the Hub no code changes! This makes experimenting with the `pipeline()` class extremely fast, allowing you to quickly select the best pre-trained model for your needs. Before we jump into the various audio classification problems, let's quickly recap the transformer architectures typically used. The standard audio classification architecture is motivated by the nature of the task; we want to transform a sequence of audio inputs (i.e. our input audio array) into a single class label prediction. Encoder-only models first map the input audio sequence into a sequence of hidden-state representations by passing the inputs through a transformer block. The sequence of hidden-state representations is then mapped to a class label output by taking the mean over the hidden-states, and passing the resulting vector through a linear classification layer. Hence, there is a preference for _encoder-only_ models for audio classification. Decoder-only models introduce unnecessary complexity to the task, since they assume that the outputs can also be a _sequence_ of predictions (rather than a single class label prediction), and so generate multiple outputs. Therefore, they have slower inference speed and tend not to be used. Encoder-decoder models are largely omitted for the same reason. These architecture choices are analogous to those in NLP, where encoder-only models such as [BERT](https://huggingface.co/blog/bert-101) are favoured for sequence classification tasks, and decoder-only models such as GPT reserved for sequence generation tasks. Now that we've recapped the standard transformer architecture for audio classification, let's jump into the different subsets of audio classification and cover the most popular models! ## 🤗 Transformers Installation At the time of writing, the latest updates required for audio classification pipeline are only on the `main` version of the 🤗 Transformers repository, rather than the latest PyPi version. To make sure we have these updates locally, we'll install Transformers from the `main` branch with the following command: ``` pip install git+https://github.com/huggingface/transformers ``` ## Keyword Spotting Keyword spotting (KWS) is the task of identifying a keyword in a spoken utterance. The set of possible keywords forms the set of predicted class labels. Hence, to use a pre-trained keyword spotting model, you should ensure that your keywords match those that the model was pre-trained on. Below, we'll introduce two datasets and models for keyword spotting. ### Minds-14 Let's go ahead and use the same [MINDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset that you have explored in the previous unit. If you recall, MINDS-14 contains recordings of people asking an e-banking system questions in several languages and dialects, and has the `intent_class` for each recording. We can classify the recordings by intent of the call. 
```python from datasets import load_dataset minds = load_dataset("PolyAI/minds14", name="en-AU", split="train") ``` We'll load the checkpoint [`"anton-l/xtreme_s_xlsr_300m_minds14"`](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14), which is an XLS-R model fine-tuned on MINDS-14 for approximately 50 epochs. It achieves 90% accuracy over all languages from MINDS-14 on the evaluation set. ```python from transformers import pipeline classifier = pipeline( "audio-classification", model="anton-l/xtreme_s_xlsr_300m_minds14", ) ``` Finally, we can pass a sample to the classification pipeline to make a prediction: ```python classifier(minds[0]["audio"]) ``` **Output:** ``` [ {"score": 0.9631525278091431, "label": "pay_bill"}, {"score": 0.02819698303937912, "label": "freeze"}, {"score": 0.0032787492964416742, "label": "card_issues"}, {"score": 0.0019414445850998163, "label": "abroad"}, {"score": 0.0008378693601116538, "label": "high_value_payment"}, ] ``` Great! We've identified that the intent of the call was paying a bill, with probability 96%. You can imagine this kind of keyword spotting system being used as the first stage of an automated call centre, where we want to categorise incoming customer calls based on their query and offer them contextualised support accordingly. ### Speech Commands Speech Commands is a dataset of spoken words designed to evaluate audio classification models on simple command words. The dataset consists of 15 classes of keywords, a class for silence, and an unknown class to include the false positive. The 15 keywords are single words that would typically be used in on-device settings to control basic tasks or launch other processes. A similar model is running continuously on your mobile phone. Here, instead of having single command words, we have 'wake words' specific to your device, such as "Hey Google" or "Hey Siri". When the audio classification model detects these wake words, it triggers your phone to start listening to the microphone and transcribe your speech using a speech recognition model. The audio classification model is much smaller and lighter than the speech recognition model, often only several millions of parameters compared to several hundred millions for speech recognition. Thus, it can be run continuously on your device without draining your battery! Only when the wake word is detected is the larger speech recognition model launched, and afterwards it is shut down again. We'll cover transformer models for speech recognition in the next Unit, so by the end of the course you should have the tools you need to build your own voice activated assistant! As with any dataset on the Hugging Face Hub, we can get a feel for the kind of audio data it has present without downloading or committing it memory. After heading to the [Speech Commands' dataset card](https://huggingface.co/datasets/speech_commands) on the Hub, we can use the Dataset Viewer to scroll through the first 100 samples of the dataset, listening to the audio files and checking any other metadata information: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speech_commands.png" alt="Diagram of datasets viewer."> </div> The Dataset Preview is a brilliant way of experiencing audio datasets before committing to using them. You can pick any dataset on the Hub, scroll through the samples and listen to the audio for the different subsets and splits, gauging whether it's the right dataset for your needs. 
Once you've selected a dataset, it's trivial to load the data so that you can start using it. Let's do exactly that and load a sample of the Speech Commands dataset using streaming mode: ```python speech_commands = load_dataset( "speech_commands", "v0.02", split="validation", streaming=True ) sample = next(iter(speech_commands)) ``` We'll load an official [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer) checkpoint fine-tuned on the Speech Commands dataset, under the namespace [`"MIT/ast-finetuned-speech-commands-v2"`](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2): ```python classifier = pipeline( "audio-classification", model="MIT/ast-finetuned-speech-commands-v2" ) classifier(sample["audio"].copy()) ``` **Output:** ``` [{'score': 0.9999892711639404, 'label': 'backward'}, {'score': 1.7504888774055871e-06, 'label': 'happy'}, {'score': 6.703040185129794e-07, 'label': 'follow'}, {'score': 5.805884484288981e-07, 'label': 'stop'}, {'score': 5.614546694232558e-07, 'label': 'up'}] ``` Cool! Looks like the example contains the word "backward" with high probability. We can take a listen to the sample and verify this is correct: ``` from IPython.display import Audio Audio(sample["audio"]["array"], rate=sample["audio"]["sampling_rate"]) ``` Now, you might be wondering how we've selected these pre-trained models to show you in these audio classification examples. The truth is, finding pre-trained models for your dataset and task is very straightforward! The first thing we need to do is head to the Hugging Face Hub and click on the "Models" tab: https://huggingface.co/models This is going to bring up all the models on the Hugging Face Hub, sorted by downloads in the past 30 days: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/all_models.png"> </div> You'll notice on the left-hand side that we have a selection of tabs that we can select to filter models by task, library, dataset, etc. Scroll down and select the task "Audio Classification" from the list of audio tasks: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_audio_classification.png"> </div> We're now presented with the sub-set of 500+ audio classification models on the Hub. To further refine this selection, we can filter models by dataset. Click on the tab "Datasets", and in the search box type "speech_commands". As you begin typing, you'll see the selection for `speech_commands` appear underneath the search tab. You can click this button to filter all audio classification models to those fine-tuned on the Speech Commands dataset: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_speech_commands.png"> </div> Great! We see that we have 6 pre-trained models available to us for this specific dataset and task. You'll recognise the first of these models as the Audio Spectrogram Transformer checkpoint that we used in the previous example. This process of filtering models on the Hub is exactly how we went about selecting the checkpoint to show you! ## Language Identification Language identification (LID) is the task of identifying the language spoken in an audio sample from a list of candidate languages. LID can form an important part in many speech pipelines. 
For example, given an audio sample in an unknown language, an LID model can be used to categorise the language(s) spoken in the audio sample, and then select an appropriate speech recognition model trained on that language to transcribe the audio. ### FLEURS FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) is a dataset for evaluating speech recognition systems in 102 languages, including many that are classified as 'low-resource'. Take a look at the FLEURS dataset card on the Hub and explore the different languages that are present: [google/fleurs](https://huggingface.co/datasets/google/fleurs). Can you find your native tongue here? If not, what's the most closely related language? Let's load up a sample from the validation split of the FLEURS dataset using streaming mode: ```python fleurs = load_dataset("google/fleurs", "all", split="validation", streaming=True) sample = next(iter(fleurs)) ``` Great! Now we can load our audio classification model. For this, we'll use a version of [Whisper](https://arxiv.org/pdf/2212.04356.pdf) fine-tuned on the FLEURS dataset, which is currently the most performant LID model on the Hub: ```python classifier = pipeline( "audio-classification", model="sanchit-gandhi/whisper-medium-fleurs-lang-id" ) ``` We can then pass the audio through our classifier and generate a prediction: ```python classifier(sample["audio"]) ``` **Output:** ``` [{'score': 0.9999330043792725, 'label': 'Afrikaans'}, {'score': 7.093023668858223e-06, 'label': 'Northern-Sotho'}, {'score': 4.269149485480739e-06, 'label': 'Icelandic'}, {'score': 3.2661141631251667e-06, 'label': 'Danish'}, {'score': 3.2580724109720904e-06, 'label': 'Cantonese Chinese'}] ``` We can see that the model predicted the audio was in Afrikaans with extremely high probability (near 1). The FLEURS dataset contains audio data from a wide range of languages - we can see that possible class labels include Northern-Sotho, Icelandic, Danish and Cantonese Chinese amongst others. You can find the full list of languages on the dataset card here: [google/fleurs](https://huggingface.co/datasets/google/fleurs). Over to you! What other checkpoints can you find for FLEURS LID on the Hub? What transformer models are they using under-the-hood? ## Zero-Shot Audio Classification In the traditional paradigm for audio classification, the model predicts a class label from a _pre-defined_ set of possible classes. This poses a barrier to using pre-trained models for audio classification, since the label set of the pre-trained model must match that of the downstream task. For the previous example of LID, the model must predict one of the 102 langauge classes on which it was trained. If the downstream task actually requires 110 languages, the model would not be able to predict 8 of the 110 languages, and so would require re-training to achieve full coverage. This limits the effectiveness of transfer learning for audio classification tasks. Zero-shot audio classification is a method for taking a pre-trained audio classification model trained on a set of labelled examples and enabling it to be able to classify new examples from previously unseen classes. Let's take a look at how we can achieve this! Currently, 🤗 Transformers supports one kind of model for zero-shot audio classification: the [CLAP model](https://huggingface.co/docs/transformers/model_doc/clap). CLAP is a transformer-based model that takes both audio and text as inputs, and computes the _similarity_ between the two. 
If we pass a text input that strongly correlates with an audio input, we'll get a high similarity score. Conversely, passing a text input that is completely unrelated to the audio input will return a low similarity. We can use this similarity prediction for zero-shot audio classification by passing one audio input to the model and multiple candidate labels. The model will return a similarity score for each of the candidate labels, and we can pick the one that has the highest score as our prediction. Let's take an example where we use one audio input from the [Environmental Speech Challenge (ESC)](https://huggingface.co/datasets/ashraq/esc50) dataset: ```python dataset = load_dataset("ashraq/esc50", split="train", streaming=True) audio_sample = next(iter(dataset))["audio"]["array"] ``` We then define our candidate labels, which form the set of possible classification labels. The model will return a classification probability for each of the labels we define. This means we need to know _a-priori_ the set of possible labels in our classification problem, such that the correct label is contained within the set and is thus assigned a valid probability score. Note that we can either pass the full set of labels to the model, or a hand-selected subset that we believe contains the correct label. Passing the full set of labels is going to be more exhaustive, but comes at the expense of lower classification accuracy since the classification space is larger (provided the correct label is our chosen subset of labels): ```python candidate_labels = ["Sound of a dog", "Sound of vacuum cleaner"] ``` We can run both through the model to find the candidate label that is _most similar_ to the audio input: ```python classifier = pipeline( task="zero-shot-audio-classification", model="laion/clap-htsat-unfused" ) classifier(audio_sample, candidate_labels=candidate_labels) ``` **Output:** ``` [{'score': 0.9997242093086243, 'label': 'Sound of a dog'}, {'score': 0.0002758323971647769, 'label': 'Sound of vacuum cleaner'}] ``` Alright! The model seems pretty confident we have the sound of a dog - it predicts it with 99.96% probability, so we'll take that as our prediction. Let's confirm whether we were right by listening to the audio sample (don't turn up your volume too high or else you might get a jump!): ```python Audio(audio_sample, rate=16000) ``` Perfect! We have the sound of a dog barking 🐕, which aligns with the model's prediction. Have a play with different audio samples and different candidate labels - can you define a set of labels that give good generalisation across the ESC dataset? Hint: think about where you could find information on the possible sounds in ESC and construct your labels accordingly! You might be wondering why we don't use the zero-shot audio classification pipeline for **all** audio classification tasks? It seems as though we can make predictions for any audio classification problem by defining appropriate class labels _a-priori_, thus bypassing the constraint that our classification task needs to match the labels that the model was pre-trained on. This comes down to the nature of the CLAP model used in the zero-shot pipeline: CLAP is pre-trained on _generic_ audio classification data, similar to the environmental sounds in the ESC dataset, rather than specifically speech data, like we had in the LID task. 
If you gave it speech in English and speech in Spanish, CLAP would know that both examples were speech data 🗣️ But it wouldn't be able to differentiate between the languages in the same way a dedicated LID model is able to. ## What next? We've covered a number of different audio classification tasks and presented the most relevant datasets and models that you can download from the Hugging Face Hub and use in just several lines of code using the `pipeline()` class. These tasks included keyword spotting, language identification and zero-shot audio classification. But what if we want to do something **new**? We've worked extensively on speech processing tasks, but this is only one aspect of audio classification. Another popular field of audio processing involves **music**. While music has inherently different features to speech, many of the same principles that we've learnt about already can be applied to music. In the following section, we'll go through a step-by-step guide on how you can fine-tune a transformer model with 🤗 Transformers on the task of music classification. By the end of it, you'll have a fine-tuned checkpoint that you can plug into the `pipeline()` class, enabling you to classify songs in exactly the same way that we've classified speech here!
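Before moving on, if you'd like to reproduce the Hub model-filtering workflow from earlier in this section programmatically rather than by clicking through the web interface, the `huggingface_hub` client provides a model-search API. The snippet below is only a sketch - the exact filter tags and argument names may vary slightly between library versions:

```python
from huggingface_hub import HfApi

api = HfApi()
# list audio classification models fine-tuned on Speech Commands, most-downloaded first
models = api.list_models(
    filter=["audio-classification", "dataset:speech_commands"],
    sort="downloads",
    direction=-1,
    limit=5,
)
for model in models:
    print(model.id)
```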
hf_public_repos/audio-transformers-course/chapters/en/chapter4/introduction.mdx
# Unit 4. Build a music genre classifier

## What you'll learn and what you'll build

Audio classification is one of the most common applications of transformers in audio and speech processing. Like other classification tasks in machine learning, this task involves assigning one or more labels to an audio recording based on its content. For example, in the case of speech, we might want to detect when wake words like "Hey Siri" are spoken, or infer a key word like "temperature" from a spoken query like "What is the weather today?". Environmental sounds provide another example, where we might want to automatically distinguish between sounds such as "car horn", "siren", "dog barking", etc.

In this section, we'll look at how pre-trained audio transformers can be applied to a range of audio classification tasks. We'll then fine-tune a transformer model on the task of music classification, classifying songs into genres like "pop" and "rock". This is an important part of music streaming platforms like [Spotify](https://en.wikipedia.org/wiki/Spotify), which recommend songs that are similar to the ones the user is listening to.

By the end of this section, you'll know how to:

* Find suitable pre-trained models for audio classification tasks
* Use the 🤗 Datasets library and the Hugging Face Hub to select audio classification datasets
* Fine-tune a pretrained model to classify songs by genre
* Build a Gradio demo that lets you classify your own songs
hf_public_repos/audio-transformers-course/chapters/en/chapter4/fine-tuning.mdx
# Fine-tuning a model for music classification In this section, we'll present a step-by-step guide on fine-tuning an encoder-only transformer model for music classification. We'll use a lightweight model for this demonstration and fairly small dataset, meaning the code is runnable end-to-end on any consumer grade GPU, including the T4 16GB GPU provided in the Google Colab free tier. The section includes various tips that you can try should you have a smaller GPU and encounter memory issues along the way. ## The Dataset To train our model, we'll use the [GTZAN](https://huggingface.co/datasets/marsyas/gtzan) dataset, which is a popular dataset of 1,000 songs for music genre classification. Each song is a 30-second clip from one of 10 genres of music, spanning disco to metal. We can get the audio files and their corresponding labels from the Hugging Face Hub with the `load_dataset()` function from 🤗 Datasets: ```python from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") gtzan ``` **Output:** ```out Dataset({ features: ['file', 'audio', 'genre'], num_rows: 999 }) ``` <Tip warning={true}> One of the recordings in GTZAN is corrupted, so it's been removed from the dataset. That's why we have 999 examples instead of 1,000. </Tip> GTZAN doesn't provide a predefined validation set, so we'll have to create one ourselves. The dataset is balanced across genres, so we can use the `train_test_split()` method to quickly create a 90/10 split as follows: ```python gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1) gtzan ``` **Output:** ```out DatasetDict({ train: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 899 }) test: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 100 }) }) ``` Great, now that we've got our training and validation sets, let's take a look at one of the audio files: ```python gtzan["train"][0] ``` **Output:** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.10720825, 0.16122437, 0.28585815, ..., -0.22924805, -0.20629883, -0.11334229, ], dtype=float32, ), "sampling_rate": 22050, }, "genre": 7, } ``` As we saw in [Unit 1](../chapter1/audio_data), the audio files are represented as 1-dimensional NumPy arrays, where the value of the array represents the amplitude at that timestep. For these songs, the sampling rate is 22,050 Hz, meaning there are 22,050 amplitude values sampled per second. We'll have to keep this in mind when using a pretrained model with a different sampling rate, converting the sampling rates ourselves to ensure they match. We can also see the genre is represented as an integer, or _class label_, which is the format the model will make it's predictions in. Let's use the `int2str()` method of the `genre` feature to map these integers to human-readable names: ```python id2label_fn = gtzan["train"].features["genre"].int2str id2label_fn(gtzan["train"][0]["genre"]) ``` **Output:** ```out 'pop' ``` This label looks correct, since it matches the filename of the audio file. 
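The split above relies on GTZAN being balanced across genres. If you'd like to double-check that claim for yourself, a quick count of the labels does the trick - this is just a small sanity-check sketch using the `id2label_fn` helper defined above:

```python
from collections import Counter

# count how many training examples we have per genre
genre_counts = Counter(id2label_fn(genre) for genre in gtzan["train"]["genre"])
print(genre_counts)
```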
Let's now listen to a few more examples by using Gradio to create a simple interface with the `Blocks` API: ```python import gradio as gr def generate_audio(): example = gtzan["train"].shuffle()[0] audio = example["audio"] return ( audio["sampling_rate"], audio["array"], ), id2label_fn(example["genre"]) with gr.Blocks() as demo: with gr.Column(): for _ in range(4): audio, label = generate_audio() output = gr.Audio(audio, label=label) demo.launch(debug=True) ``` <iframe src="https://course-demos-gtzan-samples.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> From these samples we can certainly hear the difference between genres, but can a transformer do this too? Let's train a model to find out! First, we'll need to find a suitable pretrained model for this task. Let's see how we can do that. ## Picking a pretrained model for audio classification To get started, let's pick a suitable pretrained model for audio classification. In this domain, pretraining is typically carried out on large amounts of unlabeled audio data, using datasets like [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli). The best way to find these models on the Hugging Face Hub is to use the "Audio Classification" filter, as described in the previous section. Although models like Wav2Vec2 and HuBERT are very popular, we'll use a model called _DistilHuBERT_. This is a much smaller (or _distilled_) version of the [HuBERT](https://huggingface.co/docs/transformers/model_doc/hubert) model, which trains around 73% faster, yet preserves most of the performance. <iframe src="https://autoevaluate-leaderboards.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> ## From audio to machine learning features ## Preprocessing the data Similar to tokenization in NLP, audio and speech models require the input to be encoded in a format that the model can process. In 🤗 Transformers, the conversion from audio to the input format is handled by the _feature extractor_ of the model. Similar to tokenizers, 🤗 Transformers provides a convenient `AutoFeatureExtractor` class that can automatically select the correct feature extractor for a given model. 
To see how we can process our audio files, let's begin by instantiating the feature extractor for DistilHuBERT from the pre-trained checkpoint: ```python from transformers import AutoFeatureExtractor model_id = "ntu-spml/distilhubert" feature_extractor = AutoFeatureExtractor.from_pretrained( model_id, do_normalize=True, return_attention_mask=True ) ``` Since the sampling rate of the model and the dataset are different, we'll have to resample the audio file to 16,000 Hz before passing it to the feature extractor. We can do this by first obtaining the model's sample rate from the feature extractor: ```python sampling_rate = feature_extractor.sampling_rate sampling_rate ``` **Output:** ```out 16000 ``` Next, we resample the dataset using the `cast_column()` method and `Audio` feature from 🤗 Datasets: ```python from datasets import Audio gtzan = gtzan.cast_column("audio", Audio(sampling_rate=sampling_rate)) ``` We can now check the first sample of the train-split of our dataset to verify that it is indeed at 16,000 Hz. 🤗 Datasets will resample the audio file _on-the-fly_ when we load each audio sample: ```python gtzan["train"][0] ``` **Output:** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.0873509, 0.20183384, 0.4790867, ..., -0.18743178, -0.23294401, -0.13517427, ], dtype=float32, ), "sampling_rate": 16000, }, "genre": 7, } ``` Great! We can see that the sampling rate has been downsampled to 16kHz. The array values are also different, as we've now only got approximately one amplitude value for every 1.5 that we had before. A defining feature of Wav2Vec2 and HuBERT like models is that they accept a float array corresponding to the raw waveform of the speech signal as an input. This is in contrast to other models, like Whisper, where we pre-process the raw audio waveform to spectrogram format. We mentioned that the audio data is represented as a 1-dimensional array, so it's already in the right format to be read by the model (a set of continuous inputs at discrete time steps). So, what exactly does the feature extractor do? Well, the audio data is in the right format, but we've imposed no restrictions on the values it can take. For our model to work optimally, we want to keep all the inputs within the same dynamic range. This is going to make sure we get a similar range of activations and gradients for our samples, helping with stability and convergence during training. To do this, we _normalise_ our audio data, by rescaling each sample to zero mean and unit variance, a process called _feature scaling_. It's exactly this feature normalisation that our feature extractor performs! We can take a look at the feature extractor in operation by applying it to our first audio sample. First, let's compute the mean and variance of our raw audio data: ```python import numpy as np sample = gtzan["train"][0]["audio"] print(f"Mean: {np.mean(sample['array']):.3}, Variance: {np.var(sample['array']):.3}") ``` **Output:** ```out Mean: 0.000185, Variance: 0.0493 ``` We can see that the mean is close to zero already, but the variance is closer to 0.05. If the variance for the sample was larger, it could cause our model problems, since the dynamic range of the audio data would be very small and thus difficult to separate. 
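For intuition, here is what that feature scaling looks like if we do it by hand with NumPy. This is only a sketch of the idea - the small epsilon is our own addition for numerical stability, not necessarily the exact value the feature extractor uses internally:

```python
array = sample["array"]

# rescale the waveform to zero mean and unit variance ("feature scaling")
normalised = (array - np.mean(array)) / np.sqrt(np.var(array) + 1e-7)
print(f"Mean: {np.mean(normalised):.3}, Variance: {np.var(normalised):.3}")
```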
Let's apply the feature extractor and see what the outputs look like: ```python inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) print(f"inputs keys: {list(inputs.keys())}") print( f"Mean: {np.mean(inputs['input_values']):.3}, Variance: {np.var(inputs['input_values']):.3}" ) ``` **Output:** ```out inputs keys: ['input_values', 'attention_mask'] Mean: -4.53e-09, Variance: 1.0 ``` Alright! Our feature extractor returns a dictionary of two arrays: `input_values` and `attention_mask`. The `input_values` are the preprocessed audio inputs that we'd pass to the HuBERT model. The [`attention_mask`](https://huggingface.co/docs/transformers/glossary#attention-mask) is used when we process a _batch_ of audio inputs at once - it is used to tell the model where we have padded inputs of different lengths. We can see that the mean value is now very much closer to zero, and the variance bang-on one! This is exactly the form we want our audio samples in prior to feeding them to the HuBERT model. <Tip warning={true}> Note how we've passed the sampling rate of our audio data to our feature extractor. This is good practice, as the feature extractor performs a check under-the-hood to make sure the sampling rate of our audio data matches the sampling rate expected by the model. If the sampling rate of our audio data did not match the sampling rate of our model, we'd need to up-sample or down-sample the audio data to the correct sampling rate. </Tip> Great, so now we know how to process our resampled audio files, the last thing to do is define a function that we can apply to all the examples in the dataset. Since we expect the audio clips to be 30 seconds in length, we'll also truncate any longer clips by using the `max_length` and `truncation` arguments of the feature extractor as follows: ```python max_duration = 30.0 def preprocess_function(examples): audio_arrays = [x["array"] for x in examples["audio"]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=int(feature_extractor.sampling_rate * max_duration), truncation=True, return_attention_mask=True, ) return inputs ``` With this function defined, we can now apply it to the dataset using the [`map()`](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) method. The `.map()` method supports working with batches of examples, which we'll enable by setting `batched=True`. The default batch size is 1000, but we'll reduce it to 100 to ensure the peak RAM stays within a sensible range for Google Colab's free tier: <!--- TODO(SG): revert to multiprocessing when bug in datasets is fixed Since audio datasets can be quite slow to process, it is usually a good idea to use multiprocessing. We can do this by passing the `num_proc` argument to `map()` and we'll use Python's `psutil` module to determine the number of CPU cores on the system: ---> ```python gtzan_encoded = gtzan.map( preprocess_function, remove_columns=["audio", "file"], batched=True, batch_size=100, num_proc=1, ) gtzan_encoded ``` **Output:** ```out DatasetDict({ train: Dataset({ features: ['genre', 'input_values','attention_mask'], num_rows: 899 }) test: Dataset({ features: ['genre', 'input_values','attention_mask'], num_rows: 100 }) }) ``` <Tip warning={true}> If you exhaust your device's RAM executing the above code, you can adjust the batch parameters to reduce the peak RAM usage. 
In particular, the following two arguments can be modified: * `batch_size`: defaults to 1000, but set to 100 above. Try reducing by a factor of 2 again to 50 * `writer_batch_size`: defaults to 1000. Try reducing it to 500, and if that doesn't work, then reduce it by a factor of 2 again to 250 </Tip> To simplify the training, we've removed the `audio` and `file` columns from the dataset. The `input_values` column contains the encoded audio files, the `attention_mask` a binary mask of 0/1 values that indicate where we have padded the audio input, and the `genre` column contains the corresponding labels (or targets). To enable the `Trainer` to process the class labels, we need to rename the `genre` column to `label`: ```python gtzan_encoded = gtzan_encoded.rename_column("genre", "label") ``` Finally, we need to obtain the label mappings from the dataset. This mapping will take us from integer ids (e.g. `7`) to human-readable class labels (e.g. `"pop"`) and back again. In doing so, we can convert our model's integer id prediction into human-readable format, enabling us to use the model in any downstream application. We can do this by using the `int2str()` method as follows: ```python id2label = { str(i): id2label_fn(i) for i in range(len(gtzan_encoded["train"].features["label"].names)) } label2id = {v: k for k, v in id2label.items()} id2label["7"] ``` ```out 'pop' ``` OK, we've now got a dataset that's ready for training! Let's take a look at how we can train a model on this dataset. ## Fine-tuning the model To fine-tune the model, we'll use the `Trainer` class from 🤗 Transformers. As we've seen in other chapters, the `Trainer` is a high-level API that is designed to handle the most common training scenarios. In this case, we'll use the `Trainer` to fine-tune the model on GTZAN. To do this, we'll first need to load a model for this task. We can do this by using the `AutoModelForAudioClassification` class, which will automatically add the appropriate classification head to our pretrained DistilHuBERT model. Let's go ahead and instantiate the model: ```python from transformers import AutoModelForAudioClassification num_labels = len(id2label) model = AutoModelForAudioClassification.from_pretrained( model_id, num_labels=num_labels, label2id=label2id, id2label=id2label, ) ``` We strongly advise you to upload model checkpoints directly the [Hugging Face Hub](https://huggingface.co/) while training. The Hub provides: - Integrated version control: you can be sure that no model checkpoint is lost during training. - Tensorboard logs: track important metrics over the course of training. - Model cards: document what a model does and its intended use cases. - Community: an easy way to share and collaborate with the community! 🤗 Linking the notebook to the Hub is straightforward - it simply requires entering your Hub authentication token when prompted. 
Find your Hub authentication token [here](https://huggingface.co/settings/tokens): ```python from huggingface_hub import notebook_login notebook_login() ``` **Output:** ```bash Login successful Your token has been saved to /root/.huggingface/token ``` The next step is to define the training arguments, including the batch size, gradient accumulation steps, number of training epochs and learning rate: ```python from transformers import TrainingArguments model_name = model_id.split("/")[-1] batch_size = 8 gradient_accumulation_steps = 1 num_train_epochs = 10 training_args = TrainingArguments( f"{model_name}-finetuned-gtzan", evaluation_strategy="epoch", save_strategy="epoch", learning_rate=5e-5, per_device_train_batch_size=batch_size, gradient_accumulation_steps=gradient_accumulation_steps, per_device_eval_batch_size=batch_size, num_train_epochs=num_train_epochs, warmup_ratio=0.1, logging_steps=5, load_best_model_at_end=True, metric_for_best_model="accuracy", fp16=True, push_to_hub=True, ) ``` <Tip warning={true}> Here we have set `push_to_hub=True` to enable automatic upload of our fine-tuned checkpoints during training. Should you not wish for your checkpoints to be uploaded to the Hub, you can set this to `False`. </Tip> The last thing we need to do is define the metrics. Since the dataset is balanced, we'll use accuracy as our metric and load it using the 🤗 Evaluate library: ```python import evaluate import numpy as np metric = evaluate.load("accuracy") def compute_metrics(eval_pred): """Computes accuracy on a batch of predictions""" predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` We've now got all the pieces! Let's instantiate the `Trainer` and train the model: ```python from transformers import Trainer trainer = Trainer( model, training_args, train_dataset=gtzan_encoded["train"], eval_dataset=gtzan_encoded["test"], tokenizer=feature_extractor, compute_metrics=compute_metrics, ) trainer.train() ``` <Tip warning={true}> Depending on your GPU, it is possible that you will encounter a CUDA `"out-of-memory"` error when you start training. In this case, you can reduce the `batch_size` incrementally by factors of 2 and employ [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) to compensate. </Tip> **Output:** ```out | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7297 | 1.0 | 113 | 1.8011 | 0.44 | | 1.24 | 2.0 | 226 | 1.3045 | 0.64 | | 0.9805 | 3.0 | 339 | 0.9888 | 0.7 | | 0.6853 | 4.0 | 452 | 0.7508 | 0.79 | | 0.4502 | 5.0 | 565 | 0.6224 | 0.81 | | 0.3015 | 6.0 | 678 | 0.5411 | 0.83 | | 0.2244 | 7.0 | 791 | 0.6293 | 0.78 | | 0.3108 | 8.0 | 904 | 0.5857 | 0.81 | | 0.1644 | 9.0 | 1017 | 0.5355 | 0.83 | | 0.1198 | 10.0 | 1130 | 0.5716 | 0.82 | ``` Training will take approximately 1 hour depending on your GPU or the one allocated to the Google Colab. Our best evaluation accuracy is 83% - not bad for just 10 epochs with 899 examples of training data! We could certainly improve upon this result by training for more epochs, using regularisation techniques such as _dropout_, or sub-diving each audio example from 30s into 15s segments to use a more efficient data pre-processing strategy. 
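If you'd like to experiment with that last idea, the following is a rough sketch of how you might chunk each waveform into 15-second segments before feature extraction. The segment length, function name and column handling are our own illustrative choices, not part of the course code, and the segmentation would need to run on the resampled dataset before the `preprocess_function` step:

```python
segment_duration = 15.0  # our choice, not from the course

def split_into_segments(batch):
    # split each 30s waveform into consecutive 15s chunks, repeating the genre label for each chunk
    segment_length = int(feature_extractor.sampling_rate * segment_duration)
    audios, genres = [], []
    for audio, genre in zip(batch["audio"], batch["genre"]):
        waveform = audio["array"]
        for start in range(0, len(waveform), segment_length):
            segment = waveform[start : start + segment_length]
            audios.append({"array": segment, "sampling_rate": audio["sampling_rate"]})
            genres.append(genre)
    return {"audio": audios, "genre": genres}


# applied to the resampled (but not yet encoded) dataset, e.g.:
# gtzan_segmented = gtzan.map(split_into_segments, batched=True, remove_columns=["file"])
```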
The big question is how this compares to other music classification systems 🤔 For that, we can view the [autoevaluate leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=marsyas%2Fgtzan&only_verified=0&task=audio-classification&config=all&split=train&metric=accuracy), a leaderboard that categorises models by language and dataset, and subsequently ranks them according to their accuracy. We can automatically submit our checkpoint to the leaderboard when we push the training results to the Hub - we simply have to set the appropriate key-word arguments (kwargs). You can change these values to match your dataset, language and model name accordingly: ```python kwargs = { "dataset_tags": "marsyas/gtzan", "dataset": "GTZAN", "model_name": f"{model_name}-finetuned-gtzan", "finetuned_from": model_id, "tasks": "audio-classification", } ``` The training results can now be uploaded to the Hub. To do so, execute the `.push_to_hub` command: ```python trainer.push_to_hub(**kwargs) ``` This will save the training logs and model weights under `"your-username/distilhubert-finetuned-gtzan"`. For this example, check out the upload at [`"sanchit-gandhi/distilhubert-finetuned-gtzan"`](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan). ## Share Model You can now share this model with anyone using the link on the Hub. They can load it with the identifier `"your-username/distilhubert-finetuned-gtzan"` directly into the `pipeline()` class. For instance, to load the fine-tuned checkpoint [`"sanchit-gandhi/distilhubert-finetuned-gtzan"`](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan): ```python from transformers import pipeline pipe = pipeline( "audio-classification", model="sanchit-gandhi/distilhubert-finetuned-gtzan" ) ``` ## Conclusion In this section, we've covered a step-by-step guide for fine-tuning the DistilHuBERT model for music classification. While we focussed on the task of music classification and the GTZAN dataset, the steps presented here apply more generally to any audio classification task - the same script can be used for spoken language audio classification tasks like keyword spotting or language identification. You just need to swap out the dataset for one that corresponds to your task of interest! If you're interested in fine-tuning other Hugging Face Hub models for audio classification, we encourage you to check out the other [examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) in the 🤗 Transformers repository. In the next section, we'll take the model that you just fine-tuned and build a music classification demo that you can share on the Hugging Face Hub.
hf_public_repos/audio-transformers-course/chapters/en/chapter4/hands_on.mdx
# Hands-on exercise

It's time to get your hands on some audio models and apply what you have learned so far. This exercise is one of the four hands-on exercises required to qualify for a course completion certificate.

Here are the instructions. In this unit, we demonstrated how to fine-tune a HuBERT model on the `marsyas/gtzan` dataset for music classification. Our example achieved 83% accuracy. Your task is to improve upon this accuracy metric.

Feel free to choose any model on the [🤗 Hub](https://huggingface.co/models) that you think is suitable for audio classification, and use the exact same dataset [`marsyas/gtzan`](https://huggingface.co/datasets/marsyas/gtzan) to build your own classifier. Your goal is to achieve 87% accuracy on this dataset with your classifier. You can choose the exact same model and play with the training hyperparameters, or pick an entirely different model - it's up to you!

For your result to count towards your certificate, don't forget to push your model to the Hub as was shown in this unit, with the following `**kwargs` at the end of the training:

```python
kwargs = {
    "dataset_tags": "marsyas/gtzan",
    "dataset": "GTZAN",
    "model_name": f"{model_name}-finetuned-gtzan",
    "finetuned_from": model_id,
    "tasks": "audio-classification",
}

trainer.push_to_hub(**kwargs)
```

Here are some additional resources that you may find helpful when working on this exercise:

* [Audio classification task guide in Transformers documentation](https://huggingface.co/docs/transformers/tasks/audio_classification)
* [Hubert model documentation](https://huggingface.co/docs/transformers/model_doc/hubert)
* [M-CTC-T model documentation](https://huggingface.co/docs/transformers/model_doc/mctct)
* [Audio Spectrogram Transformer documentation](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)
* [Wav2Vec2 documentation](https://huggingface.co/docs/transformers/model_doc/wav2vec2)

Feel free to build a demo of your model, and share it on Discord! If you have questions, post them in the #audio-study-group channel.
hf_public_repos/audio-transformers-course/chapters/en/chapter1/supplemental_reading.mdx
# Learn more

This unit covered many fundamental concepts relevant to understanding audio data and working with it. Want to learn more? Here you will find additional resources that will help you deepen your understanding of the topics and enhance your learning experience.

In the following video, Monty Montgomery from xiph.org presents real-time demonstrations of sampling, quantization, bit-depth, and dither on real audio equipment, using both modern digital analysis and vintage analog bench equipment. Check it out:

<Youtube id="cIQ9IXSUzuM"/>

If you'd like to dive deeper into digital signal processing, check out the free ["Digital Signals Theory" book](https://brianmcfee.net/dstbook-site/content/intro.html) authored by Brian McFee, an Assistant Professor of Music Technology and Data Science at New York University and the principal maintainer of the `librosa` package.
hf_public_repos/audio-transformers-course/chapters/en/chapter1/preprocessing.mdx
# Preprocessing an audio dataset Loading a dataset with 🤗 Datasets is just half of the fun. If you plan to use it either for training a model, or for running inference, you will need to pre-process the data first. In general, this will involve the following steps: * Resampling the audio data * Filtering the dataset * Converting audio data to model's expected input ## Resampling the audio data The `load_dataset` function downloads audio examples with the sampling rate that they were published with. This is not always the sampling rate expected by a model you plan to train, or use for inference. If there's a discrepancy between the sampling rates, you can resample the audio to the model's expected sampling rate. Most of the available pretrained models have been pretrained on audio datasets at a sampling rate of 16 kHz. When we explored MINDS-14 dataset, you may have noticed that it is sampled at 8 kHz, which means we will likely need to upsample it. To do so, use 🤗 Datasets' `cast_column` method. This operation does not change the audio in-place, but rather signals to datasets to resample the audio examples on the fly when they are loaded. The following code will set the sampling rate to 16kHz: ```py from datasets import Audio minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) ``` Re-load the first audio example in the MINDS-14 dataset, and check that it has been resampled to the desired `sampling rate`: ```py minds[0] ``` **Output:** ```out { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "audio": { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "array": array( [ 2.0634243e-05, 1.9437837e-04, 2.2419340e-04, ..., 9.3852862e-04, 1.1302452e-03, 7.1531429e-04, ], dtype=float32, ), "sampling_rate": 16000, }, "transcription": "I would like to pay my electricity bill using my card can you please assist", "intent_class": 13, } ``` You may notice that the array values are now also different. This is because we've now got twice the number of amplitude values for every one that we had before. <Tip> 💡 Some background on resampling: If an audio signal has been sampled at 8 kHz, so that it has 8000 sample readings per second, we know that the audio does not contain any frequencies over 4 kHz. This is guaranteed by the Nyquist sampling theorem. Because of this, we can be certain that in between the sampling points the original continuous signal always makes a smooth curve. Upsampling to a higher sampling rate is then a matter of calculating additional sample values that go in between the existing ones, by approximating this curve. Downsampling, however, requires that we first filter out any frequencies that would be higher than the new Nyquist limit, before estimating the new sample points. In other words, you can't downsample by a factor 2x by simply throwing away every other sample — this will create distortions in the signal called aliases. Doing resampling correctly is tricky and best left to well-tested libraries such as librosa or 🤗 Datasets. </Tip> ## Filtering the dataset You may need to filter the data based on some criteria. One of the common cases involves limiting the audio examples to a certain duration. For instance, we might want to filter out any examples longer than 20s to prevent out-of-memory errors when training a model. 
We can do this by using the 🤗 Datasets' `filter` method and passing a function with filtering logic to it. Let's start by writing a function that indicates which examples to keep and which to discard. This function, `is_audio_length_in_range`, returns `True` if a sample is shorter than 20s, and `False` if it is longer than 20s. ```py MAX_DURATION_IN_SECONDS = 20.0 def is_audio_length_in_range(input_length): return input_length < MAX_DURATION_IN_SECONDS ``` The filtering function can be applied to a dataset's column but we do not have a column with audio track duration in this dataset. However, we can create one, filter based on the values in that column, and then remove it. ```py # use librosa to get example's duration from the audio file new_column = [librosa.get_duration(path=x) for x in minds["path"]] minds = minds.add_column("duration", new_column) # use 🤗 Datasets' `filter` method to apply the filtering function minds = minds.filter(is_audio_length_in_range, input_columns=["duration"]) # remove the temporary helper column minds = minds.remove_columns(["duration"]) minds ``` **Output:** ```out Dataset({features: ["path", "audio", "transcription", "intent_class"], num_rows: 624}) ``` We can verify that dataset has been filtered down from 654 examples to 624. ## Pre-processing audio data One of the most challenging aspects of working with audio datasets is preparing the data in the right format for model training. As you saw, the raw audio data comes as an array of sample values. However, pre-trained models, whether you use them for inference, or want to fine-tune them for your task, expect the raw data to be converted into input features. The requirements for the input features may vary from one model to another — they depend on the model's architecture, and the data it was pre-trained with. The good news is, for every supported audio model, 🤗 Transformers offer a feature extractor class that can convert raw audio data into the input features the model expects. So what does a feature extractor do with the raw audio data? Let's take a look at [Whisper](https://huggingface.co/papers/2212.04356)'s feature extractor to understand some common feature extraction transformations. Whisper is a pre-trained model for automatic speech recognition (ASR) published in September 2022 by Alec Radford et al. from OpenAI. First, the Whisper feature extractor pads/truncates a batch of audio examples such that all examples have an input length of 30s. Examples shorter than this are padded to 30s by appending zeros to the end of the sequence (zeros in an audio signal correspond to no signal or silence). Examples longer than 30s are truncated to 30s. Since all elements in the batch are padded/truncated to a maximum length in the input space, there is no need for an attention mask. Whisper is unique in this regard, most other audio models require an attention mask that details where sequences have been padded, and thus where they should be ignored in the self-attention mechanism. Whisper is trained to operate without an attention mask and infer directly from the speech signals where to ignore the inputs. The second operation that the Whisper feature extractor performs is converting the padded audio arrays to log-mel spectrograms. As you recall, these spectrograms describe how the frequencies of a signal change over time, expressed on the mel scale and measured in decibels (the log part) to make the frequencies and amplitudes more representative of human hearing. 
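To build some intuition for what a log-mel spectrogram is before we hand everything over to the feature extractor, you can compute one yourself with librosa. This is only an illustrative sketch - the mel and hop parameters below are common defaults, not necessarily Whisper's exact settings:

```py
import librosa
import numpy as np

array = minds[0]["audio"]["array"]

# mel spectrogram of the resampled waveform, with power converted to decibels (the "log" part)
mel = librosa.feature.melspectrogram(y=np.asarray(array), sr=16000, n_mels=80)
log_mel = librosa.power_to_db(mel)
print(log_mel.shape)
```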
All these transformations can be applied to your raw audio data with a couple of lines of code. Let's go ahead and load the feature extractor from the pre-trained Whisper checkpoint so that it's ready for our audio data:

```py
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
```

Next, you can write a function to pre-process a single audio example by passing it through the `feature_extractor`.

```py
def prepare_dataset(example):
    audio = example["audio"]
    features = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"], padding=True
    )
    return features
```

We can apply the data preparation function to all of our training examples using 🤗 Datasets' map method:

```py
minds = minds.map(prepare_dataset)
minds
```

**Output:**
```out
Dataset(
    {
        features: ["path", "audio", "transcription", "intent_class", "input_features"],
        num_rows: 624,
    }
)
```

As easy as that, we now have log-mel spectrograms as `input_features` in the dataset. Let's visualize them for one of the examples in the `minds` dataset:

```py
import numpy as np
import librosa.display
import matplotlib.pyplot as plt

example = minds[0]
input_features = example["input_features"]

plt.figure().set_figwidth(12)
librosa.display.specshow(
    np.asarray(input_features[0]),
    x_axis="time",
    y_axis="mel",
    sr=feature_extractor.sampling_rate,
    hop_length=feature_extractor.hop_length,
)
plt.colorbar()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/log_mel_whisper.png" alt="Log mel spectrogram plot">
</div>

Now you can see what the audio input to the Whisper model looks like after preprocessing.

The model's feature extractor class takes care of transforming raw audio data to the format that the model expects. However, many tasks involving audio are multimodal, e.g. speech recognition. In such cases 🤗 Transformers also offers model-specific tokenizers to process the text inputs. For a deep dive into tokenizers, please refer to our [NLP course](https://huggingface.co/course/chapter2/4).

You can load the feature extractor and tokenizer for Whisper and other multimodal models separately, or you can load both via a so-called processor. To make things even simpler, use `AutoProcessor` to load a model's feature extractor and tokenizer together from a checkpoint, like this:

```py
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/whisper-small")
```

Here we have illustrated the fundamental data preparation steps. Of course, custom data may require more complex preprocessing. In this case, you can extend the function `prepare_dataset` to perform any sort of custom data transformations. With 🤗 Datasets, if you can write it as a Python function, you can [apply it](https://huggingface.co/docs/datasets/audio_process) to your dataset!
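As a concrete example of such an extension, here is a sketch of a modified `prepare_dataset` that also keeps the clip duration alongside the model inputs. The extra column name is purely illustrative:

```py
def prepare_dataset(example):
    audio = example["audio"]
    features = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"], padding=True
    )
    # hypothetical extra step: keep the clip duration (in seconds) alongside the input features
    features["duration_in_seconds"] = len(audio["array"]) / audio["sampling_rate"]
    return features
```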
6
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter1/streaming.mdx
# Streaming audio data

One of the biggest challenges faced with audio datasets is their sheer size. A single minute of uncompressed CD-quality audio (44.1kHz, 16-bit) takes up a bit more than 5 MB of storage. Typically, an audio dataset contains hours of recordings.

In the previous sections we used a very small subset of the MINDS-14 audio dataset; however, typical audio datasets are much larger. For example, the `xs` (smallest) configuration of [GigaSpeech from SpeechColab](https://huggingface.co/datasets/speechcolab/gigaspeech) contains only 10 hours of training data, but takes over 13GB of storage space for download and preparation. So what happens when we want to train on a larger split? The full `xl` configuration of the same dataset contains 10,000 hours of training data, requiring over 1TB of storage space. For most of us, this well exceeds the specifications of a typical hard drive. Do we need to fork out and buy additional storage? Or is there a way we can train on these datasets with no disk space constraints?

🤗 Datasets comes to the rescue by offering [streaming mode](https://huggingface.co/docs/datasets/stream). Streaming allows us to load the data progressively as we iterate over the dataset. Rather than downloading the whole dataset at once, we load the dataset one example at a time. We iterate over the dataset, loading and preparing examples on the fly when they are needed. This way, we only ever load the examples that we're using, and not the ones that we're not! Once we're done with an example, we continue iterating over the dataset and load the next one.

Streaming mode has three primary advantages over downloading the entire dataset at once:

* Disk space: examples are loaded to memory one-by-one as we iterate over the dataset. Since the data is not downloaded locally, there are no disk space requirements, so you can use datasets of arbitrary size.
* Download and processing time: audio datasets are large and need a significant amount of time to download and process. With streaming, loading and processing is done on the fly, meaning you can start using the dataset as soon as the first example is ready.
* Easy experimentation: you can experiment on a handful of examples to check that your script works without having to download the entire dataset.

There is one caveat to streaming mode. When downloading a full dataset without streaming, both the raw data and processed data are saved locally to disk. If we want to re-use this dataset, we can directly load the processed data from disk, skipping the download and processing steps. Consequently, we only have to perform the downloading and processing operations once, after which we can re-use the prepared data.

With streaming mode, the data is not downloaded to disk. Thus, neither the downloaded nor pre-processed data are cached. If we want to re-use the dataset, the streaming steps must be repeated, with the audio files loaded and processed on the fly again. For this reason, it is advised to download datasets that you are likely to use multiple times.

How can you enable streaming mode? Easy! Just set `streaming=True` when you load your dataset. The rest will be taken care of for you:

```py
gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", streaming=True)
```

Just like we applied preprocessing steps to a downloaded subset of MINDS-14, you can do the same preprocessing with a streaming dataset in exactly the same manner, as sketched below.
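For instance, assuming a `prepare_dataset`-style function like the one from the preprocessing section, resampling and feature extraction can be chained onto the streamed dataset; nothing is written to disk, and the transformations are applied lazily as you iterate (this is a sketch, not part of the original guide):

```py
from datasets import Audio

# resample each example on the fly as it is streamed
gigaspeech = gigaspeech.cast_column("audio", Audio(sampling_rate=16_000))

# feature extraction is also applied lazily, example by example
gigaspeech = gigaspeech.map(prepare_dataset)
```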
The only difference is that you can no longer access individual samples using Python indexing (i.e. `gigaspeech["train"][sample_idx]`). Instead, you have to iterate over the dataset. Here's how you can access an example when streaming a dataset: ```py next(iter(gigaspeech["train"])) ``` **Output:** ```out { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", } ``` If you'd like to preview several examples from a large dataset, use the `take()` to get the first n elements. Let's grab the first two examples in the gigaspeech dataset: ```py gigaspeech_head = gigaspeech["train"].take(2) list(gigaspeech_head) ``` **Output:** ```out [ { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [ 0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621, ] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", }, { "segment_id": "AUD0000001043_S0000775", "speaker": "N/A", "text": "SIX TOMATOES <PERIOD>", "audio": { "path": "xs_chunks_0000/AUD0000001043_S0000775.wav", "array": array( [ 1.43432617e-03, 1.37329102e-03, 1.31225586e-03, ..., -6.10351562e-05, -1.22070312e-04, -1.83105469e-04, ] ), "sampling_rate": 16000, }, "begin_time": 3673.96, "end_time": 3675.26, "audio_id": "AUD0000001043", "title": "Asteroid of Fear", "url": "http//www.archive.org/download/asteroid_of_fear_1012_librivox/asteroid_of_fear_1012_librivox_64kb_mp3.zip", "source": 0, "category": 28, "original_full_path": "audio/audiobook/P0011/AUD0000001043.opus", }, ] ``` Streaming mode can take your research to the next level: not only are the biggest datasets accessible to you, but you can easily evaluate systems over multiple datasets in one go without worrying about your disk space. Compared to evaluating on a single dataset, multi-dataset evaluation gives a better metric for the generalisation abilities of a speech recognition system (c.f. End-to-end Speech Benchmark (ESB)).
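To make the multi-dataset idea concrete, here is a rough sketch of what streaming evaluation over several corpora could look like. The `transcribe` and `compute_wer` functions are placeholders for your own model and metric, the dataset identifiers are only illustrative, and `take(16)` keeps the sketch lightweight; drop it for a full evaluation run.

```py
from datasets import load_dataset

eval_sets = {
    "gigaspeech": load_dataset(
        "speechcolab/gigaspeech", "xs", split="test", streaming=True
    ),
    "librispeech": load_dataset(
        "librispeech_asr", "clean", split="test", streaming=True
    ),
}

for name, dataset in eval_sets.items():
    references, predictions = [], []
    for example in dataset.take(16):
        references.append(example["text"])
        predictions.append(transcribe(example["audio"]))  # placeholder ASR system
    print(name, compute_wer(references, predictions))  # placeholder metric
```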
7
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter1/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # Check your understanding of the course material ### 1. What units is the sampling rate measured in? <Question choices={[ { text: "dB", explain: "No, the amplitude is measured in decibels (dB)." }, { text: "Hz", explain: "The sampling rate is the number of samples taken in one second and is measured in hertz (Hz).", correct: true }, { text: "bit", explain: "Bits are used to describe bit depth, which refers to the number of bits of information used to represent each sample of an audio signal.", } ]} /> ### 2. When streaming a large audio dataset, how soon can you start using it? <Question choices={[ { text: "As soon as the full dataset is downloaded.", explain: "The goal of streaming data is to be able to work with it without having to fully download a dataset." }, { text: "As soon as the first 16 examples are downloaded.", explain: "Try again!" }, { text: "As soon as the first example is downloaded.", explain: "", correct: true } ]} /> ### 3. What is a spectrogram? <Question choices={[ { text: "A device used to digitize the audio that is first captured by a microphone, which converts the sound waves into an electrical signal.", explain: "A device used to digitize such electrical signal is called Analog-to-Digital Converter. Try again!" }, { text: "A plot that shows how the amplitude of an audio signal change over time. It is also known as the *time domain* representation of sound.", explain: "The description above refers to waveform, not spectrogram." }, { text: "A visual representation of the frequency spectrum of a signal as it varies with time.", explain: "", correct: true } ]} /> ### 4. What is the easiest way to convert raw audio data into log-mel spectrogram expected by Whisper? A. ```python librosa.feature.melspectrogram(audio["array"]) ``` B. ```python feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") feature_extractor(audio["array"]) ``` C. ```python dataset.feature(audio["array"], model="whisper") ``` <Question choices={[ { text: "A", explain: "`librosa.feature.melspectrogram()` creates a power spectrogram." }, { text: "B", explain: "", correct: true }, { text: "C", explain: "Dataset does not prepare features for Transformer models, this is done by the model's preprocessor." } ]} /> ### 5. How do you load a dataset from 🤗 Hub? A. ```python from datasets import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` B. ```python import librosa dataset = librosa.load(PATH_TO_DATASET) ``` C. ```python from transformers import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` <Question choices={[ { text: "A", explain: "The best way is to use the 🤗 Datasets library.", correct: true }, { text: "B", explain: "Librosa.load is useful to load an individual audio file from a path into a tuple with audio time series and a sampling rate, but not an entire dataset with many examples and multiple features. " }, { text: "C", explain: "load_dataset method comes in the 🤗 Datasets library, not in 🤗 Transformers." } ]} /> ### 6. Your custom dataset contains high-quality audio with 32 kHz sampling rate. You want to train a speech recognition model that expects the audio examples to have a 16 kHz sampling rate. What should you do? <Question choices={[ { text: "Use the examples as is, the model will easily generalize to higher quality audio examples.", explain: "Due to reliance on attention mechanisms, it is challenging for models to generalize between sampling rates." 
}, { text: "Use Audio module from the 🤗 Datasets library to downsample the examples in the custom dataset", explain: "", correct: true }, { text: "Downsample by a factor 2x by throwing away every other sample.", explain: "This will create distortions in the signal called aliases. Doing resampling correctly is tricky and best left to well-tested libraries such as librosa or 🤗 Datasets." } ]} /> ### 7. How can you convert a spectrogram generated by a machine learning model into a waveform? <Question choices={[ { text: "We can use a neural network called a vocoder to reconstruct a waveform from the spectrogram.", explain: "Since the phase information is missing in this case, we need to use a vocoder, or the classic Griffin-Lim algorithm to reconstruct the waveform.", correct: true }, { text: "We can use the inverse STFT to convert the generated spectrogram into a waveform", explain: "A generated spectrogram is missing phase information that is required to use the inverse STFT." }, { text: "You can't convert a spectrogram generated by a machine learning model into a waveform.", explain: "Try again!" } ]} />
8
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter1/introduction.mdx
# Unit 1. Working with audio data ## What you'll learn in this unit Every audio or speech task starts with an audio file. Before we can dive into solving these tasks, it's important to understand what these files actually contain, and how to work with them. In this unit, you will gain an understanding of the fundamental terminology related to audio data, including waveform, sampling rate, and spectrogram. You will also learn how to work with audio datasets, including loading and preprocessing audio data, and how to stream large datasets efficiently. By the end of this unit, you will have a strong grasp of the essential audio data terminology and will be equipped with the skills necessary to work with audio datasets for various applications. The knowledge you'll gain in this unit is going to lay a foundation to understanding the remainder of the course.
9
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/dinov2/README.md
# candle-dinov2

[DINOv2](https://github.com/facebookresearch/dinov2) is a computer vision model. In this example, it is used as an ImageNet classifier: the model returns the probability for the image to belong to each of the 1000 ImageNet categories.

## Running an example

```bash
cargo run --example dinov2 --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg

> mountain bike, all-terrain bike, off-roader: 43.67%
> bicycle-built-for-two, tandem bicycle, tandem: 33.20%
> crash helmet : 13.23%
> unicycle, monocycle : 2.44%
> maillot : 2.42%
```

![Leading group, Giro d'Italia 2021](../yolo-v8/assets/bike.jpg)
0
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/siglip/main.rs
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Error as E; use clap::Parser; use candle::{DType, Device, Tensor}; use candle_nn::{ops::softmax, VarBuilder}; use candle_transformers::models::siglip; use tokenizers::Tokenizer; #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] tokenizer: Option<String>, #[arg(long, use_value_delimiter = true)] images: Option<Vec<String>>, #[arg(long)] cpu: bool, #[arg(long, use_value_delimiter = true)] sequences: Option<Vec<String>>, } fn load_image<T: AsRef<std::path::Path>>(path: T, image_size: usize) -> anyhow::Result<Tensor> { let img = image::ImageReader::open(path)?.decode()?; let (height, width) = (image_size, image_size); let img = img.resize_to_fill( width as u32, height as u32, image::imageops::FilterType::Triangle, ); let img = img.to_rgb8(); let img = img.into_raw(); let img = Tensor::from_vec(img, (height, width, 3), &Device::Cpu)? .permute((2, 0, 1))? .to_dtype(DType::F32)? .affine(2. / 255., -1.)?; Ok(img) } fn load_images<T: AsRef<std::path::Path>>( paths: &Vec<T>, image_size: usize, ) -> anyhow::Result<Tensor> { let mut images = vec![]; for path in paths { let tensor = load_image(path, image_size)?; images.push(tensor); } let images = Tensor::stack(&images, 0)?; Ok(images) } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("google/siglip-base-patch16-224".to_string()); api.get("model.safetensors")? } Some(model) => model.into(), }; let tokenizer = get_tokenizer(args.tokenizer)?; let config = siglip::Config::base_patch16_224(); let device = candle_examples::device(args.cpu)?; let vec_imgs = match args.images { Some(imgs) => imgs, None => vec![ "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg".to_string(), "candle-examples/examples/yolo-v8/assets/bike.jpg".to_string(), ], }; let images = load_images(&vec_imgs, config.vision_config.image_size)?.to_device(&device)?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file.clone()], DType::F32, &device)? }; let model = siglip::Model::new(&config, vb)?; let (input_ids, vec_seq) = tokenize_sequences(&config, args.sequences, &tokenizer, &device)?; let (_logits_per_text, logits_per_image) = model.forward(&images, &input_ids)?; let softmax_image = softmax(&logits_per_image, 1)?; let softmax_image_vec = softmax_image.flatten_all()?.to_vec1::<f32>()?; println!("softmax_image_vec: {:?}", softmax_image_vec); let probability_vec = softmax_image_vec .iter() .map(|v| v * 100.0) .collect::<Vec<f32>>(); let probability_per_image = probability_vec.len() / vec_imgs.len(); for (i, img) in vec_imgs.iter().enumerate() { let start = i * probability_per_image; let end = start + probability_per_image; let prob = &probability_vec[start..end]; println!("\n\nResults for image: {}\n", img); for (i, p) in prob.iter().enumerate() { println!("Probability: {:.4}% Text: {} ", p, vec_seq[i]); } } Ok(()) } pub fn get_tokenizer(tokenizer: Option<String>) -> anyhow::Result<Tokenizer> { let tokenizer = match tokenizer { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("google/siglip-base-patch16-224".to_string()); api.get("tokenizer.json")? 
} Some(file) => file.into(), }; Tokenizer::from_file(tokenizer).map_err(E::msg) } pub fn tokenize_sequences( config: &siglip::Config, sequences: Option<Vec<String>>, tokenizer: &Tokenizer, device: &Device, ) -> anyhow::Result<(Tensor, Vec<String>)> { let pad_id = config.text_config.pad_token_id; let vec_seq = match sequences { Some(seq) => seq, None => vec![ "a cycling race".to_string(), "a photo of two cats".to_string(), "a robot holding a candle".to_string(), ], }; let mut tokens = vec![]; for seq in vec_seq.clone() { let encoding = tokenizer.encode(seq, true).map_err(E::msg)?; tokens.push(encoding.get_ids().to_vec()); } let max_len = config.text_config.max_position_embeddings; // Pad the sequences to have the same length for token_vec in tokens.iter_mut() { let len_diff = max_len - token_vec.len(); if len_diff > 0 { token_vec.extend(vec![pad_id; len_diff]); } } let input_ids = Tensor::new(tokens, device)?; Ok((input_ids, vec_seq)) }
1
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/siglip/README.md
## SigLIP

SigLIP is a multi-modal text-vision model that improves over CLIP by using a sigmoid-based loss; see the [Hugging Face model card](https://huggingface.co/google/siglip-base-patch16-224).

### Running an example

```
$ cargo run --features cuda -r --example siglip -
softmax_image_vec: [2.1912122e-14, 2.3624872e-14, 1.0, 1.0, 2.4787932e-8, 3.2784535e-12]


Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg

Probability: 0.0000% Text: a cycling race
Probability: 0.0000% Text: a photo of two cats
Probability: 100.0000% Text: a robot holding a candle


Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg

Probability: 100.0000% Text: a cycling race
Probability: 0.0000% Text: a photo of two cats
Probability: 0.0000% Text: a robot holding a candle
```
2
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/splade/main.rs
use std::path::PathBuf; use anyhow::{Error as E, Result}; use candle::Tensor; use candle_nn::VarBuilder; use candle_transformers::models::bert::{self, BertForMaskedLM, Config}; use clap::Parser; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::{PaddingParams, Tokenizer}; #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// The model to use, check out available models: https://huggingface.co/models?library=sentence-transformers&sort=trending #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, // Path to the tokenizer file. #[arg(long)] tokenizer_file: Option<String>, // Path to the weight files. #[arg(long)] weight_files: Option<String>, // Path to the config file. #[arg(long)] config_file: Option<String>, /// When set, compute embeddings for this prompt. #[arg(long)] prompt: Option<String>, } fn main() -> Result<()> { let args = Args::parse(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => "prithivida/Splade_PP_en_v1".to_string(), }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let weights_filename = match args.weight_files { Some(files) => PathBuf::from(files), None => match repo.get("model.safetensors") { Ok(safetensors) => safetensors, Err(_) => match repo.get("pytorch_model.bin") { Ok(pytorch_model) => pytorch_model, Err(e) => { return Err(anyhow::Error::msg(format!("Model weights not found. The weights should either be a `model.safetensors` or `pytorch_model.bin` file. Error: {}", e))); } }, }, }; let config = std::fs::read_to_string(config_filename)?; let config: Config = serde_json::from_str(&config)?; let mut tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let device = candle_examples::device(args.cpu)?; let dtype = bert::DTYPE; let vb = if weights_filename.ends_with("model.safetensors") { unsafe { VarBuilder::from_mmaped_safetensors(&[weights_filename], dtype, &device).unwrap() } } else { println!("Loading weights from pytorch_model.bin"); VarBuilder::from_pth(&weights_filename, dtype, &device).unwrap() }; let model = BertForMaskedLM::load(vb, &config)?; if let Some(prompt) = args.prompt { let tokenizer = tokenizer .with_padding(None) .with_truncation(None) .map_err(E::msg)?; let tokens = tokenizer .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let token_ids = Tensor::new(&tokens[..], &device)?.unsqueeze(0)?; let token_type_ids = token_ids.zeros_like()?; let ys = model.forward(&token_ids, &token_type_ids, None)?; let vec = Tensor::log( &Tensor::try_from(1.0)? .to_dtype(dtype)? .to_device(&device)? .broadcast_add(&ys.relu()?)?, )? 
.max(1)?; let vec = normalize_l2(&vec)?; let vec = vec.squeeze(0)?.to_vec1::<f32>()?; let indices = (0..vec.len()) .filter(|&i| vec[i] != 0.0) .map(|x| x as u32) .collect::<Vec<_>>(); let tokens = tokenizer.decode(&indices, true).unwrap(); println!("{tokens:?}"); let values = indices.iter().map(|&i| vec[i as usize]).collect::<Vec<_>>(); println!("{values:?}"); } else { let sentences = [ "The cat sits outside", "A man is playing guitar", "I love pasta", "The new movie is awesome", "The cat plays in the garden", "A woman watches TV", "The new movie is so great", "Do you like pizza?", ]; let n_sentences = sentences.len(); if let Some(pp) = tokenizer.get_padding_mut() { pp.strategy = tokenizers::PaddingStrategy::BatchLongest } else { let pp = PaddingParams { strategy: tokenizers::PaddingStrategy::BatchLongest, ..Default::default() }; tokenizer.with_padding(Some(pp)); } let tokens = tokenizer .encode_batch(sentences.to_vec(), true) .map_err(E::msg)?; let token_ids = tokens .iter() .map(|tokens| { let tokens = tokens.get_ids().to_vec(); Ok(Tensor::new(tokens.as_slice(), &device)?) }) .collect::<Result<Vec<_>>>()?; let attention_mask = tokens .iter() .map(|tokens| { let tokens = tokens.get_attention_mask().to_vec(); Ok(Tensor::new(tokens.as_slice(), &device)?) }) .collect::<Result<Vec<_>>>()?; let token_ids = Tensor::stack(&token_ids, 0)?; let attention_mask = Tensor::stack(&attention_mask, 0)?; let token_type_ids = token_ids.zeros_like()?; let ys = model.forward(&token_ids, &token_type_ids, Some(&attention_mask))?; let vector = Tensor::log( &Tensor::try_from(1.0)? .to_dtype(dtype)? .to_device(&device)? .broadcast_add(&ys.relu()?)?, )?; let vector = vector .broadcast_mul(&attention_mask.unsqueeze(2)?.to_dtype(dtype)?)? .max(1)?; let vec = normalize_l2(&vector)?; let mut similarities = vec![]; for i in 0..n_sentences { let e_i = vec.get(i)?; for j in (i + 1)..n_sentences { let e_j = vec.get(j)?; let sum_ij = (&e_i * &e_j)?.sum_all()?.to_scalar::<f32>()?; let sum_i2 = (&e_i * &e_i)?.sum_all()?.to_scalar::<f32>()?; let sum_j2 = (&e_j * &e_j)?.sum_all()?.to_scalar::<f32>()?; let cosine_similarity = sum_ij / (sum_i2 * sum_j2).sqrt(); similarities.push((cosine_similarity, i, j)) } } similarities.sort_by(|u, v| v.0.total_cmp(&u.0)); for &(score, i, j) in similarities[..5].iter() { println!("score: {score:.2} '{}' '{}'", sentences[i], sentences[j]) } } Ok(()) } pub fn normalize_l2(v: &Tensor) -> Result<Tensor> { Ok(v.broadcast_div(&v.sqr()?.sum_keepdim(1)?.sqrt()?)?) }
3
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/splade/README.md
# candle-splade

SPLADE is a neural retrieval model which learns query/document sparse expansion via the BERT MLM head and sparse regularization. Sparse representations offer several advantages over dense approaches: efficient use of an inverted index, explicit lexical matching, and interpretability. They also seem to generalize better on out-of-domain data.

In this example we can perform the following two tasks:

- Compute the sparse embedding for a given query.
- Compute similarities between a set of sentences using sparse embeddings.

## Sparse Sentence embeddings

SPLADE is used to compute the sparse embedding for a given query. The model weights are downloaded from the hub on the first run. This makes use of the `BertForMaskedLM` model.

```bash
cargo run --example splade --release -- --prompt "Here is a test sentence"

> "the out there still house inside position outside stay standing hotel sitting dog animal sit bird cat statue cats"
> [0.10270107, 0.269471, 0.047469813, 0.0016636598, 0.05394874, 0.23105666, 0.037475716, 0.45949644, 0.009062732, 0.06790692, 0.0327835, 0.33122346, 0.16863061, 0.12688516, 0.340983, 0.044972017, 0.47724655, 0.01765311, 0.37331146]
```

To compute similarities between the set of example sentences hardcoded in the example, run it without a prompt:

```bash
cargo run --example splade --release

> score: 0.47 'The new movie is awesome' 'The new movie is so great'
> score: 0.43 'The cat sits outside' 'The cat plays in the garden'
> score: 0.14 'I love pasta' 'Do you like pizza?'
> score: 0.11 'A man is playing guitar' 'The cat plays in the garden'
> score: 0.05 'A man is playing guitar' 'A woman watches TV'
```
4
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stella-en-v5/main.rs
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use std::path::Path; use anyhow::{anyhow, Error as E, Result}; use clap::Parser; use candle_transformers::models::stella_en_v5::{ Config, EmbedDim as StellaEmbedDim, EmbeddingModel, }; use candle::{DType, Device, Tensor}; use candle_nn::VarBuilder; use hf_hub::{api::sync::Api, Repo}; use tokenizers::{PaddingDirection, PaddingParams, PaddingStrategy, Tokenizer}; struct Embedding { model: EmbeddingModel, device: Device, tokenizer: Tokenizer, } impl Embedding { fn new(model: EmbeddingModel, tokenizer: Tokenizer, device: &Device) -> Self { Self { model, tokenizer, device: device.clone(), } } fn encode(&mut self, task: EncodeTask, text: Option<String>) -> Result<()> { // Just shocasing embeddings, this has no real value if let Some(text) = text { let qry = task.query_preproc(&[text]); let encoding = self.tokenizer.encode(qry, true).map_err(|e| anyhow!(e))?; let shape = (1, encoding.len()); let input = Tensor::from_slice(encoding.get_ids(), shape, &self.device)?; let mask = Tensor::from_slice(encoding.get_attention_mask(), shape, &self.device)?; let result = self.model.forward(&input, &mask)?; println!("embeddings: {result}"); } else { // Examples copied from [Model Card](https://huggingface.co/dunzhang/stella_en_1.5B_v5#transformers) let queries = [ "What are some ways to reduce stress?".to_string(), "What are the benefits of drinking green tea?".to_string(), ]; let docs = [ "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.".to_string(), "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.".to_string(), ]; // We only encode the queries and not the data let qry = task.query_preproc(&queries); let mut qry_encoded = self .tokenizer .encode_batch(qry, true) .map_err(|e| anyhow!(e))?; let mut docs_encoded = self .tokenizer .encode_batch(docs.to_vec(), true) .map_err(|e| anyhow!(e))?; let qry_embed = { // Now, we generate the tensors for the `input` and `mask` let shape = (qry_encoded.len(), qry_encoded[1].len()); let mut ids = Tensor::zeros(shape, DType::U32, &self.device)?; let mut masks = Tensor::zeros(shape, DType::U8, &self.device)?; for (i, e) in qry_encoded.drain(..).enumerate() { let input_id = Tensor::from_iter(e.get_ids().to_vec(), &self.device)?.unsqueeze(0)?; let mask = Tensor::from_iter(e.get_attention_mask().to_vec(), &self.device)? .to_dtype(DType::U8)? .unsqueeze(0)?; ids = ids.slice_assign(&[i..i + 1, 0..input_id.dims2().unwrap().1], &input_id)?; masks = masks.slice_assign(&[i..i + 1, 0..mask.dims2().unwrap().1], &mask)?; } // Let's generate the embeddings for the query, we are going to be normalizing the result. // For larger datasets, you can call `.forward()` on batches and run a `l2 norm` pass on the entire data self.model.forward_norm(&ids, &masks)? 
}; let doc_embed = { let shape = (docs_encoded.len(), docs_encoded[1].len()); let mut ids = Tensor::zeros(shape, DType::U32, &self.device)?; let mut masks = Tensor::zeros(shape, DType::U8, &self.device)?; for (i, e) in docs_encoded.drain(..).enumerate() { let input_id = Tensor::from_iter(e.get_ids().to_vec(), &self.device)?.unsqueeze(0)?; let mask = Tensor::from_iter(e.get_attention_mask().to_vec(), &self.device)? .to_dtype(DType::U8)? .unsqueeze(0)?; ids = ids.slice_assign(&[i..i + 1, 0..input_id.dims2().unwrap().1], &input_id)?; masks = masks.slice_assign(&[i..i + 1, 0..mask.dims2().unwrap().1], &mask)?; } // Let's generate the embeddings for the query, we are going to be normalizing the result. // For larger datasets, you can call `.forward()` on batches and run a `l2 norm` pass on the entire data self.model.forward_norm(&ids, &masks)? }; println!( "Embed shapes:\nQuery: {:?}\nDocs: {:?}", qry_embed.shape(), doc_embed.shape() ); // [2, 1024] for head dim `1024` // a matmul to generate the `similarity` score let res = qry_embed.matmul(&doc_embed.t()?)?; for (k, v) in queries.iter().enumerate() { let tnsr = res.get(k)?; let max = tnsr.argmax(0)?.to_scalar::<u32>()?; println!( "\nScore: {}\nQuery: {}\nAnswer: {}\n\n", tnsr.get(max as usize)?.to_scalar::<f32>()?, v, docs[k] ); } } Ok(()) } } #[derive(Clone, Copy, Debug, clap::ValueEnum, PartialEq, Eq)] enum EmbedDim { #[value(name = "256")] Dim256, #[value(name = "768")] Dim768, #[value(name = "1024")] Dim1024, #[value(name = "2048")] Dim2048, #[value(name = "4096")] Dim4096, #[value(name = "6144")] Dim6144, #[value(name = "8192")] Dim8192, } impl EmbedDim { /// Returns dir path to the embed head weights int he repo pub fn embed_dim_default_dir(&self) -> &'static str { match self { Self::Dim256 => "2_Dense_256", Self::Dim768 => "2_Dense_768", Self::Dim1024 => "2_Dense_1024", Self::Dim2048 => "2_Dense_2048", Self::Dim4096 => "2_Dense_4096", Self::Dim6144 => "2_Dense_6144", Self::Dim8192 => "2_Dense_8192", } } /// Resolves the `EmbedDim` for given variant pub fn embed_dim(&self) -> StellaEmbedDim { match self { Self::Dim256 => StellaEmbedDim::Dim256, Self::Dim768 => StellaEmbedDim::Dim768, Self::Dim1024 => StellaEmbedDim::Dim1024, Self::Dim2048 => StellaEmbedDim::Dim2048, Self::Dim4096 => StellaEmbedDim::Dim4096, Self::Dim6144 => StellaEmbedDim::Dim6144, Self::Dim8192 => StellaEmbedDim::Dim8192, } } } #[derive(Clone, Copy, Debug, clap::ValueEnum, PartialEq, Eq)] pub enum EncodeTask { /// `s2p` is the `retrieval` task /// Default in this example #[value(name = "s2p")] S2P, /// `s2s` is the semantic similarity task #[value(name = "s2s")] S2S, } impl EncodeTask { /// Preprocess a set of inputs basef on a template suggested by the model authors /// See: https://huggingface.co/dunzhang/stella_en_1.5B_v5#introduction pub fn query_preproc(&self, txt: &[String]) -> Vec<String> { let instruct = match self { Self::S2P => { "Given a web search query, retrieve relevant passages that answer the query." } Self::S2S => "Retrieve semantically similar text.", }; txt.iter() .map(|s| format!("Instruct: {instruct}\nQuery: {s}")) .collect::<Vec<_>>() } } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "1.5b")] Large, #[value(name = "400m")] Small, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(long)] which: Which, /// Enable tracing (generates a trace-timestamp.json file). 
#[arg(long)] tracing: bool, #[arg(long)] use_flash_attn: bool, #[arg(long)] query: Option<String>, #[arg(long, default_value = "1024")] embed_dim: Option<EmbedDim>, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] base_weight_files: Option<String>, #[arg(long)] embed_head_weight_files: Option<String>, /// `Stella` is trained on 2 tasks: See [`Model Card`](https://huggingface.co/dunzhang/stella_en_1.5B_v5) /// `s2s`: Semantic textual similarity /// `s2p`: Retrieval task - `Default` in this example #[arg(long, default_value = "s2p")] task: Option<EncodeTask>, } // Tokenizer creation is super critical in our case. // We are going to be `padding: Left` for each batch fn create_tokenizer(tokenizer_file: &Path, which: Which) -> Result<Tokenizer> { let mut tokenizer = Tokenizer::from_file(tokenizer_file).map_err(E::msg)?; if which == Which::Large { let pad_id = if let Some(pad_id) = tokenizer.token_to_id("<|endoftext|>") { pad_id } else { return Err(anyhow!( "Tokenizer doesn't contain expected `<|endoftext|>` token" )); }; // This part is super important, we are padding the tokens to the *`left`* and not the usual *`right`* padding tokenizer.with_padding(Some(PaddingParams { strategy: PaddingStrategy::BatchLongest, direction: PaddingDirection::Left, pad_id, pad_token: "<|endoftext|>".to_string(), ..Default::default() })); } else { tokenizer.with_padding(Some(PaddingParams { strategy: PaddingStrategy::BatchLongest, direction: PaddingDirection::Right, ..Default::default() })); } Ok(tokenizer) } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); let start = std::time::Instant::now(); let api = Api::new()?; let embed_dim = match args.embed_dim { Some(d) => d, None => EmbedDim::Dim1024, }; let (repo, cfg) = match args.which { Which::Large => ( "dunzhang/stella_en_1.5B_v5", Config::new_1_5_b_v5(embed_dim.embed_dim()), ), Which::Small => ( "dunzhang/stella_en_400M_v5", Config::new_400_m_v5(embed_dim.embed_dim()), ), }; let repo = api.repo(Repo::model(repo.to_string())); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; // Note, if you are providing `weight_files`, ensure that the `--embed_dim` dimensions provided matches the weights // E.g. if you are using `--embed_dim 1024`, the weight files should include the `.safetensors` file from `2_Dense_1024` dir of the repo let base_weight_files = match args.base_weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => { vec![repo.get("model.safetensors")?] } }; let embed_weight_files = match args.embed_head_weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => { let head_w_path = format!("{}/model.safetensors", embed_dim.embed_dim_default_dir()); vec![repo.get(&head_w_path)?] 
} }; println!("retrieved the files in {:?}", start.elapsed()); // Initializing the tokenizer which would require us to add padding to the `left` for batch encoding let tokenizer = create_tokenizer(tokenizer_filename.as_path(), args.which)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let dtype = DType::F32; let base_vb = unsafe { VarBuilder::from_mmaped_safetensors(&base_weight_files, dtype, &device)? }; // Embedding layer is always built on F32 for accuracy let embed_vb = unsafe { VarBuilder::from_mmaped_safetensors(&embed_weight_files, DType::F32, &device)? }; let model = EmbeddingModel::new(&cfg, base_vb, embed_vb)?; println!("loaded the model in {:?}", start.elapsed()); let mut embedding = Embedding::new(model, tokenizer, &device); let task = args.task.map_or(EncodeTask::S2P, |t| t); embedding.encode(task, args.query) }
5
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stella-en-v5/README.md
# candle-stella-en-v5: Implementation of [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) embedding model

As of 7th Oct 2024, *Stella_en_1.5B_v5* is one of the top-ranking models on `retrieval` and `reranking` tasks on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard.

[Model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the HuggingFace Hub.

## Running the example

Stella_en_1.5B_v5 is used to generate text embeddings for a prompt. The model weights are downloaded from the hub on the first run.

```bash
$ cargo run --example stella-en-v5 --release -- --query "What are safetensors?"

> [[ 0.3905, -0.0130, 0.2072, ..., -0.1100, -0.0086, 0.6002]]
> Tensor[[1, 1024], f32]
```

Stella_en_1.5B_v5 is trained with [MRL](https://arxiv.org/abs/2205.13147), enabling multiple embedding dimensions.

The following reproduces the example in the [model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5) for a retrieval task (s2p). The sample queries and docs are hardcoded in the example.

```bash
$ cargo run --example stella-en-v5 --release --features <metal | cuda> -- --which 1.5b

>
> Score: 0.8178786
> Query: What are some ways to reduce stress?
> Answer: There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending
> time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent
> stress from building up.
>
>
> Score: 0.7853528
> Query: What are the benefits of drinking green tea?
> Answer: Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage
> caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types
> of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.
>

$ cargo run --example stella-en-v5 --release --features <metal | cuda> -- --which 400m

>
> Score: 0.8397539
> Query: What are some ways to reduce stress?
> Answer: There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending
> time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent
> stress from building up.
>
>
> Score: 0.809545
> Query: What are the benefits of drinking green tea?
> Answer: Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage
> caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types
> of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.
>
```

## Supported options:
- `Stella_en_v5` has 2 published model variants: a 1.5B variant and a 400M variant. This is selected through the flag `--which`, e.g. `--which 400m` or `--which 1.5b`.

- `Stella_en_v5` supports 256, 768, 1024, 2048, 4096, 6144 and 8192 embedding dimensions (though the model card mentions 512, I couldn't find weights for that dimension).
In the example run this is selected with the `--embed-dim` option, e.g. `... --embed-dim 4096`. Defaults to `1024`.

- As per the [model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5), the model has been primarily trained on `s2s` (similarity) and `s2p` (retrieval) tasks. These require slightly different `query` preprocessing (a different prompt template for each). In this example this is enabled through the `--task` option.
6
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/resnet/export_models.py
# This script exports pre-trained model weights in the safetensors format. import numpy as np import torch import torchvision from safetensors import torch as stt m = torchvision.models.resnet50(pretrained=True) stt.save_file(m.state_dict(), 'resnet50.safetensors') m = torchvision.models.resnet101(pretrained=True) stt.save_file(m.state_dict(), 'resnet101.safetensors') m = torchvision.models.resnet152(pretrained=True) stt.save_file(m.state_dict(), 'resnet152.safetensors')
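
# Optional sanity check (an illustrative addition, not part of the original export script):
# reload one of the exported files and inspect a tensor to confirm the export worked.
# 'conv1.weight' is assumed here to be a key in torchvision's ResNet state dict.
from safetensors.torch import load_file

reloaded = load_file('resnet50.safetensors')
print(len(reloaded), 'tensors exported')
print(reloaded['conv1.weight'].shape)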
7
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/resnet/main.rs
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::resnet; use clap::{Parser, ValueEnum}; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { #[value(name = "18")] Resnet18, #[value(name = "34")] Resnet34, #[value(name = "50")] Resnet50, #[value(name = "101")] Resnet101, #[value(name = "152")] Resnet152, } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Variant of the model to use. #[arg(value_enum, long, default_value_t = Which::Resnet18)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("lmz/candle-resnet".into()); let filename = match args.which { Which::Resnet18 => "resnet18.safetensors", Which::Resnet34 => "resnet34.safetensors", Which::Resnet50 => "resnet50.safetensors", Which::Resnet101 => "resnet101.safetensors", Which::Resnet152 => "resnet152.safetensors", }; api.get(filename)? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let class_count = candle_examples::imagenet::CLASS_COUNT as usize; let model = match args.which { Which::Resnet18 => resnet::resnet18(class_count, vb)?, Which::Resnet34 => resnet::resnet34(class_count, vb)?, Which::Resnet50 => resnet::resnet50(class_count, vb)?, Which::Resnet101 => resnet::resnet101(class_count, vb)?, Which::Resnet152 => resnet::resnet152(class_count, vb)?, }; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
8
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/resnet/README.md
# candle-resnet A candle implementation of inference using a pre-trained [ResNet](https://arxiv.org/abs/1512.03385). This uses a classification head trained on the ImageNet dataset and returns the probabilities for the top-5 classes. ## Running an example ``` $ cargo run --example resnet --release -- --image tiger.jpg loaded image Tensor[dims 3, 224, 224; f32] model built tiger, Panthera tigris : 90.21% tiger cat : 8.93% lion, king of beasts, Panthera leo: 0.35% leopard, Panthera pardus: 0.16% jaguar, panther, Panthera onca, Felis onca: 0.09% ```
9
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/concept_guides/internal_mechanism.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Accelerate's internal mechanisms Internally, Accelerate works by first analyzing the environment in which the script is launched to determine which kind of distributed setup is used, how many different processes there are and which one the current script is in. All that information is stored in the [`~AcceleratorState`]. This class is initialized the first time you instantiate an [`~Accelerator`] as well as performing any specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of [`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version it inherits) Then, when calling [`~Accelerator.prepare`], the library: - wraps your model(s) in the container adapted for the distributed setup, - wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`], - wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`] - creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`] While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other `num_processes` batches (if enabled). The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality: - it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any randomization (like shuffling) is done the exact same way across processes. - it puts the batches on the proper device before yielding them (unless you have opted out of `device_placement=True`). The [`~data_loader.DataLoaderDispatcher`] subclasses differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data is all starting from process 0 and *then* split and sent off to each process rather than it happening at the dataset level. The random number generator synchronization will by default synchronize: - the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6 - the main random number generator in PyTorch <=1.5.1 You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main [`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid setting the same seed in the main random number generator in all processes. 
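As a small sketch of what that looks like in practice (with a toy dataset, and not an official snippet from this guide), you can build the dataloader around a local `generator` and ask Accelerate to synchronize only that generator:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

from accelerate import Accelerator

dataset = TensorDataset(torch.randn(1024, 16))  # toy dataset for illustration

# a local generator keeps the seed out of the global torch RNG
generator = torch.Generator().manual_seed(42)
sampler = RandomSampler(dataset, generator=generator)
dataloader = DataLoader(dataset, sampler=sampler, batch_size=8)

# synchronize only the sampler's generator across processes
accelerator = Accelerator(rng_types=["generator"])
dataloader = accelerator.prepare(dataloader)
```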
<Tip warning={true}> Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get the same random numbers from the torch random modules (so will apply the same random data augmentation if it's controlled by torch). </Tip> <Tip> The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local `torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example. </Tip> If you have [`torchdata>=0.8.0`](https://github.com/pytorch/data/tree/main) installed, and you have passed `use_stateful_dataloader=True` into your [`~utils.DataLoaderConfiguration`], these classes will directly inherit from `StatefulDataLoader` instead, and maintain a `state_dict`. For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
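For reference, a minimal sketch of how that option is usually passed (assuming a recent Accelerate release where `DataLoaderConfiguration` exposes `use_stateful_dataloader`, with `torchdata` installed and a toy dataloader standing in for your own):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

dataloader_config = DataLoaderConfiguration(use_stateful_dataloader=True)
accelerator = Accelerator(dataloader_config=dataloader_config)

# toy dataloader purely for illustration
dataloader = DataLoader(TensorDataset(torch.arange(100)), batch_size=10)
dataloader = accelerator.prepare(dataloader)

# the prepared dataloader now maintains a state_dict that can be saved and restored
state = dataloader.state_dict()
dataloader.load_state_dict(state)
```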
0
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/concept_guides/training_tpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Training on TPUs Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you where you should be careful and why, as well as the best practices in general. ## Training in a Notebook The main carepoint when training on TPUs comes from the [`notebook_launcher`]. As mentioned in the [notebook tutorial](../usage_guides/notebook), you need to restructure your training code into a function that can get passed to the [`notebook_launcher`] function and be careful about not declaring any tensors on the GPU. While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called **forking**. When launching from the command-line, you perform **spawning**, where a python process is not currently running and you *spawn* a new process in. Since your Jupyter notebook is already utilizing a python process, you need to *fork* a new process from it to launch your code. Where this becomes important is in regard to declaring your model. On forked TPU processes, it is recommended that you instantiate your model *once* and pass this into your training function. This is different than training on GPUs where you create `n` models that have their gradients synced and back-propagated at certain moments. Instead, one model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or on Google Colaboratory. Below is an example of a training function passed to the [`notebook_launcher`] if training on CPUs or GPUs: <Tip> This code snippet is based off the one from the `simple_nlp_example` notebook found [here](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) with slight modifications for the sake of simplicity </Tip> ```python def training_function(): # Initialize accelerator accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) train_dataloader, eval_dataloader = create_dataloaders( train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"] ) # Instantiate optimizer optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"]) # Prepare everything # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the # prepare method. 
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader ) num_epochs = hyperparameters["num_epochs"] # Now we train the model for epoch in range(num_epochs): model.train() for step, batch in enumerate(train_dataloader): outputs = model(**batch) loss = outputs.loss accelerator.backward(loss) optimizer.step() optimizer.zero_grad() ``` ```python from accelerate import notebook_launcher notebook_launcher(training_function) ``` <Tip> The `notebook_launcher` will default to 8 processes if Accelerate has been configured for a TPU </Tip> If you use this example and declare the model *inside* the training loop, then on a low-resource system you will potentially see an error like: ``` ProcessExitedException: process 0 terminated with signal SIGSEGV ``` This error is *extremely* cryptic but the basic explanation is you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to accept a single `model` argument, and declare it in an outside cell: ```python # In another Jupyter cell model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) ``` ```diff + def training_function(model): # Initialize accelerator accelerator = Accelerator() - model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) train_dataloader, eval_dataloader = create_dataloaders( train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"] ) ... ``` And finally calling the training function with: ```diff from accelerate import notebook_launcher - notebook_launcher(training_function) + notebook_launcher(training_function, (model,)) ``` <Tip> The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If using a script or launching on a much beefier server declaring the model beforehand is not needed. </Tip> ## Mixed Precision and Global Variables As mentioned in the [mixed precision tutorial](../usage_guides/mixed_precision), Accelerate supports fp16 and bf16, both of which can be used on TPUs. That being said, ideally `bf16` should be utilized as it is extremely efficient to use. There are two "layers" when using `bf16` and Accelerate on TPUs, at the base level and at the operation level. At the base level, this is enabled when passing `mixed_precision="bf16"` to `Accelerator`, such as: ```python accelerator = Accelerator(mixed_precision="bf16") ``` By default, this will cast `torch.float` and `torch.double` to `bfloat16` on TPUs. The specific configuration being set is an environmental variable of `XLA_USE_BF16` is set to `1`. There is a further configuration you can perform which is setting the `XLA_DOWNCAST_BF16` environmental variable. If set to `1`, then `torch.float` is `bfloat16` and `torch.double` is `float32`. This is performed in the `Accelerator` object when passing `downcast_bf16=True`: ```python accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True) ``` Using downcasting instead of bf16 everywhere is good for when you are trying to calculate metrics, log values, and more where raw bf16 tensors would be unusable. ## Training Times on TPUs As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs first run through a few batches of data to see how much memory to allocate before finally utilizing this configured memory allocation extremely efficiently. 
If you notice that your evaluation code to calculate the metrics of your model takes longer due to a larger batch size being used, it is recommended to keep the batch size the same as the training data if it is too slow. Otherwise the memory will reallocate to this new batch size after the first few iterations. <Tip> Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader. </Tip>
1
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/concept_guides/gradient_synchronization.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Gradient synchronization

PyTorch's distributed module operates by communicating back and forth between all of the GPUs in your system.
This communication takes time, and ensuring all processes know the states of each other happens at particular triggerpoints
when using the `ddp` module.

These triggerpoints are added to the PyTorch model, specifically its `forward()` and `backward()` methods.
This happens when the model is wrapped with `DistributedDataParallel`:

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

model = nn.Linear(10, 10)
ddp_model = DistributedDataParallel(model)
```

In Accelerate this conversion happens automatically when calling [`~Accelerator.prepare`] and passing in your model.

```diff
+ from accelerate import Accelerator
+ accelerator = Accelerator()
  import torch.nn as nn
- from torch.nn.parallel import DistributedDataParallel

  model = nn.Linear(10, 10)
+ model = accelerator.prepare(model)
```

## The slowdown in gradient accumulation

You now understand that PyTorch adds hooks to the `forward` and `backward` methods of your PyTorch model when
training in a distributed setup. But how does this risk slowing down your code?

In DDP (distributed data parallel), processes are expected to reach specific synchronization points in a particular order,
and they must all reach those points at roughly the same time before moving on.

The most direct example is when you update model parameters through `optimizer.step()`.
Without gradient accumulation, all instances of the model need to have their gradients computed, collated, and synchronized
before moving on to the next batch of data.
When performing gradient accumulation, you accumulate `n` loss gradients and skip `optimizer.step()` until `n` batches have
been reached. Since all training processes only need to synchronize by the time `optimizer.step()` is called, synchronizing
gradients on every intermediate `backward()` call is needless inter-process communication that can cause a significant slowdown.

How can you avoid this overhead?

## Solving the slowdown problem

Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized
until the point where `optimizer.step()` is actually called.
PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the
[`no_sync`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync) context manager
that is added to your model after converting it to DDP.

Under this context manager, PyTorch will skip synchronizing the gradients when `.backward()` is called, and the first call to
`.backward()` outside this context manager will trigger the synchronization.
See an example below:

```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for index, batch in enumerate(dataloader):
    inputs, targets = batch
    # Trigger gradient synchronization on the last batch
    if index != (len(dataloader) - 1):
        with ddp_model.no_sync():
            # Gradients only accumulate
            outputs = ddp_model(inputs)
            loss = loss_func(outputs, targets)
            accelerator.backward(loss)
    else:
        # Gradients finally sync
        outputs = ddp_model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```

To make this an API that can be called no matter the training device (though it may do nothing if you are not in a distributed system!), Accelerate replaces `ddp_model.no_sync` with [`~Accelerator.no_sync`], which operates the same way:

```diff
  ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

  for index, batch in enumerate(dataloader):
      inputs, targets = batch
      # Trigger gradient synchronization on the last batch
      if index != (len(dataloader) - 1):
-         with ddp_model.no_sync():
+         with accelerator.no_sync(model):
              # Gradients only accumulate
              outputs = ddp_model(inputs)
              loss = loss_func(outputs, targets)
              accelerator.backward(loss)
      else:
          # Gradients finally sync
          outputs = ddp_model(inputs)
          loss = loss_func(outputs, targets)
          accelerator.backward(loss)
          optimizer.step()
          optimizer.zero_grad()
```

As you may expect, the [`~Accelerator.accumulate`] function wraps around this conditional check by keeping track of the current batch number, leaving you with the final gradient accumulation API:

```python
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for batch in dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_func(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```

As a result, when it comes to API choice you should use either `accelerator.accumulate` or `accelerator.no_sync`.

## Just how much of a slowdown is there, and easy mistakes you can make

To set up a realistic example, consider the following setup:

* Two single-GPU T4 nodes and one node with two GPUs
* Each GPU is a T4 and is hosted on GCP
* The script used is a modification of the [NLP Example](https://github.com/muellerzr/timing_experiments/blob/main/baseline.py) script
* Batch size per GPU is 16, and gradients are accumulated every 4 steps

All scripts are available in [this repository](https://github.com/muellerzr/timing_experiments).

If you are not careful about gradient synchronization and GPU communication, a *large* amount of time can be wasted when these GPUs communicate with each other at unnecessary times.

By how much?
Reference:
- Baseline: does not use any of the synchronization practices discussed here
- `no_sync` improperly: `no_sync` only around the `backward` call, not the `forward`
- `no_sync`: using the `no_sync` pattern properly
- `accumulate`: using [`~Accelerator.accumulate`] properly

Below are the average seconds per batch iterating over 29 batches of data for each setup on both a single node and on the dual-node setup:

|             | Baseline  | `no_sync` improperly | `no_sync` | `accumulate`|
| :---------: | :-------: | :------------------: | :-------: | :---------: |
| Multi-Node  | 2±0.01s    | 2.13±0.08s          | **0.91±0.11s** | **0.91±0.11s** |
| Single Node | 0.50±0.01s | 0.50±0.01s          | **0.41±0.015s** | **0.41±0.015s** |

As you can see, if you are not careful about how you set up your gradient synchronization, you can get more than a 2x slowdown during training!

If you are worried about making sure everything is done properly, we highly recommend utilizing the [`~Accelerator.accumulate`] function and passing `gradient_accumulation_steps` or `gradient_accumulation_plugin` to the [`Accelerator`] object so Accelerate can handle this for you.

### `no_sync` requires additional GPU memory when using FSDP

Be aware that not syncing gradients can have adverse effects while performing FSDP training. As warned in `torch`, the [`no_sync` context manager for FSDP](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel.no_sync) will require additional memory.

Therefore, in memory-intensive situations while using FSDP, we recommend setting `sync_each_batch` to `True` in the [`~utils.GradientAccumulationPlugin`] to disable `no_sync`.

See the example below where we fine-tune Mixtral (47B parameters) on 8 A100-80GB GPUs. We see that even with a modest `gradient_accumulation_steps=2` we quickly go out-of-memory (OOM) if `no_sync` is enabled. Again, this is due to the additional memory overhead of FSDP's `no_sync`. However, if `no_sync` is disabled via `sync_each_batch=True`, then the memory consumption for `gradient_accumulation_steps=16` reverts to that of `gradient_accumulation_steps=1`.

| Model           | `no_sync` (accum=1) | `no_sync` (accum=2) | `no_sync` disabled (accum=16) |
| :-------------: | :-----------------: | :-----------------: | :---------------------------: |
| mixtral 8x7B    | 69G                 | OOM                 | 69G                           |

> [!WARNING]
> Disabling `no_sync` means there _will be a slowdown_ due to the extra data syncs, as explained in the earlier sections of this guide.
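For reference, a minimal sketch of the recommended setup (accumulate over 16 batches, but force a gradient sync on every batch so FSDP never enters `no_sync`):

```python
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# Trade a little extra communication for the GPU memory that FSDP's `no_sync`
# would otherwise require.
plugin = GradientAccumulationPlugin(num_steps=16, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```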
2
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/concept_guides/performance.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Comparing performance across distributed setups Evaluating and comparing the performance from different setups can be quite tricky if you don't know what to look for. For example, you cannot run the same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate and expect your results to line up. But why? There are three reasons for this that this tutorial will cover: 1. **Setting the right seeds** 2. **Observed Batch Sizes** 3. **Learning Rates** ## Setting the Seed While this issue has not come up as much, make sure to use [`utils.set_seed`] to fully set the seed in all distributed cases so training will be reproducible: ```python from accelerate.utils import set_seed set_seed(42) ``` Why is this important? Under the hood this will set **5** different seed settings: ```python random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) # ^^ safe to call this function even if cuda is not available if is_torch_xla_available(): xm.set_rng_state(seed) ``` The random state, numpy's state, torch, torch's cuda state, and if TPUs are available torch_xla's cuda state. ## Observed Batch Sizes When training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. What this entails is a batch size of 64 on two GPUs is truly a batch size of 128. As a result, when testing on a single GPU this needs to be accounted for, as well as similarly for TPUs. The below table can be used as a quick reference to try out different batch sizes: <Tip> In this example, there are two GPUs for "Multi-GPU" and a TPU pod with 8 workers </Tip> | Single GPU Batch Size | Multi-GPU Equivalent Batch Size | TPU Equivalent Batch Size | |-----------------------|---------------------------------|---------------------------| | 256 | 128 | 32 | | 128 | 64 | 16 | | 64 | 32 | 8 | | 32 | 16 | 4 | ## Learning Rates As noted in multiple sources[[1](https://aws.amazon.com/blogs/machine-learning/scalable-multi-node-deep-learning-training-using-gpus-in-the-aws-cloud/)][[2](https://docs.nvidia.com/clara/clara-train-sdk/pt/model.html#classification-models-multi-gpu-training)], the learning rate should be scaled *linearly* based on the number of devices present. The below snippet shows doing so with Accelerate: <Tip> Since users can have their own learning rate schedulers defined, we leave this up to the user to decide if they wish to scale their learning rate or not. </Tip> ```python learning_rate = 1e-3 accelerator = Accelerator() learning_rate *= accelerator.num_processes optimizer = AdamW(params=model.parameters(), lr=learning_rate) ``` You will also find that `accelerate` will step the learning rate based on the number of processes being trained on. This is because of the observed batch size noted earlier. 
So in the case of 2 GPUs, the learning rate will be stepped twice as often as on a single GPU to account for the batch size being twice as large (assuming no changes are made to the batch size on the single-GPU instance).

## Gradient Accumulation and Mixed Precision

When using gradient accumulation and mixed precision, some degradation in performance is expected due to how gradient averaging works (accumulation) and the loss of precision (mixed precision). This will be most visible when comparing the batch-wise loss between different compute setups. However, the overall loss, metric, and general performance at the end of training should be _roughly_ the same.
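To make the relationship between these knobs concrete, here is a small sketch of how the observed (global) batch size is usually computed when gradient accumulation is also in play — the variable names are illustrative, not part of the Accelerate API:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# Illustrative values: the observed batch size is what should be held constant
# when comparing runs across single-GPU, multi-GPU, and TPU setups.
per_device_batch_size = 16
gradient_accumulation_steps = 4
observed_batch_size = (
    per_device_batch_size * accelerator.num_processes * gradient_accumulation_steps
)
print(observed_batch_size)  # e.g. 16 * 2 processes * 4 steps = 128
```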
3
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/deepspeed.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# DeepSpeed utilities

## DeepSpeedPlugin

[[autodoc]] utils.DeepSpeedPlugin

## get_active_deepspeed_plugin

[[autodoc]] utils.get_active_deepspeed_plugin

## DeepSpeedEngineWrapper

[[autodoc]] utils.deepspeed.DeepSpeedEngineWrapper

## DeepSpeedOptimizerWrapper

[[autodoc]] utils.deepspeed.DeepSpeedOptimizerWrapper

## DeepSpeedSchedulerWrapper

[[autodoc]] utils.deepspeed.DeepSpeedSchedulerWrapper

## DummyOptim

[[autodoc]] utils.deepspeed.DummyOptim

## DummyScheduler

[[autodoc]] utils.deepspeed.DummyScheduler
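For quick orientation, a minimal usage sketch of the plugin documented above — this assumes `deepspeed` is installed and shows only two of the available fields:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# A sketch, not a full recipe: ZeRO stage 2 with gradient accumulation,
# handed to the Accelerator instead of a separate DeepSpeed config file.
deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)
```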
4
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/utilities.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Utility functions and classes Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case. ## Constants Constants used throughout 🤗 Accelerate for reference The following are constants used when utilizing [`Accelerator.save_state`] `utils.MODEL_NAME`: `"pytorch_model"` `utils.OPTIMIZER_NAME`: `"optimizer"` `utils.RNG_STATE_NAME`: `"random_states"` `utils.SCALER_NAME`: `"scaler.pt` `utils.SCHEDULER_NAME`: `"scheduler` The following are constants used when utilizing [`Accelerator.save_model`] `utils.WEIGHTS_NAME`: `"pytorch_model.bin"` `utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"` `utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"` `utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"` ## Data Classes These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters. ### Standalone These are standalone dataclasses used for checks, such as the type of distributed system being used [[autodoc]] utils.ComputeEnvironment [[autodoc]] utils.DistributedType [[autodoc]] utils.DynamoBackend [[autodoc]] utils.LoggerType [[autodoc]] utils.PrecisionType [[autodoc]] utils.RNGType [[autodoc]] utils.SageMakerDistributedType ### Kwargs These are configurable arguments for specific interactions throughout the PyTorch ecosystem that Accelerate handles under the hood. [[autodoc]] utils.AutocastKwargs [[autodoc]] utils.DistributedDataParallelKwargs [[autodoc]] utils.FP8RecipeKwargs [[autodoc]] utils.GradScalerKwargs [[autodoc]] utils.InitProcessGroupKwargs [[autodoc]] utils.KwargsHandler ## Plugins These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation, for convenience all of them are available to see here: [[autodoc]] utils.DeepSpeedPlugin [[autodoc]] utils.FullyShardedDataParallelPlugin [[autodoc]] utils.GradientAccumulationPlugin [[autodoc]] utils.MegatronLMPlugin [[autodoc]] utils.TorchDynamoPlugin ## Configurations These are classes which can be configured and passed through to the appropriate integration [[autodoc]] utils.BnbQuantizationConfig [[autodoc]] utils.DataLoaderConfiguration [[autodoc]] utils.ProjectConfiguration ## Environmental Variables These are environmental variables that can be enabled for different use cases * `ACCELERATE_DEBUG_MODE` (`str`): Whether to run accelerate in debug mode. More info available [here](../usage_guides/debug.md). ## Data Manipulation and Operations These include data operations that mimic the same `torch` ops but can be used on distributed processes. 
[[autodoc]] utils.broadcast [[autodoc]] utils.broadcast_object_list [[autodoc]] utils.concatenate [[autodoc]] utils.convert_outputs_to_fp32 [[autodoc]] utils.convert_to_fp32 [[autodoc]] utils.gather [[autodoc]] utils.gather_object [[autodoc]] utils.get_grad_scaler [[autodoc]] utils.get_mixed_precision_context_manager [[autodoc]] utils.listify [[autodoc]] utils.pad_across_processes [[autodoc]] utils.recursively_apply [[autodoc]] utils.reduce [[autodoc]] utils.send_to_device [[autodoc]] utils.slice_tensors ## Environment Checks These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed. [[autodoc]] utils.is_bf16_available [[autodoc]] utils.is_ipex_available [[autodoc]] utils.is_mps_available [[autodoc]] utils.is_npu_available [[autodoc]] utils.is_torch_version [[autodoc]] utils.is_torch_xla_available [[autodoc]] utils.is_xpu_available ## Environment Manipulation [[autodoc]] utils.patch_environment [[autodoc]] utils.clear_environment [[autodoc]] utils.write_basic_config When setting up 🤗 Accelerate for the first time, rather than running `accelerate config` [~utils.write_basic_config] can be used as an alternative for quick configuration. [[autodoc]] utils.set_numa_affinity [[autodoc]] utils.environment.override_numa_affinity [[autodoc]] utils.purge_accelerate_environment ## Memory [[autodoc]] utils.find_executable_batch_size ## Modeling These utilities relate to interacting with PyTorch models [[autodoc]] utils.calculate_maximum_sizes [[autodoc]] utils.compute_module_sizes [[autodoc]] utils.extract_model_from_parallel [[autodoc]] utils.get_balanced_memory [[autodoc]] utils.get_max_layer_size [[autodoc]] utils.infer_auto_device_map [[autodoc]] utils.load_checkpoint_in_model [[autodoc]] utils.load_offloaded_weights [[autodoc]] utils.load_state_dict [[autodoc]] utils.offload_state_dict [[autodoc]] utils.retie_parameters [[autodoc]] utils.set_module_tensor_to_device ## Parallel These include general utilities that should be used when working in parallel. [[autodoc]] utils.extract_model_from_parallel [[autodoc]] utils.save [[autodoc]] utils.load [[autodoc]] utils.wait_for_everyone ## Random These utilities relate to setting and synchronizing of all the random states. [[autodoc]] utils.set_seed [[autodoc]] utils.synchronize_rng_state [[autodoc]] utils.synchronize_rng_states ## PyTorch XLA These include utilities that are useful while using PyTorch with XLA. [[autodoc]] utils.install_xla ## Loading model weights These include utilities that are useful to load checkpoints. [[autodoc]] utils.load_checkpoint_in_model ## Quantization These include utilities that are useful to quantize model. [[autodoc]] utils.load_and_quantize_model
5
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/megatron_lm.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Megatron-LM utilities ## MegatronLMPlugin [[autodoc]] utils.MegatronLMPlugin ## MegatronLMDummyScheduler [[autodoc]] utils.MegatronLMDummyScheduler ## MegatronLMDummyDataLoader [[autodoc]] utils.MegatronLMDummyDataLoader ## AbstractTrainStep [[autodoc]] utils.AbstractTrainStep ## GPTTrainStep [[autodoc]] utils.GPTTrainStep ## BertTrainStep [[autodoc]] utils.BertTrainStep ## T5TrainStep [[autodoc]] utils.T5TrainStep ## avg_losses_across_data_parallel_group [[autodoc]] utils.avg_losses_across_data_parallel_group
6
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/accelerator.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Accelerator The [`Accelerator`] is the main class for enabling distributed training on any type of training setup. Read the [Add Accelerator to your code](../basic_tutorials/migration) tutorial to learn more about how to add the [`Accelerator`] to your script. ## Accelerator[[api]] [[autodoc]] Accelerator ## Utilities [[autodoc]] accelerate.utils.gather_object
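For quick reference alongside the tutorial linked above, the canonical pattern is sketched below with toy objects so it runs end to end — swap in your real model, optimizer, and data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Toy objects standing in for your real training setup.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(
    TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,))), batch_size=8
)

# `prepare` wraps the objects for whatever device/distributed setup is active.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)
    optimizer.step()
```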
7
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/torch_wrappers.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DataLoaders, Optimizers, and Schedulers The internal classes Accelerate uses to prepare objects for distributed training when calling [`~Accelerator.prepare`]. ## DataLoader utilities [[autodoc]] data_loader.prepare_data_loader [[autodoc]] data_loader.skip_first_batches ## BatchSamplerShard [[autodoc]] data_loader.BatchSamplerShard ## IterableDatasetShard [[autodoc]] data_loader.IterableDatasetShard ## DataLoaderShard [[autodoc]] data_loader.DataLoaderShard ## DataLoaderDispatcher [[autodoc]] data_loader.DataLoaderDispatcher ## AcceleratedOptimizer [[autodoc]] optimizer.AcceleratedOptimizer ## AcceleratedScheduler [[autodoc]] scheduler.AcceleratedScheduler
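As a pointer for how these pieces are typically reached from user code, here is a small sketch of `skip_first_batches`, which is useful when resuming training mid-epoch (this assumes the top-level `accelerate` export shown below):

```python
from torch.utils.data import DataLoader
from accelerate import skip_first_batches

dataloader = DataLoader(list(range(40)), batch_size=4)

# Skip the batches consumed before a checkpoint was saved, so a resumed run
# does not repeat them within the current epoch.
resumed = skip_first_batches(dataloader, num_batches=3)
for batch in resumed:
    print(batch)  # iteration starts from the 4th batch
```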
8
0
hf_public_repos/accelerate/docs/source
hf_public_repos/accelerate/docs/source/package_reference/launchers.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Launchers Functions for launching training on distributed processes. ## notebook_launcher [[autodoc]] accelerate.notebook_launcher ## debug_launcher [[autodoc]] accelerate.debug_launcher
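To complement the reference above, a hedged sketch of launching a function from a notebook, mirroring the pattern used in the notebook tutorial earlier in these docs:

```python
from accelerate import notebook_launcher


def training_function():
    # Your Accelerate training loop goes here; keep heavyweight objects
    # (like the model) outside the function if you are on a low-RAM machine.
    print("Hello from a launched process!")


# Launch on 2 processes (e.g. 2 GPUs); on a TPU-configured setup the launcher
# defaults to 8 processes instead.
notebook_launcher(training_function, args=(), num_processes=2)
```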
9
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # Проверьте свое понимание материала курса ### 1. Что такое вокодер? <Question choices={[ { text: "Дополнительная нейронная сеть, превращающая выходную спектрограмму трансформера в осциллограмму.", explain: "Correct! ", correct: true }, { text: "Тип слоя-трансформера, отвечающий за создание эмбеддингов звука.", explain: "" }, { text: "Дополнительная нейронная сеть, осуществляющая предварительную обработку речевого аудиосигнала для удаления фонового шума", explain: "", } ]} /> ### 2. Wav2Vec2 является примером <Question choices={[ { text: "Архитектуры Seq2Seq", explain: "" }, { text: "Архитектуры CNN", explain: "" }, { text: "Архитектуры CTC", explain: "Correct!", correct: true } ]} /> ### 3. Что делает пустой токен в алгоритме CTC? <Question choices={[ { text: "Пустой токен обозначает паузы между отдельными словами в предложении.", explain: "" }, { text: "Пустой токен - это спрогнозированный токен, который служит жесткой границей между группами символов. Он позволяет отфильтровать дублирующиеся символы", explain: "Correct!", correct: true }, { text: "Пустой токен используется для звуков, не соответствующих ни одному токену в словаре, аналогично токену <UNK> для 'unknown'.", explain: "" } ]} /> ### 4. Какое из следующих утверждений о моделях CTC является ЛОЖНЫМ? <Question choices={[ { text: "В моделях CTC используется только энкодерная часть архитектуры трансформера.", explain: "" }, { text: "Wav2Vec2 и HuBERT используют абсолютно одинаковую архитектуру, но обучаются по-разному.", explain: "" }, { text: "Модели CTC, как правило, показывают лучшие результаты при распознавании речи по сравнению с другими архитектурами.", explain: "Correct!", correct: true } ]} /> ### 5. Whisper является примером <Question choices={[ { text: "Seq2Seq архитектуры", explain: "Correct!", correct: true }, { text: "CNN архитектуры", explain: "" }, { text: "CTC архитектуры", explain: "" } ]} /> ### 6. Как проще всего выполнить классификацию звука? <Question choices={[ { text: "Использовать трансформеры энкодер-декодер на форме волны звука.", explain: "" }, { text: "Использовать спектрограммы и рассматривать задачу как задачу классификации изображений.", explain: "Correct!", correct: true }, { text: "Превратить модель CTC в классификатор звука общего назначения, изменив метки и обучив ее с помощью обычной функции потерь кросс-энтропии.", explain: "" } ]} /> ### 7. Верно или нет? Если рассматривать спектрограммы как изображения для классификации, то всегда полезно использовать методы дополнения данных изображения, такие как сдвиг изображения, его обрезка или изменение размера. <Question choices={[ { text: "Правда", explain: "" }, { text: "Ложь", explain: "Correct!", correct: true } ]} />
0
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/seq2seq.mdx
# Архитектуры Seq2Seq В моделях CTC, рассмотренных в предыдущем разделе, использовалась только энкодерная часть архитектуры трансформера. В случае, когда мы добавляем декодер для создания модели энкодер-декодер, это называется моделью **последовательность-в-последовательность (sequence-to-sequence)** или сокращенно seq2seq. Модель сопоставляет последовательность данных одного вида с последовательностью данных другого вида. В моделях трансформеров, использующих только энкодер, энкодер делал предсказание для каждого элемента входной последовательности. Поэтому и входная, и выходная последовательности всегда будут иметь одинаковую длину. В случае моделей CTC, таких как Wav2Vec2, входная форма сигнала сначала подвергалась даунсемплингу, но все равно на каждые 20 мс звука приходилось одно предсказание. В модели seq2seq такого соответствия один к одному нет, и входная и выходная последовательности могут иметь разную длину. Это делает модели seq2seq пригодными для решения задач NLP, таких как резюмирование текста или перевод с одного языка на другой, а также для решения аудио задач, таких как распознавание речи. Архитектура декодера очень похожа на архитектуру энкодера, и в обоих случаях используются схожие слои, главной особенностью которых является самовнимание. Однако декодер выполняет иную задачу, чем энкодер. Чтобы понять, как это работает, рассмотрим, как модель seq2seq может выполнять автоматическое распознавание речи. ## Автоматическое распознавание речи Архитектура **Whisper** выглядит следующим образом (рисунок любезно предоставлен [блогом OpenAI Whisper](https://openai.com/blog/whisper/)): <div class="flex justify-center"> <img src="https://huggingface.co/blog/assets/111_fine_tune_whisper/whisper_architecture.svg" alt="Whisper is a transformer encoder-decoder model"> </div> Все это должно выглядеть довольно знакомо. Слева находится **энкодер трансформера**. В качестве входного сигнала принимается лог-мел спектрограмма, которая кодируется для формирования последовательности скрытых состояний энкодера, извлекающих важные признаки из произносимой речи. Этот тензор скрытых состояний представляет входную последовательность как единое целое и эффективно кодирует "смысл" поступившей на вход речи. <Tip> 💡 Обычно в таких seq2seq-моделях в качестве входных данных используются спектрограммы. Однако модель seq2seq может быть разработана и для работы непосредственно с формой волны звука. </Tip> Затем выход энкодера передается в **декодер трансформера**, показанный справа, с помощью механизма, называемого **перекрёстным вниманием (cross-attention)**. Это похоже на самовнимание (self-attention), но внимание направлено на выход энкодера. С этого момента энкодер больше не нужен. Декодер предсказывает последовательность текстовых токенов **авторегрессивным** способом, по одному токену за раз, начиная с начальной последовательности, в которой есть только "стартовый" токен (`SOT` в случае Whisper). На каждом следующем временном интервале предыдущая выходная последовательность подается обратно в декодер в качестве новой входной последовательности. Таким образом, декодер выдает по одному новому токену за раз, неуклонно наращивая выходную последовательность, пока не спрогнозирует "конечный" токен или не будет достигнуто максимальное количество временных шагов. Хотя архитектура декодера в основном идентична архитектуре кодера, есть два существенных отличия: 1. декодер имеет механизм перекрестного внимания, который позволяет ему просматривать представление энкодера о входной последовательности 2. 
внимание декодера является каузальным - декодер не имеет права заглядывать в будущее. В этом случае декодер играет роль **языковой модели**, обрабатывая представления скрытых состояний, полученные от энкодера, и генерируя соответствующие текстовые транскрипции. Это более мощный подход, чем CTC, даже если модель CTC сочетается с внешней языковой моделью, так как система seq2seq может быть обучена от начала до конца с использованием одних и тех же обучающих данных и функции потерь, что обеспечивает большую гибкость и в целом более высокую производительность. <Tip> 💡 В то время как модель CTC выводит последовательность отдельных символов, токены, предсказываемые Whisper, представляют собой полные слова или фрагменты слов. Он использует токенизатор из GPT-2 и имеет 50k+ уникальных токенов. Поэтому модель seq2seq может выдать гораздо более короткую последовательность, чем модель CTC для той же транскрипции. </Tip> Типичной функцией потерь для seq2seq ASR-модели является функция кросс-энтропии, поскольку последний слой модели предсказывает распределение вероятностей по возможным токенам. Обычно это сочетается с такими методами, как [лучевой поиск для генерации конечной последовательности](https://huggingface.co/blog/how-to-generate). Метрикой распознавания речи является WER или word error rate, которая измеряет, сколько замен, вставок и удалений необходимо для превращения предсказанного текста в целевой - чем меньше, тем лучше результат. ## Преобразование текста в речь (Text-to-speech, TTS) Возможно, это вас не удивит: Модель seq2seq для TTS работает по сути так же, как и описанная выше, но входы и выходы поменяны местами! Энкодер трансформера принимает последовательность текстовых токенов и извлекает из нее последовательность скрытых состояний, которые представляют собой входной текст. Декодер трансформера применяет перекрестное внимание к выходу энкодера и прогнозирует спектрограмму. <Tip> 💡 Напомним, что спектрограмма создается путем взятия частотного спектра последовательных временных отрезков звуковой волны и их суммирования. Другими словами, спектрограмма - это последовательность, элементами которой являются (лог-мел) частотные спектры, по одному на каждый временной интервал. </Tip> В ASR-модели декодер запускался с помощью последовательности, содержащей только специальный токен "start". Для модели TTS мы можем начать декодирование со спектрограммы длиной один, состоящей из одних нулей, которая выступает в качестве "стартового токена". Учитывая эту начальную спектрограмму и перекрестное внимание к представлениям скрытых состояний энкодера, декодер предсказывает следующий временной интервал для этой спектрограммы, постепенно увеличивая спектрограмму на один временной интервал. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5_decoding.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states"> </div> Но как декодер узнает, когда нужно остановиться? В модели **SpeechT5** это решается тем, что декодер предсказывает вторую последовательность. Она содержит вероятность того, что текущий временной шаг является последним. При генерации звука в момент инференса, если эта вероятность превышает определенный порог (скажем, 0,5), декодер сигнализирует о том, что спектрограмма закончена и цикл генерации должен завершиться. 
После завершения декодирования и получения выходной последовательности, содержащей спектрограмму, SpeechT5 использует так называемую **пост-сеть (post-net)**, состоящую из нескольких сверточных слоев, для уточнения спектрограммы. При обучении модели TTS в качестве целей также используются спектрограммы, а в качестве потерь - L1 или MSE. Во время инференса мы хотим преобразовать выходную спектрограмму в форму звукового сигнала, чтобы ее можно было прослушать. Для этого используется внешняя модель - **вокодер (vocoder)**. Этот вокодер не является частью архитектуры seq2seq и обучается отдельно. Сложность TTS заключается в том, что это отображение "один-ко-многим". При преобразовании речи в текст существует только один правильный выходной текст, соответствующий входной речи, в то время как при преобразовании текста в речь входной текст может быть сопоставлен с множеством возможных звуков речи. Например, разные дикторы могут выбирать для акцентирования внимания разные части предложения. Это затрудняет оценку моделей TTS. В связи с этим значение потерь L1 или MSE на самом деле не имеет большого смысла - существует множество способов представить один и тот же текст на спектрограмме. Именно поэтому модели TTS обычно оцениваются слушателями, используя метрику, известную как MOS (mean opinion score) или cредняя экспертная оценка. ## Заключение Подход seq2seq является более мощным, чем модель, основанная только на энкодере. Благодаря разделению входной последовательности энкодера и выходной последовательности декодера, выравнивание звука и текста становится менее проблематичным. <!-- Модель учится выполнять это выравнивание с помощью механизма внимания. --> Однако модель энкодер-декодер также является более медленной, поскольку процесс декодирования происходит по одному шагу за раз, а не все сразу. Чем длиннее последовательность, тем медленнее прогнозирование. Авторегрессивные модели также могут застревать на повторах или пропускать слова. Такие методы, как лучевой поиск, позволяют улучшить качество прогнозов, но при этом еще больше замедляют декодирование.
1
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/introduction.mdx
# Раздел 3. Архитектуры трансформеров для аудио В этом курсе мы рассмотрим, прежде всего, трансформерные модели и их применение для решения задач аудио. Хотя вам не обязательно знать внутренние детали этих моделей, полезно понимать основные концепции, обеспечивающие их работу, поэтому здесь мы приведем краткую справку. Для более глубокого погружения в трансформеры ознакомьтесь с нашим [курсом по NLP] (https://huggingface.co/course/chapter1/1). ## Как работает трансформер? Оригинальная модель трансформера предназначалась для перевода письменного текста с одного языка на другой. Ее архитектура выглядела следующим образом: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers.svg" alt="Original transformer architecture"> </div> Слева находится **энкодер**, а "справа находится **декодер**. - Энкодер получает входной сигнал, в данном случае последовательность текстовых токенов, и строит его представление (признаки). Эта часть модели обучается для получения понимания из входных данных. - Декодер использует представление кодера (признаки) вместе с другими входными данными (ранее предсказанными токенами) для генерации целевой последовательности. Эта часть модели обучается генерировать выходные данные. В оригинальном дизайне выходная последовательность состояла из текстовых лексем. Существуют также модели на основе трансформеров, использующие только энкодерную часть (хорошо подходят для задач, требующих понимания входных данных, например, для классификации) или только декодерную часть (хорошо подходят для задач, например, для генерации текста). Примером модели, использующей только энкодер, является BERT, а примером модели, использующей только декодер, является GPT2. Ключевой особенностью трансформерных моделей является то, что при их построении используются специальные слои, называемые **слоями внимания (attention layers)**. Эти слои указывают модели на необходимость уделять особое внимание определенным элементам входной последовательности и игнорировать другие при вычислении представлений признаков. ## Использование трансформеров для аудио Аудио модели, которые мы рассмотрим в этом курсе, обычно имеют стандартную архитектуру трансформера, как показано выше, но с небольшими изменениями на входе или выходе, позволяющими использовать аудио данные вместо текста. Поскольку все эти модели по своей сути являются трансформерами, большая часть их архитектуры будет общей, а основные различия заключаются в способах их обучения и использования. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/transformers_blocks.png" alt="The transformer with audio input and output"> </div> Для задач, связанных с аудио, входные и/или выходные последовательности могут быть не текстовыми, а звуковыми: - Автоматическое распознавание речи (Automatic speech recognition - ASR): На входе - речь, на выходе - текст. - Синтез речи (Text To Speech - TTS): На входе - текст, на выходе - речь. - Классификация аудио: На входе - аудио, на выходе - вероятность класса - по одному для каждого элемента в последовательности или единая вероятность класса для всей последовательности. - Преобразование голоса или улучшение речи: И на входе, и на выходе - аудио. Существует несколько различных способов обработки аудио, чтобы его можно было использовать с трансформером. 
Основное внимание уделяется тому, использовать ли звук в исходном виде - как форму волны - или вместо этого обработать его как спектрограмму. ## Входы модели Входными данными для аудио модели могут быть как текстом, так и звуком. Задача состоит в том, чтобы преобразовать эти входные данные в вектор эмбединга, который может быть обработан архитектурой трансформера. ### Текстовый ввод Модель преобразования текста в речь принимает текст на вход. Она работает так же, как и оригинальный трансформер или любая другая модель NLP: Входной текст сначала подвергается токенизации, в результате чего получается последовательность текстовых токенов. Эта последовательность проходит через слой эмбединга, который преобразует токены в 512-мерные векторы. Затем эти векторы эмбеддинга передаются в энкодер трансформера. ### Входной сигнал в форме волны Модель автоматического распознавания речи принимает на вход аудиосигнал. Для того чтобы использовать трансформер для ASR, необходимо сначала каким-то образом преобразовать звук в последовательность векторов эмбеддинга. Такие модели, как **Wav2Vec2** и **HuBERT**, используют непосредственно форму волны звукового сигнала в качестве входного сигнала для модели. Как вы уже видели в [главе, посвященной аудиоданным](../chapter1/introduction), форма волны представляет собой одномерную последовательность чисел с плавающей точкой, где каждое число представляет собой амплитуду дискретизации в данный момент времени. Эта необработанная форма волны сначала нормализуется до нулевого среднего и единичной дисперсии, что позволяет стандартизировать аудио образцы разной громкости (амплитуды). <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-input.png" alt="Wav2Vec2 uses a CNN to create embeddings from the input waveform"> </div> После нормализации последовательность аудио образцов они превращается в эмбединг с помощью небольшой сверточной нейронной сети, называемой энкодером признаков (feature encoder). Каждый из сверточных слоев этой сети обрабатывает входную последовательность, субсэмплируя звук для уменьшения длины последовательности, пока последний сверточный слой не выдает 512-мерный вектор с эмбдингами для каждых 25 мс звука. После преобразования входной последовательности в последовательность таких эмбеддингов трансформер обрабатывает данные обычным образом. ### Ввод спектрограмм Недостатком использования в качестве входных данных необработанной формы волны является то, что они, как правило, имеют большую длину последовательности. Например, тридцать секунд звука с частотой дискретизации 16 кГц дают входной сигнал длиной `30 * 16000 = 480000`. Большая длина последовательности требует большего количества вычислений в модели трансформера, а значит, и большего объема памяти. В связи с этим необработанные формы звуковых сигналов, как правило, не являются наиболее эффективной формой представления входного аудиосигнала. Используя спектрограмму, мы получаем тот же объем информации, но в более сжатом виде. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/whisper-input.png" alt="Whisper uses a CNN to create embeddings from the input spectrogram"> </div> Модели типа **Whisper** сначала преобразуют форму волны в лог-мел спектрограмму. 
Whisper всегда разбивает звук на 30-секундные сегменты, и лог-мел спектрограмма для каждого сегмента имеет форму `(80, 3000)`, где 80 - количество столбцов mel, а 3000 - длина последовательности. Преобразовав в лог-мел спектрограмму, мы уменьшили объем входных данных, но, что более важно, эта последовательность гораздо короче, чем исходная форма сигнала. Затем лог-мел спектрограмма обрабатывается небольшой CNN в последовательность эмбдингов, которая, как обычно, поступает в трансформер. В обоих случаях, как при вводе формы волны, так и спектрограммы, перед трансформером имеется небольшая сеть, которая преобразует входной сигнал в эмбеддинги, после чего трансформер начинает выполнять свою работу. ## Выходы модели Архитектура трансформера выдает на выходе последовательность векторов скрытых состояний (hidden-state vectors), также известных как эмбеддинги на выходе. Наша цель - преобразовать эти векторы в текст или аудиоданные. ### Вывод текста Цель модели автоматического распознавания речи - предсказать последовательность текстовых токенов. Для этого на выход трансформера добавляется голова языковой модели - как правило, один линейный слой - с последующим softmax. Таким образом, прогнозируются вероятности для текстовых токенов в словаре. ### Вывод спектрограммы Для моделей, генерирующих звук, таких как модель преобразования текста в речь (TTS), необходимо добавить слои, которые могут генерировать звуковую последовательность. Очень часто генерируется спектрограмма, а затем используется дополнительная нейронная сеть, известная как вокодер, для преобразования этой спектрограммы в форму волны. Например, в модели TTS **SpeechT5** выходной сигнал трансформера представляет собой последовательность 768-элементных векторов. Линейный слой проецирует эту последовательность в лог-мел спектрограмму. Так называемая пост-сеть, состоящая из дополнительных линейных и сверточных слоев, уточняет спектрограмму за счет уменьшения шума. Затем вокодер формирует конечную форму звукового сигнала. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5.png" alt="SpeechT5 outputs a spectrogram and uses a vocoder to create the waveform"> </div> <Tip> 💡 Если взять существующую форму сигнала и применить к ней Оконное преобразование Фурье или ОПФ, то можно выполнить обратную операцию, ООПФ, чтобы снова получить исходную форму сигнала. Это работает потому, что спектрограмма, созданная в результате ОПФ, содержит информацию как об амплитуде, так и о фазе, а для восстановления формы волны необходимо и то, и другое. Однако аудиомодели, генерирующие выходной сигнал в виде спектрограммы, обычно предсказывают только амплитудную информацию, но не фазовую. Чтобы превратить такую спектрограмму в форму волны, необходимо каким-то образом оценить фазовую информацию. Этим и занимается вокодер. </Tip> ### Вывод формы волны Также существует возможность для моделей напрямую выводить форму волны вместо спектрограммы в качестве промежуточного шага, но в настоящее время в 🤗 Transformers нет ни одной модели, которая бы это делала. ## Заключение Подведем итоги: большинство моделей аудио трансформеров скорее похожи друг на друга, чем отличаются - все они построены на одной и той же архитектуре трансформера и слоях внимания, хотя в некоторых моделях используется только энкодерная часть трансформера, а в других - и энкодер, и декодер. Вы также увидели, как вводить и выводить аудиоданные из трансформерных моделей. Для выполнения различных аудиозадач ASR, TTS и т.д. 
мы можем просто заменять слои, которые преобразуют входные данные в эмбеддинги, и слои, которые превращают предсказанные эмбеддинги в выходные данные, в то время как основа трансформера остается неизменной. Далее мы рассмотрим несколько различных способов обучения этих моделей для автоматического распознавания речи.
2
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter4/demo.mdx
# Создание демонтрационного образца с Gradio В этом заключительном разделе, посвященном классификации звука, мы построим демонстрационный пример [Gradio](https://gradio.app) чтобы продемонстрировать модель классификации музыки, которую мы только что обучили на наборе данных [GTZAN](https://huggingface.co/datasets/marsyas/gtzan). Первое, что необходимо сделать, это загрузить контрольную точку дообученной модели с помощью класса `pipeline()` - это уже хорошо знакомо по разделу [Предварительно обученные модели и наборы данных для классификации звука](classification_models). Вы можете изменить `model_id` на пространство имен вашей дообученной модели на Hugging Face Hub: ```python from transformers import pipeline model_id = "sanchit-gandhi/distilhubert-finetuned-gtzan" pipe = pipeline("audio-classification", model=model_id) ``` Во-вторых, мы определим функцию, которая принимает путь к файлу для входного аудиосигнала и пропускает его через конвейер. Здесь конвейер автоматически позаботится о том, чтобы загрузить аудиофайл, передискретизировать его до нужной частоты дискретизации и выполнить вывод с помощью модели. Мы берем предсказания модели `preds` и оформляем их в виде словаря для отображения на выходе: ```python def classify_audio(filepath): preds = pipe(filepath) outputs = {} for p in preds: outputs[p["label"]] = p["score"] return outputs ``` Наконец, мы запускаем демонстрационную программу Gradio с помощью функции, которую мы только что определили: ```python import gradio as gr demo = gr.Interface( fn=classify_audio, inputs=gr.Audio(type="filepath"), outputs=gr.outputs.Label() ) demo.launch(debug=True) ``` В результате будет запущена демонстрация Gradio, аналогичная той, что работает на Hugging Face Space: <iframe src="https://course-demos-song-classifier.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
3
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter4/classification_models.mdx
# Предварительно обученные модели и наборы данных для классификации звука Hugging Face Hub содержит более 500 предварительно обученных моделей для классификации звука. В этом разделе мы рассмотрим несколько наиболее распространенных задач классификации звука и предложим для каждой из них подходящие предварительно обученные модели. Использование `pipeline()` позволяет легко переключаться между моделями и задачами - как только вы узнаете, как использовать `pipeline()` для одной модели, вы сможете использовать его для любой модели на Hugging Face Hub без изменений кода! Это делает эксперименты с `pipeline()` чрезвычайно быстрыми, позволяя быстро выбрать наилучшую предварительно обученную модель для ваших нужд. Прежде чем перейти к рассмотрению различных задач классификации звука, давайте кратко перечислим обычно используемые архитектуры трансформеров. Стандартная архитектура классификации звука обусловлена характером задачи; мы хотим преобразовать последовательность входных аудиосигналов (т.е. наш входной массив аудиосигналов) в предсказание метки одного из классов. Модели, архитектура которых состоит только из кодировщика, сначала преобразуют входную звуковую последовательность в последовательность представлений скрытых состояний, пропуская входные сигналы через блок трансформации. Последовательность представлений скрытых состояний преобразуется в выходную метку класса путем взятия среднего значения по скрытым состояниям и пропускания полученного вектора через слой линейной классификации. Поэтому для классификации аудиосигналов предпочтение отдается моделям, архитектура которых состоит только из _кодировщика_. Модели, архитектура которых состоит только из декодировщика излишне усложняют задачу, поскольку предполагают что выходы могут быть в том числе и _последовательностью_ предсказаний (а не одним предсказанием метки класса), и поэтому генерируют несколько выходов. Поэтому они имеют более низкую скорость вывода и, как правило, не используются. По этой же причине модели кодеровщик-декодировщик в значительной степени не рассматриваются. Такой выбор архитектуры аналогичен выбору в NLP, где для задач классификации последовательностей предпочтение отдается только моделям-кодировщикам, таким как [BERT](https://huggingface.co/blog/bert-101), а для задач генерации последовательностей - только моделям-декодировщикам, таким как GPT. Теперь, когда мы рассказали о стандартной архитектуре трансформеров для классификации звука, перейдем к рассмотрению различных подмножеств классификации звука и наиболее популярных моделей! ## 🤗 Установка библиотеки Transformers На момент написания статьи последние обновления, необходимые для работы конвейера классификации звука, находятся только в `main` ветке репозитория 🤗 Transformers, а не в последней версии PyPi. Чтобы убедиться в наличии этих обновлений локально, мы установим Transformers из ветки `main` следующей командой: ``` pip install git+https://github.com/huggingface/transformers ``` ## Поиск ключевых слов Поиск ключевых слов (Keyword Spotting, KWS) - это задача идентификации ключевого слова в произносимой речи. Набор возможных ключевых слов формирует набор прогнозируемых меток классов. Поэтому для использования предварительно обученной модели выделения ключевых слов необходимо убедиться, что ваши ключевые слова совпадают с теми, на которых модель была предварительно обучена. Ниже мы представим два набора данных и модели для выявления ключевых слов. 
### Minds-14 Воспользуемся тем же набором данных [MINDS-14](https://huggingface.co/datasets/PolyAI/minds14), который вы исследовали в предыдущем разделе. Если вы помните, MINDS-14 содержит записи людей, задающих вопросы системе дистанционного банковского обслуживания на нескольких языках и диалектах, и для каждой записи имеет значение `intent_class`. Мы можем классифицировать записи по намерению звонящего. ```python from datasets import load_dataset minds = load_dataset("PolyAI/minds14", name="en-AU", split="train") ``` Загрузим контрольную точку [`"anton-l/xtreme_s_xlsr_300m_minds14"`](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14), которая представляет собой XLS-R-модель, дообученную на MINDS-14 в течение примерно 50 эпох. На оценочной выборке набора MINDS-14 она достигает 90% по метрике accuracy по всем языкам. ```python from transformers import pipeline classifier = pipeline( "audio-classification", model="anton-l/xtreme_s_xlsr_300m_minds14", ) ``` Наконец, мы можем передать сэмпл в конвейер классификации, чтобы сделать предсказание: ```python classifier(minds[0]["path"]) ``` **Output:** ``` [ {"score": 0.9631525278091431, "label": "pay_bill"}, {"score": 0.02819698303937912, "label": "freeze"}, {"score": 0.0032787492964416742, "label": "card_issues"}, {"score": 0.0019414445850998163, "label": "abroad"}, {"score": 0.0008378693601116538, "label": "high_value_payment"}, ] ``` Отлично! Мы определили, что целью звонка была оплата счета, с вероятностью 96%. Можно представить, что подобная система выявления ключевых слов используется в качестве первого этапа автоматизированного центра обработки вызовов (call-центр), где мы хотим классифицировать входящие звонки клиентов в зависимости от их запроса и предложить им соответствующую контекстную поддержку. ### Speech Commands Speech Commands - это набор устных слов, предназначенный для оценки моделей классификации звука на простых командных словах. Набор данных состоит из 15 классов ключевых слов, класса молчания и неизвестного класса, включающего ложные срабатывания. 15 ключевых слов - это отдельные слова, которые обычно используются в настройках устройства для управления основными задачами или запуска других процессов. Аналогичная модель постоянно работает в вашем мобильном телефоне. Здесь вместо отдельных командных слов используются "слова пробуждения", характерные для конкретного устройства, например "Привет, Google" или "Привет, Siri". Когда модель классификации звука обнаруживает эти слова, она заставляет телефон начать прослушивание микрофона и транскрибировать вашу речь с помощью модели распознавания речи. Модель классификации звука гораздо меньше и легче, чем модель распознавания речи, зачастую в ней всего несколько миллионов параметров по сравнению с несколькими сотнями миллионов параметров в модели для распознавания речи. Таким образом, она может непрерывно работать на вашем устройстве, не разряжая аккумулятор! Более крупная модель распознавания речи запускается только при обнаружении слова-пробуждения, после чего она снова отключается. В следующем разделе мы рассмотрим модели трансформеров для распознавания речи, так что к концу курса у вас должны быть все необходимые инструменты для создания собственного голосового помощника! Как и в случае с любым набором данных на Hugging Face Hub, мы можем получить представление о том, какие аудиоданные в нем присутствуют, не скачивая и не сохраняя их в памяти компьютера. 
Перейдя к карточке набора данных [Speech Commands' dataset](https://huggingface.co/datasets/speech_commands) на Hugging Face Hub, мы можем воспользоваться средством просмотра набора данных (Dataset Viewer), чтобы пролистать первые 100 образцов набора, прослушать аудиофайлы и проверить любые другие метаданные: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speech_commands.png" alt="Diagram of datasets viewer."> </div> Предварительный просмотр данных - это отличный способ ознакомиться с наборами аудиоданных, прежде чем приступить к их использованию. Вы можете выбрать любой набор данных на Hugging Face Hub, пролистать примеры и прослушать аудиозаписи для различных подмножеств и разбиений, чтобы понять, подходит ли этот набор данных для ваших нужд. Выбрав набор данных, несложно загрузить данные, чтобы начать их использовать. Давайте сделаем именно это и загрузим образец набора данных Speech Commands в потоковом режиме: ```python speech_commands = load_dataset( "speech_commands", "v0.02", split="validation", streaming=True ) sample = next(iter(speech_commands)) ``` Загрузим официальную контрольную точку [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer), прошедшую дообучение на наборе данных Speech Commands, в пространстве имен [`"MIT/ast-finetuned-speech-commands-v2"`](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2): ```python classifier = pipeline( "audio-classification", model="MIT/ast-finetuned-speech-commands-v2" ) classifier(sample["audio"].copy()) ``` **Output:** ``` [{'score': 0.9999892711639404, 'label': 'backward'}, {'score': 1.7504888774055871e-06, 'label': 'happy'}, {'score': 6.703040185129794e-07, 'label': 'follow'}, {'score': 5.805884484288981e-07, 'label': 'stop'}, {'score': 5.614546694232558e-07, 'label': 'up'}] ``` Класс! Похоже, что пример с высокой вероятностью содержит слово "назад". Мы можем прослушать пример и убедиться что это действительно так: ``` from IPython.display import Audio Audio(sample["audio"]["array"], rate=sample["audio"]["sampling_rate"]) ``` Теперь вам, возможно, интересно, как мы выбрали эти предварительно обученные модели, чтобы показать их на этих примерах классификации звука. На самом деле, найти предварительно обученные модели для вашего набора данных и задачи очень просто! Первое, что нам нужно сделать, это зайти в Hugging Face Hub и перейти на вкладку "Models" (Модели): https://huggingface.co/models В результате будут отображены все модели на Hugging Face Hub, отсортированные по количеству загрузок за последние 30 дней: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/all_models.png"> </div> С левой стороны можно заметить ряд вкладок, на которых можно отфильтровать модели по задачам, библиотекам, наборам данных и т.д. Прокрутите страницу вниз и выберите задачу "Audio Classification" (Классификация аудио) из списка задач аудио: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_audio_classification.png"> </div> Теперь нам представлено подмножество из 500+ моделей классификации звука на хабе. Для дальнейшего уточнения этого отбора мы можем отфильтровать модели по набору данных. Перейдите на вкладку "Datasets" и в строке поиска введите "speech_commands". Когда вы начнете вводить текст, под вкладкой поиска появится выбор для `speech_commands`. 
Нажав на эту кнопку, вы можете отфильтровать все модели классификации звука от тех, которые были настроены на наборе данных Speech Commands: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/by_speech_commands.png"> </div> Отлично! Мы видим, что для данного набора данных и задачи нам доступны 6 предварительно обученных моделей. Вы заметите первую из этих моделей Audio Spectrogram Transformer, контрольную точку которой мы использовали в предыдущем примере. Этот процесс фильтрации моделей на Hugging Face Hub - именно то, как мы выбирали контрольную точку для показа вам! ## Идентификация языка (Language Identification) Идентификация языка (LID) - это задача определения языка, на котором говорят в аудиосэмпле, из списка языков-кандидатов. LID может стать важной частью многих речевых конвейеров. Например, при получении образца аудиозаписи на неизвестном языке модель LID может быть использована для классификации языка (языков), на котором разговаривают в аудиозаписи, и последующего выбора соответствующей модели распознавания речи, обученной на этом языке, для транскрибации аудиозаписи. ### FLEURS FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) - это набор данных для оценки систем распознавания речи на 102 языках, в том числе на многих языках, которые относятся к категории "малоресурсных". Ознакомьтесь с карточкой набора данных FLEURS на Hugging Face Hub и изучите различные языки, которые в нем представлены: [google/fleurs](https://huggingface.co/datasets/google/fleurs). Можете ли Вы найти здесь свой родной язык? Если нет, то какой язык наиболее близок к нему? Загрузим выборку из валидационной части набора данных FLEURS в потоковом режиме: ```python fleurs = load_dataset("google/fleurs", "all", split="validation", streaming=True) sample = next(iter(fleurs)) ``` Отлично! Теперь мы можем загрузить нашу модель классификации звука. Для этого мы будем использовать версию [Whisper](https://arxiv.org/pdf/2212.04356.pdf) дообученный на наборе данных FLEURS, который в настоящее время является наиболее производительной моделью LID на Hugging Face Hub: ```python classifier = pipeline( "audio-classification", model="sanchit-gandhi/whisper-medium-fleurs-lang-id" ) ``` Затем мы можем пропустить звук через наш классификатор и сгенерировать предсказание: ```python classifier(sample["audio"]) ``` **Output:** ``` [{'score': 0.9999330043792725, 'label': 'Afrikaans'}, {'score': 7.093023668858223e-06, 'label': 'Northern-Sotho'}, {'score': 4.269149485480739e-06, 'label': 'Icelandic'}, {'score': 3.2661141631251667e-06, 'label': 'Danish'}, {'score': 3.2580724109720904e-06, 'label': 'Cantonese Chinese'}] ``` Видно, что модель предсказала, что звук был на "Afrikaans" с очень высокой вероятностью (близкой к 1). Набор данных FLEURS содержит аудиоданные из широкого спектра языков - мы видим, что возможные метки классов включают "Northern-Sotho", "Icelandic", "Danish" и "Cantonese Chinese" языки, а также другие. Полный список языков, представленных в карточке набора данных, можно найти здесь: [google/fleurs](https://huggingface.co/datasets/google/fleurs). Посмотрите самостоятельно! Какие еще контрольные точки можно найти для FLEURS LID на хабе? Какие модели трансформаторов используются под капотом? ## Zero-Shot Audio Classification В традиционной парадигме классификации звука модель предсказывает метку класса из _предварительно определенного_ набора возможных классов. 
Это создает препятствие для использования предварительно обученных моделей для классификации звука, поскольку набор меток предварительно обученной модели должен соответствовать набору меток последующей задачи. В предыдущем примере LID модель должна предсказать один из 102 языковых классов, на которых она была обучена. Если для решения поставленной задачи требуется 110 языков, то модель не сможет предсказать 8 из 110 языков, и для достижения полного покрытия потребуется повторное обучение. Это ограничивает эффективность применения трансферного обучения для задач классификации звука. Zero-shot классификация звука это метод, позволяющий использовать предварительно обученную модель классификации аудиоданных, натренированную на множестве размеченных примеров, для классификации новых примеров из ранее не встречавшихся классов. Давайте рассмотрим, как этого можно добиться! В настоящее время 🤗 Transformers поддерживает один вид модели для Zero-shot классификации звука: это [CLAP model](https://huggingface.co/docs/transformers/model_doc/clap). CLAP - это модель, основанная на трансформации, которая принимает в качестве входных данных звук и текст и вычисляет _сходство_ между ними. Если мы передаем текстовый ввод, который сильно коррелирует с аудиовводом, мы получим высокую оценку сходства. И наоборот, при передаче текстового ввода, совершенно не связанного с аудиовводом, будет получено низкое сходство. Мы можем использовать это предсказание сходства для zero-shot классификации звука, передавая модели один аудиовход и несколько меток-кандидатов. Модель вернет оценку сходства для каждой из меток-кандидатов, и мы можем выбрать в качестве прогноза ту, которая имеет наибольшую оценку. Рассмотрим пример, в котором мы используем один аудиовход от набора данных [Environmental Speech Challenge (ESC)](https://huggingface.co/datasets/ashraq/esc50): ```python dataset = load_dataset("ashraq/esc50", split="train", streaming=True) audio_sample = next(iter(dataset))["audio"]["array"] ``` Затем мы определяем наши метки-кандидаты, которые образуют набор возможных классификационных меток. Модель будет возвращать вероятность принадлежности к классу для каждой из заданных нами меток. Это означает, что нам необходимо знать _априори_ набор возможных меток в нашей задаче классификации, причем так, чтобы правильная метка содержалась в этом наборе и, следовательно, ей была присвоена правильная вероятностная оценка. Обратите внимание, что мы можем передать модели либо полный набор меток, либо отобранное вручную подмножество, которое, по нашему мнению, содержит правильную метку. Передача полного набора меток будет более исчерпывающей, но за счет более низкой точности классификации, поскольку пространство классификации больше (при условии, что правильной меткой является выбранное нами подмножество меток): ```python candidate_labels = ["Sound of a dog", "Sound of vacuum cleaner"] ``` Мы можем прогнать обе эти метки через модель, чтобы найти метку-кандидата, которая _наиболее_ похожа на входной аудиосигнал: ```python classifier = pipeline( task="zero-shot-audio-classification", model="laion/clap-htsat-unfused" ) classifier(audio_sample, candidate_labels=candidate_labels) ``` **Output:** ``` [{'score': 0.9997242093086243, 'label': 'Sound of a dog'}, {'score': 0.0002758323971647769, 'label': 'Sound of vacuum cleaner'}] ``` Отлично! Модель, похоже, уверена, что у нас есть звук собаки - она предсказывает его с вероятностью 99,96%, так что мы примем это за наше предсказание. 
Убедимся в том, что мы не ошиблись, прослушав аудиопример (не увеличивайте громкость слишком сильно!): ```python Audio(audio_sample, rate=16000) ``` Отлично! У нас есть звук лая собаки 🐕, что соответствует предсказанию модели. Поиграйте с разными аудиосэмплами и разными кандидатами на метки - сможете ли вы определить набор меток, которые дают хорошее обобщение по всему набору данных ESC? Подсказка: подумайте, где можно найти информацию о возможных звуках в ESC, и постройте свои метки соответствующим образом! Возможно, вы зададитесь вопросом, почему мы не используем конвейер zero-shot классификации звука для **всех** задач классификации звука? Кажется, что мы можем делать предсказания для любой задачи классификации звука, определяя соответствующие метки классов _априори_, тем самым обходя ограничения, связанные с тем, что наша задача классификации должна соответствовать меткам, на которых была предварительно обучена модель. Это связано с характером модели CLAP, используемой в zero-shot конвейере: CLAP предварительно обучена на _общих_ аудиоданных для классификации, таких как звуки окружающей среды в наборе данных ESC, а не на речевых данных, как в задаче LID. Если дать ему речь на английском и речь на испанском языках, CLAP поймет, что оба примера являются речевыми данными 🗣️. Но он не сможет различить языки так, как это может сделать специализированная LID-модель. ## Что дальше? Мы рассмотрели ряд различных задач классификации звука и представили наиболее актуальные наборы данных и модели, которые можно загрузить с Hugging Face Hub и использовать всего в нескольких строках кода с помощью `pipeline()`. Эти задачи включали в себя выделение ключевых слов, идентификацию языка и zero-shot классификацию аудиозаписей. Но что, если мы хотим сделать что-то **новое**? Мы много работали над задачами обработки речи, но это лишь один из аспектов классификации аудио. Другая популярная область обработки звука связана с **музыкой**. Хотя музыка по своей сути отличается от речи, многие из тех же принципов, о которых мы уже узнали, могут быть применены и к музыке. В следующем разделе мы рассмотрим пошаговое руководство по тонкой настройке модели трансформера с помощью 🤗 Transformers на задаче классификации музыки. К концу этой работы у вас будет контрольная точка дообученной модели, которую вы сможете передать в `pipeline()`, что позволит вам классифицировать песни точно так же, как мы классифицировали здесь речь!
4
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter4/introduction.mdx
# Раздел 4. Разработка классификатора музыкальных жанров

## Чему вы научитесь и что вы сможете создать

Классификация звука - одно из наиболее распространенных применений трансформеров в обработке звука и речи. Как и другие задачи классификации в машинном обучении, эта задача предполагает присвоение одной или нескольких меток аудиозаписи на основе ее содержания. Например, в случае с речью мы можем захотеть обнаружить, когда произносится фраза-пробуждение вроде "Привет, Siri", или определить ключевое слово вроде "температура" из произнесенного запроса "Какая сегодня погода?". Другим примером могут служить звуки окружающей среды, когда мы хотим автоматически различать такие звуки, как "автомобильный гудок", "сирена", "лай собаки" и т.д.

В этом разделе мы рассмотрим, как предварительно обученные звуковые трансформеры могут применяться в различных задачах классификации звука. Затем мы произведем дообучение модели-трансформера на задаче классификации музыки, классифицируя песни по жанрам, таким как "поп" и "рок". Это важная составляющая таких музыкальных стриминговых сервисов, как [Spotify](https://en.wikipedia.org/wiki/Spotify), которые рекомендуют песни, похожие на те, что слушает пользователь.

К концу этого раздела вы узнаете, как:

* Найти подходящие предварительно обученные модели для задачи классификации звука
* Использовать библиотеку 🤗 Datasets и Hugging Face Hub для выбора наборов данных для классификации звука
* Производить дообучение предварительно обученной модели для классификации песен по жанрам
* Создавать демо-версию Gradio, позволяющую классифицировать собственные песни
5
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter4/fine-tuning.mdx
# Дообучение модели для классификации музыки В этом разделе мы представим пошаговое руководство по дообучению модели трансформера, использующей только кодеровщик, для классификации музыки. Для демонстрации мы будем использовать облегченную модель и достаточно небольшой набор данных, что означает, что код может быть запущен на любом GPU потребительского класса, включая GPU T4 16GB, предоставляемый в рамках бесплатного уровня Google Colab. В разделе приведены различные советы, которые можно попробовать при использовании GPU меньшего размера и возникновении проблем с нехваткой памяти. ## Набор данных Для обучения нашей модели мы будем использовать набор данных [GTZAN](https://huggingface.co/datasets/marsyas/gtzan), который представляет собой популярный набор данных из 1000 песен для классификации музыкальных жанров. Каждая песня представляет собой 30-секундный клип из одного из 10 музыкальных жанров - от диско до металла. Мы можем получить аудиофайлы и соответствующие им метки из Hugging Face Hub с помощью функции `load_dataset()` из 🤗 Datasets: ```python from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") gtzan ``` **Output:** ```out Dataset({ features: ['file', 'audio', 'genre'], num_rows: 999 }) ``` <Tip warning={true}> Одна из записей в GTZAN повреждена, поэтому она была удалена из набора данных. Поэтому мы имеем 999 примеров вместо 1000. </Tip> GTZAN не предоставляет предопределенного валидационного набора, поэтому нам придется создать его самостоятельно. Набор данных сбалансирован по жанрам, поэтому мы можем использовать метод `train_test_split()` для быстрого создания разбиения в пропорции 90/10 следующим образом: ```python gtzan = gtzan.train_test_split(seed=42, shuffle=True, test_size=0.1) gtzan ``` **Output:** ```out DatasetDict({ train: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 899 }) test: Dataset({ features: ['file', 'audio', 'genre'], num_rows: 100 }) }) ``` Отлично, теперь, когда у нас есть обучающий и тестовый наборы, давайте посмотрим на один из аудиофайлов: ```python gtzan["train"][0] ``` **Output:** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.10720825, 0.16122437, 0.28585815, ..., -0.22924805, -0.20629883, -0.11334229, ], dtype=float32, ), "sampling_rate": 22050, }, "genre": 7, } ``` Как мы видели в [Разделе 1](../chapter1/audio_data), аудиофайлы представлены в виде одномерных массивов NumPy, где значение массива представляет собой амплитуду на заданном временном интервале. Для этих композиций частота дискретизации составляет 22 050 Гц, то есть в секунду дискретизируется 22 050 значений амплитуды. Это необходимо учитывать при использовании предварительно обученной модели с другой частотой дискретизации, самостоятельно преобразуя частоты дискретизации для обеспечения их соответствия. Мы также видим, что жанр представлен в виде целого числа, или _class label_, то есть в том формате, в котором модель будет делать свои предсказания. 
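Полный список жанров хранится в описании признака `genre` - это объект `ClassLabel` из 🤗 Datasets, в котором целые числа сопоставлены с названиями классов. Небольшой набросок того, как на него посмотреть:

```python
# признак "genre" - это ClassLabel: целые числа от 0 до 9 соответствуют названиям жанров
genre_feature = gtzan["train"].features["genre"]

print(genre_feature)
print(genre_feature.names)
```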
Воспользуемся методом `int2str()` функции `genre` для преобразования этих целых чисел в человекочитаемые имена: ```python id2label_fn = gtzan["train"].features["genre"].int2str id2label_fn(gtzan["train"][0]["genre"]) ``` **Output:** ```out 'pop' ``` Эта метка выглядит корректно, поскольку совпадает с именем аудиофайла. Теперь рассмотрим еще несколько примеров на примере Gradio для создания простого интерфейса с API `Blocks`: ```python import gradio as gr def generate_audio(): example = gtzan["train"].shuffle()[0] audio = example["audio"] return ( audio["sampling_rate"], audio["array"], ), id2label_fn(example["genre"]) with gr.Blocks() as demo: with gr.Column(): for _ in range(4): audio, label = generate_audio() output = gr.Audio(audio, label=label) demo.launch(debug=True) ``` <iframe src="https://course-demos-gtzan-samples.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> Из этих образцов мы, конечно, можем услышать разницу между жанрами, но может ли это сделать трансформер? Давайте обучим модель, чтобы выяснить это! Для начала нам нужно найти подходящую для этой задачи предварительно обученную модель. Посмотрим, как это можно сделать. ## Выбор предварительно обученной модели для классификации звука Для начала выберем подходящую предварительно обученную модель для классификации звука. В этой области предварительное обучение обычно проводится на больших объемах немаркированных аудиоданных, используя такие наборы данных, как [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) и [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli). Лучший способ найти эти модели на Hugging Face Hub - использовать фильтр "Audio Classification", как описано в предыдущем разделе. Хотя такие модели, как Wav2Vec2 и HuBERT, очень популярны, мы будем использовать модель под названием _DistilHuBERT_. Это гораздо более компактная (или _дистиллированная_) версия модели [HuBERT](https://huggingface.co/docs/transformers/model_doc/hubert), которая обучается примерно на 73% быстрее, сохраняя при этом большую часть производительности. <iframe src="https://autoevaluate-leaderboards.hf.space" frameBorder="0" height="450" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> ## От аудио к машинному обучению ## Предварительная обработка данных Подобно токенизации в NLP, аудио- и речевые модели требуют, чтобы входные данные были закодированы в формате, который модель может обрабатывать. В 🤗 Transformers преобразование звука во входной формат осуществляется с помощью _feature extractor_ модели. 
Подобно токенизаторам, 🤗 Transformers предоставляет удобный класс `AutoFeatureExtractor`, который может автоматически выбирать нужный экстрактор признаков для заданной модели. Для того чтобы увидеть, как мы можем обрабатывать наши аудиофайлы, давайте начнем с инстанцирования экстрактора признаков для DistilHuBERT из предварительно обученной контрольной точки: ```python from transformers import AutoFeatureExtractor model_id = "ntu-spml/distilhubert" feature_extractor = AutoFeatureExtractor.from_pretrained( model_id, do_normalize=True, return_attention_mask=True ) ``` Поскольку частота дискретизации модели и набора данных различна, перед передачей аудиофайла в программу извлечения признаков его необходимо передискретизировать до 16 000 Гц. Для этого сначала нужно получить частоту дискретизации модели от экстрактора признаков: ```python sampling_rate = feature_extractor.sampling_rate sampling_rate ``` **Output:** ```out 16000 ``` Далее мы проводим повторную выборку набора данных, используя метод `cast_column()` и функцию `Audio` из 🤗 Datasets: ```python from datasets import Audio gtzan = gtzan.cast_column("audio", Audio(sampling_rate=sampling_rate)) ``` Теперь мы можем проверить первый сэмпл train-split нашего набора данных, чтобы убедиться, что он действительно находится на частоте 16 000 Гц. При загрузке каждого сэмпла Datasets производит повторную дискретизацию аудиофайла "на лету": ```python gtzan["train"][0] ``` **Output:** ```out { "file": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "audio": { "path": "~/.cache/huggingface/datasets/downloads/extracted/fa06ce46130d3467683100aca945d6deafb642315765a784456e1d81c94715a8/genres/pop/pop.00098.wav", "array": array( [ 0.0873509, 0.20183384, 0.4790867, ..., -0.18743178, -0.23294401, -0.13517427, ], dtype=float32, ), "sampling_rate": 16000, }, "genre": 7, } ``` Отлично! Мы видим, что частота дискретизации была понижена до 16 кГц. Значения массива также отличаются, так как теперь на каждые 1,5 значения амплитуды приходится примерно одно значение, которое мы имели раньше. Отличительной особенностью моделей типа Wav2Vec2 и HuBERT является то, что они принимают на вход массив float, соответствующий исходной форме речевого сигнала. В отличие от других моделей, например Whisper, в которых мы предварительно обрабатываем исходную форму звукового сигнала до формата спектрограммы. Мы уже упоминали, что аудиоданные представлены в виде одномерного массива, поэтому они уже имеют правильный формат для чтения моделью (набор непрерывных входов с дискретными временными шагами). Чтож, что именно делает экстрактор признаков? Итак, аудиоданные имеют правильный формат, но мы не наложили никаких ограничений на значения, которые они могут принимать. Для оптимальной работы нашей модели необходимо, чтобы все входные данные находились в одном и том же динамическом диапазоне. Это позволит получить одинаковый диапазон активаций и градиентов для наших образцов, что поможет обеспечить стабильность и сходимость в процессе обучения. Для этого мы _нормализуем_ наши аудиоданные, приводя каждую выборку к нулевому среднему и единичной дисперсии - этот процесс называется _масштабированием признаков_. Именно эту нормализацию и выполняет наш экстрактор признаков! Мы можем посмотреть, как работает экстрактор признаков, применив его к нашему первому аудиосэмплу. 
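Кстати, сама по себе такая нормализация сводится к очень простой операции. Вот упрощённый набросок на NumPy, приведённый исключительно для иллюстрации - на практике всё это делает за нас экстрактор признаков:

```python
import numpy as np


def normalize(array):
    # приводим сигнал к нулевому среднему и единичной дисперсии;
    # небольшая константа защищает от деления на ноль на "тихих" записях
    return (array - np.mean(array)) / np.sqrt(np.var(array) + 1e-7)
```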
Сначала вычислим среднее значение и дисперсию наших исходных аудиоданных: ```python import numpy as np sample = gtzan["train"][0]["audio"] print(f"Mean: {np.mean(sample['array']):.3}, Variance: {np.var(sample['array']):.3}") ``` **Output:** ```out Mean: 0.000185, Variance: 0.0493 ``` Видно, что среднее значение уже близко к нулю, но дисперсия ближе к 0,05. Если бы дисперсия для выборки была больше, это могло бы вызвать проблемы с нашей моделью, так как динамический диапазон аудиоданных был бы очень мал и, следовательно, трудноразделим. Применим экстрактор признаков и посмотрим, что получится на выходе: ```python inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) print(f"inputs keys: {list(inputs.keys())}") print( f"Mean: {np.mean(inputs['input_values']):.3}, Variance: {np.var(inputs['input_values']):.3}" ) ``` **Output:** ```out inputs keys: ['input_values', 'attention_mask'] Mean: -4.53e-09, Variance: 1.0 ``` Отлично! Наш экстрактор признаков возвращает словарь, состоящий из двух массивов: `input_values` and `attention_mask`. `input_values` это предварительно обработанные входные аудиоданные, которые мы передадим в модель HuBERT. [`attention_mask`](https://huggingface.co/docs/transformers/glossary#attention-mask) используется, когда мы обрабатываем _batch_ аудиовходов одновременно - она используется для того, чтобы сообщить модели, где у нас есть входы разной длины. Мы видим, что среднее значение теперь очень сильно приближается к нулю, а дисперсия - к единице! Именно в таком виде мы хотим получить наши аудиосэмплы перед подачей их в модель HuBERT. <Tip warning={true}> Обратите внимание, как мы передали частоту дискретизации наших аудиоданных нашему экстрактору признаков. Это хорошая практика, так как экстрактор признаков выполняет проверку под капотом, чтобы убедиться, что частота дискретизации наших аудиоданных соответствует частоте дискретизации, ожидаемой моделью. Если частота дискретизации аудиоданных не совпадает с частотой дискретизации нашей модели, то необходимо увеличить или уменьшить частоту дискретизации аудиоданных до нужной. </Tip> Отлично, теперь мы знаем, как обрабатывать наши ресэмплированные аудиофайлы, осталось определить функцию, которую мы можем применить ко всем примерам в наборе данных. Поскольку мы ожидаем, что длина аудиоклипов будет составлять 30 секунд, мы также будем обрезать все более длинные клипы, используя аргументы `max_length` и `truncation` в экстракторе признаков следующим образом: ```python max_duration = 30.0 def preprocess_function(examples): audio_arrays = [x["array"] for x in examples["audio"]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=int(feature_extractor.sampling_rate * max_duration), truncation=True, return_attention_mask=True, ) return inputs ``` Определив эту функцию, мы можем применить ее к набору данных с помощью метода [`map()`](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map). Метод `.map()` поддерживает работу с пакетами сэмплов, что мы и сделаем, установив `batched=True`. По умолчанию размер пакета составляет 1000, но мы уменьшим его до 100, чтобы пиковая оперативная память оставалась в разумных пределах для бесплатного уровня Google Colab: <!--- TODO(SG): вернуться к многопроцессорной обработке, когда будет исправлена ошибка в наборах данных Поскольку наборы аудиоданных могут обрабатываться довольно медленно, обычно целесообразно использовать многопроцессорную обработку. 
Для этого мы можем передать аргумент `num_proc` в команду `map()` и с помощью модуля Python `psutil` определить количество процессорных ядер в системе: ---> ```python gtzan_encoded = gtzan.map( preprocess_function, remove_columns=["audio", "file"], batched=True, batch_size=100, num_proc=1, ) gtzan_encoded ``` **Output:** ```out DatasetDict({ train: Dataset({ features: ['genre', 'input_values','attention_mask'], num_rows: 899 }) test: Dataset({ features: ['genre', 'input_values','attention_mask'], num_rows: 100 }) }) ``` <Tip warning={true}> Если при выполнении приведенного выше кода оперативная память устройства будет исчерпана, можно настроить параметры пакетной обработки, чтобы уменьшить пиковое потребление оперативной памяти. В частности, можно модифицировать следующие два аргумента: * `batch_size`: по умолчанию 1000, но выше было установлено значение 100. Попробуйте еще раз уменьшить в 2 раза до 50 * `writer_batch_size`: по умолчанию равен 1000. Попробуйте уменьшить его до 500, а если это не сработает, то уменьшите его еще раз в 2 раза до 250 </Tip> Для упрощения обучения мы удалили из набора данных столбцы `audio` и `file`. Столбец `input_values` содержит закодированные аудиофайлы, `attention_mask` - двоичную маску из значений 0/1, указывающую на то, куда мы добавили входной аудиосигнал, а столбец `genre` - соответствующие метки (или цели). Для того чтобы `Trainer` мог обрабатывать метки классов, необходимо переименовать колонку `genre` в `label`: ```python gtzan_encoded = gtzan_encoded.rename_column("genre", "label") ``` Наконец, нам необходимо получить отображения меток из набора данных. Это отображение позволит нам перейти от целочисленных идентификаторов (например, `7`) к человекочитаемым меткам классов (например, `"поп"`) и обратно. Таким образом, мы можем преобразовать целочисленное предсказание id нашей модели в человекочитаемый формат, что позволит нам использовать модель в любом последующем приложении. Для этого можно использовать метод `int2str()` следующим образом: ```python id2label = { str(i): id2label_fn(i) for i in range(len(gtzan_encoded["train"].features["label"].names)) } label2id = {v: k for k, v in id2label.items()} id2label["7"] ``` ```out 'pop' ``` Итак, у нас есть набор данных, готовый к обучению! Давайте рассмотрим, как можно обучить модель на этом наборе данных. ## Дообучение модели Для дообучения модели мы воспользуемся классом `Trainer` из раздела 🤗 Transformers. Как мы уже видели в других главах, `Trainer` - это высокоуровневый API, предназначенный для работы с наиболее распространенными сценариями обучения. В данном случае мы будем использовать `Trainer` для дообучения модели на GTZAN. Для этого сначала нужно загрузить модель для данной задачи. Для этого мы можем использовать класс `AutoModelForAudioClassification`, который автоматически добавит соответствующую классификационную голову в нашу предварительно обученную модель DistilHuBERT. Давайте перейдем к инстанцированию модели: ```python from transformers import AutoModelForAudioClassification num_labels = len(id2label) model = AutoModelForAudioClassification.from_pretrained( model_id, num_labels=num_labels, label2id=label2id, id2label=id2label, ) ``` Мы настоятельно рекомендуем во время тренировок загружать контрольные точки моделей непосредственно на [Hugging Face Hub](https://huggingface.co/). Hugging Face Hub предоставляет: - Встроенный контроль версий: вы можете быть уверены, что ни одна контрольная точка модели не будет потеряна в процессе обучения. 
- Журналы Tensorboard: отслеживание важных показателей в процессе обучения. - Карты моделей: документирование того, что делает модель, и предполагаемых вариантов ее использования. - Сообщество: простой способ обмена информацией и сотрудничества с сообществом! 🤗 Связать ноутбук с Hugging Face Hub очень просто - для этого достаточно ввести аутентификационный токен при появлении соответствующего запроса. Найдите свой токен аутентификации [здесь](https://huggingface.co/settings/tokens): ```python from huggingface_hub import notebook_login notebook_login() ``` **Output:** ```bash Login successful Your token has been saved to /root/.huggingface/token ``` Следующим шагом является определение аргументов обучения, включая размер пакета, шаги накопления градиента, количество эпох обучения и скорость обучения: ```python from transformers import TrainingArguments model_name = model_id.split("/")[-1] batch_size = 8 gradient_accumulation_steps = 1 num_train_epochs = 10 training_args = TrainingArguments( f"{model_name}-finetuned-gtzan", evaluation_strategy="epoch", save_strategy="epoch", learning_rate=5e-5, per_device_train_batch_size=batch_size, gradient_accumulation_steps=gradient_accumulation_steps, per_device_eval_batch_size=batch_size, num_train_epochs=num_train_epochs, warmup_ratio=0.1, logging_steps=5, load_best_model_at_end=True, metric_for_best_model="accuracy", fp16=True, push_to_hub=True, ) ``` <Tip warning={true}> Здесь мы установили значение `push_to_hub=True`, чтобы включить автоматическую загрузку настроенных контрольных точек во время обучения. Если вы не хотите, чтобы ваши контрольные точки загружались на Hugging Face Hub, вы можете установить значение `False`. </Tip> Последнее, что нам необходимо сделать, это определить метрики. Поскольку набор данных сбалансирован, в качестве метрики мы будем использовать accuracy и загружать ее с помощью библиотеки 🤗 Evaluate: ```python import evaluate import numpy as np metric = evaluate.load("accuracy") def compute_metrics(eval_pred): """Computes accuracy on a batch of predictions""" predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` Теперь у нас есть все необходимые компоненты! Давайте инстанцируем `Trainer` и обучим модель: ```python from transformers import Trainer trainer = Trainer( model, training_args, train_dataset=gtzan_encoded["train"], eval_dataset=gtzan_encoded["test"], tokenizer=feature_extractor, compute_metrics=compute_metrics, ) trainer.train() ``` <Tip warning={true}> В зависимости от используемого графического процессора, при запуске обучения возможно возникновение ошибки CUDA `"out-of-memory"`. 
В этом случае можно уменьшать `batch_size` постепенно в 2 раза, а для компенсации использовать [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps) </Tip> **Output:** ```out | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7297 | 1.0 | 113 | 1.8011 | 0.44 | | 1.24 | 2.0 | 226 | 1.3045 | 0.64 | | 0.9805 | 3.0 | 339 | 0.9888 | 0.7 | | 0.6853 | 4.0 | 452 | 0.7508 | 0.79 | | 0.4502 | 5.0 | 565 | 0.6224 | 0.81 | | 0.3015 | 6.0 | 678 | 0.5411 | 0.83 | | 0.2244 | 7.0 | 791 | 0.6293 | 0.78 | | 0.3108 | 8.0 | 904 | 0.5857 | 0.81 | | 0.1644 | 9.0 | 1017 | 0.5355 | 0.83 | | 0.1198 | 10.0 | 1130 | 0.5716 | 0.82 | ``` Обучение займет примерно 1 час в зависимости от вашего GPU или выделенного для Google Colab. Наша лучшая доля верных ответов составляет 83% - неплохо для 10 эпох с 899 примерами обучающих данных! Конечно, мы могли бы улучшить этот результат, тренируясь на большем количестве эпох, используя методы регуляризации, такие как _dropout_, или разбивая каждый аудиопример на сегменты по 30 и 15 секунд, чтобы использовать более эффективную стратегию предварительной обработки данных. Большой вопрос, как это соотносится с другими системами классификации музыки 🤔 Для этого мы можем просмотреть [autoevaluate leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=marsyas%2Fgtzan&only_verified=0&task=audio-classification&config=all&split=train&metric=accuracy), таблицу лидеров, которая классифицирует модели по языку и набору данных, а затем ранжирует их по accuracy. Мы можем автоматически отправить нашу контрольную точку в таблицу лидеров при передаче результатов обучения в Hugging Face Hub - для этого достаточно задать соответствующие аргументы ключевых слов (kwargs). Вы можете изменить эти значения в соответствии с набором данных, языком и названием модели: ```python kwargs = { "dataset_tags": "marsyas/gtzan", "dataset": "GTZAN", "model_name": f"{model_name}-finetuned-gtzan", "finetuned_from": model_id, "tasks": "audio-classification", } ``` Теперь результаты обучения можно загрузить в Hugging Face Hub. Для этого выполните команду `.push_to_hub`: ```python trainer.push_to_hub(**kwargs) ``` При этом журналы обучения и веса моделей будут сохранены под именем `"your-username/distilhubert-finetuned-gtzan"`. Для примера посмотрите загрузку по адресу [`"sanchit-gandhi/distilhubert-finetuned-gtzan"`](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan). ## Поделиться моделью Теперь вы можете поделиться этой моделью со всеми желающими, воспользовавшись ссылкой на Hub. Они могут загрузить его с идентификатором `"your-username/distilhubert-finetuned-gtzan"` непосредственно в `pipeline()`. Например, для загрузки точно настроенной контрольной точки [`"sanchit-gandhi/distilhubert-finetuned-gtzan"`](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan): ```python from transformers import pipeline pipe = pipeline( "audio-classification", model="sanchit-gandhi/distilhubert-finetuned-gtzan" ) ``` ## Заключение В этом разделе мы рассмотрели пошаговое руководство по тонкой настройке модели DistilHuBERT для классификации музыки. 
Хотя мы сосредоточились на задаче классификации музыки и наборе данных GTZAN, представленные здесь шаги применимы в более общем случае к любой задаче классификации звука - тот же сценарий может быть использован для задач классификации устной речи, таких как выделение ключевых слов или идентификация языка. Вам просто нужно поменять набор данных на тот, который соответствует интересующей вас задаче!

Если вы заинтересованы в тонкой настройке других моделей с Hugging Face Hub для классификации звука, мы рекомендуем вам ознакомиться с другими [примерами](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) в репозитории 🤗 Transformers.

В следующем разделе мы возьмем модель, которую вы только что дообучили, и создадим демо-версию для классификации музыки, которую вы сможете опубликовать на Hugging Face Hub.
6
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter4/hands_on.mdx
# Практическое занятие

Настало время взять в руки несколько аудиомоделей и применить на практике то, чему вы научились. Это упражнение является одним из четырех практических упражнений, необходимых для получения сертификата об окончании курса.

Вот инструкции. В этом блоке мы продемонстрировали дообучение модели Hubert на наборе данных `marsyas/gtzan` для классификации музыки. Accuracy нашего примера составила 83%. Ваша задача - улучшить этот показатель.

Вы можете выбрать любую модель на [🤗 Hugging Face Hub](https://huggingface.co/models), которая, по вашему мнению, подходит для классификации аудио, и использовать точно такой же набор данных [`marsyas/gtzan`](https://huggingface.co/datasets/marsyas/gtzan) для построения собственного классификатора.

Ваша цель - достичь accuracy 87% на этом наборе данных с помощью вашего классификатора. Вы можете выбрать точно такую же модель, поиграть с гиперпараметрами обучения или выбрать совершенно другую модель - все зависит от вас!

Чтобы ваш результат был засчитан в сертификат, не забудьте в конце обучения загрузить модель на Hub, как это было показано в данном блоке, со следующими `**kwargs`:

```python
kwargs = {
    "dataset_tags": "marsyas/gtzan",
    "dataset": "GTZAN",
    "model_name": f"{model_name}-finetuned-gtzan",
    "finetuned_from": model_id,
    "tasks": "audio-classification",
}

trainer.push_to_hub(**kwargs)
```

Вот некоторые дополнительные ресурсы, которые могут оказаться полезными при работе над этим упражнением:

* [Руководство по решению задач классификации звука в документации Transformers](https://huggingface.co/docs/transformers/tasks/audio_classification)
* [Документация по модели Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)
* [Документация по модели M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)
* [Документация Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)
* [Документация Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)

Не стесняйтесь создавать демо-версию своей модели и делиться ею в Discord! Если у вас есть вопросы, задавайте их в канале #audio-study-group.
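Небольшая подсказка напоследок: прежде чем отправлять результат, качество модели можно быстро оценить локально с помощью `pipeline()`. Ниже - примерный набросок. Идентификатор модели здесь вымышленный (подставьте свой), разбиение на train/test должно совпадать с использованным при обучении (seed=42, test_size=0.1), а метки модели предполагаются совпадающими с названиями жанров GTZAN, как в примере из этого раздела:

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# вымышленный идентификатор - подставьте свою дообученную модель
pipe = pipeline("audio-classification", model="your-username/your-model-finetuned-gtzan")

gtzan = load_dataset("marsyas/gtzan", "all", split="train")
gtzan = gtzan.train_test_split(seed=42, shuffle=True, test_size=0.1)
label2id = {name: i for i, name in enumerate(gtzan["train"].features["genre"].names)}

metric = evaluate.load("accuracy")
predictions, references = [], []
for example in gtzan["test"]:
    # pipeline сам прочитает файл и приведёт его к нужной частоте дискретизации
    top_label = pipe(example["file"])[0]["label"]
    predictions.append(label2id[top_label])
    references.append(example["genre"])

print(metric.compute(predictions=predictions, references=references))
```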
7
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter1/supplemental_reading.mdx
# Как узнать больше

В данном разделе рассмотрены многие фундаментальные понятия, имеющие отношение к пониманию аудиоданных и работе с ними. Хотите узнать больше? Здесь вы найдете дополнительные ресурсы, которые помогут вам углубить понимание тем и повысить эффективность обучения.

Посмотрите следующее видео: в нем Монти Монтгомери из xiph.org в реальном времени демонстрирует дискретизацию, квантование, битовую глубину и дизеринг на реальном аудиооборудовании, используя как современный цифровой анализ, так и винтажное аналоговое стендовое оборудование:

<Youtube id="cIQ9IXSUzuM"/>

Если вы хотите глубже погрузиться в тему цифровой обработки сигналов, обратите внимание на бесплатную книгу ["Теория цифровых сигналов"](https://brianmcfee.net/dstbook-site/content/intro.html), автором которой является Брайан Макфи, доцент кафедры музыкальных технологий и науки о данных Нью-Йоркского университета и главный сопровождающий пакета `librosa`.
8
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter1/preprocessing.mdx
# Препроцессинг набора аудиоданных Загрузка набора данных с помощью 🤗 Datasets - это только половина удовольствия. Если вы планируете использовать его либо для обучения модели, либо для выполнения инференса, необходимо предварительно обработать данные. В общем случае это включает в себя следующие шаги: * Передискретизация аудиоданных * Фильтрация набора данных * Преобразование аудиоданных в ожидаемый моделью формат входных данных ## Передискретизация аудиоданных Функция `load_dataset` загружает аудиопримеры с той частотой дискретизации, с которой они были опубликованы. Это не всегда та частота дискретизации, которая ожидается моделью, которую вы планируете обучать или использовать для инференса. Если есть расхождение между частотой дискретизации, можно передискретизировать звук до ожидаемой моделью частоты дискретизации. Большинство имеющихся предварительно обученных моделей были обучены на аудиоданных с частотой дискретизации 16 кГц. Когда мы исследовали набор данных MINDS-14, вы могли заметить, что он сэмплирован с частотой 8 кГц, что означает, что нам, скорее всего, потребуется увеличить частоту дискретизации. Чтобы сделать это, используйте метод 🤗 Datasets `cast_column`. Эта операция не изменяет звук непосредственно в наборе данных (in-place), а дает сигнал datasets для передискретизации аудиопримеров "на лету" при их загрузке. Следующий код установит частоту дискретизации равной 16 кГц: ```py from datasets import Audio minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) ``` Перезагрузим первый аудиопример из набора данных MINDS-14 и проверим, что он был передискретизирован до нужной `sampling rate`: ```py minds[0] ``` **Output:** ```out { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "audio": { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "array": array( [ 2.0634243e-05, 1.9437837e-04, 2.2419340e-04, ..., 9.3852862e-04, 1.1302452e-03, 7.1531429e-04, ], dtype=float32, ), "sampling_rate": 16000, }, "transcription": "I would like to pay my electricity bill using my card can you please assist", "intent_class": 13, } ``` Вы можете заметить, что значения массива теперь также отличаются. Это связано с тем, что теперь для каждого значения амплитуды мы имеем в два раза больше значений чем раньше. <Tip> 💡 Некоторые сведения о передискретизации: Если аудиосигнал дискретизирован с частотой 8 кГц, т. е. имеет 8000 выборок в секунду, то мы знаем, что он не содержит частот выше 4 кГц. Это гарантируется теоремой Найквиста о дискретизации. Благодаря этому мы можем быть уверены, что между точками дискретизации исходный непрерывный сигнал всегда имеет плавную кривую. Повышение частоты дискретизации до более высокой сводится к вычислению дополнительных значений выборки, которые находятся между существующими, путем аппроксимации этой кривой. Однако понижающая дискретизация требует, чтобы мы сначала отфильтровали все частоты, которые будут выше нового предела Найквиста, прежде чем оценивать новые точки дискретизации. Другими словами, нельзя понизить дискретизацию в 2 раза, просто отбрасывая каждый второй сэмпл - это приведет к появлению искажений в сигнале, называемых наложениями. Корректная передискретизация - дело непростое, и его лучше доверить проверенным библиотекам, таким как librosa или 🤗 Datasets. 
</Tip> ## Фильтрация набора данных Возможно, потребуется отфильтровать данные по каким-либо критериям. Одним из распространенных случаев является ограничение аудиопримеров определенной продолжительности. Например, для предотвращения ошибок, связанных с выходом за пределы доступного обьёма памяти, необходимо отфильтровать все примеры длительностью более 20 секунд при обучении модели. Мы можем сделать это, используя метод 🤗 Datasets `filter` и передать ему функцию с логикой фильтрации. Начнем с того, что напишем функцию которая определяет, какие примеры следует оставить, а какие отбросить. Эта функция, `is_audio_length_in_range`, возвращает `True`, если длина образца меньше 20 с, и `False`, если больше 20 с. ```py MAX_DURATION_IN_SECONDS = 20.0 def is_audio_length_in_range(input_length): return input_length < MAX_DURATION_IN_SECONDS ``` Функция фильтрации может быть применена к столбцу набора данных, но в данном наборе столбец с длительностью звуковой дорожки отсутствует. Однако мы можем его создать, отфильтровать по значениям в этом столбце, а затем удалить. ```py # используем librosa для получения длительности фрагмента из аудиофайла new_column = [librosa.get_duration(path=x) for x in minds["path"]] minds = minds.add_column("duration", new_column) # используем метод 🤗 Datasets `filter` для применения функции фильтрации minds = minds.filter(is_audio_length_in_range, input_columns=["duration"]) # удалим временный вспомогательный столбец minds = minds.remove_columns(["duration"]) minds ``` **Output:** ```out Dataset({features: ["path", "audio", "transcription", "intent_class"], num_rows: 624}) ``` Мы можем убедиться, что набор данных был отфильтрован с 654 примеров до 624. ## Препроцессинг аудиоданных Одним из наиболее сложных аспектов работы с наборами аудиоданных является подготовка данных в нужном для обучения модели формате. Как вы видели, исходные аудиоданные поступают в виде массива значений образцов. Однако предварительно обученные модели, независимо от того, используете ли вы их для инференса или для дообучения под вашу задачу, ожидают, что сырые данные будут преобразованы во входные признаки. Требования к входным признакам могут быть различными для разных моделей - они зависят от архитектуры модели и данных, на которых она была предварительно обучена. Хорошей новостью является то, что для каждой поддерживаемой аудиомодели 🤗 Transformers предлагает класс feature extractor который может преобразовать сырые аудиоданные во входные признаки, ожидаемые моделью. Что же делает экстрактор признаков с исходными аудиоданными? Давайте посмотрим на экстрактор признаков в [Whisper](https://huggingface.co/papers/2212.04356), чтобы понять некоторые общие преобразования извлечения признаков. Whisper - это предварительно обученная модель для автоматического распознавания речи (ASR), опубликованная в сентябре 2022 года Алеком Рэдфордом и другими из OpenAI. Сначала экстрактор признаков Whisper дополняет/обрезает батч аудиопримеров таким образом, что все образцы имеют длительность входного сигнала 30 секунд. Примеры короче этого значения дополняются до 30 секунд путем добавления нулей в конец последовательности (нули в аудиосигнале соответствуют отсутствию сигнала или тишине). Примеры длиной более 30 секунд усекаются до 30 секунд. Поскольку все элементы в батче дополняются/обрезаются до максимальной длины во входном пространстве, необходимость в использованрии маски внимания отпадает. 
Whisper уникален в этом отношении, большинству других аудиомоделей требуется маска внимания, которая подробно описывает, где последовательности были дополненны, и, следовательно, где они должны быть проигнорированы в механизме самовнимания. Whisper обучен работать без маски внимания и непосредственно по речевым сигналам определять, где следует игнорировать входные сигналы. Второй операцией, которую выполняет экстрактор признаков Whisper, является преобразование дополненных звуковых массивов в лог-мел спектрограммы. Как вы помните, эти спектрограммы описывают, как изменяются частоты сигнала с течением времени, выраженные в шкале мел и измеряются в децибелах (логарифмическая часть), чтобы сделать частоты и амплитуды более репрезентативными для человеческого слуха. Все эти преобразования могут быть применены к необработанным аудиоданным с помощью пары строк кода. Давайте загрузим экстрактор признаков из предварительно обученной контрольной точки Whisper, чтобы получить готовые аудиоданные: ```py from transformers import WhisperFeatureExtractor feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") ``` Далее можно написать функцию для предварительной обработки одного аудиопримера, передавая его в `feature_extractor`. ```py def prepare_dataset(example): audio = example["audio"] features = feature_extractor( audio["array"], sampling_rate=audio["sampling_rate"], padding=True ) return features ``` Мы можем применить функцию подготовки данных ко всем нашим обучающим примерам, используя метод 🤗 Datasets' map: ```py minds = minds.map(prepare_dataset) minds ``` **Output:** ```out Dataset( { features: ["path", "audio", "transcription", "intent_class", "input_features"], num_rows: 624, } ) ``` Вот так просто мы получили лог-мел спектрограммы в качестве `input_features` в наборе данных. Визуализируем ее для одного из примеров в наборе данных `minds`: ```py import numpy as np example = minds[0] input_features = example["input_features"] plt.figure().set_figwidth(12) librosa.display.specshow( np.asarray(input_features[0]), x_axis="time", y_axis="mel", sr=feature_extractor.sampling_rate, hop_length=feature_extractor.hop_length, ) plt.colorbar() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/log_mel_whisper.png" alt="Log mel spectrogram plot"> </div> Теперь вы можете увидеть, как выглядит входной аудиосигнал для модели Whisper после препроцессинга. Класс модели feature extractor занимается преобразованием сырых аудиоданных в формат, ожидаемый моделью. Однако, многие задачи с использованием звука являются мультимодальными, например, распознавание речи. В таких случаях 🤗 Transformers также предлагает специфичные для конкретной модели токенизаторы для обработки текстовых данных. Для более глубокого изучения токенизаторов обратитесь к нашему [курсу по NLP](https://huggingface.co/course/chapter2/4). Вы можете загрузить экстрактор признаков и токенизатор для Whisper и других мультимодальных моделей отдельно, либо загрузить их через так называемый процессор. Чтобы еще больше упростить задачу, используйте `AutoProcessor` для загрузки экстрактора признаков и процессора модели из контрольной точки, например, так: ```py from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("openai/whisper-small") ``` Здесь мы проиллюстрировали основные этапы подготовки данных. Конечно, пользовательские данные могут потребовать более сложного препроцессинга. 
В этом случае можно расширить функцию `prepare_dataset` для выполнения любых пользовательских преобразований данных. Благодаря 🤗 Datasets, если вы можете записать процесс подготовки данных в виде функции Python, вы сможете [применить его](https://huggingface.co/docs/datasets/audio_process) к своему набору данных!
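Например, вот набросок расширенной версии `prepare_dataset`: помимо извлечения признаков она обрезает тишину в начале и в конце записи и сохраняет длительность примера в отдельном поле. И обрезка тишины, и поле `duration` добавлены здесь исключительно для иллюстрации - в исходном примере выше их нет:

```py
import librosa


def prepare_dataset(example):
    audio = example["audio"]

    # пользовательский шаг: убираем тишину в начале и в конце записи
    trimmed, _ = librosa.effects.trim(audio["array"], top_db=30)

    features = feature_extractor(
        trimmed, sampling_rate=audio["sampling_rate"], padding=True
    )
    # дополнительный признак: длительность примера в секундах
    features["duration"] = len(trimmed) / audio["sampling_rate"]
    return features


minds = minds.map(prepare_dataset)
```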
9
0
hf_public_repos
hf_public_repos/blog/ai-comic-factory.md
--- title: "Deploying the AI Comic Factory using the Inference API" thumbnail: /blog/assets/165_ai_comic_factory/thumbnail.jpg authors: - user: jbilcke-hf --- # Deploying the AI Comic Factory using the Inference API We recently announced [Inference for PROs](https://huggingface.co/blog/inference-pro), our new offering that makes larger models accessible to a broader audience. This opportunity opens up new possibilities for running end-user applications using Hugging Face as a platform. An example of such an application is the [AI Comic Factory](https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory) - a Space that has proved incredibly popular. Thousands of users have tried it to create their own AI comic panels, fostering its own community of regular users. They share their creations, with some even opening pull requests. In this tutorial, we'll show you how to fork and configure the AI Comic Factory to avoid long wait times and deploy it to your own private space using the Inference API. It does not require strong technical skills, but some knowledge of APIs, environment variables and a general understanding of LLMs & Stable Diffusion are recommended. ## Getting started First, ensure that you sign up for a [PRO Hugging Face account](https://huggingface.co/subscribe/pro), as this will grant you access to the Llama-2 and SDXL models. ## How the AI Comic Factory works The AI Comic Factory is a bit different from other Spaces running on Hugging Face: it is a NextJS application, deployed using Docker, and is based on a client-server approach, requiring two APIs to work: - a Language Model API (Currently [Llama-2](https://huggingface.co/docs/transformers/model_doc/llama2)) - a Stable Diffusion API (currently [SDXL 1.0](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl)) ## Duplicating the Space To duplicate the AI Comic Factory, go to the Space and [click on "Duplicate"](https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory?duplicate=true): ![duplicate-space-1.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/165_ai_comic_factory/duplicate-space-1.jpg) You'll observe that the Space owner, name, and visibility are already filled in for you, so you can leave those values as is. Your copy of the Space will run inside a Docker container that doesn't require many resources, so you can use the smallest instance. The official AI Comic Factory Space utilizes a bigger CPU instance, as it caters to a large user base. To operate the AI Comic Factory under your account, you need to configure your Hugging Face token: ![duplicate-space-2.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/165_ai_comic_factory/duplicate-space-2.jpg) ## Selecting the LLM and SD engines The AI Comic Factory supports various backend engines, which can be configured using two environment variables: - `LLM_ENGINE` to configure the language model (possible values are `INFERENCE_API`, `INFERENCE_ENDPOINT`, `OPENAI`) - `RENDERING_ENGINE` to configure the image generation engine (possible values are `INFERENCE_API`, `INFERENCE_ENDPOINT`, `REPLICATE`, `VIDEOCHAIN`). 
We'll focus on making the AI Comic Factory work on the Inference API, so they both need to be set to `INFERENCE_API`: ![duplicate-space-3.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/165_ai_comic_factory/duplicate-space-3.jpg) You can find more information about alternative engines and vendors in the project's [README](https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/blob/main/README.md) and the [.env](https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory/blob/main/README.md) config file. ## Configuring the models The AI Comic Factory comes with the following models pre-configured: - `LLM_HF_INFERENCE_API_MODEL`: default value is `meta-llama/Llama-2-70b-chat-hf` - `RENDERING_HF_RENDERING_INFERENCE_API_MODEL`: default value is `stabilityai/stable-diffusion-xl-base-1.0` Your PRO Hugging Face account already gives you access to those models, so you don't have anything to do or change. ## Going further Support for the Inference API in the AI Comic Factory is in its early stages, and some features, such as using the refiner step for SDXL or implementing upscaling, haven't been ported over yet. Nonetheless, we hope this information will enable you to start forking and tweaking the AI Comic Factory to suit your requirements. Feel free to experiment and try other models from the community, and happy hacking!
0
0
hf_public_repos
hf_public_repos/blog/spaces-dev-mode.md
--- title: "Introducing Spaces Dev Mode for a seamless developer experience" thumbnail: /blog/assets/spaces-dev-mode/thumbnail.jpg authors: - user: pagezyhf --- # Introducing Spaces Dev Mode for a seamless developer experience Hugging Face Spaces makes it easy for you to create and deploy AI-powered demos in minutes. Over 500,000 Spaces have been created by the Hugging Face community and it keeps growing! As part of [Hugging Face Spaces](https://huggingface.co/spaces), we recently released support for “Dev Mode”, to make your experience of building Spaces even more seamless. Spaces Dev Mode lets you connect with VS Code or SSH directly to your Space. In a click, you can connect to your Space, and start editing your code, removing the need to push your local changes to the Space repository using git. Let's see how to setup this feature in your Space’s settings 🔥 ## Enable Dev Mode Spaces Dev Mode is currently in beta, and available to [PRO subscribers](https://huggingface.co/pricing#pro). To learn more about Spaces Dev Mode, check out the [documentation](https://huggingface.co/dev-mode-explorers). After creating your space, navigate to Settings. ![dev-mode-settings-1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/spaces-dev-mode/dev-mode-settings-1.png) Scroll down in the Settings and click on “Enable Dev Mode”. Your Space will automatically Restart. ![dev-mode-settings-2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/spaces-dev-mode/dev-mode-settings-2.png) ## Connect to VS Code Once your Space is in a Running state, you can connect to VS Code locally or in your browser in one click! You can also use SSH to set up the connection to your Space in another IDE. ![dev-mode-connect](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/spaces-dev-mode/dev-mode-connect.png) For example, let’s change the color theme of this Gradio Space. After editing the code, no need to push your changes and rebuild the Space container to test it. Go directly in your Space and click “Refresh”. ![dev-mode-refresh](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/spaces-dev-mode/dev-mode-refresh.png) That’s it! Once you’re satisfied with your changes, you can commit and merge to persist them. ![dev-mode-update](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/spaces-dev-mode/dev-mode-update.png) Go build your first Spaces [here](https://huggingface.co/spaces)!
1
0
hf_public_repos
hf_public_repos/blog/pytorch-xla.md
--- title: "Hugging Face on PyTorch / XLA TPUs" thumbnail: /blog/assets/13_pytorch_xla/pytorch_xla_thumbnail.png authors: - user: jysohn23 guest: true - user: lysandre --- # Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training <a href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/13_pytorch_xla.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLA The PyTorch-TPU project originated as a collaborative effort between the Facebook PyTorch and Google TPU teams and officially launched at the 2019 PyTorch Developer Conference 2019. Since then, we’ve worked with the Hugging Face team to bring first-class support to training on Cloud TPUs using [PyTorch / XLA](https://github.com/pytorch/xla). This new integration enables PyTorch users to run and scale up their models on Cloud TPUs while maintaining the exact same Hugging Face trainers interface. This blog post provides an overview of changes made in the Hugging Face library, what the PyTorch / XLA library does, an example to get you started training your favorite transformers on Cloud TPUs, and some performance benchmarks. If you can’t wait to get started with TPUs, please skip ahead to the [“Train Your Transformer on Cloud TPUs”](#train-your-transformer-on-cloud-tpus) section - we handle all the PyTorch / XLA mechanics for you within the `Trainer` module! ### XLA:TPU Device Type PyTorch / XLA adds a new `xla` device type to PyTorch. This device type works just like other PyTorch device types. For example, here's how to create and print an XLA tensor: ```python import torch import torch_xla import torch_xla.core.xla_model as xm t = torch.randn(2, 2, device=xm.xla_device()) print(t.device) print(t) ``` This code should look familiar. PyTorch / XLA uses the same interface as regular PyTorch with a few additions. Importing `torch_xla` initializes PyTorch / XLA, and `xm.xla_device()` returns the current XLA device. This may be a CPU, GPU, or TPU depending on your environment, but for this blog post we’ll focus primarily on TPU. The `Trainer` module leverages a `TrainingArguments` dataclass in order to define the training specifics. It handles multiple arguments, from batch sizes, learning rate, gradient accumulation and others, to the devices used. Based on the above, in `TrainingArguments._setup_devices()` when using XLA:TPU devices, we simply return the TPU device to be used by the `Trainer`: ```python @dataclass class TrainingArguments: ... @cached_property @torch_required def _setup_devices(self) -> Tuple["torch.device", int]: ... elif is_torch_tpu_available(): device = xm.xla_device() n_gpu = 0 ... return device, n_gpu ``` ### XLA Device Step Computation In a typical XLA:TPU training scenario we’re training on multiple TPU cores in parallel (a single Cloud TPU device includes 8 TPU cores). So we need to ensure that all the gradients are exchanged between the data parallel replicas by consolidating the gradients and taking an optimizer step. For this we provide the `xm.optimizer_step(optimizer)` which does the gradient consolidation and step-taking. In the Hugging Face trainer, we correspondingly update the train step to use the PyTorch / XLA APIs: ```python class Trainer: … def train(self, *args, **kwargs): ... 
        if is_torch_tpu_available():
            xm.optimizer_step(self.optimizer)
```

### PyTorch / XLA Input Pipeline

There are two main parts to running a PyTorch / XLA model: (1) tracing and executing your model's graph lazily (refer to the [“PyTorch / XLA Library”](https://github.com/pytorch/xla) section below for a more in-depth explanation) and (2) feeding your model. Without any optimization, the tracing/execution of your model and input feeding would be executed serially, leaving chunks of time during which your host CPU and your TPU accelerators would be idle, respectively. To avoid this, we provide an API which pipelines the two, and thus is able to overlap the tracing of step n+1 while step n is still executing.

![alt text](/blog/assets/13_pytorch_xla/training_pipeline.png)

```python
import torch_xla.distributed.parallel_loader as pl
...
dataloader = pl.MpDeviceLoader(dataloader, device)
```

### Checkpoint Writing and Loading

When a tensor is checkpointed from an XLA device and then loaded back from the checkpoint, it will be loaded back to the original device. Before checkpointing tensors in your model, you want to ensure that all of your tensors are on CPU devices instead of XLA devices. This way, when you load back the tensors, you'll load them through CPU devices and then have the opportunity to place them on whatever XLA devices you desire. We provide the `xm.save()` API for this, which takes care of writing to the storage location from only one process on each host (or one globally if using a shared file system across hosts).

```python
class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin):
    …
    def save_pretrained(self, save_directory):
        ...
        if getattr(self.config, "xla_device", False):
            import torch_xla.core.xla_model as xm

            if xm.is_master_ordinal():
                # Save configuration file
                model_to_save.config.save_pretrained(save_directory)
            # xm.save takes care of saving only from master
            xm.save(state_dict, output_model_file)
```

```python
class Trainer:
    …
    def train(self, *args, **kwargs):
        ...
        if is_torch_tpu_available():
            xm.rendezvous("saving_optimizer_states")
            xm.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
            xm.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
```

## PyTorch / XLA Library

PyTorch / XLA is a Python package that uses the XLA linear algebra compiler to connect the PyTorch deep learning framework with XLA devices, which include CPU, GPU, and Cloud TPUs. Part of the following content is also available in our [API_GUIDE.md](https://github.com/pytorch/xla/blob/master/API_GUIDE.md).

### PyTorch / XLA Tensors are Lazy

Using XLA tensors and devices requires changing only a few lines of code. However, even though XLA tensors act a lot like CPU and CUDA tensors, their internals are different. CPU and CUDA tensors launch operations immediately or eagerly. XLA tensors, on the other hand, are lazy. They record operations in a graph until the results are needed. Deferring execution like this lets XLA optimize it. A graph of multiple separate operations might be fused into a single optimized operation.

Lazy execution is generally invisible to the caller. PyTorch / XLA automatically constructs the graphs, sends them to XLA devices, and synchronizes when copying data between an XLA device and the CPU. Inserting a barrier when taking an optimizer step explicitly synchronizes the CPU and the XLA device.
This means that when you call the `model(input)` forward pass, backpropagate your loss with `loss.backward()`, and take an optimization step with `xm.optimizer_step(optimizer)`, the graph of all operations is being built in the background. Only when you either explicitly evaluate the tensor (e.g. printing the tensor or moving it to a CPU device) or mark a step (this will be done by the `MpDeviceLoader` every time you iterate through it), does the full step get executed.

### Trace, Compile, Execute, and Repeat

From a user's point of view, a typical training regimen for a model running on PyTorch / XLA involves running a forward pass, backward pass, and optimizer step. From the PyTorch / XLA library point of view, things look a little different.

While a user runs their forward and backward passes, an intermediate representation (IR) graph is traced on the fly. The IR graph leading to each root/output tensor can be inspected as follows:

```python
>>> import torch
>>> import torch_xla
>>> import torch_xla.core.xla_model as xm
>>> t = torch.tensor(1, device=xm.xla_device())
>>> s = t*t
>>> print(torch_xla._XLAC._get_xla_tensors_text([s]))
IR {
  %0 = s64[] prim::Constant(), value=1
  %1 = s64[] prim::Constant(), value=0
  %2 = s64[] xla::as_strided_view_update(%1, %0), size=(), stride=(), storage_offset=0
  %3 = s64[] aten::as_strided(%2), size=(), stride=(), storage_offset=0
  %4 = s64[] aten::mul(%3, %3), ROOT=0
}
```

This live graph is accumulated while the forward and backward passes are run on the user's program, and once `xm.mark_step()` is called (indirectly by `pl.MpDeviceLoader`), the graph of live tensors is cut. This truncation marks the completion of one step, and subsequently we lower the IR graph into XLA Higher Level Operations (HLO), which is the IR language for XLA.

This HLO graph then gets compiled into a TPU binary and subsequently executed on the TPU devices. However, this compilation step can be costly, typically taking longer than a single step, so if we were to compile the user's program every single step, overhead would be high. To avoid this, we have caches that store compiled TPU binaries keyed by their HLO graphs' unique hash identifiers. So once this TPU binary cache has been populated on the first step, subsequent steps will typically not have to re-compile new TPU binaries; instead, they can simply look up the necessary binaries from the cache.

Since TPU compilations are typically much slower than the step execution time, this means that if the graph keeps changing in shape, we'll have cache misses and compile too frequently. To minimize compilation costs, we recommend keeping tensor shapes static whenever possible. The Hugging Face library's shapes are already static for the most part, with input tokens being padded appropriately, so throughout training the cache should be consistently hit. This can be checked using the debugging tools that PyTorch / XLA provides.
In the example below, you can see that compilation only happened 5 times (`CompileTime`), whereas execution happened during each of the 1220 steps (`ExecuteTime`):

```python
>>> import torch_xla.debug.metrics as met
>>> print(met.metrics_report())
Metric: CompileTime
  TotalSamples: 5
  Accumulator: 28s920ms153.731us
  ValueRate: 092ms152.037us / second
  Rate: 0.0165028 / second
  Percentiles: 1%=428ms053.505us; 5%=428ms053.505us; 10%=428ms053.505us; 20%=03s640ms888.060us; 50%=03s650ms126.150us; 80%=11s110ms545.595us; 90%=11s110ms545.595us; 95%=11s110ms545.595us; 99%=11s110ms545.595us
Metric: DeviceLockWait
  TotalSamples: 1281
  Accumulator: 38s195ms476.007us
  ValueRate: 151ms051.277us / second
  Rate: 4.54374 / second
  Percentiles: 1%=002.895us; 5%=002.989us; 10%=003.094us; 20%=003.243us; 50%=003.654us; 80%=038ms978.659us; 90%=192ms495.718us; 95%=208ms893.403us; 99%=221ms394.520us
Metric: ExecuteTime
  TotalSamples: 1220
  Accumulator: 04m22s555ms668.071us
  ValueRate: 923ms872.877us / second
  Rate: 4.33049 / second
  Percentiles: 1%=045ms041.018us; 5%=213ms379.757us; 10%=215ms434.912us; 20%=217ms036.764us; 50%=219ms206.894us; 80%=222ms335.146us; 90%=227ms592.924us; 95%=231ms814.500us; 99%=239ms691.472us
Counter: CachedCompile
  Value: 1215
Counter: CreateCompileHandles
  Value: 5
...
```

### Train Your Transformer on Cloud TPUs

To configure your VM and Cloud TPUs, please follow the [“Set up a Compute Engine instance”](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch#set_up_a_instance) and [“Launch a Cloud TPU resource”](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch#launch-tpu) (pytorch-1.7 version as of writing) sections. Once you have your VM and Cloud TPU created, using them is as simple as SSHing to your GCE VM and running the following commands to get `bert-large-uncased` training kicked off (the batch size is for a v3-8 device and may OOM on a v2-8):

```bash
conda activate torch-xla-1.7
export TPU_IP_ADDRESS="ENTER_YOUR_TPU_IP_ADDRESS" # ex. 10.0.0.2
export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"
git clone -b v4.2.2 https://github.com/huggingface/transformers.git
cd transformers && pip install .
pip install datasets==1.2.1

python examples/xla_spawn.py \
  --num_cores 8 \
  examples/language-modeling/run_mlm.py \
  --dataset_name wikitext \
  --dataset_config_name wikitext-103-raw-v1 \
  --max_seq_length 512 \
  --pad_to_max_length \
  --logging_dir ./tensorboard-metrics \
  --cache_dir ./cache_dir \
  --do_train \
  --do_eval \
  --overwrite_output_dir \
  --output_dir language-modeling \
  --overwrite_cache \
  --tpu_metrics_debug \
  --model_name_or_path bert-large-uncased \
  --num_train_epochs 3 \
  --per_device_train_batch_size 8 \
  --per_device_eval_batch_size 8 \
  --save_steps 500000
```

The above should complete training in roughly 200 minutes or less, with an eval perplexity of ~3.25.

## Performance Benchmarking

The following table shows the performance of training bert-large-uncased on a v3-8 Cloud TPU system (containing 4 TPU v3 chips) running PyTorch / XLA. The dataset used for all benchmarking measurements is the [WikiText103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) dataset, and we use the [run_mlm.py](https://github.com/huggingface/transformers/blob/v4.2.2/examples/language-modeling/run_mlm.py) script provided in the Hugging Face examples. To ensure that the workloads are not host-CPU-bound, we use the n1-standard-96 CPU configuration for these tests, but you may be able to use smaller configurations as well without impacting performance.
| Name               | Dataset     | Hardware                  | Global Batch Size | Precision | Training Time (mins) |
|--------------------|-------------|---------------------------|-------------------|-----------|----------------------|
| bert-large-uncased | WikiText103 | 4 TPUv3 chips (i.e. v3-8) | 64                | FP32      | 178.4                |
| bert-large-uncased | WikiText103 | 4 TPUv3 chips (i.e. v3-8) | 128               | BF16      | 106.4                |

## Get Started with PyTorch / XLA on TPUs

See the [“Running on TPUs”](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) section under the Hugging Face examples to get started. For a more detailed description of our APIs, check out our [API_GUIDE](https://github.com/pytorch/xla/blob/master/API_GUIDE.md), and for performance best practices, take a look at our [TROUBLESHOOTING](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md) guide. For generic PyTorch / XLA examples, run the following [Colab Notebooks](https://github.com/pytorch/xla/tree/master/contrib/colab) we offer with free Cloud TPU access. To run directly on GCP, please see our tutorials labeled “PyTorch” on our [documentation site](https://cloud.google.com/tpu/docs/tutorials).

Have any other questions or issues? Please open an issue or question at https://github.com/huggingface/transformers/issues or directly at https://github.com/pytorch/xla/issues.
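To tie the APIs from this post together, here is a minimal, hedged sketch of what a hand-written training loop looks like outside of the `Trainer` (which handles all of this for you). `model` and `dataloader` are assumed to be an ordinary 🤗 Transformers model (returning an output with a `.loss`) and a regular PyTorch `DataLoader`:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()          # the current XLA device (a TPU core here)
model = model.to(device)          # `model` is assumed to be defined elsewhere
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# MpDeviceLoader feeds batches to the device and marks a step on every iteration,
# overlapping input feeding with graph tracing/execution.
train_loader = pl.MpDeviceLoader(dataloader, device)

for batch in train_loader:
    optimizer.zero_grad()
    loss = model(**batch).loss    # lazily traced, not yet executed
    loss.backward()
    xm.optimizer_step(optimizer)  # consolidates gradients across replicas and steps
```

In a real multi-core setup, this loop would typically run inside a function launched once per TPU core (for example via the `xla_spawn.py` script used above).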
2
0
hf_public_repos
hf_public_repos/blog/2024-security-features.md
---
title: 2024 Security Feature Highlights
thumbnail: /blog/assets/2024-security-features/thumbnail.png
authors:
- user: jack-kumar
---

# 2024 Security Feature Highlights

Security is a top priority at Hugging Face, and we're committed to continually enhancing our defenses to safeguard our users. In our ongoing security efforts, we have developed a range of security features designed to empower users to protect themselves and their assets. In this blog post, we'll take a look at our current security landscape as of August 6th, 2024, and break down key security features available on the Hugging Face Hub.

This post is broken down into two parts: in the first section, we explore the essential security features available to all users of the Hub; in the second section, we describe the advanced controls available to Enterprise Hub users.

## "Default" Hub Security Features

The following security features are available to all users of the Hugging Face Hub. We highly recommend that you use all of these controls where possible, as they will help increase your resiliency against a variety of common attacks, such as phishing, token leaks, credential stuffing, session hijacking, etc.

### Fine-Grained Tokens

User Access Tokens are required to access Hugging Face via APIs. In addition to the standard "read" and "write" tokens, Hugging Face supports "fine-grained" tokens, which allow you to enforce least privilege by defining permissions on a per-resource basis, ensuring that no other resources can be impacted in the event the token is leaked. Fine-grained tokens offer a plethora of ways to tune your token; see the images below for the options available. You can learn more about tokens here: https://huggingface.co/docs/hub/en/security-tokens

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/fine-grained-tokens-1.png)

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/fine-grained-tokens-2.png)

### Two Factor Authentication (2FA)

Two-factor authentication adds an extra layer of protection to your online accounts by requiring two forms of verification before granting access. 2FA combines something you know (like a password) with something you have (such as a smartphone) to ensure that only authorized users can access sensitive information. By enabling 2FA, you can greatly reduce the risk of unauthorized access from compromised passwords, credential stuffing and phishing. You can learn more about 2FA here: https://huggingface.co/docs/hub/en/security-2fa

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/2fa.png)

### Commit Signing

Although Git has an authentication layer to control who can push commits to a repo, it does not authenticate the actual commit author. This means it's possible for bad actors to impersonate authors by using `git config --global user.email [email protected]` and `git config --global user.name Your Name`. This config does not automatically give them access to push to repositories they otherwise couldn't push to - but it does allow them to impersonate you anywhere they can push to. This could be a public repository, or a private repository accessed with compromised credentials or a stolen SSH key. Commit signing adds an additional layer of security by using GPG to mitigate this issue; you can learn more at [Git Tools: Signing Your Work](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work).
Hugging Face gives authors the ability to add their GPG keys to their profile. When a signed commit is pushed, the signature is authenticated using the GPG key in the author's profile. If it's a valid signature, the commit will be marked with a “Verified” badge. You can learn more about commit signing here: https://huggingface.co/docs/hub/en/security-gpg

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/commit-signing.png)

### Organizational Access Controls

Organizations on Hugging Face have access to Organizational Access Controls. This allows teams and businesses to define least-privilege access to their organization by assigning "read", "write", "contributor" or "admin" roles to each of their users. This helps ensure that the compromise of one user account (such as via phishing) cannot affect the entire organization. You can learn more about Organizational Access Controls here: https://huggingface.co/docs/hub/en/organizations-security

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/organizational-access-controls.png)

### Automated Security Scanning

Hugging Face implements an automated security scanning pipeline that scans all repos and commits. Currently, there are three major components of the pipeline:

- malware scanning: scans for known malware signatures with [ClamAV](https://clamav.net)
- pickle scanning: scans pickle files for malicious executable code with [picklescan](https://github.com/mmaitre314/picklescan)
- secret scanning: scans for passwords, tokens and API keys using the [`trufflehog filesystem`](https://github.com/trufflesecurity/trufflehog) command

In the event a malicious file is detected, the scans will place a notice on the repo, allowing users to see that they may potentially be interacting with a malicious repository. You can see an example of a (fake) malicious repository here: https://huggingface.co/mcpotato/42-eicar-street/tree/main.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/security-scanning.png)

For any verified secret detected, the pipeline will send an email notifying the owner so that they can invalidate and refresh the secret. Verified secrets are the ones that have been confirmed to work for authentication against their respective providers. Note, however, that unverified secrets are not necessarily harmless or invalid: verification can fail due to technical reasons, such as downtime from the provider.

You can learn more about automated scanning here:
- https://huggingface.co/docs/hub/en/security-malware
- https://huggingface.co/docs/hub/en/security-pickle
- https://huggingface.co/docs/hub/en/security-secrets

## Enterprise Hub Security Features

In addition to the security features available to all users, Hugging Face offers advanced security controls for Enterprise users. These additional controls allow enterprises to build a security configuration that is most effective for them.

### Single Sign-On (SSO)

Single sign-on (SSO) allows a user to access multiple applications with one set of credentials. Enterprises have widely moved to SSO as it allows their employees to access a variety of corporate software using identities that are managed centrally by their IT team.
Hugging Face Enterprise supports SSO with both the SAML 2.0 and OpenID Connect (OIDC) protocols, and supports any compliant provider such as Okta, OneLogin, Azure AD, etc. Additionally, SSO users can be configured to be dynamically assigned access control roles based on data provided by your identity provider. You can learn more about SSO here: https://huggingface.co/docs/hub/en/security-sso

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/sso.png)

### Resource Groups

In addition to the base organizational access controls, Enterprises can define and manage groups of repositories as Resource Groups. This allows you to segment your resources by team or purpose, such as "Research", "Engineering" or "Production", so that the compromise of one segment cannot affect others. You can learn more about Resource Groups here: https://huggingface.co/docs/hub/en/security-resource-groups

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/resource-groups.png)

### Organization Token Management ✨New✨

Enterprise users can now manage which tokens can access their organization and resources. Organization owners can enforce the usage of fine-grained tokens and require administrator approval for each token. Administrators can review and revoke each token that has access to their repositories at any time. You can learn more about Organization Token Management here: https://huggingface.co/docs/hub/enterprise-hub-tokens-management

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/organizational-token-management-1.png)

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/organizational-token-management-2.png)

### Data Residency

Enterprise users have access to data residency controls, which allow them to define where repositories (models, datasets, spaces) are stored. This allows for regulatory and legal compliance, while also improving download and upload performance by bringing the data closer to your users. We currently support US and EU regions, with Asia-Pacific coming soon. We call this feature "Storage Regions". You can learn more about Data Residency here: https://huggingface.co/docs/hub/en/storage-regions

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/data-residency.png)

### Audit Logs

Enterprise users have access to audit logs that allow organization admins to review changes to repositories, settings and billing. The audit logs contain the username, location, IP, and action taken, and can be downloaded as a JSON file which can be used in your own security tooling. You can learn more about Audit Logs here: https://huggingface.co/docs/hub/en/audit-logs

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/2024-security-features/audit-log.png)

### Compliance

Hugging Face is SOC2 Type 2 certified and GDPR compliant. We offer Business Associate Addendums for GDPR data processing agreements to Enterprise Plan users. You can learn more about our Compliance efforts here: https://huggingface.co/docs/hub/en/security

### Custom Security Features

Hugging Face offers custom agreements and development of features and tools for Enterprise accounts, established via Statements of Work (SoW) and Service Level Agreements (SLA).
You can reach out directly to sales to discuss your options at https://huggingface.co/contact/sales.

## Conclusion

At Hugging Face, we're committed to providing a secure and trustworthy platform for the AI community. With our robust security features, users can focus on building and deploying AI models with confidence. Whether you're an individual researcher or a large enterprise, our security features are designed to empower you to protect yourself and your assets. By continually enhancing our defenses and expanding our security capabilities, we aim to stay ahead of emerging threats and maintain the trust of our users.

If you have any questions or feedback about our security features, we'd love to hear from you. Reach out at [email protected]!
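As a practical footnote to the fine-grained tokens section above, here is a short, hedged sketch (assuming the `huggingface_hub` Python client and a placeholder repository id) of using a scoped token from your own tooling; reading the token from an environment variable keeps it out of source control:

```python
import os

from huggingface_hub import HfApi, snapshot_download

# A fine-grained token, exported as HF_TOKEN and scoped to just the repos this script needs
token = os.environ["HF_TOKEN"]

api = HfApi(token=token)
print(api.whoami()["name"])  # confirm which account the token resolves to

# Download a private model the token has "read" access to
# ("your-org/your-private-model" is a placeholder)
local_dir = snapshot_download("your-org/your-private-model", token=token)
print(local_dir)
```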
3
0
hf_public_repos
hf_public_repos/blog/_blog.yml
# "thumbnail" attribute can be GIFs while in the blogpost itself it's better if it's a simple bitmap (because it will be used as a social thumbnail) # make sure to optimize your "thumbnail" img with tinypng.com - local: how-to-train title: How to train a new language model from scratch using Transformers and Tokenizers thumbnail: /blog/assets/01_how-to-train/how-to-train_blogpost.png author: julien-c date: February 14, 2020 tags: - guide - nlp - local: how-to-generate title: "How to generate text: using different decoding methods for language generation with Transformers" author: patrickvonplaten thumbnail: /blog/assets/02_how-to-generate/thumbnail.png date: March, 2020 tags: - guide - nlp - local: reformer title: "The Reformer - Pushing the limits of language modeling" author: patrickvonplaten thumbnail: /blog/assets/03_reformer/thumbnail.png date: July 3, 2020 tags: - research - nlp - local: pytorch_block_sparse title: Block Sparse Matrices for Smaller and Faster Language Models author: madlag thumbnail: /blog/assets/04_pytorch_block_sparse/thumbnail.png date: Sep 10, 2020 tags: - research - nlp - local: encoder-decoder title: "Transformer-based Encoder-Decoder Models" author: patrickvonplaten thumbnail: /blog/assets/05_encoder_decoder/thumbnail.png date: October 10, 2020 tags: - research - nlp - local: ray-tune title: "Hyperparameter Search with Transformers and Ray Tune" thumbnail: /blog/assets/06_ray_tune/ray-hf.jpg author: ray-project guest: true date: November 2, 2020 tags: - open-source-collab - nlp - local: porting-fsmt title: "Porting fairseq wmt19 translation system to transformers" thumbnail: /blog/assets/07_porting_fsmt/thumbnail.png author: stas date: November 3, 2020 tags: - open-source-collab - nlp - local: warm-starting-encoder-decoder title: "Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models" author: patrickvonplaten thumbnail: /blog/assets/08_warm_starting_encoder_decoder/thumbnail.png date: November 09, 2020 tags: - guide - nlp - local: accelerated-inference title: How we sped up transformer inference 100x for 🤗 API customers author: Narsil thumbnail: /blog/assets/09_accelerated_inference/thumbnail.png date: January 18, 2021 tags: - analysis - nlp - local: zero-deepspeed-fairscale title: "Fit More and Train Faster With ZeRO via DeepSpeed and FairScale" author: stas thumbnail: /blog/assets/11_zero_deepspeed_fairscale/zero-partitioning.png date: January 19, 2021 tags: - guide - local: tf-serving title: "Faster TensorFlow models in Hugging Face Transformers" author: jplu thumbnail: /blog/assets/10_tf-serving/thumbnail.png date: January 26, 2021 tags: - guide - nlp - local: pytorch-xla title: "Hugging Face on PyTorch / XLA TPUs" thumbnail: /blog/assets/13_pytorch_xla/pytorch_xla_thumbnail.png author: jysohn23 guest: true date: February 9, 2021 tags: - open-source-collab - local: ray-rag title: "Retrieval Augmented Generation with Huggingface Transformers and Ray" thumbnail: /blog/assets/12_ray_rag/ray_arch_updated.png author: amogkam guest: true date: February 10, 2021 tags: - open-source-collab - nlp - local: simple-considerations title: "Simple considerations for simple people building fancy neural networks" author: VictorSanh thumbnail: /blog/assets/13_simple-considerations/henry-co-3coKbdfnAFg-unsplash.jpg date: February 25, 2021 tags: - guide - local: long-range-transformers title: "Hugging Face Reads, Feb. 
2021 - Long-range Transformers" author: VictorSanh thumbnail: /blog/assets/14_long_range_transformers/EfficientTransformerTaxonomy.png date: March 09, 2021 tags: - research - nlp - local: fine-tune-wav2vec2-english title: "Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers" author: patrickvonplaten thumbnail: /blog/assets/15_fine_tune_wav2vec2/wav2vec2.png date: March 12, 2021 tags: - guide - audio - local: how-to-deploy-a-pipeline-to-google-clouds title: "My Journey to a serverless transformers pipeline on Google Cloud" author: Maxence guest: true date: March 18, 2021 tags: - guide - local: the-partnership-amazon-sagemaker-and-hugging-face title: "The Partnership: Amazon SageMaker and Hugging Face" author: philschmid thumbnail: /blog/assets/17_the_partnership_amazon_sagemaker_and_hugging_face/thumbnail.png date: March 23, 2021 tags: - partnerships - aws - local: big-bird title: "Understanding BigBird's Block Sparse Attention" thumbnail: /blog/assets/18_big_bird/block-sparse-attn.gif author: vasudevgupta guest: true date: March 31, 2021 tags: - community - research - nlp - local: sagemaker-distributed-training-seq2seq title: "Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker" author: philschmid thumbnail: /blog/assets/19_sagemaker_distributed_training_seq2seq/thumbnail.png date: April 8, 2021 tags: - guide - partnerships - aws - nlp - local: accelerate-library title: Introducing 🤗 Accelerate thumbnail: /blog/assets/20_accelerate_library/accelerate_diff.png author: sgugger date: April 16, 2021 tags: - guide - local: bert-cpu-scaling-part-1 title: "Scaling-up BERT Inference on CPU (Part 1)" thumbnail: /blog/assets/21_bert_cpu_scaling_part_1/imgs/numa_set.png author: mfuntowicz date: April 20, 2021 tags: - guide - nlp - partnerships - intel - local: gradio title: "Using & Mixing Hugging Face Models with Gradio 2.0" author: abidlabs thumbnail: /blog/assets/22_gradio/gradio.png guest: true date: May 25, 2021 tags: - open-source-collab - guide - local: few-shot-learning-gpt-neo-and-inference-api title: "Few-shot learning in practice: GPT-NEO and the 🤗 Accelerated Inference API" author: philschmid thumbnail: /blog/assets/22_few_shot_learning_gpt_neo_and_inference_api/few-shot-prompt.png date: June 3, 2021 tags: - guide - nlp - local: sentence-transformers-in-the-hub title: "Sentence Transformers in the 🤗 Hub" author: nreimers date: June 28, 2021 tags: - open-source-collab - nlp - local: deploy-hugging-face-models-easily-with-amazon-sagemaker title: "Deploy Hugging Face models easily with Amazon SageMaker" author: philschmid date: July 8, 2021 tags: - guide - partnerships - aws - local: spacy title: "Welcome spaCy to the 🤗 Hub" author: osanseviero thumbnail: /blog/assets/23_spacy/thumbnail.png date: July 13, 2021 tags: - open-source-collab - nlp - local: collaborative-training title: "Deep Learning over the Internet: Training Language Models Collaboratively" author: mryab guest: true thumbnail: /blog/assets/24_sahajBERT/thumbnail.png date: July 15, 2021 tags: - research - local: hardware-partners-program title: "Introducing Optimum: The Optimization Toolkit for Transformers at Scale" author: mfuntowicz thumbnail: /blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png date: September 14, 2021 tags: - guide - local: graphcore title: "Hugging Face and Graphcore partner for IPU-optimized Transformers" author: sallydoherty guest: true thumbnail: /blog/assets/26_graphcore-ipu/thumbnail.png date: September 14, 2021 tags: - graphcore - 
partnerships - local: summer-at-huggingface title: "Summer at Hugging Face ☀️" author: huggingface thumbnail: /blog/assets/27_summer_at_huggingface/thumbnail.png date: September 24, 2021 tags: - community - local: gradio-spaces title: "Showcase Your Projects in Spaces using Gradio" author: merve thumbnail: /blog/assets/28_gradio-spaces/thumbnail.png date: October 5, 2021 tags: - guide - local: streamlit-spaces title: "Hosting your Models and Datasets on Hugging Face Spaces using Streamlit" author: merve thumbnail: /blog/assets/29_streamlit-spaces/thumbnail.png date: October 5, 2021 tags: - guide - local: fine-tune-clip-rsicd title: "Fine tuning CLIP with Remote Sensing (Satellite) images and captions" author: arampacha guest: true thumbnail: /blog/assets/30_clip_rsicd/clip-rsicd-header-image.png date: October 13, 2021 tags: - community - cv - nlp - local: the-age-of-ml-as-code title: "The Age of Machine Learning As Code Has Arrived" author: juliensimon thumbnail: /blog/assets/31_age_of_ml_as_code/01_entreprise_ml.png date: October 20, 2021 tags: - analysis - local: 1b-sentence-embeddings title: "Train a Sentence Embedding Model with 1B Training Pairs" author: asi guest: true thumbnail: /blog/assets/32_1b_sentence_embeddings/model.png date: October 25, 2021 tags: - community - nlp - local: large-language-models title: "Large Language Models: A New Moore's Law?" author: juliensimon thumbnail: /blog/assets/33_large_language_models/01_model_size.jpg date: October 26, 2021 tags: - analysis - nlp - local: course-launch-event title: "Course Launch Community Event" author: sgugger thumbnail: /blog/assets/34_course_launch/speakers_day1.png date: October 26, 2021 tags: - community - nlp - local: bert-cpu-scaling-part-2 title: "Scaling up BERT-like model Inference on modern CPU - Part 2" author: mfuntowicz thumbnail: /blog/assets/35_bert_cpu_scaling_part_2/openmp.png date: November 4, 2021 tags: - partnerships - intel - guide - nlp - local: fine-tune-xlsr-wav2vec2 title: "Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers" author: patrickvonplaten thumbnail: /blog/assets/16_fine_tune_xlsr_wav2vec2/xlsr_wav2vec2.png date: November 15, 2021 tags: - guide - audio - local: accelerating-pytorch title: "Accelerating PyTorch distributed fine-tuning with Intel technologies" author: juliensimon thumbnail: /blog/assets/36_accelerating_pytorch/03_two_nodes.png date: November 19, 2021 tags: - guide - local: data-measurements-tool title: "Introducing the Data Measurements Tool: an Interactive Tool for Looking at Datasets" author: sasha thumbnail: /blog/assets/37_data-measurements-tool/basics_scroll.gif date: November 29, 2021 tags: - research - local: graphcore-getting-started title: "Getting Started with Hugging Face Transformers for IPUs with Optimum" author: internetoftim guest: true thumbnail: /blog/assets/38_getting_started_graphcore/graphcore_1.png date: November 30, 2021 tags: - partnerships - graphcore - guide - local: snowball-fight title: "Introducing Snowball Fight ☃️, our First ML-Agents Environment" author: ThomasSimonini thumbnail: /blog/assets/39_introducing_snowball_fight/snowballfight.gif date: December 2, 2021 tags: - research - rl - local: codeparrot title: "Training CodeParrot 🦜 from Scratch" author: lvwerra thumbnail: /blog/assets/40_codeparrot/thumbnail.png date: December 8, 2021 tags: - guide - research - nlp - local: perceiver title: "Perceiver IO: a scalable, fully-attentional model that works on any modality" author: nielsr thumbnail: /blog/assets/41_perceiver/thumbnail.png 
date: December 15, 2021 tags: - research - guide - nlp - audio - cv - local: gradio-joins-hf title: "Gradio joins Hugging Face!" author: abidlabs thumbnail: /blog/assets/42_gradio_joins_hf/thumbnail.png date: December 21, 2021 tags: - community - open-source-collab - local: autonlp-prodigy title: "Active Learning with AutoNLP and Prodigy" author: abhishek thumbnail: /blog/assets/43_autonlp_prodigy/thumbnail.png date: December 23, 2021 tags: - research - partnerships - nlp - local: gptj-sagemaker title: "Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker" author: philschmid thumbnail: /blog/assets/45_gptj_sagemaker/thumbnail.png date: January 11, 2022 tags: - partnerships - aws - guide - nlp - local: wav2vec2-with-ngram title: "Boost Wav2Vec2 with n-gram LM in 🤗 Transformers" author: patrickvonplaten thumbnail: /blog/assets/44_boost_wav2vec2_ngram/wav2vec2_ngram.png date: January 12, 2022 tags: - research - guide - audio - local: infinity-cpu-performance title: "Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs" author: philschmid thumbnail: /blog/assets/46_infinity_cpu_performance/thumbnail.png date: January 13, 2022 tags: - analysis - local: sb3 title: "Welcome Stable-baselines3 to the Hugging Face Hub 🤗" author: ThomasSimonini thumbnail: /blog/assets/47_sb3/thumbnail.png date: January 21, 2022 tags: - open-source-collab - rl - local: searching-the-hub title: "Supercharged Searching on the Hugging Face Hub" author: muellerzr thumbnail: /blog/assets/48_hubsearch/thumbnail.png date: January 25, 2022 tags: - guide - local: asr-chunking title: "Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers" author: Narsil thumbnail: /blog/assets/49_asr_chunking/thumbnail.png date: February 1, 2022 tags: - guide - research - audio - local: sentiment-analysis-python title: "Getting Started with Sentiment Analysis using Python" author: FedericoPascual thumbnail: /blog/assets/50_sentiment_python/thumbnail.png date: February 2, 2022 tags: - sentiment-analysis - nlp - guide - local: fine-tune-vit title: "Fine-Tune ViT for Image Classification with 🤗 Transformers" author: nateraw thumbnail: /blog/assets/51_fine_tune_vit/vit-thumbnail.jpg date: February 11, 2022 tags: - guide - cv - local: bert-101 title: "BERT 101 🤗 State Of The Art NLP Model Explained" author: britneymuller thumbnail: /blog/assets/52_bert_101/thumbnail.jpg date: March 2, 2022 tags: - guide - nlp - local: constrained-beam-search title: "Guiding Text Generation with Constrained Beam Search in 🤗 Transformers" author: cwkeam guest: true thumbnail: /blog/assets/53_constrained_beam_search/thumbnail.png date: March 11, 2022 tags: - guide - nlp - local: image-search-datasets title: "Image search with 🤗 datasets" author: davanstrien thumbnail: /blog/assets/54_image_search_datasets/spaces_image_search.jpg date: March 16, 2022 tags: - cv - local: bert-inferentia-sagemaker title: "Accelerate BERT inference with Hugging Face Transformers and AWS inferentia" author: philschmid thumbnail: /blog/assets/55_bert_inferentia_sagemaker/thumbnail.png date: March 16, 2022 tags: - partnerships - aws - guide - nlp - local: fine-tune-segformer title: "Fine-Tune a Semantic Segmentation Model with a Custom Dataset" author: tobiasc thumbnail: /blog/assets/56_fine_tune_segformer/thumb.png date: March 17, 2022 tags: - guide - partnerships - cv - local: ai-residency title: "Announcing the 🤗 AI Research Residency Program" author: douwekiela thumbnail: 
/blog/assets/57_ai_residency/residency-thumbnail.jpg date: March 22, 2022 tags: - community - research - local: meg-mitchell-interview title: "Machine Learning Experts - Meg Mitchell Interview" author: britneymuller thumbnail: /blog/assets/57_meg_mitchell_interview/thumbnail.png date: March 23, 2022 tags: - expert-acceleration-program - ml-experts - local: decision-transformers title: "Introducing Decision Transformers on Hugging Face 🤗" author: edbeeching thumbnail: /blog/assets/58_decision-transformers/thumbnail.jpg date: March 28, 2022 tags: - open-source-collab - guide - rl - local: transformers-design-philosophy title: "Don't repeat yourself - 🤗 Transformers Design Philosophy" author: patrickvonplaten thumbnail: /blog/assets/59_transformers_philosophy/transformers.png date: April 5, 2022 tags: - community - local: habana title: "Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training" author: susanlansing thumbnail: /blog/assets/60_habana/habana.png date: April 12, 2022 tags: - partnerships - local: lewis-tunstall-interview title: "Machine Learning Experts - Lewis Tunstall Interview" author: britneymuller thumbnail: /blog/assets/60_lewis_tunstall_interview/thumbnail.png date: April 13, 2022 tags: - expert-acceleration-program - ml-experts - local: carbon-emissions-on-the-hub title: "CO2 Emissions and the 🤗 Hub: Leading the Charge" author: sasha thumbnail: /blog/assets/60_carbon_emissions_on_the_hub/thumbnail.jpg date: April 22, 2022 tags: - community - guide - local: supercharge-customer-service-with-machine-learning title: "Supercharged Customer Service with Machine Learning" author: patrickvonplaten thumbnail: /blog/assets/61_supercharged_customer_service_with_nlp/thumbnail.png date: April 25, 2022 tags: - guide - nlp - local: education title: "Introducing Hugging Face for Education" author: Violette thumbnail: /blog/assets/61_education/thumbnail.png date: April 25, 2022 tags: - community - local: getting-started-habana title: "Getting Started with Transformers on Habana Gaudi" author: juliensimon thumbnail: /blog/assets/61_getting_started_habana/thumbnail.png date: April 26, 2022 tags: - partnerships - guide - local: ml-director-insights title: "Director of Machine Learning Insights [Series]" author: britneymuller thumbnail: /blog/assets/61_ml_director_insights/thumbnail.png date: April 27, 2022 tags: - community - research - local: opinion-classification-with-kili title: "Opinion Classification with Kili and HuggingFace AutoTrain" author: alperiox guest: true thumbnail: /blog/assets/59_opinion-classification-with-kili/thumbnail.png date: April 28, 2022 tags: - guide - local: pytorch-fsdp title: "Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel" author: smangrul thumbnail: /blog/assets/62_pytorch_fsdp/fsdp-thumbnail.png date: May 2, 2022 tags: - guide - local: deep-rl-intro title: "An Introduction to Deep Reinforcement Learning" author: ThomasSimonini thumbnail: /blog/assets/63_deep_rl_intro/thumbnail.png date: May 4, 2022 tags: - rl - local: fastai title: "Welcome fastai to the Hugging Face Hub" author: espejelomar thumbnail: /blog/assets/64_fastai/fastai_hf_blog.png date: May 6, 2022 tags: - guide - open-source-collab - community - local: series-c title: "We Raised $100 Million for Open & Collaborative Machine Learning 🚀" author: The Hugging Face Team thumbnail: /blog/assets/65_series_c/thumbnail.jpg date: May 9, 2022 tags: - news - local: optimum-inference title: "Accelerated Inference with Optimum and Transformers Pipelines" 
author: philschmid thumbnail: /blog/assets/66_optimum_inference/thumbnail.png date: May 10, 2022 tags: - guide - community - local: ambassadors title: "Student Ambassador Program's call for applications is open!" author: Violette thumbnail: /blog/assets/67_ambassadors/thumbnail.png date: May 13, 2022 tags: - community - local: ml-director-insights-2 title: "Director of Machine Learning Insights [Part 2: SaaS Edition]" author: britneymuller thumbnail: /blog/assets/67_ml_director_insights/thumbnail.png date: May 13, 2022 tags: - community - research - local: gradio-blocks title: "Gradio 3.0 is Out!" author: abidlabs thumbnail: /blog/assets/68_gradio_blocks/block-party.png date: May 16, 2022 tags: - community - open-source-collab - local: fellowship title: "Announcing the Hugging Face Fellowship Program" author: espejelomar thumbnail: /blog/assets/62_fellowship/fellowship-thumbnail.png date: May 17, 2022 tags: - community - local: sasha-luccioni-interview title: "Machine Learning Experts - Sasha Luccioni Interview" author: britneymuller thumbnail: /blog/assets/69_sasha_luccioni_interview/thumbnail.png date: May 17, 2022 tags: - expert-acceleration-program - ml-experts - local: deep-rl-q-part1 title: "An Introduction to Q-Learning Part 1" author: ThomasSimonini thumbnail: /blog/assets/70_deep_rl_q_part1/thumbnail.gif date: May 18, 2022 tags: - rl - local: ethical-charter-multimodal title: "Putting ethical principles at the core of research lifecycle" author: SaulLu thumbnail: /blog/assets/71_ethical-charter/thumbnail.jpg date: May 19, 2022 tags: - research - nlp - audio - cv - local: sempre-health-eap-case-study title: "How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap" author: federicopascual thumbnail: /blog/assets/70_sempre_health/thumbnail.jpg date: May 19, 2022 tags: - expert-acceleration-program - case-study - case-studies - local: deep-rl-q-part2 title: "An Introduction to Q-Learning Part 2" author: ThomasSimonini thumbnail: /blog/assets/73_deep_rl_q_part2/thumbnail.gif date: May 20, 2022 tags: - rl - local: tapex title: "Efficient Table Pre-training without Real Data: An Introduction to TAPEX" author: SivilTaram thumbnail: /blog/assets/74_tapex/thumbnail.png guest: true date: May 23, 2022 tags: - research - nlp - community - local: community-update title: "Introducing Pull Requests and Discussions 🥳" author: victor thumbnail: /blog/assets/76_community_update/thumbnail.png date: May 25, 2022 tags: - launch - local: graphcore-update title: "Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers" author: sallydoherty thumbnail: /blog/assets/77_graphcore-update/graphcore_update.png date: May 26, 2022 tags: - graphcore - partnerships - local: deep-rl-dqn title: "Deep Q-Learning with Atari" author: ThomasSimonini thumbnail: /blog/assets/78_deep_rl_dqn/thumbnail.gif date: June 7, 2022 tags: - rl - local: annotated-diffusion title: "The Annotated Diffusion Model" author: nielsr thumbnail: /blog/assets/78_annotated-diffusion/thumbnail.png date: June 7, 2022 tags: - guide - diffusion - stable-diffusion - local: ml-director-insights-3 title: "Director of Machine Learning Insights [Part 3: Finance Edition]" author: britneymuller thumbnail: /blog/assets/78_ml_director_insights/thumbnail.png date: June 14, 2022 tags: - community - research - local: intel title: "Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration" author: juliensimon thumbnail: /blog/assets/80_intel/01.png date: June 15, 2022 tags: - 
hardware - intel - guide - local: convert-transformers-to-onnx title: "Convert Transformers to ONNX with Hugging Face Optimum" author: philschmid thumbnail: /blog/assets/81_convert_transformers_to_onnx/thumbnail.png date: June 22, 2022 tags: - guide - community - hardware - local: getting-started-with-embeddings title: "Getting Started With Embeddings" author: espejelomar thumbnail: /blog/assets/80_getting_started_with_embeddings/thumbnail.png date: June 23, 2022 tags: - guide - nlp - local: eval-on-the-hub title: "Announcing Evaluation on the Hub" author: douwekiela thumbnail: /blog/assets/82_eval_on_the_hub/thumbnail.png date: June 28, 2022 tags: - community - launch - guide - local: accelerate-deepspeed title: "Accelerate Large Model Training using DeepSpeed" author: smangrul thumbnail: /blog/assets/83_accelerate_deepspeed/deepspeed-thumbnail.png date: June 28, 2022 tags: - guide - local: your-first-ml-project title: "Liftoff! How to get started with your first ML project 🚀" author: nimaboscarino thumbnail: /blog/assets/84_first_ml_project/thumbnail.png date: June 29, 2022 tags: - guide - local: deep-rl-pg title: "Policy Gradient with PyTorch" author: ThomasSimonini thumbnail: /blog/assets/85_policy_gradient/thumbnail.gif date: June 30, 2022 tags: - rl - local: sentiment-analysis-twitter title: "Getting Started with Sentiment Analysis on Twitter" author: FedericoPascual thumbnail: /blog/assets/85_sentiment_analysis_twitter/thumbnail.png date: July 7, 2022 tags: - sentiment-analysis - nlp - guide - local: bloom title: "Introducing The World's Largest Open Multilingual Language Model: BLOOM" author: BigScience thumbnail: /blog/assets/86_bloom/thumbnail.png date: July 12, 2022 tags: - open-source-collab - community - research - local: playlist-generator title: "Building a Playlist Generator with Sentence Transformers" author: NimaBoscarino thumbnail: /blog/assets/87_playlist_generator/thumbnail.png date: July 13, 2022 tags: - nlp - guide - local: bloom-megatron-deepspeed title: "The Technology Behind BLOOM Training" author: stas thumbnail: /blog/assets/86_bloom_megatron_deepspeed/thumbnail.png date: July 14, 2022 tags: - nlp - llm - local: mnist-adversarial title: "How to train your model dynamically using adversarial data" author: chrisjay thumbnail: /blog/assets/88_mnist_adversarial/mnist-adversarial.png date: July 16, 2022 tags: - mnist - adversarial - guide - local: deep-rl-a2c title: "Advantage Actor Critic (A2C)" author: ThomasSimonini thumbnail: /blog/assets/89_deep_rl_a2c/thumbnail.gif date: July 22, 2022 tags: - rl - local: tf-serving-vision title: "Deploying TensorFlow Vision Models in Hugging Face with TF Serving" author: sayakpaul thumbnail: /blog/assets/90_tf_serving_vision/thumbnail.png date: July 25, 2022 tags: - guide - cv - local: tf-xla-generate title: "Faster Text Generation with TensorFlow and XLA" author: joaogante thumbnail: /blog/assets/91_tf_xla_generate/thumbnail.png date: July 27, 2022 tags: - nlp - guide - local: datasets-docs-update title: "Introducing new audio and vision documentation in 🤗 Datasets" author: stevhliu thumbnail: /blog/assets/87_datasets-docs-update/thumbnail.gif date: July 28, 2022 tags: - audio - cv - community - announcement - local: us-national-ai-research-resource title: "AI Policy @🤗: Comments on U.S. 
National AI Research Resource Interim Report" author: irenesolaiman thumbnail: /blog/assets/92_us_national_ai_research_resource/nairr_thumbnail.png date: August 1, 2022 tags: - community - ethics - local: nystromformer title: "Nyströmformer, Approximating self-attention in linear time and memory via the Nyström method" author: novice03 thumbnail: /blog/assets/86_nystromformer/thumbnail.png date: August 2, 2022 tags: - research - nlp - local: introducing-private-hub title: "Introducing the Private Hub: A New Way to Build With Machine Learning" author: FedericoPascual thumbnail: /blog/assets/92_introducing_private_hub/thumbnail.png date: August 3, 2022 tags: - announcement - enterprise - hub - local: deep-rl-ppo title: "Proximal Policy Optimization (PPO)" author: ThomasSimonini thumbnail: /blog/assets/93_deep_rl_ppo/thumbnail.png date: August 5, 2022 tags: - rl - local: how-to-train-sentence-transformers title: "Train and Fine-Tune Sentence Transformers Models" author: espejelomar thumbnail: /blog/assets/95_training_st_models/thumbnail.png date: August 10, 2022 tags: - guide - nlp - local: deploy-tfserving-kubernetes title: "Deploying 🤗 ViT on Kubernetes with TF Serving" author: chansung thumbnail: /blog/assets/94_tf_serving_kubernetes/thumb.png date: August 11, 2022 tags: - guide - cv - local: tensorflow-philosophy title: "Hugging Face's TensorFlow Philosophy" author: rocketknight1 thumbnail: /blog/assets/96_tensorflow_philosophy/thumbnail.png date: August 12, 2022 tags: - nlp - cv - guide - local: skops title: Introducing Skops author: merve thumbnail: /blog/assets/94_skops/introducing_skops.png date: August 12, 2022 tags: - open-source-collab - scikit-learn - announcement - guide - local: hf-bitsandbytes-integration title: "A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using transformers, accelerate and bitsandbytes" author: ybelkada thumbnail: /blog/assets/96_hf_bitsandbytes_integration/Thumbnail_blue.png date: August 17, 2022 tags: - nlp - llm - quantization - local: vision-transformers title: "Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore" author: juliensimon thumbnail: /blog/assets/97_vision_transformers/thumbnail.jpg date: August 18, 2022 tags: - vision - graphcore - local: deploy-vertex-ai title: "Deploying 🤗 ViT on Vertex AI" author: sayakpaul thumbnail: /blog/assets/97_vertex_ai/image1.png date: August 19, 2022 tags: - guide - cv - local: pretraining-bert title: "Pre-Train BERT with Hugging Face Transformers and Habana Gaudi" author: philschmid thumbnail: /blog/assets/99_pretraining_bert/thumbnail.png date: August 22, 2022 tags: - nlp - partnerships - guide - local: stable_diffusion title: "Stable Diffusion with 🧨 Diffusers" author: valhalla thumbnail: /blog/assets/98_stable_diffusion/thumbnail.png date: August 22, 2022 tags: - guide - diffusion - nlp - text to image - clip - stable-diffusion - dalle - local: spaces_3dmoljs title: "Visualize proteins on Hugging Face Spaces" author: duerrsimon thumbnail: /blog/assets/98_spaces_3dmoljs/thumbnail.png date: August 24, 2022 tags: - research - local: open_rail title: "OpenRAIL: Towards open and responsible AI licensing frameworks" author: CarlosMFerr thumbnail: /blog/assets/100_open_rail/100_open-rail.png date: August 31, 2022 tags: - community - local: train-decision-transformers title: "Train your first Decision Transformer" author: edbeeching thumbnail: /blog/assets/101_train-decision-transformers/thumbnail.gif date: September 08, 2022 tags: - rl - local: diffusers-2nd-month title: 
"What's new in Diffusers? 🎨" author: osanseviero thumbnail: /blog/assets/102_diffusers_2nd_month/inpainting.png date: September 12, 2022 tags: - guide - diffusion - text_to_image - stable-diffusion - local: megatron-training title: "How to train a Language Model with Megatron-LM" author: loubnabnl thumbnail: /blog/assets/100_megatron_training/thumbnail.png date: September 7, 2022 tags: - guide - nlp - local: bloom-inference-pytorch-scripts title: "Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate" author: stas thumbnail: /blog/assets/bloom-inference-pytorch-scripts/thumbnail.png date: Sep 16, 2022 tags: - nlp - llm - bloom - inference - local: ethics-soc-1 title: "Ethics and Society Newsletter #1" author: meg thumbnail: /blog/assets/103_ethics-soc-1/thumbnail.png date: Sep 22, 2022 tags: - ethics - local: setfit title: "SetFit: Efficient Few-Shot Learning Without Prompts" author: Unso thumbnail: /blog/assets/103_setfit/intel_hf_logo.png date: September 26, 2022 tags: - research - nlp - local: accelerate-large-models title: "How 🤗 Accelerate runs very large models thanks to PyTorch" author: sgugger thumbnail: /blog/assets/104_accelerate-large-models/thumbnail.png date: September 27, 2022 tags: - guide - research - open-source-collab - local: autotrain-image-classification title: "Image Classification with AutoTrain" author: NimaBoscarino thumbnail: /blog/assets/105_autotrain-image-classification/thumbnail.png date: Sep 28, 2022 tags: - autotrain - cv - guide - local: zero-shot-eval-on-the-hub title: "Very Large Language Models and How to Evaluate Them" author: mathemakitten thumbnail: /blog/assets/106_zero_shot_eval_on_the_hub/thumbnail.png date: Oct 3, 2022 tags: - autotrain - research - nlp - local: japanese-stable-diffusion title: "Japanese Stable Diffusion" author: mkshing thumbnail: /blog/assets/106_japanese_stable_diffusion/jsd_thumbnail.png date: Oct 5, 2022 tags: - diffusion - nlp - text-to-image - clip - stable-diffusion - local: introducing-doi title: "Introducing DOI: the Digital Object Identifier to Datasets and Models" author: sylvestre thumbnail: /blog/assets/107_launching_doi/thumbnail.jpeg date: Oct 7, 2022 tags: - community - local: bloom-inference-optimization title: "Optimization story: Bloom inference" author: Narsil thumbnail: /blog/assets/bloom-inference-pytorch-scripts/thumbnail.png date: Oct 12, 2022 tags: - open-source-collab - community - research - local: stable_diffusion_jax title: "Stable Diffusion in JAX/Flax 🚀" author: pcuenca thumbnail: /blog/assets/108_stable_diffusion_jax/thumbnail.png date: Oct 13, 2022 tags: - guide - diffusion - nlp - text-to-image - clip - stable-diffusion - dalle - local: inference-endpoints title: "Getting started with Hugging Face Inference Endpoints" author: julsimon thumbnail: /blog/assets/109_inference_endpoints/endpoints05.png date: Oct 14, 2022 tags: - guide - cloud - inference - local: mteb title: "MTEB: Massive Text Embedding Benchmark" author: Muennighoff thumbnail: /blog/assets/110_mteb/thumbnail.png date: Oct 19, 2022 tags: - nlp - research - llm - local: pytorch-ddp-accelerate-transformers title: "From PyTorch DDP to 🤗 Accelerate to 🤗 Trainer, mastery of distributed training with ease" author: muellerzr thumbnail: /blog/assets/111_pytorch_ddp_accelerate_transformers/thumbnail.png date: October 21, 2022 tags: - guide - research - open-source-collab - local: evaluating-llm-bias title: "Evaluating Language Model Bias with 🤗 Evaluate" author: sasha thumbnail: /blog/assets/112_evaluating-llm-bias/thumbnail.png date: 
Oct 24, 2022 tags: - ethics - research - nlp - local: openvino title: "Accelerate your models with 🤗 Optimum Intel and OpenVINO" author: echarlaix thumbnail: /blog/assets/113_openvino/thumbnail.png date: November 2, 2022 tags: - hardware - intel - guide - local: fine-tune-whisper title: "Fine-Tune Whisper with 🤗 Transformers" author: sanchit-gandhi thumbnail: /blog/assets/111_fine_tune_whisper/thumbnail.jpg date: Nov 3, 2022 tags: - guide - audio - local: dreambooth title: "Training Stable Diffusion with Dreambooth using 🧨 Diffusers" author: valhalla thumbnail: /blog/assets/sd_dreambooth_training/thumbnail.jpg date: November 7, 2022 tags: - diffusers - stable-diffusion - dreambooth - fine-tuning - guide - local: pricing-update title: "Introducing our new pricing" author: sbrandeis thumbnail: /blog/assets/114_pricing-update/thumbnail.png date: November 8, 2022 tags: - announcement - local: introducing-csearch title: "Generating Human-level Text with Contrastive Search in Transformers 🤗" author: yxuansu thumbnail: /blog/assets/115_introducing_contrastive_search/thumbnail.png date: Nov 8, 2022 tags: - nlp - text generation - research - local: sentiment-analysis-fhe title: "Sentiment Classification with Fully Homomorphic Encryption using Concrete ML" author: jfrery-zama thumbnail: /blog/assets/sentiment-analysis-fhe/thumbnail.png date: November 17, 2022 tags: - guide - privacy - research - FHE - local: arxiv title: "Hugging Face Machine Learning Demos on arXiv" author: abidlabs thumbnail: /blog/assets/arxiv/thumbnail.png date: Nov 17, 2022 tags: - research - community - local: ml-director-insights-4 title: "Director of Machine Learning Insights [Part 4]" author: Violette thumbnail: /blog/assets/78_ml_director_insights/part4.png date: November 23, 2022 tags: - community - research - local: inference-update title: "An Overview of Inference Solutions on Hugging Face" author: julsimon thumbnail: /blog/assets/116_inference_update/widget.png date: Nov 21, 2022 tags: - guide - inference - local: document-ai title: "Accelerating Document AI" author: rajistics thumbnail: /blog/assets/112_document-ai/thumbnail.png date: Nov 21, 2022 tags: - guide - expert-acceleration-program - case-studies - local: diffusion-models-event title: "Diffusion Models Live Event" author: lewtun thumbnail: /blog/assets/diffusion-models-event/thumbnail.png date: Nov 25, 2022 tags: - diffusion - nlp - text to image - clip - stable-diffusion - dalle - local: interns-2023 title: "We are hiring interns!" 
author: douwekiela thumbnail: /blog/assets/interns-2023/thumbnail.png date: November 29, 2022 tags: - community - announcement - local: vq-diffusion title: "VQ Diffusion with 🧨 Diffusers" author: williamberman thumbnail: /blog/assets/117_vq_diffusion/thumbnail.png date: November 30, 2022 tags: - diffusers - diffusion - text-to-image - local: time-series-transformers title: "Probabilistic Time Series Forecasting with 🤗 Transformers" author: nielsr thumbnail: /blog/assets/118_time-series-transformers/thumbnail.png date: December 1, 2022 tags: - research - time-series - local: diffusers-coreml title: "Using Stable Diffusion with Core ML on Apple Silicon" author: pcuenca thumbnail: /blog/assets/diffusers_coreml/thumbnail.png date: December 1, 2022 tags: - coreml - diffusers - stable-diffusion - diffusion - local: deep-learning-with-proteins title: "Deep Learning with Proteins" author: rocketknight1 thumbnail: /blog/assets/119_deep_learning_with_proteins/folding_example.png date: December 2, 2022 tags: - guide - fine-tuning - local: elixir-bumblebee title: "From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community" author: josevalim thumbnail: /blog/assets/120_elixir-bumblebee/thumbnail.png date: December 9, 2022 tags: - elixir - transformers - stable-diffusion - nlp - open-source-collab - local: rlhf title: "Illustrating Reinforcement Learning from Human Feedback (RLHF)" author: natolambert thumbnail: /blog/assets/120_rlhf/thumbnail.png date: December 9, 2022 tags: - rlhf - rl - guide - local: habana-gaudi-2-benchmark title: "Faster Training and Inference: Habana Gaudi®2 vs Nvidia A100 80GB" author: regisss thumbnail: /blog/assets/habana-gaudi-2-benchmark/thumbnail.png date: December 14, 2022 tags: - partnerships - habana - local: audio-datasets title: "A Complete Guide to Audio Datasets" author: sanchit-gandhi thumbnail: /blog/assets/116_audio_datasets/thumbnail.jpg date: Dec 15, 2022 tags: - guide - audio - local: ethics-soc-2 title: "Ethics and Society Newsletter #2: Let's talk about bias!" author: yjernite thumbnail: /blog/assets/122_ethics_soc_2/thumbnail-solstice.png date: Dec 15, 2022 tags: - ethics - local: model-cards title: "Model Cards: Introducing HF Model documentation tools" author: Ezi thumbnail: /blog/assets/121_model-cards/thumbnail.png date: December 20, 2022 tags: - community - research - ethics - guide - local: clipseg-zero-shot title: "Zero-shot image segmentation with CLIPSeg" author: tobiasc thumbnail: /blog/assets/123_clipseg-zero-shot/thumb.png date: December 21, 2022 tags: - guide - partnerships - cv - clip - local: intel-sapphire-rapids title: "Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 1" author: juliensimon thumbnail: /blog/assets/124_intel_sapphire_rapids/02.png date: January 2, 2023 tags: - guide - intel - hardware - partnerships - local: ml-for-games-1 title: "AI for Game Development: Creating a Farming Game in 5 Days. Part 1" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail.png date: January 2, 2023 tags: - community - stable-diffusion - guide - game-dev - local: intro-graphml title: "Introduction to Graph Machine Learning" author: clefourrier thumbnail: /blog/assets/125_intro-to-graphml/thumbnail.png date: January 3, 2023 tags: - community - guide - graphs - local: ml-for-games-2 title: "AI for Game Development: Creating a Farming Game in 5 Days. 
Part 2" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail2.png date: January 9, 2023 tags: - community - guide - game-dev - local: image-similarity title: "Image Similarity with Hugging Face Datasets and Transformers" author: sayakpaul thumbnail: /blog/assets/image_similarity/thumbnail.png date: Jan 16, 2023 tags: - guide - cv - local: paddlepaddle title: "Welcome PaddlePaddle to the Hugging Face Hub" author: paddlepaddle guest: true thumbnail: /blog/assets/126_paddlepaddle/thumbnail.jpg date: January 17, 2023 tags: - open-source-collab - nlp - local: mask2former title: "Universal Image Segmentation with Mask2Former and OneFormer" author: nielsr thumbnail: /blog/assets/127_mask2former/thumbnail.png date: Jan 19, 2023 tags: - cv - guide - local: ml-for-games-3 title: "3D Asset Generation: AI for Game Development #3" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail3.png date: January 20, 2023 tags: - community - guide - game-dev - local: optimum-onnxruntime-training title: "Optimum+ONNX Runtime - Easier, Faster training for your Hugging Face models" author: Jingya thumbnail: /blog/assets/optimum_onnxruntime-training/thumbnail.png date: January 24, 2023 tags: - guide - community - onnxruntime - local: dialog-agents title: "What Makes a Dialog Agent Useful?" author: nazneen thumbnail: /blog/assets/dialog-agents/thumbnail.png date: January 24, 2023 tags: - rlhf - ChatGPT - cot - ift - sft - local: lora title: "Using LoRA for Efficient Stable Diffusion Fine-Tuning" author: pcuenq thumbnail: /blog/assets/lora/thumbnail.png date: January 26, 2023 tags: - diffusers - stable-diffusion - dreambooth - fine-tuning - guide - local: ml-for-games-4 title: "2D Asset Generation: AI for Game Development #4" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail4.png date: January 26, 2023 tags: - community - guide - game-dev - local: cv_state title: "The State of Computer Vision at Hugging Face 🤗" author: sayakpaul thumbnail: /blog/assets/cv_state/thumbnail.png date: January 30, 2023 tags: - community - guide - cv - local: vision_language_pretraining title: "A Dive into Pretraining Strategies for Vision-Language Models" author: adirik thumbnail: /blog/assets/128_vision_language_pretraining/thumbnail.png date: February 03, 2023 tags: - cv - guide - multimodal - local: intel-sapphire-rapids-inference title: "Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 2" author: juliensimon thumbnail: /blog/assets/129_intel_sapphire_rapids_inference/01.png date: February 6, 2023 tags: - guide - intel - hardware - partnerships - local: aivsai title: "Introducing ⚔️ AI vs. 
AI ⚔️ a deep reinforcement learning multi-agents competition system" author: CarlCochet thumbnail: /blog/assets/128_aivsai/thumbnail.png date: February 07, 2023 tags: - rl - local: ml-for-games-5 title: "Generating Stories: AI for Game Development #5" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail5.png date: February 07, 2023 tags: - community - guide - game-dev - local: speecht5 title: "Speech Synthesis, Recognition, and More With SpeechT5" author: Matthijs thumbnail: /blog/assets/speecht5/thumbnail.png date: February 8, 2023 tags: - guide - audio - local: peft title: "🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware" author: smangrul thumbnail: /blog/assets/130_peft/thumbnail.png date: February 10, 2023 tags: - guide - nlp - cv - multimodal - fine-tuning - community - dreambooth - local: mantis-case-study title: "Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too" author: mattupson guest: true thumbnail: /blog/assets/78_ml_director_insights/mantis1.png date: February 15, 2023 tags: - case-studies - local: blip-2 title: "Zero-shot image-to-text generation with BLIP-2" author: MariaK thumbnail: /blog/assets/blip-2/thumbnail.png date: February 15, 2023 tags: - guide - nlp - cv - multimodal - local: aws-partnership title: "Hugging Face and AWS partner to make AI more accessible" author: jeffboudier thumbnail: /blog/assets/131_aws-partnership/aws-partnership-thumbnail.png date: February 21, 2023 tags: - partnerships - aws - nlp - cv - local: fast-mac-diffusers title: "Swift Diffusers: Fast Stable Diffusion for Mac" author: pcuenq thumbnail: /blog/assets/fast-mac-diffusers/thumbnail.png date: February 24, 2023 tags: - coreml - diffusers - stable-diffusion - diffusion - local: red-teaming title: "Red-Teaming Large Language Models" author: nazneen thumbnail: /blog/assets/red-teaming/thumbnail.png date: February 24, 2023 tags: - llms - rlhf - red-teaming - chatgpt - safety - alignment - local: classification-use-cases title: "How Hugging Face Accelerated Development of Witty Works Writing Assistant" author: Violette thumbnail: /blog/assets/78_ml_director_insights/witty-works.png date: March 1, 2023 tags: - nlp - case-studies - local: ethics-diffusers title: "Ethical guidelines for developing the Diffusers library" author: giadap thumbnail: /blog/assets/ethics-diffusers/thumbnail.png date: March 2, 2023 tags: - ethics - diffusers - local: controlnet title: "ControlNet in Diffusers 🧨" author: sayakpaul thumbnail: /blog/assets/controlnet/thumbnail.png date: March 3, 2023 tags: - diffusers - local: using-ml-for-disasters title: "Using Machine Learning to Aid Survivors and Race through Time" author: merve thumbnail: /blog/assets/using-ml-for-disasters/thumbnail.png date: March 3, 2023 tags: - nlp - transformers - object-detection - local: vit-align title: "New ViT and ALIGN Models From Kakao Brain" author: adirik thumbnail: /blog/assets/132_vit_align/thumbnail.png date: March 6, 2023 tags: - cv - guide - partnerships - multimodal - local: trl-peft title: "Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU" author: edbeeching thumbnail: /blog/assets/133_trl_peft/thumbnail.png date: March 9, 2023 tags: - rl - rlhf - nlp - local: informer title: "Multivariate Probabilistic Time Series Forecasting with Informer" author: elisim thumbnail: /blog/assets/134_informer/thumbnail.png date: March 10, 2023 tags: - guide - research - time-series - local: notebooks-hub title: "Jupyter X Hugging Face" author: davanstrien 
thumbnail: /blog/assets/135_notebooks-hub/before_after_notebook_rendering.png date: March 23, 2023 tags: - partnerships - announcement - local: train-your-controlnet title: "Train your ControlNet with diffusers" author: multimodalart thumbnail: /blog/assets/136_train-your-controlnet/thumbnail.png date: March 24, 2023 tags: - guide - diffusion - stable-diffusion - local: fl-with-flower title: "Federated Learning using Hugging Face and Flower" author: charlesbvll guest: true thumbnail: /blog/assets/fl-with-flower/thumbnail.png date: March 27, 2023 tags: - nlp - transformers - guide - flower - federated-learning - fl - open-source-collab - local: stable-diffusion-inference-intel title: "Accelerating Stable Diffusion Inference on Intel CPUs" author: juliensimon thumbnail: /blog/assets/136_stable_diffusion_inference_intel/01.png date: March 28, 2023 tags: - hardware - intel - guide - local: habana-gaudi-2-bloom title: "Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator" author: regisss thumbnail: /blog/assets/habana-gaudi-2-bloom/thumbnail.png date: March 28, 2023 tags: - habana - partnerships - hardware - nlp - llm - bloom - inference - local: ethics-soc-3 title: "Ethics and Society Newsletter #3: Ethical Openness at Hugging Face" author: irenesolaiman thumbnail: /blog/assets/137_ethics_soc_3/ethics_3_thumbnail.png date: Mar 30, 2023 tags: - ethics - local: stackllama title: "StackLLaMA: A hands-on guide to train LLaMA with RLHF" author: edbeeching thumbnail: /blog/assets/138_stackllama/thumbnail.png date: April 5, 2023 tags: - rl - rlhf - nlp - local: snorkel-case-study title: "Snorkel AI x Hugging Face: unlock foundation models for enterprises" author: Violette thumbnail: /blog/assets/78_ml_director_insights/snorkel.png date: April 6, 2023 tags: - case-studies - local: owkin-substra title: "Creating Privacy Preserving AI with Substra" author: EazyAl thumbnail: /blog/assets/139_owkin-substra/thumbnail.png date: April 12, 2023 tags: - cv - federated-learning - fl - open-source-collab - local: graphml-classification title: "Graph Classification with Transformers" author: clefourrier thumbnail: /blog/assets/125_intro-to-graphml/thumbnail_classification.png date: April 14, 2023 tags: - community - guide - graphs - local: accelerate-transformers-with-inferentia2 title: "Accelerating Hugging Face Transformers with AWS Inferentia2" author: philschmid thumbnail: /blog/assets/140_accelerate_transformers_with_inferentia2/thumbnail.png date: April 17, 2023 tags: - partnerships - aws - nlp - cv - local: unity-in-spaces title: "How to host a Unity game in a Space" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/unity-in-spaces-thumbnail.png date: April 21, 2023 tags: - community - guide - game-dev - local: chinese-language-blog title: "Introducing HuggingFace blog for Chinese speakers: Fostering Collaboration with the Chinese AI community" author: xianbao thumbnail: /blog/assets/chinese-language-blog/thumbnail.png date: April 24, 2023 tags: - partnerships - community - local: databricks-case-study title: "Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models" author: alighodsi guest: true thumbnail: /blog/assets/78_ml_director_insights/databricks.png date: April 26, 2023 tags: - case-studies - local: tf_tpu title: "Training a language model with 🤗 Transformers using TensorFlow and TPUs" author: rocketknight1 thumbnail: /blog/assets/tf_tpu_training/thumbnail.png date: April 27, 2023 tags: - nlp - guide - tensorflow - tpu - local: if 
title: "Running IF with 🧨 diffusers on a Free Tier Google Colab" author: williamberman thumbnail: /blog/assets/if/thumbnail.jpg date: April 26, 2023 tags: - guide - diffusion - local: unity-api title: "How to Install and Use the Hugging Face Unity API" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/unity-api-thumbnail.png date: May 1, 2023 tags: - community - guide - game-dev - local: starcoder title: "StarCoder: A State-of-the-Art LLM for Code" author: lvwerra thumbnail: /blog/assets/141_starcoder/starcoder_thumbnail.png date: May 4, 2023 tags: - nlp - community - research - local: text-to-video title: "A Dive into Text-to-Video Models" author: adirik thumbnail: /blog/assets/140_text-to-video/thumbnail.png date: May 8, 2023 tags: - multi-modal - cv - guide - diffusion - text-to-image - text-to-video - local: starchat-alpha title: "Creating a Coding Assistant with StarCoder" author: lewtun thumbnail: /blog/assets/starchat_alpha/thumbnail.png date: May 9, 2023 tags: - nlp - community - research - local: assisted-generation title: "Assisted Generation: a new direction toward low-latency text generation" author: joaogante thumbnail: /blog/assets/assisted-generation/thumbnail.png date: May 11, 2023 tags: - nlp - research - local: rwkv title: "Introducing RWKV — An RNN with the advantages of a transformer" author: BlinkDL thumbnail: /blog/assets/142_rwkv/rwkv_thumbnail.png date: May 15, 2023 tags: - nlp - community - research - local: chatbot-amd-gpu title: "Run a Chatgpt-like Chatbot on a Single GPU with ROCm" thumbnail: /blog/assets/chatbot-amd-gpu/thumbnail.png author: andyll7772 date: May 15, 2023 tags: - guide - llm - nlp - inference - rocm - local: generative-ai-models-on-intel-cpu title: "Smaller is better: Q8-Chat, an efficient generative AI experience on Xeon" thumbnail: /blog/assets/143_q8chat/thumbnail.png author: andyll7772 date: May 16, 2023 tags: - llm - nlp - inference - intel - quantization - local: dedup title: "Large-scale Near-deduplication Behind BigCode" author: chenghao guest: true thumbnail: /blog/assets/dedup/thumbnail.png date: May 16, 2023 tags: - bigcode - deduplication - local: instruction-tuning-sd title: "Instruction-tuning Stable Diffusion with InstructPix2Pix" author: sayakpaul thumbnail: /blog/assets/instruction_tuning_sd/thumbnail.png date: May 23, 2023 tags: - diffusers - diffusion - instruction-tuning - research - guide - local: safetensors-security-audit title: "Safetensors audited as really safe and becoming the default" author: Narsil thumbnail: /blog/assets/142_safetensors_official/thumbnail.png date: May 23, 2023 tags: - pickle - serialization - load times - local: huggingface-and-ibm title: "Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders" author: juliensimon thumbnail: /blog/assets/144_ibm/01.png date: May 23, 2023 tags: - cloud - ibm - partnership - local: hugging-face-endpoints-on-azure title: "Hugging Face Collaborates with Microsoft to Launch Hugging Face Model Catalog on Azure" author: philschmid thumbnail: /blog/assets/75_hugging_face_endpoints_on_azure/01.jpg date: May 24, 2023 tags: - cloud - azure - partnership - local: 4bit-transformers-bitsandbytes title: "Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA" author: ybelkada thumbnail: /blog/assets/96_hf_bitsandbytes_integration/Thumbnail_blue.png date: May 24, 2023 tags: - transformers - quantization - bitsandbytes - 4bit - local: train-optimize-sd-intel title: "Optimizing Stable Diffusion for Intel 
CPUs with NNCF and 🤗 Optimum" author: AlexKoff88 thumbnail: /blog/assets/train_optimize_sd_intel/thumbnail.png date: May 25, 2023 tags: - diffusers - cpu - intel - guide - quantization - local: bertopic title: "Introducing BERTopic Integration with Hugging Face Hub" author: davanstrien thumbnail: /blog/assets/145_bertopic/logo.png date: May 31, 2023 tags: - guide - open-source-collab - community - local: sagemaker-huggingface-llm title: "Introducing the Hugging Face LLM Inference Container for Amazon SageMaker" author: philschmid thumbnail: /blog/assets/145_sagemaker-huggingface-llm/thumbnail.jpg date: May 31, 2023 tags: - cloud - aws - partnership - guide - local: cnil title: "Hugging Face Selected for the French Data Protection Agency Enhanced Support Program" author: yjernite thumbnail: /blog/assets/146_cnil-accompaniment/logo.png date: May 15, 2023 tags: - ethics - local: game-jam title: "Announcing the Open Source AI Game Jam 🎮" author: ThomasSimonini thumbnail: /blog/assets/145_gamejam/thumbnail.png date: June 1, 2023 tags: - community - local: unity-asr title: "AI Speech Recognition in Unity" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/unity-asr-thumbnail.png date: June 2, 2023 tags: - community - guide - game-dev - local: falcon title: "The Falcon has landed in the Hugging Face ecosystem" author: lvwerra thumbnail: /blog/assets/147_falcon/falcon_thumbnail.jpg date: June 5, 2023 tags: - nlp - community - research - local: fasttext title: "Welcome fastText to the 🤗 Hub" author: sheonhan thumbnail: /blog/assets/147_fasttext/thumbnail.png date: June 6, 2023 tags: - open-source-collab - nlp - partnerships - local: hub-duckdb title: "DuckDB: run SQL queries on 50,000+ datasets on the Hugging Face Hub" author: stevhliu thumbnail: /blog/assets/hub_duckdb/hub_duckdb.png date: June 7, 2023 tags: - guide - local: hf-hub-glam-guide title: "The Hugging Face Hub for Galleries, Libraries, Archives and Museums" author: davanstrien thumbnail: /blog/assets/144_hf_hub_glam_guide/thumbnail.png date: June 12, 2023 tags: - community - guide - local: open-llm-leaderboard-rlhf title: "Can foundation models label data like humans?" 
author: nazneen thumbnail: /blog/assets/llm-leaderboard/leaderboard-thumbnail.png date: June 12, 2023 tags: - nlp - evaluation - leaderboard - local: huggingface-and-amd title: "Hugging Face and AMD partner on accelerating state-of-the-art models for CPU and GPU platforms" author: juliensimon thumbnail: /blog/assets/148_huggingface_amd/01.png date: June 13, 2023 tags: - hardware - amd - partnership - local: content-guidelines-update title: "Announcing our new Content Guidelines and Policy" author: giadap thumbnail: /blog/assets/content-guidelines-blogpost/thumbnail.png date: June 15, 2023 tags: - community - ethics - local: livebook-app-deployment title: "Deploy Livebook notebooks as apps to Hugging Face Spaces" author: josevalim thumbnail: /blog/assets/120_elixir-bumblebee/thumbnail.png date: Jun 15, 2023 tags: - elixir - notebooks - spaces - whisper - local: fast-diffusers-coreml title: "Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac" author: pcuenq thumbnail: /blog/assets/149_fast_diffusers_coreml/thumbnail.png date: June 15, 2023 tags: - coreml - diffusers - stable-diffusion - diffusion - quantization - local: autoformer title: "Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)" author: elisim thumbnail: /blog/assets/150_autoformer/thumbnail.png date: June 16, 2023 tags: - guide - research - time-series - local: policy-ntia-rfc title: "AI Policy @🤗: Response to the U.S. NTIA's Request for Comment on AI Accountability" author: yjernite thumbnail: /blog/assets/151_policy_ntia_rfc/us_policy_thumbnail.png date: June 20, 2023 tags: - community - ethics - local: mms_adapters title: "Fine-tuning MMS Adapter Models for Multi-Lingual ASR" author: patrickvonplaten thumbnail: /blog/assets/151_mms/mms_map.png date: June 19, 2023 tags: - audio - research - local: panel-on-hugging-face title: "Panel on Hugging Face" author: sophiamyang thumbnail: /blog/assets/panel-on-hugging-face/thumbnail.png date: June 22, 2023 tags: - open-source-collab - panel - deployment - spaces - visualization - apps - local: open-llm-leaderboard-mmlu title: "What's going on with the Open LLM Leaderboard?" 
author: clefourrier thumbnail: /blog/assets/evaluating-mmlu-leaderboard/thumbnail.png date: June 23, 2023 tags: - community - research - nlp - evaluation - open-llm-leaderboard - leaderboard - local: ethics-soc-4 title: "Ethics and Society Newsletter #4: Bias in Text-to-Image Models" author: sasha thumbnail: /blog/assets/152_ethics_soc_4/ethics_4_thumbnail.png date: June 26, 2023 tags: - ethics - local: bridgetower title: "Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2" author: regisss thumbnail: /blog/assets/bridgetower/thumbnail.png date: June 29, 2023 tags: - partnerships - multimodal - nlp - cv - hardware - local: writer-case-study title: "Leveraging Hugging Face for complex generative AI use cases" author: jeffboudier thumbnail: /blog/assets/78_ml_director_insights/writer.png date: July 1, 2023 tags: - case-studies - local: text-to-webapp title: "Making a web app generator with open ML models" author: jbilcke-hf thumbnail: /blog/assets/153_text_to_webapp/thumbnail.jpg date: July 3, 2023 tags: - guide - llm - apps - local: inference-endpoints-llm title: "Deploy LLMs with Hugging Face Inference Endpoints" author: philschmid thumbnail: /blog/assets/155_inference_endpoints_llm/thumbnail.jpg date: July 4, 2023 tags: - guide - llm - apps - inference - local: ml-web-games title: "Making ML-powered web games with Transformers.js" author: Xenova thumbnail: /blog/assets/ml-web-games/thumbnail.png date: July 5, 2023 tags: - game-dev - guide - web - javascript - transformers.js - local: stable-diffusion-finetuning-intel title: "Fine-tuning Stable Diffusion models on Intel CPUs" author: juliensimon thumbnail: /blog/assets/stable-diffusion-finetuning-intel/dicoo_image.png date: July 14, 2023 tags: - guide - intel - hardware - partnerships - local: os-llms title: "Open-Source Text Generation & LLM Ecosystem at Hugging Face" author: merve thumbnail: /blog/assets/os_llms/thumbnail.png date: July 17, 2023 tags: - LLM - inference - nlp - local: ai-webtv title: "Building an AI WebTV" author: jbilcke-hf thumbnail: /blog/assets/156_ai_webtv/thumbnail.gif date: July 17, 2023 tags: - text-to-video - guide - local: llama2 title: "Llama 2 is here - get it on Hugging Face" author: osanseviero thumbnail: /blog/assets/llama2/thumbnail.jpg date: July 18, 2023 tags: - nlp - community - research - LLM - local: diffusers-turns-1 title: "Happy 1st anniversary 🤗 Diffusers!" 
author: stevhliu thumbnail: /blog/assets/diffusers-turns-1/diffusers-turns-1.png date: July 20, 2023 tags: - community - open-source-collab - diffusion - diffusers - local: game-jam-first-edition-results title: "Results of the Open Source AI Game Jam" author: ThomasSimonini thumbnail: /blog/assets/game-jam-first-edition-results/thumbnail.jpg date: July 21, 2023 tags: - game-dev - local: agents-js title: "Introducing Agents.js: Give tools to your LLMs using JavaScript" author: nsarrazin thumbnail: /blog/assets/agents-js/thumbnail.png date: July 24, 2023 tags: - agents - javascript - web - local: eu-ai-act-oss title: "AI Policy @🤗: Open ML Considerations in the EU AI Act" author: yjernite thumbnail: /blog/assets/eu_ai_act_oss/thumbnailEU.png date: July 24, 2023 tags: - ethics - local: stable-diffusion-xl-coreml title: "Stable Diffusion XL on Mac with Advanced Core ML Quantization" author: pcuenq thumbnail: /blog/assets/stable-diffusion-xl-coreml/thumbnail.png date: July 27, 2023 tags: - coreml - stable-diffusion - stable-diffusion-xl - diffusers - local: sd_distillation title: "Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny" author: harishsegmind guest: true thumbnail: /blog/assets/distill_sd/thumbnail.png date: August 1, 2023 tags: - stable-diffusion - research - diffusers - local: 3d-assets title: "Practical 3D Asset Generation: A Step-by-Step Guide" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail-3d.jpg date: August 01, 2023 tags: - community - guide - cv - diffusion - game-dev - local: encrypted-llm title: "Towards Encrypted Large Language Models with FHE" author: RomanBredehoft guest: true thumbnail: /blog/assets/encrypted-llm/thumbnail.png date: August 02, 2023 tags: - guide - privacy - research - FHE - llm - local: huggy-lingo title: "Huggy Lingo: Using Machine Learning to Improve Language Metadata on the Hugging Face Hub" author: davanstrien thumbnail: /blog/assets/156_huggylingo/Huggy_Lingo.png date: August 2, 2023 tags: - announcement - research - local: run-musicgen-as-an-api title: "Deploy MusicGen in no time with Inference Endpoints" author: reach-vb thumbnail: /blog/assets/run-musicgen-as-an-api/thumbnail.png date: August 4, 2023 tags: - audio - guide - local: swift-coreml-llm title: "Releasing Swift Transformers: Run On-Device LLMs in Apple Devices" author: pcuenq thumbnail: /blog/assets/swift-coreml-llm/thumbnail.png date: August 8, 2023 tags: - guide - coreml - llm - swift - local: dpo-trl title: "Fine-tune Llama 2 with DPO" author: kashif thumbnail: /blog/assets/157_dpo_trl/dpo_thumbnail.png date: August 8, 2023 tags: - rl - rlhf - nlp - local: deploy-deepfloydif-using-bentoml title: "Deploying Hugging Face Models with BentoML: DeepFloyd IF in Action" author: Sherlockk guest: true thumbnail: /blog/assets/deploy-deepfloydif-using-bentoml/thumbnail.png date: August 9, 2023 tags: - deployment - open-source-collab - bentoml - guide - diffusers - local: optimizing-bark title: "Optimizing Bark using 🤗 Transformers" author: ylacombe thumbnail: /blog/assets/bark_optimization/thumbnail.png date: August 9, 2023 tags: - text-to-speech - optimization - benchmark - bark - local: aws-marketplace title: "Hugging Face Platform on the AWS Marketplace: Pay with your AWS Account" author: philschmid thumbnail: /blog/assets/158_aws_marketplace/thumbnail.jpg date: August 10, 2023 tags: - guide - announcement - partnerships - aws - local: idefics title: "Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Language Model" 
author: VictorSanh thumbnail: /blog/assets/idefics/thumbnail.png date: August 22, 2023 tags: - research - nlp - cv - local: safecoder title: "Introducing SafeCoder" author: jeffboudier thumbnail: /blog/assets/159_safecoder/thumbnail.jpg date: August 22, 2023 tags: - announcement - partnerships - vmware - bigcode - local: gptq-integration title: "Making LLMs lighter with AutoGPTQ and transformers" author: marcsun13 thumbnail: /blog/assets/159_autogptq_transformers/thumbnail.jpg date: August 23, 2023 tags: - llm - optimization - quantization - local: password-git-deprecation title: "Deprecation of Git Authentication using password" author: Sylvestre thumbnail: /blog/assets/password-git-deprecation/thumbnail.png date: August 25, 2023 tags: - announcement - security - local: codellama title: "Code Llama: Llama 2 learns to code" author: philschmid thumbnail: /blog/assets/160_codellama/thumbnail.jpg date: August 25, 2023 tags: - nlp - community - research - LLM - local: audioldm2 title: "AudioLDM 2, but faster ⚡️" author: sanchit-gandhi thumbnail: /blog/assets/161_audioldm2/thumbnail.png date: Aug 30, 2023 tags: - guide - audio - diffusers - diffusion - local: fetch-case-study title: "Fetch Cuts ML Processing Latency by 50% Using Amazon SageMaker & Hugging Face" author: Violette thumbnail: /blog/assets/78_ml_director_insights/fetch.png date: September 1, 2023 tags: - case-studies - local: falcon-180b title: "Spread Your Wings: Falcon 180B is here" author: philschmid thumbnail: /blog/assets/162_falcon_180b/thumbnail.jpg date: September 6, 2023 tags: - nlp - community - research - LLM - local: t2i-sdxl-adapters title: "Efficient Controllable Generation for SDXL with T2I-Adapters" author: Adapter guest: true thumbnail: /blog/assets/t2i-sdxl-adapters/thumbnail.png date: September 8, 2023 tags: - guide - collaboration - diffusers - diffusion - local: safecoder-vs-closed-source-code-assistants title: "SafeCoder vs. 
Closed-source Code Assistants" author: julsimon thumbnail: /blog/assets/safecoder-vs-closed-source-code-assistants/image.png date: September 11, 2023 tags: - bigcode - local: overview-quantization-transformers title: "Overview of natively supported quantization schemes in 🤗 Transformers" author: ybelkada thumbnail: /blog/assets/163_overview_quantization_transformers/thumbnail.jpg date: September 12, 2023 tags: - llm - optimization - quantization - comparison - bitsandbytes - gptq - local: ram-efficient-pytorch-fsdp title: "Fine-tuning Llama 2 70B using PyTorch FSDP" author: smangrul thumbnail: /blog/assets/160_fsdp_llama/thumbnail.jpg date: September 13, 2023 tags: - llm - guide - nlp - local: wuerstchen title: "Introducing Würstchen: Fast Diffusion for Image Generation" author: dome272 thumbnail: /blog/assets/wuerstchen/thumbnail.jpg date: September 13, 2023 tags: - diffusion - diffusers - text-to-image - local: optimize-llm title: "Optimizing your LLM in production" author: patrickvonplaten thumbnail: /blog/assets/163_optimize_llm/optimize_llm.png date: Sep 15, 2023 tags: - nlp - research - LLM - local: object-detection-leaderboard title: "Object Detection Leaderboard" author: rafaelpadilla guest: true thumbnail: /blog/assets/object-detection-leaderboard/thumbnail.png date: September 18, 2023 tags: - community - guide - cv - leaderboard - evaluation - local: gaussian-splatting title: "Introduction to 3D Gaussian Splatting" author: dylanebert thumbnail: /blog/assets/124_ml-for-games/thumbnail-gaussian-splatting.png date: September 18, 2023 tags: - community - guide - cv - game-dev - local: rocketmoney-case-study title: "Rocket Money x Hugging Face: Scaling Volatile ML Models in Production" author: nicokuzak guest: true thumbnail: /blog/assets/78_ml_director_insights/rocketmoney.png date: September 19, 2023 tags: - case-studies - local: inference-pro title: "Inference for PROs" author: osanseviero thumbnail: /blog/assets/inference_pro/thumbnail.png date: September 22, 2023 tags: - community - guide - inference - api - llm - stable-diffusion - local: llama-sagemaker-benchmark title: "Llama 2 on Amazon SageMaker a Benchmark" author: philschmid thumbnail: /blog/assets/llama_sagemaker_benchmark/thumbnail.jpg date: September 26, 2023 tags: - guide - inference - llm - aws - local: Llama2-for-non-engineers title: "Non-engineers guide: Train a LLaMA 2 chatbot" author: 2legit2overfit thumbnail: /blog/assets/78_ml_director_insights/tuto.png date: September 28, 2023 tags: - guide - community - nlp - local: trl-ddpo title: "Finetune Stable Diffusion Models with DDPO via TRL" author: metric-space guest: true thumbnail: /blog/assets/166_trl_ddpo/thumbnail.png date: September 29, 2023 tags: - guide - diffusers - rl - rlhf - local: ethics-soc-5 title: "Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings" author: meg thumbnail: /blog/assets/164_ethics-soc-5/thumbnail.png date: September 29, 2023 tags: - ethics - local: ai-comic-factory title: "Deploying the AI Comic Factory using the Inference API" author: jbilcke-hf thumbnail: /blog/assets/165_ai_comic_factory/thumbnail.jpg date: October 2, 2023 tags: - guide - inference - api - llm - stable-diffusion - local: chat-templates title: "Chat Templates: An End to the Silent Performance Killer" author: rocketknight1 thumbnail: /blog/assets/chat-templates/thumbnail.png date: October 3, 2023 tags: - LLM - nlp - community - local: sdxl_jax title: "Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e" 
author: pcuenq thumbnail: /blog/assets/sdxl-jax/thumbnail.jpg date: October 3, 2023 tags: - sdxl - jax - stable diffusion - guide - tpu - google - local: ort-accelerating-hf-models title: "Accelerating over 130,000 Hugging Face models with ONNX Runtime" author: sschoenmeyer thumbnail: /blog/assets/ort_accelerating_hf_models/thumbnail.png date: October 4, 2023 tags: - open-source-collab - onnxruntime - onnx - inference - local: gradio-lite title: "Gradio-Lite: Serverless Gradio Running Entirely in Your Browser" author: abidlabs thumbnail: /blog/assets/167_gradio_lite/thumbnail.png date: October 19, 2023 tags: - gradio - open-source - serverless - local: simple_sdxl_optimizations title: "Exploring simple optimizations for SDXL" author: sayakpaul thumbnail: /blog/assets/simple_sdxl_optimizations/thumbnail.png date: October 24, 2023 tags: - diffusers - guide - sdxl - local: the_n_implementation_details_of_rlhf_with_ppo title: "The N Implementation Details of RLHF with PPO" author: vwxyzjn thumbnail: /blog/assets/167_the_n_implementation_details_of_rlhf_with_ppo/thumbnail.png date: October 24, 2023 tags: - research - rl - rlhf - local: inference-endpoints-embeddings title: "Deploy Embedding Models with Hugging Face Inference Endpoints" author: philschmid thumbnail: /blog/assets/168_inference_endpoints_embeddings/thumbnail.jpg date: October 24, 2023 tags: - guide - llm - apps - inference - local: scalable-data-inspection title: "Interactively explore your Huggingface dataset with one line of code" author: sps44 thumbnail: /blog/assets/scalable-data-inspection/thumbnail.png date: October 25, 2023 tags: - open-source-collab - visualization - data inspection - local: personal-copilot title: "Personal Copilot: Train Your Own Coding Assistant" author: smangrul thumbnail: /blog/assets/170_personal_copilot/thumbnail.png date: October 27, 2023 tags: - bigcode - llm - nlp - inference - guide - local: regions title: "Introducing Storage Regions on the HF Hub" author: julien-c thumbnail: /blog/assets/172_regions/thumbnail.png date: November 3, 2023 tags: - announcement - enterprise - hub - local: Lora-for-sequence-classification-with-Roberta-Llama-Mistral title: "Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with Lora" author: mehdiiraqui thumbnail: /blog/assets/Lora-for-sequence-classification-with-Roberta-Llama-Mistral/Thumbnail.png date: November 7, 2023 tags: - nlp - guide - llm - peft - local: prodigy-hf title: "Introducing Prodigy-HF: a direct integration with Hugging Face" author: koaning thumbnail: /blog/assets/171_prodigy_hf/thumbnail.png date: November 7, 2023 tags: - community - nlp - datasets - guide - local: inferentia-llama2 title: "Make your llama generation time fly with AWS Inferentia2" author: dacorvo thumbnail: /blog/assets/inferentia-llama2/thumbnail.png date: November 7, 2023 tags: - guide - text-generation - llama2 - aws - local: lcm_lora title: "SDXL in 4 steps with Latent Consistency LoRAs" author: pcuenq thumbnail: /blog/assets/lcm_sdxl/lcm_thumbnail.png date: November 9, 2023 tags: - sdxl - lcm - stable diffusion - guide - local: open-llm-leaderboard-drop title: "Open LLM Leaderboard: DROP deep dive" author: clefourrier thumbnail: /blog/assets/evaluating-mmlu-leaderboard/thumbnail.png date: December 1, 2023 tags: - community - research - nlp - evaluation - open-llm-leaderboard - leaderboard - local: lora-adapters-dynamic-loading title: "Goodbye cold boot - how we made LoRA inference 300% faster" author: raphael-gl 
thumbnail: /blog/assets/171_load_lora_adapters/thumbnail3.png date: December 5, 2023 tags: - diffusers - lora - models - inference - stable-diffusion - local: optimum-nvidia title: "Optimum-NVIDIA - Unlock blazingly fast LLM inference in just 1 line of code" author: laikh-nvidia thumbnail: /blog/assets/optimum_nvidia/hf_nvidia_banner.png date: December 5, 2023 tags: - llm - nvidia - llama - inference - optimum - local: setfit-absa title: "SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit" author: ronenlap guest: true thumbnail: /blog/assets/setfit-absa/intel_hf_logo_2.png date: December 6, 2023 tags: - research - nlp - local: huggingface-and-optimum-amd title: "AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU" author: huggingface-team thumbnail: /blog/assets/optimum_amd/amd_hf_logo_fixed.png date: December 5, 2023 tags: - llm - amd - llama - inference - optimum - rocm - text-generation - local: moe title: "Mixture of Experts Explained" author: osanseviero thumbnail: /blog/assets/moe/thumbnail.png date: December 11, 2023 tags: - moe - nlp - llm - guide - local: mixtral title: "Welcome Mixtral - a SOTA Mixture of Experts on Hugging Face" author: lewtun thumbnail: /blog/assets/mixtral/thumbnail.jpg date: December 11, 2023 tags: - mixtral - moe - nlp - llm - transformers - local: 2023-in-llms title: "2023, year of open LLMs" thumbnail: /blog/assets/cv_state/thumbnail.png author: clefourrier date: December 18, 2023 tags: - research - nlp - llm - guide - local: whisper-speculative-decoding title: "Speculative Decoding for 2x Faster Whisper Inference" author: sanchit-gandhi thumbnail: /blog/assets/whisper-speculative-decoding/thumbnail.png date: Dec 20, 2023 tags: - guide - audio - transformers - local: sdxl_lora_advanced_script title: "LoRA training scripts of the world, unite!" 
author: LinoyTsaban thumbnail: /blog/assets/dreambooth_lora_sdxl/thumbnail.png date: January 2, 2024 tags: - guide - collaboration - diffusers - diffusion - lora - dreambooth - stable-diffusion - fine-tuning - community - sdxl - local: amused title: "Welcome aMUSEd: Efficient Text-to-Image Generation" author: Isamu136 guest: true thumbnail: /blog/assets/amused/thumbnail.png date: January 4, 2024 tags: - guide - vision - research - diffusers - local: unsloth-trl title: "Faster fine-tuning using TRL & Unsloth" author: danielhanchen guest: true thumbnail: /blog/assets/hf_unsloth/thumbnail.png date: Jan 10, 2024 tags: - sft - optimization - llm - qlora - local: leaderboard-vectara title: "A guide to setting up your own Hugging Face leaderboard: an end-to-end example with Vectara's hallucination leaderboard" author: ofermend guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_vectara.png date: Jan 12, 2024 tags: - leaderboard - guide - collaboration - community - local: sdxl_ort_inference title: "Accelerating SD Turbo and SDXL Turbo Inference with ONNX Runtime and Olive" author: sschoenmeyer guest: true thumbnail: /blog/assets/optimum_onnxruntime-training/thumbnail.png date: Jan 15, 2024 tags: - stable-diffusion - diffusion - onnxruntime - optimum - collaboration - community - local: pref-tuning title: "Preference Tuning LLMs with Direct Preference Optimization Methods" author: kashif thumbnail: /blog/assets/pref-tuning/thumbnail.jpg date: Jan 18, 2024 tags: - rl - rlhf - nlp - research - local: patchtsmixer title: "PatchTSMixer in HuggingFace" author: ajati guest: true thumbnail: /blog/assets/patchtsmixer/thumbnail.jpeg date: January 19, 2024 tags: - guide - research - time-series - local: fine-tune-w2v2-bert title: "Fine-Tune W2V2-Bert for low-resource ASR with 🤗 Transformers" author: ylacombe thumbnail: /blog/assets/fine-tune-w2v2-bert/w2v_thumbnail.png date: January 19, 2024 tags: - guide - audio - asr - low-resource - local: open-source-llms-as-agents title: "Open-source LLMs as LangChain Agents" author: m-ric thumbnail: /blog/assets/open-source-llms-as-agents/thumbnail_open_source_agents.png date: January 24, 2024 tags: - mixtral - zephyr - solar - llama2 - nlp - llm - agents - langchain - benchmark - local: gcp-partnership title: "Hugging Face and Google partner for open AI collaboration" author: jeffboudier thumbnail: /blog/assets/173_gcp-partnership/thumbnail.jpg date: January 25, 2024 tags: - partnerships - gcp - hardware - local: leaderboard-decodingtrust title: "An Introduction to AI Secure LLM Safety Leaderboard" author: danielz01 guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_decodingtrust.png date: January 26, 2024 tags: - leaderboard - guide - collaboration - research - local: leaderboard-hallucinations title: "The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models" author: pminervini guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail.png date: January 29, 2024 tags: - leaderboard - guide - collaboration - research - local: intel-starcoder-quantization title: "Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding" author: ofirzaf guest: true thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png date: Jan 30, 2024 tags: - nlp - intel - quantization - optimum - collaboration - community - local: leaderboard-patronus title: "Introducing the Enterprise Scenarios Leaderboard: a Leaderboard for Real World Use Cases" thumbnail: 
/blog/assets/leaderboards-on-the-hub/thumbnail_patronus.png author: sunitha98 guest: true date: January 31, 2024 tags: - leaderboard - guide - collaboration - local: patchtst title: "Patch Time Series Transformer in Hugging Face" author: namctin guest: true thumbnail: /blog/assets/patchtst/thumbnail.png date: February 1, 2024 tags: - guide - research - time-series - local: text-generation-inference-on-inferentia2 title: "Hugging Face Text Generation Inference available for AWS Inferentia2" author: philschmid thumbnail: /blog/assets/175_text_generation_inference_on_inferentia2/thumbnail.jpg date: Feb 1, 2024 tags: - guide - partnerships - aws - llm - local: constitutional_ai title: "Constitutional AI with Open LLMs" author: vwxyzjn thumbnail: /blog/assets/175_constitutional_ai/thumbnail.png date: February 1, 2024 tags: - research - rl - rlhf - constitutional-ai - local: leaderboard-nphardeval title: "NPHardEval Leaderboard: Unveiling the Reasoning Abilities of Large Language Models through Complexity Classes and Dynamic Updates" author: lizhouf guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_nphardeval.png date: Feb 2, 2024 tags: - leaderboard - guide - collaboration - research - local: segmoe title: "SegMoE: Segmind Mixture of Diffusion Experts" author: Warlord-K guest: true thumbnail: /blog/assets/segmoe/thumbnail.png date: February 3, 2024 tags: - text-to-image - stable-diffusion - moe - segmoe - local: tgi-messages-api title: "From OpenAI to Open LLMs with Messages API" author: andrewrreed thumbnail: /blog/assets/tgi-messages-api/thumbnail.jpg date: Feb 8, 2024 tags: - guide - llm - nlp - tgi - local: amd_pervasive_developer_ai_contest title: "AMD Pervasive AI Developer Contest!" author: guruprasadmp guest: true thumbnail: /blog/assets/amd_pervasive_developer_ai_contest/amd_developer_general_abstract.jpg date: Feb 14, 2024 tags: - partner - amd - local: synthetic-data-save-costs title: "Synthetic data: save money, time and carbon with open source" author: MoritzLaurer thumbnail: /blog/assets/176_synthetic-data-save-costs/thumbnail.png date: Feb 16, 2024 tags: - guide - llm - nlp - synthetic-data - mixtral - inference-endpoints - autotrain - local: peft_merging title: "🤗 PEFT welcomes new merging methods" author: smangrul thumbnail: /blog/assets/peft_merging/thumbnail.png date: Feb 19, 2024 tags: - guide - llm - nlp - cv - lora - local: leaderboard-upstage title: "Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem" author: Chanjun guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_upstage.png date: Feb 20, 2024 tags: - leaderboard - guide - collaboration - local: gemma title: "Welcome Gemma - Google's new open LLM" author: philschmid thumbnail: /blog/assets/gemma/thumbnail.jpg date: Feb 21, 2024 tags: - nlp - community - research - LLM - gcp - local: fetch-eap-case-study title: "Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS" author: Violette thumbnail: /blog/assets/78_ml_director_insights/fetch2.png date: Feb 23, 2023 tags: - case-studies - local: matryoshka title: "🪆 Introduction to Matryoshka Embedding Models" author: tomaarsen thumbnail: /blog/assets/matryoshka/thumbnail.png date: Feb 23, 2024 tags: - nlp - community - guide - local: leaderboard-haizelab title: "Introducing the Red-Teaming Resistance Leaderboard" author: steve-sli guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_haizelab.png date: Feb 23, 2024 tags: - leaderboard - guide - collaboration 
- local: gemma-peft title: "Fine-Tuning Gemma Models in Hugging Face" author: svaibhav guest: true thumbnail: /blog/assets/gemma-peft/thumbnail.png date: Feb 23, 2024 tags: - nlp - community - research - LLM - gcp - peft - local: watermarking title: "AI Watermarking 101: Tools and Techniques" author: sasha thumbnail: /blog/assets/watermarking/thumbnail.png date: Feb 26, 2024 tags: - ethics - research - nlp - guide - local: arena-tts title: "TTS Arena: Benchmarking Text-to-Speech Models in the Wild" thumbnail: /blog/assets/arenas-on-the-hub/thumbnail.png author: mrfakename guest: true date: Feb 27, 2024 tags: - leaderboard - arena - collaboration - local: starcoder2 title: "StarCoder2 and The Stack v2" author: lvwerra thumbnail: /blog/assets/177_starcoder2/sc2-banner.png date: Feb 28, 2024 tags: - nlp - community - research - LLM - local: textgen-pipe-gaudi title: "Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator" author: siddjags guest: true thumbnail: /blog/assets/textgen-pipe-gaudi/thumbnail.png date: Feb 29, 2024 tags: - habana - partnerships - hardware - nlp - llm - inference - local: community-datasets title: "Data is better together" author: davanstrien guest: true thumbnail: /blog/assets/community-datasets/thumbnail.png date: Mar 4, 2024 tags: - community - data - collaboration - announcement - local: leaderboard-contextual title: "Introducing ConTextual: How well can your Multimodal model jointly reason over text and image in text-rich scenes?" author: rohan598 guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_contextual.png date: Mar 5, 2024 tags: - leaderboard - collaboration - research - local: websight title: "Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset" author: HugoLaurencon thumbnail: /blog/assets/websight/thumbnail.png date: Mar 15, 2024 tags: - nlp - cv - data - research - local: intel-fast-embedding title: "CPU Optimized Embeddings with 🤗 Optimum Intel and fastRAG" author: peterizsak guest: true thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png date: Mar 15, 2024 tags: - nlp - intel - quantization - optimum - collaboration - community - local: quanto-introduction title: "quanto: a pytorch quantization toolkit" author: dacorvo thumbnail: /blog/assets/169_quanto_intro/thumbnail.png date: March 18, 2024 tags: - guide - quantization - transformers - diffusers - local: train-dgx-cloud title: "Easily Train Models with H100 GPUs on NVIDIA DGX Cloud" author: philschmid thumbnail: /blog/assets/train-dgx-cloud/thumbnail.jpg date: March 18, 2024 tags: - partnerships - hardware - nvidia - llm - training - local: galore title: "GaLore: Advancing Large Model Training on Consumer-grade Hardware" author: Titus-von-Koeller thumbnail: /blog/assets/galore_introduction/thumbnail.png date: March 20, 2024 tags: - galore - peft - llm - training - local: cosmopedia title: "Cosmopedia: how to create large-scale synthetic data for pre-training Large Language Models" author: loubnabnl thumbnail: /blog/assets/cosmopedia/thumbnail.png date: March 20, 2024 tags: - guide - nlp - synthetic-data - llm - community - local: phi2-intel-meteor-lake title: "A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake" author: juliensimon thumbnail: /blog/assets/phi2-intel-meteor-lake/02.jpg date: March 20, 2024 tags: - partnerships - intel - llm - local: arena-lighthouz title: "Introducing the Chatbot Guardrails Arena" thumbnail: /blog/assets/arenas-on-the-hub/thumbnail_lighthouz.png author: sonalipnaik guest: true date: Mar 21, 2024 tags: 
- leaderboard - arena - collaboration - local: embedding-quantization title: "Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval" author: aamirshakir guest: true thumbnail: /blog/assets/embedding-quantization/thumbnail.png date: Mar 22, 2024 tags: - nlp - community - guide - collaboration - research - local: noob_intro_transformers title: "Total noob’s intro to Hugging Face Transformers" author: 2legit2overfit thumbnail: /blog/assets/78_ml_director_insights/guide.png date: March 22, 2024 tags: - guide - community - local: pollen-vision title: "Pollen-Vision: Unified interface for Zero-Shot vision models in robotics" author: apirrone guest: true thumbnail: /blog/assets/pollen-vision/thumbnail.jpg date: March 25, 2024 tags: - robotics - vision - object-detection - local: cloudflare-workers-ai title: "Bringing serverless GPU inference to Hugging Face users" author: philschmid thumbnail: /blog/assets/cloudflare-workers-ai/thumbnail.jpg date: April 2, 2024 tags: - partnerships - cloudflare - llm - inference - local: policy-blog title: "Public Policy at Hugging Face" author: irenesolaiman thumbnail: /blog/assets/policy_docs/policy_blog_thumbnail.png date: April 8, 2024 tags: - ethics - local: setfit-optimum-intel title: "Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon" author: danielkorat guest: true thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png date: April 3, 2024 tags: - nlp - intel - quantization - optimum - collaboration - community - open-source-collab - local: duckdb-nsql-7b title: "Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B" author: asoria thumbnail: /blog/assets/duckdb-nsql-7b/thumbnail.png date: April 4, 2024 tags: - guide - text2sql - datasets - llm - local: hugging-face-wiz-security-blog title: "Hugging Face partners with Wiz Research to Improve AI Security" author: JJoe206 thumbnail: /blog/assets/wiz_security/security.png date: April 4, 2024 tags: - security - local: codegemma title: "CodeGemma - an official Google release for code LLMs" author: pcuenq thumbnail: /blog/assets/codegemma/thumbnail_b.png date: April 9, 2024 tags: - nlp - community - research - LLM - gcp - local: google-cloud-model-garden title: "Making thousands of open LLMs bloom in the Vertex AI Model Garden" author: philschmid thumbnail: /blog/assets/173_gcp-partnership/thumbnail.jpg date: April 10, 2024 tags: - partnerships - gcp - hardware - local: vlms title: "Vision Language Models Explained" author: merve thumbnail: /blog/assets/vlms_explained/thumbnail.png date: April 11, 2024 tags: - vision - vlm - multimodal - guide - trl - local: idefics2 title: "Introducing Idefics2: A Powerful 8B Vision-Language Model for the community" author: Leyo thumbnail: /blog/assets/idefics/thumbnail.png date: April 15, 2024 tags: - research - nlp - cv - vlm - multimodal - local: ryght-case-study title: "Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face" author: andrewrreed thumbnail: /blog/assets/ryght-case-study/thumbnail.png date: April 16, 2024 tags: - case-studies - local: fhe-endpoints title: "Running Privacy-Preserving Inference on Hugging Face Endpoints" author: binoua guest: true thumbnail: /blog/assets/fhe-endpoints/thumbnail.png date: April 16, 2024 tags: - guide - privacy - research - FHE - local: leaderboard-livecodebench title: "Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs" author: StringChaos guest: true thumbnail: 
/blog/assets/leaderboards-on-the-hub/thumbnail.png date: Apr 16, 2024 tags: - leaderboard - research - collaboration - community - local: gradio-reload title: "AI Apps in a Flash with Gradio's Reload Mode" author: freddyaboulton thumbnail: /blog/assets/gradio-reload/thumbnail_compressed.png date: April 16, 2024 tags: - gradio - open-source - guide - demo - local: leaderboard-medicalllm title: "The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare" author: aaditya guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_medicalllm.png date: Apr 19, 2024 tags: - leaderboard - collaboration - research - local: llama3 title: "Welcome Llama 3 - Meta's new open LLM" author: philschmid thumbnail: /blog/assets/llama3/thumbnail.jpg date: April 18, 2024 tags: - nlp - community - research - LLM - local: jat title: "Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent" author: qgallouedec thumbnail: /blog/assets/jat/thumbnail.png date: April 22, 2024 tags: - imitation - rl - transformers - generalist - local: leaderboard-cot title: "Introducing the Open Chain of Thought Leaderboard" author: ggbetz guest: true org: logikon thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_cot.png date: Apr 23, 2024 tags: - leaderboard - research - collaboration - community - local: sc2-instruct title: "StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation" thumbnail: /blog/assets/sc2-instruct/sc2-instruct-banner.png author: yuxiang630 guest: true date: Apr 29, 2024 tags: - nlp - community - research - LLM - local: evaluation-structured-outputs title: "Improving Prompt Consistency with Structured Generations" author: willkurt guest: true thumbnail: /blog/assets/evaluating-mmlu-leaderboard/thumbnail.png date: Apr 30, 2024 tags: - evaluation - collaboration - research - leaderboard - local: asr-diarization title: "Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints" author: sergeipetrov thumbnail: /blog/assets/asr-diarization/thumbnail.png date: May 1, 2024 tags: - audio - asr - inference - local: leaderboard-artificial-analysis title: "Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_artificialanalysis.png author: mhillsmith guest: true date: May 3, 2024 tags: - leaderboard - research - collaboration - community - local: leaderboard-hebrew title: "Introducing the Open Leaderboard for Hebrew LLMs!" 
thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_hebrew.png author: Shaltiel guest: true date: May 05, 2024 tags: - nlp - research - leaderboard - LLM - local: cost-efficient-rag-applications-with-intel title: "Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon" author: juliensimon thumbnail: /blog/assets/cost_efficient_rag_applications_with_intel/main.jpg date: May 9, 2024 tags: - partnerships - intel - llm - local: enterprise-hub-aws-marketplace title: "Subscribe to Enterprise Hub with your AWS Account" author: jeffboudier thumbnail: /blog/assets/158_aws_marketplace/thumbnail.jpg date: May 9, 2024 tags: - guide - announcement - partnerships - aws - local: agents title: "License to Call: Introducing Transformers Agents 2.0" thumbnail: /blog/assets/agents/thumbnail.png author: m-ric date: May 13, 2024 tags: - nlp - LLM - agents - transformers - gpt - mixtral - llama3 - langchain - benchmark - local: leaderboard-arabic title: "Introducing the Open Arabic LLM Leaderboard" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_arabic.png author: Ali-C137 guest: true date: May 14, 2024 tags: - nlp - research - leaderboard - LLM - local: langchain title: "Hugging Face x LangChain : A new partner package in LangChain" author: jofthomas thumbnail: /blog/assets/langchain_huggingface/thumbnail.png date: May 14, 2024 tags: - collaboration - community - nlp - llm - local: paligemma title: "PaliGemma – Google's Cutting-Edge Open Vision Language Model" thumbnail: /blog/assets/paligemma/Paligemma.png author: merve date: May 14, 2024 tags: - multimodal - LLM - vision - local: microsoft-collaboration title: "From cloud to developers: Hugging Face and Microsoft Deepen Collaboration" thumbnail: /blog/assets/microsoft-collaboration/thumbnail.jpg author: jeffboudier date: May 21, 2024 tags: - cloud - azure - partnership - local: huggingface-amd-mi300 title: "Hugging Face on AMD Instinct MI300 GPU" thumbnail: /blog/assets/optimum_amd/amd_hf_logo_fixed.png author: mfuntowicz date: May 21, 2024 tags: - llm - amd - llama - inference - optimum - rocm - text-generation - local: dell-enterprise-hub title: "Build AI on premise with Dell Enterprise Hub" thumbnail: /blog/assets/dell-enterprise-hub/thumbnail.jpg author: jeffboudier date: May 21, 2024 tags: - announcement - enterprise - hub - dell - partnerships - local: spaces-dev-mode title: "Introducing Spaces Dev Mode for a seamless developer experience" thumbnail: /blog/assets/spaces-dev-mode/thumbnail.png author: pagezyhf date: May 21, 2024 tags: - spaces - azure - partnership - local: inferentia-inference-endpoints title: "Deploy models on AWS Inferentia2 from Hugging Face" thumbnail: /blog/assets/inferentia-inference-endpoints/thumbnail.jpg author: philschmid date: May 22, 2024 tags: - cloud - aws - partnership - optimum - local: kv-cache-quantization title: "Unlocking Longer Generation with Key-Value Cache Quantization" thumbnail: /blog/assets/kv_cache_quantization/thumbnail.png author: RaushanTurganbay date: May 16, 2024 tags: - generation - LLM - quantization - local: leaderboard-llamaguard title: "CyberSecEval 2 - A Comprehensive Evaluation Framework for Cybersecurity Risks and Capabilities of Large Language Models" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_llamaguard.png author: r34p3r1321 guest: true date: May 24, 2024 tags: - nlp - research - leaderboard - LLM - local: falcon2-11b title: "Falcon 2: An 11B parameter pretrained language model and VLM, trained on over 5000B tokens tokens and 11 
languages" author: Quent-01 guest: true thumbnail: /blog/assets/179_falcon2-11b/thumbnail.jpg date: May 24, 2024 tags: - nlp - community - research - LLM - multimodal - vision - open-source - local: train-sentence-transformers title: "Training and Finetuning Embedding Models with Sentence Transformers v3" author: tomaarsen thumbnail: /blog/assets/train-sentence-transformers/st-hf-thumbnail.png date: May 28, 2024 tags: - nlp - guide - community - open-source - local: tgi-benchmarking title: "Benchmarking Text Generation Inference" thumbnail: /blog/assets/tgi-benchmarking/tgi-benchmarking-thumbnail.png author: derek-thomas date: May 29, 2024 tags: - LLM - NLP - guide - tgi - local: space-secrets-disclosure title: "Space secrets security update" thumbnail: /blog/assets/space-secrets-security-update/space-secrets-security-update.png author: huggingface date: May 31, 2024 tags: - security - local: assisted-generation-support-gaudi title: "Faster assisted generation support for Intel Gaudi" author: haimbarad thumbnail: /blog/assets/assisted-generation-support-gaudi/thumbnail.png date: June 4, 2024 tags: - partnerships - intel - hardware - local: npc-gigax-cubzh title: "Introducing NPC-Playground, a 3D playground to interact with LLM-powered NPCs" thumbnail: /blog/assets/181_npc-gigax-cubzh/thumbnail.png author: ThomasSimonini date: June 5, 2024 tags: - game-dev - local: leaderboard-artificial-analysis2 title: "Launching the Artificial Analysis Text to Image Leaderboard & Arena" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_artificialanalysis.png author: mhillsmith guest: true date: Jun 6, 2024 tags: - leaderboard - research - collaboration - community - local: sagemaker-huggingface-embedding title: "Introducing the Hugging Face Embedding Container for Amazon SageMaker" thumbnail: /blog/assets/sagemaker-huggingface-embedding/thumbnail.jpg author: philschmid date: Jun 7, 2024 tags: - cloud - aws - partnership - guide - local: transformers-docs-redesign title: "Making sense of this mess" thumbnail: /blog/assets/transformers-docs-redesign/thumbnail.png author: stevhliu date: June 7, 2024 tags: - community - open-source - local: putting_rl_back_in_rlhf_with_rloo title: "Putting RL back in RLHF" thumbnail: /blog/assets/putting_rl_back_in_rlhf_with_rloo/thumbnail.png author: vwxyzjn date: June 12, 2024 tags: - research - rl - rlhf - local: sd3 title: "🧨 Diffusers welcomes Stable Diffusion 3" author: diffusers thumbnail: /blog/assets/sd3/thumbnail.png date: June 12, 2024 tags: - diffusers - guide - sd3 - local: deepspeed-to-fsdp-and-back title: "From DeepSpeed to FSDP and Back Again with Hugging Face Accelerate" thumbnail: /blog/assets/deepspeed-to-fsdp-and-back/thumbnail.png author: muellerzr date: June 13, 2024 tags: - open-source - guide - research - collaboration - local: leaderboard-bigcodebench title: "BigCodeBench: Benchmarking Large Language Models on Solving Practical and Challenging Programming Tasks" thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_bigcode.png author: terryyz guest: true date: Jun 18, 2024 tags: - leaderboard - research - collaboration - community - local: prezi-case-study title: "Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap" author: Violette thumbnail: /blog/assets/70_sempre_health/thumbnailprezi.jpg date: June 19, 2024 tags: - case-studies - local: dibt title: "Data Is Better Together: A Look Back and Forward" thumbnail: /blog/assets/dibt/thumbnail.png author: sdiazlor date: Jun 20, 2024 
tags: - collaboration - community - open-source - local: ethics-soc-6 title: "Ethics and Society Newsletter #6: Building Better AI: The Importance of Data Quality" thumbnail: /blog/assets/182_ethics-soc-6/thumbnail.png date: June 24, 2024 tags: - ethics author: evijit - local: finetune-florence2 title: "Fine-tuning Florence-2 - Microsoft's Cutting-edge Vision Language Models" thumbnail: /blog/assets/182_finetune-florence/thumbnail.png author: andito date: Jun 24, 2024 tags: - collaboration - community - open-source - research - local: xlscout-case-study title: "XLSCOUT Unveils ParaEmbed 2.0: a Powerful Embedding Model Tailored for Patents and IP with Expert Support from Hugging Face" author: andrewrreed thumbnail: /blog/assets/xlscout-case-study/thumbnail.png date: June 25, 2024 tags: - case-studies - local: gemma2 title: "Welcome Gemma 2 - Google's new open LLM" author: philschmid thumbnail: /blog/assets/gemma2/thumbnail.jpg date: June 27, 2024 tags: - nlp - community - research - LLM - gcp - local: beating-gaia title: "Our Transformers Code Agent beats the GAIA benchmark!" author: m-ric thumbnail: /blog/assets/beating-gaia/thumbnail.jpeg date: July 1, 2024 tags: - agents - nlp - community - research - leaderboard - local: intel-protein-language-model-protst title: "Accelerating Protein Language Model ProtST on Intel Gaudi 2" author: juliensimon thumbnail: /blog/assets/intel-protein-language-model-protst/01.jpeg date: July 3, 2024 tags: - partnerships - intel - llm - local: datasets-filters title: "Announcing New Dataset Search Features" author: lhoestq thumbnail: /blog/assets/datasets-filters/thumbnail.png date: Jul 8, 2024 tags: - datasets - local: sovereign-data-solution-case-study title: "Banque des Territoires (CDC Group) x Polyconseil x Hugging Face: Enhancing a Major French Environmental Program with a Sovereign Data Solution" author: florentgbelidji thumbnail: /blog/assets/78_ml_director_insights/cdc_poly_hf.png date: July 9, 2024 tags: - case-studies - local: tpu-inference-endpoints-spaces title: "Google Cloud TPUs made available to Hugging Face users" thumbnail: /blog/assets/tpu-inference-endpoints-spaces/thumbnail.png author: pagezyhf date: July 9, 2024 tags: - partnerships - gcp - spaces - inference - hardware - local: dpo_vlm title: "Preference Optimization for Vision Language Models" author: qgallouedec thumbnail: /blog/assets/dpo_vlm/thumbnail.png date: July 10, 2024 tags: - vlm - multimodal - trl - rlhf - dpo - local: presidio-pii-detection title: "Experimenting with Automatic PII Detection on the Hub using Presidio" author: lhoestq thumbnail: /blog/assets/presidio-pii-detection/thumbnail.png date: Jul 10, 2024 tags: - datasets - pii - local: keras-hub-integration title: "Announcing New Hugging Face and KerasHub integration" thumbnail: /blog/assets/keras-nlp-integration/thumbnail.png author: ariG23498 date: Jul 10, 2024 tags: - open-source-collab - nlp - local: winning-aimo-progress-prize title: "How NuminaMath Won the 1st AIMO Progress Prize" author: yfleureau thumbnail: /blog/assets/winning-aimo-progress-prize/thumbnail.png date: July 11, 2024 tags: - ai4math - nlp - community - research - leaderboard - open-science-collab - local: argilla-chatbot title: "How we leveraged distilabel to create an Argilla 2.0 Chatbot" thumbnail: /blog/assets/argilla-chatbot/thumbnail.png author: plaguss date: Jul 16, 2024 tags: - nlp - guide - open-source - spaces - apps - local: smollm title: "SmolLM - blazingly fast and remarkably powerful" author: loubnabnl thumbnail: 
/blog/assets/smollm/banner.png date: July 16, 2024 tags: - llm - nlp - synthetic-data - research - datasets - community - local: multi-lora-serving title: "TGI Multi-LoRA: Deploy Once, Serve 30 Models" author: derek-thomas thumbnail: /blog/assets/multi-lora-serving/thumbnail.png date: Jul 18, 2024 tags: - nlp - tgi - LLM - lora - peft - open-source - guide - local: docmatix title: "Docmatix - a huge dataset for Document Visual Question Answering" thumbnail: /blog/assets/183_docmatix/thumbnail_new.png author: andito date: Jul 18, 2024 tags: - community - datasets - synthetic-data - open-source - cv - vlm - announcement - research - local: mistral-coreml title: "WWDC 24: Running Mistral 7B with Core ML" thumbnail: /blog/assets/mistral-coreml/thumbnail.png author: FL33TW00D-HF date: Jul 22, 2024 tags: - coreml - guide - llm - swift - wwdc - local: llama31 title: "Llama 3.1 - 405B, 70B & 8B with multilinguality and long context" author: philschmid thumbnail: /blog/assets/llama31/thumbnail.jpg date: July 23, 2024 tags: - nlp - community - research - LLM - local: zero-shot-vqa-docmatix title: "LAVE: Zero-shot VQA Evaluation on Docmatix with LLMs - Do We Still Need Fine-Tuning?" author: danaaubakirova thumbnail: /blog/assets/184_zero_shot_docmatix/thumb.001.jpeg date: Jul 25, 2024 tags: - community - evaluation - synthetic-data - vqa - vlm - zero-shot - research - local: inference-dgx-cloud title: "Serverless Inference with Hugging Face and NVIDIA NIMs" author: philschmid thumbnail: /blog/assets/train-dgx-cloud/thumbnail.jpg date: July 29, 2024 tags: - partnerships - hardware - nvidia - llm - inference - local: quanto-diffusers title: "Memory-efficient Diffusion Transformers with Quanto and Diffusers" author: sayakpaul thumbnail: /blog/assets/quanto-diffusers/thumbnail.png date: July 30, 2024 tags: - diffusers - guide - diffusion-transformers - local: gemma-july-update title: "Google releases Gemma 2 2B, ShieldGemma and Gemma Scope" author: Xenova thumbnail: /blog/assets/gemma-july-update/thumbnail.jpg date: July 31, 2024 tags: - nlp - community - research - LLM - gcp - local: doc_aug_hf_alb title: "Introducing TextImage Augmentation for Document Images" author: danaaubakirova thumbnail: /blog/assets/185_albumentations/thumbnail.png date: Aug 6, 2024 tags: - document ai - data augmentation - synthetic-data - albumentations - research - local: 2024-security-features title: 2024 Security Feature Highlights author: jack-kumar thumbnail: /blog/assets/2024-security-features/thumbnail.png date: August 6, 2024 tags: - security - enterprise - local: xethub-joins-hf title: "XetHub is joining Hugging Face!" author: julien-c thumbnail: /blog/assets/xethub-joins-hf/thumbnail.png date: August 8, 2024 tags: - announcement - enterprise - hub - local: unified-tool-use title: "Tool Use, Unified" author: rocketknight1 thumbnail: /blog/assets/unified-tool-use/thumbnail.png date: August 12, 2024 tags: - LLM - nlp - community - local: falconmamba title: "Welcome FalconMamba: The first strong attention-free 7B model " guest: true author: JingweiZuo thumbnail: /blog/assets/falconmamba/thumbnail.png date: August 12, 2024 tags: - nlp - community - research - LLM - Mamba - local: introduction-to-ggml title: "Introduction to ggml" author: ngxson thumbnail: /blog/assets/introduction-to-ggml/cover.jpg date: August 13, 2024 tags: - guide - community - ggml - local: infini-attention title: "A failed experiment: Infini-Attention, and why we should keep trying?" 
author: neuralink thumbnail: /blog/assets/185_infini_attention/infini_attention_thumbnail.png date: August 14, 2024 tags: - long-context - infini-attention - memory-compression - local: llama31-on-vertex-ai title: "Deploy Meta Llama 3.1 405B on Google Cloud Vertex AI" author: alvarobartt thumbnail: /blog/assets/llama31-on-vertex-ai/thumbnail.png date: August 19, 2024 tags: - nlp - partnerships - gcp - vertex - local: packing-with-FA2 title: "Improving Hugging Face Training Efficiency Through Packing with Flash Attention" author: lwtr thumbnail: /blog/assets/packing-with-FA2/thumbnail.png date: August 21, 2024 tags: - padding - packing - Flash Attention 2 - local: unsung-heroes title: "The 5 Most Under-Rated Tools on Hugging Face" author: derek-thomas thumbnail: /blog/assets/unsung-heroes/new-thumbnail.png date: August 22, 2024 tags: - hub - api - apps - datasets - enterprise - nlp - visualization - nomic - atlas - guide - local: video-encoding title: "Scaling robotics datasets with video encoding" author: aliberts thumbnail: /blog/assets/video-encoding/thumbnail.png date: August 27, 2024 tags: - video - datasets - robotics - local: trufflesecurity-partnership title: "Hugging Face partners with TruffleHog to Scan for Secrets" author: mcpotato thumbnail: /blog/assets/trufflesecurity-partnership/thumbnail.png date: September 4, 2024 tags: - hub - partnerships - security - local: accelerate-v1 title: "Accelerate 1.0.0" author: muellerzr thumbnail: /blog/assets/186_accelerate_v1/accelerate_v1_thumbnail.png date: September 13, 2024 tags: - guide - local: community-tools title: "Introducing Community Tools on HuggingChat" author: nsarrazin thumbnail: /blog/assets/community-tools/thumbnail.png date: September 16, 2024 tags: - huggingchat - tools - community - local: sql-console title: "Introducing the SQL Console on Datasets" author: cfahlgren1 thumbnail: /blog/assets/sql_console/thumbnail.png date: September 17, 2024 tags: - datasets - sql - duckdb - local: 1_58_llm_extreme_quantization title: "Fine-tuning LLMs to 1.58bit: extreme quantization made easy" author: medmekk thumbnail: /blog/assets/1_58_llm_extreme_quantization/thumbnail.png date: September 18, 2024 tags: - nlp - research - community - local: deploy-with-openvino title: "Optimize and deploy models with Optimum-Intel and OpenVINO GenAI" author: AlexKoff88 thumbnail: /blog/assets/deploy-with-openvino/openvino_genai_workflow.png date: September 20, 2024 tags: - intel - optimum - quantization - inference - local: daily-papers title: "Exploring the Daily Papers Page on Hugging Face" author: AdinaY thumbnail: /blog/assets/daily-papers/thumbnail.png date: September 23, 2024 tags: - research - community - local: fine-video title: "FineVideo: behind the scenes" author: mfarre thumbnail: /blog/assets/186_fine_video/thumbnail.png date: September 23, 2024 tags: - video - datasets - multimodal - local: llama32 title: "Llama can now see and run on your device - welcome Llama 3.2" author: merve thumbnail: /blog/assets/llama32/thumbnail.jpg date: September 25, 2024 tags: - multimodal - on-device - llm - nlp - vision - local: vertex-colored-to-textured-mesh title: "Converting Vertex-Colored Meshes to Textured Meshes" author: dylanebert thumbnail: /blog/assets/vertex-colored-to-textured-mesh/thumbnail.png date: September 30, 2024 tags: - vision - 3d - mesh - tutorial - local: benczechmark title: "🇨🇿 BenCzechMark - Can your LLM Understand Czech?" 
author: mfajcik thumbnail: /blog/assets/187_benczechmark/thumbnail.png date: October 1, 2024 tags: - nlp - research - leaderboard - LLM - local: chinese-ai-expansion title: "A Short Summary of Chinese AI Global Expansion" author: AdinaY thumbnail: /blog/assets/chinese-ai-expansion/thumbnail.png date: October 3, 2024 tags: - research - community - local: leaderboard-finbench title: "Introducing the Open FinLLM Leaderboard" author: QianqianXie1994 guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_finbench.png date: Oct 4, 2024 tags: - leaderboard - collaboration - community - local: improve_parquet_dedupe title: "Improving Parquet Dedupe on Hugging Face Hub" author: yuchenglow thumbnail: /blog/assets/improve_parquet_dedupe/thumbnail.png date: October 5, 2024 tags: - parquet - dedupe - storage - local: dynamic_speculation_lookahead title: "Faster Assisted Generation with Dynamic Speculation" author: jmamou guest: true thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png date: October 8, 2024 tags: - research - nlp - local: dask-scaling title: "Scaling AI-based Data Processing with Hugging Face + Dask" author: scj13 thumbnail: /blog/assets/dask-scaling/thumbnail.png date: October 9, 2024 tags: - nlp - guide - datasets - local: gradio-5 title: "Welcome, Gradio 5" author: abidlabs thumbnail: /blog/assets/gradio-5/thumbnail.png date: October 9, 2024 tags: - gradio - spaces - open-source - local: gradio-5-security title: "A Security Review of Gradio 5" author: abidlabs thumbnail: /blog/assets/gradio-5-security/thumbnail.png date: October 10, 2024 tags: - gradio - spaces - open-source - security - local: huggingface-amd-turin title: "Introducing the AMD 5th Gen EPYC™ CPU" author: huggingface-team thumbnail: /blog/assets/optimum_amd/amd_hf_logo_fixed.png date: October 10, 2024 tags: - llm - amd - llama - inference - optimum - cpu - text-generation - local: gradient_accumulation title: "Fixing Gradient Accumulation" author: lysandre thumbnail: /blog/assets/gradient_accumulation/gradient_accumulation.png date: October 16, 2024 tags: - transformers - bug - gradient_accumulation - local: keras-llama-32 title: "Llama 3.2 in Keras" author: martin-gorner thumbnail: /blog/assets/keras-llama-32/thumbnail.jpg date: October 21, 2024 tags: - keras - llm - open-source - local: s2s_endpoint title: "Deploying Speech-to-Speech on Hugging Face" author: andito thumbnail: /blog/assets/s2s_endpoint/thumbnail.png date: October 22, 2024 tags: - audio - speech-to-speech - inference - inference-endpoints - local: outlines-core title: "Releasing Outlines-core 0.1.0: structured generation in Rust and Python" thumbnail: /blog/assets/outlines-core/thumbnail.png author: erikkaum date: October 22, 2024 tags: - structured generation - llm - open-source - local: sd3-5 title: "🧨 Diffusers welcomes Stable Diffusion 3.5 Large" author: diffusers thumbnail: /blog/assets/sd3-5/thumbnail.png date: October 22, 2024 tags: - diffusers - guide - sd3-5 - local: transformersjs-v3 title: "Transformers.js v3: WebGPU support, new models & tasks, and more…" author: Xenova thumbnail: /blog/assets/transformersjs-v3/thumbnail.png date: October 22, 2024 tags: - announcement - transformers.js - transformers - javascript - webgpu - local: cinepile2 title: "CinePile 2.0 - making stronger datasets with adversarial refinement" author: mfarre thumbnail: /blog/assets/188_cinepile2/thumbnail.png date: October 23, 2024 tags: - video - datasets - multimodal - local: hugs title: "Introducing HUGS - Scale your AI with Open Models" author: 
philschmid thumbnail: /blog/assets/hugs/thumbnail.jpg date: October 23, 2024 tags: - announcement - partnerships - aws - gcp - azure - digitalocean - llm - inference - enterprise - multimodal - local: synthid-text title: "Introducing SynthID Text" author: sumedhghaisas thumbnail: /blog/assets/synthid-text/thumbnail.png date: October 23, 2024 tags: - announcement - synthid - llm - watermarking - open-source - local: aya-expanse title: "A Deepdive into Aya Expanse: Advancing the Frontier of Multilinguality" author: johndang-cohere thumbnail: /blog/assets/aya-expanse/thumbnail.jpg date: October 24, 2024 tags: - announcement - cohere - llm - aya - open-source - local: protectai title: "Hugging Face Teams Up with Protect AI: Enhancing Model Security for the Community" author: mcpotato thumbnail: /blog/assets/protectai/thumbnail.png date: October 22, 2024 tags: - hub - partnerships - security - local: digital-green-llm-judge title: "Expert Support case study: Bolstering a RAG app with LLM-as-a-Judge" author: m-ric thumbnail: /blog/assets/digital-green-llm-judge/thumbnail.png date: October 28, 2024 tags: - RAG - llm - expert-support-program - expert-support - case-studies - local: universal_assisted_generation title: "Universal Assisted Generation: Faster Decoding with Any Assistant Model" author: danielkorat guest: true thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png date: October 29, 2024 tags: - research - nlp - open-source - collaboration - local: argilla-ui-hub title: "Argilla 2.4: Easily Build Fine-Tuning and Evaluation datasets on the Hub — No Code Required" author: nataliaElv thumbnail: /blog/assets/argilla-ui-hub/thumbnail.png date: November 4, 2024 tags: - hub - spaces - datasets - argilla - human feedback - local: pycharm-integration title: "Hugging Face + PyCharm" author: rocketknight1 thumbnail: /blog/assets/pycharm-integration/thumbnail.png date: November 5, 2024 tags: - announcement - open-source - community - collaboration - local: researcher-dataset-sharing title: "Share your open ML datasets on Hugging Face Hub!" 
author: davanstrien thumbnail: /blog/assets/researcher-dataset-sharing/thumbnail.png date: November 12, 2024 tags: - community - research - datasets - guide - local: arena-atla title: "Judge Arena: Benchmarking LLMs as Evaluators" thumbnail: /blog/assets/arenas-on-the-hub/thumbnail_atla.png author: kaikaidai guest: true date: Nov 19, 2024 tags: - leaderboard - arena - collaboration - nlp - evaluation - local: leaderboard-japanese title: "Introduction to the Open Leaderboard for Japanese LLMs" author: akimfromparis guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_japanese.png date: November 20, 2024 tags: - community - research - nlp - evaluation - leaderboard - collaboration - local: layerskip title: "Faster Text Generation with Self-Speculative Decoding" author: ariG23498 thumbnail: /blog/assets/layerskip/thumbnail.png date: November 20, 2024 tags: - research - nlp - open-source - collaboration - local: from-files-to-chunks title: "From Files to Chunks: Improving Hugging Face Storage Efficiency" author: jsulz thumbnail: /blog/assets/from-files-to-chunks/thumbnail.png date: November 20, 2024 tags: - dedupe - storage - content defined chunking - local: debate title: "Letting Large Models Debate: The First Multilingual LLM Debate Competition" author: xuanricheng guest: true thumbnail: /blog/assets/leaderboards-on-the-hub/thumbnail_flageval.png date: November 20, 2024 tags: - community - research - nlp - evaluation - leaderboard - collaboration - local: designing-positional-encoding title: "You could have designed state of the art positional encoding" author: FL33TW00D-HF thumbnail: /blog/assets/designing-positional-encoding/thumbnail_posenc.png date: November 25, 2024 tags: - research - multimodal - tutorial - local: smolvlm title: "SmolVLM - small yet mighty Vision Language Model" author: andito thumbnail: /blog/assets/smolvlm/banner.png date: November 26, 2024 tags: - multimodal - on-device - llm - nlp - vision - trl - local: rearchitecting-uploads-and-downloads title: "Rearchitecting Hugging Face Uploads and Downloads" author: jsulz thumbnail: /blog/assets/rearchitecting-uploads-and-downloads/thumbnail.png date: November 26, 2024 tags: - dedupe - storage - content addressed store - infrastructure
4
0
hf_public_repos
hf_public_repos/blog/accelerated-inference.md
--- title: "How we sped up transformer inference 100x for 🤗 API customers" thumbnail: /blog/assets/09_accelerated_inference/thumbnail.png --- # How we sped up transformer inference 100x for 🤗 API customers 🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in. While experimenting with models in 🤗 Transformers is easy, deploying these large models into production with maximum performance, and managing them into an architecture that scales with usage is a **hard engineering challenge** for any Machine Learning Engineer. This 100x performance gain and built-in scalability is why subscribers of our hosted [Accelerated Inference API](https://huggingface.co/pricing) chose to build their NLP features on top of it. To get to the **last 10x of performance** boost, the optimizations need to be low-level, specific to the model, and to the target hardware. This post shares some of our approaches squeezing every drop of compute juice for our customers. 🍋 ## Getting to the first 10x speedup The first leg of the optimization journey is the most accessible, all about using the best combination of techniques offered by the [Hugging Face libraries](https://github.com/huggingface/), independent of the target hardware. We use the most efficient methods built into Hugging Face model [pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) to reduce the amount of computation during each forward pass. These methods are specific to the architecture of the model and the target task, for instance for a text-generation task on a GPT architecture, we reduce the dimensionality of the attention matrices computation by focusing on the new attention of the last token in each pass: -| Naive version | Optimized version | -|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:| -|![](/blog/assets/09_accelerated_inference/unoptimized_graph.png)|![](/blog/assets/09_accelerated_inference/optimized_graph.png)| Tokenization is often a bottleneck for efficiency during inference. We use the most efficient methods from the [🤗 Tokenizers](https://github.com/huggingface/tokenizers/) library, leveraging the Rust implementation of the model tokenizer in combination with smart caching to get up to 10x speedup for the overall latency. Leveraging the latest features of the Hugging Face libraries, we achieve a reliable 10x speed up compared to an out-of-box deployment for a given model/hardware pair. As new releases of Transformers and Tokenizers typically ship every month, our API customers do not need to constantly adapt to new optimization opportunities, their models just keep running faster. ## Compilation FTW: the hard to get 10x Now this is where it gets really tricky. In order to get the best possible performance we will need to modify the model and compile it targeting the specific hardware for inference. The choice of hardware itself will depend on both the model (size in memory) and the demand profile (request batching). 
Even when serving predictions from the same model, some API customers may benefit more from Accelerated CPU inference, and others from Accelerated GPU inference, each with different optimization techniques and libraries applied. Once the compute platform has been selected for the use case, we can go to work. Here are some CPU-specific techniques that can be applied with a static graph: - Optimizing the graph (Removing unused flow) - Fusing layers (with specific CPU instructions) - Quantizing the operations Using out-of-box functions from open source libraries (e.g. 🤗 Transformers with [ONNX Runtime](https://github.com/microsoft/onnxruntime)) won’t produce the best results, or could result in a significant loss of accuracy, particularly during quantization. There is no silver bullet, and the best path is different for each model architecture. But by diving deep into the Transformers code and ONNX Runtime documentation, the stars can be aligned to achieve another 10x speedup. ## Unfair advantage The Transformer architecture was a decisive inflection point for Machine Learning performance, starting with NLP, and over the last 3 years the rate of improvement in Natural Language Understanding and Generation has been steep and accelerating. Another metric which accelerated accordingly is the average size of the models, from the 110M parameters of BERT to the now 175Bn of GPT-3. This trend has introduced daunting challenges for Machine Learning Engineers when deploying the latest models into production. While a 100x speedup is a high bar to reach, that’s what it takes to serve predictions with acceptable latency in real-time consumer applications. To reach that bar, as Machine Learning Engineers at Hugging Face we certainly have an unfair advantage sitting in the same (virtual) offices as the 🤗 Transformers and 🤗 Tokenizers maintainers 😬. We are also extremely lucky for the rich partnerships we have developed through open source collaborations with hardware and cloud vendors like Intel, NVIDIA, Qualcomm, Amazon and Microsoft that enable us to tune our models x infrastructure with the latest hardware optimization techniques. If you want to feel the speed on our infrastructure, start a [free trial](https://huggingface.co/pricing) and we’ll get in touch. If you want to benefit from our experience optimizing inference on your own infrastructure, participate in our [🤗 Expert Acceleration Program](https://huggingface.co/support).
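As a rough illustration of the graph-level techniques listed above (graph optimization, layer fusion, quantization), here is a hedged sketch using ONNX Runtime on CPU. It assumes a model has already been exported to ONNX; the file names and thread count are placeholders, and, as noted above, out-of-box settings like these are only a starting point rather than the tuned configuration used in the API:

```python
import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic (post-training) quantization of the exported graph's weights to int8.
# "model.onnx" is a placeholder for a model exported e.g. with torch.onnx.export.
quantize_dynamic("model.onnx", "model-int8.onnx", weight_type=QuantType.QInt8)

# Session-level graph optimizations: constant folding, node/layer fusion, etc.
options = ort.SessionOptions()
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
options.intra_op_num_threads = 4  # tune to the physical cores of the target CPU

session = ort.InferenceSession(
    "model-int8.onnx", options, providers=["CPUExecutionProvider"]
)
```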
5
0
hf_public_repos
hf_public_repos/blog/infinity-cpu-performance.md
--- title: "Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs" thumbnail: /blog/assets/46_infinity_cpu_performance/thumbnail.png authors: - user: philschmid - user: jeffboudier - user: mfuntowicz --- # Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 10px;"> December 2022 Update: Infinity is no longer offered by Hugging Face as a commercial inference solution. To deploy and accelerate your models, we recommend the following new solutions: * [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) to easily deploy models on dedicated infrastructure managed by Hugging Face. * Our open-source optimization libraries, [🤗 Optimum Intel](https://huggingface.co/blog/openvino) and [🤗 Optimum ONNX Runtime](https://huggingface.co/docs/optimum/main/en/onnxruntime/overview), to get the highest efficiency out of training and running models for inference. * Hugging Face [Expert Acceleration Program](https://huggingface.co/support), a commercial service for Hugging Face experts to work directly with your team to accelerate your Machine Learning roadmap and models. </div> ## Introduction Transfer learning has changed Machine Learning by reaching new levels of accuracy from Natural Language Processing (NLP) to Audio and Computer Vision tasks. At Hugging Face, we work hard to make these new complex models and large checkpoints as easily accessible and usable as possible. But while researchers and data scientists have converted to the new world of Transformers, few companies have been able to deploy these large, complex models in production at scale. The main bottleneck is the latency of predictions which can make large deployments expensive to run and real-time use cases impractical. Solving this is a difficult engineering challenge for any Machine Learning Engineering team and requires the use of advanced techniques to optimize models all the way down to the hardware. With [Hugging Face Infinity](https://huggingface.co/infinity), we offer a containerized solution that makes it easy to deploy low-latency, high-throughput, hardware-accelerated inference pipelines for the most popular Transformer models. Companies can get both the accuracy of Transformers and the efficiency necessary for large volume deployments, all in a simple to use package. In this blog post, we want to share detailed performance results for Infinity running on the latest generation of Intel Xeon CPU, to achieve optimal cost, efficiency, and latency for your Transformer deployments. ## What is Hugging Face Infinity Hugging Face Infinity is a containerized solution for customers to deploy end-to-end optimized inference pipelines for State-of-the-Art Transformer models, on any infrastructure. Hugging Face Infinity consists of 2 main services: * The Infinity Container is a hardware-optimized inference solution delivered as a Docker container. * Infinity Multiverse is a Model Optimization Service through which a Hugging Face Transformer model is optimized for the Target Hardware. Infinity Multiverse is compatible with Infinity Container. The Infinity Container is built specifically to run on a Target Hardware architecture and exposes an HTTP /predict endpoint to run inference. 
<br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Product overview" src="assets/46_infinity_cpu_performance/overview.png"></medium-zoom> <figcaption>Figure 1. Infinity Overview</figcaption> </figure> <br> An Infinity Container is designed to serve 1 Model and 1 Task. A Task corresponds to machine learning tasks as defined in the [Transformers Pipelines documentation](https://huggingface.co/docs/transformers/master/en/main_classes/pipelines). As of the writing of this blog post, supported tasks include feature extraction/document embedding, ranking, sequence classification, and token classification. You can find more information about Hugging Face Infinity at [hf.co/infinity](https://huggingface.co/infinity), and if you are interested in testing it for yourself, you can sign up for a free trial at [hf.co/infinity-trial](https://huggingface.co/infinity-trial). --- ## Benchmark Inference performance benchmarks often only measure the execution of the model. In this blog post, and when discussing the performance of Infinity, we always measure the end-to-end pipeline including pre-processing, prediction, post-processing. Please keep this in mind when comparing these results with other latency measurements. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Pipeline" src="assets/46_infinity_cpu_performance/pipeline.png"></medium-zoom> <figcaption>Figure 2. Infinity End-to-End Pipeline</figcaption> </figure> <br> ### Environment As a benchmark environment, we are going to use the [Amazon EC2 C6i instances](https://aws.amazon.com/ec2/instance-types/c6i), which are compute-optimized instances powered by the 3rd generation of Intel Xeon Scalable processors. These new Intel-based instances are using the ice-lake Process Technology and support Intel AVX-512, Intel Turbo Boost, and Intel Deep Learning Boost. In addition to superior performance for machine learning workloads, the Intel Ice Lake C6i instances offer great cost-performance and are our recommendation to deploy Infinity on Amazon Web Services. To learn more, visit the [EC2 C6i instance](https://aws.amazon.com/ec2/instance-types/c6i) page. ### Methodologies When it comes to benchmarking BERT-like models, two metrics are most adopted: * **Latency**: Time it takes for a single prediction of the model (pre-process, prediction, post-process) * **Throughput**: Number of executions performed in a fixed amount of time for one benchmark configuration, respecting Physical CPU cores, Sequence Length, and Batch Size These two metrics will be used to benchmark Hugging Face Infinity across different setups to understand the benefits and tradeoffs in this blog post. --- ## Results To run the benchmark, we created an infinity container for the [EC2 C6i instance](https://aws.amazon.com/ec2/instance-types/c6i) (Ice-lake) and optimized a [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) model for sequence classification using Infinity Multiverse. This ice-lake optimized Infinity Container can achieve up to 34% better latency & throughput compared to existing cascade-lake-based instances, and up to 800% better latency & throughput compared to vanilla transformers running on ice-lake. The Benchmark we created consists of 192 different experiments and configurations. 
We ran experiments for: * Physical CPU cores: 1, 2, 4, 8 * Sequence length: 8, 16, 32, 64, 128, 256, 384, 512 * Batch_size: 1, 2, 4, 8, 16, 32 In each experiment, we collect numbers for: * Throughput (requests per second) * Latency (min, max, avg, p90, p95, p99) You can find the full data of the benchmark in this google spreadsheet: [🤗 Infinity: CPU Ice-Lake Benchmark](https://docs.google.com/spreadsheets/d/1GWFb7L967vZtAS1yHhyTOZK1y-ZhdWUFqovv7-73Plg/edit?usp=sharing). In this blog post, we will highlight a few results of the benchmark including the best latency and throughput configurations. In addition to this, we deployed the [DistilBERT](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) model we used for the benchmark as an API endpoint on 2 physical cores. You can test it and get a feeling for the performance of Infinity. Below you will find a `curl` command on how to send a request to the hosted endpoint. The API returns a `x-compute-time` HTTP Header, which contains the duration of the end-to-end pipeline. ```bash curl --request POST `-i` \ --url https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion \ --header 'Content-Type: application/json' \ --data '{"inputs":"I like you. I love you"}' ``` ### Throughput Below you can find the throughput comparison for running infinity on 2 physical cores with batch size 1, compared with vanilla transformers. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Throughput" src="assets/46_infinity_cpu_performance/throughput.png"></medium-zoom> <figcaption>Figure 3. Throughput: Infinity vs Transformers</figcaption> </figure> <br> | Sequence Length | Infinity | Transformers | improvement | |-----------------|-------------|--------------|-------------| | 8 | 248 req/sec | 49 req/sec | +506% | | 16 | 212 req/sec | 50 req/sec | +424% | | 32 | 150 req/sec | 40 req/sec | +375% | | 64 | 97 req/sec | 28 req/sec | +346% | | 128 | 55 req/sec | 18 req/sec | +305% | | 256 | 27 req/sec | 9 req/sec | +300% | | 384 | 17 req/sec | 5 req/sec | +340% | | 512 | 12 req/sec | 4 req/sec | +300% | ### Latency Below, you can find the latency results for an experiment running Hugging Face Infinity on 2 Physical Cores with Batch Size 1. It is remarkable to see how robust and constant Infinity is, with minimal deviation for p95, p99, or p100 (max latency). This result is confirmed for other experiments as well in the benchmark. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Latency" src="assets/46_infinity_cpu_performance/latency.png"></medium-zoom> <figcaption>Figure 4. Latency (Batch=1, Physical Cores=2)</figcaption> </figure> <br> --- ## Conclusion In this post, we showed how Hugging Face Infinity performs on the new Intel Ice Lake Xeon CPU. We created a detailed benchmark with over 190 different configurations sharing the results you can expect when using Hugging Face Infinity on CPU, what would be the best configuration to optimize your Infinity Container for latency, and what would be the best configuration to maximize throughput. Hugging Face Infinity can deliver up to 800% higher throughput compared to vanilla transformers, and down to 1-4ms latency for sequence lengths up to 64 tokens. The flexibility to optimize transformer models for throughput, latency, or both enables businesses to either reduce the amount of infrastructure cost for the same workload or to enable real-time use cases that were not possible before. 
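To give a concrete picture of the client side of this methodology, here is a minimal sketch that measures end-to-end latency and throughput against an HTTP endpoint and reports the same percentiles used above. The URL is the demo endpoint from the `curl` example and may no longer be reachable (Infinity is no longer offered, per the note at the top of this post); the request count is arbitrary:

```python
import time

import numpy as np
import requests

# Demo endpoint from the curl example above; it may no longer be live.
URL = "https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion"
payload = {"inputs": "I like you. I love you"}

n_requests = 100
latencies_ms = []

start = time.perf_counter()
for _ in range(n_requests):
    t0 = time.perf_counter()
    response = requests.post(URL, json=payload)
    latencies_ms.append((time.perf_counter() - t0) * 1000)  # client-side latency
elapsed = time.perf_counter() - start

# Infinity also reports the server-side end-to-end pipeline time in this header
print("last x-compute-time:", response.headers.get("x-compute-time"))
print(f"throughput: {n_requests / elapsed:.1f} req/sec")
for p in (50, 90, 95, 99):
    print(f"p{p} latency: {np.percentile(latencies_ms, p):.1f} ms")
```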
If you are interested in trying out Hugging Face Infinity sign up for your trial at [hf.co/infinity-trial](https://hf.co/infinity-trial) ## Resources * [Hugging Face Infinity](https://huggingface.co/infinity) * [Hugging Face Infinity Trial](https://huggingface.co/infinity-trial) * [Amazon EC2 C6i instances](https://aws.amazon.com/ec2/instance-types/c6i) * [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) * [DistilBERT paper](https://arxiv.org/abs/1910.01108) * [DistilBERT model](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) * [🤗 Infinity: CPU Ice-Lake Benchmark](https://docs.google.com/spreadsheets/d/1GWFb7L967vZtAS1yHhyTOZK1y-ZhdWUFqovv7-73Plg/edit?usp=sharing)
6
0
hf_public_repos
hf_public_repos/blog/ethics-soc-3.md
--- title: "Ethics and Society Newsletter #3: Ethical Openness at Hugging Face" thumbnail: /blog/assets/137_ethics_soc_3/ethics_3_thumbnail.png authors: - user: irenesolaiman - user: giadap - user: NimaBoscarino - user: yjernite - user: allendorf - user: meg - user: sasha --- # Ethics and Society Newsletter #3: Ethical Openness at Hugging Face ## Mission: Open and Good ML In our mission to democratize good machine learning (ML), we examine how supporting ML community work also empowers examining and preventing possible harms. Open development and science decentralizes power so that many people can collectively work on AI that reflects their needs and values. While [openness enables broader perspectives to contribute to research and AI overall, it faces the tension of less risk control](https://arxiv.org/abs/2302.04844). Moderating ML artifacts presents unique challenges due to the dynamic and rapidly evolving nature of these systems. In fact, as ML models become more advanced and capable of producing increasingly diverse content, the potential for harmful or unintended outputs grows, necessitating the development of robust moderation and evaluation strategies. Moreover, the complexity of ML models and the vast amounts of data they process exacerbate the challenge of identifying and addressing potential biases and ethical concerns. As hosts, we recognize the responsibility that comes with potentially amplifying harm to our users and the world more broadly. Often these harms disparately impact minority communities in a context-dependent manner. We have taken the approach of analyzing the tensions in play for each context, open to discussion across the company and Hugging Face community. While many models can amplify harm, especially discriminatory content, we are taking a series of steps to identify highest risk models and what action to take. Importantly, active perspectives from many backgrounds is key to understanding, measuring, and mitigating potential harms that affect different groups of people. We are crafting tools and safeguards in addition to improving our documentation practices to ensure open source science empowers individuals and continues to minimize potential harms. ## Ethical Categories The first major aspect of our work to foster good open ML consists in promoting the tools and positive examples of ML development that prioritize values and consideration for its stakeholders. This helps users take concrete steps to address outstanding issues, and present plausible alternatives to de facto damaging practices in ML development. To help our users discover and engage with ethics-related ML work, we have compiled a set of tags. These 6 high-level categories are based on our analysis of Spaces that community members had contributed. They are designed to give you a jargon-free way of thinking about ethical technology: - Rigorous work pays special attention to developing with best practices in mind. In ML, this can mean examining failure cases (including conducting bias and fairness audits), protecting privacy through security measures, and ensuring that potential users (technical and non-technical) are informed about the project's limitations. - Consentful work [supports](https://www.consentfultech.io/) the self-determination of people who use and are affected by these technologies. - Socially Conscious work shows us how technology can support social, environmental, and scientific efforts. 
- Sustainable work highlights and explores techniques for making machine learning ecologically sustainable. - Inclusive work broadens the scope of who builds and benefits in the machine learning world. - Inquisitive work shines a light on inequities and power structures which challenge the community to rethink its relationship to technology. Read more at https://huggingface.co/ethics Look for these terms as we’ll be using these tags, and updating them based on community contributions, across some new projects on the Hub! ## Safeguards Taking an “all-or-nothing” view of open releases ignores the wide variety of contexts that determine an ML artifact’s positive or negative impacts. Having more levers of control over how ML systems are shared and re-used supports collaborative development and analysis with less risk of promoting harmful uses or misuses; allowing for more openness and participation in innovation for shared benefits. We engage directly with contributors and have addressed pressing issues. To bring this to the next level, we are building community-based processes. This approach empowers both Hugging Face contributors, and those affected by contributions, to inform the limitations, sharing, and additional mechanisms necessary for models and data made available on our platform. The three main aspects we will pay attention to are: the origin of the artifact, how the artifact is handled by its developers, and how the artifact has been used. In that respect we: - launched a [flagging feature](https://twitter.com/GiadaPistilli/status/1571865167092396033) for our community to determine whether ML artifacts or community content (model, dataset, space, or discussion) violate our [content guidelines](https://huggingface.co/content-guidelines), - monitor our community discussion boards to ensure Hub users abide by the [code of conduct](https://huggingface.co/code-of-conduct), - robustly document our most-downloaded models with model cards that detail social impacts, biases, and intended and out-of-scope use cases, - create audience-guiding tags, such as the “Not For All Audiences” tag that can be added to the repository’s card metadata to avoid un-requested violent and sexual content, - promote use of [Open Responsible AI Licenses (RAIL)](https://huggingface.co/blog/open_rail) for [models](https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license), such as with LLMs ([BLOOM](https://huggingface.co/spaces/bigscience/license), [BigCode](https://huggingface.co/spaces/bigcode/license)), - conduct research that [analyzes](https://arxiv.org/abs/2302.04844) which models and datasets have the highest potential for, or track record of, misuse and malicious use. **How to use the flagging function:** Click on the flag icon on any Model, Dataset, Space, or Discussion: <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/flag2.jpg" alt="screenshot pointing to the flag icon to Report this model" /> <em> While logged in, you can click on the "three dots" button to bring up the ability to report (or flag) a repository. This will open a conversation in the repository's community tab. 
</em> </p> Share why you flagged this item: <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/flag1.jpg" alt="screenshot showing the text window where you describe why you flagged this item" /> <em> Please add as much relevant context as possible in your report! This will make it much easier for the repo owner and HF team to start taking action. </em> </p> In prioritizing open science, we examine potential harm on a case-by-case basis and provide an opportunity for collaborative learning and shared responsibility. When users flag a system, developers can directly and transparently respond to concerns. In this spirit, we ask that repository owners make reasonable efforts to address reports, especially when reporters take the time to provide a description of the issue. We also stress that the reports and discussions are subject to the same communication norms as the rest of the platform. Moderators are able to disengage from or close discussions should behavior become hateful and/or abusive (see [code of conduct](https://huggingface.co/code-of-conduct)). Should a specific model be flagged as high risk by our community, we consider: - Downgrading the ML artifact’s visibility across the Hub in the trending tab and in feeds, - Requesting that the gating feature be enabled to manage access to ML artifacts (see documentation for [models](https://huggingface.co/docs/hub/models-gated) and [datasets](https://huggingface.co/docs/hub/datasets-gated)), - Requesting that the models be made private, - Disabling access. **How to add the “Not For All Audiences” tag:** Edit the model/data card → add `not-for-all-audiences` in the tags section → open the PR and wait for the authors to merge it. Once merged, the following tag will be displayed on the repository: <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa_tag.png" alt="screenshot showing where to add tags" /> </p> Any repository tagged `not-for-all-audiences` will display the following popup when visited: <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa2.png" alt="screenshot showing where to add tags" /> </p> Clicking "View Content" will allow you to view the repository as normal. If you wish to always view `not-for-all-audiences`-tagged repositories without the popup, this setting can be changed in a user's [Content Preferences](https://huggingface.co/settings/content-preferences) <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa1.png" alt="screenshot showing where to add tags" /> </p> Open science requires safeguards, and one of our goals is to create an environment informed by tradeoffs with different values. Hosting and providing access to models in addition to cultivating community and discussion empowers diverse groups to assess social implications and guide what is good machine learning. ## Are you working on safeguards? Share them on Hugging Face Hub! The most important part of Hugging Face is our community. If you’re a researcher working on making ML safer to use, especially for open science, we want to support and showcase your work! 
Here are some recent demos and tools from researchers in the Hugging Face community: - [A Watermark for LLMs](https://huggingface.co/spaces/tomg-group-umd/lm-watermarking) by John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein ([paper](https://arxiv.org/abs/2301.10226)) - [Generate Model Cards Tool](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) by the Hugging Face team - [Photoguard](https://huggingface.co/spaces/RamAnanth1/photoguard) to safeguard images against manipulation by Ram Ananth Thanks for reading! 🤗 ~ Irene, Nima, Giada, Yacine, and Elizabeth, on behalf of the Ethics and Society regulars If you want to cite this blog post, please use the following (in descending order of contribution): ``` @misc{hf_ethics_soc_blog_3, author = {Irene Solaiman and Giada Pistilli and Nima Boscarino and Yacine Jernite and Elizabeth Allendorf and Margaret Mitchell and Carlos Muñoz Ferrandis and Nathan Lambert and Alexandra Sasha Luccioni }, title = {Hugging Face Ethics and Society Newsletter 3: Ethical Openness at Hugging Face}, booktitle = {Hugging Face Blog}, year = {2023}, url = {https://doi.org/10.57967/hf/0487}, doi = {10.57967/hf/0487} } ```
7
0
hf_public_repos
hf_public_repos/blog/zero-deepspeed-fairscale.md
--- title: "Fit More and Train Faster With ZeRO via DeepSpeed and FairScale" thumbnail: /blog/assets/11_zero_deepspeed_fairscale/zero-partitioning.png authors: - user: stas guest: true --- # Fit More and Train Faster With ZeRO via DeepSpeed and FairScale **A guest blog post by Hugging Face fellow Stas Bekman** As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those huge models onto their hardware. While there is an ongoing effort to distill some of those huge models to be of a more manageable size -- that effort isn't producing models small enough soon enough. In the fall of 2019 Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase and Yuxiong He published a paper: [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054), which contains a plethora of ingenious new ideas on how one could make their hardware do much more than what it was thought possible before. A short time later [DeepSpeed](https://github.com/microsoft/deepspeed) has been released and it gave to the world the open source implementation of most of the ideas in that paper (a few ideas are still in works) and in parallel a team from Facebook released [FairScale](https://github.com/facebookresearch/fairscale/) which also implemented some of the core ideas from the ZeRO paper. If you use the Hugging Face Trainer, as of `transformers` v4.2.0 you have the experimental support for DeepSpeed's and FairScale's ZeRO features. The new `--sharded_ddp` and `--deepspeed` command line `Trainer` arguments provide FairScale and DeepSpeed integration respectively. Here is [the full documentation](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations). This blog post will describe how you can benefit from ZeRO regardless of whether you own just a single GPU or a whole stack of them. ## Huge Speedups with Multi-GPU Setups Let's do a small finetuning with translation task experiment, using a `t5-large` model and the `finetune_trainer.py` script which you can find under [`examples/seq2seq`](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) in the `transformers` GitHub repo. We have 2x 24GB (Titan RTX) GPUs to test with. This is just a proof of concept benchmarks so surely things can be improved further, so we will benchmark on a small sample of 2000 items for training and 500 items for evalulation to perform the comparisons. Evaluation does by default a beam search of size 4, so it's slower than training with the same number of samples, that's why 4x less eval items were used in these tests. Here are the key command line arguments of our baseline: ``` export BS=16 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py \ --model_name_or_path t5-large --n_train 2000 --n_val 500 \ --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \ --task translation_en_to_ro [...] ``` We are just using the `DistributedDataParallel` (DDP) and nothing else to boost the performance for the baseline. I was able to fit a batch size (BS) of 16 before hitting Out of Memory (OOM) error. Note, that for simplicity and to make it easier to understand, I have only shown the command line arguments important for this demonstration. You will find the complete command line at [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400). 
Next, we are going to re-run the benchmark, each time adding one of the following:

1. `--fp16`
2. `--sharded_ddp` (fairscale)
3. `--sharded_ddp --fp16` (fairscale)
4. `--deepspeed` without cpu offloading
5. `--deepspeed` with cpu offloading

Since the key optimization here is that each technique deploys GPU RAM more efficiently, we will try to continually increase the batch size and expect the training and evaluation to complete faster (while keeping the metrics steady or even improving some of them, but we won't focus on those here).

Remember that the training and evaluation stages are very different from each other: during training model weights are modified, gradients are calculated, and optimizer states are stored. During evaluation none of these happen, but in this particular task of translation the model searches for the best hypothesis, so it has to do multiple forward passes before it's satisfied. That's why evaluation isn't fast, especially when a model is large.

Let's look at the results of these six test runs:

| Method                    | max BS | train time (secs) | eval time (secs) |
|---------------------------|--------|-------------------|------------------|
| baseline                  | 16     | 30.9458           | 56.3310          |
| fp16                      | 20     | 21.4943           | 53.4675          |
| sharded_ddp               | 30     | 25.9085           | 47.5589          |
| sharded_ddp+fp16          | 30     | 17.3838           | 45.6593          |
| deepspeed w/o cpu offload | 40     | **10.4007**       | 34.9289          |
| deepspeed w/ cpu offload  | **50** | 20.9706           | **32.1409**      |

It's easy to see that both FairScale and DeepSpeed provide great improvements over the baseline, in total train and evaluation time, but also in batch size. DeepSpeed implements more magic as of this writing and seems to be the short-term winner, but FairScale is easier to deploy. For DeepSpeed you need to write a simple configuration file and change your command line's launcher; with FairScale you only need to add the `--sharded_ddp` command line argument, so you may want to try it first as it's the lowest-hanging fruit.

Following the 80:20 rule, I have only spent a few hours on these benchmarks and I haven't tried to squeeze every MB and second by refining the command line arguments and configuration, since it's pretty obvious from this simple table what you'd want to try next. When you face a real project that will be running for hours and perhaps days, definitely spend more time making sure you use the most optimal hyper-parameters to get your job done faster and at minimal cost.

If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please refer to [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).

## Fitting A Huge Model Onto One GPU

While FairScale gives us a boost only with multiple GPUs, DeepSpeed has a gift even for those of us with a single GPU.

Let's try the impossible - let's train [t5-3b](https://huggingface.co/t5-3b) on a 24GB RTX-3090 card.

First let's try to finetune the huge `t5-3b` using the normal single GPU setup:

```
export BS=1
CUDA_VISIBLE_DEVICES=0 ./finetune_trainer.py \
--model_name_or_path t5-3b --n_train 60 --n_val 10 \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--task translation_en_to_ro --fp16 [...]
```

No cookie, even with BS=1 we get:

```
RuntimeError: CUDA out of memory.
Tried to allocate 64.00 MiB (GPU 0; 23.70 GiB total capacity; 21.37 GiB already allocated; 45.69 MiB free; 22.05 GiB reserved in total by PyTorch)
```

Note that, as before, I'm showing only the important parts; the full command line arguments can be found [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685).

Now update your `transformers` to v4.2.0 or higher, then install DeepSpeed:

```
pip install deepspeed
```

and let's try again, this time adding DeepSpeed to the command line:

```
export BS=20
CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 ./finetune_trainer.py \
--model_name_or_path t5-3b --n_train 60 --n_val 10 \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--task translation_en_to_ro --fp16 --deepspeed ds_config_1gpu.json [...]
```

Et voilà! We get a batch size of 20 trained just fine. I could probably push it even further. The program failed with OOM at `BS=30`.

Here are the relevant results:

```
2021-01-12 19:06:31 | INFO | __main__ | train_n_objs = 60
2021-01-12 19:06:31 | INFO | __main__ | train_runtime = 8.8511
2021-01-12 19:06:35 | INFO | __main__ | val_n_objs = 10
2021-01-12 19:06:35 | INFO | __main__ | val_runtime = 3.5329
```

We can't compare these to the baseline, since the baseline won't even start, failing immediately with OOM.

Simply amazing!

I used only a tiny sample since I was primarily interested in being able to train and evaluate with this huge model that normally won't fit onto a 24GB GPU.

If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please refer to [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685).

## The Magic Behind ZeRO

Since `transformers` only integrates these fabulous solutions and wasn't part of their invention, I will share the resources where you can discover all the details for yourself. But here are a few quick insights that may help you understand how ZeRO manages these amazing feats.

The key feature of ZeRO is adding distributed data storage to the quite familiar concept of data parallel training.

The computation on each GPU is exactly the same as in data parallel training, but the parameters, gradients and optimizer states are stored in a distributed/partitioned fashion across all the GPUs and fetched only when needed.

The following diagram, coming from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/), illustrates how this works:

![ZeRO Partitioning](./assets/11_zero_deepspeed_fairscale/zero-partitioning.png)

ZeRO's ingenious approach is to partition the params, gradients and optimizer states equally across all GPUs and give each GPU just a single partition (also referred to as a shard). This leads to zero overlap in data storage between GPUs. At runtime each GPU builds up each layer's data on the fly by asking the participating GPUs to send the information it's lacking.

This idea could be difficult to grasp, and you will find my attempt at an explanation [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-758418429).

As of this writing, FairScale and DeepSpeed only perform partitioning (sharding) of the optimizer states and gradients. Model parameter sharding is supposedly coming soon in DeepSpeed and FairScale.
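To get a feel for how much memory this partitioning saves, here is a back-of-the-envelope sketch in Python. The byte counts follow the usual mixed-precision Adam accounting from the ZeRO paper (2 bytes per parameter for fp16 weights, 2 for fp16 gradients, 12 for the fp32 optimizer states and master weights); the helper name and the roughly 3B parameter count are just illustrative, activations and buffers are ignored, so treat the numbers as rough estimates rather than measurements:

```python
def per_gpu_gb(n_params, n_gpus, shard_optimizer=True, shard_grads=True):
    """Rough per-GPU memory for model states under mixed-precision Adam.

    2 bytes/param for fp16 weights, 2 bytes/param for fp16 gradients,
    12 bytes/param for fp32 optimizer states and master weights. Activations,
    buffers and fragmentation are ignored, so real usage will be higher.
    """
    params = 2 * n_params                                   # replicated on every GPU
    grads = 2 * n_params / (n_gpus if shard_grads else 1)   # sharded under ZeRO stage 2
    optim = 12 * n_params / (n_gpus if shard_optimizer else 1)  # sharded under ZeRO stage 1+
    return (params + grads + optim) / 2**30


n = 3e9  # roughly t5-3b sized
print(f"plain DDP, any #GPUs : {per_gpu_gb(n, 1, False, False):.1f} GB per GPU")
print(f"ZeRO-2, 2 GPUs       : {per_gpu_gb(n, 2):.1f} GB per GPU")
print(f"ZeRO-2, 8 GPUs       : {per_gpu_gb(n, 8):.1f} GB per GPU")
```

Even without parameter sharding, splitting the gradients and optimizer states across a handful of GPUs already removes the bulk of the per-GPU model-state footprint.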
The other powerful feature is ZeRO-Offload ([paper](https://arxiv.org/abs/2101.06840)). It offloads some of the processing and memory needs to the host's CPU, thus allowing more to fit onto the GPU. You saw its dramatic impact in the success of running `t5-3b` on a 24GB GPU.

One other problem that a lot of people complain about on the PyTorch forums is GPU memory fragmentation. One often gets an OOM error that may look like this:

```
RuntimeError: CUDA out of memory. Tried to allocate 1.48 GiB (GPU 0; 23.65 GiB total capacity; 16.22 GiB already allocated; 111.12 MiB free; 22.52 GiB reserved in total by PyTorch)
```

The program wants to allocate ~1.5GB and the GPU still has some 6-7GB of unused memory, but it reports only ~100MB of contiguous free memory and fails with the OOM error. This happens as chunks of different sizes get allocated and de-allocated again and again; over time holes get created, leading to memory fragmentation, where there is a lot of unused memory but no contiguous chunk of the desired size. In the example above the program could probably allocate 100MB of contiguous memory, but clearly it can't get 1.5GB in a single chunk.

DeepSpeed attacks this problem by managing GPU memory by itself and ensuring that long-term memory allocations don't mix with short-term ones, so there is much less fragmentation. While the paper doesn't go into details, the [source code](https://github.com/microsoft/DeepSpeed) is available, so it's possible to see how DeepSpeed accomplishes that.

As ZeRO stands for Zero Redundancy Optimizer, it's easy to see that it lives up to its name.

## The Future

Besides the anticipated upcoming support for model parameter sharding, DeepSpeed has already released new features that we haven't explored yet. These include DeepSpeed Sparse Attention and 1-bit Adam, which are supposed to decrease memory usage and dramatically reduce inter-GPU communication overhead, which should lead to even faster training and support for even bigger models.

I trust we are going to see new gifts from the FairScale team as well; I believe they are working on ZeRO stage 3 too.

Even more exciting, [ZeRO is being integrated into pytorch](https://github.com/pytorch/pytorch/pull/46750).

## Deployment

If you found the results shared in this blog post enticing, please proceed [here](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations) for details on how to use DeepSpeed and FairScale with the `transformers` Trainer.

You can, of course, modify your own trainer to integrate DeepSpeed and FairScale based on each project's instructions, or you can "cheat" and see how we did it in the `transformers` Trainer. If you go for the latter, to find your way around, `grep` the source code for `deepspeed` and/or `sharded_ddp`.

The good news is that ZeRO requires no model modification. The only required modifications are in the training code.

## Issues

If you encounter any issues with the integration part of either of these projects, please open an Issue in [transformers](https://github.com/huggingface/transformers/issues).

But if you have problems with DeepSpeed and FairScale installation, configuration and deployment, you need to ask the experts in those domains, so please use the [DeepSpeed Issues](https://github.com/microsoft/DeepSpeed/issues) or [FairScale Issues](https://github.com/facebookresearch/fairscale/issues) trackers instead.
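Returning for a moment to the fragmentation discussion above: if you want to see how far apart "allocated" and "reserved" memory have drifted on your own setup, PyTorch exposes a few counters for this. A minimal sketch (the helper name is mine, and the numbers will obviously depend entirely on your workload):

```python
import torch


def report_gpu_memory(device=0):
    gb = 1024**3
    # Bytes occupied by live tensors
    allocated = torch.cuda.memory_allocated(device)
    # Bytes held by PyTorch's caching allocator (includes cached/fragmented blocks)
    reserved = torch.cuda.memory_reserved(device)
    print(f"allocated        : {allocated / gb:.2f} GiB")
    print(f"reserved         : {reserved / gb:.2f} GiB")
    print(f"cached but unused: {(reserved - allocated) / gb:.2f} GiB")
    # Detailed breakdown, including allocation/segment statistics
    print(torch.cuda.memory_summary(device))


if torch.cuda.is_available():
    report_gpu_memory()
```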
## Resources

While you don't really need to understand how any of these projects work and you can just deploy them via the `transformers` Trainer, should you want to figure out the whys and hows, please refer to the following resources.

* [FairScale GitHub](https://github.com/facebookresearch/fairscale)
* [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed)
* Paper: [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054). The paper is very interesting, but it's very terse.
* Here is a good [video discussion](https://www.youtube.com/watch?v=tC01FRB0M7w) of the paper with visuals
* Paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). Just published - this one goes into the details of the ZeRO-Offload feature.
* DeepSpeed [configuration and tutorials](https://www.deepspeed.ai/getting-started/)
* In addition to the paper, I highly recommend reading the following detailed blog posts with diagrams:
  - [DeepSpeed: Extreme-scale model training for everyone](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)
  - [ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
  - [Turing-NLG: A 17-billion-parameter language model by Microsoft](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
* DeepSpeed [examples on GitHub](https://github.com/microsoft/DeepSpeedExamples)

## Gratitude

We were quite astonished at the amazing level of support we received from the FairScale and DeepSpeed developer teams while working on integrating those projects into `transformers`.

In particular I'd like to thank:

* Benjamin Lefaudeux [@blefaudeux](https://github.com/blefaudeux)
* Mandeep Baines [@msbaines](https://github.com/msbaines)

from the FairScale team and:

* Jeff Rasley [@jeffra](https://github.com/jeffra)
* Olatunji Ruwase [@tjruwase](https://github.com/tjruwase)
* Samyam Rajbhandari [@samyam](https://github.com/samyam)

from the DeepSpeed team for your generous and caring support and prompt resolution of the issues we have encountered.

And HuggingFace for providing access to the hardware the benchmarks were run on.

Sylvain Gugger [@sgugger](https://github.com/sgugger/) and Stas Bekman [@stas00](https://github.com/stas00) worked on the integration of these projects.
8
0
hf_public_repos
hf_public_repos/blog/vertex-colored-to-textured-mesh.md
--- title: "Converting Vertex-Colored Meshes to Textured Meshes" thumbnail: /blog/assets/vertex-colored-to-textured-mesh/thumbnail.png authors: - user: dylanebert --- # Converting Vertex-Colored Meshes to Textured Meshes [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/dylanebert/InstantTexture/blob/main/notebooks/walkthrough.ipynb) Convert vertex-colored meshes to UV-mapped, textured meshes. <gradio-app theme_mode="light" space="dylanebert/InstantTexture"></gradio-app> ## Introduction Vertex colors are a straightforward way to add color information directly to a mesh's vertices. This is often the way generative 3D models like [InstantMesh](https://huggingface.co/spaces/TencentARC/InstantMesh) produce meshes. However, most applications prefer UV-mapped, textured meshes. This tutorial walks through a quick solution to convert vertex-colored meshes to UV-mapped, textured meshes. This includes [The Short Version](#the-short-version) to get results quickly, and [The Long Version](#the-long-version) for an in-depth walkthrough. ## The Short Version Install the [InstantTexture](https://github.com/dylanebert/InstantTexture) library for easy conversion. This is a small library we wrote that implements the steps described in [The Long Version](#the-long-version) below. ```bash pip install git+https://github.com/dylanebert/InstantTexture ``` ### Usage The code below converts a vertex-colored `.obj` mesh to a UV-mapped, textured `.glb` mesh and saves it to `output.glb`. ```python from instant_texture import Converter input_mesh_path = "https://raw.githubusercontent.com/dylanebert/InstantTexture/refs/heads/main/examples/chair.obj" converter = Converter() converter.convert(input_mesh_path) ``` Let's visualize the output mesh. ```python import trimesh mesh = trimesh.load("output.glb") mesh.show() ``` That's it! For a detailed walkthrough, continue reading. ## The Long Version Install the following dependencies: - **numpy** for numerical operations - **trimesh** for loading and saving mesh data - **xatlas** for generating uv maps - **Pillow** for image processing - **opencv-python** for image processing - **httpx** for downloading the input mesh ```bash pip install numpy trimesh xatlas opencv-python pillow httpx ``` Import dependencies. ```python import cv2 import numpy as np import trimesh import xatlas from PIL import Image, ImageFilter ``` Load the vertex-colored input mesh. This should be a `.obj` file located at `input_mesh_path`. If it's a local file, use `trimesh.load()` instead of `trimesh.load_remote()`. ```python mesh = trimesh.load_remote(input_mesh_path) mesh.show() ``` Access the vertex colors of the mesh. If this fails, ensure the mesh is a valid `.obj` file with vertex colors. ```python vertex_colors = mesh.visual.vertex_colors ``` Generate the uv map using xatlas. This is the most time-consuming part of the process. ```python vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces) ``` Remap the vertices and vertex colors to the uv map. ```python vertices = mesh.vertices[vmapping] vertex_colors = vertex_colors[vmapping] mesh.vertices = vertices mesh.faces = indices ``` Define the desired texture size. Construct a texture buffer that is upscaled by an `upscale_factor` to create a higher quality texture. 
```python texture_size = 1024 upscale_factor = 2 buffer_size = texture_size * upscale_factor texture_buffer = np.zeros((buffer_size, buffer_size, 4), dtype=np.uint8) ``` Fill in the texture of the UV-mapped mesh using barycentric interpolation. 1. **Barycentric interpolation**: Computes the interpolated color at point `p` inside a triangle defined by vertices `v0`, `v1`, and `v2` with corresponding colors `c0`, `c1`, and `c2`. 2. **Point-in-Triangle test**: Determines if a point `p` lies within a triangle defined by vertices `v0`, `v1`, and `v2`. 3. **Texture-filling loop**: - Iterate over each face of the mesh. - Retrieve the UV coordinates (`uv0`, `uv1`, `uv2`) and colors (`c0`, `c1`, `c2`) for the current face. - Convert the UV coordinates to buffer coordinates. - Determine the bounding box of the triangle on the texture buffer. - For each pixel in the bounding box, check if the pixel lies within the triangle using the point-in-triangle test. - If inside, compute the interpolated color using barycentric interpolation. - Assign the color to the corresponding pixel in the texture buffer. ```python # Barycentric interpolation def barycentric_interpolate(v0, v1, v2, c0, c1, c2, p): v0v1 = v1 - v0 v0v2 = v2 - v0 v0p = p - v0 d00 = np.dot(v0v1, v0v1) d01 = np.dot(v0v1, v0v2) d11 = np.dot(v0v2, v0v2) d20 = np.dot(v0p, v0v1) d21 = np.dot(v0p, v0v2) denom = d00 * d11 - d01 * d01 if abs(denom) < 1e-8: return (c0 + c1 + c2) / 3 v = (d11 * d20 - d01 * d21) / denom w = (d00 * d21 - d01 * d20) / denom u = 1.0 - v - w u = np.clip(u, 0, 1) v = np.clip(v, 0, 1) w = np.clip(w, 0, 1) interpolate_color = u * c0 + v * c1 + w * c2 return np.clip(interpolate_color, 0, 255) # Point-in-Triangle test def is_point_in_triangle(p, v0, v1, v2): def sign(p1, p2, p3): return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1]) d1 = sign(p, v0, v1) d2 = sign(p, v1, v2) d3 = sign(p, v2, v0) has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0) has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0) return not (has_neg and has_pos) # Texture-filling loop for face in mesh.faces: uv0, uv1, uv2 = uvs[face] c0, c1, c2 = vertex_colors[face] uv0 = (uv0 * (buffer_size - 1)).astype(int) uv1 = (uv1 * (buffer_size - 1)).astype(int) uv2 = (uv2 * (buffer_size - 1)).astype(int) min_x = max(int(np.floor(min(uv0[0], uv1[0], uv2[0]))), 0) max_x = min(int(np.ceil(max(uv0[0], uv1[0], uv2[0]))), buffer_size - 1) min_y = max(int(np.floor(min(uv0[1], uv1[1], uv2[1]))), 0) max_y = min(int(np.ceil(max(uv0[1], uv1[1], uv2[1]))), buffer_size - 1) for y in range(min_y, max_y + 1): for x in range(min_x, max_x + 1): p = np.array([x + 0.5, y + 0.5]) if is_point_in_triangle(p, uv0, uv1, uv2): color = barycentric_interpolate(uv0, uv1, uv2, c0, c1, c2, p) texture_buffer[y, x] = np.clip(color, 0, 255).astype( np.uint8 ) ``` Let's visualize how the texture looks so far. ```python from IPython.display import display image_texture = Image.fromarray(texture_buffer) display(image_texture) ``` ![Texture with holes](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vertex-colored-to-textured-mesh/tex_output_1.png) As we can see, the texture has a lot of holes. To correct for this, we'll combine 4 techniques: 1. **Inpainting**: Fill in the holes using the average color of the surrounding pixels. 2. **Median filter**: Remove noise by replacing each pixel with the median color of its surrounding pixels. 3. **Gaussian blur**: Smooth out the texture to remove any remaining noise. 4. 
**Downsample**: Resize down to `texture_size` with LANCZOS resampling. ```python # Inpainting image_bgra = texture_buffer.copy() mask = (image_bgra[:, :, 3] == 0).astype(np.uint8) * 255 image_bgr = cv2.cvtColor(image_bgra, cv2.COLOR_BGRA2BGR) inpainted_bgr = cv2.inpaint( image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA ) inpainted_bgra = cv2.cvtColor(inpainted_bgr, cv2.COLOR_BGR2BGRA) texture_buffer = inpainted_bgra[::-1] image_texture = Image.fromarray(texture_buffer) # Median filter image_texture = image_texture.filter(ImageFilter.MedianFilter(size=3)) # Gaussian blur image_texture = image_texture.filter(ImageFilter.GaussianBlur(radius=1)) # Downsample image_texture = image_texture.resize((texture_size, texture_size), Image.LANCZOS) # Display the final texture display(image_texture) ``` ![Texture without holes](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vertex-colored-to-textured-mesh/tex_output_2.png) As we can see, the texture is now much smoother and has no holes. This can be further improved with more advanced techniques or manual texture editing. Finally, we can construct a new mesh with the generated uv coordinates and texture. ```python material = trimesh.visual.material.PBRMaterial( baseColorFactor=[1.0, 1.0, 1.0, 1.0], baseColorTexture=image_texture, metallicFactor=0.0, roughnessFactor=1.0, ) visuals = trimesh.visual.TextureVisuals(uv=uvs, material=material) mesh.visual = visuals mesh.show() ``` ![Final mesh](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vertex-colored-to-textured-mesh/mesh_output.png) Et voilà! The mesh is UV-mapped and textured. To export it when running locally, call `mesh.export("output.glb")`. ## Limitations As you can see, the mesh still has many small artifacts. The quality of the UV map and texture are also far below the standards of a production-ready mesh. However, if you're looking for a quick solution to map from a vertex-colored mesh to a UV-mapped mesh, then this approach may be useful for you. ## Conclusion This tutorial walked through how to convert a vertex-colored mesh to a UV-mapped, textured mesh. If you have any questions or feedback, please feel free to open an issue on [GitHub](https://github.com/dylanebert/InstantTexture) or on the [Space](https://huggingface.co/spaces/dylanebert/InstantTexture). Thank you for reading!
9
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/text_regression/local_dataset.yml
task: text_regression base_model: google-bert/bert-base-uncased project_name: autotrain-bert-custom-finetuned log: tensorboard backend: local data: path: data/ # this must be the path to the directory containing the train and valid files train_split: train # this must be either train.csv or train.json valid_split: valid # this must be either valid.csv or valid.json column_mapping: text_column: text # this must be the name of the column containing the text target_column: label # this must be the name of the column containing the target params: max_seq_length: 512 epochs: 3 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
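As a rough sketch of how a config like this might be driven from Python: this assumes the `autotrain` CLI from `autotrain-advanced` is installed and accepts a `--config` flag, and that the `${HF_USERNAME}`/`${HF_TOKEN}` placeholders are resolved from environment variables. The credentials and config path below are placeholders, not real values; check the autotrain-advanced docs for the exact interface of your version.

```python
import os
import subprocess

# Placeholder credentials; substitute your own Hub username and token.
os.environ.setdefault("HF_USERNAME", "your-username")
os.environ.setdefault("HF_TOKEN", "hf_xxx")

# Launch training from the YAML config shown above (path is an assumption).
subprocess.run(
    ["autotrain", "--config", "configs/text_regression/local_dataset.yml"],
    check=True,
)
```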
0
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/text_regression/hub_dataset.yml
task: text_regression base_model: google-bert/bert-base-uncased project_name: autotrain-bert-sms-spam-finetuned log: tensorboard backend: local data: path: sms_spam train_split: train valid_split: null column_mapping: text_column: sms target_column: label params: max_seq_length: 512 epochs: 3 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
1
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/token_classification/local_dataset.yml
task: token_classification base_model: google-bert/bert-base-uncased project_name: autotrain-bert-custom-finetuned log: tensorboard backend: local data: path: data/ # this must be the path to the directory containing the train and valid files train_split: train # this must be either train.json valid_split: test # this must be either valid.json, can also be set to null column_mapping: tokens_column: tokens # this must be the name of the column containing the text tags_column: tags # this must be the name of the column containing the target params: max_seq_length: 512 epochs: 3 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
2
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/token_classification/hub_dataset.yml
task: token_classification base_model: google-bert/bert-base-uncased project_name: autotrain-bert-conll2003-finetuned log: tensorboard backend: local data: path: conll2003 train_split: train valid_split: validation column_mapping: tokens_column: tokens tags_column: ner_tags params: max_seq_length: 512 epochs: 3 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
3
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/llama3-70b-orpo-v1.yml
task: llm-orpo base_model: meta-llama/Meta-Llama-3-70B-Instruct project_name: autotrain-llama3-70b-orpo-v1 log: tensorboard backend: local data: path: argilla/distilabel-capybara-dpo-7k-binarized train_split: train valid_split: valid chat_template: chatml column_mapping: text_column: chosen rejected_text_column: rejected prompt_text_column: prompt params: block_size: 2048 model_max_length: 8192 max_prompt_length: 1024 epochs: 3 batch_size: 1 lr: 1e-5 peft: true quantization: null target_modules: all-linear padding: right optimizer: paged_adamw_8bit scheduler: linear gradient_accumulation: 4 mixed_precision: bf16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
4
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/llama3-8b-orpo-space.yml
task: llm-orpo base_model: meta-llama/Meta-Llama-3-8B-Instruct project_name: autotrain-llama3-8b-orpo-t1 log: tensorboard backend: spaces-a10g-largex4 data: path: argilla/distilabel-capybara-dpo-7k-binarized train_split: train valid_split: null chat_template: chatml column_mapping: text_column: chosen rejected_text_column: rejected prompt_text_column: prompt params: block_size: 1024 model_max_length: 8192 max_prompt_length: 512 epochs: 3 batch_size: 2 lr: 3e-5 peft: true quantization: int4 target_modules: all-linear padding: right optimizer: adamw_torch scheduler: linear gradient_accumulation: 4 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
5
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/llama3-8b-dpo-qlora.yml
task: llm-dpo base_model: meta-llama/Meta-Llama-3-8B-Instruct project_name: autotrain-llama3-8b-dpo-qlora log: tensorboard backend: local data: path: mlabonne/orpo-dpo-mix-40k train_split: train valid_split: null chat_template: chatml column_mapping: text_column: chosen rejected_text_column: rejected prompt_text_column: prompt params: block_size: 1024 model_max_length: 2048 max_prompt_length: 512 epochs: 3 batch_size: 2 lr: 3e-5 peft: true quantization: int4 target_modules: all-linear padding: right optimizer: adamw_torch scheduler: linear gradient_accumulation: 4 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: false
6
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/llama3-8b-orpo.yml
task: llm-orpo base_model: meta-llama/Meta-Llama-3-8B-Instruct project_name: autotrain-llama3-8b-orpo log: tensorboard backend: local data: path: argilla/distilabel-capybara-dpo-7k-binarized train_split: train valid_split: null chat_template: chatml column_mapping: text_column: chosen rejected_text_column: rejected prompt_text_column: prompt params: block_size: 1024 model_max_length: 8192 max_prompt_length: 512 epochs: 3 batch_size: 2 lr: 3e-5 peft: true quantization: int4 target_modules: all-linear padding: right optimizer: adamw_torch scheduler: linear gradient_accumulation: 4 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
7
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/gpt2_sft.yml
task: llm-sft base_model: openai-community/gpt2 project_name: autotrain-gpt2-finetuned-guanaco log: tensorboard backend: local data: path: timdettmers/openassistant-guanaco train_split: train valid_split: null chat_template: null column_mapping: text_column: text params: block_size: 1024 model_max_length: 2048 max_prompt_length: 512 epochs: 3 batch_size: 2 lr: 3e-5 padding: right optimizer: adamw_torch scheduler: linear gradient_accumulation: 4 mixed_precision: fp16 merge_adapter: true hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: false
8
0
hf_public_repos/autotrain-advanced/configs
hf_public_repos/autotrain-advanced/configs/llm_finetuning/llama3-8b-sft-unsloth.yml
task: llm-sft base_model: meta-llama/Meta-Llama-3-8B-Instruct project_name: autotrain-llama3-8b-sft-unsloth log: tensorboard backend: local data: path: rishiraj/guanaco-style-metamath-40k train_split: train valid_split: null chat_template: null column_mapping: text_column: text params: block_size: 1024 model_max_length: 8192 max_prompt_length: 512 epochs: 3 batch_size: 2 lr: 3e-5 peft: true quantization: int4 target_modules: all-linear padding: right optimizer: adamw_torch scheduler: linear gradient_accumulation: 4 mixed_precision: fp16 unsloth: true lora_dropout: 0 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true
9
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/kernel_helpers.h
// This header is not specific to our application and you'll probably want
// something like this for any extension you're building. This includes the
// infrastructure needed to serialize descriptors that are used with the
// "opaque" parameter of the GPU custom call. In our example we'll use this
// parameter to pass the size of our problem.

#ifndef _GPU_OPS_KERNEL_HELPERS_H_
#define _GPU_OPS_KERNEL_HELPERS_H_

#include <cstdint>
#include <cstring>  // for memcpy used in bit_cast below
#include <stdexcept>
#include <string>
#include <type_traits>

#define JAX_APEX_WARP_SIZE 32

namespace gpu_ops {

// https://en.cppreference.com/w/cpp/numeric/bit_cast
template <class To, class From>
typename std::enable_if<sizeof(To) == sizeof(From) &&
                            std::is_trivially_copyable<From>::value &&
                            std::is_trivially_copyable<To>::value,
                        To>::type
bit_cast(const From &src) noexcept {
  static_assert(std::is_trivially_constructible<To>::value,
                "This implementation additionally requires destination type to "
                "be trivially constructible");

  To dst;
  memcpy(&dst, &src, sizeof(To));
  return dst;
}

template <typename T> std::string PackDescriptorAsString(const T &descriptor) {
  return std::string(bit_cast<const char *>(&descriptor), sizeof(T));
}

template <typename T>
const T *UnpackDescriptor(const char *opaque, std::size_t opaque_len) {
  if (opaque_len != sizeof(T)) {
    throw std::runtime_error("Invalid opaque object size");
  }
  return bit_cast<const T *>(opaque);
}

} // namespace gpu_ops

#endif
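For readers less familiar with the "opaque descriptor" pattern above, here is a rough Python analogue of what `PackDescriptorAsString`/`UnpackDescriptor` do: a fixed-layout descriptor is serialized to raw bytes, passed through an opaque byte string, and reinterpreted on the other side with a size check. The field layout here (two 32-bit ints) is purely illustrative and not the actual descriptor used by these kernels.

```python
import struct

# Hypothetical descriptor layout: little-endian (batch_size, seqlen).
DESCRIPTOR_FORMAT = "<ii"


def pack_descriptor(batch_size, seqlen):
    # Analogue of PackDescriptorAsString: struct -> raw bytes.
    return struct.pack(DESCRIPTOR_FORMAT, batch_size, seqlen)


def unpack_descriptor(opaque):
    # Analogue of UnpackDescriptor: validate the size, then reinterpret.
    if len(opaque) != struct.calcsize(DESCRIPTOR_FORMAT):
        raise ValueError("Invalid opaque object size")
    return struct.unpack(DESCRIPTOR_FORMAT, opaque)


opaque = pack_descriptor(8, 2048)
print(unpack_descriptor(opaque))  # (8, 2048)
```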
0
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim96_fp16_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::half_t, 96, false>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim96<cutlass::half_t, false>(params, stream); }
1
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim64_bf16_causal_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::bfloat16_t, 64, true>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim64<cutlass::bfloat16_t, true>(params, stream); }
2
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/alibi.h
#include <cmath> #include <cute/tensor.hpp> #include <cutlass/cutlass.h> #include <cutlass/array.h> #include "utils.h" namespace flash { using namespace cute; //////////////////////////////////////////////////////////////////////////////////////////////////// template <bool Is_causal> struct Alibi { const float alibi_slope; const int max_seqlen_k, max_seqlen_q; __forceinline__ __device__ Alibi(const float alibi_slope, const int max_seqlen_k, const int max_seqlen_q) : alibi_slope(alibi_slope) , max_seqlen_k(max_seqlen_k) , max_seqlen_q(max_seqlen_q) { }; template <typename Engine, typename Layout> __forceinline__ __device__ void apply_alibi(Tensor<Engine, Layout> &tensor, const int col_idx_offset_, const int row_idx_offset, const int warp_row_stride) { // tensor has shape (nrow=(2, MMA_M), ncol=(2, MMA_N)) static_assert(Layout::rank == 2, "Only support 2D Tensor"); const int lane_id = threadIdx.x % 32; const int col_idx_offset = col_idx_offset_ + (lane_id % 4) * 2; if constexpr (Is_causal) { // Simpler, we add the same bias vector to all rows #pragma unroll for (int nj = 0; nj < size<1, 1>(tensor); ++nj) { const int col_idx_base = col_idx_offset + nj * 8; #pragma unroll for (int j = 0; j < size<1, 0>(tensor); ++j) { const int col_idx = col_idx_base + j; #pragma unroll for (int mi = 0; mi < size<0>(tensor); ++mi) { tensor(mi, make_coord(j, nj)) += alibi_slope * col_idx; } } } } else { // Bias depends on both row_idx and col_idx #pragma unroll for (int mi = 0; mi < size<0, 1>(tensor); ++mi) { const int row_idx_base = row_idx_offset + mi * warp_row_stride; #pragma unroll for (int i = 0; i < size<0, 0>(tensor); ++i) { const int row_idx = row_idx_base + i * 8; #pragma unroll for (int nj = 0; nj < size<1, 1>(tensor); ++nj) { const int col_idx_base = col_idx_offset + nj * 8; #pragma unroll for (int j = 0; j < size<1, 0>(tensor); ++j) { const int col_idx = col_idx_base + j; tensor(make_coord(i, mi), make_coord(j, nj)) -= alibi_slope * abs(row_idx + max_seqlen_k - max_seqlen_q - col_idx); } } } } } } }; } // namespace flash
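As a rough, non-CUDA reference for the per-element math that `apply_alibi` above computes, here is a NumPy sketch of the two branches: the causal path adds `slope * col_idx` to every row, while the non-causal path subtracts `slope * |row + seqlen_k - seqlen_q - col|`. This is only a readability aid; the register tiling and thread/lane indexing of the real kernel are deliberately omitted.

```python
import numpy as np


def alibi_bias(alibi_slope, seqlen_q, seqlen_k, is_causal):
    """Dense (seqlen_q, seqlen_k) bias mirroring the element-wise math in alibi.h."""
    rows = np.arange(seqlen_q)[:, None]
    cols = np.arange(seqlen_k)[None, :]
    if is_causal:
        # Same bias vector added to all rows: slope * column index.
        bias = alibi_slope * np.broadcast_to(cols, (seqlen_q, seqlen_k))
    else:
        # Bias depends on both row and column indices.
        bias = -alibi_slope * np.abs(rows + seqlen_k - seqlen_q - cols)
    return bias.astype(np.float32)


# Example: bias added to the attention scores for a 4x6 score tile.
print(alibi_bias(0.5, 4, 6, is_causal=True))
```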
3
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim96_bf16_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::bfloat16_t, 96, false>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim96<cutlass::bfloat16_t, false>(params, stream); }
4
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim32_bf16_causal_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::bfloat16_t, 32, true>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim32<cutlass::bfloat16_t, true>(params, stream); }
5
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim128_fp16_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::half_t, 128, false>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim128<cutlass::half_t, false>(params, stream); }
6
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim224_bf16_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::bfloat16_t, 224, false>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim224<cutlass::bfloat16_t, false>(params, stream); }
7
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/utils.h
/****************************************************************************** * Copyright (c) 2023, Tri Dao. ******************************************************************************/ #pragma once #include <assert.h> #include <stdint.h> #include <stdlib.h> #include <cuda_fp16.h> #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 #include <cuda_bf16.h> #endif #include <cute/tensor.hpp> #include <cutlass/array.h> #include <cutlass/cutlass.h> #include <cutlass/numeric_conversion.h> #include <cutlass/numeric_types.h> //////////////////////////////////////////////////////////////////////////////////////////////////// namespace flash { //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> __forceinline__ __device__ uint32_t relu2(const uint32_t x); template<> __forceinline__ __device__ uint32_t relu2<cutlass::half_t>(const uint32_t x) { uint32_t res; const uint32_t zero = 0u; #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 asm volatile("max.f16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero)); #else asm volatile( \ "{\n" \ "\t .reg .f16x2 sela;\n" \ "\t set.gtu.u32.f16x2 sela, %1, %2;\n" \ "\t and.b32 %0, sela, %1;\n" "}\n" : "=r"(res) : "r"(x), "r"(zero)); #endif return res; } #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 template<> __forceinline__ __device__ uint32_t relu2<cutlass::bfloat16_t>(const uint32_t x) { uint32_t res; const uint32_t zero = 0u; asm volatile("max.bf16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero)); return res; } #endif //////////////////////////////////////////////////////////////////////////////////////////////////// #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 template<typename T> __forceinline__ __device__ uint32_t convert_relu2(const float2 x); template<> __forceinline__ __device__ uint32_t convert_relu2<cutlass::half_t>(const float2 x) { uint32_t res; const uint32_t a = reinterpret_cast<const uint32_t&>(x.x); const uint32_t b = reinterpret_cast<const uint32_t&>(x.y); asm volatile("cvt.rn.relu.f16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a)); return res; } template<> __forceinline__ __device__ uint32_t convert_relu2<cutlass::bfloat16_t>(const float2 x) { uint32_t res; const uint32_t a = reinterpret_cast<const uint32_t&>(x.x); const uint32_t b = reinterpret_cast<const uint32_t&>(x.y); asm volatile("cvt.rn.relu.bf16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a)); return res; } #endif //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> struct MaxOp { __device__ __forceinline__ T operator()(T const & x, T const & y) { return x > y ? 
x : y; } }; template <> struct MaxOp<float> { // This is slightly faster __device__ __forceinline__ float operator()(float const &x, float const &y) { return max(x, y); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> struct SumOp { __device__ __forceinline__ T operator()(T const & x, T const & y) { return x + y; } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<int THREADS> struct Allreduce { static_assert(THREADS == 32 || THREADS == 16 || THREADS == 8 || THREADS == 4); template<typename T, typename Operator> static __device__ __forceinline__ T run(T x, Operator &op) { constexpr int OFFSET = THREADS / 2; x = op(x, __shfl_xor_sync(uint32_t(-1), x, OFFSET)); return Allreduce<OFFSET>::run(x, op); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<> struct Allreduce<2> { template<typename T, typename Operator> static __device__ __forceinline__ T run(T x, Operator &op) { x = op(x, __shfl_xor_sync(uint32_t(-1), x, 1)); return x; } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<bool A_in_regs=false, bool B_in_regs=false, typename Tensor0, typename Tensor1, typename Tensor2, typename Tensor3, typename Tensor4, typename TiledMma, typename TiledCopyA, typename TiledCopyB, typename ThrCopyA, typename ThrCopyB> __forceinline__ __device__ void gemm(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsA, Tensor4 const& tCsB, TiledMma tiled_mma, TiledCopyA smem_tiled_copy_A, TiledCopyB smem_tiled_copy_B, ThrCopyA smem_thr_copy_A, ThrCopyB smem_thr_copy_B) { CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K Tensor tCrA_copy_view = smem_thr_copy_A.retile_D(tCrA); CUTE_STATIC_ASSERT_V(size<1>(tCsA) == size<1>(tCrA_copy_view)); // M Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB); CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, _0{}), tCrA_copy_view(_, _, _0{})); } if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{})); } #pragma unroll for (int i = 0; i < size<2>(tCrA); ++i) { if (i < size<2>(tCrA) - 1) { if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, i + 1), tCrA_copy_view(_, _, i + 1)); } if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1)); } } cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc); } } //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename Tensor0, typename Tensor1, typename Tensor2, typename Tensor3, typename TiledMma, typename TiledCopy, typename ThrCopy> __forceinline__ __device__ void gemm_rs(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsB, TiledMma tiled_mma, TiledCopy smem_tiled_copy_B, ThrCopy smem_thr_copy_B) { CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB); CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{})); #pragma unroll 
for (int i = 0; i < size<2>(tCrA); ++i) { if (i < size<2>(tCrA) - 1) { cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1)); } cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc); } } //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to (nrow=(2, MMA_M), ncol=(2, MMA_N)) template<typename Layout> __forceinline__ __device__ auto convert_layout_acc_rowcol(Layout acc_layout) { static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); auto l = logical_divide(acc_layout, Shape<_2>{}); // ((2, 2), MMA_M, MMA_N) return make_layout(make_layout(get<0, 1>(l), get<1>(l)), make_layout(get<0, 0>(l), get<2>(l))); }; //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to ((4, 2), MMA_M, MMA_N / 2) // if using m16n8k16, or to (4, MMA_M, MMA_N) if using m16n8k8. template<typename MMA_traits, typename Layout> __forceinline__ __device__ auto convert_layout_acc_Aregs(Layout acc_layout) { using X = Underscore; static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); constexpr int mma_shape_K = get<2>(typename MMA_traits::Shape_MNK{}); static_assert(mma_shape_K == 8 || mma_shape_K == 16); if constexpr (mma_shape_K == 8) { return acc_layout; } else { auto l = logical_divide(acc_layout, Shape<X, X, _2>{}); // (4, MMA_M, (2, MMA_N / 2))) return make_layout(make_layout(get<0>(l), get<2, 0>(l)), get<1>(l), get<2, 1>(l)); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to ((4, 2), MMA_M, MMA_N / 2) template<typename Layout> __forceinline__ __device__ auto convert_layout_acc_dropout(Layout acc_layout) { using X = Underscore; static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); auto l = logical_divide(acc_layout, Shape<X, X, _2>{}); // (4, MMA_M, (2, MMA_N / 2))) return make_layout(make_layout(get<0>(l), get<2, 0>(l)), get<1>(l), get<2, 1>(l)); }; //////////////////////////////////////////////////////////////////////////////////////////////////// template <typename To_type, typename Engine, typename Layout> __forceinline__ __device__ auto convert_type(Tensor<Engine, Layout> const &tensor) { using From_type = typename Engine::value_type; constexpr int numel = decltype(size(tensor))::value; cutlass::NumericArrayConverter<To_type, From_type, numel> convert_op; // HACK: this requires tensor to be "contiguous" auto frag = convert_op(*reinterpret_cast<const cutlass::Array<From_type, numel> *>(tensor.data())); return make_tensor(make_rmem_ptr<To_type>(&frag), tensor.layout()); } //////////////////////////////////////////////////////////////////////////////////////////////////// template <typename Engine, typename Layout> __forceinline__ __device__ void relu_(Tensor<Engine, Layout> &tensor) { constexpr int numel = decltype(size(tensor))::value; static_assert(numel % 2 == 0); using value_t = typename Engine::value_type; // HACK: this requires tensor to be "contiguous" Tensor tensor_uint32 = recast<uint32_t>(tensor); #pragma unroll for (int i = 0; i < size(tensor_uint32); ++i) { tensor_uint32(i) = relu2<value_t>(tensor_uint32(i)); } } 
//////////////////////////////////////////////////////////////////////////////////////////////////// // On SM80 and above, we can fuse fp32 -> fp16/bf16 conversion and relu into 1 instruction template <typename To_type, typename Engine, typename Layout> __forceinline__ __device__ auto convert_type_relu(Tensor<Engine, Layout> const &tensor) { using From_type = typename Engine::value_type; static_assert(std::is_same_v<To_type, cutlass::half_t> || std::is_same_v<To_type, cutlass::bfloat16_t>); static_assert(std::is_same_v<float, From_type>); constexpr int numel = decltype(size(tensor))::value; static_assert(numel % 2 == 0); #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 // HACK: this requires tensor to be "contiguous" Tensor tensor_float2 = recast<float2>(tensor); Tensor out_uint32 = make_tensor<uint32_t>(tensor_float2.layout()); #pragma unroll for (int i = 0; i < size(out_uint32); ++i) { out_uint32(i) = convert_relu2<To_type>(tensor_float2(i)); } Tensor out = make_tensor(make_rmem_ptr<To_type>(out_uint32.data()), tensor.layout()); #else Tensor out = flash::convert_type<To_type>(tensor); flash::relu_(out); #endif return out; } //////////////////////////////////////////////////////////////////////////////////////////////////// // Blocks until all but N previous cp.async.commit_group operations have committed. // This differs from cute::cp_async_wait in that when N = 0 we don't call cp.async.wait_all // (which is equivalent to commit_group then wait_group 0). // Instead we just call cp.async.wait_group 0, which is slightly faster. // https://github.com/NVIDIA/cutlass/blob/master/include/cute/arch/copy_sm80.hpp#L113 template <int N> CUTE_HOST_DEVICE void cp_async_wait() { #if defined(CUTE_ARCH_CP_ASYNC_SM80_ENABLED) asm volatile("cp.async.wait_group %0;\n" :: "n"(N)); #endif } //////////////////////////////////////////////////////////////////////////////////////////////////// template <bool Is_even_MN=true, bool Is_even_K=true, bool Clear_OOB_MN=false, bool Clear_OOB_K=true, typename TiledCopy, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Engine2, typename Layout2, typename Engine3, typename Layout3> __forceinline__ __device__ void copy(TiledCopy tiled_copy, Tensor<Engine0, Layout0> const &S, Tensor<Engine1, Layout1> &D, Tensor<Engine2, Layout2> const &identity_MN, Tensor<Engine3, Layout3> const &predicate_K, const int max_MN=0) { CUTE_STATIC_ASSERT_V(rank(S) == Int<3>{}); CUTE_STATIC_ASSERT_V(rank(D) == Int<3>{}); CUTE_STATIC_ASSERT_V(size<0>(S) == size<0>(D)); // MMA CUTE_STATIC_ASSERT_V(size<1>(S) == size<1>(D)); // MMA_M CUTE_STATIC_ASSERT_V(size<2>(S) == size<2>(D)); // MMA_K // There's no case where !Clear_OOB_K && Clear_OOB_MN static_assert(!(Clear_OOB_MN && !Clear_OOB_K)); #pragma unroll for (int m = 0; m < size<1>(S); ++m) { if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { #pragma unroll for (int k = 0; k < size<2>(S); ++k) { if (Is_even_K || predicate_K(k)) { cute::copy(tiled_copy, S(_, m, k), D(_, m, k)); } else if (Clear_OOB_K) { cute::clear(D(_, m, k)); } } } else if (Clear_OOB_MN) { cute::clear(D(_, m, _)); } } // TD [2023-04-13]: Strange that the code below can cause race condition. // I think it's because the copies are under an if statement. 
// if (Is_even_K) { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { // copy(tiled_copy, S(_, m, _), D(_, m, _)); // } else if (Clear_OOB_MN) { // clear(D(_, m, _)); // } // } // } else { // It's slightly faster in this case if iterate over K first // #pragma unroll // for (int k = 0; k < size<2>(S); ++k) { // if (predicate_K(k)) { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { // copy(tiled_copy, S(_, m, k), D(_, m, k)); // } else if (Clear_OOB_MN) { // clear(D(_, m, k)); // } // } // } else if (Clear_OOB_K) { // There's no case where !Clear_OOB_K && Clear_OOB_MN // if (Clear_OOB_MN || Is_even_MN) { // clear(D(_, _, k)); // } else { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (!(Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN)) { // clear(D(_, m, k)); // } // } // } // } // } // } } //////////////////////////////////////////////////////////////////////////////////////////////////// template <bool Is_even_K=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Engine2, typename Layout2, typename Engine3, typename Layout3> __forceinline__ __device__ void copy_w_min_idx(Tensor<Engine0, Layout0> const &S, Tensor<Engine1, Layout1> &D, Tensor<Engine2, Layout2> const &identity_MN, Tensor<Engine3, Layout3> const &predicate_K, const int max_MN=0, const int min_MN=0) { CUTE_STATIC_ASSERT_V(rank(S) == Int<3>{}); CUTE_STATIC_ASSERT_V(rank(D) == Int<3>{}); CUTE_STATIC_ASSERT_V(size<0>(S) == size<0>(D)); // MMA CUTE_STATIC_ASSERT_V(size<1>(S) == size<1>(D)); // MMA_M CUTE_STATIC_ASSERT_V(size<2>(S) == size<2>(D)); // MMA_K // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("blockIdx.y = %d, max_MN = %d, min_MN = %d\n", blockIdx.y, max_MN, min_MN); } #pragma unroll for (int m = 0; m < size<1>(S); ++m) { // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("blockIdx.y = %d, m = %d\n", blockIdx.y, get<0>(identity_MN(0, m, 0))); } if (get<0>(identity_MN(0, m, 0)) >= min_MN && get<0>(identity_MN(0, m, 0)) < max_MN) { // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("Inner loop, blockIdx.y = %d, m = %d\n", blockIdx.y, get<0>(identity_MN(0, m, 0))); } #pragma unroll for (int k = 0; k < size<2>(S); ++k) { if (Is_even_K || predicate_K(k)) { cute::copy(S(_, m, k), D(_, m, k)); } } } } } //////////////////////////////////////////////////////////////////////////////////////////////////// } // namespace flash
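For readers less familiar with warp shuffles, here is a small Python sketch of the butterfly pattern that `Allreduce<THREADS>::run` in `utils.h` above implements with `__shfl_xor_sync`: at each step every lane combines its value with the lane whose index differs in exactly one bit, so after log2(THREADS) steps all lanes hold the full reduction. This is only a model of the data flow, not of the actual warp execution.

```python
from operator import add


def warp_allreduce(values, op=add):
    """Simulate Allreduce<THREADS>::run for a list of per-lane values.

    len(values) must be a power of two (e.g. 32 for a full warp). In each round,
    lane i combines its value with lane i ^ offset, mirroring __shfl_xor_sync,
    and the offset halves until it reaches 1.
    """
    lanes = list(values)
    n = len(lanes)
    assert n & (n - 1) == 0, "number of lanes must be a power of two"
    offset = n // 2
    while offset >= 1:
        lanes = [op(lanes[i], lanes[i ^ offset]) for i in range(n)]
        offset //= 2
    return lanes  # every lane now holds the reduction over all lanes


print(warp_allreduce([1, 2, 3, 4], op=add))  # [10, 10, 10, 10]
print(warp_allreduce([1, 5, 2, 7], op=max))  # [7, 7, 7, 7]
```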
8
0
hf_public_repos/candle/candle-flash-attn
hf_public_repos/candle/candle-flash-attn/kernels/flash_fwd_hdim128_bf16_sm80.cu
// Copyright (c) 2023, Tri Dao. // Splitting the different head dimensions to different files to speed up compilation. // This file is auto-generated. See "generate_kernels.py" #include "flash_fwd_launch_template.h" template<> void run_mha_fwd_<cutlass::bfloat16_t, 128, false>(Flash_fwd_params &params, cudaStream_t stream) { run_mha_fwd_hdim128<cutlass::bfloat16_t, false>(params, stream); }
9
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter3/classification.mdx
# 오디오 분류 아키텍처[[audio-classification-architectures]] 오디오 분류의 목표는 오디오 입력에 대한 클래스 레이블을 예측하는 것입니다. 모델은 전체 입력 시퀀스를 포괄하는 단일 클래스 레이블을 예측하거나 모든 오디오 프레임(일반적으로 입력 오디오의 20밀리초마다)에 대한 레이블을 예측할 수 있으며, 이 경우 모델의 출력은 클래스 레이블 확률의 시퀀스입니다. 전자의 예로는 어떤 새가 특정 소리를 내는지 감지하는 것을 들 수 있고, 후자의 예로는 특정 순간에 어떤 화자가 말하는지 예측하는 화자 구분(speaker diarization)을 들 수 있습니다. ## 스펙트로그램을 사용한 분류[[classification-using-spectrograms]] 오디오 분류를 수행하는 가장 쉬운 방법 중 하나는 이미지 분류 문제인 것처럼 가정하는 것입니다! 스펙트로그램은 `(주파수, 시퀀스 길이)` 모양의 2차원 텐서라는 것을 기억하세요. [오디오 데이터 챕터](../chapter1/audio_data)에서 이러한 스펙트로그램을 이미지로 그려보았습니다. 여러분 아시나요? 말 그대로 스펙트로그램을 이미지로 취급하고 ResNet과 같은 일반 CNN 분류기 모델에 전달하면 매우 좋은 예측 결과를 얻을 수 있습니다. 더 좋은 방법은 ViT와 같은 이미지 트랜스포머 모델을 사용하는 것입니다. 이것이 바로 **오디오 스펙트로그램 트랜스포머**가 하는 일입니다. 이 모델은 ViT 또는 비전 트랜스포머 모델을 사용하며, 일반 이미지 대신 스펙트로그램을 입력으로 전달합니다. 트랜스포머의 셀프 어텐션 레이어 덕분에 이 모델은 CNN보다 글로벌 컨텍스트를 더 잘 포착할 수 있습니다. ViT와 마찬가지로 AST(Audio Spectrogram Transformer) 모델은 오디오 스펙트로그램을 16×16픽셀의 부분적으로 겹치는 이미지 패치 시퀀스로 분할합니다. 그런 다음 이 패치 시퀀스는 일련의 임베딩으로 투영되고, 이 임베딩은 평소와 같이 트랜스포머 인코더에 입력으로 제공됩니다. AST는 인코더 전용 트랜스포머 모델이므로 출력은 16×16 입력 패치마다 하나씩 숨겨진 상태 시퀀스입니다. 여기에는 은닉 상태를 분류 확률에 매핑하기 위해 시그모이드 활성화가 포함된 간단한 분류 계층이 있습니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/ast.png" alt="오디오 스펙트로그램 트랜스포머는 스펙트로그램에서 가져온 일련의 패치에서 작동합니다."> </div> 논문 [AST: 오디오 스펙트로그램 트랜스포머](https://arxiv.org/pdf/2104.01778.pdf)에서 가져온 이미지 <Tip> 💡 여기서는 스펙트로그램이 이미지와 동일하다고 가정하지만, 중요한 차이점이 있습니다. 예를 들어, 이미지의 내용을 위아래로 이동해도 일반적으로 이미지에 포함된 내용의 의미는 변하지 않습니다. 그러나 스펙트로그램을 위아래로 이동하면 소리에 포함된 주파수가 변경되어 소리의 성격이 완전히 달라집니다. 이미지는 변환 시에도 변하지 않지만 스펙트로그램은 그렇지 않습니다. 스펙트로그램을 이미지로 취급하는 것은 실제로는 매우 잘 작동할 수 있지만 실제로는 같은 것이 아니라는 점을 명심하세요. </Tip> ## 모든 트랜스포머는 분류기가 될 수 있습니다.[[any-transformer-can-be-a-classifier]] [이전 섹션](ctc)에서 CTC가 인코더 전용 트랜스포머를 사용하여 자동 음성 인식을 수행하는 데 효율적인 기술이라는 것을 살펴보았습니다. 이러한 CTC 모델은 이미 토큰화 어휘에서 클래스 레이블에 대한 확률을 예측하는 분류기입니다. 라벨을 변경하고 특수한 CTC 손실 대신 크로스 엔트로피 손실 함수로 훈련하면 CTC 모델을 범용 오디오 분류기로 전환할 수 있습니다. 예를 들어, HF 트랜스포머에는 `Wav2Vec2ForCTC` 모델뿐만 아니라 `Wav2Vec2ForSequenceClassification` 및 `Wav2Vec2ForAudioFrameClassification` 모델도 있습니다. 이러한 모델의 아키텍처 간의 유일한 차이점은 분류 계층의 크기와 사용되는 손실 함수입니다. 실제로 모든 인코더 전용 오디오 트랜스포머 모델은 은닉 상태 시퀀스 위에 분류 레이어를 추가하여 오디오 분류기로 전환할 수 있습니다. (분류기에는 일반적으로 트랜스포머 디코더가 필요하지 않습니다.) 전체 시퀀스에 대한 단일 분류 점수를 예측하기 위해 모델(Wav2Vec2ForSequenceClassification)에서는 숨겨진 상태의 평균을 구하여 분류 레이어에 입력합니다. 출력은 단일 확률 분포입니다. 각 오디오 프레임에 대해 별도의 분류를 만들기 위해, 분류기(Wav2Vec2ForAudioFrameClassification)는 은닉 상태의 시퀀스에서 실행되므로 분류기의 출력도 시퀀스입니다.
0
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter3/ctc.mdx
# CTC 아키텍처 [[ctc-architectures]] 연결주의 시간 분류(CTC, Connectionist Temporal Classification)는 자동 음성 인식을 위한 인코더 전용 트랜스포머 모델에 사용되는 기법입니다. 이러한 모델의 예로는 **Wav2Vec2**, **HuBERT** 및 **M-CTC-T**가 있습니다. 인코더 전용 트랜스포머는 모델의 인코더 부분만 사용하기 때문에 가장 간단한 종류의 트랜스포머입니다. 인코더는 입력 시퀀스(오디오 파형)를 읽고 이를 출력 임베딩이라고도 하는 은닉 상태 시퀀스로 매핑합니다. CTC 모델을 사용하면 은닉 상태 시퀀스에 추가 선형 매핑을 적용하여 클래스 레이블 예측을 얻습니다. 클래스 레이블은 **알파벳 문자**(a, b, c, ...)입니다. 이렇게 하면 어휘가 26자와 몇 개의 특수 토큰으로만 존재하면 되기 때문에 작은 분류 헤드로 대상 언어의 모든 단어를 예측할 수 있습니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-ctc.png" alt="Transformer encoder with a CTC head on top"> </div> 지금까지는 NLP에서 BERT와 같은 모델을 사용하는 것과 매우 유사합니다. 인코더 전용 트랜스포머 모델이 텍스트 토큰을 인코더 숨겨진 상태 시퀀스에 매핑한 다음 선형 매핑을 적용하여 각 숨겨진 상태에 대해 하나의 클래스 레이블 예측을 얻습니다. 음성에서는 오디오 입력과 텍스트 출력의 '정렬(alignment)'을 알 수 없다는 점이 문제입니다. 음성이 말하는 순서와 텍스트를 필사(transcribe)하는 순서가 같다는 것은 알지만(소위 단조로운 정렬의 경우), 필사하는 텍스트의 문자가 오디오와 어떻게 일치하는지는 알 수 없습니다. 바로 이 부분에서 CTC 알고리즘이 등장합니다. <Tip> 💡 NLP 모델에서 어휘는 일반적으로 개별 문자뿐만 아니라 단어의 일부 또는 완전한 단어를 설명하는 수천 개의 토큰으로 구성됩니다. 그러나 CTC의 경우 작은 어휘가 가장 효과적이며 일반적으로 50자 미만으로 유지하려고 노력합니다. 트위터에서는 글자의 대소문자를 구분하지 않으므로 대문자(또는 소문자)만 사용해도 충분합니다. 숫자는 철자로 표기합니다(예: `"20"`은 `"twenty"`가 됩니다). 문자 외에도 최소한 단어 구분 토큰(공백)과 패딩 토큰이 필요합니다. 패딩 토큰은 자연어 처리 모델과 마찬가지로 여러 개의 예문을 일괄적으로 결합할 수 있게 해주지만, 모델이 무음을 예측할 때 사용하는 토큰이기도 합니다. 영어에서는 `'` 문자를 유지하는 것도 유용합니다. `it`s`와 `its`는 매우 다른 의미를 갖기 때문입니다. </Tip> ## 정렬을 어떻게 확인하지?[[dude-wheres-my-alignment]] 자동 음성 인식(ASR)은 오디오를 입력으로 받아 텍스트를 출력으로 생성합니다. 텍스트를 예측하는 방법에는 몇 가지 선택지가 있습니다: - 개별 문자로 인식 - 음소(phonemes)로 인식 - 단어 토큰으로 인식 자동 음성 인식 모델은 `(오디오, 텍스트)` 쌍으로 구성된 데이터 셋에 대해 학습되며, 텍스트는 오디오 파일의 사람이 만든 필사본입니다. 일반적으로 데이터 셋에는 오디오 파일에서 어떤 단어나 음절이 어디에 나오는지 알려주는 타이밍 정보가 포함되지 않습니다. 훈련 중에 타이밍 정보에 의존할 수 없기 때문에 입력과 출력 순서를 어떻게 정렬해야 하는지 알 수 없습니다. 입력이 1초짜리 오디오 파일이라고 가정해 봅시다. **Wav2Vec2** 모델에서는 먼저 CNN 피처 인코더를 사용하여 오디오 입력을 더 짧은 은닉 상태 시퀀스로 다운샘플링하는데, 여기에는 오디오 20밀리초당 하나의 은닉 상태 벡터가 있습니다. 오디오 1초에 대해 50개의 은닉 상태 시퀀스를 트랜스포머 인코더로 전달합니다. (입력 시퀀스에서 추출된 오디오 세그먼트는 부분적으로 겹치므로 20밀리초마다 하나의 은닉 상태 벡터가 출력되지만 각 은닉 상태는 실제로 25밀리초의 오디오를 나타냅니다.) 트랜스포머 인코더는 이러한 숨겨진 상태 각각에 대해 하나의 특징 표현을 예측하므로 트랜스포머로부터 50개의 출력 시퀀스를 수신합니다. 이러한 각 출력의 차원은 768입니다. 따라서 이 예제에서 트랜스포머 인코더의 출력 시퀀스는 `(768, 50)` 모양을 갖습니다. 이러한 각 예측은 음소 지속 시간보다 짧은 25ms의 시간을 포함하므로 전체 단어가 아닌 개별 음소 또는 문자를 예측하는 것이 합리적입니다. CTC는 작은 어휘에서 가장 잘 작동하므로 문자를 예측해 보겠습니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/cnn-feature-encoder.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states"> </div> 텍스트 예측을 위해 768차원 인코더 출력 각각을 선형 레이어("CTC 헤드")를 사용하여 문자 레이블에 매핑합니다. 그런 다음 모델은 로그를 포함하는 `(50, 32)` 텐서(여기서 32는 어휘의 토큰 수)를 예측합니다. 시퀀스의 각 특징에 대해 하나의 예측을 수행하므로 오디오의 각 초에 대해 총 50개의 문자를 예측하게 됩니다. 그러나 단순히 20ms마다 한 문자를 예측한다면 출력 시퀀스는 다음과 같이 보일 수 있습니다: ```text BRIIONSAWWSOMEETHINGCLOSETOPANICONHHISOPPONENT'SSFAACEWHENTHEMANNFINALLLYRREECOGGNNIIZEDHHISSERRRRORR ... ``` 자세히 보면 영어와 다소 비슷하지만 많은 문자가 중복되어 있습니다. 이는 모델이 입력 시퀀스의 오디오 20밀리초마다 *어떤 것*을 출력해야 하기 때문이며, 한 문자가 20밀리초보다 긴 기간에 걸쳐 분산되어 있으면 출력에 여러 번 나타나게 됩니다. 특히 훈련 중에는 대본의 타이밍을 알 수 없기 때문에 이를 피할 방법이 없습니다. CTC는 이러한 중복을 필터링하는 방법입니다. (실제로 예측된 시퀀스에는 모델이 소리가 무엇을 나타내는지 잘 모를 때나 문자 사이의 빈 공간을 위한 많은 패딩 토큰도 포함되어 있습니다. 명확성을 위해 예제에서 이러한 패딩 토큰을 제거했습니다. 오디오 세그먼트가 부분적으로 겹치는 것도 출력에서 문자가 중복되는 또 다른 이유입니다.) ## CTC 알고리즘[[the-ctc-algorithm]] CTC 알고리즘의 핵심은 흔히 **공백 토큰**이라고 불리는 특수 토큰을 사용하는 것입니다. 이것은 모델이 예측하는 또 다른 토큰이며 어휘의 일부입니다. 이 예시에서 빈 토큰은 `_`로 표시됩니다. 이 특수 토큰은 문자 그룹 간의 엄격한 경계 역할을 합니다. 
CTC 모델의 전체 출력은 다음과 같을 수 있습니다: ```text B_R_II_O_N_||_S_AWW_|||||_S_OMEE_TH_ING_||_C_L_O_S_E||TO|_P_A_N_I_C_||_ON||HHI_S||_OP_P_O_N_EN_T_'SS||_F_AA_C_E||_W_H_EN||THE||M_A_NN_||||_F_I_N_AL_LL_Y||||_RREE_C_O_GG_NN_II_Z_ED|||HHISS|||_ER_RRR_ORR|||| ``` 토큰 `|`는 단어 구분 문자입니다. 이 예에서는 공백 대신 `|`를 사용하여 단어 나누기 위치를 더 쉽게 파악할 수 있도록 했지만 동일한 용도로 사용됩니다. CTC 공백 문자를 사용하면 중복 문자를 필터링할 수 있습니다. 예를 들어 예측된 시퀀스의 마지막 단어인 `_ER_RRR_ORR`을 살펴봅시다. CTC 공백 토큰이 없으면 이 단어는 다음과 같이 보입니다: ```text ERRRRORR ``` 단순히 중복된 문자를 제거하면 `EROR`이 됩니다. 이는 분명 올바른 철자가 아닙니다. 하지만 CTC 빈 토큰을 사용하면 각 그룹에서 중복을 제거할 수 있습니다. 따라서: ```text _ER_RRR_ORR ``` 는 아래와 같이 변경됩니다.: ```text _ER_R_OR ``` 이제 `_` 빈 토큰을 제거하여 최종 단어를 얻습니다: ```text ERROR ``` 이 논리를 `|`를 포함한 전체 텍스트에 적용하고 남은 `|` 문자를 공백으로 바꾸면 최종 CTC 디코딩된 출력은 다음과 같습니다: ```text BRION SAW SOMETHING CLOSE TO PANIC ON HIS OPPONENT'S FACE WHEN THE MAN FINALLY RECOGNIZED HIS ERROR ``` 요약하자면, 모델은 입력 파형에서 (부분적으로 겹치는) 오디오의 20ms마다 하나의 토큰(문자)을 예측합니다. 이로 인해 많은 중복이 발생합니다. CTC 빈 토큰 덕분에 단어의 올바른 철자를 파괴하지 않고도 이러한 중복을 쉽게 제거할 수 있습니다. 이는 출력 텍스트를 입력 오디오와 정렬하는 문제를 해결하는 매우 간단하고 편리한 방법입니다. <Tip> 💡 실제 Wav2Vec2 모델에서 CTC 빈 토큰은 패딩 토큰 `<pad>`와 동일합니다. 이 모델은 예를 들어 현재 20ms의 오디오에 대해 예측할 명확한 문자가 없는 경우와 같이 이러한 `<pad>` 토큰을 많이 예측합니다. 패딩에 CTC 공백(blanking)과 동일한 토큰을 사용하면 디코딩 알고리즘이 단순화되고 어휘를 작게 유지하는 데 도움이 됩니다. </Tip> 인코더의 출력 시퀀스가 어휘에 음향 특징을 투영하는 선형 레이어로 이동하기 때문에 트랜스포머 인코더 모델에 CTC를 추가하는 것은 간단합니다.모델은 특수한 CTC 손실로 훈련됩니다. CTC의 한 가지 단점은 '소리'는 정확하지만 '철자'는 정확하지 않은 단어를 출력할 수 있다는 점입니다.결국 CTC 헤드는 완전한 단어가 아닌 개별 문자만 고려하기 때문입니다. 오디오 트랜스크립션의 품질을 개선하는 한 가지 방법은 외부 언어 모델을 사용하는 것입니다. 이 언어 모델은 기본적으로 CTC 출력 위에 맞춤법 검사기 역할을 합니다. ## Wav2Vec2, HuBERT, M-CTC-T, ...의 차이점은 무엇인가요?[[whats-the-difference-between-wav2vec2-hubert-mctct]] 모든 트랜스포머 기반 CTC 모델은 매우 유사한 아키텍처를 가지고 있습니다. 트랜스포머 인코더(디코더는 아님)를 사용하며 그 위에 CTC 헤드가 있습니다. 아키텍처 측면에서 보면 다른 점보다는 비슷한 점이 더 많습니다. Wav2Vec2와 M-CTC-T의 한 가지 차이점은 전자는 원시 오디오 파형에서 작동하는 반면 후자는 멜 스펙트로그램을 입력으로 사용한다는 점입니다. 또한 두 모델은 서로 다른 목적으로 훈련되었습니다. 예를 들어, M-CTC-T는 다국어 음성 인식을 위해 훈련되었기 때문에 다른 알파벳 외에 한자를 포함하는 비교적 큰 CTC 헤드를 가지고 있습니다. Wav2Vec2와 HuBERT는 완전히 동일한 아키텍처를 사용하지만 매우 다른 방식으로 학습됩니다. Wav2Vec2는 오디오의 마스크된 부분에 대한 음성 단위를 예측하여 BERT의 마스크된 언어 모델링과 같이 사전 학습됩니다. HuBERT는 BERT에서 한 걸음 더 나아가 텍스트 문장의 토큰과 유사한 '개별 음성 단위'를 예측하는 방법을 학습하여 기존 NLP 기술을 사용하여 음성을 처리할 수 있도록 합니다. 여기서 강조 표시된 모델만 트랜스포머 기반 CTC 모델이 아니라는 점을 분명히 말씀드립니다. 다른 모델도 많이 있지만 모두 비슷한 방식으로 작동한다는 것을 배웠습니다.
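참고로, 위에서 설명한 중복 제거와 공백 토큰 처리 과정을 간단한 파이썬 코드로 옮겨 보면 다음과 같습니다. 이는 실제 Wav2Vec2의 디코더 구현이 아니라, 본문과 같이 `_`를 공백(blank) 토큰으로, `|`를 단어 구분 토큰으로 가정한 최소한의 스케치입니다.

```python
# 본문에서 설명한 CTC 디코딩 규칙을 따르는 간단한 스케치입니다.
# 가정: "_"는 CTC 공백(blank) 토큰, "|"는 단어 구분 토큰입니다.
def ctc_decode(tokens: str, blank: str = "_", word_sep: str = "|") -> str:
    collapsed = []
    previous = None
    for token in tokens:
        # 1단계: 연속으로 반복되는 토큰은 한 번만 남깁니다.
        if token != previous:
            collapsed.append(token)
        previous = token
    # 2단계: 공백 토큰을 제거하고 단어 구분 토큰을 실제 공백으로 바꿉니다.
    text = "".join(t for t in collapsed if t != blank)
    return text.replace(word_sep, " ").strip()


print(ctc_decode("_ER_RRR_ORR"))  # ERROR
```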
1
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter3/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # 이번 코스에 대한 이해도를 확인해보세요[[check-your-understanding-of-the-course-material]] ### 1. 보코더(vocoder)는 무엇일까요? <Question choices={[ { text: "트랜스포머의 스펙트로그램 출력을 파형으로 변환하는 추가 신경망입니다.", explain: "정답입니다.", correct: true }, { text: "오디오 임베딩을 생성하는 트랜스포머 레이어의 한 유형입니다.", explain: "" }, { text: "배경 소음을 제거하기 위해 음성 오디오를 전처리하는 추가 신경망", explain: "", } ]} /> ### 2. Wav2Vec2는 어떤 항목에 예제일까요? <Question choices={[ { text: "Seq2Seq 아키텍처", explain: "" }, { text: "CNN 아키텍처", explain: "" }, { text: "CTC 아키텍처", explain: "정답입니다.", correct: true } ]} /> ### 3. CTC 알고리즘에서 빈 토큰은 어떤 역할을 하나요? <Question choices={[ { text: "빈 토큰은 문장의 개별 단어 사이에 공백이 있음을 나타냅니다.", explain: "" }, { text: "빈 토큰은 문자 그룹 간의 엄격한 경계 역할을 하는 예측 토큰입니다. 중복되는 문자를 필터링할 수 있습니다.", explain: "정답입니다.", correct: true }, { text: "빈 토큰은 어휘에서 어떤 토큰과도 일치하지 않는 소리에 사용되며, '알 수 없음'을 나타내는 <UNK> 토큰과 유사합니다.", explain: "" } ]} /> ### 4. 다음 중 CTC 모델에 대한 설명 중 거짓은 무엇입니까? <Question choices={[ { text: "CTC 모델은 트랜스포머 아키텍처의 인코더 부분만 사용합니다.", explain: "" }, { text: "Wav2Vec2와 HuBERT는 완전히 동일한 아키텍처를 사용하지만 학습 방식은 다릅니다.", explain: "" }, { text: "CTC 모델은 다른 아키텍처에 비해 음성 인식 성능이 가장 우수한 경향이 있습니다.", explain: "정답입니다.", correct: true } ]} /> ### 5. Whisper모델은 어떤 항목의 예제일까요? <Question choices={[ { text: "Seq2Seq 아키텍처", explain: "정답입니다.", correct: true }, { text: "CNN 아키텍처", explain: "" }, { text: "CTC 아키텍처", explain: "" } ]} /> ### 6. 오디오 분류를 수행하는 가장 쉬운 방법은 무엇인가요? <Question choices={[ { text: "오디오 파형에 인코더-디코더 트랜스포머를 사용합니다.", explain: "" }, { text: "스펙트로그램을 사용하여 작업을 이미지 분류 문제로 처리합니다.", explain: "정답입니다.", correct: true }, { text: "레이블을 변경하고 일반 크로스 엔트로피 손실 함수로 훈련하여 CTC 모델을 범용 오디오 분류기로 전환합니다.", explain: "" } ]} /> ### 7. 참인가요, 거짓인가요? 분류를 위해 스펙트로그램을 이미지로 처리할 때는 항상 이미지 이동, 자르기 또는 크기 조정과 같은 이미지 데이터 증강 기술을 활용할 수 있습니다. <Question choices={[ { text: "참", explain: "" }, { text: "거짓", explain: "정답입니다.", correct: true } ]} />
2
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter3/seq2seq.mdx
# Seq2Seq 아키텍처[[seq2seq-architectures]]

이전 섹션에서 설명한 CTC 모델은 트랜스포머 아키텍처의 인코더 부분만 사용했습니다. 디코더를 추가하여 인코더-디코더 모델을 생성할 때 이를 **sequence-to-sequence** 모델 또는 줄여서 seq2seq라고 합니다. 이 모델은 한 종류의 데이터 시퀀스를 다른 종류의 데이터 시퀀스에 매핑합니다.

인코더 전용 트랜스포머 모델에서는 인코더가 입력 시퀀스의 각 요소에 대해 예측을 수행합니다. 따라서 입력 및 출력 시퀀스의 길이는 항상 동일합니다. Wav2Vec2와 같은 CTC 모델의 경우 입력 파형이 먼저 다운샘플링되었지만 여전히 오디오 20ms당 하나의 예측이 있었습니다. seq2seq 모델을 사용하면 이러한 일대일 대응이 없으며 입력 및 출력 시퀀스의 길이가 다를 수 있습니다. 따라서 seq2seq 모델은 텍스트 요약이나 서로 다른 언어 간 번역과 같은 NLP 작업뿐만 아니라 음성 인식과 같은 오디오 작업에도 적합합니다.

디코더 아키텍처는 인코더 아키텍처와 매우 유사하며, 둘 다 셀프 어텐션을 주요 기능으로 하는 유사한 레이어를 사용합니다. 하지만 디코더는 인코더와는 다른 작업을 수행합니다. 이것이 어떻게 작동하는지 알아보기 위해 seq2seq 모델이 어떻게 자동 음성 인식을 수행하는지 살펴봅시다.

## 자동 음성 인식[[automatic-speech-recognition]]

**Whisper**의 아키텍처는 다음과 같습니다([OpenAI Whisper 블로그](https://openai.com/blog/whisper/) 그림 제공):

<div class="flex justify-center">
    <img src="https://huggingface.co/blog/assets/111_fine_tune_whisper/whisper_architecture.svg" alt="Whisper is a transformer encoder-decoder model">
</div>

꽤 익숙하게 보일 것입니다. 왼쪽은 **트랜스포머 인코더**입니다. 이것은 로그 멜 스펙트로그램을 입력으로 받아 해당 스펙트로그램을 인코딩하여 음성에서 중요한 특징을 추출하는 인코더의 은닉 상태 시퀀스를 형성합니다. 이 은닉 상태 텐서는 입력 시퀀스를 전체적으로 나타내며 입력 음성의 '의미'를 효과적으로 인코딩합니다.

<Tip>
💡 이러한 seq2seq 모델은 스펙트로그램을 입력으로 사용하는 것이 일반적입니다. 하지만 오디오 파형에서 직접 작동하도록 설계할 수도 있습니다.
</Tip>

그런 다음 인코더의 출력은 **크로스 어텐션**이라는 메커니즘을 사용하여 오른쪽에 표시된 **트랜스포머 디코더**로 전달됩니다. 크로스 어텐션은 셀프 어텐션과 비슷하지만, 어텐션이 인코더의 출력에 대해 수행된다는 점이 다릅니다. 이 시점부터 인코더는 더 이상 필요하지 않습니다.

디코더는 '시작' 토큰만 있는 초기 시퀀스(위스퍼의 경우 'SOT')부터 시작하여 한 번에 하나의 토큰씩 **자동 회귀** 방식으로 텍스트 토큰의 시퀀스를 예측합니다. 다음 각 타임스텝에서 이전 출력 시퀀스는 새로운 입력 시퀀스로 디코더에 다시 공급됩니다. 이러한 방식으로 디코더는 "종료" 토큰을 예측하거나 최대 타임스텝 수에 도달할 때까지 한 번에 하나의 새 토큰을 방출하여 출력 시퀀스를 꾸준히 증가시킵니다.

디코더의 아키텍처는 인코더의 아키텍처와 대부분 동일하지만 두 가지 큰 차이점이 있습니다:

1. 디코더에는 인코더의 입력 시퀀스 표현을 살펴볼 수 있는 크로스 어텐션 메커니즘이 있습니다.
2. 디코더 어텐션은 인과적이기 때문에 디코더는 미래를 미리 살펴볼 수 없습니다.

이 설계에서 디코더는 **언어 모델**의 역할을 수행하여 인코더의 은닉 상태 표현을 처리하고 해당 텍스트 트랜스크립션을 생성합니다. 이는 CTC 모델을 외부 언어 모델과 결합하더라도 동일한 훈련 데이터와 손실 함수로 seq2seq 시스템을 엔드 투 엔드 훈련할 수 있어 유연성이 뛰어나고 일반적으로 성능이 우수하기 때문에 CTC보다 강력한 접근 방식입니다.

<Tip>
💡 CTC 모델은 개별 문자의 시퀀스를 출력하는 반면, Whisper가 예측하는 토큰은 전체 단어 또는 단어의 일부입니다. GPT-2의 토크나이저를 사용하며 5만 개 이상의 고유 토큰을 보유하고 있습니다. 따라서 seq2seq 모델은 동일한 트랜스크립션에 대해 CTC 모델보다 훨씬 짧은 시퀀스를 출력할 수 있습니다.
</Tip>

모델의 최종 계층이 발생 가능한 토큰에 대한 확률 분포를 예측하기 때문에 seq2seq ASR 모델의 일반적인 손실 함수는 크로스 엔트로피 손실입니다. 이는 일반적으로 [최종 시퀀스 생성을 위한 빔 검색](https://huggingface.co/blog/how-to-generate)과 같은 기술과 결합됩니다. 음성 인식의 지표는 단어 오류율(WER, Word Error Rate)로, 예측된 텍스트를 대상 텍스트로 바꾸는 데 필요한 대체, 삽입, 삭제 횟수를 측정하며, 이 수치가 적을수록 좋은 점수를 받습니다.

## 텍스트 음성 변환[[texttospeech]]

놀랍지 않으실 수도 있습니다: TTS용 seq2seq 모델은 위에서 설명한 것과 본질적으로 동일하게 작동하지만 입력과 출력의 위치가 바뀝니다! 트랜스포머 인코더는 일련의 텍스트 토큰을 받아 입력 텍스트를 나타내는 은닉 상태 시퀀스를 추출합니다. 트랜스포머 디코더는 인코더 출력에 크로스 어텐션을 적용하고 스펙트로그램을 예측합니다.

<Tip>
💡 스펙트로그램은 오디오 파형의 연속적인 시간 조각의 주파수 스펙트럼을 가져와서 함께 쌓아서 만든다는 것을 기억하세요. 즉, 스펙트로그램은 각 타임스텝마다 하나씩의 로그 멜(log-mel) 주파수 스펙트럼이 요소로 구성된 시퀀스입니다.
</Tip>

ASR 모델에서는 특별한 "시작" 토큰이 포함된 시퀀스를 사용하여 디코더를 시작합니다. TTS 모델의 경우, '시작 토큰' 역할을 하는 길이가 1이고 모두 값이 0인 스펙트로그램으로 디코딩을 시작할 수 있습니다. 이 초기 스펙트로그램과 인코더의 은닉 상태 표현에 대한 크로스 어텐션이 주어지면 디코더는 이 스펙트로그램의 다음 타임슬라이스를 예측하여 스펙트로그램을 한 번에 한 타임스텝씩 꾸준히 증가시킵니다.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5_decoding.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states">
</div>

하지만 디코더는 언제 멈춰야 하는지 어떻게 알 수 있을까요? **SpeechT5** 모델에서는 디코더가 두 번째 시퀀스를 예측하도록 함으로써 이 문제를 처리합니다. 여기에는 현재 시간 간격이 마지막 시간 간격일 확률이 포함됩니다. 추론 시간에 오디오를 생성하는 동안 이 확률이 특정 임계값(예: 0.5)을 초과하면 디코더는 스펙트로그램이 완료되었음을 나타내며 생성 루프를 종료해야 합니다.
디코딩이 완료되고 스펙트로그램이 포함된 출력 시퀀스를 얻은 후 SpeechT5는 여러 컨볼루션 레이어로 구성된 소위 **post-net**을 사용하여 스펙트로그램을 개선합니다.

TTS 모델을 훈련하는 동안 목표도 스펙트로그램이며 손실은 L1 또는 MSE(Mean Squared Error)입니다. 추론 시에는 출력 스펙트로그램을 오디오 파형으로 변환하여 실제로 들을 수 있도록 하려고 합니다. 이를 위해 외부 모델인 **보코더(vocoder)**가 사용됩니다. 이 보코더는 seq2seq 아키텍처의 일부가 아니며 별도로 학습됩니다.

TTS를 어렵게 만드는 것은 일대다 매핑이라는 점입니다. 음성 대 텍스트에서는 입력 음성에 해당하는 올바른 출력 텍스트가 하나만 있지만, 텍스트 음성 변환에서는 입력 텍스트를 여러 가지 가능한 음성 소리에 매핑할 수 있습니다. 예를 들어 화자마다 문장의 다른 부분을 강조하도록 선택할 수 있습니다. 이 때문에 TTS 모델을 평가하기가 어렵습니다. 동일한 텍스트를 스펙트로그램으로 표현하는 방법은 여러 가지가 있기 때문에 L1 또는 MSE 손실 값은 실제로 큰 의미가 없습니다. 그렇기 때문에 일반적으로 TTS 모델은 MOS(Mean Opinion Score)라는 메트릭을 사용하여 사람이 직접 평가합니다.

## 결론[[conclusion]]

seq2seq 접근 방식은 인코더 전용 모델보다 더 강력합니다. 입력 시퀀스의 인코딩과 출력 시퀀스의 디코딩을 분리함으로써 오디오와 텍스트의 정렬이 보다 수월해집니다.

<!-- 모델은 어텐션 메커니즘을 통해 이 정렬을 수행하는 방법을 학습합니다. -->

그러나 인코더-디코더 모델은 디코딩 프로세스가 한 번에 전부 이루어지는 것이 아니라 한 번에 한 단계씩 이루어지기 때문에 속도가 느립니다. 시퀀스가 길수록 예측 속도가 느려집니다. 또한 자동 회귀 모델은 반복되는 단어에 갇히거나 단어를 건너뛸 수 있습니다. 빔 검색과 같은 기술을 사용하면 예측 품질을 개선하는 데 도움이 될 수 있지만 디코딩 속도가 더 느려질 수도 있습니다.
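마지막으로, 본문에서 언급한 단어 오류율(WER)은 🤗 Evaluate 라이브러리로 간단히 계산해 볼 수 있습니다. 아래 코드는 `evaluate`와 `jiwer` 패키지가 설치되어 있다고 가정한 최소 예시이며, 예측 문장과 참조 문장은 설명을 위해 임의로 만든 것입니다.

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# 대체/삽입/삭제 횟수를 참조 단어 수로 나눈 값이므로 낮을수록 좋습니다.
wer = wer_metric.compute(predictions=predictions, references=references)
print(wer)  # 약 0.167 (여섯 단어 중 한 단어가 대체됨)
```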
3
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter3/introduction.mdx
# 3단원. 오디오를 위한 트랜스포머 아키텍처[[unit-3-transformer-architectures-for-audio]] 이 강좌에서는 주로 트랜스포머 모델과 이를 오디오 작업에 적용하는 방법을 살펴봅니다. 모델의 내부의 세부 내용을 알 필요는 없지만 모델이 동작하는 주요 개념을 이해하는 것이 중요하기 때문에 간단히 복습하겠습니다. 트랜스포머에 대해 자세히 살펴보고 싶으시다면 [NLP 과정](https://huggingface.co/course/chapter1/1)을 참조하세요. ## 트렌스포머의 작동 원리[[how-does-a-transformer-work]] 원래 트랜스포머 모델은 텍스트를 한 언어에서 다른 언어로 번역하도록 설계되었습니다. 구조는 다음과 같습니다.: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers.svg" alt="Original transformer architecture"> </div> 왼쪽에는 **인코더(encoder)**가 있고 오른쪽에는 **디코더(decoder)**가 있습니다. - 인코더는 입력(이 경우 텍스트 토큰 시퀀스)을 수신하고 그 표현(특징)을 구축합니다. 모델의 이 부분은 입력을 통해 이해력을 습득하도록 학습됩니다. - 디코더는 인코더의 표현(특징)을 다른 입력(이전에 예측된 토큰)과 함께 사용하여 목표 시퀀스를 생성합니다. 모델의 이 부분은 출력을 생성하도록 훈련됩니다. 원래 설계에서 출력 시퀀스는 텍스트 토큰으로 구성되었습니다. 인코더 부분만 사용하는 트랜스포머 기반 모델(분류와 같이 입력에 대한 이해가 필요한 작업에 적합) 또는 디코더 부분만 사용하는 모델(텍스트 생성과 같은 작업에 적합)도 있습니다. 인코더 전용 모델의 예로는 BERT가 있고, 디코더 전용 모델의 예로는 GPT2가 있습니다. 트랜스포머 모델의 핵심 특징은 **어텐션(attention) 레이어**라는 특수 레이어로 구축된다는 점입니다. 이 레이어는 특징 표현을 계산할 때 입력 시퀀스의 특정 요소에 특별히 주의를 기울이고 다른 요소는 무시하도록 모델에 지시합니다. ## 오디오에 트랜스포머 사용하기[[using-transformers-for-audio]] 이 강좌에서 다룰 오디오 모델은 일반적으로 위와 같은 표준 트랜스포머 아키텍처를 사용하지만, 텍스트 대신 오디오 데이터를 사용할 수 있도록 입력 또는 출력 측에서 약간의 수정이 이루어집니다. 이러한 모든 모델은 기본적으로 트랜스포머이므로 대부분의 아키텍처가 공통적이며 주요 차이점은 학습 및 사용 방식에 있습니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/transformers_blocks.png" alt="The transformer with audio input and output"> </div> 오디오 작업의 경우 입력과 출력 전체 혹은 각각의 시퀀스가 텍스트가 아닌 오디오일 수 있습니다: - 자동 음성 인식(ASR, Automatic Speech Recognition): 입력은 음성, 출력은 텍스트입니다. - 음성 합성(TTS): 입력은 텍스트, 출력은 음성입니다. - 오디오 분류(audio classification): 입력은 오디오이고 출력은 클래스 확률(시퀀스의 각 요소에 대해 하나씩 또는 전체 시퀀스에 대해 단일 클래스 확률)입니다. - 음성 변환(voice conversion) 또는 음성 향상(speech enhancement): 입력과 출력 모두 오디오입니다. 트랜스포머와 함께 사용할 수 있도록 오디오를 처리하는 방법에는 몇 가지가 있습니다. 주요 고려 사항은 오디오를 원시 형태(파형)로 사용할지, 아니면 스펙트로그램으로 처리할지 여부입니다. ## 모델 입력[[model-inputs]] 오디오 모델에 대한 입력은 텍스트 또는 사운드일 수 있습니다. 목표는 이 입력을 트랜스포머 아키텍처에서 처리할 수 있는 임베딩 벡터로 변환하는 것입니다. ### 텍스트 입력[[text-inputs]] 텍스트 음성 변환 모델은 텍스트를 입력으로 받습니다. 이는 원래의 트랜스포머나 다른 NLP(Natural Language Processing) 모델과 똑같이 작동합니다: 입력 텍스트는 먼저 토큰화되어 일련의 텍스트 토큰을 제공합니다. 이 시퀀스는 입력 임베딩 레이어를 통해 전송되어 토큰을 512차원 벡터로 변환합니다. 그런 다음 이러한 임베딩 벡터는 트랜스포머 인코더로 전달됩니다. ### 파형 입력[[waveform-input]] 자동 음성 인식 모델은 오디오를 입력으로 받습니다. ASR에 트랜스포머를 사용하려면 먼저 오디오를 어떤 식으로든 임베딩 벡터 시퀀스로 변환해야 합니다. **Wav2Vec2** 및 **HuBERT**와 같은 모델은 오디오 파형을 모델에 대한 입력으로 직접 사용합니다. [오디오 데이터 소개](chapter1/introduction)에서 살펴보았듯이 파형은 부동 소수점 숫자의 1차원 시퀀스이며, 각 숫자는 주어진 시간에 샘플링된 진폭을 나타냅니다. 이 원시 파형은 먼저 평균과 단위 분산이 0으로 정규화되어 다양한 음량(진폭)의 오디오 샘플을 표준화하는 데 도움이 됩니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-input.png" alt="Wav2Vec2 uses a CNN to create embeddings from the input waveform"> </div> 정규화 후 오디오 샘플 시퀀스는 특징 인코더(feature encoder)로 알려진 작은 컨볼루션 신경망을 사용하여 임베딩으로 변환됩니다. 이 네트워크의 각 컨볼루션 레이어는 입력 시퀀스를 처리하고 오디오를 서브샘플링하여 시퀀스 길이를 줄인 다음 최종 컨볼루션 레이어가 오디오 25ms마다 임베딩이 포함된 512차원 벡터를 출력할 때까지 처리합니다. 입력 시퀀스가 이러한 임베딩 시퀀스로 변환되면 트랜스포머는 평소와 같이 데이터를 처리합니다. ### 스펙트로그램 입력[[spectrogram-input]] 원시 파형을 입력으로 사용할 때의 한 가지 단점은 시퀀스 길이가 길어지는 경향이 있다는 것입니다. 예를 들어 샘플링 속도가 16kHz인 30초 분량의 오디오는 '30 * 16000 = 480000' 길이의 입력이 됩니다. 시퀀스 길이가 길수록 트랜스포머 모델에서 더 많은 계산이 필요하므로 메모리 사용량이 증가합니다. 이 때문에 원시 오디오 파형은 일반적으로 오디오 입력을 표현하는 가장 효율적인 형태가 아닙니다. 스펙트로그램을 사용하면 동일한 양의 정보를 더 압축된 형태로 얻을 수 있습니다. 
<div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/whisper-input.png" alt="Whisper uses a CNN to create embeddings from the input spectrogram"> </div> **Whisper**와 같은 모델은 먼저 파형을 로그 멜 스펙트로그램으로 변환합니다. Whisper는 항상 오디오를 30초 세그먼트로 분할하며, 각 세그먼트의 로그 멜 스펙트로그램은 80, 3000의 형태를 갖습니다. 여기서 80은 멜 빈의 수이고 3000은 시퀀스 길이입니다. 로그 멜 스펙트로그램으로 변환함으로써 입력 데이터의 양을 줄였지만, 더 중요한 것은 원시 파형보다 훨씬 짧은 시퀀스라는 점입니다. 그런 다음 로그 멜 스펙트로그램은 작은 CNN에 의해 임베딩 시퀀스로 처리되어 평소와 같이 트랜스포머로 들어갑니다. 파형과 스펙트로그램 입력 두 경우 모두, 트랜스포머 앞에 작은 네트워크가 있어 입력을 임베딩으로 변환한 다음 트랜스포머가 작업을 수행합니다. ## 모델 출력[[model-outputs]] 트랜스포머 아키텍처는 출력 임베딩이라고도 하는 은닉 상태 벡터의 시퀀스를 출력합니다. 우리의 목표는 이러한 벡터를 텍스트 또는 오디오 출력으로 변환하는 것입니다. ### 텍스트 출력[[text-output]] 자동 음성 인식 모델의 목표는 텍스트 토큰의 시퀀스를 예측하는 것입니다. 이는 언어 모델링 헤드(일반적으로 단일 선형 레이어)를 추가한 다음 트랜스포머의 출력 위에 소프트맥스를 추가하여 수행됩니다. 이렇게 하면 어휘의 텍스트 토큰에 대한 확률을 예측할 수 있습니다 ### 스펙트로그램 출력[[spectrogram-output]] 텍스트 음성 변환(TTS) 모델과 같이 오디오를 생성하는 모델의 경우 오디오 시퀀스를 생성할 수 있는 레이어를 추가해야 합니다. 스펙트로그램을 생성한 다음 보코더(vocoder)라고 하는 추가 신경망을 사용하여 이 스펙트로그램을 파형으로 변환하는 것이 매우 일반적입니다. 예를 들어 **SpeechT5** TTS 모델에서 트랜스포머 네트워크의 출력은 768개 요소 벡터의 시퀀스입니다. 선형 레이어는 이 시퀀스를 로그 멜 스펙트로그램으로 투영합니다. 추가 선형 및 컨볼루션 레이어로 구성된 이른바 포스트넷(post-net)은 노이즈를 줄여 스펙트로그램을 개선합니다. 그런 다음 보코더가 최종 오디오 파형을 생성합니다. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5.png" alt="SpeechT5 outputs a spectrogram and uses a vocoder to create the waveform"> </div> <Tip> 💡 기존 파형을 가지고 단시간 푸리에 변환(STFT)을 적용하면 역연산인 ISFT를 수행하여 원래의 파형을 다시 얻을 수 있습니다. 이는 STFT로 생성된 스펙트로그램에 진폭과 위상 정보가 모두 포함되어 있고 파형을 재구성하는 데 두 가지 정보가 모두 필요하기 때문에 가능합니다. 그러나 스펙트로그램으로 출력을 생성하는 오디오 모델은 일반적으로 위상이 아닌 진폭 정보만 예측합니다. 이러한 스펙트로그램을 파형으로 변환하려면 어떻게든 위상 정보를 추정해야 합니다. 이것이 바로 보코더가 하는 일입니다. </Tip> ### 파형 출력[[waveform-output]] 모델이 중간 단계로 스펙트로그램 대신 파형을 직접 출력하는 것도 가능하지만, 현재 🤗 트랜스포머에는 이 기능을 지원하는 모델이 없습니다. ## 결론[[conclusion]] 요약: 대부분의 오디오 트랜스포머 모델은 다른 점보다는 비슷한 점이 더 많은데, 일부 모델은 트랜스포머의 인코더 부분만 사용하고 다른 모델은 인코더와 디코더를 모두 사용하지만 모두 동일한 트랜스포머 아키텍처와 어텐션 레이어를 기반으로 구축됩니다. 또한 트랜스포머 모델에 오디오 데이터를 가져오고 내보내는 방법도 살펴봤습니다. ASR, TTS 등의 다양한 오디오 작업을 수행하려면 입력을 임베딩으로 전처리하는 레이어를 교체하고, 예측된 임베딩을 출력으로 후처리하는 레이어를 교체하면 되며, 트랜스포머 백본(backbone)은 그대로 유지하면 됩니다. 다음으로, 이러한 모델을 자동 음성 인식으로 학습시킬 수 있는 몇 가지 방법을 살펴보겠습니다.
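다음 단원으로 넘어가기 전에, 위에서 설명한 파형에서 로그 멜 스펙트로그램으로의 변환을 직접 확인해 볼 수 있습니다. 아래는 🤗 Transformers의 `WhisperFeatureExtractor`를 사용한다고 가정한 간단한 스케치로, 본문에서 언급한 (80, 3000) 형태가 실제로 나오는지 살펴봅니다.

```python
import numpy as np
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

# 가정: 16 kHz에서 1초 분량의 무음 파형
waveform = np.zeros(16_000, dtype=np.float32)

# Whisper는 입력을 30초로 패딩한 뒤 로그 멜 스펙트로그램으로 변환합니다.
features = feature_extractor(waveform, sampling_rate=16_000, return_tensors="np")
print(features["input_features"].shape)  # (1, 80, 3000)
```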
4
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter1/supplemental_reading.mdx
# 더 알아보기[[learn-more]] 이 단원에서는 오디오 데이터와 이를 다루는 데 관련된 많은 기본 개념들을 다루었습니다. 더 알고 싶으신가요? 여기에서 주제에 대한 더 깊은 이해를 돕고 학습 경험을 향상시킬 수 있는 추가 자료들을 찾아보실 수 있습니다. 아래 비디오에서는 xiph.org의 Monty Montgomery가 현대 디지털 분석 장비와 오래된 아날로그 벤치 장비를 이용해 실제 오디오 장비에서의 샘플링, 양자화, 비트뎁스, 디더(dither)를 실시간 시연으로 보여줍니다. 확인해보세요: <Youtube id="cIQ9IXSUzuM"/> 디지털 신호 처리에 대해 더 깊게 다뤄보고 싶으시다면 `librosa` 패키지의 주요 메인테이너이자 Assistant Professor of Music Technology and Data Science at New York University인 Brian McFee가 저술한 무료 책 ["Digital Signals Theory"](https://brianmcfee.net/dstbook-site/content/intro.html)를 확인해보세요.
5
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter1/preprocessing.mdx
# 오디오 데이터셋 전처리하기[[preprocessing-an-audio-dataset]] 🤗 Datasets을 이용하여 데이터셋을 불러오는건 재미의 반에 불과합니다. 모델을 학습시키거나 추론(inference)을 실행하기 위해선 먼저 데이터를 전처리해야할 것입니다. 일반적으로 이는 다음의 단계를 거칩니다: * 오디오 데이터 리샘플링 * 데이터셋 필터링 * 오디오 데이터를 모델의 입력에 맞게 변환 ## 오디오 데이터 리샘플링하기[[resampling-the-audio-data]] `load_dataset` 함수는 오디오 데이터를 게시된(published) 샘플링 속도에 맞춰 다운로드합니다. 이 샘플링 속도는 여러분이 계획한 학습 혹은 추론을 위한 샘플링 속도가 아닐 수 있습니다. 이렇게 샘플링 속도간 불일치가 있다면, 모델이 기대하는 샘플링 속도에 맞춰 리샘플링을 할 수 있습니다. 대부분의 사전 학습된 모델들은 16 kHz의 샘플링 속도를 가진 오디오 데이터셋에 대하여 사전학습이 이뤄져있습니다. 여러분이 MINDS-14 데이터셋을 살펴보신다면 8 kHz로 샘플링된것을 알 수 있을겁니다. 업샘플링이 필요하다는 뜻이죠. 이를 위해, 🤗 Datasets의 `cast_column` 메소드를 써봅시다. 이 연산은 오디오를 in-place로 변경하는 것이 아니라 오디오 데이터들이 불러와질때 즉석에서 리샘플링되도록 데이터셋에 신호를 보냅니다. 다음의 코드는 샘플링 속도를 16 kHz로 설정합니다: ```py from datasets import Audio minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) ``` MINDS-14 데이터셋의 첫번째 오디오 예제를 다시 불러와 원하는 `sampling_rate`으로 리샘플링 되었는지 확인해 보겠습니다: ```py minds[0] ``` **Output:** ```out { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "audio": { "path": "/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-AU~PAY_BILL/response_4.wav", "array": array( [ 2.0634243e-05, 1.9437837e-04, 2.2419340e-04, ..., 9.3852862e-04, 1.1302452e-03, 7.1531429e-04, ], dtype=float32, ), "sampling_rate": 16000, }, "transcription": "I would like to pay my electricity bill using my card can you please assist", "intent_class": 13, } ``` 여러분은 아마 배열의 값들 역시 달라졌음을 눈치채셨을 겁니다. 이는 기존에 비해 진폭값들의 갯수가 전부 두배로 늘어났기 때문입니다. <Tip> 💡 리샘플링에 대한 배경 정보: 만약 오디오 신호가 8 kHz로 샘플링 되었다면(즉, 초당 8000개의 샘플이 있다면) 4 kHz보다 높은 주파수는 없음을 알 수 있습니다. 나이퀴스트 샘플링 정리(Nyquist sampling theorem)에 의해서 말이죠. 이 덕분에 우린 샘플링 지점들간의 원래의 연속적인 신호는 항상 부드러운 커브임을 확신할 수 있는 것입니다. 더 높은 샘플링 속도로의 업샘플링은 이 커브를 근사하여 기존 점들 사이의 값을 찾아내면 됩니다. 그러나 다운샘플링 같은 경우, 새로운 샘플을 결정하기전에 새로운 나이퀴스트 한계보다 높은 주파수를 먼저 걸러내는 작업이 필요할 겁니다. 다시 말해, 2배의 다운샘플링 같은 경우 이에 맞춰 단순히 샘플들을 버리는 것으로는 왜곡이 생길 수 있습니다. 이 왜곡을 alias라고 합니다. 이렇듯 리샘플링을 올바르게 하기란 꽤 까다로우므로 librosa나 🤗 Datasets같은 잘 테스트된 라이브러리를 쓰는편이 낫습니다. </Tip> ## 데이터셋 필터링하기[[filtering-the-dataset]] 여러분은 데이터를 어떤 기준에 맞춰 필터링해야할 때도 있을겁니다. 흔한 경우로는 오디오 데이터를 특정 길이에 맞춰 제한하는 경우가 있을 수 있습니다. 예를 들어, 모델 학습시 out-of-memory 에러를 피하기 위해 20초 보다 긴 모든 데이터를 필터링하길 원할 수도 있습니다. 🤗 Datasets의 `filter` 메소드에 필터링 로직을 짠 함수를 집어넣어 쓴다면 이를 수행할 수 있습니다. 한번 어떤 데이터를 쓸지 또는 버릴지를 알려주는 함수를 작성해 이를 써봅시다. 함수 `is_audio_length_in_range`는 만약 샘플이 20초보다 짧다면 `True`를 그렇지 않다면 `False`를 반환합니다. ```py MAX_DURATION_IN_SECONDS = 20.0 def is_audio_length_in_range(input_length): return input_length < MAX_DURATION_IN_SECONDS ``` 필터링 함수는 데이터셋의 컬럼에 적용될 수 있지만 이 데이터셋엔 오디오 트랙 길이가 없습니다. 그러나 우린 새로 이런 컬럼을 만들 수 있으니 새로 만든 후 이 컬럼의 값에 필터를 적용하고 최종적으로는 다시 지워봅시다. ```py # use librosa to get example's duration from the audio file new_column = [librosa.get_duration(filename=x) for x in minds["path"]] minds = minds.add_column("duration", new_column) # use 🤗 Datasets' `filter` method to apply the filtering function minds = minds.filter(is_audio_length_in_range, input_columns=["duration"]) # remove the temporary helper column minds = minds.remove_columns(["duration"]) minds ``` **Output:** ```out Dataset({features: ["path", "audio", "transcription", "intent_class"], num_rows: 624}) ``` 데이터셋의 숫자가 654개에서 624개로 감소한것을 확인하실 수 있습니다. ## 오디오 데이터 전처리하기[[pre-processing-audio-data]] 오디오 데이터셋을 준비할 때 가장 어려운점 중 하나는 모델 학습에 맞는 형식을 갖추는 것입니다. 여러분이 앞서 보셧듯, 원시 오디오 데이터는 샘플값들의 배열로 제공됩니다. 
그러나, 사전 학습된 모델같은 경우(이를 추론을 위해 쓰든 파인튜닝을 위해 쓰든) 이런 원시 데이터를 입력 feature에 맞춰야합니다. 이런 입력 feature의 요구사항은 모델마다 다를 수 있습니다. 이는 모델의 구조와 어떤 데이터로 사전학습이 이뤄졌는지에 달려있습니다. 좋은 소식은 🤗 Transformers는 지원하는 모든 모델에 대해 원시 데이터를 모델이 원하는 입력 feature로 바꿔주는 feature extractor 클래스를 제공한다는 것입니다. 이 feature extractor는 그럼 원시 데이터로 무엇을 하는 걸까요? 일반적인 feature extraction 변환을 이해하기 위해 [Whisper](https://cdn.openai.com/papers/whisper.pdf)의 feature extractor를 살펴보겠습니다. Whisper는 자동 음성 인식(ASR)을 위해 사전 학습된 모델로 2022년 9월에 OpenAI의 Alec Radford와 공동 연구자들이 발표했습니다. 첫번째로, Whisper의 feature extractor는 모든 데이터가 30초의 길이를 갖도록 덧붙이거나(pad) 자릅니다(truncate). 30초 보다 짧은 데이터는 시퀀스의 끝에 0을 붙여 길이를 늘립니다(오디오 신호에서 0은 신호 없음 혹은 무음과 같습니다). 30초 보다 긴 데이터는 30초가 되도록 자릅니다. 배치의 모든 요소가 input space의 최대 길이에 맞춰 덧붙여지거나 잘렸으므로 별도의 attention mask는 필요 없습니다. 이런 점에서 Whisper는 특별한데, 대부분의 다른 오디오 모델들은 self-attention 메커니즘에서 어느 부분을 무시해야하는지를 알려주기 위해 시퀀스의 어디가 덧붙여졌는지 알려주는 attention mask가 필요하기 때문입니다. Whisper는 attention mask 없이 작동하도록 훈련되어 음성 신호에서 직접 입력의 어느 부분을 무시해야 하는지를 추론합니다. Whisper feature extractor가 수행하는 두번째 작업은 덧붙여진 오디오 배열들을 로그-멜 스펙트로그램으로 바꾸는 것입니다. 아시다시피, 이 스펙트로그램은 신호의 주파수가 시간에 따라 어떻게 변하는지를 멜 스케일에 맞춰 데시벨(로그 부분)로 측정하여 주파수와 진폭이 사람의 청각 시스템을 더 잘 표현하도록 합니다. 이 모든 변환은 몇 줄의 코드로 여러분의 원시 데이터에 적용될 수 있습니다. 사전 학습된 Whisper의 체크포인트에서 feature extractor를 불러와 오디오 데이터에 사용할 준비를 해봅시다: ```py from transformers import WhisperFeatureExtractor feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") ``` 다음으로, `feature_extractor`를 통해 각각의 오디오 데이터를 전처리할 함수를 작성할 수 있습니다. ```py def prepare_dataset(example): audio = example["audio"] features = feature_extractor( audio["array"], sampling_rate=audio["sampling_rate"], padding=True ) return features ``` 🤗 Datasets의 `map` 메소드를 이용하여 모든 학습 데이터에 적용시킬 수 있습니다: ```py minds = minds.map(prepare_dataset) minds ``` **Output:** ```out Dataset( { features: ["path", "audio", "transcription", "intent_class", "input_features"], num_rows: 624, } ) ``` 이렇게 간단히, 로그-멜 스펙트로그램을 데이터셋의 `input_features`에 저장할 수 있습니다. `minds` 데이터셋 중 하나를 시각화해봅시다: ```py import numpy as np example = minds[0] input_features = example["input_features"] plt.figure().set_figwidth(12) librosa.display.specshow( np.asarray(input_features[0]), x_axis="time", y_axis="mel", sr=feature_extractor.sampling_rate, hop_length=feature_extractor.hop_length, ) plt.colorbar() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/log_mel_whisper.png" alt="Log mel spectrogram plot"> </div> 이제 전처리 후 Whisper 모델에 대한 오디오 입력이 어떻게 보이는지 확인하실 수 있습니다. 모델의 feature extractor 클래스는 원시 데이터를 모델이 원하는 포맷으로 변경하는 작업을 처리합니다. 그러나, 대개의 오디오 작업은(예를 들어, 음성 인식) multimodal입니다. 이런 경우 🤗 Transformers는 텍스트 입력을 처리하기 위해 모델별 토크나이저(tokenizer)를 제공합니다. 토크나이저에 대해 더 자세히 알고 싶으시다면 [NLP 코스](https://huggingface.co/course/chapter2/4)를 참고하세요. Whisper와 다른 multimodal 모델에 대해 각각의 feature extractor와 토크나이저를 별도로 불러오거나, 이른바 processor를 통해 한번에 불러올 수도 있습니다. 더 간단히 다음의 코드처럼 `AutoProcessor`로 체크포인트에서 모델의 feature extractor와 processor를 불러올 수도 있습니다: ```py from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("openai/whisper-small") ``` 여기에서는 기본적인 데이터 준비 단계를 설명했습니다. 물론 커스텀 데이터는 더 복잡한 전처리가 필요할 수도 있습니다. 이 경우, 여러분은 어떤 종류의 커스텀 데이터도 변환이 가능하도록 `prepare_dataset` 함수를 확장할 수 있습니다. 🤗 Datasets과 함께라면, 여러분은 파이썬 함수로 작성 할 수만 있다면 여러분의 데이터에 이를 적용시킬 수 있을겁니다!
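예를 들어, 위에서 정의한 `prepare_dataset` 함수는 다음과 같이 확장할 수 있습니다. 아래는 오디오 길이를 새 컬럼으로 함께 저장해 두는 가상의 예시일 뿐이며, 여러분의 데이터에 필요한 다른 전처리를 얼마든지 추가할 수 있습니다.

```python
def prepare_dataset(example):
    audio = example["audio"]
    features = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"], padding=True
    )
    # 가정: 이후 필터링에 재사용할 수 있도록 오디오 길이(초)를 함께 저장합니다.
    features["duration_in_seconds"] = len(audio["array"]) / audio["sampling_rate"]
    return features


minds = minds.map(prepare_dataset)
```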
6
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter1/streaming.mdx
# 오디오 데이터 스트리밍하기[[streaming-audio-data]] 오디오 데이터셋을 다룰 때 마주치는 가장 큰 문제 중 하나는 바로 그 크기입니다. 1분짜리 압축되지 않은 CD 품질의 오디오(44.1kHz, 16-bit)는 5MB를 조금 넘습니다. 보통 오디오 데이터셋은 몇시간 분량의 녹음을 가지고 있습니다. 이전 섹션에서 우린 MINDS-14 오디오 데이터셋의 아주 작은 부분만을 다뤘습니다. 그러나, 보통의 오디오 데이터셋은 훨씬 큽니다. 예를 들어, [GigaSpeech from SpeechColab](https://huggingface.co/datasets/speechcolab/gigaspeech)의 `xs`(최소) 설정은 10시간의 훈련 데이터만 포함하지만 다운로드와 준비에 13GB의 저장공간이 필요합니다. 더 큰 분할(split)의 경우는 어떨까요? 이 데이터셋의 `xl`(최대) 설정은 10,000 시간의 훈련 데이터를 가지고 있고 이는 1TB의 저장 공간을 차지합니다. 우리 대부분은 이 정도 크기의 하드 디스크 용량을 가지고 있지 않을겁니다. 추가로 저장장치를 구매해야 할까요? 아니면 이런 저장 공간의 제약이 없이 학습할 수 있는 방법이 있을까요? 🤗 Datasets은 스트리밍 모드를 제공하여 이 문제를 해결합니다. 스트리밍은 데이터셋을 차례로 접근할 때 점진적으로 불러올 수 있도록 해줍니다. 모든 데이터셋을 한번에 다운로드하기보단, 데이터셋을 하나씩 불러오는 것입니다. 데이터셋을 순회하며 데이터가 필요할 때마다 즉석에서 준비하고 불러옵니다. 이런 방식으로 우린 현재 필요가 없는 데이터가 아닌 당장 필요로 하는 데이터만을 불러올 수 있습니다! 한 샘플이 끝나면 데이터셋에서 다음 데이터를 불러오면 됩니다. 스트리밍 모드는 전체 데이터셋을 한번에 다운로드하는 것에 비해 세가지 특장점이 있습니다: * 디스크 공간: 데이터는 데이터셋을 순회하며 메모리에 하나씩 불러와집니다. 데이터를 로컬에 다운로드하지 않으므로 저장 공간의 제약없이 임의의 크기의 데이터셋을 다룰 수 있습니다. * 다운로드시간과 처리 시간: 오디오 데이터셋은 그 크기가 크기때문에 다운로드하고 처리하는데 많은 시간이 소요됩니다. 스트리밍의 경우, 처리와 불러오는것이 즉석에서 이뤄지기 때문에 데이터가 준비되는대로 시작할 수 있습니다. * 실험의 간편함: 전체 데이터셋을 다운로드받을 필요 없이 몇개의 데이터에 대해 여러분의 스크립트가 잘 작동하는지 실험하기 쉽습니다. 스트리밍 모드에는 한가지 주의사항이 있습니다. 스트리밍이 아닌 전체 데이터셋을 다운로드하는 경우, 원시 데이터와 가공된 데이터(processed data) 모두 로컬 디스크에 저장됩니다. 따라서 추후에 재사용하고 싶다면 다운로드와 처리 단계를 다시 거칠 필요 없이 바로 가공된 데이터를 불러올 수 있습니다. 즉, 한번 다운로드와 처리과정을 거친다면 후에는 준비된 데이터를 다시 사용할 수 있습니다. 스트리밍 모드에서는 데이터가 디스크에 저장되지 않습니다. 따라서 다운로드 데이터와 가공 데이터는 캐시되지 않습니다. 만약 데이터셋을 재사용하길 원한다면 스트리밍 단계를 반복해야 합니다. 즉, 오디오 파일을 불러오고 처리하는 과정을 다시 거쳐야합니다. 이런 이유때문에, 여러번 사용할 데이터셋은 다운로드하는것이 좋습니다. 스트리밍 모드는 어떻게 활성화 시킬까요? 쉽습니다! 데이터셋을 불러올 때 `streaming=True`로 설정만 하면 됩니다. 나머지는 알아서 처리됩니다: ```py gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", streaming=True) ``` MINDS-14에 전처리 과정을 적용했던것처럼 스트리밍 데이터셋에도 똑같은 방식으로 전처리를 할 수 있습니다. 유일한 차이점은 더 이상 파이썬 인덱싱으로 데이터에 접근하지 못한다는 점입니다(즉, `gigaspeech["train"][sample_idx]`같은 접근은 불가합니다). 대신, 데이터셋을 순회하여 접근해야 합니다. 다음은 데이터셋을 스트리밍할때 어떻게 데이터에 접근하는지를 보여줍니다: ```py next(iter(gigaspeech["train"])) ``` **Output:** ```out { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", } ``` 만약 큰 데이터셋에서 여러개의 데이터를 보고싶다면 `take()` 함수로 첫 `n`개의 원소를 가져올 수 있습니다. 
gigaspeech 데이터셋에서 처음 두개의 데이터를 가져와 보겠습니다: ```py gigaspeech_head = gigaspeech["train"].take(2) list(gigaspeech_head) ``` **Output:** ```out [ { "segment_id": "YOU0000000315_S0000660", "speaker": "N/A", "text": "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", "audio": { "path": "xs_chunks_0000/YOU0000000315_S0000660.wav", "array": array( [ 0.0005188, 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621, ] ), "sampling_rate": 16000, }, "begin_time": 2941.89, "end_time": 2945.07, "audio_id": "YOU0000000315", "title": "Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43", "url": "https://www.youtube.com/watch?v=zr2n1fLVasU", "source": 2, "category": 24, "original_full_path": "audio/youtube/P0004/YOU0000000315.opus", }, { "segment_id": "AUD0000001043_S0000775", "speaker": "N/A", "text": "SIX TOMATOES <PERIOD>", "audio": { "path": "xs_chunks_0000/AUD0000001043_S0000775.wav", "array": array( [ 1.43432617e-03, 1.37329102e-03, 1.31225586e-03, ..., -6.10351562e-05, -1.22070312e-04, -1.83105469e-04, ] ), "sampling_rate": 16000, }, "begin_time": 3673.96, "end_time": 3675.26, "audio_id": "AUD0000001043", "title": "Asteroid of Fear", "url": "http//www.archive.org/download/asteroid_of_fear_1012_librivox/asteroid_of_fear_1012_librivox_64kb_mp3.zip", "source": 0, "category": 28, "original_full_path": "audio/audiobook/P0011/AUD0000001043.opus", }, ] ``` 스티리밍 모드는 여러분의 연구를 한 단계 높은 수준으로 이끌어줄 수 있습니다. 가장 큰 데이터셋에 접근가능할 뿐만 아니라 디스크 공간에 대한 걱정 없이 여러 데이터셋을 이용해 시스템을 한번에 쉽게 평가(evaluate)할 수 있기 때문입니다. 하나의 데이터셋을 평가하는 것과 비교하여 여러 데이터셋에 대한 평가는 음성 인식 시스템의 일반화(generalisation) 능력에 대해 더 나은 지표를 제공합니다(End-to-End Speech Benchmark(ESB)를 참고하세요).
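참고로, 스트리밍 데이터셋에도 앞서 본 것과 같은 전처리를 그대로 적용할 수 있습니다. 아래는 Whisper의 feature extractor를 사용한다고 가정한 간단한 스케치로, `map`으로 등록한 함수는 데이터를 실제로 순회할 때에만 실행되므로 디스크에는 아무것도 저장되지 않습니다.

```python
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")


def prepare_example(example):
    audio = example["audio"]
    # GigaSpeech는 이미 16 kHz이므로 Whisper가 기대하는 샘플링 속도와 일치합니다.
    example["input_features"] = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    )["input_features"][0]
    return example


# 전처리는 즉석에서(lazily) 적용됩니다.
gigaspeech_prepared = gigaspeech["train"].map(prepare_example)
next(iter(gigaspeech_prepared))
```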
7
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter1/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # 코스에 대한 이해도를 체크해보세요[[check-your-understanding-of-the-course-material]] ### 1. 샘플링 속도는 어떤 단위를 사용합니까? <Question choices={[ { text: "dB", explain: "정답이 아닙니다.데시벨(dB)은 진폭의 측정에 사용됩니다." }, { text: "Hz", explain: "샘플링 속도는 초당 샘플의 갯수로 헤르츠(Hz)를 사용합니다.", correct: true }, { text: "bit", explain: "bit는 오디오 신호의 샘플에서 몇개의 비트로 정보를 나타내는지를 알려주는 비트뎁스에서 사용됩니다. ", } ]} /> ### 2. 큰 오디오 데이터셋을 스트리밍한다면 어느 시점부터 이를 사용할 수 있습니까? <Question choices={[ { text: "모든 데이터셋이 다운로드되는 순간.", explain: "데이터를 스트리밍하는 것의 목적은 데이터셋을 전부 다운로드하지 않고 처리하는 것에 있습니다." }, { text: "처음 16개의 데이터가 다운로드되는 순간.", explain: "다시 풀어보세요!" }, { text: "첫번째 데이터가 다운로드되는 순간.", explain: "", correct: true } ]} /> ### 3. 스펙트로그램이란 무엇인가요? <Question choices={[ { text: "마이크에서 캡처된 오디오를 디지털화하는데 사용하는 장치로, 음파를 전기 신호로 변환합니다.", explain: "전기 신호를 디지털화하는 장치는 아날로그-디지털 컨버터입니다. 다시 풀어보세요!" }, { text: "오디오 신호의 진폭이 시간에 따라 변하는 것을 그린 것. 소리의 *시간 영역* 표현 이라고도 합니다.", explain: "이 설명은 스펙트로그램이 아닌 파형에 대한 설명입니다." }, { text: "주파수 스펙트럼이 시간에 따라 변화하는 것을 시각적으로 나타낸 것.", explain: "", correct: true } ]} /> ### 4. 원시 오디오 데이터를 Whisper에 적합한 로그-멜 스펙트로그램으로 변환하는 가장 쉬운 방법은? A. ```python librosa.feature.melspectrogram(audio["array"]) ``` B. ```python feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") feature_extractor(audio["array"]) ``` C. ```python dataset.feature(audio["array"], model="whisper") ``` <Question choices={[ { text: "A", explain: "`librosa.feature.melspectrogram()`는 파워 스펙트로그램을 만듭니다." }, { text: "B", explain: "", correct: true }, { text: "C", explain: "Dataset은 트랜스포머 모델의 feature를 준비하는데 사용하지 않습니다. 이는 모델의 preprocessor를 통해 이뤄집니다." } ]} /> ### 5. 🤗 허브에서 데이터셋을 불러오는 방법은? A. ```python from datasets import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` B. ```python import librosa dataset = librosa.load(PATH_TO_DATASET) ``` C. ```python from transformers import load_dataset dataset = load_dataset(DATASET_NAME_ON_HUB) ``` <Question choices={[ { text: "A", explain: "가장 좋은 방법은 🤗 Datasets 라이브러리를 사용하는 것입니다.", correct: true }, { text: "B", explain: "`librosa.load`는 경로에 있는 단일 오디오 파일을 오디오 시계열과 샘플링 속도로 이뤄진 튜플로 불러올때 유용한 방법이지 여러 feature로 이뤄진 데이터들의 데이터셋 전체를 불러올 때 유용한 방법이 아닙니다." }, { text: "C", explain: "`load_dataset` 메소드는 🤗 Transformers가 아닌 🤗 Datasets 라이브러에 포함되어 있습니다." } ]} /> ### 6. 32 kHz의 샘플링 속도를 가진 고품질 오디오 데이터셋으로 16 kHz 샘플링 속도를 요구하는 음성 인식 모델을 학습하고자 합니다. 그렇다면 무엇을 해야합니까? <Question choices={[ { text: "데이터를 그대로 사용한다. 모델은 고품질 데이터를 쉽게 일반화할 수 있을 것이므로.", explain: "attention 메커니즘에 의존하기 때문에 모델이 서로 다른 샘플링 속도간 일반화를 하기는 어렵습니다." }, { text: "🤗 Datasets 라이브러리의 Audio 모듈을 이용하여 다운샘플링을 한다.", explain: "", correct: true }, { text: "다른 모든 샘플들을 버려서 2배 다운샘플링이 되도록 한다.", explain: "이는 alias라는 신호의 왜곡을 만듭니다. 리샘플링을 하는 것은 까다로우므로 librosa나 🤗 Datasets같은 잘 테스트된 라이브러리에 맡기는게 납니다." } ]} /> ### 7. 머신러닝 모델에 의해 만들어진 스펙트로그램을 파형으로 바꾸는 방법으로 옳은 것은? <Question choices={[ { text: "vocoder라는 신경망을 이용해 스펙트로그램에서 파형을 재구성한다.", explain: "위상정보가 손실된 경우기 때문에 vocoder나 고전적인 Griffin-Lim 알고리즘을 이용해 파형을 재구성해야 합니다.", correct: true }, { text: "역 STFT를 이용해 스펙트로그램을 파형으로 바꾼다.", explain: "머신러닝 모델에 의해 만들어진 스펙트로그램은 위상정보가 없어 역 STFT를 쓸 수 없습니다." }, { text: "머신러닝 모델에 의해 만들어진 스펙트로그램은 다시 파형으로 되돌릴 수 없습니다.", explain: "다시 풀어보세요!" } ]} />
8
0
hf_public_repos/audio-transformers-course/chapters/ko
hf_public_repos/audio-transformers-course/chapters/ko/chapter1/introduction.mdx
# 1단원. 오디오 데이터 다루기[[unit-1-working-with-audio-data]]

## 이 단원에서 배울 내용[[what-youll-learn-in-this-unit]]

모든 오디오 작업과 음성 작업은 오디오 파일부터 시작됩니다. 이러한 작업을 해결하기에 앞서, 이 파일들이 실제로 무엇을 담고 있는지, 그리고 어떻게 작업해야 할지에 대해 이해하는 것이 중요합니다.

이 단원에선 파형(waveform), 샘플링 속도(sampling rate), 스펙트로그램(spectrogram)과 같은 오디오 데이터와 연관된 기본 용어에 대하여 배웁니다. 또한 오디오 데이터를 불러오고 전처리하는 방법, 큰 데이터셋을 효율적으로 스트리밍하는 방법 등 오디오 데이터셋을 다루는 법도 배우게 됩니다.

이 단원을 마치면 오디오 데이터 용어들에 대한 확실한 이해와 오디오 데이터셋의 다양한 응용 작업을 위해 필요한 기술들을 습득하게 될 것입니다. 이 단원에서 습득하게 될 지식은 코스의 나머지 과정을 이해하기 위해 필요한 기초가 됩니다.
9
0
hf_public_repos/candle/candle-core/src
hf_public_repos/candle/candle-core/src/cpu/mod.rs
//! Traits and methods for CPU-backed Tensors pub mod erf; pub mod kernels; #[allow(unused)] trait Cpu<const ARR: usize> { type Unit; type Array; const STEP: usize; const EPR: usize; fn n() -> usize; unsafe fn zero() -> Self::Unit; unsafe fn zero_array() -> Self::Array; unsafe fn load(mem_addr: *const f32) -> Self::Unit; unsafe fn vec_add(a: Self::Unit, b: Self::Unit) -> Self::Unit; unsafe fn vec_fma(a: Self::Unit, b: Self::Unit, c: Self::Unit) -> Self::Unit; unsafe fn vec_reduce(x: Self::Array, y: *mut f32); unsafe fn from_f32(v: f32) -> Self::Unit; unsafe fn vec_store(mem_addr: *mut f32, a: Self::Unit); } #[allow(unused)] trait CpuF16<const ARR: usize> { type Unit; type Array; const STEP: usize; const EPR: usize; fn n() -> usize; unsafe fn zero() -> Self::Unit; unsafe fn zero_array() -> Self::Array; unsafe fn load(mem_addr: *const f16) -> Self::Unit; unsafe fn vec_add(a: Self::Unit, b: Self::Unit) -> Self::Unit; unsafe fn vec_fma(a: Self::Unit, b: Self::Unit, c: Self::Unit) -> Self::Unit; unsafe fn vec_reduce(x: Self::Array, y: *mut f32); unsafe fn from_f32(v: f32) -> Self::Unit; unsafe fn vec_store(mem_addr: *mut f16, a: Self::Unit); } use half::f16; #[cfg(any(target_arch = "x86", target_arch = "x86_64"))] #[cfg(target_feature = "avx")] pub mod avx; #[cfg(any(target_arch = "x86", target_arch = "x86_64"))] #[cfg(target_feature = "avx")] pub use avx::{CurrentCpu, CurrentCpuF16}; #[cfg(target_arch = "wasm32")] #[cfg(target_feature = "simd128")] pub mod simd128; #[cfg(target_arch = "wasm32")] #[cfg(target_feature = "simd128")] pub use simd128::CurrentCpu; #[cfg(any(target_arch = "arm", target_arch = "aarch64"))] #[cfg(target_feature = "neon")] pub mod neon; #[cfg(any(target_arch = "arm", target_arch = "aarch64"))] #[cfg(target_feature = "neon")] pub use neon::CurrentCpu; #[cfg(any( target_feature = "neon", target_feature = "avx", target_feature = "simd128" ))] #[inline(always)] pub(crate) unsafe fn vec_dot_f32(a_row: *const f32, b_row: *const f32, c: *mut f32, k: usize) { let np = k & !(CurrentCpu::STEP - 1); let mut sum = CurrentCpu::zero_array(); let mut ax = CurrentCpu::zero_array(); let mut ay = CurrentCpu::zero_array(); for i in (0..np).step_by(CurrentCpu::STEP) { for j in 0..CurrentCpu::n() { ax[j] = CurrentCpu::load(a_row.add(i + j * CurrentCpu::EPR)); ay[j] = CurrentCpu::load(b_row.add(i + j * CurrentCpu::EPR)); sum[j] = CurrentCpu::vec_fma(sum[j], ax[j], ay[j]); } } CurrentCpu::vec_reduce(sum, c); // leftovers for i in np..k { *c += *a_row.add(i) * (*b_row.add(i)); } } #[cfg(not(any( target_feature = "neon", target_feature = "avx", target_feature = "simd128" )))] #[inline(always)] pub(crate) unsafe fn vec_dot_f32(a_row: *const f32, b_row: *const f32, c: *mut f32, k: usize) { // leftovers for i in 0..k { *c += *a_row.add(i) * (*b_row.add(i)); } } #[cfg(any( target_feature = "neon", target_feature = "avx", target_feature = "simd128" ))] #[inline(always)] pub(crate) unsafe fn vec_sum(row: *const f32, b: *mut f32, k: usize) { let np = k & !(CurrentCpu::STEP - 1); let mut sum = CurrentCpu::zero_array(); let mut x = CurrentCpu::zero_array(); for i in (0..np).step_by(CurrentCpu::STEP) { for j in 0..CurrentCpu::n() { x[j] = CurrentCpu::load(row.add(i + j * CurrentCpu::EPR)); sum[j] = CurrentCpu::vec_add(sum[j], x[j]); } } CurrentCpu::vec_reduce(sum, b); // leftovers for i in np..k { *b += *row.add(i) } } #[cfg(not(any( target_feature = "neon", target_feature = "avx", target_feature = "simd128" )))] #[inline(always)] pub(crate) unsafe fn vec_sum(row: *const f32, b: *mut f32, k: usize) { *b 
= 0f32; for i in 0..k { *b += *row.add(i) } } #[cfg(target_feature = "avx")] #[inline(always)] pub(crate) unsafe fn vec_dot_f16(a_row: *const f16, b_row: *const f16, c: *mut f32, k: usize) { let mut sumf = 0.0f32; let np = k & !(CurrentCpuF16::STEP - 1); let mut sum = CurrentCpuF16::zero_array(); let mut ax = CurrentCpuF16::zero_array(); let mut ay = CurrentCpuF16::zero_array(); for i in (0..np).step_by(CurrentCpuF16::STEP) { for j in 0..CurrentCpuF16::n() { ax[j] = CurrentCpuF16::load(a_row.add(i + j * CurrentCpuF16::EPR)); ay[j] = CurrentCpuF16::load(b_row.add(i + j * CurrentCpuF16::EPR)); sum[j] = CurrentCpuF16::vec_fma(sum[j], ax[j], ay[j]); } } CurrentCpuF16::vec_reduce(sum, &mut sumf); // leftovers for i in np..k { sumf += (*a_row.add(i)).to_f32() * (*b_row.add(i)).to_f32(); } *c = sumf; } #[cfg(not(target_feature = "avx"))] #[inline(always)] pub(crate) unsafe fn vec_dot_f16(a_row: *const f16, b_row: *const f16, c: *mut f32, k: usize) { // leftovers let mut sum = 0.0; for i in 0..k { sum += (*a_row.add(i)).to_f32() * (*b_row.add(i)).to_f32(); } *c = sum; }
0
0
hf_public_repos/candle/candle-core/src
hf_public_repos/candle/candle-core/src/cpu/neon.rs
use super::Cpu; #[cfg(target_arch = "arm")] use core::arch::arm::*; #[cfg(target_arch = "aarch64")] use core::arch::aarch64::*; pub struct CurrentCpu {} const STEP: usize = 16; const EPR: usize = 4; const ARR: usize = STEP / EPR; impl CurrentCpu { #[cfg(target_arch = "aarch64")] unsafe fn reduce_one(x: float32x4_t) -> f32 { vaddvq_f32(x) } #[cfg(target_arch = "arm")] unsafe fn reduce_one(x: float32x4_t) -> f32 { vgetq_lane_f32(x, 0) + vgetq_lane_f32(x, 1) + vgetq_lane_f32(x, 2) + vgetq_lane_f32(x, 3) } } impl Cpu<ARR> for CurrentCpu { type Unit = float32x4_t; type Array = [float32x4_t; ARR]; const STEP: usize = STEP; const EPR: usize = EPR; fn n() -> usize { ARR } unsafe fn zero() -> Self::Unit { vdupq_n_f32(0.0) } unsafe fn from_f32(x: f32) -> Self::Unit { vdupq_n_f32(x) } unsafe fn zero_array() -> Self::Array { [Self::zero(); ARR] } unsafe fn load(mem_addr: *const f32) -> Self::Unit { vld1q_f32(mem_addr) } unsafe fn vec_add(a: Self::Unit, b: Self::Unit) -> Self::Unit { vaddq_f32(a, b) } unsafe fn vec_fma(a: Self::Unit, b: Self::Unit, c: Self::Unit) -> Self::Unit { vfmaq_f32(a, b, c) } unsafe fn vec_store(mem_addr: *mut f32, a: Self::Unit) { vst1q_f32(mem_addr, a); } unsafe fn vec_reduce(mut x: Self::Array, y: *mut f32) { for i in 0..ARR / 2 { x[2 * i] = vaddq_f32(x[2 * i], x[2 * i + 1]); } for i in 0..ARR / 4 { x[4 * i] = vaddq_f32(x[4 * i], x[4 * i + 2]); } *y = Self::reduce_one(x[0]); } }
1
0
hf_public_repos/candle/candle-core/src
hf_public_repos/candle/candle-core/src/cpu/erf.rs
#![allow(clippy::excessive_precision)] // Code taken from https://github.com/statrs-dev/statrs //! Provides the [error](https://en.wikipedia.org/wiki/Error_function) and //! related functions mod evaluate { //! Provides functions that don't have a numerical solution and must //! be solved computationally (e.g. evaluation of a polynomial) /// evaluates a polynomial at `z` where `coeff` are the coeffecients /// to a polynomial of order `k` where `k` is the length of `coeff` and the /// coeffecient /// to the `k`th power is the `k`th element in coeff. E.g. [3,-1,2] equates to /// `2z^2 - z + 3` /// /// # Remarks /// /// Returns 0 for a 0 length coefficient slice pub fn polynomial(z: f64, coeff: &[f64]) -> f64 { let n = coeff.len(); if n == 0 { return 0.0; } let mut sum = *coeff.last().unwrap(); for c in coeff[0..n - 1].iter().rev() { sum = *c + z * sum; } sum } } use std::f64; /// `erf` calculates the error function at `x`. pub fn erf(x: f64) -> f64 { if x.is_nan() { f64::NAN } else if x >= 0.0 && x.is_infinite() { 1.0 } else if x <= 0.0 && x.is_infinite() { -1.0 } else if x == 0. { 0.0 } else { erf_impl(x, false) } } /// `erf_inv` calculates the inverse error function /// at `x`. pub fn erf_inv(x: f64) -> f64 { if x == 0.0 { 0.0 } else if x >= 1.0 { f64::INFINITY } else if x <= -1.0 { f64::NEG_INFINITY } else if x < 0.0 { erf_inv_impl(-x, 1.0 + x, -1.0) } else { erf_inv_impl(x, 1.0 - x, 1.0) } } /// `erfc` calculates the complementary error function /// at `x`. pub fn erfc(x: f64) -> f64 { if x.is_nan() { f64::NAN } else if x == f64::INFINITY { 0.0 } else if x == f64::NEG_INFINITY { 2.0 } else { erf_impl(x, true) } } /// `erfc_inv` calculates the complementary inverse /// error function at `x`. pub fn erfc_inv(x: f64) -> f64 { if x <= 0.0 { f64::INFINITY } else if x >= 2.0 { f64::NEG_INFINITY } else if x > 1.0 { erf_inv_impl(-1.0 + x, 2.0 - x, -1.0) } else { erf_inv_impl(1.0 - x, x, 1.0) } } // ********************************************************** // ********** Coefficients for erf_impl polynomial ********** // ********************************************************** /// Polynomial coefficients for a numerator of `erf_impl` /// in the interval [1e-10, 0.5]. const ERF_IMPL_AN: &[f64] = &[ 0.00337916709551257388990745, -0.00073695653048167948530905, -0.374732337392919607868241, 0.0817442448733587196071743, -0.0421089319936548595203468, 0.0070165709512095756344528, -0.00495091255982435110337458, 0.000871646599037922480317225, ]; /// Polynomial coefficients for a denominator of `erf_impl` /// in the interval [1e-10, 0.5] const ERF_IMPL_AD: &[f64] = &[ 1.0, -0.218088218087924645390535, 0.412542972725442099083918, -0.0841891147873106755410271, 0.0655338856400241519690695, -0.0120019604454941768171266, 0.00408165558926174048329689, -0.000615900721557769691924509, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [0.5, 0.75]. const ERF_IMPL_BN: &[f64] = &[ -0.0361790390718262471360258, 0.292251883444882683221149, 0.281447041797604512774415, 0.125610208862766947294894, 0.0274135028268930549240776, 0.00250839672168065762786937, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [0.5, 0.75]. const ERF_IMPL_BD: &[f64] = &[ 1.0, 1.8545005897903486499845, 1.43575803037831418074962, 0.582827658753036572454135, 0.124810476932949746447682, 0.0113724176546353285778481, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [0.75, 1.25]. 
const ERF_IMPL_CN: &[f64] = &[ -0.0397876892611136856954425, 0.153165212467878293257683, 0.191260295600936245503129, 0.10276327061989304213645, 0.029637090615738836726027, 0.0046093486780275489468812, 0.000307607820348680180548455, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [0.75, 1.25]. const ERF_IMPL_CD: &[f64] = &[ 1.0, 1.95520072987627704987886, 1.64762317199384860109595, 0.768238607022126250082483, 0.209793185936509782784315, 0.0319569316899913392596356, 0.00213363160895785378615014, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [1.25, 2.25]. const ERF_IMPL_DN: &[f64] = &[ -0.0300838560557949717328341, 0.0538578829844454508530552, 0.0726211541651914182692959, 0.0367628469888049348429018, 0.00964629015572527529605267, 0.00133453480075291076745275, 0.778087599782504251917881e-4, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [1.25, 2.25]. const ERF_IMPL_DD: &[f64] = &[ 1.0, 1.75967098147167528287343, 1.32883571437961120556307, 0.552528596508757581287907, 0.133793056941332861912279, 0.0179509645176280768640766, 0.00104712440019937356634038, -0.106640381820357337177643e-7, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [2.25, 3.5]. const ERF_IMPL_EN: &[f64] = &[ -0.0117907570137227847827732, 0.014262132090538809896674, 0.0202234435902960820020765, 0.00930668299990432009042239, 0.00213357802422065994322516, 0.00025022987386460102395382, 0.120534912219588189822126e-4, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [2.25, 3.5]. const ERF_IMPL_ED: &[f64] = &[ 1.0, 1.50376225203620482047419, 0.965397786204462896346934, 0.339265230476796681555511, 0.0689740649541569716897427, 0.00771060262491768307365526, 0.000371421101531069302990367, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [3.5, 5.25]. const ERF_IMPL_FN: &[f64] = &[ -0.00546954795538729307482955, 0.00404190278731707110245394, 0.0054963369553161170521356, 0.00212616472603945399437862, 0.000394984014495083900689956, 0.365565477064442377259271e-4, 0.135485897109932323253786e-5, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [3.5, 5.25]. const ERF_IMPL_FD: &[f64] = &[ 1.0, 1.21019697773630784832251, 0.620914668221143886601045, 0.173038430661142762569515, 0.0276550813773432047594539, 0.00240625974424309709745382, 0.891811817251336577241006e-4, -0.465528836283382684461025e-11, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [5.25, 8]. const ERF_IMPL_GN: &[f64] = &[ -0.00270722535905778347999196, 0.0013187563425029400461378, 0.00119925933261002333923989, 0.00027849619811344664248235, 0.267822988218331849989363e-4, 0.923043672315028197865066e-6, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [5.25, 8]. const ERF_IMPL_GD: &[f64] = &[ 1.0, 0.814632808543141591118279, 0.268901665856299542168425, 0.0449877216103041118694989, 0.00381759663320248459168994, 0.000131571897888596914350697, 0.404815359675764138445257e-11, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [8, 11.5]. const ERF_IMPL_HN: &[f64] = &[ -0.00109946720691742196814323, 0.000406425442750422675169153, 0.000274499489416900707787024, 0.465293770646659383436343e-4, 0.320955425395767463401993e-5, 0.778286018145020892261936e-7, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [8, 11.5]. 
const ERF_IMPL_HD: &[f64] = &[ 1.0, 0.588173710611846046373373, 0.139363331289409746077541, 0.0166329340417083678763028, 0.00100023921310234908642639, 0.24254837521587225125068e-4, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [11.5, 17]. const ERF_IMPL_IN: &[f64] = &[ -0.00056907993601094962855594, 0.000169498540373762264416984, 0.518472354581100890120501e-4, 0.382819312231928859704678e-5, 0.824989931281894431781794e-7, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [11.5, 17]. const ERF_IMPL_ID: &[f64] = &[ 1.0, 0.339637250051139347430323, 0.043472647870310663055044, 0.00248549335224637114641629, 0.535633305337152900549536e-4, -0.117490944405459578783846e-12, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [17, 24]. const ERF_IMPL_JN: &[f64] = &[ -0.000241313599483991337479091, 0.574224975202501512365975e-4, 0.115998962927383778460557e-4, 0.581762134402593739370875e-6, 0.853971555085673614607418e-8, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [17, 24]. const ERF_IMPL_JD: &[f64] = &[ 1.0, 0.233044138299687841018015, 0.0204186940546440312625597, 0.000797185647564398289151125, 0.117019281670172327758019e-4, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [24, 38]. const ERF_IMPL_KN: &[f64] = &[ -0.000146674699277760365803642, 0.162666552112280519955647e-4, 0.269116248509165239294897e-5, 0.979584479468091935086972e-7, 0.101994647625723465722285e-8, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [24, 38]. const ERF_IMPL_KD: &[f64] = &[ 1.0, 0.165907812944847226546036, 0.0103361716191505884359634, 0.000286593026373868366935721, 0.298401570840900340874568e-5, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [38, 60]. const ERF_IMPL_LN: &[f64] = &[ -0.583905797629771786720406e-4, 0.412510325105496173512992e-5, 0.431790922420250949096906e-6, 0.993365155590013193345569e-8, 0.653480510020104699270084e-10, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [38, 60]. const ERF_IMPL_LD: &[f64] = &[ 1.0, 0.105077086072039915406159, 0.00414278428675475620830226, 0.726338754644523769144108e-4, 0.477818471047398785369849e-6, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [60, 85]. const ERF_IMPL_MN: &[f64] = &[ -0.196457797609229579459841e-4, 0.157243887666800692441195e-5, 0.543902511192700878690335e-7, 0.317472492369117710852685e-9, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [60, 85]. const ERF_IMPL_MD: &[f64] = &[ 1.0, 0.052803989240957632204885, 0.000926876069151753290378112, 0.541011723226630257077328e-5, 0.535093845803642394908747e-15, ]; /// Polynomial coefficients for a numerator in `erf_impl` /// in the interval [85, 110]. const ERF_IMPL_NN: &[f64] = &[ -0.789224703978722689089794e-5, 0.622088451660986955124162e-6, 0.145728445676882396797184e-7, 0.603715505542715364529243e-10, ]; /// Polynomial coefficients for a denominator in `erf_impl` /// in the interval [85, 110]. 
const ERF_IMPL_ND: &[f64] = &[ 1.0, 0.0375328846356293715248719, 0.000467919535974625308126054, 0.193847039275845656900547e-5, ]; // ********************************************************** // ********** Coefficients for erf_inv_impl polynomial ****** // ********************************************************** /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0, 0.5]. const ERF_INV_IMPL_AN: &[f64] = &[ -0.000508781949658280665617, -0.00836874819741736770379, 0.0334806625409744615033, -0.0126926147662974029034, -0.0365637971411762664006, 0.0219878681111168899165, 0.00822687874676915743155, -0.00538772965071242932965, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0, 0.5]. const ERF_INV_IMPL_AD: &[f64] = &[ 1.0, -0.970005043303290640362, -1.56574558234175846809, 1.56221558398423026363, 0.662328840472002992063, -0.71228902341542847553, -0.0527396382340099713954, 0.0795283687341571680018, -0.00233393759374190016776, 0.000886216390456424707504, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.5, 0.75]. const ERF_INV_IMPL_BN: &[f64] = &[ -0.202433508355938759655, 0.105264680699391713268, 8.37050328343119927838, 17.6447298408374015486, -18.8510648058714251895, -44.6382324441786960818, 17.445385985570866523, 21.1294655448340526258, -3.67192254707729348546, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.5, 0.75]. const ERF_INV_IMPL_BD: &[f64] = &[ 1.0, 6.24264124854247537712, 3.9713437953343869095, -28.6608180499800029974, -20.1432634680485188801, 48.5609213108739935468, 10.8268667355460159008, -22.6436933413139721736, 1.72114765761200282724, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.75, 1] with x less than 3. const ERF_INV_IMPL_CN: &[f64] = &[ -0.131102781679951906451, -0.163794047193317060787, 0.117030156341995252019, 0.387079738972604337464, 0.337785538912035898924, 0.142869534408157156766, 0.0290157910005329060432, 0.00214558995388805277169, -0.679465575181126350155e-6, 0.285225331782217055858e-7, -0.681149956853776992068e-9, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.75, 1] with x less than 3. const ERF_INV_IMPL_CD: &[f64] = &[ 1.0, 3.46625407242567245975, 5.38168345707006855425, 4.77846592945843778382, 2.59301921623620271374, 0.848854343457902036425, 0.152264338295331783612, 0.01105924229346489121, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 3 and 6. const ERF_INV_IMPL_DN: &[f64] = &[ -0.0350353787183177984712, -0.00222426529213447927281, 0.0185573306514231072324, 0.00950804701325919603619, 0.00187123492819559223345, 0.000157544617424960554631, 0.460469890584317994083e-5, -0.230404776911882601748e-9, 0.266339227425782031962e-11, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 3 and 6. const ERF_INV_IMPL_DD: &[f64] = &[ 1.0, 1.3653349817554063097, 0.762059164553623404043, 0.220091105764131249824, 0.0341589143670947727934, 0.00263861676657015992959, 0.764675292302794483503e-4, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 6 and 18. 
const ERF_INV_IMPL_EN: &[f64] = &[ -0.0167431005076633737133, -0.00112951438745580278863, 0.00105628862152492910091, 0.000209386317487588078668, 0.149624783758342370182e-4, 0.449696789927706453732e-6, 0.462596163522878599135e-8, -0.281128735628831791805e-13, 0.99055709973310326855e-16, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 6 and 18. const ERF_INV_IMPL_ED: &[f64] = &[ 1.0, 0.591429344886417493481, 0.138151865749083321638, 0.0160746087093676504695, 0.000964011807005165528527, 0.275335474764726041141e-4, 0.282243172016108031869e-6, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 18 and 44. const ERF_INV_IMPL_FN: &[f64] = &[ -0.0024978212791898131227, -0.779190719229053954292e-5, 0.254723037413027451751e-4, 0.162397777342510920873e-5, 0.396341011304801168516e-7, 0.411632831190944208473e-9, 0.145596286718675035587e-11, -0.116765012397184275695e-17, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.75, 1] with x between 18 and 44. const ERF_INV_IMPL_FD: &[f64] = &[ 1.0, 0.207123112214422517181, 0.0169410838120975906478, 0.000690538265622684595676, 0.145007359818232637924e-4, 0.144437756628144157666e-6, 0.509761276599778486139e-9, ]; /// Polynomial coefficients for a numerator of `erf_inv_impl` /// in the interval [0.75, 1] with x greater than 44. const ERF_INV_IMPL_GN: &[f64] = &[ -0.000539042911019078575891, -0.28398759004727721098e-6, 0.899465114892291446442e-6, 0.229345859265920864296e-7, 0.225561444863500149219e-9, 0.947846627503022684216e-12, 0.135880130108924861008e-14, -0.348890393399948882918e-21, ]; /// Polynomial coefficients for a denominator of `erf_inv_impl` /// in the interval [0.75, 1] with x greater than 44. const ERF_INV_IMPL_GD: &[f64] = &[ 1.0, 0.0845746234001899436914, 0.00282092984726264681981, 0.468292921940894236786e-4, 0.399968812193862100054e-6, 0.161809290887904476097e-8, 0.231558608310259605225e-11, ]; /// `erf_impl` computes the error function at `z`. 
/// If `inv` is true, `1 - erf` is calculated as opposed to `erf` fn erf_impl(z: f64, inv: bool) -> f64 { if z < 0.0 { if !inv { return -erf_impl(-z, false); } if z < -0.5 { return 2.0 - erf_impl(-z, true); } return 1.0 + erf_impl(-z, false); } let result = if z < 0.5 { if z < 1e-10 { z * 1.125 + z * 0.003379167095512573896158903121545171688 } else { z * 1.125 + z * evaluate::polynomial(z, ERF_IMPL_AN) / evaluate::polynomial(z, ERF_IMPL_AD) } } else if z < 110.0 { let (r, b) = if z < 0.75 { ( evaluate::polynomial(z - 0.5, ERF_IMPL_BN) / evaluate::polynomial(z - 0.5, ERF_IMPL_BD), 0.3440242112, ) } else if z < 1.25 { ( evaluate::polynomial(z - 0.75, ERF_IMPL_CN) / evaluate::polynomial(z - 0.75, ERF_IMPL_CD), 0.419990927, ) } else if z < 2.25 { ( evaluate::polynomial(z - 1.25, ERF_IMPL_DN) / evaluate::polynomial(z - 1.25, ERF_IMPL_DD), 0.4898625016, ) } else if z < 3.5 { ( evaluate::polynomial(z - 2.25, ERF_IMPL_EN) / evaluate::polynomial(z - 2.25, ERF_IMPL_ED), 0.5317370892, ) } else if z < 5.25 { ( evaluate::polynomial(z - 3.5, ERF_IMPL_FN) / evaluate::polynomial(z - 3.5, ERF_IMPL_FD), 0.5489973426, ) } else if z < 8.0 { ( evaluate::polynomial(z - 5.25, ERF_IMPL_GN) / evaluate::polynomial(z - 5.25, ERF_IMPL_GD), 0.5571740866, ) } else if z < 11.5 { ( evaluate::polynomial(z - 8.0, ERF_IMPL_HN) / evaluate::polynomial(z - 8.0, ERF_IMPL_HD), 0.5609807968, ) } else if z < 17.0 { ( evaluate::polynomial(z - 11.5, ERF_IMPL_IN) / evaluate::polynomial(z - 11.5, ERF_IMPL_ID), 0.5626493692, ) } else if z < 24.0 { ( evaluate::polynomial(z - 17.0, ERF_IMPL_JN) / evaluate::polynomial(z - 17.0, ERF_IMPL_JD), 0.5634598136, ) } else if z < 38.0 { ( evaluate::polynomial(z - 24.0, ERF_IMPL_KN) / evaluate::polynomial(z - 24.0, ERF_IMPL_KD), 0.5638477802, ) } else if z < 60.0 { ( evaluate::polynomial(z - 38.0, ERF_IMPL_LN) / evaluate::polynomial(z - 38.0, ERF_IMPL_LD), 0.5640528202, ) } else if z < 85.0 { ( evaluate::polynomial(z - 60.0, ERF_IMPL_MN) / evaluate::polynomial(z - 60.0, ERF_IMPL_MD), 0.5641309023, ) } else { ( evaluate::polynomial(z - 85.0, ERF_IMPL_NN) / evaluate::polynomial(z - 85.0, ERF_IMPL_ND), 0.5641584396, ) }; let g = (-z * z).exp() / z; g * b + g * r } else { 0.0 }; if inv && z >= 0.5 { result } else if z >= 0.5 || inv { 1.0 - result } else { result } } // `erf_inv_impl` computes the inverse error function where // `p`,`q`, and `s` are the first, second, and third intermediate // parameters respectively fn erf_inv_impl(p: f64, q: f64, s: f64) -> f64 { let result = if p <= 0.5 { let y = 0.0891314744949340820313; let g = p * (p + 10.0); let r = evaluate::polynomial(p, ERF_INV_IMPL_AN) / evaluate::polynomial(p, ERF_INV_IMPL_AD); g * y + g * r } else if q >= 0.25 { let y = 2.249481201171875; let g = (-2.0 * q.ln()).sqrt(); let xs = q - 0.25; let r = evaluate::polynomial(xs, ERF_INV_IMPL_BN) / evaluate::polynomial(xs, ERF_INV_IMPL_BD); g / (y + r) } else { let x = (-q.ln()).sqrt(); if x < 3.0 { let y = 0.807220458984375; let xs = x - 1.125; let r = evaluate::polynomial(xs, ERF_INV_IMPL_CN) / evaluate::polynomial(xs, ERF_INV_IMPL_CD); y * x + r * x } else if x < 6.0 { let y = 0.93995571136474609375; let xs = x - 3.0; let r = evaluate::polynomial(xs, ERF_INV_IMPL_DN) / evaluate::polynomial(xs, ERF_INV_IMPL_DD); y * x + r * x } else if x < 18.0 { let y = 0.98362827301025390625; let xs = x - 6.0; let r = evaluate::polynomial(xs, ERF_INV_IMPL_EN) / evaluate::polynomial(xs, ERF_INV_IMPL_ED); y * x + r * x } else if x < 44.0 { let y = 0.99714565277099609375; let xs = x - 18.0; let r = 
evaluate::polynomial(xs, ERF_INV_IMPL_FN) / evaluate::polynomial(xs, ERF_INV_IMPL_FD); y * x + r * x } else { let y = 0.99941349029541015625; let xs = x - 44.0; let r = evaluate::polynomial(xs, ERF_INV_IMPL_GN) / evaluate::polynomial(xs, ERF_INV_IMPL_GD); y * x + r * x } }; s * result }
2
0
hf_public_repos/candle/candle-core/src
hf_public_repos/candle/candle-core/src/cpu/kernels.rs
pub trait VecOps: num_traits::NumAssign + Copy {
    fn min(self, rhs: Self) -> Self;
    fn max(self, rhs: Self) -> Self;

    /// Dot-product of two vectors.
    ///
    /// # Safety
    ///
    /// The length of `lhs` and `rhs` have to be at least `len`. `res` has to point to a valid
    /// element.
    #[inline(always)]
    unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
        *res = Self::zero();
        for i in 0..len {
            *res += *lhs.add(i) * *rhs.add(i)
        }
    }

    /// Sum of all elements in a vector.
    ///
    /// # Safety
    ///
    /// The length of `xs` must be at least `len`. `res` has to point to a valid
    /// element.
    #[inline(always)]
    unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) {
        *res = Self::zero();
        for i in 0..len {
            *res += *xs.add(i)
        }
    }

    /// Maximum element in a non-empty vector.
    ///
    /// # Safety
    ///
    /// The length of `xs` must be at least `len` and positive. `res` has to point to a valid
    /// element.
    #[inline(always)]
    unsafe fn vec_reduce_max(xs: *const Self, res: *mut Self, len: usize) {
        *res = *xs;
        for i in 1..len {
            *res = (*res).max(*xs.add(i))
        }
    }

    /// Minimum element in a non-empty vector.
    ///
    /// # Safety
    ///
    /// The length of `xs` must be at least `len` and positive. `res` has to point to a valid
    /// element.
    #[inline(always)]
    unsafe fn vec_reduce_min(xs: *const Self, res: *mut Self, len: usize) {
        *res = *xs;
        for i in 1..len {
            *res = (*res).min(*xs.add(i))
        }
    }
}

impl VecOps for f32 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        Self::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        Self::max(self, other)
    }

    #[inline(always)]
    unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
        super::vec_dot_f32(lhs, rhs, res, len)
    }

    #[inline(always)]
    unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) {
        super::vec_sum(xs, res, len)
    }
}

impl VecOps for half::f16 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        Self::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        Self::max(self, other)
    }

    #[inline(always)]
    unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
        let mut res_f32 = 0f32;
        super::vec_dot_f16(lhs, rhs, &mut res_f32, len);
        *res = half::f16::from_f32(res_f32);
    }
}

impl VecOps for f64 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        Self::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        Self::max(self, other)
    }
}

impl VecOps for half::bf16 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        Self::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        Self::max(self, other)
    }
}

impl VecOps for u8 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        <Self as Ord>::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        <Self as Ord>::max(self, other)
    }
}

impl VecOps for u32 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        <Self as Ord>::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        <Self as Ord>::max(self, other)
    }
}

impl VecOps for i64 {
    #[inline(always)]
    fn min(self, other: Self) -> Self {
        <Self as Ord>::min(self, other)
    }

    #[inline(always)]
    fn max(self, other: Self) -> Self {
        <Self as Ord>::max(self, other)
    }
}

#[inline(always)]
pub fn par_for_each(n_threads: usize, func: impl Fn(usize) + Send + Sync) {
    if n_threads == 1 {
        func(0)
    } else {
        rayon::scope(|s| {
            for thread_idx in 0..n_threads {
                let func = &func;
                s.spawn(move |_| func(thread_idx));
            }
        })
    }
}

#[inline(always)]
pub fn par_range(lo: usize, up: usize, n_threads: usize, func: impl Fn(usize) + Send + Sync) {
    if n_threads == 1 {
        for i in lo..up {
            func(i)
        }
    } else {
        rayon::scope(|s| {
            for thread_idx in 0..n_threads {
                let func = &func;
                s.spawn(move |_| {
                    for i in (thread_idx..up).step_by(n_threads) {
                        func(i)
                    }
                });
            }
        })
    }
}
3
0
hf_public_repos/candle/candle-core/src
hf_public_repos/candle/candle-core/src/cpu/simd128.rs
use super::Cpu;
use core::arch::wasm32::*;

pub struct CurrentCpu {}

const STEP: usize = 16;
const EPR: usize = 4;
const ARR: usize = STEP / EPR;

impl Cpu<ARR> for CurrentCpu {
    type Unit = v128;
    type Array = [v128; ARR];

    const STEP: usize = STEP;
    const EPR: usize = EPR;

    fn n() -> usize {
        ARR
    }

    unsafe fn zero() -> Self::Unit {
        f32x4_splat(0.0)
    }

    unsafe fn zero_array() -> Self::Array {
        [Self::zero(); ARR]
    }

    unsafe fn from_f32(v: f32) -> Self::Unit {
        f32x4_splat(v)
    }

    unsafe fn load(mem_addr: *const f32) -> Self::Unit {
        v128_load(mem_addr as *mut v128)
    }

    unsafe fn vec_add(a: Self::Unit, b: Self::Unit) -> Self::Unit {
        f32x4_add(a, b)
    }

    unsafe fn vec_fma(a: Self::Unit, b: Self::Unit, c: Self::Unit) -> Self::Unit {
        f32x4_add(f32x4_mul(b, c), a)
    }

    unsafe fn vec_store(mem_addr: *mut f32, a: Self::Unit) {
        v128_store(mem_addr as *mut v128, a);
    }

    unsafe fn vec_reduce(mut x: Self::Array, y: *mut f32) {
        for i in 0..ARR / 2 {
            x[2 * i] = f32x4_add(x[2 * i], x[2 * i + 1]);
        }
        for i in 0..ARR / 4 {
            x[4 * i] = f32x4_add(x[4 * i], x[4 * i + 2]);
        }
        for i in 0..ARR / 8 {
            x[8 * i] = f32x4_add(x[8 * i], x[8 * i + 4]);
        }
        *y = f32x4_extract_lane::<0>(x[0])
            + f32x4_extract_lane::<1>(x[0])
            + f32x4_extract_lane::<2>(x[0])
            + f32x4_extract_lane::<3>(x[0]);
    }
}
4
0
hf_public_repos/candle/candle-core
hf_public_repos/candle/candle-core/benches/bench_main.rs
mod benchmarks;

use criterion::criterion_main;

criterion_main!(
    benchmarks::affine::benches,
    benchmarks::matmul::benches,
    benchmarks::random::benches,
    benchmarks::where_cond::benches,
    benchmarks::conv_transpose2d::benches,
    benchmarks::qmatmul::benches,
    benchmarks::unary::benches
);
5
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/affine.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;

fn run(a: &Tensor) {
    a.affine(12.34, 56.78).unwrap();
}

fn run_affine_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) {
    let b = 1;
    let m = 1024;
    let k = 1024;

    let tensor = Tensor::zeros((b, m, k), dtype, device).unwrap();

    let flops = b * m * k * dtype.size_in_bytes();

    let mut group = c.benchmark_group(device.bench_name(name));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                run(black_box(&tensor));
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();
}

fn criterion_benchmark(c: &mut Criterion) {
    let handler = BenchDeviceHandler::new().unwrap();
    for device in handler.devices {
        run_affine_benchmark(c, &device, DType::F32, "affine_f32");
        run_affine_benchmark(c, &device, DType::F16, "affine_f16");
        run_affine_benchmark(c, &device, DType::BF16, "affine_bf16");
    }
}

criterion_group!(benches, criterion_benchmark);
6
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/where_cond.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;

fn run(a: &Tensor, b: &Tensor, c: &Tensor) {
    a.where_cond(b, c).unwrap();
}

const fn create_cond_arr<const N: usize>() -> [u8; N] {
    let mut arr = [0u8; N];
    let mut i = 0;
    while i < N {
        arr[i] = (i % 2) as u8;
        i += 1;
    }
    arr
}

const B: usize = 1;
const M: usize = 1024;
const K: usize = 1024;
const SIZE: usize = B * M * K;

const DATA: [u8; SIZE] = create_cond_arr::<SIZE>();

fn run_where_cond_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) {
    let tensor = Tensor::from_slice(DATA.as_slice(), (B, M, K), device).unwrap();
    let on_true = Tensor::ones((B, M, K), dtype, device).unwrap();
    let on_false = Tensor::zeros((B, M, K), dtype, device).unwrap();

    let elements = B * M * K;
    // E.g. 2 f32 tensors + 1 u8 tensor
    let flops = (2 * elements * dtype.size_in_bytes()) + elements;

    let mut group = c.benchmark_group(device.bench_name(name));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                run(
                    black_box(&tensor),
                    black_box(&on_true),
                    black_box(&on_false),
                );
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();
}

fn criterion_benchmark(c: &mut Criterion) {
    let device = BenchDeviceHandler::new().unwrap();
    for d in device.devices {
        run_where_cond_benchmark(c, &d, DType::F32, "where_cond_f32");
        run_where_cond_benchmark(c, &d, DType::BF16, "where_cond_bf16");
        run_where_cond_benchmark(c, &d, DType::F16, "where_cond_f16");
    }
}

criterion_group!(benches, criterion_benchmark);
7
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/random.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;

fn rand_uniform(a: &Tensor) {
    a.rand_like(-1.0, 123.0).unwrap();
}

fn rand_normal(a: &Tensor) {
    a.randn_like(100.0, 15.0).unwrap();
}

fn run_random_bench(c: &mut Criterion, device: &Device) {
    let b = 1;
    let rows = 2048;
    let cols = 2048;

    let dtype = DType::F32;
    let tensor = Tensor::zeros((b, rows, cols), dtype, device).unwrap();

    let flops = b * rows * cols * dtype.size_in_bytes();

    let mut group = c.benchmark_group(device.bench_name("random_uniform"));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |benches| {
        benches.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                rand_uniform(black_box(&tensor));
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();

    let tensor = Tensor::zeros((b, rows, cols), dtype, device).unwrap();

    let mut group = c.benchmark_group(device.bench_name("random_normal"));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |benches| {
        benches.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                rand_normal(black_box(&tensor));
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();
}

fn criterion_benchmark(c: &mut Criterion) {
    let handler = BenchDeviceHandler::new().unwrap();
    for device in handler.devices {
        run_random_bench(c, &device);
    }
}

criterion_group!(benches, criterion_benchmark);
8
0
hf_public_repos/candle/candle-core/benches
hf_public_repos/candle/candle-core/benches/benchmarks/matmul.rs
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;

fn run(a: &Tensor, b: &Tensor) {
    a.matmul(&b.t().unwrap()).unwrap();
}

fn run_bench(c: &mut Criterion, device: &Device) {
    let b = 1;
    let m = 1;
    let n = 2048;
    let k = 2048;

    let dtype = DType::F32;
    let lhs = Tensor::zeros((b, m, k), dtype, device).unwrap();
    let rhs = Tensor::zeros((b, n, k), dtype, device).unwrap();

    let flops = b * m * n * k;

    let mut group = c.benchmark_group(device.bench_name("matmul"));
    group.throughput(Throughput::Bytes(flops as u64));
    group.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                run(black_box(&lhs), black_box(&rhs));
            }
            device.sync().unwrap();
            start.elapsed()
        })
    });
    group.finish();
}

fn criterion_benchmark(c: &mut Criterion) {
    let handler = BenchDeviceHandler::new().unwrap();
    for device in handler.devices {
        run_bench(c, &device);
    }
}

criterion_group!(benches, criterion_benchmark);
9
0
hf_public_repos/api-inference-community/docker_images/fasttext/app
hf_public_repos/api-inference-community/docker_images/fasttext/app/pipelines/text_classification.py
from typing import Dict, List

from app.pipelines import Pipeline
from huggingface_hub import HfApi


FASTTEXT_PREFIX_LENGTH = 9  # fasttext labels are formatted like "__label__eng_Latn"


class TextClassificationPipeline(Pipeline):
    def __init__(
        self,
        model_id: str,
    ):
        super().__init__(model_id)
        self.info = HfApi().model_info(repo_id=self.model_id)

    def __call__(self, inputs: str) -> List[Dict[str, float]]:
        """
        Args:
            inputs (:obj:`str`):
                a string containing some text
        Return:
            A :obj:`list`. The object returned should be a list of one list like
            [[{"label": "eng_Latn", "score": 0.99}]] containing:
                - "label": A string representing what the label/class is. There can be multiple labels.
                - "score": A score between 0 and 1 describing how confident the model is for this label/class.
        """
        if "language-identification" in self.info.tags:
            preds = self.model.predict(inputs, k=5)
            result = [
                {"label": label[FASTTEXT_PREFIX_LENGTH:], "score": prob}
                for label, prob in zip(preds[0], preds[1])
            ]
            return [result]

        if len(inputs.split()) > 1:
            raise ValueError("Expected input is a single word")

        preds = self.model.get_nearest_neighbors(inputs, k=5)
        result = []
        for distance, word in preds:
            result.append({"label": word, "score": distance})
        return [result]
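A quick, hand-rolled illustration of the response shape documented in the docstring above. The predictions tuple below is made up rather than produced by a real fasttext model, but it mirrors the `(labels, probabilities)` structure that `model.predict` returns in the language-identification branch.

```python
# Illustrative only: a fake fasttext-style prediction, no model download needed.
fake_preds = (("__label__eng_Latn", "__label__fra_Latn"), (0.98, 0.01))

prefix_length = 9  # len("__label__")
result = [
    {"label": label[prefix_length:], "score": prob}
    for label, prob in zip(fake_preds[0], fake_preds[1])
]
print([result])
# [[{'label': 'eng_Latn', 'score': 0.98}, {'label': 'fra_Latn', 'score': 0.01}]]
```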
0
0
hf_public_repos/api-inference-community/docker_images/fasttext
hf_public_repos/api-inference-community/docker_images/fasttext/tests/test_docker_build.py
import os
import subprocess
from unittest import TestCase


class cd:
    """Context manager for changing the current working directory"""

    def __init__(self, newPath):
        self.newPath = os.path.expanduser(newPath)

    def __enter__(self):
        self.savedPath = os.getcwd()
        os.chdir(self.newPath)

    def __exit__(self, etype, value, traceback):
        os.chdir(self.savedPath)


class DockerBuildTestCase(TestCase):
    def test_can_build_docker_image(self):
        with cd(os.path.dirname(os.path.dirname(__file__))):
            subprocess.check_output(["docker", "build", "."])
1
0
hf_public_repos/api-inference-community/docker_images/fasttext
hf_public_repos/api-inference-community/docker_images/fasttext/tests/test_api.py
import os
from typing import Dict, List
from unittest import TestCase, skipIf

from app.main import ALLOWED_TASKS, get_pipeline


# Must contain at least one example of each implemented pipeline
# Tests do not check the actual values of the model output, so small dummy
# models are recommended for faster tests.
TESTABLE_MODELS: Dict[str, List[str]] = {
    "text-classification": [
        "osanseviero/fasttext_nearest",
        "sheonhan/fasttext-language-identification",
    ],
    "feature-extraction": ["osanseviero/fasttext_embedding"],
}


ALL_TASKS = {
    "audio-classification",
    "audio-to-audio",
    "automatic-speech-recognition",
    "feature-extraction",
    "image-classification",
    "language-identification",
    "question-answering",
    "sentence-similarity",
    "speech-segmentation",
    "structured-data-classification",
    "text-to-speech",
    "token-classification",
}


class PipelineTestCase(TestCase):
    @skipIf(
        os.path.dirname(os.path.dirname(__file__)).endswith("common"),
        "common is a special case",
    )
    def test_has_at_least_one_task_enabled(self):
        self.assertGreater(
            len(ALLOWED_TASKS.keys()), 0, "You need to implement at least one task"
        )

    def test_unsupported_tasks(self):
        unsupported_tasks = ALL_TASKS - ALLOWED_TASKS.keys()
        for unsupported_task in unsupported_tasks:
            with self.subTest(msg=unsupported_task, task=unsupported_task):
                os.environ["TASK"] = unsupported_task
                os.environ["MODEL_ID"] = "XX"
                with self.assertRaises(EnvironmentError):
                    get_pipeline()
2
0
hf_public_repos/api-inference-community/docker_images/fasttext
hf_public_repos/api-inference-community/docker_images/fasttext/tests/test_api_feature_extraction.py
import json
import os
from unittest import TestCase, skipIf

from app.main import ALLOWED_TASKS
from starlette.testclient import TestClient
from tests.test_api import TESTABLE_MODELS


@skipIf(
    "feature-extraction" not in ALLOWED_TASKS,
    "feature-extraction not implemented",
)
class FeatureExtractionTestCase(TestCase):
    def setUp(self):
        # TESTABLE_MODELS maps each task to a list of model ids; use the first one.
        model_id = TESTABLE_MODELS["feature-extraction"][0]
        self.old_model_id = os.getenv("MODEL_ID")
        self.old_task = os.getenv("TASK")
        os.environ["MODEL_ID"] = model_id
        os.environ["TASK"] = "feature-extraction"

        from app.main import app

        self.app = app

    @classmethod
    def setUpClass(cls):
        from app.main import get_pipeline

        get_pipeline.cache_clear()

    def tearDown(self):
        if self.old_model_id is not None:
            os.environ["MODEL_ID"] = self.old_model_id
        else:
            del os.environ["MODEL_ID"]
        if self.old_task is not None:
            os.environ["TASK"] = self.old_task
        else:
            del os.environ["TASK"]

    def test_simple(self):
        inputs = "Hello, my name is John and I live in New York"

        with TestClient(self.app) as client:
            response = client.post("/", json={"inputs": inputs})

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), list)
        self.assertEqual({type(item) for item in content}, {float})

        with TestClient(self.app) as client:
            response = client.post("/", json=inputs)

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), list)
        self.assertEqual({type(item) for item in content}, {float})

    def test_malformed_sentence(self):
        with TestClient(self.app) as client:
            response = client.post("/", data=b"\xc3\x28")

        self.assertEqual(
            response.status_code,
            400,
        )
        self.assertEqual(
            response.content,
            b'{"error":"\'utf-8\' codec can\'t decode byte 0xc3 in position 0: invalid continuation byte"}',
        )
3
0
hf_public_repos/api-inference-community/docker_images/fasttext
hf_public_repos/api-inference-community/docker_images/fasttext/tests/test_api_text_classification.py
import json
import os
from unittest import TestCase, skipIf

from app.main import ALLOWED_TASKS
from parameterized import parameterized_class
from starlette.testclient import TestClient
from tests.test_api import TESTABLE_MODELS


@skipIf(
    "text-classification" not in ALLOWED_TASKS,
    "text-classification not implemented",
)
@parameterized_class(
    [{"model_id": model_id} for model_id in TESTABLE_MODELS["text-classification"]]
)
class TextClassificationTestCase(TestCase):
    def setUp(self):
        self.old_model_id = os.getenv("MODEL_ID")
        self.old_task = os.getenv("TASK")
        os.environ["MODEL_ID"] = self.model_id
        os.environ["TASK"] = "text-classification"

        from app.main import app

        self.app = app

    @classmethod
    def setUpClass(cls):
        from app.main import get_pipeline

        get_pipeline.cache_clear()

    def tearDown(self):
        if self.old_model_id is not None:
            os.environ["MODEL_ID"] = self.old_model_id
        else:
            del os.environ["MODEL_ID"]
        if self.old_task is not None:
            os.environ["TASK"] = self.old_task
        else:
            del os.environ["TASK"]

    def test_simple(self):
        inputs = "beautiful"

        with TestClient(self.app) as client:
            response = client.post("/", json={"inputs": inputs})

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), list)
        self.assertEqual(len(content), 1)
        self.assertEqual(type(content[0]), list)
        self.assertEqual(
            set(k for el in content[0] for k in el.keys()),
            {"label", "score"},
        )

        with TestClient(self.app) as client:
            response = client.post("/", json=inputs)

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), list)
        self.assertEqual(len(content), 1)
        self.assertEqual(type(content[0]), list)
        self.assertEqual(
            set(k for el in content[0] for k in el.keys()),
            {"label", "score"},
        )

    def test_malformed_question(self):
        with TestClient(self.app) as client:
            response = client.post("/", data=b"\xc3\x28")

        self.assertEqual(
            response.status_code,
            400,
        )
        self.assertEqual(
            response.content,
            b'{"error":"\'utf-8\' codec can\'t decode byte 0xc3 in position 0: invalid continuation byte"}',
        )

    def test_multiple_words(self):
        inputs = "this is great"

        # For "language-identification" subtask, fasttext can identify the language of a sentence
        # but when getting a word vector's nearest neighbors, only a single word is valid as an input
        expected_status_code = (
            200 if "language-identification" in self.model_id else 400
        )
        with TestClient(self.app) as client:
            response = client.post("/", json={"inputs": inputs})

        self.assertEqual(
            response.status_code,
            expected_status_code,
        )
4
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/speechbrain/requirements.txt
starlette==0.27.0
# TODO: Replace with the correct tag once the core PR is merged
api-inference-community==0.0.32
huggingface_hub>=0.7
transformers==4.30.0
git+https://github.com/speechbrain/[email protected]
https://github.com/kpu/kenlm/archive/master.zip
pygtrie
#Dummy.
5
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/speechbrain/Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.9
LABEL maintainer="me <[email protected]>"

# Add any system dependency here
# RUN apt-get update -y && apt-get install libXXX -y
RUN apt-get update -y && apt-get install ffmpeg -y
RUN pip install --no-cache-dir torch==2.0
COPY ./requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt
COPY ./prestart.sh /app/


# Most DL models are quite large in terms of memory, using workers is a HUGE
# slowdown because of the fork and GIL with python.
# Using multiple pods seems like a better default strategy.
# Feel free to override if it does not make sense for your library.
ARG max_workers=1
ENV MAX_WORKERS=$max_workers
ENV HUGGINGFACE_HUB_CACHE=/data

# Necessary on GPU environment docker.
# TIMEOUT env variable is used by nvcr.io/nvidia/pytorch:xx for another purpose,
# rendering TIMEOUT defined by uvicorn impossible to use correctly,
# so we rename it to UVICORN_TIMEOUT.
# UVICORN_TIMEOUT is a useful variable for very large models that take more
# than 30s (the default) to load in memory.
# If UVICORN_TIMEOUT is too low, uvicorn will simply never load, as it will
# kill workers all the time before they finish.
RUN sed -i 's/TIMEOUT/UVICORN_TIMEOUT/g' /gunicorn_conf.py
COPY ./app /app/app
6
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/speechbrain/prestart.sh
python app/main.py
7
0
hf_public_repos/api-inference-community/docker_images/speechbrain
hf_public_repos/api-inference-community/docker_images/speechbrain/app/main.py
import functools
import logging
import os
from typing import Dict, Type

from api_inference_community.routes import pipeline_route, status_ok
from app.pipelines import (
    AudioClassificationPipeline,
    AudioToAudioPipeline,
    AutomaticSpeechRecognitionPipeline,
    Pipeline,
    TextToSpeechPipeline,
    TextToTextPipeline,
)
from starlette.applications import Starlette
from starlette.routing import Route


TASK = os.getenv("TASK")
MODEL_ID = os.getenv("MODEL_ID")


logger = logging.getLogger(__name__)


# Add the allowed tasks
# Supported tasks are:
# - text-generation
# - text-classification
# - token-classification
# - translation
# - summarization
# - automatic-speech-recognition
# - ...
# For instance
# from app.pipelines import AutomaticSpeechRecognitionPipeline
# ALLOWED_TASKS = {"automatic-speech-recognition": AutomaticSpeechRecognitionPipeline}
# You can check the requirements and expectations of each pipelines in their respective
# directories. Implement directly within the directories.
ALLOWED_TASKS: Dict[str, Type[Pipeline]] = {
    "audio-classification": AudioClassificationPipeline,
    "audio-to-audio": AudioToAudioPipeline,
    "automatic-speech-recognition": AutomaticSpeechRecognitionPipeline,
    "text-to-speech": TextToSpeechPipeline,
    "text2text-generation": TextToTextPipeline,
}


@functools.lru_cache()
def get_pipeline() -> Pipeline:
    task = os.environ["TASK"]
    model_id = os.environ["MODEL_ID"]
    if task not in ALLOWED_TASKS:
        raise EnvironmentError(f"{task} is not a valid pipeline for model : {model_id}")
    return ALLOWED_TASKS[task](model_id)


routes = [
    Route("/{whatever:path}", status_ok),
    Route("/{whatever:path}", pipeline_route, methods=["POST"]),
]

app = Starlette(routes=routes)
if os.environ.get("DEBUG", "") == "1":
    from starlette.middleware.cors import CORSMiddleware

    app.add_middleware(
        CORSMiddleware, allow_origins=["*"], allow_headers=["*"], allow_methods=["*"]
    )


@app.on_event("startup")
async def startup_event():
    logger = logging.getLogger("uvicorn.access")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
    logger.handlers = [handler]

    # Link between `api-inference-community` and framework code.
    app.get_pipeline = get_pipeline
    try:
        get_pipeline()
    except Exception:
        # We can fail so we can show exception later.
        pass


if __name__ == "__main__":
    try:
        get_pipeline()
    except Exception:
        # We can fail so we can show exception later.
        pass
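A minimal sketch of how the routing above is exercised: the task and checkpoint are chosen purely through the `TASK` and `MODEL_ID` environment variables, and any task outside `ALLOWED_TASKS` is rejected before a model is ever loaded. The model id below is a placeholder, and the import assumes this runs inside the docker image where the `app` package is importable.

```python
import os

from app.main import ALLOWED_TASKS, get_pipeline

print(sorted(ALLOWED_TASKS))  # the task names this image accepts

# Requesting an unsupported task fails fast, without downloading anything.
os.environ["TASK"] = "text-classification"
os.environ["MODEL_ID"] = "some-user/some-speechbrain-model"  # placeholder id
try:
    get_pipeline()
except EnvironmentError as err:
    print(err)  # "text-classification is not a valid pipeline for model : ..."
```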
8
0
hf_public_repos/api-inference-community/docker_images/speechbrain
hf_public_repos/api-inference-community/docker_images/speechbrain/app/common.py
from enum import Enum

from huggingface_hub import HfApi


class ModelType(Enum):
    # audio-to-audio
    SEPFORMERSEPARATION = "SEPFORMERSEPARATION"
    SPECTRALMASKENHANCEMENT = "SPECTRALMASKENHANCEMENT"
    WAVEFORMENHANCEMENT = "WAVEFORMENHANCEMENT"
    # automatic-speech-recognition
    ENCODERASR = "ENCODERASR"
    ENCODERDECODERASR = "ENCODERDECODERASR"
    WHISPERASR = "WHISPERASR"
    # audio-classification
    ENCODERCLASSIFIER = "ENCODERCLASSIFIER"
    # text-to-speech
    TACOTRON2 = "TACOTRON2"
    HIFIGAN = "HIFIGAN"
    FASTSPEECH2 = "FASTSPEECH2"
    # text2text-generation
    GRAPHEMETOPHONEME = "GRAPHEMETOPHONEME"


def get_type(model_id, interface_type="speechbrain_interface"):
    info = HfApi().model_info(repo_id=model_id)
    if info.config:
        if "speechbrain" in info.config:
            if interface_type in info.config["speechbrain"]:
                return ModelType(info.config["speechbrain"][interface_type].upper())
            else:
                raise ValueError(f"{interface_type} not in config.json")
        else:
            raise ValueError("speechbrain_interface not in config.json")
    raise ValueError("no config.json in repository")


def get_vocoder_model_id(model_id):
    info = HfApi().model_info(repo_id=model_id)
    if info.config:
        if "speechbrain" in info.config:
            if "vocoder_model_id" in info.config["speechbrain"]:
                return info.config["speechbrain"]["vocoder_model_id"]
            else:
                raise ValueError("vocoder_model_id not in config.json")
        else:
            raise ValueError("speechbrain_interface not in config.json")
    raise ValueError("no config.json in repository")
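To make the two lookups above concrete, here is a small illustration of the `config.json` layout they expect. The dictionary is hand-written for the example (in practice it comes from `HfApi().model_info(...).config`), the vocoder id is only a sample value, and the import assumes the `app` package from this image is on the path.

```python
from app.common import ModelType

# Shape of the "speechbrain" block that get_type / get_vocoder_model_id read.
config = {
    "speechbrain": {
        "speechbrain_interface": "EncoderDecoderASR",
        "vocoder_model_id": "speechbrain/tts-hifigan-ljspeech",  # sample value
    }
}

interface = config["speechbrain"]["speechbrain_interface"]
print(ModelType(interface.upper()))  # ModelType.ENCODERDECODERASR
print(config["speechbrain"]["vocoder_model_id"])
```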
9