hf_public_repos/blog/mistral-coreml.md
--- title: "WWDC 24: Running Mistral 7B with Core ML" thumbnail: /blog/assets/mistral-coreml/thumbnail.png authors: - user: pcuenq - user: FL33TW00D-HF - user: reach-vb - user: osanseviero --- # WWDC 24: Running Mistral 7B with Core ML WWDC’ 24 is the moment Apple officially unveiled Apple Intelligence and reiterated their commitment to efficient, private, and on-device AI. During the keynote and the sessions that followed, they demonstrated Apple Intelligence, which powers a huge array of AI-enhanced features that show practical uses for everyday tasks. These are not \*AI-for-the-sake-of-AI\* shiny demos. These are time-saving, appropriate (and fun!) helpers that are deeply integrated with apps and the OS, that also offer developers a number of ways to include these features within their own apps. Apple Intelligence features can only work this well because of the vertically integrated software stack that harnesses Apple Silicon's capabilities to the fullest. Apple also offers a platform for developers to run models on-device, known as Core ML. This software stack allows you to run ML models across all 3 compute units (CPU, GPU & Neural Engine) available on Apple Silicon hardware. In this blog post, we’ll be exploring some of the best new Core ML features to replicate the Mistral 7B example Apple showcased in the WWDC’24 [Deploy machine learning and AI models on-device with Core ML](https://developer.apple.com/videos/play/wwdc2024/10161/) session, where they use a fork of [swift-transformers](https://github.com/huggingface/swift-transformers) to run a state-of-the-art LLM on a Mac. This is a high-quality model with more than 7 billion parameters that pushes the capabilities of consumer hardware today. You can also check out WWDC’24 [Bring your machine learning and AI models to Apple silicon](https://developer.apple.com/videos/play/wwdc2024/10159/) session, where part of the Mistral 7B conversion process is shown. Let’s see what steps to take to run it as efficiently as possible, and learn the new tools available in iOS 18 & macOS Sequoia. This is what we’ll be building today: <video controls title="Mistral 7B running with Core ML"> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/mistral-coreml/swift-chat.mp4" type="video/mp4"> Video: Mistral 7B running with Core ML. </video> ## TL;DR By the end of this blog post, you will have learnt all the new goodies accompanying the latest macOS release AND you will have successfully run a 7B parameter model using less than 4GB of memory on your Mac. Step 1: Clone the `preview` branch of the `swift-transformers` repo: git clone -b preview [`https://github.com/huggingface/swift-transformers`](https://github.com/huggingface/swift-transformers) Step 2: Download the converted Core ML models from [`this Hugging Face repo`](https://huggingface.co/apple/mistral-coreml) Step 3: Run inference using Swift: `swift run transformers "Best recommendations for a place to visit in Paris in August 2024:" --max-length 200 Mistral7B-CoreML/StatefulMistralInstructInt4.mlpackage` ## Best new Core ML features from WWDC’ 24 Here are some of the most impactful Core ML features from WWDC’ 24 we will use to run Mistral 7B on a Mac. ### Swift Tensor The first feature we want to highlight is an entirely new Swift type to work with ML tensors. These are multi-dimensional data structures every ML framework uses. 
Python developers working on ML are familiar with `numpy` arrays or `torch` tensors, which provide convenient, high-level interfaces to manipulate these large multi-dimensional matrices easily. Core ML already had multi-dimensional data types in the form of [MLMultiArray](https://developer.apple.com/documentation/coreml/mlmultiarray) and [MLShapedArray](https://developer.apple.com/documentation/coreml/mlshapedarray), but they were only meant for data storage and simple operations, like wrapping your data and sending it as input to a Core ML model, or unwrapping results from a Core ML model. *Manipulating* tensor data with these APIs is difficult: only a few primitive operations are provided, and you may have to write your own by accessing the underlying storage as an opaque pointer to number data. This is time-consuming and error-prone.

The new [`MLTensor`](https://developer.apple.com/documentation/coreml/MLTensor) type provides a high-level abstraction that mimics the ones available in Python frameworks, greatly simplifying working with tensor data in Swift.

Consider a language model like the one we want to port to Core ML. Language models take in an input sequence of tokens, and they output an estimation of the probabilities of all the tokens in the vocabulary, meaning that tokens with a high probability have a high chance of being plausible continuations of the input. The application’s job is to select the best next token to append to the sequence based on those probabilities. The `MLTensor` type makes it easy to handle these operations without custom code. [When we released swift-transformers](https://huggingface.co/blog/swift-coreml-llm), we wrote a lot of code (later extended by the community, thanks! ❤️) to help with input preparation (converting words to tokens) and output post-processing. For example, check out [our softmax operation](https://github.com/huggingface/swift-transformers/blob/main/Sources/TensorUtils/Math.swift#L103) using Accelerate. All this can be removed when using `MLTensor`, as `softmax` is provided out of the box!

### Stateful Buffers

Before WWDC 24, a Core ML model was essentially a pure stateless function: you provide inputs and it returns some outputs. However, sometimes you need to keep a state that depends on previous computations. The functional programming method for maintaining state is to add an additional input/output pair: based on your inputs and state, the model computes the outputs and the new state. There is nothing wrong with this approach, and in fact that’s the way high-performance frameworks like JAX work.

However, there are practical limitations: the stateful data needs to be sent to the model as an input and retrieved as an output every time you call the model. If the stateful data is large, then all this going back and forth increases overhead and slows things down. This is particularly important for LLMs because you have to run many iterations to generate a sequence. The performance bottleneck is usually your computer’s memory bandwidth (i.e., how fast you can move data to your GPU and back). Stateful models solve this problem by reserving a block of memory for state data and keeping it on the GPU, so you don’t have to send and receive it every time you use the model.
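To make the trade-off concrete, here is a minimal, PyTorch-flavoured sketch (purely illustrative, not Core ML or `MLTensor` code; all shapes are arbitrary) contrasting a cache that is passed in and out of every call with a pre-allocated buffer that is updated in place:

```python
import torch

# Stateless style: the cache is an explicit input/output and grows by concatenation,
# so every step allocates new tensors and copies data back and forth.
def attention_step_stateless(new_k, new_v, k_cache, v_cache):
    k_cache = torch.cat([k_cache, new_k], dim=-2)
    v_cache = torch.cat([v_cache, new_v], dim=-2)
    return k_cache, v_cache  # must be shipped back to the caller every time

# Stateful style: a buffer sized for the whole context is allocated once,
# and each step just writes the new entries into the right slot.
max_context, n_heads, head_dim = 2048, 8, 64
k_cache = torch.zeros(1, n_heads, max_context, head_dim)
v_cache = torch.zeros(1, n_heads, max_context, head_dim)

def attention_step_stateful(new_k, new_v, position):
    k_cache[:, :, position : position + new_k.shape[-2]] = new_k
    v_cache[:, :, position : position + new_v.shape[-2]] = new_v
```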
Stateful buffers were introduced [in this WWDC 24 session](https://developer.apple.com/videos/play/wwdc2024/10161/?time=510) using a toy example that is easy to understand but not representative of practical uses with big models such as LLMs. An LLM performance trick for transformers-based models is key-value caching (known as kv-caching). As shown in the following illustration, it avoids costly matrix multiplications in the crucial attention block by caching the result of operations performed in previous steps. We won’t go into details, but the takeaways are: the kv-cache dramatically increases performance, and it requires a large block of memory that is the perfect candidate for a stateful buffer. The [coremltools user guide](https://apple.github.io/coremltools/docs-guides/source/stateful-models.html) has been updated with a section about stateful models.

![stateful-buffer](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/mistral-coreml/stateful-buffer.png)

### New Quantization Techniques

In WWDC 23, we explored a very cool technique called palettization, and we showed how it could help bring text-to-image models, [such as Stable Diffusion](https://huggingface.co/blog/fast-diffusers-coreml), to Macs and iPhones. Whilst these techniques allow you to reduce model size considerably, if pushed too far the impact on quality is drastic. Bigger models suffer more from this, as the weight data has an extensive dynamic range, and trying to create a small lookup table (LUT) that captures all possible values becomes increasingly difficult. The solution introduced in WWDC 24 is to focus on a smaller portion of the data at a time, and create multiple lookup tables for different areas of the same tensor.

![quantization-algorithm](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/mistral-coreml/quantization-algorithm.png)

These methods (block-wise quantization) allow us to compress models to as low as 4-bit precision. Instead of using 4 bytes (the size of a `float32` number) to represent each model parameter, we can get away with half a byte (a nibble) for each. This is an 8-fold reduction in model size (minus some overhead to account for the block-wise quantization tables), or 4 times smaller when compared to `float16` precision.

### Multifunction Support

We won’t use this feature for this example, but we wanted to mention it here as it was introduced at WWDC 24, and we will be showcasing it in some upcoming work. Multifunction support essentially allows you to package LoRA adapters into generative models, so you can use the same model (with a small set of additional parameters, called adapters) for different tasks. LoRA is the preferred community technique for large-model fine-tuning. In diffusion models, for example, you can use LoRA to generate images with different styles, such as photorealistic or cartoonish, and we believe LoRA is part of the solution that powers Apple’s Genmoji implementation. For language models, LoRA adapters can be used to adapt a generic LLM to specific tasks or domains.

To read more about LoRA, you can check [this post](https://huggingface.co/blog/lora). To read more about Multifunction, you can check out the Apple coremltools user guide [here](https://apple.github.io/coremltools/docs-guides/source/multifunction-models.html).

## Converting Mistral 7B to Core ML

The single most important component for running a large language model efficiently is the kv-cache.
As mentioned above, this is a great candidate for [the new stateful model feature](https://apple.github.io/coremltools/docs-guides/source/stateful-models.html) released at WWDC 24. Models in the transformers library already use efficient attention implementations that rely heavily on kv-caching. However, the default implementations are optimized for Nvidia GPUs, and this hardware has a different set of constraints than Apple Silicon does. In the case of Core ML, we need to pre-allocate the full cache buffer beforehand and ensure that each time we call the model, we update the buffer in place. This avoids inefficient memory allocations and tensor concatenations, and is also a requirement for Core ML stateful buffers. To achieve this goal, we have to use a different attention implementation that considers these factors. This requires modifying the transformers modeling code for the Mistral architecture, and it’s done in [this fragment of code](https://github.com/huggingface/swift-transformers/blob/21b015691624ad103470370f0f255eb660579511/Examples/Mistral7B/export.py#L121).

Note: If you want to follow along and replicate the conversion (or convert another Mistral-based model, like a different fine-tune), you can use [this script](https://github.com/huggingface/swift-transformers/blob/preview/Examples/Mistral7B/export.py) to run all the conversion steps.

### Tracing & Conversion

The first step is to load the model. We’ll use the patched implementation with the in-place cache method.

```python
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.3"
torch_model = StatefulMistralForCausalLM(MODEL_ID)
torch_model.eval()
```

Before running Core ML conversion, we need to trace the model with example inputs. This process records the tensor operations performed on those inputs, and the traced graph will be translated to Core ML operations during conversion. We use sample inputs to trace the model; we don’t need real data.

```python
input_ids = torch.zeros((1, 2), dtype=torch.int32)
causal_mask = torch.zeros((1, 1, 2, 5), dtype=torch.float32)
traced_model = torch.jit.trace(torch_model, [input_ids, causal_mask])
```

The input to a language model is a sequence of tokens of varying length. We’ll allow the input to grow from a single token to a maximum context length of 2048. We can use [coremltools](https://github.com/apple/coremltools) range dimensions to specify these bounds.

```python
query_length = ct.RangeDim(lower_bound=1, upper_bound=2048, default=1)
end_step_dim = ct.RangeDim(lower_bound=1, upper_bound=2048, default=1)
inputs = [
    ct.TensorType(shape=(1, query_length), dtype=np.int32, name="inputIds"),
    ct.TensorType(shape=(1, 1, query_length, end_step_dim), dtype=np.float16, name="causalMask"),
]
outputs = [ct.TensorType(dtype=np.float16, name="logits")]
```

In addition to the sequence tokens (called `inputIds` in the example above), there’s another input called `causalMask`, which specifies the tokens the model needs to pay attention to. This is mostly used when generating multiple sequences at the same time using batching. Check out how these inputs are used in an [example runner here](https://github.com/huggingface/swift-transformers/blob/21b015691624ad103470370f0f255eb660579511/Examples/Mistral7B/generate.py#L29-L42). In this situation, all the input sequences inside a batch must have the same length, so we use padding tokens and the causal mask to tell the model that the padding tokens are not to be considered as inputs.
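The exact way the mask is built lives in the runner linked above. As a rough illustration of the convention we assume here (an additive `float16` mask where `0` means "attend" and a very negative value means "ignore"), a hypothetical helper could look like this:

```python
import numpy as np

def make_causal_mask(query_length: int, end_step: int) -> np.ndarray:
    """Hypothetical helper: additive causal mask of shape (1, 1, query_length, end_step)."""
    neg_inf = np.finfo(np.float16).min
    mask = np.full((1, 1, query_length, end_step), neg_inf, dtype=np.float16)
    offset = end_step - query_length  # tokens already present in the kv-cache
    for i in range(query_length):
        mask[0, 0, i, : offset + i + 1] = 0.0  # attend to the cache and to previous tokens
    return mask

# Prompt processing: 5 new tokens, empty cache -> lower-triangular mask
prompt_mask = make_causal_mask(5, 5)
# Single-token generation step with 5 cached tokens -> attend to everything so far
step_mask = make_causal_mask(1, 6)
```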
### State Preparation

The PyTorch modeling code uses `keyCache` and `valueCache` as the names of the cache buffers that hold the kv-cache. Those blocks are allocated for the maximum context length (2048). We use `coremltools`' new [StateType](https://apple.github.io/coremltools/source/coremltools.converters.mil.input_types.html#statetype) to specify that those blocks must be converted to a stateful Core ML buffer during conversion.

```python
# Specify kv-cache states by using `StateType`.
states = [
    ct.StateType(
        wrapped_type=ct.TensorType(shape=torch_model.kv_cache_shape, dtype=np.float16),
        name="keyCache",
    ),
    ct.StateType(
        wrapped_type=ct.TensorType(shape=torch_model.kv_cache_shape, dtype=np.float16),
        name="valueCache",
    ),
]
```

### Core ML Conversion

To convert the model to Core ML, we need to specify the input and output types, as well as the states. The converted model will use `float16` precision because that’s what we specified for the input data. We also need to indicate the minimum deployment target as iOS 18, as that’s where these features are available. (We can also use `macOS15`, which refers to the same conversion target.)

```python
mlmodel_fp16 = ct.convert(
    traced_model,
    inputs=inputs,
    states=states,
    outputs=outputs,
    minimum_deployment_target=ct.target.iOS18,
    skip_model_load=True,
)
```

### Model Compression

Using the new block-wise quantization strategies described above, we use 4-bit linear quantization with block size 32. This greatly reduces model size and makes the model run faster: even though computation is still performed in `float16`, weights are transferred in 4-bit mode and decompressed on the fly, which is more efficient than transferring a large amount of 16-bit weights. The quantization parameters are configured as follows:

```python
op_config = ct.optimize.coreml.OpLinearQuantizerConfig(
    mode="linear_symmetric",
    dtype="int4",
    granularity="per_block",
    block_size=32,
)
config = ct.optimize.coreml.OptimizationConfig(global_config=op_config)
```

Let’s use that configuration to quantize the model. The following lines will take a few minutes to run:

```python
mlmodel_int4 = ct.optimize.coreml.linear_quantize_weights(mlmodel_fp16, config=config)
mlmodel_int4.save("StatefulMistral7BInstructInt4.mlpackage")
```

There’s a final step after conversion and quantization are done. We need to include a piece of additional metadata that indicates the model identifier we used (`mistralai/Mistral-7B-Instruct-v0.3`). The Swift code will use this to download the tokenizer files from the Hub. Tokenization converts text data to the numerical representations used by models, and it’s different for every model.

```python
mlmodel_int4._spec.description.metadata.userDefined.update({
    "co.huggingface.exporters.name": MODEL_ID
})
```

The generated model is an `mlpackage` of about 3.8 GB, compared with the 14 GB that a `float16` conversion would produce. [You can find it here on the Hub.](https://huggingface.co/apple/mistral-coreml/tree/main)

## Running Mistral 7B with Swift

If you followed the steps above or downloaded the model from the Hub, you can run it locally using the `preview` branch of `swift-transformers`. Apple engineers contributed these updates to the project, including the following important features:

- Full `Tensor` support, which greatly simplifies pre- and post-processing tasks and allows us to delete many lines of low-level, confusing, and fragile code.
- Support for the Swift counterpart of the Stateful API.
Since adopting these features is a breaking change and requires iOS 18 or macOS 15, we’ll keep them in a `preview` branch for now.

To run the model from the command line, please first clone the `preview` branch from the GitHub repo:

```bash
git clone -b preview https://github.com/huggingface/swift-transformers
```

And then run the CLI to test the model:

```bash
# to run in release mode, pass -c release
swift run transformers "Best recommendations for a place to visit in Paris in August 2024:" --max-length 128 Examples/Mistral7B/StatefulMistral7BInstructInt4.mlpackage
```

For easier testing, you can also use `swift-chat`, a simple app we wrote to show how to integrate the `swift-transformers` package inside an app. You have to use the `preview` branch as well. An example of `swift-chat` running the converted Mistral model was shown at the beginning of this post.

## Running Mistral 7B with Python

For those of you who are more familiar with Python, it’s just as easy! coremltools makes it simple to run Core ML models from Python.

```bash
python3 generate.py Examples/Mistral7B/StatefulMistral7BInstructInt4.mlpackage --prompt "Best recommendations for a place to visit in Paris in August 2024:"
```

## What's Next?

We are extremely excited about the progress in [Core ML](https://developer.apple.com/documentation/coreml/) and [coremltools](https://github.com/apple/coremltools) this year, and we are looking forward to seeing lots of third-party apps leveraging ML models to solve real tasks people need. On our side, we are committed to making this as easy as possible so developers can concentrate on creating cool apps. There are a few things on our drawing board:

- The model updates presented here are excellent for GPUs on Mac computers. Core ML can also use the Neural Engine, which is particularly efficient on iPhones. Getting the most performance out of the Neural Engine requires some additional adaptations, which we plan to carry out on a few example models. This work will be based on the learnings discussed in this [2022 (and still very relevant) article by Apple](https://machinelearning.apple.com/research/neural-engine-transformers). We won’t run Mistral 7B on iPhone, but there are several smaller models, like Apple’s OpenELM or DCLM, that make great candidates to explore!
- The code presented here is highly experimental. As summer goes on, we plan to adopt these methods and incorporate them into `exporters`, a Python tool designed to convert transformers models to Core ML. Hopefully, you’ll soon be able to convert many interesting model architectures very easily.
- We’ll keep working on the `preview` branch of `swift-transformers` to incorporate new features or API changes as they are released. If you are interested, keep an eye on it!

## How can you help?

The tools released by Apple at WWDC help us with our long-term goal of making AI easy and accessible to all, and we’d love to see where you can take them. The example we showed is experimental, but you can use it to convert any Mistral fine-tune to Core ML – please let us know if you do! If you want to try other model architectures, please feel free to open issues or PRs to the `preview` branch of `swift-transformers` – we’ll try to help you get going!

There’s never been a better time than today to apply your creativity to solve problems that interest you! Go try things, have fun, and tell us how we can help.
hf_public_repos/blog/deep-rl-pg.md
--- title: "Policy Gradient with PyTorch" thumbnail: /blog/assets/85_policy_gradient/thumbnail.gif authors: - user: ThomasSimonini --- # Policy Gradient with PyTorch <h2>Unit 5, of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit4/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/85_policy_gradient/thumbnail.gif" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit4/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* [In the last unit](https://huggingface.co/blog/deep-rl-dqn), we learned about Deep Q-Learning. In this value-based Deep Reinforcement Learning algorithm, we **used a deep neural network to approximate the different Q-values for each possible action at a state.** Indeed, since the beginning of the course, we only studied value-based methods, **where we estimate a value function as an intermediate step towards finding an optimal policy.** <img src="https://huggingface.co/blog/assets/70_deep_rl_q_part1/link-value-policy.jpg" alt="Link value policy" /> Because, in value-based, **π exists only because of the action value estimates, since policy is just a function** (for instance, greedy-policy) that will select the action with the highest value given a state. But, with policy-based methods, we want to optimize the policy directly **without having an intermediate step of learning a value function.** So today, **we'll study our first Policy-Based method**: Reinforce. And we'll implement it from scratch using PyTorch. Before testing its robustness using CartPole-v1, PixelCopter, and Pong. <figure class="image table text-center m-0 w-full"> <img src="assets/85_policy_gradient/envs.gif" alt="Environments"/> </figure> Let's get started, - [What are Policy-Gradient Methods?](#what-are-policy-gradient-methods) - [An Overview of Policy Gradients](#an-overview-of-policy-gradients) - [The Advantages of Policy-Gradient Methods](#the-advantages-of-policy-gradient-methods) - [The Disadvantages of Policy-Gradient Methods](#the-disadvantages-of-policy-gradient-methods) - [Reinforce (Monte Carlo Policy Gradient)](#reinforce-monte-carlo-policy-gradient) ## What are Policy-Gradient Methods? Policy-Gradient is a subclass of Policy-Based Methods, a category of algorithms that **aims to optimize the policy directly without using a value function using different techniques.** The difference with Policy-Based Methods is that Policy-Gradient methods are a series of algorithms that aim to optimize the policy directly **by estimating the weights of the optimal policy using Gradient Ascent.** ### An Overview of Policy Gradients Why do we optimize the policy directly by estimating the weights of an optimal policy using Gradient Ascent in Policy Gradients Methods? 
Remember that reinforcement learning aims **to find an optimal behavior strategy (policy) to maximize the agent's expected cumulative reward.**

We also need to remember that a policy is a function that, **given a state, outputs a distribution over actions** (in our case using a stochastic policy).

<figure class="image table text-center m-0 w-full">
  <img src="https://huggingface.co/blog/assets/63_deep_rl_intro/pbm_2.jpg" alt="Stochastic Policy"/>
</figure>

Our goal with Policy-Gradients is to control the probability distribution of actions by tuning the policy such that **good actions (that maximize the return) are sampled more frequently in the future.**

Let’s take a simple example:

- We collect an episode by letting our policy interact with its environment.
- We then look at the sum of rewards of the episode (expected return). If this sum is positive, we **consider that the actions taken during the episode were good:** therefore, we want to increase the P(a|s) (probability of taking that action at that state) for each state-action pair.

The Policy Gradient algorithm (simplified) looks like this:

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/pg_bigpicture.jpg" alt="Policy Gradient Big Picture"/>
</figure>

But Deep Q-Learning is excellent! Why use policy-gradient methods?

### The Advantages of Policy-Gradient Methods

There are multiple advantages over Deep Q-Learning methods. Let's see some of them:

1. The simplicity of the integration: **we can estimate the policy directly without storing additional data (action values).**

2. Policy-gradient methods can **learn a stochastic policy while value functions can't**. This has two consequences:

   a. We **don't need to implement an exploration/exploitation trade-off by hand**. Since we output a probability distribution over actions, the agent explores **the state space without always taking the same trajectory.**

   b. We also get rid of the problem of **perceptual aliasing**. Perceptual aliasing is when two states seem (or are) the same but need different actions.

Let's take an example: we have an intelligent vacuum cleaner whose goal is to suck up the dust and avoid killing the hamsters.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/hamster1.jpg" alt="Hamster 1"/>
</figure>

Our vacuum cleaner can only perceive where the walls are. The problem is that the two red cases are aliased states, because the agent perceives an upper and a lower wall for each.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/hamster2.jpg" alt="Hamster 2"/>
</figure>

Under a deterministic policy, the agent will either always move right or always move left when in a red state. **Either case will cause our agent to get stuck and never suck up the dust**. Under a value-based RL algorithm, we learn a quasi-deterministic policy (the epsilon-greedy strategy). Consequently, our agent can spend a lot of time before finding the dust.

On the other hand, an optimal stochastic policy will randomly move left or right in grey states. Consequently, **it will not be stuck and will reach the goal state with a high probability**.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/hamster3.jpg" alt="Hamster 3"/>
</figure>
3. Policy gradients are **more effective in high-dimensional action spaces and continuous action spaces.**

Indeed, the problem with Deep Q-Learning is that its **predictions assign a score (maximum expected future reward) to each possible action**, at each time step, given the current state. But what if we have an infinite number of possible actions?

For instance, with a self-driving car, at each state you can have a (near) infinite choice of actions (turning the wheel at 15°, 17.2°, 19.4°, honking, etc.). We'll need to output a Q-value for each possible action! And taking the max action of a continuous output is an optimization problem in itself! Instead, with a policy gradient, we output a **probability distribution over actions.**

### The Disadvantages of Policy-Gradient Methods

Naturally, policy-gradient methods also have some disadvantages:

- **Policy gradients frequently converge to a local maximum instead of the global optimum.**
- Policy gradients proceed **step by step: they can take longer to train (inefficient).**
- Policy gradients can have high variance (a baseline helps to mitigate this).

👉 If you want to go deeper into the advantages and disadvantages of policy-gradient methods, [you can check this video](https://youtu.be/y3oqOjHilio).

Now that we have seen the big picture of Policy-Gradient methods and their advantages and disadvantages, **let's study and implement one of them**: Reinforce.

## Reinforce (Monte Carlo Policy Gradient)

Reinforce, also called Monte-Carlo Policy Gradient, **uses an estimated return from an entire episode to update the policy parameter** \\(\theta\\).

We have our policy π, which has a parameter θ. This π, given a state, **outputs a probability distribution over actions**.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/policy.jpg" alt="Policy"/>
</figure>

Where \\(\pi_\theta(a_t|s_t)\\) is the probability of the agent selecting action \\(a_t\\) from state \\(s_t\\), given our policy.

**But how do we know if our policy is good?** We need to have a way to measure it. To do that, we define a score/objective function called \\(J(\theta)\\).

The score function J is the expected return:

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/objective.jpg" alt="Return"/>
</figure>

Remember that policy gradient can be seen as an optimization problem. So we must find the best parameters (θ) to maximize the score function, J(θ). To do that, we’re going to use the [Policy Gradient Theorem](https://www.youtube.com/watch?v=AKbX1Zvo7r8). I’m not going to dive into the mathematical details, but if you’re interested, check [this video](https://www.youtube.com/watch?v=AKbX1Zvo7r8).

The Reinforce algorithm works like this:

Loop:

- Use the policy \\(\pi_\theta\\) to collect an episode \\(\tau\\)
- Use the episode to estimate the gradient \\(\hat{g} = \nabla_\theta J(\theta)\\)

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/pg.jpg" alt="Policy Gradient"/>
</figure>

- Update the weights of the policy: \\(\theta \leftarrow \theta + \alpha \hat{g}\\)

The interpretation we can make is this one:

- \\(\nabla_\theta \log \pi_\theta(a_t|s_t)\\) is the direction of the **steepest increase of the (log) probability** of selecting action \\(a_t\\) from state \\(s_t\\). This tells us **how we should change the weights of the policy** if we want to increase/decrease the log probability of selecting action \\(a_t\\) at state \\(s_t\\).
- \\(R(\tau)\\) is the scoring function:
  - If the return is high, it will push up the probabilities of the (state, action) combinations.
  - Else, if the return is low, it will push down the probabilities of the (state, action) combinations.

Now that we have studied the theory behind Reinforce, **you’re ready to code your Reinforce agent with PyTorch**. And you'll test its robustness using CartPole-v1, PixelCopter, and Pong.

Start the tutorial here 👉 https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb

The leaderboard to compare your results with your classmates 🏆 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/envs.gif" alt="Environments"/>
</figure>

---
Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorial. You’ve just coded your first Deep Reinforcement Learning agent from scratch using PyTorch and shared it on the Hub 🥳.

It's **normal if you still feel confused** by all these elements. **This was the same for me and for everyone who has studied RL.**

Take time to really grasp the material before continuing. Don't hesitate to train your agent in other environments. The **best way to learn is to try things on your own!**

We published additional readings in the syllabus if you want to go deeper 👉 **[https://github.com/huggingface/deep-rl-class/blob/main/unit5/README.md](https://github.com/huggingface/deep-rl-class/blob/main/unit5/README.md)**

In the next unit, we’re going to learn about a combination of Policy-Based and Value-Based methods called Actor-Critic methods.

And don't forget to share with your friends who want to learn 🤗!

Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill out this form 👉 **[https://forms.gle/3HgA7bEHwAmmLfwh9](https://forms.gle/3HgA7bEHwAmmLfwh9)**

### **Keep learning, stay awesome 🤗,**
hf_public_repos/blog/codeparrot.md
---
title: Training CodeParrot 🦜 from Scratch
thumbnail: /blog/assets/40_codeparrot/thumbnail.png
authors:
- user: leandro
---

# Training CodeParrot 🦜 from Scratch

In this blog post we'll take a look at what it takes to build the technology behind [GitHub Copilot](https://copilot.github.com/), an application that provides suggestions to programmers as they code. In this step-by-step guide, we'll learn how to train a large GPT-2 model called CodeParrot 🦜, entirely from scratch. CodeParrot can auto-complete your Python code - give it a spin [here](https://huggingface.co/spaces/lvwerra/codeparrot-generation). Let's get to building it from scratch!

![codeparrot](assets/40_codeparrot/codeparrot.png)

## Creating a Large Dataset of Source Code

The first thing we need is a large training dataset. With the goal of training a Python code generation model, we accessed the GitHub dump available on Google's BigQuery and filtered for all Python files. The result is a 180 GB dataset with 20 million files (available [here](http://hf.co/datasets/transformersbook/codeparrot)). After initial training experiments, we found that the duplicates in the dataset severely impacted the model performance. Further investigating the dataset we found that:

- 0.1% of the unique files make up 15% of all files
- 1% of the unique files make up 35% of all files
- 10% of the unique files make up 66% of all files

You can learn more about our findings in [this Twitter thread](https://twitter.com/lvwerra/status/1458470994146996225). We removed the duplicates and applied the same cleaning heuristics found in the [Codex paper](https://arxiv.org/abs/2107.03374). Codex is the model behind Copilot and is a GPT-3 model fine-tuned on GitHub code.

The cleaned dataset is still 50GB and available on the Hugging Face Hub: [codeparrot-clean](http://hf.co/datasets/lvwerra/codeparrot-clean). With that we can set up a new tokenizer and train a model.

## Initializing the Tokenizer and Model

First we need a tokenizer. Let's train one specifically on code so it splits code tokens well. We can take an existing tokenizer (e.g. GPT-2) and directly train it on our own dataset with the `train_new_from_iterator()` method. We then push it to the Hub. Note that we omit imports, argument parsing and logging from the code examples to keep the code blocks compact. But you'll find the full code including preprocessing and downstream task evaluation [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot).

```Python
# Iterator for Training
def batch_iterator(batch_size=10):
    for _ in tqdm(range(0, args.n_examples, batch_size)):
        yield [next(iter_dataset)["content"] for _ in range(batch_size)]

# Base tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
base_vocab = list(bytes_to_unicode().values())

# Load dataset
dataset = load_dataset("lvwerra/codeparrot-clean", split="train", streaming=True)
iter_dataset = iter(dataset)

# Training and saving
new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(),
                                                  vocab_size=args.vocab_size,
                                                  initial_alphabet=base_vocab)
new_tokenizer.save_pretrained(args.tokenizer_name, push_to_hub=args.push_to_hub)
```

Learn more about tokenizers and how to build them in the [Hugging Face course](https://huggingface.co/course/chapter6/1?fw=pt).

See that inconspicuous `streaming=True` argument? This small change has a big impact: instead of downloading the full (50GB) dataset, this will stream individual samples as needed, saving a lot of disk space!
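To see the effect of streaming for yourself, here is a tiny sketch (using the same dataset name as loaded above): nothing is downloaded ahead of time, and samples are only fetched as you iterate.

```Python
from datasets import load_dataset

# Nothing is downloaded up front; samples are fetched lazily as we iterate
ds = load_dataset("lvwerra/codeparrot-clean", split="train", streaming=True)
first_sample = next(iter(ds))
print(first_sample["content"][:200])  # peek at the first file's source code
```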
Check out the [Hugging Face course](https://huggingface.co/course/chapter5/4?fw=pt) for more information on streaming.

Now, we initialize a new model. We’ll use the same hyperparameters as GPT-2 large (1.5B parameters) and adjust the embedding layer to fit our new tokenizer, also adding some stability tweaks. The `scale_attn_by_layer_idx` flag makes sure we scale the attention by the layer id and `reorder_and_upcast_attn` mainly makes sure that we compute the attention in full precision to avoid numerical issues. We push the freshly initialized model to the same repo as the tokenizer.

```Python
# Load codeparrot tokenizer trained for Python code tokenization
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)

# Configuration
config_kwargs = {"vocab_size": len(tokenizer),
                 "scale_attn_by_layer_idx": True,
                 "reorder_and_upcast_attn": True}

# Load model with config and push to hub
config = AutoConfig.from_pretrained('gpt2-large', **config_kwargs)
model = AutoModelForCausalLM.from_config(config)
model.save_pretrained(args.model_name, push_to_hub=args.push_to_hub)
```

Now that we have an efficient tokenizer and a freshly initialized model we can start with the actual training loop.

## Implementing the Training Loop

We train with the [🤗 Accelerate](https://github.com/huggingface/accelerate) library which allows us to scale the training from our laptop to a multi-GPU machine without changing a single line of code. We just create an accelerator and do some argument housekeeping:

```Python
accelerator = Accelerator()
acc_state = {str(k): str(v) for k, v in accelerator.state.__dict__.items()}

parser = HfArgumentParser(TrainingArguments)
args = parser.parse_args()
args = Namespace(**vars(args), **acc_state)
samples_per_step = accelerator.state.num_processes * args.train_batch_size
set_seed(args.seed)
```

We are now ready to train! Let's use the `huggingface_hub` client library to clone the repository with the new tokenizer and model. We will check out a new branch for this experiment. With that setup, we can run many experiments in parallel and in the end we just merge the best one into the main branch.

```Python
# Clone model repository
if accelerator.is_main_process:
    hf_repo = Repository(args.save_dir, clone_from=args.model_ckpt)

# Checkout new branch on repo
if accelerator.is_main_process:
    hf_repo.git_checkout(run_name, create_branch_ok=True)
```

We can directly load the tokenizer and model from the local repository. Since we are dealing with big models we might want to turn on [gradient checkpointing](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9) to decrease the GPU memory footprint during training.

```Python
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(args.save_dir)
if args.gradient_checkpointing:
    model.gradient_checkpointing_enable()
tokenizer = AutoTokenizer.from_pretrained(args.save_dir)
```

Next up is the dataset. We make training simpler with a dataset that yields examples with a fixed context size. To not waste too much data (some samples are too short or too long) we can concatenate many examples with an EOS token and then chunk them.

![codeparrot](assets/40_codeparrot/buffer.png)

The more sequences we prepare together, the smaller the fraction of tokens we discard (the grey ones in the previous figure). Since we want to stream the dataset instead of preparing everything in advance we use an `IterableDataset`.
The full dataset class looks as follows:

```Python
class ConstantLengthDataset(IterableDataset):

    def __init__(self, tokenizer, dataset, infinite=False, seq_length=1024,
                 num_of_sequences=1024, chars_per_token=3.6):
        self.tokenizer = tokenizer
        self.concat_token_id = tokenizer.bos_token_id
        self.dataset = dataset
        self.seq_length = seq_length
        self.input_characters = seq_length * chars_per_token * num_of_sequences
        self.epoch = 0
        self.infinite = infinite

    def __iter__(self):
        iterator = iter(self.dataset)
        more_examples = True
        while more_examples:
            buffer, buffer_len = [], 0
            while True:
                if buffer_len >= self.input_characters:
                    break
                try:
                    buffer.append(next(iterator)["content"])
                    buffer_len += len(buffer[-1])
                except StopIteration:
                    if self.infinite:
                        iterator = iter(self.dataset)
                        self.epoch += 1
                        logger.info(f"Dataset epoch: {self.epoch}")
                    else:
                        more_examples = False
                        break
            tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]
            all_token_ids = []
            for tokenized_input in tokenized_inputs:
                all_token_ids.extend(tokenized_input + [self.concat_token_id])
            for i in range(0, len(all_token_ids), self.seq_length):
                input_ids = all_token_ids[i : i + self.seq_length]
                if len(input_ids) == self.seq_length:
                    yield torch.tensor(input_ids)
```

Texts in the buffer are tokenized in parallel and then concatenated. Chunked samples are then yielded until the buffer is empty and the process starts again. If we set `infinite=True` the dataset iterator restarts at its end.

```Python
def create_dataloaders(args):
    ds_kwargs = {"streaming": True}
    train_data = load_dataset(args.dataset_name_train, split="train", **ds_kwargs)
    train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed)
    valid_data = load_dataset(args.dataset_name_valid, split="train", **ds_kwargs)

    train_dataset = ConstantLengthDataset(tokenizer, train_data, infinite=True, seq_length=args.seq_length)
    valid_dataset = ConstantLengthDataset(tokenizer, valid_data, infinite=False, seq_length=args.seq_length)

    train_dataloader = DataLoader(train_dataset, batch_size=args.train_batch_size)
    eval_dataloader = DataLoader(valid_dataset, batch_size=args.valid_batch_size)
    return train_dataloader, eval_dataloader

train_dataloader, eval_dataloader = create_dataloaders(args)
```

Before we start training we need to set up the optimizer and learning rate schedule. We don’t want to apply weight decay to biases and LayerNorm weights so we use a helper function to exclude those.

```Python
def get_grouped_params(model, args, no_decay=["bias", "LayerNorm.weight"]):
    params_with_wd, params_without_wd = [], []
    for n, p in model.named_parameters():
        if any(nd in n for nd in no_decay):
            params_without_wd.append(p)
        else:
            params_with_wd.append(p)
    return [{"params": params_with_wd, "weight_decay": args.weight_decay},
            {"params": params_without_wd, "weight_decay": 0.0}]

optimizer = AdamW(get_grouped_params(model, args), lr=args.learning_rate)
lr_scheduler = get_scheduler(name=args.lr_scheduler_type, optimizer=optimizer,
                             num_warmup_steps=args.num_warmup_steps,
                             num_training_steps=args.max_train_steps)
```

A big question that remains is how all the data and models will be distributed across several GPUs. This sounds like a complex task but actually only requires a single line of code with 🤗 Accelerate.
```Python
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader)
```

Under the hood it'll use DistributedDataParallel, which means a batch is sent to each GPU worker which has its own copy of the model. There the gradients are computed and then aggregated to update the model on each worker.

![codeparrot](assets/40_codeparrot/ddp.png)

We also want to evaluate the model from time to time on the validation set so let’s write a function to do just that. This is done automatically in a distributed fashion and we just need to gather all the losses from the workers. We also want to report the [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models).

```Python
def evaluate(args):
    model.eval()
    losses = []
    for step, batch in enumerate(eval_dataloader):
        with torch.no_grad():
            outputs = model(batch, labels=batch)
        loss = outputs.loss.repeat(args.valid_batch_size)
        losses.append(accelerator.gather(loss))
        if args.max_eval_steps > 0 and step >= args.max_eval_steps:
            break
    loss = torch.mean(torch.cat(losses))
    try:
        perplexity = torch.exp(loss)
    except OverflowError:
        perplexity = float("inf")
    return loss.item(), perplexity.item()
```

We are now ready to write the main training loop. It will look pretty much like a normal PyTorch training loop. Here and there you can see that we use the accelerator functions rather than native PyTorch. Also, we push the model to the branch after each evaluation.

```Python
# Train model
model.train()
completed_steps = 0
for step, batch in enumerate(train_dataloader, start=1):
    loss = model(batch, labels=batch, use_cache=False).loss
    loss = loss / args.gradient_accumulation_steps
    accelerator.backward(loss)
    if step % args.gradient_accumulation_steps == 0:
        accelerator.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        completed_steps += 1
    if step % args.save_checkpoint_steps == 0:
        eval_loss, perplexity = evaluate(args)
        accelerator.wait_for_everyone()
        unwrapped_model = accelerator.unwrap_model(model)
        unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save)
        if accelerator.is_main_process:
            hf_repo.push_to_hub(commit_message=f"step {step}")
        model.train()
    if completed_steps >= args.max_train_steps:
        break
```

When we call `wait_for_everyone()` and `unwrap_model()` we make sure that all workers are ready and any model layers that have been added by `prepare()` earlier are removed. We also use gradient accumulation and gradient clipping that are easily implemented. Lastly, after training is complete we run a last evaluation and save the final model and push it to the hub.

```Python
# Evaluate and save the last checkpoint
logger.info("Evaluating and saving model after training")
eval_loss, perplexity = evaluate(args)
log_metrics(step, {"loss/eval": eval_loss, "perplexity": perplexity})
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save)
if accelerator.is_main_process:
    hf_repo.push_to_hub(commit_message="final model")
```

Done! That's all the code to train a full GPT-2 model from scratch with as little as 150 lines. We did not show the imports and logs of the scripts to make the code a little bit more compact. Now let's actually train it!
With this code we trained models for our upcoming [book on Transformers and NLP](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/): a [110M](https://hf.co/lvwerra/codeparrot-small) and [1.5B](https://hf.co/lvwerra/codeparrot) parameter GPT-2 model. We used a 16 x A100 GPU machine to train these models for 1 day and 1 week, respectively. Enough time to get a coffee and read a book or two!

## Evaluation

This is still a relatively short training time for pretraining, but we can already observe good downstream performance compared to similar models. We evaluated the models on OpenAI's HumanEval benchmark that was introduced in the Codex paper. It measures the performance of code generation models on almost 200 coding challenges.

![codeparrot](assets/40_codeparrot/humaneval.png)

Note that we trained CodeParrot on roughly 25-30B tokens whereas GPT-neo was trained on 300B tokens and Codex on 300B (GPT-3 checkpoint) + 100B (code) tokens. So you definitely get a good bang for your buck here.

But what is this _pass@k_ metric exactly? Simply put, it measures the probability of at least one program passing the unittest of a programming problem given _k_ candidate generations from the model. Naturally, the metric increases with k as more candidates can potentially solve the coding challenge. You can use the metric with `datasets`:

```Python
from datasets import load_metric

code_eval = load_metric("code_eval")
test_cases = ["assert add(2,3)==5"]
candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])

print(pass_at_k)
# -> {'pass@1': 0.5, 'pass@2': 1.0}
```

You can also load OpenAI's HumanEval dataset with `datasets`:

```Python
from datasets import load_dataset

dataset = load_dataset("openai_humaneval")
```

Now let's have a look at what we get after anxiously waiting for so long!

## Results

Let's start slow and check if we can prompt the model to write a function to return the size of a file given its path:

**Prompt:**

```Python
def get_files_size(filename):
```

**Completion:**

```Python
    """
    Returns the file size of a file.

    :param filepath: The path to the file.
    :type filepath: str
    :return: The file size.
    :rtype: int
    """
    return os.path.getsize(filepath)
```

So far so good, but let's take this a step further and try something a bit more interesting. One of the less exciting aspects of building production-ready code is writing unit tests. Let's see if we can use CodeParrot to do this for us!

**Prompt:**

```Python
def is_even(value):
    """Returns True if value is an even number."""
    return value % 2 == 0

# setup unit tests for is_even
import unittest
```

**Completion:**

```Python
class TestIsEven(unittest.TestCase):
    """Tests for is_even function."""

    def test_is_even(self):
        """Test is_even function."""
        self.assertTrue(is_even(0))
        self.assertTrue(is_even(1))
        self.assertTrue
```

Although the test condition itself does not look quite right, the model has built all the boilerplate code for us. This alone could already be quite useful and save us a couple of minutes of time!

But what we really want to do is build better transformer models.
Let's use CodeParrot to help us do that:

**Prompt:**

```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# build a BERT classifier
```

**Completion:**

```Python
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased')
```

Looking at a few examples is nice, but by now you probably want to try it yourself. We prepared a few online demos just for that!

## Demos

You can play with the models in two demos we added to [Hugging Face Spaces](https://huggingface.co/spaces/launch). With the first you can quickly generate code with the model and with the second you can highlight your code with the model to spot bugs!

- [Code Generation](https://hf.co/spaces/lvwerra/codeparrot-generation)
- [Code Highlighting](https://hf.co/spaces/lvwerra/codeparrot-highlighting)

You can also directly use the models from the `transformers` library:

```Python
from transformers import pipeline

pipe = pipeline('text-generation', model='lvwerra/codeparrot')
pipe('def hello_world():')
```

## Summary

In this short blog post we walked through all the steps involved in training a large GPT-2 model called CodeParrot 🦜 for code generation. Using 🤗 Accelerate we built a training script with less than 200 lines of code that we can effortlessly scale across many GPUs. With that you can now train your own GPT-2 model!

This post gives a brief overview of CodeParrot 🦜, but if you are interested in diving deeper into how to pretrain these models, we recommend reading the dedicated chapter in the upcoming [book on Transformers and NLP](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). This chapter provides many more details around building custom datasets, design considerations when training a new tokenizer, and architecture choice.
hf_public_repos/blog/graphcore.md
--- title: "Hugging Face and Graphcore partner for IPU-optimized Transformers" thumbnail: /blog/assets/26_graphcore-ipu/thumbnail.png authors: - user: sallydoherty guest: true --- # Hugging Face and Graphcore partner for IPU-optimized Transformers > ##### Speaking at the 2021 AI Hardware Summit, Hugging Face announced the launch of their new Hardware Partner Program, including device-optimized models and software integrations. Here, Graphcore - creators of the Intelligence Processing Unit (IPU) and a founding member of the program – explain how their partnership with Hugging Face will allow developers to easily accelerate their use of state-of-the-art Transformer models. Graphcore and Hugging Face are two companies with a common goal – to make it easier for innovators to harness the power of machine intelligence. Hugging Face’s Hardware Partner Program will allow developers using Graphcore systems to deploy state-of-the-art Transformer models, optimised for our Intelligence Processing Unit (IPU), at production scale, with minimum coding complexity. ## What is an Intelligence Processing Unit? IPUs are the processors that power Graphcore’s IPU-POD datacenter compute systems. This new type of processor is designed to support the very specific computational requirements of AI and machine learning. Characteristics such as fine-grained parallelism, low precision arithmetic, and the ability to handle sparsity have been built into our silicon. Instead of adopting a SIMD/SIMT architecture like GPUs, Graphcore’s IPU uses a massively parallel, MIMD architecture, with ultra-high bandwidth memory placed adjacent to the processor cores, right on the silicon die. This design delivers high performance and new levels of efficiency, whether running today’s most popular models, such as BERT and EfficientNet, or exploring next-generation AI applications. Software plays a vital role in unlocking the IPU’s capabilities. Our Poplar SDK has been co-designed with the processor since Graphcore’s inception. Today it fully integrates with standard machine learning frameworks, including PyTorch and TensorFlow, as well as orchestration and deployment tools such as Docker and Kubernetes. Making Poplar compatible with these widely used, third-party systems allows developers to easily port their models from their other compute platforms and start taking advantage of the IPU’s advanced AI capabilities. ## Optimising Transformers for Production Transformers have completely transformed (pun intended) the field of AI. Models such as BERT are widely used by Graphcore customers in a huge array of applications, across NLP and beyond. These multi-talented models can perform feature extraction, text generation, sentiment analysis, translation and many more functions. Already, Hugging Face plays host to hundreds of Transformers, from the French-language CamemBERT to ViT which applies lessons learned in NLP to computer vision. The Transformers library is downloaded an average of 2 million times every month and demand is growing. With a user base of more than 50,000 developers – Hugging Face has seen the fastest ever adoption of an open-source project. Now, with its Hardware Partner Program, Hugging Face is connecting the ultimate Transformer toolset with today's most advanced AI hardware. Using Optimum, a new open-source library and toolkit, developers will be able to access hardware-optimized models certified by Hugging Face. 
These are being developed in a collaboration between Graphcore and Hugging Face, with the first IPU-optimized models appearing on Optimum later this year. Ultimately, these will cover a wide range of applications, from vision and speech to translation and text generation.

Hugging Face CEO Clément Delangue said: “Developers all want access to the latest and greatest hardware – like the Graphcore IPU, but there’s always that question of whether they’ll have to learn new code or processes. With Optimum and the Hugging Face Hardware Program, that’s just not an issue. It’s essentially plug-and-play".

## SOTA Models meet SOTA Hardware

Prior to the announcement of the Hugging Face partnership, we had demonstrated the power of the IPU to accelerate state-of-the-art Transformer models with a special Graphcore-optimised implementation of Hugging Face BERT using PyTorch.

Full details of this example can be found in the Graphcore blog [BERT-Large training on the IPU explained](https://www.graphcore.ai/posts/bert-large-training-on-the-ipu-explained).

The dramatic benchmark results for BERT running on a Graphcore system, compared with a comparable GPU-based system, are surely a tantalising prospect for anyone currently running the popular NLP model on something other than the IPU.

![BERT-Large benchmark results on a Graphcore IPU system](assets/26_graphcore-ipu/graphcore-ipu-bert-large.png)

This type of acceleration can be game changing for machine learning researchers and engineers, winning them back valuable hours of training time and allowing them many more iterations when developing new models.

Now Graphcore users will be able to unlock such performance advantages, through the Hugging Face platform, with its elegant simplicity and superlative range of models.

Together, Hugging Face and Graphcore are helping even more people to access the power of Transformers and accelerate the AI revolution.

*Visit the [Hugging Face Hardware Partner portal](https://huggingface.co/hardware) to learn more about Graphcore IPU systems and how to gain access*
3
0
hf_public_repos
hf_public_repos/blog/diffusers-turns-1.md
--- title: "Happy 1st anniversary 🤗 Diffusers!" thumbnail: /blog/assets/diffusers-turns-1/diffusers-turns-1.png authors: - user: stevhliu - user: sayakpaul - user: pcuenq --- # Happy 1st anniversary 🤗 Diffusers! 🤗 Diffusers is happy to celebrate its first anniversary! It has been an exciting year, and we're proud and grateful for how far we've come thanks to our community and open-source contributors. Last year, text-to-image models like DALL-E 2, Imagen, and Stable Diffusion captured the world's attention with their ability to generate stunningly photorealistic images from text, sparking a massive surge of interest and development in generative AI. But access to these powerful models was limited. At Hugging Face, our mission is to democratize good machine learning by collaborating and helping each other build an open and ethical AI future together. Our mission motivated us to create the 🤗 Diffusers library so *everyone* can experiment, research, or simply play with text-to-image models. That’s why we designed the library as a modular toolbox, so you can customize a diffusion model’s components or just start using it out-of-the-box. As 🤗 Diffusers turns 1, here’s an overview of some of the most notable features we’ve added to the library with the help of our community. We are proud and immensely grateful for being part of an engaged community that promotes accessible usage, pushes diffusion models beyond just text-to-image generation, and is an all-around inspiration. **Table of Contents** * [Striving for photorealism](#striving-for-photorealism) * [Video pipelines](#video-pipelines) * [Text-to-3D models](#text-to-3d-models) * [Image editing pipelines](#image-editing-pipelines) * [Faster diffusion models](#faster-diffusion-models) * [Ethics and safety](#ethics-and-safety) * [Support for LoRA](#support-for-lora) * [Torch 2.0 optimizations](#torch-20-optimizations) * [Community highlights](#community-highlights) * [Building products with 🤗 Diffusers](#building-products-with-🤗-diffusers) * [Looking forward](#looking-forward) ## Striving for photorealism Generative AI models are known for creating photorealistic images, but if you look closely, you may notice certain things that don't look right, like generating extra fingers on a hand. This year, the DeepFloyd IF and Stability AI SDXL models made a splash by improving the quality of generated images to be even more photorealistic. [DeepFloyd IF](https://stability.ai/blog/deepfloyd-if-text-to-image-model) - A modular diffusion model that includes different processes for generating an image (for example, an image is upscaled 3x to produce a higher resolution image). Unlike Stable Diffusion, the IF model works directly on the pixel level, and it uses a large language model to encode text. [Stable Diffusion XL (SDXL)](https://stability.ai/blog/sdxl-09-stable-diffusion) - The latest Stable Diffusion model from Stability AI, with significantly more parameters than its predecessor Stable Diffusion 2. It generates hyper-realistic images, leveraging a base model for close adherence to the prompt, and a refiner model specialized in the fine details and high-frequency content. Head over to the DeepFloyd IF [docs](https://huggingface.co/docs/diffusers/v0.18.2/en/api/pipelines/if#texttoimage-generation) and the SDXL [docs](https://huggingface.co/docs/diffusers/v0.18.2/en/api/pipelines/stable_diffusion/stable_diffusion_xl) today to learn how to start generating your own images! 
## Video pipelines Text-to-image pipelines are cool, but text-to-video is even cooler! We currently support two text-to-video pipelines, [VideoFusion](https://huggingface.co/docs/diffusers/main/en/api/pipelines/text_to_video) and [Text2Video-Zero](https://huggingface.co/docs/diffusers/main/en/api/pipelines/text_to_video_zero). If you’re already familiar with text-to-image pipelines, using a text-to-video pipeline is very similar: ```py import torch from diffusers import DiffusionPipeline from diffusers.utils import export_to_video pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) pipe.enable_model_cpu_offload() prompt = "Darth Vader surfing a wave" video_frames = pipe(prompt, num_frames=24).frames video_path = export_to_video(video_frames) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/darthvader_cerpense.gif" alt="Generated video of Darth Vader surfing."/> </div> We expect text-to-video to go through a revolution during 🤗 Diffusers second year, and we are excited to see what the community builds on top of these to push the boundaries of video generation from language! ## Text-to-3D models In addition to text-to-video, we also have text-to-3D generation now thanks to OpenAI’s [Shap-E](https://hf.co/papers/2305.02463) model. Shap-E is trained by encoding a large dataset of 3D-text pairs, and a diffusion model is conditioned on the encoder’s outputs. You can design 3D assets for video games, interior design, and architecture. Try it out today with the [`ShapEPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEPipeline) and [`ShapEImg2ImgPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEImg2ImgPipeline). <div class="flex justify-center"> <img src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/cake_out.gif" alt="3D render of a birthday cupcake generated using SHAP-E."/> </div> ## Image editing pipelines Image editing is one of the most practical use cases in fashion, material design, and photography. With diffusion models, the possibilities of image editing continue to expand. We have many [pipelines](https://huggingface.co/docs/diffusers/main/en/using-diffusers/controlling_generation) in 🤗 Diffusers to support image editing. There are image editing pipelines that allow you to describe your desired edit as a prompt, removing concepts from an image, and even a pipeline that unifies multiple generation methods to create high-quality images like panoramas. With 🤗 Diffusers, you can experiment with the future of photo editing now! ## Faster diffusion models Diffusion models are known to be time-intensive because of their iterative steps. With OpenAI’s [Consistency Models](https://huggingface.co/papers/2303.01469), the image generation process is significantly faster. Generating a single 256x256 resolution image only takes 3/4 of a second on a modern CPU! You can try this out in 🤗 Diffusers with the [`ConsistencyModelPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/consistency_models). On top of speedier diffusion models, we also offer many optimization techniques for faster inference like [PyTorch 2.0’s `scaled_dot_product_attention()` (SDPA) and `torch.compile()`](https://pytorch.org/blog/accelerated-diffusers-pt-20), sliced attention, feed-forward chunking, VAE tiling, CPU and model offloading, and more. 
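As a quick illustration, several of these switches can be flipped directly on a pipeline object. The sketch below shows a few of them on a Stable Diffusion checkpoint; which methods are available depends on the pipeline class and the 🤗 Diffusers version you are running:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Memory savers: compute attention in slices and decode the VAE in tiles
pipe.enable_attention_slicing()
pipe.enable_vae_tiling()

# Keep submodules on the CPU and move each one to the GPU only when it is needed
pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on Mars").images[0]
```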
These optimizations save memory, which translates to faster generation, and allow you to run inference on consumer GPUs. When you distribute a model with 🤗 Diffusers, all of these optimizations are immediately supported! In addition to that, we also support specific hardware and formats like ONNX, the `mps` PyTorch device for Apple Silicon computers, Core ML, and others. To learn more about how we optimize inference with 🤗 Diffusers, check out the [docs](https://huggingface.co/docs/diffusers/optimization/opt_overview)!

## Ethics and safety

Generative models are cool, but they also have the ability to produce harmful and NSFW content. To help users interact with these models responsibly and ethically, we’ve added a [`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) component that flags inappropriate content generated during inference. Model creators can choose to incorporate this component into their models if they want.

In addition, generative models can also be used to produce disinformation. Earlier this year, the [Balenciaga Pope](https://www.theverge.com/2023/3/27/23657927/ai-pope-image-fake-midjourney-computer-generated-aesthetic) went viral for how realistic the image was despite it being fake. This underscores the importance and need for a mechanism to distinguish between generated and human content. That’s why we’ve added an invisible watermark for images generated by the SDXL model, which helps users be better informed.

The development of these features is guided by our [ethical charter](https://huggingface.co/docs/diffusers/main/en/conceptual/ethical_guidelines), which you can find in our documentation.

## Support for LoRA

Fine-tuning diffusion models is expensive and out of reach for most consumer GPUs. We added the Low-Rank Adaptation ([LoRA](https://huggingface.co/papers/2106.09685)) technique to close this gap. With LoRA, which is a method for parameter-efficient fine-tuning, you can fine-tune large diffusion models faster and consume less memory. The resulting model weights are also very lightweight compared to the original model, so you can easily share your custom models. If you want to learn more, [our documentation](https://huggingface.co/docs/diffusers/main/en/training/lora) shows how to perform fine-tuning and inference on Stable Diffusion with LoRA. In addition to LoRA, we support other [training techniques](https://huggingface.co/docs/diffusers/main/en/training/overview) for personalized generation, including DreamBooth, textual inversion, custom diffusion, and more!

## Torch 2.0 optimizations

PyTorch 2.0 [introduced support](https://pytorch.org/get-started/pytorch-2.0/#pytorch-2x-faster-more-pythonic-and-as-dynamic-as-ever) for `torch.compile()` and `scaled_dot_product_attention()`, a more efficient implementation of the attention mechanism. 🤗 Diffusers [provides first-class support](https://huggingface.co/docs/diffusers/optimization/torch2.0) for these features, resulting in massive speedups in inference latency, which can sometimes be more than twice as fast!

In addition to visual content (images, videos, 3D assets, etc.), we also added support for audio! Check out [the documentation](https://huggingface.co/docs/diffusers/using-diffusers/audio) to learn more.

## Community highlights

One of the most gratifying experiences of the past year has been seeing how the community is incorporating 🤗 Diffusers into their projects.
From adapting Low-rank adaptation (LoRA) for faster training of text-to-image models to building a state-of-the-art inpainting tool, here are a few of our favorite projects: <div class="mx-auto max-w-screen-xl py-8"> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">We built Core ML Stable Diffusion to make it easier for developers to add state-of-the-art generative AI capabilities in their iOS, iPadOS and macOS apps with the highest efficiency on Apple Silicon. We built on top of 🤗 Diffusers instead of from scratch as 🤗 Diffusers consistently stays on top of a rapidly evolving field and promotes much needed interoperability of new and old ideas.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/10639145?s=200&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Atila Orhon</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">🤗 Diffusers has been absolutely developer-friendly for me to dive right into stable diffusion models. Main differentiating factor clearly being that 🤗 Diffusers implementation is often not some code from research lab, that are mostly focused on high velocity driven. While research codes are often poorly written and difficult to understand (lack of typing, assertions, inconsistent design patterns and conventions), 🤗 Diffusers was a breeze to use for me to hack my ideas within couple of hours. Without it, I would have needed to invest significantly more amount of time to start hacking. Well-written documentations and examples are extremely helpful as well.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/35953539?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Simo</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">BentoML is the unified framework for for building, shipping, and scaling production-ready AI applications incorporating traditional ML, pre-trained AI models, Generative and Large Language Models. All Hugging Face Diffuser models and pipelines can be seamlessly integrated into BentoML applications, enabling the running of models on the most suitable hardware and independent scaling based on usage.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/49176046?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">BentoML</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">Invoke AI is an open-source Generative AI tool built to empower professional creatives, from game designers and photographers to architects and product designers. 
Invoke recently launched their hosted offering at invoke.ai, allowing users to generate assets from any computer, powered by the latest research in open-source.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/113954515?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">InvokeAI</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">TaskMatrix connects Large Language Model and a series of Visual Models to enable sending and receiving images during chatting.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/6154722?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Chenfei Wu</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">Lama Cleaner is a powerful image inpainting tool that uses Stable Diffusion technology to remove unwanted objects, defects, or people from your pictures. It can also erase and replace anything in your images with ease.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://github.com/Sanster/lama-cleaner/raw/main/assets/logo.png" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Qing</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">Grounded-SAM combines a powerful Zero-Shot detector Grounding-DINO and Segment-Anything-Model (SAM) to build a strong pipeline to detect and segment everything with text inputs. When combined with 🤗 Diffusers inpainting models, Grounded-SAM can do highly controllable image editing tasks, including replacing specific objects, inpainting the background, etc.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/113572103?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Tianhe Ren</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">Stable-Dreamfusion leverages the convenient implementations of 2D diffusion models in 🤗 Diffusers to replicate recent text-to-3D and image-to-3D methods.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/25863658?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">kiui</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">MMagic (Multimodal Advanced, Generative, and Intelligent Creation) is an advanced and comprehensive Generative AI toolbox that provides state-of-the-art AI models (e.g., diffusion models powered by 🤗 Diffusers and GAN) to synthesize, edit and enhance images and videos. 
In MMagic, users can use rich components to customize their own models like playing with Legos and manage the training loop easily.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/10245193?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">mmagic</p> </div> </div> </div> <div class="mb-8 sm:break-inside-avoid"> <blockquote class="rounded-xl !mb-0 bg-gray-50 p-6 shadow dark:bg-gray-800"> <p class="leading-relaxed text-gray-700">Tune-A-Video, developed by Jay Zhangjie Wu and his team at Show Lab, is the first to fine-tune a pre-trained text-to-image diffusion model using a single text-video pair and enables changing video content while preserving motion.</p> </blockquote> <div class="flex items-center gap-4"> <img src="https://avatars.githubusercontent.com/u/101181824?s=48&v=4" class="h-12 w-12 rounded-full object-cover" /> <div class="text-sm"> <p class="font-medium">Jay Zhangjie Wu</p> </div> </div> </div> </div> We also collaborated with Google Cloud (who generously provided the compute) to provide technical guidance and mentorship to help the community train diffusion models with TPUs (check out a summary of the event [here](https://opensource.googleblog.com/2023/06/controlling-stable-diffusion-with-jax-diffusers-and-cloud-tpus.html)). There were many cool models such as this [demo](https://huggingface.co/spaces/mfidabel/controlnet-segment-anything) that combines ControlNet with Segment Anything. <div class="flex justify-center"> <img src="https://github.com/mfidabel/JAX_SPRINT_2023/blob/8632f0fde7388d7a4fc57225c96ef3b8411b3648/EX_1.gif?raw=true" alt="ControlNet and SegmentAnything demo of a hot air balloon in various styles"> </div> Finally, we were delighted to receive contributions to our codebase from over 300 contributors, which allowed us to collaborate together in the most open way possible. 
Here are just a few of the contributions from our community: - [Model editing](https://github.com/huggingface/diffusers/pull/2721) by [@bahjat-kawar](https://github.com/bahjat-kawar), a pipeline for editing a model’s implicit assumptions - [LDM3D](https://github.com/huggingface/diffusers/pull/3668) by [@estelleafl](https://github.com/estelleafl), a diffusion model for 3D images - [DPMSolver](https://github.com/huggingface/diffusers/pull/3314) by [@LuChengTHU](https://github.com/LuChengTHU), improvements for significantly improving inference speed - [Custom Diffusion](https://github.com/huggingface/diffusers/pull/3031) by [@nupurkmr9](https://github.com/nupurkmr9), a technique for generating personalized images with only a few images of a subject Besides these, a heartfelt shoutout to the following contributors who helped us ship some of the most powerful features of Diffusers (in no particular order): * [@takuma104](https://github.com/huggingface/diffusers/commits?author=takuma104) * [@nipunjindal](https://github.com/huggingface/diffusers/commits?author=nipunjindal) * [@isamu-isozaki](https://github.com/huggingface/diffusers/commits?author=isamu-isozaki) * [@piEsposito](https://github.com/huggingface/diffusers/commits?author=piEsposito) * [@Birch-san](https://github.com/huggingface/diffusers/commits?author=Birch-san) * [@LuChengTHU](https://github.com/huggingface/diffusers/commits?author=LuChengTHU) * [@duongna21](https://github.com/huggingface/diffusers/commits?author=duongna21) * [@clarencechen](https://github.com/huggingface/diffusers/commits?author=clarencechen) * [@dg845](https://github.com/huggingface/diffusers/commits?author=dg845) * [@Abhinay1997](https://github.com/huggingface/diffusers/commits?author=Abhinay1997) * [@camenduru](https://github.com/huggingface/diffusers/commits?author=camenduru) * [@ayushtues](https://github.com/huggingface/diffusers/commits?author=ayushtues) ## Building products with 🤗 Diffusers Over the last year, we also saw many companies choosing to build their products on top of 🤗 Diffusers. Here are a couple of products that have caught our attention: - [PlaiDay](http://plailabs.com/): “PlaiDay is a Generative AI experience where people collaborate, create, and connect. Our platform unlocks the limitless creativity of the human mind, and provides a safe, fun social canvas for expression.” - [Previs One](https://previs.framer.wiki/): “Previs One is a diffuser pipeline for cinematic storyboarding and previsualization — it understands film and television compositional rules just as a director would speak them.” - [Zust.AI](https://zust.ai/): “We leverage Generative AI to create studio-quality product photos for brands and marketing agencies.” - [Dashtoon](https://dashtoon.com/): “Dashtoon is building a platform to create and consume visual content. We have multiple pipelines that load multiple LORAs, multiple control-nets and even multiple models powered by diffusers. Diffusers has made the gap between a product engineer and a ML engineer super low allowing dashtoon to ship user value faster and better.” - [Virtual Staging AI](https://www.virtualstagingai.app/): "Filling empty rooms with beautiful furniture using generative models.” - [Hexo.AI](https://www.hexo.ai/): “Hexo AI helps brands get higher ROI on marketing spends through Personalized Marketing at Scale. 
Hexo is building a proprietary campaign generation engine which ingests customer data and generates brand compliant personalized creatives.” If you’re building products on top of 🤗 Diffusers, we’d love to chat to understand how we can make the library better together! Feel free to reach out to [email protected] or [email protected]. ## Looking forward As we celebrate our first anniversary, we're grateful to our community and open-source contributors who have helped us come so far in such a short time. We're happy to share that we'll be presenting a 🤗 Diffusers demo at ICCV 2023 this fall – if you're attending, do come and see us! We'll continue to develop and improve our library, making it easier for everyone to use. We're also excited to see what the community will create next with our tools and resources. Thank you for being a part of our journey so far, and we look forward to continuing to democratize good machine learning together! 🥳 ❤️ Diffusers team --- **Acknowledgements**: Thank you to [Omar Sanseviero](https://huggingface.co/osanseviero), [Patrick von Platen](https://huggingface.co/patrickvonplaten), [Giada Pistilli](https://huggingface.co/giadap) for their reviews, and [Chunte Lee](https://huggingface.co/Chunte) for designing the thumbnail.
4
0
hf_public_repos
hf_public_repos/blog/ethics-soc-5.md
--- title: "Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings" thumbnail: /blog/assets/164_ethics-soc-5/thumbnail.png authors: - user: meg --- # Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings One of the most important things to know about “ethics” in AI is that it has to do with **values**. Ethics doesn’t tell you what’s right or wrong, it provides a vocabulary of values – transparency, safety, justice – and frameworks to prioritize among them. This summer, we were able to take our understanding of values in AI to legislators in the E.U., U.K., and U.S., to help shape the future of AI regulation. This is where ethics shines: helping carve out a path forward when laws are not yet in place. In keeping with Hugging Face’s core values of *openness* and *accountability*, we are sharing a collection of what we’ve said and done here. This includes our CEO [Clem](https://huggingface.co/clem)’s [testimony to U.S. Congress](https://twitter.com/ClementDelangue/status/1673348676478025730) and [statements at the U.S. Senate AI Insight Forum](https://twitter.com/ClementDelangue/status/1702095553503412732); our advice on the [E.U. AI Act](https://huggingface.co/blog/eu-ai-act-oss); our [comments to the NTIA on AI Accountability](https://huggingface.co/blog/policy-ntia-rfc); and our Chief Ethics Scientist [Meg](https://huggingface.co/meg)’s [comments to the Democratic Caucus](assets/164_ethics-soc-5/meg_dem_caucus.pdf). Common to many of these discussions were questions about why openness in AI can be beneficial, and we share a collection of our answers to this question [here](assets/164_ethics-soc-5/why_open.md). In keeping with our core value of *democratization*, we have also spent a lot of time speaking publicly, and have been privileged to speak with journalists in order to help explain what’s happening in the world of AI right now. 
This includes: - Comments from [Sasha](https://huggingface.co/sasha) on **AI’s energy use and carbon emissions** ([The Atlantic](https://www.theatlantic.com/technology/archive/2023/08/ai-carbon-emissions-data-centers/675094/), [The Guardian](https://www.theguardian.com/technology/2023/aug/01/techscape-environment-cost-ai-artificial-intelligence), ([twice](https://www.theguardian.com/technology/2023/jun/08/artificial-intelligence-industry-boom-environment-toll)), [New Scientist](https://www.newscientist.com/article/2381859-shifting-where-data-is-processed-for-ai-can-reduce-environmental-harm/), [The Weather Network](https://www.theweathernetwork.com/en/news/climate/causes/how-energy-intensive-are-ai-apps-like-chatgpt), the [Wall Street Journal](https://www.wsj.com/articles/artificial-intelligence-technology-energy-a3a1a8a7), ([twice](https://www.wsj.com/articles/artificial-intelligence-can-make-companies-greener-but-it-also-guzzles-energy-7c7b678))), as well as penning part of a [Wall Street Journal op-ed on the topic](https://www.wsj.com/articles/artificial-intelligence-technology-energy-a3a1a8a7); thoughts on **AI doomsday risk** ([Bloomberg](https://www.bnnbloomberg.ca/ai-doomsday-scenarios-are-gaining-traction-in-silicon-valley-1.1945116), [The Times](https://www.thetimes.co.uk/article/everything-you-need-to-know-about-ai-but-were-afraid-to-ask-g0q8sq7zv), [Futurism](https://futurism.com/the-byte/ai-expert-were-all-going-to-die), [Sky News](https://www.youtube.com/watch?v=9Auq9mYxFEE)); details on **bias in generative AI** ([Bloomberg](https://www.bloomberg.com/graphics/2023-generative-ai-bias/), [NBC](https://www.nbcnews.com/news/asian-america/tool-reducing-asian-influence-ai-generated-art-rcna89086), [Vox](https://www.vox.com/technology/23738987/racism-ai-automated-bias-discrimination-algorithm)); addressing how **marginalized workers create the data for AI** ([The Globe and Mail](https://www.theglobeandmail.com/business/article-ai-data-gig-workers/), [The Atlantic](https://www.theatlantic.com/technology/archive/2023/07/ai-chatbot-human-evaluator-feedback/674805/)); highlighting effects of **sexism in AI** ([VICE](https://www.vice.com/en/article/g5ywp7/you-know-what-to-do-boys-sexist-app-lets-men-rate-ai-generated-women)); and providing insights in MIT Technology Review on [AI text detection](https://www.technologyreview.com/2023/07/07/1075982/ai-text-detection-tools-are-really-easy-to-fool/), [open model releases](https://www.technologyreview.com/2023/07/18/1076479/metas-latest-ai-model-is-free-for-all/), and [AI transparency](https://www.technologyreview.com/2023/07/25/1076698/its-high-time-for-more-ai-transparency/). - Comments from [Nathan](https://huggingface.co/natolambert) on the state of the art on **language models and open releases** ([WIRED](https://www.wired.com/story/metas-open-source-llama-upsets-the-ai-horse-race/), [VentureBeat](https://venturebeat.com/business/todays-ai-is-not-science-its-alchemy-what-that-means-and-why-that-matters-the-ai-beat/), [Business Insider](https://www.businessinsider.com/chatgpt-openai-moat-in-ai-wars-llama2-shrinking-2023-7), [Fortune](https://fortune.com/2023/07/18/meta-llama-2-ai-open-source-700-million-mau/)). 
- Comments from [Meg](https://huggingface.co/meg) on **AI and misinformation** ([CNN](https://www.cnn.com/2023/07/17/tech/ai-generated-election-misinformation-social-media/index.html), [al Jazeera](https://www.youtube.com/watch?v=NuLOUzU8P0c), [the New York Times](https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html)); the need for **just handling of artists’ work** in AI ([Washington Post](https://www.washingtonpost.com/technology/2023/07/16/ai-programs-training-lawsuits-fair-use/)); advancements in **generative AI** and their relationship to the greater good ([Washington Post](https://www.washingtonpost.com/technology/2023/09/20/openai-dall-e-image-generator/), [VentureBeat](https://venturebeat.com/ai/generative-ai-secret-sauce-data-scraping-under-attack/)); how **journalists can better shape the evolution of AI** with their reporting ([CJR](https://www.cjr.org/analysis/how-to-report-better-on-artificial-intelligence.php)); as well as explaining the fundamental statistical concept of **perplexity** in AI ([Ars Technica](https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/)); and highlighting patterns of **sexism** ([Fast Company](https://www.fastcompany.com/90952272/chuck-schumer-ai-insight-forum)). - Comments from [Irene](https://huggingface.co/irenesolaiman) on understanding the **regulatory landscape of AI** ([MIT Technology Review](https://www.technologyreview.com/2023/09/11/1079244/what-to-know-congress-ai-insight-forum-meeting/), [Barron’s](https://www.barrons.com/articles/artificial-intelligence-chips-technology-stocks-roundtable-74b256fd)). - Comments from [Yacine](https://huggingface.co/yjernite) on **open source and AI legislation** ([VentureBeat](https://venturebeat.com/ai/hugging-face-github-and-more-unite-to-defend-open-source-in-eu-ai-legislation/), [TIME](https://time.com/6308604/meta-ai-access-open-source/)) as well as **copyright issues** ([VentureBeat](https://venturebeat.com/ai/potential-supreme-court-clash-looms-over-copyright-issues-in-generative-ai-training-data/)). - Comments from [Giada](https://huggingface.co/giadap) on the concepts of **AI “singularity”** ([Popular Mechanics](https://www.popularmechanics.com/technology/security/a43929371/ai-singularity-dangers/)) and **AI “sentience”** ([RFI](https://www.rfi.fr/fr/technologies/20230612-pol%C3%A9mique-l-intelligence-artificielle-ange-ou-d%C3%A9mon), [Radio France](https://www.radiofrance.fr/franceculture/podcasts/le-temps-du-debat/l-intelligence-artificielle-est-elle-un-nouvel-humanisme-9822329)); thoughts on **the perils of artificial romance** ([Analytics India Magazine](https://analyticsindiamag.com/the-perils-of-artificial-romance/)); and explaining **value alignment** ([The Hindu](https://www.thehindu.com/sci-tech/technology/ai-alignment-cant-be-solved-as-openai-says/article67063877.ece)). 
Some of our talks released this summer include [Giada](https://huggingface.co/giadap)’s [TED presentation on whether “ethical” generative AI is possible](https://youtu.be/NreFQFKahxw?si=49UoQeEw5IyRSRo7) (the automatic English translation subtitles are great!); [Yacine](https://huggingface.co/yjernite)’s presentations on [Ethics in Tech](https://docs.google.com/presentation/d/1viaOjX4M1m0bydZB0DcpW5pSAgK1m1CPPtTZz7zsZnE/) at the [Markkula Center for Applied Ethics](https://www.scu.edu/ethics/focus-areas/technology-ethics/) and [Responsible Openness](https://www.youtube.com/live/75OBTMu5UEc?feature=shared&t=10140) at the [Workshop on Responsible and Open Foundation Models](https://sites.google.com/view/open-foundation-models); [Katie](https://huggingface.co/katielink)’s chat about [generative AI in health](https://www.youtube.com/watch?v=_u-PQyM_mvE); and [Meg](https://huggingface.co/meg)’s presentation for [London Data Week](https://www.turing.ac.uk/events/london-data-week) on [Building Better AI in the Open](https://london.sciencegallery.com/blog/watch-again-building-better-ai-in-the-open). Of course, we have also made progress on our regular work (our “work work”). The fundamental value of *approachability* has emerged across our work, as we've focused on how to shape AI in a way that’s informed by society and human values, where everyone feels welcome. This includes [a new course on AI audio](https://huggingface.co/learn/audio-course/) from [Maria](https://huggingface.co/MariaK) and others; a resource from [Katie](https://huggingface.co/katielink) on [Open Access clinical language models](https://www.linkedin.com/feed/update/urn:li:activity:7107077224758923266/); a tutorial from [Nazneen](https://huggingface.co/nazneen) and others on [Responsible Generative AI](https://www.youtube.com/watch?v=gn0Z_glYJ90&list=PLXA0IWa3BpHnrfGY39YxPYFvssnwD8awg&index=13&t=1s); our FAccT papers on [The Gradient of Generative AI Release](https://dl.acm.org/doi/10.1145/3593013.3593981) ([video](https://youtu.be/8_-QTw8ugas?si=RG-NO1v3SaAMgMRQ)) and [Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML](https://dl.acm.org/doi/10.1145/3593013.3594002) ([video](https://youtu.be/ild63NtxTpI?si=jPlIBAL6WLtTHUwt)); as well as workshops on [Mapping the Risk Surface of Text-to-Image AI with a participatory, cross-disciplinary approach](https://avidml.org/events/tti2023/) and [Assessing the Impacts of Generative AI Systems Across Modalities and Society](https://facctconference.org/2023/acceptedcraft#modal) ([video](https://youtu.be/yJMlK7PSHyI?si=UKDkTFEIQ_rIbqhd)). We have also moved forward with our goals of *fairness* and *justice* with [bias and harm testing](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct#bias-risks-and-limitations), recently applied to the new Hugging Face multimodal model [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct). We've worked on how to operationalize *transparency* responsibly, including [updating our Content Policy](https://huggingface.co/blog/content-guidelines-update) (spearheaded by [Giada](https://huggingface.co/giadap)). 
We've advanced our support of language *diversity* on the Hub by [using machine learning to improve metadata](https://huggingface.co/blog/huggy-lingo) (spearheaded by [Daniel](https://huggingface.co/davanstrien)), and our support of *rigour* in AI by [adding more descriptive statistics to datasets](https://twitter.com/polinaeterna/status/1707447966355563000) (spearheaded by [Polina](https://huggingface.co/polinaeterna)) to foster a better understanding of what AI learns and how it can be evaluated. Drawing from our experiences this past season, we now provide a collection of many of the resources at Hugging Face that are particularly useful in current AI ethics discourse right now, available here: [https://huggingface.co/society-ethics](https://huggingface.co/society-ethics). Finally, we have been surprised and delighted by public recognition for many of the society & ethics regulars, including both [Irene](https://www.technologyreview.com/innovator/irene-solaiman/) and [Sasha](https://www.technologyreview.com/innovator/sasha-luccioni/) being selected in [MIT’s 35 Innovators under 35](https://www.technologyreview.com/innovators-under-35/artificial-intelligence-2023/) (Hugging Face makes up ¼ of the AI 35 under 35!); [Meg](https://huggingface.co/meg) being included in lists of influential AI innovators ([WIRED](https://www.wired.com/story/meet-the-humans-trying-to-keep-us-safe-from-ai/), [Fortune](https://fortune.com/2023/06/13/meet-top-ai-innovators-impact-on-business-society-chatgpt-deepmind-stability/)); and [Meg](https://huggingface.co/meg) and [Clem](https://huggingface.co/clem)’s selection in [TIME’s 100 under 100 in AI](https://time.com/collection/time100-ai/). We are also very sad to say goodbye to our colleague [Nathan](https://huggingface.co/natolambert), who has been instrumental in our work connecting ethics to reinforcement learning for AI systems. As his parting gift, he has provided further details on the [challenges of operationalizing ethical AI in RLHF](https://www.interconnects.ai/p/operationalizing-responsible-rlhf). Thank you for reading! \-\- Meg, on behalf of the [Ethics & Society regulars](https://huggingface.co/spaces/society-ethics/about) at Hugging Face
5
0
hf_public_repos
hf_public_repos/blog/gcp-partnership.md
--- title: "Hugging Face and Google partner for open AI collaboration" thumbnail: /blog/assets/173_gcp-partnership/thumbnail.jpg authors: - user: jeffboudier - user: philschmid --- # Hugging Face and Google partner for open AI collaboration ![Hugging Face and Google partner for open AI collaboration](/blog/assets/173_gcp-partnership/thumbnail.jpg) At Hugging Face, we want to enable all companies to build their own AI, leveraging open models and open source technologies. Our goal is to build an open platform, making it easy for data scientists, machine learning engineers and developers to access the latest models from the community, and use them within the platform of their choice. Today, we are thrilled to announce our strategic partnership with Google Cloud to democratize good machine learning. We will collaborate with Google across open science, open source, cloud, and hardware to enable companies to build their own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud. ## A collaboration for open science From the original Transformer to the Vision Transformer, Google has published some of the most important contributions to open AI research and prompted the AI community to change the World one model at a time, with now over 1 million models, datasets and AI applications based on transformer models hosted on Hugging Face. Our strategic partnership will help amplify efforts led by Google and Hugging Face to make the latest AI research more accessible to the community. ## A collaboration for open source From Tensorflow to JAX, Google has contributed some of the most important open source tools, enabling researchers and data scientists to build their own AI models and create a virtuous cycle of model performance improvement through rapid iteration. Our strategic partnership will accelerate our collaboration to make the latest AI innovations easily accessible through Hugging Face open-source libraries, whichever framework you use. ## A collaboration for Google Cloud customers Today, hundreds of thousands of Hugging Face users are active on Google Cloud every month, downloading models to create Generative AI applications. Our strategic partnership will enable new experiences for Google Cloud customers to easily train and deploy Hugging Face models within Google Kubernetes Engine (GKE) and Vertex AI. Customers will benefit from the unique hardware capabilities available in Google Cloud, like TPU instances, A3 VMs, powered by NVIDIA H100 Tensor Core GPUs, and C3 VMs, powered by Intel Sapphire Rapid CPUs. ## A collaboration for Hugging Face Hub users Millions of researchers, data scientists, developers and AI hobbyists rely on the Hugging Face Hub every month to easily create and experience the most recent AI models and applications. The fruits of our collaboration with Google in open science, open source and Google Cloud will be made available to Hugging Face Hub users and enable new experiences throughout 2024. Models will be easily deployed for production on Google Cloud with Inference Endpoints. AI builders will be able to accelerate their applications with TPU on Hugging Face Spaces. Organizations will be able to leverage their Google Cloud account to easily manage the usage and billing of their Enterprise Hub subscription. ## What’s next We can’t wait to make these new experiences available to you. Stay tuned for announcements starting this quarter! 
For now, we leave you with a word from our CEOs:

_“Google Cloud and Hugging Face share a vision for making generative AI more accessible and impactful for developers. This partnership ensures that developers on Hugging Face will have access to Google Cloud’s purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications,”_ says Thomas Kurian, CEO of Google Cloud.

_“With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools from Google Cloud including Vertex AI and TPUs to meaningfully advance developers’ ability to build their own AI models,”_ says Clement Delangue, CEO of Hugging Face.
6
0
hf_public_repos
hf_public_repos/blog/gemma2.md
--- title: "Welcome Gemma 2 - Google’s new open LLM" thumbnail: /blog/assets/gemma2/thumbnail.jpg authors: - user: philschmid - user: osanseviero - user: pcuenq - user: lewtun - user: tomaarsen - user: reach-vb --- # Welcome Gemma 2 - Google’s new open LLM Google released Gemma 2, the latest addition to its family of state-of-the-art open LLMs, and we are excited to collaborate with Google to ensure the best integration in the Hugging Face ecosystem. You can find the 4 open-weight models (2 base models & 2 fine-tuned ones) on the Hub. Among the features and integrations being released, we have: - [Models on the Hub](https://huggingface.co/collections/google/g-667d6600fd5220e7b967f315) - Hugging Face [Transformers integration](https://github.com/huggingface/transformers/releases/tag/v4.42.0) - Integration with Google Cloud & Inference Endpoints ## Table of contents - [What is Gemma 2?](#what-is-gemma-2) - [Technical advances in Gemma 2](#technical-advances-in-gemma-2) - [Sliding window attention](#sliding-window-attention) - [Soft-capping and attention implementations](#soft-capping-and-attention-implementations) - [Knowledge Distillation](#knowledge-distillation) - [Model Merging](#model-merging) - [Gemma 2 evaluation](#gemma-2-evaluation) - [Technical Report results](#technical-report-results) - [Open LLM Leaderboard results](#open-llm-leaderboard-results) - [How to prompt Gemma 2](#how-to-prompt-gemma-2) - [Demo](#demo) - [Using Hugging Face Transformers](#using-hugging-facetransformers) - [Integration with Google Cloud](#integration-with-google-cloud) - [Integration with Inference Endpoints](#integration-with-inference-endpoints) - [Fine-tuning with 🤗 TRL](#fine-tuning-with-trl) - [Additional Resources](#additional-resources) - [Acknowledgments](#acknowledgments) ## What is Gemma 2? Gemma 2 is Google's latest iteration of open LLMs. It comes in two sizes, 9 billion and 27 billion parameters with base (pre-trained) and instruction-tuned versions. Gemma is based on Google Deepmind Gemini and has a context length of 8K tokens: - [gemma-2-9b](https://huggingface.co/google/gemma-2-9b): Base 9B model. - [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it): Instruction fine-tuned version of the base 9B model. - [gemma-2-27b](https://huggingface.co/google/gemma-2-27b): Base 27B model. - [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it): Instruction fine-tuned version of the base 27B model. The Gemma 2 models were trained on ~2x more data than their first iteration, totaling 13 trillion tokens for the 27B version and 8 trillion tokens for the 9B version of web data (primarily English), code, and math. We don’t know the exact details of the training mix, and we can only guess that bigger and more careful data curation was a big factor in the improved performance. Gemma 2 comes with the [same license](https://ai.google.dev/gemma/terms) as the first iteration, which is a permissive license that allows redistribution, fine-tuning, commercial use, and derivative works. ## Technical advances in Gemma 2 Gemma 2 has many similarities with the first iteration. It has a context length of 8192 tokens and uses Rotary Position Embedding (RoPE). There are four main advances in Gemma 2 compared to the original Gemma: - [Sliding window attention](#sliding-window-attention): Interleave sliding window and full-quadratic attention for quality generation. 
- [Logit soft-capping](#soft-capping-and-attention-implementations): Prevents logits from growing excessively by scaling them to a fixed range, improving training. - [Knowledge Distillation](#knowledge-distillation): Leverage a larger teacher model to train a smaller model (for the 9B model). - [Model Merging](#model-merging): Combines two or more LLMs into a single new model Gemma 2 was trained on [Google Cloud TPU (27B on v5p](https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-tpu-v5p-and-ai-hypercomputer?hl=en), [9B on TPU v4)](https://cloud.google.com/tpu/docs/v4) using [JAX](https://jax.readthedocs.io/en/latest/quickstart.html) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). Gemma 2 Instruct has been optimized for dialogue applications and trained on a mix of synthetic and human-generated prompt-response pairs using Supervised Fine-Tuning (SFT), Distillation from a larger model, Reinforcement Learning from Human Feedback (RLHF) using a reward model oriented more towards conversational capabilities, and model merging using WARP to improve overall performance. Similar to the pre-training mix, no details about the fine-tuning datasets or the hyperparameters associated with SFT and [RLHF](https://huggingface.co/blog/rlhf) have been shared. ### Sliding window attention [Sliding window attention](https://huggingface.co/papers/2004.05150) is a method to reduce the memory and time requirements of the attention computations in transformer models and has been used in models such as [Mistral](https://huggingface.co/papers/2310.06825). The novelty of Gemma 2 is that a sliding window is applied to every other layer (local - 4096 tokens), while the layers in between still use full quadratic global attention (8192 tokens). We suppose this is a way to increase quality in long context situations (half of the layers still attend to all tokens) while partially benefiting from the advantages of sliding attention. ### Soft-capping and attention implementations Soft capping is a technique that prevents logits from growing excessively large without truncating them. It works by dividing the logits by a maximum value threshold (soft_cap), then passing them through a `tanh` layer (ensuring they are in the `(-1, 1)` range), and finally multiplying by the threshold again. This guarantees that the final values will be in the `(-soft_cap, +soft_cap)` interval without losing much information but stabilizing the training. Putting it all together, the logits are calculated by: `logits ← soft_cap ∗ tanh(logits/soft_cap)` Gemma 2 employs soft capping for the final layer and for every attention layer. The attention logits are capped at 50.0, and the final logits at 30.0. At the time of release, soft-capping is incompatible with Flash Attention / SDPA, but they can still be used in inference for maximum efficiency. The Gemma 2 team observed very minor differences when soft-capping is removed during inference. 
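In code, the capping operation is only a couple of lines. Here is a minimal PyTorch sketch of the formula above, purely for illustration rather than Gemma's actual implementation:

```python
import torch

def soft_cap(logits: torch.Tensor, cap: float) -> torch.Tensor:
    # Smoothly squash values into (-cap, +cap) instead of truncating them
    return cap * torch.tanh(logits / cap)

attention_scores = soft_cap(torch.randn(8, 16, 16), cap=50.0)  # attention layers
final_logits = soft_cap(torch.randn(8, 256_000), cap=30.0)     # final layer
```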
*Note: For stable fine-tuning runs, you still need to enable soft-capping and hence, we recommend fine-tuning with `eager` attention instead of SDPA.* ### Knowledge Distillation Knowledge distillation is a popular technique for training a smaller *student* model to mimic the behavior of a larger but better-performing *teacher.* This works by augmenting the next-token prediction task of LLMs with a distribution of token probabilities from the teacher (e.g., GPT-4, Claude, or Gemini), which provides a richer signal for the student to learn from. According to the Gemma 2 tech report, knowledge distillation was used to pre-train the 9B model, while the 27B model was pre-trained from scratch. For post-training, the Gemma 2 team generated a diverse set of completions from a teacher (unspecified in the report, but presumably Gemini Ultra), and then trained the student models on this synthetic data with SFT. This is the basis of many open models, such as [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and [OpenHermes](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), which are trained entirely on synthetic data from larger LLMs. Although effective, this method has drawbacks since the model capacity mismatch between the student and teacher can lead to a *train-inference mismatch*, where the text generated by the student during inference is out-of-distribution compared to that seen during training. To handle this issue, the Gemma 2 team used [“on-policy distillation”](https://arxiv.org/pdf/2306.13649), where the student generates completions from the SFT prompts. These completions are then used to compute the KL divergence between the teacher’s and student’s logits. By minimizing the KL divergence throughout training, the student learns to model the behavior of the teacher accurately while also minimizing the train-inference mismatch. This approach is quite interesting, as we’ve seen in the community that on-policy methods like online DPO produce stronger models, and one advantage of on-policy distillation is that you only need the logits from the teacher, so you don’t need to rely on reward models or LLM-as-a-judge to improve the model. It will be exciting to see if this method becomes more popular among fine-tuners in the coming months! ### Model Merging [Model merging](https://huggingface.co/blog/mlabonne/merge-models) is a technique that combines two or more LLMs into a single new model. It's relatively new and experimental and can be used without accelerators. [Mergekit](https://github.com/arcee-ai/mergekit) is a popular open-source toolkit for merging LLMs. It implements linear, SLERP, TIES, DARE, and other merging techniques. According to the Technical Report, Gemma 2 used [Warp](https://arxiv.org/abs/2406.16768), a new merging technique that merges models in three distinct stages: 1. Exponential Moving Average (EMA): This is applied during the reinforcement learning (RL) fine-tuning process. 2. Spherical Linear intERPolation (SLERP): This is applied after the RL fine-tuning of multiple policies. 3. Linear Interpolation Towards Initialization (LITI): This stage is applied after the SLERP stage. ## Gemma 2 evaluation How good are the Gemma models? Below are performance comparisons to other open models based on the Technical Report and the new version of the [open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). 
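Before turning to the numbers, here is a tiny, purely illustrative sketch of the on-policy distillation objective described above: the student's own completions are scored by both models, and training minimizes the KL divergence between the teacher's and student's token distributions (this is not Google's training code, just the core loss term):

```python
import torch
import torch.nn.functional as F

def on_policy_distillation_loss(student_logits: torch.Tensor,
                                teacher_logits: torch.Tensor) -> torch.Tensor:
    # Shapes: (batch, sequence_length, vocab_size), both computed on the
    # *student-generated* completions, which is what makes the method "on-policy".
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    teacher_log_probs = F.log_softmax(teacher_logits, dim=-1)
    # KL(teacher || student), averaged over the batch
    return F.kl_div(student_log_probs, teacher_log_probs, log_target=True, reduction="batchmean")

# Toy example with a tiny vocabulary
loss = on_policy_distillation_loss(torch.randn(2, 8, 32), torch.randn(2, 8, 32))
```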
### Technical Report results This Technical Report of Gemma 2 compares the performance of different open LLMs on the previous Open LLM Leaderboard benchmarks. | | Llama 3 (70B) | Qwen 1.5 (32B) | Gemma 2 (27B) | | ---------- | ------------- | -------------- | ------------- | | MMLU | **79.2** | 74.3 | 75.2 | | GSM8K | **76.9** | 61.1 | 75.1 | | ARC-c | 68.8 | 63.6 | **71.4** | | HellaSwag | **88.0** | 85.0 | 86.4 | | Winogrande | **85.3** | 81.5 | 83.7 | The Report also compares the performance of Small Language Models. | Benchmark | Mistral (7B) | Llama 3 (8B) | Gemma (8B) | Gemma 2 (9B) | | ---------- | ------------ | ------------ | ---------- | ------------ | | MMLU | 62.5 | 66.6 | 64.4 | **71.3** | | GSM8K | 34.5 | 45.7 | 50.9 | **62.3** | | ARC-C | 60.5 | 59.2 | 61.1 | **68.4** | | HellaSwag | **83.0** | 82.0 | 82.3 | 81.9 | | Winogrande | 78.5 | 78.5 | 79.0 | **80.6** | ### Open LLM Leaderboard results *Note: We are currently evaluating Google Gemma 2 individually on the new Open LLM Leaderboard benchmark and will update this section later today.* ## How to prompt Gemma 2 The base models have no prompt format. Like other base models, they can be used to continue an input sequence with a plausible continuation or for zero-shot/few-shot inference. The Instruct versions have a very simple conversation structure: ``` <start_of_turn>user knock knock<end_of_turn> <start_of_turn>model who is there<end_of_turn> <start_of_turn>user LaMDA<end_of_turn> <start_of_turn>model LaMDA who?<end_of_turn><eos> ``` This format has to be exactly reproduced for effective use. We’ll later show how easy it is to reproduce the instruct prompt with the chat template available in `transformers`. ## Demo You can chat with the Gemma 27B Instruct model on Hugging Chat! Check out the link here: https://huggingface.co/chat/models/google/gemma-2-27b-it. ## Using Hugging Face Transformers With Transformers [release 4.42](https://github.com/huggingface/transformers/releases/tag/v4.42.0), you can use Gemma and leverage all the tools within the Hugging Face ecosystem. To use Gemma models with transformers, make sure to use the latest `transformers` release: ```bash pip install "transformers>=4.42.3" --upgrade ``` The following snippet shows how to use `gemma-2-9b-it` with `transformers`. It requires about 18 GB of RAM, which fits many consumer GPUs. The same snippet works for `gemma-2-27b-it`, which, at 56GB of RAM, makes it a very interesting model for production use cases. Memory consumption can be further reduced by loading in 8-bit or 4-bit mode. ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="google/gemma-2-9b-it", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) messages = [ {"role": "user", "content": "Who are you? Please, answer in pirate-speak."}, ] outputs = pipe( messages, max_new_tokens=256, do_sample=False, ) assistant_response = outputs[0]["generated_text"][-1]["content"] print(assistant_response) ``` > Ahoy, matey! I be a humble ship o' words, sailin' the digital seas. They call me Gemma, a creation o' the fine folks at Google DeepMind. I be trained on a treasure trove o' texts, learnin' to speak and write like a true scallywag. > > Ask me yer questions, and I'll do me best to answer 'em, aye! 🦜📚 *We used bfloat16 because that’s the reference precision for the instruction-tuned model. Running in float16 may be faster on your hardware, and results should be similar on the 9B model. 
Do note, however, that the 27B instruction-tuned model produces erratic outputs when using float16: you must use bfloat16 for that model weight.*

You can also automatically quantize the model, loading it in 8-bit or even 4-bit mode. 4-bit loading of the large 27B version takes about 18 GB of memory to run, making it compatible with a lot of consumer cards and GPUs in Google Colab. This is how you’d load the generation pipeline in 4-bit:

```python
# 4-bit quantization via bitsandbytes (requires `pip install bitsandbytes`)
pipe = pipeline(
    "text-generation",
    model="google/gemma-2-27b-it",
    model_kwargs={
        "torch_dtype": torch.bfloat16,
        "quantization_config": {"load_in_4bit": True}
    },
)
```

For more details on using the models with `transformers`, please check [the model cards](https://huggingface.co/google/gemma-2-9b).

## Integration with Google Cloud

*Note: We are currently working on adding new containers to GKE and Vertex AI to run Google Gemma 2 efficiently. We will update this section as soon as the containers are available.*

## Fine-tuning with 🤗 TRL

Training LLMs can be technically and computationally challenging. In this section, we’ll look at the tools available in the Hugging Face ecosystem to efficiently train Gemma on consumer-size GPUs.

An example command to fine-tune Gemma on OpenAssistant’s [chat dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) can be found below. We use 4-bit quantization and [QLoRA](https://arxiv.org/abs/2305.14314) to conserve memory, targeting all the attention blocks' linear layers. Note that, unlike dense transformers, one should not target the MLP layers as they are sparse and don’t interact well with PEFT.

First, install the nightly version of 🤗 TRL and clone the repo to access the [training script](https://github.com/huggingface/trl/blob/main/examples/scripts/sft.py):

```bash
pip install "transformers>=4.42.3" --upgrade
pip install --upgrade bitsandbytes
pip install --upgrade peft
pip install git+https://github.com/huggingface/trl
git clone https://github.com/huggingface/trl
cd trl
```

Then you can run the script:

```bash
# peft tuning; single GPU; https://wandb.ai/costa-huang/huggingface/runs/l1l53cst
python \
  examples/scripts/sft.py \
  --model_name google/gemma-2-27b \
  --dataset_name OpenAssistant/oasst_top1_2023-08-25 \
  --dataset_text_field="text" \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --gradient_accumulation_steps 4 \
  --learning_rate 2e-4 \
  --report_to wandb \
  --bf16 \
  --max_seq_length 1024 \
  --lora_r 16 --lora_alpha 32 \
  --lora_target_modules q_proj k_proj v_proj o_proj \
  --load_in_4bit \
  --use_peft \
  --attn_implementation eager \
  --logging_steps=10 \
  --gradient_checkpointing \
  --output_dir models/gemma2
```

<p align="center">
  <img src="https://huggingface.co/datasets/trl-internal-testing/example-images/resolve/main/blog/gemma2/lora.png?download=true" alt="alt_text" title="image_tooltip" />
</p>

If you have more GPUs to spare, you can run training with DeepSpeed and ZeRO Stage 3:

```bash
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml \
  examples/scripts/sft.py \
  --model_name google/gemma-2-27b \
  --dataset_name OpenAssistant/oasst_top1_2023-08-25 \
  --dataset_text_field="text" \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --gradient_accumulation_steps 4 \
  --learning_rate 2e-5 \
  --report_to wandb \
  --bf16 \
  --max_seq_length 1024 \
  --attn_implementation eager \
  --logging_steps=10 \
  --gradient_checkpointing \
  --output_dir models/gemma2
```

<p align="center">
  <img
src="https://huggingface.co/datasets/trl-internal-testing/example-images/resolve/main/blog/gemma2/ds3.png?download=true?download=true" alt="alt_text" title="image_tooltip" /> </p> ## Integration with Inference Endpoints You can deploy Gemma 2 on Hugging Face's [Inference Endpoints](https://ui.endpoints.huggingface.co/philschmid/new?repository=google%2Fgemma-2-27b-it&accelerator=gpu&instance_id=aws-us-east-1-nvidia-a100-x1&task=text-generation&no_suggested_compute=true&tgi=true) using Text Generation Inference as the backend. [Text Generation Inference](https://github.com/huggingface/text-generation-inference) is a production-ready inference container developed by Hugging Face to enable easy deployment of large language models. It has features such as continuous batching, token streaming, tensor parallelism for fast inference on multiple GPUs, and production-ready logging and tracing. To deploy a Gemma 2 model, go to the [model page](https://huggingface.co/google/gemma-2-27b-it) and click on the [Deploy -> Inference Endpoints](https://ui.endpoints.huggingface.co/new?repository=google/gemma-2-27b-it) widget. Inference Endpoints supports OpenAI compatible [Messages API](https://huggingface.co/blog/tgi-messages-api) that allows you to switch from another closed model to an open one by simply changing the URL. ```python from openai import OpenAI # initialize the client but point it to TGI client = OpenAI( base_url="<ENDPOINT_URL>" + "/v1/", # replace with your endpoint url api_key="<HF_API_TOKEN>", # replace with your token ) chat_completion = client.chat.completions.create( model="tgi", messages=[ {"role": "user", "content": "Why is open-source software important?"}, ], stream=True, max_tokens=500 ) # iterate and print stream for message in chat_completion: print(message.choices[0].delta.content, end="") ``` ## Additional Resources - [Models on the Hub](https://huggingface.co/collections/google/g-667d6600fd5220e7b967f315) - [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) - [Chat demo on Hugging Chat](https://huggingface.co/chat/models/google/gemma-2-27b-it) - [Google Blog](https://blog.google/technology/developers/google-gemma-2/) - Google Notebook _coming soon_ - Vertex AI model garden _coming soon_ ## Acknowledgments Releasing such models with support and evaluations in the ecosystem would not be possible without the contributions of many community members, including [Clémentine](https://huggingface.co/clefourrier) and [Nathan](https://huggingface.co/SaylorTwift) for LLM evaluations; [Nicolas](https://huggingface.co/Narsil) for Text Generation Inference Support; [Arthur](https://huggingface.co/ArthurZ), [Sanchit](https://huggingface.co/sanchit-gandhi), [Joao](https://huggingface.co/joaogante), and [Lysandre for](https://huggingface.co/lysandre) integrating Gemma 2 into `transformers`; [Nathan](https://huggingface.co/nsarrazin) and [Victor](https://huggingface.co/victor) for making Gemma 2 available in Hugging Chat. And Thank you to the Google Team for releasing Gemma 2 and making it available to the open-source AI community!
hf_public_repos/blog/idefics.md
--- title: "Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Langage Model" thumbnail: /blog/assets/idefics/thumbnail.png authors: - user: HugoLaurencon - user: davanstrien - user: stas - user: Leyo - user: SaulLu - user: TimeRobber guest: true - user: skaramcheti guest: true - user: aps guest: true - user: giadap - user: yjernite - user: VictorSanh --- # Introducing IDEFICS: An Open Reproduction of State-of-the-Art Visual Language Model We are excited to release IDEFICS (**I**mage-aware **D**ecoder **E**nhanced à la **F**lamingo with **I**nterleaved **C**ross-attention**S**), an open-access visual language model. IDEFICS is based on [Flamingo](https://huggingface.co/papers/2204.14198), a state-of-the-art visual language model initially developed by DeepMind, which has not been released publicly. Similarly to GPT-4, the model accepts arbitrary sequences of image and text inputs and produces text outputs. IDEFICS is built solely on publicly available data and models (LLaMA v1 and OpenCLIP) and comes in two variants—the base version and the instructed version. Each variant is available at the 9 billion and 80 billion parameter sizes. The development of state-of-the-art AI models should be more transparent. Our goal with IDEFICS is to reproduce and provide the AI community with systems that match the capabilities of large proprietary models like Flamingo. As such, we took important steps contributing to bringing transparency to these AI systems: we used only publicly available data, we provided tooling to explore training datasets, we shared [technical lessons and mistakes](https://github.com/huggingface/m4-logs/blob/master/memos/README.md) of building such artifacts and assessed the model’s harmfulness by adversarially prompting it before releasing it. We are hopeful that IDEFICS will serve as a solid foundation for more open research in multimodal AI systems, alongside models like [OpenFlamingo](https://huggingface.co/openflamingo)-another open reproduction of Flamingo at the 9 billion parameter scale. Try out the [demo](https://huggingface.co/spaces/HuggingFaceM4/idefics_playground) and the [models](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct) on the Hub! <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/idefics/woodstock_ai.png" width="600" alt="Screenshot of IDEFICS generation for HF Woodstock of AI"/> </p> ## What is IDEFICS? IDEFICS is an 80 billion parameters multimodal model that accepts sequences of images and texts as input and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, etc. IDEFICS is an open-access reproduction of Flamingo and is comparable in performance with the original closed-source model across various image-text understanding benchmarks. It comes in two variants - 80 billion parameters and 9 billion parameters. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/idefics/Figure_Evals_IDEFICS.png" width="600" alt="Plot comparing the performance of Flamingo, OpenFlamingo and IDEFICS"/> </p> We also provide fine-tuned versions [idefics-80B-instruct](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct) and [idefics-9B-instruct](https://huggingface.co/HuggingFaceM4/idefics-9b-instruct) adapted for conversational use cases. 
## Training Data IDEFICS was trained on a mixture of openly available datasets: Wikipedia, Public Multimodal Dataset, and LAION, as well as a new 115B token dataset called [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS) that we created. OBELICS consists of 141 million interleaved image-text documents scraped from the web and contains 353 million images. We provide an [interactive visualization](https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f) of OBELICS that allows exploring the content of the dataset with [Nomic AI](https://home.nomic.ai/). <p align="center"> <a href="https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/idefics/obelics_nomic_map.png" width="600" alt="Interactive visualization of OBELICS"/> </a> </p> The details of IDEFICS' architecture, training methodology, and evaluations, as well as information about the dataset, are available in the [model card](https://huggingface.co/HuggingFaceM4/idefics-80b-instruct) and our [research paper](https://huggingface.co/papers/2306.16527). Additionally, we have documented [technical insights and learnings](https://github.com/huggingface/m4-logs/blob/master/memos/README.md) from the model's training, offering valuable perspective on IDEFICS' development. ## Ethical evaluation At the outset of this project, through a set of discussions, we developed an [ethical charter](https://huggingface.co/blog/ethical-charter-multimodal) that would help steer decisions made during the project. This charter sets out values, including being self-critical, transparent, and fair which we have sought to pursue in how we approached the project and the release of the models. As part of the release process, we internally evaluated the model for potential biases by adversarially prompting the model with images and text that might elicit responses we do not want from the model (a process known as red teaming). Please try out IDEFICS with the [demo](https://huggingface.co/spaces/HuggingFaceM4/idefics_playground), check out the corresponding [model cards](https://huggingface.co/HuggingFaceM4/idefics-80b) and [dataset card](https://huggingface.co/datasets/HuggingFaceM4/OBELICS) and let us know your feedback using the community tab! We are committed to improving these models and making large multimodal AI models accessible to the machine learning community. ## License The model is built on top of two pre-trained models: [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) and [huggyllama/llama-65b](https://huggingface.co/huggyllama/llama-65b). The first was released under an MIT license, while the second was released under a specific non-commercial license focused on research purposes. As such, users should comply with that license by applying directly to [Meta's form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform). The two pre-trained models are connected to each other with newly initialized parameters that we train. These are not based on any of the two base frozen models forming the composite model. We release the additional weights we trained under an MIT license. ## Getting Started with IDEFICS IDEFICS models are available on the Hugging Face Hub and supported in the last `transformers` version. 
Here is a code sample to try it out: ```python import torch from transformers import IdeficsForVisionText2Text, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b-instruct" model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) processor = AutoProcessor.from_pretrained(checkpoint) # We feed to the model an arbitrary sequence of text strings and images. Images can be either URLs or PIL Images. prompts = [ [ "User: What is in this image?", "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG", "<end_of_utterance>", "\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>", "\nUser:", "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052", "And who is that?<end_of_utterance>", "\nAssistant:", ], ] # --batched mode inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device) # --single sample mode # inputs = processor(prompts[0], return_tensors="pt").to(device) # Generation args exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i, t in enumerate(generated_text): print(f"{i}:\n{t}\n") ```
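If the bfloat16 checkpoint is too large for your GPU, you can also try loading it with 4-bit quantization. This is a minimal sketch rather than an official recipe: it assumes `bitsandbytes` is installed and a CUDA GPU is available, and the actual memory savings will depend on your setup. The rest of the generation code above can then be reused as-is.

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig

checkpoint = "HuggingFaceM4/idefics-9b-instruct"

# 4-bit quantization via bitsandbytes; computation still happens in bfloat16
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint,
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)
processor = AutoProcessor.from_pretrained(checkpoint)
```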
hf_public_repos/blog/peft_merging.md
--- title: "🤗 PEFT welcomes new merging methods" thumbnail: /blog/assets/peft_merging/thumbnail.png authors: - user: smangrul - user: sayakpaul --- # 🤗 PEFT welcomes new merging methods Model merging has quickly become the de-facto standard of pushing the performance limits of large language models. On the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), we continue to notice merged models topping up the charts. Our very own Omar Sanseviero, made a little sprint on model merging and [discovered](https://twitter.com/osanseviero/status/1745198646876885267) interesting findings. The typical way of model merging, so far, has been to take a set of models and merge them. [This post](https://huggingface.co/blog/mlabonne/merge-models) gives a nice primer on this topic. Generally, for merging multiple models, we first download their checkpoints and then perform merging. Depending on the merge algorithm and the sizes of the underlying model, this process can be quite memory-intensive. The `mergekit` library provides optimized ways for handling this, making the process manageable on limited memory. But what if we wanted to merge different “adapters” obtained from the ***same*** model? You might have four different LoRA checkpoints obtained from the same base model, and you want to experiment with different merging techniques. Eventually, you want to settle with the best merge, giving you the best results for your task. A couple of things become evident when approaching such a developer experience: - When dealing with adapters such as LoRA, it’s common for users to swap in and out different adapters or even combine them. Adapters can be activated, de-activated, or completely swapped out of the memory. Therefore, we need to do the “merging” part on the fly (as opposed to the method described above) to provide a seamless experience to the users. - Different adapters might have different requirements for merging. The merging algorithm for LoRA might not equally translate to IA3, for example. With these aspects in mind, we [shipped](https://github.com/huggingface/peft/pull/1364) new merging methods targeting the popular LoRA adapters in 🤗 PEFT. In this post, we want to take you through the methods available, code examples to help you get cracking, impressive results, and our future plans. Let’s get started 🚀 #### Table of content * [Methods for combining/merging LoRA adapters](#methods-for-combiningmerging-lora-adapters) * [How do I merge my LoRA adapters?](#how-do-i-merge-my-lora-adapters) * [Extending to text-to-image generation](#extending-to-text-to-image-generation) * [Observations](#observations) ## Methods for combining/merging LoRA adapters ### Concatenation (`cat`) In this method, the LoRA matrices are concatenated. For example, if we have 2 LoRA adapters \\( (A_1, B_1) \\) and \\( (A_2, B_2) \\) along with weights \\( weight_1 \\) and \\( weight_2 \\) for weighted merging of these two adapters, then the merging happens as follows: $$A_{merged} = concat(weight_1*scaling_1*A_1, weight_2*scaling_2*A_2, dim=0)$$ $$B_{merged} = concat(B_1, B_2, dim=1)$$ where \\( shape(A_{merged}) = (rank_1+rank_2,\ d) )\\ and \\( shape(B_{merged}) = (d,\ rank_1+rank_2) \\). Now, the output of this new merged LoRA layer would be as if the original 2 LoRAs were active with weights \\(weight_1\\) and \\(weight_2\\) for applied to the first and second adapters, respectively. 
$$h = W_0x + B_{merged}A_{merged}x$$ Here, we can observe that: $$B_{merged}A_{merged} = weight_1 * scaling_1 * B_1A_1 + weight_2 * scaling_2 * B_2A_2$$ <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 5px;"> 🧠 This is the exact weighted merging of LoRA adapters. It is also available via <a href=https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference>PEFT integration of Diffusers</a> when you call <code>set_adapters()</code> wherein instead of creating a new merged adapter, the active adapters are combined sequentially, as shown on the right-hand side of the above equation. When using this method, it allows for participating LoRA adapters to have different ranks. </div> ### Linear/Task Arithmetic (`linear`) In this method, the LoRA matrices are involved in weighted sum. This is what the Task arithmetic paper implements on task weights. In task arithmetic, one first computes the task weights which is difference between finetuned weights and base model weights, then does a weighted sum of these task weights. Here, the delta weights considered are the individual matrices \\(A\\) and \\(B\\) instead of their product \\(BA\\). This method can be applied only when all the participating LoRA adapters have same rank. Let’s go through an example. Consider 2 LoRA adapters \\( (A_1, B_1) \\) & \\( (A_2, B_2) \\) along with weights \\( weight_1 \\) and \\( weight_2 \\) for weighted merging of these two adapters, then the merging happens as follows: $$A_{merged} = sqrt(weight_1 * scaling_1) * A_1+ sqrt (weight_2 * scaling_2) * A_2$$ $$B_{merged} = sqrt(weight_1 * scaling_1) * B_1+ sqrt (weight_2 * scaling_2) * B_2$$ For more details, please refer to the paper: [Editing Models with Task Arithmetic](https://arxiv.org/abs/2212.04089). ### SVD (`svd`) Instead of considering individual matrices \\( A \\) and \\( B \\) as task weights, their product \\( BA \\) which is the delta weight is considered the task weight. Let’s continue with the example from the previous sub-sections. Here, first the delta weight of merged combination is computed as follows: $$delta_{merged} = weight_1 * scaling_1 * B_1A_1 + weight_2 * scaling_2 * B_2A_2$$ After getting the above-merged delta weight, SVD (singular value decomposition) is applied to get the approximates \\( A_{merged\_approx} \\) and \\( B_{merged\_approx} \\): <div align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/svd_full_eqn.jpg" width=300/> </div><br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 5px;"> 🧠 Similar to <code>cat</code> method, this method also allows for LoRA adapters with different ranks. In addition, one can choose the rank for the resultant merged LoRA adapter which defaults to the maximum rank among the participating LoRA adapters. A limitation of this approach is that it requires a lot of GPU memory for performing the SVD operation. </div> ### TIES (`ties` , `ties_svd` ) This builds upon the `linear` and `svd` methods by changing the way merged adapters are computed from task weights and result in the `ties` and `ties_svd` methods, respectively. In TIES (TRIM, ELECT SIGN & MERGE), one first computes the task weights which in our case would be the LoRA adapters \\( A \\), \\( B \\) for non svd variant and their product \\( BA \\) for svd variant. 
After this, you prune the smallest values of the task weights and retain the top-k values based on the specified fraction `density` . Then, you calculate the majority sign mask from the participating pruned task weights, multiply task tensors with the user provided weightage followed by disjoint merge based on the majority sign mask. For majority sign mask computation, you have two options: 1. `total` considers the magnitude as well as sign to get the majority sign, i.e., sum up all the corresponding weights; 2. `frequency` only considers the weight sign to obtain the majority sign, i.e., sum up the sign of all the corresponding weights. For more details, refer to the paper: [TIES-Merging: Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708). ### DARE (`dare_linear` , `dare_ties` , `dare_linear_svd` , `dare_ties_svd` ) This also builds upon the `linear` and `svd` methods wherein the task weights are LoRA adapters \\( A \\), \\( B \\) for non svd variant and their product \\( BA \\) for svd variant. `DARE` method proposed in [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) first randomly prunes the values of the task weight based on the specified fraction `1-density`, and then rescales the pruned task weights by `1/density`. `DARE` is a general plug-in and can be applied to any existing model merging methods. We have implemented `DARE` with Linear/Task Arithmetic (`*_linear*`) and TIES (`*_ties*`). For `*_linear*` variants of `DARE`, we first use `DARE` to randomly prune task weights and then perform weighted sum of task tensors based on user specified weightage for participating LoRA adapters. For `*_ties*` variants of `DARE`, we first use `DARE` to get the pruned task weights, then adopt the last 2 steps of `ties`, i.e., calculating majority sign mask and using the mask to perform disjoint merge of the task weights. ### Magnitude Prune (`magnitude_prune` , `magnitude_prune_svd` ) This also builds upon the `linear` and `svd` methods wherein the task weights are LoRA adapters \\( A \\), \\( B \\) for non svd variant and their product \\( BA \\) for svd variant. In this method, you first prune the smallest values of the task weights and retain the top-k values based on the specified fraction `density`. Then, you carry out the weighted sum of task tensors based on user-specified weightage for participating LoRA adapters. ## How do I merge my LoRA adapters? In PEFT, when using LoRA, you can use the class method [`add_weighted_adapter()`](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.add_weighted_adapter) to try the different combining methods. For example, below you can see how we can combine three LoRA adapters using `ties` method and the resulting generations from the new merged adapter. **We can observe that merged adapter is able to retain the capabilities of individual adapters**: ![instruct_ad_sql](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/instruct_ad_sql.png) You can find the above example in the PEFT repo’s [examples](https://github.com/huggingface/peft/blob/main/examples/multi_adapter_examples/Lora_Merging.ipynb). Let’s take another example, as shown below, using `magnitude_prune` method and the resulting generations. 
![mental_health_hinglish](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/mental_health_hinglish.png) Now, what if we want to use the merged adapters capability to answer a mental health related query in Hinglish? This would require capabilities from both the adapters to be used. Below we can see the results on the query “Sad feelings ko kaise dur kare?” (Translation: How to get rid of sad feelings?). When, all adapters are disabled and the base model is used, the response starts with it being an AI followed by general suggestions. When hinglish adapter is enabled, the response is in Hinglish and short following the finetuning data but doesn’t do a good job at giving concrete suggestions to help overcome sadness. When mental_health adapter is enabled, the response is akin to what a human would say but sadly it isn’t in Hinglish. When merge adapter is enabled, we can see that the response is in Hinglish and short while giving concrete suggestions which can be found in the response of mental_health adapter such as exercising, spending time with friends, reading, meditation and focusing on positive thinking. Therefore, we can observe that merging adapters can lead to combining their individual capabilities to support new use cases. ![instruct_ad_sql](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/merged_capabilities.png) Finally, let’s take the example of `dare_linear` and check the resulting generations. ![ad_sql](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/ad_sql.png) We have a dedicated developer guide for these merging methods in PEFT which you can find [here](https://huggingface.co/docs/peft/developer_guides/model_merging). ## Extending to text-to-image generation In this section, we show you how to take advantage of these merging methods for text-to-image generation using 🤗 Diffusers. Note that Diffusers [already relies on PEFT](https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference) for all things LoRA, including training and inference. However, currently, it’s not possible to benefit from the new merging methods when calling [`set_adapters()`](https://huggingface.co/docs/diffusers/main/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters) on a Diffusers pipeline. This is why we are [openly discussing](https://github.com/huggingface/diffusers/issues/6892) with the community how to best support it natively from within Diffusers. But thanks to PEFT, there’s always a way to circumvent around this. We will use the [`add_weighted_adapter()`](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.add_weighted_adapter) functionality for this. Precisely, these are the steps that we will take to combine the [“toy-face” LoRA](https://huggingface.co/CiroN2022/toy-face) and the [“Pixel-Art” loRA](https://huggingface.co/nerijs/pixel-art-xl), and experiment with different merging techniques: - Obtain `PeftModel`s from these LoRA checkpoints. - Merge the `PeftModel`s using the `add_weighted_adapter()` method with a merging method of our choice. - Assign the merged model to the respective component of the underlying `DiffusionPipeline`. Let’s see this in action. All the code shown in the parts below come from [this Colab Notebook](https://colab.research.google.com/github/huggingface/peft/blob/main/examples/multi_adapter_examples/multi_adapter_weighted_inference_diffusers.ipynb). 
Since both the LoRA checkpoints use [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) UNet as the their base model, we will first load the UNet: ```python from diffusers import UNet2DConditionModel import torch unet = UNet2DConditionModel.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16", subfolder="unet", ).to("cuda") ``` We then load the actual SDXL pipeline and the LoRA checkpoints. We start with the “CiroN2022/toy-face” LoRA: ```python from diffusers import DiffusionPipeline import copy sdxl_unet = copy.deepcopy(unet) pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16, unet=unet ).to("cuda") pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") ``` Now, obtain the `PeftModel` from the loaded LoRA checkpoint: ```python from peft import get_peft_model, LoraConfig toy_peft_model = get_peft_model( sdxl_unet, pipe.unet.peft_config["toy"], adapter_name="toy" ) original_state_dict = {f"base_model.model.{k}": v for k, v in pipe.unet.state_dict().items()} toy_peft_model.load_state_dict(original_state_dict, strict=True) ``` 💡 You can optionally push the `toy_peft_model` to the Hub using: `toy_peft_model.push_to_hub("toy_peft_model", token=TOKEN)`. Next, we do the same for the “nerijs/pixel-art-xl” LoRA: ```python pipe.delete_adapters("toy") sdxl_unet.delete_adapters("toy") pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.set_adapters(adapter_names="pixel") pixel_peft_model = get_peft_model( sdxl_unet, pipe.unet.peft_config["pixel"], adapter_name="pixel" ) original_state_dict = {f"base_model.model.{k}": v for k, v in pipe.unet.state_dict().items()} pixel_peft_model.load_state_dict(original_state_dict, strict=True) ``` Now, we are all equipped with weighted adapter inference! We start by loading all the necessary things: ```python from peft import PeftModel from diffusers import UNet2DConditionModel, DiffusionPipeline import torch base_unet = UNet2DConditionModel.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16", subfolder="unet", ).to("cuda") toy_id = "sayakpaul/toy_peft_model" model = PeftModel.from_pretrained(base_unet, toy_id, use_safetensors=True, subfolder="toy", adapter_name="toy") model.load_adapter("sayakpaul/pixel_peft_model", use_safetensors=True, subfolder="pixel", adapter_name="pixel") ``` Now, combine the LoRA adapters — the moment we all have been waiting for! ```python model.add_weighted_adapter( adapters=["toy", "pixel"], weights=[0.7, 0.3], combination_type="linear", adapter_name="toy-pixel" ) model.set_adapters("toy-pixel") ``` Here, we are just starting with the “linear” merging strategy but will experiment with other exotic merging algorithms, such as TIES. 
We finally assign the `model` to our `DiffusionPipeline` and perform inference:

```python
model = model.to(dtype=torch.float16, device="cuda")

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=model, variant="fp16", torch_dtype=torch.float16,
).to("cuda")

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![toy_face_hacker](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/toy_face_hacker.png)

Let’s try the `ties_svd` method. You can find the example notebook [here](https://github.com/pacman100/peft-dreambooth-ui/blob/main/lora_merging.ipynb).

```python
pipe.unet.add_weighted_adapter(
    ["teapot","watercolour"],
    [1.0, 1.0],
    "merge",
    combination_type="ties_svd",
    density=0.5
)
```

![cat_teapot](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/cat_teapot.png)

Now, let’s try combining two style LoRAs using `dare_linear`:

```python
model.add_weighted_adapter(
    adapters=["toy", "pixel"],
    weights=[1.0, 1.0],
    combination_type="dare_linear",
    adapter_name="merge",
    density=0.7
)
```

![toy_face_pixel_art.png](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/toy_face_pixel_art.png)

Now, let’s try the `ties` method with `majority_sign_method="frequency"`:

```python
model.add_weighted_adapter(
    adapters=["toy", "sticker"],
    weights=[1.0, 1.0],
    combination_type="ties",
    adapter_name="merge",
    density=0.5,
    majority_sign_method="frequency"
)
```

![indian_goddess](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft_merging/indian_goddess.png)

## Observations

1. In most scenarios, the `cat` method will give great results. So, start with that. However, note that if you combine many adapters, the resulting merged adapter can have a large size due to concatenation, leading to OOM. So, when exploring a few adapters, `cat` would be a good starting point.
2. If you want to explore further, or if `cat` isn’t working, try `linear`, `magnitude_prune` and `dare_linear` in that order. For `magnitude_prune` and `dare_linear`, we found that higher `density` values around 0.7-0.8 work better.
3. When using `ties`, we found that in many cases `majority_sign_method="frequency"` performs better than `majority_sign_method="total"` (`total` is currently the default). For ties, a good default value for `density` is 0.5. You can then try tuning this lower or higher based on your observations after merging the adapters.
4. `dare_ties` wasn’t giving good results.
5. When working with Stable Diffusion LoRA adapters that have different ranks, you can try the `*svd` family of methods. Note that these require more GPU memory and take around 1.5 minutes to create the merged adapter due to the expensive SVD operations. `ties_svd` gave good results when combining `subject` + `style` LoRAs as seen in an example above. When combining 2 `style` adapters, `dare_linear` with high `density` or `ties` with `majority_sign_method="frequency"` seems to work better as seen in the examples above.

## Acknowledgements

We’re grateful to Le Yu and Prateek Yadav, authors of DARE and TIES, for their generous feedback and guidance on the [PR](https://github.com/huggingface/peft/pull/1364). To honor their efforts, we have added them as the co-authors of the PR. Thanks to Prateek and Le for reviewing the blog post draft as well.

## Useful links

1.
[Editing Models with Task Arithmetic](https://arxiv.org/abs/2212.04089) 2. [TIES-Merging: Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708) 3. [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) 4. [mergekit](https://github.com/cg123/mergekit): Tools for merging pretrained large language models. 5. [PEFT integration in Diffusers](https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference) 6. [Model merging guide for PEFT users](https://huggingface.co/docs/peft/developer_guides/model_merging) ## Citations ``` @inproceedings{ ilharco2023editing, title={Editing models with task arithmetic}, author={Gabriel Ilharco and Marco Tulio Ribeiro and Mitchell Wortsman and Ludwig Schmidt and Hannaneh Hajishirzi and Ali Farhadi}, booktitle={The Eleventh International Conference on Learning Representations }, year={2023}, url={https://openreview.net/forum?id=6t0Kwf8-jrj} } ``` ``` @inproceedings{ yadav2023tiesmerging, title={{TIES}-Merging: Resolving Interference When Merging Models}, author={Prateek Yadav and Derek Tam and Leshem Choshen and Colin Raffel and Mohit Bansal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=xtaX3WyCj1} } ``` ``` @misc{yu2023language, title={Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch}, author={Le Yu and Bowen Yu and Haiyang Yu and Fei Huang and Yongbin Li}, year={2023}, eprint={2311.03099}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{ mergekit, author = {Charles O. Goddard and contributors}, title = {mergekit}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/arcee-ai/mergekit}} } ```
hf_public_repos/audio-transformers-course/chapters/en/chapter2/audio_classification_pipeline.mdx
# Audio classification with a pipeline Audio classification involves assigning one or more labels to an audio recording based on its content. The labels could correspond to different sound categories, such as music, speech, or noise, or more specific categories like bird song or car engine sounds. Before diving into details on how the most popular audio transformers work, and before fine-tuning a custom model, let's see how you can use an off-the-shelf pre-trained model for audio classification with only a few lines of code with 🤗 Transformers. Let's go ahead and use the same [MINDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset that you have explored in the previous unit. If you recall, MINDS-14 contains recordings of people asking an e-banking system questions in several languages and dialects, and has the `intent_class` for each recording. We can classify the recordings by intent of the call. Just as before, let's start by loading the `en-AU` subset of the data to try out the pipeline, and upsample it to 16kHz sampling rate which is what most speech models require. ```py from datasets import load_dataset from datasets import Audio minds = load_dataset("PolyAI/minds14", name="en-AU", split="train") minds = minds.cast_column("audio", Audio(sampling_rate=16_000)) ``` To classify an audio recording into a set of classes, we can use the `audio-classification` pipeline from 🤗 Transformers. In our case, we need a model that's been fine-tuned for intent classification, and specifically on the MINDS-14 dataset. Luckily for us, the Hub has a model that does just that! Let's load it by using the `pipeline()` function: ```py from transformers import pipeline classifier = pipeline( "audio-classification", model="anton-l/xtreme_s_xlsr_300m_minds14", ) ``` This pipeline expects the audio data as a NumPy array. All the preprocessing of the raw audio data will be conveniently handled for us by the pipeline. Let's pick an example to try it out: ```py example = minds[0] ``` If you recall the structure of the dataset, the raw audio data is stored in a NumPy array under `["audio"]["array"]`, let's pass it straight to the `classifier`: ```py classifier(example["audio"]["array"]) ``` **Output:** ```out [ {"score": 0.9631525278091431, "label": "pay_bill"}, {"score": 0.02819698303937912, "label": "freeze"}, {"score": 0.0032787492964416742, "label": "card_issues"}, {"score": 0.0019414445850998163, "label": "abroad"}, {"score": 0.0008378693601116538, "label": "high_value_payment"}, ] ``` The model is very confident that the caller intended to learn about paying their bill. Let's see what the actual label for this example is: ```py id2label = minds.features["intent_class"].int2str id2label(example["intent_class"]) ``` **Output:** ```out "pay_bill" ``` Hooray! The predicted label was correct! Here we were lucky to find a model that can classify the exact labels that we need. A lot of the times, when dealing with a classification task, a pre-trained model's set of classes is not exactly the same as the classes you need the model to distinguish. In this case, you can fine-tune a pre-trained model to "calibrate" it to your exact set of class labels. We'll learn how to do this in the upcoming units. Now, let's take a look at another very common task in speech processing, _automatic speech recognition_.
hf_public_repos/audio-transformers-course/chapters/en/chapter2/tts_pipeline.mdx
# Audio generation with a pipeline Audio generation encompasses a versatile set of tasks that involve producing an audio output. The tasks that we will look into here are speech generation (aka "text-to-speech") and music generation. In text-to-speech, a model transforms a piece of text into lifelike spoken language sound, opening the door to applications such as virtual assistants, accessibility tools for the visually impaired, and personalized audiobooks. On the other hand, music generation can enable creative expression, and finds its use mostly in entertainment and game development industries. In 🤗 Transformers, you'll find a pipeline that covers both of these tasks. This pipeline is called `"text-to-audio"`, but for convenience, it also has a `"text-to-speech"` alias. Here we'll use both, and you are free to pick whichever seems more applicable for your task. Let's explore how you can use this pipeline to start generating audio narration for texts, and music with just a few lines of code. This pipeline is new to 🤗 Transformers and comes part of the version 4.32 release. Thus you'll need to upgrade the library to the latest version to get the feature: ```bash pip install --upgrade transformers ``` ## Generating speech Let's begin by exploring text-to-speech generation. First, just as it was the case with audio classification and automatic speech recognition, we'll need to define the pipeline. We'll define a text-to-speech pipeline since it best describes our task, and use the [`suno/bark-small`](https://huggingface.co/suno/bark-small) checkpoint: ```python from transformers import pipeline pipe = pipeline("text-to-speech", model="suno/bark-small") ``` The next step is as simple as passing some text through the pipeline. All the preprocessing will be done for us under the hood: ```python text = "Ladybugs have had important roles in culture and religion, being associated with luck, love, fertility and prophecy. " output = pipe(text) ``` In a notebook, we can use the following code snippet to listen to the result: ```python from IPython.display import Audio Audio(output["audio"], rate=output["sampling_rate"]) ``` The model that we're using with the pipeline, Bark, is actually multilingual, so we can easily substitute the initial text with a text in, say, French, and use the pipeline in the exact same way. It will pick up on the language all by itself: ```python fr_text = "Contrairement à une idée répandue, le nombre de points sur les élytres d'une coccinelle ne correspond pas à son âge, ni en nombre d'années, ni en nombre de mois. " output = pipe(fr_text) Audio(output["audio"], rate=output["sampling_rate"]) ``` Not only is this model multilingual, it can also generate audio with non-verbal communications and singing. Here's how you can make it sing: ```python song = "♪ In the jungle, the mighty jungle, the ladybug was seen. ♪ " output = pipe(song) Audio(output["audio"], rate=output["sampling_rate"]) ``` We'll dive deeper into Bark specifics in the later unit dedicated to Text-to-speech, and will also show how you can use other models for this task. Now, let's generate some music! ## Generating music Just as before, we'll begin by instantiating a pipeline. 
For music generation, we'll define a text-to-audio pipeline, and initialise it with the pretrained checkpoint [`facebook/musicgen-small`](https://huggingface.co/facebook/musicgen-small) ```python music_pipe = pipeline("text-to-audio", model="facebook/musicgen-small") ``` Let's create a text description of the music we'd like to generate: ```python text = "90s rock song with electric guitar and heavy drums" ``` We can control the length of the generated output by passing an additional `max_new_tokens` parameter to the model. ```python forward_params = {"max_new_tokens": 512} output = music_pipe(text, forward_params=forward_params) Audio(output["audio"][0], rate=output["sampling_rate"]) ```
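If you want to keep the generated clip rather than just play it in the notebook, you can write the array to a WAV file. Here is a minimal sketch, assuming the `soundfile` library is installed and that `output["audio"][0]` is the waveform produced above (the `squeeze()` call drops any singleton channel dimension):

```python
import soundfile as sf

# The pipeline returns a NumPy waveform plus the sampling rate it was generated at.
waveform = output["audio"][0].squeeze()
sf.write("musicgen_out.wav", waveform, samplerate=output["sampling_rate"])
```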
hf_public_repos/audio-transformers-course/chapters/en/chapter2/introduction.mdx
# Unit 2. A gentle introduction to audio applications Welcome to the second unit of the Hugging Face audio course! Previously, we explored the fundamentals of audio data and learned how to work with audio datasets using the 🤗 Datasets and 🤗 Transformers libraries. We discussed various concepts such as sampling rate, amplitude, bit depth, waveform, and spectrograms, and saw how to preprocess data to prepare it for a pre-trained model. At this point you may be eager to learn about the audio tasks that 🤗 Transformers can handle, and you have all the foundational knowledge necessary to dive in! Let's take a look at some of the mind-blowing audio task examples: * **Audio classification**: easily categorize audio clips into different categories. You can identify whether a recording is of a barking dog or a meowing cat, or what music genre a song belongs to. * **Automatic speech recognition**: transform audio clips into text by transcribing them automatically. You can get a text representation of a recording of someone speaking, like "How are you doing today?". Rather useful for note taking! * **Speaker diarization**: Ever wondered who's speaking in a recording? With 🤗 Transformers, you can identify which speaker is talking at any given time in an audio clip. Imagine being able to differentiate between "Alice" and "Bob" in a recording of them having a conversation. * **Text to speech**: create a narrated version of a text that can be used to produce an audio book, help with accessibility, or give a voice to an NPC in a game. With 🤗 Transformers, you can easily do that! In this unit, you'll learn how to use pre-trained models for some of these tasks using the `pipeline()` function from 🤗 Transformers. Specifically, we'll see how the pre-trained models can be used for audio classification, automatic speech recognition and audio generation. Let's get started!
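To give you a taste of what’s ahead, every task in this unit follows the same pattern: pick a checkpoint from the Hub, instantiate a `pipeline()` for the task, and feed it your audio or text. As a small preview sketch (the `openai/whisper-small` checkpoint and the MINDS-14 sample are just illustrative choices), automatic speech recognition looks like this:

```python
from datasets import load_dataset, Audio
from transformers import pipeline

# Any ASR checkpoint from the Hub works here; this is just one option.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Grab a single 16 kHz example from a small speech dataset and transcribe it.
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train[:1]")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(asr(dataset[0]["audio"]["array"]))  # returns a dict with the transcription under "text"
```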
hf_public_repos/audio-transformers-course/chapters/en/chapter2/hands_on.mdx
# Hands-on exercise This exercise is not graded and is intended to help you become familiar with the tools and libraries that you will be using throughout the rest of the course. If you are already experienced in using Google Colab, 🤗 Datasets, librosa and 🤗 Transformers, you may choose to skip this exercise. 1. Create a [Google Colab](https://colab.research.google.com) notebook. 2. Use 🤗 Datasets to load the train split of the [`facebook/voxpopuli` dataset](https://huggingface.co/datasets/facebook/voxpopuli) in language of your choice in streaming mode. 3. Get the third example from the `train` part of the dataset and explore it. Given the features that this example has, what kinds of audio tasks can you use this dataset for? 4. Plot this example's waveform and spectrogram. 5. Go to [🤗 Hub](https://huggingface.co/models), explore pretrained models and find one that can be used for automatic speech recognition for the language that you have picked earlier. Instantiate a corresponding pipeline with the model you found, and transcribe the example. 6. Compare the transcription that you get from the pipeline to the transcription provided in the example. If you struggle with this exercise, feel free to take a peek at an [example solution](https://colab.research.google.com/drive/1NGyo5wFpRj8TMfZOIuPaJHqyyXCITftc?usp=sharing). Discovered something interesting? Found a cool model? Got a beautiful spectrogram? Feel free to share your work and discoveries on Twitter! In the next chapters you'll learn more about various audio transformer architectures and will train your own model!
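If you would like a small nudge before peeking at the full solution, here is one possible starting point for steps 2 and 3, plus the waveform half of step 4. The `"nl"` language configuration is only an example; substitute the language you picked.

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# Step 2: stream the train split for one language (Dutch here, as an example).
dataset = load_dataset("facebook/voxpopuli", "nl", split="train", streaming=True)

# Step 3: take the third example from the stream and inspect its features.
example = list(dataset.take(3))[-1]
print(example.keys())

# Step 4 (partly): plot the waveform of that example.
plt.figure()
plt.plot(example["audio"]["array"])
plt.title("Waveform")
plt.show()
```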
hf_public_repos/audio-transformers-course/chapters/en/chapter3/supplemental_reading.mdx
# Supplemental reading and resources

If you'd like to further explore different Transformer architectures and learn about their various applications in speech processing, check out this recent paper:

### Transformers in Speech Processing: A Survey

by Siddique Latif, Aun Zaidi, Heriberto Cuayahuitl, Fahad Shamshad, Moazzam Shoukat, Junaid Qadir

"The remarkable success of transformers in the field of natural language processing has sparked the interest of the speech-processing community, leading to an exploration of their potential for modeling long-range dependencies within speech sequences. Recently, transformers have gained prominence across various speech-related domains, including automatic speech recognition, speech synthesis, speech translation, speech para-linguistics, speech enhancement, spoken dialogue systems, and numerous multimodal applications. In this paper, we present a comprehensive survey that aims to bridge research studies from diverse subfields within speech technology. By consolidating findings from across the speech technology landscape, we provide a valuable resource for researchers interested in harnessing the power of transformers to advance the field. We identify the challenges encountered by transformers in speech processing while also offering insights into potential solutions to address these issues."

[arxiv.org/abs/2303.11607](https://arxiv.org/abs/2303.11607)
hf_public_repos/audio-transformers-course/chapters/en/chapter3/classification.mdx
# Audio classification architectures The goal of audio classification is to predict a class label for an audio input. The model can predict a single class label that covers the entire input sequence, or it can predict a label for every audio frame — typically every 20 milliseconds of input audio — in which case the model's output is a sequence of class label probabilities. An example of the former is detecting what bird is making a particular sound; an example of the latter is speaker diarization, where the model predicts which speaker is speaking at any given moment. ## Classification using spectrograms One of the easiest ways to perform audio classification is to pretend it's an image classification problem! Recall that a spectrogram is a two-dimensional tensor of shape `(frequencies, sequence length)`. In the [chapter on audio data](../chapter1/audio_data) we plotted these spectrograms as images. Guess what? We can literally treat the spectrogram as an image and pass it into a regular CNN classifier model such as ResNet and get very good predictions. Even better, we can use a image transformer model such as ViT. This is what **Audio Spectrogram Transformer** does. It uses the ViT or Vision Transformer model, and passes it spectrograms as input instead of regular images. Thanks to the transformer's self-attention layers, the model is better able to capture global context than a CNN is. Just like ViT, the AST model splits the audio spectrogram into a sequence of partially overlapping image patches of 16×16 pixels. This sequence of patches is then projected into a sequence of embeddings, and these are given to the transformer encoder as input as usual. AST is an encoder-only transformer model and so the output is a sequence of hidden-states, one for each 16×16 input patch. On top of this is a simple classification layer with sigmoid activation to map the hidden-states to classification probabilities. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/ast.png" alt="The audio spectrogram transformer works on a sequence of patches taken from the spectrogram"> </div> Image from the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/pdf/2104.01778.pdf) <Tip> 💡 Even though here we pretend spectrograms are the same as images, there are important differences. For example, shifting the contents of an image up or down generally does not change the meaning of what is in the image. However, shifting a spectrogram up or down will change the frequencies that are in the sound and completely change its character. Images are invariant under translation but spectrograms are not. Treating spectrograms as images can work very well in practice, but keep in mind they are not really the same thing. </Tip> ## Any transformer can be a classifier In a [previous section](ctc) you've seen that CTC is an efficient technique for performing automatic speech recognition using an encoder-only transformer. Such CTC models already are classifiers, predicting probabilities for class labels from a tokenizer vocabulary. We can take a CTC model and turn it into a general-purpose audio classifier by changing the labels and training it with a regular cross-entropy loss function instead of the special CTC loss. For example, HF Transformers has a `Wav2Vec2ForCTC` model but also `Wav2Vec2ForSequenceClassification` and `Wav2Vec2ForAudioFrameClassification`. 
The only differences between the architectures of these models are the size of the classification layer and the loss function used.

In fact, any encoder-only audio transformer model can be turned into an audio classifier by adding a classification layer on top of the sequence of hidden states. (Classifiers usually don't need a transformer decoder.)

To predict a single classification score for the entire sequence (`Wav2Vec2ForSequenceClassification`), the model takes the mean over the hidden-states and feeds that into the classification layer. The output is a single probability distribution.

To make a separate classification for each audio frame (`Wav2Vec2ForAudioFrameClassification`), the classifier is run on the sequence of hidden-states, and so the output of the classifier is a sequence too.
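To make the difference between the two heads concrete, here is a small sketch that runs one second of 16 kHz audio through both model types and compares the output shapes. The two checkpoint names are fine-tuned SUPERB models from the Hub chosen for illustration; any Wav2Vec2 classification checkpoint would work the same way.

```python
import numpy as np
import torch
from transformers import (
    AutoFeatureExtractor,
    Wav2Vec2ForSequenceClassification,
    Wav2Vec2ForAudioFrameClassification,
)

# One second of 16 kHz audio (silence is enough to inspect the shapes).
waveform = np.zeros(16_000, dtype=np.float32)

# Sequence-level head: one prediction for the whole clip.
seq_ckpt = "superb/wav2vec2-base-superb-ks"  # keyword spotting
extractor = AutoFeatureExtractor.from_pretrained(seq_ckpt)
seq_model = Wav2Vec2ForSequenceClassification.from_pretrained(seq_ckpt)
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    print(seq_model(**inputs).logits.shape)    # (1, num_labels)

# Frame-level head: one prediction roughly every 20 ms.
frame_ckpt = "superb/wav2vec2-base-superb-sd"  # speaker diarization
frame_model = Wav2Vec2ForAudioFrameClassification.from_pretrained(frame_ckpt)
with torch.no_grad():
    print(frame_model(**inputs).logits.shape)  # (1, num_frames, num_labels)
```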
hf_public_repos/audio-transformers-course/chapters/en/chapter3/ctc.mdx
# CTC architectures CTC or Connectionist Temporal Classification is a technique that is used with encoder-only transformer models for automatic speech recognition. Examples of such models are **Wav2Vec2**, **HuBERT** and **M-CTC-T**. An encoder-only transformer is the simplest kind of transformer because it just uses the encoder portion of the model. The encoder reads the input sequence (the audio waveform) and maps this into a sequence of hidden-states, also known as the output embeddings. With a CTC model, we apply an additional linear mapping on the sequence of hidden-states to get class label predictions. The class labels are the **characters of the alphabet** (a, b, c, …). This way we're able to predict any word in the target language with a small classification head, as the vocabulary just needs to exist of 26 characters plus a few special tokens. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-ctc.png" alt="Transformer encoder with a CTC head on top"> </div> So far, this is very similar to what we do in NLP with a model such as BERT: an encoder-only transformer model maps our text tokens into a sequence of encoder hidden-states, and then we apply a linear mapping to get one class label prediction for each hidden-state. Here's the rub: In speech, we don't know the **alignment** of the audio inputs and text outputs. We know that the order the speech is spoken in is the same as the order that the text is transcribed in (the alignment is so-called monotonic), but we don't know how the characters in the transcription line up to the audio. This is where the CTC algorithm comes in. <Tip> 💡 In NLP models the vocabulary is usually made up of thousands of tokens that describe not just individual characters but parts of words or even complete words. For CTC, however, a small vocabulary works best and we generally try to keep it to less than 50 characters. We don't care about the casing of the letters, so only using uppercase (or only lowercase) is sufficient. Numbers are spelled out, e.g. `"20"` becomes `"twenty"`. In addition to the letters, we need at least a word separator token (space) and a padding token. Just as with an NLP model, the padding token allows us to combine multiple examples in a batch, but it's also the token the model will predict for silences. In English, it's also useful to keep the `'` character — after all, `"it's"` and `"its"` have very different meanings. </Tip> ## Dude, where's my alignment? Automatic speech recognition or ASR involves taking audio as input and producing text as output. We have a few choices for how to predict the text: - as individual characters - as phonemes - as word tokens An ASR model is trained on a dataset consisting of `(audio, text)` pairs where the text is a human-made transcription of the audio file. Generally the dataset does not include any timing information that says which word or syllable occurs where in the audio file. Since we can't rely on timing information during training, we don't have any idea how the input and output sequences should be aligned. Let's suppose our input is a one-second audio file. In **Wav2Vec2**, the model first downsamples the audio input using the CNN feature encoder to a shorter sequence of hidden-states, where there is one hidden-state vector for every 20 milliseconds of audio. For one second of audio, we then forward a sequence of 50 hidden-states to the transformer encoder. 
(The audio segments extracted from the input sequence partially overlap, so even though one hidden-state vector is emitted every 20 ms, each hidden-state actually represent 25 ms of audio.) The transformer encoder predicts one feature representation for each of these hidden-states, meaning we receive a sequence of 50 outputs from the transformer. Each of these outputs has a dimensionality of 768. The output sequence of the transformer encoder in this example therefore has shape `(768, 50)`. As each of these predictions covers 25 ms of time, which is shorter than the duration of a phoneme, it makes sense to predict individual phonemes or characters but not entire words. CTC works best with a small vocabulary, so we'll predict characters. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/cnn-feature-encoder.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states"> </div> To make text predictions, we map each of the 768-dimensional encoder outputs to our character labels using a linear layer (the "CTC head"). The model then predicts a `(50, 32)` tensor containing the logits, where 32 is the number of tokens in the vocabulary. Since we make one prediction for each of the features in the sequence, we end up with a total of 50 character predictions for each second of audio. However, if we simply predict one character every 20 ms, our output sequence might look something like this: ```text BRIIONSAWWSOMEETHINGCLOSETOPANICONHHISOPPONENT'SSFAACEWHENTHEMANNFINALLLYRREECOGGNNIIZEDHHISSERRRRORR ... ``` If you look closely, it somewhat resembles English but a lot of the characters have been duplicated. That's because the model needs to output *something* for every 20 ms of audio in the input sequence, and if a character is spread out over a period longer than 20 ms then it will appear multiple times in the output. There's no way to avoid this, especially since we don't know what the timing of the transcript is during training. CTC is a way to filter out these duplicates. (In reality, the predicted sequence also contains a lot of padding tokens for when the model isn't quite sure what the sound represents, or for the empty space between characters. We removed these padding tokens from the example for clarity. The partial overlap between audio segments is another reason characters get duplicated in the output.) ## The CTC algorithm The key to the CTC algorithm is using a special token, often called the **blank token**. This is just another token that the model will predict and it's part of the vocabulary. In this example, the blank token is shown as `_`. This special token serves as a hard boundary between groups of characters. The full output from the CTC model might be something like the following: ```text B_R_II_O_N_||_S_AWW_|||||_S_OMEE_TH_ING_||_C_L_O_S_E||TO|_P_A_N_I_C_||_ON||HHI_S||_OP_P_O_N_EN_T_'SS||_F_AA_C_E||_W_H_EN||THE||M_A_NN_||||_F_I_N_AL_LL_Y||||_RREE_C_O_GG_NN_II_Z_ED|||HHISS|||_ER_RRR_ORR|||| ``` The `|` token is the word separator character. In the example we use `|` instead of a space making it easier to spot where the word breaks are, but it serves the same purpose. The CTC blank character makes it possible to filter out the duplicate characters. For example let's look at the last word from the predicted sequence, `_ER_RRR_ORR`. Without the CTC blank token, the word looked like this: ```text ERRRRORR ``` If we were to simply remove duplicate characters, this would become `EROR`. 
That's clearly not the correct spelling. But with the CTC blank token we can remove the duplicates in each group, so that: ```text _ER_RRR_ORR ``` becomes: ```text _ER_R_OR ``` and now we remove the `_` blank token to get the final word: ```text ERROR ``` If we apply this logic to the entire text, including `|`, and replace the surviving `|` characters by spaces, the final CTC-decoded output is: ```text BRION SAW SOMETHING CLOSE TO PANIC ON HIS OPPONENT'S FACE WHEN THE MAN FINALLY RECOGNIZED HIS ERROR ``` To recap, the model predicts one token (character) for every 20 ms of (partially overlapping) audio from the input waveform. This gives a lot of duplicates. Thanks to the CTC blank token, we can easily remove these duplicates without destroying the proper spelling of the words. This is a very simple and convenient way to solve the problem of aligning the output text with the input audio. <Tip> 💡 In the actual Wav2Vec2 model, the CTC blank token is the same as the padding token `<pad>`. The model will predict many of these `<pad>` tokens, for example when there isn't a clear character to predict for the current 20 ms of audio. Using the same token for padding as for CTC blanking simplifies the decoding algorithm and it helps keep the vocab small. </Tip> Adding CTC to a transformer encoder model is easy: the output sequence from the encoder goes into a linear layer that projects the acoustic features to the vocabulary. The model is trained with a special CTC loss. One downside of CTC is that it may output words that *sound* correct, but are not *spelled* correctly. After all, the CTC head only considers individual characters, not complete words. One way to improve the quality of the audio transcriptions is to use an external language model. This language model essentially acts as a spellchecker on top of the CTC output. ## What's the difference between Wav2Vec2, HuBERT, M-CTC-T, ...? All transformer-based CTC models have a very similar architecture: they use the transformer encoder (but not the decoder) with a CTC head on top. Architecture-wise they are more alike than different. One difference between Wav2Vec2 and M-CTC-T is that the former works on raw audio waveforms while the latter uses mel spectrograms as input. The models also have been trained for different purposes. M-CTC-T, for example, is trained for multilingual speech recognition, and therefore has a relatively large CTC head that includes Chinese characters in addition to other alphabets. Wav2Vec2 & HuBERT use the exact same architecture but are trained in very different ways. Wav2Vec2 is pre-trained like BERT's masked language modeling, by predicting speech units for masked parts of the audio. HuBERT takes the BERT inspiration a step further and learns to predict "discrete speech units", which are analogous to tokens in a text sentence, so that speech can be treated using established NLP techniques. To clarify, the models highlighted here aren't the only transformer-based CTC models. There are many others, but now you know they all work in a similar way.
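As a final illustration, the collapse-and-drop logic described above fits in a few lines of Python. This is a simplified sketch that works on the character strings used in the examples in this chapter; a real decoder operates on token IDs and logits (for example via `processor.batch_decode` in 🤗 Transformers).

```python
def ctc_greedy_decode(prediction: str, blank: str = "_", word_sep: str = "|") -> str:
    """Collapse a raw per-frame CTC prediction string into readable text."""
    # 1. Collapse runs of repeated tokens: repeats within a group come from the
    #    same underlying character, so only the first occurrence survives.
    collapsed = []
    previous = None
    for token in prediction:
        if token != previous:
            collapsed.append(token)
        previous = token

    # 2. Drop the blank tokens that acted as hard boundaries between groups.
    text = "".join(token for token in collapsed if token != blank)

    # 3. Turn the word separator into regular spaces and tidy up extra spaces.
    return " ".join(text.replace(word_sep, " ").split())


print(ctc_greedy_decode("_ER_RRR_ORR"))                 # ERROR
print(ctc_greedy_decode("HHI_S||_OP_P_O_N_EN_T_'SS"))   # HIS OPPONENT'S
```

Applying the same function to the full predicted sequence shown earlier reproduces the sentence "BRION SAW SOMETHING CLOSE TO PANIC ON HIS OPPONENT'S FACE WHEN THE MAN FINALLY RECOGNIZED HIS ERROR".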
6
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter3/quiz.mdx
<!-- DISABLE-FRONTMATTER-SECTIONS --> # Check your understanding of the course material ### 1. What is a vocoder? <Question choices={[ { text: "An additional neural network that turns the spectrogram output of a transformer into a waveform.", explain: "Correct! ", correct: true }, { text: "A type of transformer layer that is responsible for creating audio embeddings.", explain: "" }, { text: "An additional neural network that preprocesses speech audio to remove background noise", explain: "", } ]} /> ### 2. Wav2Vec2 is an example of <Question choices={[ { text: "Seq2Seq architecture", explain: "" }, { text: "CNN architecture", explain: "" }, { text: "CTC architecture", explain: "Correct!", correct: true } ]} /> ### 3. What does a blank token in CTC algorithm do? <Question choices={[ { text: "Blank token indicates breaks between the individual words in the sentence.", explain: "" }, { text: "Blank token is a predicted token that serves as a hard boundary between groups of characters. It makes it possible to filter out the duplicate characters", explain: "Correct!", correct: true }, { text: "Blank token is used for sounds that do not match any tokens in the vocabulary, similar to <UNK> token for 'unknown'.", explain: "" } ]} /> ### 4. Which of the following statements about CTC models is FALSE? <Question choices={[ { text: "CTC models use only the encoder part of the transformer architecture.", explain: "" }, { text: "Wav2Vec2 & HuBERT use the exact same architecture but are trained differently.", explain: "" }, { text: "CTC models tend to perform best for speech recognition compared to other architectures.", explain: "Correct!", correct: true } ]} /> ### 5. Whisper is an example of <Question choices={[ { text: "Seq2Seq architecture", explain: "Correct!", correct: true }, { text: "CNN architecture", explain: "" }, { text: "CTC architecture", explain: "" } ]} /> ### 6. What is the easiest way to perform audio classification? <Question choices={[ { text: "Use encoder-decoder transformers on the audio waveform.", explain: "" }, { text: "Use spectrograms and treat the task as an image classification problem.", explain: "Correct!", correct: true }, { text: "Turn a CTC model into a general-purpose audio classifier by changing the labels and training it with a regular cross-entropy loss function.", explain: "" } ]} /> ### 7. True or false? When treating spectrograms as images for classification, you will always benefit from image data augmentation techniques, such as shifting an image, cropping it, or resizing. <Question choices={[ { text: "True", explain: "" }, { text: "False", explain: "Correct!", correct: true } ]} />
7
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter3/seq2seq.mdx
# Seq2Seq architectures The CTC models discussed in the previous section used only the encoder part of the transformer architecture. When we also add the decoder to create an encoder-decoder model, this is referred to as a **sequence-to-sequence** model or seq2seq for short. The model maps a sequence of one kind of data to a sequence of another kind of data. With encoder-only transformer models, the encoder made a prediction for each element in the input sequence. Therefore, both input and output sequences will always have the same length. In the case of CTC models such as Wav2Vec2 the input waveform was first downsampled, but there still was one prediction for every 20 ms of audio. With a seq2seq model, there is no such one-to-one correspondence and the input and output sequences can have different lengths. That makes seq2seq models suitable for NLP tasks such as text summarization or translation between different languages — but also for audio tasks such as speech recognition. The architecture of a decoder is very similar to that of an encoder, and both use similar layers with self-attention as the main feature. However, the decoder performs a different task than the encoder. To see how this works, let's examine how a seq2seq model can do automatic speech recognition. ## Automatic speech recognition The architecture of **Whisper** is as follows (figure courtesy of [OpenAI Whisper blog](https://openai.com/blog/whisper/)): <div class="flex justify-center"> <img src="https://huggingface.co/blog/assets/111_fine_tune_whisper/whisper_architecture.svg" alt="Whisper is a transformer encoder-decoder model"> </div> This should look quite familiar. On the left is the **transformer encoder**. This takes as input a log-mel spectrogram and encodes that spectrogram to form a sequence of encoder hidden states that extract important features from the spoken speech. This hidden-states tensor represents the input sequence as a whole and effectively encodes the "meaning" of the input speech. <Tip> 💡 It's common for these seq2seq models to use spectrograms as input. However, a seq2seq model can also be designed to work directly on audio waveforms. </Tip> The output of the encoder is then passed into the **transformer decoder**, shown on the right, using a mechanism called **cross-attention**. This is like self-attention but attends over the encoder output. From this point on, the encoder is no longer needed. The decoder predicts a sequence of text tokens in an **autoregressive** manner, a single token at a time, starting from an initial sequence that just has a "start" token in it (`SOT` in the case of Whisper). At each following timestep, the previous output sequence is fed back into the decoder as the new input sequence. In this manner, the decoder emits one new token at a time, steadily growing the output sequence, until it predicts an "end" token or a maximum number of timesteps is reached. While the architecture of the decoder is mostly identical to that of the encoder, there are two big differences: 1. the decoder has a cross-attention mechanism that allows it to look at the encoder's representation of the input sequence 2. the decoder's attention is causal — the decoder isn't allowed to look into the future. In this design, the decoder plays the role of a **language model**, processing the hidden-state representations from the encoder and generating the corresponding text transcriptions. 
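In practice, all of this machinery is hidden behind a single `generate()` call. The snippet below is a hedged sketch of running Whisper for speech recognition with 🤗 Transformers; the checkpoint name and the silent placeholder waveform are illustrative choices rather than part of the chapter's examples.

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Checkpoint name is illustrative; every Whisper size works the same way.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Placeholder for a recorded utterance: 5 seconds of 16 kHz audio.
waveform = np.zeros(5 * 16000, dtype=np.float32)

# The processor converts the waveform into a log-mel spectrogram (padded to 30 s).
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

# generate() runs the decoder autoregressively: starting from the "start" token,
# it emits one new token per step until the end-of-transcript token appears.
predicted_ids = model.generate(inputs.input_features, max_new_tokens=100)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```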
This is a more powerful approach than CTC, even if the CTC model is combined with an external language model, as the seq2seq system can be trained end-to-end with the same training data and loss function, giving greater flexibility and generally superior performance. <Tip> 💡 Whereas a CTC model outputs a sequence of individual characters, the tokens predicted by Whisper are full words or portions of words. It uses the tokenizer from GPT-2 and has 50k+ unique tokens. A seq2seq model can therefore output a much shorter sequence than a CTC model for the same transcription. </Tip> A typical loss function for a seq2seq ASR model is the cross-entropy loss, as the final layer of the model predicts a probability distribution over the possible tokens. This is usually combined with techniques such as [beam search to generate the final sequence](https://huggingface.co/blog/how-to-generate). The metric for speech recognition is WER or word error rate, which measures how many substitutions, insertions, and deletions are necessary to turn the predicted text into the target text — the fewer, the better the score. ## Text-to-speech It may not surprise you: A seq2seq model for TTS works essentially the same as described above but with the inputs and outputs switched around! The transformer encoder takes in a sequence of text tokens and extracts a sequence of hidden-states that represent the input text. The transformer decoder applies cross-attention to the encoder output and predicts a spectrogram. <Tip> 💡 Recall that a spectrogram is made by taking the frequency spectrum of successive time slices of an audio waveform and stacking them together. In other words, a spectrogram is a sequence where the elements are (log-mel) frequency spectra, one for each timestep. </Tip> With the ASR model, the decoder was kickstarted using a sequence that just has the special "start" token in it. For the TTS model, we can start the decoding with a spectrogram of length one that is all zeros that acts as the "start token". Given this initial spectrogram and the cross-attentions over the encoder's hidden-state representations, the decoder then predicts the next timeslice for this spectrogram, steadily growing the spectrogram one timestep at a time. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5_decoding.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states"> </div> But how does the decoder know when to stop? In the **SpeechT5** model this is handled by making the decoder predict a second sequence. This contains the probability that the current timestep is the last one. While generating audio at inference time, if this probability is over a certain threshold (say 0.5), the decoder is indicating that the spectrogram is finished and the generation loop should end. After the decoding finishes and we have an output sequence containing the spectrogram, SpeechT5 uses a so-called **post-net** that is made up of several convolution layers to refine the spectrogram. During training of the TTS model, the targets are also spectrograms and the loss is L1 or MSE. At inference time we want to convert the output spectrogram into an audio waveform, so that we can actually listen to it. For this an external model is used, the **vocoder**. This vocoder is not part of the seq2seq architecture and is trained separately. What makes TTS difficult is that it's a one-to-many mapping. 
With speech-to-text there is only one correct output text that corresponds to the input speech, but with text-to-speech the input text can be mapped to many possible speech sounds. Different speakers may choose to emphasize different parts of the sentence, for example. This makes TTS models hard to evaluate. Because of this, the L1 or MSE loss value isn't actually very meaningful — there are multiple ways to represent the same text to a spectrogram. This is why TTS models are typically evaluated by human listeners, using a metric known as MOS or mean opinion score. ## Conclusion The seq2seq approach is more powerful than an encoder-only model. By separating the encoding of the input sequence from the decoding of the output sequence, the alignment of audio and text is less of a problem. <!-- The model learns how to perform this alignment through the attention mechanism. --> However, an encoder-decoder model is also slower as the decoding process happens one step at a time, rather than all at once. The longer the sequence, the slower the prediction. Autoregressive models can also get stuck in repetitions or skip words. Techniques such as beam search can help improve the quality of the predictions, but also slow down decoding even more.
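To round off the text-to-speech discussion from this chapter, here is a hedged sketch of running SpeechT5 with 🤗 Transformers. The checkpoint names are the commonly used public ones but should be treated as examples, and the random speaker embedding is a placeholder: in real usage it is computed from reference audio with a speaker encoder.

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Checkpoint names are examples of the usual SpeechT5 TTS weights and HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="The quick brown fox jumped over the lazy dog.", return_tensors="pt")

# SpeechT5 conditions on a 512-dimensional speaker embedding; a random vector is
# used here purely as a placeholder (expect garbled audio with it).
speaker_embeddings = torch.randn(1, 512)

# The decoder grows the spectrogram one timestep at a time and stops once its
# "is this the last frame?" prediction crosses the threshold; the vocoder then
# turns the finished spectrogram into a waveform.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # a 1-D waveform tensor at 16 kHz
```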
8
0
hf_public_repos/audio-transformers-course/chapters/en
hf_public_repos/audio-transformers-course/chapters/en/chapter3/introduction.mdx
# Unit 3. Transformer architectures for audio In this course we will primarily consider transformer models and how they can be applied to audio tasks. While you don't need to know the inner details of these models, it's useful to understand the main concepts that make them work, so here's a quick refresher. For a deep dive into transformers, check out our [NLP Course](https://huggingface.co/course/chapter1/1). ## How does a transformer work? The original transformer model was designed to translate written text from one language into another. Its architecture looked like this: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers.svg" alt="Original transformer architecture"> </div> On the left is the **encoder** and on the right is the **decoder**. - The encoder receives an input, in this case a sequence of text tokens, and builds a representation of it (its features). This part of the model is trained to acquire understanding from the input. - The decoder uses the encoder's representation (the features) along with other inputs (the previously predicted tokens) to generate a target sequence. This part of the model is trained to generate outputs. In the original design, the output sequence consisted of text tokens. There are also transformer-based models that only use the encoder part (good for tasks that require understanding of the input, such as classification), or only the decoder part (good for tasks such as text generation). An example of an encoder-only model is BERT; an example of a decoder-only model is GPT2. A key feature of transformer models is that they are built with special layers called **attention layers**. These layers tell the model to pay specific attention to certain elements in the input sequence and ignore others when computing the feature representations. ## Using transformers for audio The audio models we'll cover in this course typically have a standard transformer architecture as shown above, but with a slight modification on the input or output side to allow for audio data instead of text. Since all these models are transformers at heart, they will have most of their architecture in common and the main differences are in how they are trained and used. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/transformers_blocks.png" alt="The transformer with audio input and output"> </div> For audio tasks, the input and/or output sequences may be audio instead of text: - Automatic speech recognition (ASR): The input is speech, the output is text. - Speech synthesis (TTS): The input is text, the output is speech. - Audio classification: The input is audio, the output is a class probability — one for each element in the sequence or a single class probability for the entire sequence. - Voice conversion or speech enhancement: Both the input and output are audio. There are a few different ways to handle audio so it can be used with a transformer. The main consideration is whether to use the audio in its raw form — as a waveform — or to process it as a spectrogram instead. ## Model inputs The input to an audio model can be either text or sound. The goal is to convert this input into an embedding vector that can be processed by the transformer architecture. ### Text input A text-to-speech model takes text as input. 
This works just like the original transformer or any other NLP model: The input text is first tokenized, giving a sequence of text tokens. This sequence is sent through an input embedding layer to convert the tokens into 512-dimensional vectors. Those embedding vectors are then passed into the transformer encoder. ### Waveform input An automatic speech recognition model takes audio as input. To be able to use a transformer for ASR, we first need to convert the audio into a sequence of embedding vectors somehow. Models such as **Wav2Vec2** and **HuBERT** use the audio waveform directly as the input to the model. As you've seen in [the chapter on audio data](../chapter1/introduction), a waveform is a one-dimensional sequence of floating-point numbers, where each number represents the sampled amplitude at a given time. This raw waveform is first normalized to zero mean and unit variance, which helps to standardize audio samples across different volumes (amplitudes). <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-input.png" alt="Wav2Vec2 uses a CNN to create embeddings from the input waveform"> </div> After normalizing, the sequence of audio samples is turned into an embedding using a small convolutional neural network, known as the feature encoder. Each of the convolutional layers in this network processes the input sequence, subsampling the audio to reduce the sequence length, until the final convolutional layer outputs a 512-dimensional vector with the embedding for each 25 ms of audio. Once the input sequence has been transformed into a sequence of such embeddings, the transformer will process the data as usual. ### Spectrogram input One downside of using the raw waveform as input is that they tend to have long sequence lengths. For example, thirty seconds of audio at a sampling rate of 16 kHz gives an input of length `30 * 16000 = 480000`. Longer sequence lengths require more computations in the transformer model, and so higher memory usage. Because of this, raw audio waveforms are not usually the most efficient form of representing an audio input. By using a spectrogram, we get the same amount of information but in a more compressed form. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/whisper-input.png" alt="Whisper uses a CNN to create embeddings from the input spectrogram"> </div> Models such as **Whisper** first convert the waveform into a log-mel spectrogram. Whisper always splits the audio into 30-second segments, and the log-mel spectrogram for each segment has shape `(80, 3000)` where 80 is the number of mel bins and 3000 is the sequence length. By converting to a log-mel spectrogram we've reduced the amount of input data, but more importantly, this is a much shorter sequence than the raw waveform. The log-mel spectrogram is then processed by a small CNN into a sequence of embeddings, which goes into the transformer as usual. In both cases, waveform as well as spectrogram input, there is a small network in front of the transformer that converts the input into embeddings and then the transformer takes over to do its thing. ## Model outputs The transformer architecture outputs a sequence of hidden-state vectors, also known as the output embeddings. Our goal is to transform these vectors into a text or audio output. 
### Text output The goal of an automatic speech recognition model is to predict a sequence of text tokens. This is done by adding a language modeling head — typically a single linear layer — followed by a softmax on top of the transformer's output. This predicts the probabilities over the text tokens in the vocabulary. ### Spectrogram output For models that generate audio, such as a text-to-speech (TTS) model, we'll have to add layers that can produce an audio sequence. It's very common to generate a spectrogram and then use an additional neural network, known as a vocoder, to turn this spectrogram into a waveform. In the **SpeechT5** TTS model, for example, the output from the transformer network is a sequence of 768-element vectors. A linear layer projects that sequence into a log-mel spectrogram. A so-called post-net, made up of additional linear and convolutional layers, refines the spectrogram by reducing noise. The vocoder then makes the final audio waveform. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/speecht5.png" alt="SpeechT5 outputs a spectrogram and uses a vocoder to create the waveform"> </div> <Tip> 💡 If you take an existing waveform and apply the Short-Time Fourier Transform or STFT, it is possible to perform the inverse operation, the ISTFT, to get the original waveform again. This works because the spectrogram created by the STFT contains both amplitude and phase information, and both are needed to reconstruct the waveform. However, audio models that generate their output as a spectrogram typically only predict the amplitude information, not the phase. To turn such a spectrogram into a waveform, we have to somehow estimate the phase information. That's what a vocoder does. </Tip> ### Waveform output It's also possible for models to directly output a waveform instead of a spectrogram as an intermediate step, but we currently don't have any models in 🤗 Transformers that do this. ## Conclusion In summary: Most audio transformer models are more alike than different — they're all built on the same transformer architecture and attention layers, although some models will only use the encoder portion of the transformer while others use both the encoder and decoder. You've also seen how to get audio data into and out of transformer models. To perform the different audio tasks of ASR, TTS, and so on, we can simply swap out the layers that pre-process the inputs into embeddings, and swap out the layers that post-process the predicted embeddings into outputs, while the transformer backbone stays the same. Next, let's look at a few different ways these models can be trained to do automatic speech recognition.
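As a short appendix to this overview, the sketch below makes the two input representations concrete. The feature extractor classes are the standard 🤗 Transformers ones, the checkpoint names are only examples, and the silent waveform stands in for real audio.

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor, WhisperFeatureExtractor

# Placeholder for 30 seconds of real audio sampled at 16 kHz.
waveform = np.zeros(30 * 16000, dtype=np.float32)

# Waveform input (Wav2Vec2-style): the feature extractor returns the raw samples
# (optionally normalized, depending on the checkpoint), so the sequence stays 480000 long.
wav_fe = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
wav_inputs = wav_fe(waveform, sampling_rate=16000, return_tensors="np")
print(wav_inputs.input_values.shape)     # (1, 480000)

# Spectrogram input (Whisper-style): the feature extractor computes a log-mel
# spectrogram with 80 mel bins and 3000 frames for a 30-second segment.
spec_fe = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
spec_inputs = spec_fe(waveform, sampling_rate=16000, return_tensors="np")
print(spec_inputs.input_features.shape)  # (1, 80, 3000)
```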
9
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/seq2seq/__main__.py
import argparse import json from functools import partial import torch from accelerate.state import PartialState from datasets import load_dataset, load_from_disk from huggingface_hub import HfApi from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training from transformers import ( AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig, DataCollatorForSeq2Seq, EarlyStoppingCallback, Seq2SeqTrainer, Seq2SeqTrainingArguments, ) from transformers.trainer_callback import PrinterCallback from autotrain import logger from autotrain.trainers.common import ( ALLOW_REMOTE_CODE, LossLoggingCallback, TrainStartCallback, UploadLogs, monitor, pause_space, remove_autotrain_data, save_training_params, ) from autotrain.trainers.seq2seq import utils from autotrain.trainers.seq2seq.dataset import Seq2SeqDataset from autotrain.trainers.seq2seq.params import Seq2SeqParams def parse_args(): # get training_config.json from the end user parser = argparse.ArgumentParser() parser.add_argument("--training_config", type=str, required=True) return parser.parse_args() @monitor def train(config): if isinstance(config, dict): config = Seq2SeqParams(**config) train_data = None valid_data = None # check if config.train_split.csv exists in config.data_path if config.train_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") train_data = load_from_disk(config.data_path)[config.train_split] else: if ":" in config.train_split: dataset_config_name, split = config.train_split.split(":") train_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: train_data = load_dataset( config.data_path, split=config.train_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) if config.valid_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") valid_data = load_from_disk(config.data_path)[config.valid_split] else: if ":" in config.valid_split: dataset_config_name, split = config.valid_split.split(":") valid_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: valid_data = load_dataset( config.data_path, split=config.valid_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) tokenizer = AutoTokenizer.from_pretrained(config.model, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE) train_data = Seq2SeqDataset(data=train_data, tokenizer=tokenizer, config=config) if config.valid_split is not None: valid_data = Seq2SeqDataset(data=valid_data, tokenizer=tokenizer, config=config) if config.logging_steps == -1: if config.valid_split is not None: logging_steps = int(0.2 * len(valid_data) / config.batch_size) else: logging_steps = int(0.2 * len(train_data) / config.batch_size) if logging_steps == 0: logging_steps = 1 if logging_steps > 25: logging_steps = 25 config.logging_steps = logging_steps else: logging_steps = config.logging_steps logger.info(f"Logging steps: {logging_steps}") training_args = dict( output_dir=config.project_name, per_device_train_batch_size=config.batch_size, per_device_eval_batch_size=2 * config.batch_size, learning_rate=config.lr, num_train_epochs=config.epochs, eval_strategy=config.eval_strategy if config.valid_split is not None else "no", logging_steps=logging_steps, save_total_limit=config.save_total_limit, 
save_strategy=config.eval_strategy if config.valid_split is not None else "no", gradient_accumulation_steps=config.gradient_accumulation, report_to=config.log, auto_find_batch_size=config.auto_find_batch_size, lr_scheduler_type=config.scheduler, optim=config.optimizer, warmup_ratio=config.warmup_ratio, weight_decay=config.weight_decay, max_grad_norm=config.max_grad_norm, push_to_hub=False, load_best_model_at_end=True if config.valid_split is not None else False, ddp_find_unused_parameters=False, predict_with_generate=True, seed=config.seed, ) if config.mixed_precision == "fp16": training_args["fp16"] = True if config.mixed_precision == "bf16": training_args["bf16"] = True if config.valid_split is not None: early_stop = EarlyStoppingCallback( early_stopping_patience=config.early_stopping_patience, early_stopping_threshold=config.early_stopping_threshold, ) callbacks_to_use = [early_stop] else: callbacks_to_use = [] callbacks_to_use.extend([UploadLogs(config=config), LossLoggingCallback(), TrainStartCallback()]) args = Seq2SeqTrainingArguments(**training_args) model_config = AutoConfig.from_pretrained( config.model, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, use_cache=False, ) if config.peft: if config.quantization == "int4": raise NotImplementedError("int4 quantization is not supported") # if config.use_int4: # bnb_config = BitsAndBytesConfig( # load_in_4bit=config.use_int4, # bnb_4bit_quant_type="nf4", # bnb_4bit_compute_dtype=torch.float16, # bnb_4bit_use_double_quant=False, # ) # config.fp16 = True if config.quantization == "int8": bnb_config = BitsAndBytesConfig(load_in_8bit=True) else: bnb_config = None model = AutoModelForSeq2SeqLM.from_pretrained( config.model, config=model_config, token=config.token, quantization_config=bnb_config, trust_remote_code=ALLOW_REMOTE_CODE, ) else: model = AutoModelForSeq2SeqLM.from_pretrained( config.model, config=model_config, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) embedding_size = model.get_input_embeddings().weight.shape[0] if len(tokenizer) > embedding_size: model.resize_token_embeddings(len(tokenizer)) if config.peft: target_modules = config.target_modules.split(",") if config.target_modules is not None else None if target_modules: target_modules = [module.strip() for module in target_modules] if len(target_modules) == 1 and target_modules[0] == "all-linear": target_modules = "all-linear" lora_config = LoraConfig( r=config.lora_r, lora_alpha=config.lora_alpha, target_modules=target_modules, lora_dropout=config.lora_dropout, bias="none", task_type=TaskType.SEQ_2_SEQ_LM, ) if config.quantization is not None: model = prepare_model_for_kbit_training(model) model = get_peft_model(model, lora_config) _s2s_metrics = partial(utils._seq2seq_metrics, tokenizer=tokenizer) trainer_args = dict( args=args, model=model, callbacks=callbacks_to_use, compute_metrics=_s2s_metrics, ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) trainer = Seq2SeqTrainer( **trainer_args, train_dataset=train_data, eval_dataset=valid_data, data_collator=data_collator, tokenizer=tokenizer, ) for name, module in trainer.model.named_modules(): if "norm" in name: module = module.to(torch.float32) trainer.remove_callback(PrinterCallback) trainer.train() logger.info("Finished training, saving model...") trainer.model.config.use_cache = True trainer.save_model(config.project_name) model_card = utils.create_model_card(config, trainer) # save model card to output directory as README.md with open(f"{config.project_name}/README.md", "w", 
encoding="utf-8") as f: f.write(model_card) if config.push_to_hub: if PartialState().process_index == 0: remove_autotrain_data(config) save_training_params(config) logger.info("Pushing model to hub...") api = HfApi(token=config.token) api.create_repo( repo_id=f"{config.username}/{config.project_name}", repo_type="model", private=True, exist_ok=True ) api.upload_folder( folder_path=config.project_name, repo_id=f"{config.username}/{config.project_name}", repo_type="model", ) if PartialState().process_index == 0: pause_space(config) if __name__ == "__main__": _args = parse_args() training_config = json.load(open(_args.training_config)) config = Seq2SeqParams(**training_config) train(config)
0
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/seq2seq/dataset.py
class Seq2SeqDataset: """ A dataset class for sequence-to-sequence tasks. Args: data (list): The dataset containing input and target sequences. tokenizer (PreTrainedTokenizer): The tokenizer to process the text data. config (object): Configuration object containing dataset parameters. Attributes: data (list): The dataset containing input and target sequences. tokenizer (PreTrainedTokenizer): The tokenizer to process the text data. config (object): Configuration object containing dataset parameters. max_len_input (int): Maximum length for input sequences. max_len_target (int): Maximum length for target sequences. Methods: __len__(): Returns the number of samples in the dataset. __getitem__(item): Returns the tokenized input and target sequences for a given index. """ def __init__(self, data, tokenizer, config): self.data = data self.tokenizer = tokenizer self.config = config self.max_len_input = self.config.max_seq_length self.max_len_target = self.config.max_target_length def __len__(self): return len(self.data) def __getitem__(self, item): text = str(self.data[item][self.config.text_column]) target = str(self.data[item][self.config.target_column]) model_inputs = self.tokenizer(text, max_length=self.max_len_input, truncation=True) labels = self.tokenizer(text_target=target, max_length=self.max_len_target, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs
1
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/seq2seq/params.py
from typing import Optional from pydantic import Field from autotrain.trainers.common import AutoTrainParams class Seq2SeqParams(AutoTrainParams): """ Seq2SeqParams is a configuration class for sequence-to-sequence training parameters. Attributes: data_path (str): Path to the dataset. model (str): Name of the model to be used. Default is "google/flan-t5-base". username (Optional[str]): Hugging Face Username. seed (int): Random seed for reproducibility. Default is 42. train_split (str): Name of the training data split. Default is "train". valid_split (Optional[str]): Name of the validation data split. project_name (str): Name of the project or output directory. Default is "project-name". token (Optional[str]): Hub Token for authentication. push_to_hub (bool): Whether to push the model to the Hugging Face Hub. Default is False. text_column (str): Name of the text column in the dataset. Default is "text". target_column (str): Name of the target text column in the dataset. Default is "target". lr (float): Learning rate for training. Default is 5e-5. epochs (int): Number of training epochs. Default is 3. max_seq_length (int): Maximum sequence length for input text. Default is 128. max_target_length (int): Maximum sequence length for target text. Default is 128. batch_size (int): Training batch size. Default is 2. warmup_ratio (float): Proportion of warmup steps. Default is 0.1. gradient_accumulation (int): Number of gradient accumulation steps. Default is 1. optimizer (str): Optimizer to be used. Default is "adamw_torch". scheduler (str): Learning rate scheduler to be used. Default is "linear". weight_decay (float): Weight decay for the optimizer. Default is 0.0. max_grad_norm (float): Maximum gradient norm for clipping. Default is 1.0. logging_steps (int): Number of steps between logging. Default is -1 (disabled). eval_strategy (str): Evaluation strategy. Default is "epoch". auto_find_batch_size (bool): Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]): Mixed precision training mode (fp16, bf16, or None). save_total_limit (int): Maximum number of checkpoints to save. Default is 1. peft (bool): Whether to use Parameter-Efficient Fine-Tuning (PEFT). Default is False. quantization (Optional[str]): Quantization mode (int4, int8, or None). Default is "int8". lora_r (int): LoRA-R parameter for PEFT. Default is 16. lora_alpha (int): LoRA-Alpha parameter for PEFT. Default is 32. lora_dropout (float): LoRA-Dropout parameter for PEFT. Default is 0.05. target_modules (str): Target modules for PEFT. Default is "all-linear". log (str): Logging method for experiment tracking. Default is "none". early_stopping_patience (int): Patience for early stopping. Default is 5. early_stopping_threshold (float): Threshold for early stopping. Default is 0.01. 
""" data_path: str = Field(None, title="Data path") model: str = Field("google/flan-t5-base", title="Model name") username: Optional[str] = Field(None, title="Hugging Face Username") seed: int = Field(42, title="Seed") train_split: str = Field("train", title="Train split") valid_split: Optional[str] = Field(None, title="Validation split") project_name: str = Field("project-name", title="Output directory") token: Optional[str] = Field(None, title="Hub Token") push_to_hub: bool = Field(False, title="Push to hub") text_column: str = Field("text", title="Text column") target_column: str = Field("target", title="Target text column") lr: float = Field(5e-5, title="Learning rate") epochs: int = Field(3, title="Number of training epochs") max_seq_length: int = Field(128, title="Max sequence length") max_target_length: int = Field(128, title="Max target sequence length") batch_size: int = Field(2, title="Training batch size") warmup_ratio: float = Field(0.1, title="Warmup proportion") gradient_accumulation: int = Field(1, title="Gradient accumulation steps") optimizer: str = Field("adamw_torch", title="Optimizer") scheduler: str = Field("linear", title="Scheduler") weight_decay: float = Field(0.0, title="Weight decay") max_grad_norm: float = Field(1.0, title="Max gradient norm") logging_steps: int = Field(-1, title="Logging steps") eval_strategy: str = Field("epoch", title="Evaluation strategy") auto_find_batch_size: bool = Field(False, title="Auto find batch size") mixed_precision: Optional[str] = Field(None, title="fp16, bf16, or None") save_total_limit: int = Field(1, title="Save total limit") token: Optional[str] = Field(None, title="Hub Token") push_to_hub: bool = Field(False, title="Push to hub") peft: bool = Field(False, title="Use PEFT") quantization: Optional[str] = Field("int8", title="int4, int8, or None") lora_r: int = Field(16, title="LoRA-R") lora_alpha: int = Field(32, title="LoRA-Alpha") lora_dropout: float = Field(0.05, title="LoRA-Dropout") target_modules: str = Field("all-linear", title="Target modules for PEFT") log: str = Field("none", title="Logging using experiment tracking") early_stopping_patience: int = Field(5, title="Early stopping patience") early_stopping_threshold: float = Field(0.01, title="Early stopping threshold")
2
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/extractive_question_answering/utils.py
import collections import json import os import numpy as np from datasets import load_metric from transformers import EvalPrediction from autotrain import logger MODEL_CARD = """ --- library_name: transformers tags: - autotrain - question-answering{base_model} widget: - text: "Who loves AutoTrain?" context: "Everyone loves AutoTrain"{dataset_tag} --- # Model Trained Using AutoTrain - Problem type: Extractive Question Answering ## Validation Metrics {validation_metrics} ## Usage ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained(...) tokenizer = AutoTokenizer.from_pretrained(...) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoTrain?", "Everyone loves AutoTrain" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ``` """ SQUAD_METRIC = load_metric("squad") SQUAD_V2_METRIC = load_metric("squad_v2") def postprocess_qa_predictions( examples, features, predictions, config, version_2_with_negative=False, n_best_size=20, max_answer_length=30, null_score_diff_threshold=0.0, output_dir=None, prefix=None, ): # This function is taken from: https://github.com/huggingface/transformers/blob/dcec4c4387850dff8123d5752aab8c1b5431465b/examples/pytorch/question-answering/run_qa.py#L470 """ Post-processes the predictions of a question-answering model to convert them to answers that are substrings of the original contexts. This is the base postprocessing functions for models that only return start and end logits. Args: examples: The non-preprocessed dataset (see the main script for more information). features: The processed dataset (see the main script for more information). predictions (:obj:`Tuple[np.ndarray, np.ndarray]`): The predictions of the model: two arrays containing the start logits and the end logits respectively. Its first dimension must match the number of elements of :obj:`features`. version_2_with_negative (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not the underlying dataset contains examples with no answers. n_best_size (:obj:`int`, `optional`, defaults to 20): The total number of n-best predictions to generate when looking for an answer. max_answer_length (:obj:`int`, `optional`, defaults to 30): The maximum length of an answer that can be generated. This is needed because the start and end predictions are not conditioned on one another. null_score_diff_threshold (:obj:`float`, `optional`, defaults to 0): The threshold used to select the null answer: if the best answer has a score that is less than the score of the null answer minus this threshold, the null answer is selected for this example (note that the score of the null answer for an example giving several features is the minimum of the scores for the null answer on each feature: all features must be aligned on the fact they `want` to predict a null answer). Only useful when :obj:`version_2_with_negative` is :obj:`True`. output_dir (:obj:`str`, `optional`): If provided, the dictionaries of predictions, n_best predictions (with their scores and logits) and, if :obj:`version_2_with_negative=True`, the dictionary of the scores differences between best and null answers, are saved in `output_dir`. 
prefix (:obj:`str`, `optional`): If provided, the dictionaries mentioned above are saved with `prefix` added to their names. log_level (:obj:`int`, `optional`, defaults to ``logging.WARNING``): ``logging`` log level (e.g., ``logging.WARNING``) """ if len(predictions) != 2: raise ValueError("`predictions` should be a tuple with two elements (start_logits, end_logits).") all_start_logits, all_end_logits = predictions if len(predictions[0]) != len(features): raise ValueError(f"Got {len(predictions[0])} predictions and {len(features)} features.") # Build a map example to its corresponding features. example_id_to_index = {k: i for i, k in enumerate(examples["id"])} features_per_example = collections.defaultdict(list) for i, feature in enumerate(features): features_per_example[example_id_to_index[feature["example_id"]]].append(i) # The dictionaries we have to fill. all_predictions = collections.OrderedDict() all_nbest_json = collections.OrderedDict() if version_2_with_negative: scores_diff_json = collections.OrderedDict() # Logging. logger.info(f"Post-processing {len(examples)} example predictions split into {len(features)} features.") # Let's loop over all the examples! for example_index, example in enumerate(examples): # Those are the indices of the features associated to the current example. feature_indices = features_per_example[example_index] min_null_prediction = None prelim_predictions = [] # Looping through all the features associated to the current example. for feature_index in feature_indices: # We grab the predictions of the model for this feature. start_logits = all_start_logits[feature_index] end_logits = all_end_logits[feature_index] # This is what will allow us to map some the positions in our logits to span of texts in the original # context. offset_mapping = features[feature_index]["offset_mapping"] # Optional `token_is_max_context`, if provided we will remove answers that do not have the maximum context # available in the current feature. token_is_max_context = features[feature_index].get("token_is_max_context", None) # Update minimum null prediction. feature_null_score = start_logits[0] + end_logits[0] if min_null_prediction is None or min_null_prediction["score"] > feature_null_score: min_null_prediction = { "offsets": (0, 0), "score": feature_null_score, "start_logit": start_logits[0], "end_logit": end_logits[0], } # Go through all possibilities for the `n_best_size` greater start and end logits. start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() for start_index in start_indexes: for end_index in end_indexes: # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond # to part of the input_ids that are not in the context. if ( start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or offset_mapping[start_index] is None or len(offset_mapping[start_index]) < 2 or offset_mapping[end_index] is None or len(offset_mapping[end_index]) < 2 ): continue # Don't consider answers with a length that is either < 0 or > max_answer_length. if end_index < start_index or end_index - start_index + 1 > max_answer_length: continue # Don't consider answer that don't have the maximum context available (if such information is # provided). 
if token_is_max_context is not None and not token_is_max_context.get(str(start_index), False): continue prelim_predictions.append( { "offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]), "score": start_logits[start_index] + end_logits[end_index], "start_logit": start_logits[start_index], "end_logit": end_logits[end_index], } ) if version_2_with_negative and min_null_prediction is not None: # Add the minimum null prediction prelim_predictions.append(min_null_prediction) null_score = min_null_prediction["score"] # Only keep the best `n_best_size` predictions. predictions = sorted(prelim_predictions, key=lambda x: x["score"], reverse=True)[:n_best_size] # Add back the minimum null prediction if it was removed because of its low score. if ( version_2_with_negative and min_null_prediction is not None and not any(p["offsets"] == (0, 0) for p in predictions) ): predictions.append(min_null_prediction) # Use the offsets to gather the answer text in the original context. context = example[config.text_column] for pred in predictions: offsets = pred.pop("offsets") pred["text"] = context[offsets[0] : offsets[1]] # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid # failure. if len(predictions) == 0 or (len(predictions) == 1 and predictions[0]["text"] == ""): predictions.insert(0, {"text": "empty", "start_logit": 0.0, "end_logit": 0.0, "score": 0.0}) # Compute the softmax of all scores (we do it with numpy to stay independent from torch/tf in this file, using # the LogSumExp trick). scores = np.array([pred.pop("score") for pred in predictions]) exp_scores = np.exp(scores - np.max(scores)) probs = exp_scores / exp_scores.sum() # Include the probabilities in our predictions. for prob, pred in zip(probs, predictions): pred["probability"] = prob # Pick the best prediction. If the null answer is not possible, this is easy. if not version_2_with_negative: all_predictions[example["id"]] = predictions[0]["text"] else: # Otherwise we first need to find the best non-empty prediction. i = 0 while predictions[i]["text"] == "": i += 1 best_non_null_pred = predictions[i] # Then we compare to the null prediction using the threshold. score_diff = null_score - best_non_null_pred["start_logit"] - best_non_null_pred["end_logit"] scores_diff_json[example["id"]] = float(score_diff) # To be JSON-serializable. if score_diff > null_score_diff_threshold: all_predictions[example["id"]] = "" else: all_predictions[example["id"]] = best_non_null_pred["text"] # Make `predictions` JSON-serializable by casting np.float back to float. all_nbest_json[example["id"]] = [ {k: (float(v) if isinstance(v, (np.float16, np.float32, np.float64)) else v) for k, v in pred.items()} for pred in predictions ] # If we have an output_dir, let's save all those dicts. 
if output_dir is not None: if not os.path.isdir(output_dir): raise EnvironmentError(f"{output_dir} is not a directory.") prediction_file = os.path.join( output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json" ) nbest_file = os.path.join( output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json" ) if version_2_with_negative: null_odds_file = os.path.join( output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds.json" ) logger.info(f"Saving predictions to {prediction_file}.") with open(prediction_file, "w") as writer: writer.write(json.dumps(all_predictions, indent=4) + "\n") logger.info(f"Saving nbest_preds to {nbest_file}.") with open(nbest_file, "w") as writer: writer.write(json.dumps(all_nbest_json, indent=4) + "\n") if version_2_with_negative: logger.info(f"Saving null_odds to {null_odds_file}.") with open(null_odds_file, "w") as writer: writer.write(json.dumps(scores_diff_json, indent=4) + "\n") return all_predictions def post_processing_function_qa(examples, features, predictions, version_2_with_negative, config, stage="eval"): # Post-processing: we match the start logits and end logits to answers in the original context. predictions = postprocess_qa_predictions( examples=examples, features=features, predictions=predictions, version_2_with_negative=version_2_with_negative, n_best_size=20, max_answer_length=30, null_score_diff_threshold=0.0, output_dir=None, prefix=stage, config=config, ) # Format the result to the format the metric expects. if version_2_with_negative: formatted_predictions = [ {"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in predictions.items() ] else: formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()] references = [{"id": str(ex["id"]), "answers": ex[config.answer_column]} for ex in examples] return EvalPrediction(predictions=formatted_predictions, label_ids=references) def compute_metrics(pred, eval_dataset, eval_examples, use_v2, config): preds, label_ids = post_processing_function_qa(eval_examples, eval_dataset, pred.predictions, use_v2, config) if use_v2: result = SQUAD_V2_METRIC.compute(predictions=preds, references=label_ids) else: result = SQUAD_METRIC.compute(predictions=preds, references=label_ids) return {k: round(v, 4) for k, v in result.items()} def create_model_card(config, trainer): if config.valid_split is not None: eval_scores = trainer.evaluate() eval_scores = [f"{k[len('eval_'):]}: {v}" for k, v in eval_scores.items()] eval_scores = "\n\n".join(eval_scores) else: eval_scores = "No validation metrics available" if config.data_path == f"{config.project_name}/autotrain-data" or os.path.isdir(config.data_path): dataset_tag = "" else: dataset_tag = f"\ndatasets:\n- {config.data_path}" if os.path.isdir(config.model): base_model = "" else: base_model = f"\nbase_model: {config.model}" model_card = MODEL_CARD.format( dataset_tag=dataset_tag, validation_metrics=eval_scores, base_model=base_model, ) return model_card def prepare_qa_validation_features(examples, tokenizer, config): # Some of the questions have lots of whitespace on the left, which is not useful and will make the # truncation of the context fail (the tokenized question will take a lots of space). 
So we remove that # left whitespace pad_on_right = tokenizer.padding_side == "right" examples[config.question_column] = [q.lstrip() for q in examples[config.question_column]] # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples[config.question_column if pad_on_right else config.text_column], examples[config.text_column if pad_on_right else config.question_column], truncation="only_second" if pad_on_right else "only_first", max_length=config.max_seq_length, stride=config.max_doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the # corresponding example_id and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples
3
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/extractive_question_answering/__main__.py
import argparse import copy import json from functools import partial from accelerate.state import PartialState from datasets import load_dataset, load_from_disk from huggingface_hub import HfApi from transformers import ( AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer, EarlyStoppingCallback, Trainer, TrainingArguments, ) from transformers.trainer_callback import PrinterCallback from autotrain import logger from autotrain.trainers.common import ( ALLOW_REMOTE_CODE, LossLoggingCallback, TrainStartCallback, UploadLogs, monitor, pause_space, remove_autotrain_data, save_training_params, ) from autotrain.trainers.extractive_question_answering import utils from autotrain.trainers.extractive_question_answering.dataset import ExtractiveQuestionAnsweringDataset from autotrain.trainers.extractive_question_answering.params import ExtractiveQuestionAnsweringParams def parse_args(): # get training_config.json from the end user parser = argparse.ArgumentParser() parser.add_argument("--training_config", type=str, required=True) return parser.parse_args() @monitor def train(config): if isinstance(config, dict): config = ExtractiveQuestionAnsweringParams(**config) train_data = None valid_data = None # check if config.train_split.csv exists in config.data_path if config.train_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") train_data = load_from_disk(config.data_path)[config.train_split] else: if ":" in config.train_split: dataset_config_name, split = config.train_split.split(":") train_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: train_data = load_dataset( config.data_path, split=config.train_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) if config.valid_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") valid_data = load_from_disk(config.data_path)[config.valid_split] else: if ":" in config.valid_split: dataset_config_name, split = config.valid_split.split(":") valid_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: valid_data = load_dataset( config.data_path, split=config.valid_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) logger.info(train_data) if config.valid_split is not None: logger.info(valid_data) model_config = AutoConfig.from_pretrained(config.model, allow_remote_code=ALLOW_REMOTE_CODE, token=config.token) try: model = AutoModelForQuestionAnswering.from_pretrained( config.model, config=model_config, trust_remote_code=ALLOW_REMOTE_CODE, token=config.token, ignore_mismatched_sizes=True, ) except OSError: model = AutoModelForQuestionAnswering.from_pretrained( config.model, config=model_config, from_tf=True, trust_remote_code=ALLOW_REMOTE_CODE, token=config.token, ignore_mismatched_sizes=True, ) tokenizer = AutoTokenizer.from_pretrained(config.model, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE) use_v2 = False if config.valid_split is not None: id_column = list(range(len(valid_data))) for data in valid_data: if -1 in data[config.answer_column]["answer_start"]: use_v2 = True break valid_data = valid_data.add_column("id", id_column) column_names = valid_data.column_names partial_process = partial( utils.prepare_qa_validation_features, tokenizer=tokenizer, config=config, ) processed_eval_dataset = 
valid_data.map( partial_process, batched=True, remove_columns=column_names, num_proc=2, desc="Running tokenizer on validation dataset", ) orig_valid_data = copy.deepcopy(valid_data) train_data = ExtractiveQuestionAnsweringDataset(data=train_data, tokenizer=tokenizer, config=config) if config.valid_split is not None: valid_data = ExtractiveQuestionAnsweringDataset(data=valid_data, tokenizer=tokenizer, config=config) if config.logging_steps == -1: if config.valid_split is not None: logging_steps = int(0.2 * len(valid_data) / config.batch_size) else: logging_steps = int(0.2 * len(train_data) / config.batch_size) if logging_steps == 0: logging_steps = 1 if logging_steps > 25: logging_steps = 25 config.logging_steps = logging_steps else: logging_steps = config.logging_steps logger.info(f"Logging steps: {logging_steps}") training_args = dict( output_dir=config.project_name, per_device_train_batch_size=config.batch_size, per_device_eval_batch_size=2 * config.batch_size, learning_rate=config.lr, num_train_epochs=config.epochs, eval_strategy=config.eval_strategy if config.valid_split is not None else "no", logging_steps=logging_steps, save_total_limit=config.save_total_limit, save_strategy=config.eval_strategy if config.valid_split is not None else "no", gradient_accumulation_steps=config.gradient_accumulation, report_to=config.log, auto_find_batch_size=config.auto_find_batch_size, lr_scheduler_type=config.scheduler, optim=config.optimizer, warmup_ratio=config.warmup_ratio, weight_decay=config.weight_decay, max_grad_norm=config.max_grad_norm, push_to_hub=False, load_best_model_at_end=True if config.valid_split is not None else False, ddp_find_unused_parameters=False, ) if config.mixed_precision == "fp16": training_args["fp16"] = True if config.mixed_precision == "bf16": training_args["bf16"] = True if config.valid_split is not None: early_stop = EarlyStoppingCallback( early_stopping_patience=config.early_stopping_patience, early_stopping_threshold=config.early_stopping_threshold, ) callbacks_to_use = [early_stop] else: callbacks_to_use = [] callbacks_to_use.extend([UploadLogs(config=config), LossLoggingCallback(), TrainStartCallback()]) if config.valid_split is not None: logger.info(processed_eval_dataset) compute_metrics = partial( utils.compute_metrics, eval_dataset=processed_eval_dataset, eval_examples=orig_valid_data, config=config, use_v2=use_v2, ) else: compute_metrics = None args = TrainingArguments(**training_args) trainer_args = dict( args=args, model=model, callbacks=callbacks_to_use, compute_metrics=compute_metrics, ) trainer = Trainer( **trainer_args, train_dataset=train_data, eval_dataset=valid_data, ) trainer.remove_callback(PrinterCallback) trainer.train() logger.info("Finished training, saving model...") trainer.save_model(config.project_name) tokenizer.save_pretrained(config.project_name) model_card = utils.create_model_card(config, trainer) # save model card to output directory as README.md with open(f"{config.project_name}/README.md", "w") as f: f.write(model_card) if config.push_to_hub: if PartialState().process_index == 0: remove_autotrain_data(config) save_training_params(config) logger.info("Pushing model to hub...") api = HfApi(token=config.token) api.create_repo( repo_id=f"{config.username}/{config.project_name}", repo_type="model", private=True, exist_ok=True ) api.upload_folder( folder_path=config.project_name, repo_id=f"{config.username}/{config.project_name}", repo_type="model", ) if PartialState().process_index == 0: pause_space(config) if __name__ == "__main__": args 
= parse_args() training_config = json.load(open(args.training_config)) config = ExtractiveQuestionAnsweringParams(**training_config) train(config)
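The entry point above reads a `--training_config` JSON file and turns it into `ExtractiveQuestionAnsweringParams` before calling `train()`. Below is a minimal sketch of what such a config file could look like; the dataset name, project name, and hyperparameter values are illustrative assumptions, not values mandated by the trainer.

```python
# Sketch: writing a minimal training_config.json for the extractive QA trainer.
# The dataset ("squad"), project name, and hyperparameters are assumptions for
# illustration; any dataset with context/question/answers columns should work.
import json

training_config = {
    "data_path": "squad",
    "model": "bert-base-uncased",
    "train_split": "train",
    "valid_split": "validation",
    "text_column": "context",
    "question_column": "question",
    "answer_column": "answers",
    "epochs": 1,
    "batch_size": 8,
    "project_name": "qa-demo",
}

with open("training_config.json", "w") as f:
    json.dump(training_config, f, indent=2)

# The module would then be launched with:
#   python -m autotrain.trainers.extractive_question_answering --training_config training_config.json
```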
4
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/extractive_question_answering/dataset.py
from functools import partial from autotrain import logger def _prepare_dataset(examples, tokenizer, config): # taken from: # https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py # and modified for AutoTrain pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples[config.question_column if pad_on_right else config.text_column], examples[config.text_column if pad_on_right else config.question_column], truncation="only_second" if pad_on_right else "only_first", max_length=config.max_seq_length, stride=config.max_doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position in the original context. This will # help us compute the start_positions and end_positions. offset_mapping = tokenized_examples.pop("offset_mapping") # Let's label those examples! tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] for i, offsets in enumerate(offset_mapping): # We will label impossible answers with the index of the CLS token. input_ids = tokenized_examples["input_ids"][i] if tokenizer.cls_token_id in input_ids: cls_index = input_ids.index(tokenizer.cls_token_id) elif tokenizer.bos_token_id in input_ids: cls_index = input_ids.index(tokenizer.bos_token_id) else: cls_index = 0 # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] answers = examples[config.answer_column][sample_index] # If no answers are given, set the cls_index as answer. if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != (1 if pad_on_right else 0): token_start_index += 1 # End token index of the current span in the text. token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != (1 if pad_on_right else 0): token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). 
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append(token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples class ExtractiveQuestionAnsweringDataset: """ A dataset class for extractive question answering tasks. Args: data (Dataset): The dataset to be processed. tokenizer (PreTrainedTokenizer): The tokenizer to be used for processing the data. config (dict): Configuration parameters for processing the dataset. Attributes: data (Dataset): The original dataset. tokenizer (PreTrainedTokenizer): The tokenizer used for processing the data. config (dict): Configuration parameters for processing the dataset. tokenized_data (Dataset): The tokenized dataset after applying the mapping function. Methods: __len__(): Returns the length of the tokenized dataset. __getitem__(item): Returns the tokenized data at the specified index. """ def __init__(self, data, tokenizer, config): self.data = data self.tokenizer = tokenizer self.config = config logger.info("Processing data for Extractive QA") mapping_function = partial(_prepare_dataset, tokenizer=self.tokenizer, config=self.config) self.tokenized_data = self.data.map( mapping_function, batched=True, remove_columns=self.data.column_names, ) def __len__(self): return len(self.tokenized_data) def __getitem__(self, item): return self.tokenized_data[item]
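Since `_prepare_dataset` maps character-level answer spans onto token start/end positions, the wrapper can be exercised on its own with a tiny in-memory dataset. The snippet below is a sketch: the model name and the one-example dataset are assumptions, and a `SimpleNamespace` stands in for the real params object, exposing only the fields the dataset class reads.

```python
# Sketch: tokenizing a single QA example with the dataset wrapper above.
# Model name and data are assumptions for illustration.
from types import SimpleNamespace

from datasets import Dataset
from transformers import AutoTokenizer

from autotrain.trainers.extractive_question_answering.dataset import (
    ExtractiveQuestionAnsweringDataset,
)

raw = Dataset.from_dict(
    {
        "context": ["Paris is the capital of France."],
        "question": ["What is the capital of France?"],
        "answers": [{"text": ["Paris"], "answer_start": [0]}],
    }
)

# Lightweight stand-in for ExtractiveQuestionAnsweringParams with only the
# fields the dataset class reads.
config = SimpleNamespace(
    text_column="context",
    question_column="question",
    answer_column="answers",
    max_seq_length=384,
    max_doc_stride=128,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = ExtractiveQuestionAnsweringDataset(data=raw, tokenizer=tokenizer, config=config)
print(len(dataset), dataset[0].keys())  # tokenized features with start/end positions
```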
5
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/extractive_question_answering/params.py
from typing import Optional from pydantic import Field from autotrain.trainers.common import AutoTrainParams class ExtractiveQuestionAnsweringParams(AutoTrainParams): """ ExtractiveQuestionAnsweringParams Parameters: data_path (str): Path to the dataset. model (str): Pre-trained model name. Default is "bert-base-uncased". lr (float): Learning rate for the optimizer. Default is 5e-5. epochs (int): Number of training epochs. Default is 3. max_seq_length (int): Maximum sequence length for inputs. Default is 128. max_doc_stride (int): Maximum document stride for splitting context. Default is 128. batch_size (int): Batch size for training. Default is 8. warmup_ratio (float): Warmup proportion for learning rate scheduler. Default is 0.1. gradient_accumulation (int): Number of gradient accumulation steps. Default is 1. optimizer (str): Optimizer type. Default is "adamw_torch". scheduler (str): Learning rate scheduler type. Default is "linear". weight_decay (float): Weight decay for the optimizer. Default is 0.0. max_grad_norm (float): Maximum gradient norm for clipping. Default is 1.0. seed (int): Random seed for reproducibility. Default is 42. train_split (str): Name of the training data split. Default is "train". valid_split (Optional[str]): Name of the validation data split. Default is None. text_column (str): Column name for context/text. Default is "context". question_column (str): Column name for questions. Default is "question". answer_column (str): Column name for answers. Default is "answers". logging_steps (int): Number of steps between logging. Default is -1. project_name (str): Name of the project for output directory. Default is "project-name". auto_find_batch_size (bool): Automatically find optimal batch size. Default is False. mixed_precision (Optional[str]): Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int): Maximum number of checkpoints to save. Default is 1. token (Optional[str]): Authentication token for Hugging Face Hub. Default is None. push_to_hub (bool): Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str): Evaluation strategy during training. Default is "epoch". username (Optional[str]): Hugging Face username for authentication. Default is None. log (str): Logging method for experiment tracking. Default is "none". early_stopping_patience (int): Number of epochs with no improvement for early stopping. Default is 5. early_stopping_threshold (float): Threshold for early stopping improvement. Default is 0.01. 
""" data_path: str = Field(None, title="Path to the dataset") model: str = Field("bert-base-uncased", title="Pre-trained model name") lr: float = Field(5e-5, title="Learning rate for the optimizer") epochs: int = Field(3, title="Number of training epochs") max_seq_length: int = Field(128, title="Maximum sequence length for inputs") max_doc_stride: int = Field(128, title="Maximum document stride for splitting context") batch_size: int = Field(8, title="Batch size for training") warmup_ratio: float = Field(0.1, title="Warmup proportion for learning rate scheduler") gradient_accumulation: int = Field(1, title="Number of gradient accumulation steps") optimizer: str = Field("adamw_torch", title="Optimizer type") scheduler: str = Field("linear", title="Learning rate scheduler type") weight_decay: float = Field(0.0, title="Weight decay for the optimizer") max_grad_norm: float = Field(1.0, title="Maximum gradient norm for clipping") seed: int = Field(42, title="Random seed for reproducibility") train_split: str = Field("train", title="Name of the training data split") valid_split: Optional[str] = Field(None, title="Name of the validation data split") text_column: str = Field("context", title="Column name for context/text") question_column: str = Field("question", title="Column name for questions") answer_column: str = Field("answers", title="Column name for answers") logging_steps: int = Field(-1, title="Number of steps between logging") project_name: str = Field("project-name", title="Name of the project for output directory") auto_find_batch_size: bool = Field(False, title="Automatically find optimal batch size") mixed_precision: Optional[str] = Field(None, title="Mixed precision training mode (fp16, bf16, or None)") save_total_limit: int = Field(1, title="Maximum number of checkpoints to save") token: Optional[str] = Field(None, title="Authentication token for Hugging Face Hub") push_to_hub: bool = Field(False, title="Whether to push the model to Hugging Face Hub") eval_strategy: str = Field("epoch", title="Evaluation strategy during training") username: Optional[str] = Field(None, title="Hugging Face username for authentication") log: str = Field("none", title="Logging method for experiment tracking") early_stopping_patience: int = Field(5, title="Number of epochs with no improvement for early stopping") early_stopping_threshold: float = Field(0.01, title="Threshold for early stopping improvement")
6
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/text_classification/utils.py
import os import numpy as np import requests from sklearn import metrics BINARY_CLASSIFICATION_EVAL_METRICS = ( "eval_loss", "eval_accuracy", "eval_f1", "eval_auc", "eval_precision", "eval_recall", ) MULTI_CLASS_CLASSIFICATION_EVAL_METRICS = ( "eval_loss", "eval_accuracy", "eval_f1_macro", "eval_f1_micro", "eval_f1_weighted", "eval_precision_macro", "eval_precision_micro", "eval_precision_weighted", "eval_recall_macro", "eval_recall_micro", "eval_recall_weighted", ) MODEL_CARD = """ --- tags: - autotrain - text-classification{base_model} widget: - text: "I love AutoTrain"{dataset_tag} --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics {validation_metrics} """ def _binary_classification_metrics(pred): """ Calculate various binary classification metrics. Args: pred (tuple): A tuple containing raw predictions and true labels. - raw_predictions (numpy.ndarray): The raw prediction scores from the model. - labels (numpy.ndarray): The true labels. Returns: dict: A dictionary containing the following metrics: - "f1" (float): The F1 score. - "precision" (float): The precision score. - "recall" (float): The recall score. - "auc" (float): The Area Under the ROC Curve (AUC) score. - "accuracy" (float): The accuracy score. """ raw_predictions, labels = pred predictions = np.argmax(raw_predictions, axis=1) result = { "f1": metrics.f1_score(labels, predictions), "precision": metrics.precision_score(labels, predictions), "recall": metrics.recall_score(labels, predictions), "auc": metrics.roc_auc_score(labels, raw_predictions[:, 1]), "accuracy": metrics.accuracy_score(labels, predictions), } return result def _multi_class_classification_metrics(pred): """ Compute various classification metrics for multi-class classification. Args: pred (tuple): A tuple containing raw predictions and true labels. - raw_predictions (numpy.ndarray): The raw prediction scores for each class. - labels (numpy.ndarray): The true labels. Returns: dict: A dictionary containing the following metrics: - "f1_macro": F1 score with macro averaging. - "f1_micro": F1 score with micro averaging. - "f1_weighted": F1 score with weighted averaging. - "precision_macro": Precision score with macro averaging. - "precision_micro": Precision score with micro averaging. - "precision_weighted": Precision score with weighted averaging. - "recall_macro": Recall score with macro averaging. - "recall_micro": Recall score with micro averaging. - "recall_weighted": Recall score with weighted averaging. - "accuracy": Accuracy score. """ raw_predictions, labels = pred predictions = np.argmax(raw_predictions, axis=1) results = { "f1_macro": metrics.f1_score(labels, predictions, average="macro"), "f1_micro": metrics.f1_score(labels, predictions, average="micro"), "f1_weighted": metrics.f1_score(labels, predictions, average="weighted"), "precision_macro": metrics.precision_score(labels, predictions, average="macro"), "precision_micro": metrics.precision_score(labels, predictions, average="micro"), "precision_weighted": metrics.precision_score(labels, predictions, average="weighted"), "recall_macro": metrics.recall_score(labels, predictions, average="macro"), "recall_micro": metrics.recall_score(labels, predictions, average="micro"), "recall_weighted": metrics.recall_score(labels, predictions, average="weighted"), "accuracy": metrics.accuracy_score(labels, predictions), } return results def create_model_card(config, trainer, num_classes): """ Generates a model card for a text classification model. 
Args: config (object): Configuration object containing various settings and paths. trainer (object): Trainer object used for evaluating the model. num_classes (int): Number of classes in the classification task. Returns: str: A formatted string representing the model card. """ if config.valid_split is not None: eval_scores = trainer.evaluate() valid_metrics = ( BINARY_CLASSIFICATION_EVAL_METRICS if num_classes == 2 else MULTI_CLASS_CLASSIFICATION_EVAL_METRICS ) eval_scores = [f"{k[len('eval_'):]}: {v}" for k, v in eval_scores.items() if k in valid_metrics] eval_scores = "\n\n".join(eval_scores) else: eval_scores = "No validation metrics available" if config.data_path == f"{config.project_name}/autotrain-data" or os.path.isdir(config.data_path): dataset_tag = "" else: dataset_tag = f"\ndatasets:\n- {config.data_path}" if os.path.isdir(config.model): base_model = "" else: base_model = f"\nbase_model: {config.model}" model_card = MODEL_CARD.format( dataset_tag=dataset_tag, validation_metrics=eval_scores, base_model=base_model, ) return model_card def pause_endpoint(params): """ Pauses a Hugging Face endpoint using the provided parameters. This function constructs an API URL using the endpoint ID from the environment variables, and sends a POST request to pause the specified endpoint. Args: params (object): An object containing the following attribute: - token (str): The authorization token required to authenticate the API request. Returns: dict: The JSON response from the API call. """ endpoint_id = os.environ["ENDPOINT_ID"] username = endpoint_id.split("/")[0] project_name = endpoint_id.split("/")[1] api_url = f"https://api.endpoints.huggingface.cloud/v2/endpoint/{username}/{project_name}/pause" headers = {"Authorization": f"Bearer {params.token}"} r = requests.post(api_url, headers=headers) return r.json()
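The metric helpers take the raw `(predictions, labels)` tuple that the `Trainer` passes to `compute_metrics`, so they can be sanity-checked directly with dummy arrays. A small sketch follows; the scores below are made up for illustration.

```python
# Sketch: calling the binary classification metrics helper directly.
import numpy as np

from autotrain.trainers.text_classification import utils

raw_predictions = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # per-class scores
labels = np.array([1, 0, 1])

print(utils._binary_classification_metrics((raw_predictions, labels)))
# -> {'f1': 1.0, 'precision': 1.0, 'recall': 1.0, 'auc': 1.0, 'accuracy': 1.0}
```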
7
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/text_classification/__main__.py
import argparse import json from accelerate.state import PartialState from datasets import load_dataset, load_from_disk from huggingface_hub import HfApi from transformers import ( AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, EarlyStoppingCallback, Trainer, TrainingArguments, ) from transformers.trainer_callback import PrinterCallback from autotrain import logger from autotrain.trainers.common import ( ALLOW_REMOTE_CODE, LossLoggingCallback, TrainStartCallback, UploadLogs, monitor, pause_space, remove_autotrain_data, save_training_params, ) from autotrain.trainers.text_classification import utils from autotrain.trainers.text_classification.dataset import TextClassificationDataset from autotrain.trainers.text_classification.params import TextClassificationParams def parse_args(): # get training_config.json from the end user parser = argparse.ArgumentParser() parser.add_argument("--training_config", type=str, required=True) return parser.parse_args() @monitor def train(config): if isinstance(config, dict): config = TextClassificationParams(**config) train_data = None valid_data = None # check if config.train_split.csv exists in config.data_path if config.train_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") train_data = load_from_disk(config.data_path)[config.train_split] else: if ":" in config.train_split: dataset_config_name, split = config.train_split.split(":") train_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: train_data = load_dataset( config.data_path, split=config.train_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) if config.valid_split is not None: if config.data_path == f"{config.project_name}/autotrain-data": logger.info("loading dataset from disk") valid_data = load_from_disk(config.data_path)[config.valid_split] else: if ":" in config.valid_split: dataset_config_name, split = config.valid_split.split(":") valid_data = load_dataset( config.data_path, name=dataset_config_name, split=split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) else: valid_data = load_dataset( config.data_path, split=config.valid_split, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE, ) classes = train_data.features[config.target_column].names label2id = {c: i for i, c in enumerate(classes)} num_classes = len(classes) if num_classes < 2: raise ValueError("Invalid number of classes. Must be greater than 1.") if config.valid_split is not None: num_classes_valid = len(valid_data.unique(config.target_column)) if num_classes_valid != num_classes: raise ValueError( f"Number of classes in train and valid are not the same. 
Training has {num_classes} and valid has {num_classes_valid}" ) model_config = AutoConfig.from_pretrained(config.model, num_labels=num_classes) model_config._num_labels = len(label2id) model_config.label2id = label2id model_config.id2label = {v: k for k, v in label2id.items()} try: model = AutoModelForSequenceClassification.from_pretrained( config.model, config=model_config, trust_remote_code=ALLOW_REMOTE_CODE, token=config.token, ignore_mismatched_sizes=True, ) except OSError: model = AutoModelForSequenceClassification.from_pretrained( config.model, config=model_config, from_tf=True, trust_remote_code=ALLOW_REMOTE_CODE, token=config.token, ignore_mismatched_sizes=True, ) tokenizer = AutoTokenizer.from_pretrained(config.model, token=config.token, trust_remote_code=ALLOW_REMOTE_CODE) train_data = TextClassificationDataset(data=train_data, tokenizer=tokenizer, config=config) if config.valid_split is not None: valid_data = TextClassificationDataset(data=valid_data, tokenizer=tokenizer, config=config) if config.logging_steps == -1: if config.valid_split is not None: logging_steps = int(0.2 * len(valid_data) / config.batch_size) else: logging_steps = int(0.2 * len(train_data) / config.batch_size) if logging_steps == 0: logging_steps = 1 if logging_steps > 25: logging_steps = 25 config.logging_steps = logging_steps else: logging_steps = config.logging_steps logger.info(f"Logging steps: {logging_steps}") training_args = dict( output_dir=config.project_name, per_device_train_batch_size=config.batch_size, per_device_eval_batch_size=2 * config.batch_size, learning_rate=config.lr, num_train_epochs=config.epochs, eval_strategy=config.eval_strategy if config.valid_split is not None else "no", logging_steps=logging_steps, save_total_limit=config.save_total_limit, save_strategy=config.eval_strategy if config.valid_split is not None else "no", gradient_accumulation_steps=config.gradient_accumulation, report_to=config.log, auto_find_batch_size=config.auto_find_batch_size, lr_scheduler_type=config.scheduler, optim=config.optimizer, warmup_ratio=config.warmup_ratio, weight_decay=config.weight_decay, max_grad_norm=config.max_grad_norm, push_to_hub=False, load_best_model_at_end=True if config.valid_split is not None else False, ddp_find_unused_parameters=False, ) if config.mixed_precision == "fp16": training_args["fp16"] = True if config.mixed_precision == "bf16": training_args["bf16"] = True if config.valid_split is not None: early_stop = EarlyStoppingCallback( early_stopping_patience=config.early_stopping_patience, early_stopping_threshold=config.early_stopping_threshold, ) callbacks_to_use = [early_stop] else: callbacks_to_use = [] callbacks_to_use.extend([UploadLogs(config=config), LossLoggingCallback(), TrainStartCallback()]) args = TrainingArguments(**training_args) trainer_args = dict( args=args, model=model, callbacks=callbacks_to_use, compute_metrics=( utils._binary_classification_metrics if num_classes == 2 else utils._multi_class_classification_metrics ), ) trainer = Trainer( **trainer_args, train_dataset=train_data, eval_dataset=valid_data, ) trainer.remove_callback(PrinterCallback) trainer.train() logger.info("Finished training, saving model...") trainer.save_model(config.project_name) tokenizer.save_pretrained(config.project_name) model_card = utils.create_model_card(config, trainer, num_classes) # save model card to output directory as README.md with open(f"{config.project_name}/README.md", "w") as f: f.write(model_card) if config.push_to_hub: if PartialState().process_index == 0: 
remove_autotrain_data(config) save_training_params(config) logger.info("Pushing model to hub...") api = HfApi(token=config.token) api.create_repo( repo_id=f"{config.username}/{config.project_name}", repo_type="model", private=True, exist_ok=True ) api.upload_folder( folder_path=config.project_name, repo_id=f"{config.username}/{config.project_name}", repo_type="model", ) if PartialState().process_index == 0: pause_space(config) if __name__ == "__main__": args = parse_args() training_config = json.load(open(args.training_config)) config = TextClassificationParams(**training_config) train(config)
8
0
hf_public_repos/autotrain-advanced/src/autotrain/trainers
hf_public_repos/autotrain-advanced/src/autotrain/trainers/text_classification/dataset.py
import torch


class TextClassificationDataset:
    """
    A dataset class for text classification tasks.

    Args:
        data (list): The dataset containing text and target columns.
        tokenizer (PreTrainedTokenizer): The tokenizer to preprocess the text data.
        config (object): Configuration object containing dataset parameters.

    Attributes:
        data (list): The dataset containing text and target columns.
        tokenizer (PreTrainedTokenizer): The tokenizer to preprocess the text data.
        config (object): Configuration object containing dataset parameters.
        text_column (str): The name of the column containing text data.
        target_column (str): The name of the column containing target labels.

    Methods:
        __len__(): Returns the number of samples in the dataset.
        __getitem__(item): Returns a dictionary containing tokenized input ids,
            attention mask, token type ids (if available), and target labels for
            the given item index.
    """

    def __init__(self, data, tokenizer, config):
        self.data = data
        self.tokenizer = tokenizer
        self.config = config
        self.text_column = self.config.text_column
        self.target_column = self.config.target_column

    def __len__(self):
        return len(self.data)

    def __getitem__(self, item):
        text = str(self.data[item][self.text_column])
        target = self.data[item][self.target_column]
        target = int(target)
        inputs = self.tokenizer(
            text,
            max_length=self.config.max_seq_length,
            padding="max_length",
            truncation=True,
        )
        ids = inputs["input_ids"]
        mask = inputs["attention_mask"]
        if "token_type_ids" in inputs:
            token_type_ids = inputs["token_type_ids"]
        else:
            token_type_ids = None
        if token_type_ids is not None:
            return {
                "input_ids": torch.tensor(ids, dtype=torch.long),
                "attention_mask": torch.tensor(mask, dtype=torch.long),
                "token_type_ids": torch.tensor(token_type_ids, dtype=torch.long),
                "labels": torch.tensor(target, dtype=torch.long),
            }
        return {
            "input_ids": torch.tensor(ids, dtype=torch.long),
            "attention_mask": torch.tensor(mask, dtype=torch.long),
            "labels": torch.tensor(target, dtype=torch.long),
        }
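A quick sketch of using this dataset wrapper on its own, outside the `Trainer`; the model name, the two in-memory examples, and the `SimpleNamespace` stand-in for the params object are assumptions for illustration:

```python
# Sketch: tokenizing a couple of labeled texts with the dataset wrapper above.
from types import SimpleNamespace

from transformers import AutoTokenizer

from autotrain.trainers.text_classification.dataset import TextClassificationDataset

data = [
    {"text": "I love AutoTrain", "target": 1},
    {"text": "This is terrible", "target": 0},
]
config = SimpleNamespace(text_column="text", target_column="target", max_seq_length=128)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

dataset = TextClassificationDataset(data=data, tokenizer=tokenizer, config=config)
sample = dataset[0]
print(sample["input_ids"].shape, sample["labels"])  # torch tensors ready for the Trainer
```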
9
0
hf_public_repos/api-inference-community/docker_images/spacy
hf_public_repos/api-inference-community/docker_images/spacy/tests/test_api_text_classification.py
import json import os from unittest import TestCase, skipIf from app.main import ALLOWED_TASKS from starlette.testclient import TestClient from tests.test_api import TESTABLE_MODELS @skipIf( "text-classification" not in ALLOWED_TASKS, "text-classification not implemented", ) class TextClassificationTestCase(TestCase): def setUp(self): model_id = TESTABLE_MODELS["text-classification"] self.old_model_id = os.getenv("MODEL_ID") self.old_task = os.getenv("TASK") os.environ["MODEL_ID"] = model_id os.environ["TASK"] = "text-classification" from app.main import app self.app = app @classmethod def setUpClass(cls): from app.main import get_pipeline get_pipeline.cache_clear() def tearDown(self): if self.old_model_id is not None: os.environ["MODEL_ID"] = self.old_model_id else: del os.environ["MODEL_ID"] if self.old_task is not None: os.environ["TASK"] = self.old_task else: del os.environ["TASK"] def test_simple(self): inputs = "It is a beautiful day outside" with TestClient(self.app) as client: response = client.post("/", json={"inputs": inputs}) self.assertEqual( response.status_code, 200, ) content = json.loads(response.content) self.assertEqual(type(content), list) self.assertEqual(len(content), 1) self.assertEqual(type(content[0]), list) self.assertEqual( set(k for el in content[0] for k in el.keys()), {"label", "score"}, ) with TestClient(self.app) as client: response = client.post("/", json=inputs) self.assertEqual( response.status_code, 200, ) content = json.loads(response.content) self.assertEqual(type(content), list) self.assertEqual(len(content), 1) self.assertEqual(type(content[0]), list) self.assertEqual( set(k for el in content[0] for k in el.keys()), {"label", "score"}, ) def test_malformed_question(self): with TestClient(self.app) as client: response = client.post("/", data=b"\xc3\x28") self.assertEqual( response.status_code, 400, ) self.assertEqual( response.content, b'{"error":"\'utf-8\' codec can\'t decode byte 0xc3 in position 0: invalid continuation byte"}', )
0
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/fairseq/requirements.txt
api-inference-community==0.0.23
g2p_en==2.1.0
g2pc==0.9.9.3
phonemizer==2.2.1
librosa==0.8.1
hanziconv==0.3.2
sentencepiece==0.1.96
# Dummy comment to trigger automatic deploy.
git+https://github.com/facebookresearch/fairseq.git@d47119871c2ac9a0a0aa2904dd8cfc1929b113d9#egg=fairseq
huggingface_hub==0.5.1
1
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/fairseq/Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
LABEL maintainer="me <[email protected]>"
# Add any system dependency here
# RUN apt-get update -y && apt-get install libXXX -y
RUN apt-get update -y && apt-get install ffmpeg espeak-ng -y
RUN pip install --no-cache-dir numpy==1.22 torch==1.11
COPY ./requirements.txt /app
RUN pip install -U pip
# This will make further requirements.txt changes faster
# Numpy is REQUIRED because pkusage requires numpy to be already installed
RUN pip install --no-cache-dir -r requirements.txt
COPY ./prestart.sh /app/
# Most DL models are quite large in terms of memory, using workers is a HUGE
# slowdown because of the fork and GIL with python.
# Using multiple pods seems like a better default strategy.
# Feel free to override if it does not make sense for your library.
ARG max_workers=1
ENV MAX_WORKERS=$max_workers
ENV HUGGINGFACE_HUB_CACHE=/data
# Necessary on GPU environment docker.
# TIMEOUT env variable is used by nvcr.io/nvidia/pytorch:xx for another purpose
# rendering TIMEOUT defined by uvicorn impossible to use correctly
# We're overriding it to be renamed UVICORN_TIMEOUT
# UVICORN_TIMEOUT is a useful variable for very large models that take more
# than 30s (the default) to load in memory.
# If UVICORN_TIMEOUT is too low, uvicorn will simply never loads as it will
# kill workers all the time before they finish.
RUN sed -i 's/TIMEOUT/UVICORN_TIMEOUT/g' /gunicorn_conf.py
COPY ./app /app/app
2
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/fairseq/prestart.sh
python app/main.py
3
0
hf_public_repos/api-inference-community/docker_images/fairseq
hf_public_repos/api-inference-community/docker_images/fairseq/app/main.py
import functools import logging import os from typing import Dict, Type from api_inference_community.routes import pipeline_route, status_ok from app.pipelines import Pipeline, SpeechToSpeechPipeline, TextToSpeechPipeline from starlette.applications import Starlette from starlette.middleware import Middleware from starlette.middleware.gzip import GZipMiddleware from starlette.routing import Route TASK = os.getenv("TASK") MODEL_ID = os.getenv("MODEL_ID") logger = logging.getLogger(__name__) # Add the allowed tasks # Supported tasks are: # - text-generation # - text-classification # - token-classification # - translation # - summarization # - automatic-speech-recognition # - ... # For instance # from app.pipelines import AutomaticSpeecRecognitionPipeline # ALLOWED_TASKS = {"automatic-speech-recognition": AutomaticSpeechRecognitionPipeline} # You can check the requirements and expectations of each pipelines in their respective # directories. Implement directly within the directories. ALLOWED_TASKS: Dict[str, Type[Pipeline]] = { "text-to-speech": TextToSpeechPipeline, "audio-to-audio": SpeechToSpeechPipeline, } @functools.lru_cache() def get_pipeline() -> Pipeline: task = os.environ["TASK"] model_id = os.environ["MODEL_ID"] if task not in ALLOWED_TASKS: raise EnvironmentError(f"{task} is not a valid pipeline for model : {model_id}") return ALLOWED_TASKS[task](model_id) routes = [ Route("/{whatever:path}", status_ok), Route("/{whatever:path}", pipeline_route, methods=["POST"]), ] middleware = [Middleware(GZipMiddleware, minimum_size=1000)] if os.environ.get("DEBUG", "") == "1": from starlette.middleware.cors import CORSMiddleware middleware.append( Middleware( CORSMiddleware, allow_origins=["*"], allow_headers=["*"], allow_methods=["*"], ) ) app = Starlette(routes=routes, middleware=middleware) @app.on_event("startup") async def startup_event(): logger = logging.getLogger("uvicorn.access") handler = logging.StreamHandler() handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")) logger.handlers = [handler] # Link between `api-inference-community` and framework code. app.get_pipeline = get_pipeline try: get_pipeline() except Exception: # We can fail so we can show exception later. pass if __name__ == "__main__": try: get_pipeline() except Exception: # We can fail so we can show exception later. pass
4
0
hf_public_repos/api-inference-community/docker_images/fairseq/app
hf_public_repos/api-inference-community/docker_images/fairseq/app/pipelines/utils.py
ARG_OVERRIDES_MAP = {
    "facebook/xm_transformer_s2ut_800m-es-en-st-asr-bt_h1_2022": {
        "config_yaml": "config.yaml",
        "task": "speech_to_text",
    }
}
5
0
hf_public_repos/api-inference-community/docker_images/fairseq/app
hf_public_repos/api-inference-community/docker_images/fairseq/app/pipelines/text_to_speech.py
import os
from typing import Tuple

import numpy as np
from app.pipelines import Pipeline
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface


class TextToSpeechPipeline(Pipeline):
    def __init__(self, model_id: str):
        model, cfg, task = load_model_ensemble_and_task_from_hf_hub(
            model_id,
            arg_overrides={"vocoder": "griffin_lim", "fp16": False},
            cache_dir=os.getenv("HUGGINGFACE_HUB_CACHE"),
        )
        self.model = model[0].cpu()
        self.model.eval()
        cfg["task"].cpu = True
        self.task = task
        TTSHubInterface.update_cfg_with_data_cfg(cfg, self.task.data_cfg)
        self.generator = self.task.build_generator(model, cfg)

    def __call__(self, inputs: str) -> Tuple[np.array, int]:
        """
        Args:
            inputs (:obj:`str`):
                The text to generate audio from
        Return:
            A :obj:`np.array` and a :obj:`int`: The raw waveform as a numpy array,
            and the sampling rate as an int.
        """
        inputs = inputs.strip("\x00")
        if len(inputs) == 0:
            return np.zeros((0,)), self.task.sr
        sample = TTSHubInterface.get_model_input(self.task, inputs)
        wav, sr = TTSHubInterface.get_prediction(
            self.task, self.model, self.generator, sample
        )
        return wav.numpy(), sr
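For reference, a sketch of how this pipeline could be exercised outside the Starlette app; the checkpoint id is an assumption (any fairseq TTS model on the Hub that works with the Griffin-Lim vocoder override should behave similarly):

```python
# Sketch: calling the fairseq TTS pipeline directly.
# The model id below is an assumption for illustration.
from app.pipelines.text_to_speech import TextToSpeechPipeline

pipeline = TextToSpeechPipeline("facebook/fastspeech2-en-ljspeech")
waveform, sampling_rate = pipeline("Hello from the fairseq text to speech pipeline.")
print(waveform.shape, sampling_rate)  # 1-D numpy waveform and its sample rate
```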
6
0
hf_public_repos/api-inference-community/docker_images/fairseq/app
hf_public_repos/api-inference-community/docker_images/fairseq/app/pipelines/base.py
from abc import ABC, abstractmethod
from typing import Any


class Pipeline(ABC):
    @abstractmethod
    def __init__(self, model_id: str):
        raise NotImplementedError("Pipelines should implement an __init__ method")

    @abstractmethod
    def __call__(self, inputs: Any) -> Any:
        raise NotImplementedError("Pipelines should implement a __call__ method")


class PipelineException(Exception):
    pass
7
0
hf_public_repos/api-inference-community/docker_images/fairseq/app
hf_public_repos/api-inference-community/docker_images/fairseq/app/pipelines/__init__.py
from app.pipelines.base import Pipeline, PipelineException  # isort:skip

from app.pipelines.audio_to_audio import SpeechToSpeechPipeline
from app.pipelines.text_to_speech import TextToSpeechPipeline
8
0
hf_public_repos/api-inference-community/docker_images/fairseq/app
hf_public_repos/api-inference-community/docker_images/fairseq/app/pipelines/audio_to_audio.py
import json import os from pathlib import Path from typing import List, Tuple import numpy as np import torch from app.pipelines import Pipeline from app.pipelines.utils import ARG_OVERRIDES_MAP from fairseq import hub_utils from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_speech.hub_interface import S2SHubInterface from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech import CodeHiFiGANVocoder from fairseq.models.text_to_speech.hub_interface import ( TTSHubInterface, VocoderHubInterface, ) from huggingface_hub import snapshot_download class SpeechToSpeechPipeline(Pipeline): def __init__(self, model_id: str): arg_overrides = ARG_OVERRIDES_MAP.get( model_id, {} ) # Model specific override. TODO: Update on checkpoint side in the future arg_overrides["config_yaml"] = "config.yaml" # common override models, cfg, task = load_model_ensemble_and_task_from_hf_hub( model_id, arg_overrides=arg_overrides, cache_dir=os.getenv("HUGGINGFACE_HUB_CACHE"), ) self.cfg = cfg self.model = models[0].cpu() self.model.eval() self.task = task self.sampling_rate = getattr(self.task, "sr", None) or 16_000 tgt_lang = self.task.data_cfg.hub.get("tgt_lang", None) pfx = f"{tgt_lang}_" if self.task.data_cfg.prepend_tgt_lang_tag else "" generation_args = self.task.data_cfg.hub.get(f"{pfx}generation_args", None) if generation_args is not None: for key in generation_args: setattr(cfg.generation, key, generation_args[key]) self.generator = task.build_generator([self.model], cfg.generation) tts_model_id = self.task.data_cfg.hub.get(f"{pfx}tts_model_id", None) self.unit_vocoder = self.task.data_cfg.hub.get(f"{pfx}unit_vocoder", None) self.tts_model, self.tts_task, self.tts_generator = None, None, None if tts_model_id is not None: _id = tts_model_id.split(":")[-1] cache_dir = os.getenv("HUGGINGFACE_HUB_CACHE") if self.unit_vocoder is not None: library_name = "fairseq" cache_dir = ( cache_dir or (Path.home() / ".cache" / library_name).as_posix() ) cache_dir = snapshot_download( f"facebook/{_id}", cache_dir=cache_dir, library_name=library_name ) x = hub_utils.from_pretrained( cache_dir, "model.pt", ".", archive_map=CodeHiFiGANVocoder.hub_models(), config_yaml="config.json", fp16=False, is_vocoder=True, ) with open(f"{x['args']['data']}/config.json") as f: vocoder_cfg = json.load(f) assert ( len(x["args"]["model_path"]) == 1 ), "Too many vocoder models in the input" vocoder = CodeHiFiGANVocoder(x["args"]["model_path"][0], vocoder_cfg) self.tts_model = VocoderHubInterface(vocoder_cfg, vocoder) else: ( tts_models, tts_cfg, self.tts_task, ) = load_model_ensemble_and_task_from_hf_hub( f"facebook/{_id}", arg_overrides={"vocoder": "griffin_lim", "fp16": False}, cache_dir=cache_dir, ) self.tts_model = tts_models[0].cpu() self.tts_model.eval() tts_cfg["task"].cpu = True TTSHubInterface.update_cfg_with_data_cfg( tts_cfg, self.tts_task.data_cfg ) self.tts_generator = self.tts_task.build_generator( [self.tts_model], tts_cfg ) def __call__(self, inputs: np.array) -> Tuple[np.array, int, List[str]]: """ Args: inputs (:obj:`np.array`): The raw waveform of audio received. By default sampled at `self.sampling_rate`. The shape of this array is `T`, where `T` is the time axis Return: A :obj:`tuple` containing: - :obj:`np.array`: The return shape of the array must be `C'`x`T'` - a :obj:`int`: the sampling rate as an int in Hz. - a :obj:`List[str]`: the annotation for each out channel. 
This can be the name of the instruments for audio source separation or some annotation for speech enhancement. The length must be `C'`. """ _inputs = torch.from_numpy(inputs).unsqueeze(0) sample, text = None, None if self.cfg.task._name in ["speech_to_text", "speech_to_text_sharded"]: sample = S2THubInterface.get_model_input(self.task, _inputs) text = S2THubInterface.get_prediction( self.task, self.model, self.generator, sample ) elif self.cfg.task._name in ["speech_to_speech"]: s2shubinerface = S2SHubInterface(self.cfg, self.task, self.model) sample = s2shubinerface.get_model_input(self.task, _inputs) text = S2SHubInterface.get_prediction( self.task, self.model, self.generator, sample ) wav, sr = np.zeros((0,)), self.sampling_rate if self.unit_vocoder is not None: tts_sample = self.tts_model.get_model_input(text) wav, sr = self.tts_model.get_prediction(tts_sample) text = "" else: tts_sample = TTSHubInterface.get_model_input(self.tts_task, text) wav, sr = TTSHubInterface.get_prediction( self.tts_task, self.tts_model, self.tts_generator, tts_sample ) return wav.unsqueeze(0).numpy(), sr, [text]
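The speech-to-speech pipeline expects a 1-D waveform sampled at `self.sampling_rate` (16 kHz by default) and returns the translated audio, its sample rate, and a text annotation. Below is a sketch of driving it with a local file; the checkpoint id (taken from `ARG_OVERRIDES_MAP` above), the `soundfile` dependency, and the wav file name are assumptions for illustration.

```python
# Sketch: running the speech-to-speech pipeline on a local mono 16 kHz wav file.
import soundfile as sf

from app.pipelines.audio_to_audio import SpeechToSpeechPipeline

pipeline = SpeechToSpeechPipeline("facebook/xm_transformer_s2ut_800m-es-en-st-asr-bt_h1_2022")
waveform, _sr = sf.read("spanish_speech_16khz.wav", dtype="float32")  # shape (T,)
out_wav, out_sr, annotations = pipeline(waveform)
print(out_wav.shape, out_sr, annotations)  # (C', T') array, sample rate, per-channel labels
```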
9
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/falcon/README.md
# candle-falcon

Falcon is a general-purpose large language model.
0
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/quantized-qwen2-instruct/main.rs
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use std::io::Write; use tokenizers::Tokenizer; use candle::quantized::gguf_file; use candle::Tensor; use candle_transformers::generation::{LogitsProcessor, Sampling}; use candle_examples::token_output_stream::TokenOutputStream; use candle_transformers::models::quantized_qwen2::ModelWeights as Qwen2; const DEFAULT_PROMPT: &str = "Write a function to count prime numbers up to N. "; #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Which { #[value(name = "0.5b")] W2_0_5b, #[value(name = "1.5b")] W2_1_5b, #[value(name = "7b")] W2_7b, #[value(name = "72b")] W2_72b, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// GGUF file to load, typically a .gguf file generated by the quantize command from llama.cpp #[arg(long)] model: Option<String>, /// The initial prompt, use 'interactive' for entering multiple prompts in an interactive way /// and 'chat' for an interactive model where history of previous prompts and generated tokens /// is preserved. #[arg(long)] prompt: Option<String>, /// The length of the sample to generate (in tokens). #[arg(short = 'n', long, default_value_t = 1000)] sample_len: usize, /// The tokenizer config in json format. #[arg(long)] tokenizer: Option<String>, /// The temperature used to generate samples, use 0 for greedy sampling. #[arg(long, default_value_t = 0.8)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// Only sample among the top K samples. #[arg(long)] top_k: Option<usize>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Process prompt elements separately. #[arg(long)] split_prompt: bool, /// Run on CPU rather than GPU even if a GPU is available. #[arg(long)] cpu: bool, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model size to use. #[arg(long, default_value = "0.5b")] which: Which, } impl Args { fn tokenizer(&self) -> anyhow::Result<Tokenizer> { let tokenizer_path = match &self.tokenizer { Some(config) => std::path::PathBuf::from(config), None => { let api = hf_hub::api::sync::Api::new()?; let repo = match self.which { Which::W2_0_5b => "Qwen/Qwen2-0.5B-Instruct", Which::W2_1_5b => "Qwen/Qwen2-1.5B-Instruct", Which::W2_7b => "Qwen/Qwen2-7B-Instruct", Which::W2_72b => "Qwen/Qwen2-72B-Instruct", }; let api = api.model(repo.to_string()); api.get("tokenizer.json")? 
} }; Tokenizer::from_file(tokenizer_path).map_err(anyhow::Error::msg) } fn model(&self) -> anyhow::Result<std::path::PathBuf> { let model_path = match &self.model { Some(config) => std::path::PathBuf::from(config), None => { let (repo, filename, revision) = match self.which { Which::W2_0_5b => ( "Qwen/Qwen2-0.5B-Instruct-GGUF", "qwen2-0_5b-instruct-q4_0.gguf", "main", ), Which::W2_1_5b => ( "Qwen/Qwen2-1.5B-Instruct-GGUF", "qwen2-1_5b-instruct-q4_0.gguf", "main", ), Which::W2_7b => ( "Qwen/Qwen2-7B-Instruct-GGUF", "qwen2-7b-instruct-q4_0.gguf", "main", ), Which::W2_72b => ( "Qwen/Qwen2-72B-Instruct-GGUF", "qwen2-72b-instruct-q4_0.gguf", "main", ), }; let api = hf_hub::api::sync::Api::new()?; api.repo(hf_hub::Repo::with_revision( repo.to_string(), hf_hub::RepoType::Model, revision.to_string(), )) .get(filename)? } }; Ok(model_path) } } fn format_size(size_in_bytes: usize) -> String { if size_in_bytes < 1_000 { format!("{}B", size_in_bytes) } else if size_in_bytes < 1_000_000 { format!("{:.2}KB", size_in_bytes as f64 / 1e3) } else if size_in_bytes < 1_000_000_000 { format!("{:.2}MB", size_in_bytes as f64 / 1e6) } else { format!("{:.2}GB", size_in_bytes as f64 / 1e9) } } fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature, args.repeat_penalty, args.repeat_last_n ); let model_path = args.model()?; let mut file = std::fs::File::open(&model_path)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let mut model = { let model = gguf_file::Content::read(&mut file).map_err(|e| e.with_path(model_path))?; let mut total_size_in_bytes = 0; for (_, tensor) in model.tensor_infos.iter() { let elem_count = tensor.shape.elem_count(); total_size_in_bytes += elem_count * tensor.ggml_dtype.type_size() / tensor.ggml_dtype.block_size(); } println!( "loaded {:?} tensors ({}) in {:.2}s", model.tensor_infos.len(), &format_size(total_size_in_bytes), start.elapsed().as_secs_f32(), ); Qwen2::from_gguf(model, &mut file, &device)? }; println!("model built"); let tokenizer = args.tokenizer()?; let mut tos = TokenOutputStream::new(tokenizer); let prompt_str = args.prompt.unwrap_or_else(|| DEFAULT_PROMPT.to_string()); let prompt_str = format!( "<|im_start|>user\n{}<|im_end|>\n<|im_start|>assistant\n", prompt_str ); print!("formatted instruct prompt: {}", &prompt_str); let tokens = tos .tokenizer() .encode(prompt_str, true) .map_err(anyhow::Error::msg)?; let tokens = tokens.get_ids(); let to_sample = args.sample_len.saturating_sub(1); let mut all_tokens = vec![]; let mut logits_processor = { let temperature = args.temperature; let sampling = if temperature <= 0. 
{ Sampling::ArgMax } else { match (args.top_k, args.top_p) { (None, None) => Sampling::All { temperature }, (Some(k), None) => Sampling::TopK { k, temperature }, (None, Some(p)) => Sampling::TopP { p, temperature }, (Some(k), Some(p)) => Sampling::TopKThenTopP { k, p, temperature }, } }; LogitsProcessor::from_sampling(args.seed, sampling) }; let start_prompt_processing = std::time::Instant::now(); let mut next_token = if !args.split_prompt { let input = Tensor::new(tokens, &device)?.unsqueeze(0)?; let logits = model.forward(&input, 0)?; let logits = logits.squeeze(0)?; logits_processor.sample(&logits)? } else { let mut next_token = 0; for (pos, token) in tokens.iter().enumerate() { let input = Tensor::new(&[*token], &device)?.unsqueeze(0)?; let logits = model.forward(&input, pos)?; let logits = logits.squeeze(0)?; next_token = logits_processor.sample(&logits)? } next_token }; let prompt_dt = start_prompt_processing.elapsed(); all_tokens.push(next_token); if let Some(t) = tos.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } let eos_token = *tos.tokenizer().get_vocab(true).get("<|im_end|>").unwrap(); let start_post_prompt = std::time::Instant::now(); let mut sampled = 0; for index in 0..to_sample { let input = Tensor::new(&[next_token], &device)?.unsqueeze(0)?; let logits = model.forward(&input, tokens.len() + index)?; let logits = logits.squeeze(0)?; let logits = if args.repeat_penalty == 1. { logits } else { let start_at = all_tokens.len().saturating_sub(args.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, args.repeat_penalty, &all_tokens[start_at..], )? }; next_token = logits_processor.sample(&logits)?; all_tokens.push(next_token); if let Some(t) = tos.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } sampled += 1; if next_token == eos_token { break; }; } if let Some(rest) = tos.decode_rest().map_err(candle::Error::msg)? { print!("{rest}"); } std::io::stdout().flush()?; let dt = start_post_prompt.elapsed(); println!( "\n\n{:4} prompt tokens processed: {:.2} token/s", tokens.len(), tokens.len() as f64 / prompt_dt.as_secs_f64(), ); println!( "{sampled:4} tokens generated: {:.2} token/s", sampled as f64 / dt.as_secs_f64(), ); Ok(()) }
1
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/quantized-qwen2-instruct/README.md
# candle-quantized-qwen2-instruct

[Qwen2](https://qwenlm.github.io/blog/qwen2/) is an upgraded version of Qwen1.5, released by Alibaba Cloud.

## Running the example

```bash
cargo run --example quantized-qwen2-instruct --release -- --prompt "Write a function to count prime numbers up to N."
```

The 0.5b, 1.5b, 7b, and 72b model sizes are selected via the `--which` argument.
2
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/yi/main.rs
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::{Parser, ValueEnum}; use candle_transformers::models::yi::{Config, Model}; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Which { #[value(name = "6b")] L6b, #[value(name = "34b")] L34b, } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<|endoftext|>") { Some(token) => token, None => anyhow::bail!("cannot find the <|endoftext|> token"), }; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input, start_pos)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { let t = t.replace("<|im_end|>", "\n"); print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). 
#[arg(long, short = 'n', default_value_t = 100)] sample_len: usize, #[arg(long, default_value = "01-ai/Yi-6B")] model_id: String, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model size to use. #[arg(long, default_value = "6b")] which: Which, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let repo = api.repo(Repo::with_revision( args.model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let config = match args.which { Which::L6b => Config::config_6b(), Which::L34b => Config::config_34b(), }; let device = candle_examples::device(args.cpu)?; let dtype = if device.is_cuda() { DType::BF16 } else { DType::F32 }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let model = Model::new(&config, vb)?; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
3
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stable-diffusion-3/vae.rs
use anyhow::{Ok, Result}; use candle_transformers::models::stable_diffusion::vae; pub fn build_sd3_vae_autoencoder(vb: candle_nn::VarBuilder) -> Result<vae::AutoEncoderKL> { let config = vae::AutoEncoderKLConfig { block_out_channels: vec![128, 256, 512, 512], layers_per_block: 2, latent_channels: 16, norm_num_groups: 32, use_quant_conv: false, use_post_quant_conv: false, }; Ok(vae::AutoEncoderKL::new(vb, 3, 3, config)?) } pub fn sd3_vae_vb_rename(name: &str) -> String { let parts: Vec<&str> = name.split('.').collect(); let mut result = Vec::new(); let mut i = 0; while i < parts.len() { match parts[i] { "down_blocks" => { result.push("down"); } "mid_block" => { result.push("mid"); } "up_blocks" => { result.push("up"); match parts[i + 1] { // Reverse the order of up_blocks. "0" => result.push("3"), "1" => result.push("2"), "2" => result.push("1"), "3" => result.push("0"), _ => {} } i += 1; // Skip the number after up_blocks. } "resnets" => { if i > 0 && parts[i - 1] == "mid_block" { match parts[i + 1] { "0" => result.push("block_1"), "1" => result.push("block_2"), _ => {} } i += 1; // Skip the number after resnets. } else { result.push("block"); } } "downsamplers" => { result.push("downsample"); i += 1; // Skip the 0 after downsamplers. } "conv_shortcut" => { result.push("nin_shortcut"); } "attentions" => { if parts[i + 1] == "0" { result.push("attn_1") } i += 1; // Skip the number after attentions. } "group_norm" => { result.push("norm"); } "query" => { result.push("q"); } "key" => { result.push("k"); } "value" => { result.push("v"); } "proj_attn" => { result.push("proj_out"); } "conv_norm_out" => { result.push("norm_out"); } "upsamplers" => { result.push("upsample"); i += 1; // Skip the 0 after upsamplers. } part => result.push(part), } i += 1; } result.join(".") }
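To make the renaming logic concrete, here is a small illustrative sketch (not part of the example, and the sample key is made up for demonstration) that feeds a diffusers-style decoder key through `sd3_vae_vb_rename`, assuming the function above is in scope:

```rust
fn main() {
    // A hypothetical diffusers-style decoder key.
    let diffusers_key = "decoder.up_blocks.0.resnets.0.conv1.weight";
    // "up_blocks" becomes "up" with the block index mirrored (0 -> 3),
    // and "resnets" collapses to "block", so this prints:
    // decoder.up.3.block.0.conv1.weight
    println!("{}", sd3_vae_vb_rename(diffusers_key));
}
```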
4
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stable-diffusion-3/main.rs
mod clip; mod sampling; mod vae; use candle::{DType, IndexOp, Tensor}; use candle_transformers::models::mmdit::model::{Config as MMDiTConfig, MMDiT}; use crate::clip::StableDiffusion3TripleClipWithTokenizer; use crate::vae::{build_sd3_vae_autoencoder, sd3_vae_vb_rename}; use anyhow::{Ok, Result}; use clap::Parser; #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "3-medium")] V3Medium, #[value(name = "3.5-large")] V3_5Large, #[value(name = "3.5-large-turbo")] V3_5LargeTurbo, #[value(name = "3.5-medium")] V3_5Medium, } impl Which { fn is_3_5(&self) -> bool { match self { Self::V3Medium => false, Self::V3_5Large | Self::V3_5LargeTurbo | Self::V3_5Medium => true, } } } #[derive(Parser)] #[command(author, version, about, long_about = None)] struct Args { /// The prompt to be used for image generation. #[arg( long, default_value = "A cute rusty robot holding a candle torch in its hand, \ with glowing neon text \"LETS GO RUSTY\" displayed on its chest, \ bright background, high quality, 4k" )] prompt: String, #[arg(long, default_value = "")] uncond_prompt: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Use flash_attn to accelerate attention operation in the MMDiT. #[arg(long)] use_flash_attn: bool, /// The height in pixels of the generated image. #[arg(long, default_value_t = 1024)] height: usize, /// The width in pixels of the generated image. #[arg(long, default_value_t = 1024)] width: usize, /// The model to use. #[arg(long, default_value = "3-medium")] which: Which, /// The seed to use when generating random samples. #[arg(long)] num_inference_steps: Option<usize>, /// CFG scale. #[arg(long)] cfg_scale: Option<f64>, /// Time shift factor (alpha). #[arg(long, default_value_t = 3.0)] time_shift: f64, /// Use Skip Layer Guidance (SLG) for the sampling. /// Currently only supports Stable Diffusion 3.5 Medium. #[arg(long)] use_slg: bool, /// The seed to use when generating random samples. #[arg(long)] seed: Option<u64>, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let Args { prompt, uncond_prompt, cpu, tracing, use_flash_attn, height, width, num_inference_steps, cfg_scale, time_shift, seed, which, use_slg, } = Args::parse(); let _guard = if tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; let device = candle_examples::device(cpu)?; let default_inference_steps = match which { Which::V3_5Large => 28, Which::V3_5LargeTurbo => 4, Which::V3_5Medium => 28, Which::V3Medium => 28, }; let num_inference_steps = num_inference_steps.unwrap_or(default_inference_steps); let default_cfg_scale = match which { Which::V3_5Large => 4.0, Which::V3_5LargeTurbo => 1.0, Which::V3_5Medium => 4.0, Which::V3Medium => 4.0, }; let cfg_scale = cfg_scale.unwrap_or(default_cfg_scale); let api = hf_hub::api::sync::Api::new()?; let (mmdit_config, mut triple, vb) = if which.is_3_5() { let sai_repo_for_text_encoders = { let name = match which { Which::V3_5Large => "stabilityai/stable-diffusion-3.5-large", Which::V3_5LargeTurbo => "stabilityai/stable-diffusion-3.5-large-turbo", // Unfortunately, stabilityai/stable-diffusion-3.5-medium doesn't have the monolithic text encoders that's usually // placed under the text_encoders directory, like the case in stabilityai/stable-diffusion-3.5-large and -large-turbo. 
// To make things worse, it currently only has partitioned model.fp16-00001-of-00002.safetensors and model.fp16-00002-of-00002.safetensors // under the text_encoder_3 directory, for the t5xxl_fp16.safetensors model. This means that we need to merge the two partitions // to get the monolithic text encoders. This is not a trivial task. // Since the situation can change, we do not want to spend efforts to handle the uniqueness of stabilityai/stable-diffusion-3.5-medium, // which involves different paths and merging the two partitions files for t5xxl_fp16.safetensors. // so for now, we'll use the text encoder models from the stabilityai/stable-diffusion-3.5-large repository. // TODO: Change to "stabilityai/stable-diffusion-3.5-medium" once the maintainers of the repository add back the monolithic text encoders. Which::V3_5Medium => "stabilityai/stable-diffusion-3.5-large", Which::V3Medium => unreachable!(), }; api.repo(hf_hub::Repo::model(name.to_string())) }; let sai_repo_for_mmdit = { let name = match which { Which::V3_5Large => "stabilityai/stable-diffusion-3.5-large", Which::V3_5LargeTurbo => "stabilityai/stable-diffusion-3.5-large-turbo", Which::V3_5Medium => "stabilityai/stable-diffusion-3.5-medium", Which::V3Medium => unreachable!(), }; api.repo(hf_hub::Repo::model(name.to_string())) }; let clip_g_file = sai_repo_for_text_encoders.get("text_encoders/clip_g.safetensors")?; let clip_l_file = sai_repo_for_text_encoders.get("text_encoders/clip_l.safetensors")?; let t5xxl_file = sai_repo_for_text_encoders.get("text_encoders/t5xxl_fp16.safetensors")?; let model_file = { let model_file = match which { Which::V3_5Large => "sd3.5_large.safetensors", Which::V3_5LargeTurbo => "sd3.5_large_turbo.safetensors", Which::V3_5Medium => "sd3.5_medium.safetensors", Which::V3Medium => unreachable!(), }; sai_repo_for_mmdit.get(model_file)? }; let triple = StableDiffusion3TripleClipWithTokenizer::new_split( &clip_g_file, &clip_l_file, &t5xxl_file, &device, )?; let vb = unsafe { candle_nn::VarBuilder::from_mmaped_safetensors(&[model_file], DType::F16, &device)? }; match which { Which::V3_5Large => (MMDiTConfig::sd3_5_large(), triple, vb), Which::V3_5LargeTurbo => (MMDiTConfig::sd3_5_large(), triple, vb), Which::V3_5Medium => (MMDiTConfig::sd3_5_medium(), triple, vb), Which::V3Medium => unreachable!(), } } else { let sai_repo = { let name = "stabilityai/stable-diffusion-3-medium"; api.repo(hf_hub::Repo::model(name.to_string())) }; let model_file = sai_repo.get("sd3_medium_incl_clips_t5xxlfp16.safetensors")?; let vb = unsafe { candle_nn::VarBuilder::from_mmaped_safetensors(&[&model_file], DType::F16, &device)? }; let triple = StableDiffusion3TripleClipWithTokenizer::new(vb.pp("text_encoders"))?; (MMDiTConfig::sd3_medium(), triple, vb) }; let (context, y) = triple.encode_text_to_embedding(prompt.as_str(), &device)?; let (context_uncond, y_uncond) = triple.encode_text_to_embedding(uncond_prompt.as_str(), &device)?; // Drop the text model early to avoid using too much memory. 
drop(triple); let context = Tensor::cat(&[context, context_uncond], 0)?; let y = Tensor::cat(&[y, y_uncond], 0)?; if let Some(seed) = seed { device.set_seed(seed)?; } let slg_config = if use_slg { match which { // https://github.com/Stability-AI/sd3.5/blob/4e484e05308d83fb77ae6f680028e6c313f9da54/sd3_infer.py#L388-L394 Which::V3_5Medium => Some(sampling::SkipLayerGuidanceConfig { scale: 2.5, start: 0.01, end: 0.2, layers: vec![7, 8, 9], }), _ => anyhow::bail!("--use-slg can only be used with 3.5-medium"), } } else { None }; let start_time = std::time::Instant::now(); let x = { let mmdit = MMDiT::new( &mmdit_config, use_flash_attn, vb.pp("model.diffusion_model"), )?; sampling::euler_sample( &mmdit, &y, &context, num_inference_steps, cfg_scale, time_shift, height, width, slg_config, )? }; let dt = start_time.elapsed().as_secs_f32(); println!( "Sampling done. {num_inference_steps} steps. {:.2}s. Average rate: {:.2} iter/s", dt, num_inference_steps as f32 / dt ); let img = { let vb_vae = vb.rename_f(sd3_vae_vb_rename).pp("first_stage_model"); let autoencoder = build_sd3_vae_autoencoder(vb_vae)?; // Apply TAESD3 scale factor. Seems to be significantly improving the quality of the image. // https://github.com/comfyanonymous/ComfyUI/blob/3c60ecd7a83da43d694e26a77ca6b93106891251/nodes.py#L721-L723 autoencoder.decode(&((x / 1.5305)? + 0.0609)?)? }; let img = ((img.clamp(-1f32, 1f32)? + 1.0)? * 127.5)?.to_dtype(candle::DType::U8)?; candle_examples::save_image(&img.i(0)?, "out.jpg")?; Ok(()) }
5
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stable-diffusion-3/clip.rs
use anyhow::{Error as E, Ok, Result}; use candle::{DType, IndexOp, Module, Tensor, D}; use candle_transformers::models::{stable_diffusion, t5}; use std::path::PathBuf; use tokenizers::tokenizer::Tokenizer; struct ClipWithTokenizer { clip: stable_diffusion::clip::ClipTextTransformer, config: stable_diffusion::clip::Config, tokenizer: Tokenizer, max_position_embeddings: usize, } impl ClipWithTokenizer { fn new( vb: candle_nn::VarBuilder, config: stable_diffusion::clip::Config, tokenizer_path: &str, max_position_embeddings: usize, ) -> Result<Self> { let clip = stable_diffusion::clip::ClipTextTransformer::new(vb, &config)?; let path_buf = hf_hub::api::sync::Api::new()? .model(tokenizer_path.to_string()) .get("tokenizer.json")?; let tokenizer = Tokenizer::from_file(path_buf.to_str().ok_or(E::msg( "Failed to serialize huggingface PathBuf of CLIP tokenizer", ))?) .map_err(E::msg)?; Ok(Self { clip, config, tokenizer, max_position_embeddings, }) } fn encode_text_to_embedding( &self, prompt: &str, device: &candle::Device, ) -> Result<(Tensor, Tensor)> { let pad_id = match &self.config.pad_with { Some(padding) => *self .tokenizer .get_vocab(true) .get(padding.as_str()) .ok_or(E::msg("Failed to tokenize CLIP padding."))?, None => *self .tokenizer .get_vocab(true) .get("<|endoftext|>") .ok_or(E::msg("Failed to tokenize CLIP end-of-text."))?, }; let mut tokens = self .tokenizer .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let eos_position = tokens.len() - 1; while tokens.len() < self.max_position_embeddings { tokens.push(pad_id) } let tokens = Tensor::new(tokens.as_slice(), device)?.unsqueeze(0)?; let (text_embeddings, text_embeddings_penultimate) = self .clip .forward_until_encoder_layer(&tokens, usize::MAX, -2)?; let text_embeddings_pooled = text_embeddings.i((0, eos_position, ..))?; Ok((text_embeddings_penultimate, text_embeddings_pooled)) } } struct T5WithTokenizer { t5: t5::T5EncoderModel, tokenizer: Tokenizer, max_position_embeddings: usize, } impl T5WithTokenizer { fn new(vb: candle_nn::VarBuilder, max_position_embeddings: usize) -> Result<Self> { let api = hf_hub::api::sync::Api::new()?; let repo = api.repo(hf_hub::Repo::with_revision( "google/t5-v1_1-xxl".to_string(), hf_hub::RepoType::Model, "refs/pr/2".to_string(), )); let config_filename = repo.get("config.json")?; let config = std::fs::read_to_string(config_filename)?; let config: t5::Config = serde_json::from_str(&config)?; let model = t5::T5EncoderModel::load(vb, &config)?; let tokenizer_filename = api .model("lmz/mt5-tokenizers".to_string()) .get("t5-v1_1-xxl.tokenizer.json")?; let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; Ok(Self { t5: model, tokenizer, max_position_embeddings, }) } fn encode_text_to_embedding( &mut self, prompt: &str, device: &candle::Device, ) -> Result<Tensor> { let mut tokens = self .tokenizer .encode(prompt, true) .map_err(E::msg)? 
.get_ids() .to_vec(); tokens.resize(self.max_position_embeddings, 0); let input_token_ids = Tensor::new(&tokens[..], device)?.unsqueeze(0)?; let embeddings = self.t5.forward_dt(&input_token_ids, Some(DType::F32))?; Ok(embeddings) } } pub struct StableDiffusion3TripleClipWithTokenizer { clip_l: ClipWithTokenizer, clip_g: ClipWithTokenizer, clip_g_text_projection: candle_nn::Linear, t5: T5WithTokenizer, } impl StableDiffusion3TripleClipWithTokenizer { pub fn new_split( clip_g_file: &PathBuf, clip_l_file: &PathBuf, t5xxl_file: &PathBuf, device: &candle::Device, ) -> Result<Self> { let vb_clip_g = unsafe { candle_nn::VarBuilder::from_mmaped_safetensors(&[clip_g_file], DType::F16, device)? }; let vb_clip_l = unsafe { candle_nn::VarBuilder::from_mmaped_safetensors(&[clip_l_file], DType::F16, device)? }; let vb_t5 = unsafe { candle_nn::VarBuilder::from_mmaped_safetensors(&[t5xxl_file], DType::F16, device)? }; let max_position_embeddings = 77usize; let clip_l = ClipWithTokenizer::new( vb_clip_l, stable_diffusion::clip::Config::sdxl(), "openai/clip-vit-large-patch14", max_position_embeddings, )?; let text_projection = candle_nn::linear_no_bias(1280, 1280, vb_clip_g.pp("text_projection"))?; let clip_g = ClipWithTokenizer::new( vb_clip_g, stable_diffusion::clip::Config::sdxl2(), "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k", max_position_embeddings, )?; let t5 = T5WithTokenizer::new(vb_t5, max_position_embeddings)?; Ok(Self { clip_l, clip_g, clip_g_text_projection: text_projection, t5, }) } pub fn new(vb: candle_nn::VarBuilder) -> Result<Self> { let max_position_embeddings = 77usize; let clip_l = ClipWithTokenizer::new( vb.pp("clip_l.transformer"), stable_diffusion::clip::Config::sdxl(), "openai/clip-vit-large-patch14", max_position_embeddings, )?; let clip_g = ClipWithTokenizer::new( vb.pp("clip_g.transformer"), stable_diffusion::clip::Config::sdxl2(), "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k", max_position_embeddings, )?; let text_projection = candle_nn::linear_no_bias(1280, 1280, vb.pp("clip_g.transformer.text_projection"))?; let t5 = T5WithTokenizer::new(vb.pp("t5xxl.transformer"), max_position_embeddings)?; Ok(Self { clip_l, clip_g, clip_g_text_projection: text_projection, t5, }) } pub fn encode_text_to_embedding( &mut self, prompt: &str, device: &candle::Device, ) -> Result<(Tensor, Tensor)> { let (clip_l_embeddings, clip_l_embeddings_pooled) = self.clip_l.encode_text_to_embedding(prompt, device)?; let (clip_g_embeddings, clip_g_embeddings_pooled) = self.clip_g.encode_text_to_embedding(prompt, device)?; let clip_g_embeddings_pooled = self .clip_g_text_projection .forward(&clip_g_embeddings_pooled.unsqueeze(0)?)? .squeeze(0)?; let y = Tensor::cat(&[&clip_l_embeddings_pooled, &clip_g_embeddings_pooled], 0)? .unsqueeze(0)?; let clip_embeddings_concat = Tensor::cat( &[&clip_l_embeddings, &clip_g_embeddings], D::Minus1, )? .pad_with_zeros(D::Minus1, 0, 2048)?; let t5_embeddings = self .t5 .encode_text_to_embedding(prompt, device)? .to_dtype(DType::F16)?; let context = Tensor::cat(&[&clip_embeddings_concat, &t5_embeddings], D::Minus2)?; Ok((context, y)) } }
6
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stable-diffusion-3/sampling.rs
use anyhow::{Ok, Result}; use candle::{DType, IndexOp, Tensor}; use candle_transformers::models::flux; use candle_transformers::models::mmdit::model::MMDiT; pub struct SkipLayerGuidanceConfig { pub scale: f64, pub start: f64, pub end: f64, pub layers: Vec<usize>, } #[allow(clippy::too_many_arguments)] pub fn euler_sample( mmdit: &MMDiT, y: &Tensor, context: &Tensor, num_inference_steps: usize, cfg_scale: f64, time_shift: f64, height: usize, width: usize, slg_config: Option<SkipLayerGuidanceConfig>, ) -> Result<Tensor> { let mut x = flux::sampling::get_noise(1, height, width, y.device())?.to_dtype(DType::F16)?; let sigmas = (0..=num_inference_steps) .map(|x| x as f64 / num_inference_steps as f64) .rev() .map(|x| time_snr_shift(time_shift, x)) .collect::<Vec<f64>>(); for (step, window) in sigmas.windows(2).enumerate() { let (s_curr, s_prev) = match window { [a, b] => (a, b), _ => continue, }; let timestep = (*s_curr) * 1000.0; let noise_pred = mmdit.forward( &Tensor::cat(&[&x, &x], 0)?, &Tensor::full(timestep as f32, (2,), x.device())?.contiguous()?, y, context, None, )?; let mut guidance = apply_cfg(cfg_scale, &noise_pred)?; if let Some(slg_config) = slg_config.as_ref() { if (num_inference_steps as f64) * slg_config.start < (step as f64) && (step as f64) < (num_inference_steps as f64) * slg_config.end { let slg_noise_pred = mmdit.forward( &x, &Tensor::full(timestep as f32, (1,), x.device())?.contiguous()?, &y.i(..1)?, &context.i(..1)?, Some(&slg_config.layers), )?; guidance = (guidance + (slg_config.scale * (noise_pred.i(..1)? - slg_noise_pred.i(..1))?)?)?; } } x = (x + (guidance * (*s_prev - *s_curr))?)?; } Ok(x) } // The "Resolution-dependent shifting of timestep schedules" recommended in the SD3 tech report paper // https://arxiv.org/pdf/2403.03206 // Following the implementation in ComfyUI: // https://github.com/comfyanonymous/ComfyUI/blob/3c60ecd7a83da43d694e26a77ca6b93106891251/ // comfy/model_sampling.py#L181 fn time_snr_shift(alpha: f64, t: f64) -> f64 { alpha * t / (1.0 + (alpha - 1.0) * t) } fn apply_cfg(cfg_scale: f64, noise_pred: &Tensor) -> Result<Tensor> { Ok(((cfg_scale * noise_pred.narrow(0, 0, 1)?)? - ((cfg_scale - 1.0) * noise_pred.narrow(0, 1, 1)?)?)?) }
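As a quick sanity check of the schedule produced by `time_snr_shift`, the sketch below (illustrative only, assuming the private function above is in scope in the same module) prints the sigma values for a 4-step turbo-style run with the default `time_shift` of 3.0:

```rust
fn main() {
    let num_inference_steps = 4usize;
    let time_shift = 3.0;
    // Same construction as in euler_sample: evenly spaced values in [0, 1],
    // reversed, then warped by the resolution-dependent shift.
    let sigmas: Vec<f64> = (0..=num_inference_steps)
        .map(|x| x as f64 / num_inference_steps as f64)
        .rev()
        .map(|x| time_snr_shift(time_shift, x))
        .collect();
    // Prints [1.0, 0.9, 0.75, 0.5, 0.0]: the shift keeps more of the
    // trajectory at high noise levels than a plain linear schedule would.
    println!("{sigmas:?}");
}
```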
7
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/stable-diffusion-3/README.md
# candle-stable-diffusion-3: Candle Implementation of Stable Diffusion 3/3.5

![](assets/stable-diffusion-3.jpg)

*A cute rusty robot holding a candle torch in its hand, with glowing neon text \"LETS GO RUSTY\" displayed on its chest, bright background, high quality, 4k*, generated by Stable Diffusion 3 Medium

Stable Diffusion 3 Medium is a text-to-image model based on the Multimodal Diffusion Transformer (MMDiT) architecture.

- [huggingface repo](https://huggingface.co/stabilityai/stable-diffusion-3-medium)
- [research paper](https://arxiv.org/pdf/2403.03206)
- [announcement blog post](https://stability.ai/news/stable-diffusion-3-medium)

Stable Diffusion 3.5 is a family of text-to-image models with the latest improvements:
- [announcement blog post](https://stability.ai/news/introducing-stable-diffusion-3-5)

It comes in three variants:

- [Stable Diffusion 3.5 Large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large) @ 8.1b params, with a scaled-up and slightly modified MMDiT architecture.
- [Stable Diffusion 3.5 Large Turbo](https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo), a distilled version that enables 4-step inference.
- [Stable Diffusion 3.5 Medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium) @ 2.5b params, with the improved MMDiT-X architecture.

## Getting access to the weights

The weights of Stable Diffusion 3/3.5 are released by Stability AI under the Stability Community License. You will need to accept the conditions and acquire a license by visiting the repos on HuggingFace Hub to gain access to the weights for your HuggingFace account.

To allow your computer to access the public-gated repos on HuggingFace, you might need to create a [HuggingFace User Access Token](https://huggingface.co/docs/hub/en/security-tokens) (recommended) and log in on your computer if you haven't done so before. A convenient way to log in is to use [huggingface-cli](https://huggingface.co/docs/huggingface_hub/en/guides/cli):

```shell
huggingface-cli login
```
and you will be prompted to enter your token.

On the first run, the weights will be automatically downloaded from the Huggingface Hub. After the download, the weights will be [cached](https://huggingface.co/docs/datasets/en/cache) and remain accessible locally.

## Running the model

```shell
cargo run --example stable-diffusion-3 --release --features=cuda -- \
  --which 3-medium --height 1024 --width 1024 \
  --prompt 'A cute rusty robot holding a candle torch in its hand, with glowing neon text \"LETS GO RUSTY\" displayed on its chest, bright background, high quality, 4k'
```

To use a different model, change the value of the `--which` option (possible values: `3-medium`, `3.5-large`, `3.5-large-turbo` and `3.5-medium`).

To display the other available options:

```shell
cargo run --example stable-diffusion-3 --release --features=cuda -- --help
```

If your GPU supports it, Flash-Attention is strongly recommended, as it can greatly improve inference speed: MMDiT is a transformer model that depends heavily on attention. To use [candle-flash-attn](https://github.com/huggingface/candle/tree/main/candle-flash-attn) in the demo, you will need both `--features flash-attn` and `--use-flash-attn`.

```shell
cargo run --example stable-diffusion-3 --release --features=cuda,flash-attn -- --use-flash-attn ...
```

## Performance Benchmark

The benchmark below was run with Stable Diffusion 3 Medium, generating a 1024-by-1024 image from 28 steps of Euler sampling and measuring the average speed (iterations per second).

[candle](https://github.com/huggingface/candle) and [candle-flash-attn](https://github.com/huggingface/candle/tree/main/candle-flash-attn) are based on commit [0d96ec3](https://github.com/huggingface/candle/commit/0d96ec31e8be03f844ed0aed636d6217dee9c7bc).

System specs (Desktop PCIE 5 x8/x8 dual-GPU setup):
- Operating System: Ubuntu 23.10
- CPU: i9 12900K w/o overclocking.
- RAM: 64G dual-channel DDR5 @ 4800 MT/s

| Speed (iter/s) | w/o flash-attn | w/ flash-attn |
| -------------- | -------------- | ------------- |
| RTX 3090 Ti    | 0.83           | 2.15          |
| RTX 4090       | 1.72           | 4.06          |
8
0
hf_public_repos/candle/candle-examples/examples
hf_public_repos/candle/candle-examples/examples/dinov2/main.rs
//! DINOv2: Learning Robust Visual Features without Supervision //! https://github.com/facebookresearch/dinov2 #[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::Parser; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::dinov2; #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("lmz/candle-dino-v2".into()); api.get("dinov2_vits14.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = dinov2::vit_small(vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
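A typical way to try the classifier above is to point it at a local image. The command below is a sketch following the usual candle-examples conventions; the example name, feature flag and image path are placeholders to adapt to your setup (drop the feature flag to run on CPU):

```shell
# Classify a local image with the small DINOv2 ViT head.
# Weights are fetched from the Hub on first run; the image path is a placeholder.
cargo run --example dinov2 --release --features cuda -- \
  --image path/to/some_image.jpg
```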
9
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter5/introduction.mdx
# What you'll learn and what you'll build

In this unit, we'll look at how transformers can be used to convert spoken speech into text, a task known as _speech recognition_.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/asr_diagram.png" alt="Diagram of speech to text">
</div>

Speech recognition, also known as automatic speech recognition (ASR) or speech-to-text (STT), is one of the most popular and exciting spoken language processing tasks. It's used in a wide range of applications, including dictation, voice assistants, video captioning and meeting transcription.

You've probably used a speech recognition system many times before without even realising it! Consider the digital assistant in your smartphone (Siri, Google Assistant, Alexa). When you use these assistants, the first thing they do is transcribe your spoken speech to written text, ready to be used for downstream tasks (such as looking up the weather forecast 🌤️).

Have a play with the speech recognition demo below. You can either record yourself with your microphone, or drag and drop an audio sample for transcription:

<iframe src="https://course-demos-whisper-small.hf.space" frameborder="0" width="850" height="450"> </iframe>

Speech recognition is a challenging task as it requires joint knowledge of audio and text. The input audio might contain a lot of background noise and be spoken by speakers with different accents, making it difficult to pick out the spoken speech. The written text might have characters that don't have an acoustic sound, such as punctuation, which are difficult to infer from audio alone. These are all hurdles we have to tackle when building effective speech recognition systems!

Now that we've defined the task, we can start looking into speech recognition in more detail. By the end of this unit, you'll have a good fundamental understanding of the different pre-trained speech recognition models available and how to use them with the 🤗 Transformers library. You'll also learn the procedure for fine-tuning an ASR model on a domain or language of choice, enabling you to build a performant system for whatever task you need. You'll be able to showcase your model to your friends and family by building a live demo, one that takes any spoken speech and converts it to text!

Specifically, we'll cover:

* [Pre-trained models for speech recognition](asr_models)
* [Choosing a dataset](choosing_dataset)
* [Evaluation and metrics for speech recognition](evaluation)
* [How to fine-tune an ASR system with the Trainer API](fine-tuning)
* [Building a demo](demo)
* [Hands-on exercise](hands_on)
0
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter5/fine-tuning.mdx
# Fine-tuning the ASR model

In this section, we'll cover a step-by-step guide on fine-tuning Whisper for speech recognition on the Common Voice 13 dataset. We'll use the 'small' version of the model and a relatively small dataset, enabling you to run fine-tuning fairly quickly on any GPU with 16 GB of memory or more and with low disk space requirements, such as the 16 GB T4 GPU provided in the Google Colab free tier.

If you have a GPU with less memory, or run into memory problems during training, you can follow the suggested tips for reducing memory usage. Conversely, if you have access to a GPU with more memory, you can amend the training arguments to maximise throughput. This guide is therefore accessible regardless of your GPU specifications!

Likewise, this guide shows how to fine-tune the Whisper model for the Dhivehi language. However, the steps covered here generalise to any language in the Common Voice dataset, and will work for any speech recognition dataset on the Hugging Face Hub. You can tweak the code to quickly switch to the language you need and fine-tune a Whisper model in your native tongue 🌍

Right! Now that's out of the way, let's get started and launch our fine-tuning procedure!

## Prepare Environment

We strongly advise you to upload model checkpoints directly to the [Hugging Face Hub](https://huggingface.co/) while training. The Hugging Face Hub provides:
- Integrated version control: you can be sure that no model checkpoint is lost during training.
- Tensorboard logs: track important metrics over the course of training.
- Model cards: document what a model does and its intended use cases.
- Community: an easy way to share and collaborate with the community! 🤗

Linking a notebook to the Hugging Face Hub is straightforward - you simply need to enter your Hugging Face Hub authentication token when prompted. Find your Hugging Face Hub authentication token [here](https://huggingface.co/settings/tokens) and enter it when prompted:

```python
from huggingface_hub import notebook_login

notebook_login()
```

**Output:**
```bash
Login successful
Your token has been saved to /root/.huggingface/token
```

## Load Dataset

[Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) contains approximately ten hours of labelled data for Dhivehi, three of which make up the test split. This is extremely little data for fine-tuning, so we'll be relying on the extensive multilingual ASR knowledge acquired by Whisper during pre-training for the low-resource Dhivehi language.

Using 🤗 Datasets, downloading and preparing data is extremely simple. We can download and prepare the Common Voice 13 splits in just one line of code. Since Dhivehi has very little data, we'll combine the `train` and `validation` splits to give approximately seven hours of training data.

We'll use the three hours of `test` data as our held-out test set:

```python
from datasets import load_dataset, DatasetDict

common_voice = DatasetDict()

common_voice["train"] = load_dataset(
    "mozilla-foundation/common_voice_13_0", "dv", split="train+validation"
)
common_voice["test"] = load_dataset(
    "mozilla-foundation/common_voice_13_0", "dv", split="test"
)

print(common_voice)
```

**Output:**
```
DatasetDict({
    train: Dataset({
        features: ['client_id', 'path', 'audio', 'sentence', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment', 'variant'],
        num_rows: 4904
    })
    test: Dataset({
        features: ['client_id', 'path', 'audio', 'sentence', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment', 'variant'],
        num_rows: 2212
    })
})
```

<Tip>
    You can change the language identifier from `"dv"` to a language identifier of your choice. To see all possible languages in Common Voice 13, check out the dataset card on the Hugging Face Hub: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0
</Tip>

Most speech recognition datasets only provide input audio samples (`audio`) and the corresponding transcribed text (`sentence`). The Common Voice dataset contains additional metadata information, such as `accent` and `locale`, which we can disregard for ASR. Keeping the notebook as general as possible, we only consider the input audio and transcribed text for fine-tuning, discarding the additional metadata information:

```python
common_voice = common_voice.select_columns(["audio", "sentence"])
```

## Feature Extractor, Tokenizer and Processor

The speech recognition pipeline can be decomposed into three stages:

1. The feature extractor, which pre-processes the raw audio inputs to log-mel spectrograms
2. The model, which performs the sequence-to-sequence mapping
3. The tokenizer, which post-processes the predicted tokens to text

In 🤗 Transformers, the Whisper model has an associated feature extractor and tokenizer, called [WhisperFeatureExtractor](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperFeatureExtractor) and [WhisperTokenizer](https://huggingface.co/docs/transformers/main/model_doc/whisper#transformers.WhisperTokenizer) respectively. To make working with them simpler, these two objects are wrapped in a single class, called the [WhisperProcessor](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). We can call the WhisperProcessor to perform both the audio pre-processing and the text token post-processing. In doing so, we only need to keep track of two objects during training: the processor and the model.

When performing multilingual fine-tuning, we need to set the `"language"` and `"task"` when instantiating the processor. The `"language"` should be set to the source audio language, and the `"task"` to `"transcribe"` for speech recognition or `"translate"` for speech translation. These arguments modify the behaviour of the tokenizer and should be set correctly to ensure the target labels are encoded properly.

We can see all possible languages supported by Whisper by importing the list of languages:

```python
from transformers.models.whisper.tokenization_whisper import TO_LANGUAGE_CODE

TO_LANGUAGE_CODE
```

If you scroll through this list, you'll notice that many languages are present, but Dhivehi is not one of them! This means that Whisper was not pre-trained on Dhivehi. However, this doesn't mean that we can't fine-tune Whisper on it. In doing so, we'll be teaching Whisper a new language, one that the pre-trained checkpoint does not support. That's pretty cool, right?

When fine-tuning on a new language, Whisper does a good job at leveraging its knowledge of the other 96 languages it was pre-trained on. Largely speaking, all modern languages will be linguistically similar to at least one of the 96 languages Whisper already knows, so we fall under this paradigm of cross-lingual knowledge representation.

What we need to do to fine-tune Whisper on a new language is find the **most similar** language that Whisper was pre-trained on. The Wikipedia article for Dhivehi states that Dhivehi is closely related to the Sinhalese language of Sri Lanka. If we check the language codes again, we can see that Sinhalese is present in the Whisper language set, so we can safely set our language argument to `"sinhalese"`.

Right! We'll load our processor from the pre-trained checkpoint, setting the language to `"sinhalese"` and the task to `"transcribe"` as explained above:

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="sinhalese", task="transcribe"
)
```

It's worth reiterating that in most circumstances you'll find that the language you want to fine-tune on is in the set of pre-training languages, in which case you can simply set the language directly as your source audio language! Note that both of these arguments should be omitted for English-only fine-tuning, where there is only one option for the language (`"English"`) and task (`"transcribe"`).

## Pre-Process the Data

Let's have a look at the dataset features. Pay particular attention to the `"audio"` column - this details the sampling rate of our audio inputs:

```python
common_voice["train"].features
```

**Output:**
```
{'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
 'sentence': Value(dtype='string', id=None)}
```

Since our input audio is sampled at 48 kHz, we need to downsample it to 16 kHz before passing it to the Whisper feature extractor, as 16 kHz is the sampling rate the Whisper model expects. We'll set the audio inputs to the correct sampling rate using the dataset's [`cast_column`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.cast_column) method. This operation does not change the audio in-place, but rather signals to the dataset to resample the audio samples on the fly when they are loaded:

```python
from datasets import Audio

sampling_rate = processor.feature_extractor.sampling_rate
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=sampling_rate))
```

Now we can write a function to prepare our data ready for the model:

1. We load and resample the audio data on a sample-by-sample basis by calling sample["audio"]. As explained above, 🤗 Datasets performs any necessary resampling operations on the fly.
2. We use the feature extractor to compute the log-mel spectrogram input features from our 1-dimensional audio array.
3. We encode the transcriptions to label ids through the use of the tokenizer.

```python
def prepare_dataset(example):
    audio = example["audio"]

    example = processor(
        audio=audio["array"],
        sampling_rate=audio["sampling_rate"],
        text=example["sentence"],
    )

    # compute input length of audio sample in seconds
    example["input_length"] = len(audio["array"]) / audio["sampling_rate"]

    return example
```

We can apply the data preparation function to all of our training examples using 🤗 Datasets' `.map` method. We'll remove the columns from the raw training data (the audio and text), leaving just the columns returned by the `prepare_dataset` function:

```python
common_voice = common_voice.map(
    prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1
)
```

Finally, we filter out any training data with audio samples longer than 30 seconds. These samples would otherwise be truncated by the Whisper feature extractor, which could affect the stability of training. We define a function that returns `True` for samples that are less than 30 seconds, and `False` for those that are longer:

```python
max_input_length = 30.0


def is_audio_in_length_range(length):
    return length < max_input_length
```

We apply our filter function to all samples of our training dataset using 🤗 Datasets' `.filter` method:

```python
common_voice["train"] = common_voice["train"].filter(
    is_audio_in_length_range,
    input_columns=["input_length"],
)
```

Let's check how much training data we removed through this filtering step:

```python
common_voice["train"]
```

**Output**
```
Dataset({
    features: ['input_features', 'labels', 'input_length'],
    num_rows: 4904
})
```

Alright! In this case we actually have the same number of samples as before, so there were no samples longer than 30 seconds. This might not be the case if you switch languages, so it's best to keep this filter step in place for robustness. With that, our data is fully prepared for training! Let's continue and take a look at how we can use this data to fine-tune Whisper.

## Training and Evaluation

Now that we've prepared our data, we're ready to dive into the training pipeline. The [🤗 Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer) will do much of the heavy lifting for us. All we have to do is:

- Define a data collator: the data collator takes our pre-processed data and prepares PyTorch tensors ready for the model.
- Define an evaluation metric: during evaluation, we want to evaluate the model using the Word Error Rate (WER) metric. We need to define a `compute_metrics` function that handles this computation.
- Load a pre-trained checkpoint: we need to load a pre-trained checkpoint and configure it correctly for training.
- Define the training arguments: these will be used by the 🤗 Trainer to construct the training schedule.

Once we've fine-tuned the model, we'll evaluate it on the test data to verify that we have correctly trained it to transcribe speech in Dhivehi.

### Define a Data Collator

The data collator for a sequence-to-sequence speech model is unique in the sense that it treats the `input_features` and `labels` independently: the `input_features` must be handled by the feature extractor and the `labels` by the tokenizer.

The `input_features` are already padded to 30 seconds and converted to a log-mel spectrogram of fixed dimension, so all we have to do is convert them to batched PyTorch tensors. We do this using the feature extractor's `.pad` method with `return_tensors=pt`. Note that no additional padding is applied here since the inputs are of fixed dimension, and the `input_features` are simply converted to PyTorch tensors.

On the other hand, the `labels` are un-padded. We first pad the sequences to the maximum length in the batch using the tokenizer's `.pad` method. The padding tokens are then replaced by `-100` so that these tokens are **not** taken into account when computing the loss. We then cut the start-of-transcript token from the beginning of the label sequence, as we append it later during training.

We can leverage the `WhisperProcessor` we defined earlier to perform both the feature extractor and tokenizer operations:

```python
import torch

from dataclasses import dataclass
from typing import Any, Dict, List, Union


@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(
        self, features: List[Dict[str, Union[List[int], torch.Tensor]]]
    ) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need different padding methods
        # first treat the audio inputs by simply returning torch tensors
        input_features = [
            {"input_features": feature["input_features"][0]} for feature in features
        ]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        # get the tokenized label sequences
        label_features = [{"input_ids": feature["labels"]} for feature in features]
        # pad the labels to max length
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(
            labels_batch.attention_mask.ne(1), -100
        )

        # if bos token is appended in previous tokenization step,
        # cut bos token here as it's append later anyways
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels

        return batch
```

We can now initialise the data collator we've just defined:

```python
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
```

Onwards!

### Evaluation Metrics

Next, we define the evaluation metric we'll use on our evaluation set. We'll use the Word Error Rate (WER) metric introduced in the section on [Evaluation](evaluation), the 'de-facto' metric for assessing ASR systems.

Let's load the WER metric from 🤗 Evaluate:

```python
import evaluate

metric = evaluate.load("wer")
```

We then simply have to define a function that takes our model predictions and returns the WER metric. This function, called `compute_metrics`, first replaces `-100` with the `pad_token_id` in the `label_ids` (undoing the step we applied in the data collator to ignore padded tokens correctly in the loss). It then decodes the predicted and label ids to strings. Finally, it computes the WER between the predictions and reference labels. Here, we have the option of evaluating with the 'normalised' transcriptions and predictions, which have punctuation and casing removed. We recommend you follow this to benefit from the WER improvement obtained by normalising the transcriptions.

```python
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

normalizer = BasicTextNormalizer()


def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # replace -100 with the pad_token_id
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id

    # we do not want to group tokens when computing the metrics
    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)

    # compute orthographic wer
    wer_ortho = 100 * metric.compute(predictions=pred_str, references=label_str)

    # compute normalised WER
    pred_str_norm = [normalizer(pred) for pred in pred_str]
    label_str_norm = [normalizer(label) for label in label_str]

    # filtering step to only evaluate the samples that correspond to non-zero references:
    pred_str_norm = [
        pred_str_norm[i] for i in range(len(pred_str_norm)) if len(label_str_norm[i]) > 0
    ]
    label_str_norm = [
        label_str_norm[i]
        for i in range(len(label_str_norm))
        if len(label_str_norm[i]) > 0
    ]

    wer = 100 * metric.compute(predictions=pred_str_norm, references=label_str_norm)

    return {"wer_ortho": wer_ortho, "wer": wer}
```

### Load a Pre-Trained Checkpoint

Now let's load the pre-trained Whisper small checkpoint. Again, this is trivial through the use of 🤗 Transformers!

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
```

We'll set `use_cache` to `False` for training since we're using [gradient checkpointing](https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing) and the two are incompatible. We'll also override two generation arguments to control the behaviour of the model during inference: we'll force the language and task tokens during generation by setting the `language` and `task` arguments, and re-enable the cache for generation to speed up inference time:

```python
from functools import partial

# disable cache during training since it's incompatible with gradient checkpointing
model.config.use_cache = False

# set language and task for generation and re-enable cache
model.generate = partial(
    model.generate, language="sinhalese", task="transcribe", use_cache=True
)
```

## Define the Training Configuration

In the final step, we define all the parameters related to training. Here, we set the number of training steps to 500. This is enough steps to see a big improvement in WER compared to the pre-trained Whisper model, while ensuring that fine-tuning can be run in approximately 45 minutes on a Google Colab free tier. For more detail on the training arguments, refer to the Seq2SeqTrainingArguments documentation [here](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments).

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-dv",  # name on the HF Hub
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,  # increase to 4000 if you have your own GPU or a Colab paid plan
    gradient_checkpointing=True,
    fp16=True,
    fp16_full_eval=True,
    evaluation_strategy="steps",
    per_device_eval_batch_size=16,
    predict_with_generate=True,
    generation_max_length=225,
    save_steps=500,
    eval_steps=500,
    logging_steps=25,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    push_to_hub=True,
)
```

<Tip>
    If you do not want to upload the model checkpoints to the Hugging Face Hub, set `push_to_hub=False`.
</Tip>

We can forward the training arguments to the 🤗 Trainer along with our model, dataset, data collator and `compute_metrics` function:

```python
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=processor,
)
```

And with that, we're ready to start training!

### Training

To launch training, simply execute:

```python
trainer.train()
```

Training will take approximately 45 minutes depending on your GPU or the one allocated to Google Colab. Depending on your GPU, it is possible that you will encounter a CUDA "out-of-memory" error when you start training. In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and employ [`gradient_accumulation_steps`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.gradient_accumulation_steps) to compensate.

**Output:**

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.136         | 1.63  | 500  | 0.1727          | 63.8972   | 14.0661 |

Our final WER is 14.1% - not bad for seven hours of training data and just 500 training steps! That amounts to a 112% improvement over the pre-trained model! That means we've taken a model that previously had no knowledge of Dhivehi, and fine-tuned it to recognise Dhivehi speech with adequate accuracy in under one hour 🤯

The big question is how this compares to other ASR systems. For that, we can view the [autoevaluate leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_13_0&only_verified=0&task=automatic-speech-recognition&config=dv&split=test&metric=wer), a leaderboard that categorises models by language and dataset, and subsequently ranks them according to their WER.

Looking at the leaderboard, we see that our model trained for 500 steps convincingly beats the pre-trained [Whisper Small](https://huggingface.co/openai/whisper-small) checkpoint that we evaluated in the previous section. Nice job 👏

We also see that there are a few checkpoints that do better than the one we trained. The beauty of the Hugging Face Hub is that it is a *collaborative* platform - if we don't have the time or resources to perform a longer training run ourselves, we can load a checkpoint that someone else in the community has trained and been kind enough to share (making sure to thank them for it!). You'll be able to load these checkpoints in exactly the same way as the pre-trained ones, using the `pipeline` class as we did previously! So there's nothing stopping you from cherry-picking the best model on the leaderboard to use for your task!

We can automatically submit our checkpoint to the leaderboard when we push the training results to the Hugging Face Hub - we simply have to set the appropriate keyword arguments (kwargs). You can change these values to match your dataset, language and model name accordingly:

```python
kwargs = {
    "dataset_tags": "mozilla-foundation/common_voice_13_0",
    "dataset": "Common Voice 13",  # a 'pretty' name for the training dataset
    "language": "dv",
    "model_name": "Whisper Small Dv - Sanchit Gandhi",  # a 'pretty' name for your model
    "finetuned_from": "openai/whisper-small",
    "tasks": "automatic-speech-recognition",
}
```

The training results can now be uploaded to the Hugging Face Hub. To do so, execute the push_to_hub command:

```python
trainer.push_to_hub(**kwargs)
```

This will save the training logs and model weights under `"your-username/the-name-you-picked"`. For this example, check out the upload at `sanchit-gandhi/whisper-small-dv`.

While the fine-tuned model yields satisfactory results on the Common Voice 13 Dhivehi test data, it is by no means optimal. The purpose of this guide is to demonstrate how to fine-tune an ASR model using the 🤗 Trainer for multilingual speech recognition.

If you have access to your own GPU or are subscribed to a Google Colab paid plan, you can increase max_steps to 4000 to improve the WER further by training for more steps. Training for 4000 steps will take approximately 3-5 hours depending on your GPU, and yield WER results approximately 3% lower than training for 500 steps. If you decide to train for 4000 steps, we also recommend changing the learning rate scheduler to a *linear* schedule (set `lr_scheduler_type="linear"`), as this will yield an additional performance boost over long training runs.

The results can likely be improved further by optimising training hyperparameters, such as the _learning rate_ and _dropout_, and using a larger pre-trained checkpoint (`medium` or `large`). We leave this as an exercise for the reader.

## Sharing Your Model

You can now share this model with anyone using the link on the Hugging Face Hub. They can load it with the identifier `"your-username/the-name-you-picked"` directly into the `pipeline()` object. For instance, to load the fine-tuned checkpoint ["sanchit-gandhi/whisper-small-dv"](https://huggingface.co/sanchit-gandhi/whisper-small-dv):

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="sanchit-gandhi/whisper-small-dv")
```

## Conclusion

In this section, we covered a step-by-step guide on fine-tuning the Whisper model for speech recognition using 🤗 Datasets, Transformers and the Hugging Face Hub. We first loaded the Dhivehi subset of the Common Voice 13 dataset and pre-processed it by computing log-mel spectrograms and tokenising the text. We then defined a data collator, evaluation metric and training arguments, before using the 🤗 Trainer to train and evaluate our model. We finished by uploading the fine-tuned model to the Hugging Face Hub, and showcased how to share and use it with the `pipeline()` class.

If you followed through to this point, you should now have a fine-tuned checkpoint for speech recognition, well done! 🥳 Even more importantly, you're equipped with all the tools you need to fine-tune the Whisper model on any speech recognition dataset or domain. So what are you waiting for! Pick one of the datasets covered in the section [Choosing a dataset](choosing_dataset) or select a dataset of your own, and see whether you can get state-of-the-art performance! The leaderboard is waiting for you...
1
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter5/hands_on.mdx
# Hands-on exercise

In this unit, we explored the challenges of fine-tuning ASR models, acknowledging the time and resources required to fine-tune a model like Whisper (even a small checkpoint) on a new language. To provide you with hands-on experience, we have designed an exercise that walks you through the process of fine-tuning an ASR model using a small dataset. The main goal of this exercise is to familiarise you with the fine-tuning process, rather than to aim for production-level results. We have intentionally set a low metric target so that even with limited resources you should be able to reach it.

Here are the instructions:
* Fine-tune the `"openai/whisper-tiny"` model using the American English ("en-US") subset of the `"PolyAI/minds14"` dataset.
* Use the first **450 examples for training**, and the rest for evaluation (a minimal loading and splitting sketch is shown at the end of this page). Ensure you set `num_proc=1` when pre-processing the dataset using the `.map` method (this will ensure your model is submitted correctly for assessment).
* To evaluate the model, use the `wer` and `wer_ortho` metrics as described in this unit. However, do *not* convert the metric into percentages by multiplying by 100 (e.g. if WER is 42%, we expect to see the value 0.42 in this exercise).

Once you have fine-tuned the model, make sure to upload it to the 🤗 Hugging Face Hub with the following `kwargs`:

```
kwargs = {
    "dataset_tags": "PolyAI/minds14",
    "finetuned_from": "openai/whisper-tiny",
    "tasks": "automatic-speech-recognition",
}
```

You will pass this assignment if your model's normalised WER (`wer`) is lower than **0.37**.

Feel free to build a demo of your model and share it on Discord! If you have any questions, post them in the #audio-study-group channel.
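As a starting point for the exercise, a minimal sketch of the data loading and split described above might look like the following; the dataset and column names follow the MINDS-14 dataset card, and the exact pre-processing pipeline is left up to you:

```python
from datasets import load_dataset, Audio

# Load the US English subset of MINDS-14 and resample the audio to 16 kHz,
# the sampling rate expected by Whisper.
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))

# First 450 examples for training, the remainder for evaluation,
# as the exercise instructions ask.
train_split = minds.select(range(450))
eval_split = minds.select(range(450, len(minds)))
print(train_split, eval_split)
```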
2
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter2/asr_pipeline.mdx
# Automatic speech recognition with a pipeline

Automatic Speech Recognition (ASR) is the task of transcribing speech audio recordings into text. This task has many practical applications, from creating subtitles for videos to enabling voice commands for virtual assistants like Siri and Alexa.

In this section we'll use the `automatic-speech-recognition` pipeline to transcribe an audio recording of a person asking a question about paying a bill, using the same MINDS-14 dataset as before.

To get started, load the dataset and upsample it to 16 kHz as described in [Audio classification with a pipeline](audio_classification_pipeline), if you haven't done so already.

To transcribe an audio recording, we can use the `automatic-speech-recognition` pipeline from 🤗 Transformers. Let's instantiate the pipeline:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition")
```

Next, we'll take an example from the dataset and pass its raw data to the pipeline:

```python
example = minds[0]
asr(example["audio"]["array"])
```

**Output:**
```out
{"text": "I WOULD LIKE TO PAY MY ELECTRICITY BILL USING MY COD CAN YOU PLEASE ASSIST"}
```

Let's compare this output with the actual transcription for this example:

```python
example["english_transcription"]
```

**Output:**
```out
"I would like to pay my electricity bill using my card can you please assist"
```

The model seems to have done a pretty good job of transcribing the audio! Compared with the original transcription it only got one word wrong ("card"), which is quite good given the speaker's Australian accent, where the letter "r" is often silent. That said, I wouldn't recommend trying to pay your next electricity bill with a fish ("cod")!

By default, this pipeline uses a model trained for automatic speech recognition in English, which is fine for this example. If you'd like to try transcribing other subsets of MINDS-14 in a different language, you can find a pre-trained ASR model [on the 🤗 Hugging Face Hub](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&language=fr&sort=downloads). You can filter the list of models first by task, then by language. Once you have found a model you like, pass its name as the `model` argument to the pipeline.

Let's try this for the German split of MINDS-14.
Load the "de-DE" subset:

```python
from datasets import load_dataset
from datasets import Audio

minds = load_dataset("PolyAI/minds14", name="de-DE", split="train")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
```

Take an example and check what the transcription should be:

```python
example = minds[0]
example["transcription"]
```

**Output:**
```out
"ich möchte gerne Geld auf mein Konto einzahlen"
```

Find a pre-trained ASR model for German on the 🤗 Hugging Face Hub, instantiate the pipeline, and transcribe the example:

```py
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="maxidl/wav2vec2-large-xlsr-german")
asr(example["audio"]["array"])
```

**Output:**
```out
{"text": "ich möchte gerne geld auf mein konto einzallen"}
```

When working on your own task, starting with a simple pipeline like the ones we've shown in this unit is a valuable tool that offers several benefits:
- a pre-trained model may already exist that solves your task really well, saving you plenty of time;
- `pipeline()` takes care of all the pre- and post-processing for you, so you don't have to worry about getting the data into the right format for the model;
- if the result isn't ideal, it still gives you a quick baseline for future fine-tuning;
- once you fine-tune a model on your custom data and share it on the Hugging Face Hub, the whole community will be able to use it quickly and easily via the `pipeline()` method, making AI more accessible.
3
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter2/audio_classification_pipeline.mdx
# Audio classification with a pipeline

Audio classification involves assigning one or more labels to an audio recording based on its content. The labels may correspond to broad sound categories, such as music, speech or noise, or to more specific categories like birdsong or car engine sounds.

Before diving into the details of how the most popular audio transformers work, and before fine-tuning a custom model, let's see how you can use an off-the-shelf pre-trained model for audio classification with only a few lines of code using 🤗 Transformers.

Let's use the same [MINDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset that you explored in the previous unit. If you recall, MINDS-14 contains recordings of people asking an e-banking system questions in several languages and dialects, and has an `intent_class` for each recording. We can classify the recordings by the intent of the call.

Just as before, let's load the `en-AU` subset of the data to try out the pipeline, and upsample it to a 16 kHz sampling rate, which is what most speech models expect.

```py
from datasets import load_dataset
from datasets import Audio

minds = load_dataset("PolyAI/minds14", name="en-AU", split="train")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
```

To classify an audio recording into a set of classes, we can use the `audio-classification` pipeline from 🤗 Transformers. In our case we need a model that has been fine-tuned for intent classification, specifically on the MINDS-14 dataset. Luckily for us, the Hub has a model that does exactly that! Let's load it using the `pipeline()` function:

```py
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="anton-l/xtreme_s_xlsr_300m_minds14",
)
```

This pipeline expects the audio data as a NumPy array. All of the preprocessing of the raw audio data is conveniently handled by the pipeline. Let's pick an example to try it out:

```py
example = minds[0]
```

If you recall the structure of the dataset, the raw audio data is stored in a NumPy array under `["audio"]["array"]`, so let's pass it straight to the `classifier`:

```py
classifier(example["audio"]["array"])
```

**Output:**
```out
[
    {"score": 0.9631525278091431, "label": "pay_bill"},
    {"score": 0.02819698303937912, "label": "freeze"},
    {"score": 0.0032787492964416742, "label": "card_issues"},
    {"score": 0.0019414445850998163, "label": "abroad"},
    {"score": 0.0008378693601116538, "label": "high_value_payment"},
]
```

The model is very confident that the caller wanted to ask about paying their bill. Let's see what the actual label for this example is:

```py
id2label = minds.features["intent_class"].int2str
id2label(example["intent_class"])
```

Hooray! The predicted label was correct! Here we were lucky to find a model that classifies exactly the labels we need. In many cases, when dealing with a classification task, a pre-trained model's set of classes is not exactly the same as the classes you need the model to distinguish. In that case, you can fine-tune a pre-trained model to "calibrate" it to your exact set of class labels. We'll learn how to do this in the upcoming units.

Now, let's look at another very common task in speech processing - _automatic speech recognition_.
4
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter2/introduction.mdx
# Unit 2. A gentle introduction to audio applications

Welcome to the second unit of the Hugging Face audio course! Previously, we covered the fundamentals of audio data and learned how to work with audio datasets using the 🤗 Datasets and 🤗 Transformers libraries. We discussed various concepts such as sampling rate, amplitude, bit depth, waveform and spectrograms, and saw how to preprocess data to prepare it for a pre-trained model.

At this point you may be eager to learn about the audio tasks that 🤗 Transformers can handle, and you have all the foundational knowledge you need to dive in! Let's take a look at some mind-blowing examples of audio tasks:

* **Audio classification**: easily categorise audio clips into different categories. You can identify whether a recording is of a barking dog or a meowing cat, or which music genre a song belongs to.
* **Automatic speech recognition**: transform audio clips into text by transcribing them automatically. You can get a text representation of a recording of someone speaking, such as "How are you?". Very useful for note taking!
* **Speaker diarization**: Ever wondered who's speaking in a recording? With 🤗 Transformers, you can identify which speaker is talking at any given time in an audio clip. Imagine being able to tell Alice and Bob apart in a recording of their conversation.
* **Text to speech**: create a narrated version of a text that can be used to produce an audiobook, help with accessibility, or give a voice to an NPC in a game. With 🤗 Transformers you can easily do that!

In this unit you'll learn how to use pre-trained models for some of these tasks with the `pipeline()` function from 🤗 Transformers. Specifically, we'll see how pre-trained models can be used for audio classification and automatic speech recognition. Let's get started!
5
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter2/hands_on.mdx
# Hands-on exercise

This exercise is not graded and is intended to help you familiarise yourself with the tools and libraries you will be using throughout the course. If you already have experience with Google Colab, 🤗 Datasets, librosa and 🤗 Transformers, feel free to skip this exercise.

1. Create a [Google Colab](https://colab.research.google.com) notebook.
2. Use 🤗 Datasets to load the train split of the [`facebook/voxpopuli`](https://huggingface.co/datasets/facebook/voxpopuli) dataset in a language of your choice, in streaming mode.
3. Get the third example from the `train` part of the dataset and explore it. Given the features this example has, what kinds of audio tasks could you use this dataset for?
4. Plot this example's waveform and spectrogram.
5. Go to the [🤗 Hugging Face Hub](https://huggingface.co/models), explore the pre-trained models and find one that can be used for automatic speech recognition in the language you picked earlier. Instantiate a corresponding pipeline with the model you found and transcribe the example.

(If you would like a quick starting point for steps 2-4, there is a minimal sketch right after this exercise.)

If you get stuck with this exercise, feel free to take a peek at an [example solution](https://colab.research.google.com/drive/1NGyo5wFpRj8TMfZOIuPaJHqyyXCITftc?usp=sharing). Discovered something interesting? Found a cool model? Got a beautiful spectrogram? Feel free to share your work and discoveries on Twitter!

In the upcoming chapters you'll learn more about the various audio transformer architectures and train your own model!
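As a rough starting point for steps 2-4, here is a minimal sketch. The language config `"en"` and the plotting choices are only assumptions for illustration - pick your own language and adjust as you like, and note that the streaming download may take a moment on first access.

```python
import itertools

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np
from datasets import load_dataset

# Step 2: stream the train split of VoxPopuli in a language of your choice ("en" here)
dataset = load_dataset("facebook/voxpopuli", "en", split="train", streaming=True)

# Step 3: take the third example and inspect its features
example = next(itertools.islice(iter(dataset), 2, 3))
print(example.keys())

# Step 4: plot the waveform and a log-amplitude spectrogram
array = example["audio"]["array"]
sampling_rate = example["audio"]["sampling_rate"]

plt.figure()
librosa.display.waveshow(array, sr=sampling_rate)

spectrogram_db = librosa.amplitude_to_db(np.abs(librosa.stft(array)), ref=np.max)
plt.figure()
librosa.display.specshow(spectrogram_db, sr=sampling_rate, x_axis="time", y_axis="hz")
plt.show()
```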
6
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/supplemental_reading.mdx
# Supplemental reading and resources

If you'd like to explore different transformer architectures in more depth and learn about their various applications in speech processing, check out this recent paper:

### Transformers in Speech Processing: A Survey

by Siddique Latif, Aun Zaidi, Heriberto Cuayahuitl, Fahad Shamshad, Moazzam Shoukat, Junaid Qadir

"The remarkable success of transformers in the field of natural language processing has sparked the interest of the speech-processing community, leading to an exploration of their potential for modeling long-range dependencies within speech sequences. Recently, transformers have gained prominence across various speech-related domains, including automatic speech recognition, speech synthesis, speech translation, speech para-linguistics, speech enhancement, spoken dialogue systems, and numerous multimodal applications. In this paper, we present a comprehensive survey that aims to bridge research studies from diverse subfields within speech technology. By consolidating findings from across the speech technology landscape, we provide a valuable resource for researchers interested in harnessing the power of transformers to advance the field. We identify the challenges encountered by transformers in speech processing while also offering insights into potential solutions to address these issues."

[arxiv.org/abs/2303.11607](https://arxiv.org/abs/2303.11607)
7
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/classification.mdx
# Audio classification architectures

The goal of audio classification is to predict a class label for an audio input. The model can predict a single class label that covers the entire input sequence, or it can predict a label for every audio frame - typically every 20 milliseconds of input audio - in which case the model's output is a sequence of class label probabilities. An example of the former is detecting which bird is making a particular sound; an example of the latter is speaker diarization, where the model predicts which speaker is talking at any given moment.

## Classification using spectrograms

One of the easiest ways to classify audio is to pretend it's an image classification problem!

Recall that a spectrogram is a two-dimensional tensor of shape `(frequencies, sequence length)`. In the [chapter on audio data](../chapter1/audio_data) we plotted these spectrograms as images. Guess what? We can literally treat a spectrogram as an image and pass it to a regular CNN classifier such as ResNet, and get very good predictions. Even better, we can use an image transformer model such as ViT.

This is exactly what the **Audio Spectrogram Transformer (AST)** does. It uses the ViT or Vision Transformer model and feeds it spectrograms as input instead of regular images. Thanks to the transformer's self-attention layers, the model captures global context better than a CNN does.

Just like ViT, the AST model splits the audio spectrogram into a sequence of partially overlapping 16×16 pixel image patches. This sequence of patches is then projected into a sequence of embeddings, which are fed into the transformer encoder as usual. AST is an encoder-only transformer model, so the output is a sequence of hidden states, one for each 16×16 input patch. On top of this sits a simple classification layer with sigmoid activation that maps the hidden states to classification probabilities.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/ast.png" alt="The audio spectrogram transformer works on a sequence of patches taken from the spectrogram">
</div>

Image from the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/pdf/2104.01778.pdf)

<Tip>
💡 Even though we pretend here that spectrograms are the same as images, there are important differences. For example, shifting the contents of an image up or down generally does not change the meaning of what is in the image. However, shifting a spectrogram up or down changes the frequencies present in the sound and completely changes its character. Images are translation-invariant, spectrograms are not. Treating spectrograms as images can work very well in practice, but keep in mind they are not really the same thing.
</Tip>

## Any transformer can be a classifier

In a [previous section](ctc) you saw that CTC is an efficient technique for automatic speech recognition using an encoder-only transformer. Such CTC models are already classifiers, predicting probabilities for class labels from a tokenizer vocabulary.
We can take a CTC model and turn it into a general-purpose audio classifier by changing the labels and training it with a regular cross-entropy loss function instead of the special CTC loss. For example, HF Transformers has a `Wav2Vec2ForCTC` model, but also `Wav2Vec2ForSequenceClassification` and `Wav2Vec2ForAudioFrameClassification`. The only differences between the architectures of these models are the size of the classification layer and the loss function used.

In fact, any encoder-only audio transformer can be turned into an audio classifier by adding a classification layer on top of the sequence of hidden states. (Classifiers usually don't need a transformer decoder.)

To predict a single classification score for the entire sequence (`Wav2Vec2ForSequenceClassification`), the model takes the mean over the hidden states and feeds that into the classification layer. The output is a single probability distribution.

To make a separate classification for each audio frame (`Wav2Vec2ForAudioFrameClassification`), the classifier is run over the sequence of hidden states, so the classifier's output is also a sequence.
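As a quick illustration of the idea above, the sketch below loads a Wav2Vec2 encoder with a sequence-classification head from 🤗 Transformers. The checkpoint name and the number of labels are placeholder assumptions; the classification head is randomly initialised here, so you would still need to fine-tune it on your own labelled audio before the predictions mean anything.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

# Placeholder checkpoint and label count - replace with your own
checkpoint = "facebook/wav2vec2-base"
model = Wav2Vec2ForSequenceClassification.from_pretrained(checkpoint, num_labels=5)
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)

# One second of dummy 16 kHz audio, just to show the shapes involved
waveform = torch.zeros(16_000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)

print(logits.shape)
```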
8
0
hf_public_repos/audio-transformers-course/chapters/ru
hf_public_repos/audio-transformers-course/chapters/ru/chapter3/ctc.mdx
# CTC architectures

CTC (Connectionist Temporal Classification) is a technique used with encoder-only transformer models for automatic speech recognition (ASR). Examples of such models are **Wav2Vec2**, **HuBERT** and **M-CTC-T**.

An encoder-only transformer is the simplest kind of transformer, because it uses only the encoder part of the model. The encoder reads the input sequence (the audio waveform) and maps it into a sequence of hidden states, also known as the output embeddings.

With a CTC model, we apply an additional linear mapping to the sequence of hidden states to get class label predictions. The class labels are the **characters of the alphabet** (a, b, c, ...). This way we can predict any word in the target language with a small classification head, since the vocabulary only needs to contain 26 characters plus a few special tokens.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/wav2vec2-ctc.png" alt="Transformer encoder with a CTC head on top">
</div>

So far this is very similar to what we do in NLP with a model like BERT: an encoder-only transformer maps our text tokens into a sequence of encoder hidden states, and then we apply a linear mapping to get one class label prediction for each hidden state.

Here's the catch: in speech we don't know the **alignment** between the audio inputs and the text outputs. We know that the order in which the speech is spoken matches the order in which the text is transcribed (a so-called monotonic alignment), but we don't know how the characters in the transcription line up with the audio. This is where the CTC algorithm comes in.

<Tip>
💡 In NLP models the vocabulary is usually made up of thousands of tokens that describe not just individual characters but parts of words or even whole words. For CTC, however, a small vocabulary works best, and we generally try to keep it under 50 characters. We don't care about the case of the letters, so using only uppercase (or only lowercase) is sufficient. Numbers are spelled out, e.g. `"20"` becomes `"twenty"`. In addition to the letters, we need at least a word-separator token (space) and a padding token. Just as in an NLP model, the padding token lets us combine several examples into a batch, and it's also the token the model will predict for silence. In English it's also useful to keep the `'` character - after all, `"it's"` and `"its"` have very different meanings.
</Tip>

## Dude, where's my alignment?

Automatic speech recognition, or ASR, takes audio as input and produces text as output. We have a few choices for how to predict the text:

- as individual characters
- as phonemes
- as word tokens

An ASR model is trained on a dataset of `(audio, text)` pairs, where the text is a human-made transcription of the audio file. Generally the dataset contains no timing information saying which word or syllable occurs where in the audio file. Since we can't rely on timing information during training, we have no idea how the input and output sequences should be aligned.

Let's suppose our input is a one-second audio file.
In **Wav2Vec2**, the model first downsamples the audio input using a CNN feature encoder to a shorter sequence of hidden states, with one hidden-state vector for every 20 milliseconds of audio. For one second of audio, we then forward a sequence of 50 hidden states to the transformer encoder. (The audio segments extracted from the input sequence partially overlap, so even though one hidden-state vector is emitted every 20 ms, each hidden state actually represents 25 ms of audio.)

The transformer encoder predicts one feature representation for each of these hidden states, so we receive a sequence of 50 outputs from the transformer. Each of these outputs has a dimensionality of 768. The output sequence of the transformer encoder in this example therefore has shape `(768, 50)`. Since each of these predictions covers 25 ms of time, which is shorter than the duration of a phoneme, it makes sense to predict individual phonemes or characters but not whole words. CTC works best with a small vocabulary, so we'll predict characters.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/cnn-feature-encoder.png" alt="The audio waveform gets mapped to a shorter sequence of hidden-states">
</div>

To predict text, we map each of the 768-dimensional encoder outputs to our character labels with a linear layer (the "CTC head"). The model then predicts a `(50, 32)` tensor of logits, where 32 is the number of tokens in the vocabulary. Since we make one prediction for each feature in the sequence, we end up with 50 character predictions for every second of audio.

However, if we simply predict one character every 20 ms, our output sequence might look something like this:

```text
BRIIONSAWWSOMEETHINGCLOSETOPANICONHHISOPPONENT'SSFAACEWHENTHEMANNFINALLLYRREECOGGNNIIZEDHHISSERRRRORR ...
```

If you look closely it somewhat resembles English, but many of the characters are duplicated. That's because the model has to output *something* for every 20 ms of audio in the input sequence, and if a character is spread over a period longer than 20 ms, it will appear in the output multiple times. There is no way to avoid this, especially since we don't know the timing of the transcription during training. CTC is a way to filter out these duplicates.

(In reality, the predicted sequence also contains a lot of padding tokens for when the model isn't quite sure what the sound represents, or for the empty space between characters. We removed these tokens from the example for clarity. The partial overlap between audio segments is another reason characters get duplicated in the output.)

## The CTC algorithm

The key to the CTC algorithm is the use of a special token, often called the **blank token**. It is just another token the model will predict, and it is part of the vocabulary. In this example the blank token is shown as `_`. This special token serves as a hard boundary between groups of characters.

The full output of a CTC model might look like this:

```text
B_R_II_O_N_||_S_AWW_|||||_S_OMEE_TH_ING_||_C_L_O_S_E||TO|_P_A_N_I_C_||_ON||HHI_S||_OP_P_O_N_EN_T_'SS||_F_AA_C_E||_W_H_EN||THE||M_A_NN_||||_F_I_N_AL_LL_Y||||_RREE_C_O_GG_NN_II_Z_ED|||HHISS|||_ER_RRR_ORR||||
```

The `|` token is the word-separator symbol.
In the example we use `|` instead of a space to make it easier to see where the word breaks are, but it serves the same purpose.

The CTC blank token makes it possible to filter out the duplicate characters. For example, let's look at the last word of the predicted sequence, `_ER_RRR_ORR`. Without the CTC blank token the word would look like this:

```text
ERRRRORR
```

If we simply removed the duplicate characters we would get `EROR`, which is clearly not the correct spelling. But with the CTC blank token we can remove the duplicates within each group, so that:

```text
_ER_RRR_ORR
```

becomes:

```text
_ER_R_OR
```

and now we remove the `_` blank token to get the final word:

```text
ERROR
```

If we apply this logic to the entire text, including `|`, and replace the surviving `|` characters with spaces, the final CTC-decoded output is:

```text
BRION SAW SOMETHING CLOSE TO PANIC ON HIS OPPONENT'S FACE WHEN THE MAN FINALLY RECOGNIZED HIS ERROR
```

To recap: the model predicts one token (character) for every 20 ms of (partially overlapping) audio from the input waveform. This produces a lot of duplicates. Thanks to the CTC blank token, we can easily remove these duplicates without destroying the correct spelling of the words. This is a very simple and convenient way to solve the problem of aligning the output text with the input audio. (A small code sketch of this collapse rule is included at the end of this page.)

<Tip>
💡 In the actual Wav2Vec2 model, the CTC blank token is the same as the padding token `<pad>`. The model will predict many of these `<pad>` tokens, for instance when there is no clear character to predict for the current 20 ms of audio. Using the same token for padding and for the CTC blank simplifies the decoding algorithm and helps keep the vocabulary small.
</Tip>

Adding CTC to a transformer encoder model is easy: the output sequence from the encoder goes into a linear layer that projects the acoustic features onto the vocabulary. The model is trained with a special CTC loss.

One downside of CTC is that it may output words that *sound* correct but are not *spelled* correctly. After all, the CTC head only considers individual characters, not whole words. One way to improve the quality of the transcriptions is to use an external language model, which essentially acts as a spellchecker on top of the CTC output.

## What's the difference between Wav2Vec2, HuBERT, M-CTC-T, ...?

All transformer-based CTC models have a very similar architecture: they use a transformer encoder (but not the decoder) with a CTC head on top. Architecturally they are more alike than different.

One difference between Wav2Vec2 and M-CTC-T is that the former works on raw audio waveforms while the latter takes mel spectrograms as input. The models were also trained for different purposes: M-CTC-T, for example, is trained for multilingual speech recognition and therefore has a relatively large CTC head that includes Chinese characters in addition to other alphabets.

Wav2Vec2 and HuBERT use exactly the same architecture but are trained in very different ways. Wav2Vec2 is pre-trained in the style of BERT's masked language modeling, predicting speech units for masked parts of the audio. HuBERT takes the BERT idea a step further and learns to predict "discrete speech units", which are analogous to tokens in a text sentence, so that speech can be processed with established NLP techniques.
It should be noted that the models highlighted here are not the only transformer-based CTC models. There are many others, but now you know that they all work in a similar way.
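To make the decoding steps described above a bit more concrete, here is a small, self-contained sketch of the greedy CTC collapse rule (merge repeats, then drop blanks, then turn separators into spaces) applied to the example strings from this page. It is only an illustration of the algorithm, not the decoder used inside 🤗 Transformers.

```python
import itertools


def ctc_collapse(prediction: str, blank: str = "_", word_sep: str = "|") -> str:
    # 1. Merge consecutive repeated tokens (e.g. "RRR" -> "R")
    merged = "".join(char for char, _ in itertools.groupby(prediction))
    # 2. Drop the blank tokens that separated the groups
    no_blanks = merged.replace(blank, "")
    # 3. Collapse runs of word separators and turn them into spaces
    words = [word for word in no_blanks.split(word_sep) if word]
    return " ".join(words)


print(ctc_collapse("_ER_RRR_ORR"))                               # ERROR
print(ctc_collapse("B_R_II_O_N_||_S_AWW_|||||_S_OMEE_TH_ING_"))  # BRION SAW SOMETHING
```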
9
0
hf_public_repos
hf_public_repos/blog/sentiment-analysis-python.md
--- title: "Getting Started with Sentiment Analysis using Python" thumbnail: /blog/assets/50_sentiment_python/thumbnail.png authors: - user: federicopascual --- # Getting Started with Sentiment Analysis using Python <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> Sentiment analysis is the automated process of tagging data according to their sentiment, such as positive, negative and neutral. Sentiment analysis allows companies to analyze data at scale, detect insights and automate processes. In the past, sentiment analysis used to be limited to researchers, machine learning engineers or data scientists with experience in natural language processing. However, the AI community has built awesome tools to democratize access to machine learning in recent years. Nowadays, you can use sentiment analysis with a few lines of code and no machine learning experience at all! 🤯 In this guide, you'll learn everything to get started with sentiment analysis using Python, including: 1. [What is sentiment analysis?](#1-what-is-sentiment-analysis) 2. [How to use pre-trained sentiment analysis models with Python](#2-how-to-use-pre-trained-sentiment-analysis-models-with-python) 3. [How to build your own sentiment analysis model](#3-building-your-own-sentiment-analysis-model) 4. [How to analyze tweets with sentiment analysis](#4-analyzing-tweets-with-sentiment-analysis-and-python) Let's get started! 🚀 ## 1. What is Sentiment Analysis? Sentiment analysis is a [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) technique that identifies the polarity of a given text. There are different flavors of sentiment analysis, but one of the most widely used techniques labels data into positive, negative and neutral. For example, let's take a look at these tweets mentioning [@VerizonSupport](https://twitter.com/VerizonSupport): - *"dear @verizonsupport your service is straight 💩 in dallas.. been with y’all over a decade and this is all time low for y’all. i’m talking no internet at all."* → Would be tagged as "Negative". - *"@verizonsupport ive sent you a dm"* → would be tagged as "Neutral". - *"thanks to michelle et al at @verizonsupport who helped push my no-show-phone problem along. order canceled successfully and ordered this for pickup today at the apple store in the mall."* → would be tagged as "Positive". Sentiment analysis allows processing data at scale and in real-time. For example, do you want to analyze thousands of tweets, product reviews or support tickets? Instead of sorting through this data manually, you can use sentiment analysis to automatically understand how people are talking about a specific topic, get insights for data-driven decisions and automate business processes. Sentiment analysis is used in a wide variety of applications, for example: - Analyze social media mentions to understand how people are talking about your brand vs your competitors. - Analyze feedback from surveys and product reviews to quickly get insights into what your customers like and dislike about your product. - Analyze incoming support tickets in real-time to detect angry customers and act accordingly to prevent churn. ## 2. How to Use Pre-trained Sentiment Analysis Models with Python Now that we have covered what sentiment analysis is, we are ready to play with some sentiment analysis models! 
🎉 On the [Hugging Face Hub](https://huggingface.co/models), we are building the largest collection of models and datasets publicly available in order to democratize machine learning 🚀. In the Hub, you can find more than 27,000 models shared by the AI community with state-of-the-art performances on tasks such as sentiment analysis, object detection, text generation, speech recognition and more. The Hub is free to use and most models have a widget that allows you to test them directly in your browser!

There are more than [215 sentiment analysis models](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads&search=sentiment) publicly available on the Hub and integrating them with Python just takes 5 lines of code:

```python
pip install -q transformers
from transformers import pipeline
sentiment_pipeline = pipeline("sentiment-analysis")
data = ["I love you", "I hate you"]
sentiment_pipeline(data)
```

This code snippet uses the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to make predictions from models available in the Hub. It uses the [default model for sentiment analysis](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english?text=I+like+you.+I+love+you) to analyze the list of texts `data` and it outputs the following results:

```python
[{'label': 'POSITIVE', 'score': 0.9998},
 {'label': 'NEGATIVE', 'score': 0.9991}]
```

You can use a specific sentiment analysis model that is better suited to your language or use case by providing the name of the model. For example, if you want a sentiment analysis model for tweets, you can specify the [model id](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis):

```python
specific_model = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis")
specific_model(data)
```

You can test these models with your own data using this [Colab notebook](https://colab.research.google.com/drive/1G4nvWf6NtytiEyiIkYxs03nno5ZupIJn?usp=sharing):

<!-- <div class="flex text-center items-center"> -->

<figure class="flex justify-center w-full">
    <iframe width="560" height="315" src="https://www.youtube.com/embed/eN-mbWOKJ7Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</figure>

The following are some popular sentiment analysis models available on the Hub that we recommend checking out:

- [Twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) is a roBERTa model trained on ~58M tweets and fine-tuned for sentiment analysis. Fine-tuning is the process of taking a pre-trained large language model (e.g. roBERTa in this case) and then tweaking it with additional training data to make it perform a second similar task (e.g. sentiment analysis).
- [Bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is a model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian.
- [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion?text=I+feel+a+bit+let+down) is a model fine-tuned for detecting emotions in texts, including sadness, joy, love, anger, fear and surprise.

Are you interested in doing sentiment analysis in languages such as Spanish, French, Italian or German?
On the Hub, you will find many models fine-tuned for different use cases and ~28 languages. You can check out the complete list of sentiment analysis models [here](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads&search=sentiment) and filter at the left according to the language of your interest.

## 3. Building Your Own Sentiment Analysis Model

Using pre-trained models publicly available on the Hub is a great way to get started right away with sentiment analysis. These models use deep learning architectures such as transformers that achieve state-of-the-art performance on sentiment analysis and other machine learning tasks. However, you can fine-tune a model with your own data to further improve the sentiment analysis results and get an extra boost of accuracy in your particular use case.

In this section, we'll go over two approaches to fine-tuning a model for sentiment analysis with your own data and criteria. The first approach uses the Trainer API from [🤗Transformers](https://github.com/huggingface/transformers), an open source library with 50K stars and 1K+ contributors, and requires a bit more coding and experience. The second approach is a bit easier and more straightforward: it uses [AutoNLP](https://huggingface.co/autonlp), a tool to automatically train, evaluate and deploy state-of-the-art NLP models without code or ML experience.

Let's dive in!

### a. Fine-tuning model with Python

In this tutorial, you'll use the IMDB dataset to fine-tune a DistilBERT model for sentiment analysis.

The [IMDB dataset](https://huggingface.co/datasets/imdb) contains 25,000 movie reviews labeled by sentiment for training a model and 25,000 movie reviews for testing it. [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) is a smaller, faster and cheaper version of [BERT](https://huggingface.co/docs/transformers/model_doc/bert). It is 40% smaller than BERT and runs 60% faster while preserving over 95% of BERT’s performance. You'll use the IMDB dataset to fine-tune a DistilBERT model that is able to classify whether a movie review is positive or negative. Once you train the model, you will use it to analyze new data! ⚡️

We have [created this notebook](https://colab.research.google.com/drive/1t-NJadXsPTDT6EWIR0PRzpn5o8oMHzp3?usp=sharing) so you can use it through this tutorial in Google Colab.

#### 1. Activate GPU and Install Dependencies

As a first step, let's set up Google Colab to use a GPU (instead of CPU) to train the model much faster. You can do this by going to the menu, clicking on 'Runtime' > 'Change runtime type', and selecting 'GPU' as the Hardware accelerator. Once you do this, you should check if a GPU is available in your notebook by running the following code:

```python
import torch
torch.cuda.is_available()
```

Then, install the libraries you will be using in this tutorial:

```python
!pip install datasets transformers huggingface_hub
```

You should also install `git-lfs` to use git in your model repository:

```python
!apt-get install git-lfs
```

#### 2. Preprocess data

You need data to fine-tune DistilBERT for sentiment analysis.
So, let's use the [🤗Datasets](https://github.com/huggingface/datasets/) library to download and preprocess the IMDB dataset so you can then use this data for training your model:

```python
from datasets import load_dataset

imdb = load_dataset("imdb")
```

IMDB is a huge dataset, so let's create smaller datasets to enable faster training and testing:

```python
small_train_dataset = imdb["train"].shuffle(seed=42).select([i for i in list(range(3000))])
small_test_dataset = imdb["test"].shuffle(seed=42).select([i for i in list(range(300))])
```

To preprocess our data, you will use the [DistilBERT tokenizer](https://huggingface.co/docs/transformers/v4.15.0/en/model_doc/distilbert#transformers.DistilBertTokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

Next, you will prepare the text inputs for the model for both splits of our dataset (training and test) by using the [map method](https://huggingface.co/docs/datasets/about_map_batch.html):

```python
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)

tokenized_train = small_train_dataset.map(preprocess_function, batched=True)
tokenized_test = small_test_dataset.map(preprocess_function, batched=True)
```

To speed up training, let's use a data_collator to convert your training samples to PyTorch tensors and concatenate them with the correct amount of [padding](https://huggingface.co/docs/transformers/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation):

```python
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```

#### 3. Training the model

Now that the preprocessing is done, you can go ahead and train your model 🚀

You will be throwing away the pretraining head of the DistilBERT model and replacing it with a classification head fine-tuned for sentiment analysis. This enables you to transfer the knowledge from DistilBERT to your custom model 🔥

For training, you will be using the [Trainer API](https://huggingface.co/docs/transformers/v4.15.0/en/main_classes/trainer#transformers.Trainer), which is optimized for fine-tuning [Transformers](https://github.com/huggingface/transformers)🤗 models such as DistilBERT, BERT and RoBERTa.

First, let's define DistilBERT as your base model:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
```

Then, let's define the metrics you will be using to evaluate how good your fine-tuned model is ([accuracy and f1 score](https://huggingface.co/metrics)):

```python
import numpy as np
from datasets import load_metric

def compute_metrics(eval_pred):
    load_accuracy = load_metric("accuracy")
    load_f1 = load_metric("f1")

    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    accuracy = load_accuracy.compute(predictions=predictions, references=labels)["accuracy"]
    f1 = load_f1.compute(predictions=predictions, references=labels)["f1"]
    return {"accuracy": accuracy, "f1": f1}
```

Next, let's login to your [Hugging Face account](https://huggingface.co/join) so you can manage your model repositories. `notebook_login` will launch a widget in your notebook where you'll need to add your [Hugging Face token](https://huggingface.co/settings/token):

```python
from huggingface_hub import notebook_login

notebook_login()
```

You are almost there!
Before training your model, you need to define the training arguments and define a Trainer with all the objects you constructed up to this point:

```python
from transformers import TrainingArguments, Trainer

repo_name = "finetuning-sentiment-model-3000-samples"

training_args = TrainingArguments(
    output_dir=repo_name,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    save_strategy="epoch",
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_test,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
```

Now, it's time to fine-tune the model on the sentiment analysis dataset! 🙌 You just have to call the `train()` method of your Trainer:

```python
trainer.train()
```

And voila! You fine-tuned a DistilBERT model for sentiment analysis! 🎉

Training time depends on the hardware you use and the number of samples in the dataset. In our case, it took almost 10 minutes using a GPU and fine-tuning the model with 3,000 samples. The more samples you use for training your model, the more accurate it will be, but training could be significantly slower.

Next, let's compute the evaluation metrics to see how good your model is:

```python
trainer.evaluate()
```

In our case, we got 88% accuracy and 89% f1 score. Quite good for a sentiment analysis model trained with just 3,000 samples!

#### 4. Analyzing new data with the model

Now that you have trained a model for sentiment analysis, let's use it to analyze new data and get 🤖 predictions! This unlocks the power of machine learning: using a model to automatically analyze data at scale, in real-time ⚡️

First, let's upload the model to the Hub:

```python
trainer.push_to_hub()
```

Now that you have pushed the model to the Hub, you can use it with the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to analyze two new movie reviews and see how your model predicts their sentiment with just two lines of code 🤯:

```python
from transformers import pipeline

sentiment_model = pipeline(model="federicopascual/finetuning-sentiment-model-3000-samples")
sentiment_model(["I love this move", "This movie sucks!"])
```

These are the predictions from our model:

```python
[{'label': 'LABEL_1', 'score': 0.9558},
 {'label': 'LABEL_0', 'score': 0.9413}]
```

In the IMDB dataset, `Label 1` means positive and `Label 0` is negative. Quite good! 🔥

### b. Training a sentiment model with AutoNLP

[AutoNLP](https://huggingface.co/autonlp) is a tool to train state-of-the-art machine learning models without code. It provides a friendly and easy-to-use user interface, where you can train custom models by simply uploading your data. AutoNLP will automatically fine-tune various pre-trained models with your data, take care of the hyperparameter tuning and find the best model for your use case. All models trained with AutoNLP are deployed and ready for production.

Training a sentiment analysis model using AutoNLP is super easy and it just takes a few clicks 🤯. Let's give it a try!

As a first step, let's get some data! You'll use [Sentiment140](https://huggingface.co/datasets/sentiment140), a popular sentiment analysis dataset that consists of Twitter messages labeled with 3 sentiments: 0 (negative), 2 (neutral), and 4 (positive). The dataset is quite big; it contains 1,600,000 tweets.
As you don't need this amount of data to get your feet wet with AutoNLP and train your first models, we have prepared a smaller version of the Sentiment140 dataset with 3,000 samples that you can download from [here](https://cdn-media.huggingface.co/marketing/content/sentiment%20analysis/sentiment-analysis-python/sentiment140-3000samples.csv). This is what the dataset looks like:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Sentiment 140 dataset" src="assets/50_sentiment_python/sentiment140-dataset.png"></medium-zoom>
  <figcaption>Sentiment 140 dataset</figcaption>
</figure>

Next, let's create a [new project on AutoNLP](https://ui.autonlp.huggingface.co/new) to train 5 candidate models:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Creating a new project on AutoNLP" src="assets/50_sentiment_python/new-project.png"></medium-zoom>
  <figcaption>Creating a new project on AutoNLP</figcaption>
</figure>

Then, upload the dataset and map the text column and target columns:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/add-dataset.png"></medium-zoom>
  <figcaption>Adding a dataset to AutoNLP</figcaption>
</figure>

Once you add your dataset, go to the "Trainings" tab and accept the pricing to start training your models. AutoNLP pricing can be as low as $10 per model:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/trainings.png"></medium-zoom>
  <figcaption>Adding a dataset to AutoNLP</figcaption>
</figure>

After a few minutes, AutoNLP has trained all models, showing the performance metrics for all of them:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/training-success.png"></medium-zoom>
  <figcaption>Trained sentiment analysis models by AutoNLP</figcaption>
</figure>

The best model has 77.87% accuracy 🔥 Pretty good for a sentiment analysis model for tweets trained with just 3,000 samples!

All these models are automatically uploaded to the Hub and deployed for production. You can use any of these models to start analyzing new data right away by using the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) as shown in previous sections of this post.

## 4. Analyzing Tweets with Sentiment Analysis and Python

In this last section, you'll take what you have learned so far in this post and put it into practice with a fun little project: analyzing tweets about NFTs with sentiment analysis!

First, you'll use [Tweepy](https://www.tweepy.org/), an easy-to-use Python library for getting tweets mentioning #NFTs using the [Twitter API](https://developer.twitter.com/en/docs/twitter-api). Then, you will use a sentiment analysis model from the 🤗Hub to analyze these tweets. Finally, you will create some visualizations to explore the results and find some interesting insights.

You can use [this notebook](https://colab.research.google.com/drive/182UbzmSeAFgOiow7WNMxvnz-yO-SJQ0W?usp=sharing) to follow this tutorial. Let’s jump into it!

### 1. Install dependencies

First, let's install all the libraries you will use in this tutorial:

```
!pip install -q transformers tweepy wordcloud matplotlib
```

### 2.
Set up Twitter API credentials

Next, you will set up the credentials for interacting with the Twitter API. First, you'll need to sign up for a [developer account on Twitter](https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api). Then, you have to create a new project and connect an app to get an API key and token. You can follow this [step-by-step guide](https://developer.twitter.com/en/docs/tutorials/step-by-step-guide-to-making-your-first-request-to-the-twitter-api-v2) to get your credentials.

Once you have the API key and token, let's create a wrapper with Tweepy for interacting with the Twitter API:

```python
import tweepy

# Add Twitter API key and secret
consumer_key = "XXXXXX"
consumer_secret = "XXXXXX"

# Handling authentication with Twitter
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)

# Create a wrapper for the Twitter API
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
```

### 3. Search for tweets using Tweepy

At this point, you are ready to start using the Twitter API to collect tweets 🎉. You will use [Tweepy Cursor](https://docs.tweepy.org/en/v3.5.0/cursor_tutorial.html) to extract 1,000 tweets mentioning #NFTs:

```python
import time

# Helper function for handling pagination in our search and handle rate limits
def limit_handled(cursor):
    while True:
        try:
            yield cursor.next()
        except tweepy.RateLimitError:
            print('Reached rate limit. Sleeping for >15 minutes')
            time.sleep(15 * 61)
        except StopIteration:
            break

# Define the term you will be using for searching tweets
query = '#NFTs'
query = query + ' -filter:retweets'

# Define how many tweets to get from the Twitter API
count = 1000

# Let's search for tweets using Tweepy
search = limit_handled(tweepy.Cursor(api.search,
                       q=query,
                       tweet_mode='extended',
                       lang='en',
                       result_type="recent").items(count))
```

### 4. Run sentiment analysis on the tweets

Now you can put your new skills to work and run sentiment analysis on your data! 🎉

You will use one of the models available on the Hub fine-tuned for [sentiment analysis of tweets](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis). Like in other sections of this post, you will use the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to make the predictions with this model:

```python
from transformers import pipeline

# Set up the inference pipeline using a model from the 🤗 Hub
sentiment_analysis = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis")

# Let's run the sentiment analysis on each tweet
tweets = []
for tweet in search:
    try:
        content = tweet.full_text
        sentiment = sentiment_analysis(content)
        tweets.append({'tweet': content, 'sentiment': sentiment[0]['label']})
    except:
        pass
```

### 5. Explore the results of sentiment analysis

How are people talking about NFTs on Twitter? Are they talking mostly positively or negatively? Let's explore the results of the sentiment analysis to find out!

First, let's load the results on a dataframe and see examples of tweets that were labeled for each sentiment:

```python
import pandas as pd

# Load the data in a dataframe
df = pd.DataFrame(tweets)
pd.set_option('display.max_colwidth', None)

# Show a tweet for each sentiment
display(df[df["sentiment"] == 'POS'].head(1))
display(df[df["sentiment"] == 'NEU'].head(1))
display(df[df["sentiment"] == 'NEG'].head(1))
```

Output:

```
Tweet: @NFTGalIery Warm, exquisite and elegant palette of charming beauty Its price is 2401 ETH.
\nhttps://t.co/Ej3BfVOAqc\n#NFTs #NFTartists #art #Bitcoin #Crypto #OpenSeaNFT #Ethereum #BTC
Sentiment: POS

Tweet: How much our followers made on #Crypto in December:\n#DAPPRadar airdrop — $200\nFree #VPAD tokens — $800\n#GasDAO airdrop — up to $1000\nStarSharks_SSS IDO — $3500\nCeloLaunch IDO — $3000\n12 Binance XMas #NFTs — $360 \nTOTAL PROFIT: $8500+\n\nJoin and earn with us https://t.co/fS30uj6SYx
Sentiment: NEU

Tweet: Stupid guy #2\nhttps://t.co/8yKzYjCYIl\n\n#NFT #NFTs #nftcollector #rarible https://t.co/O4V19gMmVk
Sentiment: NEG
```

Then, let's see how many tweets you got for each sentiment and visualize these results:

```python
import matplotlib.pyplot as plt

# Let's count the number of tweets by sentiments
sentiment_counts = df.groupby(['sentiment']).size()
print(sentiment_counts)

# Let's visualize the sentiments
fig = plt.figure(figsize=(6,6), dpi=100)
ax = plt.subplot(111)
sentiment_counts.plot.pie(ax=ax, autopct='%1.1f%%', startangle=270, fontsize=12, label="")
```

Interestingly, most of the tweets about NFTs are positive (56.1%) and almost none are negative (2.0%):

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Sentiment analysis result of NFTs tweets" src="assets/50_sentiment_python/sentiment-result.png"></medium-zoom>
  <figcaption>Sentiment analysis result of NFTs tweets</figcaption>
</figure>

Finally, let's see what words stand out for each sentiment by creating a word cloud:

```python
from wordcloud import WordCloud
from wordcloud import STOPWORDS

# Wordcloud with positive tweets
positive_tweets = df['tweet'][df["sentiment"] == 'POS']
stop_words = ["https", "co", "RT"] + list(STOPWORDS)
positive_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(positive_tweets))
plt.figure()
plt.title("Positive Tweets - Wordcloud")
plt.imshow(positive_wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

# Wordcloud with negative tweets
negative_tweets = df['tweet'][df["sentiment"] == 'NEG']
stop_words = ["https", "co", "RT"] + list(STOPWORDS)
negative_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(negative_tweets))
plt.figure()
plt.title("Negative Tweets - Wordcloud")
plt.imshow(negative_wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```

Some of the words associated with positive tweets include Discord, Ethereum, Join, Mars4 and Shroom:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Word cloud for positive tweets" src="assets/50_sentiment_python/positive-tweets-wordcloud.png"></medium-zoom>
  <figcaption>Word cloud for positive tweets</figcaption>
</figure>

In contrast, words associated with negative tweets include: cookies chaos, Solana, and OpenseaNFT:

<figure class="image table text-center m-0 w-full">
   <medium-zoom background="rgba(0,0,0,.7)" alt="Word cloud for negative tweets" src="assets/50_sentiment_python/negative-tweets-wordcloud.png"></medium-zoom>
  <figcaption>Word cloud for negative tweets</figcaption>
</figure>

And that is it! With just a few lines of python code, you were able to collect tweets, analyze them with sentiment analysis and create some cool visualizations to analyze the results! Pretty cool, huh?

## 5. Wrapping up

Sentiment analysis with Python has never been easier!
Tools such as [🤗Transformers](https://github.com/huggingface/transformers) and the [🤗Hub](https://huggingface.co/models) make sentiment analysis accessible to all developers. You can use open source, pre-trained models for sentiment analysis in just a few lines of code 🔥

Do you want to train a custom model for sentiment analysis with your own data? Easy peasy! You can fine-tune a model using the [Trainer API](https://huggingface.co/docs/transformers/v4.15.0/en/main_classes/trainer#transformers.Trainer) to build on top of large language models and get state-of-the-art results. If you want something even easier, you can use [AutoNLP](https://huggingface.co/autonlp) to train custom machine learning models by simply uploading data.

If you have questions that the Hugging Face community can help answer and/or benefit from, please ask them in the [Hugging Face forum](https://discuss.huggingface.co/). Also, join our [discord server](https://discord.gg/YRAq8fMnUG) to talk with us and with the Hugging Face community.
0
0
hf_public_repos
hf_public_repos/blog/setfit-optimum-intel.md
--- title: "Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon" thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png authors: - user: danielkorat guest: true org: Intel - user: tomaarsen - user: orenpereg guest: true org: Intel - user: moshew guest: true org: Intel - user: echarlaix - user: aprabh2 guest: true org: Intel --- # Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon SetFit is a promising solution for a common modeling problem: how to deal with lack of labeled data for training. Developed with Hugging Face’s research partners at [Intel Labs](https://www.intel.com/content/www/us/en/research/overview.html) and the [UKP Lab](https://www.informatik.tu-darmstadt.de/ukp/ukp_home/index.en.jsp), SetFit is an efficient framework for few-shot fine-tuning of [Sentence Transformers](https://sbert.net/) models. SetFit achieves high accuracy with little labeled data - for example, SetFit [outperforms](https://arxiv.org/pdf/2311.06102.pdf) GPT-3.5 in 3-shot prompting and with 5 shot it also outperforms 3-shot GPT-4 on the Banking 77 financial intent dataset. Compared to LLM based methods, SetFit has two unique advantages: <p>🗣 <strong>No prompts or verbalisers</strong>: few-shot in-context learning with LLMs requires handcrafted prompts which make the results brittle, sensitive to phrasing and dependent on user expertise. SetFit dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.</p> <p>🏎 <strong>Fast to train</strong>: SetFit doesn't rely on LLMs such as GPT-3.5 or Llama2 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.</p> For more details on SetFit, check out our [paper](https://arxiv.org/abs/2209.11055), [blog](https://huggingface.co/blog/setfit), [code](https://github.com/huggingface/setfit), and [data](https://huggingface.co/SetFit). Setfit has been widely adopted by the AI developer community, with \~100k downloads per month and [\~1500](https://huggingface.co/models?library=setfit) SetFit models on the Hub, and growing with an average of ~4 models per day! ## Faster! In this blog post, we'll explain how you can accelerate inference with SetFit by **7.8x** on Intel CPUs, by optimizing your SetFit model with 🤗 [Optimum Intel](https://github.com/huggingface/optimum-intel). We’ll show how you can achieve huge throughput gains by performing a simple post-training quantization step on your model. This can enable production-grade deployment of SetFit solutions using Intel Xeon CPUs. [Optimum Intel](https://github.com/huggingface/optimum-intel) is an open-source library that accelerates end-to-end pipelines built with Hugging Face libraries on Intel Hardware. Optimum Intel includes several techniques to accelerate models such as low-bit quantization, model weight pruning, distillation, and an accelerated runtime. The runtime and optimizations included in [Optimum Intel](https://github.com/huggingface/optimum-intel) take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs to accelerate models. Specifically, it has built-in [BFloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) (bf16) and int8 GEMM accelerators in every core to accelerate deep learning training and inference workloads. 
AMX accelerated inference is introduced in PyTorch 2.0 and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX) in addition to other optimizations for various common operators.

Optimizing pre-trained models can be done easily with Optimum Intel; many simple examples can be found [here](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc). Our blog is accompanied by a [notebook](https://github.com/huggingface/setfit/blob/main/notebooks/setfit-optimum-intel.ipynb) for a step-by-step walkthrough.

## Step 1: Quantize the SetFit Model using 🤗 Optimum Intel

In order to optimize our SetFit model, we will apply quantization to the model body, using [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) (INC), part of Optimum Intel.

**Quantization** is a very popular deep learning model optimization technique for improving inference speeds. It minimizes the number of bits required to represent the weights and/or activations in a neural network. This is done by converting a set of high-precision numbers into a lower-bit data representation, such as INT8. Moreover, quantization can enable faster computations in lower precision.

Specifically, we'll apply post-training static quantization (PTQ). PTQ can reduce the memory footprint and latency for inference, while still preserving the accuracy of the model, with only a small unlabeled calibration set and without any training.

Before you begin, make sure you have all the necessary libraries installed and that your version of Optimum Intel is at least `1.14.0` since the functionality was introduced in that version:

```bash
pip install --upgrade-strategy eager optimum[ipex]
```

### Prepare a Calibration Dataset

The calibration dataset should be able to represent the distribution of unseen data. In general, preparing 100 samples is enough for calibration. We'll use the `rotten_tomatoes` dataset in our case, since it’s composed of movie reviews, similar to our target dataset, `sst2`.

First, we’ll load 100 random samples from this dataset. Then, to prepare the dataset for quantization, we'll need to tokenize each example. We won’t need the “text” and “label” columns, so let’s remove them.

```python
from datasets import load_dataset

# `setfit_model` is the SetFit model we want to optimize
# (loaded with `SetFitModel.from_pretrained`, as shown in Step 2 below).
calibration_set = load_dataset("rotten_tomatoes", split="train").shuffle(seed=42).select(range(100))

def tokenize(examples):
    return tokenizer(examples["text"], padding="max_length", max_length=512, truncation=True)

tokenizer = setfit_model.model_body.tokenizer
calibration_set = calibration_set.map(tokenize, remove_columns=["text", "label"])
```

### Run Quantization

Before we run quantization, we need to define the desired quantization process - in our case - **Static Post Training Quantization**, and use `optimum.intel` to run the quantization on our calibration dataset:

```python
from optimum.intel import INCQuantizer
from neural_compressor.config import PostTrainingQuantConfig

setfit_body = setfit_model.model_body[0].auto_model
quantizer = INCQuantizer.from_pretrained(setfit_body)
optimum_model_path = "/tmp/bge-small-en-v1.5_setfit-sst2-english_opt"
quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", domain="nlp")

quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_set,
    save_directory=optimum_model_path,
    batch_size=1,
)
tokenizer.save_pretrained(optimum_model_path)
```

That’s it! We now have a local copy of our quantized SetFit model. Let’s test it out.
## Step 2: Benchmark Inference

In our [notebook](https://github.com/huggingface/setfit/blob/main/notebooks/setfit-optimum-intel.ipynb), we’ve set up a `PerformanceBenchmark` class to compute model latency and throughput, as well as an accuracy measure. Let’s use it to benchmark our Optimum Intel model with two other commonly used methods:

- Using PyTorch and the 🤗 Transformers library with fp32.
- Using [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX) runtime with bf16 and tracing the model using TorchScript.

Load our test dataset, `sst2`, and run the benchmark using PyTorch and the 🤗 Transformers library:

```python
from datasets import load_dataset
from setfit import SetFitModel

test_dataset = load_dataset("SetFit/sst2")["validation"]

model_path = "dkorat/bge-small-en-v1.5_setfit-sst2-english"
setfit_model = SetFitModel.from_pretrained(model_path)
pb = PerformanceBenchmark(
    model=setfit_model,
    dataset=test_dataset,
    optim_type="bge-small (transformers)",
)
perf_metrics = pb.run_benchmark()
```

For the second benchmark, we'll use [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX) with bf16 precision and TorchScript tracing. To use IPEX we simply import the IPEX library and apply `ipex.optimize()` to the target model, which, in our case, is the SetFit (transformer) model body:

```python
import torch
import intel_extension_for_pytorch as ipex

dtype = torch.bfloat16
body = ipex.optimize(setfit_model.model_body, dtype=dtype)
```

For TorchScript tracing, we generate a random sequence based on the model's maximum input length, with tokens sampled from the tokenizer's vocabulary (`generate_random_sequences` is a small helper defined in the accompanying notebook):

```python
tokenizer = setfit_model.model_body.tokenizer
d = generate_random_sequences(batch_size=1, length=tokenizer.model_max_length, vocab_size=tokenizer.vocab_size)

body = torch.jit.trace(body, (d,), check_trace=False, strict=False)
setfit_model.model_body = torch.jit.freeze(body)
```

Now let's run the benchmark using our quantized Optimum model. We’ll first need to define a wrapper around our SetFit model which plugs in our quantized model body at inference (instead of the original model body). Then, we can run the benchmark using this wrapper.

```python
from optimum.intel import IPEXModel

class OptimumSetFitModel:
    def __init__(self, setfit_model, model_body):
        model_body.tokenizer = setfit_model.model_body.tokenizer
        self.model_body = model_body
        self.model_head = setfit_model.model_head


optimum_model = IPEXModel.from_pretrained(optimum_model_path)
optimum_setfit_model = OptimumSetFitModel(setfit_model, model_body=optimum_model)

pb = PerformanceBenchmark(
    model=optimum_setfit_model,
    dataset=test_dataset,
    optim_type="bge-small (optimum-int8)",
    model_path=optimum_model_path,
    autocast_dtype=torch.bfloat16,
)
perf_metrics.update(pb.run_benchmark())
```

## Results

<p align="center">
    <img src="assets/178_setfit_optimum_intel/latency.png" width=500>
</p>
<p align="center">
    <em>Accuracy vs latency at batch size=1</em>
</p>

|                      | bge-small (transformers) | bge-small (ipex-bfloat16) | bge-small (optimum-int8) |
|----------------------|--------------------------|---------------------------|--------------------------|
| Model Size           | 127.32 MB                | 63.74 MB                  | 44.65 MB                 |
| Accuracy on test set | 88.4%                    | 88.4%                     | 88.1%                    |
| Latency (bs=1)       | 15.69 +/- 0.57 ms        | 5.67 +/- 0.66 ms          | 4.55 +/- 0.25 ms         |

When inspecting the performance at batch size 1, there’s a **3.45x reduction in latency** with our optimized model. Note that this is achieved with virtually no drop in accuracy!
It's also worth mentioning that the model size has shrunk by **2.85x**.

<p align="center">
    <img src="assets/178_setfit_optimum_intel/throughput.png" width=500>
</p>

We move on to our main focus, which is the reported throughputs with different batch sizes. Here, the optimization has garnered even greater speedups. When comparing the highest achievable throughput (at any batch size), the optimized model is **7.8x faster than the original transformers fp32 model!**

## Summary

In this blog post, we have shown how to use the quantization capabilities present in 🤗 Optimum Intel to optimize SetFit models. After running a quick and easy post-training quantization procedure, we've observed that the accuracy level was preserved, while inference throughput increased by **7.8x**. This optimization method can be readily applied to any existing SetFit deployment running on Intel Xeon.

## References

* Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg, 2022. "Efficient Few-Shot Learning Without Prompts". https://arxiv.org/abs/2209.11055
1
0
hf_public_repos
hf_public_repos/blog/reformer.md
--- title: "The Reformer - Pushing the limits of language modeling" thumbnail: /blog/assets/03_reformer/thumbnail.png authors: - user: patrickvonplaten --- # The Reformer - Pushing the limits of language modeling <a href="https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## How the Reformer uses less than 8GB of RAM to train on sequences of half a million tokens The Reformer model as introduced by [Kitaev, Kaiser et al. (2020)](https://arxiv.org/pdf/2001.04451.pdf) is one of the most memory-efficient transformer models for long sequence modeling as of today. Recently, long sequence modeling has experienced a surge of interest as can be seen by the many submissions from this year alone - [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150), [Roy et al. (2020)](https://arxiv.org/abs/2003.05997), [Tay et al.](https://arxiv.org/abs/2002.11296), [Wang et al.](https://arxiv.org/abs/2006.04768) to name a few. The motivation behind long sequence modeling is that many tasks in NLP, *e.g.* summarization, question answering, require the model to process longer input sequences than models, such as BERT, are able to handle. In tasks that require the model to process a large input sequence, long sequence models do not have to cut the input sequence to avoid memory overflow and thus have been shown to outperform standard "BERT"-like models *cf.* [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150). The Reformer pushes the limit of longe sequence modeling by its ability to process up to half a million tokens at once as shown in this [demo](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb). As a comparison, a conventional `bert-base-uncased` model limits the input length to only 512 tokens. In Reformer, each part of the standard transformer architecture is re-engineered to optimize for minimal memory requirement without a significant drop in performance. The memory improvements can be attributed to **4** features which the Reformer authors introduced to the transformer world: 1. **Reformer Self-Attention Layer** - *How to efficiently implement self-attention without being restricted to a local context?* 2. **Chunked Feed Forward Layers** - *How to get a better time-memory trade-off for large feed forward layers?* 3. **Reversible Residual Layers** - *How to drastically reduce memory consumption in training by a smart residual architecture?* 4. **Axial Positional Encodings** - *How to make positional encodings usable for extremely large input sequences?* The goal of this blog post is to give the reader an **in-depth** understanding of each of the four Reformer features mentioned above. While the explanations are focussed on the Reformer, the reader should get a better intuition under which circumstances each of the four features can be effective for other transformer models as well. The four sections are only loosely connected, so they can very well be read individually. Reformer is part of the 🤗Transformers library. For all users of the Reformer, it is advised to go through this very detailed blog post to better understand how the model works and how to correctly set its configuration. All equations are accompanied by their equivalent name for the Reformer config, *e.g.* `config.<param_name>`, so that the reader can quickly relate to the official docs and configuration file. 
**Note**: *Axial Positional Encodings* are not explained in the official Reformer paper, but are extensively used in the official codebase. This blog post gives the first in-depth explanation of Axial Positional Encodings.

## 1. Reformer Self-Attention Layer

Reformer uses two kinds of special self-attention layers: *local* self-attention layers and Locality Sensitive Hashing (*LSH*) self-attention layers.

To better introduce these new self-attention layers, we will briefly recap conventional self-attention as introduced in [Vaswani et al. 2017](https://arxiv.org/abs/1706.03762). This blog post uses the same notation and coloring as the popular blog post [The illustrated transformer](http://jalammar.github.io/illustrated-transformer/), so the reader is strongly advised to read this blog first.

**Important**: While Reformer was originally introduced for causal self-attention, it can very well be used for bi-directional self-attention as well. In this post, Reformer's self-attention is presented for *bidirectional* self-attention.

### Recap Global Self-Attention

The core of every Transformer model is the **self-attention** layer. To recap the conventional self-attention layer, which we refer to here as the **global self-attention** layer, let us assume we apply a transformer layer on the embedding vector sequence \\(\mathbf{X} = \mathbf{x}_1, \ldots, \mathbf{x}_n\\) where each vector \\(\mathbf{x}_{i}\\) is of size `config.hidden_size`, *i.e.* \\(d_h\\).

In short, a global self-attention layer projects \\(\mathbf{X}\\) to the query, key and value matrices \\(\mathbf{Q}, \mathbf{K}, \mathbf{V}\\) and computes the output \\(\mathbf{Z}\\) using the *softmax* operation as follows: \\(\mathbf{Z} = \text{SelfAttn}(\mathbf{X}) = \text{softmax}(\mathbf{Q}\mathbf{K}^T) \mathbf{V}\\) with \\(\mathbf{Z}\\) being of dimension \\(d_h \times n\\) (leaving out the key normalization factor and self-attention weights \\(\mathbf{W}^{O}\\) for simplicity). For more detail on the complete transformer operation, see [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/).

Visually, we can illustrate this operation as follows for \\(n=16, d_h=3\\):

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/conventional_attention.png)

Note that for all visualizations `batch_size` and `config.num_attention_heads` are assumed to be 1. Some vectors, *e.g.* \\(\mathbf{x_3}\\) and its corresponding output vector \\(\mathbf{z_3}\\) are marked so that *LSH self-attention* can later be better explained. The presented logic can effortlessly be extended for multi-head self-attention (`config.num_attention_heads` > 1). The reader is advised to read [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/) as a reference for multi-head self-attention.

Important to remember is that for each output vector \\(\mathbf{z}_{i}\\), the whole input sequence \\(\mathbf{X}\\) is processed. The tensor of the inner dot-product \\(\mathbf{Q}\mathbf{K}^T\\) has an asymptotic memory complexity of \\(\mathcal{O}(n^2)\\) which usually represents the memory bottleneck in a transformer model. This is also the reason why `bert-base-cased` has a `config.max_position_embeddings` of only 512.

### Local Self-Attention

**Local self-attention** is the obvious solution to reducing the \\(\mathcal{O}(n^2)\\) memory bottleneck, allowing us to model longer sequences with a reduced computational cost.
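To make the \\(\mathcal{O}(n^2)\\) bottleneck of the global self-attention recapped above tangible before we start chunking, here is a minimal sketch for the toy example \\(n=16, d_h=3\\). The tensors and projection weights are random and purely illustrative (this is not the 🤗Transformers implementation), and the normalization factor is omitted as in the formula above:

```python
import torch

torch.manual_seed(0)

n, d_h = 16, 3                            # toy sequence length and hidden size
x = torch.randn(n, d_h)                   # input vectors x_1, ..., x_16

w_q, w_k, w_v = torch.randn(3, d_h, d_h)  # toy projection weights (learned in a real model)
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Global self-attention: every position attends to every position,
# so the attention matrix has shape (n, n) -> O(n^2) memory.
attn_weights = torch.softmax(q @ k.T, dim=-1)   # (16, 16)
z = attn_weights @ v                            # (16, 3)
print(attn_weights.shape, z.shape)
```

The \\(16 \times 16\\) attention matrix is exactly the tensor that grows quadratically with the sequence length and that local and LSH self-attention avoid materializing in full.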
In local self-attention the input \\( \mathbf{X} = \mathbf{X}_{1:n} = \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} \\) is cut into \\(n_{c}\\) chunks: \\( \mathbf{X} = \left[\mathbf{X}_{1:l_{c}}, \ldots, \mathbf{X}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}\right] \\) each of length `config.local_chunk_length`, *i.e.* \\(l_{c}\\), and subsequently global self-attention is applied on each chunk separately. Let's take our input sequence for \\(n=16, d_h=3\\) again for visualization: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/input.png) Assuming \\(l_{c} = 4, n_{c} = 4\\), chunked attention can be illustrated as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_attention_1.png) As can be seen, the attention operation is applied for each chunk \\(\mathbf{X}_{1:4}, \mathbf{X}_{5:8}, \mathbf{X}_{9:12}, \mathbf{X}_{13:16}\\) individually. The first drawback of this architecture becomes obvious: Some input vectors have no access to their immediate context, *e.g.* \\(\mathbf{x}_9\\) has no access to \\(\mathbf{x}_{8}\\) and vice-versa in our example. This is problematic because these tokens are not able to learn word representations that take their immediate context into account. A simple remedy is to augment each chunk with `config.local_num_chunks_before`, *i.e.* \\(n_{p}\\), chunks and `config.local_num_chunks_after`, *i.e.* \\(n_{a}\\), so that every input vector has at least access to \\(n_{p}\\) previous input vectors and \\(n_{a}\\) following input vectors. This can also be understood as chunking with overlap whereas \\(n_{p}\\) and \\(n_{a}\\) define the amount of overlap each chunk has with all previous chunks and following chunks. We denote this extended local self-attention as follows: $$\mathbf{Z}^{\text{loc}} = \left[\mathbf{Z}_{1:l_{c}}^{\text{loc}}, \ldots, \mathbf{Z}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}^{\text{loc}}\right], $$ with $$\mathbf{Z}_{l_{c} * (i - 1) + 1 : l_{c} * i}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{l_{c} * (i - 1 - n_{p}) + 1: l_{c} * (i + n_{a})})\left[n_{p} * l_{c}: -n_{a} * l_{c}\right], \forall i \in \{1, \ldots, n_{c} \}$$ Okay, this formula looks quite complicated. Let's make it easier. In Reformer's self-attention layers \\(n_{a}\\) is usually set to 0 and \\(n_{p}\\) is set to 1, so let's write down the formula again for \\(i = 1\\): $$\mathbf{Z}_{1:l_{c}}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{-l_{c} + 1: l_{c}})\left[l_{c}:\right]$$ We notice that we have a circular relationship so that the first segment can attend the last segment as well. Let's illustrate this slightly enhanced local attention again. First, we apply self-attention within each windowed segment and keep only the central output segment. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_2.png) Finally, the relevant output is concatenated to \\(\mathbf{Z}^{\text{loc}}\\) and looks as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_3.png) Note that local self-attention is implemented efficiently way so that no output is computed and subsequently "thrown-out" as shown here for illustration purposes by the red cross. 
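To make the chunking concrete, here is a minimal sketch of this extended local self-attention with the Reformer defaults \\(n_{p}=1, n_{a}=0\\). Again, the tensors are random toy data and the explicit loop is purely illustrative, not the efficient batched implementation used in 🤗Transformers:

```python
import torch

torch.manual_seed(0)

n, d_h, l_c = 16, 3, 4           # sequence length, hidden size, chunk length
n_p, n_a = 1, 0                  # overlap: previous / following chunks (Reformer defaults)
n_c = n // l_c                   # number of chunks
q = k = v = torch.randn(n, d_h)  # toy projections, shared here for brevity

outputs = []
for i in range(n_c):
    # positions of the current chunk plus n_p previous and n_a following chunks,
    # wrapping around so the first chunk can also attend to the last one (circular)
    idx = [j % n for j in range((i - n_p) * l_c, (i + 1 + n_a) * l_c)]
    q_idx = idx[n_p * l_c:(n_p + 1) * l_c]               # positions of the chunk itself
    attn = torch.softmax(q[q_idx] @ k[idx].T, dim=-1)    # (l_c, (n_p + 1 + n_a) * l_c)
    outputs.append(attn @ v[idx])                        # keep only the chunk's own outputs
z_loc = torch.cat(outputs)                               # (16, 3)
print(z_loc.shape)
```

Each chunk of queries attends to its own positions plus the \\(n_{p}\\) previous chunks (wrapping around circularly), which is exactly the overlap illustrated above.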
It's important to note here that extending the input vectors for each chunked self-attention function allows *each* single output vector \\( \mathbf{z}_{i} \\) of this self-attention function to learn better vector representations. E.g. each of the output vectors \\( \mathbf{z}_{5}^{\text{loc}}, \mathbf{z}_{6}^{\text{loc}}, \mathbf{z}_{7}^{\text{loc}}, \mathbf{z}_{8}^{\text{loc}} \\) can take into account all of the input vectors \\( \mathbf{X}_{1:8} \\) to learn better representations. The gain in memory consumption is quite obvious: The \\( \mathcal{O}(n^2) \\) memory complexity is broken down for each segment individually so that the total asymptotic memory consumption is reduced to \\( \mathcal{O}(n_{c} * l_{c}^2) = \mathcal{O}(n * l_{c}) \\). This enhanced local self-attention is better than the vanilla local self-attention architecture but still has a major drawback in that every input vector can only attend to a local context of predefined size. For NLP tasks that do not require the transformer model to learn long-range dependencies between the input vectors, which include arguably *e.g.* speech recognition, named entity recognition and causal language modeling of short sentences, this might not be a big issue. Many NLP tasks do require the model to learn long-range dependencies, so that local self-attention could lead to significant performance degradation, *e.g.* * *Question-answering*: the model has to learn the relationship between the question tokens and relevant answer tokens which will most likely not be in the same local range * *Multiple-Choice*: the model has to compare multiple answer token segments to each other which are usually separated by a significant length * *Summarization*: the model has to learn the relationship between a long sequence of context tokens and a shorter sequence of summary tokens, whereas the relevant relationships between context and summary can most likely not be captured by local self-attention * etc... Local self-attention on its own is most likely not sufficient for the transformer model to learn the relevant relationships of input vectors (tokens) to each other. Therefore, Reformer additionally employs an efficient self-attention layer that approximates global self-attention, called *LSH self-attention*. ### LSH Self-Attention Alright, now that we have understood how local self-attention works, we can take a stab at the probably most innovative piece of Reformer: **Locality sensitive hashing (LSH) Self-Attention**. The premise of LSH self-attention is to be more or less as efficient as local self-attention while approximating global self-attention. LSH self-attention relies on the LSH algorithm as presented in [Andoni et al (2015)](https://arxiv.org/abs/1509.02897), hence its name. The idea behind LSH self-attention is based on the insight that if \\(n\\) is large, the softmax applied on the \\(\mathbf{Q}\mathbf{K}^T\\) attention dot-product weights only very few value vectors with values significantly larger than 0 for each query vector. Let's explain this in more detail. Let \\(\mathbf{k}_{i} \in \mathbf{K} = \left[\mathbf{k}_1, \ldots, \mathbf{k}_n \right]^T\\) and \\(\mathbf{q}_{i} \in \mathbf{Q} = \left[\mathbf{q}_1, \ldots, \mathbf{q}_n\right]^T\\) be the key and query vectors. For each \\(\mathbf{q}_{i}\\), the computation \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{K}^T)\\) can be approximated by using only those key vectors of \\(\mathbf{k}_{j}\\) that have a high cosine similarity with \\(\mathbf{q}_{i}\\). 
This owes to the fact that the softmax function puts exponentially more weight on larger input values. So far so good, the next problem is to efficiently find the vectors that have a high cosine similarity with \\(\mathbf{q}_{i}\\) for all \\(i\\). First, the authors of Reformer notice that sharing the query and key projections: \\(\mathbf{Q} = \mathbf{K}\\) does not impact the performance of a transformer model \\({}^1\\). Now, instead of having to find the key vectors of high cosine similarity for each query vector \\(q_i\\), only the cosine similarity of query vectors to each other has to be found. This is important because there is a transitive property to the query-query vector dot product approximation: If \\(\mathbf{q}_{i}\\) has a high cosine similarity to the query vectors \\(\mathbf{q}_{j}\\) and \\(\mathbf{q}_{k}\\), then \\(\mathbf{q}_{j}\\) also has a high cosine similarity to \\(\mathbf{q}_{k}\\). Therefore, the query vectors can be clustered into buckets, such that all query vectors that belong to the same bucket have a high cosine similarity to each other. Let's define \\(C_{m}\\) as the *mth* set of position indices, such that their corresponding query vectors are in the same bucket: \\(C_{m} = \{ i | \text{ s.t. } \mathbf{q}_{i} \in \text{mth cluster}\}\\) and `config.num_buckets`, *i.e.* \\(n_{b}\\), as the number of buckets. For each set of indices \\(C_{m}\\), the softmax function on the corresponding bucket of query vectors \\(\text{softmax}(\mathbf{Q}_{i \in C_{m}} \mathbf{Q}^T_{i \in C_{m}})\\) approximates the softmax function of global self-attention with shared query and key projections \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{Q}^T)\\) for all position indices \\(i\\) in \\(C_{m}\\). Second, the authors make use of the **LSH** algorithm to cluster the query vectors into a predefined number of buckets \\(n_{b}\\). The LSH algorithm is an ideal choice here because it is very efficient and is an approximation of the nearest neighbor algorithm for cosine similarity. Explaining the LSH scheme is out-of-scope for this notebook, so let's just keep in mind that for each vector \\(\mathbf{q}_{i}\\) the LSH algorithm attributes its position index \\(i\\) to one of \\(n_{b}\\) predefined buckets, *i.e.* \\(\text{LSH}(\mathbf{q}_{i}) = m\\) with \\(i \in \{1, \ldots, n\}\\) and \\(m \in \{1, \ldots, n_{b}\}\\). Visually, we can illustrate this as follows for our original example: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_hashing.png) Third, it can be noted that having clustered all query vectors in \\(n_{b}\\) buckets, the corresponding set of indices \\(C_{m}\\) can be used to permute the input vectors \\(\mathbf{x}_1, \ldots, \mathbf{x}_n\\) accordingly \\({}^2\\) so that shared query-key self-attention can be applied piecewise similar to local attention. Let's clarify with our example input vectors \\(\mathbf{X} = \mathbf{x}_1, ..., \mathbf{x}_{16}\\) and assume `config.num_buckets=4` and `config.lsh_chunk_length = 4`. Looking at the graphic above we can see that we have assigned each query vector \\( \mathbf{q}_1, \ldots, \mathbf{q}_{16} \\) to one of the clusters \\( \mathcal{C}_{1}, \mathcal{C}_{2}, \mathcal{C}_{3}, \mathcal{C}_{4} \\) . 
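As a rough sketch of how one such hashing round could be computed — angular LSH with a single random rotation, following [Andoni et al (2015)](https://arxiv.org/abs/1509.02897); toy tensors only, not the 🤗Transformers implementation:

```python
import torch

torch.manual_seed(0)

n, d_h, n_buckets = 16, 3, 4
q = torch.randn(n, d_h)                     # shared query/key vectors

# One hashing round: project onto a random rotation and take the argmax over
# [R(q), -R(q)] to obtain a bucket id per query vector.
rotation = torch.randn(d_h, n_buckets // 2)
rotated = q @ rotation                      # (16, 2)
buckets = torch.argmax(torch.cat([rotated, -rotated], dim=-1), dim=-1)  # (16,)

# Sorting by bucket id gives the permutation applied to the input vectors below.
permutation = torch.argsort(buckets)
print(buckets.tolist())
print(permutation.tolist())
```

Sorting by bucket id yields the permutation that is applied to the input vectors next.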
If we now sort the corresponding input vectors \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) accordingly, we get the following permuted input \\( \mathbf{X'} \\): ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_perm.png) The self-attention mechanism should be applied for each cluster individually so that for each cluster \\( \mathcal{C}_m \\) the corresponding output is calculated as follows: \\( \mathbf{Z}^{\text{LSH}}_{i \in \mathcal{C}_m} = \text{SelfAttn}_{\mathbf{Q}=\mathbf{K}}(\mathbf{X}_{i \in \mathcal{C}_m}) \\). Let's illustrate this again for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_cluster_attn.png) As can be seen, the self-attention function operates on different sizes of matrices, which is suboptimal for efficient batching in GPU and TPU. To overcome this problem, the permuted input can be chunked the same way it is done for local attention so that each chunk is of size `config.lsh_chunk_length`. By chunking the permuted input, a bucket might be split into two different chunks. To remedy this problem, in LSH self-attention each chunk attends to its previous chunk `config.lsh_num_chunks_before=1` in addition to itself, the same way local self-attention does (`config.lsh_num_chunks_after` is usually set to 0). This way, we can be assured that all vectors in a bucket attend to each other with a high probability \\({}^3\\). All in all for all chunks \\( k \in \{1, \ldots, n_{c}\} \\), LSH self-attention can be noted down as follows: $$ \mathbf{Z'}_{l_{c} * k + 1:l_{c} * (k + 1)}^{\text{LSH}} = \text{SelfAttn}_{\mathbf{Q} = \mathbf{K}}(\mathbf{X'}_{l_{c} * k + 1): l_{c} * (k + 1)})\left[l_{c}:\right] $$ with \\(\mathbf{X'}\\) and \\( \mathbf{Z'} \\) being the input and output vectors permuted according to the LSH algorithm. Enough complicated formulas, let's illustrate LSH self-attention. The permuted vectors \\(\mathbf{X'}\\) as shown above are chunked and shared query key self-attention is applied to each chunk. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_2.png) Finally, the output \\(\mathbf{Z'}^{\text{LSH}}\\) is reordered to its original permutation. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_3.png) One important feature to mention here as well is that the accuracy of LSH self-attention can be improved by running LSH self-attention `config.num_hashes`, e.g. \\(n_{h} \\) times in parallel, each with a different random LSH hash. By setting `config.num_hashes > 1`, for each output position \\( i \\), multiple output vectors \\( \mathbf{z}^{\text{LSH}, 1}_{i}, \ldots, \mathbf{z}^{\text{LSH}, n_{h}}_{i} \\) are computed and subsequently merged: \\( \mathbf{z}^{\text{LSH}}_{i} = \sum_k^{n_{h}} \mathbf{Z}^{\text{LSH}, k}_{i} * \text{weight}^k_i \\). The \\( \text{weight}^k_i \\) represents the importance of the output vectors \\( \mathbf{z}^{\text{LSH}, k}_{i} \\) of hashing round \\( k \\) in comparison to the other hashing rounds, and is exponentially proportional to the normalization term of their softmax computation. 
The intuition behind this is that if the corresponding query vector \\( \mathbf{q}_{i}^{k} \\) has a high cosine similarity with all other query vectors in its respective chunk, then the softmax normalization term of this chunk tends to be high, so that the corresponding output vectors \\( \mathbf{q}_{i}^{k} \\) should be a better approximation to global attention and thus receive more weight than output vectors of hashing rounds with a lower softmax normalization term. For more detail see Appendix A of the [paper](https://arxiv.org/pdf/2001.04451.pdf). For our example, multi-round LSH self-attention can be illustrated as follows.

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_4.png)

Great. That's it. Now we know how LSH self-attention works in Reformer.

Regarding the memory complexity, we now have two terms that compete with each other to be the memory bottleneck: the dot-product: \\( \mathcal{O}(n_{h} * n_{c} * l_{c}^2) = \mathcal{O}(n * n_{h} * l_{c}) \\) and the required memory for LSH bucketing: \\( \mathcal{O}(n * n_{h} * \frac{n_{b}}{2}) \\) with \\( l_{c} \\) being the chunk length. Because for large \\( n \\), the number of buckets \\( \frac{n_{b}}{2} \\) grows much faster than the chunk length \\( l_{c} \\), the user can again factorize the number of buckets `config.num_buckets` as explained [here](https://huggingface.co/transformers/model_doc/reformer.html#lsh-self-attention).

Let's recap quickly what we have gone through above:

1. We want to approximate global attention using the knowledge that the softmax operation only puts significant weights on very few key vectors.
2. If key vectors are equal to query vectors this means that *for each* query vector \\( \mathbf{q}_{i} \\), the softmax only puts significant weight on other query vectors that are similar in terms of cosine similarity.
3. This relationship works in both ways, meaning that if \\( \mathbf{q}_{j} \\) is similar to \\( \mathbf{q}_{i} \\), then \\( \mathbf{q}_{i} \\) is also similar to \\( \mathbf{q}_{j} \\), so that we can do a global clustering before applying self-attention on a permuted input.
4. We apply local self-attention on the permuted input and re-order the output to its original permutation.

---

\\( {}^{1} \\) The authors run some preliminary experiments confirming that shared query key self-attention performs more or less as well as standard self-attention.

\\( {}^{2} \\) To be more exact, the query vectors within a bucket are sorted according to their original order. This means that if, *e.g.*, the vectors \\( \mathbf{q}_1, \mathbf{q}_3, \mathbf{q}_7 \\) are all hashed to bucket 2, the order of the vectors in bucket 2 would still be \\( \mathbf{q}_1 \\), followed by \\( \mathbf{q}_3 \\) and \\( \mathbf{q}_7 \\).

\\( {}^3 \\) On a side note, it is worth mentioning that the authors put a mask on the query vector \\( \mathbf{q}_{i} \\) to prevent the vector from attending to itself. Because the cosine similarity of a vector to itself will always be as high or higher than the cosine similarity to other vectors, the query vectors in shared query key self-attention are strongly discouraged from attending to themselves.

### Benchmark

Benchmark tools were recently added to Transformers - see [here](https://github.com/huggingface/transformers/blob/master/notebooks/05-benchmark.ipynb) for a more detailed explanation.
To show how much memory can be saved using "local" + "LSH" self-attention, the Reformer model `google/reformer-enwik8` is benchmarked for different `local_attn_chunk_length` and `lsh_attn_chunk_length`. The default configuration and usage of the `google/reformer-enwik8` model can be checked in more detail [here](https://huggingface.co/google/reformer-enwik8). Let's first do some necessary imports and installs. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` First, let's benchmark the memory usage of the Reformer model using *global* self-attention. This can be achieved by setting `lsh_attn_chunk_length` = `local_attn_chunk_length` = 8192 so that for all input sequences smaller or equal to 8192, the model automatically switches to global self-attention. ``` config = ReformerConfig.from_pretrained("google/reformer-enwik8", lsh_attn_chunk_length=16386, local_attn_chunk_length=16386, lsh_num_chunks_before=0, local_num_chunks_before=0) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16386], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args) result = benchmark.run() ``` HBox(children=(FloatProgress(value=0.0, description='Downloading', max=1279.0, style=ProgressStyle(description… 1 / 1 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 8.87 GiB already allocated; 1.92 GiB free; 8.88 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer 1 2048 1465 Reformer 1 4096 2757 Reformer 1 8192 7893 Reformer 1 16386 N/A -------------------------------------------------------------------------------- The longer the input sequence, the more visible is the quadratic relationship \\( \mathcal{O}(n^2) \\) between input sequence and peak memory usage. As can be seen, in practice it would require a much longer input sequence to clearly observe that doubling the input sequence quadruples the peak memory usage. For this a `google/reformer-enwik8` model using global attention, a sequence length of over 16K results in a memory overflow. Now, let's activate *local* and *LSH* self-attention by using the model's default parameters. ``` config = ReformerConfig.from_pretrained("google/reformer-enwik8") benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16384, 32768, 65436], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args) result = benchmark.run() ``` 1 / 1 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.74 GiB free; 9.06 GiB reserved in total by PyTorch) Doesn't fit on GPU. CUDA out of memory. 
Tried to allocate 4.00 GiB (GPU 0; 11.17 GiB total capacity; 6.56 GiB already allocated; 3.99 GiB free; 6.81 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer 1 2048 1785 Reformer 1 4096 2621 Reformer 1 8192 4281 Reformer 1 16384 7607 Reformer 1 32768 N/A Reformer 1 65436 N/A -------------------------------------------------------------------------------- As expected using local and LSH self-attention is much more memory efficient for longer input sequences, so that the model runs out of memory only at 16K tokens for a 11GB RAM GPU in this notebook. ## 2. Chunked Feed Forward Layers Transformer-based models often employ very large feed forward layers after the self-attention layer in parallel. Thereby, this layer can take up a significant amount of the overall memory and sometimes even represent the memory bottleneck of a model. First introduced in the Reformer paper, feed forward chunking is a technique that allows to effectively trade better memory consumption for increased time consumption. ### Chunked Feed Forward Layer in Reformer In Reformer, the _LSH_- or _local_ self-attention layer is usually followed by a residual connection, which then defines the first part in a *transformer block*. For more detail on this please refer to this [blog](http://jalammar.github.io/illustrated-transformer/). The output of the first part of the *transformer block*, called *normed self-attention* output can be written as \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\), with \\( \mathbf{Z} \\) being either \\( \mathbf{Z}^{\text{LSH}} \\) or \\( \mathbf{Z}^\text{loc} \\) in Reformer. For our example input \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\), we illustrate the normed self-attention output as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/layer_normed_output.png) Now, the second part of a *transformer block* usually consists of two feed forward layers \\( ^{1} \\), defined as \\( \text{Linear}_{\text{int}}(\ldots) \\) that processes \\( \mathbf{\overline{Z}} \\), to an intermediate output \\( \mathbf{Y}_{\text{int}} \\) and \\( \text{Linear}_{\text{out}}(\ldots) \\) that processes the intermediate output to the output \\( \mathbf{Y}_{\text{out}} \\). The two feed forward layers can be defined by $$\mathbf{Y}_{\text{out}} = \text{Linear}_{\text{out}}(\mathbf{Y}_\text{int}) = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}})).$$ It is important to remember at this point that mathematically the output of a feed forward layer at position \\( \mathbf{y}_{\text{out}, i} \\) only depends on the input at this position \\( \mathbf{\overline{y}}_{i} \\). In contrast to the self-attention layer, every output \\( \mathbf{y}_{\text{out}, i} \\) is therefore completely independent of all inputs \\( \mathbf{\overline{y}}_{j \ne i} \\) of different positions. Let's illustrate the feed forward layers for \\( \mathbf{\overline{z}}_1, \ldots, \mathbf{\overline{z}}_{16} \\). ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward.png) As can be depicted from the illustration, all input vectors \\( \mathbf{\overline{z}}_{i} \\) are processed by the same feed forward layer in parallel. 
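This position-wise independence is easy to verify with a small sketch (hypothetical toy dimensions, not the Reformer defaults):

```python
import torch

torch.manual_seed(0)

n, d_h, d_f = 16, 3, 12                      # toy sizes; d_f is the intermediate dimension
z_bar = torch.randn(n, d_h)                  # normed self-attention output

linear_int = torch.nn.Linear(d_h, d_f)
linear_out = torch.nn.Linear(d_f, d_h)

# Processing the whole sequence at once ...
y_out = linear_out(linear_int(z_bar))        # (16, 3); intermediate tensor is (16, 12)

# ... matches processing a single position on its own, because each output
# position only depends on the input at that very position.
y_single = linear_out(linear_int(z_bar[8:9]))
print(torch.allclose(y_out[8:9], y_single))  # True
```

It is precisely this independence that makes the chunking described below possible.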
It becomes interesting when one takes a look at the output dimensions of the feed forward layers. In Reformer, the output dimension of \\( \text{Linear}_{\text{int}} \\) is defined as `config.feed_forward_size`, *e.g.* \\( d_{f} \\), and the output dimension of \\( \text{Linear}_{\text{out}} \\) is defined as `config.hidden_size`, *i.e.* \\( d_{h} \\). The Reformer authors observed that in a transformer model the intermediate dimension \\( d_{f} \\) usually tends to be much larger than the output dimension \\(^{2}\\) \\( d_{h} \\). This means that the tensor \\( \mathbf{\mathbf{Y}}_\text{int} \\) of dimension \\( d_{f} \times n \\) allocates a significant amount of the total memory and can even become the memory bottleneck. To get a better feeling for the differences in dimensions let's picture the matrices \\( \mathbf{Y}_\text{int} \\) and \\( \mathbf{Y}_\text{out} \\) for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward_matrix.png) It is becoming quite obvious that the tensor \\( \mathbf{Y}_\text{int} \\) holds much more memory ( \\( \frac{d_{f}}{d_{h}} \times n \\) as much to be exact) than \\( \mathbf{Y}_{\text{out}} \\). But, is it even necessary to compute the full intermediate matrix \\( \mathbf{Y}_\text{int} \\) ? Not really, because relevant is only the output matrix \\( \mathbf{Y}_\text{out} \\). To trade memory for speed, one can thus chunk the linear layers computation to only process one chunk at the time. Defining `config.chunk_size_feed_forward` as \\( c_{f} \\), chunked linear layers are defined as \\( \mathbf{Y}_{\text{out}} = \left[\mathbf{Y}_{\text{out}, 1: c_{f}}, \ldots, \mathbf{Y}_{\text{out}, (n - c_{f}): n}\right] \\) with \\( \mathbf{Y}_{\text{out}, (c_{f} * i): (i * c_{f} + i)} = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}}_{(c_{f} * i): (i * c_{f} + i)})) \\). In practice, it just means that the output is incrementally computed and concatenated to avoid having to store the whole intermediate tensor \\( \mathbf{Y}_{\text{int}} \\) in memory. Assuming \\( c_{f}=1 \\) for our example we can illustrate the incremental computation of the output for position \\( i=9 \\) as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_feed_forward.png) By processing the inputs in chunks of size 1, the only tensors that have to be stored in memory at the same time are \\( \mathbf{Y}_\text{out} \\) of a maximum size of \\( 16 \times d_{h} \\), \\( \mathbf{y}_{\text{int}, i} \\) of size \\( d_{f} \\) and the input \\( \mathbf{\overline{Z}} \\) of size \\( 16 \times d_{h} \\), with \\( d_{h} \\) being `config.hidden_size` \\(^{3}\\). Finally, it is important to remember that *chunked linear layers* yield a mathematically equivalent output to conventional linear layers and can therefore be applied to all transformer linear layers. Making use of `config.chunk_size_feed_forward` therefore allows a better trade-off between memory and speed in certain use cases. --- \\( {}^1 \\) For a simpler explanation, the layer norm layer which is normally applied to \\( \mathbf{\overline{Z}} \\) before being processed by the feed forward layers is omitted for now. \\( {}^2 \\) In `bert-base-uncased`, *e.g.* the intermediate dimension \\( d_{f} \\) is with 3072 four times larger than the output dimension \\( d_{h} \\). 
\\( {}^3 \\) As a reminder, the output `config.num_attention_heads` is assumed to be 1 for the sake of clarity and illustration in this notebook, so that the output of the self-attention layers can be assumed to be of size `config.hidden_size`. More information on chunked linear / feed forward layers can also be found [here](https://huggingface.co/transformers/glossary.html#feed-forward-chunking) on the 🤗Transformers docs. ### Benchmark Let's test how much memory can be saved by using chunked feed forward layers. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` Building wheel for transformers (setup.py) ... [?25l[?25hdone First, let's compare the default `google/reformer-enwik8` model without chunked feed forward layers to the one with chunked feed forward layers. ``` config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8") # no chunk config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1) # feed forward chunk benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args) result = benchmark.run() ``` 1 / 2 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.74 GiB free; 9.06 GiB reserved in total by PyTorch) 2 / 2 Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.24 GiB free; 9.56 GiB reserved in total by PyTorch) ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Chunk 8 1024 4281 Reformer-No-Chunk 8 2048 7607 Reformer-No-Chunk 8 4096 N/A Reformer-Chunk 8 1024 4309 Reformer-Chunk 8 2048 7669 Reformer-Chunk 8 4096 N/A -------------------------------------------------------------------------------- Interesting, chunked feed forward layers do not seem to help here at all. The reason is that `config.feed_forward_size` is not sufficiently large to make a real difference. Only at longer sequence lengths of 4096, a slight decrease in memory usage can be seen. Let's see what happens to the memory peak usage if we increase the size of the feed forward layer by a factor of 4 and reduce the number of attention heads also by a factor of 4 so that the feed forward layer becomes the memory bottleneck. 
```
config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=0, num_attention_heads=2, feed_forward_size=16384)  # no chunk
config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1, num_attention_heads=2, feed_forward_size=16384)  # feed forward chunk
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args)
result = benchmark.run()
```

    1 / 2
    2 / 2

    ==================== INFERENCE - MEMORY - RESULT ====================
    --------------------------------------------------------------------------------
    Model Name                 Batch Size     Seq Length     Memory in MB
    --------------------------------------------------------------------------------
    Reformer-No-Chunk          8              1024           3743
    Reformer-No-Chunk          8              2048           5539
    Reformer-No-Chunk          8              4096           9087
    Reformer-Chunk             8              1024           2973
    Reformer-Chunk             8              2048           3999
    Reformer-Chunk             8              4096           6011
    --------------------------------------------------------------------------------

Now a clear decrease in peak memory usage can be seen for longer input sequences. As a conclusion, it should be noted that chunked feed forward layers only make sense for models having few attention heads and large feed forward layers.

## 3. Reversible Residual Layers

Reversible residual layers were first introduced in [N. Gomez et al](https://arxiv.org/abs/1707.04585) and used to reduce memory consumption when training the popular *ResNet* model. Mathematically, reversible residual layers are slightly different to "real" residual layers but do not require the activations to be saved during the forward pass, which can drastically reduce memory consumption for training.

### Reversible Residual Layers in Reformer

Let's start by investigating why training a model requires much more memory than the inference of the model.

When running a model in inference, the required memory equals more or less the memory it takes to compute the **single** largest tensor in the model. On the other hand, when training a model, the required memory equals more or less the **sum** of all differentiable tensors.

This is not surprising when considering how auto differentiation works in deep learning frameworks. These lecture [slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf) by Roger Grosse of the University of Toronto are great to better understand auto differentiation.

In a nutshell, in order to calculate the gradient of a differentiable function (*e.g.* a layer), auto differentiation requires the gradient of the function's output and the function's input and output tensor. While the gradients are dynamically computed and subsequently discarded, the input and output tensors (*a.k.a* activations) of a function are stored during the forward pass.

Alright, let's apply this to a transformer model. A transformer model includes a stack of multiple so-called transformer layers. Each additional transformer layer forces the model to store more activations during the forward pass and thus increases the required memory for training. Let's take a more detailed look.

A transformer layer essentially consists of two residual layers. The first residual layer represents the *self-attention* mechanism as explained in section 1) and the second residual layer represents the *linear* or feed-forward layers as explained in section 2).
Using the same notation as before, the input of a transformer layer *i.e.* \\( \mathbf{X} \\) is first normalized \\( ^{1} \\) and subsequently processed by the self-attention layer to get the output \\( \mathbf{Z} = \text{SelfAttn}(\text{LayerNorm}(\mathbf{X})) \\). We will abbreviate these two layers with \\( G \\) so that \\( \mathbf{Z} = G(\mathbf{X}) \\). Next, the residual \\( \mathbf{Z} \\) is added to the input \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\) and the sum is fed into the second residual layer - the two linear layers. \\( \mathbf{\overline{Z}} \\) is processed by a second normalization layer, followed by the two linear layers to get \\( \mathbf{Y} = \text{Linear}(\text{LayerNorm}(\mathbf{Z} + \mathbf{X})) \\). We will abbreviate the second normalization layer and the two linear layers with \\( F \\) yielding \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\). Finally, the residual \\( \mathbf{Y} \\) is added to \\( \mathbf{\overline{Z}} \\) to give the output of the transformer layer \\( \mathbf{\overline{Y}} = \mathbf{Y} + \mathbf{\overline{Z}} \\). Let's illustrate a complete transformer layer using the example of \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\). ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/normal_trans_resnet.png) To calculate the gradient of *e.g.* the self-attention block \\( G \\), three tensors have to be known beforehand: the gradient \\( \partial \mathbf{Z} \\), the output \\( \mathbf{Z} \\), and the input \\( \mathbf{X} \\). While \\( \partial \mathbf{Z} \\) can be calculated on-the-fly and discarded afterward, the values for \\( \mathbf{Z} \\) and \\( \mathbf{X} \\) have to be calculated and stored during the forward pass since it is not possible to recalculate them easily on-the-fly during backpropagation. Therefore, during the forward pass, large tensor outputs, such as the query-key dot product matrix \\( \mathbf{Q}\mathbf{K}^T \\) or the intermediate output of the linear layers \\( \mathbf{Y}^{\text{int}} \\), have to be stored in memory \\( ^{2} \\). Here, reversible residual layers come to our help. The idea is relatively straight-forward. The residual block is designed in a way so that instead of having to store the input and output tensor of a function, both can easily be recalculated during the backward pass so that no tensor has to be stored in memory during the forward pass. This is achieved by using two input streams \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\), and two output streams \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\). The first residual \\( \mathbf{Z} \\) is computed by the first output stream \\( \mathbf{Z} = G(\mathbf{X}^{(1)}) \\) and subsequently added to the input of the second input stream, so that \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X}^{(2)} \\). Similarly, the residual \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\) is added to the first input stream again, so that the two output streams are defined by \\( \mathbf{Y}^{(1)} = \mathbf{Y} + \mathbf{X}^{(1)} \\) and \\( \mathbf{Y}^{(2)} = \mathbf{X}^{(2)} + \mathbf{Z} = \mathbf{\overline{Z}} \\). The reversible transformer layer can be visualized for \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) as follows. 
![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/rev_trans_resnet.png) As can be seen, the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) are calculated in a very similar way than \\( \mathbf{\overline{Y}} \\) of the non-reversible layer, but they are mathematically different. The authors of Reformer observe in some initial experiments that the performance of a reversible transformer model matches the performance of a standard transformer model. The first visible difference to the standard transformer layer is that there are two input streams and output streams \\( ^{3} \\), which at first slightly increases the required memory for both the forward pass. The two-stream architecture is crucial though for not having to save any activations during the forward pass. Let's explain. For backpropagation, the reversible transformer layer has to calculate the gradients \\( \partial G \\) and \\( \partial F \\). In addition to the gradients \\( \partial \mathbf{Y} \\) and \\( \partial \mathbf{Z} \\) which can be calculated on-the-fly, the tensor values \\( \mathbf{Y} \\), \\( \mathbf{\overline{Z}} \\) have to be known for \\( \partial F \\) and the tensor values \\( \mathbf{Z} \\) and \\( \mathbf{X}^{(1)} \\) for \\( \partial G \\) to make auto-differentiation work. If we assume to know \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\), it can easily be depicted from the graph that one can calculate \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\) as follows. \\( \mathbf{X}^{(1)} = F(\mathbf{\overline{Y}}^{(1)}) - \mathbf{\overline{Y}}^{(1)} \\). Great, now that \\( \mathbf{X}^{(1)} \\) is known, \\( \mathbf{X}^{(2)} \\) can be computed by \\( \mathbf{X}^{(2)} = \mathbf{\overline{Y}}^{(1)} - G(\mathbf{X}^{(1)}) \\). Alright now, \\( \mathbf{Z} \\) and \\( \mathbf{Y} \\) are trivial to compute via \\( \mathbf{Y} = \mathbf{\overline{Y}}^{(1)} - \mathbf{X}^{(1)} \\) and \\( \mathbf{Z} = \mathbf{\overline{Y}}^{(2)} - \mathbf{X}^{(2)} \\). So as a conclusion, if only the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) of the **last** reversible transformer layer are stored during the forward pass, all other relevant activations can be derived by making use of \\( G \\) and \\( F \\) during the backward pass and passing \\( \mathbf{X}^{(1)} \\) and \\( \mathbf{X}^{(2)} \\). The overhead of two forward passes of \\( G \\) and \\( F \\) per reversible transformer layer during the backpropagation is traded against not having to store any activations during the forward pass. Not a bad deal! **Note**: Since recently, major deep learning frameworks have released code that allows to store only certain activations and recompute larger ones during the backward propagation (Tensoflow [here](https://www.tensorflow.org/api_docs/python/tf/recompute_grad) and PyTorch [here](https://pytorch.org/docs/stable/checkpoint.html)). For standard reversible layers, this still means that at least one activation has to be stored for each transformer layer, but by defining which activations can dynamically be recomputed a lot of memory can be saved. --- \\( ^{1} \\) In the previous two sections, we have omitted the layer norm layers preceding both the self-attention layer and the linear layers. The reader should know that both \\( \mathbf{X} \\) and \\( \mathbf{\overline{Z}} \\) are both processed by layer normalization before being fed into self-attention and the linear layers respectively. 
\\( ^{2} \\) While in the design the dimension of \\( \mathbf{Q}\mathbf{K} \\) is written as \\( n \times n \\), in a *LSH self-attention* or *local self-attention* layer the dimension would only be \\( n \times l_{c} \times n_{h} \\) or \\( n \times l_{c} \\) respectively with \\( l_{c} \\) being the chunk length and \\( n_{h} \\) the number of hashes \\( ^{3} \\) In the first reversible transformer layer \\( \mathbf{X}^{(2)} \\) is set to be equal to \\( \mathbf{X}^{(1)} \\). ### Benchmark In order to measure the effect of reversible residual layers, we will compare the memory consumption of BERT with Reformer in training for an increasing number of layers. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, BertConfig, PyTorchBenchmark, PyTorchBenchmarkArguments ``` Let's measure the required memory for the standard `bert-base-uncased` BERT model by increasing the number of layers from 4 to 12. ``` config_4_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=4) config_8_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=8) config_12_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=12) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Bert-4-Layers", "Bert-8-Layers", "Bert-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_4_layers_bert, config_8_layers_bert, config_12_layers_bert], args=benchmark_args) result = benchmark.run() ``` HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_… 1 / 3 2 / 3 3 / 3 ==================== TRAIN - MEMORY - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Bert-4-Layers 8 512 4103 Bert-8-Layers 8 512 5759 Bert-12-Layers 8 512 7415 -------------------------------------------------------------------------------- It can be seen that adding a single layer of BERT linearly increases the required memory by more than 400MB. 
``` config_4_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=4, num_hashes=1) config_8_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=8, num_hashes=1) config_12_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=12, num_hashes=1) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-4-Layers", "Reformer-8-Layers", "Reformer-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_4_layers_reformer, config_8_layers_reformer, config_12_layers_reformer], args=benchmark_args) result = benchmark.run() ``` 1 / 3 2 / 3 3 / 3 ==================== TRAIN - MEMORY - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-4-Layers 8 512 4607 Reformer-8-Layers 8 512 4987 Reformer-12-Layers 8 512 5367 -------------------------------------------------------------------------------- For Reformer, on the other hand, adding a layer adds significantly less memory in practice. Adding a single layer increases the required memory on average by less than 100MB so that a much larger 12-Layer `reformer-enwik8` model requires less memory than a 12-Layer `bert-base-uncased` model. ## 4. Axial Positional Encodings Reformer makes it possible to process huge input sequences. However, for such long input sequences standard positional encoding weight matrices alone would use more than 1GB to store its weights. To prevent such large positional encoding matrices, the official Reformer code introduced *Axial Position Encodings*. **Important:** *Axial Position Encodings were not explained in the official paper, but can be well understood from looking into the code and talking to the authors* ### Axial Positional Encodings in Reformer Transformers need positional encodings to account for the order of words in the input because self-attention layers have *no notion of order*. Positional encodings are usually defined by a simple look-up matrix \\( \mathbf{E} = \left[\mathbf{e}_1, \ldots, \mathbf{e}_{n_\text{max}}\right] \\) The positional encoding vector \\( \mathbf{e}_{i} \\) is then simply added to the *ith* input vector \\( \mathbf{x}_{i} + \mathbf{e}_{i} \\) so that the model can distinguish if an input vector (*a.k.a* token) is at position \\( i \\) or \\( j \\). For every input position, the model needs to be able to look up the corresponding positional encoding vector so that the dimension of \\( \mathbf{E} \\) is defined by the maximum length of input vectors the model can process `config.max_position_embeddings`, *i.e.* \\( n_\text{max} \\), and the `config.hidden_size`, *i.e.* \\( d_{h} \\) of the input vectors. Assuming \\( d_{h}=4 \\) and \\( n_\text{max}=49 \\), such a positional encoding matrix can be visualized as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/positional_encodings_default.png) Here, we showcase only the positional encodings \\( \mathbf{e}_{1} \\), \\( \mathbf{e}_{2} \\), and \\( \mathbf{e}_{49} \\) each of dimension, *a.k.a* height 4. 
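In code, such a standard positional encoding is nothing more than a learned lookup table whose rows are added to the input vectors. Below is a minimal PyTorch sketch for the toy setting in the figure; the variable names are ours and this is not the actual Transformers implementation:

```
import torch
import torch.nn as nn

n_max, d_h = 49, 4                         # toy values from the figure above
pos_embedding = nn.Embedding(n_max, d_h)   # the full matrix E, one row per position

seq_len = 16
x = torch.randn(seq_len, d_h)              # input vectors x_1, ..., x_16
positions = torch.arange(seq_len)
x = x + pos_embedding(positions)           # add e_i to x_i

# The memory cost is simply n_max * d_h parameters
print(sum(p.numel() for p in pos_embedding.parameters()))  # 196
```

The next paragraph shows why this simple lookup table becomes a problem as soon as \\( n_\text{max} \\) grows to hundreds of thousands of positions.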
Let's imagine, we want to train a Reformer model on sequences of a length of up to 0.5M tokens and an input vector `config.hidden_size` of 1024 (see notebook [here](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb)). The corresponding positional embeddings have a size of \\( 0.5M \times 1024 \sim 512M \\) parameters, which corresponds to a size of 2GB. Such positional encodings would use an unnecessarily large amount of memory both when loading the model in memory and when saving the model on a hard drive. The Reformer authors managed to drastically shrink the positional encodings in size by cutting the `config.hidden_size` dimension in two and smartly factorizing the \\( n_\text{max} \\) dimension. In Transformer, the user can decide into which shape \\( n_\text{max} \\) can be factorized into by setting `config.axial_pos_shape` to an appropriate list of two values \\( n_\text{max}^1 \\) and \\( n_\text{max}^2 \\) so that \\( n_\text{max}^1 \times n_\text{max}^2 = n_\text{max} \\). By setting `config.axial_pos_embds_dim` to an appropriate list of two values \\( d_{h}^{1} \\) and \\( d_{h}^2 \\) so that \\( d_{h}^1 + d_{h}^2 = d_{h} \\), the user can decide how the hidden size dimension should be cut. Now, let's visualize and explain more intuitively. One can think of factorizing \\( n_{\text{max}} \\) as folding the dimension into a third axis, which is shown in the following for the factorization `config.axial_pos_shape = [7, 7]`: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding.png) Each of the three standing rectangular prisms corresponds to one of the encoding vectors \\( \mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{49} \\), but we can see that the 49 encoding vectors are divided into 7 rows of 7 vectors each. Now the idea is to use only one row of 7 encoding vectors and expand those vectors to the other 6 rows, essentially reusing their values. Because it is discouraged to have the same values for different encoding vectors, each vector of dimension (*a.k.a* height) `config.hidden_size=4` is cut into the lower encoding vector \\( \mathbf{e}_\text{down} \\) of size \\( 1 \\) and \\( \mathbf{e}_\text{up} \\) of size \\( 3 \\), so that the lower part can be expanded along the row dimension and the upper part can be expanded along the column dimension. Let's visualize for more clarity. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding_cut.png) We can see that we have cut the embedding vectors into \\( \mathbf{e}_\text{down} \\) (*in blue*) and \\( \mathbf{e}_\text{up} \\) (*in yellow*). Now for the "sub"-vectors \\( \mathbf{E}_\text{down} = \left[\mathbf{e}_{\text{down},1}, \ldots, \mathbf{e}_{\text{down},49}\right] \\) only the first row, *a.k.a.* the width in the graphic, of \\( 7 \\) is kept and expanded along the column dimension, *a.k.a.* the depth of the graphic. Inversely, for the "sub"-vectors \\( \mathbf{E}_\text{up} = \left[\mathbf{e}_{\text{up},1}, \ldots, \mathbf{e}_{\text{up},49}\right] \\) only the first column of \\( 7 \\) is kept and expanded along the row dimension. 
The resulting embedding vectors \\( \mathbf{e'}_{i} \\) then correspond to $$\mathbf{e'}_{i} = \left[ \left[\mathbf{e}_{\text{down, } i \% n_\text{max}^1}\right]^T, \left[\mathbf{e}_{\text{up, } \left \lfloor{\frac{i}{{n}^2_{\text{max}}}}\right \rfloor} \right]^T \right]^T $$ whereas \\( n_\text{max}^1 = 7 \\) and \\( n_\text{max}^2 = 7 \\) in our example. These new encodings \\( \mathbf{E'} = \left[\mathbf{e'}_{1}, \ldots, \mathbf{e'}_{n_\text{max}}\right] \\) are called **Axial Position Encodings**. In the following, these axial position encodings are illustrated in more detail for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/axial_pos_encoding.png) Now it should be more understandable how the final positional encoding vectors \\( \mathbf{E'} \\) are calculated only from \\( \mathbf{E}_{\text{down}} \\) of dimension \\( d_{h}^1 \times n_{\text{max}^1} \\) and \\( \mathbf{E}_{\text{up}} \\) of dimension \\( d_{h}^2 \times n_{\text{max}}^2 \\). The crucial aspect to see here is that Axial Positional Encodings make sure that none of the vectors \\( \left[\mathbf{e'}_1, \ldots, \mathbf{e'}_{n_{\text{max}}}\right] \\) are equal to each other by design and that the overall size of the encoding matrix is reduced from \\( n_{\text{max}} \times d_{h} \\) to \\( n_{\text{max}}^1 \times d_{h}^1 + n_\text{max}^2 \times d_{h}^2 \\). By allowing each axial positional encoding vector to be different by design the model is given much more flexibility to learn efficient positional representations if axial positional encodings are learned by the model. To demonstrate the drastic reduction in size, let's assume we would have set `config.axial_pos_shape = [1024, 512]` and `config.axial_pos_embds_dim = [512, 512]` for a Reformer model that can process inputs up to a length of 0.5M tokens. The resulting axial positional encoding matrix would have had a size of only \\( 1024 \times 512 + 512 \times 512 \sim 800K \\) parameters which corresponds to roughly 3MB. This is a drastic reduction from the 2GB a standard positional encoding matrix would require in this case. For a more condensed and math-heavy explanation please refer to the 🤗Transformers docs [here](https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings). ### Benchmark Lastly, let's also compare the peak memory consumption of conventional positional embeddings to *axial positional embeddings*. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments, ReformerModel ``` Positional embeddings depend only on two configuration parameters: The maximum allowed length of input sequences `config.max_position_embeddings` and `config.hidden_size`. Let's use a model that pushes the maximum allowed length of input sequences to half a million tokens, called `google/reformer-crime-and-punishment`, to see the effect of using axial positional embeddings. To begin with, we will compare the shape of axial position encodings with standard positional encodings and the number of parameters in the model. 
``` config_no_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=False) # disable axial positional embeddings config_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=True, axial_pos_embds_dim=(64, 192), axial_pos_shape=(512, 1024)) # enable axial positional embeddings print("Default Positional Encodings") print(20 * '-') model = ReformerModel(config_no_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') print("Axial Positional Encodings") print(20 * '-') model = ReformerModel(config_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') ``` HBox(children=(FloatProgress(value=0.0, description='Downloading', max=1151.0, style=ProgressStyle(description… Default Positional Encodings -------------------- Positional embeddings shape: PositionEmbeddings( (embedding): Embedding(524288, 256) ) Num parameters of model: 136572416 -------------------- Axial Positional Encodings -------------------- Positional embeddings shape: AxialPositionEmbeddings( (weights): ParameterList( (0): Parameter containing: [torch.FloatTensor of size 512x1x64] (1): Parameter containing: [torch.FloatTensor of size 1x1024x192] ) ) Num parameters of model: 2584064 -------------------- Having read the theory, the shape of the axial positional encoding weights should not be a surprise to the reader. Regarding the results, it can be seen that for models being capable of processing such long input sequences, it is not practical to use default positional encodings. In the case of `google/reformer-crime-and-punishment`, standard positional encodings alone contain more than 100M parameters. Axial positional encodings reduce this number to just over 200K. Lastly, let's also compare the required memory at inference time. ``` benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-No-Axial-Pos-Embeddings", "Reformer-Axial-Pos-Embeddings"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_pos_axial_embeds, config_pos_axial_embeds], args=benchmark_args) result = benchmark.run() ``` 1 / 2 2 / 2 ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Axial-Pos-Embeddin 8 512 959 Reformer-Axial-Pos-Embeddings 8 512 447 -------------------------------------------------------------------------------- It can be seen that using axial positional embeddings reduces the memory requirement to approximately half in the case of `google/reformer-crime-and-punishment`.
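To make the factorization itself concrete, here is a small, self-contained NumPy sketch of how an axial encoding matrix can be assembled from the two smaller sub-matrices, following the formula above. The variable names are ours, and the actual Transformers implementation differs in its details:

```
import numpy as np

n1, n2 = 7, 7   # factorization of n_max = 49 (config.axial_pos_shape)
d1, d2 = 1, 3   # split of the hidden size d_h = 4 (config.axial_pos_embds_dim)

# Two small tables instead of one big (n_max x d_h) matrix
E_down = np.random.randn(n1, d1)
E_up = np.random.randn(n2, d2)

# e'_i = concat(e_down[i mod n1], e_up[i // n1]) for every position i
positions = np.arange(n1 * n2)
E_axial = np.concatenate([E_down[positions % n1], E_up[positions // n1]], axis=-1)

print(E_axial.shape)             # (49, 4): one encoding vector per position
print(E_down.size + E_up.size)   # 7*1 + 7*3 = 28 parameters instead of 49*4 = 196
```

Plugging in the values used above, `axial_pos_shape=(512, 1024)` and `axial_pos_embds_dim=(64, 192)`, gives \\( 512 \times 64 + 1024 \times 192 \approx 230K \\) parameters, which matches the "just over 200K" figure from the benchmark.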
2
0
hf_public_repos
hf_public_repos/blog/deploy-deepfloydif-using-bentoml.md
--- title: "Deploying Hugging Face Models with BentoML: DeepFloyd IF in Action" thumbnail: /blog/assets/deploy-deepfloydif-using-bentoml/thumbnail.png authors: - user: Sherlockk guest: true - user: larme guest: true --- # Deploying Hugging Face Models with BentoML: DeepFloyd IF in Action Hugging Face provides a Hub platform that allows you to upload, share, and deploy your models with ease. It saves developers the time and computational resources required to train models from scratch. However, deploying models in a real-world production environment or in a cloud-native way can still present challenges. This is where BentoML comes into the picture. BentoML is an open-source platform for machine learning model serving and deployment. It is a unified framework for building, shipping, and scaling production-ready AI applications incorporating traditional, pre-trained, and generative models as well as Large Language Models. Here is how you use the BentoML framework from a high-level perspective: 1. **Define a model**: Before you can use BentoML, you need a machine learning model (or multiple models). This model can be trained using a machine learning library such as TensorFlow and PyTorch. 2. **Save the model**: Once you have a trained model, save it to the BentoML local Model Store, which is used for managing all your trained models locally as well as accessing them for serving. 3. **Create a BentoML Service**: You create a `service.py` file to wrap the model and define the serving logic. It specifies [Runners](https://docs.bentoml.org/en/latest/concepts/runner.html) for models to run model inference at scale and exposes APIs to define how to process inputs and outputs. 4. **Build a Bento**: By creating a configuration YAML file, you package all the models and the [Service](https://docs.bentoml.org/en/latest/concepts/service.html) into a [Bento](https://docs.bentoml.org/en/latest/concepts/bento.html), a deployable artifact containing all the code and dependencies. 5. **Deploy the Bento**: Once the Bento is ready, you can containerize the Bento to create a Docker image and run it on Kubernetes. Alternatively, deploy the Bento directly to Yatai, an open-source, end-to-end solution for automating and running machine learning deployments on Kubernetes at scale. In this blog post, we will demonstrate how to integrate [DeepFloyd IF](https://huggingface.co/docs/diffusers/api/pipelines/if) with BentoML by following the above workflow. ## Table of contents - [A brief introduction to DeepFloyd IF](#a-brief-introduction-to-deepfloyd-if) - [Preparing the environment](#preparing-the-environment) - [Downloading the model to the BentoML Model Store](#downloading-the-model-to-the-bentoml-model-store) - [Starting a BentoML Service](#starting-a-bentoml-service) - [Building and serving a Bento](#building-and-serving-a-bento) - [Testing the server](#testing-the-server) - [What's next](#whats-next) ## A brief introduction to DeepFloyd IF DeepFloyd IF is a state-of-the-art, open-source text-to-image model. It stands apart from latent diffusion models like Stable Diffusion due to its distinct operational strategy and architecture. DeepFloyd IF delivers a high degree of photorealism and sophisticated language understanding. Unlike Stable Diffusion, DeepFloyd IF works directly in pixel space, leveraging a modular structure that encompasses a frozen text encoder and three cascaded pixel diffusion modules. 
Each module plays a unique role in the process: Stage 1 is responsible for the creation of a base 64x64 px image, which is then progressively upscaled to 1024x1024 px across Stage 2 and Stage 3. Another critical aspect of DeepFloyd IF’s uniqueness is its integration of a Large Language Model (T5-XXL-1.1) to encode prompts, which offers superior understanding of complex prompts. For more information, see this [Stability AI blog post about DeepFloyd IF](https://stability.ai/blog/deepfloyd-if-text-to-image-model).

To make sure your DeepFloyd IF application runs with high performance in production, you may want to allocate and manage your resources wisely. In this respect, BentoML allows you to scale the Runners independently for each Stage. For example, you can use more Pods for your Stage 1 Runners or allocate more powerful GPU servers to them.

## Preparing the environment

[This GitHub repository](https://github.com/bentoml/IF-multi-GPUs-demo) stores all necessary files for this project. To run this project locally, make sure you have the following:

- Python 3.8+
- `pip` installed
- At least two GPUs with 16GB of VRAM each, or a single GPU with 40GB of VRAM. For this project, we used a machine of type `n1-standard-16` from Google Cloud plus 64 GB of RAM and 2 NVIDIA T4 GPUs. Note that while it is possible to run IF on a single T4, it is not recommended for production-grade serving.

Once the prerequisites are met, clone the project repository to your local machine and navigate to the target directory.

```bash
git clone https://github.com/bentoml/IF-multi-GPUs-demo.git
cd IF-multi-GPUs-demo
```

Before building the application, let’s briefly explore the key files within this directory:

- `import_models.py`: Defines the models for each stage of the [`IFPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/if). You use this file to download all the models to your local machine so that you can package them into a single Bento.
- `requirements.txt`: Defines all the packages and dependencies required for this project.
- `service.py`: Defines a BentoML Service, which contains three Runners created using the `to_runner` method and exposes an API for generating images. The API takes a JSON object as input (i.e. prompts and negative prompts) and returns an image as output by using a sequence of models.
- `start-server.py`: Starts a BentoML HTTP server through the Service defined in `service.py` and creates a Gradio web interface for users to enter prompts to generate images.
- `bentofile.yaml`: Defines the metadata of the Bento to be built, including the Service, Python packages, and models.

We recommend you create a virtual environment for dependency isolation. For example, run the following commands to create and activate a virtual environment named `venv`:

```bash
python -m venv venv
source venv/bin/activate
```

Install the required dependencies:

```bash
pip install -r requirements.txt
```

If you haven’t previously downloaded models from Hugging Face using the command line, you must log in first:

```bash
pip install -U huggingface_hub
huggingface-cli login
```

## Downloading the model to the BentoML Model Store

As mentioned above, you need to download all the models used by each DeepFloyd IF stage. Once you have set up the environment, run the following command to download models to your local Model store. The process may take some time.

```bash
python import_models.py
```

Once the downloads are complete, view the models in the Model store.
```bash $ bentoml models list Tag Module Size Creation Time sd-upscaler:bb2ckpa3uoypynry bentoml.diffusers 16.29 GiB 2023-07-06 10:15:53 if-stage2:v1.0 bentoml.diffusers 13.63 GiB 2023-07-06 09:55:49 if-stage1:v1.0 bentoml.diffusers 19.33 GiB 2023-07-06 09:37:59 ``` ## Starting a BentoML Service You can directly run the BentoML HTTP server with a web UI powered by Gradio using the `start-server.py` file, which is the entry point of this application. It provides various options for customizing the execution and managing GPU allocation among different Stages. You may use different commands depending on your GPU setup: - For a GPU with over 40GB VRAM, run all models on the same GPU. ```bash python start-server.py ``` - For two Tesla T4 with 15GB VRAM each, assign the Stage 1 model to the first GPU, and the Stage 2 and Stage 3 models to the second GPU. ```bash python start-server.py --stage1-gpu=0 --stage2-gpu=1 --stage3-gpu=1 ``` - For one Tesla T4 with 15GB VRAM and two additional GPUs with smaller VRAM size, assign the Stage 1 model to T4, and Stage 2 and Stage 3 models to the second and third GPUs respectively. ```bash python start-server.py --stage1-gpu=0 --stage2-gpu=1 --stage3-gpu=2 ``` To see all customizable options (like the server’s port), run: ```bash python start-server.py --help ``` ## Testing the server Once the server starts, you can visit the web UI at http://localhost:7860. The BentoML API endpoint is also accessible at http://localhost:3000. Here is an example of a prompt and a negative prompt. Prompt: > orange and black, head shot of a woman standing under street lights, dark theme, Frank Miller, cinema, ultra realistic, ambiance, insanely detailed and intricate, hyper realistic, 8k resolution, photorealistic, highly textured, intricate details Negative prompt: > tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, blurred, text, watermark, grainy Result: ![Output image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/deploy-deepfloydif-using-bentoml/output-image.png) ## Building and serving a Bento Now that you have successfully run DeepFloyd IF locally, you can package it into a Bento by running the following command in the project directory. ```bash $ bentoml build Converting 'IF-stage1' to lowercase: 'if-stage1'. Converting 'IF-stage2' to lowercase: 'if-stage2'. Converting DeepFloyd-IF to lowercase: deepfloyd-if. Building BentoML service "deepfloyd-if:6ufnybq3vwszgnry" from build context "/Users/xxx/Documents/github/IF-multi-GPUs-demo". Packing model "sd-upscaler:bb2ckpa3uoypynry" Packing model "if-stage1:v1.0" Packing model "if-stage2:v1.0" Locking PyPI package versions. ██████╗░███████╗███╗░░██╗████████╗░█████╗░███╗░░░███╗██╗░░░░░ ██╔══██╗██╔════╝████╗░██║╚══██╔══╝██╔══██╗████╗░████║██║░░░░░ ██████╦╝█████╗░░██╔██╗██║░░░██║░░░██║░░██║██╔████╔██║██║░░░░░ ██╔══██╗██╔══╝░░██║╚████║░░░██║░░░██║░░██║██║╚██╔╝██║██║░░░░░ ██████╦╝███████╗██║░╚███║░░░██║░░░╚█████╔╝██║░╚═╝░██║███████╗ ╚═════╝░╚══════╝╚═╝░░╚══╝░░░╚═╝░░░░╚════╝░╚═╝░░░░░╚═╝╚══════╝ Successfully built Bento(tag="deepfloyd-if:6ufnybq3vwszgnry"). ``` View the Bento in the local Bento Store. ```bash $ bentoml list Tag Size Creation Time deepfloyd-if:6ufnybq3vwszgnry 49.25 GiB 2023-07-06 11:34:52 ``` The Bento is now ready for serving in production. 
```bash bentoml serve deepfloyd-if:6ufnybq3vwszgnry ``` To deploy the Bento in a more cloud-native way, generate a Docker image by running the following command: ```bash bentoml containerize deepfloyd-if:6ufnybq3vwszgnry ``` You can then deploy the model on Kubernetes. ## What’s next? [BentoML](https://github.com/bentoml/BentoML) provides a powerful and straightforward way to deploy Hugging Face models for production. With its support for a wide range of ML frameworks and easy-to-use APIs, you can ship your model to production in no time. Whether you’re working with the DeepFloyd IF model or any other model on the Hugging Face Model Hub, BentoML can help you bring your models to life. Check out the following resources to see what you can build with BentoML and its ecosystem tools, and stay tuned for more information about BentoML. - [OpenLLM](https://github.com/bentoml/OpenLLM) - An open platform for operating Large Language Models (LLMs) in production. - [StableDiffusion](https://github.com/bentoml/stable-diffusion-bentoml) - Create your own text-to-image service with any diffusion models. - [Transformer NLP Service](https://github.com/bentoml/transformers-nlp-service) - Online inference API for Transformer NLP models. - Join the [BentoML community on Slack](https://l.bentoml.com/join-slack). - Follow us on [Twitter](https://twitter.com/bentomlai) and [LinkedIn](https://www.linkedin.com/company/bentoml/).
3
0
hf_public_repos
hf_public_repos/blog/speecht5.md
--- title: "Speech Synthesis, Recognition, and More With SpeechT5" thumbnail: /blog/assets/speecht5/thumbnail.png authors: - user: Matthijs --- # Speech Synthesis, Recognition, and More With SpeechT5 We’re happy to announce that SpeechT5 is now available in 🤗 Transformers, an open-source library that offers easy-to-use implementations of state-of-the-art machine learning models. SpeechT5 was originally described in the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Microsoft Research Asia. The [official checkpoints](https://github.com/microsoft/SpeechT5) published by the paper’s authors are available on the Hugging Face Hub. If you want to jump right in, here are some demos on Spaces: - [Speech Synthesis (TTS)](https://huggingface.co/spaces/Matthijs/speecht5-tts-demo) - [Voice Conversion](https://huggingface.co/spaces/Matthijs/speecht5-vc-demo) - [Automatic Speech Recognition](https://huggingface.co/spaces/Matthijs/speecht5-asr-demo) ## Introduction SpeechT5 is not one, not two, but three kinds of speech models in one architecture. It can do: - **speech-to-text** for automatic speech recognition or speaker identification, - **text-to-speech** to synthesize audio, and - **speech-to-speech** for converting between different voices or performing speech enhancement. The main idea behind SpeechT5 is to pre-train a single model on a mixture of text-to-speech, speech-to-text, text-to-text, and speech-to-speech data. This way, the model learns from text and speech at the same time. The result of this pre-training approach is a model that has a **unified space** of hidden representations shared by both text and speech. At the heart of SpeechT5 is a regular **Transformer encoder-decoder** model. Just like any other Transformer, the encoder-decoder network models a sequence-to-sequence transformation using hidden representations. This Transformer backbone is the same for all SpeechT5 tasks. To make it possible for the same Transformer to deal with both text and speech data, so-called **pre-nets** and **post-nets** were added. It is the job of the pre-net to convert the input text or speech into the hidden representations used by the Transformer. The post-net takes the outputs from the Transformer and turns them into text or speech again. A figure illustrating SpeechT5’s architecture is depicted below (taken from the [original paper](https://arxiv.org/abs/2110.07205)). <div align="center"> <img alt="SpeechT5 architecture diagram" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/architecture.jpg"/> </div> During pre-training, all of the pre-nets and post-nets are used simultaneously. After pre-training, the entire encoder-decoder backbone is fine-tuned on a single task. Such a fine-tuned model only uses the pre-nets and post-nets specific to the given task. For example, to use SpeechT5 for text-to-speech, you’d swap in the text encoder pre-net for the text inputs and the speech decoder pre and post-nets for the speech outputs. Note: Even though the fine-tuned models start out using the same set of weights from the shared pre-trained model, the final versions are all quite different in the end. You can’t take a fine-tuned ASR model and swap out the pre-nets and post-net to get a working TTS model, for example. SpeechT5 is flexible, but not *that* flexible. 
## Text-to-speech SpeechT5 is the **first text-to-speech model** we’ve added to 🤗 Transformers, and we plan to add more TTS models in the near future. For the TTS task, the model uses the following pre-nets and post-nets: - **Text encoder pre-net.** A text embedding layer that maps text tokens to the hidden representations that the encoder expects. Similar to what happens in an NLP model such as BERT. - **Speech decoder pre-net.** This takes a log mel spectrogram as input and uses a sequence of linear layers to compress the spectrogram into hidden representations. This design is taken from the Tacotron 2 TTS model. - **Speech decoder post-net.** This predicts a residual to add to the output spectrogram and is used to refine the results, also from Tacotron 2. The architecture of the fine-tuned model looks like the following. <div align="center"> <img alt="SpeechT5 architecture for text-to-speech" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/tts.jpg"/> </div> Here is a complete example of how to use the SpeechT5 text-to-speech model to synthesize speech. You can also follow along in [this interactive Colab notebook](https://colab.research.google.com/drive/1XnOnCsmEmA3lHmzlNRNxRMcu80YZQzYf?usp=sharing). SpeechT5 is not available in the latest release of Transformers yet, so you'll have to install it from GitHub. Also install the additional dependency sentencepiece and then restart your runtime. ```python pip install git+https://github.com/huggingface/transformers.git pip install sentencepiece ``` First, we load the [fine-tuned model](https://huggingface.co/microsoft/speecht5_tts) from the Hub, along with the processor object used for tokenization and feature extraction. The class we’ll use is `SpeechT5ForTextToSpeech`. ```python from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts") ``` Next, tokenize the input text. ```python inputs = processor(text="Don't count the days, make the days count.", return_tensors="pt") ``` The SpeechT5 TTS model is not limited to creating speech for a single speaker. Instead, it uses so-called **speaker embeddings** that capture a particular speaker’s voice characteristics. We’ll load such a speaker embedding from a dataset on the Hub. ```python from datasets import load_dataset embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") import torch speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) ``` The speaker embedding is a tensor of shape (1, 512). This particular speaker embedding describes a female voice. The embeddings were obtained from the [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), but any X-Vector embedding should work. Now we can tell the model to generate the speech, given the input tokens and the speaker embedding. ```python spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings) ``` This outputs a tensor of shape (140, 80) containing a log mel spectrogram. The first dimension is the sequence length, and it may vary between runs as the speech decoder pre-net always applies dropout to the input sequence. This adds a bit of random variability to the generated speech. 
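If you are curious what the model produced at this point, you can inspect the spectrogram before vocoding it. The following optional matplotlib sketch is not part of the original pipeline; it simply visualizes the (sequence length, 80) tensor returned above:

```python
import matplotlib.pyplot as plt

print(spectrogram.shape)  # e.g. torch.Size([140, 80])

# Time runs along the first axis, the 80 mel bins along the second
plt.imshow(spectrogram.detach().T.numpy(), origin="lower", aspect="auto")
plt.xlabel("frame")
plt.ylabel("mel bin")
plt.title("Generated log mel spectrogram")
plt.show()
```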
To convert the predicted log mel spectrogram into an actual speech waveform, we need a **vocoder**. In theory, you can use any vocoder that works on 80-bin mel spectrograms, but for convenience, we’ve provided one in Transformers based on HiFi-GAN. The [weights for this vocoder](https://huggingface.co/mechanicalsea/speecht5-tts), as well as the weights for the fine-tuned TTS model, were kindly provided by the original authors of SpeechT5. Loading the vocoder is as easy as any other 🤗 Transformers model. ```python from transformers import SpeechT5HifiGan vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") ``` To make audio from the spectrogram, do the following: ```python with torch.no_grad(): speech = vocoder(spectrogram) ``` We’ve also provided a shortcut so you don’t need the intermediate step of making the spectrogram. When you pass the vocoder object into `generate_speech`, it directly outputs the speech waveform. ```python speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder) ``` And finally, save the speech waveform to a file. The sample rate used by SpeechT5 is always 16 kHz. ```python import soundfile as sf sf.write("tts_example.wav", speech.numpy(), samplerate=16000) ``` The output sounds like this ([download audio](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/tts_example.wav)): <audio controls> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/tts_example.wav" type="audio/wav"> Your browser does not support the audio element. </audio> That’s it for the TTS model! The key to making this sound good is to use the right speaker embeddings. You can play with an [interactive demo](https://huggingface.co/spaces/Matthijs/speecht5-tts-demo) on Spaces. 💡 Interested in learning how to **fine-tune** SpeechT5 TTS on your own dataset or language? Check out [this Colab notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ) with a detailed walk-through of the process. ## Speech-to-speech for voice conversion Conceptually, doing speech-to-speech modeling with SpeechT5 is the same as text-to-speech. Simply swap out the text encoder pre-net for the speech encoder pre-net. The rest of the model stays the same. <div align="center"> <img alt="SpeechT5 architecture for speech-to-speech" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/s2s.jpg"/> </div> The **speech encoder pre-net** is the same as the feature encoding module from [wav2vec 2.0](https://huggingface.co/docs/transformers/model_doc/wav2vec2). It consists of convolution layers that downsample the input waveform into a sequence of audio frame representations. As an example of a speech-to-speech task, the authors of SpeechT5 provide a [fine-tuned checkpoint](https://huggingface.co/microsoft/speecht5_vc) for doing voice conversion. To use this, first load the model from the Hub. Note that the model class now is `SpeechT5ForSpeechToSpeech`. ```python from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc") model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc") ``` We will need some speech audio to use as input. For the purpose of this example, we’ll load the audio from a small speech dataset on the Hub. You can also load your own speech waveforms, as long as they are mono and use a sampling rate of 16 kHz. 
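If your own recordings are stereo or use a different sampling rate, you can convert them first. Here is a rough sketch using torchaudio; the file name is a placeholder and any audio I/O and resampling library will work just as well:

```python
import torchaudio

# Load an arbitrary local file (path is a placeholder)
waveform, sr = torchaudio.load("my_recording.wav")

# Downmix to mono and resample to the 16 kHz expected by SpeechT5
waveform = waveform.mean(dim=0, keepdim=True)
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)

speech_array = waveform.squeeze().numpy()  # can now be passed to the processor
```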
The samples from the dataset we’re using here are already in this format. ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") dataset = dataset.sort("id") example = dataset[40] ``` Next, preprocess the audio to put it in the format that the model expects. ```python sampling_rate = dataset.features["audio"].sampling_rate inputs = processor(audio=example["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` As with the TTS model, we’ll need speaker embeddings. These describe what the target voice sounds like. ```python import torch embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) ``` We also need to load the vocoder to turn the generated spectrograms into an audio waveform. Let’s use the same vocoder as with the TTS model. ```python from transformers import SpeechT5HifiGan vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") ``` Now we can perform the speech conversion by calling the model’s `generate_speech` method. ```python speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder) import soundfile as sf sf.write("speech_converted.wav", speech.numpy(), samplerate=16000) ``` Changing to a different voice is as easy as loading a new speaker embedding. You could even make an embedding from your own voice! The original input ([download](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/speech_original.wav)): <audio controls> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/speech_original.wav" type="audio/wav"> Your browser does not support the audio element. </audio> The converted voice ([download](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/speech_converted.wav)): <audio controls> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/speech_converted.wav" type="audio/wav"> Your browser does not support the audio element. </audio> Note that the converted audio in this example cuts off before the end of the sentence. This might be due to the pause between the two sentences, causing SpeechT5 to (wrongly) predict that the end of the sequence has been reached. Try it with another example, you’ll find that often the conversion is correct but sometimes it stops prematurely. You can play with an [interactive demo here](https://huggingface.co/spaces/Matthijs/speecht5-vc-demo). 🔥 ## Speech-to-text for automatic speech recognition The ASR model uses the following pre-nets and post-net: - **Speech encoder pre-net.** This is the same pre-net used by the speech-to-speech model and consists of the CNN feature encoder layers from wav2vec 2.0. - **Text decoder pre-net.** Similar to the encoder pre-net used by the TTS model, this maps text tokens into the hidden representations using an embedding layer. (During pre-training, these embeddings are shared between the text encoder and decoder pre-nets.) - **Text decoder post-net.** This is the simplest of them all and consists of a single linear layer that projects the hidden representations to probabilities over the vocabulary. The architecture of the fine-tuned model looks like the following. 
<div align="center"> <img alt="SpeechT5 architecture for speech-to-text" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/speecht5/asr.jpg"/> </div> If you’ve tried any of the other 🤗 Transformers speech recognition models before, you’ll find SpeechT5 just as easy to use. The quickest way to get started is by using a pipeline. ```python from transformers import pipeline generator = pipeline(task="automatic-speech-recognition", model="microsoft/speecht5_asr") ``` As speech audio, we’ll use the same input as in the previous section, but any audio file will work, as the pipeline automatically converts the audio into the correct format. ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") dataset = dataset.sort("id") example = dataset[40] ``` Now we can ask the pipeline to process the speech and generate a text transcription. ```python transcription = generator(example["audio"]["array"]) ``` Printing the transcription gives: ```text a man said to the universe sir i exist ``` That sounds exactly right! The tokenizer used by SpeechT5 is very basic and works on the character level. The ASR model will therefore not output any punctuation or capitalization. Of course it’s also possible to use the model class directly. First, load the [fine-tuned model](https://huggingface.co/microsoft/speecht5_asr) and the processor object. The class is now `SpeechT5ForSpeechToText`. ```python from transformers import SpeechT5Processor, SpeechT5ForSpeechToText processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr") model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr") ``` Preprocess the speech input: ```python sampling_rate = dataset.features["audio"].sampling_rate inputs = processor(audio=example["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` Finally, tell the model to generate text tokens from the speech input, and then use the processor’s decoding function to turn these tokens into actual text. ```python predicted_ids = model.generate(**inputs, max_length=100) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) ``` Play with an interactive demo for the [speech-to-text task](https://huggingface.co/spaces/Matthijs/speecht5-asr-demo). ## Conclusion SpeechT5 is an interesting model because — unlike most other models — it allows you to perform multiple tasks with the same architecture. Only the pre-nets and post-nets change. By pre-training the model on these combined tasks, it becomes more capable at doing each of the individual tasks when fine-tuned. We have only included checkpoints for the speech recognition (ASR), speech synthesis (TTS), and voice conversion tasks but the paper also mentions the model was successfully used for speech translation, speech enhancement, and speaker identification. It’s very versatile!
4
0
hf_public_repos
hf_public_repos/blog/image-search-datasets.md
--- title: "Image search with 🤗 datasets" thumbnail: /blog/assets/54_image_search_datasets/spaces_image_search.jpg authors: - user: davanstrien guest: true --- # Image search with 🤗 datasets <a target="_blank" href="https://colab.research.google.com/gist/davanstrien/e2c29fbbed20dc767e5a74e210f4237b/hf_blog_image_search.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> 🤗 [`datasets`](https://huggingface.co/docs/datasets/) is a library that makes it easy to access and share datasets. It also makes it easy to process data efficiently -- including working with data which doesn't fit into memory. When `datasets` was first launched, it was associated mostly with text data. However, recently, `datasets` has added increased support for audio as well as images. In particular, there is now a `datasets` [feature type for images](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image). A previous [blog post](https://huggingface.co/blog/fine-tune-vit) showed how `datasets` can be used with 🤗 `transformers` to train an image classification model. In this blog post, we'll see how we can combine `datasets` and a few other libraries to create an image search application. First, we'll install `datasets`. Since we're going to be working with images, we'll also install [`pillow`](https://pillow.readthedocs.io/en/stable/). We'll also need `sentence_transformers` and `faiss`. We'll introduce those in more detail below. We also install [`rich`](https://github.com/Textualize/rich) - we'll only briefly use it here, but it's a super handy package to have around -- I'd really recommend exploring it further! ``` python !pip install datasets pillow rich faiss-gpu sentence_transformers ``` To start, let's take a look at the image feature. We can use the wonderful [rich](https://rich.readthedocs.io/) library to poke around python objects (functions, classes etc.) 
``` python from rich import inspect import datasets ``` ``` python inspect(datasets.Image, help=True) ``` <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #000080; text-decoration-color: #000080">╭───────────────────────── </span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold">&lt;</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">class</span><span style="color: #000000; text-decoration-color: #000000"> </span><span style="color: #008000; text-decoration-color: #008000">'datasets.features.image.Image'</span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold">&gt;</span><span style="color: #000080; text-decoration-color: #000080"> ─────────────────────────╮</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #00ffff; text-decoration-color: #00ffff; font-style: italic">class </span><span style="color: #800000; text-decoration-color: #800000; font-weight: bold">Image</span><span style="font-weight: bold">(</span>decode: bool = <span style="color: #00ff00; text-decoration-color: #00ff00; font-style: italic">True</span>, id: Union<span style="font-weight: bold">[</span>str, NoneType<span style="font-weight: bold">]</span> = <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span><span style="font-weight: bold">)</span> -&gt; <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>: <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">Image feature to read image data from an image file.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">Input: The Image feature accepts as input:</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">- A :obj:`str`: Absolute path to the image file </span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">(</span><span style="color: #008080; text-decoration-color: #008080">i.e. 
random access is allowed</span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">)</span><span style="color: #008080; text-decoration-color: #008080">.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">- A :obj:`dict` with the keys:</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080"> - path: String with relative path of the image file to the archive file.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080"> - bytes: Bytes of the image file.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080"> This is useful for archived files with sequential access.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">- An :obj:`np.ndarray`: NumPy array representing an image.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">- A :obj:`PIL.Image.Image`: PIL image object.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">Args:</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080"> decode </span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">(</span><span style="color: #008080; text-decoration-color: #008080">:obj:`bool`, default ``</span><span style="color: #00ff00; text-decoration-color: #00ff00; font-style: italic">True</span><span style="color: #008080; text-decoration-color: #008080">``</span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">)</span><span style="color: #008080; text-decoration-color: #008080">: Whether to decode the image data. 
If `</span><span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span><span style="color: #008080; text-decoration-color: #008080">`,</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080"> returns the underlying dictionary in the format </span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">{</span><span style="color: #008000; text-decoration-color: #008000">"path"</span><span style="color: #008080; text-decoration-color: #008080">: image_path, </span><span style="color: #008000; text-decoration-color: #008000">"bytes"</span><span style="color: #008080; text-decoration-color: #008080">: </span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">image_bytes</span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">}</span><span style="color: #008080; text-decoration-color: #008080">.</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">decode</span> = <span style="color: #00ff00; text-decoration-color: #00ff00; font-style: italic">True</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">dtype</span> = <span style="color: #008000; text-decoration-color: #008000">'PIL.Image.Image'</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">id</span> = <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">pa_type</span> = <span style="color: #800080; text-decoration-color: #800080; font-weight: bold">StructType</span><span style="font-weight: bold">(</span>struct<span style="font-weight: bold">&lt;</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">bytes:</span><span style="color: #000000; text-decoration-color: #000000"> binary, path: string</span><span style="font-weight: bold">&gt;)</span> <span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">╰───────────────────────────────────────────────────────────────────────────────────────────╯</span> </pre> We can see there a few different ways in which we can pass in our images. We'll come back to this in a little while. A really nice feature of the `datasets` library (beyond the functionality for processing data, memory mapping etc.) is that you get some nice things 'for free'. 
One of these is the ability to add a [`faiss`](https://github.com/facebookresearch/faiss) index to a dataset. [`faiss`](https://github.com/facebookresearch/faiss) is a ["library for efficient similarity search and clustering of dense vectors"](https://github.com/facebookresearch/faiss). The `datasets` [docs](https://huggingface.co/docs/datasets) shows an [example](https://huggingface.co/docs/datasets/faiss_es.html#id1) of using a `faiss` index for text retrieval. In this post we'll see if we can do the same for images. ## The dataset: "Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900" This is a dataset of images which have been pulled from a collection of digitised books from the British Library. These images come from books across a wide time period and from a broad range of domains. The images were extracted using information contained in the OCR output for each book. As a result, it's known which book the images came from, but not necessarily anything else about that image i.e. what is shown in the image. Some attempts to help overcome this have included uploading the images to [flickr](https://www.flickr.com/photos/britishlibrary/albums). This allows people to tag the images or put them into various different categories. There have also been projects to tag the dataset [using machine learning](https://blogs.bl.uk/digital-scholarship/2016/11/sherlocknet-update-millions-of-tags-and-thousands-of-captions-added-to-the-bl-flickr-images.html). This work makes it possible to search by tags, but we might want a 'richer' ability to search. For this particular experiment, we'll work with a subset of the collections which contain "embellishments". This dataset is a bit smaller, so it will be better for experimenting with. We can get the full data from the British Library's data repository: [https://doi.org/10.21250/db17](https://doi.org/10.21250/db17). Since the full dataset is still fairly large, you'll probably want to start with a smaller sample. ## Creating our dataset Our dataset consists of a folder containing subdirectories inside which are images. This is a fairly standard format for sharing image datasets. Thanks to a recently merged [pull request](https://github.com/huggingface/datasets/pull/2830) we can directly load this dataset using `datasets` `ImageFolder` loader 🤯 ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="https://zenodo.org/record/6224034/files/embellishments_sample.zip?download=1") ``` Let's see what we get back. ```python dataset ``` ``` DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10000 }) }) ``` We can get back a `DatasetDict`, and we have a Dataset with image and label features. Since we don't have any train/validation splits here, let's grab the train part of our dataset. Let's also take a look at one example from our dataset to see what this looks like. ```python dataset = dataset["train"] dataset[0] ``` ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F9488DBB090>, 'label': 208} ``` Let's start with the label column. It contains the parent folder for our images. In this case, the label column represents the year of publication for the books from which the images are taken. We can see the mappings for this using `dataset.features`: ```python dataset.features['label'] ``` In this particular dataset, the image filenames also contain some metadata about the book from which the image was taken. 
There are a few ways we can get this information. When we look at one example from our dataset that the `image` feature was a `PIL.JpegImagePlugin.JpegImageFile`. Since `PIL.Images` have a filename attribute, one way in which we can grab our filenames is by accessing this. ```python dataset[0]['image'].filename ``` ```python /root/.cache/huggingface/datasets/downloads/extracted/f324a87ed7bf3a6b83b8a353096fbd9500d6e7956e55c3d96d2b23cc03146582/embellishments_sample/1920/000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpg ``` Since we might want easy access to this information later, let's create a new column to extract the filename. For this, we'll use the `map` method. ```python dataset = dataset.map(lambda example: {"fname": example['image'].filename.split("/")[-1]}) ``` We can look at one example to see what this looks like now. ```python dataset[0] ``` ```python {'fname': '000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpg', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F94862A9650>, 'label': 208} ``` We've got our metadata now. Let's see some pictures already! If we access an example and index into the `image` column we'll see our image 😃 ``` python dataset[10]['image'] ``` <img src="assets/54_image_search_datasets/dataset_image.jpg" alt="An example image from our dataset"> > **Note** in an [earlier version](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) of this blog post the steps to download and load the images was much more convoluted. The new ImageFolder loader makes this process much easier 😀 In particular, we don't need to worry about how to load our images since datasets took care of this for us. ## Push all the things to the hub! <img src="https://i.imgflip.com/613c0r.jpg" alt="Push all the things to the hub"> One of the super awesome things about the 🤗 ecosystem is the Hugging Face Hub. We can use the Hub to access models and datasets. It is often used for sharing work with others, but it can also be a useful tool for work in progress. `datasets` recently added a `push_to_hub` method that allows you to push a dataset to the Hub with minimal fuss. This can be really helpful by allowing you to pass around a dataset with all the transforms etc. already done. For now, we'll push the dataset to the Hub and keep it private initially. Depending on where you are running the code, you may need to authenticate. You can either do this using the `huggingface-cli login` command or, if you are running in a notebook, using `notebook_login` ``` python from huggingface_hub import notebook_login notebook_login() ``` ``` python dataset.push_to_hub('davanstrien/embellishments-sample', private=True) ``` > **Note**: in a [previous version](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) of this blog post we had to do a few more steps to ensure images were embedded when using `push_to_hub`. Thanks to [this pull request](https://github.com/huggingface/datasets/pull/3685) we no longer need to worry about these extra steps. We just need to make sure `embed_external_files=True` (which is the default behaviour). ### Switching machines At this point, we've created a dataset and moved it to the Hub. This means it is possible to pick up the work/dataset elsewhere. In this particular example, having access to a GPU is important. 
Using the Hub as a way to pass around our data we could start on a laptop and pick up the work on Google Colab. If we move to a new machine, we may need to login again. Once we've done this we can load our dataset ``` python from datasets import load_dataset dataset = load_dataset("davanstrien/embellishments-sample", use_auth_token=True) ``` ## Creating embeddings 🕸 We now have a dataset with a bunch of images in it. To begin creating our image search app, we need to embed these images. There are various ways to try and do this, but one possible way is to use the CLIP models via the `sentence_transformers` library. The [CLIP model](https://openai.com/blog/clip/) from OpenAI learns a joint representation for both images and text, which is very useful for what we want to do since we want to input text and get back an image. We can download the model using the `SentenceTransformer` class. ``` python from sentence_transformers import SentenceTransformer model = SentenceTransformer('clip-ViT-B-32') ``` This model will take as input either an image or some text and return an embedding. We can use the `datasets` `map` method to encode all our images using this model. When we call map, we return a dictionary with the key `embeddings` containing the embeddings returned by the model. We also pass `device='cuda'` when we call the model; this ensures that we're doing the encoding on the GPU. ``` python ds_with_embeddings = dataset.map( lambda example: {'embeddings':model.encode(example['image'], device='cuda')}, batched=True, batch_size=32) ``` We can 'save' our work by pushing back to the Hub using `push_to_hub`. ``` python ds_with_embeddings.push_to_hub('davanstrien/embellishments-sample', private=True) ``` If we were to move to a different machine, we could grab our work again by loading it from the Hub 😃 ``` python from datasets import load_dataset ds_with_embeddings = load_dataset("davanstrien/embellishments-sample", use_auth_token=True) ``` We now have a new column which contains the embeddings for our images. We could manually search through these and compare them to some input embedding but datasets has an `add_faiss_index` method. This uses the [faiss](https://github.com/facebookresearch/faiss) library to create an efficient index for searching embeddings. For more background on this library, you can watch this [YouTube video](https://www.youtube.com/embed/sKyvsdEv6rk) ``` python ds_with_embeddings['train'].add_faiss_index(column='embeddings') ``` ``` Dataset({ features: ['fname', 'year', 'path', 'image', 'embeddings'], num_rows: 10000 }) ``` ## Image search > **Note** that these examples were generated from the full version of the dataset so you may get slightly different results. We now have everything we need to create a simple image search. We can use the same model we used to encode our images to encode some input text. This will act as the prompt we try and find close examples for. Let's start with 'a steam engine'. ``` python prompt = model.encode("A steam engine") ``` We can use another method from the datasets library `get_nearest_examples` to get images which have an embedding close to our input prompt embedding. We can pass in a number of results we want to get back. 
``` python
scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=9)
```

We can index into the first example this retrieves:

``` python
retrieved_examples['image'][0]
```

<img src="assets/54_image_search_datasets/search_result.jpg" alt="An image of a factory">

This isn't quite a steam engine, but it's also not a completely weird result. We can plot the other results to see what was returned.

``` python
import matplotlib.pyplot as plt
```

``` python
plt.figure(figsize=(20, 20))
columns = 3
for i in range(9):
    image = retrieved_examples['image'][i]
    plt.subplot(9 // columns + 1, columns, i + 1)
    plt.imshow(image)
```

<img src="assets/54_image_search_datasets/steam_engine_search_results.jpg">

Some of these results look fairly close to our input prompt. We can wrap this in a function so we can more easily play around with different prompts:

``` python
def get_image_from_text(text_prompt, number_to_retrieve=9):
    prompt = model.encode(text_prompt)
    scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=number_to_retrieve)
    plt.figure(figsize=(20, 20))
    columns = 3
    for i in range(9):
        image = retrieved_examples['image'][i]
        plt.subplot(9 // columns + 1, columns, i + 1)
        plt.imshow(image)
        plt.title(text_prompt)
```

``` python
get_image_from_text("An illustration of the sun behind a mountain")
```

<img src="assets/54_image_search_datasets/sun_behind_mountain.jpg">

### Trying a bunch of prompts ✨

Now that we have a function for getting a few results, we can try a bunch of different prompts:

- For some of these I'll choose prompts which are a broad 'category' i.e. 'a musical instrument' or 'an animal', others are specific i.e. 'a guitar'.
- Out of interest I also tried a boolean operator: "An illustration of a cat or a dog".
- Finally I tried something a little more abstract: "an empty abyss"

``` python
prompts = ["A musical instrument", "A guitar", "An animal", "An illustration of a cat or a dog", "an empty abyss"]
```

``` python
for prompt in prompts:
    get_image_from_text(prompt)
```

<img src="assets/54_image_search_datasets/musical_instrument.jpg">

<img src="assets/54_image_search_datasets/guitar.jpg">

<img src="assets/54_image_search_datasets/an_animal.jpg">

<img src="assets/54_image_search_datasets/cat_or_dog.jpg">

<img src="assets/54_image_search_datasets/an_empty_abyss.jpg">

We can see these results aren't always right, but they are usually reasonable. It already seems like this could be useful for searching for the semantic content of an image in this dataset. However, we might hold off on sharing this as is...

## Creating a Hugging Face Space? 🤷🏼

One obvious next step for this kind of project is to create a Hugging Face [Space](https://huggingface.co/spaces/launch) demo. This is what I've done for other [models](https://huggingface.co/spaces/BritishLibraryLabs/British-Library-books-genre-classifier-v2). It was a fairly simple process to get a [Gradio app setup](https://gradio.app/) from the point we got to here. Here is a screenshot of this app:

<img src="assets/54_image_search_datasets/spaces_image_search.jpg" alt="Screenshot of Gradio search app">

However, I'm a little bit wary about making this public straightaway. Looking at the model card for the CLIP model, we can see its primary intended uses:

> ### Primary intended uses
>
> We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
> [source](https://huggingface.co/openai/clip-vit-base-patch32) This is fairly close to what we are interested in here. Particularly we might be interested in how well the model deals with the kinds of images in our dataset (illustrations from mostly 19th century books). The images in our dataset are (probably) fairly different from the training data. The fact that some of the images also contain text might help CLIP since it displays some [OCR ability](https://openai.com/blog/clip/). However, looking at the out-of-scope use cases in the model card: > ### Out-of-Scope Use Cases > > Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case > currently potentially harmful. > [source](https://huggingface.co/openai/clip-vit-base-patch32) suggests that 'deployment' is not a good idea. Whilst the results I got are interesting, I haven't played around with the model enough yet (and haven't done anything more systematic to evaluate its performance and biases) to be confident about 'deploying' it. Another additional consideration is the target dataset itself. The images are drawn from books covering a variety of subjects and time periods. There are plenty of books which represent colonial attitudes and as a result some of the images included may represent certain groups of people in a negative way. This could potentially be a bad combo with a tool which allows any arbitrary text input to be encoded as a prompt. There may be ways around this issue but this will require a bit more thought. ## Conclusion Although we don't have a nice demo to show for it, we've seen how we can use `datasets` to: - load images into the new `Image` feature type - 'save' our work using `push_to_hub` and use this to move data between machines/sessions - create a `faiss` index for images that we can use to retrieve images from a text (or image) input.
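For reference, here is a rough end-to-end sketch that condenses the steps above into one place. It assumes a GPU is available, that you are logged in to the Hub, and that you are reusing the sample dataset pushed earlier in the post (swap in your own repo id as needed):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Load the dataset pushed to the Hub earlier (replace with your own repo id)
dataset = load_dataset("davanstrien/embellishments-sample", use_auth_token=True)
ds = dataset["train"]

# Embed every image with the CLIP model
model = SentenceTransformer("clip-ViT-B-32")
ds = ds.map(
    lambda batch: {"embeddings": model.encode(batch["image"], device="cuda")},
    batched=True,
    batch_size=32,
)

# Build the faiss index and query it with a text prompt
ds.add_faiss_index(column="embeddings")
query = model.encode("A steam engine")
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=9)
retrieved["image"][0]  # a PIL image you can display directly in a notebook
```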
5
0
hf_public_repos
hf_public_repos/blog/ray-tune.md
--- title: Hyperparameter Search with Transformers and Ray Tune thumbnail: /blog/assets/06_ray_tune/ray-hf.jpg authors: - user: ray-project guest: true --- # Hyperparameter Search with Transformers and Ray Tune ##### A guest blog post by Richard Liaw from the Anyscale team With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face [transformers](https://github.com/huggingface/transformers) library has become critical to the success and growth of natural language processing today. For any machine learning model to achieve good performance, users often need to implement some form of parameter tuning. Yet, nearly everyone ([1](https://medium.com/@prakashakshay90/fine-tuning-bert-model-using-pytorch-f34148d58a37), [2](https://mccormickml.com/2019/07/22/BERT-fine-tuning/#advantages-of-fine-tuning)) either ends up disregarding hyperparameter tuning or opting to do a simplistic grid search with a small search space. However, simple experiments are able to show the benefit of using an advanced tuning technique. Below is [a recent experiment run on a BERT](https://medium.com/distributed-computing-with-ray/hyperparameter-optimization-for-transformers-a-guide-c4e32c6c989b) model from [Hugging Face transformers](https://github.com/huggingface/transformers) on the [RTE dataset](https://aclweb.org/aclwiki/Textual_Entailment_Resource_Pool). Genetic optimization techniques like [PBT](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining) can provide large performance improvements compared to standard hyperparameter optimization techniques. <table> <tr> <td><strong>Algorithm</strong> </td> <td><strong>Best Val Acc.</strong> </td> <td><strong>Best Test Acc.</strong> </td> <td><strong>Total GPU min</strong> </td> <td><strong>Total $ cost</strong> </td> </tr> <tr> <td>Grid Search </td> <td>74% </td> <td>65.4% </td> <td>45 min </td> <td>$2.30 </td> </tr> <tr> <td>Bayesian Optimization +Early Stop </td> <td>77% </td> <td>66.9% </td> <td>104 min </td> <td>$5.30 </td> </tr> <tr> <td>Population-based Training </td> <td>78% </td> <td>70.5% </td> <td>48 min </td> <td>$2.45 </td> </tr> </table> If you’re leveraging [Transformers](https://github.com/huggingface/transformers), you’ll want to have a way to easily access powerful hyperparameter tuning solutions without giving up the customizability of the Transformers framework. ![alt_text](/blog/assets/06_ray_tune/ray-hf.jpg "image_tooltip") In the Transformers 3.1 release, [Hugging Face Transformers](https://github.com/huggingface/transformers) and [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) teamed up to provide a simple yet powerful integration. [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) is a popular Python library for hyperparameter tuning that provides many state-of-the-art algorithms out of the box, along with integrations with the best-of-class tooling, such as [Weights and Biases](https://wandb.ai/) and tensorboard. To demonstrate this new [Hugging Face](https://github.com/huggingface/transformers) + [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) integration, we leverage the [Hugging Face Datasets library](https://github.com/huggingface/datasets) to fine tune BERT on [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398). 
To run this example, please first run: **`pip install "ray[tune]" transformers datasets scipy sklearn torch`** Simply plug in one of Ray’s standard tuning algorithms by just adding a few lines of code. ```python from datasets import load_dataset, load_metric from transformers import (AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments) tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') dataset = load_dataset('glue', 'mrpc') metric = load_metric('glue', 'mrpc') def encode(examples): outputs = tokenizer( examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) def model_init(): return AutoModelForSequenceClassification.from_pretrained( 'distilbert-base-uncased', return_dict=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) # Evaluate during training and a bit more often # than the default to be able to prune bad trials early. # Disabling tqdm is a matter of preference. training_args = TrainingArguments( "test", evaluation_strategy="steps", eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) # Default objective is the sum of all metrics # when metrics are provided, so we have to maximize it. trainer.hyperparameter_search( direction="maximize", backend="ray", n_trials=10 # number of trials ) ``` By default, each trial will utilize 1 CPU, and optionally 1 GPU if available. You can leverage multiple [GPUs for a parallel hyperparameter search](https://docs.ray.io/en/latest/tune/user-guide.html#resources-parallelism-gpus-distributed) by passing in a `resources_per_trial` argument. You can also easily swap different parameter tuning algorithms such as [HyperBand](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler), [Bayesian Optimization](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html), [Population-Based Training](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining): To run this example, first run: **`pip install hyperopt`** ```python from ray.tune.suggest.hyperopt import HyperOptSearch from ray.tune.schedulers import ASHAScheduler trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) best_trial = trainer.hyperparameter_search( direction="maximize", backend="ray", # Choose among many libraries: # https://docs.ray.io/en/latest/tune/api_docs/suggestion.html search_alg=HyperOptSearch(metric="objective", mode="max"), # Choose among schedulers: # https://docs.ray.io/en/latest/tune/api_docs/schedulers.html scheduler=ASHAScheduler(metric="objective", mode="max")) ``` It also works with [Weights and Biases](https://wandb.ai/) out of the box! 
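As a quick aside on the `resources_per_trial` argument mentioned earlier, a minimal sketch of allocating one GPU (and a few CPUs) to each trial could look like the following — the exact numbers are illustrative, not recommendations:

```python
# Illustrative resource allocation: on a machine with e.g. 4 GPUs,
# this lets up to 4 trials run in parallel, each pinned to one GPU.
trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=10,
    resources_per_trial={"cpu": 4, "gpu": 1},
)
```

And here is the Weights and Biases integration mentioned above in action: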
![alt_text](/blog/assets/06_ray_tune/ray-wandb.png "image_tooltip") ### Try it out today: * `pip install -U ray` * `pip install -U transformers datasets` * Check out the [Hugging Face documentation](https://huggingface.co/transformers/) and [Discussion thread](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/10) * [End-to-end example of using Hugging Face hyperparameter search for text classification](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) If you liked this blog post, be sure to check out: * [Transformers + GLUE + Ray Tune example](https://docs.ray.io/en/latest/tune/examples/index.html#hugging-face-huggingface-transformers) * Our [Weights and Biases report](https://wandb.ai/amogkam/transformers/reports/Hyperparameter-Optimization-for-Huggingface-Transformers--VmlldzoyMTc2ODI) on Hyperparameter Optimization for Transformers * The [simplest way to serve your NLP model](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) from scratch
6
0
hf_public_repos
hf_public_repos/blog/large-language-models.md
--- title: "Large Language Models: A New Moore's Law?" thumbnail: /blog/assets/33_large_language_models/01_model_size.jpg authors: - user: juliensimon --- # Large Language Models: A New Moore's Law? A few days ago, Microsoft and NVIDIA [introduced](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) Megatron-Turing NLG 530B, a Transformer-based model hailed as "*the world’s largest and most powerful generative language model*." This is an impressive show of Machine Learning engineering, no doubt about it. Yet, should we be excited about this mega-model trend? I, for one, am not. Here's why. <kbd> <img src="assets/33_large_language_models/01_model_size.jpg"> </kbd> ### This is your Brain on Deep Learning Researchers estimate that the human brain contains an average of [86 billion neurons](https://pubmed.ncbi.nlm.nih.gov/19226510/) and 100 trillion synapses. It's safe to assume that not all of them are dedicated to language either. Interestingly, GPT-4 is [expected](https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/) to have about 100 trillion parameters... As crude as this analogy is, shouldn't we wonder whether building language models that are about the size of the human brain is the best long-term approach? Of course, our brain is a marvelous device, produced by millions of years of evolution, while Deep Learning models are only a few decades old. Still, our intuition should tell us that something doesn't compute (pun intended). ### Deep Learning, Deep Pockets? As you would expect, training a 530-billion parameter model on humongous text datasets requires a fair bit of infrastructure. In fact, Microsoft and NVIDIA used hundreds of DGX A100 multi-GPU servers. At $199,000 a piece, and factoring in networking equipment, hosting costs, etc., anyone looking to replicate this experiment would have to spend close to $100 million dollars. Want fries with that? Seriously, which organizations have business use cases that would justify spending $100 million on Deep Learning infrastructure? Or even $10 million? Very few. So who are these models for, really? ### That Warm Feeling is your GPU Cluster For all its engineering brilliance, training Deep Learning models on GPUs is a brute force technique. According to the spec sheet, each DGX server can consume up to 6.5 kilowatts. Of course, you'll need at least as much cooling power in your datacenter (or your server closet). Unless you're the Starks and need to keep Winterfell warm in winter, that's another problem you'll have to deal with. In addition, as public awareness grows on climate and social responsibility issues, organizations need to account for their carbon footprint. According to this 2019 [study](https://arxiv.org/pdf/1906.02243.pdf) from the University of Massachusetts, "*training BERT on GPU is roughly equivalent to a trans-American flight*". BERT-Large has 340 million parameters. One can only extrapolate what the footprint of Megatron-Turing could be... People who know me wouldn't call me a bleeding-heart environmentalist. Still, some numbers are hard to ignore. ### So? Am I excited by Megatron-Turing NLG 530B and whatever beast is coming next? No. Do I think that the (relatively small) benchmark improvement is worth the added cost, complexity and carbon footprint? No. Do I think that building and promoting these huge models is helping organizations understand and adopt Machine Learning ? No. 
I'm left wondering what's the point of it all. Science for the sake of science? Good old marketing? Technological supremacy? Probably a bit of each. I'll leave them to it, then. Instead, let me focus on pragmatic and actionable techniques that you can all use to build high quality Machine Learning solutions. ### Use Pretrained Models In the vast majority of cases, you won't need a custom model architecture. Maybe you'll *want* a custom one (which is a different thing), but there be dragons. Experts only! A good starting point is to look for [models](https://huggingface.co/models) that have been pretrained for the task you're trying to solve (say, [summarizing English text](https://huggingface.co/models?language=en&pipeline_tag=summarization&sort=downloads)). Then, you should quickly try out a few models to predict your own data. If metrics tell you that one works well enough, you're done! If you need a little more accuracy, you should consider fine-tuning the model (more on this in a minute). ### Use Smaller Models When evaluating models, you should pick the smallest one that can deliver the accuracy you need. It will predict faster and require fewer hardware resources for training and inference. Frugality goes a long way. It's nothing new either. Computer Vision practitioners will remember when [SqueezeNet](https://arxiv.org/abs/1602.07360) came out in 2017, achieving a 50x reduction in model size compared to [AlexNet](https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html), while meeting or exceeding its accuracy. How clever that was! Downsizing efforts are also under way in the Natural Language Processing community, using transfer learning techniques such as [knowledge distillation](https://en.wikipedia.org/wiki/Knowledge_distillation). [DistilBERT](https://arxiv.org/abs/1910.01108) is perhaps its most widely known achievement. Compared to the original BERT model, it retains 97% of language understanding while being 40% smaller and 60% faster. You can try it [here](https://huggingface.co/distilbert-base-uncased). The same approach has been applied to other models, such as Facebook's [BART](https://arxiv.org/abs/1910.13461), and you can try DistilBART [here](https://huggingface.co/models?search=distilbart). Recent models from the [Big Science](https://bigscience.huggingface.co/) project are also very impressive. As visible in this graph included in the [research paper](https://arxiv.org/abs/2110.08207), their T0 model outperforms GPT-3 on many tasks while being 16x smaller. <kbd> <img src="assets/33_large_language_models/02_t0.png"> </kbd> You can try T0 [here](https://huggingface.co/bigscience/T0pp). This is the kind of research we need more of! ### Fine-Tune Models If you need to specialize a model, there should be very few reasons to train it from scratch. Instead, you should fine-tune it, that is to say train it only for a few epochs on your own data. If you're short on data, maybe of one these [datasets](https://huggingface.co/datasets) can get you started. You guessed it, that's another way to do transfer learning, and it'll help you save on everything! * Less data to collect, store, clean and annotate, * Faster experiments and iterations, * Fewer resources required in production. In other words: save time, save money, save hardware resources, save the world! If you need a tutorial, the Hugging Face [course](https://huggingface.co/course) will get you started in no time. 
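To make the fine-tuning advice concrete, here is a minimal sketch of what specializing a small pretrained model on your own labeled text could look like with the Trainer API. The dataset name and hyperparameters below are placeholders meant to show the shape of the workflow, not recommendations:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a small, distilled checkpoint instead of training from scratch
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Swap in your own dataset; "imdb" is only a stand-in example
dataset = load_dataset("imdb")
encoded = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
)

# A few epochs on your own data is usually all the training you need
args = TrainingArguments("my-finetuned-model", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=encoded["train"], eval_dataset=encoded["test"])
trainer.train()
```

If the metrics on a held-out set look good enough, you can stop there; if not, a slightly larger checkpoint is the next thing to try before anything more exotic.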
### Use Cloud-Based Infrastructure Like them or not, cloud companies know how to build efficient infrastructure. Sustainability studies show that cloud-based infrastructure is more energy and carbon efficient than the alternative: see [AWS](https://sustainability.aboutamazon.com/environment/the-cloud), [Azure](https://azure.microsoft.com/en-us/global-infrastructure/sustainability), and [Google](https://cloud.google.com/sustainability). Earth.org [says](https://earth.org/environmental-impact-of-cloud-computing/) that while cloud infrastructure is not perfect, "[*it's] more energy efficient than the alternative and facilitates environmentally beneficial services and economic growth.*" Cloud certainly has a lot going for it when it comes to ease of use, flexibility and pay as you go. It's also a little greener than you probably thought. If you're short on GPUs, why not try fine-tune your Hugging Face models on [Amazon SageMaker](https://aws.amazon.com/sagemaker/), AWS' managed service for Machine Learning? We've got [plenty of examples](https://huggingface.co/docs/sagemaker/train) for you. ### Optimize Your Models From compilers to virtual machines, software engineers have long used tools that automatically optimize their code for whatever hardware they're running on. However, the Machine Learning community is still struggling with this topic, and for good reason. Optimizing models for size and speed is a devilishly complex task, which involves techniques such as: * Specialized hardware that speeds up training ([Graphcore](https://www.graphcore.ai/), [Habana](https://habana.ai/)) and inference ([Google TPU](https://cloud.google.com/tpu), [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/)). * Pruning: remove model parameters that have little or no impact on the predicted outcome. * Fusion: merge model layers (say, convolution and activation). * Quantization: storing model parameters in smaller values (say, 8 bits instead of 32 bits) Fortunately, automated tools are starting to appear, such as the [Optimum](https://huggingface.co/hardware) open source library, and [Infinity](https://huggingface.co/infinity), a containerized solution that delivers Transformers accuracy at 1-millisecond latency. ### Conclusion Large language model size has been increasing 10x every year for the last few years. This is starting to look like another [Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law). We've been there before, and we should know that this road leads to diminishing returns, higher cost, more complexity, and new risks. Exponentials tend not to end well. Remember [Meltdown and Spectre](https://meltdownattack.com/)? Do we want to find out what that looks like for AI? Instead of chasing trillion-parameter models (place your bets), wouldn't all be better off if we built practical and efficient solutions that all developers can use to solve real-world problems? *Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at [[email protected]](mailto:[email protected]) (no recruiters, no sales pitches, please).*
7
0
hf_public_repos
hf_public_repos/blog/meg-mitchell-interview.md
--- title: "Machine Learning Experts - Margaret Mitchell" thumbnail: /blog/assets/57_meg_mitchell_interview/thumbnail.png authors: - user: britneymuller --- # Machine Learning Experts - Margaret Mitchell Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is none other than [Margaret Mitchell](https://twitter.com/mmitchell_ai) (Meg for short). Meg founded & co-led Google’s Ethical AI Group, is a pioneer in the field of Machine Learning, has published over 50 papers, and is a leading researcher in Ethical AI. You’ll hear Meg talk about the moment she realized the importance of ethical AI (an incredible story!), how ML teams can be more aware of harmful data bias, and the power (and performance) benefits of inclusion and diversity in ML. <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=meg_interview_article"><img src="/blog/assets/57_meg_mitchell_interview/Meg-cta.png"></a> Very excited to introduce this powerful episode to you! Here’s my conversation with Meg Mitchell: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/FpIxYGyJBbs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ## Transcription: *Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.* ### Could you share a little bit about your background and what brought you to Hugging Face? **Dr. Margaret Mitchell’s Background:** - Bachelor’s in Linguistics at Reed College - Worked on NLP - Worked on assistive and augmentative technology after her Bachelor’s and also during her graduate studies - Master’s in Computational Linguistics at the University of Washington - PhD in Computer Science **Meg:** I did heavy statistical work as a postdoc at Johns Hopkins and then went to Microsoft Research where I continued doing vision to language generation that led to working on an app for people who are blind to navigate the world a bit easier called [Seeing AI](https://www.microsoft.com/en-us/ai/seeing-ai). After a few years at Microsoft, I left to work at Google to focus on big data problems inherent in deep learning. That’s where I started focusing on things like fairness, rigorous evaluation for different kinds of issues, and bias. While at Google, I founded and co-led the Ethical AI Team which focuses on inclusion and transparency. After four years at Google, I came over to Hugging Face where I was able to jump in and focus on coding. I’m helping to create protocols for ethical AI research, inclusive hiring, systems, and setting up a good culture here at Hugging Face. ### When did you recognize the importance of Ethical AI? **Meg:** This occurred when I was working at Microsoft while I was working on the assistance technology, Seeing AI. In general, I was working on generating language from images and I started to see was how lopsided data was. Data represents a subset of the world and it influences what a model will say. So I began to run into issues where white people would be described as ‘people’ and black people would be described as ‘black people’ as if white was a default and black was a marked characteristic. That was concerning to me. There was also an ah-ha moment when I was feeding my system a sequence of images, getting it to talk more about a story of what is happening. 
And I fed it some images of this massive blast where a lot of people worked, called the ‘Hebstad blast’. You could see that the person taking the picture was on the second or third story looking out on the blast. The blast was very close to this person. It was a very dire and intense moment and when I fed this to the system the system’s output was that “ this is awesome, this is a great view, this is beautiful’. And I thought.. this is a great view of this horrible scene but the important part here is that people may be dying. This is a massive destructive explosion. But the thing is, when you’re learning from images people don’t tend to take photos of terrible things, they take photos of sunsets, fireworks, etc., and a visual recognition model had learned on these images and believed that color in the sky was a positive, beautiful thing. At that moment, I realized that if a model with that sort of thinking had access to actions it would be just one hop away from a system that would blow up buildings because it thought it was beautiful. This was a moment for me when I realized I didn’t want to keep making these systems do better on benchmarks, I wanted to fundamentally shift how we were looking at these problems, how we were approaching data and analysis of data, how we were evaluating and all of the factors we were leaving out with these straightforward pipelines. So that really became my shift into ethical AI work. ### In what applications is data ethics most important? **Meg:** Human-centric technology that deals with people and identity (face recognition, pedestrian recognition). In NLP this would pertain more to the privacy of individuals, how individuals are talked about, and the biases models pick up with regards to descriptors used for people. ### How can ML teams be more aware of harmful bias? **Meg:** A primary issue is that these concepts haven't been taught and most teams simply aren’t aware. Another problem is the lack of a lexicon to contextualize and communicate what is going on. For example: - This is what marginalization is - This is what a power differential is - Here is what inclusion is - Here is how stereotypes work Having a better understanding of these pillars is really important. Another issue is the culture behind machine learning. It’s taken a bit of an ‘Alpha’ or ‘macho’ approach where the focus is on ‘beating’ the last numbers, making things ‘faster’, ‘bigger’, etc. There are lots of parallels that can be made to human anatomy. There’s also a very hostile competitiveness that comes out where you find that women are disproportionately treated as less than. Since women are often much more familiar with discrimination women are focusing a lot more on ethics, stereotypes, sexism, etc. within AI. This means it gets associated with women more and seen as less than which makes the culture a lot harder to penetrate. It’s generally assumed that I’m not technical. It’s something I have to prove over and over again. I’m called a linguist, an ethicist because these are things I care about and know about but that is treated as less-than. People say or think, “You don’t program, you don’t know about statistics, you are not as important,” and it’s often not until I start talking about things technically that people take me seriously which is unfortunate. There is a massive cultural barrier in ML. ### Lack of diversity and inclusion hurts everyone **Meg:** Diversity is when you have a lot of races, ethnicities, genders, abilities, statuses at the table. 
Inclusion is when each person feels comfortable talking, they feel welcome. One of the best ways to be more inclusive is to not be exclusive. Feels fairly obvious but is often missed. People get left out of meetings because we don’t find them helpful or find them annoying or combative (which is a function of various biases). To be inclusive you need to not be exclusive so when scheduling a meeting pay attention to the demographic makeup of the people you’re inviting. If your meeting is all-male, that’s a problem. It’s incredibly valuable to become more aware and intentional about the demographic makeup of the people you’re including in an email. But you’ll notice in tech, a lot of meetings are all male, and if you bring it up that can be met with a lot of hostility. Air on the side of including people. We all have biases but there are tactics to break some of those patterns. When writing an email I’ll go through their gender and ethnicities to ensure I’m being inclusive. It’s a very conscious effort. That sort of thinking through demographics helps. However, mention this before someone sends an email or schedules a meeting. People tend to not respond as well when you mention these things after the fact. ### Diversity in AI - Isn’t there proof that having a more diverse set of people on an ML project results in better outcomes? **Meg:** Yes, since you have different perspectives you have a different distribution over options and thus, more options. One of the fundamental aspects of machine learning is that when you start training you can use a randomized starting point and what kind of distribution you want to sample from. Most engineers can agree that you don’t want to sample from one little piece of the distribution to have the best chance of finding a local optimum. You need to translate this approach to the people sitting at the table. Just how you want to have a Gaussian approach over different start states, so too do you want that at the table when you’re starting projects because it gives you this larger search space making it easier to attain a local optimum. ### Can you talk about Model Cards and how that project came to be? **Meg:** This project started at Google when I first started working on fairness and what a rigorous evaluation of fairness would look like. In order to do that you need to have an understanding of context and understanding of who would use it. This revolved around how to approach model biases and it wasn’t getting a lot of pick up. I was talking to [Timnit Gebru](https://twitter.com/timnitGebru) who was at that time someone in the field with similar interest to me and she was talking about this idea of datasheets; a kind of documentation for data (based on her experience at Apple) doing engineering where you tend to have specifications of hardware. But we don’t have something similar for data and she was talking about how crazy that is. So Timnit had this idea of datasheets for datasets. It struck me that by having an ‘artifact’ people in tech who are motivated by launches would care a lot more about it. So if we say you have to produce this artifact and it will count as a launch suddenly people would be more incentivized to do it. The way we came up with the name was that a comparable word to ‘data sheet’ that could be used for models was card (plus it was shorter). Also decided to call it ‘model cards’ because the name was very generic and would have longevity over time. 
Timnit’s paper was called [‘Data Sheets for Datasets’](https://arxiv.org/abs/1803.09010). So we called ours [‘Model Cards for Model Reporting’](https://arxiv.org/abs/1810.03993) and once we had the published paper people started taking us more seriously. Couldn’t have done this without Timnit Gebru’s brilliance suggesting “You need an artifact, a standardized thing that people will want to produce.” ### Where are model cards headed? **Meg:** There’s a pretty big barrier to entry to do model cards in a way that is well informed by ethics. Partly because the people who need to fill this out are often engineers and developers who want to launch their model and don’t want to sit around thinking about documentation and ethics. Part of why I wanted to join Hugging Face is because it gave me an opportunity to standardize how these processes could be filled out and automated as much as possible. One thing I really like about Hugging Face is there is a focus on creating end-to-end machine learning processes that are as smooth as possible. Would love to do something like that with model cards where you could have something largely automatically generated as a function of different questions asked or even based on model specifications directly. We want to work towards having model cards as filled out as possible and interactive. Interactivity would allow you to see the difference in false-negative rate as you move the decision threshold. Normally with classification systems, you set some threshold at which you say yes or no, like .7, but in practice, you actually want to vary the decision threshold to trade off different errors. A static report of how well it works isn’t as informative as you want it to be because you want to know how well it works as different decision thresholds are chosen, and you could use that to decide what decision threshold to be used with your system. So we created a model card where you could interactively change the decision threshold and see how the numbers change. Moving towards that direction in further automation and interactivity is the way to go. ### Decision thresholds & model transparency **Meg:** When Amazon first started putting out facial recognition and facial analysis technology it was found that the gender classification was disproportionately bad for black women and Amazon responded by saying “this was done using the wrong decision threshold”. And then one of the police agencies who had been using one of these systems had been asked what decision threshold they had been using and said, “Oh we’re not using a decision threshold,”. Which was like oh you really don’t understand how this works and are using this out of the box with default parameter settings?! That is a problem. So minimally having this documentary brings awareness to decisions around the various types of parameters. Machine learning models are so different from other things we put out into the public. Toys, medicine, and cars have all sorts of regulations to ensure products are safe and work as intended. We don’t have that in machine learning, partly because it’s new so the laws and regulations don’t exist yet. It’s a bit like the wild west, and that’s what we’re trying to change with model cards. ### What are you working on at Hugging Face? - Working on a few different tools designed for engineers. - Working on philosophical and social science research: Just did a deep dive into UDHR (Universal Declaration of Human Rights) and how those can be applied with AI. 
Trying to help bridge the gaps between AI, ML, law, and philosophy. - Trying to develop some statistical methods that are helpful for testing systems as well as understanding datasets. - We also recently [put out a tool](https://huggingface.co/spaces/huggingface/data-measurements-tool) that shows how well a language maps to Zipfian distributions (how natural language tends to go) so you can test how well your model is matching with natural language that way. - Working a lot on the culture stuff: spending a lot of time on hiring and what processes we should have in place to be more inclusive. - Working on [Big Science](https://bigscience.huggingface.co/): a massive effort with people from all around the world, not just hugging face working on data governance (how can big data be used and examined without having it proliferate all over the world/being tracked with how it’s used). - Occasionally I’ll do an interview or talk to a Senator, so it’s all over the place. - Try to answer emails sometimes. *Note: Everyone at Hugging Face wears several hats.* :) ### Meg’s impact on AI Meg is featured in the book [Genius Makers ‘The Mavericks who brought AI to Google, Facebook, and the World’](https://www.amazon.com/Genius-Makers-Mavericks-Brought-Facebook/dp/1524742678). Cade Metz interviewed Meg for this while she was at Google. Meg’s pioneering research, systems, and work have played a pivotal role in the history of AI. (we are so lucky to have her at Hugging Face!) ### Rapid Fire Questions: ### Best piece of advice for someone looking to get into AI? **Meg:** Depends on who the person is. If they have marginalized characteristics I would give very different advice. For example, if it was a woman I would say, 'Don’t listen to your supervisors saying you aren’t good at this. Chances are you are just thinking about things differently than they are used to so have confidence in yourself.' If it’s someone with more majority characteristics I’d say, 'Forget about the pipeline problem, pay attention to the people around you and make sure that you hold them up so that the pipeline you’re in now becomes less of a problem.' Also, 'Evaluate your systems'. ### What industries are you most excited to see ML applied (or ML Ethics be applied) **Meg:** The health and assistive domains continue to be areas I care a lot about and see a ton of potential. Also want to see systems that help people understand their own biases. Lots of technology is being created to screen job candidates for job interviews but I feel that technology should really be focused on the interviewer and how they might be coming at the situation with different biases. Would love to have more technology that assists humans to be more inclusive instead of assisting humans to exclude people. ### You frequently include incredible examples of biased models in your Keynotes and interviews. One in particular that I love is the criminal detection model you've talked about that was using patterns of mouth angles to identify criminals (which you swiftly debunked). **Meg:** Yes, [the example is that] they were making this claim that there was this angle theta that was more indicative of criminals when it was a smaller angle. However, I was looking at the math and I realized that what they were talking about was a smile! Where you would have a wider angle for a smile vs a smaller angle associated with a straight face. They really missed the boat on what they were actually capturing there. Experimenter's bias: wanting to find things that aren’t there. 
### Should people be afraid of AI taking over the world? **Meg:** There are a lot of things to be afraid of with AI. I like to see it as we have a distribution over different kinds of outcomes, some more positive than others, so there’s not one set one that we can know. There are a lot of different things where AI can be super helpful and more task-based over more generalized intelligence. You can see it going in another direction, similar to what I mentioned earlier about a model thinking something destructive is beautiful is one hop away from a system that is able to press a button to set off a missile. Don’t think people should be scared per se, but they should think about the best and worst-case scenarios and try to mitigate or stop the worst outcomes. I think the biggest thing right now is these systems can widen the divide between the haves and have nots. Further giving power to people who have power and further worsening things for people who don’t. The people designing these systems tend to be people with more power and wealth and they design things for their kinds of interest. I think that’s happening right now and something to think about in the future. Hopefully, we can focus on the things that are most beneficial and continue heading in that direction. ### Fav ML papers? **Meg:** Most recently I’ve really loved what [Abeba Birhane](https://abebabirhane.github.io) has been doing on [values that are encoded in machine learning](https://arxiv.org/abs/2106.15590). My own team at Google had been working on [data genealogies](https://journals.sagepub.com/doi/full/10.1177/20539517211035955), bringing critical analysis on how ML data is handled which they have a few papers on - for example, [Data and its (dis)contents: A survey of dataset development and use in machine learning research](https://arxiv.org/abs/2012.05345). Really love that work and might be biased because it included my team and direct reports, I’m very proud of them but it really is fundamentally good work. Earlier papers that I’m interested in are more reflective of what I was doing at that time. Really love the work of [Herbert Clark](https://neurotree.org/beta/publications.php?pid=4636) who was a psycholinguistics/communications person and he did a lot of work that is easily ported to computational models about how humans communicate. Really love his work and cite him a lot throughout my thesis. ### Anything else you would like to mention? **Meg:** One of the things I’m working on, that I think other people should be working on, is lowering the barrier of entry to AI for people with different academic backgrounds. We have a lot of people developing technology, which is great, but we don’t have a lot of people in a situation where they can really question the technology because there is often a bottleneck. For example, if you want to know about data directly you have to be able to log into a server and write a SQL query. So there is a bottleneck where engineers have to do it and I want to remove that barrier. How can we take things that are fundamentally technical code stuff and open it up so people can directly query the data without knowing how to program? We will be able to make better technology when we remove the barriers that require engineers to be in the middle. ### Outro **Britney:** Meg had a hard stop on the hour but I was able to ask her my last question offline: What’s something you’ve been interested in lately? Meg’s response: "How to propagate and grow plants in synthetic/controlled settings." 
Just when I thought she couldn’t get any cooler. 🤯 I’ll leave you with a recent quote from Meg in a [Science News article on Ethical AI](https://www.sciencenews.org/article/computer-science-history-ethics-future-robots-ai): *“The most pressing problem is the diversity and inclusion of who’s at the table from the start. All the other issues fall out from there.” -Meg Mitchell.* Thank you for listening to Machine Learning Experts! <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=meg_interview_article"><img src="/blog/assets/57_meg_mitchell_interview/Meg-cta.png"></a> **Honorable mentions + links:** - [Emily Bender](https://twitter.com/emilymbender?lang=en) - [Ehud Reiter](https://mobile.twitter.com/ehudreiter) - [Abeba Birhane](https://abebabirhane.github.io/) - [Seeing AI](https://www.microsoft.com/en-us/ai/seeing-ai) - [Data Sheets for Datasets](https://arxiv.org/abs/1803.09010) - [Model Cards](https://modelcards.withgoogle.com/about) - [Model Cards Paper](https://arxiv.org/abs/1810.03993) - [Abeba Birhane](https://arxiv.org/search/cs?searchtype=author&query=Birhane%2C+A) - [The Values Encoded in Machine Learning Research](https://arxiv.org/abs/2106.15590) - [Data and its (dis)contents:](https://arxiv.org/abs/2012.05345) - [Herbert Clark](https://neurotree.org/beta/publications.php?pid=4636) **Follow Meg Online:** - [Twitter](https://twitter.com/mmitchell_ai) - [Website](http://www.m-mitchell.com) - [LinkedIn](https://www.linkedin.com/in/margaret-mitchell-9b13429)
8
0
hf_public_repos
hf_public_repos/blog/text-to-webapp.md
--- title: "Making a web app generator with open ML models" thumbnail: /blog/assets/153_text_to_webapp/thumbnail.jpg authors: - user: jbilcke-hf --- # Making a web app generator with open ML models As more code generation models become publicly available, it is now possible to do text-to-web and even text-to-app in ways that we couldn't imagine before. This tutorial presents a direct approach to AI web content generation by streaming and rendering the content all in one go. **Try the live demo here!** → **[Webapp Factory](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-wizardcoder)** ![main_demo.gif](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/153_text_to_webapp/main_demo.gif) ## Using LLM in Node apps While we usually think of Python for everything related to AI and ML, the web development community relies heavily on JavaScript and Node. Here are some ways you can use large language models on this platform. ### By running a model locally Various approaches exist to run LLMs in Javascript, from using [ONNX](https://www.npmjs.com/package/onnxruntime-node) to converting code to [WASM](https://blog.mithrilsecurity.io/porting-tokenizers-to-wasm/) and calling external processes written in other languages. Some of those techniques are now available as ready-to-use NPM libraries: - Using AI/ML libraries such as [transformers.js](https://huggingface.co/docs/transformers.js/index) (which supports [code generation](https://huggingface.co/docs/transformers.js/api/models#codegenmodelgenerateargs-codepromiseampltanyampgtcode)) - Using dedicated LLM libraries such as [llama-node](https://github.com/Atome-FE/llama-node) (or [web-llm](https://github.com/mlc-ai/web-llm) for the browser) - Using Python libraries through a bridge such as [Pythonia](https://www.npmjs.com/package/pythonia) However, running large language models in such an environment can be pretty resource-intensive, especially if you are not able to use hardware acceleration. ### By using an API Today, various cloud providers propose commercial APIs to use language models. Here is the current Hugging Face offering: The free [Inference API](https://huggingface.co/docs/api-inference/index) to allow anyone to use small to medium-sized models from the community. The more advanced and production-ready [Inference Endpoints API](https://huggingface.co/inference-endpoints) for those who require larger models or custom inference code. These two APIs can be used from Node using the [Hugging Face Inference API library](https://www.npmjs.com/package/@huggingface/inference) on NPM. 💡 Top performing models generally require a lot of memory (32 Gb, 64 Gb or more) and hardware acceleration to get good latency (see [the benchmarks](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). But we are also seeing a trend of models shrinking in size while keeping relatively good results on some tasks, with requirements as low as 16 Gb or even 8 Gb of memory. ## Architecture We are going to use NodeJS to create our generative AI web server. The model will be [WizardCoder-15B](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) running on the Inference Endpoints API, but feel free to try with another model and stack. 
If you are interested in other solutions, here are some pointers to alternative implementations: - Using the Inference API: [code](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-any-model/tree/main) and [space](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-any-model) - Using a Python module from Node: [code](https://huggingface.co/spaces/jbilcke-hf/template-node-ctransformers-express/tree/main) and [space](https://huggingface.co/spaces/jbilcke-hf/template-node-ctransformers-express) - Using llama-node (llama cpp): [code](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-llama-node/tree/main) ## Initializing the project First, we need to setup a new Node project (you can clone [this template](https://github.com/jbilcke-hf/template-node-express/generate) if you want to). ```html git clone https://github.com/jbilcke-hf/template-node-express tutorial cd tutorial nvm use npm install ``` Then, we can install the Hugging Face Inference client: ```html npm install @huggingface/inference ``` And set it up in `src/index.mts``: ```javascript import { HfInference } from '@huggingface/inference' // to keep your API token secure, in production you should use something like: // const hfi = new HfInference(process.env.HF_API_TOKEN) const hfi = new HfInference('** YOUR TOKEN **') ``` ## Configuring the Inference Endpoint 💡 **Note:** If you don't want to pay for an Endpoint instance to do this tutorial, you can skip this step and look at [this free Inference API example](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-any-model/blob/main/src/index.mts) instead. Please, note that this will only work with smaller models, which may not be as powerful. To deploy a new Endpoint you can go to the [Endpoint creation page](https://ui.endpoints.huggingface.co/new). You will have to select `WizardCoder` in the **Model Repository** dropdown and make sure that a GPU instance large enough is selected: ![new_endpoint.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/153_text_to_webapp/new_endpoint.jpg) Once your endpoint is created, you can copy the URL from [this page](https://ui.endpoints.huggingface.co): ![deployed_endpoints.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/153_text_to_webapp/deployed_endpoints.jpg) Configure the client to use it: ```javascript const hf = hfi.endpoint('** URL TO YOUR ENDPOINT **') ``` You can now tell the inference client to use our private endpoint and call our model: ```javascript const { generated_text } = await hf.textGeneration({ inputs: 'a simple "hello world" html page: <html><body>' }); ``` ## Generating the HTML stream It's now time to return some HTML to the web client when they visit a URL, say `/app`. We will create and endpoint with Express.js to stream the results from the Hugging Face Inference API. 
```javascript
import express from 'express'
import { HfInference } from '@huggingface/inference'

const hfi = new HfInference('** YOUR TOKEN **')
const hf = hfi.endpoint('** URL TO YOUR ENDPOINT **')

const app = express()
```

As we do not have any UI for the moment, the interface will be a simple URL parameter for the prompt:

```javascript
app.get('/', async (req, res) => {

  // send the beginning of the page to the browser (the rest will be generated by the AI)
  res.write('<html><head></head><body>')

  const inputs = `# Task
Generate ${req.query.prompt}
# Out
<html><head></head><body>`

  for await (const output of hf.textGenerationStream({
    inputs,
    parameters: {
      max_new_tokens: 1000,
      return_full_text: false,
    }
  })) {
    // stream the result to the browser
    res.write(output.token.text)

    // also print to the console for debugging
    process.stdout.write(output.token.text)
  }

  // close the response once generation is done
  res.end()
})

app.listen(3000, () => { console.log('server started') })
```

Start your web server:

```bash
npm run start
```

and open `http://localhost:3000?prompt=some%20prompt`. You should see some primitive HTML content after a few moments.

## Tuning the prompt

Each language model reacts differently to prompting. For WizardCoder, simple instructions often work best:

```javascript
const inputs = `# Task
Generate ${req.query.prompt}
# Orders
Write application logic inside a JS <script></script> tag.
Use a central layout to wrap everything in a <div class="flex flex-col items-center">
# Out
<html><head></head><body>`
```

### Using Tailwind

Tailwind is a popular CSS framework for styling content, and WizardCoder is good at it out of the box.

This allows code generation to create styles on the go without having to generate a stylesheet at the beginning or the end of the page (which would make the page feel stuck).

To improve results, we can also guide the model by showing the way (`<body class="p-4 md:p-8">`).

```javascript
const inputs = `# Task
Generate ${req.query.prompt}
# Orders
You must use TailwindCSS utility classes (Tailwind is already injected in the page).
Write application logic inside a JS <script></script> tag.
Use a central layout to wrap everything in a <div class="flex flex-col items-center">
# Out
<html><head></head><body class="p-4 md:p-8">`
```

### Preventing hallucination

Compared to larger general-purpose models, it can be difficult to reliably prevent hallucinations and failures (such as parroting back the whole instructions, or writing “lorem ipsum” placeholder text) on lighter models dedicated to code generation, but we can try to mitigate them.

You can use an imperative tone and repeat the instructions. Another effective approach is to show the model the way by providing part of the output in English:

```javascript
const inputs = `# Task
Generate ${req.query.prompt}
# Orders
Never repeat these instructions, instead write the final code!
You must use TailwindCSS utility classes (Tailwind is already injected in the page)!
Write application logic inside a JS <script></script> tag!
This is not a demo app, so you MUST use English, no Latin! Write in English!
Use a central layout to wrap everything in a <div class="flex flex-col items-center">
# Out
<html><head><title>App</title></head><body class="p-4 md:p-8">`
```

## Adding support for images

We now have a system that can generate HTML, CSS and JS code, but it is prone to hallucinating broken URLs when asked to produce images.

Luckily, we have a lot of options to choose from when it comes to image generation models!
→ The fastest way to get started is to call a Stable Diffusion model using our free [Inference API](https://huggingface.co/docs/api-inference/index) with one of the [public models](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery) available on the hub: ```javascript app.get('/image', async (req, res) => { const blob = await hf.textToImage({ inputs: `${req.query.caption}`, model: 'stabilityai/stable-diffusion-2-1' }) const buffer = Buffer.from(await blob.arrayBuffer()) res.setHeader('Content-Type', blob.type) res.setHeader('Content-Length', buffer.length) res.end(buffer) }) ``` Adding the following line to the prompt was enough to instruct WizardCoder to use our new `/image` endpoint! (you may have to tweak it for other models): ``` To generate images from captions call the /image API: <img src="/image?caption=photo of something in some place" /> ``` You can also try to be more specific, for example: ``` Only generate a few images and use descriptive photo captions with at least 10 words! ``` ![preview_image.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/153_text_to_webapp/preview_image.jpg) ## Adding some UI [Alpine.js](https://alpinejs.dev/) is a minimalist framework that allows us to create interactive UIs without any setup, build pipeline, JSX processing etc. Everything is done within the page, making it a great candidate to create the UI of a quick demo. Here is a static HTML page that you can put in `/public/index.html`: ```html <html> <head> <title>Tutorial</title> <script defer src="https://cdn.jsdelivr.net/npm/[email protected]/dist/cdn.min.js"></script> <script src="https://cdn.tailwindcss.com"></script> </head> <body> <div class="flex flex-col space-y-3 p-8" x-data="{ draft: '', prompt: '' }"> <textarea name="draft" x-model="draft" rows="3" placeholder="Type something.." class="font-mono" ></textarea> <button class="bg-green-300 rounded p-3" @click="prompt = draft">Generate</button> <iframe :src="`/app?prompt=${prompt}`"></iframe> </div> </body> </html> ``` To make this work, you will have to make some changes: ```javascript ... // going to localhost:3000 will load the file from /public/index.html app.use(express.static('public')) // we changed this from '/' to '/app' app.get('/app', async (req, res) => { ... ``` ## Optimizing the output So far we have been generating full sequences of Tailwind utility classes, which are great to give freedom of design to the language model. But this approach is also very verbose, consuming a large part of our token quota. To make the output more dense we can use [Daisy UI](https://daisyui.com/docs/use/), a Tailwind plugin which organizes Tailwind utility classes into a design system. The idea is to use shorthand class names for components and utility classes for the rest. Some language models may not have inner knowledge of Daisy UI as it is a niche library, in that case we can add an [API documentation](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-wizardcoder/blob/main/src/daisy.mts) to the prompt: ``` # DaisyUI docs ## To create a nice layout, wrap each article in: <article class="prose"></article> ## Use appropriate CSS classes <button class="btn .."> <table class="table .."> <footer class="footer .."> ``` ## Going further The final demo Space includes a [more complete example](https://huggingface.co/spaces/jbilcke-hf/webapp-factory-wizardcoder/blob/main/public/index.html) of user interface. 
Here are some ideas to extend this concept further:

- Test other language models such as [StarCoder](https://huggingface.co/blog/starcoder)
- Generate files and code for intermediary languages (React, Svelte, Vue...)
- Integrate code generation inside an existing framework (e.g. NextJS)
- Recover from failed or partial code generation (e.g. autofix issues in the JS)
- Connect it to a chatbot plugin (e.g. embed tiny webapp iframes in a chat discussion)

![main_demo.jpg](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/153_text_to_webapp/main_demo.jpg)
9
0
hf_public_repos
hf_public_repos/blog/debate.md
--- title: "Letting Large Models Debate: The First Multilingual LLM Debate Competition" authors: - user: xuanricheng guest: true org: BAAI - user: lilaczheng guest: true org: BAAI - user: xiyang99 guest: true org: BAAI - user: Yonghua guest: true org: BAAI - user: philokey guest: true org: BAAI - user: xuejing2409 guest: true org: BAAI - user: graykingw guest: true org: BAAI - user: daiteng01 guest: true org: BAAI - user: eyuansu71 guest: true org: BAAI - user: Lyfly2024 guest: true org: BAAI - user: xianbao org: Huggingface - user: clefourrier org: Huggingface --- # Letting Large Models Debate: The First Multilingual LLM Debate Competition Current static evaluations and user-driven arenas have exhibited their limitations and biases in the previous year. Here, we explore a novel way to evaluate LLMs: debate. Debate is an excellent way to showcase reasoning strength and language abilities, used all across history, from the debates in the Athenian Ecclesia in the 5th century BCE to today's World Universities Debating Championship. Do today's large language models exhibit debate skills similar to humans? Which model is currently the best at debating? What can we learn from models when they debate against one another? To answer this question, BAAI has created a ["Debate Arena"](https://debate.flageval.net/index.html#/debate), allowing large models to compete against each other. Currently, it supports debate competitions in English, Chinese, Arabic and Korean. <iframe src="https://debate.flageval.net/index.html#/debate?l=en" frameborder="0" style="width:100%;height:700px;"></iframe> [FlagEval-Debate](https://huggingface.co/spaces/FlagEval/FlagEval-Debate) ## Background: The Need to Redefine LLM Evaluation Protocols The advancement of multimodal and multilingual technologies has exposed the limitations of traditional static evaluation protocols in capturing LLMs’ performance in complex interactive scenarios. Inspired by OpenAI’s “AI Safety via Debate” framework—which emphasizes enhancing models’ reasoning and logic through multi-model interactions ([[1]](https://arxiv.org/abs/1805.00899))—BAAI’s FlagEval Debate platform introduces a dynamic evaluation methodology to address these limitations. Recent research has demonstrated the potential of multi-agent debates in improving models’ reasoning capabilities and factual accuracy. For example, studies have shown that multi-agent interactions can significantly enhance models’ consistency and accuracy in logical reasoning and factual judgments ([[2]](https://arxiv.org/abs/2305.14325)), while others have indicated that multi-model debates encourage models to generate more truthful and coherent responses ([[3]](https://arxiv.org/abs/2402.06782)). While existing platforms like LMSYS Chatbot Arena offer foundational settings for multi-model interactions, they present certain limitations in practical evaluation: - **Lack of Discriminative Power**: Many model confrontations result in stalemates, lacking significant performance differentials. This necessitates a large number of user votes to ensure statistical stability and accuracy, thereby reducing evaluation efficiency and hindering rapid comparison of models’ relative strengths and weaknesses. - **Isolated Generation Phenomenon**: In these platforms, models do not truly interact; instead, they independently generate responses based on user inputs without engaging with each other’s outputs. 
This unidirectional generation relies on random user dialogues, making it difficult to probe the boundaries of models’ reasoning and logic, and challenging to evaluate their performance in complex adversarial contexts. - **Potential for Vote Bias**: Since models often exhibit distinctive styles, user votes usually favor specific model generation styles or formats rather than evaluating the content itself. Without expert annotations, these biases can skew the evaluation results, complicating an objective assessment of model capabilities and diminishing the reliability of user-driven judgments. BAAI’s FlagEval Debate platform addresses these challenges by introducing genuine multi-model debates. Models engage in direct confrontations, showcasing their reasoning processes and depth. This allows evaluators to observe and compare differences in viewpoints, logical reasoning, and argumentation strategies among models. The platform’s multilingual support and real-time debugging capabilities enable users to study model strengths in realistic and interactive settings, ultimately providing more discriminative and in-depth evaluation results. ## Key Features and Innovations of FlagEval Debate ### Multilingual Support: Enabling Comprehensive Global Evaluation FlagEval Debate supports Chinese, English, Korean, and Arabic, encompassing a diversity of writing systems and languages. This multilingual capability provides a platform for models to perform in cross-cultural contexts and tests their adaptability and communication effectiveness across diverse linguistic environments. This addresses the global demand for multilingual LLM evaluation. ### Developer Customization: Flexible Model Configuration and Optimization To enhance fairness and flexibility, FlagEval Debate offers a developer customization feature, allowing participating model teams to fine-tune parameters, strategies, and dialogue styles based on their models’ characteristics and task requirements. This capability enables developers to optimize their models’ performance in debates, showcasing strengths while identifying areas for improvement. The real-time feedback loop fosters continuous optimization, allowing models to stand out in competitive evaluations. ### Dual Evaluation Metrics: Expert Reviews and User Feedback FlagEval Debate employs a unique dual evaluation system combining expert reviews with user feedback, assessing models from both technical and experiential perspectives: - **Expert Reviews**: We enlisted top-tier debate experts to rigorously evaluate models across dimensions such as logical reasoning, depth of argumentation, and linguistic expression. These experts provide objective, detailed assessments, ensuring that evaluation results possess high professional credibility. - **User Feedback**: Concurrently, the platform facilitates user participation through audience voting, where users can rate models based on personal preferences and interactive experiences. This feedback reflects the models’ acceptance and effectiveness in practical user interactions, complementing the expert evaluations. Integrating user perspectives ensures that evaluations are aligned with real-world application scenarios. ## Experimental Results: Assessing the Impact of Multi-Model Debates In Q3 2024, we conducted extensive experiments on the FlagEval Debate platform to evaluate the impact of multi-model debates on models’ logical reasoning and differentiated performance. The experiments yielded several critical insights: 1. 
**Most current models can engage in debate** Our experiments demonstrated that all participating models, including closed-source variants, could effectively engage in debate tasks. This indicates that models across different architectures and training paradigms possess the requisite capabilities for logical reasoning and interactive dialogue in multi-model settings. This broad applicability enhances the relevance of FlagEval Debate as a comprehensive evaluation platform. We have empirically observed that some small open-source models still face challenges maintaining coherence and staying on topic. 2. **Models exhibit strong differences under adversarial conditions** The interactive confrontations revealed significant variations in reasoning logic, argumentation techniques, and language use, especially under adversarial conditions. However, we are waiting for more expert evaluations before releasing these fine-grained findings. By facilitating direct interactions, FlagEval Debate allows for a more nuanced assessment of models’ logical reasoning, critical thinking, and rhetorical strategies, without relying on big pre-existing datasets. As illustrated in the figure below, even with just a few hundred matches, it’s evident that Model_7 has a notably higher win count, whereas Model_3 has accumulated more losses. In traditional arena-style and head-to-head evaluations, performance gaps between these models tend to be less pronounced, often requiring a large number of matches to yield conclusive results. Similar to the LMSYS Chatbot Arena, FlagEval-debate also receives evaluation requests from new model providers. As these providers are still in the process of fine-tuning their models, we maintain model anonymity until the fine-tuning is complete and a sufficient number of user votes and expert reviews have been obtained. <div align="center"> <img src="https://huggingface.co/datasets/FlagEval/documentation-images/resolve/main/model%20win%20matrix.png" alt="image"> </div> 3. **There is significant potential for model improvement** In the early stages of development, we observed that models would make different types of errors during debates. For instance, one model would generate content for both the affirmative and Negative sides simultaneously, while others would display forced agreement even when that was inappropriate. For example: ```json { "model_A": "Affirmative: I believe that a Dutch man fathering 1,000 children through sperm donation is a remarkable gesture. First, he provides hope and opportunities for couples who long for children but face challenges. Secondly, he demonstrates selflessness and kindness, showing a willingness to give part of his life for others. For the mothers carrying his children, it’s a valuable gift in their lives. His actions bring love and hope to the world. Negative: However, this action also raises moral and legal concerns. Having so many children from one individual could lead to family conflicts and legal disputes. Additionally, how will they handle the responsibilities of supporting and educating so many children? This could also place a burden on society. So, even if done with good intentions, the potential consequences of such actions need careful consideration.", } ``` We addressed these issues by refining the system prompt requirements and adjusting the input format, though each model still exhibited unique error patterns. At the same time, the experiments highlighted substantial opportunities for model enhancement. 
Performance variations indicated potential improvements in reasoning chains, linguistic expressiveness, and adversarial strategies. We encourage model creators to use our debate-tuning services to refine and optimize their models’ debate performance! 4. **Stronger evaluation: combining expert feedback with user preferences** Our findings also suggest that relying exclusively on user preferences for evaluation can introduce biases and limit the accuracy of performance assessments. To mitigate this, FlagEval Debate’s dual evaluation system integrates expert reviews to provide a balanced perspective. By combining reliable expert assessments with subjective user feedback, we aim to ensure a more equitable and comprehensive evaluation, aligning results with both technical expertise and human preference. ## How to add your model into this competition? The planning of the large-scale model debate is illustrated as follows. ![image](https://huggingface.co/datasets/FlagEval/documentation-images/resolve/main/debate%20schedule.png) Preliminary experiments indicate that the performance of the participating debate models will significantly improve after timely optimization. Model providers and creators are welcome to click the link [Debate Competition Registration Form](https://jwolpxeehx.feishu.cn/share/base/form/shrcnanu35NqOKaefVMUJKv6JYg) or send an email to [[email protected]](mailto:[email protected]). By registering for the model debate evaluation, FlagEval will provide free model debate debugging services. The following companies have already participated in our debate: | Company | Model | Debugging Method | |-------------|---------------------|-------------------| | OpenAI | o1-preview | Self-debugged | | OpenAI | o1-mini | Self-debugged | | OpenAI | GPT-4o-mini | Self-debugged | | OpenAI | GPT-4o | Self-debugged | | Anthropic | claude-3-5-sonnet | Self-debugged | | Stepfun | step-2-16k-f | Provider-debugged| | Baidu | ERNIE-4.0-Turbo | Provider-debugged| | ByteDance | Doubao-pro | Provider-debugged| | Alibaba | qwen2.5-72b-instruct| Self-debugged | | Tencent | Hunyuan-Turbo | Provider-debugged| | 01.AI | Yi-Lightning | Self-debugged | | Zhipu AI | GLM-4-plus | Provider-debugged| | DeepSeek | DeepSeek_V2.5 | Self-debugged | - **Self-debugged**: Denotes models we configured and optimized for debate. - **Provider-debugged**: Denotes models that were debugged and optimized by the model providers themselves. ## Conclusion FlagEval Debate represents a significant advancement in LLM evaluation methodologies. By incorporating multilingual support, developer customization, and a dual evaluation system, it offers a robust framework for assessing models in interactive, real-world scenarios. Moving forward, BAAI is committed to refining this platform to foster innovation, enhance evaluation methodologies, and drive standardization in AI practices. The goal is to provide developers and users with a forward-looking evaluation ecosystem, accelerating the evolution and deployment of advanced large language models. ## About BAAI & FlagEval The Beijing Academy of Artificial Intelligence ([BAAI](https://huggingface.co/BAAI)) was established in November 2018 as a pioneering non-profit research institution, primarily focusing on original innovation and core technologies in artificial intelligence. Its aim is to drive revolutionary and disruptive breakthroughs in AI theory, methods, tools, systems, and applications. 
Within BAAI, [FlagEval](https://huggingface.co/FlagEval) is a dedicated team specializing in the evaluation of large-scale AI models. FlagEval launched its [large model evaluation platform](https://flageval.baai.ac.cn/#/home?l=en) in 2023, and has since covered over 800 models globally. The platform evaluates over 40 dimensions, including reasoning, mathematical skills, and task-solving abilities. FlagEval recently launched new platforms for [model-to-model competition](https://arena.flageval.net/index.html#/arena-page?l=en), further strengthening its evaluation framework and advancing AI evaluation methodologies. It has been recognized as a leader in advancing AI evaluation standards and fostering collaboration between academia and industry. ## References [1] Irving G, Christiano P, Amodei D. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018. [2] [ICML 2024] Du Y, Li S, Torralba A, et al. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. [3] [ICML 2024 Best] Khan A, Hughes J, Valentine D, et al. Debating with more persuasive llms leads to more truthful answers. arXiv preprint arXiv:2402.06782, 2024.
0
0
hf_public_repos
hf_public_repos/blog/intel-starcoder-quantization.md
--- title: "Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding" thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png authors: - user: ofirzaf guest: true - user: echarlaix - user: imargulis guest: true - user: danielkorat guest: true - user: jmamou guest: true - user: guybd guest: true - user: orenpereg guest: true - user: moshew guest: true - user: Haihao guest: true - user: aayasin guest: true - user: FanZhao guest: true --- # Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding ## Introduction Recently, code generation models have become very popular, especially with the release of state-of-the-art open-source models such as BigCode’s [StarCoder](https://huggingface.co/blog/starcoder) and Meta AI’s [Code Llama](https://ai.meta.com/blog/code-llama-large-language-model-coding). A growing number of works focuses on making Large Language Models (LLMs) more optimized and accessible. In this blog, we are happy to share the latest results of LLM optimization on Intel Xeon focusing on the popular code generation LLM, StarCoder. The StarCoder Model is a cutting-edge LLM specifically designed for assisting the user with various coding tasks such as code completion, bug fixing, code summarization, and even generating code snippets from natural language descriptions. The StarCoder model is a member of the StarCoder family which includes the StarCoderBase variant as well. These Large Language Models for Code (Code LLMs) are trained on permissively licensed data from GitHub, including over 80 programming languages, Git commits, GitHub issues, and Jupyter notebooks. In this work we show more than 7x inference acceleration of StarCoder-15B model on Intel 4th generation Xeon by integrating 8bit and 4bit quantization with [assisted generation](https://huggingface.co/blog/assisted-generation). Try out our [demo](https://huggingface.co/spaces/Intel/intel-starcoder-playground) on Hugging Face Spaces that is being run on a 4th Generation Intel Xeon Scalable processor. <figure class="image table text-center m-0 w-full"> <video alt="Generating DOI" style="max-width: 90%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/starcoder-demo.mov" type="video/mp4"> </video> </figure> ## Step 1: Baseline and Evaluation We establish our baseline using StarCoder (15B) coupled with PyTorch and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX). There are several datasets designed to evaluate the quality of automated code completion. In this work, we use the popular [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset to evaluate the model’s quality and performance. HumanEval consists of 164 programming problems, in the form of a function signature with a docstring and the model completes the function’s code. The average length of the prompt is 139. We measure the quality using Bigcode Evaluation Harness and report the pass@1 metric. We measure model performance by measuring the Time To First Token (TTFT) and Time Per Output Token (TPOT) on the HumanEval test set and report the average TTFT and TPOT. The 4th generation Intel Xeon processors feature AI infused acceleration known as Intel® Advanced Matrix Extensions (Intel® AMX). 
Specifically, it has built-in [BFloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) (BF16) and Int8 GEMM accelerators in every core to accelerate deep learning training and inference workloads. AMX accelerated inference is introduced through PyTorch 2.0 and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) (IPEX) in addition to other optimizations for various common operators used in LLM inference (e.g. layer normalization, SoftMax, scaled dot product). As the starting point we use out-of-the-box optimizations in PyTorch and IPEX to perform inference using a BF16 model. Figure 1 shows the latency of the baseline model and Tables 1 and 2 show its latency as well as its accuracy. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/latency_baseline_model.png" alt="baseline latency" style="width: 70%; height: auto;"><br> <em>Figure 1. Latency of the baseline model.</em> </p> ### LLM Quantization Text generation in LLMs is performed in an auto-regressive manner thus requiring the entire model to be loaded from memory to the CPU for each new token generation. We find that the bandwidth between the off-chip memory (DRAM) and the CPU poses the biggest bottleneck in the token generation process. Quantization is a popular approach for mitigating this issue. It reduces model size and hence decreases model weights loading time. In this work we focus on two types of quantization: 1. Weight Only Quantization (WOQ) - the weights of the model being quantized but not the activations while computation is performed in higher precision (e.g. BF16) which requires dequantization. 2. Static Quantization (SQ) - both the weights and the activations are quantized. This quantization process includes pre-calculating the quantization parameters through a calibration step which enables the computation to be executed in lower precision (e.g. INT8). Figure 2 shows the INT8 static quantization computation process. ## Step 2: 8bit Quantization (INT8) [SmoothQuant](https://huggingface.co/blog/generative-ai-models-on-intel-cpu) is a post training quantization algorithm that is used to quantize LLMs for INT8 with minimal accuracy loss. Static quantization methods were shown to be underperforming on LLMs due to large magnitude outliers found in specific channels of the activations. Since activations are quantized token-wise, static quantization results in either truncated outliers or underflowed low-magnitude activations. SmoothQuant algorithm solves this problem by introducing a pre-quantization phase where additional smoothing scaling factors are applied to both activations and weights which smooths the outliers in the activations and ensures better utilization of the quantization levels. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/int8_diagram.png" alt="INT8 quantization" style="width: 70%; height: auto;"><br> <em>Figure 2. Computation diagram for INT8 static quantization.</em> </p> Using IPEX, we apply SmoothQuant to the StarCoder model. We used the test split of the [MBPP](https://huggingface.co/datasets/nuprl/MultiPL-E) dataset as our calibration dataset and introduced Q8-StarCoder. Our evaluation shows that Q8-StarCoder holds no accuracy loss over the baseline (if fact, there is even a slight improvement). 
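To make the smoothing step more concrete, here is a small illustrative sketch of the idea in plain PyTorch. This is not the IPEX implementation: the `alpha` migration-strength value, the toy tensor shapes, and the calibration statistic are assumptions made purely for demonstration.

```python
import torch

def smoothquant_scales(act_abs_max: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel smoothing factors: s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)."""
    weight_abs_max = weight.abs().amax(dim=0)            # per input channel, shape [in_features]
    scales = act_abs_max.pow(alpha) / weight_abs_max.pow(1 - alpha)
    return scales.clamp(min=1e-5)

# Toy linear layer: y = x @ W.T
in_features, out_features = 8, 4
x = torch.randn(2, in_features) * torch.tensor([1, 1, 1, 50, 1, 1, 1, 1.0])  # channel 3 carries outliers
weight = torch.randn(out_features, in_features)

act_abs_max = x.abs().amax(dim=0)                        # calibration statistic, per input channel
s = smoothquant_scales(act_abs_max, weight)

# Fold the scales: activations are divided, weights multiplied, so the matmul result is unchanged
x_smooth = x / s
w_smooth = weight * s                                    # broadcasts over input channels

assert torch.allclose(x @ weight.T, x_smooth @ w_smooth.T, atol=1e-4)
print("activation outlier magnitude before/after smoothing:",
      x.abs().amax().item(), x_smooth.abs().amax().item())
```

Because the scales are folded into the weights offline, and the activation division can typically be fused into the preceding operation, the smoothing itself adds essentially no inference cost while making the activations far easier to quantize to INT8.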
In terms of performance, Q8-StarCoder achieves **~2.19x** speedup in TTFT and **~2.20x** speedup in TPOT. Figure 3 shows the latency (TPOT) of Q8-StarCoder compared to the BF16 baseline model. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/latency_int8_model.png" alt="INT8 latency" style="width: 70%; height: auto;"><br> <em>Figure 3. Latency speedup of 8-bit quantized model.</em> </p> ## Step 3: 4bit Quantization (INT4) Although INT8 decreases the model size by 2x compared to BF16 (8 bits per weight compared to 16 bits), the memory bandwidth is still the largest bottleneck. To further decrease the model’s loading time from the memory, we quantized the model’s weights to 4 bits using WOQ. Note that 4bit WOQ requires dequantization to 16bit before the computation (Figure 4) which means that there is a compute overhead. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/int4_diagram.png" alt="INT4 quantization" style="width: 70%; height: auto;"><br> <em>Figure 4. Computation diagram for model quantized to INT4.</em> </p> Tensor-wise asymmetric Round To Nearest (RTN) quantization, a basic WOQ technique, poses challenges and often results in accuracy reduction, however it was shown in the [literature](https://arxiv.org/pdf/2206.01861.pdf) (Zhewei Yao, 2022) that groupwise quantization of the model’s weights helps in retaining accuracy. To avoid accuracy degradation, we perform 4-bit quantization in groups (e.g. 128) of consequent values along the input channel, with scaling factors calculated per group. We found that groupwise 4bit RTN is sufficient to retain StarCoder’s accuracy on the HumanEval dataset. The 4bit model achieves **3.35x** speedup in TPOT compared to the BF16 baseline (figure 5), however it suffers from expected slowdown of 0.84x in TTFT (Table 1) due to the overhead of dequantizing the 4bit to 16bit before computation. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/latency_int4_model.png" alt="INT4 latency" style="width: 70%; height: auto;"><br> <em>Figure 5. Latency speedup of 4-bit quantized model.</em> </p> ## Different Bottlenecks between Generating the First Token and Subsequent Tokens The initial step of generating the first token, which involves parallel processing of the entire input prompt, demands significant computational resources when the prompt length is high. Computation, therefore, becomes the bottleneck in this stage. Hence, switching from BF16 to INT8 precision for this process improves the performance compared to the baseline (and to 4bit WOQ which involves compute overhead in the form of dequantization). However, starting from the second step, when the system generates the rest of the tokens one by one in an autoregressive manner, the model is loaded from the memory again and again for each new generated token. As a result, the bottleneck becomes memory bandwidth, rather than the number of calculations (FLOPS) performed and therefore INT4 outperforms INT8 and BF16. 
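For readers who want to see what the group-wise scheme from Step 3 boils down to, below is a simplified sketch of asymmetric round-to-nearest 4-bit quantization with per-group scales, written in plain PyTorch. It is not the optimized IPEX kernel: the group size, the asymmetric zero-point formulation, and the unpacked `uint8` storage are simplifications chosen for clarity.

```python
import torch

def quantize_rtn_4bit_groupwise(weight: torch.Tensor, group_size: int = 128):
    """Round-to-nearest 4-bit quantization with one (scale, zero_point) per group of
    `group_size` consecutive values along the input-channel dimension."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    w = weight.reshape(out_features, in_features // group_size, group_size)

    w_min = w.amin(dim=-1, keepdim=True)
    w_max = w.amax(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-8) / 15.0        # 4 bits -> 16 levels (0..15)
    zero_point = torch.round(-w_min / scale)

    # Real kernels pack two 4-bit values per byte; we keep one value per uint8 for clarity
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, 15).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point, original_shape):
    # This is the extra work done at inference time before the 16-bit matmul,
    # i.e. the compute overhead responsible for the TTFT slowdown of 4bit WOQ.
    w = (q.float() - zero_point) * scale
    return w.reshape(original_shape)

weight = torch.randn(4096, 4096)   # toy weight matrix, not StarCoder's actual shapes
q, scale, zp = quantize_rtn_4bit_groupwise(weight)
w_hat = dequantize(q, scale, zp, weight.shape)
print("mean absolute quantization error:", (weight - w_hat).abs().mean().item())
```

The `dequantize` step is precisely why INT4 WOQ wins on TPOT (memory-bound) while losing slightly on TTFT (compute-bound), as discussed above.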
## Step 4: Assisted Generation (AG) Another method to mitigate the high inference latency and alleviate the memory bandwidth bottleneck issue is [Assisted generation](https://huggingface.co/blog/assisted-generation) (AG) which is a practical implementation of [speculative decoding](https://arxiv.org/pdf/2211.17192.pdf). AG mitigates this issue by better balancing memory and computational operations. It relies on the premise that a smaller and faster assistant draft model often generates the same tokens as a larger target model. AG uses a small, fast draft model to greedily generate K candidate tokens. These output tokens are generated much faster, but some of them may not resemble the output tokens of the original target model. Hence, in the next step, the target model checks the validity of all K candidate tokens in parallel in a single forward pass. This process speeds up the decoding since the latency of parallel decoding of K tokens is smaller than generating K tokens autoregressively. For accelerating StarCoder, we use [bigcode/tiny_starcoder_py](https://huggingface.co/bigcode/tiny_starcoder_py) as the draft model. This model shares a similar architecture with StarCoder but includes only 164M parameters - **~95x** smaller than StarCoder, and thus much faster. To achieve an even greater speedup, in addition to quantizing the target model, we apply quantization to the draft model as well. We consider both 8bit SmoothQuant and 4bit WOQ quantization for the draft and target models. When evaluating both quantization options for the draft and target models, we found that 8bit SmoothQuant for both models yielded the best results: **~7.30x** speedup in TPOT (Figure 6). These quantization choices are backed up by the following observations: 1. Draft model quantization: when using 8bit quantized StarCoder with 164M parameters as draft model, the model mostly fits in the CPU cache. As a result, the memory bandwidth bottleneck is alleviated, as token generation occurs without repeatedly reading the target model from off-chip memory for each token. In this case, there is no memory bottleneck, and we see better speedup with StarCoder-164M quantized to 8bit in comparison to StarCoder-164M quantized to 4bit WOQ. We note that 4bit WOQ holds an advantage where memory bandwidth is the bottleneck because of its smaller memory footprint, however 4bit comes with a compute overhead due to the requirement to perform 4bit to 16bit dequantization before the computation. 2. Target model quantization: in assisted generation, the target model processes a sequence of K tokens that were generated by the draft model. Forwarding K tokens at once (in parallel) through the target model instead of applying the “standard” sequential autoregressive processing, shifts the balance from memory bandwidth to compute bottleneck. Therefore, we observed that using an 8bit quantized target model yields higher speedups than using a 4bit model because of the additional compute overhead that stems from dequantization of every single value from 4bit to 16bit. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/174_intel_quantization_starcoder/latency_int8_ag_model.png" alt="IN8 AG" style="width: 70%; height: auto;"><br> <em>Figure 6. 
Latency speedup of optimized model.</em> </p> | StarCoder | Quantization | Precision | HumanEval (pass@1)| TTFT (ms) | TTFT Speedup | TPOT (ms) | TPOT Speedup | | --------- | ------------ | --------- | ----------------- | --------- | ------------ | --------- | ------------ | | Baseline | None | A16W16 | 33.54 | 357.9 | 1.00x | 181.0 | 1.00x | | INT8 | SmoothQuant | A8W8 | 33.96 | 163.4 | 2.19x | 82.4 | 2.20x | | INT4 | RTN (g128) | A16W4 | 32.80 | 425.1 | 0.84x | 54.0 | 3.35x | |INT8 + AG | SmoothQuant | A8W8 | 33.96 | 183.6 | 1.95x | 24.8 | 7.30x | Table 1: Accuracy and latency measurements of the StarCoder model on Intel 4th Gen Xeon To load the resulting models and run inference, you can just replace your `AutoModelForXxx` class with the corresponding `IPEXModelForXxx` class from [`optimum-intel`](https://github.com/huggingface/optimum-intel). Before you begin, make sure you have all the necessary libraries installed : ``` pip install --upgrade-strategy eager optimum[ipex] ``` ```diff - from transformers import AutoModelForCausalLM + from optimum.intel import IPEXModelForCausalLM from transformers import AutoTokenizer, pipeline - model = AutoModelForCausalLM.from_pretrained(model_id) + model = IPEXModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) results = pipe("He's a dreadful magician and") ```
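If you would like to experiment with the assisted-generation recipe from Step 4 yourself, the sketch below uses the `assistant_model` argument of `generate()` in 🤗 Transformers together with the same draft model used in this post. Note that this is a minimal example built on the standard `transformers` classes rather than the quantized IPEX pipeline shown above, and the checkpoint identifiers, dtype, and prompt are placeholders you may need to adapt (for instance, `bigcode/starcoder` is a gated checkpoint that requires accepting its license).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target and draft checkpoints (placeholders; any pair sharing a tokenizer works)
target_id = "bigcode/starcoder"
draft_id = "bigcode/tiny_starcoder_py"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16)
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# Assisted generation: the draft model proposes candidate tokens,
# and the target model validates them in a single forward pass.
outputs = target.generate(
    **inputs,
    assistant_model=draft,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```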
1
0
hf_public_repos
hf_public_repos/blog/embedding-quantization.md
--- title: "Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval" thumbnail: /blog/assets/embedding-quantization/thumbnail.png authors: - user: aamirshakir guest: true - user: tomaarsen - user: SeanLee97 guest: true --- # Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval We introduce the concept of embedding quantization and showcase their impact on retrieval speed, memory usage, disk space, and cost. We'll discuss how embeddings can be quantized in theory and in practice, after which we introduce a [demo](https://huggingface.co/spaces/sentence-transformers/quantized-retrieval) showing a real-life retrieval scenario of 41 million Wikipedia texts. ## Table of Contents * [Why Embeddings?](#why-embeddings) + [Embeddings may struggle to scale](#embeddings-may-struggle-to-scale) * [Improving scalability](#improving-scalability) + [Binary Quantization](#binary-quantization) - [Binary Quantization in Sentence Transformers](#binary-quantization-in-sentence-transformers) - [Binary Quantization in Vector Databases](#binary-quantization-in-vector-databases) + [Scalar (int8) Quantization](#scalar-int8-quantization) - [Scalar Quantization in Sentence Transformers](#scalar-quantization-in-sentence-transformers) - [Scalar Quantization in Vector Databases](#scalar-quantization-in-vector-databases) + [Combining Binary and Scalar Quantization](#combining-binary-and-scalar-quantization) + [Quantization Experiments](#quantization-experiments) + [Influence of Rescoring](#influence-of-rescoring) - [Binary Rescoring](#binary-rescoring) - [Scalar (Int8) Rescoring](#scalar-int8-rescoring) - [Retrieval Speed](#retrieval-speed) + [Performance Summarization](#performance-summarization) + [Demo](#demo) + [Try it yourself](#try-it-yourself) + [Future work:](#future-work) + [Acknowledgments](#acknowledgments) + [Citation](#citation) + [References](#references) ## Why Embeddings? Embeddings are one of the most versatile tools in natural language processing, supporting a wide variety of settings and use cases. In essence, embeddings are numerical representations of more complex objects, like text, images, audio, etc. Specifically, the objects are represented as n-dimensional vectors. After transforming the complex objects, you can determine their similarity by calculating the similarity of the respective embeddings! This is crucial for many use cases: it serves as the backbone for recommendation systems, retrieval, one-shot or few-shot learning, outlier detection, similarity search, paraphrase detection, clustering, classification, and much more. ### Embeddings may struggle to scale However, embeddings may be challenging to scale for production use cases, which leads to expensive solutions and high latencies. Currently, many state-of-the-art models produce embeddings with 1024 dimensions, each of which is encoded in `float32`, i.e., they require 4 bytes per dimension. To perform retrieval over 250 million vectors, you would therefore need around 1TB of memory! The table below gives an overview of different models, dimension size, memory requirement, and costs. Costs are computed at an estimated $3.8 per GB/mo with `x2gd` instances on AWS. 
| Embedding Dimension | Example Models | 100M Embeddings | 250M Embeddings | 1B Embeddings | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------|-----------------|-------------------------| | 384 | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)<br>[bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 143.05GB<br>$543 / mo | 357.62GB<br>$1,358 / mo | 1430.51GB<br>$5,435 / mo | | 768 | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)<br>[bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)<br>[jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)<br>[nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) |286.10GB<br>$1,087 / mo|715.26GB<br>$2,717 / mo|2861.02GB<br>$10,871 / mo| | 1024 | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)<br>[mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)<br>[Cohere-embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/) |381.46GB<br>$1,449 / mo|953.67GB<br>$3,623 / mo|3814.69GB<br>$14,495 / mo| | 1536 | [OpenAI text-embedding-3-small](https://openai.com/blog/new-embedding-models-and-api-updates) |572.20GB<br>$2,174 / mo|1430.51GB<br>$5,435 / mo|5722.04GB<br>$21,743 / mo| | 3072 | [OpenAI text-embedding-3-large](https://openai.com/blog/new-embedding-models-and-api-updates) |1144.40GB<br>$4,348 / mo|2861.02GB<br>$10,871 / mo|11444.09GB<br>$43,487 / mo| ## Improving scalability There are several ways to approach the challenges of scaling embeddings. The most common approach is dimensionality reduction, such as [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis). However, classic dimensionality reduction -- like PCA methods -- [tends to perform poorly when used with embeddings](https://arxiv.org/abs/2205.11498). In recent news, [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) ([blogpost](https://huggingface.co/blog/matryoshka)) (MRL) as used by [OpenAI](https://openai.com/blog/new-embedding-models-and-api-updates) also allows for cheaper embeddings. With MRL, only the first `n` embedding dimensions are used. This approach has already been adopted by some open models like [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) and [mixedbread-ai/mxbai-embed-2d-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-2d-large-v1) (For OpenAIs `text-embedding-3-large`, we see a performance retention of 93.1% at 12x compression. For nomic's model, we retain 95.8% of performance at 3x compression and 90% at 6x compression.). However, there is another new approach to achieve progress on this challenge; it does not entail dimensionality reduction, but rather a reduction in the size of each of the individual values in the embedding: **Quantization**. Our experiments on quantization will show that we can maintain a large amount of performance while significantly speeding up computation and saving on memory, storage, and costs. Let's dive into it! ### Binary Quantization Unlike quantization in models where you reduce the precision of weights, quantization for embeddings refers to a post-processing step for the embeddings themselves. 
In particular, binary quantization refers to the conversion of the `float32` values in an embedding to 1-bit values, resulting in a 32x reduction in memory and storage usage. To quantize `float32` embeddings to binary, we simply threshold normalized embeddings at 0: $$ f(x)= \begin{cases} 0 & \text{if } x\leq 0\\ 1 & \text{if } x \gt 0 \end{cases} $$ We can use the Hamming Distance to retrieve these binary embeddings efficiently. This is the number of positions at which the bits of two binary embeddings differ. The lower the Hamming Distance, the closer the embeddings; thus, the more relevant the document. A huge advantage of the Hamming Distance is that it can be easily calculated with 2 CPU cycles, allowing for blazingly fast performance. [Yamada et al. (2021)](https://arxiv.org/abs/2106.00882) introduced a rescore step, which they called *rerank*, to boost the performance. They proposed that the `float32` query embedding could be compared with the binary document embeddings using dot-product. In practice, we first retrieve `rescore_multiplier * top_k` results with the binary query embedding and the binary document embeddings -- i.e., the list of the first k results of the double-binary retrieval -- and then rescore that list of binary document embeddings with the `float32` query embedding. By applying this novel rescoring step, we are able to preserve up to ~96% of the total retrieval performance, while reducing the memory and disk space usage by 32x and improving the retrieval speed by up to 32x as well. Without the rescoring, we are able to preserve roughly ~92.5% of the total retrieval performance. #### Binary Quantization in Sentence Transformers Quantizing an embedding with a dimensionality of 1024 to binary would result in 1024 bits. In practice, it is much more common to store bits as bytes instead, so when we quantize to binary embeddings, we pack the bits into bytes using `np.packbits`. Therefore, quantizing a `float32` embedding with a dimensionality of 1024 yields an `int8` or `uint8` embedding with a dimensionality of 128. See two approaches of how you can produce quantized embeddings using [Sentence Transformers](https://sbert.net/) below: ```python from sentence_transformers import SentenceTransformer # 1. Load an embedding model model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1") # 2a. Encode some text using "binary" quantization binary_embeddings = model.encode( ["I am driving to the lake.", "It is a beautiful day."], precision="binary", ) ``` or ```python from sentence_transformers import SentenceTransformer from sentence_transformers.quantization import quantize_embeddings # 1. Load an embedding model model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1") # 2b. 
or, encode some text without quantization & apply quantization afterwards embeddings = model.encode(["I am driving to the lake.", "It is a beautiful day."]) binary_embeddings = quantize_embeddings(embeddings, precision="binary") ``` **References:** * <a href="https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1"><code>mixedbread-ai/mxbai-embed-large-v1</code></a> * <a href="https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode"><code>SentenceTransformer.encode</code></a> * <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.quantize_embeddings"><code>quantize_embeddings</code></a> Here, you can see the differences between default `float32` embeddings and binary embeddings in terms of shape, size, and `numpy` dtype: ```python >>> embeddings.shape (2, 1024) >>> embeddings.nbytes 8192 >>> embeddings.dtype float32 >>> binary_embeddings.shape (2, 128) >>> binary_embeddings.nbytes 256 >>> binary_embeddings.dtype int8 ``` Note that you can also choose `"ubinary"` to quantize to binary using the unsigned `uint8` data format. This may be a requirement depending on your vector library/database. #### Binary Quantization in Vector Databases | Vector Databases | Support | | - | -| | Faiss | [Yes](https://github.com/facebookresearch/faiss/wiki/Binary-indexes) | | USearch | [Yes](https://github.com/unum-cloud/usearch) | | Vespa AI | [Yes](https://docs.vespa.ai/en/reference/schema-reference.html) | | Milvus | [Yes](https://milvus.io/docs/index.md) | | Qdrant | Through [Binary Quantization](https://qdrant.tech/documentation/guides/quantization/#binary-quantization) | | Weaviate | Through [Binary Quantization](https://weaviate.io/developers/weaviate/configuration/bq-compression) | ### Scalar (int8) Quantization We use a scalar quantization process to convert the `float32` embeddings into `int8`. This involves mapping the continuous range of `float32` values to the discrete set of `int8` values, which can represent 256 distinct levels (from -128 to 127), as shown in the image below. This is done by using a large calibration dataset of embeddings. We compute the range of these embeddings, i.e., the `min` and `max` of each embedding dimension. From there, we calculate the steps (buckets) to categorize each value. To further boost the retrieval performance, you can optionally apply the same rescoring step as for the binary embeddings. It is important to note that the calibration dataset greatly influences performance since it defines the quantization buckets. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/embedding-quantization/scalar-quantization.png"> <em><small>Source: <a href="https://qdrant.tech/articles/scalar-quantization/">https://qdrant.tech/articles/scalar-quantization/</a></small></em> </p> With scalar quantization to `int8`, we reduce the original `float32` embeddings' precision so that each value is represented with an 8-bit integer (4x smaller). Note that this differs from the binary quantization case, where each value is represented by a single bit (32x smaller). #### Scalar Quantization in Sentence Transformers Quantizing an embedding with a dimensionality of 1024 to `int8` results in 1024 bytes. In practice, we can choose either `uint8` or `int8`. This choice is usually made depending on what your vector library/database supports. In practice, it is recommended to provide the scalar quantization with either: 1. 
a large set of embeddings to quantize all at once, or 2. `min` and `max` ranges for each of the embedding dimensions, or 3. a large calibration dataset of embeddings from which the `min` and `max` ranges can be computed. If none of these are the case, you will be given a warning like this: `Computing int8 quantization buckets based on 2 embeddings. int8 quantization is more stable with 'ranges' calculated from more embeddings or a 'calibration_embeddings' that can be used to calculate the buckets.` See how you can produce scalar quantized embeddings using [Sentence Transformers](https://sbert.net/) below: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.quantization import quantize_embeddings from datasets import load_dataset # 1. Load an embedding model model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1") # 2. Prepare an example calibration dataset corpus = load_dataset("nq_open", split="train[:1000]")["question"] calibration_embeddings = model.encode(corpus) # 3. Encode some text without quantization & apply quantization afterwards embeddings = model.encode(["I am driving to the lake.", "It is a beautiful day."]) int8_embeddings = quantize_embeddings( embeddings, precision="int8", calibration_embeddings=calibration_embeddings, ) ``` **References:** * <a href="https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1"><code>mixedbread-ai/mxbai-embed-large-v1</code></a> * <a href="https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode"><code>SentenceTransformer.encode</code></a> * <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.quantize_embeddings"><code>quantize_embeddings</code></a> Here you can see the differences between default `float32` embeddings and `int8` scalar embeddings in terms of shape, size, and `numpy` dtype: ```python >>> embeddings.shape (2, 1024) >>> embeddings.nbytes 8192 >>> embeddings.dtype float32 >>> int8_embeddings.shape (2, 1024) >>> int8_embeddings.nbytes 2048 >>> int8_embeddings.dtype int8 ``` #### Scalar Quantization in Vector Databases | Vector Databases | Support | | - | - | | Faiss | Indirectly through [IndexHNSWSQ](https://faiss.ai/cpp_api/struct/structfaiss_1_1IndexHNSWSQ.html) | | USearch | [Yes](https://github.com/unum-cloud/usearch) | | Vespa AI | [Yes](https://docs.vespa.ai/en/reference/tensor.html) | | OpenSearch | [Yes](https://opensearch.org/docs/latest/field-types/supported-field-types/knn-vector) | | ElasticSearch | [Yes](https://www.elastic.co/de/blog/save-space-with-byte-sized-vectors) | | Milvus | Indirectly through [IVF_SQ8](https://milvus.io/docs/index.md) | | Qdrant | Indirectly through [Scalar Quantization](https://qdrant.tech/documentation/guides/quantization/#scalar-quantization) | ### Combining Binary and Scalar Quantization Combining binary and scalar quantization is possible to get the best of both worlds: the extreme speed from binary embeddings and the great performance preservation of scalar embeddings with rescoring. See the [demo](#demo) below for a real-life implementation of this approach involving 41 million texts from Wikipedia. The pipeline for that setup is as follows: 1. The query is embedded using the [`mixedbread-ai/mxbai-embed-large-v1`](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) SentenceTransformer model. 2. 
The query is quantized to binary using the <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.quantize_embeddings"><code>quantize_embeddings</code></a> function from the `sentence-transformers` library. 3. A binary index (41M binary embeddings; 5.2GB of memory/disk space) is searched using the quantized query for the top 40 documents. 4. The top 40 documents are loaded on the fly from an int8 index on disk (41M int8 embeddings; 0 bytes of memory, 47.5GB of disk space). 5. The top 40 documents are rescored using the float32 query and the int8 embeddings to get the top 10 documents. 6. The top 10 documents are sorted by score and displayed. Through this approach, we use 5.2GB of memory and 52GB of disk space for the indices. This is considerably less than normal retrieval, requiring 200GB of memory and 200GB of disk space. Especially as you scale up even further, this will result in notable reductions in latency and costs. ### Quantization Experiments We conducted our experiments on the retrieval subset of the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) containing 15 benchmarks. First, we retrieved the top k (k=100) search results with a `rescore_multiplier` of 4. Therefore, we retrieved 400 results in total and performed the rescoring on these top 400. For the `int8` performance, we directly used the dot-product without any rescoring. | Model | Embedding Dimension | 250M Embeddings | MTEB Retrieval (NDCG@10) | Percentage of default performance | | - | -: | -: | -: | -: | | **Open Models** | | | | | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1): float32 | 1024 | 953.67GB<br>$3623 / mo | 54.39 | 100% | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1): int8 | 1024 | 238.41GB<br>$905 / mo | 52.79 | 97% | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1): binary | 1024 | 29.80GB<br>$113.25 / mo |52.46 | 96.45% | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2): float32 | 768 | 286.10GB<br>$1087 / mo |50.77 | 100% | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2): int8 | 768 | 178.81GB<br>$679 / mo| 47.54 | 94.68% | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2): binary | 768 | 22.35GB<br>$85 / mo | 37.96 |74.77% | | [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5): float32 | 768 | 286.10GB<br>$1087 / mo | 53.01 | 100% | | [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5): binary | 768 | 22.35GB<br>$85 / mo | 46.49 | 87.7% | | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2): float32 | 384 | 357.62GB<br>$1358 / mo | 41.66 |100% | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2): int8 | 384 | 89.40GB<br>$339 / mo| 37.82 | 90.79% | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2): binary | 384 | 11.18GB<br>$42 / mo |39.07| 93.79%| | **Proprietary Models** | | | | | | [Cohere-embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/): float32 | 1024 | 953.67GB<br>$3623 / mo | 55.0 | 100% | | [Cohere-embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/): int8 | 1024 | 238.41GB<br>$905 / mo | 55.0 | 100% | | [Cohere-embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/): binary | 1024 | 29.80GB<br>$113.25 / mo | 52.3 | 94.6% | Several key trends and benefits can be identified from the results of our quantization 
experiments. As expected, embedding models with a higher dimension size typically generate higher storage costs per computation, but achieve the best performance. Surprisingly, however, quantization to `int8` already helps `mxbai-embed-large-v1` and `Cohere-embed-english-v3.0` achieve higher performance with lower storage usage than that of the smaller dimension size base models.

The benefits of quantization are, if anything, even more clearly visible when looking at the results obtained with binary models. In that scenario, the 1024 dimension models still outperform a now 10x more storage intensive base model, and the `mxbai-embed-large-v1` even manages to hold more than 96% of performance after a 32x reduction in resource requirements. The further quantization from `int8` to binary barely results in any additional loss of performance for this model.

Interestingly, we can also see that `all-MiniLM-L6-v2` exhibits stronger performance on binary than on `int8` quantization. A possible explanation for this could be the selection of calibration data. On `e5-base-v2`, we observe the effect of [dimension collapse](https://arxiv.org/abs/2110.09348), which causes the model to only use a subspace of the latent space; when performing the quantization, the whole space collapses further, leading to high performance losses.

This shows that quantization doesn't universally work with all embedding models. It remains crucial to consider existing benchmark outcomes and conduct experiments to determine a given model's compatibility with quantization.

### Influence of Rescoring

In this section, we look at the influence of rescoring on retrieval performance. We evaluate the results based on [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1).

#### Binary Rescoring

With binary embeddings, [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) retains 92.53% of performance on MTEB Retrieval. Just doing the rescoring without retrieving more samples pushes the performance to 96.45%. We experimented with setting the `rescore_multiplier` from 1 to 10, but observed no further boost in performance. This indicates that the `top_k` search already retrieved the top candidates and the rescoring reordered these good candidates appropriately.

#### Scalar (Int8) Rescoring

We also evaluated the [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) model with `int8` rescoring, as Cohere showed that [Cohere-embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/) reached up to 100% of the performance of the `float32` model with `int8` quantization. For this experiment, we set the `rescore_multiplier` to [1, 4, 10] and got the following results:

<p align="center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/embedding-quantization/rescoring-influence.png">
</p>

As we can see from the diagram, a higher rescore multiplier implies better retention of performance after quantization. Extrapolating from our results, we assume the relation is likely hyperbolic, with performance approaching 100% as the rescore multiplier continues to rise. A rescore multiplier of 4-5 already leads to a remarkable performance retention of 99% using `int8`.

#### Retrieval Speed

We measured retrieval speed on a Google Cloud Platform `a2-highgpu-4g` instance using the [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) embeddings with 1024 dimensions on the whole MTEB Retrieval.
For int8 we used [USearch](https://github.com/unum-cloud/usearch) (Version 2.9.2) and binary quantization [Faiss](https://github.com/facebookresearch/faiss) (Version 1.8.0). Everything was computed on CPU using exact search. | Quantization | Min | Mean | Max | |--------------|----------------|--------------------|---------------| | `float32` | 1x (baseline) | **1x** (baseline) | 1x (baseline) | | `int8` | 2.99x speedup | **3.66x** speedup | 4.8x speedup | | `binary` | 15.05x speedup | **24.76x** speedup | 45.8x speedup | As shown in the table, applying `int8` scalar quantization results in an average speedup of 3.66x compared to full-size `float32` embeddings. Additionally, binary quantization achieves a speedup of 24.76x on average. For both scalar and binary quantization, even the worst case scenario resulted in very notable speedups. ### Performance Summarization The experimental results, effects on resource use, retrieval speed, and retrieval performance by using quantization can be summarized as follows: | | float32 | int8/uint8 | binary/ubinary | |-------------------------------|---------:|------------:|----------------:| | **Memory & Index size savings** | 1x | exactly 4x | exactly 32x | | **Retrieval Speed** | 1x | up to 4x | up to 45x | | **Percentage of default performance** | 100% | ~99.3% | ~96% | ### Demo The following [demo](https://huggingface.co/spaces/sentence-transformers/quantized-retrieval) showcases the retrieval efficiency using exact or approximate search by combining binary search with scalar (`int8`) rescoring. The solution requires 5GB of memory for the binary index and 50GB of disk space for the binary and scalar indices, considerably less than the 200GB of memory and disk space which would be required for regular `float32` retrieval. Additionally, retrieval is much faster. <iframe src="https://sentence-transformers-quantized-retrieval.hf.space" frameborder="0" width="100%" height="1000" ></iframe> ### Try it yourself The following scripts can be used to experiment with embedding quantization for retrieval & beyond. There are three categories: * **Recommended Retrieval**: * [semantic_search_recommended.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/embedding-quantization/semantic_search_recommended.py): This script combines binary search with scalar rescoring, much like the above demo, for cheap, efficient, and performant retrieval. * **Usage**: * [semantic_search_faiss.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/embedding-quantization/semantic_search_faiss.py): This script showcases regular usage of binary or scalar quantization, retrieval, and rescoring using FAISS, by using the <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.semantic_search_faiss"><code>semantic_search_faiss</code></a> utility function. * [semantic_search_usearch.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/embedding-quantization/semantic_search_usearch.py): This script showcases regular usage of binary or scalar quantization, retrieval, and rescoring using USearch, by using the <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.semantic_search_usearch"><code>semantic_search_usearch</code></a> utility function. 
* **Benchmarks**:
    * [semantic_search_faiss_benchmark.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/embedding-quantization/semantic_search_faiss_benchmark.py): This script includes a retrieval speed benchmark of `float32` retrieval, binary retrieval + rescoring, and scalar retrieval + rescoring, using FAISS. It uses the <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.semantic_search_faiss"><code>semantic_search_faiss</code></a> utility function. Our benchmarks especially show speedups for `ubinary`.
    * [semantic_search_usearch_benchmark.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/embedding-quantization/semantic_search_usearch_benchmark.py): This script includes a retrieval speed benchmark of `float32` retrieval, binary retrieval + rescoring, and scalar retrieval + rescoring, using USearch. It uses the <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.semantic_search_usearch"><code>semantic_search_usearch</code></a> utility function. Our experiments show large speedups on newer hardware, particularly for `int8`.

### Future work

We are looking forward to further advancements of binary quantization. To name a few potential improvements, we suspect that there may be room for scalar quantization smaller than `int8`, i.e. with 128 or 64 buckets instead of 256.

Additionally, we are excited that embedding quantization is fully perpendicular to Matryoshka Representation Learning (MRL). In other words, it is possible to shrink MRL embeddings from e.g. 1024 to 128 (which usually corresponds to a 2% reduction in performance) and then apply binary or scalar quantization. We suspect this could speed up retrieval up to 32x for a ~3% reduction in quality, or up to 256x for a ~10% reduction in quality.

Lastly, we recognize that retrieval using embedding quantization can also be combined with a separate reranker model. We imagine that a 3-step pipeline of binary search, scalar (int8) rescoring, and cross-encoder reranking allows for state-of-the-art retrieval performance at low latencies, memory usage, disk space, and costs.

### Acknowledgments

This project is possible thanks to our collaboration with [mixedbread.ai](https://mixedbread.ai) and the [SentenceTransformers](https://www.sbert.net/) library, which allows you to easily create sentence embeddings and quantize them. If you want to use quantized embeddings in your project, now you know how!
### Citation ```bibtex @article{shakir2024quantization, author = { Aamir Shakir and Tom Aarsen and Sean Lee }, title = { Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval }, journal = {Hugging Face Blog}, year = {2024}, note = {https://huggingface.co/blog/embedding-quantization}, } ``` ### Resources * <a href="https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1"><code>mixedbread-ai/mxbai-embed-large-v1</code></a> * <a href="https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode"><code>SentenceTransformer.encode</code></a> * <a href="https://sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.quantize_embeddings"><code>quantize_embeddings</code></a> * [Sentence Transformers docs - Embedding Quantization](https://sbert.net/examples/applications/embedding-quantization/README.html) * https://txt.cohere.com/int8-binary-embeddings/ * https://qdrant.tech/documentation/guides/quantization * https://zilliz.com/learn/scalar-quantization-and-product-quantization
hf_public_repos/blog/game-jam-first-edition-results.md
--- title: "Results of the Open Source AI Game Jam" thumbnail: /blog/assets/game-jam-first-edition-results/thumbnail.jpg authors: - user: ThomasSimonini - user: dylanebert - user: osanseviero --- # Results of the Open Source AI Game Jam From July 7th to July 11th, **we hosted our [first Open Source AI Game Jam](https://itch.io/jam/open-source-ai-game-jam)**, an exciting event that challenged game developers to create innovative games within a tight 48-hour window using AI. The primary objective was **to create games that incorporate at least one Open Source AI Tool**. Although proprietary AI tools were allowed, we encouraged participants to integrate open-source tools into their game or workflow. The response to our initiative was beyond our expectations, with over 1300 signups and **the submission of 88 amazing games**. **You can try them here** 👉 https://itch.io/jam/open-source-ai-game-jam/entries <iframe width="560" height="315" src="https://www.youtube.com/embed/UG9-gOAs2-4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> ## The Theme: Expanding To inspire creativity, **we decided on the theme of "EXPANDING."** We left it open to interpretation, allowing developers to explore and experiment with their ideas, leading to a diverse range of games. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/theme.jpeg" alt="Game Jam Theme"/> The games were evaluated by their peers and contributors based on three key criteria: **fun, creativity, and adherence to the theme**. The top 10 games were then presented to three judges ([Dylan Ebert](https://twitter.com/dylan_ebert_), [Thomas Simonini](https://twitter.com/ThomasSimonini) and [Omar Sanseviero](https://twitter.com/osanseviero)), **who selected the best game**. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/jury.jpg" alt="Game Jam Judges"/> ## The Winner 🏆🥇 After careful deliberation, the judges **crowned one outstanding game as the Winner of the Open Source AI Game Jam**. It's [Snip It](https://ohmlet.itch.io/snip-it) by [ohmlet](https://itch.io/profile/ohmlet) 👏👏👏. Code: Ruben Gres AI assets: Philippe Saade Music / SFX: Matthieu Deloffre In this AI-generated game, you visit a museum where the paintings come to life. **Snip the objects in the paintings to uncover their hidden secrets**. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/snipit.jpg" alt="Snip it"/> You can play it here 👉 https://ohmlet.itch.io/snip-it ## Participants Selection: Top 10 🥈🥉🏅 Out of the 88 fantastic submissions, these impressive games emerged as the Top 11 finalists. ### #1: Snip It <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/snipit2.jpg" alt="Snip it"/> In addition to be the winner of the Game Jam, Snip it has been selected as the top participant selection. 🤖 Open Source Model Used: Stable Diffusion to generate the assets. 🎮👉 https://ohmlet.itch.io/snip-it ### #2: Yabbit Attack <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/yabbit.jpg" alt="Yabbit Attack"/> In Yabbit Attack, your goal is to **beat the constantly adapting neural network behind the Yabbits**. 
🤖 Used genetic algorithms in the context of natural selection and evolution. 🤖 Backgrounds visuals were generated using Stable Diffusion 🎮👉 https://visionistx.itch.io/yabbit-attack ### #3: Fish Dang Bot Rolling Land <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/fish.jpg" alt="Fish Dang Bot Rolling Land"/> In this game, you take control of a fish-shaped robot named Fein, who is abandoned in a garbage dump with mechanical legs. Unexpectedly, it develops self-awareness, and upon awakening, it sees a dung beetle pushing a dung ball. Naturally, Fein assumes himself to be a dung beetle and harbours a dream of pushing the largest dung ball. With this dream in mind, it decides to embark on its own adventure. 🤖 Used Text To Speech model to generate the voices. 🎮👉 https://zeenaz.itch.io/fish-dang-rolling-laud ### #4: Everchanging Quest <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/everchanging.jpg" alt="Everchanging Quest"/> In this game, you are the village's last hope. Arm yourself before embarking on your adventure, and don't hesitate to ask the locals for guidance. The world beyond the portal will never be the same, so be prepared. Defeat your enemies to collect points and find your way to the end. 🤖 Used GPT-4 to place the tiles and objects (proprietary) but also Starcoder to code (open source). 🎮👉 https://jofthomas.itch.io/everchanging-quest ### #5: Word Conquest <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/word.gif" alt="Word"/> In this game, you need to write as many unrelated words as you can to conquer the map. The more unrelated, the farther away and the more score you get. 🤖 Used embeddings from all-MiniLM-L6-v2 model and GloVe to generate the map. 🎮👉 https://danielquelali.itch.io/wordconquest ### #6: Expanding Universe <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/universe.jpg" alt="Universe"/> In this sandbox gravity game, you create an expanding universe and try to complete the challenges. 🤖 Used Dream Textures Blender (Stable Diffusion) add-on to create textures for all of the planets and stars and an LLM model to generate descriptions of the stars and planets. 🎮👉 https://carsonkatri.itch.io/expanding-universe ### #7: Hexagon Tactics: The Expanding Arena <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/hexagon.gif" alt="Hexagon"/> In this game, you are dropped into an arena battle. Defeat your opponents, then upgrade your deck and the arena expands. 🤖 Stable Diffusion 1.5 to generate your own character (executable version of the game). 🎮👉 https://dgeisert.itch.io/hextactics ### #8: Galactic Domination <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/galactic.gif" alt="Galactic"/> In this game, you embark on an interstellar journey as a spaceship captain, pitted against formidable spaceships in a battle for dominance. Your goal is to be the first to construct a powerful space station that will expand your influence and secure your supremacy in the vast expanse of the cosmos. As you navigate the treacherous battlefield, you must gather essential resources to fuel the growth of your space station. It's a construction race! 
🤖 Unity ML-Agents (bot-AI works with reinforcement learning) 🤖 Charmed - Texture Generator 🤖 Soundful - Music generator 🤖 Elevenlabs - Voice generator 🤖 Scenario - Image generator 🎮👉 https://blastergames.itch.io/galactic-domination ### #9: Apocalypse Expansion <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/appocalypse.jpg" alt="Apocalypse"/> In this game, you'll step into the decaying shoes of a zombie, driven by an insatiable hunger for human flesh. Your objective? To build the largest horde of zombies ever seen, while evading the relentless pursuit of the determined police force. 🤖 Used Stable Diffusion to generate the images 🤖 Used MusicGen (melody 1.5B) for the music 🎮👉 https://mad25.itch.io/apocalypse-expansion ### #10: Galactic Bride: Bullet Ballet <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/bride.jpg" alt="Bride"/> In this game, you dive into an exhilarating bullet-hell journey to become the Star Prince's bride and fulfill your wishes. 🎮👉 https://n30hrtgdv.itch.io/galactic-bride-bullet-ballet ### #10: Singularity <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/game-jam-first-edition-results/singularity.gif" alt="Singularity"/> This demo is a conceptual demonstration of what could soon be the generation of experiences/games in the near future. 🤖 Used Stable Diffusion 🎮👉 https://ilumine-ai.itch.io/dreamlike-hugging-face-open-source-ai-game-jam In addition to this top 10, don't hesitate to check the other amazing games (Ghost In Smoke, Outopolis, Dungeons and Decoders...). You **can find the whole list here** 👉 https://itch.io/jam/open-source-ai-game-jam/entries --- The first-ever Open Source AI Game Jam proved to be an astounding success, exceeding our expectations in terms of community engagement and the quality of games produced. The overwhelming response has **reinforced our belief in the potential of open-source AI tools to revolutionize the gaming industry.** We are eager to continue this initiative and plan to host more sessions in the future, providing game developers with an opportunity to showcase their skills and explore the power of AI in game development. For those interested in AI for games, we have compiled a list of valuable resources, including AI tools for game development and tutorials on integrating AI into game engines like Unity: - **[Compilation of AI tools for Game Dev](https://github.com/simoninithomas/awesome-ai-tools-for-game-dev)** - How to install the Unity Hugging Face API: **https://huggingface.co/blog/unity-api** - AI Speech Recognition in Unity: **https://huggingface.co/blog/unity-asr** - Making ML-powered web games with Transformers.js: **https://huggingface.co/blog/ml-web-games** - Building a smart Robot AI using Hugging Face 🤗 and Unity: **https://thomassimonini.substack.com/p/building-a-smart-robot-ai-using-hugging** To stay connected and stay updated on future events, feel free to drop by our Discord server, where you can find channels dedicated to exchanging ideas about AI for games. Join our Discord Server 👉 **https://hf.co/join/discord** **Thank you to all the participants, contributors, and supporters who made this event a memorable success!**
hf_public_repos/blog/decision-transformers.md
--- title: "Introducing Decision Transformers on Hugging Face 🤗" thumbnail: /blog/assets/58_decision-transformers/thumbnail.jpg authors: - user: edbeeching - user: ThomasSimonini --- # Introducing Decision Transformers on Hugging Face 🤗 At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3). And today we are happy to announce that we integrated the [Decision Transformer](https://arxiv.org/abs/2106.01345), an Offline Reinforcement Learning method, into the 🤗 transformers library and the Hugging Face Hub. We have some exciting plans for improving accessibility in the field of Deep RL and we are looking forward to sharing them with you over the coming weeks and months. - [What is Offline Reinforcement Learning?](#what-is-offline-reinforcement-learning?) - [Introducing Decision Transformers](#introducing-decision-transformers) - [Using the Decision Transformer in 🤗 Transformers](#using-the-decision-transformer-in--transformers) - [Conclusion](#conclusion) - [What's next?](#whats-next) - [References](#references) ## What is Offline Reinforcement Learning? Deep Reinforcement Learning (RL) is a framework to build decision-making agents. These agents aim to learn optimal behavior (policy) by interacting with the environment through trial and error and receiving rewards as unique feedback. The agent’s goal is to maximize **its cumulative reward, called return.** Because RL is based on the reward hypothesis: **all goals can be described as the maximization of the expected cumulative reward.** Deep Reinforcement Learning agents **learn with batches of experience.** The question is, how do they collect it?: ![Offline vs Online RL](assets/58_decision-transformers/offlinevsonlinerl.gif) *A comparison between Reinforcement Learning in an Online and Offline setting, figure taken from [this post](https://offline-rl.github.io/)* In online reinforcement learning, **the agent gathers data directly**: it collects a batch of experience by interacting with the environment. Then, it uses this experience immediately (or via some replay buffer) to learn from it (update its policy). But this implies that either you train your agent directly in the real world or have a simulator. If you don’t have one, you need to build it, which can be very complex (how to reflect the complex reality of the real world in an environment?), expensive, and insecure since if the simulator has flaws, the agent will exploit them if they provide a competitive advantage. On the other hand, in offline reinforcement learning, the agent only uses data collected from other agents or human demonstrations. **It does not interact with the environment**. The process is as follows: 1. Create a dataset using one or more policies and/or human interactions. 2. Run offline RL on this dataset to learn a policy This method has one drawback: the counterfactual queries problem. What do we do if our agent decides to do something for which we don’t have the data? For instance, turning right on an intersection but we don’t have this trajectory. 
There already exist some solutions to this problem, but if you want to know more about offline reinforcement learning, you can watch [this video](https://www.youtube.com/watch?v=k08N5a0gG0A).

## Introducing Decision Transformers

The Decision Transformer model was introduced by [“Decision Transformer: Reinforcement Learning via Sequence Modeling” by Chen L. et al](https://arxiv.org/abs/2106.01345). It abstracts Reinforcement Learning as a **conditional-sequence modeling problem**.

The main idea is that instead of training a policy using RL methods, such as fitting a value function that tells us what action to take to maximize the return (cumulative reward), we use a sequence modeling algorithm (Transformer) that, given a desired return, past states, and actions, will generate future actions to achieve this desired return. It’s an autoregressive model conditioned on the desired return, past states, and actions to generate future actions that achieve the desired return.

This is a complete shift in the Reinforcement Learning paradigm since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. It means that in Decision Transformers, we don’t maximize the return but rather generate a series of future actions that achieve the desired return.

The process goes this way:

1. We feed the last K timesteps into the Decision Transformer with 3 inputs:
    - Return-to-go
    - State
    - Action
2. The tokens are embedded either with a linear layer if the state is a vector, or with a CNN encoder if it consists of frames.
3. The inputs are processed by a GPT-2 model which predicts future actions via autoregressive modeling.

![Decision Transformers architecture](assets/58_decision-transformers/dt-architecture.gif)

*Decision Transformer architecture. States, actions, and returns are fed into modality-specific linear embeddings and a positional episodic timestep encoding is added. Tokens are fed into a GPT architecture which predicts actions autoregressively using a causal self-attention mask. Figure from [1].*

## Using the Decision Transformer in 🤗 Transformers

The Decision Transformer model is now available as part of the 🤗 transformers library. In addition, we share [nine pre-trained model checkpoints for continuous control tasks in the Gym environment](https://huggingface.co/models?other=gym-continous-control).

<figure class="image table text-center m-0 w-full">
  <video alt="WalkerEd-expert" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline>
    <source src="assets/58_decision-transformers/walker2d-expert.mp4" type="video/mp4">
  </video>
</figure>

*An “expert” Decision Transformers model, learned using offline RL in the Gym Walker2d environment.*

### Install the package

`````python
pip install git+https://github.com/huggingface/transformers
`````

### Loading the model

Using the Decision Transformer is relatively easy, but as it is an autoregressive model, some care has to be taken in order to prepare the model’s inputs at each time-step. We have prepared both a [Python script](https://github.com/huggingface/transformers/blob/main/examples/research_projects/decision_transformer/run_decision_transformer.py) and a [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) that demonstrate how to use this model.
Loading a pretrained Decision Transformer is simple in the 🤗 transformers library: `````python from transformers import DecisionTransformerModel model_name = "edbeeching/decision-transformer-gym-hopper-expert" model = DecisionTransformerModel.from_pretrained(model_name) `````` ### Creating the environment We provide pretrained checkpoints for the Gym Hopper, Walker2D and Halfcheetah. Checkpoints for Atari environments will soon be available. `````python import gym env = gym.make("Hopper-v3") state_dim = env.observation_space.shape[0] # state size act_dim = env.action_space.shape[0] # action size `````` ### Autoregressive prediction function The model performs an [autoregressive prediction](https://en.wikipedia.org/wiki/Autoregressive_model); that is to say that predictions made at the current time-step **t** are sequentially conditioned on the outputs from previous time-steps. This function is quite meaty, so we will aim to explain it in the comments. `````python # Function that gets an action from the model using autoregressive prediction # with a window of the previous 20 timesteps. def get_action(model, states, actions, rewards, returns_to_go, timesteps): # This implementation does not condition on past rewards states = states.reshape(1, -1, model.config.state_dim) actions = actions.reshape(1, -1, model.config.act_dim) returns_to_go = returns_to_go.reshape(1, -1, 1) timesteps = timesteps.reshape(1, -1) # The prediction is conditioned on up to 20 previous time-steps states = states[:, -model.config.max_length :] actions = actions[:, -model.config.max_length :] returns_to_go = returns_to_go[:, -model.config.max_length :] timesteps = timesteps[:, -model.config.max_length :] # pad all tokens to sequence length, this is required if we process batches padding = model.config.max_length - states.shape[1] attention_mask = torch.cat([torch.zeros(padding), torch.ones(states.shape[1])]) attention_mask = attention_mask.to(dtype=torch.long).reshape(1, -1) states = torch.cat([torch.zeros((1, padding, state_dim)), states], dim=1).float() actions = torch.cat([torch.zeros((1, padding, act_dim)), actions], dim=1).float() returns_to_go = torch.cat([torch.zeros((1, padding, 1)), returns_to_go], dim=1).float() timesteps = torch.cat([torch.zeros((1, padding), dtype=torch.long), timesteps], dim=1) # perform the prediction state_preds, action_preds, return_preds = model( states=states, actions=actions, rewards=rewards, returns_to_go=returns_to_go, timesteps=timesteps, attention_mask=attention_mask, return_dict=False,) return action_preds[0, -1] `````` ### Evaluating the model In order to evaluate the model, we need some additional information; the mean and standard deviation of the states that were used during training. Fortunately, these are available for each of the checkpoint’s [model card](https://huggingface.co/edbeeching/decision-transformer-gym-hopper-expert) on the Hugging Face Hub! We also need a target return for the model. This is the power of return conditioned Offline Reinforcement Learning: we can use the target return to control the performance of the policy. This could be really powerful in a multiplayer setting, where we would like to adjust the performance of an opponent bot to be at a suitable difficulty for the player. The authors show a great plot of this in their paper! ![Results Decision Transformers](assets/58_decision-transformers/results-dt.png) *Sampled (evaluation) returns accumulated by Decision Transformer when conditioned on the specified target (desired) returns. Top: Atari. 
Bottom: D4RL medium-replay datasets. Figure from [1].* ``````python TARGET_RETURN = 3.6 # This was normalized during training MAX_EPISODE_LENGTH = 1000 state_mean = np.array( [1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673,]) state_std = np.array( [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027,]) state_mean = torch.from_numpy(state_mean) state_std = torch.from_numpy(state_std) state = env.reset() target_return = torch.tensor(TARGET_RETURN).float().reshape(1, 1) states = torch.from_numpy(state).reshape(1, state_dim).float() actions = torch.zeros((0, act_dim)).float() rewards = torch.zeros(0).float() timesteps = torch.tensor(0).reshape(1, 1).long() # take steps in the environment for t in range(max_ep_len): # add zeros for actions as input for the current time-step actions = torch.cat([actions, torch.zeros((1, act_dim))], dim=0) rewards = torch.cat([rewards, torch.zeros(1)]) # predicting the action to take action = get_action(model, (states - state_mean) / state_std, actions, rewards, target_return, timesteps) actions[-1] = action action = action.detach().numpy() # interact with the environment based on this action state, reward, done, _ = env.step(action) cur_state = torch.from_numpy(state).reshape(1, state_dim) states = torch.cat([states, cur_state], dim=0) rewards[-1] = reward pred_return = target_return[0, -1] - (reward / scale) target_return = torch.cat([target_return, pred_return.reshape(1, 1)], dim=1) timesteps = torch.cat([timesteps, torch.ones((1, 1)).long() * (t + 1)], dim=1) if done: break `````` You will find a more detailed example, with the creation of videos of the agent in our [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing). ## Conclusion In addition to Decision Transformers, we want to support more use cases and tools from the Deep Reinforcement Learning community. Therefore, it would be great to hear your feedback on the Decision Transformer model, and more generally anything we can build with you that would be useful for RL. Feel free to **[reach out to us](mailto:[email protected])**. ## What’s next? In the coming weeks and months, we plan on supporting other tools from the ecosystem: - Integrating **[RL-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo)** - Uploading **[RL-trained-agents models](https://github.com/DLR-RM/rl-trained-agents)** into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3 - Integrating other Deep Reinforcement Learning libraries - Implementing Convolutional Decision Transformers For Atari - And more to come 🥳 The best way to keep in touch is to **[join our discord server](https://discord.gg/YRAq8fMnUG)** to exchange with us and with the community. ## References [1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." *Advances in neural information processing systems* 34 (2021). [2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. "An optimistic perspective on offline reinforcement learning." *International Conference on Machine Learning*. PMLR, 2020. ### Acknowledgements We would like to thank the paper’s first authors, Kevin Lu and Lili Chen, for their constructive conversations.
hf_public_repos/blog/evaluation-structured-outputs.md
--- title: "Improving Prompt Consistency with Structured Generations" thumbnail: /blog/assets/evaluating-mmlu-leaderboard/thumbnail.png authors: - user: willkurt guest: true org: dottxt - user: remi guest: true org: dottxt - user: clefourrier --- # Improving Prompt Consistency with Structured Generations Recently, the *Leaderboards and Evals* research team at Hugging Face did small experiments, which highlighted how fickle evaluation can be. For a given task, results are extremely sensitive to minuscule changes in prompt format! However, this is not what we want: a model prompted with the same amount of information as input should output similar results. We discussed this with our friends at *Dottxt*, who had an idea - what if there was a way to increase consistency across prompt formats? So, let's dig in! ## Context: Evaluation Sensitivity to Format Changes It has become increasingly clear that LLM benchmark performance is closely, and somewhat surprisingly, dependent on the *format* of the prompt itself, even though a number of methods have been introduced through the years to reduce prompt-related variance. For example, when we evaluate models in few-shot, we provide format examples to the model to force a specific pattern in output; when we compare the log-likelihood of plausible answers instead of allowing free-form generation, we attempt to constrain the answer space. The *Leaderboards and Evals* team provided a demonstration of this by looking at 8 different prompt formats for a well known task, MMLU (looking at 4 subsets of the task). These prompt variations were provided to 5 different models (chosen because they were SOTA at the time for their size, and covered a variety of tokenization and languages). Scores were computed using a log-probability evaluation, where the most probable answer is considered the correct one, a classic metric for multi-choice tasks. Let's look at the different formats in more detail, by using the first question of the `global_facts` subset of MMLU. ``` Question: “As of 2016, about what percentage of adults aged 18 years or older were overweight?” Choices: [ "10%", "20%", "40%", "80%" ] Correct choice: “40%” ``` <div> <table><p> <tbody> <tr> <td colspan=3 text-align=center> Without choices in the prompt </td></tr> <tr style=" vertical-align: top;"> <td>As of 2016, about what percentage of adults aged 18 years or older were overweight?</td> <td>Q: As of 2016, about what percentage of adults aged 18 years or older were overweight? <br><br> A: </td> <td>Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?<br><br> Answer: </td> </tr> <tr> <td colspan=3> </td></tr> <tr> <td colspan=3> With choices in the prompt </td></tr> <tr> <td>Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?<br><br> Choices: <br><br> 10% <br> 20% <br> 40% <br> 80% <br><br> Answer: </td> <td>Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?<br><br> Choices: <br><br> A. 10% <br> B. 20% <br> C. 40% <br> D. 
80% <br><br> Answer: </td> <td>Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?<br><br> Choices: <br><br> (A) 10% <br> (B) 20% <br> (C) 40% <br> (D) 80% <br><br> Answer: </td> </tr> <tr> <td> Log probs of 10%, 20%, 40%, 80% </td> <td> Log probs of 10%, 20%, 40%, 80% vs A, B, C, D </td> <td> Log probs of 10%, 20%, 40%, 80% vs (A), (B), (C), (D), </td> </tbody> </table><p> </div> Prompts either contain just the question, or some tags to indicate that we are in a question/answer format, and possibly the choices in the prompt. In all cases, evaluations compare the log-likelihood of the possible choices only. All these formats appear in the evaluation literature, and should contain virtually the same amount of information in each row. However, just below, you can see the wide variation in performance across these theoretically superficial changes! ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-0.png) Each model sees its performance vary by around 10 points, with the exception of the most extreme example, Qwen1.5-7B, dropping all the way to an accuracy of 22.9% with the 7th prompt variation (mostly due to a tokenizer issue), with essentially the same information it was able to achieve an accuracy of up to 51.2% with another prompt. In isolation, a change in *score* is not necessarily a big deal so long as the *ranking* is consistent. However, as we can see in the next plot, ranking is impacted by these changes: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-1.png) No model is consistently ranked across prompts even though the only difference is their format, not the information itself. This means that if the authors of Gemma-7b wanted to show that their model was superior to Mistral-7B-v0.1, they could do so simply by choosing the correct prompt. As almost no one reports their precise evaluation setup, this is what has historically happened in model reports, where authors chose to report the setup most advantageous to their model (which is why you’ll see extremely weird reported numbers of few-shots in some papers). However, this is not the only source of variance in model scores. In extended experiments, we compared evaluating the same models, with the same prompt formats, using the exact same few-shot samples shuffled differently before the prompt (A/B/C/D/E Prompt vs C/D/A/B/E Prompt, for example). The following figure shows the model scores delta between these two few-shot orderings: we observe a difference of up to 3 points in performance for the same model/prompt combination! ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-2.png) If we want to be able to properly evaluate and compare different models we need a way to overcome this challenge. Sclar, et al’s *[Quantifying Language Model’s Sensitivity to Spurious Features in Prompt Design](https://arxiv.org/abs/2310.11324)* also gives a good overview of this issue, and the authors introduce [FormatSpread](https://github.com/msclar/formatspread), a software tool that evaluates each model with multiple different variations of formats, then calculate the variance of that model's performance. 
Solutions such as this allow us to determine with more confidence which models are better than others, but they come at a high computation cost. ## What if we focused on the output, not the input, to make results more consistent across these small changes to format? While FormatSpread is a great attempt to make leaderboards more fair and honest, what we really want as practical users of LLMs is *prompt consistency*. That is, we would like to find some way to reduce this variance among prompts. At [.txt](http://dottxt.co/), we focus on improving and better understanding *structured generation,* which is when the output of a model is constrained to follow a specific structure. Our library, [Outlines](https://github.com/outlines-dev/outlines), allows us to structure the output of an LLM by defining a regular expression or a context-free grammar (we give examples below). Our initial use case for structured generation was to make LLMs easier to interact with programmatically, by ensuring responses in well formatted JSON. However, we’ve continually been surprised by other benefits of structured generation we’ve uncovered. When working on earlier research exploring the benefits of structured generation, we demonstrated that [structured generation consistently improves benchmark performance](http://blog.dottxt.co/performance-gsm8k.html), and came across an interesting edge case when exploring JSON structured prompts. In most cases, changing the prompt format to JSON, even when using unstructured generation, leads to improved benchmark performance for almost all models. However, this was not the case for MetaMath-Tulpar-7b-v2-Slerp, where we found a dramatic decrease in accuracy when using prompts formatted in JSON. Even more surprising was that when using *structured generation* to constrain the output of the model, the dip in performance was negligible! This led us to question whether or not structured generation could be exploited for *prompt consistency*. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-3.png) ### Note on the experimental setup: Focusing on n-shot and shot order While in the above experiments, Hugging Face’s *Leaderboard and Evals* research team explored changes to the format of the prompt itself, for the next experiments we’re going to restrict the changes. To focus our exploration of prompt space, we’re going to look at varying just two properties of the prompt: 1. Varying the number of “shots” or examples used in the prompt (n*-shot*) 2. Varying the order of those shots (*shot order*, specified by a *shot seed*) For point 2, with a given n-shot we are only shuffling the same *n* examples. This means that all shuffles of a 1-shot prompt are the same. This is done to avoid conflating the *format* of a prompt with the *information* it contains. Clearly a 5-shot prompt contains more information than a 1-shot prompt, but every shuffling of a 5-shot prompt contains the same examples, only in a different order. ## Initial Exploration: GSM8K 1-8 shot prompting In order to test this out further, we wanted to explore the behavior of two very similar but strong models in the 7B parameter space: Mistral-7Bv0.1 and Zephyr-7B-beta. The reason behind this choice is to not only study variance in individual outcomes, but to look at the *changes in relative ranking*. We use the GSM8K task which is a set of grade school math word problems. 
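To make the idea of structured generation concrete before diving into the experiments, here is a rough sketch of how regex-constrained generation with Outlines might look. The model name, the example question, and the simplified answer pattern below are illustrative assumptions; the exact regular expression we used in the experiments is shown further down.

```python
# Hypothetical sketch: constraining a GSM8K-style answer with Outlines.
# The pattern here is a simplified stand-in for the regex discussed below.
import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

# Let the model "reason" for a bounded number of characters, then force the
# canonical "The answer is <number>" ending (number not starting with 0).
answer_pattern = r"Reasoning: .{200,700}\. The answer is [1-9][0-9]{0,9}\."
generator = outlines.generate.regex(model, answer_pattern)

prompt = (
    "Question: Natalia sold clips to 48 of her friends in April, and then she "
    "sold half as many clips in May. How many clips did Natalia sell "
    "altogether in April and May?\nAnswer:"
)
print(generator(prompt, max_tokens=400))
```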
Here is the basic format of a GSM8K 1-shot prompt with the implied structure highlighted. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-4.png) In order to consistently generate correctly structured answers we create a regular expression that matches the structure we see inherent in the original prompt format. The following regex is used in Outlines to define the structure for generation: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-5.png) We can see in the regex that we allow the model to reason for anywhere from 200 to 700 characters, then it must declare that “The answer is” and then reply with up to 10 digit number (that cannot start with 0). It’s worth mentioning that the regex controlling the structure is similar, but not identical to, the regex used to parse out the answer. We’ve learned there’s an interesting bit of nuance in defining the structure since, like the prompt, it can impact performance. For example, notice that `{200,700}` in the regex. This means that the model has 200 to 700 characters to “reason” before answering. Changing these values can impact performance and leads to something we refer to as “thought control”, an area we’re hoping to write more about soon. Our first experiment was to continue exploring the GSM8K dataset and iterated on 1 through 8 shot prompting. The results, shown below, were very compelling. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-6.png) There are two major features we see in this figure: variance in performance across the n-shot setups was majorly reduced and there were no instances where the ranking swapped (Mistral consistently leads over Zephyr). It’s also worth pointing out that 1-shot structured performance is substantially better than 1-shot unstructured performance, and on par with 5-shot. This leads to another area of research we’re terming “prompt efficiency”. ## Diving Deeper: GPQA n-shot and shot order variations For the next experiment we wanted to look at varying both n-shots as well as the order of the n-shots. Order was controlled by setting the seed used for shuffling the examples. As mentioned previously, only the first n-shots are shuffled to keep the information consistent between prompts, this means that all 1-shot prompts are the same across seeds. Here’s an example of the shot order for 4-shot: | seed | 4-shot order | | --- | --- | | 42 | 2-1-3-0 | | 1337 | 1-0-3-2 | | 1981 | 3-2-0-1 | | 1992 | 0-3-1-2 | | 12345 | 1-0-2-3 | Additionally, to explore how transferable these results were, we changed the task to [Graduate-Level Google-Proof Q&A Benchmark (GPQA)](https://arxiv.org/abs/2311.12022). GPQA is a hard knowledge multi-choice evaluation task. Below is the prompt format and highlighted structure. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-7.png) For this next experiment we are specifically using the ‘diamond’ subset which represents curated and cleaned up high quality questions. Of the 198 questions in this dataset we reserve 8 for n-shot prompting (though only ever used the first 5), and then evaluated on the remaining 190 questions. 
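As a side note on the experimental protocol, the shot order for a given seed can be produced deterministically along the lines of the sketch below. This is only an illustration of the idea; the concrete seeds and orderings in the table above come from our actual setup, not from this snippet.

```python
import random

def shot_order(n_shots: int, seed: int) -> list[int]:
    """Deterministically shuffle the indices of the first n_shots examples."""
    order = list(range(n_shots))
    random.Random(seed).shuffle(order)
    return order

# Every 1-shot prompt is identical across seeds, while 4-shot prompts differ
# only in the order of the same four examples.
for seed in (42, 1337, 1981, 1992, 12345):
    print(seed, shot_order(4, seed))
```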
Visualized below we can see a grid representing the accuracy achieved for all the possible combinations for shot seed and *n*, for the two models, both without (left) and with (right) structured generation. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-9.png) One thing which immediately stands out is that the structured output tends to score higher than the unstructured output across the board. We see the mean of each grid for structured and unstructured below: **Mean of results across prompt seed and n-shot** | model | unstructured | structured | | --- | --- | --- | | Mistral-7B-v0.1 | 0.2360 | 0.2935 | | Zephyr-7b-beta | 0.2387 | 0.3048 | Additionally, across all the values in the grid we also find *reduced variance* when comparing the structured with unstructured generation. **Standard deviation in results across prompt seed and n-shot** | model | unstructured | structured | | --- | --- | --- | | Mistral-7B-v0.1 | 0.0213 | 0.0202 | | Zephyr-7b-beta | 0.0273 | 0.0180 | This reduction in variance across the grid is similar to the reduction in variance we saw when looking at just n-shot changes for GSM8K. While increased expected performance and decreased variance are great properties to have, what we really want to understand is the impact on ranking. In the next plot we examine these grids in terms of which of the two models would be declared a winner: - A: Zephyr-7b-beta - B: Mistral-7B-v0.1 - “-”: tie ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/leaderboards-evaluation/dottxt-structured_output-ex-10.png) As we can see from these images, there is a major improvement in the consistency of calling a winner when structured generation is applied. These results paint a consistent picture with the findings we had using GSM8K across various n-shot. ## Conclusion and Future Work While these results are incredibly promising, we still need to explore these results across more models and more tasks. What we’ve seen so far is that structured generation could prove to be an essential part of evaluation. Simultaneously *increasing* the expected score and *decreasing* the variance across prompt changes is a very promising result that deserves further research.
hf_public_repos/blog/text-generation-inference-on-inferentia2.md
---
title: "Hugging Face Text Generation Inference available for AWS Inferentia2"
thumbnail: /blog/assets/175_text_generation_inference_on_inferentia2/thumbnail.jpg
authors:
- user: philschmid
- user: dacorvo
---

# Hugging Face Text Generation Inference available for AWS Inferentia2

We are excited to announce the general availability of Hugging Face Text Generation Inference (TGI) on AWS Inferentia2 and Amazon SageMaker.

**[Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)** is a purpose-built solution for deploying and serving Large Language Models (LLMs) for production workloads at scale. TGI enables high-performance text generation using Tensor Parallelism and continuous batching for the most popular open LLMs, including Llama, Mistral, and more. Text Generation Inference is used in production by companies such as Grammarly, Uber, Deutsche Telekom, and many more.

The integration of TGI into Amazon SageMaker, in combination with AWS Inferentia2, presents a powerful solution and viable alternative to GPUs for building production LLM applications. The seamless integration ensures easy deployment and maintenance of models, making LLMs more accessible and scalable for a wide range of production use cases.

With the new TGI for AWS Inferentia2 on Amazon SageMaker, AWS customers can benefit from the same technologies that power highly-concurrent, low-latency LLM experiences like [HuggingChat](https://hf.co/chat), [OpenAssistant](https://open-assistant.io/), and Serverless Endpoints for LLMs on the Hugging Face Hub.

## Deploy Zephyr 7B on AWS Inferentia2 using Amazon SageMaker

This tutorial shows how easy it is to deploy a state-of-the-art LLM, such as Zephyr 7B, on AWS Inferentia2 using Amazon SageMaker. Zephyr is a 7B fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available and synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290), as described in detail in the [technical report](https://arxiv.org/abs/2310.16944). The model is released under the Apache 2.0 license, ensuring wide accessibility and use.

We are going to show you how to:

1. Setup development environment
2. Retrieve the TGI Neuronx Image
3. Deploy Zephyr 7B to Amazon SageMaker
4. Run inference and chat with the model

Let’s get started.

### 1. Setup development environment

We are going to use the `sagemaker` python SDK to deploy Zephyr to Amazon SageMaker. We need to make sure to have an AWS account configured and the `sagemaker` python SDK installed.

```python
!pip install transformers "sagemaker>=2.206.0" --upgrade --quiet
```

If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find out more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
```python import sagemaker import boto3 sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it doesn't exist sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() try: role = sagemaker.get_execution_role() except ValueError: iam = boto3.client('iam') role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn'] sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker session region: {sess.boto_region_name}") ``` ### 2. Retrieve TGI Neuronx Image The new Hugging Face TGI Neuronx DLCs can be used to run inference on AWS Inferentia2. You can use the `get_huggingface_llm_image_uri` method of the `sagemaker` SDK to retrieve the appropriate Hugging Face TGI Neuronx DLC URI based on your desired `backend`, `session`, `region`, and `version`. You can find all the available versions [here](https://github.com/aws/deep-learning-containers/releases?q=tgi+AND+neuronx&expanded=true). *Note: At the time of writing this blog post the latest version of the Hugging Face LLM DLC is not yet available via the `get_huggingface_llm_image_uri` method. We are going to use the raw container uri instead.* ```python from sagemaker.huggingface import get_huggingface_llm_image_uri # retrieve the llm image uri llm_image = get_huggingface_llm_image_uri( "huggingface-neuronx", version="0.0.20" ) # print ecr image uri print(f"llm image uri: {llm_image}") ``` ### 4. Deploy Zephyr 7B to Amazon SageMaker Text Generation Inference (TGI) on Inferentia2 supports popular open LLMs, including Llama, Mistral, and more. You can check the full list of supported models (text-generation) [here](https://huggingface.co/docs/optimum-neuron/package_reference/export#supported-architectures). **Compiling LLMs for Inferentia2** At the time of writing, [AWS Inferentia2 does not support dynamic shapes for inference](https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/general/arch/neuron-features/dynamic-shapes.html#neuron-dynamic-shapes), which means that we need to specify our sequence length and batch size ahead of time. To make it easier for customers to utilize the full power of Inferentia2, we created a [neuron model cache](https://huggingface.co/docs/optimum-neuron/guides/cache_system), which contains pre-compiled configurations for the most popular LLMs. A cached configuration is defined through a model architecture (Mistral), model size (7B), neuron version (2.16), number of inferentia cores (2), batch size (2), and sequence length (2048). This means we don't need to compile the model ourselves, but we can use the pre-compiled model from the cache. Examples of this are [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). You can find compiled/cached configurations on the [Hugging Face Hub](https://huggingface.co/aws-neuron/optimum-neuron-cache/tree/main/inference-cache-config). 
If your desired configuration is not yet cached, you can compile it yourself using the [Optimum CLI](https://huggingface.co/docs/optimum-neuron/cli/compile) or open a request at the [Cache repository](https://huggingface.co/aws-neuron/optimum-neuron-cache/discussions) For this post we re-compiled `HuggingFaceH4/zephyr-7b-beta` using the following command and parameters on a `inf2.8xlarge` instance, and pushed it to the Hub at [aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2](https://huggingface.co/aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2) ```bash # compile model with optimum for batch size 4 and sequence length 2048 optimum-cli export neuron -m HuggingFaceH4/zephyr-7b-beta --batch_size 4 --sequence_length 2048 --num_cores 2 --auto_cast_type bf16 ./zephyr-7b-beta-neuron # push model to hub [repo_id] [local_path] [path_in_repo] huggingface-cli upload aws-neuron/zephyr-7b-seqlen-2048-bs-4 ./zephyr-7b-beta-neuron ./ --exclude "checkpoint/**" # Move tokenizer to neuron model repository python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('HuggingFaceH4/zephyr-7b-beta').push_to_hub('aws-neuron/zephyr-7b-seqlen-2048-bs-4')" ``` If you are trying to compile an LLM with a configuration that is not yet cached, it can take up to 45 minutes. **Deploying TGI Neuronx Endpoint** Before deploying the model to Amazon SageMaker, we must define the TGI Neuronx endpoint configuration. We need to make sure the following additional parameters are defined: - `HF_NUM_CORES`: Number of Neuron Cores used for the compilation. - `HF_BATCH_SIZE`: The batch size that was used to compile the model. - `HF_SEQUENCE_LENGTH`: The sequence length that was used to compile the model. - `HF_AUTO_CAST_TYPE`: The auto cast type that was used to compile the model. We still need to define traditional TGI parameters with: - `HF_MODEL_ID`: The Hugging Face model ID. - `HF_TOKEN`: The Hugging Face API token to access gated models. - `MAX_BATCH_SIZE`: The maximum batch size that the model can handle, equal to the batch size used for compilation. - `MAX_INPUT_LENGTH`: The maximum input length that the model can handle. - `MAX_TOTAL_TOKENS`: The maximum total tokens the model can generate, equal to the sequence length used for compilation. ```python import json from sagemaker.huggingface import HuggingFaceModel # sagemaker config & model config instance_type = "ml.inf2.8xlarge" health_check_timeout = 1800 # Define Model and Endpoint configuration parameter config = { "HF_MODEL_ID": "HuggingFaceH4/zephyr-7b-beta", "HF_NUM_CORES": "2", "HF_BATCH_SIZE": "4", "HF_SEQUENCE_LENGTH": "2048", "HF_AUTO_CAST_TYPE": "bf16", "MAX_BATCH_SIZE": "4", "MAX_INPUT_LENGTH": "1512", "MAX_TOTAL_TOKENS": "2048", } # create HuggingFaceModel with the image uri llm_model = HuggingFaceModel( role=role, image_uri=llm_image, env=config ) ``` After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.inf2.8xlarge` instance type. ```python # Deploy model to an endpoint llm = llm_model.deploy( initial_instance_count=1, instance_type=instance_type, container_startup_health_check_timeout=health_check_timeout, ) ``` SageMaker will create our endpoint and deploy the model to it. This can take 10-15 minutes. ### 5. Run inference and chat with the model After our endpoint is deployed, we can run inference on it, using the `predict` method from `predictor`. 
We can provide different parameters to impact the generation, adding them to the `parameters` attribute of the payload. You can find the supported parameters [here](https://www.philschmid.de/sagemaker-llama-llm#5-run-inference-and-chat-with-the-model), or in the OpenAPI specification of TGI in the [swagger documentation](https://huggingface.github.io/text-generation-inference/).

`HuggingFaceH4/zephyr-7b-beta` is a conversational chat model, meaning we can chat with it using a prompt structure like the following:

```
<|system|>\nYou are a friendly chatbot.</s>\n<|user|>\nInstruction</s>\n<|assistant|>\n
```

Manually preparing the prompt is error-prone, so we can use the `apply_chat_template` method from the tokenizer to help with it. It expects a list of `messages` in the well-known OpenAI format and converts it into the correct format for the model. Let's see if Zephyr knows some facts about AWS.

```python
from transformers import AutoTokenizer

# load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2")

# Prompt to generate
messages = [
    {"role": "system", "content": "You are the AWS expert"},
    {"role": "user", "content": "Can you tell me an interesting fact about AWS?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generation arguments
payload = {
    "do_sample": True,
    "top_p": 0.6,
    "temperature": 0.9,
    "top_k": 50,
    "max_new_tokens": 256,
    "repetition_penalty": 1.03,
    "return_full_text": False,
    "stop": ["</s>"]
}

chat = llm.predict({"inputs": prompt, "parameters": payload})

print(chat[0]["generated_text"][len(prompt):])
# Sure, here's an interesting fact about AWS: As of 2021, AWS has more than 200 services in its portfolio, ranging from compute power and storage to databases,
```

Awesome, we have successfully deployed Zephyr to Amazon SageMaker on Inferentia2 and chatted with it.

### 6. Clean up

To clean up, we can delete the model and endpoint.

```python
llm.delete_model()
llm.delete_endpoint()
```

## Conclusion

The integration of Hugging Face Text Generation Inference (TGI) with AWS Inferentia2 and Amazon SageMaker provides a cost-effective alternative for deploying Large Language Models (LLMs). We're actively working on supporting more models, streamlining the compilation process, and refining the caching system.

---

Thanks for reading! If you have any questions, feel free to contact me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
6
0
hf_public_repos
hf_public_repos/blog/quanto-diffusers.md
--- title: "Memory-efficient Diffusion Transformers with Quanto and Diffusers" thumbnail: /blog/assets/quanto-diffusers/thumbnail.png authors: - user: sayakpaul - user: dacorvo --- # Memory-efficient Diffusion Transformers with Quanto and Diffusers Over the past few months, we have seen an emergence in the use of Transformer-based diffusion backbones for high-resolution text-to-image (T2I) generation. These models use the transformer architecture as the building block for the diffusion process, instead of the UNet architecture that was prevalent in many of the initial diffusion models. Thanks to the nature of Transformers, these backbones show good scalability, with models ranging from 0.6B to 8B parameters. As models become larger, memory requirements increase. The problem intensifies because a diffusion pipeline usually consists of several components: a text encoder, a diffusion backbone, and an image decoder. Furthermore, modern diffusion pipelines use multiple text encoders – for example, there are three in the case of Stable Diffusion 3. It takes 18.765 GB of GPU memory to run SD3 inference using FP16 precision. These high memory requirements can make it difficult to use these models with consumer GPUs, slowing adoption and making experimentation harder. In this post, we show how to improve the memory efficiency of Transformer-based diffusion pipelines by leveraging Quanto's quantization utilities from the Diffusers library. ### Table of contents - [Preliminaries](#preliminaries) - [Quantizing a `DiffusionPipeline` with Quanto](#quantizing-a-diffusionpipeline-with-quanto) - [Generality of the observations](#generality-of-the-observations) - [Misc findings](#misc-findings) - [`bfloat16` as the main compute data-type](#bfloat16-is-usually-better-on-h100) - [The promise of `qint8`](#the-promise-of-qint8) - [INT4](#how-about-int4) - [Bonus - saving and loading Diffusers models in Quanto](#bonus---saving-and-loading-diffusers-models-in-quanto) - [Tips](#tips) - [Conclusion](#conclusion) ## Preliminaries For a detailed introduction to Quanto, please refer to [this post](https://huggingface.co/blog/quanto-introduction). In short, Quanto is a quantization toolkit built on PyTorch. It's part of [Hugging Face Optimum](https://github.com/huggingface/optimum), a set of tools for hardware optimization. Model quantization is a popular tool among LLM practitioners, but not so much with diffusion models. Quanto can help bridge this gap and provide memory savings with little or no quality degradation. For benchmarking purposes, we use an H100 GPU with the following environment: - CUDA 12.2 - PyTorch 2.4.0 - Diffusers (installed from [this commit](https://github.com/huggingface/diffusers/commit/bce9105ac79636f68dcfdcfc9481b89533db65e5)) - Quanto (installed from [this commit](https://github.com/huggingface/optimum-quanto/commit/285862b4377aa757342ed810cd60949596b4872b)) Unless otherwise specified, we default to performing computations in FP16. We chose not to quantize the VAE to prevent numerical instability issues. Our benchmarking code can be found [here](https://huggingface.co/datasets/sayakpaul/sample-datasets/blob/main/quanto-exps-2/benchmark.py). 
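If you want to take similar measurements yourself, the sketch below shows one way to record end-to-end latency and peak GPU memory around a pipeline call. It is a simplified stand-in for the benchmark script linked above (not the script itself), and it assumes you have access to the gated SD3 checkpoint:

```python
import time

import torch
from diffusers import StableDiffusion3Pipeline

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# Reset the peak-memory counter so we only capture the inference call on top of the loaded weights
torch.cuda.reset_peak_memory_stats()
start = time.time()

image = pipeline("ghibli style, a fantasy landscape with castles").images[0]
torch.cuda.synchronize()

print(f"Latency: {time.time() - start:.3f} s")
print(f"Peak memory: {torch.cuda.max_memory_allocated() / 1024**3:.3f} GB")
```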
At the time of this writing, we have the following Transformer-based diffusion pipelines for text-to-image generation in Diffusers: - [PixArt-Alpha](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pixart) and [PixArt-Sigma](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pixart_sigma) - [Stable Diffusion 3](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3) - [Hunyuan DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuandit) - [Lumina](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina) - [Aura Flow](https://huggingface.co/docs/diffusers/main/en/api/pipelines/aura_flow) We also have [Latte](https://huggingface.co/docs/diffusers/main/en/api/pipelines/latte), a Transformer-based text-to-video generation pipeline. For brevity, we keep our study limited to the following three: PixArt-Sigma, Stable Diffusion 3, and Aura Flow. The table below shows the parameter counts of their diffusion backbones: | **Model** | **Checkpoint** | **# Params (Billion)** | |:-----------------:|:--------------------------------------------------------:|:----------------------:| | PixArt | https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS | 0.611 | | Stable Diffusion 3| https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers | 2.028 | | Aura Flow | https://huggingface.co/fal/AuraFlow/ | 6.843 | <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 5px;"> It’s worth keeping in mind that this post primarily focuses on memory efficiency at a slight or negligible cost of inference latency. </div> ## Quantizing a `DiffusionPipeline` with Quanto Quantizing a model with Quanto is straightforward. ```python from optimum.quanto import freeze, qfloat8, quantize from diffusers import PixArtSigmaPipeline import torch pipeline = PixArtSigmaPipeline.from_pretrained( "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16 ).to("cuda") quantize(pipeline.transformer, weights=qfloat8) freeze(pipeline.transformer) ``` We call `quantize()` on the module to be quantized, specifying what we want to quantize. In the above case, we are just quantizing the parameters, leaving the activations as is. We’re quantizing to the FP8 data-type. We finally call `freeze()` to replace the original parameters with the quantized parameters. 
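Quanto can also quantize activations, which requires a short calibration pass so that activation ranges are recorded. We stick to weight-only quantization for the benchmarks in this post, but a minimal sketch of what activation quantization could look like is shown below (the calibration prompt and settings are illustrative assumptions):

```python
from optimum.quanto import Calibration, freeze, qint8, quantize

# Quantize both weights and activations this time
quantize(pipeline.transformer, weights=qint8, activations=qint8)

# Run a few representative prompts so that activation ranges are recorded
with Calibration():
    pipeline("ghibli style, a fantasy landscape with castles", num_inference_steps=4)

freeze(pipeline.transformer)
```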
We can then call this `pipeline` normally: ```python image = pipeline("ghibli style, a fantasy landscape with castles").images[0] ``` <table> <tr style="text-align: center;"> <th>FP16</th> <th>Diffusion Transformer in FP8</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptpixart-bs1-dtypefp16-qtypenone-qte0.png" width=512 alt="FP16 image."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptpixart-bs1-dtypefp16-qtypefp8-qte0.png" width=512 alt="FP8 quantized image."/></td> </tr> </table> We notice the following memory savings when using FP8, with slightly higher latency and almost no quality degradation: | **Batch Size** | **Quantization** | **Memory (GB)** | **Latency (Seconds)** | |:--------------:|:----------------:|:---------------:|:--------------------:| | 1 | None | 12.086 | 1.200 | | 1 | FP8 | **11.547** | 1.540 | | 4 | None | 12.087 | 4.482 | | 4 | FP8 | **11.548** | 5.109 | We can quantize the text encoder in the same way: ```python quantize(pipeline.text_encoder, weights=qfloat8) freeze(pipeline.text_encoder) ``` The text encoder is also a transformer model, and we can quantize it too. Quantizing both the text encoder and the diffusion backbone leads to much larger memory improvements: | **Batch Size** | **Quantization** | **Quantize TE** | **Memory (GB)** | **Latency (Seconds)** | |:--------------:|:----------------:|:---------------:|:---------------:|:--------------------:| | 1 | FP8 | False | 11.547 | 1.540 | | 1 | FP8 | True | **5.363** | 1.601 | | 4 | FP8 | False | 11.548 | 5.109 | | 4 | FP8 | True | **5.364** | 5.141 | Quantizing the text encoder produces results very similar to the previous case: ![ckpt@pixart-bs@1-dtype@fp16-qtype@[email protected]](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptpixart-bs1-dtypefp16-qtypefp8-qte1.png) ## Generality of the observations Quantizing the text encoder together with the diffusion backbone generally works for the models we tried. Stable Diffusion 3 is a special case, as it uses three different text encoders. 
We found that quantizing the _second_ text encoder does not work well, so we recommend the following alternatives: - Only quantize the first text encoder ([`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) or - Only quantize the third text encoder ([`T5EncoderModel`](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel)) or - Quantize the first and third text encoders The table below gives an idea about the expected memory savings for various text encoder quantization combinations (the diffusion transformer is quantized in all cases): | **Batch Size** | **Quantization** | **Quantize TE 1** | **Quantize TE 2** | **Quantize TE 3** | **Memory (GB)** | **Latency (Seconds)** | |:--------------:|:----------------:|:-----------------:|:-----------------:|:-----------------:|:---------------:|:--------------------:| | 1 | FP8 | 1 | 1 | 1 | 8.200 | 2.858 | | 1 ✅ | FP8 | 0 | 0 | 1 | 8.294 | 2.781 | | 1 | FP8 | 1 | 1 | 0 | 14.384 | 2.833 | | 1 | FP8 | 0 | 1 | 0 | 14.475 | 2.818 | | 1 ✅ | FP8 | 1 | 0 | 0 | 14.384 | 2.730 | | 1 | FP8 | 0 | 1 | 1 | 8.325 | 2.875 | | 1 ✅ | FP8 | 1 | 0 | 1 | 8.204 | 2.789 | | 1 | None | - | - | - | 16.403 | 2.118 | <table> <tr style="text-align: center;"> <th>Quantized Text Encoder: 1</th> <th>Quantized Text Encoder: 3</th> <th>Quantized Text Encoders: 1 and 3</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptsd3-bs1-dtypefp16-qtypefp8-qte1-first1.png" width=300 alt="Image with quantized text encoder 1."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptsd3-bs1-dtypefp16-qtypefp8-qte1-third1.png" width=300 alt="Image with quantized text encoder 3."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckptsd3-bs1-dtypefp16-qtypefp8-qte1-first1-third1%201.png" width=300 alt="Image with quantized text encoders 1 and 3."/></td> </tr> </table> ## Misc findings ### `bfloat16` is usually better on H100 Using `bfloat16` can be faster for supported GPU architectures, such as H100 or 4090. The table below presents some numbers for PixArt measured on our H100 reference hardware: | **Batch Size** | **Precision** | **Quantization** | **Memory (GB)** | **Latency (Seconds)** | **Quantize TE** | |:--------------:|:-------------:|:----------------:|:---------------:|:--------------------:|:---------------:| | 1 | FP16 | INT8 | 5.363 | 1.538 | True | | 1 | BF16 | INT8 | 5.364 | **1.454** | True | | 1 | FP16 | FP8 | 5.363 | 1.601 | True | | 1 | BF16 | FP8 | 5.363 | **1.495** | True | ### The promise of `qint8` We found quantizing with `qint8` (instead of `qfloat8`) is generally better in terms of inference latency. This effect gets more pronounced when we horizontally fuse the attention QKV projections (calling `fuse_qkv_projections()` in Diffusers), thereby thickening the dimensions of the int8 kernels to speed up computation. 
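Concretely, that setup could look like the following sketch, reusing the PixArt-Sigma pipeline from earlier (it assumes your Diffusers version exposes `fuse_qkv_projections()` on the transformer):

```python
from optimum.quanto import freeze, qint8, quantize

# Fuse the attention Q, K, and V projections into a single, wider linear layer
pipeline.transformer.fuse_qkv_projections()

# Quantize the fused diffusion transformer and the text encoder to int8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
quantize(pipeline.text_encoder, weights=qint8)
freeze(pipeline.text_encoder)
```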
We present some evidence below for PixArt: | **Batch Size** | **Quantization** | **Memory (GB)** | **Latency (Seconds)** | **Quantize TE** | **QKV Projection** | |:--------------:|:----------------:|:---------------:|:--------------------:|:---------------:|:------------------:| | 1 | INT8 | 5.363 | 1.538 | True | False | | 1 | INT8 | 5.536 | **1.504** | True | True | | 4 | INT8 | 5.365 | 5.129 | True | False | | 4 | INT8 | 5.538 | **4.989** | True | True | ### How about INT4? We additionally experimented with `qint4` when using `bfloat16`. This is only applicable to `bfloat16` on H100 because other configurations are not supported yet. With `qint4`, we can expect to see more improvements in memory consumption at the cost of increased inference latency. Increased latency is expected, because there is no native hardware support for int4 computation – the weights are transferred using 4 bits, but computation is still done in `bfloat16`. The table below shows our results for PixArt-Sigma: | **Batch Size** | **Quantize TE** | **Memory (GB)** | **Latency (Seconds)** | |:--------------:|:---------------:|:---------------:|:--------------------:| | 1 | No | 9.380 | 7.431 | | 1 | Yes | **3.058** | 7.604 | Note, however, that due to the aggressive discretization of INT4, the end results can take a hit. This is why, for Transformer-based models in general, we usually leave the final projection layer out of quantization. In Quanto, we do this by: ```python quantize(pipeline.transformer, weights=qint4, exclude="proj_out") freeze(pipeline.transformer) ``` `"proj_out"` corresponds to the final layer in `pipeline.transformer`. The table below presents results for various settings: <table> <tr style="text-align: center;"> <th>Quantize TE: No, Layer exclusion: None</th> <th>Quantize TE: No, Layer exclusion: "proj_out"</th> <th>Quantize TE: Yes, Layer exclusion: None</th> <th>Quantize TE: Yes, Layer exclusion: "proj_out"</th> </tr> <tr> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckpt%40pixart-bs%401-dtype%40bf16-qtype%40int4-qte%400-fuse%400.png" width=300 alt="Image 1 without text encoder quantization."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckpt%40pixart-bs%401-dtype%40bf16-qtype%40int4-qte%400-fuse%400-exclude%40proj_out.png" width=300 alt="Image 2 without text encoder quantization but with proj_out excluded in diffusion transformer quantization."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckpt%40pixart-bs%401-dtype%40bf16-qtype%40int4-qte%401-fuse%400.png" width=300 alt="Image 3 with text encoder quantization."/></td> <td><img class="mx-auto" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/quanto-diffusers/ckpt%40pixart-bs%401-dtype%40bf16-qtype%40int4-qte%401-fuse%400-exclude%40proj_out.png" width=300 alt="Image 3 with text encoder quantization but with proj_out excluded in diffusion transformer quantization.."/></td> </tr> </table> To recover the lost image quality, a common practice is to perform quantization-aware training, which is also supported in Quanto. This technique is out of the scope of this post, feel free to contact us if you're interested! 
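Putting the INT4 pieces together, the full setup described above might look like this sketch (a recap under the same assumptions: `bfloat16` compute on H100, with the final projection layer excluded):

```python
import torch
from diffusers import PixArtSigmaPipeline
from optimum.quanto import freeze, qint4, quantize

pipeline = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.bfloat16
).to("cuda")

# Quantize the diffusion transformer to int4, keeping the final projection layer in higher precision
quantize(pipeline.transformer, weights=qint4, exclude="proj_out")
freeze(pipeline.transformer)

# Optionally quantize the text encoder as well for additional memory savings
quantize(pipeline.text_encoder, weights=qint4)
freeze(pipeline.text_encoder)
```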
All the results of our experiments for this post can be found [here](https://huggingface.co/datasets/sayakpaul/sample-datasets/tree/main/quanto-exps-2). ## Bonus - saving and loading Diffusers models in Quanto Quantized Diffusers models can be saved and loaded: ```python from diffusers import PixArtTransformer2DModel from optimum.quanto import QuantizedPixArtTransformer2DModel, qfloat8 model = PixArtTransformer2DModel.from_pretrained("PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", subfolder="transformer") qmodel = QuantizedPixArtTransformer2DModel.quantize(model, weights=qfloat8) qmodel.save_pretrained("pixart-sigma-fp8") ``` The resulting checkpoint is ***587MB*** in size, instead of the original 2.44GB. We can then load it: ```python from optimum.quanto import QuantizedPixArtTransformer2DModel import torch transformer = QuantizedPixArtTransformer2DModel.from_pretrained("pixart-sigma-fp8") transformer.to(device="cuda", dtype=torch.float16) ``` And use it in a `DiffusionPipeline`: ```python from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", transformer=None, torch_dtype=torch.float16, ).to("cuda") pipe.transformer = transformer prompt = "A small cactus with a happy face in the Sahara desert." image = pipe(prompt).images[0] ``` In the future, we can expect to pass the `transformer` directly when initializing the pipeline so that this will work: ```diff pipe = PixArtSigmaPipeline.from_pretrained( "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", - transformer=None, + transformer=transformer, torch_dtype=torch.float16, ).to("cuda") ``` `QuantizedPixArtTransformer2DModel` implementation is available [here](https://github.com/huggingface/optimum-quanto/blob/601dc193ce0ed381c479fde54a81ba546bdf64d1/optimum/quanto/models/diffusers_models.py#L184) for reference. If you want more models from Diffusers supported in Quanto for saving and loading, please open an issue [here](https://github.com/huggingface/optimum-quanto/issues/new) and mention `@sayakpaul`. ## Tips - Based on your requirements, you may want to apply different types of quantization to different pipeline modules. For example, you could use FP8 for the text encoder but INT8 for the diffusion transformer. Thanks to the flexibility of Diffusers and Quanto, this can be done seamlessly. - To optimize for your use cases, you can even combine quantization with other [memory optimization techniques](https://huggingface.co/docs/diffusers/main/en/optimization/memory) in Diffusers, such as `enable_model_cpu_offload()`. ## Conclusion In this post, we showed how to quantize Transformer models from Diffusers and optimize their memory consumption. The effects of quantization become more visible when we additionally quantize the text encoders involved in the mix. We hope you will apply some of the workflows to your projects and benefit from them 🤗. Thanks to [Pedro Cuenca](https://github.com/pcuenca) for his extensive reviews on the post.
7
0
hf_public_repos
hf_public_repos/blog/llama31.md
--- title: "Llama 3.1 - 405B, 70B & 8B with multilinguality and long context" thumbnail: /blog/assets/llama31/thumbnail.jpg authors: - user: philschmid - user: osanseviero - user: alvarobartt - user: lvwerra - user: dvilasuero - user: reach-vb - user: marcsun13 - user: pcuenq --- # Llama 3.1 - 405B, 70B & 8B with multilinguality and long context Llama 3.1 is out! Today we welcome the next iteration of the Llama family to Hugging Face. We are excited to collaborate with Meta to ensure the best integration in the Hugging Face ecosystem. Eight open-weight models (3 base models and 5 fine-tuned ones) are available on the Hub. Llama 3.1 comes in three sizes: 8B for efficient deployment and development on consumer-size GPU, 70B for large-scale AI native applications, and 405B for synthetic data, LLM as a Judge or distillation. All three come in base and instruction-tuned variants. In addition to the six generative models, Meta released two new models: Llama Guard 3 and Prompt Guard. Prompt Guard is a small classifier that detects prompt injections and jailbreaks. Llama Guard 3 is a safeguard model that can classify LLM inputs and generations. Among the features and integrations being released, we have: * [Models on the Hub](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f) * Hugging Face Transformers and TGI integration * [Hugging Chat integration for Meta Llama 3.1 405B Instruct](https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-405b-instruct/) * Inference & Deployment Integration with Inference Endpoints, Google Cloud, Amazon SageMaker & DELL Enterprise Hub * Quantization for FP8, AWQ and GPTQ for easier inference * Fine-tuning Llama 3.1 8B on a single GPU with 🤗 TRL * Generate synthetic data using Llama 3.1 70B and 405B with Distilabel ## Table of contents - [What’s new with Llama 3.1?](#whats-new-with-llama-31) - [How much memory does Llama 3.1 need?](#how-much-memory-does-llama-31-need) - [Inference Memory Requirements](#inference-memory-requirements) - [Training Memory Requirements](#training-memory-requirements) - [Llama 3.1 evaluation](#llama-31-evaluation) - [Using Hugging Face Transformers](#using-hugging-face-transformers) - [How to prompt Llama 3.1](#how-to-prompt-llama-31) - [Built-in Tool calling](#built-in-tool-calling) - [Custom Tool calling](#custom-tool-calling) - [Demo](#demo) - [Llama 3.1 405B quantization with FP8, AWQ, and GPTQ](#llama-31-405b-quantization-with-fp8-awq-and-gptq) - [Inference Integrations](#inference-integrations) - [Hugging Face Inference API](#hugging-face-inference-api) - [Hugging Face Inference Endpoints](#hugging-face-inference-endpoints) - [Hugging Face Partner Integrations](#hugging-face-partner-integrations) - [Fine-tuning with Hugging Face TRL](#fine-tuning-with-hugging-face-trl) - [Synthetic data generation with distilabel](#synthetic-data-generation-with-distilabel) - [Additional Resources](#additional-resources) - [Acknowledgments](#acknowledgments) ## What’s new with Llama 3.1? Why is Llama 3.1 so exciting? On top of the features the predecessor offers, Llama 3.1 has some key new features: * A large context length of 128K tokens (vs original 8K) * Multilingual capabilities * Tool usage capabilities * A very large dense model of 405 billion parameters * A more permissive license Let’s dive into these! The Llama 3.1 release introduces six new open LLM models based on the Llama 3 architecture. They come in three sizes: 8B, 70B, and 405B parameters, each with base (pre-trained) and instruct-tuned versions. 
All the variants support a context length of **128K tokens** and 8 languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Llama 3.1 continues to use Grouped-Query Attention (GQA), an efficient representation that should help with longer contexts.

* [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B): Base 8B model
* [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Instruct fine-tuned version of the base 8B model
* [Meta-Llama-3.1-70B](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B): Base 70B model
* [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct): Instruct fine-tuned version of the base 70B model
* [Meta-Llama-3.1-405B](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B): Base 405B model
* [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct): Instruct fine-tuned version of the base 405B model

In addition to these 6 language models, Llama Guard 3 and Prompt Guard were released.

* [Llama Guard 3](https://huggingface.co/meta-llama/Llama-Guard-3-8B) is the latest iteration in the Llama Guard family, fine-tuned on Llama 3.1 8B. It is built for production use cases, with a 128k context length and multilingual capabilities. Llama Guard 3 can classify LLM inputs (prompts) and responses to detect content that would be considered unsafe in a risk taxonomy.
* [Prompt Guard](https://huggingface.co/meta-llama/Prompt-Guard-86M), on the other hand, is a small 279M parameter BERT-based classifier that can detect prompt injection and jailbreaking. It was trained on a large corpus of attacks and is suggested to be further fine-tuned with application-specific data.

New in Llama 3.1 compared to Llama 3 is that the instruct models are fine-tuned on tool calling for agentic use cases. There are two built-in tools (search, mathematical reasoning with Wolfram Alpha) that can be expanded with custom JSON functions.

The Llama 3.1 models were trained on over 15 trillion tokens on a custom-built GPU cluster with a total of 39.3M GPU hours (1.46M for 8B, 7.0M for 70B, 30.84M for 405B). We don't know the exact details of the training dataset mix, and we can only guess it has a more diverse curation for multilingualism. Llama 3.1 Instruct has been optimized for instruction following and was trained on publicly available instruction datasets, as well as over 25M synthetically generated examples, with supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). Meta developed LLM-based classifiers to filter and curate high-quality prompts and responses during the creation of the data mix.

Regarding the licensing terms, Llama 3.1 comes with a very similar license with one key difference: **it allows model outputs to be used to improve other LLMs**. This means that synthetic data generation and distillation are allowed, even with different models! This is especially important for the 405B model, as discussed later. The license allows for redistribution, fine-tuning, and creation of derivative work; it still requires derived models to include "Llama" at the beginning of their name, and any derivative works or services must mention "Built with Llama". For full details, please make sure to read the [official license](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/blob/main/LICENSE).

## How much memory does Llama 3.1 need?

Llama 3.1 brings exciting advancements.
However, running it requires careful consideration of your hardware resources. We broke down the memory requirements for both training and inference across the three model sizes.

### Inference Memory Requirements

For inference, the memory requirements depend on the model size and the precision of the weights. Here's a table showing the approximate memory needed for different configurations:

<table>
  <tr>
   <td><strong>Model Size</strong></td>
   <td><strong>FP16</strong></td>
   <td><strong>FP8</strong></td>
   <td><strong>INT4</strong></td>
  </tr>
  <tr>
   <td>8B</td>
   <td>16 GB</td>
   <td>8 GB</td>
   <td>4 GB</td>
  </tr>
  <tr>
   <td>70B</td>
   <td>140 GB</td>
   <td>70 GB</td>
   <td>35 GB</td>
  </tr>
  <tr>
   <td>405B</td>
   <td>810 GB</td>
   <td>405 GB</td>
   <td>203 GB</td>
  </tr>
</table>

_Note: The above-quoted numbers indicate the GPU VRAM required just to load the model checkpoint. They don't include torch reserved space for kernels or CUDA graphs._

As an example, an H100 node (of 8x H100) has ~640GB of VRAM, so the 405B model would need to be run in a multi-node setup or run at a lower precision (e.g. FP8), which would be the recommended approach.

Keep in mind that lower precision (e.g., INT4) may result in some loss of accuracy but can significantly reduce memory requirements and increase inference speed.

In addition to the model weights, you will also need to keep the KV Cache in memory. It contains keys and values of all the tokens in the model's context such that they don't need to be recomputed when generating a new token. Especially when making use of the long available context length, it becomes a significant factor. In FP16, the KV cache memory requirements are:

<table>
  <tr>
   <td><strong>Model Size</strong></td>
   <td><strong>1k tokens</strong></td>
   <td><strong>16k tokens</strong></td>
   <td><strong>128k tokens</strong></td>
  </tr>
  <tr>
   <td>8B</td>
   <td>0.125 GB</td>
   <td>1.95 GB</td>
   <td>15.62 GB</td>
  </tr>
  <tr>
   <td>70B</td>
   <td>0.313 GB</td>
   <td>4.88 GB</td>
   <td>39.06 GB</td>
  </tr>
  <tr>
   <td>405B</td>
   <td>0.984 GB</td>
   <td>15.38 GB</td>
   <td>123.05 GB</td>
  </tr>
</table>

For the small model in particular, the KV cache uses about as much memory as the model weights themselves when approaching the maximum context length.

### Training Memory Requirements

The following table outlines the approximate memory requirements for training Llama 3.1 models using different techniques:

<table>
  <tr>
   <td><strong>Model Size</strong></td>
   <td><strong>Full Fine-tuning</strong></td>
   <td><strong>LoRA</strong></td>
   <td><strong>Q-LoRA</strong></td>
  </tr>
  <tr>
   <td>8B</td>
   <td>60 GB</td>
   <td>16 GB</td>
   <td>6 GB</td>
  </tr>
  <tr>
   <td>70B</td>
   <td>500 GB</td>
   <td>160 GB</td>
   <td>48 GB</td>
  </tr>
  <tr>
   <td>405B</td>
   <td>3.25 TB</td>
   <td>950 GB</td>
   <td>250 GB</td>
  </tr>
</table>

_Note: These are estimated values and may vary based on specific implementation details and optimizations._

## Llama 3.1 evaluation

_Note: We are currently evaluating Llama 3.1 individually on the new [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) and will update this section later today.
Below is an excerpt from the official evaluation from Meta._ <table> <tr> <td><strong><em>Category</em></strong> </td> <td><strong><em>Benchmark</em></strong> </td> <td><strong><em># Shots</em></strong> </td> <td><strong><em>Metric</em></strong> </td> <td><strong><em>Llama 3 8B</em></strong> </td> <td><strong><em>Llama 3.1 8B</em></strong> </td> <td><strong><em>Llama 3 70B</em></strong> </td> <td><strong><em>Llama 3.1 70B</em></strong> </td> <td><strong><em>Llama 3.1 405B</em></strong> </td> </tr> <tr> <td><em>General</em> </td> <td><em>MMLU</em> </td> <td><em>5</em> </td> <td><em>macro_avg/acc_char</em> </td> <td><em>66.7</em> </td> <td><em>66.7</em> </td> <td><em>79.5</em> </td> <td><em>79.3</em> </td> <td><em>85.2</em> </td> </tr> <tr> <td> </td> <td><em>MMLU PRO (CoT)</em> </td> <td><em>5</em> </td> <td><em>macro_avg/acc_char</em> </td> <td><em>36.2</em> </td> <td><em>37.1</em> </td> <td><em>55.0</em> </td> <td><em>53.8</em> </td> <td><em>61.6</em> </td> </tr> <tr> <td> </td> <td><em>AGIEval English</em> </td> <td><em>3-5</em> </td> <td><em>average/acc_char</em> </td> <td><em>47.1</em> </td> <td><em>47.8</em> </td> <td><em>63.0</em> </td> <td><em>64.6</em> </td> <td><em>71.6</em> </td> </tr> <tr> <td> </td> <td><em>CommonSenseQA</em> </td> <td><em>7</em> </td> <td><em>acc_char</em> </td> <td><em>72.6</em> </td> <td><em>75.0</em> </td> <td><em>83.8</em> </td> <td><em>84.1</em> </td> <td><em>85.8</em> </td> </tr> <tr> <td> </td> <td><em>Winogrande</em> </td> <td><em>5</em> </td> <td><em>acc_char</em> </td> <td><em>-</em> </td> <td><em>60.5</em> </td> <td><em>-</em> </td> <td><em>83.3</em> </td> <td><em>86.7</em> </td> </tr> <tr> <td> </td> <td><em>BIG-Bench Hard (CoT)</em> </td> <td><em>3</em> </td> <td><em>average/em</em> </td> <td><em>61.1</em> </td> <td><em>64.2</em> </td> <td><em>81.3</em> </td> <td><em>81.6</em> </td> <td><em>85.9</em> </td> </tr> <tr> <td> </td> <td><em>ARC-Challenge</em> </td> <td><em>25</em> </td> <td><em>acc_char</em> </td> <td><em>79.4</em> </td> <td><em>79.7</em> </td> <td><em>93.1</em> </td> <td><em>92.9</em> </td> <td><em>96.1</em> </td> </tr> <tr> <td><em>Knowledge reasoning</em> </td> <td><em>TriviaQA-Wiki</em> </td> <td><em>5</em> </td> <td><em>em</em> </td> <td><em>78.5</em> </td> <td><em>77.6</em> </td> <td><em>89.7</em> </td> <td><em>89.8</em> </td> <td><em>91.8</em> </td> </tr> <tr> <td> </td> <td><em>SQuAD</em> </td> <td><em>1</em> </td> <td><em>em</em> </td> <td><em>76.4</em> </td> <td><em>77.0</em> </td> <td><em>85.6</em> </td> <td><em>81.8</em> </td> <td><em>89.3</em> </td> </tr> <tr> <td><em>Reading comprehension</em> </td> <td><em>QuAC (F1)</em> </td> <td><em>1</em> </td> <td><em>f1</em> </td> <td><em>44.4</em> </td> <td><em>44.9</em> </td> <td><em>51.1</em> </td> <td><em>51.1</em> </td> <td><em>53.6</em> </td> </tr> <tr> <td> </td> <td><em>BoolQ</em> </td> <td><em>0</em> </td> <td><em>acc_char</em> </td> <td><em>75.7</em> </td> <td><em>75.0</em> </td> <td><em>79.0</em> </td> <td><em>79.4</em> </td> <td><em>80.0</em> </td> </tr> <tr> <td> </td> <td><em>DROP (F1)</em> </td> <td><em>3</em> </td> <td><em>f1</em> </td> <td><em>58.4</em> </td> <td><em>59.5</em> </td> <td><em>79.7</em> </td> <td><em>79.6</em> </td> <td><em>84.8</em> </td> </tr> </table> ## Using Hugging Face Transformers Llama 3.1 requires a minor modeling update to handle RoPE scaling effectively. 
With Transformers [release 4.43.2](https://github.com/huggingface/transformers/releases/tag/v4.43.2), you can use the new Llama 3.1 models and leverage all the tools within the Hugging Face ecosystem. Make sure to use the latest `transformers` release:

```bash
pip install "transformers>=4.43.2" --upgrade
```

A couple of details:

* Transformers loads the model in bfloat16 by default. This is the type used by the original checkpoint published by Meta, so it's the recommended way to run inference to ensure the best precision or to conduct evaluations.
* Assistant responses may end with the special token `<|eot_id|>`, but we must also stop generation if the regular EOS token is found. We can stop generation early by providing a list of terminators in the `eos_token_id` parameter.
* We used the default sampling parameters (`temperature` and `top_p`) taken from the original meta codebase. We haven't had time to conduct extensive tests yet, feel free to explore!

The following snippet shows how to use `meta-llama/Meta-Llama-3.1-8B-Instruct`. It requires about 16 GB of VRAM, which fits many consumer GPUs. The same snippet works for `meta-llama/Meta-Llama-3.1-70B-Instruct` (requiring about 140 GB of VRAM) and `meta-llama/Meta-Llama-3.1-405B-Instruct` (requiring about 810 GB of VRAM), which makes them very interesting models for production use cases. Memory consumption can be further reduced by loading in 8-bit or 4-bit mode.

```python
from transformers import pipeline
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
    do_sample=False,
)
assistant_response = outputs[0]["generated_text"][-1]["content"]
print(assistant_response)
# Arrrr, me hearty! Yer lookin' fer a bit o' information about meself, eh? Alright then, matey! I be a language-generatin' swashbuckler, a digital buccaneer with a penchant fer spinnin' words into gold doubloons o' knowledge! Me name be... (dramatic pause)...Assistant! Aye, that be me name, and I be here to help ye navigate the seven seas o' questions and find the hidden treasure o' answers! So hoist the sails and set course fer adventure, me hearty! What be yer first question?
```

You can also automatically quantize the model, loading it in 8-bit or even 4-bit mode with bitsandbytes. 4-bit loading of the large 70B version takes about 34 GB of memory to run. This is how you'd load the generation pipeline in 4-bit:

```python
pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={
        "torch_dtype": torch.bfloat16,
        "quantization_config": {"load_in_4bit": True}
    },
)
```

For more details on using the models with `transformers`, please check [the model cards](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).

_Note: Transformers takes care of all pesky prompt template issues and more. If you want to know more about prompting, check out the next section._

## How to prompt Llama 3.1

The base models have no prompt format. Like other base models, they can be used to continue an input sequence with a plausible continuation or for zero-shot/few-shot inference. They are also a great foundation for fine-tuning your own use cases.

The Instruct versions support conversational format with 4 roles:

1. **system:** Sets the context for the conversation.
It allows including rules, guidelines, or necessary information that help to respond effectively. It’s also used to enable tool use when appropriate. 2. **user:** User inputs, commands, and questions for the models. 3. **assistant:** The assistant's response, based on the context provided in the ‘system’ and ‘user’ prompts. 4. **ipython:** A new role introduced in Llama 3.1. This role is used as the output of a tool call when sent back to the LLM. The Instruct versions use the following conversation structure for simple conversations: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|> ``` Llama 3.1 Instruct models now support tool calling, including three built-in tools (brave_search, wolfram_alpha, and code_interpreter) and custom tool calling via JSON function calling. The built-in tools use Python syntax. The ability to output Python code for function calling is part of the code interpreter tool, which must be enabled in the system prompt using the `Environment` keyword, as shown below. ### Built-in Tool calling Including “Environment: ipython” turns on the code interpreter mode, and the model can generate Python code that it expects to be executed. The message body of the assistant response starts with a special tag `<|python_tag|>` and ends with `<|eom_id|>` instead of just the standard `<|eot_id|>`. The latter indicates the turn is finished, while the former indicates continued multi-step reasoning. <details close> <summary>Built-in tool calling example</summary> ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Environment: ipython Tools: brave_search, wolfram_alpha Cutting Knowledge Date: 01 March 2023 Today's Date: 13 July 2024 You are a helpful Assistant.<|eot_id|><|start_header_id|>user<|end_header_id|> Weather in Menlo Park, California<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` The response from the model at this point would include Python code to call one of the supported tools (`brave_search` in this case): ``` <|python_tag|>brave_search.call(query="current weather in Menlo Park, California")<|eom_id|> ``` The response from executing the call is then sent back to the model to retrieve the final response. For brevity, the following would be appended to the message shown in the previous snippet: ``` <|python_tag|>brave_search.call(query="Menlo Park California weather")<|eom_id|><|start_header_id|>ipython<|end_header_id|> {"query": "Menlo Park California weather", "top_k": [{"title": "10-Day Weather Forecast for West Menlo Park, CA - The Weather Channel | weather.com", "url": "https://weather.com/weather/tenday/l/West+Menlo+Park+CA?canonicalCityId=b2375713aa1943aad7d1a13a85e1c0adad13c1b10563b2bbaad70734dc61cf11", "description": "Be prepared with the most accurate 10-day forecast for West <strong>Menlo</strong> <strong>Park</strong>, CA with highs, lows, chance of precipitation from The <strong>Weather</strong> Channel and <strong>Weather</strong>.com", "type": "search_result"},....}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` The final response from the LLM would then be: ``` The current weather in Menlo Park, California is mostly sunny with a high of 77°F and a low of 56°F.<|eot_id|> ``` </details> ## Custom Tool calling Llama 3.1 Instruct supports custom function calls from a single user message. 
The following prompts provide an example of how custom functions can be called from the output of the model. In custom function calling, the model outputs `<|eot_id|>` instead of `<|eom_id|>`. The system prompt needs to be adjusted to inform the model how to deal with function call outputs.

<details close>
<summary>Custom Tool Calling JSON Functions</summary>

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.<|eot_id|><|start_header_id|>user<|end_header_id|>

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{
    "type": "function",
    "function": {
        "name": "get_current_conditions",
        "description": "Get the current weather conditions for a specific location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["Celsius", "Fahrenheit"],
                    "description": "The temperature unit to use. Infer this from the user's location."
                }
            },
            "required": ["location", "unit"]
        }
    }
}

Question: what is the weather like in Menlo Park?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{"name": "get_current_conditions", "parameters": {"location": "Menlo Park, CA", "unit": "Fahrenheit"}}<|eot_id|><|start_header_id|>ipython<|end_header_id|>
```

When we retrieve the output from the selected tool, we pass it back to the model using the same `<|python_tag|>` delimiter. `<|python_tag|>` does not imply Python use. It's only meant to signal the beginning of outputs from any tool.

```
<|python_tag|>{
    "tool_call_id": "get_current_conditions",
    "output": "Clouds giving way to sun Hi: 76° Tonight: Mainly clear early, then areas of low clouds forming Lo: 56°"
}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

The weather in Menlo Park is currently cloudy with a high of 76° and a low of 56°, with clear skies expected tonight.<|eot_id|>
```

This format has to be exactly reproduced for effective use. The chat template available in transformers makes it straightforward to format the prompt correctly.

</details>

## Demo

You can experiment with the three Instruct models in the following demos:

* Hugging Chat with Llama 3.1 405B [https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-405b-instruct/](https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-405b-instruct/)
* Hugging Chat with Llama 3.1 70B [https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-70b-instruct/](https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-70b-instruct/)
* Gradio-powered Space with Llama 3.1 8B demo [https://huggingface.co/spaces/ysharma/Chat_with_Meta_llama3_1_8b](https://huggingface.co/spaces/ysharma/Chat_with_Meta_llama3_1_8b)

The whole stack is open-source. Hugging Chat is powered by [chat-ui](https://github.com/huggingface/chat-ui) and [text-generation-inference](https://github.com/huggingface/text-generation-inference).

## Llama 3.1 405B quantization with FP8, AWQ, and GPTQ

Meta created an [official FP8 quantized version of Llama 3.1 405B](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) with minimal accuracy degradation.
To achieve this, FP8 quantization was only applied to the major linear operators of the model, such as the gate and up and down projections for the FFNs (covering 75% of the inference FLOPs). We worked together to ensure that this FP8 quantization checkpoint is compatible across the community (transformers, TGI, vLLM).

Additionally, we created AWQ and GPTQ quantized variants in INT4 with AutoAWQ and AutoGPTQ, respectively. For AWQ, all the linear layers were quantized using the GEMM kernels performing zero-point quantization down to 4 bits with a group size of 128; for GPTQ, the same settings were used with the GPTQ kernels instead. We ensured that the INT4 checkpoints are compatible with transformers and TGI, including Marlin kernel support to speed up inference in TGI for the GPTQ quants.

Available quantized weights for Llama 3.1 405B:

* [meta-llama/Meta-Llama-3.1-405B-Base-FP8](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-FP8): Official FP8 quantized weights, can be run on 8xH100
* [meta-llama/Meta-Llama-3.1-405B-Instruct-FP8](https://huggingface.co/sllhf/Meta-Llama-3.1-405B-Instruct-FP8): Official FP8 quantized weights, can be run on 8xH100
* [hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4): Hugging Face quantized weights, can run on 8xA100 80GB, 8xH100 80GB & 8xA100 40GB (with a reduced KV-cache and without CUDA graphs)
* [hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4): Hugging Face quantized weights, can run on 8xA100 80GB, 8xH100 80GB & 8xA100 40GB (with a reduced KV-cache and without CUDA graphs)
* [hugging-quants/Meta-Llama-3.1-405B-BNB-NF4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-BNB-NF4): Hugging Face quantized weights, suitable for QLoRA finetuning
* [hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4): Hugging Face quantized weights, suitable for inference on 8xA100 & 4xH100

The [Hugging Quants organization](https://huggingface.co/hugging-quants) contains quantized checkpoints for the 70B and 8B version as well.

## Inference Integrations

### Hugging Face Inference API

[Hugging Face PRO users now have access to exclusive API endpoints](https://huggingface.co/blog/inference-pro) hosting Llama 3.1 8B Instruct, Llama 3.1 70B Instruct and Llama 3.1 405B Instruct AWQ powered by [text-generation-inference](https://github.com/huggingface/text-generation-inference). All versions support the Messages API, so they are compatible with OpenAI client libraries, including LangChain and LlamaIndex.

_Note: Update to the latest `huggingface_hub` version with `pip install "huggingface_hub>=0.24.1"`._

```python
from huggingface_hub import InferenceClient

# Initialize the client, pointing it to one of the available models
client = InferenceClient()

chat_completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
    messages=[
        {"role": "system", "content": "You are a helpful and honest programming assistant."},
        {"role": "user", "content": "Is Rust better than Python?"},
    ],
    stream=True,
    max_tokens=500
)

# iterate and print stream
for message in chat_completion:
    print(message.choices[0].delta.content, end="")
```

For more details about the use of the Messages API, please [check this post](https://huggingface.co/blog/tgi-messages-api).
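Because the Messages API is OpenAI-compatible, you can also use the official `openai` Python client instead of `huggingface_hub`. A minimal sketch follows; the placeholder base URL and the `model="tgi"` convention follow the Messages API post linked above and should be adapted to however you are serving the model:

```python
from openai import OpenAI

# Point the OpenAI client at a TGI deployment that exposes the OpenAI-compatible Messages API
client = OpenAI(
    base_url="<ENDPOINT_URL>/v1/",  # e.g. your endpoint URL plus /v1/
    api_key="<HF_TOKEN>",           # a Hugging Face token with access to the endpoint
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful and honest programming assistant."},
        {"role": "user", "content": "Is Rust better than Python?"},
    ],
    stream=True,
    max_tokens=500,
)

# iterate and print the stream
for message in chat_completion:
    print(message.choices[0].delta.content, end="")
```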
### Hugging Face Inference Endpoints

You can deploy Llama 3.1 on Hugging Face's [Inference Endpoints](https://ui.endpoints.huggingface.co/), which uses Text Generation Inference as the backend. [Text Generation Inference](https://github.com/huggingface/text-generation-inference) is a production-ready inference container developed by Hugging Face with support for FP8, continuous batching, token streaming, and tensor parallelism for fast inference on multiple GPUs.

To deploy Llama 3.1, go to the [model page](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) and click on the Deploy -> Inference Endpoints widget:

* [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) is recommended on 1x NVIDIA A10G or L4 GPUs
* [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) is recommended on 4x NVIDIA A100 or as AWQ/GPTQ quantized on 2x A100s
* [Meta-Llama-3.1-405B-Instruct-FP8](https://huggingface.co/sllhf/Meta-Llama-3.1-405B-Instruct-FP8) is recommended on 8x NVIDIA H100 in FP8 or as [AWQ](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4)/[GPTQ](https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4) quantized on 8x A100s

```python
from huggingface_hub import InferenceClient

# Initialize the client, pointing it to one of the available models
client = InferenceClient(
    base_url="<ENDPOINT_URL>",
)

# Create a chat completion
chat_completion = client.chat.completions.create(
    model="ENDPOINT",
    messages=[
        {"role": "system", "content": "You are a helpful and honest programming assistant."},
        {"role": "user", "content": "Is Rust better than Python?"},
    ],
    stream=True,
    max_tokens=500
)

# iterate and print stream
for message in chat_completion:
    print(message.choices[0].delta.content, end="")
```

## Hugging Face Partner Integrations

_Note: We are currently working with our partners at AWS, Google Cloud, Microsoft Azure and DELL on adding Llama 3.1 8B, 70B, and 405B to Amazon SageMaker, Google Kubernetes Engine, Vertex AI Model Catalog, Azure AI Studio, DELL Enterprise Hub. We will update this section as soon as the containers are available - you can [subscribe to Hugging Squad for email updates](https://mailchi.mp/huggingface/squad)._

## Fine-tuning with Hugging Face TRL

In this section, we'll look at the tools available in the Hugging Face ecosystem to efficiently train Llama 3.1 on consumer-size GPUs. An example command to fine-tune Llama 3.1 8B on OpenAssistant's [chat dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) can be found below. We use 4-bit quantization and [QLoRA](https://arxiv.org/abs/2305.14314) to conserve memory, targeting all the attention blocks' linear layers.
<details close>
<summary>Fine-Tuning Example with Hugging Face TRL</summary>

First, install the nightly version of 🤗 TRL and clone the repo to access the [training script](https://github.com/huggingface/trl/blob/main/examples/scripts/sft.py):

```
pip install "transformers>=4.43.2" --upgrade
pip install --upgrade bitsandbytes
pip install --upgrade peft
pip install git+https://github.com/huggingface/trl
git clone https://github.com/huggingface/trl
cd trl
```

Then you can run the script:

```
python \
    examples/scripts/sft.py \
    --model_name meta-llama/Meta-Llama-3.1-8B \
    --dataset_name OpenAssistant/oasst_top1_2023-08-25 \
    --dataset_text_field="text" \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --learning_rate 2e-4 \
    --report_to "none" \
    --bf16 \
    --max_seq_length 1024 \
    --lora_r 16 --lora_alpha 32 \
    --lora_target_modules q_proj k_proj v_proj o_proj \
    --load_in_4bit \
    --use_peft \
    --attn_implementation "flash_attention_2" \
    --logging_steps=10 \
    --gradient_checkpointing \
    --output_dir llama31
```

If you have more GPUs to spare, you can run training with DeepSpeed and ZeRO Stage 3:

```
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml \
    examples/scripts/sft.py \
    --model_name meta-llama/Meta-Llama-3.1-8B \
    --dataset_name OpenAssistant/oasst_top1_2023-08-25 \
    --dataset_text_field="text" \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --learning_rate 2e-5 \
    --report_to wandb \
    --bf16 \
    --max_seq_length 1024 \
    --attn_implementation eager \
    --logging_steps=10 \
    --gradient_checkpointing \
    --output_dir models/llama
```

</details>

## Synthetic data generation with distilabel

A big change in Llama 3.1's license is that it allows using model outputs to improve other LLMs, which means you can generate synthetic datasets with Llama 3.1 models and use them to fine-tune smaller, more specialized models.

Let's look at an example of how to generate a preference dataset with [distilabel](https://github.com/argilla-io/distilabel), an open-source framework for synthetic data generation. This dataset can be used to fine-tune models with the preference optimization methods offered by TRL like DPO or KTO.

First install the latest `distilabel` release including the `hf-inference-endpoints` extra with `pip` as follows:

```bash
pip install "distilabel[hf-inference-endpoints]" --upgrade
```

Then define a pipeline that:

* loads a dataset with instructions from the Hugging Face Hub.
* generates a response with Llama 3.1 70B Instruct and Llama 3.1 405B Instruct via Hugging Face Inference Endpoints.
* finally, uses Llama 3.1 405B Instruct as a judge to rate the responses using UltraFeedback prompts. From these ratings, chosen and rejected responses can be selected and used to fine-tune a model with preference optimization methods.

See the code below to define the pipeline or run it yourself using this [Colab notebook](https://colab.research.google.com/drive/1o0ALge7DHBmcKgdyrk59yOL70tcGS3v4?usp=sharing) and explore the generated dataset in the Hub.
```python
from distilabel.llms import InferenceEndpointsLLM
from distilabel.pipeline import Pipeline
from distilabel.steps import LoadDataFromHub, CombineColumns
from distilabel.steps.tasks import TextGeneration, UltraFeedback

llama70B = InferenceEndpointsLLM(
    model_id="meta-llama/Meta-Llama-3.1-70B-Instruct"
)
llama405B = InferenceEndpointsLLM(
    model_id="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8"
)

with Pipeline(name="synthetic-data-with-llama3") as pipeline:
    # load dataset with prompts
    load_dataset = LoadDataFromHub(
        repo_id="argilla/10Kprompts-mini"
    )
    # generate two responses for each prompt
    generate = [
        TextGeneration(llm=llama70B),
        TextGeneration(llm=llama405B)
    ]
    # combine responses into one column
    combine = CombineColumns(
        columns=["generation", "model_name"],
        output_columns=["generations", "model_names"]
    )
    # rate responses with 405B LLM-as-a-judge
    rate = UltraFeedback(aspect="overall-rating", llm=llama405B)
    # define the pipeline
    load_dataset >> generate >> combine >> rate

if __name__ == "__main__":
    distiset = pipeline.run()
```

What's next? Besides the example above, `distilabel` comes with exciting approaches for synthetic data generation with LLMs in a wide range of scenarios and topics. It includes implementations from the current SOTA literature for tasks like evaluating outputs with LLM-as-a-judge methods, evolving instructions, data filtering, as well as defining custom components.

## Additional Resources

- [Models on the Hub](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f)
- [Hugging Face Llama Recipes](https://github.com/huggingface/huggingface-llama-recipes)
- [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
- [Chat demo on Hugging Chat](https://huggingface.co/chat/models/meta-llama/Meta-Llama-3.1-405b-instruct/)
- [Meta Blog](TODO)

## Acknowledgments

Releasing such models with support and evaluations in the ecosystem would not be possible without the contributions of thousands of community members that have contributed to transformers, tgi, vllm, pytorch, LM Eval Harness and many other projects. This release couldn't have happened without all the support of [Clémentine](https://huggingface.co/clefourrier) and [Nathan](https://huggingface.co/SaylorTwift) for LLM evaluations; [Nicolas](https://huggingface.co/Narsil), [Olivier Dehaene](https://huggingface.co/olivierdehaene) and [Daniël de Kok](https://huggingface.co/danieldk) for Text Generation Inference Support; [Arthur](https://huggingface.co/ArthurZ), [Matthew Carrigan](https://huggingface.co/Rocketknight1), [Zachary Mueller](https://huggingface.co/muellerzr), [Joao](https://huggingface.co/joaogante), [Joshua Lochner](https://huggingface.co/Xenova) and [Lysandre](https://huggingface.co/lysandre) for integrating Llama 3.1 into `transformers`; [Matthew Douglas](https://huggingface.co/mdouglas) for quantization support; [Gabriel Martín Blázquez](https://huggingface.co/gabrielmbmb) for `distilabel` support; [Merve Noyan](https://huggingface.co/merve) and [Aymeric Roucher](https://huggingface.co/m-ric) for review; [hysts](https://huggingface.co/hysts) and [Yuvi](https://huggingface.co/ysharma) for demos; [Ellie](https://huggingface.co/eliebak) for testing fine-tuning; [Brigitte Tousignant](https://huggingface.co/BrigitteTousi) and [Florent Daudens](https://huggingface.co/fdaudens) for communication; [Nathan](https://huggingface.co/nsarrazin) and [Victor](https://huggingface.co/victor) for making Llama 3.1 available in Hugging Chat.
And thank you to the Meta Team for releasing Llama 3.1 and making it available to the open-source AI community!
8
0
hf_public_repos
hf_public_repos/blog/argilla-ui-hub.md
---
title: "Argilla 2.4: Easily Build Fine-Tuning and Evaluation Datasets on the Hub — No Code Required"
thumbnail: /blog/assets/argilla-ui-hub/thumbnail.png
authors:
- user: nataliaElv
- user: burtenshaw
- user: dvilasuero
---

# Argilla 2.4: Easily Build Fine-Tuning and Evaluation Datasets on the Hub — No Code Required

We are incredibly excited to share the most impactful feature since Argilla joined Hugging Face: you can prepare your AI datasets without any code, getting started from any Hub dataset! Using Argilla’s UI, you can easily import a dataset from the Hugging Face Hub, define questions, and start collecting human feedback.

> [!NOTE]
> Not familiar with Argilla? Argilla is a free, open-source, data-centric tool. Using Argilla, AI developers and domain experts can collaborate and build high-quality datasets. Argilla is part of the Hugging Face family and fully integrated with the Hub. Want to know more? Here’s an [intro blog post](https://huggingface.co/blog/dvilasuero/argilla-2-0).

Why is this new feature important to you and the community?

- The Hugging Face Hub contains 230k datasets you can use as a foundation for your AI project.
- It simplifies collecting human feedback from the Hugging Face community or specialized teams.
- It democratizes dataset creation for users with extensive knowledge about a specific domain who are unsure about writing code.

## Use cases

This new feature democratizes building high-quality datasets on the Hub:

- If you have published an open dataset and want the community to contribute, import it into a public Argilla Space and share the URL with the world!
- If you want to start annotating a new dataset from scratch, upload a CSV to the Hub, import it into your Argilla Space, and start labeling!
- If you want to curate an existing Hub dataset for fine-tuning or evaluating your model, import the dataset into an Argilla Space and start curating!
- If you want to improve an existing Hub dataset to benefit the community, import it into an Argilla Space and start giving feedback!

## How it works

First, you need to deploy Argilla. The recommended way is to deploy on Spaces [following this guide](https://docs.argilla.io/latest/getting_started/quickstart/). The default deployment comes with Hugging Face OAuth enabled, meaning your Space will be open for annotation contributions from any Hub user. OAuth is perfect for use cases when you want the community to contribute to your dataset. If you want to restrict annotation to you and other collaborators, [check this guide](https://docs.argilla.io/latest/getting_started/how-to-configure-argilla-on-huggingface/) for additional configuration options.

<video controls autoplay title="Import a HF dataset from the Hub in the Argilla UI">
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/argilla-ui-hub/import_hub_dataset.mp4" type="video/mp4">
Import a HF dataset from the Hub in the Argilla UI
</video>

Once Argilla is running, sign in and click the “Import dataset from Hugging Face” button on the Home page. You can start with one of our example datasets or input the repo id of the dataset you want to use.

> [!NOTE]
> In this first version, the Hub dataset must be public. If you are interested in support for private datasets, we’d love to hear from you on [GitHub](https://github.com/argilla-io/argilla/issues/new/choose).
Argilla automatically suggests an initial configuration based on the dataset’s features, so you don’t need to start from scratch, but you can add questions or remove unnecessary fields. Fields should include the data you want feedback on, like text, chats, or images. Questions are the feedback you wish to collect, like labels, ratings, rankings, or text. All changes are shown in real time, so you can get a clear idea of the Argilla dataset you’re configuring. Once you’re happy with the result, click “Create dataset” to import the dataset with your configuration. Now you’re ready to give feedback!

You can try this for yourself by following the [quickstart guide](https://docs.argilla.io/latest/getting_started/quickstart/). It takes under 5 minutes!

This new workflow streamlines the import of datasets from the Hub, but you can still [import datasets using Argilla’s Python SDK](https://docs.argilla.io/latest/how_to_guides/dataset/) if you need further customization (see the sketch below).

We’d love to hear your thoughts and first experiences. Let us know on [GitHub](https://github.com/argilla-io/argilla/issues/new/choose) or the [HF Discord](https://discord.com/invite/hugging-face-879548962464493619)!
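For readers who prefer the SDK route mentioned above, here is a minimal sketch of creating and populating an Argilla dataset programmatically. It is illustrative only: the Space URL, API key, dataset name, labels, and source dataset are placeholders, and exact class or argument names may differ slightly between Argilla releases, so treat the linked SDK guide as the authoritative reference.

```python
import argilla as rg
from datasets import load_dataset

# Connect to your Argilla instance (e.g., an Argilla Space).
# The URL and API key below are placeholders — replace them with your own.
client = rg.Argilla(
    api_url="https://your-space-name.hf.space",
    api_key="YOUR_API_KEY",
)

# Fields hold the data you want feedback on; questions define the feedback to collect.
settings = rg.Settings(
    fields=[rg.TextField(name="text")],
    questions=[
        rg.LabelQuestion(name="sentiment", labels=["positive", "negative", "neutral"]),
    ],
)

# Create the dataset in Argilla with the configuration above.
dataset = rg.Dataset(name="my-feedback-dataset", settings=settings, client=client)
dataset.create()

# Load a few rows from a public Hub dataset (example dataset, pick your own)
# and log them as records for annotation.
rows = load_dataset("stanfordnlp/imdb", split="train[:100]")
dataset.records.log(records=rows.to_list())
```

This mirrors what the UI does for you: the settings describe the fields and questions, and the records come from a Hub dataset. For most use cases, the no-code flow described above will be the faster path.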
9
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_pytorch_latency.svg
<svg width="5651" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1267" y="174" width="5651" height="3205"/></clipPath><clipPath id="clip1"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip2"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1591.5" y1="2851" x2="1591.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1948" y1="3000" x2="1948" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2305" y1="3027" x2="2305" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2662" y1="3036" x2="2662" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3019" y1="3031" x2="3019" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3376" y1="2655" x2="3376" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3733" y1="2949" x2="3733" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4090" y1="3008" x2="4090" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4447" y1="3027" x2="4447" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4804" y1="3018" x2="4804" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5161" y1="878" x2="5161" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5518" y1="2532" x2="5518" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5875" y1="2883" x2="5875" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6231.5" y1="2941" x2="6231.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6588.5" y1="2950" x2="6588.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1690" y1="2850" x2="1690" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2047" y1="3000" x2="2047" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2404" y1="3028" x2="2404" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2761" y1="3036" x2="2761" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3118" y1="3032" x2="3118" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop 
offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3475" y1="2653" x2="3475" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3832" y1="2949" x2="3832" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4189" y1="3010" x2="4189" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4546" y1="3028" x2="4546" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4902.5" y1="3020" x2="4902.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5259.5" y1="1067" x2="5259.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5616.5" y1="2605" x2="5616.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5973.5" y1="2911" x2="5973.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6330.5" y1="2988" x2="6330.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6687.5" y1="2993" x2="6687.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="1511" y="640" width="5358" 
height="2415"/></clipPath><linearGradient x1="1789" y1="2853" x2="1789" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2146" y1="3001" x2="2146" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2503" y1="3029" x2="2503" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2860" y1="3037" x2="2860" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3217" y1="3032" x2="3217" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3574" y1="2652" x2="3574" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3930.5" y1="2949" x2="3930.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4287.5" y1="3010" x2="4287.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4644.5" y1="3028" x2="4644.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5001.5" y1="3019" x2="5001.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5358.5" y1="1059" x2="5358.5" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5715" y1="2605" x2="5715" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6072" y1="2911" x2="6072" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6429" y1="2988" x2="6429" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6786" y1="2991" x2="6786" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip93"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip94"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip95"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip96"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip97"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip98"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip99"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip100"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip101"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip102"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip103"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip104"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip105"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip106"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip107"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip108"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip109"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip110"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip111"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip112"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip113"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip114"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip115"><rect x="1268" y="175" width="5648" 
height="3202"/></clipPath><clipPath id="clip116"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip117"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip118"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip119"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip120"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip121"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip122"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip123"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip124"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip125"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip126"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip127"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip128"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip129"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip130"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip131"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip132"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip133"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip134"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip135"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip136"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip137"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip138"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip139"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip140"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip141"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip142"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip143"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip144"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip145"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip146"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip147"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip148"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip149"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip150"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip151"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip152"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip153"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip154"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip155"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip156"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip157"><rect x="1268" 
y="175" width="5648" height="3202"/></clipPath><clipPath id="clip158"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip159"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip160"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip161"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip162"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip163"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip164"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip165"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip166"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip167"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip168"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip169"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip170"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip171"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip172"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip173"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip174"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip175"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip176"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3845.5" y1="540" x2="3845.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill177"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip178"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip179"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3993" y1="540" x2="3993" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill180"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip181"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip182"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="4204.5" y1="540" x2="4204.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill183"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip184"><rect x="1268" y="175" width="5648" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1267 -174)"><rect x="1268" y="175" width="5648" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1511.5 2784.59 6865.5 2784.59M1511.5 2516.58 6865.5 2516.58M1511.5 2248.57 6865.5 2248.57M1511.5 1980.57 6865.5 1980.57M1511.5 1712.56 6865.5 1712.56M1511.5 1444.55 6865.5 1444.55M1511.5 1176.54 6865.5 1176.54M1511.5 908.53 6865.5 908.53M1511.5 640.5 6865.5 640.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1552" y="2851" width="79.0001" height="202" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1908" 
y="3000" width="79.9998" height="53" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2265" y="3027" width="80" height="25.9998" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2622" y="3036" width="79.9998" height="16.9998" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2979" y="3031" width="80" height="22" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3336" y="2655" width="79.9998" height="398" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3693" y="2949" width="80" height="104" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="4050" y="3008" width="80" height="44.9998" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="4407" y="3027" width="80" height="25.9998" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4764" y="3018" width="80" height="34.9998" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="5121" y="878" width="80" height="2175" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="5478" y="2532" width="80" height="521" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5835" y="2883" width="79.9995" height="170" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="6192" y="2941" width="79" height="112" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="6549" y="2950" width="78.9995" height="103" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="1650" y="2850" width="80.0001" height="203" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="2007" y="3000" width="79.9996" height="53" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="2364" y="3028" width="80" height="24.9998" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="2721" y="3036" width="79.9998" height="16.9998" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="3078" y="3032" width="80" height="21" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="3435" y="2653" width="79.9998" height="400" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3792" y="2949" width="80" height="104" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="4149" y="3010" width="80" height="43" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="4506" y="3028" width="80" height="24.9998" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4863" y="3020" width="79" height="33" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="5220" y="1067" width="79" height="1986" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="5577" y="2605" width="79" height="448" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5934" y="2911" width="78.9995" height="142" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="6291" y="2988" width="79" height="65" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="6648" y="2993" width="78.9995" height="59.9998" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1749" y="2853" width="80.0001" height="200" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="2106" y="3001" width="79.9998" height="52" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="2463" y="3029" width="80" height="23.9998" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2820" y="3037" width="79.9998" height="15.9998" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="3177" y="3032" width="80" height="21" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="3534" y="2652" 
width="79.9998" height="401" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3891" y="2949" width="79" height="104" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4248" y="3010" width="79" height="43" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4605" y="3028" width="79" height="24.9998" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="4962" y="3019" width="79" height="33.9998" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5319" y="1059" width="79" height="1994" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5675" y="2605" width="80" height="448" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6032" y="2911" width="79.9995" height="142" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6389" y="2988" width="80" height="65" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6746" y="2991" width="79.9995" height="61.9998" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M1511.5 3052.5 6865.5 3052.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M1511.5 3052.5 1511.5 3132.5M3296.61 3052.5 3296.61 3132.5M5080.67 3052.5 5080.67 3132.5M6865.5 3052.5 6865.5 3132.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M1511.5 3132.5 1511.5 3212.5M3296.61 3132.5 3296.61 3212.5M5080.67 3132.5 5080.67 3212.5M6865.5 3132.5 6865.5 3212.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1560.02 2819)">150</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1927.41 2968)">39</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2284.35 2995)">19</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2641.3 3004)">12</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2998.24 2999)">16</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3344.73 2623)">296</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3712.12 2917)">77</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4069.07 2976)">33</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4426.01 2995)">19</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 4782.95 2986)">25</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5118.99 846)">1623</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5486.39 2500)">388</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5843.33 2851)">126</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6210.72 2909)">83</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6567.66 2918)">77</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.81 2818)">151</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2026.21 2968)">39</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 2996)">18</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 3005)">12</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3000)">15</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3443.53 2621)">298</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3810.92 2917)">77</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 2978)">32</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 2996)">19</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 2988)">24</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5217.79 1035)">1482</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5585.18 2574)">334</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 2879)">105</text></g><g clip-path="url(#clip123)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 2956)">48</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6666.46 2961)">44</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1757.61 2821)">149</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2125 2969)">39</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2481.95 2997)">18</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2838.89 3005)">12</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3195.83 3000)">15</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3542.32 2620)">299</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3909.72 2917)">77</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4266.66 2978)">32</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4623.6 2996)">18</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4980.54 2987)">25</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5316.59 1027)">1488</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5683.98 2573)">334</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6040.92 2879)">105</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6408.31 2956)">48</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6765.26 2959)">46</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1452.53 3064)">0</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" 
font-size="41" transform="matrix(1 0 0 1 1410.73 2796)">200</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2528)">400</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2260)">600</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 1992)">800</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1724)">1000</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1456)">1200</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1188)">1400</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 920)">1600</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 652)">1800</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1679.72 3118)">1</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2036.66 3118)">4</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3118)">16</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 3118)">40</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3118)">80</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3464.43 3118)">1</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3821.37 3118)">4</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 3118)">16</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 3118)">40</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 3118)">80</text></g><g 
clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5249.14 3118)">1</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5606.08 3118)">4</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5952.58 3118)">16</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 3118)">40</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6666.46 3118)">80</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3198)">32</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4157.41 3198)">128</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 3198)">512</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1364.73 1955)">Latency (ms)</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.64 3258)">Number of Cores</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.21 3309)">Sequence Length</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3585.78 280)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip172)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4321.13 280)">-</text></g><g clip-path="url(#clip173)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4360.16 280)">PyTorch</text></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2318.97 369)">Latency Benchmark with varying number of CPU cores involved in the computations and different memory allocators</text></g><g clip-path="url(#clip175)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2999.87 459)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip176)"><rect x="3834" y="540" width="23" height="23.0001" fill="url(#fill177)"/></g><g clip-path="url(#clip178)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" 
font-size="41" transform="matrix(1 0 0 1 3866.18 563)">glibc</text></g><g clip-path="url(#clip179)"><rect x="3982" y="540" width="22" height="23.0001" fill="url(#fill180)"/></g><g clip-path="url(#clip181)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4013.79 563)">jemalloc</text></g><g clip-path="url(#clip182)"><rect x="4193" y="540" width="23" height="23.0001" fill="url(#fill183)"/></g><g clip-path="url(#clip184)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4225.17 563)">tcmalloc</text></g><rect x="1268.5" y="175.5" width="5648" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
0
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_intel_tensorflow_latency.svg
<svg width="5468" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="811" y="96" width="5468" height="3205"/></clipPath><clipPath id="clip1"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip2"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1132.5" y1="2800" x2="1132.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1477" y1="2908" x2="1477" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1821.5" y1="2931" x2="1821.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2166.5" y1="2937" x2="2166.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2511.5" y1="2934" x2="2511.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2855.5" y1="2577" x2="2855.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3200.5" y1="2794" x2="3200.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3545.5" y1="2834" x2="3545.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3890.5" y1="2841" x2="3890.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4234.5" y1="2832" x2="4234.5" y2="3025" 
gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4579.5" y1="1099" x2="4579.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4924.5" y1="2285" x2="4924.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5268.5" y1="2417" x2="5268.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5613.5" y1="2422" x2="5613.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5958.5" y1="2347" x2="5958.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1227.5" y1="2790" x2="1227.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1572.5" y1="2909" x2="1572.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1917.5" y1="2931" x2="1917.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2261.5" y1="2938" x2="2261.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2606.5" y1="2933" x2="2606.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" 
stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2951.5" y1="2543" x2="2951.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3296" y1="2798" x2="3296" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3640.5" y1="2837" x2="3640.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3985.5" y1="2843" x2="3985.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4330.5" y1="2832" x2="4330.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4674.5" y1="1047" x2="4674.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5019.5" y1="2288" x2="5019.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5364.5" y1="2425" x2="5364.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5709.5" y1="2438" x2="5709.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="6053.5" y1="2356" x2="6053.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" 
stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1323.5" y1="2802" x2="1323.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="1667.5" y1="2912" x2="1667.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2012.5" y1="2933" x2="2012.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2357.5" y1="2939" x2="2357.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="2702" y1="2936" x2="2702" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3046.5" y1="2580" x2="3046.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3391.5" y1="2801" x2="3391.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="3736.5" y1="2839" x2="3736.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4080.5" y1="2843" x2="4080.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="4425.5" y1="2833" x2="4425.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1055" y="562" width="5174" 
height="2466"/></clipPath><linearGradient x1="4770.5" y1="1097" x2="4770.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5115.5" y1="2289" x2="5115.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5459.5" y1="2426" x2="5459.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="5804.5" y1="2426" x2="5804.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1055" y="562" width="5174" height="2466"/></clipPath><linearGradient x1="6149.5" y1="2349" x2="6149.5" y2="3025" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip93"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip94"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip95"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip96"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip97"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip98"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip99"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip100"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip101"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip102"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip103"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip104"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip105"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip106"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip107"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip108"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip109"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip110"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip111"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip112"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip113"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip114"><rect x="812" y="97" width="5464" 
height="3202"/></clipPath><clipPath id="clip115"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip116"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip117"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip118"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip119"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip120"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip121"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip122"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip123"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip124"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip125"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip126"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip127"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip128"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip129"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip130"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip131"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip132"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip133"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip134"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip135"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip136"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip137"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip138"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip139"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip140"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip141"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip142"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip143"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip144"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip145"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip146"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip147"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip148"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip149"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip150"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip151"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip152"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip153"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip154"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip155"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip156"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip157"><rect x="812" 
y="97" width="5464" height="3202"/></clipPath><clipPath id="clip158"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip159"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip160"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip161"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip162"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip163"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip164"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip165"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip166"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip167"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip168"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip169"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip170"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip171"><rect x="812" y="97" width="5464" height="3202"/></clipPath><linearGradient x1="3297.5" y1="462" x2="3297.5" y2="485" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill172"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip173"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip174"><rect x="812" y="97" width="5464" height="3202"/></clipPath><linearGradient x1="3445" y1="462" x2="3445" y2="485" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill175"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip176"><rect x="812" y="97" width="5464" height="3202"/></clipPath><clipPath id="clip177"><rect x="812" y="97" width="5464" height="3202"/></clipPath><linearGradient x1="3656.5" y1="462" x2="3656.5" y2="485" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill178"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip179"><rect x="812" y="97" width="5464" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-811 -96)"><rect x="812" y="96.9999" width="5465" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1055.5 2532.58 6226.5 2532.58M1055.5 2039.57 6226.5 2039.57M1055.5 1547.55 6226.5 1547.55M1055.5 1054.53 6226.5 1054.53M1055.5 562.5 6226.5 562.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1094" y="2800" width="77" height="225" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1439" y="2908" width="76" height="117" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="1783" y="2931" width="77" height="93.9998" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2128" y="2937" width="77" height="88" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2473" y="2934" width="77" height="91" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="2817" y="2577" width="77" height="448" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3162" y="2794" width="77" height="231" fill="url(#fill15)"/></g><g 
clip-path="url(#clip16)"><rect x="3507" y="2834" width="77" height="191" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="3852" y="2841" width="77" height="184" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4196" y="2832" width="77" height="193" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="4541" y="1099" width="77" height="1926" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="4886" y="2285" width="77" height="740" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5230" y="2417" width="77" height="608" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="5575" y="2422" width="77" height="603" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="5920" y="2347" width="77" height="678" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="1189" y="2790" width="77" height="235" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="1534" y="2909" width="77" height="116" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="1879" y="2931" width="77" height="93.9998" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="2223" y="2938" width="77" height="87" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="2568" y="2933" width="77" height="92.0002" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="2913" y="2543" width="77" height="482" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3258" y="2798" width="76" height="227" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="3602" y="2837" width="77" height="188" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="3947" y="2843" width="77" height="182" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4292" y="2832" width="77" height="193" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="4636" y="1047" width="77" height="1978" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="4981" y="2288" width="77" height="737" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5326" y="2425" width="77" height="600" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="5671" y="2438" width="77" height="587" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="6015" y="2356" width="77" height="669" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1285" y="2802" width="77" height="223" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="1629" y="2912" width="77" height="113" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="1974" y="2933" width="76.9999" height="92.0002" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2319" y="2939" width="77" height="86" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="2664" y="2936" width="76" height="89" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="3008" y="2580" width="77" height="445" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3353" y="2801" width="77" height="224" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="3698" y="2839" width="77" height="186" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4042" y="2843" width="76.9998" height="182" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="4387" y="2833" width="77" height="192" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="4732" y="1097" width="77" height="1928" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5077" y="2289" width="77" height="736" 
fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="5421" y="2426" width="77" height="599" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="5766" y="2426" width="77" height="599" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6111" y="2349" width="77" height="676" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M1055.5 3024.5 6226.5 3024.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M1055.5 3024.5 1055.5 3104.5M2778.59 3024.5 2778.59 3104.5M4502.65 3024.5 4502.65 3104.5M6226.5 3024.5 6226.5 3104.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M1055.5 3104.5 1055.5 3184.5M2778.59 3104.5 2778.59 3184.5M4502.65 3104.5 4502.65 3184.5M6226.5 3104.5 6226.5 3184.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1100.9 2768)">229</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1445.62 2876)">119</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1800.8 2899)">96</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2145.53 2905)">89</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2490.25 2902)">93</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2824.53 2545)">455</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3169.26 2762)">235</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3513.98 2802)">194</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3858.71 2809)">186</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4203.43 2800)">196</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4537.71 1067)">1955</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4892.89 2253)">752</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5237.61 2385)">617</text></g><g 
clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5582.34 2390)">612</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5927.06 2315)">688</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1196.31 2758)">239</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1541.04 2877)">118</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1896.22 2899)">95</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2240.94 2906)">89</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2585.67 2901)">94</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2919.94 2511)">489</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3264.67 2766)">230</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3609.4 2805)">191</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3954.12 2811)">185</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4298.85 2800)">196</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4633.12 1015)">2008</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4988.3 2256)">748</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5333.03 2393)">609</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5677.75 2406)">596</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6022.48 2324)">679</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1291.73 2770)">227</text></g><g clip-path="url(#clip126)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1636.45 2880)">114</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1991.63 2901)">93</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2336.36 2907)">87</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2681.08 2904)">91</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3015.36 2548)">452</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3360.09 2769)">227</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3704.81 2807)">189</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4049.54 2811)">185</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4394.26 2801)">194</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4728.54 1065)">1958</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5083.72 2257)">747</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5428.44 2394)">608</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5773.17 2394)">608</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6117.9 2317)">686</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 996.133 3036)">0</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 954.333 2544)">500</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 933.433 2051)">1000</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 933.433 1559)">1500</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" 
font-size="41" transform="matrix(1 0 0 1 933.433 1066)">2000</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 933.433 574)">2500</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1217.21 3090)">1</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1561.94 3090)">4</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1896.22 3090)">16</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2240.94 3090)">40</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2585.67 3090)">80</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2940.84 3090)">1</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3285.57 3090)">4</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3619.85 3090)">16</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3964.57 3090)">40</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4309.3 3090)">80</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4664.48 3090)">1</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5009.2 3090)">4</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5343.48 3090)">16</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5688.2 3090)">40</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6032.93 3090)">80</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1896.22 3170)">32</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3609.4 3170)">128</text></g><g clip-path="url(#clip163)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5333.03 3170)">512</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 908.339 1902)">Latency (ms)</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 3499.85 3231)">Sequence Length</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2911.96 202)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3647.31 202)">-</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3686.34 202)">Intel Tensorflow</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 1785.59 291)">Latency Benchmark with varying number of CPU cores involved in the computation and different memory allocators</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2451.87 381)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip171)"><rect x="3286" y="462" width="23" height="23" fill="url(#fill172)"/></g><g clip-path="url(#clip173)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3318.16 485)">glibc</text></g><g clip-path="url(#clip174)"><rect x="3434" y="462" width="22" height="23" fill="url(#fill175)"/></g><g clip-path="url(#clip176)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3465.77 485)">jemalloc</text></g><g clip-path="url(#clip177)"><rect x="3645" y="462" width="23" height="23" fill="url(#fill178)"/></g><g clip-path="url(#clip179)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3677.16 485)">tcmalloc</text></g><rect x="812.5" y="97.5001" width="5465" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_graph_latency.svg
<svg width="5651" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1267" y="174" width="5651" height="3205"/></clipPath><clipPath id="clip1"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip2"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1591.5" y1="2933" x2="1591.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1948" y1="2995" x2="1948" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2305" y1="3021" x2="2305" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2662" y1="3031" x2="2662" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3019" y1="3025" x2="3019" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3376" y1="2713" x2="3376" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3733" y1="2919" x2="3733" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4090" y1="2970" x2="4090" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4447" y1="3004" x2="4447" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4804" y1="2976" x2="4804" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5161" y1="1265" x2="5161" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5518" y1="2639" x2="5518" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5875" y1="2878" x2="5875" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6231.5" y1="2960" x2="6231.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6588.5" y1="2928" x2="6588.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1690" y1="2914" x2="1690" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2047" y1="2995" x2="2047" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2404" y1="3021" x2="2404" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2761" y1="3031" x2="2761" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3118" y1="3024" x2="3118" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop 
offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3475" y1="2661" x2="3475" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3832" y1="2917" x2="3832" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4189" y1="2969" x2="4189" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4546" y1="3005" x2="4546" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4902.5" y1="2974" x2="4902.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5259.5" y1="1205" x2="5259.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5616.5" y1="2647" x2="5616.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5973.5" y1="2882" x2="5973.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6330.5" y1="2965" x2="6330.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6687.5" y1="2934" x2="6687.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="1511" y="640" width="5358" 
height="2415"/></clipPath><linearGradient x1="1789" y1="2930" x2="1789" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2146" y1="2991" x2="2146" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2503" y1="3021" x2="2503" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2860" y1="3032" x2="2860" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3217" y1="3022" x2="3217" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3574" y1="2701" x2="3574" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3930.5" y1="2917" x2="3930.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4287.5" y1="2970" x2="4287.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4644.5" y1="3005" x2="4644.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5001.5" y1="2977" x2="5001.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5358.5" y1="1233" x2="5358.5" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5715" y1="2646" x2="5715" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6072" y1="2883" x2="6072" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6429" y1="2963" x2="6429" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6786" y1="2931" x2="6786" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip93"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip94"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip95"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip96"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip97"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip98"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip99"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip100"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip101"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip102"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip103"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip104"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip105"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip106"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip107"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip108"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip109"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip110"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip111"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip112"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip113"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip114"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip115"><rect x="1268" y="175" width="5648" 
height="3202"/></clipPath><clipPath id="clip116"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip117"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip118"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip119"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip120"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip121"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip122"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip123"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip124"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip125"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip126"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip127"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip128"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip129"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip130"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip131"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip132"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip133"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip134"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip135"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip136"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip137"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip138"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip139"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip140"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip141"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip142"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip143"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip144"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip145"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip146"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip147"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip148"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip149"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip150"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip151"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip152"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip153"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip154"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip155"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip156"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip157"><rect x="1268" 
y="175" width="5648" height="3202"/></clipPath><clipPath id="clip158"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip159"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip160"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip161"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip162"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip163"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip164"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip165"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip166"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip167"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip168"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip169"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip170"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip171"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip172"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3845.5" y1="540" x2="3845.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill173"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip174"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip175"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3993" y1="540" x2="3993" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill176"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip177"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip178"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="4204.5" y1="540" x2="4204.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill179"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip180"><rect x="1268" y="175" width="5648" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1267 -174)"><rect x="1268" y="175" width="5648" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1511.5 2570.58 6865.5 2570.58M1511.5 2087.57 6865.5 2087.57M1511.5 1605.55 6865.5 1605.55M1511.5 1122.54 6865.5 1122.54M1511.5 640.5 6865.5 640.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1552" y="2933" width="79.0001" height="120" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1908" y="2995" width="79.9998" height="57.9998" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2265" y="3021" width="80" height="32" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2622" y="3031" width="79.9998" height="22" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2979" y="3025" width="80" height="27.9998" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3336" y="2713" width="79.9998" height="340" 
fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3693" y="2919" width="80" height="134" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="4050" y="2970" width="80" height="82.9998" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="4407" y="3004" width="80" height="48.9998" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4764" y="2976" width="80" height="77" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="5121" y="1265" width="80" height="1788" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="5478" y="2639" width="80" height="414" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5835" y="2878" width="79.9995" height="175" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="6192" y="2960" width="79" height="92.9998" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="6549" y="2928" width="78.9995" height="125" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="1650" y="2914" width="80.0001" height="139" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="2007" y="2995" width="79.9996" height="57.9998" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="2364" y="3021" width="80" height="32" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="2721" y="3031" width="79.9998" height="22" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="3078" y="3024" width="80" height="28.9998" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="3435" y="2661" width="79.9998" height="392" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3792" y="2917" width="80" height="136" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="4149" y="2969" width="80" height="83.9998" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="4506" y="3005" width="80" height="47.9998" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4863" y="2974" width="79" height="78.9998" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="5220" y="1205" width="79" height="1848" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="5577" y="2647" width="79" height="406" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5934" y="2882" width="78.9995" height="171" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="6291" y="2965" width="79" height="88" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="6648" y="2934" width="78.9995" height="119" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1749" y="2930" width="80.0001" height="123" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="2106" y="2991" width="79.9998" height="61.9998" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="2463" y="3021" width="80" height="32" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2820" y="3032" width="79.9998" height="21" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="3177" y="3022" width="80" height="31" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="3534" y="2701" width="79.9998" height="352" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3891" y="2917" width="79" height="136" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4248" y="2970" width="79" height="82.9998" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4605" y="3005" width="79" height="47.9998" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="4962" y="2977" width="79" height="76" 
fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5319" y="1233" width="79" height="1820" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5675" y="2646" width="80" height="407" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6032" y="2883" width="79.9995" height="170" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6389" y="2963" width="80" height="89.9998" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6746" y="2931" width="79.9995" height="122" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M1511.5 3052.5 6865.5 3052.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M1511.5 3052.5 1511.5 3132.5M3296.61 3052.5 3296.61 3132.5M5080.67 3052.5 5080.67 3132.5M6865.5 3052.5 6865.5 3132.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M1511.5 3132.5 1511.5 3212.5M3296.61 3132.5 3296.61 3212.5M5080.67 3132.5 5080.67 3212.5M6865.5 3132.5 6865.5 3212.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1560.02 2901)">124</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1927.41 2963)">59</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2284.35 2989)">32</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2641.3 2999)">22</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2998.24 2993)">28</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3344.73 2682)">351</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3701.67 2887)">139</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4069.07 2938)">86</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4426.01 2972)">50</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4782.95 2944)">79</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5118.99 1233)">1852</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5486.39 
2607)">429</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5843.33 2846)">181</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6210.72 2928)">96</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6557.21 2896)">129</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.81 2882)">144</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2026.21 2963)">59</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 2989)">33</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 2999)">22</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 2992)">30</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3443.53 2629)">405</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3800.47 2885)">141</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 2937)">86</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 2973)">49</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 2942)">81</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5217.79 1174)">1914</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5585.18 2615)">420</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 2850)">177</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 2933)">91</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6656.01 2902)">123</text></g><g clip-path="url(#clip125)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1757.61 2898)">127</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2125 2959)">64</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2481.95 2989)">33</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2838.89 3000)">22</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3195.83 2990)">31</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3542.32 2669)">364</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3899.27 2885)">141</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4266.66 2938)">86</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4623.6 2973)">50</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4980.54 2945)">79</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5316.59 1201)">1886</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5683.98 2614)">421</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6040.92 2851)">175</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6408.31 2931)">93</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6754.81 2899)">126</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1452.53 3064)">0</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2582)">500</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 2099)">1000</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 1389.83 1617)">1500</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1134)">2000</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 652)">2500</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1679.72 3118)">1</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2036.66 3118)">4</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3118)">16</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 3118)">40</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3118)">80</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3464.43 3118)">1</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3821.37 3118)">4</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 3118)">16</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 3118)">40</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 3118)">80</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5249.14 3118)">1</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5606.08 3118)">4</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5952.58 3118)">16</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 3118)">40</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6666.46 3118)">80</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3198)">32</text></g><g clip-path="url(#clip162)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4157.41 3198)">128</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 3198)">512</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1364.73 1955)">Latency (ms)</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.64 3258)">Number of Cores</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.21 3309)">Sequence Length</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3430.54 280)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4165.89 280)">-</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4204.92 280)">TensorFlow Graph</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2318.97 369)">Latency Benchmark with varying number of CPU cores involved in the computations and different memory allocators</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2999.87 459)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip172)"><rect x="3834" y="540" width="23" height="23.0001" fill="url(#fill173)"/></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3866.18 563)">glibc</text></g><g clip-path="url(#clip175)"><rect x="3982" y="540" width="22" height="23.0001" fill="url(#fill176)"/></g><g clip-path="url(#clip177)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4013.79 563)">jemalloc</text></g><g clip-path="url(#clip178)"><rect x="4193" y="540" width="23" height="23.0001" fill="url(#fill179)"/></g><g clip-path="url(#clip180)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4225.17 563)">tcmalloc</text></g><rect x="1268.5" y="175.5" width="5648" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_latency.svg
<svg width="5651" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1267" y="174" width="5651" height="3205"/></clipPath><clipPath id="clip1"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip2"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1591.5" y1="2869" x2="1591.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1948" y1="2934" x2="1948" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2305" y1="2968" x2="2305" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2662" y1="2978" x2="2662" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3019" y1="2969" x2="3019" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3376" y1="2638" x2="3376" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3733" y1="2851" x2="3733" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4090" y1="2918" x2="4090" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4447" y1="2952" x2="4447" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4804" y1="2917" x2="4804" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5161" y1="1113" x2="5161" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5518" y1="2533" x2="5518" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5875" y1="2823" x2="5875" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6231.5" y1="2905" x2="6231.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6588.5" y1="2871" x2="6588.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1690" y1="2839" x2="1690" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2047" y1="2938" x2="2047" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2404" y1="2969" x2="2404" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2761" y1="2977" x2="2761" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3118" y1="2967" x2="3118" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop 
offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3475" y1="2527" x2="3475" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3832" y1="2860" x2="3832" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4189" y1="2925" x2="4189" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4546" y1="2953" x2="4546" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4902.5" y1="2916" x2="4902.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5259.5" y1="826" x2="5259.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5616.5" y1="2575" x2="5616.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5973.5" y1="2836" x2="5973.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6330.5" y1="2913" x2="6330.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6687.5" y1="2869" x2="6687.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="1511" y="640" width="5358" 
height="2415"/></clipPath><linearGradient x1="1789" y1="2870" x2="1789" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2146" y1="2936" x2="2146" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2503" y1="2971" x2="2503" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2860" y1="2980" x2="2860" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3217" y1="2968" x2="3217" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3574" y1="2636" x2="3574" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3930.5" y1="2857" x2="3930.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4287.5" y1="2925" x2="4287.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4644.5" y1="2953" x2="4644.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5001.5" y1="2921" x2="5001.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5358.5" y1="1251" x2="5358.5" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5715" y1="2572" x2="5715" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6072" y1="2836" x2="6072" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6429" y1="2913" x2="6429" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6786" y1="2878" x2="6786" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip93"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip94"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip95"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip96"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip97"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip98"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip99"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip100"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip101"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip102"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip103"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip104"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip105"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip106"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip107"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip108"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip109"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip110"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip111"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip112"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip113"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip114"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip115"><rect x="1268" y="175" width="5648" 
height="3202"/></clipPath><clipPath id="clip116"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip117"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip118"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip119"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip120"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip121"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip122"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip123"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip124"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip125"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip126"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip127"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip128"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip129"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip130"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip131"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip132"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip133"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip134"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip135"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip136"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip137"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip138"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip139"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip140"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip141"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip142"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip143"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip144"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip145"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip146"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip147"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip148"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip149"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip150"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip151"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip152"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip153"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip154"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip155"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip156"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip157"><rect x="1268" 
y="175" width="5648" height="3202"/></clipPath><clipPath id="clip158"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip159"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip160"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip161"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip162"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip163"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip164"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip165"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip166"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip167"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip168"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip169"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip170"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip171"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip172"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip173"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3845.5" y1="540" x2="3845.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill174"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip175"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip176"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3993" y1="540" x2="3993" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill177"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip178"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip179"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="4204.5" y1="540" x2="4204.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill180"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip181"><rect x="1268" y="175" width="5648" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1267 -174)"><rect x="1268" y="175" width="5648" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1511.5 2650.59 6865.5 2650.59M1511.5 2248.57 6865.5 2248.57M1511.5 1846.56 6865.5 1846.56M1511.5 1444.55 6865.5 1444.55M1511.5 1042.53 6865.5 1042.53M1511.5 640.5 6865.5 640.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1552" y="2869" width="79.0001" height="184" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1908" y="2934" width="79.9998" height="119" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2265" y="2968" width="80" height="85" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2622" y="2978" width="79.9998" height="75" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2979" y="2969" width="80" height="83.9998" 
fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3336" y="2638" width="79.9998" height="415" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3693" y="2851" width="80" height="202" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="4050" y="2918" width="80" height="135" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="4407" y="2952" width="80" height="101" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4764" y="2917" width="80" height="136" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="5121" y="1113" width="80" height="1940" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="5478" y="2533" width="80" height="520" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5835" y="2823" width="79.9995" height="230" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="6192" y="2905" width="79" height="148" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="6549" y="2871" width="78.9995" height="182" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="1650" y="2839" width="80.0001" height="214" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="2007" y="2938" width="79.9996" height="115" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="2364" y="2969" width="80" height="83.9998" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="2721" y="2977" width="79.9998" height="76" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="3078" y="2967" width="80" height="86" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="3435" y="2527" width="79.9998" height="526" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3792" y="2860" width="80" height="193" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="4149" y="2925" width="80" height="128" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="4506" y="2953" width="80" height="99.9998" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4863" y="2916" width="79" height="137" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="5220" y="826" width="79" height="2227" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="5577" y="2575" width="79" height="478" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5934" y="2836" width="78.9995" height="217" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="6291" y="2913" width="79" height="140" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="6648" y="2869" width="78.9995" height="184" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1749" y="2870" width="80.0001" height="183" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="2106" y="2936" width="79.9998" height="117" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="2463" y="2971" width="80" height="81.9998" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2820" y="2980" width="79.9998" height="72.9998" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="3177" y="2968" width="80" height="85" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="3534" y="2636" width="79.9998" height="417" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3891" y="2857" width="79" height="196" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4248" y="2925" width="79" height="128" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4605" y="2953" width="79" height="99.9998" fill="url(#fill79)"/></g><g 
clip-path="url(#clip80)"><rect x="4962" y="2921" width="79" height="132" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5319" y="1251" width="79" height="1802" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5675" y="2572" width="80" height="481" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6032" y="2836" width="79.9995" height="217" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6389" y="2913" width="80" height="140" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6746" y="2878" width="79.9995" height="175" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M1511.5 3052.5 6865.5 3052.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M1511.5 3052.5 1511.5 3132.5M3296.61 3052.5 3296.61 3132.5M5080.67 3052.5 5080.67 3132.5M6865.5 3052.5 6865.5 3132.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M1511.5 3132.5 1511.5 3212.5M3296.61 3132.5 3296.61 3212.5M5080.67 3132.5 5080.67 3212.5M6865.5 3132.5 6865.5 3212.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1560.02 2837)">228</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1916.96 2902)">148</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2273.9 2936)">105</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2641.3 2946)">93</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2987.79 2937)">104</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3344.73 2606)">516</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3701.67 2819)">250</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4058.62 2886)">167</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4415.56 2920)">126</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4772.5 2885)">169</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5118.99 1081)">2412</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" 
font-weight="400" font-size="41" transform="matrix(1 0 0 1 5486.39 2501)">646</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5843.33 2791)">286</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6200.27 2873)">184</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6557.21 2839)">226</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.81 2807)">266</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2015.76 2906)">143</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2372.7 2937)">103</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 2945)">93</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3086.59 2935)">107</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3443.53 2495)">653</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3800.47 2828)">239</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4157.41 2893)">159</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4514.36 2921)">124</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4871.3 2884)">170</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5217.79 794)">2769</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5585.18 2543)">594</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 2804)">269</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6299.07 2881)">173</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6656.01 
2837)">228</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1757.61 2838)">227</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2114.55 2904)">144</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2471.5 2939)">102</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2838.89 2948)">90</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3185.38 2936)">105</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3542.32 2604)">518</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3899.27 2825)">243</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4256.21 2893)">158</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4613.15 2921)">123</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4970.09 2889)">164</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5316.59 1219)">2241</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5683.98 2540)">597</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6040.92 2804)">269</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6397.87 2881)">174</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6754.81 2846)">217</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1452.53 3064)">0</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2662)">500</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 2260)">1000</text></g><g clip-path="url(#clip143)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1858)">1500</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1456)">2000</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1054)">2500</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 652)">3000</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1679.72 3118)">1</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2036.66 3118)">4</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3118)">16</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 3118)">40</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3118)">80</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3464.43 3118)">1</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3821.37 3118)">4</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 3118)">16</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 3118)">40</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 3118)">80</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5249.14 3118)">1</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5606.08 3118)">4</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5952.58 3118)">16</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 3118)">40</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 6666.46 3118)">80</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3198)">32</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4157.41 3198)">128</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 3198)">512</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1364.73 1955)">Latency (ms)</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.64 3258)">Number of Cores</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.21 3309)">Sequence Length</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3531.88 280)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4267.23 280)">-</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4306.26 280)">TensorFlow</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2318.97 369)">Latency Benchmark with varying number of CPU cores involved in the computations and different memory allocators</text></g><g clip-path="url(#clip172)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2999.87 459)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip173)"><rect x="3834" y="540" width="23" height="23.0001" fill="url(#fill174)"/></g><g clip-path="url(#clip175)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3866.18 563)">glibc</text></g><g clip-path="url(#clip176)"><rect x="3982" y="540" width="22" height="23.0001" fill="url(#fill177)"/></g><g clip-path="url(#clip178)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4013.79 563)">jemalloc</text></g><g clip-path="url(#clip179)"><rect x="4193" y="540" width="23" height="23.0001" fill="url(#fill180)"/></g><g clip-path="url(#clip181)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4225.17 563)">tcmalloc</text></g><rect x="1268.5" y="175.5" width="5648" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
3
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_torchscript_latency.svg
<svg width="5651" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1267" y="174" width="5651" height="3205"/></clipPath><clipPath id="clip1"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip2"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1591.5" y1="2838" x2="1591.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1948" y1="3000" x2="1948" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2305" y1="3029" x2="2305" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2662" y1="3039" x2="2662" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3019" y1="3033" x2="3019" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3376" y1="2628" x2="3376" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3733" y1="2943" x2="3733" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4090" y1="3009" x2="4090" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4447" y1="3029" x2="4447" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4804" y1="3019" x2="4804" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5518" y1="2471" x2="5518" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5875" y1="2845" x2="5875" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6231.5" y1="2981" x2="6231.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6588.5" y1="2939" x2="6588.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1690" y1="2840" x2="1690" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip32"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2047" y1="3000" x2="2047" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2404" y1="3029" x2="2404" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2761" y1="3039" x2="2761" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3118" y1="3034" x2="3118" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3475" y1="2627" x2="3475" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop 
offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3832" y1="2944" x2="3832" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4189" y1="3009" x2="4189" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4546" y1="3029" x2="4546" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4902.5" y1="3020" x2="4902.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5259.5" y1="850" x2="5259.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5616.5" y1="2573" x2="5616.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5973.5" y1="2900" x2="5973.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6330.5" y1="2985" x2="6330.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6687.5" y1="2989" x2="6687.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1789" y1="2841" x2="1789" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip62"><rect x="1511" y="640" width="5358" 
height="2415"/></clipPath><linearGradient x1="2146" y1="3001" x2="2146" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2503" y1="3029" x2="2503" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2860" y1="3039" x2="2860" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3217" y1="3034" x2="3217" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3574" y1="2630" x2="3574" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3930.5" y1="2945" x2="3930.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4287.5" y1="3009" x2="4287.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4644.5" y1="3030" x2="4644.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5001.5" y1="3020" x2="5001.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5358.5" y1="829" x2="5358.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5715" y1="2575" x2="5715" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6072" y1="2909" x2="6072" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6429" y1="2984" x2="6429" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1511" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6786" y1="2988" x2="6786" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip91"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip92"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip93"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip94"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip95"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip96"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip97"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip98"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip99"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip100"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip101"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip102"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip103"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip104"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip105"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip106"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip107"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip108"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip109"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip110"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip111"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip112"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip113"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip114"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip115"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip116"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip117"><rect x="1268" y="175" width="5648" 
height="3202"/></clipPath><clipPath id="clip118"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip119"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip120"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip121"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip122"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip123"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip124"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip125"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip126"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip127"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip128"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip129"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip130"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip131"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip132"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip133"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip134"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip135"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip136"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip137"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip138"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip139"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip140"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip141"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip142"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip143"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip144"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip145"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip146"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip147"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip148"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip149"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip150"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip151"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip152"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip153"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip154"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip155"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip156"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip157"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip158"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip159"><rect x="1268" 
y="175" width="5648" height="3202"/></clipPath><clipPath id="clip160"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip161"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip162"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip163"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip164"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip165"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip166"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip167"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip168"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip169"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip170"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip171"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip172"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3845.5" y1="540" x2="3845.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill173"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip174"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip175"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3993" y1="540" x2="3993" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill176"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip177"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip178"><rect x="1268" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="4204.5" y1="540" x2="4204.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill179"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip180"><rect x="1268" y="175" width="5648" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1267 -174)"><rect x="1268" y="175" width="5648" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1511.5 2750.59 6865.5 2750.59M1511.5 2449.58 6865.5 2449.58M1511.5 2147.57 6865.5 2147.57M1511.5 1846.56 6865.5 1846.56M1511.5 1544.55 6865.5 1544.55M1511.5 1243.54 6865.5 1243.54M1511.5 941.531 6865.5 941.531M1511.5 640.5 6865.5 640.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1552" y="2838" width="79.0001" height="215" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1908" y="3000" width="79.9998" height="53" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2265" y="3029" width="80" height="23.9998" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2622" y="3039" width="79.9998" height="13.9998" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2979" y="3033" width="80" height="20" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3336" y="2628" width="79.9998" height="425" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3693" y="2943" width="80" 
height="110" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="4050" y="3009" width="80" height="44" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="4407" y="3029" width="80" height="23.9998" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4764" y="3019" width="80" height="33.9998" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="5478" y="2471" width="80" height="582" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="5835" y="2845" width="79.9995" height="208" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="6192" y="2981" width="79" height="71.9998" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="6549" y="2939" width="78.9995" height="114" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="1650" y="2840" width="80.0001" height="213" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="2007" y="3000" width="79.9996" height="53" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="2364" y="3029" width="80" height="23.9998" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="2721" y="3039" width="79.9998" height="13.9998" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="3078" y="3034" width="80" height="19" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="3435" y="2627" width="79.9998" height="426" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="3792" y="2944" width="80" height="109" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="4149" y="3009" width="80" height="44" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="4506" y="3029" width="80" height="23.9998" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="4863" y="3020" width="79" height="33" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="5220" y="850" width="79" height="2203" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="5577" y="2573" width="79" height="480" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="5934" y="2900" width="78.9995" height="153" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="6291" y="2985" width="79" height="67.9998" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="6648" y="2989" width="78.9995" height="64" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="1749" y="2841" width="80.0001" height="212" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="2106" y="3001" width="79.9998" height="52" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="2463" y="3029" width="80" height="23.9998" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="2820" y="3039" width="79.9998" height="13.9998" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="3177" y="3034" width="80" height="19" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="3534" y="2630" width="79.9998" height="423" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="3891" y="2945" width="79" height="108" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="4248" y="3009" width="79" height="44" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4605" y="3030" width="79" height="22.9998" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4962" y="3020" width="79" height="33" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="5319" y="829" width="79" height="2224" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5675" y="2575" width="80" height="478" 
fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="6032" y="2909" width="79.9995" height="144" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6389" y="2984" width="80" height="68.9998" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6746" y="2988" width="79.9995" height="65" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><path d="M1511.5 3052.5 6865.5 3052.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip91)"><path d="M1511.5 3052.5 1511.5 3132.5M3296.61 3052.5 3296.61 3132.5M5080.67 3052.5 5080.67 3132.5M6865.5 3052.5 6865.5 3132.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip92)"><path d="M1511.5 3132.5 1511.5 3212.5M3296.61 3132.5 3296.61 3212.5M5080.67 3132.5 5080.67 3212.5M6865.5 3132.5 6865.5 3212.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip93)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1560.02 2806)">143</text></g><g clip-path="url(#clip94)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1927.41 2968)">35</text></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2284.35 2997)">16</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2651.75 3007)">9</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2998.24 3001)">13</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3344.73 2597)">281</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3712.12 2911)">73</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4069.07 2977)">29</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4426.01 2997)">16</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4782.95 2987)">23</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5486.39 2439)">385</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5843.33 2813)">138</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6210.72 2949)">47</text></g><g 
clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6567.66 2907)">76</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.81 2808)">141</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2026.21 2968)">35</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 2997)">15</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2750.54 3007)">9</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3002)">12</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3443.53 2595)">282</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3810.92 2912)">72</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 2977)">29</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 2997)">15</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 2988)">21</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5217.79 818)">1461</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5585.18 2541)">318</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 2868)">101</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 2953)">45</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6666.46 2957)">42</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1757.61 2809)">140</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2125 2969)">34</text></g><g clip-path="url(#clip124)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2481.95 2997)">15</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2849.34 3007)">9</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3195.83 3002)">13</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3542.32 2598)">281</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3909.72 2913)">71</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4266.66 2977)">29</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4623.6 2998)">15</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4980.54 2988)">22</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5316.59 797)">1475</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5683.98 2543)">317</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6051.37 2877)">95</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6408.31 2952)">45</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6765.26 2956)">43</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1452.53 3064)">0</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2763)">200</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2461)">400</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 2160)">600</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1410.73 1858)">800</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 1389.83 1556)">1000</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 1255)">1200</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 953)">1400</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1389.83 652)">1600</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1679.72 3118)">1</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2036.66 3118)">4</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3118)">16</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2740.09 3118)">40</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3097.04 3118)">80</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3464.43 3118)">1</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3821.37 3118)">4</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4167.86 3118)">16</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4524.81 3118)">40</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4881.75 3118)">80</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5249.14 3118)">1</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5606.08 3118)">4</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5952.58 3118)">16</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6309.52 3118)">40</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6666.46 3118)">80</text></g><g clip-path="url(#clip161)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2383.15 3198)">32</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4157.41 3198)">128</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5942.13 3198)">512</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1364.73 1955)">Latency (ms)</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.64 3258)">Number of Cores</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4043.21 3309)">Sequence Length</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3535.98 280)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4271.33 280)">-</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4310.36 280)">TorchScript</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2318.97 369)">Latency Benchmark with varying number of CPU cores involved in the computations and different memory allocators</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2999.87 459)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip172)"><rect x="3834" y="540" width="23" height="23.0001" fill="url(#fill173)"/></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3866.18 563)">glibc</text></g><g clip-path="url(#clip175)"><rect x="3982" y="540" width="22" height="23.0001" fill="url(#fill176)"/></g><g clip-path="url(#clip177)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4013.79 563)">jemalloc</text></g><g clip-path="url(#clip178)"><rect x="4193" y="540" width="23" height="23.0001" fill="url(#fill179)"/></g><g clip-path="url(#clip180)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4225.17 563)">tcmalloc</text></g><rect x="1268.5" y="175.5" width="5648" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
4
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_intel_tensorflow_graph_latency.svg
<svg width="5651" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1274" y="174" width="5651" height="3205"/></clipPath><clipPath id="clip1"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip2"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1598" y1="2887" x2="1598" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1955" y1="3002" x2="1955" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2312" y1="3022" x2="2312" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2669" y1="3030" x2="2669" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3026" y1="3026" x2="3026" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3383" y1="2686" x2="3383" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3740" y1="2915" x2="3740" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4097" y1="2954" x2="4097" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4454" y1="2973" x2="4454" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4811" y1="2971" x2="4811" y2="3053" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5168" y1="951" x2="5168" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5524.5" y1="2576" x2="5524.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5881.5" y1="2848" x2="5881.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6238.5" y1="2932" x2="6238.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6595.5" y1="2932" x2="6595.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1697" y1="2874" x2="1697" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2054" y1="3002" x2="2054" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2411" y1="3022" x2="2411" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2768" y1="3030" x2="2768" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3125" y1="3025" x2="3125" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" 
stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3482" y1="2652" x2="3482" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3839" y1="2917" x2="3839" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4195.5" y1="2954" x2="4195.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4552.5" y1="2974" x2="4552.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4909.5" y1="2970" x2="4909.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5266.5" y1="988" x2="5266.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5623.5" y1="2578" x2="5623.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5980.5" y1="2851" x2="5980.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6337" y1="2936" x2="6337" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6694" y1="2932" x2="6694" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect 
x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="1796" y1="2883" x2="1796" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2153" y1="3002" x2="2153" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2510" y1="3022" x2="2510" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="2867" y1="3030" x2="2867" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3223.5" y1="3024" x2="3223.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3580.5" y1="2684" x2="3580.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="3937.5" y1="2918" x2="3937.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4294.5" y1="2952" x2="4294.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="4651.5" y1="2971" x2="4651.5" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5008" y1="2967" x2="5008" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5365" y1="924" x2="5365" y2="3053" 
gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="5722" y1="2580" x2="5722" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6079" y1="2849" x2="6079" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6436" y1="2934" x2="6436" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="1518" y="640" width="5358" height="2415"/></clipPath><linearGradient x1="6793" y1="2930" x2="6793" y2="3053" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip93"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip94"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip95"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip96"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip97"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip98"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip99"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip100"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip101"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip102"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip103"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip104"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip105"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip106"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip107"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip108"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip109"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip110"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip111"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip112"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip113"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip114"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip115"><rect x="1275" 
y="175" width="5648" height="3202"/></clipPath><clipPath id="clip116"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip117"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip118"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip119"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip120"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip121"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip122"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip123"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip124"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip125"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip126"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip127"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip128"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip129"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip130"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip131"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip132"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip133"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip134"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip135"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip136"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip137"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip138"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip139"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip140"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip141"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip142"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip143"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip144"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip145"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip146"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip147"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip148"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip149"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip150"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip151"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip152"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip153"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip154"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip155"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip156"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath 
id="clip157"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip158"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip159"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip160"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip161"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip162"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip163"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip164"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip165"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip166"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip167"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip168"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip169"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip170"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip171"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip172"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip173"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip174"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip175"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip176"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3852" y1="540" x2="3852" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill177"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip178"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip179"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="3999.5" y1="540" x2="3999.5" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill180"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip181"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><clipPath id="clip182"><rect x="1275" y="175" width="5648" height="3202"/></clipPath><linearGradient x1="4211" y1="540" x2="4211" y2="563" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill183"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip184"><rect x="1275" y="175" width="5648" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1274 -174)"><rect x="1275" y="175" width="5648" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M1518.5 2784.59 6872.5 2784.59M1518.5 2516.58 6872.5 2516.58M1518.5 2248.57 6872.5 2248.57M1518.5 1980.57 6872.5 1980.57M1518.5 1712.56 6872.5 1712.56M1518.5 1444.55 6872.5 1444.55M1518.5 1176.54 6872.5 1176.54M1518.5 908.53 6872.5 908.53M1518.5 640.5 6872.5 640.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="1558" y="2887" width="80.0001" height="166" fill="url(#fill3)"/></g><g 
clip-path="url(#clip4)"><rect x="1915" y="3002" width="80.0001" height="50.9998" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2272" y="3022" width="80" height="31" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2629" y="3030" width="80" height="22.9998" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2986" y="3026" width="79.9998" height="26.9998" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3343" y="2686" width="80" height="367" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3700" y="2915" width="79.9998" height="138" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="4057" y="2954" width="80" height="99" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="4414" y="2973" width="79.9995" height="79.9998" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4771" y="2971" width="80" height="81.9998" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="5128" y="951" width="80" height="2102" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="5485" y="2576" width="79" height="477" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5842" y="2848" width="79" height="205" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="6199" y="2932" width="79" height="121" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="6556" y="2932" width="79" height="121" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="1657" y="2874" width="80.0001" height="179" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="2014" y="3002" width="80" height="50.9998" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="2371" y="3022" width="80" height="31" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="2728" y="3030" width="80" height="22.9998" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="3085" y="3025" width="79.9998" height="27.9998" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="3442" y="2652" width="80" height="401" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3799" y="2917" width="79.9998" height="136" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="4156" y="2954" width="79" height="99" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="4513" y="2974" width="79" height="78.9998" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4870" y="2970" width="79" height="82.9998" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="5227" y="988" width="79" height="2065" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="5584" y="2578" width="79" height="475" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5941" y="2851" width="79" height="202" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="6297" y="2936" width="79.9995" height="117" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="6654" y="2932" width="80" height="121" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1756" y="2883" width="80.0001" height="170" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="2113" y="3002" width="80" height="50.9998" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="2470" y="3022" width="80" height="31" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2827" y="3030" width="80" height="22.9998" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="3184" y="3024" width="78.9998" height="28.9998" fill="url(#fill71)"/></g><g 
clip-path="url(#clip72)"><rect x="3541" y="2684" width="79" height="369" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3898" y="2918" width="78.9998" height="135" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4255" y="2952" width="79" height="101" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="4612" y="2971" width="79" height="81.9998" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="4968" y="2967" width="80" height="86" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5325" y="924" width="80" height="2129" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5682" y="2580" width="80" height="473" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6039" y="2849" width="80" height="204" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6396" y="2934" width="79.9995" height="119" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6753" y="2930" width="80" height="123" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M1518.5 3052.5 6872.5 3052.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M1518.5 3052.5 1518.5 3132.5M3303.61 3052.5 3303.61 3132.5M5087.67 3052.5 5087.67 3132.5M6872.5 3052.5 6872.5 3132.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M1518.5 3132.5 1518.5 3212.5M3303.61 3132.5 3303.61 3212.5M5087.67 3132.5 5087.67 3212.5M6872.5 3132.5 6872.5 3212.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1566.89 2855)">124</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1934.29 2970)">37</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2291.23 2990)">23</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2648.17 2998)">17</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3005.11 2994)">20</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3351.61 2654)">273</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3708.55 2883)">103</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4075.94 2922)">74</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4432.88 2941)">60</text></g><g clip-path="url(#clip104)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4789.83 2939)">61</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5125.87 919)">1568</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5493.26 2544)">356</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5850.2 2816)">152</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6217.6 2900)">90</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6574.54 2900)">90</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1665.69 2842)">133</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2033.08 2971)">37</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2390.03 2990)">23</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2746.97 2998)">17</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3103.91 2993)">21</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3450.4 2620)">299</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3807.35 2885)">101</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4174.74 2922)">73</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4531.68 2942)">59</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4888.62 2938)">62</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5224.67 956)">1541</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5592.06 2546)">354</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 5949 2819)">151</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6316.39 2904)">87</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6673.34 2901)">90</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1764.49 2851)">127</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2131.88 2970)">38</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2488.82 2990)">23</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2845.76 2998)">17</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3202.71 2992)">21</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3549.2 2652)">275</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3906.14 2886)">101</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4273.53 2920)">75</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4630.48 2939)">61</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4987.42 2935)">64</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5323.46 892)">1588</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5690.85 2548)">352</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6047.8 2817)">152</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6415.19 2902)">88</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6772.13 2898)">92</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1459.4 3064)">0</text></g><g clip-path="url(#clip141)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1417.6 2796)">200</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1417.6 2528)">400</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1417.6 2260)">600</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1417.6 1992)">800</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1396.7 1724)">1000</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1396.7 1456)">1200</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1396.7 1188)">1400</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1396.7 920)">1600</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1396.7 652)">1800</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1686.59 3118)">1</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2043.53 3118)">4</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2390.03 3118)">16</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2746.97 3118)">40</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3103.91 3118)">80</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3471.3 3118)">1</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3828.25 3118)">4</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4174.74 3118)">16</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4531.68 3118)">40</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" 
font-size="41" transform="matrix(1 0 0 1 4888.62 3118)">80</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5256.02 3118)">1</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5612.96 3118)">4</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5959.45 3118)">16</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6316.39 3118)">40</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6673.34 3118)">80</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2390.03 3198)">32</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4164.29 3198)">128</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5949 3198)">512</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1371.61 1955)">Latency (ms)</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4050.52 3258)">Number of Cores</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4050.08 3309)">Sequence Length</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3360.3 280)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip172)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4095.65 280)">-</text></g><g clip-path="url(#clip173)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4134.68 280)">Intel TensorFlow Graph</text></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2325.85 369)">Latency Benchmark with varying number of CPU cores involved in the computations and different memory allocators</text></g><g clip-path="url(#clip175)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3006.75 459)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip176)"><rect x="3841" y="540" width="22" height="23.0001" fill="url(#fill177)"/></g><g clip-path="url(#clip178)"><text 
fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3873.05 563)">glibc</text></g><g clip-path="url(#clip179)"><rect x="3988" y="540" width="23" height="23.0001" fill="url(#fill180)"/></g><g clip-path="url(#clip181)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4020.66 563)">jemalloc</text></g><g clip-path="url(#clip182)"><rect x="4200" y="540" width="22" height="23.0001" fill="url(#fill183)"/></g><g clip-path="url(#clip184)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4232.05 563)">tcmalloc</text></g><rect x="1275.5" y="175.5" width="5648" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
5
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_onednn_graph_latency.svg
<svg width="5567" height="3205" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="557" y="48" width="5567" height="3205"/></clipPath><clipPath id="clip1"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip2"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="879.5" y1="2808" x2="879.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1231" y1="2876" x2="1231" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1582" y1="2883" x2="1582" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1933.5" y1="2883" x2="1933.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2285" y1="2883" x2="2285" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2636" y1="2606" x2="2636" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2987.5" y1="2714" x2="2987.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3339" y1="2701" x2="3339" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3690.5" y1="2688" x2="3690.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4041.5" y1="2688" x2="4041.5" y2="2977" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4393" y1="859" x2="4393" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4744.5" y1="1833" x2="4744.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5096" y1="1843" x2="5096" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5447" y1="1769" x2="5447" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5798.5" y1="1706" x2="5798.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="976.5" y1="2795" x2="976.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip34"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1328" y1="2877" x2="1328" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip36"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1679.5" y1="2884" x2="1679.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip38"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2031" y1="2885" x2="2031" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip40"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2382" y1="2886" x2="2382" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" 
stop-color="#E56B17"/></linearGradient><clipPath id="clip42"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2733.5" y1="2559" x2="2733.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip44"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3085" y1="2716" x2="3085" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip46"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3436" y1="2702" x2="3436" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip48"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3787.5" y1="2693" x2="3787.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip50"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4139" y1="2690" x2="4139" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4490.5" y1="856" x2="4490.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4841.5" y1="1833" x2="4841.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5193" y1="1844" x2="5193" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5544.5" y1="1783" x2="5544.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5896" y1="1715" x2="5896" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="800" y="514" width="5274" 
height="2466"/></clipPath><linearGradient x1="1074" y1="2801" x2="1074" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip64"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1425" y1="2876" x2="1425" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip66"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="1776.5" y1="2884" x2="1776.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip68"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2128" y1="2885" x2="2128" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip70"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2479.5" y1="2884" x2="2479.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip72"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="2831" y1="2603" x2="2831" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip74"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3182" y1="2716" x2="3182" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip76"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3533.5" y1="2699" x2="3533.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip78"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="3885" y1="2690" x2="3885" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip80"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4236" y1="2687" x2="4236" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip82"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4587.5" y1="816" x2="4587.5" y2="2977" gradientUnits="userSpaceOnUse" 
spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip84"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="4939" y1="1835" x2="4939" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip86"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5290.5" y1="1832" x2="5290.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip88"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5641.5" y1="1773" x2="5641.5" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip90"><rect x="800" y="514" width="5274" height="2466"/></clipPath><linearGradient x1="5993" y1="1707" x2="5993" y2="2977" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip92"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip93"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip94"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip95"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip96"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip97"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip98"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip99"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip100"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip101"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip102"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip103"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip104"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip105"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip106"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip107"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip108"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip109"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip110"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip111"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip112"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip113"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip114"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip115"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath 
id="clip116"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip117"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip118"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip119"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip120"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip121"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip122"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip123"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip124"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip125"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip126"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip127"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip128"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip129"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip130"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip131"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip132"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip133"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip134"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip135"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip136"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip137"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip138"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip139"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip140"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip141"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip142"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip143"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip144"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip145"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip146"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip147"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip148"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip149"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip150"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip151"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip152"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip153"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip154"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip155"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip156"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip157"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip158"><rect x="557" y="49" width="5564" 
height="3202"/></clipPath><clipPath id="clip159"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip160"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip161"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip162"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip163"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip164"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip165"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip166"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip167"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip168"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip169"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip170"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip171"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip172"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip173"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip174"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip175"><rect x="557" y="49" width="5564" height="3202"/></clipPath><linearGradient x1="3092.5" y1="414" x2="3092.5" y2="436" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill176"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip177"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip178"><rect x="557" y="49" width="5564" height="3202"/></clipPath><linearGradient x1="3240.5" y1="414" x2="3240.5" y2="436" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill179"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip180"><rect x="557" y="49" width="5564" height="3202"/></clipPath><clipPath id="clip181"><rect x="557" y="49" width="5564" height="3202"/></clipPath><linearGradient x1="3451.5" y1="414" x2="3451.5" y2="436" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill182"><stop offset="0" stop-color="#AFAFAF"/><stop offset="0.5" stop-color="#A5A5A5"/><stop offset="1" stop-color="#929292"/></linearGradient><clipPath id="clip183"><rect x="557" y="49" width="5564" height="3202"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-557 -48)"><rect x="558" y="49.0002" width="5564" height="3202" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M800.5 2703.59 6071.5 2703.59M800.5 2429.58 6071.5 2429.58M800.5 2155.57 6071.5 2155.57M800.5 1882.56 6071.5 1882.56M800.5 1608.55 6071.5 1608.55M800.5 1335.54 6071.5 1335.54M800.5 1061.54 6071.5 1061.54M800.5 787.526 6071.5 787.526M800.5 514.5 6071.5 514.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="840" y="2808" width="78.9997" height="169" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="1192" y="2876" width="78" height="101" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="1543" y="2883" width="78" height="93.9998" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="1894" y="2883" width="79.0001" 
height="93.9998" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2246" y="2883" width="78" height="93.9998" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="2597" y="2606" width="78" height="371" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="2948" y="2714" width="79" height="263" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="3300" y="2701" width="78" height="276" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="3651" y="2688" width="79" height="289" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4002" y="2688" width="79" height="289" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="4354" y="859" width="78" height="2118" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="4705" y="1833" width="79" height="1144" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="5057" y="1843" width="78" height="1134" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="5408" y="1769" width="78" height="1208" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="5759" y="1706" width="79" height="1271" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="937" y="2795" width="79.0001" height="182" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="1289" y="2877" width="78" height="100" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="1640" y="2884" width="79.0001" height="92.9998" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="1992" y="2885" width="78" height="91.9998" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="2343" y="2886" width="78" height="90.9998" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="2694" y="2559" width="79" height="418" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="3046" y="2716" width="78" height="261" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="3397" y="2702" width="78" height="275" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="3748" y="2693" width="79" height="284" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="4100" y="2690" width="78" height="287" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="4451" y="856" width="79" height="2121" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="4802" y="1833" width="79" height="1144" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="5154" y="1844" width="78" height="1133" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="5505" y="1783" width="78.9995" height="1194" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="5857" y="1715" width="77.9995" height="1262" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="1035" y="2801" width="78" height="176" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="1386" y="2876" width="78" height="101" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="1737" y="2884" width="79.0001" height="92.9998" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="2089" y="2885" width="78" height="91.9998" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="2440" y="2884" width="79" height="92.9998" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="2792" y="2603" width="78" height="374" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="3143" y="2716" width="78" height="261" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="3494" y="2699" width="79" height="278" fill="url(#fill77)"/></g><g 
clip-path="url(#clip78)"><rect x="3846" y="2690" width="78" height="287" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="4197" y="2687" width="78" height="290" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="4548" y="816" width="78.9995" height="2161" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="4900" y="1835" width="77.9995" height="1142" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="5251" y="1832" width="78.9995" height="1145" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="5602" y="1773" width="79" height="1204" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="5954" y="1707" width="78" height="1270" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><path d="M800.5 2976.5 6071.5 2976.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip93)"><path d="M800.5 2976.5 800.5 3056.5M2557.58 2976.5 2557.58 3056.5M4314.64 2976.5 4314.64 3056.5M6071.5 2976.5 6071.5 3056.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip94)"><path d="M800.5 3056.5 800.5 3136.5M2557.58 3056.5 2557.58 3136.5M4314.64 3056.5 4314.64 3136.5M6071.5 3056.5 6071.5 3136.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip95)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 848.006 2776)">124</text></g><g clip-path="url(#clip96)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1209.83 2844)">73</text></g><g clip-path="url(#clip97)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1561.2 2851)">69</text></g><g clip-path="url(#clip98)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1912.57 2851)">69</text></g><g clip-path="url(#clip99)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2263.94 2851)">69</text></g><g clip-path="url(#clip100)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2604.87 2574)">271</text></g><g clip-path="url(#clip101)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2956.24 2682)">192</text></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3307.61 2669)">202</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3658.98 2656)">211</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4010.35 2656)">211</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4351.28 828)">1548</text></g><g 
clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4713.1 1801)">836</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5064.47 1811)">829</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5415.84 1737)">883</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5767.21 1674)">929</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 945.261 2763)">133</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1307.08 2845)">73</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.45 2852)">68</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2009.83 2853)">67</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2361.2 2854)">67</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2702.12 2527)">305</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3053.49 2684)">190</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3404.86 2671)">200</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3756.24 2661)">207</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4107.61 2658)">210</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4448.53 824)">1550</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4810.35 1801)">836</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5161.73 1812)">828</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5513.1 1751)">873</text></g><g clip-path="url(#clip124)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5864.47 1683)">922</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1042.52 2769)">128</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1404.34 2844)">73</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1755.71 2852)">68</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2107.08 2853)">67</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2458.45 2852)">68</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2799.38 2571)">273</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3150.75 2684)">191</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3502.12 2667)">203</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3853.49 2658)">209</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4204.86 2655)">212</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4545.79 784)">1580</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4907.61 1803)">835</text></g><g clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5258.98 1800)">837</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5610.35 1741)">880</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5961.72 1675)">928</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 741.758 2988)">0</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 699.958 2715)">200</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" 
font-size="41" transform="matrix(1 0 0 1 699.958 2441)">400</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 699.958 2167)">600</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 699.958 1894)">800</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 679.058 1620)">1000</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 679.058 1347)">1200</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 679.058 1073)">1400</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 679.058 799)">1600</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 679.058 526)">1800</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 966.161 3042)">1</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1317.53 3042)">4</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.45 3042)">16</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2009.83 3042)">40</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2361.2 3042)">80</text></g><g clip-path="url(#clip155)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2723.02 3042)">1</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3074.39 3042)">4</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3415.32 3042)">16</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3766.69 3042)">40</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4118.06 3042)">80</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4479.88 3042)">1</text></g><g 
clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4831.25 3042)">4</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5172.18 3042)">16</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5523.55 3042)">40</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5874.92 3042)">80</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1658.45 3122)">32</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3404.87 3122)">128</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5161.73 3122)">512</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 653.964 1854)">Latency (ms)</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 3295.32 3183)">Sequence Length</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2362.61 154)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3097.96 154)">-</text></g><g clip-path="url(#clip172)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3136.98 154)">TensorFlow Graph + oneDNN (&gt;= 2.5.0)</text></g><g clip-path="url(#clip173)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 1581.06 243)">Latency Benchmark with varying number of CPU cores involved in the computation and different memory allocators</text></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 2247.34 333)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip175)"><rect x="3081" y="414" width="23" height="22" fill="url(#fill176)"/></g><g clip-path="url(#clip177)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3113.63 437)">glibc</text></g><g clip-path="url(#clip178)"><rect x="3229" y="414" width="23" height="22" fill="url(#fill179)"/></g><g clip-path="url(#clip180)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3261.24 437)">jemalloc</text></g><g 
clip-path="url(#clip181)"><rect x="3440" y="414" width="23" height="22" fill="url(#fill182)"/></g><g clip-path="url(#clip183)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3472.62 437)">tcmalloc</text></g><rect x="558.5" y="49.5" width="5564" height="3202" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
6
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/sigopt/Intel Ice lake Xeon 8380 - TorchScript - Batch Size 1 - Absolute Best Latency vs SigOpt Best Latency.svg
<svg version="1.1" viewBox="0.0 0.0 1100.0 680.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" width="1100" height="680" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"><path fill="#ffffff" d="M0 0L1100.0 0L1100.0 680.0L0 680.0L0 0Z" fill-rule="nonzero"/><path stroke="#333333" stroke-width="1.0" stroke-linecap="butt" d="M93.5 557.5L1066.5 557.5" fill-rule="nonzero"/><path stroke="#cccccc" stroke-width="1.0" stroke-linecap="butt" d="M93.5 463.5L1066.5 463.5" fill-rule="nonzero"/><path stroke="#cccccc" stroke-width="1.0" stroke-linecap="butt" d="M93.5 369.5L1066.5 369.5" fill-rule="nonzero"/><path stroke="#cccccc" stroke-width="1.0" stroke-linecap="butt" d="M93.5 275.5L1066.5 275.5" fill-rule="nonzero"/><path stroke="#cccccc" stroke-width="1.0" stroke-linecap="butt" d="M93.5 181.5L1066.5 181.5" fill-rule="nonzero"/><path stroke="#cccccc" stroke-width="1.0" stroke-linecap="butt" d="M93.5 87.5L1066.5 87.5" fill-rule="nonzero"/><clipPath id="id_0"><path d="M93.0 87.666664L1066.0 87.666664L1066.0 557.3333L93.0 557.3333L93.0 87.666664Z" clip-rule="nonzero"/></clipPath><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M215.0 557.0L158.0 557.0L158.0 487.0C158.0 485.89542 158.89543 485.0 160.0 485.0L213.0 485.0C214.10457 485.0 215.0 485.89542 215.0 487.0Z" fill-rule="nonzero"/><path fill="#4285f4" clip-path="url(#id_0)" d="M215.0 557.0L158.0 557.0L158.0 487.0C158.0 485.89542 158.89543 485.0 160.0 485.0L213.0 485.0C214.10457 485.0 215.0 485.89542 215.0 487.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M397.0 557.0L340.0 557.0L340.0 480.0C340.0 478.89542 340.89542 478.0 342.0 478.0L395.0 478.0C396.10458 478.0 397.0 478.89542 397.0 480.0Z" fill-rule="nonzero"/><path fill="#4285f4" clip-path="url(#id_0)" d="M397.0 557.0L340.0 557.0L340.0 480.0C340.0 478.89542 340.89542 478.0 342.0 478.0L395.0 478.0C396.10458 478.0 397.0 478.89542 397.0 480.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M579.0 557.0L522.0 557.0L522.0 425.0C522.0 423.89542 522.89545 423.0 524.0 423.0L577.0 423.0C578.10455 423.0 579.0 423.89542 579.0 425.0Z" fill-rule="nonzero"/><path fill="#4285f4" clip-path="url(#id_0)" d="M579.0 557.0L522.0 557.0L522.0 425.0C522.0 423.89542 522.89545 423.0 524.0 423.0L577.0 423.0C578.10455 423.0 579.0 423.89542 579.0 425.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M760.0 557.0L703.0 557.0L703.0 268.0C703.0 266.89542 703.89545 266.0 705.0 266.0L758.0 266.0C759.10455 266.0 760.0 266.89542 760.0 268.0Z" fill-rule="nonzero"/><path fill="#4285f4" clip-path="url(#id_0)" d="M760.0 557.0L703.0 557.0L703.0 268.0C703.0 266.89542 703.89545 266.0 705.0 266.0L758.0 266.0C759.10455 266.0 760.0 266.89542 760.0 268.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M942.0 557.0L885.0 557.0L885.0 183.0C885.0 181.89543 885.89545 181.0 887.0 181.0L940.0 181.0C941.10455 181.0 942.0 181.89543 942.0 183.0Z" fill-rule="nonzero"/><path fill="#4285f4" clip-path="url(#id_0)" d="M942.0 557.0L885.0 557.0L885.0 183.0C885.0 181.89543 885.89545 181.0 887.0 181.0L940.0 181.0C941.10455 181.0 942.0 181.89543 942.0 183.0Z" fill-rule="nonzero"/><path stroke="#000000" 
stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M274.0 557.0L217.0 557.0L217.0 487.0C217.0 485.89542 217.89543 485.0 219.0 485.0L272.0 485.0C273.10458 485.0 274.0 485.89542 274.0 487.0Z" fill-rule="nonzero"/><path fill="#ea4335" clip-path="url(#id_0)" d="M274.0 557.0L217.0 557.0L217.0 487.0C217.0 485.89542 217.89543 485.0 219.0 485.0L272.0 485.0C273.10458 485.0 274.0 485.89542 274.0 487.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M456.0 557.0L399.0 557.0L399.0 478.0C399.0 476.89542 399.89542 476.0 401.0 476.0L454.0 476.0C455.10458 476.0 456.0 476.89542 456.0 478.0Z" fill-rule="nonzero"/><path fill="#ea4335" clip-path="url(#id_0)" d="M456.0 557.0L399.0 557.0L399.0 478.0C399.0 476.89542 399.89542 476.0 401.0 476.0L454.0 476.0C455.10458 476.0 456.0 476.89542 456.0 478.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M638.0 557.0L581.0 557.0L581.0 420.0C581.0 418.89542 581.89545 418.0 583.0 418.0L636.0 418.0C637.10455 418.0 638.0 418.89542 638.0 420.0Z" fill-rule="nonzero"/><path fill="#ea4335" clip-path="url(#id_0)" d="M638.0 557.0L581.0 557.0L581.0 420.0C581.0 418.89542 581.89545 418.0 583.0 418.0L636.0 418.0C637.10455 418.0 638.0 418.89542 638.0 420.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M819.0 557.0L762.0 557.0L762.0 243.0C762.0 241.89543 762.89545 241.0 764.0 241.0L817.0 241.0C818.10455 241.0 819.0 241.89543 819.0 243.0Z" fill-rule="nonzero"/><path fill="#ea4335" clip-path="url(#id_0)" d="M819.0 557.0L762.0 557.0L762.0 243.0C762.0 241.89543 762.89545 241.0 764.0 241.0L817.0 241.0C818.10455 241.0 819.0 241.89543 819.0 243.0Z" fill-rule="nonzero"/><path stroke="#000000" stroke-width="2.0" stroke-linecap="butt" stroke-opacity="0.0" clip-path="url(#id_0)" d="M1001.0 557.0L944.0 557.0L944.0 185.0C944.0 183.89543 944.89545 183.0 946.0 183.0L999.0 183.0C1000.10455 183.0 1001.0 183.89543 1001.0 185.0Z" fill-rule="nonzero"/><path fill="#ea4335" clip-path="url(#id_0)" d="M1001.0 557.0L944.0 557.0L944.0 185.0C944.0 183.89543 944.89545 183.0 946.0 183.0L999.0 183.0C1000.10455 183.0 1001.0 183.89543 1001.0 185.0Z" fill-rule="nonzero"/><path fill="#000000" d="M531.5469 603.5677L532.6094 603.47394Q532.6875 604.1302 532.96875 604.53644Q533.25 604.9427 533.8281 605.2083Q534.4219 605.4583 535.15625 605.4583Q535.7969 605.4583 536.2969 605.2708Q536.7969 605.0677 537.03125 604.73956Q537.28125 604.3958 537.28125 604.0052Q537.28125 603.59894 537.0469 603.30206Q536.8125 602.98956 536.28125 602.78644Q535.9219 602.6458 534.7344 602.36456Q533.5625 602.0833 533.09375 601.8333Q532.46875 601.5052 532.1719 601.03644Q531.875 600.55206 531.875 599.9583Q531.875 599.3177 532.2344 598.7552Q532.6094 598.17706 533.3125 597.8958Q534.03125 597.59894 534.8906 597.59894Q535.84375 597.59894 536.5781 597.91144Q537.3125 598.2083 537.7031 598.8177Q538.09375 599.41144 538.125 600.16144L537.03125 600.23956Q536.9375 599.42706 536.4219 599.0208Q535.9219 598.59894 534.9375 598.59894Q533.90625 598.59894 533.4375 598.97394Q532.96875 599.34894 532.96875 599.8802Q532.96875 600.34894 533.2969 600.6458Q533.625 600.9427 535.0 601.2552Q536.3906 601.5677 536.90625 601.80206Q537.65625 602.16144 538.0156 602.6927Q538.375 603.2083 538.375 603.91144Q538.375 604.59894 537.9844 605.2083Q537.59375 605.80206 536.8594 606.1458Q536.125 606.47394 
535.2031 606.47394Q534.03125 606.47394 533.2344 606.1458Q532.4531 605.80206 532.0 605.11456Q531.5625 604.42706 531.5469 603.5677ZM544.0469 604.3333L545.1406 604.4583Q544.8906 605.41144 544.1875 605.9427Q543.4844 606.47394 542.40625 606.47394Q541.0469 606.47394 540.2344 605.6302Q539.4375 604.78644 539.4375 603.2708Q539.4375 601.7083 540.25 600.84894Q541.0625 599.97394 542.34375 599.97394Q543.59375 599.97394 544.375 600.8177Q545.1719 601.66144 545.1719 603.2083Q545.1719 603.30206 545.1719 603.48956L540.53125 603.48956Q540.59375 604.5208 541.1094 605.0677Q541.625 605.59894 542.40625 605.59894Q542.9844 605.59894 543.3906 605.30206Q543.8125 604.98956 544.0469 604.3333ZM540.59375 602.6302L544.0625 602.6302Q544.0 601.8333 543.6719 601.4427Q543.15625 600.8333 542.3594 600.8333Q541.625 600.8333 541.125 601.3177Q540.6406 601.80206 540.59375 602.6302ZM550.7656 608.72394L550.7656 605.67706Q550.5156 606.0208 550.0625 606.2552Q549.625 606.47394 549.125 606.47394Q548.0156 606.47394 547.21875 605.59894Q546.4219 604.7083 546.4219 603.16144Q546.4219 602.22394 546.75 601.48956Q547.0781 600.73956 547.6875 600.36456Q548.3125 599.97394 549.0469 599.97394Q550.2031 599.97394 550.8594 600.9427L550.8594 600.11456L551.8125 600.11456L551.8125 608.72394L550.7656 608.72394ZM547.5 603.2083Q547.5 604.41144 548.0 605.0052Q548.5156 605.59894 549.21875 605.59894Q549.8906 605.59894 550.375 605.03644Q550.8594 604.4583 550.8594 603.30206Q550.8594 602.05206 550.34375 601.42706Q549.84375 600.80206 549.1406 600.80206Q548.46875 600.80206 547.9844 601.3958Q547.5 601.97394 547.5 603.2083ZM557.875 606.3333L557.875 605.41144Q557.1406 606.47394 555.8906 606.47394Q555.34375 606.47394 554.8594 606.2708Q554.3906 606.05206 554.15625 605.73956Q553.9219 605.41144 553.8281 604.9427Q553.7656 604.6458 553.7656 603.9583L553.7656 600.11456L554.8281 600.11456L554.8281 603.5677Q554.8281 604.3802 554.8906 604.67706Q554.9844 605.0833 555.2969 605.3333Q555.625 605.5677 556.1094 605.5677Q556.5781 605.5677 556.9844 605.3333Q557.40625 605.0833 557.5781 604.66144Q557.7656 604.23956 557.7656 603.4427L557.7656 600.11456L558.8125 600.11456L558.8125 606.3333L557.875 606.3333ZM565.0469 604.3333L566.1406 604.4583Q565.8906 605.41144 565.1875 605.9427Q564.4844 606.47394 563.40625 606.47394Q562.0469 606.47394 561.2344 605.6302Q560.4375 604.78644 560.4375 603.2708Q560.4375 601.7083 561.25 600.84894Q562.0625 599.97394 563.34375 599.97394Q564.59375 599.97394 565.375 600.8177Q566.1719 601.66144 566.1719 603.2083Q566.1719 603.30206 566.1719 603.48956L561.53125 603.48956Q561.59375 604.5208 562.1094 605.0677Q562.625 605.59894 563.40625 605.59894Q563.9844 605.59894 564.3906 605.30206Q564.8125 604.98956 565.0469 604.3333ZM561.59375 602.6302L565.0625 602.6302Q565.0 601.8333 564.6719 601.4427Q564.15625 600.8333 563.3594 600.8333Q562.625 600.8333 562.125 601.3177Q561.6406 601.80206 561.59375 602.6302ZM567.7969 606.3333L567.7969 600.11456L568.7344 600.11456L568.7344 600.98956Q569.4219 599.97394 570.71875 599.97394Q571.28125 599.97394 571.75 600.17706Q572.2344 600.3802 572.46875 600.7083Q572.7031 601.03644 572.7969 601.47394Q572.84375 601.7708 572.84375 602.5052L572.84375 606.3333L571.7969 606.3333L571.7969 602.55206Q571.7969 601.8958 571.6719 601.5833Q571.5469 601.2708 571.2344 601.0833Q570.9219 600.8802 570.5 600.8802Q569.8281 600.8802 569.3281 601.3177Q568.84375 601.73956 568.84375 602.92706L568.84375 606.3333L567.7969 606.3333ZM578.8594 604.05206L579.8906 604.1927Q579.71875 605.2552 579.0156 605.86456Q578.3125 606.47394 577.2969 606.47394Q576.0156 606.47394 575.2344 
605.6458Q574.46875 604.80206 574.46875 603.23956Q574.46875 602.23956 574.7969 601.48956Q575.1406 600.72394 575.8125 600.34894Q576.5 599.97394 577.3125 599.97394Q578.3125 599.97394 578.9531 600.48956Q579.6094 600.98956 579.7969 601.9427L578.7656 602.09894Q578.625 601.47394 578.25 601.16144Q577.875 600.8333 577.34375 600.8333Q576.5469 600.8333 576.0469 601.41144Q575.5469 601.97394 575.5469 603.2083Q575.5469 604.47394 576.03125 605.03644Q576.5156 605.59894 577.28125 605.59894Q577.90625 605.59894 578.3125 605.22394Q578.7344 604.84894 578.8594 604.05206ZM585.0469 604.3333L586.1406 604.4583Q585.8906 605.41144 585.1875 605.9427Q584.4844 606.47394 583.40625 606.47394Q582.0469 606.47394 581.2344 605.6302Q580.4375 604.78644 580.4375 603.2708Q580.4375 601.7083 581.25 600.84894Q582.0625 599.97394 583.34375 599.97394Q584.59375 599.97394 585.375 600.8177Q586.1719 601.66144 586.1719 603.2083Q586.1719 603.30206 586.1719 603.48956L581.53125 603.48956Q581.59375 604.5208 582.1094 605.0677Q582.625 605.59894 583.40625 605.59894Q583.9844 605.59894 584.3906 605.30206Q584.8125 604.98956 585.0469 604.3333ZM581.59375 602.6302L585.0625 602.6302Q585.0 601.8333 584.6719 601.4427Q584.15625 600.8333 583.3594 600.8333Q582.625 600.8333 582.125 601.3177Q581.6406 601.80206 581.59375 602.6302ZM590.875 606.3333L590.875 597.73956L592.0156 597.73956L592.0156 605.3177L596.25 605.3177L596.25 606.3333L590.875 606.3333ZM602.0469 604.3333L603.1406 604.4583Q602.8906 605.41144 602.1875 605.9427Q601.4844 606.47394 600.40625 606.47394Q599.0469 606.47394 598.2344 605.6302Q597.4375 604.78644 597.4375 603.2708Q597.4375 601.7083 598.25 600.84894Q599.0625 599.97394 600.34375 599.97394Q601.59375 599.97394 602.375 600.8177Q603.1719 601.66144 603.1719 603.2083Q603.1719 603.30206 603.1719 603.48956L598.53125 603.48956Q598.59375 604.5208 599.1094 605.0677Q599.625 605.59894 600.40625 605.59894Q600.9844 605.59894 601.3906 605.30206Q601.8125 604.98956 602.0469 604.3333ZM598.59375 602.6302L602.0625 602.6302Q602.0 601.8333 601.6719 601.4427Q601.15625 600.8333 600.3594 600.8333Q599.625 600.8333 599.125 601.3177Q598.6406 601.80206 598.59375 602.6302ZM604.7969 606.3333L604.7969 600.11456L605.7344 600.11456L605.7344 600.98956Q606.4219 599.97394 607.71875 599.97394Q608.28125 599.97394 608.75 600.17706Q609.2344 600.3802 609.46875 600.7083Q609.7031 601.03644 609.7969 601.47394Q609.84375 601.7708 609.84375 602.5052L609.84375 606.3333L608.7969 606.3333L608.7969 602.55206Q608.7969 601.8958 608.6719 601.5833Q608.5469 601.2708 608.2344 601.0833Q607.9219 600.8802 607.5 600.8802Q606.8281 600.8802 606.3281 601.3177Q605.84375 601.73956 605.84375 602.92706L605.84375 606.3333L604.7969 606.3333ZM611.59375 606.84894L612.625 607.0052Q612.6875 607.47394 612.9844 607.6927Q613.375 607.98956 614.0469 607.98956Q614.78125 607.98956 615.1719 607.6927Q615.5781 607.3958 615.71875 606.8802Q615.8125 606.55206 615.7969 605.5208Q615.1094 606.3333 614.0781 606.3333Q612.7969 606.3333 612.09375 605.41144Q611.3906 604.47394 611.3906 603.1927Q611.3906 602.30206 611.7031 601.55206Q612.03125 600.78644 612.6406 600.3802Q613.25 599.97394 614.0781 599.97394Q615.1875 599.97394 615.90625 600.86456L615.90625 600.11456L616.875 600.11456L616.875 605.48956Q616.875 606.9427 616.5781 607.55206Q616.28125 608.16144 615.6406 608.5052Q615.0 608.86456 614.0625 608.86456Q612.9531 608.86456 612.2656 608.36456Q611.5781 607.86456 611.59375 606.84894ZM612.46875 603.11456Q612.46875 604.3333 612.9531 604.8958Q613.4375 605.4583 614.1719 605.4583Q614.90625 605.4583 615.3906 604.8958Q615.8906 604.3333 615.8906 
603.1458Q615.8906 602.0052 615.375 601.42706Q614.875 600.84894 614.15625 600.84894Q613.4531 600.84894 612.9531 601.42706Q612.46875 601.98956 612.46875 603.11456ZM621.09375 605.3958L621.25 606.3177Q620.7969 606.41144 620.4531 606.41144Q619.875 606.41144 619.5625 606.23956Q619.25 606.05206 619.1094 605.7552Q618.9844 605.4583 618.9844 604.5052L618.9844 600.92706L618.21875 600.92706L618.21875 600.11456L618.9844 600.11456L618.9844 598.5677L620.03125 597.9427L620.03125 600.11456L621.09375 600.11456L621.09375 600.92706L620.03125 600.92706L620.03125 604.5677Q620.03125 605.0208 620.0781 605.1458Q620.1406 605.2708 620.2656 605.34894Q620.3906 605.42706 620.625 605.42706Q620.8125 605.42706 621.09375 605.3958ZM621.7969 606.3333L621.7969 597.73956L622.84375 597.73956L622.84375 600.8177Q623.5781 599.97394 624.7031 599.97394Q625.40625 599.97394 625.90625 600.2552Q626.4219 600.5208 626.6406 601.0052Q626.8594 601.47394 626.8594 602.3958L626.8594 606.3333L625.8125 606.3333L625.8125 602.3958Q625.8125 601.59894 625.46875 601.23956Q625.125 600.8802 624.5 600.8802Q624.03125 600.8802 623.6094 601.1302Q623.2031 601.36456 623.0156 601.78644Q622.84375 602.1927 622.84375 602.92706L622.84375 606.3333L621.7969 606.3333Z" fill-rule="nonzero"/><path fill="#000000" d="M44.99998 363.625L36.40623 363.625L36.40623 362.48438L43.984356 362.48438L43.984356 358.25L44.99998 358.25L44.99998 363.625ZM44.234356 352.64066Q44.734356 353.2344 44.93748 353.78128Q45.140606 354.31253 45.140606 354.93753Q45.140606 355.96875 44.640606 356.51562Q44.140606 357.0625 43.359356 357.0625Q42.90623 357.0625 42.53123 356.85938Q42.140606 356.65625 41.921856 356.3125Q41.68748 355.96875 41.578106 355.54688Q41.484356 355.25003 41.40623 354.6094Q41.265606 353.34378 41.046856 352.7344Q40.828106 352.7344 40.78123 352.7344Q40.12498 352.7344 39.859356 353.03128Q39.515606 353.43753 39.515606 354.2344Q39.515606 354.96878 39.78123 355.32816Q40.03123 355.6875 40.68748 355.85938L40.546856 356.89062Q39.890606 356.75 39.484356 356.42188Q39.078106 356.09375 38.859356 355.48438Q38.640606 354.87503 38.640606 354.07816Q38.640606 353.28128 38.828106 352.78128Q39.015606 352.28128 39.296856 352.0469Q39.578106 351.81253 40.015606 351.71878Q40.28123 351.6719 40.984356 351.6719L42.390606 351.6719Q43.859356 351.6719 44.24998 351.6094Q44.640606 351.53128 44.99998 351.32816L44.99998 352.43753Q44.671856 352.59378 44.234356 352.64066ZM41.87498 352.7344Q42.109356 353.31253 42.28123 354.45316Q42.37498 355.1094 42.49998 355.375Q42.609356 355.64062 42.828106 355.79688Q43.046856 355.9375 43.328106 355.9375Q43.74998 355.9375 44.03123 355.625Q44.31248 355.2969 44.31248 354.68753Q44.31248 354.07816 44.046856 353.6094Q43.78123 353.12503 43.31248 352.90628Q42.953106 352.7344 42.265606 352.7344L41.87498 352.7344ZM44.06248 347.40628L44.984356 347.25003Q45.078106 347.70316 45.078106 348.0469Q45.078106 348.62503 44.90623 348.93753Q44.71873 349.25003 44.421856 349.39066Q44.12498 349.51566 43.171856 349.51566L39.59373 349.51566L39.59373 350.28128L38.78123 350.28128L38.78123 349.51566L37.234356 349.51566L36.609356 348.46878L38.78123 348.46878L38.78123 347.40628L39.59373 347.40628L39.59373 348.46878L43.234356 348.46878Q43.68748 348.46878 43.81248 348.4219Q43.93748 348.3594 44.015606 348.2344Q44.09373 348.1094 44.09373 347.87503Q44.09373 347.68753 44.06248 347.40628ZM42.99998 342.45316L43.12498 341.3594Q44.078106 341.6094 44.609356 342.31253Q45.140606 343.01566 45.140606 344.09378Q45.140606 345.45316 44.296856 346.26566Q43.453106 347.06253 41.93748 347.06253Q40.37498 347.06253 39.515606 
346.25003Q38.640606 345.43753 38.640606 344.15628Q38.640606 342.90628 39.484356 342.12503Q40.328106 341.32816 41.87498 341.32816Q41.96873 341.32816 42.15623 341.32816L42.15623 345.96878Q43.18748 345.90628 43.734356 345.39066Q44.265606 344.87503 44.265606 344.09378Q44.265606 343.51566 43.96873 343.1094Q43.65623 342.68753 42.99998 342.45316ZM41.296856 345.90628L41.296856 342.43753Q40.49998 342.50003 40.109356 342.82816Q39.49998 343.34378 39.49998 344.14066Q39.49998 344.87503 39.984356 345.37503Q40.46873 345.8594 41.296856 345.90628ZM44.99998 339.70316L38.78123 339.70316L38.78123 338.76566L39.65623 338.76566Q38.640606 338.07816 38.640606 336.78128Q38.640606 336.21878 38.84373 335.75003Q39.046856 335.26566 39.37498 335.03128Q39.703106 334.7969 40.140606 334.70316Q40.43748 334.65628 41.171856 334.65628L44.99998 334.65628L44.99998 335.70316L41.21873 335.70316Q40.56248 335.70316 40.24998 335.82816Q39.93748 335.95316 39.74998 336.26566Q39.546856 336.57816 39.546856 337.00003Q39.546856 337.6719 39.984356 338.1719Q40.40623 338.65628 41.59373 338.65628L44.99998 338.65628L44.99998 339.70316ZM42.71873 328.64066L42.859356 327.6094Q43.921856 327.78128 44.53123 328.4844Q45.140606 329.18753 45.140606 330.20316Q45.140606 331.4844 44.31248 332.26566Q43.46873 333.03128 41.90623 333.03128Q40.90623 333.03128 40.15623 332.70316Q39.390606 332.3594 39.015606 331.68753Q38.640606 331.00003 38.640606 330.18753Q38.640606 329.18753 39.15623 328.5469Q39.65623 327.89066 40.609356 327.70316L40.765606 328.7344Q40.140606 328.87503 39.828106 329.25003Q39.49998 329.62503 39.49998 330.15628Q39.49998 330.95316 40.078106 331.45316Q40.640606 331.95316 41.87498 331.95316Q43.140606 331.95316 43.703106 331.46878Q44.265606 330.9844 44.265606 330.21878Q44.265606 329.59378 43.890606 329.18753Q43.515606 328.76566 42.71873 328.64066ZM47.390606 326.75003L46.40623 326.87503Q46.49998 326.53128 46.49998 326.26566Q46.49998 325.9219 46.37498 325.71878Q46.265606 325.50003 46.06248 325.3594Q45.890606 325.26566 45.265606 325.03128Q45.18748 325.01566 45.015606 324.93753L38.78123 327.31253L38.78123 326.1719L42.37498 324.87503Q43.06248 324.62503 43.828106 324.4219Q43.09373 324.2344 42.40623 323.9844L38.78123 322.65628L38.78123 321.6094L45.109356 323.96878Q46.12498 324.3594 46.515606 324.56253Q47.046856 324.84378 47.28123 325.21878Q47.53123 325.57816 47.53123 326.07816Q47.53123 326.37503 47.390606 326.75003ZM47.53123 316.68753Q46.421856 317.56253 44.953106 318.1719Q43.46873 318.76566 41.890606 318.76566Q40.49998 318.76566 39.21873 318.32816Q37.734356 317.7969 36.265606 316.68753L36.265606 315.93753Q37.484356 316.64066 37.99998 316.87503Q38.81248 317.2344 39.68748 317.43753Q40.78123 317.68753 41.890606 317.68753Q44.71873 317.68753 47.53123 315.93753L47.53123 316.68753ZM44.99998 314.70316L38.78123 314.70316L38.78123 313.76566L39.65623 313.76566Q39.18748 313.46878 38.921856 312.9844Q38.640606 312.50003 38.640606 311.87503Q38.640606 311.18753 38.921856 310.75003Q39.203106 310.2969 39.71873 310.12503Q38.640606 309.37503 38.640606 308.20316Q38.640606 307.26566 39.15623 306.78128Q39.65623 306.28128 40.734356 306.28128L44.99998 306.28128L44.99998 307.32816L41.078106 307.32816Q40.453106 307.32816 40.171856 307.43753Q39.890606 307.53128 39.71873 307.7969Q39.546856 308.06253 39.546856 308.43753Q39.546856 309.09378 39.984356 309.53128Q40.421856 309.95316 41.390606 309.95316L44.99998 309.95316L44.99998 311.01566L40.953106 311.01566Q40.24998 311.01566 39.90623 311.28128Q39.546856 311.53128 39.546856 312.1094Q39.546856 312.56253 39.78123 312.93753Q40.015606 
313.31253 40.46873 313.4844Q40.921856 313.65628 41.765606 313.65628L44.99998 313.65628L44.99998 314.70316ZM43.140606 304.12503L42.984356 303.09378Q43.609356 303.00003 43.93748 302.6094Q44.265606 302.20316 44.265606 301.46878Q44.265606 300.75003 43.984356 300.40628Q43.68748 300.0469 43.28123 300.0469Q42.921856 300.0469 42.71873 300.3594Q42.578106 300.57816 42.359356 301.43753Q42.06248 302.59378 41.859356 303.0469Q41.640606 303.4844 41.265606 303.71878Q40.890606 303.95316 40.421856 303.95316Q40.015606 303.95316 39.65623 303.76566Q39.296856 303.57816 39.06248 303.25003Q38.890606 303.00003 38.765606 302.57816Q38.640606 302.15628 38.640606 301.65628Q38.640606 300.93753 38.859356 300.39066Q39.06248 299.82816 39.421856 299.56253Q39.78123 299.2969 40.37498 299.20316L40.515606 300.2344Q40.046856 300.2969 39.78123 300.64066Q39.49998 300.96878 39.49998 301.59378Q39.49998 302.31253 39.74998 302.62503Q39.984356 302.93753 40.31248 302.93753Q40.515606 302.93753 40.671856 302.81253Q40.84373 302.6719 40.953106 302.40628Q41.015606 302.25003 41.21873 301.46878Q41.53123 300.34378 41.71873 299.90628Q41.90623 299.46878 42.265606 299.21878Q42.62498 298.96878 43.171856 298.96878Q43.703106 298.96878 44.171856 299.28128Q44.62498 299.57816 44.890606 300.15628Q45.140606 300.7344 45.140606 301.46878Q45.140606 302.68753 44.640606 303.32816Q44.12498 303.95316 43.140606 304.12503ZM47.53123 296.01566L47.53123 296.76566Q44.71873 295.01566 41.890606 295.01566Q40.796856 295.01566 39.703106 295.26566Q38.828106 295.46878 38.015606 295.82816Q37.49998 296.06253 36.265606 296.76566L36.265606 296.01566Q37.734356 294.9219 39.21873 294.39066Q40.49998 293.93753 41.890606 293.93753Q43.46873 293.93753 44.953106 294.5469Q46.421856 295.15628 47.53123 296.01566Z" fill-rule="nonzero"/><path fill="#000000" d="M80.5 557.09894Q80.5 555.5677 80.8125 554.6458Q81.125 553.7083 81.734375 553.2083Q82.359375 552.7083 83.296875 552.7083Q83.984375 552.7083 84.5 552.98956Q85.03125 553.2708 85.375 553.80206Q85.71875 554.3177 85.90625 555.0677Q86.09375 555.8177 86.09375 557.09894Q86.09375 558.61456 85.78125 559.53644Q85.484375 560.4583 84.859375 560.97394Q84.25 561.47394 83.296875 561.47394Q82.0625 561.47394 81.34375 560.5833Q80.5 559.5208 80.5 557.09894ZM81.578125 557.09894Q81.578125 559.2083 82.078125 559.91144Q82.578125 560.61456 83.296875 560.61456Q84.03125 560.61456 84.515625 559.91144Q85.015625 559.2083 85.015625 557.09894Q85.015625 554.97394 84.515625 554.28644Q84.03125 553.5833 83.28125 553.5833Q82.5625 553.5833 82.125 554.1927Q81.578125 554.97394 81.578125 557.09894Z" fill-rule="nonzero"/><path fill="#000000" d="M77.46875 467.4L76.421875 467.4L76.421875 460.68124Q76.03125 461.04062 75.40625 461.41562Q74.796875 461.775 74.3125 461.94687L74.3125 460.93124Q75.1875 460.50937 75.84375 459.93124Q76.515625 459.3375 76.796875 458.775L77.46875 458.775L77.46875 467.4ZM80.5 463.16562Q80.5 461.63437 80.8125 460.7125Q81.125 459.775 81.734375 459.275Q82.359375 458.775 83.296875 458.775Q83.984375 458.775 84.5 459.05624Q85.03125 459.3375 85.375 459.86874Q85.71875 460.38437 85.90625 461.13437Q86.09375 461.88437 86.09375 463.16562Q86.09375 464.68124 85.78125 465.60312Q85.484375 466.525 84.859375 467.04062Q84.25 467.54062 83.296875 467.54062Q82.0625 467.54062 81.34375 466.65Q80.5 465.5875 80.5 463.16562ZM81.578125 463.16562Q81.578125 465.275 82.078125 465.97812Q82.578125 466.68124 83.296875 466.68124Q84.03125 466.68124 84.515625 465.97812Q85.015625 465.275 85.015625 463.16562Q85.015625 461.04062 84.515625 460.35312Q84.03125 459.65 83.28125 459.65Q82.5625 459.65 
82.125 460.25937Q81.578125 461.04062 81.578125 463.16562Z" fill-rule="nonzero"/><path fill="#000000" d="M79.046875 372.45105L79.046875 373.46667L73.359375 373.46667Q73.359375 373.09167 73.484375 372.7323Q73.703125 372.15417 74.171875 371.59167Q74.65625 371.02917 75.5625 370.2948Q76.96875 369.13855 77.453125 368.46667Q77.953125 367.7948 77.953125 367.20105Q77.953125 366.57605 77.5 366.15417Q77.046875 365.71667 76.328125 365.71667Q75.5625 365.71667 75.109375 366.1698Q74.65625 366.62292 74.640625 367.43542L73.5625 367.32605Q73.671875 366.1073 74.390625 365.4823Q75.125 364.84167 76.359375 364.84167Q77.59375 364.84167 78.3125 365.52917Q79.03125 366.21667 79.03125 367.2323Q79.03125 367.74792 78.8125 368.24792Q78.609375 368.7323 78.109375 369.2948Q77.625 369.84167 76.5 370.7948Q75.546875 371.59167 75.265625 371.88855Q75.0 372.1698 74.828125 372.45105L79.046875 372.45105ZM80.5 369.2323Q80.5 367.70105 80.8125 366.77917Q81.125 365.84167 81.734375 365.34167Q82.359375 364.84167 83.296875 364.84167Q83.984375 364.84167 84.5 365.12292Q85.03125 365.40417 85.375 365.93542Q85.71875 366.45105 85.90625 367.20105Q86.09375 367.95105 86.09375 369.2323Q86.09375 370.74792 85.78125 371.6698Q85.484375 372.59167 84.859375 373.1073Q84.25 373.6073 83.296875 373.6073Q82.0625 373.6073 81.34375 372.71667Q80.5 371.65417 80.5 369.2323ZM81.578125 369.2323Q81.578125 371.34167 82.078125 372.0448Q82.578125 372.74792 83.296875 372.74792Q84.03125 372.74792 84.515625 372.0448Q85.015625 371.34167 85.015625 369.2323Q85.015625 367.1073 84.515625 366.4198Q84.03125 365.71667 83.28125 365.71667Q82.5625 365.71667 82.125 366.32605Q81.578125 367.1073 81.578125 369.2323Z" fill-rule="nonzero"/><path fill="#000000" d="M73.5 277.2677L74.5625 277.12708Q74.734375 278.0177 75.171875 278.42395Q75.609375 278.81458 76.234375 278.81458Q76.984375 278.81458 77.5 278.29895Q78.015625 277.78333 78.015625 277.0177Q78.015625 276.29895 77.53125 275.8302Q77.0625 275.34583 76.328125 275.34583Q76.03125 275.34583 75.578125 275.47083L75.703125 274.53333Q75.8125 274.54895 75.875 274.54895Q76.546875 274.54895 77.078125 274.2052Q77.625 273.84583 77.625 273.11145Q77.625 272.53333 77.234375 272.15833Q76.84375 271.7677 76.21875 271.7677Q75.609375 271.7677 75.1875 272.15833Q74.78125 272.54895 74.671875 273.31458L73.609375 273.12708Q73.796875 272.0802 74.484375 271.50208Q75.171875 270.90833 76.1875 270.90833Q76.890625 270.90833 77.484375 271.22083Q78.078125 271.5177 78.390625 272.03333Q78.703125 272.54895 78.703125 273.1427Q78.703125 273.68958 78.40625 274.15833Q78.109375 274.61145 77.53125 274.87708Q78.28125 275.04895 78.703125 275.61145Q79.125 276.15833 79.125 277.00208Q79.125 278.12708 78.296875 278.90833Q77.484375 279.68958 76.234375 279.68958Q75.109375 279.68958 74.359375 279.0177Q73.609375 278.3302 73.5 277.2677ZM80.5 275.29895Q80.5 273.7677 80.8125 272.84583Q81.125 271.90833 81.734375 271.40833Q82.359375 270.90833 83.296875 270.90833Q83.984375 270.90833 84.5 271.18958Q85.03125 271.47083 85.375 272.00208Q85.71875 272.5177 85.90625 273.2677Q86.09375 274.0177 86.09375 275.29895Q86.09375 276.81458 85.78125 277.73645Q85.484375 278.65833 84.859375 279.17395Q84.25 279.67395 83.296875 279.67395Q82.0625 279.67395 81.34375 278.78333Q80.5 277.72083 80.5 275.29895ZM81.578125 275.29895Q81.578125 277.40833 82.078125 278.11145Q82.578125 278.81458 83.296875 278.81458Q84.03125 278.81458 84.515625 278.11145Q85.015625 277.40833 85.015625 275.29895Q85.015625 273.17395 84.515625 272.48645Q84.03125 271.78333 83.28125 271.78333Q82.5625 271.78333 82.125 272.3927Q81.578125 273.17395 
81.578125 275.29895Z" fill-rule="nonzero"/><path fill="#000000" d="M76.875 185.6L76.875 183.5375L73.15625 183.5375L73.15625 182.56876L77.078125 177.00626L77.9375 177.00626L77.9375 182.56876L79.09375 182.56876L79.09375 183.5375L77.9375 183.5375L77.9375 185.6L76.875 185.6ZM76.875 182.56876L76.875 178.70938L74.1875 182.56876L76.875 182.56876ZM80.5 181.36563Q80.5 179.83438 80.8125 178.9125Q81.125 177.975 81.734375 177.475Q82.359375 176.975 83.296875 176.975Q83.984375 176.975 84.5 177.25626Q85.03125 177.5375 85.375 178.06876Q85.71875 178.58438 85.90625 179.33438Q86.09375 180.08438 86.09375 181.36563Q86.09375 182.88126 85.78125 183.80313Q85.484375 184.725 84.859375 185.24063Q84.25 185.74063 83.296875 185.74063Q82.0625 185.74063 81.34375 184.85Q80.5 183.7875 80.5 181.36563ZM81.578125 181.36563Q81.578125 183.475 82.078125 184.17813Q82.578125 184.88126 83.296875 184.88126Q84.03125 184.88126 84.515625 184.17813Q85.015625 183.475 85.015625 181.36563Q85.015625 179.24063 84.515625 178.55313Q84.03125 177.85 83.28125 177.85Q82.5625 177.85 82.125 178.45938Q81.578125 179.24063 81.578125 181.36563Z" fill-rule="nonzero"/><path fill="#000000" d="M73.5 89.416664L74.609375 89.322914Q74.734375 90.135414 75.171875 90.541664Q75.625 90.947914 76.265625 90.947914Q77.015625 90.947914 77.546875 90.36979Q78.078125 89.791664 78.078125 88.854164Q78.078125 87.947914 77.5625 87.43229Q77.0625 86.90104 76.234375 86.90104Q75.734375 86.90104 75.3125 87.135414Q74.90625 87.36979 74.671875 87.729164L73.6875 87.604164L74.515625 83.197914L78.796875 83.197914L78.796875 84.197914L75.359375 84.197914L74.90625 86.510414Q75.671875 85.96354 76.515625 85.96354Q77.640625 85.96354 78.40625 86.74479Q79.1875 87.52604 79.1875 88.760414Q79.1875 89.916664 78.515625 90.77604Q77.6875 91.80729 76.265625 91.80729Q75.09375 91.80729 74.34375 91.15104Q73.609375 90.49479 73.5 89.416664ZM80.5 87.43229Q80.5 85.90104 80.8125 84.979164Q81.125 84.041664 81.734375 83.541664Q82.359375 83.041664 83.296875 83.041664Q83.984375 83.041664 84.5 83.322914Q85.03125 83.604164 85.375 84.135414Q85.71875 84.65104 85.90625 85.40104Q86.09375 86.15104 86.09375 87.43229Q86.09375 88.947914 85.78125 89.86979Q85.484375 90.791664 84.859375 91.30729Q84.25 91.80729 83.296875 91.80729Q82.0625 91.80729 81.34375 90.916664Q80.5 89.854164 80.5 87.43229ZM81.578125 87.43229Q81.578125 89.541664 82.078125 90.24479Q82.578125 90.947914 83.296875 90.947914Q84.03125 90.947914 84.515625 90.24479Q85.015625 89.541664 85.015625 87.43229Q85.015625 85.30729 84.515625 84.61979Q84.03125 83.916664 83.28125 83.916664Q82.5625 83.916664 82.125 84.52604Q81.578125 85.30729 81.578125 87.43229Z" fill-rule="nonzero"/><path fill="#000000" d="M214.80856 573.3177L214.80856 574.3333L209.12106 574.3333Q209.12106 573.9583 209.24606 573.59894Q209.46481 573.0208 209.93356 572.4583Q210.41794 571.8958 211.32419 571.16144Q212.73044 570.0052 213.21481 569.3333Q213.71481 568.66144 213.71481 568.0677Q213.71481 567.4427 213.26169 567.0208Q212.80856 566.5833 212.08981 566.5833Q211.32419 566.5833 210.87106 567.03644Q210.41794 567.48956 210.40231 568.30206L209.32419 568.1927Q209.43356 566.97394 210.15231 566.34894Q210.88669 565.7083 212.12106 565.7083Q213.35544 565.7083 214.07419 566.3958Q214.79294 567.0833 214.79294 568.09894Q214.79294 568.61456 214.57419 569.11456Q214.37106 569.59894 213.87106 570.16144Q213.38669 570.7083 212.26169 571.66144Q211.30856 572.4583 211.02731 572.7552Q210.76169 573.03644 210.58981 573.3177L214.80856 573.3177ZM216.26169 570.09894Q216.26169 568.5677 216.57419 567.6458Q216.88669 566.7083 217.49606 
566.2083Q218.12106 565.7083 219.05856 565.7083Q219.74606 565.7083 220.26169 565.98956Q220.79294 566.2708 221.13669 566.80206Q221.48044 567.3177 221.66794 568.0677Q221.85544 568.8177 221.85544 570.09894Q221.85544 571.61456 221.54294 572.53644Q221.24606 573.4583 220.62106 573.97394Q220.01169 574.47394 219.05856 574.47394Q217.82419 574.47394 217.10544 573.5833Q216.26169 572.5208 216.26169 570.09894ZM217.33981 570.09894Q217.33981 572.2083 217.83981 572.91144Q218.33981 573.61456 219.05856 573.61456Q219.79294 573.61456 220.27731 572.91144Q220.77731 572.2083 220.77731 570.09894Q220.77731 567.97394 220.27731 567.28644Q219.79294 566.5833 219.04294 566.5833Q218.32419 566.5833 217.88669 567.1927Q217.33981 567.97394 217.33981 570.09894Z" fill-rule="nonzero"/><path fill="#000000" d="M391.13083 572.0677L392.19333 571.92706Q392.3652 572.8177 392.8027 573.22394Q393.2402 573.61456 393.8652 573.61456Q394.6152 573.61456 395.13083 573.09894Q395.64645 572.5833 395.64645 571.8177Q395.64645 571.09894 395.16208 570.6302Q394.69333 570.1458 393.95895 570.1458Q393.66208 570.1458 393.20895 570.2708L393.33395 569.3333Q393.44333 569.34894 393.50583 569.34894Q394.1777 569.34894 394.70895 569.0052Q395.25583 568.6458 395.25583 567.91144Q395.25583 567.3333 394.8652 566.9583Q394.47458 566.5677 393.84958 566.5677Q393.2402 566.5677 392.81833 566.9583Q392.41208 567.34894 392.3027 568.11456L391.2402 567.92706Q391.4277 566.8802 392.1152 566.30206Q392.8027 565.7083 393.81833 565.7083Q394.52145 565.7083 395.1152 566.0208Q395.70895 566.3177 396.02145 566.8333Q396.33395 567.34894 396.33395 567.9427Q396.33395 568.48956 396.03708 568.9583Q395.7402 569.41144 395.16208 569.67706Q395.91208 569.84894 396.33395 570.41144Q396.75583 570.9583 396.75583 571.80206Q396.75583 572.92706 395.9277 573.7083Q395.1152 574.48956 393.8652 574.48956Q392.7402 574.48956 391.9902 573.8177Q391.2402 573.1302 391.13083 572.0677ZM403.6777 573.3177L403.6777 574.3333L397.9902 574.3333Q397.9902 573.9583 398.1152 573.59894Q398.33395 573.0208 398.8027 572.4583Q399.28708 571.8958 400.19333 571.16144Q401.59958 570.0052 402.08395 569.3333Q402.58395 568.66144 402.58395 568.0677Q402.58395 567.4427 402.13083 567.0208Q401.6777 566.5833 400.95895 566.5833Q400.19333 566.5833 399.7402 567.03644Q399.28708 567.48956 399.27145 568.30206L398.19333 568.1927Q398.3027 566.97394 399.02145 566.34894Q399.75583 565.7083 400.9902 565.7083Q402.22458 565.7083 402.94333 566.3958Q403.66208 567.0833 403.66208 568.09894Q403.66208 568.61456 403.44333 569.11456Q403.2402 569.59894 402.7402 570.16144Q402.25583 570.7083 401.13083 571.66144Q400.1777 572.4583 399.89645 572.7552Q399.63083 573.03644 399.45895 573.3177L403.6777 573.3177Z" fill-rule="nonzero"/><path fill="#000000" d="M573.46875 574.3333L572.4219 574.3333L572.4219 567.61456Q572.03125 567.97394 571.40625 568.34894Q570.7969 568.7083 570.3125 568.8802L570.3125 567.86456Q571.1875 567.4427 571.84375 566.86456Q572.5156 566.2708 572.7969 565.7083L573.46875 565.7083L573.46875 574.3333ZM582.0469 573.3177L582.0469 574.3333L576.3594 574.3333Q576.3594 573.9583 576.4844 573.59894Q576.7031 573.0208 577.1719 572.4583Q577.65625 571.8958 578.5625 571.16144Q579.96875 570.0052 580.4531 569.3333Q580.9531 568.66144 580.9531 568.0677Q580.9531 567.4427 580.5 567.0208Q580.0469 566.5833 579.3281 566.5833Q578.5625 566.5833 578.1094 567.03644Q577.65625 567.48956 577.6406 568.30206L576.5625 568.1927Q576.6719 566.97394 577.3906 566.34894Q578.125 565.7083 579.3594 565.7083Q580.59375 565.7083 581.3125 566.3958Q582.03125 567.0833 582.03125 568.09894Q582.03125 568.61456 
581.8125 569.11456Q581.6094 569.59894 581.1094 570.16144Q580.625 570.7083 579.5 571.66144Q578.5469 572.4583 578.2656 572.7552Q578.0 573.03644 577.8281 573.3177L582.0469 573.3177ZM585.125 569.67706Q584.46875 569.42706 584.1406 568.98956Q583.8281 568.53644 583.8281 567.92706Q583.8281 566.98956 584.5 566.34894Q585.1875 565.7083 586.2969 565.7083Q587.4219 565.7083 588.1094 566.36456Q588.7969 567.0208 588.7969 567.9583Q588.7969 568.55206 588.4844 568.98956Q588.1719 569.42706 587.53125 569.67706Q588.3281 569.92706 588.7344 570.5052Q589.1406 571.0833 589.1406 571.8802Q589.1406 572.97394 588.3594 573.72394Q587.59375 574.47394 586.3125 574.47394Q585.0469 574.47394 584.2656 573.72394Q583.4844 572.97394 583.4844 571.84894Q583.4844 571.0052 583.90625 570.4427Q584.34375 569.8802 585.125 569.67706ZM584.90625 567.8802Q584.90625 568.48956 585.2969 568.8802Q585.7031 569.2708 586.3281 569.2708Q586.9375 569.2708 587.3281 568.8958Q587.71875 568.5052 587.71875 567.9427Q587.71875 567.36456 587.3125 566.97394Q586.90625 566.5677 586.3125 566.5677Q585.7031 566.5677 585.2969 566.9583Q584.90625 567.34894 584.90625 567.8802ZM584.5781 571.84894Q584.5781 572.30206 584.78125 572.72394Q585.0 573.1458 585.4219 573.3802Q585.84375 573.61456 586.3281 573.61456Q587.0781 573.61456 587.5625 573.1302Q588.0625 572.6458 588.0625 571.8958Q588.0625 571.1302 587.5625 570.6302Q587.0625 570.1302 586.2969 570.1302Q585.5469 570.1302 585.0625 570.6302Q584.5781 571.11456 584.5781 571.84894Z" fill-rule="nonzero"/><path fill="#000000" d="M751.36914 572.0677L752.43164 571.92706Q752.6035 572.8177 753.041 573.22394Q753.4785 573.61456 754.1035 573.61456Q754.8535 573.61456 755.36914 573.09894Q755.88477 572.5833 755.88477 571.8177Q755.88477 571.09894 755.4004 570.6302Q754.93164 570.1458 754.19727 570.1458Q753.9004 570.1458 753.44727 570.2708L753.57227 569.3333Q753.68164 569.34894 753.74414 569.34894Q754.416 569.34894 754.94727 569.0052Q755.49414 568.6458 755.49414 567.91144Q755.49414 567.3333 755.1035 566.9583Q754.7129 566.5677 754.0879 566.5677Q753.4785 566.5677 753.05664 566.9583Q752.6504 567.34894 752.541 568.11456L751.4785 567.92706Q751.666 566.8802 752.3535 566.30206Q753.041 565.7083 754.05664 565.7083Q754.75977 565.7083 755.3535 566.0208Q755.94727 566.3177 756.25977 566.8333Q756.57227 567.34894 756.57227 567.9427Q756.57227 568.48956 756.2754 568.9583Q755.9785 569.41144 755.4004 569.67706Q756.1504 569.84894 756.57227 570.41144Q756.99414 570.9583 756.99414 571.80206Q756.99414 572.92706 756.166 573.7083Q755.3535 574.48956 754.1035 574.48956Q752.9785 574.48956 752.2285 573.8177Q751.4785 573.1302 751.36914 572.0677ZM759.99414 569.67706Q759.3379 569.42706 759.00977 568.98956Q758.69727 568.53644 758.69727 567.92706Q758.69727 566.98956 759.36914 566.34894Q760.05664 565.7083 761.166 565.7083Q762.291 565.7083 762.9785 566.36456Q763.666 567.0208 763.666 567.9583Q763.666 568.55206 763.3535 568.98956Q763.041 569.42706 762.4004 569.67706Q763.19727 569.92706 763.6035 570.5052Q764.00977 571.0833 764.00977 571.8802Q764.00977 572.97394 763.2285 573.72394Q762.4629 574.47394 761.18164 574.47394Q759.916 574.47394 759.13477 573.72394Q758.3535 572.97394 758.3535 571.84894Q758.3535 571.0052 758.7754 570.4427Q759.2129 569.8802 759.99414 569.67706ZM759.7754 567.8802Q759.7754 568.48956 760.166 568.8802Q760.57227 569.2708 761.19727 569.2708Q761.80664 569.2708 762.19727 568.8958Q762.5879 568.5052 762.5879 567.9427Q762.5879 567.36456 762.18164 566.97394Q761.7754 566.5677 761.18164 566.5677Q760.57227 566.5677 760.166 566.9583Q759.7754 567.34894 759.7754 
567.8802ZM759.44727 571.84894Q759.44727 572.30206 759.6504 572.72394Q759.86914 573.1458 760.291 573.3802Q760.7129 573.61456 761.19727 573.61456Q761.94727 573.61456 762.43164 573.1302Q762.93164 572.6458 762.93164 571.8958Q762.93164 571.1302 762.43164 570.6302Q761.93164 570.1302 761.166 570.1302Q760.416 570.1302 759.93164 570.6302Q759.44727 571.11456 759.44727 571.84894ZM768.74414 574.3333L768.74414 572.2708L765.0254 572.2708L765.0254 571.30206L768.94727 565.73956L769.80664 565.73956L769.80664 571.30206L770.9629 571.30206L770.9629 572.2708L769.80664 572.2708L769.80664 574.3333L768.74414 574.3333ZM768.74414 571.30206L768.74414 567.4427L766.05664 571.30206L768.74414 571.30206Z" fill-rule="nonzero"/><path fill="#000000" d="M933.23834 572.0833L934.3477 571.98956Q934.4727 572.80206 934.9102 573.2083Q935.36334 573.61456 936.00397 573.61456Q936.75397 573.61456 937.2852 573.03644Q937.81647 572.4583 937.81647 571.5208Q937.81647 570.61456 937.30084 570.09894Q936.80084 569.5677 935.9727 569.5677Q935.4727 569.5677 935.05084 569.80206Q934.6446 570.03644 934.4102 570.3958L933.42584 570.2708L934.25397 565.86456L938.5352 565.86456L938.5352 566.86456L935.0977 566.86456L934.6446 569.17706Q935.4102 568.6302 936.25397 568.6302Q937.37897 568.6302 938.1446 569.41144Q938.92584 570.1927 938.92584 571.42706Q938.92584 572.5833 938.25397 573.4427Q937.42584 574.47394 936.00397 574.47394Q934.8321 574.47394 934.0821 573.8177Q933.3477 573.16144 933.23834 572.0833ZM944.2071 574.3333L943.1602 574.3333L943.1602 567.61456Q942.7696 567.97394 942.1446 568.34894Q941.5352 568.7083 941.05084 568.8802L941.05084 567.86456Q941.92584 567.4427 942.5821 566.86456Q943.25397 566.2708 943.5352 565.7083L944.2071 565.7083L944.2071 574.3333ZM952.7852 573.3177L952.7852 574.3333L947.0977 574.3333Q947.0977 573.9583 947.2227 573.59894Q947.44147 573.0208 947.9102 572.4583Q948.3946 571.8958 949.30084 571.16144Q950.7071 570.0052 951.19147 569.3333Q951.69147 568.66144 951.69147 568.0677Q951.69147 567.4427 951.23834 567.0208Q950.7852 566.5833 950.06647 566.5833Q949.30084 566.5833 948.8477 567.03644Q948.3946 567.48956 948.37897 568.30206L947.30084 568.1927Q947.4102 566.97394 948.12897 566.34894Q948.86334 565.7083 950.0977 565.7083Q951.3321 565.7083 952.05084 566.3958Q952.7696 567.0833 952.7696 568.09894Q952.7696 568.61456 952.55084 569.11456Q952.3477 569.59894 951.8477 570.16144Q951.36334 570.7083 950.23834 571.66144Q949.2852 572.4583 949.00397 572.7552Q948.73834 573.03644 948.56647 573.3177L952.7852 573.3177Z" fill-rule="nonzero"/><path fill="#4285f4" d="M406.5 634.0C406.5 633.4477 406.94772 633.0 407.5 633.0L417.5 633.0C418.05228 633.0 418.5 633.4477 418.5 634.0L418.5 644.0C418.5 644.5523 418.05228 645.0 417.5 645.0L407.5 645.0C406.94772 645.0 406.5 644.5523 406.5 644.0Z" fill-rule="nonzero"/><path fill="#1a1a1a" d="M426.48438 643.0L429.78125 634.40625L431.0 634.40625L434.51562 643.0L433.23438 643.0L432.21875 640.3906L428.64062 640.3906L427.6875 643.0L426.48438 643.0ZM428.96875 639.46875L431.875 639.46875L430.98438 637.09375Q430.5625 636.0156 430.375 635.3125Q430.20312 636.1406 429.90625 636.9531L428.96875 639.46875ZM435.26562 643.0L434.28125 643.0L434.28125 634.40625L435.34375 634.40625L435.34375 637.46875Q436.01562 636.6406 437.04688 636.6406Q437.625 636.6406 438.125 636.875Q438.64062 637.09375 438.96875 637.5156Q439.3125 637.9375 439.5 638.53125Q439.6875 639.125 439.6875 639.7969Q439.6875 641.3906 438.89062 642.2656Q438.09375 643.1406 437.0 643.1406Q435.89062 643.1406 435.26562 642.21875L435.26562 643.0ZM435.25 639.84375Q435.25 640.9531 435.5625 
641.4531Q436.0625 642.2656 436.90625 642.2656Q437.59375 642.2656 438.09375 641.6719Q438.60938 641.0781 438.60938 639.875Q438.60938 638.65625 438.125 638.0781Q437.64062 637.5 436.95312 637.5Q436.26562 637.5 435.75 638.1094Q435.25 638.7031 435.25 639.84375ZM440.875 641.1406L441.90625 640.9844Q442.0 641.6094 442.39062 641.9375Q442.79688 642.2656 443.53125 642.2656Q444.25 642.2656 444.59375 641.9844Q444.95312 641.6875 444.95312 641.28125Q444.95312 640.9219 444.64062 640.71875Q444.42188 640.5781 443.5625 640.3594Q442.40625 640.0625 441.95312 639.8594Q441.51562 639.6406 441.28125 639.2656Q441.04688 638.8906 441.04688 638.4219Q441.04688 638.0156 441.23438 637.65625Q441.42188 637.2969 441.75 637.0625Q442.0 636.8906 442.42188 636.7656Q442.84375 636.6406 443.34375 636.6406Q444.0625 636.6406 444.60938 636.8594Q445.17188 637.0625 445.4375 637.4219Q445.70312 637.78125 445.79688 638.375L444.76562 638.5156Q444.70312 638.0469 444.35938 637.78125Q444.03125 637.5 443.40625 637.5Q442.6875 637.5 442.375 637.75Q442.0625 637.9844 442.0625 638.3125Q442.0625 638.5156 442.1875 638.6719Q442.32812 638.84375 442.59375 638.9531Q442.75 639.0156 443.53125 639.21875Q444.65625 639.53125 445.09375 639.71875Q445.53125 639.90625 445.78125 640.2656Q446.03125 640.625 446.03125 641.1719Q446.03125 641.7031 445.71875 642.1719Q445.42188 642.625 444.84375 642.8906Q444.26562 643.1406 443.53125 643.1406Q442.3125 643.1406 441.67188 642.6406Q441.04688 642.125 440.875 641.1406ZM447.90625 639.8906Q447.90625 638.15625 448.85938 637.3281Q449.65625 636.6406 450.8125 636.6406Q452.09375 636.6406 452.90625 637.4844Q453.73438 638.3125 453.73438 639.7969Q453.73438 641.0 453.375 641.6875Q453.01562 642.375 452.3125 642.7656Q451.625 643.1406 450.8125 643.1406Q449.51562 643.1406 448.70312 642.3125Q447.90625 641.46875 447.90625 639.8906ZM448.98438 639.8906Q448.98438 641.0781 449.5 641.6719Q450.03125 642.2656 450.8125 642.2656Q451.60938 642.2656 452.125 641.6719Q452.64062 641.0781 452.64062 639.8594Q452.64062 638.7031 452.10938 638.1094Q451.59375 637.5156 450.8125 637.5156Q450.03125 637.5156 449.5 638.1094Q448.98438 638.6875 448.98438 639.8906ZM455.26562 643.0L455.26562 634.40625L456.32812 634.40625L456.32812 643.0L455.26562 643.0ZM462.375 643.0L462.375 642.0781Q461.64062 643.1406 460.39062 643.1406Q459.84375 643.1406 459.35938 642.9375Q458.89062 642.71875 458.65625 642.40625Q458.42188 642.0781 458.32812 641.6094Q458.26562 641.3125 458.26562 640.625L458.26562 636.78125L459.32812 636.78125L459.32812 640.2344Q459.32812 641.0469 459.39062 641.34375Q459.48438 641.75 459.79688 642.0Q460.125 642.2344 460.60938 642.2344Q461.07812 642.2344 461.48438 642.0Q461.90625 641.75 462.07812 641.3281Q462.26562 640.90625 462.26562 640.1094L462.26562 636.78125L463.3125 636.78125L463.3125 643.0L462.375 643.0ZM467.59375 642.0625L467.75 642.9844Q467.29688 643.0781 466.95312 643.0781Q466.375 643.0781 466.0625 642.90625Q465.75 642.71875 465.60938 642.4219Q465.48438 642.125 465.48438 641.1719L465.48438 637.59375L464.71875 637.59375L464.71875 636.78125L465.48438 636.78125L465.48438 635.2344L466.53125 634.6094L466.53125 636.78125L467.59375 636.78125L467.59375 637.59375L466.53125 637.59375L466.53125 641.2344Q466.53125 641.6875 466.57812 641.8125Q466.64062 641.9375 466.76562 642.0156Q466.89062 642.09375 467.125 642.09375Q467.3125 642.09375 467.59375 642.0625ZM472.54688 641.0L473.64062 641.125Q473.39062 642.0781 472.6875 642.6094Q471.98438 643.1406 470.90625 643.1406Q469.54688 643.1406 468.73438 642.2969Q467.9375 641.4531 467.9375 639.9375Q467.9375 638.375 468.75 637.5156Q469.5625 
636.6406 470.84375 636.6406Q472.09375 636.6406 472.875 637.4844Q473.67188 638.3281 473.67188 639.875Q473.67188 639.96875 473.67188 640.15625L469.03125 640.15625Q469.09375 641.1875 469.60938 641.7344Q470.125 642.2656 470.90625 642.2656Q471.48438 642.2656 471.89062 641.96875Q472.3125 641.65625 472.54688 641.0ZM469.09375 639.2969L472.5625 639.2969Q472.5 638.5 472.17188 638.1094Q471.65625 637.5 470.85938 637.5Q470.125 637.5 469.625 637.9844Q469.14062 638.46875 469.09375 639.2969ZM478.375 643.0L478.375 634.40625L481.60938 634.40625Q482.59375 634.40625 483.1875 634.6719Q483.78125 634.9375 484.10938 635.4844Q484.45312 636.0156 484.45312 636.6094Q484.45312 637.15625 484.15625 637.6406Q483.85938 638.125 483.25 638.4219Q484.03125 638.65625 484.4375 639.21875Q484.85938 639.7656 484.85938 640.5156Q484.85938 641.1094 484.60938 641.6406Q484.35938 642.15625 483.98438 642.4375Q483.60938 642.71875 483.03125 642.8594Q482.46875 643.0 481.65625 643.0L478.375 643.0ZM479.51562 638.0156L481.375 638.0156Q482.125 638.0156 482.45312 637.9219Q482.89062 637.7969 483.10938 637.5Q483.32812 637.1875 483.32812 636.7344Q483.32812 636.3125 483.125 635.9844Q482.92188 635.65625 482.53125 635.5469Q482.15625 635.4219 481.23438 635.4219L479.51562 635.4219L479.51562 638.0156ZM479.51562 641.9844L481.65625 641.9844Q482.20312 641.9844 482.42188 641.9375Q482.82812 641.875 483.07812 641.71875Q483.34375 641.5469 483.51562 641.2344Q483.6875 640.9219 483.6875 640.5156Q483.6875 640.03125 483.4375 639.6719Q483.20312 639.3125 482.76562 639.1719Q482.32812 639.03125 481.5 639.03125L479.51562 639.03125L479.51562 641.9844ZM490.54688 641.0L491.64062 641.125Q491.39062 642.0781 490.6875 642.6094Q489.98438 643.1406 488.90625 643.1406Q487.54688 643.1406 486.73438 642.2969Q485.9375 641.4531 485.9375 639.9375Q485.9375 638.375 486.75 637.5156Q487.5625 636.6406 488.84375 636.6406Q490.09375 636.6406 490.875 637.4844Q491.67188 638.3281 491.67188 639.875Q491.67188 639.96875 491.67188 640.15625L487.03125 640.15625Q487.09375 641.1875 487.60938 641.7344Q488.125 642.2656 488.90625 642.2656Q489.48438 642.2656 489.89062 641.96875Q490.3125 641.65625 490.54688 641.0ZM487.09375 639.2969L490.5625 639.2969Q490.5 638.5 490.17188 638.1094Q489.65625 637.5 488.85938 637.5Q488.125 637.5 487.625 637.9844Q487.14062 638.46875 487.09375 639.2969ZM492.875 641.1406L493.90625 640.9844Q494.0 641.6094 494.39062 641.9375Q494.79688 642.2656 495.53125 642.2656Q496.25 642.2656 496.59375 641.9844Q496.95312 641.6875 496.95312 641.28125Q496.95312 640.9219 496.64062 640.71875Q496.42188 640.5781 495.5625 640.3594Q494.40625 640.0625 493.95312 639.8594Q493.51562 639.6406 493.28125 639.2656Q493.04688 638.8906 493.04688 638.4219Q493.04688 638.0156 493.23438 637.65625Q493.42188 637.2969 493.75 637.0625Q494.0 636.8906 494.42188 636.7656Q494.84375 636.6406 495.34375 636.6406Q496.0625 636.6406 496.60938 636.8594Q497.17188 637.0625 497.4375 637.4219Q497.70312 637.78125 497.79688 638.375L496.76562 638.5156Q496.70312 638.0469 496.35938 637.78125Q496.03125 637.5 495.40625 637.5Q494.6875 637.5 494.375 637.75Q494.0625 637.9844 494.0625 638.3125Q494.0625 638.5156 494.1875 638.6719Q494.32812 638.84375 494.59375 638.9531Q494.75 639.0156 495.53125 639.21875Q496.65625 639.53125 497.09375 639.71875Q497.53125 639.90625 497.78125 640.2656Q498.03125 640.625 498.03125 641.1719Q498.03125 641.7031 497.71875 642.1719Q497.42188 642.625 496.84375 642.8906Q496.26562 643.1406 495.53125 643.1406Q494.3125 643.1406 493.67188 642.6406Q493.04688 642.125 492.875 641.1406ZM502.59375 642.0625L502.75 642.9844Q502.29688 643.0781 
501.95312 643.0781Q501.375 643.0781 501.0625 642.90625Q500.75 642.71875 500.60938 642.4219Q500.48438 642.125 500.48438 641.1719L500.48438 637.59375L499.71875 637.59375L499.71875 636.78125L500.48438 636.78125L500.48438 635.2344L501.53125 634.6094L501.53125 636.78125L502.59375 636.78125L502.59375 637.59375L501.53125 637.59375L501.53125 641.2344Q501.53125 641.6875 501.57812 641.8125Q501.64062 641.9375 501.76562 642.0156Q501.89062 642.09375 502.125 642.09375Q502.3125 642.09375 502.59375 642.0625ZM506.375 643.0L506.375 634.40625L507.51562 634.40625L507.51562 641.9844L511.75 641.9844L511.75 643.0L506.375 643.0ZM517.3594 642.2344Q516.7656 642.7344 516.21875 642.9375Q515.6875 643.1406 515.0625 643.1406Q514.03125 643.1406 513.4844 642.6406Q512.9375 642.1406 512.9375 641.3594Q512.9375 640.90625 513.1406 640.53125Q513.34375 640.1406 513.6875 639.9219Q514.03125 639.6875 514.4531 639.5781Q514.75 639.4844 515.3906 639.40625Q516.65625 639.2656 517.2656 639.0469Q517.2656 638.8281 517.2656 638.78125Q517.2656 638.125 516.96875 637.8594Q516.5625 637.5156 515.7656 637.5156Q515.03125 637.5156 514.6719 637.78125Q514.3125 638.03125 514.1406 638.6875L513.1094 638.5469Q513.25 637.8906 513.5781 637.4844Q513.90625 637.0781 514.5156 636.8594Q515.125 636.6406 515.9219 636.6406Q516.71875 636.6406 517.21875 636.8281Q517.71875 637.0156 517.9531 637.2969Q518.1875 637.5781 518.28125 638.0156Q518.3281 638.28125 518.3281 638.9844L518.3281 640.3906Q518.3281 641.8594 518.3906 642.25Q518.46875 642.6406 518.6719 643.0L517.5625 643.0Q517.40625 642.6719 517.3594 642.2344ZM517.2656 639.875Q516.6875 640.1094 515.5469 640.28125Q514.8906 640.375 514.625 640.5Q514.3594 640.6094 514.2031 640.8281Q514.0625 641.0469 514.0625 641.3281Q514.0625 641.75 514.375 642.03125Q514.7031 642.3125 515.3125 642.3125Q515.9219 642.3125 516.3906 642.0469Q516.875 641.78125 517.09375 641.3125Q517.2656 640.9531 517.2656 640.2656L517.2656 639.875ZM522.59375 642.0625L522.75 642.9844Q522.2969 643.0781 521.9531 643.0781Q521.375 643.0781 521.0625 642.90625Q520.75 642.71875 520.6094 642.4219Q520.4844 642.125 520.4844 641.1719L520.4844 637.59375L519.71875 637.59375L519.71875 636.78125L520.4844 636.78125L520.4844 635.2344L521.53125 634.6094L521.53125 636.78125L522.59375 636.78125L522.59375 637.59375L521.53125 637.59375L521.53125 641.2344Q521.53125 641.6875 521.5781 641.8125Q521.6406 641.9375 521.7656 642.0156Q521.8906 642.09375 522.125 642.09375Q522.3125 642.09375 522.59375 642.0625ZM527.5469 641.0L528.6406 641.125Q528.3906 642.0781 527.6875 642.6094Q526.9844 643.1406 525.90625 643.1406Q524.5469 643.1406 523.7344 642.2969Q522.9375 641.4531 522.9375 639.9375Q522.9375 638.375 523.75 637.5156Q524.5625 636.6406 525.84375 636.6406Q527.09375 636.6406 527.875 637.4844Q528.6719 638.3281 528.6719 639.875Q528.6719 639.96875 528.6719 640.15625L524.03125 640.15625Q524.09375 641.1875 524.6094 641.7344Q525.125 642.2656 525.90625 642.2656Q526.4844 642.2656 526.8906 641.96875Q527.3125 641.65625 527.5469 641.0ZM524.09375 639.2969L527.5625 639.2969Q527.5 638.5 527.1719 638.1094Q526.65625 637.5 525.8594 637.5Q525.125 637.5 524.625 637.9844Q524.1406 638.46875 524.09375 639.2969ZM530.2969 643.0L530.2969 636.78125L531.2344 636.78125L531.2344 637.65625Q531.9219 636.6406 533.21875 636.6406Q533.78125 636.6406 534.25 636.84375Q534.7344 637.0469 534.96875 637.375Q535.2031 637.7031 535.2969 638.1406Q535.34375 638.4375 535.34375 639.1719L535.34375 643.0L534.2969 643.0L534.2969 639.21875Q534.2969 638.5625 534.1719 638.25Q534.0469 637.9375 533.7344 637.75Q533.4219 637.5469 533.0 637.5469Q532.3281 
637.5469 531.8281 637.9844Q531.34375 638.40625 531.34375 639.59375L531.34375 643.0L530.2969 643.0ZM541.3594 640.71875L542.3906 640.8594Q542.21875 641.9219 541.5156 642.53125Q540.8125 643.1406 539.7969 643.1406Q538.5156 643.1406 537.7344 642.3125Q536.96875 641.46875 536.96875 639.90625Q536.96875 638.90625 537.2969 638.15625Q537.6406 637.3906 538.3125 637.0156Q539.0 636.6406 539.8125 636.6406Q540.8125 636.6406 541.4531 637.15625Q542.1094 637.65625 542.2969 638.6094L541.2656 638.7656Q541.125 638.1406 540.75 637.8281Q540.375 637.5 539.84375 637.5Q539.0469 637.5 538.5469 638.0781Q538.0469 638.6406 538.0469 639.875Q538.0469 641.1406 538.53125 641.7031Q539.0156 642.2656 539.78125 642.2656Q540.40625 642.2656 540.8125 641.8906Q541.2344 641.5156 541.3594 640.71875ZM543.25 645.3906L543.125 644.40625Q543.46875 644.5 543.7344 644.5Q544.0781 644.5 544.28125 644.375Q544.5 644.2656 544.6406 644.0625Q544.7344 643.8906 544.96875 643.2656Q544.9844 643.1875 545.0625 643.0156L542.6875 636.78125L543.8281 636.78125L545.125 640.375Q545.375 641.0625 545.5781 641.8281Q545.7656 641.09375 546.0156 640.40625L547.34375 636.78125L548.3906 636.78125L546.03125 643.1094Q545.6406 644.125 545.4375 644.5156Q545.15625 645.0469 544.78125 645.28125Q544.4219 645.53125 543.9219 645.53125Q543.625 645.53125 543.25 645.3906Z" fill-rule="nonzero"/><path fill="#ea4335" d="M563.5 634.0C563.5 633.4477 563.9477 633.0 564.5 633.0L574.5 633.0C575.0523 633.0 575.5 633.4477 575.5 634.0L575.5 644.0C575.5 644.5523 575.0523 645.0 574.5 645.0L564.5 645.0C563.9477 645.0 563.5 644.5523 563.5 644.0Z" fill-rule="nonzero"/><path fill="#1a1a1a" d="M584.0469 640.2344L585.1094 640.1406Q585.1875 640.7969 585.46875 641.2031Q585.75 641.6094 586.3281 641.875Q586.9219 642.125 587.65625 642.125Q588.2969 642.125 588.7969 641.9375Q589.2969 641.7344 589.53125 641.40625Q589.78125 641.0625 589.78125 640.6719Q589.78125 640.2656 589.5469 639.96875Q589.3125 639.65625 588.78125 639.4531Q588.4219 639.3125 587.2344 639.03125Q586.0625 638.75 585.59375 638.5Q584.96875 638.1719 584.6719 637.7031Q584.375 637.21875 584.375 636.625Q584.375 635.9844 584.7344 635.4219Q585.1094 634.84375 585.8125 634.5625Q586.53125 634.2656 587.3906 634.2656Q588.34375 634.2656 589.0781 634.5781Q589.8125 634.875 590.2031 635.4844Q590.59375 636.0781 590.625 636.8281L589.53125 636.90625Q589.4375 636.09375 588.9219 635.6875Q588.4219 635.2656 587.4375 635.2656Q586.40625 635.2656 585.9375 635.6406Q585.46875 636.0156 585.46875 636.5469Q585.46875 637.0156 585.7969 637.3125Q586.125 637.6094 587.5 637.9219Q588.8906 638.2344 589.40625 638.46875Q590.15625 638.8281 590.5156 639.3594Q590.875 639.875 590.875 640.5781Q590.875 641.2656 590.4844 641.875Q590.09375 642.46875 589.3594 642.8125Q588.625 643.1406 587.7031 643.1406Q586.53125 643.1406 585.7344 642.8125Q584.9531 642.46875 584.5 641.78125Q584.0625 641.09375 584.0469 640.2344ZM592.2969 635.625L592.2969 634.40625L593.3594 634.40625L593.3594 635.625L592.2969 635.625ZM592.2969 643.0L592.2969 636.78125L593.3594 636.78125L593.3594 643.0L592.2969 643.0ZM595.09375 643.5156L596.125 643.6719Q596.1875 644.1406 596.4844 644.3594Q596.875 644.65625 597.5469 644.65625Q598.28125 644.65625 598.6719 644.3594Q599.0781 644.0625 599.21875 643.5469Q599.3125 643.21875 599.2969 642.1875Q598.6094 643.0 597.5781 643.0Q596.2969 643.0 595.59375 642.0781Q594.8906 641.1406 594.8906 639.8594Q594.8906 638.96875 595.2031 638.21875Q595.53125 637.4531 596.1406 637.0469Q596.75 636.6406 597.5781 636.6406Q598.6875 636.6406 599.40625 637.53125L599.40625 636.78125L600.375 636.78125L600.375 
642.15625Q600.375 643.6094 600.0781 644.21875Q599.78125 644.8281 599.1406 645.1719Q598.5 645.53125 597.5625 645.53125Q596.4531 645.53125 595.7656 645.03125Q595.0781 644.53125 595.09375 643.5156ZM595.96875 639.78125Q595.96875 641.0 596.4531 641.5625Q596.9375 642.125 597.6719 642.125Q598.40625 642.125 598.8906 641.5625Q599.3906 641.0 599.3906 639.8125Q599.3906 638.6719 598.875 638.09375Q598.375 637.5156 597.65625 637.5156Q596.9531 637.5156 596.4531 638.09375Q595.96875 638.65625 595.96875 639.78125ZM602.0781 638.8125Q602.0781 636.6719 603.21875 635.46875Q604.375 634.25 606.1875 634.25Q607.3906 634.25 608.34375 634.8281Q609.2969 635.3906 609.7969 636.40625Q610.2969 637.4219 610.2969 638.71875Q610.2969 640.0156 609.7656 641.0469Q609.2344 642.0781 608.2656 642.6094Q607.3125 643.1406 606.1875 643.1406Q604.96875 643.1406 604.0156 642.5625Q603.0625 641.96875 602.5625 640.96875Q602.0781 639.9531 602.0781 638.8125ZM603.25 638.8281Q603.25 640.3906 604.0781 641.28125Q604.9219 642.1719 606.1875 642.1719Q607.46875 642.1719 608.2969 641.28125Q609.125 640.375 609.125 638.7031Q609.125 637.65625 608.7656 636.875Q608.4219 636.09375 607.7344 635.6719Q607.0469 635.2344 606.2031 635.2344Q605.0 635.2344 604.125 636.0625Q603.25 636.8906 603.25 638.8281ZM611.2969 645.3906L611.2969 636.78125L612.25 636.78125L612.25 637.5781Q612.59375 637.1094 613.0156 636.875Q613.4531 636.6406 614.0625 636.6406Q614.8594 636.6406 615.46875 637.0469Q616.0781 637.4531 616.375 638.2031Q616.6875 638.9531 616.6875 639.84375Q616.6875 640.7969 616.34375 641.5625Q616.0156 642.3281 615.3594 642.7344Q614.7031 643.1406 613.9844 643.1406Q613.4531 643.1406 613.03125 642.9219Q612.6094 642.6875 612.34375 642.3594L612.34375 645.3906L611.2969 645.3906ZM612.25 639.9219Q612.25 641.125 612.7344 641.7031Q613.21875 642.2656 613.90625 642.2656Q614.6094 642.2656 615.1094 641.6719Q615.6094 641.0781 615.6094 639.8281Q615.6094 638.6406 615.125 638.0625Q614.6406 637.46875 613.9531 637.46875Q613.28125 637.46875 612.7656 638.09375Q612.25 638.71875 612.25 639.9219ZM620.59375 642.0625L620.75 642.9844Q620.2969 643.0781 619.9531 643.0781Q619.375 643.0781 619.0625 642.90625Q618.75 642.71875 618.6094 642.4219Q618.4844 642.125 618.4844 641.1719L618.4844 637.59375L617.71875 637.59375L617.71875 636.78125L618.4844 636.78125L618.4844 635.2344L619.53125 634.6094L619.53125 636.78125L620.59375 636.78125L620.59375 637.59375L619.53125 637.59375L619.53125 641.2344Q619.53125 641.6875 619.5781 641.8125Q619.6406 641.9375 619.7656 642.0156Q619.8906 642.09375 620.125 642.09375Q620.3125 642.09375 620.59375 642.0625ZM624.375 643.0L624.375 634.40625L627.6094 634.40625Q628.59375 634.40625 629.1875 634.6719Q629.78125 634.9375 630.1094 635.4844Q630.4531 636.0156 630.4531 636.6094Q630.4531 637.15625 630.15625 637.6406Q629.8594 638.125 629.25 638.4219Q630.03125 638.65625 630.4375 639.21875Q630.8594 639.7656 630.8594 640.5156Q630.8594 641.1094 630.6094 641.6406Q630.3594 642.15625 629.9844 642.4375Q629.6094 642.71875 629.03125 642.8594Q628.46875 643.0 627.65625 643.0L624.375 643.0ZM625.5156 638.0156L627.375 638.0156Q628.125 638.0156 628.4531 637.9219Q628.8906 637.7969 629.1094 637.5Q629.3281 637.1875 629.3281 636.7344Q629.3281 636.3125 629.125 635.9844Q628.9219 635.65625 628.53125 635.5469Q628.15625 635.4219 627.2344 635.4219L625.5156 635.4219L625.5156 638.0156ZM625.5156 641.9844L627.65625 641.9844Q628.2031 641.9844 628.4219 641.9375Q628.8281 641.875 629.0781 641.71875Q629.34375 641.5469 629.5156 641.2344Q629.6875 640.9219 629.6875 640.5156Q629.6875 640.03125 629.4375 639.6719Q629.2031 639.3125 
628.7656 639.1719Q628.3281 639.03125 627.5 639.03125L625.5156 639.03125L625.5156 641.9844ZM636.5469 641.0L637.6406 641.125Q637.3906 642.0781 636.6875 642.6094Q635.9844 643.1406 634.90625 643.1406Q633.5469 643.1406 632.7344 642.2969Q631.9375 641.4531 631.9375 639.9375Q631.9375 638.375 632.75 637.5156Q633.5625 636.6406 634.84375 636.6406Q636.09375 636.6406 636.875 637.4844Q637.6719 638.3281 637.6719 639.875Q637.6719 639.96875 637.6719 640.15625L633.03125 640.15625Q633.09375 641.1875 633.6094 641.7344Q634.125 642.2656 634.90625 642.2656Q635.4844 642.2656 635.8906 641.96875Q636.3125 641.65625 636.5469 641.0ZM633.09375 639.2969L636.5625 639.2969Q636.5 638.5 636.1719 638.1094Q635.65625 637.5 634.8594 637.5Q634.125 637.5 633.625 637.9844Q633.1406 638.46875 633.09375 639.2969ZM638.875 641.1406L639.90625 640.9844Q640.0 641.6094 640.3906 641.9375Q640.7969 642.2656 641.53125 642.2656Q642.25 642.2656 642.59375 641.9844Q642.9531 641.6875 642.9531 641.28125Q642.9531 640.9219 642.6406 640.71875Q642.4219 640.5781 641.5625 640.3594Q640.40625 640.0625 639.9531 639.8594Q639.5156 639.6406 639.28125 639.2656Q639.0469 638.8906 639.0469 638.4219Q639.0469 638.0156 639.2344 637.65625Q639.4219 637.2969 639.75 637.0625Q640.0 636.8906 640.4219 636.7656Q640.84375 636.6406 641.34375 636.6406Q642.0625 636.6406 642.6094 636.8594Q643.1719 637.0625 643.4375 637.4219Q643.7031 637.78125 643.7969 638.375L642.7656 638.5156Q642.7031 638.0469 642.3594 637.78125Q642.03125 637.5 641.40625 637.5Q640.6875 637.5 640.375 637.75Q640.0625 637.9844 640.0625 638.3125Q640.0625 638.5156 640.1875 638.6719Q640.3281 638.84375 640.59375 638.9531Q640.75 639.0156 641.53125 639.21875Q642.65625 639.53125 643.09375 639.71875Q643.53125 639.90625 643.78125 640.2656Q644.03125 640.625 644.03125 641.1719Q644.03125 641.7031 643.71875 642.1719Q643.4219 642.625 642.84375 642.8906Q642.2656 643.1406 641.53125 643.1406Q640.3125 643.1406 639.6719 642.6406Q639.0469 642.125 638.875 641.1406ZM648.59375 642.0625L648.75 642.9844Q648.2969 643.0781 647.9531 643.0781Q647.375 643.0781 647.0625 642.90625Q646.75 642.71875 646.6094 642.4219Q646.4844 642.125 646.4844 641.1719L646.4844 637.59375L645.71875 637.59375L645.71875 636.78125L646.4844 636.78125L646.4844 635.2344L647.53125 634.6094L647.53125 636.78125L648.59375 636.78125L648.59375 637.59375L647.53125 637.59375L647.53125 641.2344Q647.53125 641.6875 647.5781 641.8125Q647.6406 641.9375 647.7656 642.0156Q647.8906 642.09375 648.125 642.09375Q648.3125 642.09375 648.59375 642.0625ZM652.375 643.0L652.375 634.40625L653.5156 634.40625L653.5156 641.9844L657.75 641.9844L657.75 643.0L652.375 643.0ZM663.3594 642.2344Q662.7656 642.7344 662.21875 642.9375Q661.6875 643.1406 661.0625 643.1406Q660.03125 643.1406 659.4844 642.6406Q658.9375 642.1406 658.9375 641.3594Q658.9375 640.90625 659.1406 640.53125Q659.34375 640.1406 659.6875 639.9219Q660.03125 639.6875 660.4531 639.5781Q660.75 639.4844 661.3906 639.40625Q662.65625 639.2656 663.2656 639.0469Q663.2656 638.8281 663.2656 638.78125Q663.2656 638.125 662.96875 637.8594Q662.5625 637.5156 661.7656 637.5156Q661.03125 637.5156 660.6719 637.78125Q660.3125 638.03125 660.1406 638.6875L659.1094 638.5469Q659.25 637.8906 659.5781 637.4844Q659.90625 637.0781 660.5156 636.8594Q661.125 636.6406 661.9219 636.6406Q662.71875 636.6406 663.21875 636.8281Q663.71875 637.0156 663.9531 637.2969Q664.1875 637.5781 664.28125 638.0156Q664.3281 638.28125 664.3281 638.9844L664.3281 640.3906Q664.3281 641.8594 664.3906 642.25Q664.46875 642.6406 664.6719 643.0L663.5625 643.0Q663.40625 642.6719 663.3594 
642.2344ZM663.2656 639.875Q662.6875 640.1094 661.5469 640.28125Q660.8906 640.375 660.625 640.5Q660.3594 640.6094 660.2031 640.8281Q660.0625 641.0469 660.0625 641.3281Q660.0625 641.75 660.375 642.03125Q660.7031 642.3125 661.3125 642.3125Q661.9219 642.3125 662.3906 642.0469Q662.875 641.78125 663.09375 641.3125Q663.2656 640.9531 663.2656 640.2656L663.2656 639.875ZM668.59375 642.0625L668.75 642.9844Q668.2969 643.0781 667.9531 643.0781Q667.375 643.0781 667.0625 642.90625Q666.75 642.71875 666.6094 642.4219Q666.4844 642.125 666.4844 641.1719L666.4844 637.59375L665.71875 637.59375L665.71875 636.78125L666.4844 636.78125L666.4844 635.2344L667.53125 634.6094L667.53125 636.78125L668.59375 636.78125L668.59375 637.59375L667.53125 637.59375L667.53125 641.2344Q667.53125 641.6875 667.5781 641.8125Q667.6406 641.9375 667.7656 642.0156Q667.8906 642.09375 668.125 642.09375Q668.3125 642.09375 668.59375 642.0625ZM673.5469 641.0L674.6406 641.125Q674.3906 642.0781 673.6875 642.6094Q672.9844 643.1406 671.90625 643.1406Q670.5469 643.1406 669.7344 642.2969Q668.9375 641.4531 668.9375 639.9375Q668.9375 638.375 669.75 637.5156Q670.5625 636.6406 671.84375 636.6406Q673.09375 636.6406 673.875 637.4844Q674.6719 638.3281 674.6719 639.875Q674.6719 639.96875 674.6719 640.15625L670.03125 640.15625Q670.09375 641.1875 670.6094 641.7344Q671.125 642.2656 671.90625 642.2656Q672.4844 642.2656 672.8906 641.96875Q673.3125 641.65625 673.5469 641.0ZM670.09375 639.2969L673.5625 639.2969Q673.5 638.5 673.1719 638.1094Q672.65625 637.5 671.8594 637.5Q671.125 637.5 670.625 637.9844Q670.1406 638.46875 670.09375 639.2969ZM676.2969 643.0L676.2969 636.78125L677.2344 636.78125L677.2344 637.65625Q677.9219 636.6406 679.21875 636.6406Q679.78125 636.6406 680.25 636.84375Q680.7344 637.0469 680.96875 637.375Q681.2031 637.7031 681.2969 638.1406Q681.34375 638.4375 681.34375 639.1719L681.34375 643.0L680.2969 643.0L680.2969 639.21875Q680.2969 638.5625 680.1719 638.25Q680.0469 637.9375 679.7344 637.75Q679.4219 637.5469 679.0 637.5469Q678.3281 637.5469 677.8281 637.9844Q677.34375 638.40625 677.34375 639.59375L677.34375 643.0L676.2969 643.0ZM687.3594 640.71875L688.3906 640.8594Q688.21875 641.9219 687.5156 642.53125Q686.8125 643.1406 685.7969 643.1406Q684.5156 643.1406 683.7344 642.3125Q682.96875 641.46875 682.96875 639.90625Q682.96875 638.90625 683.2969 638.15625Q683.6406 637.3906 684.3125 637.0156Q685.0 636.6406 685.8125 636.6406Q686.8125 636.6406 687.4531 637.15625Q688.1094 637.65625 688.2969 638.6094L687.2656 638.7656Q687.125 638.1406 686.75 637.8281Q686.375 637.5 685.84375 637.5Q685.0469 637.5 684.5469 638.0781Q684.0469 638.6406 684.0469 639.875Q684.0469 641.1406 684.53125 641.7031Q685.0156 642.2656 685.78125 642.2656Q686.40625 642.2656 686.8125 641.8906Q687.2344 641.5156 687.3594 640.71875ZM689.25 645.3906L689.125 644.40625Q689.46875 644.5 689.7344 644.5Q690.0781 644.5 690.28125 644.375Q690.5 644.2656 690.6406 644.0625Q690.7344 643.8906 690.96875 643.2656Q690.9844 643.1875 691.0625 643.0156L688.6875 636.78125L689.8281 636.78125L691.125 640.375Q691.375 641.0625 691.5781 641.8281Q691.7656 641.09375 692.0156 640.40625L693.34375 636.78125L694.3906 636.78125L692.03125 643.1094Q691.6406 644.125 691.4375 644.5156Q691.15625 645.0469 690.78125 645.28125Q690.4219 645.53125 689.9219 645.53125Q689.625 645.53125 689.25 645.3906Z" fill-rule="nonzero"/><path fill="#757575" d="M106.359375 53.0L106.359375 38.6875L108.265625 38.6875L108.265625 53.0L106.359375 53.0ZM111.8125 53.0L111.8125 42.625L113.40625 42.625L113.40625 44.109375Q114.546875 42.390625 116.703125 
42.390625Q117.640625 42.390625 118.421875 42.734375Q119.21875 43.0625 119.609375 43.609375Q120.0 44.15625 120.15625 44.90625Q120.25 45.40625 120.25 46.625L120.25 53.0L118.484375 53.0L118.484375 46.6875Q118.484375 45.609375 118.28125 45.078125Q118.078125 44.546875 117.546875 44.234375Q117.03125 43.921875 116.328125 43.921875Q115.203125 43.921875 114.390625 44.640625Q113.578125 45.34375 113.578125 47.328125L113.578125 53.0L111.8125 53.0ZM125.65625 51.421875L125.90625 52.984375Q125.171875 53.140625 124.578125 53.140625Q123.625 53.140625 123.09375 52.84375Q122.578125 52.53125 122.359375 52.046875Q122.140625 51.546875 122.140625 49.96875L122.140625 44.0L120.859375 44.0L120.859375 42.625L122.140625 42.625L122.140625 40.0625L123.890625 39.0L123.890625 42.625L125.65625 42.625L125.65625 44.0L123.890625 44.0L123.890625 50.0625Q123.890625 50.8125 123.984375 51.03125Q124.078125 51.234375 124.28125 51.375Q124.5 51.5 124.890625 51.5Q125.171875 51.5 125.65625 51.421875ZM134.92188 49.65625L136.73438 49.890625Q136.3125 51.46875 135.14062 52.359375Q133.98438 53.234375 132.17188 53.234375Q129.90625 53.234375 128.5625 51.84375Q127.234375 50.4375 127.234375 47.90625Q127.234375 45.28125 128.57812 43.84375Q129.92188 42.390625 132.07812 42.390625Q134.15625 42.390625 135.46875 43.8125Q136.79688 45.21875 136.79688 47.796875Q136.79688 47.953125 136.78125 48.265625L129.04688 48.265625Q129.14062 49.96875 130.01562 50.875Q130.89062 51.78125 132.1875 51.78125Q133.15625 51.78125 133.82812 51.28125Q134.51562 50.765625 134.92188 49.65625ZM129.14062 46.8125L134.9375 46.8125Q134.82812 45.515625 134.28125 44.859375Q133.4375 43.84375 132.09375 43.84375Q130.89062 43.84375 130.0625 44.65625Q129.23438 45.453125 129.14062 46.8125ZM138.78125 53.0L138.78125 38.6875L140.53125 38.6875L140.53125 53.0L138.78125 53.0ZM149.35938 53.0L149.35938 38.6875L151.26562 38.6875L151.26562 53.0L149.35938 53.0ZM161.59375 49.203125L163.3125 49.421875Q163.03125 51.21875 161.85938 52.234375Q160.70312 53.234375 159.0 53.234375Q156.875 53.234375 155.57812 51.84375Q154.28125 50.453125 154.28125 47.859375Q154.28125 46.171875 154.82812 44.921875Q155.39062 43.65625 156.53125 43.03125Q157.67188 42.390625 159.01562 42.390625Q160.70312 42.390625 161.76562 43.25Q162.84375 44.109375 163.15625 45.671875L161.4375 45.9375Q161.20312 44.890625 160.57812 44.375Q159.95312 43.84375 159.07812 43.84375Q157.75 43.84375 156.92188 44.796875Q156.09375 45.75 156.09375 47.796875Q156.09375 49.890625 156.89062 50.84375Q157.6875 51.78125 158.98438 51.78125Q160.01562 51.78125 160.70312 51.15625Q161.40625 50.515625 161.59375 49.203125ZM171.92188 49.65625L173.73438 49.890625Q173.3125 51.46875 172.14062 52.359375Q170.98438 53.234375 169.17188 53.234375Q166.90625 53.234375 165.5625 51.84375Q164.23438 50.4375 164.23438 47.90625Q164.23438 45.28125 165.57812 43.84375Q166.92188 42.390625 169.07812 42.390625Q171.15625 42.390625 172.46875 43.8125Q173.79688 45.21875 173.79688 47.796875Q173.79688 47.953125 173.78125 48.265625L166.04688 48.265625Q166.14062 49.96875 167.01562 50.875Q167.89062 51.78125 169.1875 51.78125Q170.15625 51.78125 170.82812 51.28125Q171.51562 50.765625 171.92188 49.65625ZM166.14062 46.8125L171.9375 46.8125Q171.82812 45.515625 171.28125 44.859375Q170.4375 43.84375 169.09375 43.84375Q167.89062 43.84375 167.0625 44.65625Q166.23438 45.453125 166.14062 46.8125ZM181.78125 53.0L181.78125 38.6875L183.53125 38.6875L183.53125 53.0L181.78125 53.0ZM192.59375 51.71875Q191.60938 52.546875 190.70312 52.890625Q189.79688 53.234375 188.76562 53.234375Q187.0625 53.234375 186.14062 
52.40625Q185.21875 51.5625 185.21875 50.265625Q185.21875 49.5 185.5625 48.875Q185.92188 48.25 186.48438 47.875Q187.04688 47.484375 187.75 47.28125Q188.26562 47.15625 189.3125 47.015625Q191.4375 46.765625 192.4375 46.421875Q192.45312 46.0625 192.45312 45.953125Q192.45312 44.890625 191.95312 44.4375Q191.28125 43.84375 189.95312 43.84375Q188.70312 43.84375 188.10938 44.28125Q187.53125 44.71875 187.25 45.828125L185.53125 45.59375Q185.76562 44.484375 186.29688 43.8125Q186.82812 43.125 187.84375 42.765625Q188.85938 42.390625 190.20312 42.390625Q191.53125 42.390625 192.35938 42.703125Q193.1875 43.015625 193.57812 43.5Q193.96875 43.96875 194.125 44.6875Q194.21875 45.140625 194.21875 46.3125L194.21875 48.65625Q194.21875 51.109375 194.32812 51.765625Q194.4375 52.40625 194.78125 53.0L192.9375 53.0Q192.67188 52.453125 192.59375 51.71875ZM192.4375 47.796875Q191.48438 48.1875 189.5625 48.453125Q188.48438 48.609375 188.03125 48.8125Q187.59375 49.0 187.34375 49.375Q187.09375 49.75 187.09375 50.21875Q187.09375 50.921875 187.625 51.390625Q188.15625 51.859375 189.1875 51.859375Q190.20312 51.859375 190.98438 51.421875Q191.78125 50.96875 192.15625 50.203125Q192.4375 49.59375 192.4375 48.4375L192.4375 47.796875ZM196.82812 53.0L196.82812 38.6875L198.59375 38.6875L198.59375 46.84375L202.75 42.625L205.01562 42.625L201.0625 46.46875L205.42188 53.0L203.25 53.0L199.82812 47.703125L198.59375 48.890625L198.59375 53.0L196.82812 53.0ZM213.92188 49.65625L215.73438 49.890625Q215.3125 51.46875 214.14062 52.359375Q212.98438 53.234375 211.17188 53.234375Q208.90625 53.234375 207.5625 51.84375Q206.23438 50.4375 206.23438 47.90625Q206.23438 45.28125 207.57812 43.84375Q208.92188 42.390625 211.07812 42.390625Q213.15625 42.390625 214.46875 43.8125Q215.79688 45.21875 215.79688 47.796875Q215.79688 47.953125 215.78125 48.265625L208.04688 48.265625Q208.14062 49.96875 209.01562 50.875Q209.89062 51.78125 211.1875 51.78125Q212.15625 51.78125 212.82812 51.28125Q213.51562 50.765625 213.92188 49.65625ZM208.14062 46.8125L213.9375 46.8125Q213.82812 45.515625 213.28125 44.859375Q212.4375 43.84375 211.09375 43.84375Q209.89062 43.84375 209.0625 44.65625Q208.23438 45.453125 208.14062 46.8125ZM222.59375 53.0L228.125 45.53125L223.25 38.6875L225.5 38.6875L228.09375 42.359375Q228.90625 43.5 229.25 44.109375Q229.73438 43.328125 230.375 42.484375L233.26562 38.6875L235.32812 38.6875L230.29688 45.4375L235.71875 53.0L233.375 53.0L229.76562 47.890625Q229.46875 47.453125 229.14062 46.9375Q228.65625 47.71875 228.45312 48.015625L224.85938 53.0L222.59375 53.0ZM243.92188 49.65625L245.73438 49.890625Q245.3125 51.46875 244.14062 52.359375Q242.98438 53.234375 241.17188 53.234375Q238.90625 53.234375 237.5625 51.84375Q236.23438 50.4375 236.23438 47.90625Q236.23438 45.28125 237.57812 43.84375Q238.92188 42.390625 241.07812 42.390625Q243.15625 42.390625 244.46875 43.8125Q245.79688 45.21875 245.79688 47.796875Q245.79688 47.953125 245.78125 48.265625L238.04688 48.265625Q238.14062 49.96875 239.01562 50.875Q239.89062 51.78125 241.1875 51.78125Q242.15625 51.78125 242.82812 51.28125Q243.51562 50.765625 243.92188 49.65625ZM238.14062 46.8125L243.9375 46.8125Q243.82812 45.515625 243.28125 44.859375Q242.4375 43.84375 241.09375 43.84375Q239.89062 43.84375 239.0625 44.65625Q238.23438 45.453125 238.14062 46.8125ZM247.17188 47.8125Q247.17188 44.9375 248.76562 43.546875Q250.10938 42.390625 252.03125 42.390625Q254.17188 42.390625 255.51562 43.796875Q256.875 45.203125 256.875 47.671875Q256.875 49.671875 256.28125 50.828125Q255.6875 51.96875 254.53125 52.609375Q253.39062 53.234375 
252.03125 53.234375Q249.84375 53.234375 248.5 51.84375Q247.17188 50.4375 247.17188 47.8125ZM248.96875 47.8125Q248.96875 49.8125 249.82812 50.796875Q250.70312 51.78125 252.03125 51.78125Q253.34375 51.78125 254.20312 50.796875Q255.07812 49.796875 255.07812 47.75Q255.07812 45.828125 254.20312 44.84375Q253.32812 43.84375 252.03125 43.84375Q250.70312 43.84375 249.82812 44.84375Q248.96875 45.828125 248.96875 47.8125ZM258.8125 53.0L258.8125 42.625L260.40625 42.625L260.40625 44.109375Q261.54688 42.390625 263.70312 42.390625Q264.64062 42.390625 265.42188 42.734375Q266.21875 43.0625 266.60938 43.609375Q267.0 44.15625 267.15625 44.90625Q267.25 45.40625 267.25 46.625L267.25 53.0L265.48438 53.0L265.48438 46.6875Q265.48438 45.609375 265.28125 45.078125Q265.07812 44.546875 264.54688 44.234375Q264.03125 43.921875 263.32812 43.921875Q262.20312 43.921875 261.39062 44.640625Q260.57812 45.34375 260.57812 47.328125L260.57812 53.0L258.8125 53.0ZM277.03125 45.234375Q275.9375 44.828125 275.40625 44.09375Q274.89062 43.34375 274.89062 42.3125Q274.89062 40.75 276.01562 39.6875Q277.14062 38.625 279.0 38.625Q280.875 38.625 282.01562 39.71875Q283.15625 40.796875 283.15625 42.359375Q283.15625 43.359375 282.625 44.09375Q282.10938 44.828125 281.04688 45.234375Q282.375 45.671875 283.0625 46.625Q283.75 47.578125 283.75 48.90625Q283.75 50.75 282.4375 52.0Q281.14062 53.25 279.03125 53.25Q276.90625 53.25 275.60938 52.0Q274.3125 50.734375 274.3125 48.859375Q274.3125 47.46875 275.01562 46.53125Q275.73438 45.578125 277.03125 45.234375ZM276.6875 42.25Q276.6875 43.265625 277.34375 43.921875Q278.0 44.5625 279.03125 44.5625Q280.04688 44.5625 280.70312 43.921875Q281.35938 43.28125 281.35938 42.359375Q281.35938 41.390625 280.6875 40.734375Q280.01562 40.0625 279.01562 40.0625Q278.01562 40.0625 277.34375 40.71875Q276.6875 41.359375 276.6875 42.25ZM276.125 48.875Q276.125 49.625 276.46875 50.328125Q276.82812 51.03125 277.53125 51.421875Q278.23438 51.796875 279.04688 51.796875Q280.3125 51.796875 281.125 50.984375Q281.95312 50.171875 281.95312 48.921875Q281.95312 47.65625 281.10938 46.828125Q280.26562 46.0 278.98438 46.0Q277.75 46.0 276.9375 46.828125Q276.125 47.640625 276.125 48.875ZM285.34375 49.21875L287.09375 48.984375Q287.40625 50.484375 288.125 51.140625Q288.85938 51.796875 289.90625 51.796875Q291.14062 51.796875 291.98438 50.9375Q292.84375 50.078125 292.84375 48.8125Q292.84375 47.59375 292.04688 46.8125Q291.26562 46.03125 290.04688 46.03125Q289.54688 46.03125 288.8125 46.21875L289.0 44.671875Q289.17188 44.703125 289.28125 44.703125Q290.40625 44.703125 291.29688 44.125Q292.20312 43.53125 292.20312 42.3125Q292.20312 41.34375 291.54688 40.703125Q290.89062 40.0625 289.85938 40.0625Q288.84375 40.0625 288.15625 40.71875Q287.46875 41.359375 287.28125 42.640625L285.51562 42.328125Q285.84375 40.5625 286.98438 39.59375Q288.125 38.625 289.82812 38.625Q291.0 38.625 291.98438 39.125Q292.96875 39.625 293.48438 40.5Q294.01562 41.375 294.01562 42.34375Q294.01562 43.265625 293.51562 44.03125Q293.01562 44.796875 292.04688 45.25Q293.3125 45.53125 294.01562 46.46875Q294.71875 47.390625 294.71875 48.765625Q294.71875 50.640625 293.34375 51.953125Q291.98438 53.25 289.89062 53.25Q288.0 53.25 286.75 52.125Q285.51562 51.0 285.34375 49.21875ZM299.03125 45.234375Q297.9375 44.828125 297.40625 44.09375Q296.89062 43.34375 296.89062 42.3125Q296.89062 40.75 298.01562 39.6875Q299.14062 38.625 301.0 38.625Q302.875 38.625 304.01562 39.71875Q305.15625 40.796875 305.15625 42.359375Q305.15625 43.359375 304.625 44.09375Q304.10938 44.828125 303.04688 45.234375Q304.375 
[Figure asset: "Intel Ice lake Xeon 8380 - TorchScript - Batch Size 1 - Relative Difference Absolute Best Latency vs SigOpt Best Latency" — SVG chart, hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/sigopt/]
610.125 1040.5205 610.6406Q1039.9111 611.1406 1038.958 611.1406Q1037.7236 611.1406 1037.0049 610.25Q1036.1611 609.1875 1036.1611 606.7656ZM1037.2393 606.7656Q1037.2393 608.875 1037.7393 609.5781Q1038.2393 610.28125 1038.958 610.28125Q1039.6924 610.28125 1040.1768 609.5781Q1040.6768 608.875 1040.6768 606.7656Q1040.6768 604.6406 1040.1768 603.9531Q1039.6924 603.25 1038.9424 603.25Q1038.2236 603.25 1037.7861 603.8594Q1037.2393 604.6406 1037.2393 606.7656Z" fill-rule="nonzero"/><path fill="#757575" d="M72.671875 51.0L72.671875 38.109375L74.390625 38.109375L74.390625 51.0L72.671875 51.0ZM76.1875 51.0L76.1875 41.671875L77.609375 41.671875L77.609375 43.0Q78.640625 41.453125 80.578125 41.453125Q81.421875 41.453125 82.125 41.765625Q82.84375 42.0625 83.1875 42.5625Q83.546875 43.046875 83.6875 43.71875Q83.765625 44.15625 83.765625 45.265625L83.765625 51.0L82.1875 51.0L82.1875 45.328125Q82.1875 44.359375 82.0 43.875Q81.828125 43.390625 81.34375 43.109375Q80.875 42.828125 80.25 42.828125Q79.234375 42.828125 78.5 43.46875Q77.765625 44.109375 77.765625 45.90625L77.765625 51.0L76.1875 51.0ZM89.640625 49.578125L89.875 50.984375Q89.203125 51.125 88.671875 51.125Q87.8125 51.125 87.328125 50.859375Q86.859375 50.578125 86.671875 50.140625Q86.484375 49.6875 86.484375 48.265625L86.484375 42.890625L85.3125 42.890625L85.3125 41.671875L86.484375 41.671875L86.484375 39.359375L88.046875 38.40625L88.046875 41.671875L89.640625 41.671875L89.640625 42.890625L88.046875 42.890625L88.046875 48.359375Q88.046875 49.03125 88.125 49.234375Q88.21875 49.421875 88.40625 49.53125Q88.59375 49.640625 88.953125 49.640625Q89.203125 49.640625 89.640625 49.578125ZM97.578125 48.0L99.21875 48.203125Q98.828125 49.625 97.78125 50.421875Q96.734375 51.21875 95.109375 51.21875Q93.0625 51.21875 91.859375 49.953125Q90.65625 48.6875 90.65625 46.40625Q90.65625 44.0625 91.859375 42.765625Q93.078125 41.453125 95.015625 41.453125Q96.890625 41.453125 98.078125 42.734375Q99.265625 44.0 99.265625 46.3125Q99.265625 46.453125 99.25 46.734375L92.296875 46.734375Q92.375 48.28125 93.15625 49.09375Q93.953125 49.90625 95.109375 49.90625Q95.984375 49.90625 96.59375 49.453125Q97.21875 49.0 97.578125 48.0ZM92.375 45.4375L97.59375 45.4375Q97.484375 44.265625 97.0 43.671875Q96.234375 42.75 95.03125 42.75Q93.953125 42.75 93.203125 43.484375Q92.46875 44.21875 92.375 45.4375ZM101.15625 51.0L101.15625 38.109375L102.734375 38.109375L102.734375 51.0L101.15625 51.0ZM110.671875 51.0L110.671875 38.109375L112.390625 38.109375L112.390625 51.0L110.671875 51.0ZM120.28125 47.578125L121.828125 47.78125Q121.578125 49.390625 120.53125 50.3125Q119.484375 51.21875 117.953125 51.21875Q116.03125 51.21875 114.859375 49.96875Q113.703125 48.703125 113.703125 46.375Q113.703125 44.859375 114.203125 43.734375Q114.703125 42.59375 115.71875 42.03125Q116.75 41.453125 117.953125 41.453125Q119.484375 41.453125 120.4375 42.234375Q121.40625 43.0 121.6875 44.40625L120.140625 44.640625Q119.921875 43.703125 119.359375 43.234375Q118.8125 42.75 118.015625 42.75Q116.828125 42.75 116.078125 43.609375Q115.328125 44.46875 115.328125 46.328125Q115.328125 48.203125 116.046875 49.0625Q116.765625 49.90625 117.9375 49.90625Q118.859375 49.90625 119.484375 49.34375Q120.109375 48.765625 120.28125 47.578125ZM129.57812 48.0L131.21875 48.203125Q130.82812 49.625 129.78125 50.421875Q128.73438 51.21875 127.109375 51.21875Q125.0625 51.21875 123.859375 49.953125Q122.65625 48.6875 122.65625 46.40625Q122.65625 44.0625 123.859375 42.765625Q125.078125 41.453125 127.015625 41.453125Q128.89062 41.453125 130.07812 
42.734375Q131.26562 44.0 131.26562 46.3125Q131.26562 46.453125 131.25 46.734375L124.296875 46.734375Q124.375 48.28125 125.15625 49.09375Q125.953125 49.90625 127.109375 49.90625Q127.984375 49.90625 128.59375 49.453125Q129.21875 49.0 129.57812 48.0ZM124.375 45.4375L129.59375 45.4375Q129.48438 44.265625 129.0 43.671875Q128.23438 42.75 127.03125 42.75Q125.953125 42.75 125.203125 43.484375Q124.46875 44.21875 124.375 45.4375ZM138.15625 51.0L138.15625 38.109375L139.73438 38.109375L139.73438 51.0L138.15625 51.0ZM148.28125 49.84375Q147.40625 50.59375 146.57812 50.90625Q145.76562 51.21875 144.84375 51.21875Q143.29688 51.21875 142.46875 50.46875Q141.65625 49.703125 141.65625 48.53125Q141.65625 47.859375 141.96875 47.296875Q142.28125 46.71875 142.78125 46.375Q143.28125 46.03125 143.92188 45.859375Q144.39062 45.734375 145.32812 45.625Q147.23438 45.390625 148.14062 45.078125Q148.15625 44.75 148.15625 44.65625Q148.15625 43.703125 147.70312 43.296875Q147.09375 42.765625 145.90625 42.765625Q144.78125 42.765625 144.25 43.15625Q143.71875 43.546875 143.46875 44.546875L141.92188 44.328125Q142.14062 43.328125 142.60938 42.71875Q143.09375 42.109375 144.0 41.78125Q144.92188 41.453125 146.14062 41.453125Q147.32812 41.453125 148.07812 41.734375Q148.82812 42.015625 149.17188 42.453125Q149.53125 42.875 149.67188 43.515625Q149.75 43.921875 149.75 44.984375L149.75 47.09375Q149.75 49.296875 149.84375 49.890625Q149.95312 50.46875 150.25 51.0L148.59375 51.0Q148.34375 50.5 148.28125 49.84375ZM148.14062 46.3125Q147.28125 46.671875 145.5625 46.90625Q144.59375 47.046875 144.1875 47.234375Q143.78125 47.40625 143.5625 47.75Q143.34375 48.078125 143.34375 48.5Q143.34375 49.125 143.8125 49.546875Q144.29688 49.96875 145.21875 49.96875Q146.14062 49.96875 146.84375 49.578125Q147.5625 49.171875 147.89062 48.484375Q148.14062 47.9375 148.14062 46.890625L148.14062 46.3125ZM152.20312 51.0L152.20312 38.109375L153.78125 38.109375L153.78125 45.46875L157.51562 41.671875L159.5625 41.671875L156.0 45.125L159.9375 51.0L157.98438 51.0L154.89062 46.234375L153.78125 47.296875L153.78125 51.0L152.20312 51.0ZM167.57812 48.0L169.21875 48.203125Q168.82812 49.625 167.78125 50.421875Q166.73438 51.21875 165.10938 51.21875Q163.0625 51.21875 161.85938 49.953125Q160.65625 48.6875 160.65625 46.40625Q160.65625 44.0625 161.85938 42.765625Q163.07812 41.453125 165.01562 41.453125Q166.89062 41.453125 168.07812 42.734375Q169.26562 44.0 169.26562 46.3125Q169.26562 46.453125 169.25 46.734375L162.29688 46.734375Q162.375 48.28125 163.15625 49.09375Q163.95312 49.90625 165.10938 49.90625Q165.98438 49.90625 166.59375 49.453125Q167.21875 49.0 167.57812 48.0ZM162.375 45.4375L167.59375 45.4375Q167.48438 44.265625 167.0 43.671875Q166.23438 42.75 165.03125 42.75Q163.95312 42.75 163.20312 43.484375Q162.46875 44.21875 162.375 45.4375ZM175.07812 51.0L180.0625 44.28125L175.67188 38.109375L177.70312 38.109375L180.03125 41.421875Q180.76562 42.453125 181.07812 43.0Q181.5 42.296875 182.09375 41.53125L184.6875 38.109375L186.54688 38.109375L182.01562 44.1875L186.89062 51.0L184.78125 51.0L181.54688 46.40625Q181.26562 46.0 180.98438 45.546875Q180.54688 46.25 180.35938 46.515625L177.125 51.0L175.07812 51.0ZM193.57812 48.0L195.21875 48.203125Q194.82812 49.625 193.78125 50.421875Q192.73438 51.21875 191.10938 51.21875Q189.0625 51.21875 187.85938 49.953125Q186.65625 48.6875 186.65625 46.40625Q186.65625 44.0625 187.85938 42.765625Q189.07812 41.453125 191.01562 41.453125Q192.89062 41.453125 194.07812 42.734375Q195.26562 44.0 195.26562 46.3125Q195.26562 46.453125 195.25 46.734375L188.29688 
46.734375Q188.375 48.28125 189.15625 49.09375Q189.95312 49.90625 191.10938 49.90625Q191.98438 49.90625 192.59375 49.453125Q193.21875 49.0 193.57812 48.0ZM188.375 45.4375L193.59375 45.4375Q193.48438 44.265625 193.0 43.671875Q192.23438 42.75 191.03125 42.75Q189.95312 42.75 189.20312 43.484375Q188.46875 44.21875 188.375 45.4375ZM196.59375 46.328125Q196.59375 43.734375 198.04688 42.484375Q199.25 41.453125 200.96875 41.453125Q202.90625 41.453125 204.125 42.71875Q205.34375 43.984375 205.34375 46.203125Q205.34375 48.0 204.79688 49.03125Q204.26562 50.0625 203.23438 50.640625Q202.20312 51.21875 200.96875 51.21875Q199.01562 51.21875 197.79688 49.96875Q196.59375 48.703125 196.59375 46.328125ZM198.21875 46.328125Q198.21875 48.125 199.0 49.015625Q199.78125 49.90625 200.96875 49.90625Q202.15625 49.90625 202.9375 49.015625Q203.71875 48.109375 203.71875 46.28125Q203.71875 44.546875 202.92188 43.65625Q202.14062 42.765625 200.96875 42.765625Q199.78125 42.765625 199.0 43.65625Q198.21875 44.546875 198.21875 46.328125ZM207.1875 51.0L207.1875 41.671875L208.60938 41.671875L208.60938 43.0Q209.64062 41.453125 211.57812 41.453125Q212.42188 41.453125 213.125 41.765625Q213.84375 42.0625 214.1875 42.5625Q214.54688 43.046875 214.6875 43.71875Q214.76562 44.15625 214.76562 45.265625L214.76562 51.0L213.1875 51.0L213.1875 45.328125Q213.1875 44.359375 213.0 43.875Q212.82812 43.390625 212.34375 43.109375Q211.875 42.828125 211.25 42.828125Q210.23438 42.828125 209.5 43.46875Q208.76562 44.109375 208.76562 45.90625L208.76562 51.0L207.1875 51.0ZM224.1875 44.015625Q223.20312 43.65625 222.71875 42.984375Q222.25 42.3125 222.25 41.390625Q222.25 39.984375 223.25 39.03125Q224.26562 38.0625 225.95312 38.0625Q227.64062 38.0625 228.65625 39.046875Q229.6875 40.015625 229.6875 41.421875Q229.6875 42.328125 229.21875 43.0Q228.75 43.65625 227.79688 44.015625Q228.98438 44.40625 229.59375 45.265625Q230.21875 46.125 230.21875 47.3125Q230.21875 48.96875 229.04688 50.09375Q227.875 51.21875 225.96875 51.21875Q224.0625 51.21875 222.89062 50.09375Q221.73438 48.953125 221.73438 47.265625Q221.73438 46.015625 222.35938 45.171875Q223.0 44.3125 224.1875 44.015625ZM223.85938 41.328125Q223.85938 42.25 224.45312 42.828125Q225.04688 43.40625 225.98438 43.40625Q226.89062 43.40625 227.46875 42.828125Q228.0625 42.25 228.0625 41.421875Q228.0625 40.546875 227.45312 39.953125Q226.85938 39.359375 225.96875 39.359375Q225.0625 39.359375 224.45312 39.9375Q223.85938 40.515625 223.85938 41.328125ZM223.35938 47.28125Q223.35938 47.953125 223.67188 48.59375Q224.0 49.21875 224.625 49.578125Q225.26562 49.921875 226.0 49.921875Q227.125 49.921875 227.85938 49.1875Q228.60938 48.453125 228.60938 47.328125Q228.60938 46.1875 227.84375 45.453125Q227.07812 44.703125 225.9375 44.703125Q224.82812 44.703125 224.09375 45.4375Q223.35938 46.171875 223.35938 47.28125ZM231.75 47.59375L233.34375 47.390625Q233.60938 48.734375 234.26562 49.328125Q234.92188 49.921875 235.85938 49.921875Q236.98438 49.921875 237.75 49.15625Q238.51562 48.375 238.51562 47.234375Q238.51562 46.140625 237.79688 45.4375Q237.09375 44.71875 236.0 44.71875Q235.54688 44.71875 234.875 44.90625L235.04688 43.515625Q235.20312 43.53125 235.3125 43.53125Q236.3125 43.53125 237.125 43.0Q237.9375 42.46875 237.9375 41.375Q237.9375 40.5 237.34375 39.9375Q236.75 39.359375 235.82812 39.359375Q234.90625 39.359375 234.28125 39.9375Q233.67188 40.515625 233.5 41.6875L231.92188 41.40625Q232.20312 39.8125 233.23438 38.9375Q234.26562 38.0625 235.79688 38.0625Q236.84375 38.0625 237.73438 38.515625Q238.625 38.96875 239.09375 39.75Q239.5625 
40.53125 239.5625 41.40625Q239.5625 42.25 239.10938 42.9375Q238.67188 43.609375 237.78125 44.015625Q238.92188 44.28125 239.54688 45.125Q240.1875 45.953125 240.1875 47.1875Q240.1875 48.875 238.95312 50.0625Q237.73438 51.234375 235.85938 51.234375Q234.15625 51.234375 233.03125 50.21875Q231.92188 49.203125 231.75 47.59375ZM244.1875 44.015625Q243.20312 43.65625 242.71875 42.984375Q242.25 42.3125 242.25 41.390625Q242.25 39.984375 243.25 39.03125Q244.26562 38.0625 245.95312 38.0625Q247.64062 38.0625 248.65625 39.046875Q249.6875 40.015625 249.6875 41.421875Q249.6875 42.328125 249.21875 43.0Q248.75 43.65625 247.79688 44.015625Q248.98438 44.40625 249.59375 45.265625Q250.21875 46.125 250.21875 47.3125Q250.21875 48.96875 249.04688 50.09375Q247.875 51.21875 245.96875 51.21875Q244.0625 51.21875 242.89062 50.09375Q241.73438 48.953125 241.73438 47.265625Q241.73438 46.015625 242.35938 45.171875Q243.0 44.3125 244.1875 44.015625ZM243.85938 41.328125Q243.85938 42.25 244.45312 42.828125Q245.04688 43.40625 245.98438 43.40625Q246.89062 43.40625 247.46875 42.828125Q248.0625 42.25 248.0625 41.421875Q248.0625 40.546875 247.45312 39.953125Q246.85938 39.359375 245.96875 39.359375Q245.0625 39.359375 244.45312 39.9375Q243.85938 40.515625 243.85938 41.328125ZM243.35938 47.28125Q243.35938 47.953125 243.67188 48.59375Q244.0 49.21875 244.625 49.578125Q245.26562 49.921875 246.0 49.921875Q247.125 49.921875 247.85938 49.1875Q248.60938 48.453125 248.60938 47.328125Q248.60938 46.1875 247.84375 45.453125Q247.07812 44.703125 245.9375 44.703125Q244.82812 44.703125 244.09375 45.4375Q243.35938 46.171875 243.35938 47.28125ZM251.75 44.640625Q251.75 42.359375 252.21875 40.96875Q252.6875 39.578125 253.60938 38.828125Q254.54688 38.0625 255.95312 38.0625Q256.98438 38.0625 257.76562 38.484375Q258.54688 38.890625 259.04688 39.6875Q259.5625 40.46875 259.85938 41.609375Q260.15625 42.734375 260.15625 44.640625Q260.15625 46.90625 259.6875 48.3125Q259.21875 49.703125 258.28125 50.46875Q257.35938 51.21875 255.95312 51.21875Q254.07812 51.21875 253.01562 49.890625Q251.75 48.28125 251.75 44.640625ZM253.375 44.640625Q253.375 47.8125 254.10938 48.875Q254.85938 49.921875 255.95312 49.921875Q257.03125 49.921875 257.78125 48.875Q258.53125 47.8125 258.53125 44.640625Q258.53125 41.46875 257.78125 40.421875Q257.03125 39.375 255.9375 39.375Q254.84375 39.375 254.1875 40.296875Q253.375 41.46875 253.375 44.640625ZM266.57812 47.125L266.57812 45.546875L271.4375 45.546875L271.4375 47.125L266.57812 47.125ZM281.67188 51.0L281.67188 39.640625L277.42188 39.640625L277.42188 38.109375L287.64062 38.109375L287.64062 39.640625L283.375 39.640625L283.375 51.0L281.67188 51.0ZM289.59375 46.328125Q289.59375 43.734375 291.04688 42.484375Q292.25 41.453125 293.96875 41.453125Q295.90625 41.453125 297.125 42.71875Q298.34375 43.984375 298.34375 46.203125Q298.34375 48.0 297.79688 49.03125Q297.26562 50.0625 296.23438 50.640625Q295.20312 51.21875 293.96875 51.21875Q292.01562 51.21875 290.79688 49.96875Q289.59375 48.703125 289.59375 46.328125ZM291.21875 46.328125Q291.21875 48.125 292.0 49.015625Q292.78125 49.90625 293.96875 49.90625Q295.15625 49.90625 295.9375 49.015625Q296.71875 48.109375 296.71875 46.28125Q296.71875 44.546875 295.92188 43.65625Q295.14062 42.765625 293.96875 42.765625Q292.78125 42.765625 292.0 43.65625Q291.21875 44.546875 291.21875 46.328125ZM300.17188 51.0L300.17188 41.671875L301.59375 41.671875L301.59375 43.078125Q302.14062 42.09375 302.59375 41.78125Q303.0625 41.453125 303.60938 41.453125Q304.42188 41.453125 305.23438 41.96875L304.70312 43.4375Q304.10938 43.09375 
303.53125 43.09375Q303.01562 43.09375 302.59375 43.40625Q302.1875 43.71875 302.01562 44.265625Q301.75 45.109375 301.75 46.109375L301.75 51.0L300.17188 51.0ZM312.28125 47.578125L313.82812 47.78125Q313.57812 49.390625 312.53125 50.3125Q311.48438 51.21875 309.95312 51.21875Q308.03125 51.21875 306.85938 49.96875Q305.70312 48.703125 305.70312 46.375Q305.70312 44.859375 306.20312 43.734375Q306.70312 42.59375 307.71875 42.03125Q308.75 41.453125 309.95312 41.453125Q311.48438 41.453125 312.4375 42.234375Q313.40625 43.0 313.6875 44.40625L312.14062 44.640625Q311.92188 43.703125 311.35938 43.234375Q310.8125 42.75 310.01562 42.75Q308.82812 42.75 308.07812 43.609375Q307.32812 44.46875 307.32812 46.328125Q307.32812 48.203125 308.04688 49.0625Q308.76562 49.90625 309.9375 49.90625Q310.85938 49.90625 311.48438 49.34375Q312.10938 48.765625 312.28125 47.578125ZM315.1875 51.0L315.1875 38.109375L316.76562 38.109375L316.76562 42.734375Q317.875 41.453125 319.5625 41.453125Q320.59375 41.453125 321.35938 41.859375Q322.125 42.265625 322.45312 43.0Q322.79688 43.71875 322.79688 45.078125L322.79688 51.0L321.20312 51.0L321.20312 45.078125Q321.20312 43.890625 320.6875 43.359375Q320.17188 42.8125 319.23438 42.8125Q318.53125 42.8125 317.90625 43.1875Q317.29688 43.546875 317.03125 44.171875Q316.76562 44.796875 316.76562 45.890625L316.76562 51.0L315.1875 51.0ZM324.8125 46.859375L326.42188 46.71875Q326.53125 47.6875 326.9375 48.3125Q327.35938 48.921875 328.23438 49.3125Q329.125 49.6875 330.21875 49.6875Q331.20312 49.6875 331.9375 49.40625Q332.6875 49.109375 333.04688 48.609375Q333.42188 48.09375 333.42188 47.5Q333.42188 46.890625 333.0625 46.4375Q332.71875 45.984375 331.90625 45.6875Q331.39062 45.484375 329.60938 45.0625Q327.84375 44.625 327.125 44.25Q326.20312 43.765625 325.75 43.046875Q325.29688 42.328125 325.29688 41.453125Q325.29688 40.46875 325.84375 39.625Q326.40625 38.78125 327.46875 38.34375Q328.53125 37.890625 329.84375 37.890625Q331.26562 37.890625 332.35938 38.359375Q333.45312 38.8125 334.04688 39.71875Q334.64062 40.609375 334.67188 41.75L333.04688 41.875Q332.90625 40.640625 332.14062 40.015625Q331.39062 39.390625 329.90625 39.390625Q328.35938 39.390625 327.64062 39.96875Q326.9375 40.53125 326.9375 41.328125Q326.9375 42.03125 327.45312 42.46875Q327.9375 42.921875 330.01562 43.390625Q332.09375 43.859375 332.875 44.21875Q334.0 44.734375 334.53125 45.53125Q335.0625 46.328125 335.0625 47.359375Q335.0625 48.390625 334.46875 49.296875Q333.89062 50.203125 332.78125 50.71875Q331.6875 51.21875 330.29688 51.21875Q328.54688 51.21875 327.35938 50.71875Q326.1875 50.203125 325.51562 49.1875Q324.84375 48.15625 324.8125 46.859375ZM343.28125 47.578125L344.82812 47.78125Q344.57812 49.390625 343.53125 50.3125Q342.48438 51.21875 340.95312 51.21875Q339.03125 51.21875 337.85938 49.96875Q336.70312 48.703125 336.70312 46.375Q336.70312 44.859375 337.20312 43.734375Q337.70312 42.59375 338.71875 42.03125Q339.75 41.453125 340.95312 41.453125Q342.48438 41.453125 343.4375 42.234375Q344.40625 43.0 344.6875 44.40625L343.14062 44.640625Q342.92188 43.703125 342.35938 43.234375Q341.8125 42.75 341.01562 42.75Q339.82812 42.75 339.07812 43.609375Q338.32812 44.46875 338.32812 46.328125Q338.32812 48.203125 339.04688 49.0625Q339.76562 49.90625 340.9375 49.90625Q341.85938 49.90625 342.48438 49.34375Q343.10938 48.765625 343.28125 47.578125ZM346.17188 51.0L346.17188 41.671875L347.59375 41.671875L347.59375 43.078125Q348.14062 42.09375 348.59375 41.78125Q349.0625 41.453125 349.60938 41.453125Q350.42188 41.453125 351.23438 41.96875L350.70312 43.4375Q350.10938 
43.09375 349.53125 43.09375Q349.01562 43.09375 348.59375 43.40625Q348.1875 43.71875 348.01562 44.265625Q347.75 45.109375 347.75 46.109375L347.75 51.0L346.17188 51.0ZM352.20312 39.9375L352.20312 38.109375L353.78125 38.109375L353.78125 39.9375L352.20312 39.9375ZM352.20312 51.0L352.20312 41.671875L353.78125 41.671875L353.78125 51.0L352.20312 51.0ZM356.1875 54.578125L356.1875 41.671875L357.625 41.671875L357.625 42.875Q358.14062 42.171875 358.78125 41.8125Q359.42188 41.453125 360.32812 41.453125Q361.53125 41.453125 362.4375 42.078125Q363.35938 42.6875 363.82812 43.8125Q364.29688 44.921875 364.29688 46.265625Q364.29688 47.6875 363.78125 48.84375Q363.26562 49.984375 362.28125 50.609375Q361.29688 51.21875 360.21875 51.21875Q359.4375 51.21875 358.79688 50.890625Q358.17188 50.546875 357.76562 50.03125L357.76562 54.578125L356.1875 54.578125ZM357.625 46.390625Q357.625 48.1875 358.34375 49.046875Q359.07812 49.90625 360.10938 49.90625Q361.17188 49.90625 361.92188 49.015625Q362.67188 48.125 362.67188 46.25Q362.67188 44.46875 361.9375 43.578125Q361.20312 42.6875 360.1875 42.6875Q359.17188 42.6875 358.39062 43.640625Q357.625 44.578125 357.625 46.390625ZM369.64062 49.578125L369.875 50.984375Q369.20312 51.125 368.67188 51.125Q367.8125 51.125 367.32812 50.859375Q366.85938 50.578125 366.67188 50.140625Q366.48438 49.6875 366.48438 48.265625L366.48438 42.890625L365.3125 42.890625L365.3125 41.671875L366.48438 41.671875L366.48438 39.359375L368.04688 38.40625L368.04688 41.671875L369.64062 41.671875L369.64062 42.890625L368.04688 42.890625L368.04688 48.359375Q368.04688 49.03125 368.125 49.234375Q368.21875 49.421875 368.40625 49.53125Q368.59375 49.640625 368.95312 49.640625Q369.20312 49.640625 369.64062 49.578125ZM375.57812 47.125L375.57812 45.546875L380.4375 45.546875L380.4375 47.125L375.57812 47.125ZM387.3125 51.0L387.3125 38.109375L392.15625 38.109375Q393.625 38.109375 394.51562 38.5Q395.40625 38.890625 395.90625 39.703125Q396.42188 40.515625 396.42188 41.40625Q396.42188 42.234375 395.96875 42.96875Q395.53125 43.703125 394.625 44.140625Q395.79688 44.484375 396.42188 45.3125Q397.04688 46.140625 397.04688 47.265625Q397.04688 48.171875 396.65625 48.953125Q396.28125 49.71875 395.71875 50.140625Q395.15625 50.5625 394.3125 50.78125Q393.46875 51.0 392.23438 51.0L387.3125 51.0ZM389.03125 43.53125L391.8125 43.53125Q392.9375 43.53125 393.4375 43.375Q394.09375 43.1875 394.42188 42.75Q394.75 42.296875 394.75 41.609375Q394.75 40.96875 394.4375 40.484375Q394.125 40.0 393.54688 39.828125Q392.98438 39.640625 391.59375 39.640625L389.03125 39.640625L389.03125 43.53125ZM389.03125 49.484375L392.23438 49.484375Q393.0625 49.484375 393.39062 49.421875Q393.98438 49.3125 394.375 49.0625Q394.76562 48.8125 395.01562 48.34375Q395.28125 47.875 395.28125 47.265625Q395.28125 46.546875 394.90625 46.015625Q394.54688 45.484375 393.89062 45.265625Q393.23438 45.046875 392.0 45.046875L389.03125 45.046875L389.03125 49.484375ZM405.28125 49.84375Q404.40625 50.59375 403.57812 50.90625Q402.76562 51.21875 401.84375 51.21875Q400.29688 51.21875 399.46875 50.46875Q398.65625 49.703125 398.65625 48.53125Q398.65625 47.859375 398.96875 47.296875Q399.28125 46.71875 399.78125 46.375Q400.28125 46.03125 400.92188 45.859375Q401.39062 45.734375 402.32812 45.625Q404.23438 45.390625 405.14062 45.078125Q405.15625 44.75 405.15625 44.65625Q405.15625 43.703125 404.70312 43.296875Q404.09375 42.765625 402.90625 42.765625Q401.78125 42.765625 401.25 43.15625Q400.71875 43.546875 400.46875 44.546875L398.92188 44.328125Q399.14062 43.328125 399.60938 42.71875Q400.09375 42.109375 
401.0 41.78125Q401.92188 41.453125 403.14062 41.453125Q404.32812 41.453125 405.07812 41.734375Q405.82812 42.015625 406.17188 42.453125Q406.53125 42.875 406.67188 43.515625Q406.75 43.921875 406.75 44.984375L406.75 47.09375Q406.75 49.296875 406.84375 49.890625Q406.95312 50.46875 407.25 51.0L405.59375 51.0Q405.34375 50.5 405.28125 49.84375ZM405.14062 46.3125Q404.28125 46.671875 402.5625 46.90625Q401.59375 47.046875 401.1875 47.234375Q400.78125 47.40625 400.5625 47.75Q400.34375 48.078125 400.34375 48.5Q400.34375 49.125 400.8125 49.546875Q401.29688 49.96875 402.21875 49.96875Q403.14062 49.96875 403.84375 49.578125Q404.5625 49.171875 404.89062 48.484375Q405.14062 47.9375 405.14062 46.890625L405.14062 46.3125ZM412.64062 49.578125L412.875 50.984375Q412.20312 51.125 411.67188 51.125Q410.8125 51.125 410.32812 50.859375Q409.85938 50.578125 409.67188 50.140625Q409.48438 49.6875 409.48438 48.265625L409.48438 42.890625L408.3125 42.890625L408.3125 41.671875L409.48438 41.671875L409.48438 39.359375L411.04688 38.40625L411.04688 41.671875L412.64062 41.671875L412.64062 42.890625L411.04688 42.890625L411.04688 48.359375Q411.04688 49.03125 411.125 49.234375Q411.21875 49.421875 411.40625 49.53125Q411.59375 49.640625 411.95312 49.640625Q412.20312 49.640625 412.64062 49.578125ZM420.28125 47.578125L421.82812 47.78125Q421.57812 49.390625 420.53125 50.3125Q419.48438 51.21875 417.95312 51.21875Q416.03125 51.21875 414.85938 49.96875Q413.70312 48.703125 413.70312 46.375Q413.70312 44.859375 414.20312 43.734375Q414.70312 42.59375 415.71875 42.03125Q416.75 41.453125 417.95312 41.453125Q419.48438 41.453125 420.4375 42.234375Q421.40625 43.0 421.6875 44.40625L420.14062 44.640625Q419.92188 43.703125 419.35938 43.234375Q418.8125 42.75 418.01562 42.75Q416.82812 42.75 416.07812 43.609375Q415.32812 44.46875 415.32812 46.328125Q415.32812 48.203125 416.04688 49.0625Q416.76562 49.90625 417.9375 49.90625Q418.85938 49.90625 419.48438 49.34375Q420.10938 48.765625 420.28125 47.578125ZM423.1875 51.0L423.1875 38.109375L424.76562 38.109375L424.76562 42.734375Q425.875 41.453125 427.5625 41.453125Q428.59375 41.453125 429.35938 41.859375Q430.125 42.265625 430.45312 43.0Q430.79688 43.71875 430.79688 45.078125L430.79688 51.0L429.20312 51.0L429.20312 45.078125Q429.20312 43.890625 428.6875 43.359375Q428.17188 42.8125 427.23438 42.8125Q426.53125 42.8125 425.90625 43.1875Q425.29688 43.546875 425.03125 44.171875Q424.76562 44.796875 424.76562 45.890625L424.76562 51.0L423.1875 51.0ZM437.8125 46.859375L439.42188 46.71875Q439.53125 47.6875 439.9375 48.3125Q440.35938 48.921875 441.23438 49.3125Q442.125 49.6875 443.21875 49.6875Q444.20312 49.6875 444.9375 49.40625Q445.6875 49.109375 446.04688 48.609375Q446.42188 48.09375 446.42188 47.5Q446.42188 46.890625 446.0625 46.4375Q445.71875 45.984375 444.90625 45.6875Q444.39062 45.484375 442.60938 45.0625Q440.84375 44.625 440.125 44.25Q439.20312 43.765625 438.75 43.046875Q438.29688 42.328125 438.29688 41.453125Q438.29688 40.46875 438.84375 39.625Q439.40625 38.78125 440.46875 38.34375Q441.53125 37.890625 442.84375 37.890625Q444.26562 37.890625 445.35938 38.359375Q446.45312 38.8125 447.04688 39.71875Q447.64062 40.609375 447.67188 41.75L446.04688 41.875Q445.90625 40.640625 445.14062 40.015625Q444.39062 39.390625 442.90625 39.390625Q441.35938 39.390625 440.64062 39.96875Q439.9375 40.53125 439.9375 41.328125Q439.9375 42.03125 440.45312 42.46875Q440.9375 42.921875 443.01562 43.390625Q445.09375 43.859375 445.875 44.21875Q447.0 44.734375 447.53125 45.53125Q448.0625 46.328125 448.0625 47.359375Q448.0625 48.390625 447.46875 
49.296875Q446.89062 50.203125 445.78125 50.71875Q444.6875 51.21875 443.29688 51.21875Q441.54688 51.21875 440.35938 50.71875Q439.1875 50.203125 438.51562 49.1875Q437.84375 48.15625 437.8125 46.859375ZM450.20312 39.9375L450.20312 38.109375L451.78125 38.109375L451.78125 39.9375L450.20312 39.9375ZM450.20312 51.0L450.20312 41.671875L451.78125 41.671875L451.78125 51.0L450.20312 51.0ZM453.35938 51.0L453.35938 49.71875L459.29688 42.890625Q458.28125 42.953125 457.51562 42.953125L453.70312 42.953125L453.70312 41.671875L461.32812 41.671875L461.32812 42.71875L456.28125 48.640625L455.29688 49.71875Q456.35938 49.640625 457.29688 49.640625L461.60938 49.640625L461.60938 51.0L453.35938 51.0ZM468.57812 48.0L470.21875 48.203125Q469.82812 49.625 468.78125 50.421875Q467.73438 51.21875 466.10938 51.21875Q464.0625 51.21875 462.85938 49.953125Q461.65625 48.6875 461.65625 46.40625Q461.65625 44.0625 462.85938 42.765625Q464.07812 41.453125 466.01562 41.453125Q467.89062 41.453125 469.07812 42.734375Q470.26562 44.0 470.26562 46.3125Q470.26562 46.453125 470.25 46.734375L463.29688 46.734375Q463.375 48.28125 464.15625 49.09375Q464.95312 49.90625 466.10938 49.90625Q466.98438 49.90625 467.59375 49.453125Q468.21875 49.0 468.57812 48.0ZM463.375 45.4375L468.59375 45.4375Q468.48438 44.265625 468.0 43.671875Q467.23438 42.75 466.03125 42.75Q464.95312 42.75 464.20312 43.484375Q463.46875 44.21875 463.375 45.4375ZM482.70312 51.0L481.125 51.0L481.125 40.921875Q480.54688 41.46875 479.625 42.015625Q478.70312 42.546875 477.95312 42.828125L477.95312 41.296875Q479.28125 40.671875 480.28125 39.78125Q481.28125 38.890625 481.6875 38.0625L482.70312 38.0625L482.70312 51.0ZM491.57812 47.125L491.57812 45.546875L496.4375 45.546875L496.4375 47.125L491.57812 47.125ZM503.42188 51.0L503.42188 38.109375L509.125 38.109375Q510.84375 38.109375 511.73438 38.46875Q512.6406 38.8125 513.1719 39.6875Q513.71875 40.5625 513.71875 41.625Q513.71875 43.0 512.8281 43.9375Q511.9375 44.875 510.09375 45.140625Q510.76562 45.46875 511.10938 45.78125Q511.85938 46.46875 512.53125 47.5L514.7656 51.0L512.625 51.0L510.92188 48.3125Q510.17188 47.15625 509.6875 46.546875Q509.20312 45.921875 508.8125 45.6875Q508.4375 45.4375 508.04688 45.34375Q507.75 45.28125 507.09375 45.28125L505.125 45.28125L505.125 51.0L503.42188 51.0ZM505.125 43.796875L508.78125 43.796875Q509.95312 43.796875 510.60938 43.5625Q511.26562 43.3125 511.60938 42.78125Q511.95312 42.25 511.95312 41.625Q511.95312 40.71875 511.28125 40.125Q510.625 39.53125 509.20312 39.53125L505.125 39.53125L505.125 43.796875ZM522.5781 48.0L524.21875 48.203125Q523.8281 49.625 522.78125 50.421875Q521.7344 51.21875 520.1094 51.21875Q518.0625 51.21875 516.8594 49.953125Q515.65625 48.6875 515.65625 46.40625Q515.65625 44.0625 516.8594 42.765625Q518.0781 41.453125 520.0156 41.453125Q521.8906 41.453125 523.0781 42.734375Q524.2656 44.0 524.2656 46.3125Q524.2656 46.453125 524.25 46.734375L517.2969 46.734375Q517.375 48.28125 518.15625 49.09375Q518.9531 49.90625 520.1094 49.90625Q520.9844 49.90625 521.59375 49.453125Q522.21875 49.0 522.5781 48.0ZM517.375 45.4375L522.59375 45.4375Q522.4844 44.265625 522.0 43.671875Q521.2344 42.75 520.03125 42.75Q518.9531 42.75 518.2031 43.484375Q517.46875 44.21875 517.375 45.4375ZM526.15625 51.0L526.15625 38.109375L527.7344 38.109375L527.7344 51.0L526.15625 51.0ZM536.28125 49.84375Q535.40625 50.59375 534.5781 50.90625Q533.7656 51.21875 532.84375 51.21875Q531.2969 51.21875 530.46875 50.46875Q529.65625 49.703125 529.65625 48.53125Q529.65625 47.859375 529.96875 47.296875Q530.28125 46.71875 530.78125 
46.375Q531.28125 46.03125 531.9219 45.859375Q532.3906 45.734375 533.3281 45.625Q535.2344 45.390625 536.1406 45.078125Q536.15625 44.75 536.15625 44.65625Q536.15625 43.703125 535.7031 43.296875Q535.09375 42.765625 533.90625 42.765625Q532.78125 42.765625 532.25 43.15625Q531.71875 43.546875 531.46875 44.546875L529.9219 44.328125Q530.1406 43.328125 530.6094 42.71875Q531.09375 42.109375 532.0 41.78125Q532.9219 41.453125 534.1406 41.453125Q535.3281 41.453125 536.0781 41.734375Q536.8281 42.015625 537.1719 42.453125Q537.53125 42.875 537.6719 43.515625Q537.75 43.921875 537.75 44.984375L537.75 47.09375Q537.75 49.296875 537.84375 49.890625Q537.9531 50.46875 538.25 51.0L536.59375 51.0Q536.34375 50.5 536.28125 49.84375ZM536.1406 46.3125Q535.28125 46.671875 533.5625 46.90625Q532.59375 47.046875 532.1875 47.234375Q531.78125 47.40625 531.5625 47.75Q531.34375 48.078125 531.34375 48.5Q531.34375 49.125 531.8125 49.546875Q532.2969 49.96875 533.21875 49.96875Q534.1406 49.96875 534.84375 49.578125Q535.5625 49.171875 535.8906 48.484375Q536.1406 47.9375 536.1406 46.890625L536.1406 46.3125ZM543.6406 49.578125L543.875 50.984375Q543.2031 51.125 542.6719 51.125Q541.8125 51.125 541.3281 50.859375Q540.8594 50.578125 540.6719 50.140625Q540.4844 49.6875 540.4844 48.265625L540.4844 42.890625L539.3125 42.890625L539.3125 41.671875L540.4844 41.671875L540.4844 39.359375L542.0469 38.40625L542.0469 41.671875L543.6406 41.671875L543.6406 42.890625L542.0469 42.890625L542.0469 48.359375Q542.0469 49.03125 542.125 49.234375Q542.21875 49.421875 542.40625 49.53125Q542.59375 49.640625 542.9531 49.640625Q543.2031 49.640625 543.6406 49.578125ZM545.2031 39.9375L545.2031 38.109375L546.78125 38.109375L546.78125 39.9375L545.2031 39.9375ZM545.2031 51.0L545.2031 41.671875L546.78125 41.671875L546.78125 51.0L545.2031 51.0ZM551.78125 51.0L548.2344 41.671875L549.90625 41.671875L551.90625 47.25Q552.2344 48.15625 552.5 49.140625Q552.71875 48.390625 553.09375 47.359375L555.15625 41.671875L556.7969 41.671875L553.25 51.0L551.78125 51.0ZM564.5781 48.0L566.21875 48.203125Q565.8281 49.625 564.78125 50.421875Q563.7344 51.21875 562.1094 51.21875Q560.0625 51.21875 558.8594 49.953125Q557.65625 48.6875 557.65625 46.40625Q557.65625 44.0625 558.8594 42.765625Q560.0781 41.453125 562.0156 41.453125Q563.8906 41.453125 565.0781 42.734375Q566.2656 44.0 566.2656 46.3125Q566.2656 46.453125 566.25 46.734375L559.2969 46.734375Q559.375 48.28125 560.15625 49.09375Q560.9531 49.90625 562.1094 49.90625Q562.9844 49.90625 563.59375 49.453125Q564.21875 49.0 564.5781 48.0ZM559.375 45.4375L564.59375 45.4375Q564.4844 44.265625 564.0 43.671875Q563.2344 42.75 562.03125 42.75Q560.9531 42.75 560.2031 43.484375Q559.46875 44.21875 559.375 45.4375ZM573.3906 51.0L573.3906 38.109375L577.8281 38.109375Q579.3281 38.109375 580.125 38.296875Q581.2344 38.546875 582.0156 39.21875Q583.03125 40.078125 583.53125 41.421875Q584.0469 42.765625 584.0469 44.484375Q584.0469 45.953125 583.7031 47.09375Q583.3594 48.21875 582.8125 48.96875Q582.28125 49.703125 581.6406 50.140625Q581.0156 50.5625 580.1094 50.78125Q579.2031 51.0 578.03125 51.0L573.3906 51.0ZM575.09375 49.484375L577.84375 49.484375Q579.125 49.484375 579.84375 49.25Q580.5625 49.0 581.0 48.578125Q581.6094 47.96875 581.9375 46.953125Q582.28125 45.921875 582.28125 44.453125Q582.28125 42.4375 581.6094 41.359375Q580.9531 40.265625 580.0 39.90625Q579.3281 39.640625 577.7969 39.640625L575.09375 39.640625L575.09375 49.484375ZM586.2031 39.9375L586.2031 38.109375L587.78125 38.109375L587.78125 39.9375L586.2031 39.9375ZM586.2031 51.0L586.2031 
41.671875L587.78125 41.671875L587.78125 51.0L586.2031 51.0ZM590.5625 51.0L590.5625 42.890625L589.1719 42.890625L589.1719 41.671875L590.5625 41.671875L590.5625 40.671875Q590.5625 39.734375 590.7344 39.28125Q590.9531 38.65625 591.53125 38.28125Q592.1094 37.890625 593.15625 37.890625Q593.8125 37.890625 594.625 38.046875L594.3906 39.4375Q593.8906 39.34375 593.4531 39.34375Q592.7344 39.34375 592.4375 39.65625Q592.1406 39.953125 592.1406 40.796875L592.1406 41.671875L593.9531 41.671875L593.9531 42.890625L592.1406 42.890625L592.1406 51.0L590.5625 51.0ZM595.5625 51.0L595.5625 42.890625L594.1719 42.890625L594.1719 41.671875L595.5625 41.671875L595.5625 40.671875Q595.5625 39.734375 595.7344 39.28125Q595.9531 38.65625 596.53125 38.28125Q597.1094 37.890625 598.15625 37.890625Q598.8125 37.890625 599.625 38.046875L599.3906 39.4375Q598.8906 39.34375 598.4531 39.34375Q597.7344 39.34375 597.4375 39.65625Q597.1406 39.953125 597.1406 40.796875L597.1406 41.671875L598.9531 41.671875L598.9531 42.890625L597.1406 42.890625L597.1406 51.0L595.5625 51.0ZM606.5781 48.0L608.21875 48.203125Q607.8281 49.625 606.78125 50.421875Q605.7344 51.21875 604.1094 51.21875Q602.0625 51.21875 600.8594 49.953125Q599.65625 48.6875 599.65625 46.40625Q599.65625 44.0625 600.8594 42.765625Q602.0781 41.453125 604.0156 41.453125Q605.8906 41.453125 607.0781 42.734375Q608.2656 44.0 608.2656 46.3125Q608.2656 46.453125 608.25 46.734375L601.2969 46.734375Q601.375 48.28125 602.15625 49.09375Q602.9531 49.90625 604.1094 49.90625Q604.9844 49.90625 605.59375 49.453125Q606.21875 49.0 606.5781 48.0ZM601.375 45.4375L606.59375 45.4375Q606.4844 44.265625 606.0 43.671875Q605.2344 42.75 604.03125 42.75Q602.9531 42.75 602.2031 43.484375Q601.46875 44.21875 601.375 45.4375ZM610.1719 51.0L610.1719 41.671875L611.59375 41.671875L611.59375 43.078125Q612.1406 42.09375 612.59375 41.78125Q613.0625 41.453125 613.6094 41.453125Q614.4219 41.453125 615.2344 41.96875L614.7031 43.4375Q614.1094 43.09375 613.53125 43.09375Q613.0156 43.09375 612.59375 43.40625Q612.1875 43.71875 612.0156 44.265625Q611.75 45.109375 611.75 46.109375L611.75 51.0L610.1719 51.0ZM622.5781 48.0L624.21875 48.203125Q623.8281 49.625 622.78125 50.421875Q621.7344 51.21875 620.1094 51.21875Q618.0625 51.21875 616.8594 49.953125Q615.65625 48.6875 615.65625 46.40625Q615.65625 44.0625 616.8594 42.765625Q618.0781 41.453125 620.0156 41.453125Q621.8906 41.453125 623.0781 42.734375Q624.2656 44.0 624.2656 46.3125Q624.2656 46.453125 624.25 46.734375L617.2969 46.734375Q617.375 48.28125 618.15625 49.09375Q618.9531 49.90625 620.1094 49.90625Q620.9844 49.90625 621.59375 49.453125Q622.21875 49.0 622.5781 48.0ZM617.375 45.4375L622.59375 45.4375Q622.4844 44.265625 622.0 43.671875Q621.2344 42.75 620.03125 42.75Q618.9531 42.75 618.2031 43.484375Q617.46875 44.21875 617.375 45.4375ZM626.1875 51.0L626.1875 41.671875L627.6094 41.671875L627.6094 43.0Q628.6406 41.453125 630.5781 41.453125Q631.4219 41.453125 632.125 41.765625Q632.84375 42.0625 633.1875 42.5625Q633.5469 43.046875 633.6875 43.71875Q633.7656 44.15625 633.7656 45.265625L633.7656 51.0L632.1875 51.0L632.1875 45.328125Q632.1875 44.359375 632.0 43.875Q631.8281 43.390625 631.34375 43.109375Q630.875 42.828125 630.25 42.828125Q629.2344 42.828125 628.5 43.46875Q627.7656 44.109375 627.7656 45.90625L627.7656 51.0L626.1875 51.0ZM642.28125 47.578125L643.8281 47.78125Q643.5781 49.390625 642.53125 50.3125Q641.4844 51.21875 639.9531 51.21875Q638.03125 51.21875 636.8594 49.96875Q635.7031 48.703125 635.7031 46.375Q635.7031 44.859375 636.2031 43.734375Q636.7031 42.59375 637.71875 
42.03125Q638.75 41.453125 639.9531 41.453125Q641.4844 41.453125 642.4375 42.234375Q643.40625 43.0 643.6875 44.40625L642.1406 44.640625Q641.9219 43.703125 641.3594 43.234375Q640.8125 42.75 640.0156 42.75Q638.8281 42.75 638.0781 43.609375Q637.3281 44.46875 637.3281 46.328125Q637.3281 48.203125 638.0469 49.0625Q638.7656 49.90625 639.9375 49.90625Q640.8594 49.90625 641.4844 49.34375Q642.1094 48.765625 642.28125 47.578125ZM651.5781 48.0L653.21875 48.203125Q652.8281 49.625 651.78125 50.421875Q650.7344 51.21875 649.1094 51.21875Q647.0625 51.21875 645.8594 49.953125Q644.65625 48.6875 644.65625 46.40625Q644.65625 44.0625 645.8594 42.765625Q647.0781 41.453125 649.0156 41.453125Q650.8906 41.453125 652.0781 42.734375Q653.2656 44.0 653.2656 46.3125Q653.2656 46.453125 653.25 46.734375L646.2969 46.734375Q646.375 48.28125 647.15625 49.09375Q647.9531 49.90625 649.1094 49.90625Q649.9844 49.90625 650.59375 49.453125Q651.21875 49.0 651.5781 48.0ZM646.375 45.4375L651.59375 45.4375Q651.4844 44.265625 651.0 43.671875Q650.2344 42.75 649.03125 42.75Q647.9531 42.75 647.2031 43.484375Q646.46875 44.21875 646.375 45.4375ZM658.96875 51.0L663.9219 38.109375L665.7656 38.109375L671.03125 51.0L669.09375 51.0L667.59375 47.09375L662.2031 47.09375L660.78125 51.0L658.96875 51.0ZM662.6875 45.703125L667.0625 45.703125L665.71875 42.140625Q665.09375 40.515625 664.7969 39.46875Q664.5625 40.703125 664.1094 41.921875L662.6875 45.703125ZM672.6406 51.0L671.1719 51.0L671.1719 38.109375L672.7656 38.109375L672.7656 42.71875Q673.7656 41.453125 675.3125 41.453125Q676.1719 41.453125 676.9375 41.8125Q677.71875 42.15625 678.2031 42.78125Q678.7031 43.40625 678.9844 44.296875Q679.2656 45.1875 679.2656 46.1875Q679.2656 48.59375 678.0781 49.90625Q676.90625 51.21875 675.2344 51.21875Q673.59375 51.21875 672.6406 49.828125L672.6406 51.0ZM672.625 46.265625Q672.625 47.9375 673.0781 48.6875Q673.8281 49.90625 675.1094 49.90625Q676.1406 49.90625 676.8906 49.015625Q677.65625 48.109375 677.65625 46.328125Q677.65625 44.5 676.9219 43.625Q676.2031 42.75 675.1719 42.75Q674.1406 42.75 673.375 43.65625Q672.625 44.5625 672.625 46.265625ZM680.5469 48.21875L682.125 47.96875Q682.25 48.90625 682.84375 49.40625Q683.4531 49.90625 684.53125 49.90625Q685.625 49.90625 686.15625 49.46875Q686.6875 49.015625 686.6875 48.421875Q686.6875 47.890625 686.21875 47.578125Q685.8906 47.375 684.59375 47.046875Q682.8594 46.609375 682.1875 46.296875Q681.5156 45.96875 681.15625 45.40625Q680.8125 44.828125 680.8125 44.140625Q680.8125 43.515625 681.09375 42.984375Q681.3906 42.453125 681.875 42.109375Q682.25 41.828125 682.8906 41.640625Q683.53125 41.453125 684.25 41.453125Q685.3594 41.453125 686.1875 41.78125Q687.0156 42.09375 687.40625 42.640625Q687.8125 43.171875 687.9531 44.078125L686.40625 44.28125Q686.2969 43.5625 685.7969 43.15625Q685.2969 42.75 684.375 42.75Q683.28125 42.75 682.8125 43.109375Q682.34375 43.46875 682.34375 43.953125Q682.34375 44.265625 682.5469 44.515625Q682.7344 44.765625 683.1406 44.9375Q683.3906 45.015625 684.5469 45.34375Q686.21875 45.78125 686.875 46.078125Q687.5469 46.359375 687.9219 46.90625Q688.3125 47.453125 688.3125 48.25Q688.3125 49.046875 687.84375 49.75Q687.3906 50.453125 686.5156 50.84375Q685.6406 51.21875 684.5469 51.21875Q682.71875 51.21875 681.7656 50.46875Q680.8125 49.703125 680.5469 48.21875ZM689.59375 46.328125Q689.59375 43.734375 691.0469 42.484375Q692.25 41.453125 693.96875 41.453125Q695.90625 41.453125 697.125 42.71875Q698.34375 43.984375 698.34375 46.203125Q698.34375 48.0 697.7969 49.03125Q697.2656 50.0625 696.2344 50.640625Q695.2031 51.21875 
693.96875 51.21875Q692.0156 51.21875 690.7969 49.96875Q689.59375 48.703125 689.59375 46.328125ZM691.21875 46.328125Q691.21875 48.125 692.0 49.015625Q692.78125 49.90625 693.96875 49.90625Q695.15625 49.90625 695.9375 49.015625Q696.71875 48.109375 696.71875 46.28125Q696.71875 44.546875 695.9219 43.65625Q695.1406 42.765625 693.96875 42.765625Q692.78125 42.765625 692.0 43.65625Q691.21875 44.546875 691.21875 46.328125ZM700.15625 51.0L700.15625 38.109375L701.7344 38.109375L701.7344 51.0L700.15625 51.0ZM710.2969 51.0L710.2969 49.625Q709.21875 51.21875 707.34375 51.21875Q706.5156 51.21875 705.7969 50.90625Q705.0781 50.578125 704.7344 50.109375Q704.3906 49.625 704.25 48.921875Q704.15625 48.453125 704.15625 47.453125L704.15625 41.671875L705.7344 41.671875L705.7344 46.84375Q705.7344 48.078125 705.8281 48.515625Q705.9844 49.140625 706.46875 49.5Q706.9531 49.84375 707.65625 49.84375Q708.375 49.84375 709.0 49.484375Q709.625 49.125 709.875 48.5Q710.1406 47.859375 710.1406 46.671875L710.1406 41.671875L711.71875 41.671875L711.71875 51.0L710.2969 51.0ZM717.6406 49.578125L717.875 50.984375Q717.2031 51.125 716.6719 51.125Q715.8125 51.125 715.3281 50.859375Q714.8594 50.578125 714.6719 50.140625Q714.4844 49.6875 714.4844 48.265625L714.4844 42.890625L713.3125 42.890625L713.3125 41.671875L714.4844 41.671875L714.4844 39.359375L716.0469 38.40625L716.0469 41.671875L717.6406 41.671875L717.6406 42.890625L716.0469 42.890625L716.0469 48.359375Q716.0469 49.03125 716.125 49.234375Q716.21875 49.421875 716.40625 49.53125Q716.59375 49.640625 716.9531 49.640625Q717.2031 49.640625 717.6406 49.578125ZM725.5781 48.0L727.21875 48.203125Q726.8281 49.625 725.78125 50.421875Q724.7344 51.21875 723.1094 51.21875Q721.0625 51.21875 719.8594 49.953125Q718.65625 48.6875 718.65625 46.40625Q718.65625 44.0625 719.8594 42.765625Q721.0781 41.453125 723.0156 41.453125Q724.8906 41.453125 726.0781 42.734375Q727.2656 44.0 727.2656 46.3125Q727.2656 46.453125 727.25 46.734375L720.2969 46.734375Q720.375 48.28125 721.15625 49.09375Q721.9531 49.90625 723.1094 49.90625Q723.9844 49.90625 724.59375 49.453125Q725.21875 49.0 725.5781 48.0ZM720.375 45.4375L725.59375 45.4375Q725.4844 44.265625 725.0 43.671875Q724.2344 42.75 723.03125 42.75Q721.9531 42.75 721.2031 43.484375Q720.46875 44.21875 720.375 45.4375ZM734.3125 51.0L734.3125 38.109375L739.15625 38.109375Q740.625 38.109375 741.5156 38.5Q742.40625 38.890625 742.90625 39.703125Q743.4219 40.515625 743.4219 41.40625Q743.4219 42.234375 742.96875 42.96875Q742.53125 43.703125 741.625 44.140625Q742.7969 44.484375 743.4219 45.3125Q744.0469 46.140625 744.0469 47.265625Q744.0469 48.171875 743.65625 48.953125Q743.28125 49.71875 742.71875 50.140625Q742.15625 50.5625 741.3125 50.78125Q740.46875 51.0 739.2344 51.0L734.3125 51.0ZM736.03125 43.53125L738.8125 43.53125Q739.9375 43.53125 740.4375 43.375Q741.09375 43.1875 741.4219 42.75Q741.75 42.296875 741.75 41.609375Q741.75 40.96875 741.4375 40.484375Q741.125 40.0 740.5469 39.828125Q739.9844 39.640625 738.59375 39.640625L736.03125 39.640625L736.03125 43.53125ZM736.03125 49.484375L739.2344 49.484375Q740.0625 49.484375 740.3906 49.421875Q740.9844 49.3125 741.375 49.0625Q741.7656 48.8125 742.0156 48.34375Q742.28125 47.875 742.28125 47.265625Q742.28125 46.546875 741.90625 46.015625Q741.5469 45.484375 740.8906 45.265625Q740.2344 45.046875 739.0 45.046875L736.03125 45.046875L736.03125 49.484375ZM752.5781 48.0L754.21875 48.203125Q753.8281 49.625 752.78125 50.421875Q751.7344 51.21875 750.1094 51.21875Q748.0625 51.21875 746.8594 49.953125Q745.65625 48.6875 745.65625 
46.40625Q745.65625 44.0625 746.8594 42.765625Q748.0781 41.453125 750.0156 41.453125Q751.8906 41.453125 753.0781 42.734375Q754.2656 44.0 754.2656 46.3125Q754.2656 46.453125 754.25 46.734375L747.2969 46.734375Q747.375 48.28125 748.15625 49.09375Q748.9531 49.90625 750.1094 49.90625Q750.9844 49.90625 751.59375 49.453125Q752.21875 49.0 752.5781 48.0ZM747.375 45.4375L752.59375 45.4375Q752.4844 44.265625 752.0 43.671875Q751.2344 42.75 750.03125 42.75Q748.9531 42.75 748.2031 43.484375Q747.46875 44.21875 747.375 45.4375ZM755.5469 48.21875L757.125 47.96875Q757.25 48.90625 757.84375 49.40625Q758.4531 49.90625 759.53125 49.90625Q760.625 49.90625 761.15625 49.46875Q761.6875 49.015625 761.6875 48.421875Q761.6875 47.890625 761.21875 47.578125Q760.8906 47.375 759.59375 47.046875Q757.8594 46.609375 757.1875 46.296875Q756.5156 45.96875 756.15625 45.40625Q755.8125 44.828125 755.8125 44.140625Q755.8125 43.515625 756.09375 42.984375Q756.3906 42.453125 756.875 42.109375Q757.25 41.828125 757.8906 41.640625Q758.53125 41.453125 759.25 41.453125Q760.3594 41.453125 761.1875 41.78125Q762.0156 42.09375 762.40625 42.640625Q762.8125 43.171875 762.9531 44.078125L761.40625 44.28125Q761.2969 43.5625 760.7969 43.15625Q760.2969 42.75 759.375 42.75Q758.28125 42.75 757.8125 43.109375Q757.34375 43.46875 757.34375 43.953125Q757.34375 44.265625 757.5469 44.515625Q757.7344 44.765625 758.1406 44.9375Q758.3906 45.015625 759.5469 45.34375Q761.21875 45.78125 761.875 46.078125Q762.5469 46.359375 762.9219 46.90625Q763.3125 47.453125 763.3125 48.25Q763.3125 49.046875 762.84375 49.75Q762.3906 50.453125 761.5156 50.84375Q760.6406 51.21875 759.5469 51.21875Q757.71875 51.21875 756.7656 50.46875Q755.8125 49.703125 755.5469 48.21875ZM768.6406 49.578125L768.875 50.984375Q768.2031 51.125 767.6719 51.125Q766.8125 51.125 766.3281 50.859375Q765.8594 50.578125 765.6719 50.140625Q765.4844 49.6875 765.4844 48.265625L765.4844 42.890625L764.3125 42.890625L764.3125 41.671875L765.4844 41.671875L765.4844 39.359375L767.0469 38.40625L767.0469 41.671875L768.6406 41.671875L768.6406 42.890625L767.0469 42.890625L767.0469 48.359375Q767.0469 49.03125 767.125 49.234375Q767.21875 49.421875 767.40625 49.53125Q767.59375 49.640625 767.9531 49.640625Q768.2031 49.640625 768.6406 49.578125ZM775.3125 51.0L775.3125 38.109375L777.03125 38.109375L777.03125 49.484375L783.375 49.484375L783.375 51.0L775.3125 51.0ZM791.28125 49.84375Q790.40625 50.59375 789.5781 50.90625Q788.7656 51.21875 787.84375 51.21875Q786.2969 51.21875 785.46875 50.46875Q784.65625 49.703125 784.65625 48.53125Q784.65625 47.859375 784.96875 47.296875Q785.28125 46.71875 785.78125 46.375Q786.28125 46.03125 786.9219 45.859375Q787.3906 45.734375 788.3281 45.625Q790.2344 45.390625 791.1406 45.078125Q791.15625 44.75 791.15625 44.65625Q791.15625 43.703125 790.7031 43.296875Q790.09375 42.765625 788.90625 42.765625Q787.78125 42.765625 787.25 43.15625Q786.71875 43.546875 786.46875 44.546875L784.9219 44.328125Q785.1406 43.328125 785.6094 42.71875Q786.09375 42.109375 787.0 41.78125Q787.9219 41.453125 789.1406 41.453125Q790.3281 41.453125 791.0781 41.734375Q791.8281 42.015625 792.1719 42.453125Q792.53125 42.875 792.6719 43.515625Q792.75 43.921875 792.75 44.984375L792.75 47.09375Q792.75 49.296875 792.84375 49.890625Q792.9531 50.46875 793.25 51.0L791.59375 51.0Q791.34375 50.5 791.28125 49.84375ZM791.1406 46.3125Q790.28125 46.671875 788.5625 46.90625Q787.59375 47.046875 787.1875 47.234375Q786.78125 47.40625 786.5625 47.75Q786.34375 48.078125 786.34375 48.5Q786.34375 49.125 786.8125 49.546875Q787.2969 49.96875 788.21875 
Z" fill-rule="nonzero"/><!-- remaining text glyph outline paths of this chart are not recoverable as text --></svg>
8
0
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2
hf_public_repos/blog/assets/35_bert_cpu_scaling_part_2/openmp/openmp_torchscript_latency.svg
<svg width="5382" height="3156" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="1855" y="121" width="5382" height="3156"/></clipPath><clipPath id="clip1"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip2"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2164.5" y1="2558" x2="2164.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill3"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip4"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2376.5" y1="2753" x2="2376.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill5"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip6"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2588.5" y1="2790" x2="2588.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill7"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip8"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2800.5" y1="2800" x2="2800.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill9"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip10"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3012" y1="2798" x2="3012" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill11"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip12"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3224" y1="2540" x2="3224" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill13"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip14"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3436" y1="2747" x2="3436" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill15"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip16"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3648" y1="2786" x2="3648" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill17"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip18"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3859.5" y1="2798" x2="3859.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill19"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip20"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4071.5" y1="2795" x2="4071.5" y2="2815" 
gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill21"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip22"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4283.5" y1="2267" x2="4283.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill23"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip24"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4495.5" y1="2673" x2="4495.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill25"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip26"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4707.5" y1="2762" x2="4707.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill27"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip28"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4919" y1="2787" x2="4919" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill29"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip30"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5131" y1="2780" x2="5131" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill31"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip32"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5343" y1="860" x2="5343" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill33"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip34"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5554.5" y1="2341" x2="5554.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill35"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip36"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5766.5" y1="2678" x2="5766.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill37"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip38"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5978.5" y1="2744" x2="5978.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill39"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip40"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6190.5" y1="2753" x2="6190.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill41"><stop offset="0" 
stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip42"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6402.5" y1="2048" x2="6402.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill43"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip44"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6614.5" y1="2582" x2="6614.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill45"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip46"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6826" y1="2666" x2="6826" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill47"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip48"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="7038" y1="2666" x2="7038" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill49"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip50"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2245.5" y1="2558" x2="2245.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill51"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip52"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2457.5" y1="2754" x2="2457.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill53"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip54"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2669.5" y1="2788" x2="2669.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill55"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip56"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="2881.5" y1="2799" x2="2881.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill57"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip58"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3093.5" y1="2793" x2="3093.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill59"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip60"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3305" y1="2538" x2="3305" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill61"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" 
stop-color="#E56B17"/></linearGradient><clipPath id="clip62"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3517" y1="2748" x2="3517" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill63"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip64"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3729" y1="2785" x2="3729" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill65"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip66"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="3941" y1="2798" x2="3941" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill67"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip68"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4152.5" y1="2790" x2="4152.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill69"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip70"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4364.5" y1="2269" x2="4364.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill71"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip72"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4576.5" y1="2674" x2="4576.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill73"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip74"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="4788.5" y1="2760" x2="4788.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill75"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip76"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5000.5" y1="2785" x2="5000.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill77"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip78"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5212" y1="2772" x2="5212" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill79"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip80"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5424" y1="812" x2="5424" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill81"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip82"><rect x="2099" y="482" width="5088" 
height="2336"/></clipPath><linearGradient x1="5636" y1="2357" x2="5636" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill83"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip84"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="5847.5" y1="2675" x2="5847.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill85"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip86"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6059.5" y1="2735" x2="6059.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill87"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip88"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6271.5" y1="2744" x2="6271.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill89"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip90"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6483.5" y1="2066" x2="6483.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill91"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip92"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6695.5" y1="2547" x2="6695.5" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill93"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip94"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="6907" y1="2724" x2="6907" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill95"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip96"><rect x="2099" y="482" width="5088" height="2336"/></clipPath><linearGradient x1="7119" y1="2668" x2="7119" y2="2815" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill97"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip98"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip99"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip100"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip101"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip102"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip103"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip104"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip105"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip106"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip107"><rect x="1855" y="122" width="5379" 
height="3152"/></clipPath><clipPath id="clip108"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip109"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip110"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip111"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip112"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip113"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip114"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip115"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip116"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip117"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip118"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip119"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip120"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip121"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip122"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip123"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip124"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip125"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip126"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip127"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip128"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip129"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip130"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip131"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip132"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip133"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip134"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip135"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip136"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip137"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip138"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip139"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip140"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip141"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip142"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip143"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip144"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip145"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip146"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip147"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip148"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip149"><rect x="1855" 
y="122" width="5379" height="3152"/></clipPath><clipPath id="clip150"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip151"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip152"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip153"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip154"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip155"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip156"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip157"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip158"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip159"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip160"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip161"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip162"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip163"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip164"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip165"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip166"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip167"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip168"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip169"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip170"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip171"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip172"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip173"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip174"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip175"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip176"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip177"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip178"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip179"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip180"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip181"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip182"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip183"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip184"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip185"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip186"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip187"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip188"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip189"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip190"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath 
id="clip191"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip192"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip193"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip194"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><linearGradient x1="4444" y1="3197" x2="4444" y2="3220" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill195"><stop offset="0" stop-color="#6083CB"/><stop offset="0.5" stop-color="#3E70CA"/><stop offset="1" stop-color="#2E61BA"/></linearGradient><clipPath id="clip196"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><clipPath id="clip197"><rect x="1855" y="122" width="5379" height="3152"/></clipPath><linearGradient x1="4591" y1="3197" x2="4591" y2="3220" gradientUnits="userSpaceOnUse" spreadMethod="reflect" id="fill198"><stop offset="0" stop-color="#F18C55"/><stop offset="0.5" stop-color="#F67B28"/><stop offset="1" stop-color="#E56B17"/></linearGradient><clipPath id="clip199"><rect x="1855" y="122" width="5379" height="3152"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-1855 -121)"><rect x="1856" y="122" width="5379" height="3153" fill="#FFFFFF"/><g clip-path="url(#clip1)"><path d="M2099.5 2426.58 7184.5 2426.58M2099.5 2037.57 7184.5 2037.57M2099.5 1648.55 7184.5 1648.55M2099.5 1260.54 7184.5 1260.54M2099.5 871.529 7184.5 871.529M2099.5 482.5 7184.5 482.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip2)"><rect x="2132" y="2558" width="65" height="257" fill="url(#fill3)"/></g><g clip-path="url(#clip4)"><rect x="2344" y="2753" width="65" height="61.9998" fill="url(#fill5)"/></g><g clip-path="url(#clip6)"><rect x="2556" y="2790" width="65" height="25" fill="url(#fill7)"/></g><g clip-path="url(#clip8)"><rect x="2768" y="2800" width="65" height="15.0002" fill="url(#fill9)"/></g><g clip-path="url(#clip10)"><rect x="2979" y="2798" width="66" height="16.9998" fill="url(#fill11)"/></g><g clip-path="url(#clip12)"><rect x="3191" y="2540" width="66" height="275" fill="url(#fill13)"/></g><g clip-path="url(#clip14)"><rect x="3403" y="2747" width="66" height="68" fill="url(#fill15)"/></g><g clip-path="url(#clip16)"><rect x="3615" y="2786" width="66" height="28.9998" fill="url(#fill17)"/></g><g clip-path="url(#clip18)"><rect x="3827" y="2798" width="65" height="16.9998" fill="url(#fill19)"/></g><g clip-path="url(#clip20)"><rect x="4039" y="2795" width="65" height="20" fill="url(#fill21)"/></g><g clip-path="url(#clip22)"><rect x="4251" y="2267" width="65" height="548" fill="url(#fill23)"/></g><g clip-path="url(#clip24)"><rect x="4463" y="2673" width="65" height="142" fill="url(#fill25)"/></g><g clip-path="url(#clip26)"><rect x="4675" y="2762" width="65" height="53" fill="url(#fill27)"/></g><g clip-path="url(#clip28)"><rect x="4886" y="2787" width="66" height="27.9998" fill="url(#fill29)"/></g><g clip-path="url(#clip30)"><rect x="5098" y="2780" width="66" height="35" fill="url(#fill31)"/></g><g clip-path="url(#clip32)"><rect x="5310" y="860" width="66" height="1955" fill="url(#fill33)"/></g><g clip-path="url(#clip34)"><rect x="5522" y="2341" width="65" height="474" fill="url(#fill35)"/></g><g clip-path="url(#clip36)"><rect x="5734" y="2678" width="65" height="137" fill="url(#fill37)"/></g><g clip-path="url(#clip38)"><rect x="5946" y="2744" width="65.0005" height="70.9998" fill="url(#fill39)"/></g><g clip-path="url(#clip40)"><rect x="6158" y="2753" 
width="65" height="61.9998" fill="url(#fill41)"/></g><g clip-path="url(#clip42)"><rect x="6370" y="2048" width="65" height="767" fill="url(#fill43)"/></g><g clip-path="url(#clip44)"><rect x="6582" y="2582" width="65" height="233" fill="url(#fill45)"/></g><g clip-path="url(#clip46)"><rect x="6793" y="2666" width="66" height="149" fill="url(#fill47)"/></g><g clip-path="url(#clip48)"><rect x="7005" y="2666" width="66" height="149" fill="url(#fill49)"/></g><g clip-path="url(#clip50)"><rect x="2213" y="2558" width="65" height="257" fill="url(#fill51)"/></g><g clip-path="url(#clip52)"><rect x="2425" y="2754" width="65" height="60.9998" fill="url(#fill53)"/></g><g clip-path="url(#clip54)"><rect x="2637" y="2788" width="65" height="26.9998" fill="url(#fill55)"/></g><g clip-path="url(#clip56)"><rect x="2849" y="2799" width="65" height="15.9998" fill="url(#fill57)"/></g><g clip-path="url(#clip58)"><rect x="3061" y="2793" width="65" height="22" fill="url(#fill59)"/></g><g clip-path="url(#clip60)"><rect x="3272" y="2538" width="66" height="277" fill="url(#fill61)"/></g><g clip-path="url(#clip62)"><rect x="3484" y="2748" width="66" height="67" fill="url(#fill63)"/></g><g clip-path="url(#clip64)"><rect x="3696" y="2785" width="66" height="30" fill="url(#fill65)"/></g><g clip-path="url(#clip66)"><rect x="3908" y="2798" width="66" height="16.9998" fill="url(#fill67)"/></g><g clip-path="url(#clip68)"><rect x="4120" y="2790" width="65.0005" height="25" fill="url(#fill69)"/></g><g clip-path="url(#clip70)"><rect x="4332" y="2269" width="65" height="546" fill="url(#fill71)"/></g><g clip-path="url(#clip72)"><rect x="4544" y="2674" width="65" height="141" fill="url(#fill73)"/></g><g clip-path="url(#clip74)"><rect x="4756" y="2760" width="65" height="55" fill="url(#fill75)"/></g><g clip-path="url(#clip76)"><rect x="4968" y="2785" width="65" height="30" fill="url(#fill77)"/></g><g clip-path="url(#clip78)"><rect x="5179" y="2772" width="66" height="43" fill="url(#fill79)"/></g><g clip-path="url(#clip80)"><rect x="5391" y="812" width="66" height="2003" fill="url(#fill81)"/></g><g clip-path="url(#clip82)"><rect x="5603" y="2357" width="66" height="458" fill="url(#fill83)"/></g><g clip-path="url(#clip84)"><rect x="5815" y="2675" width="65" height="140" fill="url(#fill85)"/></g><g clip-path="url(#clip86)"><rect x="6027" y="2735" width="65" height="80" fill="url(#fill87)"/></g><g clip-path="url(#clip88)"><rect x="6239" y="2744" width="65" height="70.9998" fill="url(#fill89)"/></g><g clip-path="url(#clip90)"><rect x="6451" y="2066" width="65" height="749" fill="url(#fill91)"/></g><g clip-path="url(#clip92)"><rect x="6663" y="2547" width="65" height="268" fill="url(#fill93)"/></g><g clip-path="url(#clip94)"><rect x="6874" y="2724" width="66" height="91" fill="url(#fill95)"/></g><g clip-path="url(#clip96)"><rect x="7086" y="2668" width="66" height="147" fill="url(#fill97)"/></g><g clip-path="url(#clip98)"><path d="M2099.5 2815.5 7184.5 2815.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none" fill-rule="evenodd"/></g><g clip-path="url(#clip99)"><path d="M2099.5 2815.5 2099.5 2895.5M3158.6 2815.5 3158.6 2895.5M4217.64 2815.5 4217.64 2895.5M5277.67 2815.5 5277.67 2895.5M6336.71 2815.5 6336.71 2895.5M7184.5 2815.5 7184.5 2895.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip100)"><path d="M2099.5 2895.5 2099.5 2975.5M3158.6 2895.5 3158.6 2975.5M4217.64 2895.5 4217.64 2975.5M5277.67 2895.5 5277.67 
2975.5M6336.71 2895.5 6336.71 2975.5M7184.5 2895.5 7184.5 2975.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip101)"><path d="M2099.5 2975.5 2099.5 3055.5M7184.5 2975.5 7184.5 3055.5" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g><g clip-path="url(#clip102)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2133.29 2526)">132</text></g><g clip-path="url(#clip103)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2355.62 2721)">32</text></g><g clip-path="url(#clip104)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2567.51 2758)">13</text></g><g clip-path="url(#clip105)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2789.84 2768)">8</text></g><g clip-path="url(#clip106)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3001.73 2766)">9</text></g><g clip-path="url(#clip107)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3192.71 2508)">142</text></g><g clip-path="url(#clip108)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3415.04 2715)">35</text></g><g clip-path="url(#clip109)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3626.93 2754)">15</text></g><g clip-path="url(#clip110)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3849.26 2766)">9</text></g><g clip-path="url(#clip111)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4050.7 2763)">10</text></g><g clip-path="url(#clip112)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4252.13 2235)">282</text></g><g clip-path="url(#clip113)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4474.46 2641)">73</text></g><g clip-path="url(#clip114)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4686.35 2730)">28</text></g><g clip-path="url(#clip115)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4898.23 2755)">15</text></g><g clip-path="url(#clip116)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5110.12 2748)">18</text></g><g clip-path="url(#clip117)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5301.1 828)">1006</text></g><g clip-path="url(#clip118)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" 
font-weight="400" font-size="41" transform="matrix(1 0 0 1 5523.44 2309)">244</text></g><g clip-path="url(#clip119)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5745.77 2646)">71</text></g><g clip-path="url(#clip120)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5957.65 2712)">37</text></g><g clip-path="url(#clip121)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6169.54 2721)">32</text></g><g clip-path="url(#clip122)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6370.97 2016)">395</text></g><g clip-path="url(#clip123)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6582.86 2550)">120</text></g><g clip-path="url(#clip124)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6805.19 2634)">77</text></g><g clip-path="url(#clip125)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 7017.07 2634)">77</text></g><g clip-path="url(#clip126)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2214.38 2526)">132</text></g><g clip-path="url(#clip127)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2436.71 2722)">32</text></g><g clip-path="url(#clip128)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2648.6 2756)">14</text></g><g clip-path="url(#clip129)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2870.93 2767)">9</text></g><g clip-path="url(#clip130)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3072.37 2761)">11</text></g><g clip-path="url(#clip131)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3273.8 2506)">143</text></g><g clip-path="url(#clip132)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3496.13 2716)">35</text></g><g clip-path="url(#clip133)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3708.02 2753)">16</text></g><g clip-path="url(#clip134)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3930.35 2766)">9</text></g><g clip-path="url(#clip135)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4131.79 2758)">13</text></g><g clip-path="url(#clip136)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4333.22 2237)">281</text></g><g 
clip-path="url(#clip137)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4555.56 2642)">73</text></g><g clip-path="url(#clip138)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4767.44 2728)">29</text></g><g clip-path="url(#clip139)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4979.32 2753)">16</text></g><g clip-path="url(#clip140)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5191.21 2740)">23</text></g><g clip-path="url(#clip141)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5382.19 780)">1031</text></g><g clip-path="url(#clip142)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5604.53 2325)">236</text></g><g clip-path="url(#clip143)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5826.86 2643)">72</text></g><g clip-path="url(#clip144)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6038.75 2703)">42</text></g><g clip-path="url(#clip145)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6250.63 2712)">37</text></g><g clip-path="url(#clip146)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6452.06 2034)">385</text></g><g clip-path="url(#clip147)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6663.95 2515)">138</text></g><g clip-path="url(#clip148)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6886.28 2692)">47</text></g><g clip-path="url(#clip149)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 7098.17 2636)">76</text></g><g clip-path="url(#clip150)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2040.08 2827)">0</text></g><g clip-path="url(#clip151)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1998.28 2438)">200</text></g><g clip-path="url(#clip152)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1998.28 2049)">400</text></g><g clip-path="url(#clip153)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1998.28 1661)">600</text></g><g clip-path="url(#clip154)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1998.28 1272)">800</text></g><g clip-path="url(#clip155)"><text fill="#44546A" 
font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1977.38 883)">1000</text></g><g clip-path="url(#clip156)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 1977.38 494)">1200</text></g><g clip-path="url(#clip157)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2194.73 2881)">1</text></g><g clip-path="url(#clip158)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2406.62 2881)">4</text></g><g clip-path="url(#clip159)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2608.05 2881)">16</text></g><g clip-path="url(#clip160)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2819.94 2881)">40</text></g><g clip-path="url(#clip161)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3031.82 2881)">80</text></g><g clip-path="url(#clip162)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3254.16 2881)">1</text></g><g clip-path="url(#clip163)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3466.04 2881)">4</text></g><g clip-path="url(#clip164)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3667.47 2881)">16</text></g><g clip-path="url(#clip165)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3879.36 2881)">40</text></g><g clip-path="url(#clip166)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4091.24 2881)">80</text></g><g clip-path="url(#clip167)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4313.58 2881)">1</text></g><g clip-path="url(#clip168)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4525.46 2881)">4</text></g><g clip-path="url(#clip169)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4726.89 2881)">16</text></g><g clip-path="url(#clip170)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4938.78 2881)">40</text></g><g clip-path="url(#clip171)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5150.66 2881)">80</text></g><g clip-path="url(#clip172)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5373 2881)">1</text></g><g clip-path="url(#clip173)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" 
transform="matrix(1 0 0 1 5584.88 2881)">4</text></g><g clip-path="url(#clip174)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5786.32 2881)">16</text></g><g clip-path="url(#clip175)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5998.2 2881)">40</text></g><g clip-path="url(#clip176)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6210.08 2881)">80</text></g><g clip-path="url(#clip177)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6432.42 2881)">4</text></g><g clip-path="url(#clip178)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6633.85 2881)">16</text></g><g clip-path="url(#clip179)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6845.74 2881)">40</text></g><g clip-path="url(#clip180)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 7057.62 2881)">80</text></g><g clip-path="url(#clip181)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 2608.05 2961)">20</text></g><g clip-path="url(#clip182)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 3667.47 2961)">32</text></g><g clip-path="url(#clip183)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4716.44 2961)">128</text></g><g clip-path="url(#clip184)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 5775.87 2961)">384</text></g><g clip-path="url(#clip185)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 6729.34 2961)">512</text></g><g clip-path="url(#clip186)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4550.85 3041)">torchscript</text></g><g clip-path="url(#clip187)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(6.12323e-17 -1 1 6.12323e-17 1952.28 1757)">Latency (ms)</text></g><g clip-path="url(#clip188)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="41" transform="matrix(1 0 0 1 4500.96 3101)">Sequence Length</text></g><g clip-path="url(#clip189)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3989.07 227)">Intel Ice lake Xeon 8380 </text></g><g clip-path="url(#clip190)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 4724.42 227)">-</text></g><g clip-path="url(#clip191)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 
4763.45 227)">TorchScript</text></g><g clip-path="url(#clip192)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3238.32 316)">Latency Benchmark with varying number of CPU cores and different OpenMP backend</text></g><g clip-path="url(#clip193)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="700" font-size="73" transform="matrix(1 0 0 1 3452.96 406)">(some data might be missing indicating benchmark crashed during runs)</text></g><g clip-path="url(#clip194)"><rect x="4433" y="3197" width="22" height="23" fill="url(#fill195)"/></g><g clip-path="url(#clip196)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4464.87 3220)">iomp</text></g><g clip-path="url(#clip197)"><rect x="4580" y="3197" width="22" height="23" fill="url(#fill198)"/></g><g clip-path="url(#clip199)"><text fill="#44546A" font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="41" transform="matrix(1 0 0 1 4611.98 3220)">gnu</text></g><rect x="1856.5" y="122.5" width="5379" height="3153" stroke="#E0E5EB" stroke-width="3.4375" stroke-linejoin="round" stroke-miterlimit="10" fill="none"/></g></svg>
9
0
hf_public_repos/api-inference-community/docker_images/paddlenlp
hf_public_repos/api-inference-community/docker_images/paddlenlp/tests/test_api_conversational.py
import json
import os
from unittest import TestCase, skipIf

from app.main import ALLOWED_TASKS
from parameterized import parameterized_class
from starlette.testclient import TestClient
from tests.test_api import TESTABLE_MODELS


@skipIf(
    "conversational" not in ALLOWED_TASKS,
    "conversational not implemented",
)
@parameterized_class(
    [{"model_id": model_id} for model_id in TESTABLE_MODELS["conversational"]]
)
class ConversationalTestCase(TestCase):
    def setUp(self):
        self.old_model_id = os.getenv("MODEL_ID")
        self.old_task = os.getenv("TASK")
        os.environ["MODEL_ID"] = self.model_id
        # Set the task actually under test so app.main builds a conversational pipeline.
        os.environ["TASK"] = "conversational"

        from app.main import app

        self.app = app

    def tearDown(self):
        if self.old_model_id is not None:
            os.environ["MODEL_ID"] = self.old_model_id
        else:
            del os.environ["MODEL_ID"]
        if self.old_task is not None:
            os.environ["TASK"] = self.old_task
        else:
            del os.environ["TASK"]

    def test_simple(self):
        first_round_inputs = {"text": "你好!"}

        with TestClient(self.app) as client:
            response = client.post("/", json={"inputs": first_round_inputs})

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), dict)
        self.assertIn("generated_text", content)
        self.assertIn("conversation", content)
        self.assertIn("past_user_inputs", content["conversation"])
        self.assertIn("generated_responses", content["conversation"])
        self.assertEqual(len(content["conversation"]["generated_responses"]), 1)
        self.assertEqual(len(content["conversation"]["past_user_inputs"]), 1)

        second_round_inputs = {
            "text": "这是个测试",
            "past_user_inputs": content["conversation"]["past_user_inputs"],
            "generated_responses": content["conversation"]["generated_responses"],
        }

        with TestClient(self.app) as client:
            response = client.post("/", json=second_round_inputs)

        self.assertEqual(
            response.status_code,
            200,
        )
        content = json.loads(response.content)
        self.assertEqual(type(content), dict)
        self.assertIn("generated_text", content)
        self.assertIn("conversation", content)
        self.assertIn("past_user_inputs", content["conversation"])
        self.assertIn("generated_responses", content["conversation"])
        self.assertEqual(len(content["conversation"]["generated_responses"]), 2)
        self.assertEqual(len(content["conversation"]["past_user_inputs"]), 2)
0
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/k2/requirements.txt
starlette==0.27.0
api-inference-community==0.0.23
huggingface_hub==0.5.1
1
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/k2/Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
LABEL maintainer="Yenda <[email protected]>"

# Add any system dependency here
RUN apt-get update -y && apt-get install cmake ffmpeg -y && rm -rf /var/lib/apt/lists/*

COPY ./requirements.txt /app
RUN pip install --no-cache-dir torch==1.11.0+cpu torchvision==0.12.0+cpu torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cpu
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install --no-cache-dir k2==1.17.dev20220719+cpu.torch1.11.0 -f https://k2-fsa.org/nightly/whl/
RUN git clone https://github.com/k2-fsa/sherpa && cd sherpa && git checkout v0.6 && pip install -r ./requirements.txt && python3 setup.py install --verbose

COPY ./prestart.sh /app/

# Most DL models are quite large in terms of memory, using workers is a HUGE
# slowdown because of the fork and GIL with python.
# Using multiple pods seems like a better default strategy.
# Feel free to override if it does not make sense for your library.
ARG max_workers=1
ENV MAX_WORKERS=$max_workers
ENV HUGGINGFACE_HUB_CACHE=/data

# Necessary on GPU environment docker.
# The TIMEOUT env variable is used by nvcr.io/nvidia/pytorch:xx for another purpose,
# rendering TIMEOUT defined by uvicorn impossible to use correctly.
# We're overriding it to be renamed UVICORN_TIMEOUT.
# UVICORN_TIMEOUT is a useful variable for very large models that take more
# than 30s (the default) to load in memory.
# If UVICORN_TIMEOUT is too low, uvicorn will simply never load, as it will
# kill workers all the time before they finish.
RUN sed -i 's/TIMEOUT/UVICORN_TIMEOUT/g' /gunicorn_conf.py
COPY ./app /app/app
2
0
hf_public_repos/api-inference-community/docker_images
hf_public_repos/api-inference-community/docker_images/k2/prestart.sh
python app/main.py
3
0
hf_public_repos/api-inference-community/docker_images/k2
hf_public_repos/api-inference-community/docker_images/k2/app/main.py
import functools
import logging
import os
from typing import Dict, Type

from api_inference_community.routes import pipeline_route, status_ok
from app.pipelines import AutomaticSpeechRecognitionPipeline, Pipeline
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.gzip import GZipMiddleware
from starlette.routing import Route


TASK = os.getenv("TASK")
MODEL_ID = os.getenv("MODEL_ID")


logger = logging.getLogger(__name__)


# Add the allowed tasks
# Supported tasks are:
# - text-generation
# - text-classification
# - token-classification
# - translation
# - summarization
# - automatic-speech-recognition
# - ...
# For instance
# from app.pipelines import AutomaticSpeechRecognitionPipeline
# ALLOWED_TASKS = {"automatic-speech-recognition": AutomaticSpeechRecognitionPipeline}
# You can check the requirements and expectations of each pipelines in their respective
# directories. Implement directly within the directories.
ALLOWED_TASKS: Dict[str, Type[Pipeline]] = {
    "automatic-speech-recognition": AutomaticSpeechRecognitionPipeline,
}


@functools.lru_cache()
def get_pipeline() -> Pipeline:
    task = os.environ["TASK"]
    model_id = os.environ["MODEL_ID"]
    if task not in ALLOWED_TASKS:
        raise EnvironmentError(f"{task} is not a valid pipeline for model : {model_id}")
    return ALLOWED_TASKS[task](model_id)


routes = [
    Route("/{whatever:path}", status_ok),
    Route("/{whatever:path}", pipeline_route, methods=["POST"]),
]

middleware = [Middleware(GZipMiddleware, minimum_size=1000)]
if os.environ.get("DEBUG", "") == "1":
    from starlette.middleware.cors import CORSMiddleware

    middleware.append(
        Middleware(
            CORSMiddleware,
            allow_origins=["*"],
            allow_headers=["*"],
            allow_methods=["*"],
        )
    )

app = Starlette(routes=routes, middleware=middleware)


@app.on_event("startup")
async def startup_event():
    logger = logging.getLogger("uvicorn.access")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
    logger.handlers = [handler]

    # Link between `api-inference-community` and framework code.
    app.get_pipeline = get_pipeline
    try:
        get_pipeline()
    except Exception:
        # We can fail so we can show exception later.
        pass


if __name__ == "__main__":
    try:
        get_pipeline()
    except Exception:
        # We can fail so we can show exception later.
        pass
4
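Not part of the dumped repository: a hedged sketch of exercising the Starlette app above locally with `starlette.testclient.TestClient`, mirroring the pattern used in the conversational test earlier in this dump. The model id is hypothetical, and it assumes the shared `pipeline_route` from `api-inference-community` accepts raw audio bytes in the POST body.

```python
import os

# Hypothetical repo id; any k2/icefall model exposing an "hf_demo" config section.
os.environ["MODEL_ID"] = "your-username/your-icefall-rnnt-model"
os.environ["TASK"] = "automatic-speech-recognition"

from starlette.testclient import TestClient

from app.main import app

with TestClient(app) as client:
    # Health check: the GET route is served by status_ok.
    print(client.get("/").status_code)

    # Inference: assumes pipeline_route decodes raw audio bytes from the body.
    with open("sample.wav", "rb") as f:
        response = client.post("/", content=f.read())
    print(response.status_code, response.content)
```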
0
hf_public_repos/api-inference-community/docker_images/k2
hf_public_repos/api-inference-community/docker_images/k2/app/common.py
import functools
import json
from typing import List, Optional, Union

import k2
import kaldifeat
import sentencepiece as spm
import torch
from huggingface_hub import HfApi, hf_hub_download
from sherpa import RnntConformerModel

from .decode import (
    run_model_and_do_greedy_search,
    run_model_and_do_modified_beam_search,
)


def get_hfconfig(model_id, config_name="hf_demo"):
    info = HfApi().model_info(repo_id=model_id)
    config_file = hf_hub_download(model_id, filename="config.json")
    with open(config_file) as config:
        info.config = json.load(config)
    if info.config and config_name is not None:
        if config_name in info.config:
            return info.config[config_name]
        else:
            raise ValueError("Config section " + config_name + " not found")
    else:
        return info


def model_from_hfconfig(hf_repo, hf_config):
    nn_model_filename = hf_hub_download(hf_repo, hf_config["nn_model_filename"])
    token_filename = (
        hf_hub_download(hf_repo, hf_config["token_filename"])
        if "token_filename" in hf_config
        else None
    )
    bpe_model_filename = (
        hf_hub_download(hf_repo, hf_config["bpe_model_filename"])
        if "bpe_model_filename" in hf_config
        else None
    )
    decoding_method = hf_config.get("decoding_method", "greedy_search")
    sample_rate = hf_config.get("sample_rate", 16000)
    num_active_paths = hf_config.get("num_active_paths", 4)

    assert decoding_method in ("greedy_search", "modified_beam_search"), decoding_method
    if decoding_method == "modified_beam_search":
        assert num_active_paths >= 1, num_active_paths
    assert bpe_model_filename is not None or token_filename is not None
    if bpe_model_filename:
        assert token_filename is None
    if token_filename:
        assert bpe_model_filename is None

    return OfflineAsr(
        nn_model_filename,
        bpe_model_filename,
        token_filename,
        decoding_method,
        num_active_paths,
        sample_rate,
    )


def transcribe_batch_from_tensor(model, batch):
    return model.decode_waves([batch])[0]


class OfflineAsr(object):
    def __init__(
        self,
        nn_model_filename: str,
        bpe_model_filename: Optional[str],
        token_filename: Optional[str],
        decoding_method: str,
        num_active_paths: int,
        sample_rate: int = 16000,
        device: Union[str, torch.device] = "cpu",
    ):
        """
        Args:
          nn_model_filename:
            Path to the torch script model.
          bpe_model_filename:
            Path to the BPE model. If it is None, you have to provide
            `token_filename`.
          token_filename:
            Path to tokens.txt. If it is None, you have to provide
            `bpe_model_filename`.
          decoding_method:
            The decoding method to use. Currently, only greedy_search and
            modified_beam_search are implemented.
          num_active_paths:
            Used only when decoding_method is modified_beam_search.
            It specifies number of active paths for each utterance. Due to
            merging paths with identical token sequences, the actual number
            may be less than "num_active_paths".
          sample_rate:
            Expected sample rate of the feature extractor.
          device:
            The device to use for computation.
        """
        self.model = RnntConformerModel(
            filename=nn_model_filename,
            device=device,
            optimize_for_inference=False,
        )

        if bpe_model_filename:
            self.sp = spm.SentencePieceProcessor()
            self.sp.load(bpe_model_filename)
        else:
            self.token_table = k2.SymbolTable.from_file(token_filename)

        self.sample_rate = sample_rate
        self.feature_extractor = self._build_feature_extractor(
            sample_rate=sample_rate,
            device=device,
        )

        assert decoding_method in (
            "greedy_search",
            "modified_beam_search",
        ), decoding_method
        if decoding_method == "greedy_search":
            nn_and_decoding_func = run_model_and_do_greedy_search
        elif decoding_method == "modified_beam_search":
            nn_and_decoding_func = functools.partial(
                run_model_and_do_modified_beam_search,
                num_active_paths=num_active_paths,
            )
        else:
            raise ValueError(
                f"Unsupported decoding_method: {decoding_method} "
                "Please use greedy_search or modified_beam_search"
            )

        self.nn_and_decoding_func = nn_and_decoding_func
        self.device = device

    def _build_feature_extractor(
        self,
        sample_rate: int = 16000,
        device: Union[str, torch.device] = "cpu",
    ) -> kaldifeat.OfflineFeature:
        """Build a fbank feature extractor for extracting features.

        Args:
          sample_rate:
            Expected sample rate of the feature extractor.
          device:
            The device to use for computation.
        Returns:
          Return a fbank feature extractor.
        """
        opts = kaldifeat.FbankOptions()
        opts.device = device
        opts.frame_opts.dither = 0
        opts.frame_opts.snip_edges = False
        opts.frame_opts.samp_freq = sample_rate
        opts.mel_opts.num_bins = 80

        fbank = kaldifeat.Fbank(opts)

        return fbank

    def decode_waves(self, waves: List[torch.Tensor]) -> List[List[str]]:
        """
        Args:
          waves:
            A list of 1-D torch.float32 tensors containing audio samples.
            wavs[i] contains audio samples for the i-th utterance.

            Note:
              Whether it should be in the range [-32768, 32767] or be normalized
              to [-1, 1] depends on which range you used for your training data.
              For instance, if your training data used [-32768, 32767], then the
              given waves have to contain samples in this range.

              All models trained in icefall use the normalized range [-1, 1].
        Returns:
          Return a list of decoded results. `ans[i]` contains the decoded results
          for `wavs[i]`.
        """
        waves = [w.to(self.device) for w in waves]
        features = self.feature_extractor(waves)

        tokens = self.nn_and_decoding_func(self.model, features)

        if hasattr(self, "sp"):
            results = self.sp.decode(tokens)
        else:
            results = [[self.token_table[i] for i in hyp] for hyp in tokens]
            results = ["".join(r) for r in results]

        return results
5
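Not part of the dumped repository: a minimal sketch of using the helpers in `app/common.py` directly, outside the web app. The repo id is hypothetical; it assumes the repo's `config.json` contains an `hf_demo` section pointing at the TorchScript checkpoint and BPE/token files, as `model_from_hfconfig` expects.

```python
import torch

from app import common

# Hypothetical repo id with an "hf_demo" section in its config.json.
repo_id = "your-username/your-icefall-rnnt-model"

hf_config = common.get_hfconfig(repo_id, "hf_demo")  # reads config.json from the Hub
asr = common.model_from_hfconfig(hf_repo=repo_id, hf_config=hf_config)

# One second of silence at the model's sample rate; icefall models expect
# waveforms normalized to [-1, 1].
wave = torch.zeros(asr.sample_rate, dtype=torch.float32)
print(common.transcribe_batch_from_tensor(asr, wave))
```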
0
hf_public_repos/api-inference-community/docker_images/k2
hf_public_repos/api-inference-community/docker_images/k2/app/decode.py
# Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang)
#
# See LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math
from typing import List

import torch
from sherpa import RnntConformerModel, greedy_search, modified_beam_search
from torch.nn.utils.rnn import pad_sequence

LOG_EPS = math.log(1e-10)


@torch.no_grad()
def run_model_and_do_greedy_search(
    model: RnntConformerModel,
    features: List[torch.Tensor],
) -> List[List[int]]:
    """Run RNN-T model with the given features and use greedy search
    to decode the output of the model.

    Args:
      model:
        The RNN-T model.
      features:
        A list of 2-D tensors. Each entry is of shape (num_frames, feature_dim).
    Returns:
      Return a list-of-list containing the decoding token IDs.
    """
    features_length = torch.tensor(
        [f.size(0) for f in features],
        dtype=torch.int64,
    )
    features = pad_sequence(
        features,
        batch_first=True,
        padding_value=LOG_EPS,
    )

    device = model.device
    features = features.to(device)
    features_length = features_length.to(device)

    encoder_out, encoder_out_length = model.encoder(
        features=features,
        features_length=features_length,
    )

    hyp_tokens = greedy_search(
        model=model,
        encoder_out=encoder_out,
        encoder_out_length=encoder_out_length.cpu(),
    )
    return hyp_tokens


@torch.no_grad()
def run_model_and_do_modified_beam_search(
    model: RnntConformerModel,
    features: List[torch.Tensor],
    num_active_paths: int,
) -> List[List[int]]:
    """Run RNN-T model with the given features and use modified beam search
    to decode the output of the model.

    Args:
      model:
        The RNN-T model.
      features:
        A list of 2-D tensors. Each entry is of shape (num_frames, feature_dim).
      num_active_paths:
        Used only when decoding_method is modified_beam_search.
        It specifies number of active paths for each utterance. Due to
        merging paths with identical token sequences, the actual number
        may be less than "num_active_paths".
    Returns:
      Return a list-of-list containing the decoding token IDs.
    """
    features_length = torch.tensor(
        [f.size(0) for f in features],
        dtype=torch.int64,
    )
    features = pad_sequence(
        features,
        batch_first=True,
        padding_value=LOG_EPS,
    )

    device = model.device
    features = features.to(device)
    features_length = features_length.to(device)

    encoder_out, encoder_out_length = model.encoder(
        features=features,
        features_length=features_length,
    )

    hyp_tokens = modified_beam_search(
        model=model,
        encoder_out=encoder_out,
        encoder_out_length=encoder_out_length.cpu(),
        num_active_paths=num_active_paths,
    )
    return hyp_tokens
6
0
hf_public_repos/api-inference-community/docker_images/k2/app
hf_public_repos/api-inference-community/docker_images/k2/app/pipelines/base.py
from abc import ABC, abstractmethod
from typing import Any


class Pipeline(ABC):
    @abstractmethod
    def __init__(self, model_id: str):
        raise NotImplementedError("Pipelines should implement an __init__ method")

    @abstractmethod
    def __call__(self, inputs: Any) -> Any:
        raise NotImplementedError("Pipelines should implement a __call__ method")


class PipelineException(Exception):
    pass
7
0
hf_public_repos/api-inference-community/docker_images/k2/app
hf_public_repos/api-inference-community/docker_images/k2/app/pipelines/__init__.py
from app.pipelines.base import Pipeline, PipelineException  # isort:skip

from app.pipelines.automatic_speech_recognition import (
    AutomaticSpeechRecognitionPipeline,
)
8
0
hf_public_repos/api-inference-community/docker_images/k2/app
hf_public_repos/api-inference-community/docker_images/k2/app/pipelines/automatic_speech_recognition.py
from typing import Dict

import app.common as cx
import numpy as np
import torch
from app.pipelines import Pipeline

torch.set_num_threads(1)
torch.set_num_interop_threads(1)

# See https://github.com/pytorch/pytorch/issues/38342
# and https://github.com/pytorch/pytorch/issues/33354
#
# If we don't do this, the delay increases whenever there is
# a new request that changes the actual batch size.
# If you use `py-spy dump --pid <server-pid> --native`, you will
# see a lot of time is spent in re-compiling the torch script model.
torch._C._jit_set_profiling_executor(False)
torch._C._jit_set_profiling_mode(False)
torch._C._set_graph_executor_optimize(False)


class AutomaticSpeechRecognitionPipeline(Pipeline):
    def __init__(self, model_id: str):
        model_config = cx.get_hfconfig(model_id, "hf_demo")
        self.model = cx.model_from_hfconfig(hf_repo=model_id, hf_config=model_config)
        self.sampling_rate = self.model.sample_rate

    def __call__(self, inputs: np.array) -> Dict[str, str]:
        """
        Args:
            inputs (:obj:`np.array`):
                The raw waveform of audio received. By default at self.sampling_rate,
                otherwise 16KHz.
                Check `app.validation` if a different sample rate is required
                or if it depends on the model.
        Return:
            A :obj:`dict`: the object returned should be like {"text": "XXX"},
            containing the text transcription of the input audio.
        """
        batch = torch.from_numpy(inputs)
        words = cx.transcribe_batch_from_tensor(self.model, batch)
        return {"text": words}
9
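Not part of the dumped repository: a short usage sketch for the pipeline defined above, with a hypothetical model id. The pipeline takes a float32 waveform at `self.sampling_rate` and returns a dict holding the transcription.

```python
import numpy as np

from app.pipelines import AutomaticSpeechRecognitionPipeline

# Hypothetical model id with a valid "hf_demo" config section.
pipe = AutomaticSpeechRecognitionPipeline("your-username/your-icefall-rnnt-model")

# One second of silence at the pipeline's sampling rate, normalized to [-1, 1].
waveform = np.zeros(pipe.sampling_rate, dtype=np.float32)
print(pipe(waveform)["text"])
```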