Update README.md
updating about inf2.xlarge
README.md
CHANGED
@@ -20,7 +20,7 @@ You can find detailed information about the base model on its [Model Card](https
 
 This model has been exported to the `neuron` format using specific `input_shapes` and `compiler` parameters detailed in the paragraphs below.
 
-It has been compiled to run on an inf2.8xlarge instance on AWS.
+It has been compiled to run on an inf2.8xlarge instance on AWS. It also runs on an inf2.xlarge (the smallest Inferentia2 instance), but it nearly exhausts that instance's memory; be sure to test before using the smaller instance in production.
 
 Please refer to the 🤗 `optimum-neuron` [documentation](https://huggingface.co/docs/optimum-neuron/main/en/guides/models#configuring-the-export-of-a-generative-model) for an explanation of these parameters.
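For context, the `input_shapes` and `compiler` parameters the README refers to are fixed at export time. A minimal sketch of such an export with the `optimum-cli export neuron` command is shown below; the model ID and all parameter values here are illustrative assumptions, not the values used for this model (those are listed in the README itself).

```shell
# Hypothetical export sketch -- model ID and values are placeholders.
# --batch_size and --sequence_length set the fixed input_shapes;
# --num_cores and --auto_cast_type are compiler parameters.
optimum-cli export neuron \
  --model <base-model-id> \
  --batch_size 1 \
  --sequence_length 4096 \
  --num_cores 2 \
  --auto_cast_type bf16 \
  ./neuron-model/
```

Because the shapes are baked in at compile time, a model exported this way only accepts inputs matching them, which is also why the compiled artifact is tied to a particular instance size.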