Update README.md
README.md CHANGED
@@ -94,4 +94,6 @@ Remember to update your `requirements.txt` file if you make any changes to your
- The application uses CPU for inference by default. If you have a CUDA-capable GPU available on your deployment platform, you can modify the `device_map` and `to()` calls in `app.py` to use GPU acceleration.
- The model and processor are cached using Streamlit's `@st.cache_resource` decorator to improve performance on subsequent runs.
+
+
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
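
For context, a minimal sketch of how the notes above might translate into `app.py`, assuming a Hugging Face `transformers` model; the model id, model class, and function name below are placeholders, not taken from this Space:

```python
import streamlit as st
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq  # placeholder model class

MODEL_ID = "org/model-name"  # placeholder id; the actual Space pins its own model


@st.cache_resource  # cached so the model and processor load only once per session
def load_model_and_processor():
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = AutoModelForVision2Seq.from_pretrained(MODEL_ID)
    # CPU by default; move to GPU when one is available on the deployment platform
    if torch.cuda.is_available():
        model = model.to("cuda")
    return model, processor


model, processor = load_model_and_processor()
```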