# Local Testing Guide
Before deploying to Hugging Face Spaces, you may want to test the application locally. This guide walks through installing dependencies, running the app in its UI and API-only modes, and testing it in Docker.
## Prerequisites
- CUDA-capable GPU with at least 8GB VRAM
- Python 3.8+
- pip or conda package manager
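Before installing anything, it can be worth confirming the environment meets these requirements. A minimal sketch (the last check assumes PyTorch is among the pinned requirements, which the CUDA requirement suggests, and only works after step 1 below):
```bash
# Verify the GPU and driver are visible (nvidia-smi ships with the NVIDIA driver)
nvidia-smi

# Verify the Python version (3.8 or newer is required)
python --version

# After installing dependencies: verify the framework can actually see the GPU
# (assumes PyTorch is installed by image_descriptor_requirements.txt)
python -c "import torch; print(torch.cuda.is_available())"
```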
## Steps for Local Testing
1. **Install Dependencies**
```bash
pip install -r image_descriptor_requirements.txt
```
2. **Run in UI Mode**
```bash
python app.py
```
This will start the Gradio UI on http://localhost:7860. You can upload images and test the model.
3. **Run in API-only Mode**
```bash
FLASK_APP=image_descriptor.py flask run --host=0.0.0.0 --port=5000
```
This will start just the Flask API on http://localhost:5000.
4. **Test the Docker Container**
```bash
# Build the container
docker build -t image-descriptor .
# Run the container
docker run -p 7860:7860 --gpus all image-descriptor
```
The application will be available at http://localhost:7860.
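If the container starts but inference fails, a common culprit is GPU passthrough. As a quick check, you can run `nvidia-smi` inside the container; this sketch assumes the image is built on a CUDA base image that ships `nvidia-smi` and does not define a conflicting `ENTRYPOINT`:
```bash
# Should print the same GPU table as nvidia-smi on the host;
# if it errors, the NVIDIA container runtime is not set up correctly
docker run --rm --gpus all image-descriptor nvidia-smi
```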
## Testing the API
You can test the API using curl:
```bash
# Health check
curl http://localhost:5000/health
# Process an image
curl -X POST -F "image=@data_temp/page_2.png" http://localhost:5000/describe
```
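To make the same checks repeatable, the two calls can be wrapped in a small script that aborts on any non-2xx response. A minimal sketch, assuming the endpoints return JSON (the response shape is not documented here, so it is only pretty-printed):
```bash
#!/usr/bin/env bash
set -euo pipefail

BASE_URL="http://localhost:5000"

# --fail makes curl exit non-zero on HTTP errors, so set -e aborts the script
curl --fail --silent "$BASE_URL/health" > /dev/null && echo "health: OK"

# Send a test image and pretty-print the JSON response
# (python -m json.tool avoids a dependency on jq)
curl --fail --silent -X POST -F "image=@data_temp/page_2.png" \
  "$BASE_URL/describe" | python -m json.tool
```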
## Troubleshooting
- **GPU Memory Issues**: If you encounter GPU memory errors, try reducing batch sizes or using a smaller model.
- **Model Download Issues**: If the model download fails, try downloading it manually from Hugging Face and placing it in the `~/.cache/huggingface/transformers` directory (see the sketch after this list).
- **Dependencies**: Make sure the installed CUDA version is compatible with both your GPU driver and the framework build pulled in by `image_descriptor_requirements.txt`.
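For the model-download case, one option is to pre-fetch the weights with the Hugging Face CLI so the app finds them in its cache on startup. A sketch under assumptions: `<model-id>` is a placeholder for whatever model `image_descriptor.py` actually loads, and the app reads from the default Hugging Face cache:
```bash
# Install the CLI if needed (part of the huggingface_hub package)
pip install -U "huggingface_hub[cli]"

# Pre-download the model into the default cache (~/.cache/huggingface/)
# Replace <model-id> with the model referenced in image_descriptor.py
huggingface-cli download <model-id>

# If the app should read from a non-default cache location instead:
export HF_HOME=/path/to/cache
```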
## Next Steps
Once you've confirmed the application works locally, you can deploy it to Hugging Face Spaces following the instructions in the main README.md. |