---
title: Omniseal Leaderboard
emoji: 🦀
colorFrom: red
colorTo: green
sdk: docker
pinned: false
short_description: Leaderboard for watermarking models
---
# Docker Build Instructions
## Prerequisites

- Docker installed on your system
- Git repository cloned locally
## Build Steps (conda)

- Initialize the conda environment:

  ```bash
  cd backend
  conda env create -f environment.yml -y
  conda activate omniseal-benchmark-backend
  ```
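For orientation, a conda environment file of this kind typically looks like the sketch below. The package names and version pins here are illustrative assumptions, not the repository's actual dependencies; `backend/environment.yml` is the source of truth.

```yaml
# Illustrative sketch only -- see backend/environment.yml for the real contents.
name: omniseal-benchmark-backend
channels:
  - conda-forge
dependencies:
  - python=3.11        # assumed version pin
  - pip
  - pip:
      - gunicorn       # WSGI server used in the run step
      - flask          # assumed web framework; the actual one may differ
```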
- Build the frontend (outputs HTML, JS, and CSS into `frontend/dist`). This step is only needed when you change the frontend; the repository already has a build checked in at `frontend/dist`:

  ```bash
  cd frontend
  npm install
  npm run build
  ```
- Run the backend server from the project root. This also serves the frontend files:

  ```bash
  gunicorn --chdir backend -b 0.0.0.0:7860 app:app --reload
  ```

- The server will be running at http://localhost:7860.
## Build Steps (Docker, Hugging Face)
- Build the Docker image from the project root:

  ```bash
  docker build -t omniseal-benchmark .
  # or
  docker buildx build -t omniseal-benchmark .
  ```
- Run the container. The `-v` argument mounts the local `backend` directory into the container, so the backend hot-reloads when you update Python files:

  ```bash
  docker run -p 7860:7860 -v $(pwd)/backend:/app/backend omniseal-benchmark
  ```
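The repository's own Dockerfile is the source of truth for the image contents; a plausible sketch for this layout might look like the following, where base images, paths, and the `requirements.txt` install step are all assumptions (the conda-based setup above suggests the real image may recreate the environment from `environment.yml` instead):

```dockerfile
# Illustrative sketch only -- consult the repository's actual Dockerfile.
FROM python:3.11-slim
WORKDIR /app
COPY backend/ backend/
COPY frontend/dist/ frontend/dist/     # prebuilt frontend checked into the repo
RUN pip install -r backend/requirements.txt   # assumption; may use environment.yml
EXPOSE 7860
CMD ["gunicorn", "--chdir", "backend", "-b", "0.0.0.0:7860", "app:app", "--reload"]
```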
- Access the application at http://localhost:7860.
## Local Development
When updating the backend, either set of build steps above runs with hot-reload, so you don't have to restart the server after each change.
For the frontend:

- Create a `.env.local` file in the `frontend` directory and set `VITE_API_SERVER_URL` to where your backend server is running. When running locally this is `VITE_API_SERVER_URL=http://localhost:7860`. This overrides the configuration in `.env`, so the frontend will connect to your backend URL of choice.
- Run the development server with hot-reload:
  ```bash
  cd frontend
  npm install
  npm run dev
  ```