---
title: Omniseal Leaderboard
emoji: 🦀
colorFrom: red
colorTo: green
sdk: docker
pinned: false
short_description: Leaderboard for watermarking models
---

# Docker Build Instructions

## Prerequisites

- Docker installed on your system
- Git repository cloned locally

## Build Steps (conda)

1. Initialize the conda environment:

   ```shell
   cd backend
   conda env create -f environment.yml -y
   conda activate omniseal-benchmark-backend
   ```

2. Build the frontend (outputs HTML, JS, and CSS into `frontend/dist`). This is only needed if you are updating the frontend; the repository already has a build checked in at `frontend/dist`:

   ```shell
   cd frontend
   npm install
   npm run build
   ```

3. Run the backend server from the project root. This serves the frontend files at http://localhost:7860:

   ```shell
   gunicorn --chdir backend -b 0.0.0.0:7860 app:app --reload
   ```

4. The server will be running at http://localhost:7860.

## Build Steps (Docker, Hugging Face)

1. Build the Docker image from the project root:

   ```shell
   docker build -t omniseal-benchmark .
   ```

   or:

   ```shell
   docker buildx build -t omniseal-benchmark .
   ```

2. Run the container. The `-v` argument mounts the local `backend` directory into the container, so the backend hot-reloads when you update Python files:

   ```shell
   docker run -p 7860:7860 -v $(pwd)/backend:/app/backend omniseal-benchmark
   ```

3. Access the application at http://localhost:7860.

## Local Development

When updating the backend, either set of build steps above runs the server with hot-reload, so you don't have to restart it after each change.

For the frontend:

1. Create a `.env.local` file in the `frontend` directory and set `VITE_API_SERVER_URL` to wherever your backend server is running; when running locally, that is `VITE_API_SERVER_URL=http://localhost:7860`. This overrides the configuration in `.env`, so the frontend connects to your backend URL of choice.

2. Run the development server with hot-reload:

   ```shell
   cd frontend
   npm install
   npm run dev
   ```
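
For reference, a minimal `frontend/.env.local` for step 1 might contain nothing but the variable described above (file name and value taken from the text; any other Vite variables in `.env` remain in effect):

```shell
# frontend/.env.local — overrides the matching entry in .env for local development
VITE_API_SERVER_URL=http://localhost:7860
```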