---
title: Quantum-API
emoji: π
colorFrom: green
colorTo: indigo
sdk: docker
python_version: 3.11
sdk_version: latest
suggested_hardware: cpu-basic
suggested_storage: small
app_file: app.py
app_port: 7860
base_path: /
fullWidth: true
header: default
short_description: Quantum-AI API for machine learning and quantum computing.
models:
  - openai-community/gpt2
datasets:
  - mozilla-foundation/common_voice_13_0
tags:
  - quantum-ai
  - machine-learning
  - fastapi
  - streamlit
  - huggingface-spaces
  - docker
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/66ee940c0989ae1ac1383839/MseLCVmNge3tBJzqDbN1c.jpeg
pinned: true
hf_oauth: false
disable_embedding: false
startup_duration_timeout: 30m
custom_headers:
  cross-origin-embedder-policy: require-corp
  cross-origin-opener-policy: same-origin
  cross-origin-resource-policy: cross-origin
preload_from_hub:
  - openai-community/gpt2 config.json
license: mit
---
# Quantum-API

## Overview

Quantum-API is a hybrid FastAPI + Streamlit web application that serves as a unified interface for quantum computing tasks. It integrates PennyLane, PyTorch, and OpenAI models via Hugging Face, and is optimized for resource-constrained systems and cloud deployments such as Hugging Face Spaces.

> Quantum-AI API for machine learning and quantum computing, powered by FastAPI, Streamlit, and PennyLane.
## Features

- FastAPI Backend: RESTful endpoints for quantum ML processing.
- Streamlit Frontend: Interactive quantum interface on port 7861.
- Quantum Computation: Process quantum logic with PennyLane.
- Docker & Hugging Face Compatible: Pre-configured for Spaces deployment.
- Health Check: System status endpoint.
- Hybrid Quantum-Classical AI: Combines classical ML with quantum gates (see the circuit sketch below).
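
To make the hybrid quantum-classical idea concrete, here is a minimal PennyLane sketch. It is illustrative only and not taken from this repository; the device choice, qubit count, and function names are assumptions.

```python
# Minimal, illustrative hybrid quantum-classical sketch (not the repository's actual code).
# Assumes PennyLane is installed; device, qubit count, and names below are made up for illustration.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)  # CPU simulator, fits cpu-basic hardware

@qml.qnode(dev)
def quantum_layer(features, weights):
    # Encode classical features as rotation angles, then apply trainable gates.
    qml.RY(features[0], wires=0)
    qml.RY(features[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RZ(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    return qml.expval(qml.PauliZ(0))

features = np.array([0.3, 0.7])
weights = np.array([0.1, 0.2], requires_grad=True)
print(quantum_layer(features, weights))  # classical scalar usable by a PyTorch/ML pipeline
```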
## Installation

### 1. Clone the Repository

```bash
git clone https://github.com/subatomicERROR/Quantum-API.git
cd Quantum-API
```

### 2. Create a Virtual Environment (Recommended)

```bash
python3 -m venv qvenv
source qvenv/bin/activate   # For Linux/macOS
# OR
qvenv\Scripts\activate      # For Windows
```

### 3. Install Requirements

```bash
pip install -r requirements.txt
```
## Running the App Locally

### 1. Start the Backend (FastAPI)

```bash
uvicorn api.endpoints.codelama:app --host 0.0.0.0 --port 7860 --reload
```

Accessible at: http://localhost:7860

### 2. Start the Frontend (Streamlit)

```bash
streamlit run app/app.py --server.port 8000
```

Accessible at: http://localhost:8000
## API Endpoints

### Root

`GET /`

Returns an SEO-optimized HTML homepage.

### Quantum Endpoint

`POST /quantum-endpoint`

Request body:

```json
{
  "data": "your_data_here",
  "quantum_factor": 1.0
}
```

Response:

```json
{
  "status": "success",
  "quantum_result": "Processed your_data_here with quantum factor 1.0"
}
```
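
For orientation, a backend route matching the request and response shapes above could look like the minimal FastAPI sketch below. This is an assumption for illustration, not the actual contents of api/endpoints/codelama.py; the `QuantumRequest` model name is hypothetical.

```python
# Minimal FastAPI sketch matching the documented request/response shapes.
# Illustrative only; the real implementation lives in api/endpoints/codelama.py.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Quantum-API")

class QuantumRequest(BaseModel):  # hypothetical model name
    data: str
    quantum_factor: float = 1.0

@app.post("/quantum-endpoint")
def quantum_endpoint(req: QuantumRequest):
    # Placeholder for the PennyLane-backed processing described above.
    result = f"Processed {req.data} with quantum factor {req.quantum_factor}"
    return {"status": "success", "quantum_result": result}

@app.get("/health")
def health():
    return {"status": "ok"}
```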
### Health Check

`GET /health`

Returns API status.
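
Assuming the backend is running locally on port 7860, both endpoints can be exercised from Python with the `requests` library (an extra dependency used only for this example):

```python
# Illustrative client calls against a locally running backend (port 7860).
import requests

print(requests.get("http://localhost:7860/health").json())

payload = {"data": "your_data_here", "quantum_factor": 1.0}
resp = requests.post("http://localhost:7860/quantum-endpoint", json=payload)
print(resp.json())  # {"status": "success", "quantum_result": "..."}
```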
## Streamlit Frontend

An interactive interface for working with the quantum backend.

```bash
streamlit run app/app.py --server.port 8000
```
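
The frontend's core pattern is a small form that posts to the FastAPI backend. The sketch below is an assumed, minimal version of that pattern, not the repository's actual app/app.py; the backend URL and widget labels are made up.

```python
# Minimal Streamlit sketch of a UI that calls the FastAPI backend.
# Illustrative only; assumes the backend is reachable on port 7860.
import requests
import streamlit as st

st.title("Quantum-API")

data = st.text_input("Data to process", "your_data_here")
quantum_factor = st.slider("Quantum factor", 0.0, 2.0, 1.0)

if st.button("Run quantum endpoint"):
    resp = requests.post(
        "http://localhost:7860/quantum-endpoint",
        json={"data": data, "quantum_factor": quantum_factor},
        timeout=30,
    )
    st.json(resp.json())
```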
## Deployment: Hugging Face Spaces

To deploy on Hugging Face, ensure the following files are in your repo:

- `requirements.txt`
- `app/app.py` (Streamlit entrypoint)
- `api/endpoints/codelama.py` (FastAPI backend)

Use a Docker-based Space and start both processes, for example with this command in your Dockerfile or runtime:

```bash
uvicorn api.endpoints.codelama:app --host 0.0.0.0 --port 7860 & \
streamlit run app/app.py --server.port 8000
```

Push your repo to Hugging Face:

```bash
git remote add hf https://huggingface.co/spaces/subatomicERROR/Quantum-API
git push hf main
```
## File Structure

```text
Quantum-API/
├── api/
│   └── endpoints/
│       └── codelama.py          # FastAPI main app
├── app/
│   └── app.py                   # Streamlit UI
├── requirements.txt
├── README.md
└── .huggingface/README.md       # Optional Space README
```
## Author

Built with ❤️ + ⚛️ by subatomicERROR (Yash R)

Email: [email protected]
## Branding & Philosophy

Part of the .ERROR brand, combining ancient wisdom, futuristic design, and quantum intelligence.

This system is part of the Quantum-AI Stack, which includes:

- Quantum-ML: model & training backend.
- Quantum-API: this API gateway.
- Quantum-Compute: quantum computation engine.
## License

MIT License

> Exploring the quantum realm with AI...
> ...one entangled bit at a time.