chipling committed
Commit 65d3b67 · verified · 1 Parent(s): 2d44426

Upload 26 files

README.md CHANGED
@@ -1,12 +1,141 @@
- ---
- title: Fasthost
- emoji: 🔥
- colorFrom: purple
- colorTo: pink
- sdk: docker
- pinned: false
- license: apache-2.0
- short_description: fasthost
- ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # FastHost
+
+ ✨ **FastHost** is an open-source, self-hosted deployment platform for Python backend apps. Think of it as your own Vercel or Replit, tailored for FastAPI and Flask apps running in Docker.
+
+ With FastHost, you can upload a `.py` file or project archive and instantly deploy it as a running Docker container on your own machine. It’s ideal for developers, tinkerers, and teams who want full control over their Python backend deployments, with no third-party cloud required.
+
  ---
+
+ 🚧 **FastHost is in active development!**
+ We welcome contributions, feedback, and ideas from the community. If you’d like to help shape FastHost, check out the issues, open a pull request, or start a discussion.
+
+ ## Roadmap
+
+ ### Phase 1: Core Deployment & Developer Experience (MVP)
+
+ Goal: Establish a stable, user-friendly platform for deploying FastAPI/Flask applications via Git, with essential management tools.
+
+ **1.1 Git-based Deployments (High Priority)**
+
+ - Feature: Integrate with Git providers (GitHub, GitLab, Bitbucket) so users can connect their repositories.
+ - Value: Automates deployments on `git push`, significantly improving developer workflow and replacing manual zip uploads.
+ - Implementation: Webhooks from Git providers, repository cloning, branch selection for deployment.
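Incoming webhooks should be authenticated before they trigger a rebuild. As a minimal, illustrative sketch (it assumes GitHub's `X-Hub-Signature-256` scheme; the function name and secret value are not part of FastHost yet):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook payload against its X-Hub-Signature-256 header."""
    if not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks
    return hmac.compare_digest("sha256=" + expected, signature_header)
```

A deploy endpoint would read the raw request body, verify it with the shared secret, and only then clone and rebuild the repository.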
+
+ **1.2 Automated SSL/TLS with Custom Domains (High Priority)**
+
+ - Feature: Allow users to add custom domains and automatically provision and renew SSL certificates using Let's Encrypt.
+ - Value: Essential for production-ready applications, building trust, and a professional appearance.
+ - Implementation: DNS validation (e.g., CNAME/TXT record instructions), ACME client integration.
+
+ **1.3 Environment Variables Management**
+
+ - Feature: A secure UI for users to define, update, and manage environment variables for their deployed applications.
+ - Value: Separates configuration from code, which is crucial for different environments (dev, staging, prod) and for sensitive data.
+ - Implementation: Encrypted storage of variables, injection into Docker builds/containers.
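Once variables are stored, injecting them at container start is straightforward via the Docker SDK's `environment` parameter. A sketch of the merge step (pure Python; the `docker_client.containers.run(...)` call is only indicated in the comment, and the variable names are illustrative):

```python
def build_environment(defaults: dict, project_vars: dict) -> dict:
    """Merge platform-wide defaults with per-project variables; project values win."""
    env = dict(defaults)
    env.update(project_vars)
    return env

# The result would be passed to the Docker SDK, e.g.:
# docker_client.containers.run(image, detach=True,
#                              environment=build_environment(defaults, project_vars))
```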
+
+ **1.4 Real-time Build & Deployment Logs**
+
+ - Feature: Stream stdout/stderr from the build process (Docker build, dependency installation) and deployment steps directly to the user interface.
+ - Value: Provides transparency, enabling users to debug issues during the deployment cycle.
+ - Implementation: WebSockets or SSE for log streaming.
+
+ **1.5 Basic User Authentication & Project Management**
+
+ - Feature: A secure user registration and login system; users can create and manage their deployment projects.
+ - Value: Enables multi-user access and organization within the self-hosted instance.
+ - Implementation: Database for users/projects, secure password hashing, session management.
+
+ ### Phase 2: Reliability, Observability & Scalability
+
+ Goal: Enhance the platform's stability, provide critical insights into running applications, and lay the groundwork for horizontal scaling.
+
+ **2.1 Application Logging & Monitoring Dashboard**
+
+ - Feature: Centralized access to logs generated by the running FastAPI/Flask applications (e.g., from `main.py`), plus basic metrics (CPU, memory, request counts).
+ - Value: Indispensable for debugging live applications, identifying performance bottlenecks, and understanding usage patterns.
+ - Implementation: Log aggregation (e.g., Filebeat, Fluentd, or direct container logs), Prometheus/Grafana or simpler in-house graphing.
+
+ **2.2 Deployment Rollbacks**
+
+ - Feature: Allow users to revert a deployed application to any previous successful deployment version.
+ - Value: A critical safety net for recovering quickly from bad deployments or unintended side effects.
+ - Implementation: Maintain historical Docker image references/tags for each deployment.
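The rollback bookkeeping can be as simple as an ordered list of image tags per project. A sketch (in-memory only; FastHost would persist this, and actually re-running the older image via the Docker SDK is only indicated in the comment):

```python
def record_deploy(history: list[str], image_tag: str) -> None:
    """Append the image tag of a successful deployment."""
    history.append(image_tag)

def rollback_target(history: list[str]) -> str:
    """Return the tag to redeploy: the one before the current deployment."""
    if len(history) < 2:
        raise ValueError("no earlier deployment to roll back to")
    history.pop()  # discard the bad current deployment
    return history[-1]

# After choosing a target tag:
# docker_client.containers.run(target_tag, detach=True, ...)
```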
+
+ **2.3 Horizontal Scaling (Basic)**
+
+ - Feature: Enable users to define and run multiple instances of a single application.
+ - Value: Improves application availability and performance under increased load.
+ - Implementation: Integrate NGINX (or similar) as a load balancer to distribute traffic across multiple Docker containers/instances; a UI control to adjust instance count.
+
+ **2.4 Improved Error Handling & Notifications**
+
+ - Feature: Clear, actionable error messages in the UI for failed builds/deployments; email or webhook notifications for critical events.
+ - Value: Reduces user frustration and keeps users informed about the status of their deployments.
+
+ ### Phase 3: Advanced Features & Ecosystem Expansion
+
+ Goal: Differentiate the platform with powerful capabilities, offering greater flexibility and catering to more complex use cases.
+
+ **3.1 Persistent Storage Integration**
+
+ - Feature: Options for attaching persistent storage volumes to applications, for data that must survive redeployments (e.g., user uploads, database files).
+ - Value: Enables stateful applications, broadening the types of projects the platform can host.
+ - Implementation: Docker volumes, bind mounts, or integration with network storage solutions where the underlying infrastructure allows.
+
+ **3.2 Custom Buildpacks/Build Steps**
+
+ - Feature: Allow users to define custom build processes beyond a plain Dockerfile, perhaps through a `platform.yml` or similar configuration file.
+ - Value: Offers greater flexibility for non-standard build requirements or specialized runtimes.
+ - Implementation: Extend the build orchestration to support custom scripts or logic.
+
+ **3.3 Serverless Function (Python) Support**
+
+ - Feature: Deploy individual Python functions as serverless endpoints without managing a full application server.
+ - Value: Caters to microservices architectures, background tasks, and event-driven workloads, similar to AWS Lambda or Google Cloud Functions.
+ - Implementation: Custom runtime environment for functions, API gateway integration.
+
+ **3.4 CLI Tool**
+
+ - Feature: A command-line interface (CLI) for interacting with the platform (deploying, checking status, viewing logs).
+ - Value: Appeals to developers who prefer terminal-based workflows, enabling scripting and automation.
+ - Implementation: Python CLI making API calls to the platform's backend.
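Such a CLI can be little more than argument parsing plus HTTP calls to the platform API. A sketch of the parsing layer using stdlib `argparse` (the subcommands and the `--server` flag are hypothetical, and the actual HTTP calls are omitted):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical 'fasthost' CLI: deploy / status / logs subcommands."""
    parser = argparse.ArgumentParser(prog="fasthost")
    parser.add_argument("--server", default="http://localhost:8000",
                        help="platform API base URL")
    sub = parser.add_subparsers(dest="command", required=True)

    deploy = sub.add_parser("deploy", help="deploy a project from a Git repository")
    deploy.add_argument("repo_url")
    deploy.add_argument("--name", required=True, help="application name")

    sub.add_parser("status", help="list deployed projects")

    logs = sub.add_parser("logs", help="tail logs for a container")
    logs.add_argument("container_name")
    return parser
```

Each subcommand handler would then issue a request against `--server` (for example, a POST to the `/deploy/project` endpoint) and print the response.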
+
+ **3.5 Webhooks for Deployment Events**
+
+ - Feature: Allow users to configure webhooks that trigger on deployment success, failure, or other lifecycle events.
+ - Value: Enables integration with external services (e.g., Slack notifications, CI/CD pipelines).
+
+ **3.6 Comprehensive Documentation & Community**
+
+ - Feature: Extensive documentation covering setup, usage, troubleshooting, and the API; foster a community around the open-source project.
+ - Value: Crucial for adoption and self-sufficiency, reducing the support burden.
+
+ **Contribute:**
+ - Fork the repository
+ - Create a feature branch
+ - Submit a pull request
+
+ Let’s build the future of Python app deployment together!
app.py CHANGED
@@ -1,7 +1,77 @@
- from fastapi import FastAPI
+ from fastapi import FastAPI, Request
+ from fastapi.responses import HTMLResponse
+ from fastapi.templating import Jinja2Templates
+ import docker
+ import dotenv
+ from routers.deploy import router as deploy_router, deployed_projects
+ from routers.controls import router as controls_router
+ from routers.logs import router as logs_router
+
+ # Load environment variables
+ dotenv.load_dotenv()
 
  app = FastAPI()
 
- @app.get("/")
- def greet_json():
-     return {"Hello": "World!"}
+ # --- Templating ---
+ templates = Jinja2Templates(directory="templates")
+
+ # --- Routers ---
+ app.include_router(controls_router, prefix="/controls")
+ app.include_router(logs_router, prefix="/logs")
+ app.include_router(deploy_router, prefix="/deploy")
+
+ # --- Docker Client ---
+ client = docker.from_env()
+
+ # --- Endpoints ---
+ @app.get("/", response_class=HTMLResponse)
+ async def dashboard(request: Request):
+     """Serves the main dashboard page."""
+     return templates.TemplateResponse("dashboard.html", {"request": request})
+
+ @app.get("/projects")
+ def get_projects():
+     """Returns a list of all deployed projects with their status and URL."""
+
+     # Create a list of projects from the deployed_projects dictionary
+     projects_list = []
+     for project_id, details in deployed_projects.items():
+         container_name = details.get("container_name", "N/A")
+         public_url = details.get("public_url", "#")
+         local_url = "#"
+
+         if container_name != "N/A":
+             try:
+                 container = client.containers.get(container_name)
+                 port_bindings = client.api.inspect_container(container.id)['NetworkSettings']['Ports']
+                 if '8080/tcp' in port_bindings and port_bindings['8080/tcp'] is not None:
+                     host_port = port_bindings['8080/tcp'][0]['HostPort']
+                     # Get the local IP address of the machine
+                     import socket
+                     s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+                     try:
+                         # Doesn't matter if the host is reachable
+                         s.connect(('10.255.255.255', 1))
+                         host_ip = s.getsockname()[0]
+                     except Exception:
+                         host_ip = '127.0.0.1'  # Fallback to localhost
+                     finally:
+                         s.close()
+                     local_url = f"http://{host_ip}:{host_port}"
+             except docker.errors.NotFound:
+                 print(f"Container {container_name} not found for project {project_id}.")
+             except Exception as e:
+                 print(f"Error getting local URL for container {container_name}: {e}")
+
+         projects_list.append({
+             "id": project_id,
+             "name": details.get("app_name", "N/A"),
+             "status": details.get("status", "Unknown"),
+             "public_url": public_url,
+             "local_url": local_url,
+             "container_name": container_name
+         })
+
+     return projects_list
examples/.DS_Store ADDED
Binary file (6.15 kB).
 
examples/__pycache__/main.cpython-311.pyc ADDED
Binary file (2.02 kB).
 
examples/fastapi/Dockerfile ADDED
@@ -0,0 +1,10 @@
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ COPY . /app
+
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
+
examples/fastapi/main.py ADDED
@@ -0,0 +1,20 @@
+ from fastapi import FastAPI, Request
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import JSONResponse
+ from fastapi.staticfiles import StaticFiles
+ from fastapi.templating import Jinja2Templates
+ import os
+
+ app = FastAPI()
+
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ @app.get("/echo/{name}")
+ def echo(name: str):
+     return JSONResponse(content={"message": f"Hello, {name}!"})
examples/fastapi/requirements.txt ADDED
@@ -0,0 +1,3 @@
+ FastAPI
+ uvicorn
+ jinja2
examples/flask/Dockerfile ADDED
@@ -0,0 +1,10 @@
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ COPY . /app
+
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ CMD ["gunicorn", "main:app", "--bind", "0.0.0.0:8080"]
+
examples/flask/main.py ADDED
@@ -0,0 +1,14 @@
+ from flask import Flask
+ from flask_cors import CORS
+
+
+ app = Flask(__name__)
+ CORS(app)
+
+ @app.route('/')
+ def index():
+     return "Welcome to the Flask app!"
+
+ @app.route('/hello')
+ def hello():
+     return "Hello, World!"
examples/flask/requirements.txt ADDED
@@ -0,0 +1,3 @@
+ flask
+ flask_cors
+ gunicorn
examples/test/Dockerfile ADDED
@@ -0,0 +1,9 @@
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ COPY . /app
+
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
examples/test/encrypt.py ADDED
@@ -0,0 +1,16 @@
+ import base64
+
+ SECRET_KEY = b"my-simple-key"  # Keep this key short and secret
+
+ def xor_encrypt(data: bytes, key: bytes) -> bytes:
+     return bytes([b ^ key[i % len(key)] for i, b in enumerate(data)])
+
+ def encrypt_video_id(video_id: str) -> str:
+     encrypted = xor_encrypt(video_id.encode(), SECRET_KEY)
+     return base64.urlsafe_b64encode(encrypted).decode().rstrip("=")
+
+ def decrypt_video_id(enc_id: str) -> str:
+     padded = enc_id + "=" * (-len(enc_id) % 4)  # Add padding back
+     encrypted = base64.urlsafe_b64decode(padded.encode())
+     decrypted = xor_encrypt(encrypted, SECRET_KEY)
+     return decrypted.decode()
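The helpers above can be exercised end to end; here they are reproduced self-contained so the round trip can be checked (note that XOR with a short repeating key is obfuscation, not real encryption: fine for opaque IDs, unsuitable for secrets). The key value is illustrative:

```python
import base64

KEY = b"my-simple-key"  # same shape as encrypt.py's SECRET_KEY (illustrative value)

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key; XOR is its own inverse
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_video_id(video_id: str) -> str:
    return base64.urlsafe_b64encode(xor_encrypt(video_id.encode(), KEY)).decode().rstrip("=")

def decrypt_video_id(enc_id: str) -> str:
    padded = enc_id + "=" * (-len(enc_id) % 4)  # restore base64 padding
    return xor_encrypt(base64.urlsafe_b64decode(padded.encode()), KEY).decode()
```

Because XOR is its own inverse, decryption is the same operation with the same key; only the base64 framing differs between the two directions.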
examples/test/main.py ADDED
@@ -0,0 +1,172 @@
+ from fastapi import FastAPI, HTTPException, Request, Query, Response
+ from fastapi.responses import StreamingResponse, HTMLResponse
+ from fastapi.templating import Jinja2Templates
+ from pytubefix import YouTube
+ from pytubefix.cli import on_progress
+ import os
+ import logging
+ import httpx
+ import hashlib
+ from functools import lru_cache
+ from encrypt import encrypt_video_id, decrypt_video_id
+
+ app = FastAPI()
+
+ CHUNK_SIZE = 1024 * 1024  # 1MB
+
+ logger = logging.getLogger(__name__)
+
+ def open_file_range(file_path: str, start: int, end: int):
+     with open(file_path, "rb") as f:
+         f.seek(start)
+         bytes_to_read = end - start + 1
+         while bytes_to_read > 0:
+             chunk = f.read(min(CHUNK_SIZE, bytes_to_read))
+             if not chunk:
+                 break
+             bytes_to_read -= len(chunk)
+             yield chunk
+
+
+ def generate_etag(file_path):
+     hasher = hashlib.md5()
+     with open(file_path, 'rb') as f:
+         while chunk := f.read(8192):
+             hasher.update(chunk)
+     return hasher.hexdigest()
+
+
+ @lru_cache(maxsize=128)
+ def get_video_metadata(video_id: str):
+     yt = YouTube(f"https://www.youtube.com/watch?v={video_id}", client='WEB_EMBED')
+     if yt.length >= 600:
+         return {
+             "title": yt.title,
+             "description": yt.description,
+             "author": yt.author,
+             "duration": yt.length,
+             "views": yt.views,
+             "date": yt.publish_date,
+             "video_url": yt.streams.get_highest_resolution().url,
+             "audio_url": yt.streams.get_audio_only().url,
+         }
+     else:
+         return {
+             "title": yt.title,
+             "description": yt.description,
+             "author": yt.author,
+             "duration": yt.length,
+             "views": yt.views,
+             "date": yt.publish_date,
+         }
+
+
+ @app.get("/api/video/{video_id}")
+ def get_video_info(video_id: str, request: Request):
+     try:
+         metadata = get_video_metadata(video_id)
+         encrypted_video_id = encrypt_video_id(video_id)
+
+         BASE_URL = request.base_url
+
+         if metadata['duration'] >= 600:
+             return {**metadata}
+         else:
+             return {
+                 **metadata,
+                 "video_url": f"{BASE_URL}video/{encrypted_video_id}",
+                 "audio_url": f"{BASE_URL}audio/{encrypted_video_id}"
+             }
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Error: {str(e)}")
+
+
+ @app.get("/video/{video_id}")
+ async def stream_video(video_id: str, request: Request, download: bool = Query(False)):
+     try:
+         decrypted_video_id = decrypt_video_id(video_id)
+         yt = YouTube(f"https://www.youtube.com/watch?v={decrypted_video_id}")
+         stream = yt.streams.get_highest_resolution()
+         url = stream.url
+
+         headers = {}
+         if range_header := request.headers.get("range"):
+             headers["Range"] = range_header
+
+         async def proxy_stream():
+             try:
+                 async with httpx.AsyncClient() as client:
+                     async with client.stream("GET", url, headers=headers, timeout=60) as response:
+                         if response.status_code not in (200, 206):
+                             logger.error(f"Failed to stream: {response.status_code}")
+                             return
+                         async for chunk in response.aiter_bytes(CHUNK_SIZE):
+                             yield chunk
+             except Exception as e:
+                 logger.error(f"Streaming error: {str(e)}")
+                 return
+
+         response_headers = {
+             "Accept-Ranges": "bytes",
+             "Cache-Control": "public, max-age=3600"
+         }
+
+         # Handle filename safely
+         title = yt.title.encode("utf-8", "ignore").decode("utf-8")
+         if download:
+             response_headers["Content-Disposition"] = f'attachment; filename="{title}.mp4"'
+         else:
+             response_headers["Content-Disposition"] = f'inline; filename="{title}.mp4"'
+
+         return StreamingResponse(
+             proxy_stream(),
+             media_type="video/mp4",
+             headers=response_headers
+         )
+
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Could not fetch video URL: {str(e)}")
+
+ @app.get("/audio/{video_id}")
+ async def stream_audio(video_id: str, request: Request, download: bool = Query(False)):
+     try:
+         decrypted_video_id = decrypt_video_id(video_id)
+         yt = YouTube(f"https://www.youtube.com/watch?v={decrypted_video_id}")
+         stream = yt.streams.get_audio_only()
+         url = stream.url
+
+         headers = {
+             "User-Agent": request.headers.get("user-agent", "Mozilla/5.0"),
+         }
+         if range_header := request.headers.get("range"):
+             headers["Range"] = range_header
+
+         async def proxy_stream():
+             async with httpx.AsyncClient(follow_redirects=True) as client:
+                 async with client.stream("GET", url, headers=headers) as response:
+                     if response.status_code not in (200, 206):
+                         raise HTTPException(status_code=502, detail="Source stream error")
+                     async for chunk in response.aiter_bytes(CHUNK_SIZE):
+                         yield chunk
+
+         response_headers = {
+             "Accept-Ranges": "bytes",
+             "Cache-Control": "public, max-age=3600"
+         }
+
+         # Handle filename safely
+         title = yt.title.encode("utf-8", "ignore").decode("utf-8")
+         if download:
+             response_headers["Content-Disposition"] = f'attachment; filename="{title}.mp3"'
+         else:
+             response_headers["Content-Disposition"] = f'inline; filename="{title}.mp3"'
+
+         return StreamingResponse(
+             proxy_stream(),
+             media_type=stream.mime_type or "audio/mp4",
+             headers=response_headers
+         )
+
+     except Exception as e:
+         logger.error(f"Streaming error: {e}")
+         raise HTTPException(status_code=500, detail=f"Error: {str(e)}")
examples/test/requirements.txt ADDED
@@ -0,0 +1,6 @@
+ fastapi
+ uvicorn[standard]
+ pytubefix
+ httpx
+ jinja2
+ cryptography
project.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5470cebe77d2a9c10b64850108e237040fac72c4519aa8a28e54bfc24fc64d6
+ size 2498
requirements.txt CHANGED
@@ -1,2 +1,7 @@
- fastapi
- uvicorn[standard]
+ docker
+ FastAPI
+ uvicorn
+ pyngrok
+ jinja2
+ requests
+ python-dotenv
routers/__init__.py ADDED
File without changes
routers/__pycache__/__init__.cpython-311.pyc ADDED
Binary file (164 Bytes).
 
routers/__pycache__/controls.cpython-311.pyc ADDED
Binary file (2.79 kB).
 
routers/__pycache__/deploy.cpython-311.pyc ADDED
Binary file (4.57 kB).
 
routers/__pycache__/logs.cpython-311.pyc ADDED
Binary file (1.32 kB).
 
routers/controls.py ADDED
@@ -0,0 +1,36 @@
+ from fastapi import APIRouter, HTTPException
+ from fastapi.responses import JSONResponse, HTMLResponse
+ import docker
+
+ router = APIRouter()
+
+ docker_client = docker.from_env()
+
+ @router.get("/start/{container_name}")
+ def start_container(container_name: str):
+     try:
+         container = docker_client.containers.get(container_name)
+         container.unpause()  # Resume a container previously paused via /pause
+         return JSONResponse({"status": "started"})
+     except Exception as e:
+         return JSONResponse({"error": str(e)}, status_code=500)
+
+ @router.get("/pause/{container_name}")
+ def pause_container(container_name: str):
+     try:
+         container = docker_client.containers.get(container_name)
+         container.pause()
+         return JSONResponse({"status": "paused"})
+     except Exception as e:
+         return JSONResponse({"error": str(e)}, status_code=500)
+
+ @router.get("/stop/{container_name}")
+ def stop_container(container_name: str):
+     try:
+         container = docker_client.containers.get(container_name)
+         container.stop()
+         container.remove(force=True)
+         return JSONResponse({"status": "stopped and removed"})
+     except Exception as e:
+         return JSONResponse({"error": str(e)}, status_code=500)
routers/deploy.py ADDED
@@ -0,0 +1,446 @@
1
+ # Standard library imports
2
+ import os
3
+ import uuid
4
+ import time
5
+ import zipfile # Still imported, but primarily for handling potential old zip logic or error messages
6
+
7
+ # Third-party library imports
8
+ import docker # For interacting with Docker daemon
9
+ import git # For Git repository operations (requires 'GitPython' package: pip install GitPython)
10
+ import hmac # For validating GitHub webhook signatures (important for security)
11
+ import hashlib # For hashing in webhook signature validation
12
+ from pyngrok import ngrok # For creating public URLs (ensure ngrok is configured)
13
+ from fastapi import APIRouter, HTTPException, UploadFile, Form, Request, BackgroundTasks # Added Request and BackgroundTasks
14
+ from fastapi.responses import JSONResponse
15
+
16
+ # Initialize FastAPI router
17
+ router = APIRouter()
18
+
19
+ deployed_projects = {}
20
+
21
+
22
+ GITHUB_WEBHOOK_SECRET = os.getenv("GITHUB_WEBHOOK_SECRET", "your_github_webhook_secret_here_CHANGE_THIS")
23
+ if GITHUB_WEBHOOK_SECRET == "your_github_webhook_secret_here_CHANGE_THIS":
24
+ print("WARNING: GITHUB_WEBHOOK_SECRET is not set. Webhook security is compromised.")
25
+
26
+ # --- Helper Functions ---
27
+
28
+ # Function to recursively find a file (case-insensitive) within a directory
29
+ def _find_file_in_project(filename: str, root_dir: str) -> str | None:
30
+ """
31
+ Searches for a file (case-insensitive) within the given root directory and its subdirectories.
32
+ Returns the absolute path to the file if found, otherwise None.
33
+ """
34
+ filename_lower = filename.lower()
35
+ for dirpath, _, files in os.walk(root_dir):
36
+ for file in files:
37
+ if file.lower() == filename_lower:
38
+ return os.path.join(dirpath, file)
39
+ return None
40
+
41
+ # Function to build and deploy a Docker container from a project path
42
+ async def _build_and_deploy(project_id: str, project_path: str, app_name: str, existing_container_name: str = None):
43
+ """
44
+ Handles the Docker build and deployment process for a given project.
45
+ If an existing_container_name is provided, it attempts to stop and remove it first.
46
+ Manages ngrok tunnels for the deployed application.
47
+ """
48
+ docker_client = docker.from_env()
49
+
50
+ # Define consistent naming for Docker image and container
51
+ image_name = f"{app_name.lower()}_{project_id[:8]}"
52
+ container_name = f"{image_name}_container"
53
+
54
+ try:
55
+ # Step 1: Clean up old containers and images if they exist
56
+ # Stop and remove the previously deployed container for this project
57
+ if existing_container_name:
58
+ print(f"Attempting to stop and remove existing container: {existing_container_name}")
59
+ try:
60
+ old_container = docker_client.containers.get(existing_container_name)
61
+ old_container.stop(timeout=5) # Give 5 seconds to stop gracefully
62
+ old_container.remove(force=True)
63
+ print(f"Successfully stopped and removed old container: {existing_container_name}")
64
+ except docker.errors.NotFound:
65
+ print(f"Existing container {existing_container_name} not found, proceeding with new deployment.")
66
+ except Exception as e:
67
+ print(f"Error stopping/removing old container {existing_container_name}: {e}")
68
+
69
+ # Remove any exited or created containers that might be lingering from previous runs
70
+ # (This is a general cleanup, not specific to this project_id, but good practice)
71
+ for c in docker_client.containers.list(all=True):
72
+ if c.status in ["created", "exited"]: # Only remove non-running containers
73
+ # Be cautious: only remove containers clearly associated with this deployment logic
74
+ # For more robust logic, might check labels or names more strictly
75
+ if c.name.startswith(f"{app_name.lower()}_{project_id[:8]}") or c.name.startswith(f"ngrok-"):
76
+ print(f"Removing leftover container {c.name} ({c.id}) with status {c.status}")
77
+ try:
78
+ c.remove(force=True)
79
+ except Exception as e:
80
+ print(f"Error removing leftover container {c.name}: {e}")
81
+
82
+ # Step 2: Build Docker image
83
+ print(f"Building Docker image from {project_path} with tag: {image_name}")
84
+ image, build_logs_generator = docker_client.images.build(path=project_path, tag=image_name, rm=True)
85
+ # Process build logs (can be streamed to UI in a real application)
86
+ for log_line in build_logs_generator:
87
+ if 'stream' in log_line:
88
+ print(f"[BUILD LOG] {log_line['stream'].strip()}")
89
+ elif 'error' in log_line:
90
+ print(f"[BUILD ERROR] {log_line['error'].strip()}")
91
+
92
+ print(f"Docker image built successfully: {image.id}")
93
+
94
+ # Step 3: Run new Docker container
95
+ print(f"Running new container {container_name} from image {image_name}")
96
+ container = docker_client.containers.run(
97
+ image=image_name,
98
+ ports={"8080/tcp": None}, # Docker will assign a random host port for 8080/tcp
99
+ name=container_name,
100
+ detach=True, # Run in background
101
+ mem_limit="512m", # Limit memory usage
102
+ nano_cpus=1_000_000_000, # Limit CPU usage to 1 full core (1 billion nano-CPUs)
103
+ read_only=True, # Make container filesystem read-only (except tmpfs)
104
+ tmpfs={"/tmp": ""}, # Mount an in-memory tmpfs for /tmp directory
105
+ user="1001:1001" # Run as a non-root user (important for security)
106
+ )
107
+ print(f"Container started with ID: {container.id}")
108
+
109
+ # Wait a moment for the container to fully start and expose its port
110
+ time.sleep(5) # Increased sleep to give the application within the container more time
111
+
112
+ # Retrieve the dynamically assigned host port for the container's 8080 port
113
+ port_info = docker_client.api.port(container.id, 8080)
114
+ if not port_info:
115
+ # If port 8080 is not exposed, the container likely failed to start or is not exposing correctly
116
+ print(f"Error: Port 8080 not exposed by container {container.id}. Inspecting container logs...")
117
+ try:
118
+ container_logs = container.logs().decode('utf-8')
119
+ print(f"Container logs:\n{container_logs}")
120
+ except Exception as log_e:
121
+ print(f"Could not retrieve container logs: {log_e}")
122
+ container.stop()
123
+ container.remove(force=True)
124
+ raise Exception("Port 8080 not exposed by container or container failed to start correctly. Check container logs.")
125
+
126
+ host_port = port_info[0]['HostPort']
127
+ print(f"Container {container.id} is accessible on host port: {host_port}")
128
+
129
+ # Step 4: Manage ngrok tunnel
130
+ # Check if an ngrok tunnel already exists for this project and close it
131
+ if project_id in deployed_projects and deployed_projects[project_id].get('ngrok_tunnel'):
132
+ existing_tunnel = deployed_projects[project_id]['ngrok_tunnel']
133
+ print(f"Closing existing ngrok tunnel: {existing_tunnel.public_url}")
134
+ try:
135
+ existing_tunnel.disconnect()
136
+             except Exception as ngrok_disconnect_e:
+                 print(f"Error disconnecting existing ngrok tunnel: {ngrok_disconnect_e}")
+             deployed_projects[project_id]['ngrok_tunnel'] = None  # Clear the reference
+
+         # Connect a new ngrok tunnel to the dynamically assigned host port
+         print(f"Connecting new ngrok tunnel to host port {host_port}")
+         tunnel = ngrok.connect(host_port, bind_tls=True)  # bind_tls=True for HTTPS
+         public_url = tunnel.public_url
+         print(f"Ngrok public URL for {app_name}: {public_url}")
+
+         # Step 5: Update global state with new deployment details
+         # Ensure the project_id exists in deployed_projects before updating
+         if project_id not in deployed_projects:
+             deployed_projects[project_id] = {}  # Initialize if not already present (should be by deploy_from_git)
+
+         deployed_projects[project_id].update({
+             "container_id": container.id,
+             "container_name": container_name,
+             "ngrok_tunnel": tunnel,
+             "public_url": public_url,
+             "status": "deployed"  # Set status to deployed on success
+         })
+
+         return public_url, container_name
+
+     except docker.errors.BuildError as e:
+         print(f"Docker build error: {e}")
+         # Capture and return detailed build logs for better debugging
+         build_logs_str = "\n".join(str(log_line.get('stream', '')).strip() for log_line in e.build_log if 'stream' in log_line)
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         raise HTTPException(status_code=500, detail=f"Docker build failed: {e.msg}\nLogs:\n{build_logs_str}")
+     except docker.errors.ContainerError as e:
+         print(f"Docker container runtime error: {e}")
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         stderr = e.stderr.decode() if e.stderr else str(e)  # stderr may be None
+         raise HTTPException(status_code=500, detail=f"Container failed during runtime: {stderr}")
+     except docker.errors.APIError as e:
+         print(f"Docker API error: {e}")
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         raise HTTPException(status_code=500, detail=f"Docker daemon or API error: {e.explanation}")
+     except Exception as e:
+         print(f"General deployment error: {e}")
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         raise HTTPException(status_code=500, detail=f"Deployment process failed unexpectedly: {str(e)}")
+
+ # --- API Endpoints ---
+
+ @router.post("/project")
+ async def deploy_from_git(repo_url: str = Form(...), app_name: str = Form(...)):
+     """
+     Deploys a FastAPI/Flask application from a specified Git repository.
+     The repository must contain a main.py, requirements.txt, and Dockerfile.
+     """
+     # Basic validation for the Git repository URL format
+     if not repo_url.startswith(("http://", "https://", "git@", "ssh://")):
+         raise HTTPException(status_code=400, detail="Invalid Git repository URL format. Must be HTTP(S) or SSH.")
+
+     # Generate a unique ID for this project
+     project_id = str(uuid.uuid4())
+
+     # Define project directories
+     base_dir = os.path.dirname(os.path.abspath(__file__))  # This is where 'router.py' is
+     projects_dir = os.path.abspath(os.path.join(base_dir, "..", "projects"))  # Parent directory's 'projects' folder
+     os.makedirs(projects_dir, exist_ok=True)  # Ensure the base projects directory exists
+
+     project_path = os.path.join(projects_dir, project_id)
+     os.makedirs(project_path, exist_ok=True)  # Create a unique directory for this project
+
+     try:
+         # Step 1: Clone the Git repository
+         print(f"Cloning repository {repo_url} into {project_path}")
+         git.Repo.clone_from(repo_url, project_path)
+         print("Repository cloned successfully.")
+     except git.exc.GitCommandError as e:
+         print(f"Git clone failed: {e.stderr}")  # GitCommandError.stderr is already a str
+         # Clean up the partially created project directory if cloning fails
+         if os.path.exists(project_path):
+             import shutil
+             shutil.rmtree(project_path)
+         raise HTTPException(status_code=400, detail=f"Failed to clone repository: {e.stderr}")
+     except Exception as e:
+         print(f"Unexpected error during git clone: {e}")
+         if os.path.exists(project_path):
+             import shutil
+             shutil.rmtree(project_path)
+         raise HTTPException(status_code=500, detail=f"An unexpected error occurred during repository cloning: {str(e)}")
+
+     # Step 2: Validate required project files (main.py, requirements.txt, Dockerfile)
+     main_py_path = _find_file_in_project("main.py", project_path)
+     requirements_txt_path = _find_file_in_project("requirements.txt", project_path)
+     dockerfile_path = _find_file_in_project("Dockerfile", project_path)
+
+     missing_files = []
+     if not main_py_path:
+         missing_files.append("main.py")
+     if not requirements_txt_path:
+         missing_files.append("requirements.txt")
+     if not dockerfile_path:
+         missing_files.append("Dockerfile")
+
+     if missing_files:
+         # Clean up the project directory if essential files are missing
+         if os.path.exists(project_path):
+             import shutil
+             shutil.rmtree(project_path)
+         raise HTTPException(
+             status_code=400,
+             detail=f"The cloned repository is missing required file(s): {', '.join(missing_files)} (case-insensitive search)."
+         )
+
+     # Ensure Dockerfile is at the root of the project_path for Docker build context
+     if os.path.dirname(dockerfile_path) != project_path:
+         print(f"[DEBUG] Moving Dockerfile from {dockerfile_path} to project root: {project_path}")
+         target_dockerfile_path = os.path.join(project_path, "Dockerfile")
+         os.replace(dockerfile_path, target_dockerfile_path)
+         dockerfile_path = target_dockerfile_path  # Update the path to reference the new location
+
+     # Step 3: Store initial project details in global state (or database)
+     deployed_projects[project_id] = {
+         "app_name": app_name,
+         "repo_url": repo_url,
+         "project_path": project_path,
+         "status": "building",  # Set initial status
+         "container_name": None,  # Will be set by _build_and_deploy
+         "public_url": None,  # Will be set by _build_and_deploy
+         "ngrok_tunnel": None  # Will be set by _build_and_deploy
+     }
+     print(f"Project {project_id} initialized for deployment.")
+
+     # Step 4: Trigger the build and deploy process
+     try:
+         public_url, container_name = await _build_and_deploy(project_id, project_path, app_name)
+         return JSONResponse({
+             "project_id": project_id,
+             "container_name": container_name,
+             "preview_url": public_url,
+             "message": "Deployment initiated from Git repository. Check logs for status."
+         }, status_code=202)  # Use 202 Accepted, as deployment happens in background
+     except HTTPException:
+         # If _build_and_deploy raises a specific HTTPException, re-raise it
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         raise
+     except Exception as e:
+         # Catch any other unexpected errors during the build/deploy phase
+         if project_id in deployed_projects:
+             deployed_projects[project_id]["status"] = "failed"
+         print(f"Error during initial _build_and_deploy for project {project_id}: {e}")
+         raise HTTPException(status_code=500, detail=f"Initial deployment failed unexpectedly: {str(e)}")
+
+ @router.post("/webhook/github")
+ async def github_webhook(request: Request, background_tasks: BackgroundTasks):
+     """
+     Endpoint to receive GitHub webhook events (e.g., push events) and trigger redeployments.
+     """
+     # --- Security: Verify GitHub Webhook Signature ---
+     # This is CRUCIAL to ensure the webhook is from GitHub and hasn't been tampered with.
+     # For production, DO NOT comment this out.
+     signature_header = request.headers.get("X-Hub-Signature-256")
+     if not signature_header:
+         raise HTTPException(status_code=403, detail="X-Hub-Signature-256 header missing.")
+
+     # Read the raw request body once to use for hashing
+     body = await request.body()
+
+     try:
+         # Calculate expected signature
+         sha_name, signature = signature_header.split("=", 1)
+         if sha_name != "sha256":
+             raise HTTPException(status_code=400, detail="Invalid X-Hub-Signature-256 algorithm. Only sha256 supported.")
+
+         # Use HMAC-SHA256 with your secret key to hash the raw request body
+         # Ensure the secret is encoded to bytes
+         mac = hmac.new(GITHUB_WEBHOOK_SECRET.encode("utf-8"), body, hashlib.sha256)
+
+         # Compare the calculated hash with the signature received from GitHub
+         if not hmac.compare_digest(mac.hexdigest(), signature):
+             raise HTTPException(status_code=403, detail="Invalid GitHub signature.")
+     except HTTPException:
+         raise  # don't let the generic handler below swallow the specific 400/403 above
+     except Exception as e:
+         print(f"Webhook signature verification failed: {e}")
+         raise HTTPException(status_code=403, detail="Signature verification failed.")
+
+     # Parse the JSON payload from the webhook
+     payload = await request.json()
+     github_event = request.headers.get("X-GitHub-Event")
+
+     print(f"Received GitHub '{github_event}' webhook for repository: {payload.get('repository', {}).get('full_name')}")
+
+     # Process only 'push' events
+     if github_event != "push":
+         return JSONResponse({"message": f"Received '{github_event}' event, but only 'push' events are processed."}, status_code=200)
+
+     # Get the repository URL from the webhook payload
+     repo_url_from_webhook = payload.get("repository", {}).get("html_url")  # Prefer html_url or clone_url
+     if not repo_url_from_webhook:
+         raise HTTPException(status_code=400, detail="Repository URL not found in webhook payload.")
+
+     # Find the project linked to this repository in our in-memory storage
+     project_to_redeploy = None
+     project_id_to_redeploy = None
+     for project_id, project_data in deployed_projects.items():
+         # Match based on repo_url. A more robust solution might normalize URLs or use repository IDs.
+         if project_data.get("repo_url") == repo_url_from_webhook:
+             project_to_redeploy = project_data
+             project_id_to_redeploy = project_id
+             break
+
+     if not project_to_redeploy:
+         print(f"No active project found for repository: {repo_url_from_webhook}. Webhook ignored.")
+         return JSONResponse({"message": "No associated project found for this repository, ignoring webhook."}, status_code=200)
+
+     print(f"Received push for {repo_url_from_webhook}. Triggering redeployment for project {project_id_to_redeploy} ({project_to_redeploy['app_name']}).")
+
+     # Step 1: Pull the latest changes from the Git repository
+     project_path = project_to_redeploy["project_path"]
+     try:
+         repo = git.Repo(project_path)
+         origin = repo.remotes.origin
+         print(f"Pulling latest changes for {repo_url_from_webhook} into {project_path}")
+         origin.pull()  # Pull the latest changes from the remote
+         print("Latest changes pulled successfully.")
+     except git.exc.GitCommandError as e:
+         print(f"Failed to pull latest changes for {repo_url_from_webhook}: {e.stderr}")
+         # Update project status to failed if pull fails
+         deployed_projects[project_id_to_redeploy]["status"] = "failed"
+         return JSONResponse({"error": f"Failed to pull latest changes: {e.stderr}"}, status_code=500)
+     except Exception as e:
+         print(f"Unexpected error during git pull: {e}")
+         deployed_projects[project_id_to_redeploy]["status"] = "failed"
+         return JSONResponse({"error": f"An unexpected error occurred during git pull: {str(e)}"}, status_code=500)
+
+     # Step 2: Trigger redeployment in a background task
+     # Using FastAPI's BackgroundTasks ensures the webhook endpoint returns immediately,
+     # preventing timeouts for GitHub, while the redeployment happens asynchronously.
+
+     # Get the current container name for proper cleanup in _build_and_deploy
+     current_container_name = project_to_redeploy.get("container_name")
+
+     # Add the build and deploy task to background tasks
+     background_tasks.add_task(
+         _build_and_deploy,
+         project_id_to_redeploy,
+         project_path,
+         project_to_redeploy["app_name"],
+         current_container_name  # Pass existing container name for cleanup
+     )
+
+     # Update project status to indicate redeployment is in progress
+     deployed_projects[project_id_to_redeploy]["status"] = "redeploying"
+
+     return JSONResponse(
+         {"message": f"Redeployment for project {project_id_to_redeploy} initiated from GitHub webhook."},
+         background=background_tasks,
+         status_code=202  # 202 Accepted: request has been accepted for processing
+     )
+
+ # --- Cleanup Endpoint (Optional, for manual testing/management) ---
+ @router.post("/project/delete/{project_id}")
+ async def delete_project(project_id: str):
+     """
+     Deletes a deployed project, its Docker container, ngrok tunnel, and local files.
+     """
+     if project_id not in deployed_projects:
+         raise HTTPException(status_code=404, detail=f"Project with ID {project_id} not found.")
+
+     project_data = deployed_projects[project_id]
+
+     # Stop and remove Docker container
+     docker_client = docker.from_env()
+     container_name = project_data.get("container_name")
+     if container_name:
+         try:
+             container = docker_client.containers.get(container_name)
+             container.stop(timeout=5)
+             container.remove(force=True)
+             print(f"Container {container_name} for project {project_id} removed.")
+         except docker.errors.NotFound:
+             print(f"Container {container_name} not found, already removed?")
+         except Exception as e:
+             print(f"Error removing container {container_name}: {e}")
+             # Do not raise HTTPException; try to continue cleanup
+
+     # Disconnect ngrok tunnel
+     ngrok_tunnel = project_data.get("ngrok_tunnel")
+     if ngrok_tunnel:
+         try:
+             ngrok.disconnect(ngrok_tunnel.public_url)  # pyngrok disconnects by public URL
+             print(f"Ngrok tunnel for project {project_id} disconnected.")
+         except Exception as e:
+             print(f"Error disconnecting ngrok tunnel for project {project_id}: {e}")
+
+     # Remove local project directory
+     project_path = project_data.get("project_path")
+     if project_path and os.path.exists(project_path):
+         try:
+             import shutil
+             shutil.rmtree(project_path)
+             print(f"Project directory {project_path} removed.")
+         except Exception as e:
+             print(f"Error removing project directory {project_path}: {e}")
+
+     # Remove from global state
+     del deployed_projects[project_id]
+     print(f"Project {project_id} removed from deployed_projects.")
+
+     return JSONResponse({"message": f"Project {project_id} and associated resources deleted."})
+
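The HMAC check in the webhook handler above can be exercised in isolation, with no server running. A minimal sketch (the secret and payload below are made up for illustration):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Validate an X-Hub-Signature-256 header against the raw request body."""
    sha_name, _, signature = signature_header.partition("=")
    if sha_name != "sha256":
        return False
    mac = hmac.new(secret.encode("utf-8"), body, hashlib.sha256)
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(mac.hexdigest(), signature)

secret = "my-webhook-secret"           # hypothetical secret
body = b'{"ref": "refs/heads/main"}'   # hypothetical push payload
good = "sha256=" + hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()

assert verify_github_signature(secret, body, good)
assert not verify_github_signature(secret, body, "sha256=deadbeef")
assert not verify_github_signature(secret, body, "sha1=" + good[7:])
```

Note that the signature is computed over the *raw* bytes of the body, which is why the endpoint reads `await request.body()` before parsing JSON.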
routers/logs.py ADDED
@@ -0,0 +1,16 @@
+ import html
+
+ from fastapi import APIRouter
+ from fastapi.responses import JSONResponse, HTMLResponse
+ import docker
+
+ router = APIRouter()
+
+ docker_client = docker.from_env()
+
+ @router.get("/fetch/{container_name}")
+ def fetch_logs(container_name: str):  # returns a fixed tail, not a stream
+     try:
+         container = docker_client.containers.get(container_name)
+         logs = container.logs(tail=100).decode()
+         # Escape the logs so stray markup in app output renders literally
+         return HTMLResponse(f"<pre>{html.escape(logs)}</pre>")
+     except Exception as e:
+         return JSONResponse({"error": str(e)}, status_code=500)
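Container logs routinely contain `<`, `>`, and `&` (tracebacks, query strings), which a browser would interpret as markup once wrapped in a `<pre>` tag; `html.escape` keeps them literal. A small illustration with a made-up log line:

```python
import html

raw_logs = 'INFO: GET /items?q=<script>alert(1)</script>\n'
safe = f"<pre>{html.escape(raw_logs)}</pre>"

# The markup is neutralized but the text is preserved
assert "<script>" not in safe
assert "&lt;script&gt;" in safe
```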
templates/dashboard.html ADDED
@@ -0,0 +1,102 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>FastHost Dashboard</title>
+     <style>
+         body {
+             font-family: Arial, sans-serif;
+             background: #f8f9fa;
+             padding: 20px;
+         }
+         h1 {
+             color: #343a40;
+         }
+         .card {
+             background: white;
+             padding: 15px;
+             margin: 10px 0;
+             border-radius: 8px;
+             box-shadow: 0 2px 4px rgba(0,0,0,0.1);
+         }
+         .buttons button {
+             margin-right: 5px;
+         }
+         pre {
+             background: #e9ecef;
+             padding: 10px;
+             max-height: 200px;
+             overflow-y: auto;
+         }
+     </style>
+ </head>
+ <body>
+     <h1>🚀 FastHost Dashboard</h1>
+     <div id="projects"></div>
+
+     <script>
+         async function fetchProjects() {
+             const res = await fetch('/projects');
+             const data = await res.json();
+             const container = document.getElementById('projects');
+             container.innerHTML = '';
+
+             data.forEach(proj => {
+                 const card = document.createElement('div');
+                 card.className = 'card';
+
+                 card.innerHTML = `
+                     <h3>${proj.name}</h3>
+                     <p>Status: <b>${proj.status}</b></p>
+                     <p>Public URL: <a href="${proj.public_url}" target="_blank">${proj.public_url}</a></p>
+                     <p>Local URL: <a href="${proj.local_url}" target="_blank">${proj.local_url}</a></p>
+                     <div class="buttons">
+                         <button onclick="stopProject('${proj.container_name}')">🛑 Stop</button>
+                         <button onclick="startProject('${proj.container_name}')">▶️ Start</button>
+                         <button onclick="pauseProject('${proj.container_name}')">⏸️ Pause</button>
+                         <button onclick="toggleLogs('${proj.container_name}')">📜 View Logs</button>
+                     </div>
+                     <pre id="log-${proj.container_name}" style="display:none"></pre>
+                 `;
+
+                 container.appendChild(card);
+             });
+         }
+
+         async function stopProject(containerName) {
+             await fetch(`/controls/stop/${containerName}`);
+             fetchProjects();
+         }
+
+         async function startProject(containerName) {
+             await fetch(`/controls/start/${containerName}`);
+             fetchProjects();
+         }
+
+         async function pauseProject(containerName) {
+             await fetch(`/controls/pause/${containerName}`);
+             fetchProjects();
+         }
+
+         function toggleLogs(containerName) {
+             const pre = document.getElementById(`log-${containerName}`);
+             if (pre.style.display === 'none') {
+                 pre.style.display = 'block';
+                 fetchLogs(containerName, pre);
+             } else {
+                 pre.style.display = 'none';
+             }
+         }
+
+         async function fetchLogs(containerName, target) {
+             const res = await fetch(`/logs/fetch/${containerName}`);
+             const logs = await res.text();
+             target.innerHTML = logs;
+         }
+
+         fetchProjects();
+         setInterval(fetchProjects, 5000); // Refresh every 5 seconds
+     </script>
+ </body>
+ </html>
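The dashboard template reads a fixed set of fields from each entry returned by `/projects`. That endpoint is not part of this diff, so the shape below is an inference from the fields the template dereferences, with made-up values:

```python
# Hypothetical shape of one entry from the /projects endpoint, inferred
# from the properties the dashboard template reads (proj.name, proj.status,
# proj.public_url, proj.local_url, proj.container_name).
project = {
    "name": "maouapp",
    "status": "deployed",
    "public_url": "https://example.ngrok-free.app",
    "local_url": "http://localhost:49162",
    "container_name": "maouapp_container",
}

# Every key the template dereferences must be present, or the cards
# render "undefined" in place of the missing value.
required = {"name", "status", "public_url", "local_url", "container_name"}
assert required <= project.keys()
```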
upload.py ADDED
@@ -0,0 +1,41 @@
+ import os
+ import zipfile
+ import requests
+ import dotenv
+
+ # Load environment variables
+ dotenv.load_dotenv()
+
+ def zip_folder(source_dir, zip_path):
+     with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+         for root, _, files in os.walk(source_dir):
+             for file in files:
+                 file_path = os.path.join(root, file)
+                 arcname = os.path.relpath(file_path, start=source_dir)
+                 zipf.write(file_path, arcname)
+
+ def upload_zip(zip_path, app_name, server_url):
+     with open(zip_path, 'rb') as f:
+         files = {'file': (os.path.basename(zip_path), f, 'application/zip')}
+         data = {'app_name': app_name}
+         response = requests.post(f'{server_url}/deploy/project', files=files, data=data)
+
+     if response.status_code in (200, 202):  # the deploy endpoint answers 202 Accepted
+         print("✅ Deployed Successfully!")
+         print(response.json())
+     else:
+         print("❌ Deployment Failed:")
+         print(response.text)
+
+ if __name__ == "__main__":
+     # Configuration
+     folder_to_zip = "examples/test"  # Folder with main.py and requirements.txt
+     zip_output = "project.zip"
+     app_name = "maouapp"
+     server_url = 'http://192.168.29.195:8000'  # Replace with your server URL
+
+     print("📦 Zipping project...")
+     zip_folder(folder_to_zip, zip_output)
+
+     print("📤 Uploading...")
+     upload_zip(zip_output, app_name, server_url)
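`zip_folder` stores archive names relative to the source directory, so the zip unpacks without a leading wrapper folder. A self-contained round trip using only temporary paths and the standard library:

```python
import os
import tempfile
import zipfile

def zip_folder(source_dir, zip_path):
    # Same logic as upload.py: archive names are relative to source_dir
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
        for root, _, files in os.walk(source_dir):
            for file in files:
                file_path = os.path.join(root, file)
                arcname = os.path.relpath(file_path, start=source_dir)
                zipf.write(file_path, arcname)

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as out:
    # Build a tiny project tree: main.py at the root, one nested module
    os.makedirs(os.path.join(src, "app"))
    with open(os.path.join(src, "main.py"), "w") as f:
        f.write("print('hi')\n")
    with open(os.path.join(src, "app", "util.py"), "w") as f:
        f.write("pass\n")

    zip_path = os.path.join(out, "project.zip")
    zip_folder(src, zip_path)
    with zipfile.ZipFile(zip_path) as zf:
        names = sorted(zf.namelist())

# Entries are rooted at the project itself, not at a wrapping directory
assert names == ["app/util.py", "main.py"]
```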