---
title: Smart Edit Assistant
emoji: 🎬
colorFrom: blue
colorTo: indigo
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
hardware: gpu
hf_oauth: true
hf_oauth_scopes:
  - email
---

# Smart Edit Assistant

Smart Edit Assistant is an AI-powered web application that automates video editing tasks end-to-end. Users can upload video files, let the system extract audio, transcribe the speech (e.g., via Whisper), analyze content with GPT-like models, and apply automated cuts and edits using FFmpeg or MoviePy. The end result is a curated, shorter (or otherwise improved) video that can be downloaded, saving creators time on manual post-production.

## Features

- **Video Upload & Preview:** Upload `.mp4`, `.mov`, or `.mkv` files.
- **Audio Extraction:** Efficiently pulls the audio track for transcription.
- **AI Transcription:** Uses OpenAI Whisper (API or local) or other STT solutions.
- **LLM Content Analysis:** GPT-4 or an open-source LLM suggests cuts and highlight segments.
- **Automated Editing:** Uses FFmpeg/MoviePy to cut and reassemble segments, optionally inserting transitions.
- **Result Preview:** Plays the edited video in-browser before download.
- **(Optional) User Authentication:** Configurable free vs. premium tiers.

## Repository Structure

```
smart-edit-assistant/
├── app.py                  # Main Streamlit app
├── pipelines/
│   ├── video_process.py    # Audio extraction & editing logic (MoviePy / FFmpeg)
│   ├── ai_inference.py     # Whisper/GPT calls for transcription & instructions
│   └── auth_utils.py       # Optional authentication logic
├── .streamlit/
│   └── config.toml         # Streamlit config (upload limit, theming)
├── requirements.txt        # Python dependencies
├── apt.txt                 # (Optional) System-level dependencies if needed
└── README.md               # Project description (this file)
```

## Local Development & Setup

1. **Clone this repo:**

   ```bash
   git clone https://github.com/YourUsername/smart-edit-assistant.git
   cd smart-edit-assistant
   ```

2. **Install Python dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

   If you plan to run open-source Whisper locally, install `openai-whisper` or the GitHub repo (`git+https://github.com/openai/whisper.git`).

   If you're using a GPU, make sure your PyTorch install matches your CUDA version; a quick check is shown below.
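   For example, a minimal check (assuming PyTorch is already installed):

   ```python
   import torch

   print(torch.__version__)          # CUDA builds report a suffix matching the toolkit, e.g. "2.x+cu121"
   print(torch.cuda.is_available())  # True only if this build can actually see your GPU
   ```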

3. **Run the app:**

   ```bash
   streamlit run app.py
   ```

   Open http://localhost:8501 in your browser to interact with the UI.

4. **Set environment variables** (for the GPT or Whisper API, if needed):

   ```bash
   export OPENAI_API_KEY="sk-..."
   ```

   Alternatively, store keys in a local `.env` file and load them with `python-dotenv`, as sketched below.
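   A minimal sketch of the `.env` approach (assuming `python-dotenv` is listed in `requirements.txt`):

   ```python
   # .env (keep this file out of version control):
   # OPENAI_API_KEY=sk-...

   import os
   from dotenv import load_dotenv

   load_dotenv()  # reads key=value pairs from .env into the process environment
   api_key = os.environ["OPENAI_API_KEY"]
   ```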

## Deploying on Hugging Face Spaces

1. **Create a Space:** Go to Hugging Face Spaces and create a new Space with the Streamlit SDK option.

2. **Upload your files:** Either drag-and-drop via the web interface or push via Git:

   ```bash
   git remote add origin https://huggingface.co/spaces/YourUsername/Smart-Edit-Assistant
   git push origin main
   ```

3. **Set your secrets:** In the Space's Settings page, add `OPENAI_API_KEY` or any other API keys under "Secrets". If you want GPU, set `hardware: gpu` in the YAML frontmatter (as shown above) or in the Space settings.

4. **Build and launch:** The Space will automatically install your `requirements.txt` and run `app.py`. Once deployed, your app is live at https://huggingface.co/spaces/YourUsername/Smart-Edit-Assistant.

## Usage

1. **Upload a Video:** Click "Browse files" to select a `.mp4`, `.mov`, or `.mkv` file.
2. **Extract & Transcribe:** The app automatically pulls the audio, then uses Whisper or another STT method to get a transcript.
3. **Generate Edits:** An LLM (GPT-4 or local) analyzes the transcript and suggests where to cut or remove filler content.
4. **Apply Edits:** The app runs FFmpeg or MoviePy to create a new edited video file.
5. **Preview & Download:** Watch the edited clip directly in the browser, then download the `.mp4`.

The sketches below illustrate steps 2-4.
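A minimal sketch of step 2, assuming FFmpeg is on the PATH and the local `openai-whisper` package is installed (file names are placeholders; the real logic lives in `pipelines/video_process.py` and `pipelines/ai_inference.py`):

```python
import subprocess
import whisper

# Pull a mono 16 kHz audio track out of the uploaded video (Whisper-friendly format).
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-vn", "-ar", "16000", "-ac", "1", "audio.wav"],
    check=True,
)

model = whisper.load_model("base")      # downloads weights on first use
result = model.transcribe("audio.wav")
for seg in result["segments"]:          # segments carry start/end timestamps
    print(f'{seg["start"]:7.2f}-{seg["end"]:7.2f}  {seg["text"]}')
```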
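For step 3, one possible shape of the LLM call (a sketch using the `openai` v1 client; the actual prompt and response format in `ai_inference.py` may differ):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "0.00-12.50: Welcome back to the channel...\n"
    "12.50-19.80: um, so, uh, where was I...\n"
    "19.80-45.00: Today we are covering three editing tips..."
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Given a timestamped transcript, return a JSON list of "
                '{"start": seconds, "end": seconds} segments worth KEEPING.'
            ),
        },
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)  # parse into (start, end) pairs for the edit step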
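And for step 4, a cut-and-reassemble sketch using the MoviePy v1.x API (segment times are hypothetical; see `pipelines/video_process.py` for the real implementation):

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

keep = [(0.0, 12.5), (20.0, 45.0)]  # (start, end) pairs to keep, e.g. from the LLM step

clip = VideoFileClip("input.mp4")
parts = [clip.subclip(start, end) for start, end in keep]
edited = concatenate_videoclips(parts)
edited.write_videofile("edited.mp4", codec="libx264", audio_codec="aac")
clip.close()
```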

## Configuration

**Streamlit config:** `.streamlit/config.toml` can set `maxUploadSize` (e.g. 10 GB) or the color theme.
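For example (values are illustrative; `maxUploadSize` is measured in megabytes):

```toml
# .streamlit/config.toml
[server]
maxUploadSize = 10240  # allow uploads up to ~10 GB

[theme]
primaryColor = "#4F6AF0"
```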

**Authentication:** If `hf_oauth` is true, users must log in with their Hugging Face account. For custom username/password handling or free vs. premium tiers, see `auth_utils.py` or the documentation in your code.

## Roadmap

- **Interactive Timeline:** Let users manually tweak the AI's suggested cuts.
- **B-roll Insertion:** Generate or fetch recommended B-roll and splice it in automatically.
- **Transition Effects:** Provide crossfades, text overlays, or AI-generated intros/outros.
- **Multi-user Collaboration:** Shared editing sessions or project saving in a database.

## Troubleshooting

- **File Not Found or Zero Bytes:** Make sure FFmpeg or MoviePy didn't fail silently; check the logs for errors.
- **Whisper `load_model` Error:** Ensure you installed `openai-whisper` or the GitHub repo, not the unrelated `whisper` PyPI package (see the commands below).
- **Large File Upload:** If large uploads fail, confirm that `maxUploadSize` in `.streamlit/config.toml` is high enough, and verify your Space's secrets/config.
- **Performance:** For best speed, request a GPU from Hugging Face Spaces or use a local GPU with a matching PyTorch/CUDA build.
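To fix the Whisper package mix-up, for example:

```bash
pip uninstall -y whisper       # remove the unrelated package if it is installed
pip install -U openai-whisper  # or: pip install git+https://github.com/openai/whisper.git
```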

## License

Choose a license that suits your project. For example, the MIT License:

```
MIT License

Copyright (c) 2025 ...

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), ...
```