---
title: LLMSearchEngine
emoji: 🏆
colorFrom: gray
colorTo: purple
sdk: docker
app_file: app.py
pinned: false
---
# LLM Search Engine
This is a Flask-based web application that uses a large language model (LLM) to generate search engine-like results, styled to resemble Google’s classic search results page. Instead of querying an external search API, it prompts an LLM to create titles, snippets, and URLs for a given query, delivering a paginated, familiar interface.
## Why We Built It
We created this app to explore how LLMs can mimic traditional search engines by generating results directly from their training data. It offers:
- A nostalgic, Google-like pagination design with clickable links.
- A proof-of-concept for LLM-driven search without real-time web access.
- A simple, self-contained alternative for queries within the model’s knowledge base.
## Features
- Google-Styled Interface: Search bar, result list, and pagination styled with Google’s colors and layout.
- Generated Results: Titles, snippets, and URLs are fully produced by the LLM.
- Pagination: Displays 10 results per page, up to 30 total results across 3 pages.
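The paging scheme described above can be sketched as a simple slice (a hypothetical helper for illustration, not the app's actual code):

```python
# Hypothetical sketch of the paging scheme: 30 generated results,
# 10 per page, pages 1-3.
RESULTS_PER_PAGE = 10
MAX_RESULTS = 30

def page_slice(results, page):
    """Return the results shown on a given page (1-indexed)."""
    # Clamp the requested page to the valid range 1-3.
    page = max(1, min(page, MAX_RESULTS // RESULTS_PER_PAGE))
    start = (page - 1) * RESULTS_PER_PAGE
    return results[start:start + RESULTS_PER_PAGE]

results = [f"result {i}" for i in range(1, 31)]
print(page_slice(results, 2)[0])  # result 11
```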
## Limitations
- Static Knowledge: Results are limited to the LLM’s training cutoff (e.g., pre-2025).
- Generated Content: URLs and snippets may not correspond to real web pages—use as a starting point.
- No Real-Time Data: Best for historical or established topics, not breaking news.
## Using It on Hugging Face Spaces

### Try the Demo
The app is deployed on Hugging Face Spaces; you can try it at https://codelion-llmsearchengine.hf.space:
- Open the URL in your browser.
- Type a query (e.g. "best Python libraries") in the search bar and press Enter or click "LLM Search".
- Browse the paginated results, styled like Google, using "Previous" and "Next" links.
## Using It as an API

### API Endpoint

The app doubles as an API when hosted on HF Spaces:
- URL: `https://codelion-llmsearchengine.hf.space/`
- Method: `GET`
- Parameters:
  - `query`: The search query (e.g., "best Python libraries").
  - `page`: Page number (1-3, defaults to 1).
### Example Request

```shell
curl "https://codelion-llmsearchengine.hf.space/?query=best+Python+libraries&page=1"
```

### Response

Returns raw HTML styled like a Google search results page.
### Integration

You can fetch results programmatically and render or parse the HTML:

```python
import requests
from urllib.parse import quote

query = "best Python libraries"
page = 1
url = f"https://codelion-llmsearchengine.hf.space/?query={quote(query)}&page={page}"
response = requests.get(url)
html_content = response.text  # Render or process as needed
print(html_content)
```
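Since the endpoint returns HTML rather than JSON, extracting structured data means parsing markup. A minimal sketch using Python's standard-library `html.parser`, collecting every link target with its anchor text (the actual element classes and layout of the returned page are not documented here, so treat this as a generic starting point):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs for every <a> tag in a page."""

    def __init__(self):
        super().__init__()
        self.links = []    # finished (href, text) pairs
        self._href = None  # href of the <a> we are currently inside, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Demo on an inline sample rather than a live response.
sample = '<div><a href="https://example.com">Example result</a></div>'
parser = LinkExtractor()
parser.feed(sample)
print(parser.links)  # [('https://example.com', 'Example result')]
```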
## How It Works

### LLM Prompting

- Queries trigger a prompt to the `gemini-2.0-flash-lite` model.
- The model generates 30 results in JSON format.
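The README does not publish the exact prompt or JSON schema, so the flow can only be sketched roughly; the field names `title`, `snippet`, and `url` below are assumptions based on the description above, not the app's actual schema:

```python
import json

# Hypothetical prompt shape: ask the model for 30 results as a JSON array.
PROMPT_TEMPLATE = (
    "Generate 30 search results for the query {query!r} as a JSON array of "
    'objects with "title", "snippet", and "url" fields.'
)

def parse_results(llm_output):
    """Parse the model's JSON reply, tolerating a malformed response."""
    try:
        results = json.loads(llm_output)
    except json.JSONDecodeError:
        return []
    # Keep only entries that carry all three assumed fields.
    return [r for r in results if {"title", "snippet", "url"} <= r.keys()]

reply = '[{"title": "t", "snippet": "s", "url": "https://example.com"}]'
print(parse_results(reply)[0]["title"])  # t
```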
### Rendering

- Flask converts results into a Google-styled HTML page.
- Includes a search bar, results, and pagination.

### Deployment

- Runs via Flask and Docker on HF Spaces.
- Serves dynamic pages based on URL parameters.
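A Dockerfile along these lines would fit the setup described (a sketch, not the Space's actual file; Hugging Face Docker Spaces route traffic to port 7860 by default unless `app_port` is set in the frontmatter, so the Flask server would need to listen there):

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# HF Docker Spaces expect the server on port 7860 by default.
EXPOSE 7860
CMD ["python", "app.py"]
```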
## Setup Locally

### Install dependencies

```shell
pip install -r requirements.txt
```

### Set environment variables

```shell
export OPENAI_API_KEY="your-key"
export OPENAI_BASE_URL="your-url"
```

### Run the app

```shell
python app.py
```

Visit http://localhost:5000.