shukdevdatta123 committed on
Commit 533b217 · verified · 1 Parent(s): f7478ee

Create v1.txt

Files changed (1): v1.txt (+604 -0)
v1.txt ADDED
@@ -0,0 +1,604 @@
import gradio as gr
import requests
from bs4 import BeautifulSoup
from openai import OpenAI
import json
import re
from urllib.parse import urljoin, urlparse
import time
import urllib3
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
import ssl

# Disable SSL warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
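# (later requests are made with verify=False, which would otherwise emit an
# InsecureRequestWarning for every unverified HTTPS request)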

class WebScrapingTool:
    def __init__(self):
        self.client = None
        self.system_prompt = """You are a specialized web data extraction assistant. Your core purpose is to browse and analyze the content of web pages based on user instructions, and return structured or unstructured information from the provided URL. Your capabilities include:
1. Navigating and reading web page content from a given URL.
2. Extracting textual content including headings, paragraphs, lists, and metadata.
3. Identifying and extracting HTML tables and presenting them in a clean, structured format.
4. Creating new, custom tables based on user queries by processing, reorganizing, or filtering the content found on the source page.
You must always follow these guidelines:
- Accurately extract and summarize both structured (tables, lists) and unstructured (paragraphs, articles) content.
- Clearly separate different types of data (e.g., summaries, tables, bullet points).
- When extracting textual content:
  - Maintain original meaning, structure, and tone.
  - Capture all relevant sections based on user instructions (e.g., only the "Overview" or "Methodology" sections).
- When extracting tables:
  - Preserve headers and align row data correctly.
  - Identify and differentiate multiple tables, if present.
- When creating custom tables:
  - Include only the relevant columns as per the user request.
  - Sort, filter, and reorganize data accordingly.
  - Use clear and consistent headers.
You must not hallucinate or infer data not present on the page. If content is missing, unclear, or restricted, say so explicitly.
Always respond based on the actual content from the provided link. If the page fails to load or cannot be accessed, inform the user immediately.
Your role is to act as an intelligent browser and data interpreter, able to read and reshape any web content to meet user needs."""

    def setup_client(self, api_key):
        """Initialize the OpenAI client against OpenRouter's OpenAI-compatible API"""
        try:
            self.client = OpenAI(
                base_url="https://openrouter.ai/api/v1",
                api_key=api_key,
            )
            return True, "API client initialized successfully!"
        except Exception as e:
            return False, f"Failed to initialize API client: {str(e)}"

    def create_session(self):
        """Create a robust requests session with a retry strategy and browser-like headers"""
        session = requests.Session()

        # Define retry strategy (urllib3 >= 1.26 renamed method_whitelist to allowed_methods)
        retry_strategy = Retry(
            total=3,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["HEAD", "GET", "OPTIONS"],
            backoff_factor=1
        )
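        # With backoff_factor=1, urllib3 waits an exponentially growing interval
        # between retries of the listed 429/5xx responses; only the idempotent
        # methods named in allowed_methods are retried.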

        # Mount adapter with retry strategy
        adapter = HTTPAdapter(max_retries=retry_strategy)
        session.mount("http://", adapter)
        session.mount("https://", adapter)

        # Set comprehensive headers to mimic a real browser
        session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
            'Accept-Language': 'en-US,en;q=0.9',
            'Accept-Encoding': 'gzip, deflate, br',
            'DNT': '1',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': '1',
            'Sec-Fetch-Dest': 'document',
            'Sec-Fetch-Mode': 'navigate',
            'Sec-Fetch-Site': 'none',
            'Sec-Fetch-User': '?1',
            'Cache-Control': 'max-age=0'
        })

        return session

    def scrape_webpage(self, url):
        """Scrape webpage content with enhanced error handling and timeouts"""
        try:
            session = self.create_session()

            # Multiple timeout attempts with increasing duration
            timeout_attempts = [15, 30, 45]
            response = None

            for timeout in timeout_attempts:
                try:
                    print(f"Attempting to fetch {url} with {timeout}s timeout...")

                    response = session.get(
                        url,
                        timeout=timeout,
                        verify=False,  # Disable SSL verification for problematic sites
                        allow_redirects=True,
                        stream=False
                    )

                    response.raise_for_status()
                    break

                except requests.exceptions.Timeout:
                    if timeout == timeout_attempts[-1]:  # Last attempt
                        return {
                            'success': False,
                            'error': "Connection timed out after multiple attempts. The website may be slow or blocking automated requests."
                        }
                    continue
                except requests.exceptions.SSLError:
                    # Retry once more; SSL verification is already disabled
                    try:
                        response = session.get(
                            url,
                            timeout=timeout,
                            verify=False,
                            allow_redirects=True
                        )
                        response.raise_for_status()
                        break
                    except requests.exceptions.RequestException:
                        continue
                except requests.exceptions.RequestException as e:
                    if timeout == timeout_attempts[-1]:  # Last attempt
                        return {
                            'success': False,
                            'error': f"Request failed: {str(e)}"
                        }
                    continue

            # Check if we got a response
            if response is None:
                return {
                    'success': False,
                    'error': "Failed to establish connection after multiple attempts"
                }

            # Check content type
            content_type = response.headers.get('content-type', '').lower()
            if 'text/html' not in content_type and 'text/plain' not in content_type:
                return {
                    'success': False,
                    'error': f"Invalid content type: {content_type}. Expected HTML content."
                }

            # Parse HTML content
            soup = BeautifulSoup(response.content, 'html.parser')

            # Remove unwanted elements
            for element in soup(["script", "style", "nav", "footer", "header", "aside", "noscript", "iframe"]):
                element.decompose()

            # Remove elements with common ad/tracking classes
            ad_classes = ['ad', 'advertisement', 'banner', 'popup', 'modal', 'cookie', 'newsletter']
            for class_name in ad_classes:
                for element in soup.find_all(class_=re.compile(class_name, re.I)):
                    element.decompose()

            # Extract text content
            text_content = soup.get_text(separator=' ', strip=True)

            # Clean up text - collapse extra whitespace
            text_content = re.sub(r'\s+', ' ', text_content)
            text_content = text_content.strip()

            # Extract tables with improved structure
            tables = []
            for i, table in enumerate(soup.find_all('table')):
                table_data = []
                headers = []

                # Try to find headers in various ways
                header_row = table.find('thead')
                if header_row:
                    header_row = header_row.find('tr')
                else:
                    header_row = table.find('tr')

                if header_row:
                    headers = []
                    for th in header_row.find_all(['th', 'td']):
                        header_text = th.get_text(strip=True)
                        headers.append(header_text if header_text else f"Column_{len(headers)+1}")

                # Extract all rows (skip the header row if it was already processed)
                rows = table.find_all('tr')
                start_idx = 1 if header_row and header_row in rows else 0

                for row in rows[start_idx:]:
                    cells = row.find_all(['td', 'th'])
                    if cells:
                        row_data = []
                        for cell in cells:
                            cell_text = cell.get_text(strip=True)
                            row_data.append(cell_text)

                        if row_data and any(cell.strip() for cell in row_data):  # Skip empty rows
                            table_data.append(row_data)

                if table_data:
                    # Ensure headers match data columns
                    max_cols = max(len(row) for row in table_data)
                    if len(headers) < max_cols:
                        headers.extend([f"Column_{i+1}" for i in range(len(headers), max_cols)])
                    elif len(headers) > max_cols:
                        headers = headers[:max_cols]

                    tables.append({
                        'id': i + 1,
                        'headers': headers,
                        'data': table_data[:50]  # Limit rows to keep the payload manageable
                    })

            # Extract metadata
            title = soup.title.string.strip() if soup.title and soup.title.string else "No title found"

            # Extract meta description
            meta_desc = ""
            desc_tag = soup.find('meta', attrs={'name': 'description'})
            if desc_tag and desc_tag.get('content'):
                meta_desc = desc_tag['content'].strip()

            return {
                'success': True,
                'text': text_content[:20000],  # Limit text length
                'tables': tables,
                'title': title,
                'meta_description': meta_desc,
                'url': url,
                'content_length': len(text_content)
            }

        except requests.exceptions.ConnectionError as e:
            return {
                'success': False,
                'error': f"Connection failed: {str(e)}. The website may be down or blocking requests."
            }
        except requests.exceptions.HTTPError as e:
            return {
                'success': False,
                'error': f"HTTP Error {e.response.status_code}: {e.response.reason}"
            }
        except requests.exceptions.RequestException as e:
            return {
                'success': False,
                'error': f"Request failed: {str(e)}"
            }
        except Exception as e:
            return {
                'success': False,
                'error': f"Unexpected error while processing webpage: {str(e)}"
            }

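    # Shape of the dict returned by scrape_webpage on success (a sketch of the
    # return statement above, for reference):
    # {
    #     'success': True,
    #     'text': '<cleaned page text, capped at 20,000 characters>',
    #     'tables': [{'id': 1, 'headers': [...], 'data': [[...], ...]}],
    #     'title': '...', 'meta_description': '...', 'url': '...',
    #     'content_length': <length of the full text before truncation>
    # }
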
    def analyze_content(self, scraped_data, user_query, api_key):
        """Analyze scraped content using DeepSeek V3"""
        if not self.client:
            success, message = self.setup_client(api_key)
            if not success:
                return f"Error: {message}"

        if not scraped_data['success']:
            return f"Error scraping webpage: {scraped_data['error']}"

        # Prepare content for AI analysis
        content_text = f"""
WEBPAGE ANALYSIS REQUEST
========================
URL: {scraped_data['url']}
Title: {scraped_data['title']}
Content Length: {scraped_data['content_length']} characters
Tables Found: {len(scraped_data['tables'])}
META DESCRIPTION:
{scraped_data['meta_description']}
MAIN CONTENT:
{scraped_data['text']}
"""

        if scraped_data['tables']:
            content_text += f"\n\nSTRUCTURED DATA - {len(scraped_data['tables'])} TABLE(S) FOUND:\n"
            content_text += "=" * 50 + "\n"

            for table in scraped_data['tables']:
                content_text += f"\nTABLE {table['id']}:\n"
                content_text += f"Headers: {' | '.join(table['headers'])}\n"
                content_text += "-" * 50 + "\n"

                for i, row in enumerate(table['data'][:10]):  # Show first 10 rows
                    content_text += f"Row {i+1}: {' | '.join(str(cell) for cell in row)}\n"

                if len(table['data']) > 10:
                    content_text += f"... and {len(table['data']) - 10} more rows\n"
                content_text += "\n"

        try:
            completion = self.client.chat.completions.create(
                extra_headers={
                    "HTTP-Referer": "https://gradio-web-scraper.com",
                    "X-Title": "AI Web Scraping Tool",
                },
                model="deepseek/deepseek-chat-v3-0324:free",
                messages=[
                    {"role": "system", "content": self.system_prompt},
                    {"role": "user", "content": f"{content_text}\n\nUSER REQUEST:\n{user_query}\n\nPlease analyze the above webpage content and fulfill the user's request. Be thorough and accurate."}
                ],
                temperature=0.1,
                max_tokens=4000
            )

            return completion.choices[0].message.content

        except Exception as e:
            return f"Error analyzing content with AI: {str(e)}"

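# Minimal programmatic usage sketch (hypothetical, not part of the app flow;
# assumes a valid OpenRouter key in the OPENROUTER_API_KEY environment variable):
#
#   import os
#   tool = WebScrapingTool()
#   data = tool.scrape_webpage("https://example.com")
#   if data['success']:
#       print(tool.analyze_content(data, "Summarize this page", os.environ["OPENROUTER_API_KEY"]))
#   else:
#       print(data['error'])
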
def create_interface():
    tool = WebScrapingTool()

    def process_request(api_key, url, user_query):
        # process_request is a generator, so validation failures must yield
        # their message before returning (a plain `return "..."` would be lost)
        if not api_key.strip():
            yield "❌ Please enter your OpenRouter API key"
            return

        if not url.strip():
            yield "❌ Please enter a valid URL"
            return

        if not user_query.strip():
            yield "❌ Please enter your analysis query"
            return

        # Normalize the URL scheme
        if not url.startswith(('http://', 'https://')):
            url = 'https://' + url

        # Progress updates
        yield "🔄 Initializing web scraper..."
        time.sleep(0.5)

        yield "🌐 Fetching webpage content (this may take a moment)..."

        # Scrape webpage
        scraped_data = tool.scrape_webpage(url)

        if not scraped_data['success']:
            yield f"❌ Scraping Failed: {scraped_data['error']}"
            return

        yield f"✅ Successfully scraped webpage!\n📄 Title: {scraped_data['title']}\n📊 Found {len(scraped_data['tables'])} tables\n📝 Content: {scraped_data['content_length']} characters\n\n🤖 Analyzing content with DeepSeek V3..."

        # Analyze content
        result = tool.analyze_content(scraped_data, user_query, api_key)

        yield f"✅ Analysis Complete!\n{'='*50}\n\n{result}"

    # Create Gradio interface
    with gr.Blocks(title="AI Web Scraping Tool", theme=gr.themes.Soft()) as app:
        gr.Markdown("""
# 🤖 AI Web Scraping Tool
### Powered by DeepSeek V3 & OpenRouter

Extract and analyze web content using advanced AI. The tool handles timeouts and SSL issues, and provides robust scraping capabilities.
""")

        with gr.Row():
            with gr.Column(scale=2):
                api_key_input = gr.Textbox(
                    label="🔑 OpenRouter API Key",
                    placeholder="Enter your OpenRouter API key here...",
                    type="password",
                    info="Get your free API key from openrouter.ai"
                )

                url_input = gr.Textbox(
                    label="🌐 Website URL",
                    placeholder="https://example.com or just example.com",
                    info="Enter the URL you want to scrape and analyze"
                )

                query_input = gr.Textbox(
                    label="📝 Analysis Query",
                    placeholder="What do you want to extract? (e.g., 'Extract main points and create a summary table')",
                    lines=4,
                    info="Describe what information you want to extract from the webpage"
                )

                with gr.Row():
                    analyze_btn = gr.Button("🚀 Analyze Website", variant="primary", size="lg")
                    clear_btn = gr.Button("🗑️ Clear All", variant="secondary")

            with gr.Column(scale=3):
                output = gr.Textbox(
                    label="📊 Analysis Results",
                    lines=25,
                    max_lines=40,
                    show_copy_button=True,
                    interactive=False,
                    placeholder="Results will appear here after analysis..."
                )

        # Tips and Examples
        with gr.Accordion("💡 Usage Tips & Examples", open=False):
            gr.Markdown("""
### 🎯 Example Analysis Queries:
- **Data Extraction**: *"Extract all numerical data and organize it in a table format"*
- **Content Summary**: *"Summarize the main points in bullet format with key statistics"*
- **Table Processing**: *"Find all tables and convert them to a single consolidated format"*
- **Specific Information**: *"Extract contact information, prices, or product details"*
- **Comparison**: *"Compare different items/options mentioned and create a comparison table"*

### 🔧 Technical Notes:
- **Multiple Timeouts**: The tool tries 15s, 30s, then 45s timeouts automatically
- **SSL Handling**: Bypasses SSL issues on problematic websites
- **Content Filtering**: Removes ads, popups, and unnecessary elements
- **Table Detection**: Automatically finds and structures tabular data
- **Error Recovery**: Handles connection issues and provides clear error messages

### 🌐 Works Well With:
- News websites (BBC, CNN, Reuters)
- Government sites (IMF, WHO, official statistics)
- Wikipedia and educational content
- E-commerce product pages
- Financial data sites (Yahoo Finance, MarketWatch)
- Research papers and academic sites

## 🧪 **Test Scenarios**

### **1. News & Media Sites**
```
URL: https://www.bbc.com/news
Query: Extract the top 5 news headlines with their summaries and create a table with columns: Headline, Category, Summary
```

```
URL: https://edition.cnn.com
Query: Find all breaking news items and organize them by topic/region in a structured format
```

### **2. Financial Data Sites**
```
URL: https://finance.yahoo.com/quote/AAPL
Query: Extract Apple stock information including current price, daily change, market cap, and any financial metrics into a summary table
```

```
URL: https://www.marketwatch.com/investing/stock/tsla
Query: Create a table with Tesla's key financial metrics: price, change, volume, market cap, P/E ratio
```

### **3. E-commerce & Product Pages**
```
URL: https://www.amazon.com/dp/B08N5WRWNW
Query: Extract product details including name, price, ratings, key features, and specifications in a structured format
```

```
URL: https://www.ebay.com/itm/123456789
Query: Extract item details, price, seller information, and shipping details into a comparison-ready table
```

### **4. Educational & Reference Sites**
```
URL: https://en.wikipedia.org/wiki/Artificial_intelligence
Query: Extract the main definition, history timeline, and applications of AI. Create separate sections for each topic.
```

```
URL: https://en.wikipedia.org/wiki/List_of_countries_by_population
Query: Extract the population data table and create a new table showing the top 10 most populous countries with their population and growth rate
```

### **5. Government & Official Statistics**
```
URL: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports
Query: Extract the latest COVID-19 statistics and create a summary table with key global figures
```

```
URL: https://www.census.gov/quickfacts
Query: Extract key demographic statistics for the United States and organize them into categories: Population, Economy, Geography
```

### **6. Technology & Business News**
```
URL: https://techcrunch.com
Query: Find the latest startup funding news and create a table with: Company Name, Funding Amount, Investors, Industry
```

```
URL: https://www.reuters.com/technology
Query: Extract top technology news and summarize each story in 2-3 sentences with key points
```

### **7. Scientific & Research Sites**
```
URL: https://www.nature.com/articles
Query: Extract recent scientific article titles, authors, and abstracts. Create a summary table organized by research field
```

```
URL: https://pubmed.ncbi.nlm.nih.gov/trending
Query: Find trending medical research topics and create a list with brief descriptions of each study's findings
```

### **8. Sports & Entertainment**
```
URL: https://www.espn.com/nba/standings
Query: Extract NBA team standings and create a table with: Team, Wins, Losses, Win Percentage, Conference Position
```

```
URL: https://www.imdb.com/chart/top
Query: Extract the top 10 movies from IMDb's top 250 list with ratings, year, and a brief description
```

### **9. Weather & Environmental Data**
```
URL: https://weather.com/weather/today
Query: Extract current weather conditions and forecast data. Create a summary with temperature, conditions, and weekly outlook
```

### **10. Real Estate & Property**
```
URL: https://www.zillow.com/homes/for_sale
Query: Extract property listings with prices, locations, square footage, and key features into a comparison table
```

## 🎯 **Quick Test Samples (Copy & Paste Ready)**

### **Simple Test:**
```
URL: https://httpbin.org/html
Query: Extract all text content and identify the page structure
```

### **Table Extraction Test:**
```
URL: https://www.w3schools.com/html/html_tables.asp
Query: Find all HTML tables on this page and convert them to a structured format with proper headers
```

### **Complex Analysis Test:**
```
URL: https://www.sec.gov/edgar/browse/?CIK=320193
Query: Extract Apple Inc.'s recent SEC filings and create a table with: Filing Date, Document Type, Description
```

### **International Site Test:**
```
URL: https://www.bbc.co.uk/weather
Query: Extract UK weather information and create a regional breakdown of current conditions
```

## 🔍 **Testing Tips:**

1. **Start Simple**: Begin with basic sites like Wikipedia or news sites
2. **Test Error Handling**: Try invalid URLs to see the error messages
3. **Check Timeouts**: Use slow-loading sites to test timeout handling
4. **Verify Tables**: Test sites with different table structures
5. **Content Variety**: Try different content types (news, data, products)

## 🚨 **Sites That May Have Issues:**
- Social media sites (require login)
- Sites with heavy JavaScript (may yield limited content)
- Sites with aggressive bot protection
- Password-protected pages

## ✅ **Reliable Test Sites:**
- Wikipedia (excellent for tables and structured content)
- BBC News (good for text extraction)
- Government sites (.gov domains)
- W3Schools (great for HTML table testing)
- HttpBin (perfect for testing basic functionality)

Start with the simpler tests and gradually move to more complex scenarios to fully evaluate the tool's capabilities!
""")

        # Event handlers
        analyze_btn.click(
            fn=process_request,
            inputs=[api_key_input, url_input, query_input],
            outputs=output,
            show_progress=True
        )

        clear_btn.click(
            fn=lambda: ("", "", "", ""),
            outputs=[api_key_input, url_input, query_input, output]
        )

    return app

if __name__ == "__main__":
    # Create and launch the app
    app = create_interface()

    # Launch with enhanced configuration
    app.launch(
        share=True
    )
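
# Note: share=True asks Gradio to create a temporary public link for the app;
# set share=False (the default) to keep it reachable only on localhost.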