Update README.md
README.md CHANGED

# HinduTamil News Articles Dataset

## Overview

This dataset contains news articles in the Tamil language, scraped from the Hindu Tamil news website. Each article includes its title, author, city, published date, and text. It was created to provide a comprehensive collection of Tamil news articles.
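For concreteness, one record might look like the sketch below. The field names and the date format are illustrative assumptions based on the field list above, not the dataset's published schema.

```python
# A hypothetical record, with illustrative field names that mirror the
# fields described above (title, author, city, published date, text).
example_article = {
    "title": "...",                           # headline, in Tamil
    "author": "...",                          # byline, when present
    "city": "...",                            # dateline city
    "published_date": "2023-05-14 10:30:00",  # standardized date-time
    "text": "...",                            # full article body, in Tamil
}
```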

## Data Sources and Collection Method

The data in this dataset was collected from the Hindu Tamil news website (https://www.hindutamil.in/news/tamilnadu/) using web scraping techniques:

- Sending HTTP Requests: The Python requests library was used to send HTTP GET requests to the Hindu Tamil news website, fetching the HTML content of each page.
- Parsing HTML Content: The BeautifulSoup library parsed the HTML, enabling the extraction of specific elements from each page: article URLs, titles, authors, published dates, and the main article text.
- Iterative Scraping: Data was scraped iteratively across multiple pages of the site. Each listing page shows a set of articles; the URL of each article was extracted, and each article page was then visited to collect its details.
- Handling Errors and Timeouts: Errors and timeouts were handled with try-except blocks so that scraping could proceed smoothly (see the sketch below).
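A minimal sketch of this pipeline, using the requests and BeautifulSoup libraries named above. The pagination pattern, the CSS selectors, and the record fields are illustrative assumptions; a real scraper would read them from the site's actual HTML.

```python
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://www.hindutamil.in/news/tamilnadu/"


def scrape_article(url):
    """Fetch one article page and pull out the fields stored in the dataset."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException as exc:
        # Skip pages that fail or time out so the run keeps going.
        print(f"Skipping {url}: {exc}")
        return None
    soup = BeautifulSoup(resp.text, "html.parser")
    # The tag and class selectors below are placeholders, not the site's
    # actual markup.
    title = soup.find("h1")
    body = soup.find("div", class_="article-body")
    return {
        "url": url,
        "title": title.get_text(strip=True) if title else None,
        "text": body.get_text(" ", strip=True) if body else None,
    }


def scrape_listing(pages=3):
    """Walk the first few listing pages and scrape every linked article."""
    records = []
    for page in range(1, pages + 1):
        # The pagination pattern here is an assumption for illustration.
        url = BASE_URL if page == 1 else f"{BASE_URL}?page={page}"
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException as exc:
            print(f"Skipping listing page {url}: {exc}")
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for link in soup.select("a.article-link"):  # placeholder selector
            href = link.get("href")
            if href:
                records.append(scrape_article(urljoin(BASE_URL, href)))
        time.sleep(1)  # be polite to the server between pages
    return [r for r in records if r is not None]
```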

## Data Cleaning and Preprocessing

- Duplicate entries were removed based on the published date.
- NaN values were handled by removing the rows that contained them.
- Irrelevant information, such as author comments, footer text, and advertisements, was filtered out of the article text.
- Published dates were extracted from the article content and formatted into a standardized date-time format (see the sketch below).
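A minimal pandas sketch of these cleaning steps. The file name, the column names, and the boilerplate pattern are assumptions for illustration; the dataset's real schema may differ.

```python
import pandas as pd

# Load the scraped records; "articles.csv" and the column names below are
# illustrative assumptions, not the dataset's actual layout.
df = pd.read_csv("articles.csv")

# Remove duplicate entries based on the published date.
df = df.drop_duplicates(subset=["published_date"])

# Handle NaN values by removing the rows that contain them.
df = df.dropna()

# Filter boilerplate (comments, footers, ads) out of the article text;
# the pattern here is a placeholder.
df["text"] = df["text"].str.replace(r"Advertisement", "", regex=True).str.strip()

# Standardize the published date into a date-time format, dropping rows
# whose dates cannot be parsed.
df["published_date"] = pd.to_datetime(df["published_date"], errors="coerce", dayfirst=True)
df = df.dropna(subset=["published_date"])
```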