---
task_categories:
- image-classification
- image-segmentation
- image-to-text
tags:
- OCR
- Text-Image Pairs
size_categories:
- 10M<n<100M
---
# Atlas PDF to Image Cluster Dataset

https://github.com/atlasunified/PDF-to-Image-Cluster

# Dataset Description

This dataset is a collection of text extracted from PDF files gathered from various online resources. It was generated by a pipeline of Python scripts that automates downloading, converting, and managing the data.

# Dataset Summary

The data collection process involves a series of stages:

Web scraping: The `000-downloader.py` script scrapes a specified webpage for links ending in `.snappy.parquet` and downloads the linked files into a specific directory.
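
The link-extraction part of this step can be sketched as follows. The function name and the use of a simple regex over double-quoted `href` attributes are illustrative assumptions, since the actual script is not reproduced here:

```python
import re
from urllib.parse import urljoin

def extract_parquet_links(html: str, base_url: str) -> list:
    """Return absolute URLs for every href that ends in .snappy.parquet."""
    # Matches double-quoted href values only; real pages may also use single quotes.
    hrefs = re.findall(r'href="([^"]+\.snappy\.parquet)"', html)
    return [urljoin(base_url, h) for h in hrefs]
```

The returned URLs could then be fetched with any HTTP client; relative links are resolved against the page's base URL.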

Conversion: The `001-parquet-to-csv.py` script converts the downloaded Parquet files into CSV format.

URL extraction: The `002-url-extractor.py` script reads the CSV files and extracts URLs, which are then divided into 50 separate CSV files.
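
Splitting a URL list into 50 near-equal CSV shards can be sketched like this; the function name, output filenames, and single `url` column are illustrative assumptions:

```python
import csv
from pathlib import Path

def split_urls_into_csvs(urls: list, out_dir: str, n_parts: int = 50) -> list:
    """Write urls into n_parts CSV files of near-equal size, one 'url' column each."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # The first r shards get one extra row so all rows are covered.
    k, r = divmod(len(urls), n_parts)
    paths, start = [], 0
    for i in range(n_parts):
        size = k + (1 if i < r else 0)
        path = out / f"urls_{i:02d}.csv"
        with path.open("w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["url"])
            writer.writerows([u] for u in urls[start:start + size])
        start += size
        paths.append(path)
    return paths
```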

PDF download: The `003-download.py` script uses the URLs to download PDF files, applying conditions such as limits on file size and page count.
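
One plausible way to enforce a size condition is to inspect the `Content-Length` header (for example from an HTTP `HEAD` request) before fetching the body. The function below is an assumption about how such a filter might look, not the script's actual logic:

```python
from typing import Optional

def passes_size_filter(content_length: Optional[str], max_bytes: int) -> bool:
    """Decide from an HTTP Content-Length header whether a PDF is small enough to fetch."""
    if content_length is None:
        return False  # size unknown; skip to stay conservative
    try:
        return 0 < int(content_length) <= max_bytes
    except ValueError:
        return False  # malformed header
```

A page-count condition would require downloading and parsing the PDF itself, so it is typically checked after the fetch.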

PDF processing: The `003-Main.py` script runs OCR over the PDF files to extract text and bounding boxes. It also sorts PDFs by size and processes them concurrently.
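
Tesseract-based OCR tools such as pytesseract can emit per-word bounding boxes as tab-separated `image_to_data` output. Assuming that TSV format (the exact OCR backend used by the script is not stated), parsing it into word/box records might look like:

```python
import csv
import io

def tsv_to_word_boxes(tsv: str) -> list:
    """Parse Tesseract image_to_data TSV output into (word, left, top, width, height) tuples."""
    reader = csv.DictReader(io.StringIO(tsv), delimiter="\t")
    boxes = []
    for row in reader:
        word = (row.get("text") or "").strip()
        if word:  # structural rows (page/block/line levels) carry no text
            boxes.append((word, int(row["left"]), int(row["top"]),
                          int(row["width"]), int(row["height"])))
    return boxes
```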

Archiving: The `004-tarballer.py` script compresses the directory containing the processed files into a tarball archive.
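
The archiving step maps directly onto Python's standard-library `tarfile` module; the function name and choice of gzip compression are assumptions:

```python
import tarfile
from pathlib import Path

def make_tarball(src_dir: str, archive_path: str) -> str:
    """Compress src_dir (recursively) into a gzip-compressed tarball."""
    with tarfile.open(archive_path, "w:gz") as tar:
        # arcname keeps paths inside the archive relative to the directory name.
        tar.add(src_dir, arcname=Path(src_dir).name)
    return archive_path
```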

Balancing: The `005-balancer.py` script ensures an even distribution of PDF files across various folders.
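
A round-robin move is one simple way to even out file counts across folders; this sketch assumes that strategy, which may differ from the actual script:

```python
import shutil
from pathlib import Path

def balance_files(src_dir: str, dest_dirs: list) -> list:
    """Move files from src_dir into dest_dirs round-robin; return per-folder counts."""
    dests = [Path(d) for d in dest_dirs]
    for d in dests:
        d.mkdir(parents=True, exist_ok=True)
    # Sort for a deterministic assignment of files to folders.
    files = sorted(p for p in Path(src_dir).iterdir() if p.is_file())
    for i, f in enumerate(files):
        shutil.move(str(f), str(dests[i % len(dests)] / f.name))
    return [sum(1 for _ in d.iterdir()) for d in dests]
```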

# Supported Tasks and Use Cases

The primary use case of this dataset is to serve as training data for machine learning models that operate on text data. This may include, but is not limited to, text classification, information extraction, named entity recognition, and machine translation.

# Dataset Creation

This dataset was generated through a multi-stage Python pipeline designed to handle the downloading, conversion, and management of large datasets.

# Data Fields

As the dataset contains text extracted from PDF files, the data fields primarily include the extracted text, along with metadata about the source PDF such as file size, page count, and bounding box information.