---
license: apache-2.0
---
# PicoDet_layout_1x_table
## Introduction
A high-efficiency layout area localization model trained on a self-built dataset using PicoDet-1x, capable of detecting table regions. The key metrics are as follows:
| Model| mAP(0.5) (%) |
| --- | --- |
|PicoDet_layout_1x_table | 97.5 |
## Quick Start
### Installation
1. PaddlePaddle
Please refer to the following commands to install PaddlePaddle using pip:
```bash
# for CUDA11.8
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
# for CUDA12.6
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
# for CPU
python -m pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
```
For details about PaddlePaddle installation, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/en/install/quick).
2. PaddleOCR
Install the latest version of the PaddleOCR inference package from PyPI:
```bash
python -m pip install paddleocr
```
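After installation, you can optionally run a quick sanity check to confirm both packages import correctly. This is just a minimal sketch, not part of the official installation steps; it assumes `paddleocr` exposes a `__version__` attribute:
```python
# Optional sanity check: verify that PaddlePaddle and PaddleOCR are installed.
import paddle
import paddleocr

print(paddle.__version__)      # should print 3.0.0 if the install above succeeded
paddle.utils.run_check()       # built-in check of the PaddlePaddle installation
print(paddleocr.__version__)   # assumes paddleocr exposes __version__
```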
### Model Usage
You can quickly experience the functionality with a single command:
```bash
paddleocr layout_detection \
--model_name PicoDet_layout_1x_table \
-i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/N5C68HPVAI-xQAWTxpbA6.jpeg
```
You can also integrate the model inference of the layout detection module into your project. Before running the following code, please download the sample image to your local machine.
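If you prefer to fetch the sample image in code, here is a minimal sketch using only the standard library (the URL is the same sample image used in the CLI example above):
```python
# Download the sample image used in the example below (standard library only).
import urllib.request

url = "https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/N5C68HPVAI-xQAWTxpbA6.jpeg"
urllib.request.urlretrieve(url, "N5C68HPVAI-xQAWTxpbA6.jpeg")
```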
```python
from paddleocr import LayoutDetection
model = LayoutDetection(model_name="PicoDet_layout_1x_table")
output = model.predict("N5C68HPVAI-xQAWTxpbA6.jpeg", batch_size=1, layout_nms=True)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
After running, the obtained result is as follows:
```json
{'res': {'input_path': '/root/.paddlex/predict_input/N5C68HPVAI-xQAWTxpbA6.jpeg', 'page_index': None, 'boxes': [{'cls_id': 0, 'label': 'Table', 'score': 0.9617661237716675, 'coordinate': [435.82446, 106.01748, 665.04346, 316.21014]}, {'cls_id': 0, 'label': 'Table', 'score': 0.9583022594451904, 'coordinate': [72.52834, 106.46287, 322.751, 301.454]}]}}
```
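Since the result is also saved to `./output/res.json`, it can be post-processed with the standard `json` module. The sketch below is based on the field names shown in the output above; the exact top-level structure of the saved file may differ slightly, so adjust the keys if needed:
```python
# Minimal sketch: read the saved result and list the detected table regions.
import json

with open("./output/res.json", "r", encoding="utf-8") as f:
    result = json.load(f)

# The printed output wraps the fields in a "res" key; fall back to the top level
# if the saved file stores them directly.
data = result.get("res", result)
for box in data["boxes"]:
    print(box["label"], round(box["score"], 3), box["coordinate"])
```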
The visualized image is as follows:

For details about the usage commands and parameter descriptions, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/module_usage/layout_detection.html#iii-quick-integration).
### Pipeline Usage
The capability of a single model is limited, but a pipeline composed of several models can solve more difficult problems in real-world scenarios.
#### PP-TableMagic (table_recognition_v2)
The General Table Recognition v2 pipeline (PP-TableMagic) is designed to tackle table recognition tasks, identifying tables in images and outputting them in HTML format. PP-TableMagic includes the following 8 modules:
* Table Structure Recognition Module
* Table Classification Module
* Table Cell Detection Module
* Text Detection Module
* Text Recognition Module
* Layout Region Detection Module (optional)
* Document Image Orientation Classification Module (optional)
* Text Image Unwarping Module (optional)
You can quickly experience the PP-TableMagic pipeline with a single command.
```bash
paddleocr table_recognition_v2 -i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/tuY1zoUdZsL6-9yGG0MpU.jpeg \
--layout_detection_model_name PicoDet_layout_1x_table \
--use_doc_orientation_classify False \
--use_doc_unwarping False \
--save_path ./output \
--device gpu:0
```
If `save_path` is specified, the visualization results will be saved under `save_path`.
The command-line method is for a quick experience. For project integration, only a few lines of code are needed:
```python
from paddleocr import TableRecognitionPipelineV2
pipeline = TableRecognitionPipelineV2(
    layout_detection_model_name="PicoDet_layout_1x_table",
    use_doc_orientation_classify=False,  # Use use_doc_orientation_classify to enable/disable the document orientation classification model
    use_doc_unwarping=False,  # Use use_doc_unwarping to enable/disable the document unwarping module
    device="gpu:0",  # Use device to specify the GPU for model inference
)
output = pipeline.predict("tuY1zoUdZsL6-9yGG0MpU.jpeg")
for res in output:
    res.print()  # Print the predicted structured output
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")
```
The default layout detection model used in the pipeline is `PP-DocLayout-L`, so you need to specify `PicoDet_layout_1x_table` via the `layout_detection_model_name` argument. You can also use a local model file via the `layout_detection_model_dir` argument. For details about the usage commands and parameter descriptions, please refer to the [Document](https://paddlepaddle.github.io/PaddleOCR/main/en/version3.x/pipeline_usage/table_recognition_v2.html#2-quick-start).
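For example, if you have already downloaded this model to a local directory, the following minimal sketch points the pipeline at it; the directory path is a placeholder and should be replaced with your actual model location:
```python
from paddleocr import TableRecognitionPipelineV2

# Minimal sketch: use a local copy of the layout detection model.
# "./PicoDet_layout_1x_table" is a placeholder path to the downloaded model directory.
pipeline = TableRecognitionPipelineV2(
    layout_detection_model_name="PicoDet_layout_1x_table",
    layout_detection_model_dir="./PicoDet_layout_1x_table",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)
output = pipeline.predict("tuY1zoUdZsL6-9yGG0MpU.jpeg")
for res in output:
    res.save_to_html("./output/")
```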
## Links
[PaddleOCR Repo](https://github.com/paddlepaddle/paddleocr)
[PaddleOCR Documentation](https://paddlepaddle.github.io/PaddleOCR/latest/en/index.html)