---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
- table classification
- structured table detection
- unstructured table detection
- table detection
- table
- Document
- table extraction
- unstructured table extraction
library_name: ultralytics
library_version: 8.0.43
inference: False
model-index:
- name: foduucom/table-detection-and-classification
  results:
  - task:
      type: object-detection
    metrics:
    - type: precision
      value: 0.964
      name: mAP@0.5(box)
language:
- en
metrics:
- accuracy
---
<p align="center">
<!-- Smaller size image -->
<img src="https://example.com/table-detection-model-thumbnail.jpg" alt="Image" style="width:500px; height:300px;">
</p>
# Model Card for YOLOv8s Table Detection
## Model Summary
The YOLOv8s Table Detection model is an object detection model based on the YOLO (You Only Look Once) framework. It is designed to detect tables in images, whether they are bordered or borderless. The model has been fine-tuned on a large, diverse dataset of document images and achieves high accuracy in detecting tables and distinguishing between bordered and borderless ones.
## Model Details
### Model Description
The YOLOv8s Table Detection model is built upon the YOLOv8 architecture, known for its real-time object detection capabilities. This specific model has been tailored and trained to recognize tables of various types, including those with borders and those without borders. It can accurately detect tables in images and classify them into the appropriate categories.
```
['Bordered','Borderless']
```
- **Developed by:** FODUU AI
- **Model type:** Object Detection
- **Task:** Table Detection (Bordered and Borderless)
Furthermore, the YOLOv8s Table Detection model encourages user collaboration: users can contribute their own table images. Submitting images of different table designs and types helps the model learn to detect a wider variety of tables accurately. Contributions can be shared through our community platform or by contacting us at [email protected]. Your input directly improves the model's recognition and classification of diverse table types.
## Uses
### Direct Use
The YOLOv8s Table Detection model is a versatile solution for precisely identifying tables within images, whether they have a bordered or borderless design. Its usefulness goes beyond detection alone: it helps address the complexities of unstructured documents. By returning a bounding box for each table, the model lets users isolate the table regions of interest within the visual content.
The model also pairs naturally with Optical Character Recognition (OCR). The predicted bounding boxes guide the cropping of each detected table, and OCR is then applied to the crop to extract the textual data it contains, streamlining information retrieval from unstructured documents.
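As a concrete illustration of that workflow, here is a minimal sketch that detects tables, crops each detection from the page, and passes the crop to an OCR engine. It assumes Pillow and pytesseract are installed (plus a system Tesseract binary); the box attributes follow the standard ultralytics v8 results API, and the file path is a placeholder.
```python
from PIL import Image
from ultralyticsplus import YOLO
import pytesseract  # assumes the Tesseract binary is installed on the system

# Load the detector and the document image
model = YOLO('foduucom/table-detection-and-classification')
image_path = 'path/to/your/image'
page = Image.open(image_path)

# Detect tables on the page
results = model.predict(image_path)

# Crop each detected table and extract its text with OCR
for box in results[0].boxes:
    x1, y1, x2, y2 = [int(v) for v in box.xyxy[0].tolist()]  # pixel coordinates
    label = model.names[int(box.cls[0])]                      # 'Bordered' or 'Borderless'
    table_crop = page.crop((x1, y1, x2, y2))
    text = pytesseract.image_to_string(table_crop)
    print(f'{label} table at ({x1}, {y1}, {x2}, {y2}):')
    print(text)
```
When cell-level structure is needed, a dedicated table-structure recognizer can be substituted for the plain `image_to_string` call; the sketch only demonstrates the bounding-box-to-OCR handoff.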
We invite you to explore the potential of this model and its data extraction capabilities. For those interested in harnessing its power or seeking further collaboration, we encourage you to reach out to us at [email protected]. Whether you require assistance, customization, or have innovative ideas, our collaborative approach is geared towards addressing your unique challenges. Additionally, you can actively engage with our vibrant community section for valuable insights and collective problem-solving. Your input drives our continuous improvement, as we collectively pave the way towards enhanced data extraction and document analysis.
### Downstream Use
The model can also be fine-tuned for specific table detection tasks or integrated into larger document-analysis applications, such as automated data entry, image-based data extraction, and other related workflows.
### Out-of-Scope Use
The model is not designed for unrelated object detection tasks or scenarios outside the scope of table detection.
## Bias, Risks, and Limitations
The YOLOv8s Table Detection model may have some limitations and biases:
- Performance may vary based on the quality, diversity, and representativeness of the training data.
- The model may face challenges in detecting tables with intricate designs or complex arrangements.
- Accuracy may be affected by variations in lighting conditions, image quality, and resolution.
- Detection of very small or distant tables might be less accurate.
- The model's ability to classify bordered and borderless tables may be influenced by variations in design.
### Recommendations
Users should be informed about the model's limitations and potential biases. Further testing and validation are advised for specific use cases to evaluate its performance accurately.
## How to Get Started with the Model
To begin using the YOLOv8s Table Detection model, follow these steps:
1. Install the required libraries, such as [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus) and [ultralytics](https://github.com/ultralytics/ultralytics), using pip:
```bash
pip install ultralyticsplus ultralytics
```
2. Load the model and perform predictions using the provided code snippet.
```python
from ultralyticsplus import YOLO, render_result

# Load model
model = YOLO('foduucom/table-detection-and-classification')

# Set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # NMS class-agnostic
model.overrides['max_det'] = 1000  # Maximum number of detections per image

# Set image
image = 'path/to/your/image'

# Perform inference
results = model.predict(image)

# Observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
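The printed `results[0].boxes` object holds raw tensors. The short sketch below (assuming the standard ultralytics v8 `Boxes` attributes `cls`, `conf`, and `xyxy`) shows how to turn them into class labels, confidences, and pixel coordinates.
```python
# Continuing from the snippet above: read out labels, confidences, and coordinates
for box in results[0].boxes:
    label = model.names[int(box.cls[0])]   # 'Bordered' or 'Borderless'
    confidence = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # top-left and bottom-right corners
    print(f'{label} ({confidence:.2f}): ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})')
```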
## Training Details
### Training Data
The model is trained on a diverse dataset containing images of tables from various sources. The dataset includes examples of both bordered and borderless tables, capturing different designs and styles.
### Training Procedure
The training process involves extensive computation and is conducted over multiple epochs. The model's weights are adjusted to minimize detection loss and optimize performance.
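The exact training configuration (dataset YAML, number of epochs, image size, augmentation) is not published. Purely as an illustration, a comparable fine-tune could be launched with the ultralytics Python API as sketched below; every path and hyperparameter shown is an assumption, not the configuration actually used.
```python
from ultralytics import YOLO

# Start from the pretrained YOLOv8s checkpoint and fine-tune on a custom
# table dataset described by a YOLO-format data YAML (hypothetical path).
model = YOLO('yolov8s.pt')
model.train(
    data='tables.yaml',  # dataset config defining the 'Bordered'/'Borderless' classes
    epochs=100,          # illustrative values only
    imgsz=640,
    batch=16,
)
```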
#### Metrics
- mAP@0.5 (box):
- All: 0.962
- Bordered: 0.961
- Borderless: 0.963
### Model Architecture and Objective
The YOLOv8s architecture uses a CSPDarknet-based backbone together with a feature pyramid (PAN) neck and an anchor-free detection head. These components contribute to the model's ability to detect and classify tables accurately across variations in size, design, and style.
### Compute Infrastructure
#### Hardware
NVIDIA GeForce RTX 3060
#### Software
The model was trained and fine-tuned using a Jupyter Notebook environment.
## Model Card Contact
For inquiries and contributions, please contact us at [email protected].
```bibtex
@misc{foduucom2023tabledetection,
  author = {Nehul Agrawal and Pranjal Singh Thakur},
  title  = {Table Detection and Classification},
  year   = {2023}
}
```
---