## CVRP: A Rice Image Dataset with High-Quality Annotations for Image Segmentation and Plant Phenomics Research
##### The multi-cultivar and multi-view rice plant (CVRP) dataset consists of 2,303 field images with annotated masks and 123 indoor images of individual panicles.
### Annotation Workflow
##### To streamline annotation, we combine deep learning with manual curation. The workflow comprises two stages: (1) manual annotation, and (2) model-based prediction followed by manual curation.
![Annotation workflow](img/workflow.png)
### Getting Started
##### We recommend Python 3.7, CUDA 11.3, and PyTorch 1.10.0.
***Clone***
```bash
pip install huggingface_hub
huggingface-cli download CVRPDataset/Model --local-dir /your/path/to/save/Model
cd Model
git clone https://github.com/open-mmlab/mmsegmentation.git -b v1.1.2 mmsegmentation
pip install -U openmim
mim install mmengine
mim install mmcv==2.0.0
cd mmsegmentation
pip install -v -e .
pip install "mmdet>=3.0.0rc4"
```
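A quick way to confirm the installation succeeded is to import the packages and check their versions against the pins above:
```python
# Sanity check: both packages should import cleanly after `pip install -v -e .`
import mmcv
import mmseg

print(mmcv.__version__)   # expect 2.0.0
print(mmseg.__version__)  # expect 1.1.2
```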
***UI***
##### We provide a web user interface for annotation built on Gradio.
```bash
pip install gradio
python app.py
```
##### Using the UI:
1. Users can upload an image or use a sample image at β‘ . Then they can select one of four models at β‘‘; we recommend **Mask2Former**. After that, click *Run*.
2. Two forms of segmentation results are available for download at β‘’.
![Web UI for annotation](img/app.png)
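If you prefer scripted inference over the web UI, here is a minimal sketch using the MMSegmentation Python API; the config, checkpoint, and image paths are placeholders taken from the training steps later in this README:
```python
# Minimal programmatic inference, as an alternative to the web UI.
from mmseg.apis import init_model, inference_model

model = init_model('CVRP_configs/CVRP_mask2former.py',
                   'work_dirs/CVRP_mask2former/Mask2Former.pth',
                   device='cuda:0')
result = inference_model(model, 'your_image.jpg')  # placeholder image path
print(result.pred_sem_seg.data.shape)  # (1, H, W) tensor of class indices
```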
***Train and Test***
1. ***Creating a Dataset***
Here is an example if you want to build your own dataset.
β‘  Directory structure of the dataset:
<pre>
πŸ“ CVRPDataset/
β”œβ”€πŸ“ images/
β””β”€πŸ“ labelme_jsons/
</pre>
β‘‘ Convert LabelMe files to masks (a conceptual sketch of this step follows below):
```bash
python run/labelme2mask.py
```
Now the structure looks like:
<pre>
πŸ“ CVRPDataset/
β”œβ”€πŸ“ img_dir/
β””β”€πŸ“ ann_dir/
</pre>
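For reference, the conversion in β‘‘ conceptually rasterizes each LabelMe polygon into a single-channel class mask; the label-to-index mapping below is an assumption, and `run/labelme2mask.py` is the authoritative implementation:
```python
# Conceptual sketch only; use run/labelme2mask.py for the real conversion.
import json
from PIL import Image, ImageDraw

CLASS_IDS = {'panicle': 1}  # assumed mapping; background stays 0

def labelme_to_mask(json_path, out_path):
    with open(json_path) as f:
        ann = json.load(f)
    mask = Image.new('L', (ann['imageWidth'], ann['imageHeight']), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann['shapes']:
        points = [tuple(p) for p in shape['points']]
        draw.polygon(points, fill=CLASS_IDS.get(shape['label'], 0))
    mask.save(out_path)
```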
β‘’ Split into training and validation sets (a sketch of this step follows below):
```bash
python run/split_dataset.py
```
Now the structure looks like:
<pre>
πŸ“ CVRPDataset/
β”œβ”€πŸ“ img_dir/
β”‚ β”œβ”€πŸ“ train/
β”‚ β””β”€πŸ“ val/
β””β”€πŸ“ ann_dir/
β”œβ”€πŸ“ train/
β””β”€πŸ“ val/
</pre>
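A rough sketch of what the split in β‘’ amounts to; the 4:1 train/val ratio and the `.jpg`/`.png` extensions are assumptions, and `run/split_dataset.py` is the authoritative implementation:
```python
# Rough sketch only; use run/split_dataset.py for the real split.
import random
import shutil
from pathlib import Path

root = Path('CVRPDataset')
images = sorted((root / 'img_dir').glob('*.jpg'))  # extension assumed
random.seed(0)
random.shuffle(images)
n_val = len(images) // 5  # assumed 4:1 train/val ratio

for i, img in enumerate(images):
    split = 'val' if i < n_val else 'train'
    ann = root / 'ann_dir' / (img.stem + '.png')
    for src, sub in [(img, 'img_dir'), (ann, 'ann_dir')]:
        dst = root / sub / split
        dst.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst / src.name))
```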
If your annotations are in RGB format, run
```bash
python run/transform.py
```
and then split the training and validation sets following the format in β‘’.
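Conceptually, that conversion maps each palette color to a class index; a minimal sketch, assuming the colors match the `METAINFO` palette defined in the dataset class below (`run/transform.py` is the authoritative implementation):
```python
# Minimal sketch of an RGB-to-index conversion; see run/transform.py.
import numpy as np
from PIL import Image

PALETTE = {(127, 127, 127): 0, (200, 0, 0): 1}  # background, panicle

def rgb_to_index(rgb_path, out_path):
    rgb = np.array(Image.open(rgb_path).convert('RGB'))
    index = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, cls in PALETTE.items():
        index[np.all(rgb == color, axis=-1)] = cls
    Image.fromarray(index).save(out_path)
```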
You can also download our training set and validation set [here](http://61.155.111.202:18080/cvrp).
2. ***Dataset Configs***
```bash
cd mmsegmentation/mmseg/datasets
rm -rf __init__.py # delete original file
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP.py
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/__init__.py
cd ../../configs/_base_/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP_pipeline.py
```
β‘  If you want to register your own dataset, import it in `mmseg/datasets/__init__.py`:
```python
from .CVRP import CVRPDataset
```
and append it to the existing `__all__` list:
```python
__all__ = [
    # ... other datasets ...
    'CVRPDataset',
]
```
β‘‘ Register the dataset class in `mmseg/datasets/CVRP.py`:
```python
from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset

@DATASETS.register_module()
class CVRPDataset(BaseSegDataset):
    METAINFO = {
        'classes': ['background', 'panicle'],
        'palette': [[127, 127, 127], [200, 0, 0]]
    }
```
β‘’ Modify the data-processing pipeline in `configs/_base_/datasets/CVRP_pipeline.py`:
```python
dataset_type = 'CVRPDataset'
data_root = 'CVRPDataset/'
```
You'll need to specify the paths of the training and validation data directories.
```python
# train_dataloader:
data_prefix=dict(img_path='img_dir/train', seg_map_path='ann_dir/train')
# val_dataloader:
data_prefix=dict(img_path='img_dir/val', seg_map_path='ann_dir/val')
```
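For orientation, these prefixes sit inside the dataloader dicts of the pipeline file; a sketch of a typical MMSegmentation 1.x `train_dataloader` (batch size, workers, and sampler are illustrative values, and `train_pipeline` is assumed to be defined earlier in the same file):
```python
train_dataloader = dict(
    batch_size=2,   # illustrative value
    num_workers=2,  # illustrative value
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(img_path='img_dir/train',
                         seg_map_path='ann_dir/train'),
        pipeline=train_pipeline))
```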
3. ***Model Configs***
You can generate model config files using *run/run_configs.py*:
```bash
cd mmsegmentation
mkdir work_dirs CVRP_configs outputs CVRPDataset
python ../run/run_configs.py --model_name deeplabv3plus -m configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
python ../run/run_configs.py --model_name knet -m configs/knet/knet-s3_swin-l_upernet_8xb2-adamw-80k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
python ../run/run_configs.py --model_name mask2former -m configs/mask2former/mask2former_swin-l-in22k-384x384-pre_8xb2-160k_ade20k-640x640.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
python ../run/run_configs.py --model_name segformer -m configs/segformer/segformer_mit-b5_8xb2-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
```
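To confirm a generated config picked up the CVRP dataset settings, you can load it with mmengine (the file name assumes the mask2former command above):
```python
from mmengine import Config

cfg = Config.fromfile('CVRP_configs/CVRP_mask2former.py')
print(cfg.train_dataloader.dataset.type)       # expect 'CVRPDataset'
print(cfg.train_dataloader.dataset.data_root)  # expect 'CVRPDataset/'
```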
Alternatively, you can download model config files [here](https://huggingface.co/CVRPDataset/Model/tree/main/model_configs):
```bash
cd CVRP_configs
wget https://huggingface.co/CVRPDataset/Model/resolve/main/model_configs/CVRP_mask2former.py
```
4. ***Train***
```bash
cd mmsegmentation
python tools/train.py CVRP_configs/CVRP_mask2former.py
```
Alternatively, you can download a trained checkpoint [here](https://huggingface.co/CVRPDataset/Model/tree/main/checkpoint):
```bash
cd work_dirs
mkdir CVRP_mask2former
cd CVRP_mask2former
wget https://huggingface.co/CVRPDataset/Model/resolve/main/checkpoint/Mask2Former.pth
```
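A quick check that the downloaded checkpoint loads (the path follows the commands above; the top-level keys are what MMEngine checkpoints typically contain):
```python
import torch

ckpt = torch.load('work_dirs/CVRP_mask2former/Mask2Former.pth',
                  map_location='cpu')
print(list(ckpt.keys()))  # typically includes 'state_dict' and 'meta'
```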
5. ***Test***
```bash
python ../run/test.py -d CVRPDataset/val -m CVRP_configs/CVRP_mask2former.py -pth work_dirs/CVRP_mask2former/Mask2Former.pth -o outputs/CVRP_mask2former
```
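To spot-check the test outputs, here is a small helper (not part of the repo) that computes per-class IoU between a predicted mask and its ground truth:
```python
# Hypothetical helper for spot-checking; assumes single-channel index masks.
import numpy as np
from PIL import Image

def per_class_iou(pred_path, gt_path, num_classes=2):
    pred = np.array(Image.open(pred_path))
    gt = np.array(Image.open(gt_path))
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float('nan'))
    return ious  # [background IoU, panicle IoU]
```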
### LabelMe
##### If you need to manually adjust annotations, you can use LabelMe: convert the masks to LabelMe JSON files, edit them in the LabelMe GUI, and then convert the edited JSON files back to PNG masks.
```bash
python run/mask2json.py   # convert masks to LabelMe JSON files
pip install labelme==3.16.7
labelme                   # edit the annotations in the LabelMe GUI
```
```bash
python run/json2png.py    # convert the edited JSON files back to PNG masks
```
### Data and Code Availability
The CVRP dataset and accompanying code are publicly available on Hugging Face at https://huggingface.co/datasets/CVRPDataset/CVRP and https://huggingface.co/CVRPDataset/Model,
and from the bioinformatics service of Nanjing Agricultural University at http://bic.njau.edu.cn/CVRP.html.
### Acknowledgements
We thank Mr. Zhitao Zhu, Dr. Weijie Tang, and Dr. Yunhui Zhang for their technical support.