## CVRP: A Rice Image Dataset with High-Quality Annotations for Image Segmentation and Plant Phenomics Research

##### The multi-cultivar and multi-view rice plant image dataset (CVRP) consists of 2,303 field images with their annotated masks and 123 indoor images of individual panicles.



### Annotation Workflow
##### To streamline annotation, we combine deep learning methods with manual curation. The workflow comprises two stages: manual annotation, and model-based prediction followed by manual curation.
![alt text](img/workflow.png)
### Getting Started
##### We recommend Python 3.7, CUDA v11.3, and PyTorch 1.10.0.
***Clone***

```bash
pip install huggingface_hub
huggingface-cli download CVRPDataset/Model --local-dir /your/path/to/save/Model
cd Model
git clone https://github.com/open-mmlab/mmsegmentation.git -b v1.1.2 mmsegmentation

pip install -U openmim
mim install mmengine
mim install mmcv==2.0.0

cd mmsegmentation
pip install -v -e .
pip install "mmdet>=3.0.0rc4"
```
***UI***
##### We provide a web user interface for annotation built on Gradio.
```bash
pip install gradio
python app.py
```
##### The UI:
1. Users can upload an image or use a sample image at ①. Then, they can select one of four models at ②; we recommend **Mask2Former**. After that, click *Run*.
2. We provide two forms of segmentation results for download at ③.
![alt text](img/app.png)

***Train and Test***
1. ***Creating a Dataset***

Here is an example if you want to build your own dataset.

① Directory structure of the dataset:
<pre>
   📁 CVRPDataset/
   ├─📁 images/
   └─📁 labelme_jsons/
</pre>
② Convert LabelMe files to masks:
   ```bash
   python run/labelme2mask.py
   ```
   Now the structure looks like:
<pre>
   📁 CVRPDataset/
   ├─📁 img_dir/
   └─📁 ann_dir/
</pre>
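The conversion in ② is handled by `run/labelme2mask.py`; as a rough sketch of the underlying idea (a hypothetical helper, not the script itself), a single LabelMe polygon can be rasterized into a label mask with Pillow:

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(points, height, width, class_index=1):
    """Rasterize one polygon (a list of (x, y) points) into an HxW label mask."""
    canvas = Image.new("L", (width, height), 0)  # 0 = background
    ImageDraw.Draw(canvas).polygon([tuple(p) for p in points], fill=class_index)
    return np.array(canvas)

# a rectangle covering the upper-left corner of a 10x10 image
m = polygon_to_mask([(0, 0), (4, 0), (4, 4), (0, 4)], height=10, width=10)
```

A real converter would additionally read the polygon points and label names from each LabelMe JSON file and draw every shape onto the same canvas.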
③ Split the training set and validation set:
   ```bash
   python run/split_dataset.py
   ```
   Now the structure looks like:
<pre>
   📁 CVRPDataset/
   ├─📁 img_dir/
   │  ├─📁 train/
   │  └─📁 val/
   └─📁 ann_dir/
      ├─📁 train/
      └─📁 val/
</pre>
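The split performed by `run/split_dataset.py` can be sketched as follows (an illustrative 80/20 split with a fixed seed; the script's actual ratio and options may differ):

```python
import random

def split_dataset(filenames, val_ratio=0.2, seed=42):
    """Deterministically shuffle file names and split them into train/val lists."""
    names = sorted(filenames)           # fixed order before shuffling
    random.Random(seed).shuffle(names)  # seeded: the same split every run
    n_val = int(len(names) * val_ratio)
    return names[n_val:], names[:n_val]  # (train, val)

train, val = split_dataset([f"img_{i:03d}.jpg" for i in range(10)])
```

Seeding the shuffle keeps the train/val split reproducible across runs, which matters when comparing models.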

   If your annotations are in RGB format, run
   ```bash
   python run/transform.py
   ```
   and then split the training and validation sets following the format in ③.
   You can also download our training set and validation set [here](http://61.155.111.202:18080/cvrp).
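Under the assumption that `run/transform.py` maps RGB annotation colors back to class indices, a minimal NumPy sketch looks like this (the color table mirrors the dataset palette and is an assumption; adjust it to your annotation colors):

```python
import numpy as np

# assumed color -> class-index mapping: gray background, red panicle
COLOR_TO_INDEX = {(127, 127, 127): 0, (200, 0, 0): 1}

def rgb_to_index(rgb_mask: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB annotation into an HxW array of class indices."""
    out = np.zeros(rgb_mask.shape[:2], dtype=np.uint8)
    for color, index in COLOR_TO_INDEX.items():
        out[np.all(rgb_mask == color, axis=-1)] = index
    return out

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[...] = (127, 127, 127)   # background everywhere
rgb[0, 0] = (200, 0, 0)      # one panicle pixel
idx = rgb_to_index(rgb)
```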

2. ***Dataset Configs***

```bash
cd mmsegmentation/mmseg/datasets
rm -rf __init__.py                  # delete original file
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP.py
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/__init__.py  
cd ../../configs/_base_/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP_pipeline.py
```
① To register your own dataset, import it in `mmseg/datasets/__init__.py`:
   ```python
   from .CVRP import CVRPDataset
   ```
   and add it to the export list:
   ```python
   # other datasets
   __all__ = ['CVRPDataset']
   ```
② Register the dataset class in `mmseg/datasets/CVRP.py`:
   ```python
   class CVRPDataset(BaseSegDataset):
       METAINFO = {
           'classes': ['background', 'panicle'],
           'palette': [[127, 127, 127], [200, 0, 0]]
       }
   ```
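The `palette` above assigns an RGB color to each class index. As a quick sanity check outside mmseg, a label mask can be colorized with plain NumPy:

```python
import numpy as np

# palette from METAINFO: index 0 = background (gray), index 1 = panicle (red)
PALETTE = np.array([[127, 127, 127], [200, 0, 0]], dtype=np.uint8)

def colorize(label_mask: np.ndarray) -> np.ndarray:
    """Map an HxW array of class indices to an HxWx3 RGB image via the palette."""
    return PALETTE[label_mask]

mask = np.array([[0, 1], [1, 0]])
rgb = colorize(mask)
```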
   
③ Modify the data-processing pipeline in `config/_base_/datasets/CVRP_pipeline.py`:
   ```python
    dataset_type = 'CVRPDataset'
    data_root = 'CVRPDataset/'
   ```
   You'll need to specify the paths for the train and evaluation data directories:
  ```python
    # train_dataloader: 
    data_prefix=dict(img_path='img_dir/train', seg_map_path='ann_dir/train')
    # val_dataloader:
    data_prefix=dict(img_path='img_dir/val', seg_map_path='ann_dir/val')
  ```

3. ***Model Configs***

You can generate model config files using *run_configs.py*

```bash
cd mmsegmentation
mkdir 'work_dirs' 'CVRP_configs' 'outputs' 'CVRPDataset'
python ../run/run_configs.py --model_name deeplabv3plus -m configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name knet -m configs/knet/knet-s3_swin-l_upernet_8xb2-adamw-80k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name mask2former -m configs/mask2former/mask2former_swin-l-in22k-384x384-pre_8xb2-160k_ade20k-640x640.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name segformer -m configs/segformer/segformer_mit-b5_8xb2-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
```
Alternatively, you can download the model config files [here](https://huggingface.co/CVRPDataset/Model/tree/main/model_configs).
```bash
cd CVRP_configs
wget https://huggingface.co/CVRPDataset/Model/resolve/main/model_configs/CVRP_mask2former.py
```

4. ***Train***
```bash
cd mmsegmentation 
python tools/train.py CVRP_configs/CVRP_mask2former.py
```
Alternatively, you can download a trained checkpoint [here](https://huggingface.co/CVRPDataset/Model/tree/main/checkpoint).
```bash
cd work_dirs
mkdir CVRP_mask2former
cd CVRP_mask2former
wget https://huggingface.co/CVRPDataset/Model/resolve/main/checkpoint/Mask2Former.pth
```

5. ***Test***
```bash
python ../run/test.py -d CVRPDataset/val -m CVRP_configs/CVRP_mask2former.py -pth work_dirs/CVRP_mask2former/Mask2Former.pth -o outputs/CVRP_mask2former
```
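For reference, the per-class IoU that segmentation evaluations typically report can be computed as follows (a generic sketch on label masks, not the exact code in `run/test.py`):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray, class_index: int = 1) -> float:
    """Intersection-over-union for one class between two HxW label masks."""
    p, g = pred == class_index, gt == class_index
    union = np.logical_or(p, g).sum()
    if union == 0:
        return float("nan")  # class absent from both masks: IoU undefined
    return np.logical_and(p, g).sum() / union

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
# intersection = 1 pixel, union = 2 pixels
```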

### LabelMe
##### If you need to manually adjust the annotations, you can use LabelMe.
```bash
pip install labelme==3.16.7
python run/mask2json.py   # convert predicted masks to LabelMe JSON files
labelme                   # open the LabelMe UI and edit the annotations
```

After editing, convert the JSON files back to PNG masks:
```bash
python run/json2png.py
```
### Data and Code Availability
The CVRP dataset and accompanying code are publicly available on Hugging Face at https://huggingface.co/datasets/CVRPDataset/CVRP and https://huggingface.co/CVRPDataset/Model,
and from the Bioinformatics service of Nanjing Agricultural University at http://bic.njau.edu.cn/CVRP.html.

### Acknowledgements
We thank Mr. Zhitao Zhu, Dr. Weijie Tang, and Dr. Yunhui Zhang for their technical support.