CVRPDataset committed · Commit 5e12285 · verified · Parent(s): 7ab4620

Update README.md

Files changed (1): README.md (+162 −160)
## CVRP: A Rice Image Dataset with High-Quality Annotations for Image Segmentation and Plant Phenomics Research

##### The multi-cultivar and multi-view rice plant image dataset (CVRP) consists of 2,303 field images with their annotated masks and 123 indoor images of individual panicles.


### Annotation Workflow
##### To optimize the annotation process, we combine deep learning methods with manual curation. The workflow comprises two stages: manual annotation, and model-based prediction followed by manual curation.
![alt text](img/workflow.png)
### Getting Started
##### We recommend Python 3.7, CUDA 11.3, and PyTorch 1.10.0.
***Clone***

```bash
git clone https://huggingface.co/CVRPDataset/Model
cd Model
git clone https://github.com/open-mmlab/mmsegmentation.git -b v1.1.2 mmsegmentation

pip install -U openmim
mim install mmengine
mim install mmcv==2.0.0

pip install -r run/requirements.txt
cd mmsegmentation
pip install -v -e .
```
***Creating a Dataset***

1. Directory structure of the dataset:
<pre>
📁 CVRPDataset/
├─📁 images/
└─📁 labelme_jsons/
</pre>
2. Convert the LabelMe files to masks:
```bash
python run/labelme2mask.py
```
   Now the structure looks like:
<pre>
📁 CVRPDataset/
├─📁 img_dir/
└─📁 ann_dir/
</pre>
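The conversion step above can be illustrated with a minimal sketch. This is not the repository's actual `labelme2mask.py`; the function name and the `class_ids` mapping are assumptions for illustration. It rasterizes LabelMe polygon shapes into a single-channel class-ID mask with Pillow:

```python
import numpy as np
from PIL import Image, ImageDraw

def labelme_shapes_to_mask(shapes, height, width, class_ids=None):
    """Rasterize LabelMe polygon shapes into a single-channel class-ID mask.

    `shapes` follows the LabelMe JSON schema: a list of dicts with a
    "label" string and a "points" list of [x, y] polygon vertices.
    """
    class_ids = class_ids or {"panicle": 1}   # 0 is reserved for background
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for shape in shapes:
        label = shape["label"]
        if label not in class_ids:
            continue  # skip labels we don't map to a class
        points = [tuple(p) for p in shape["points"]]
        draw.polygon(points, fill=class_ids[label])
    return np.asarray(mask, dtype=np.uint8)
```

The real script additionally walks `labelme_jsons/`, reads each JSON file, and writes the resulting masks as PNGs into `ann_dir/`.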
3. Split the training set and test set:
```bash
python run/split_dataset.py
```
   Now the structure looks like:
<pre>
📁 CVRPDataset/
├─📁 img_dir/
│ ├─📁 train/
│ └─📁 val/
└─📁 ann_dir/
  ├─📁 train/
  └─📁 val/
</pre>
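A deterministic split like the one `split_dataset.py` produces might look like the following sketch; the ratio, seed, and function name are illustrative assumptions, not the repository's actual parameters:

```python
import random

def split_dataset(names, val_ratio=0.2, seed=0):
    """Shuffle file names reproducibly and split them into train/val lists."""
    names = sorted(names)        # sort first so the shuffle is reproducible
    rng = random.Random(seed)    # fixed seed keeps the split stable across runs
    rng.shuffle(names)
    n_val = int(len(names) * val_ratio)
    return names[n_val:], names[:n_val]  # (train, val)
```

The resulting lists can then be used to copy each image into `img_dir/train` or `img_dir/val` and its matching mask into `ann_dir/train` or `ann_dir/val`.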

You can download our training set and test set [here](http://61.155.111.202:18080/cvrp/dataset).

***Dataset Configs***

```bash
cd mmseg/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP.py
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/__init__.py
cd ../../configs/_base_/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP_pipeline.py
```
If you want to register your own dataset:
1. Import the dataset class in `mmseg/datasets/__init__.py`:
```python
from .CVRP import CVRPDataset
```
   and add it to the module's export list:
```python
# other datasets
__all__ = ['CVRPDataset']
```
2. Register the dataset class in `mmseg/datasets/CVRP.py`:
```python
from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset


@DATASETS.register_module()
class CVRPDataset(BaseSegDataset):
    METAINFO = {
        'classes': ['background', 'panicle'],
        'palette': [[127, 127, 127], [200, 0, 0]]
    }
```

3. Modify the data-processing pipeline in `configs/_base_/datasets/CVRP_pipeline.py`:
```python
dataset_type = 'CVRPDataset'
data_root = 'CVRPDataset/'
```
   You'll need to specify the paths for the train and evaluation data directories:
```python
# train_dataloader:
data_prefix=dict(img_path='img_dir/train', seg_map_path='ann_dir/train')
# val_dataloader:
data_prefix=dict(img_path='img_dir/val', seg_map_path='ann_dir/val')
```

***Model Configs***

You can generate model config files using *run_configs.py*:

```bash
mkdir work_dirs CVRP_configs outputs
python run/run_configs.py --model_name deeplabv3plus -m configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python run/run_configs.py --model_name knet -m configs/knet/knet-s3_swin-l_upernet_8xb2-adamw-80k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python run/run_configs.py --model_name mask2former -m configs/mask2former/mask2former_swin-l-in22k-384x384-pre_8xb2-160k_ade20k-640x640.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python run/run_configs.py --model_name segformer -m configs/segformer/segformer_mit-b5_8xb2-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
```
Alternatively, you can download the model config files [here](https://huggingface.co/CVRPDataset/Model/tree/main/model_configs):
```bash
cd CVRP_configs
wget https://huggingface.co/CVRPDataset/Model/resolve/main/model_configs/CVRP_mask2former.py
```

***Train***
```bash
cd mmsegmentation
python tools/train.py CVRP_configs/CVRP_mask2former.py
```
Alternatively, you can download a trained checkpoint [here](https://huggingface.co/CVRPDataset/Model/tree/main/checkpoint).

***Test***
```bash
cd ..
python run/test.py -d CVRPDataset/val -m CVRP_configs/CVRP_mask2former.py -pth work_dirs/CVRP_mask2former/Mask2Former.pth -o outputs/CVRP_mask2former
```
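Segmentation tests like this are typically scored with per-class intersection-over-union. As a minimal illustration (a sketch, not the repository's `test.py`), the IoU of the panicle class can be computed from a predicted and a ground-truth mask:

```python
import numpy as np

def class_iou(pred, gt, cls=1):
    """Intersection-over-union of one class between two integer label masks."""
    pred_c = pred == cls
    gt_c = gt == cls
    union = np.logical_or(pred_c, gt_c).sum()
    if union == 0:
        return float('nan')  # class absent from both masks: IoU undefined
    inter = np.logical_and(pred_c, gt_c).sum()
    return inter / union
```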
### UI
##### We provide a web user interface for annotation, built with Gradio:
```bash
python app.py
```
##### The UI:
1. Users can upload an image or use a sample image at ①. Then they can select one of the four models at ②; we recommend **Mask2Former**. After that, click *Run*.
2. We provide two forms of segmentation results for download at ③.
![alt text](img/app.png)

### LabelMe
##### If you need to manually adjust the annotations, you can use LabelMe. First convert the masks to LabelMe JSON files, then install and launch LabelMe:
```bash
python run/mask2json.py
pip install labelme==3.16.7
labelme
```
After editing, convert the JSON annotations back to PNG masks:
```bash
python run/json2png.py
```
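The resulting masks use the palette registered for `CVRPDataset` (grey background, red panicles). As a small sketch assuming that same two-class palette (the helper name is hypothetical), a class-ID mask can be colorized for visual inspection:

```python
import numpy as np
from PIL import Image

# Palette from the CVRPDataset METAINFO: 0 = background, 1 = panicle
PALETTE = {0: (127, 127, 127), 1: (200, 0, 0)}

def colorize_mask(mask):
    """Map a 2-D class-ID mask to an RGB image using PALETTE."""
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        rgb[mask == cls] = color
    return Image.fromarray(rgb)
```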
### Citation
Please consider citing our paper if you find this work useful!

### Acknowledgements
We thank Mr. Zhitao Zhu, Dr. Weijie Tang, and Dr. Yunhui Zhang for their technical support.