---
language: en
license: mit
pipeline_tag: image-segmentation
library_name: detectron2
---

SimCIS

Rethinking Query-based Transformer for Continual Image Segmentation (CVPR 2025)



By Yuchen Zhu*, Cheng Shi*, Dingyou Wang, Jiajin Tang, Zhengxuan Wei, Yu Wu, Guanbin Li and Sibei Yang†

*Equal contribution; †Corresponding Author

Abstract

Class-incremental/Continual image segmentation (CIS) aims to train an image segmenter in stages, where the set of available categories differs at each stage. To leverage the built-in objectness of query-based transformers, which mitigates catastrophic forgetting of mask proposals, current methods often decouple mask generation from the continual learning process. This study, however, identifies two key issues with decoupled frameworks: loss of plasticity and heavy reliance on input data order. To address these, we conduct an in-depth investigation of the built-in objectness and find that highly aggregated image features provide a shortcut for queries to generate masks through simple feature alignment. Based on this, we propose SimCIS, a simple yet powerful baseline for CIS. Its core idea is to directly select image features for query assignment, ensuring "perfect alignment" to preserve objectness, while simultaneously allowing queries to select new classes to promote plasticity. To further combat catastrophic forgetting of categories, we introduce cross-stage consistency in selection and an innovative "visual query"-based replay mechanism. Experiments demonstrate that SimCIS consistently outperforms state-of-the-art methods across various segmentation tasks, settings, splits, and input data orders. All models and codes will be made publicly available at this https URL.

📣 News

  • [2025.06.17] 🤗 🤗 🤗 We release the weights on Hugging Face.
  • [2025.06.09] 🤗 We fully release SimCIS, including both code and paper!
  • [2025.03.03] We are preparing the code and the camera-ready version of our paper!
  • [2025.02.27] Our paper is accepted by CVPR 2025!

📝 To-do list

  • Release the code and paper.
  • Release the weights in the next few days.
  • More detailed instructions.

💡 Quick Start

1. Set up environments

conda create --name simcis python=3.8 -y
conda activate simcis
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install -U opencv-python

git clone git@github.com:SooLab/SimCIS.git
cd SimCIS

git clone git@github.com:facebookresearch/detectron2.git
cd detectron2
pip install -e .
pip install git+https://github.com/cocodataset/panopticapi.git
pip install git+https://github.com/mcordts/cityscapesScripts.git
cd ..
pip install -r requirements.txt
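
To sanity-check the installation (an optional verification, not part of the original instructions), the one-liner below should print the PyTorch version, whether CUDA is available, and the detectron2 version without raising errors:

python -c "import torch, detectron2; print(torch.__version__, torch.cuda.is_available(), detectron2.__version__)"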

CUDA kernel for MSDeformAttn

After preparing the required environment, run the following commands to compile the CUDA kernel for MSDeformAttn:

CUDA_HOME must be defined and point to the directory of the installed CUDA toolkit.

cd mask2former/modeling/pixel_decoder/ops
sh make.sh
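
If CUDA_HOME is not already set in your shell, export it before running make.sh. The path below is an assumption; point it at your actual CUDA 11.1 installation:

export CUDA_HOME=/usr/local/cuda-11.1  # adjust to wherever your CUDA toolkit is installed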

2. Data Preparation

We follow the previous work Balconpas to prepare the training data.

Please download the ADE20K dataset and its instance annotations from here, then place the dataset in, or create a symbolic link to, the ./datasets directory (a symlink example follows the listing below). The data directory should be organized as follows:

ADEChallengeData2016/
  images/
  annotations/
  objectInfo150.txt
  sceneCategories.txt
  annotations_instance/
  annotations_detectron2/
  ade20k_panoptic_{train,val}.json
  ade20k_panoptic_{train,val}/
  ade20k_instance_{train,val}.json
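
If the dataset already lives elsewhere on disk, a symbolic link into ./datasets is sufficient (the source path below is illustrative):

ln -s /path/to/ADEChallengeData2016 datasets/ADEChallengeData2016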

The directory annotations_detectron2 is generated by running python datasets/prepare_ade20k_sem_seg.py. Then, run python datasets/prepare_ade20k_pan_seg.py to combine the semantic and instance annotations into panoptic annotations, and run python datasets/prepare_ade20k_ins_seg.py to extract instance annotations in COCO format.

To fit the requirements of continual segmentation tasks, run python continual/prepare_datasets.py to reorganize the annotations (reorganized annotations will be placed in ./json).

Example data preparation

# for Mask2Former
cd datasets
wget http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
unzip ADEChallengeData2016.zip
cd ADEChallengeData2016
wget http://sceneparsing.csail.mit.edu/data/ChallengeData2017/annotations_instance.tar
tar -xvf annotations_instance.tar
cd ../..
python datasets/prepare_ade20k_sem_seg.py
python datasets/prepare_ade20k_pan_seg.py
python datasets/prepare_ade20k_ins_seg.py

# for continual segmentation
python continual/prepare_datasets.py

🔥 Training

Download the weights of the base step (step 1) from Hugging Face.
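
To fetch the weights from the command line, something like the following should work with the huggingface_hub CLI (the repository ID and target directory below are placeholders, not the actual repo name):

huggingface-cli download <org>/SimCIS --local-dir ./weights  # repo ID is illustrative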

Please follow the scripts to train SimCIS!

For example:

bash scripts/pan_100-5.sh

⚡️ Evaluation

Download the weights from Hugging Face.

Please follow the scripts to evaluate SimCIS!

For example:

# 11 means the 11th step (the last step for the 100-5 setting)
bash scripts/panoptic_eval.sh 11
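
To evaluate the checkpoints of every step in sequence (a convenience sketch, assuming the script takes only the step index as its argument), a simple loop works:

for step in $(seq 1 11); do
  bash scripts/panoptic_eval.sh $step
done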

📖 Cite Us

If you find this repository useful in your research, please consider giving a star ⭐ and a citation:

@inproceedings{zhu2025rethinking,
  title={Rethinking Query-based Transformer for Continual Image Segmentation},
  author={Zhu, Yuchen and Shi, Cheng and Wang, Dingyou and Tang, Jiajin and Wei, Zhengxuan and Wu, Yu and Li, Guanbin and Yang, Sibei},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={4595--4606},
  year={2025}
}

👏 Acknowledgement and Related Work

  • This code is mainly based on Mask2Former. We thank them for their excellent work.
  • Related work for continual image segmentation: Balconpas, ECLIPSE. We appreciate the contributions of these researchers.