Dataset Viewer (auto-converted to Parquet)

Columns: forum_id (string), forum_title (string), forum_authors (sequence of strings), forum_abstract (string), forum_pdf_url (string), note_id (string), note_type (string, 6 classes), note_created (int64, Unix timestamp in milliseconds), note_replyto (string), note_readers (sequence of strings), note_signatures (sequence of strings), note_text (string)
U0P622bfUN
Federated Generative Learning with Foundation Models
[ "Jie Zhang", "Xiao hua Qi", "Shengyuan Pang", "Siyuan Pan", "Xiaobing Tu", "Pengfei Wan", "Bo Zhao" ]
Existing federated learning solutions focus on transmitting features, parameters, or gradients between clients and the server, and suffer from serious inefficiency and privacy-leakage problems. Building on emerging foundation generative models, we propose a novel federated learning framework, namely Federated Generative Learning. In this framework, each client creates text prompts tailored to its local data, based on the data's features, and sends them to the server. Given the received prompts, informative training data can be synthesized remotely on the server using foundation generative models. This new framework offers several advantages, including enhanced communication efficiency, improved resilience to distribution shift, significant performance gains, and enhanced privacy protection. We validate these benefits through extensive experiments conducted on the ImageNet and DomainNet datasets; e.g., on the ImageNet100 dataset with a highly skewed data distribution, our method outperforms FedAvg by 12% in a single communication round. Moreover, our approach requires transmitting only 229 bytes of prompts, while FedAvg necessitates the transmission of 42.7 MB of parameters.
/pdf/3fa3548f0c4b1f86504bdb1db4424c536e65a2b7.pdf
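The abstract's communication-cost comparison (a few hundred bytes of prompts versus tens of MB of parameters) can be sanity-checked with a quick measurement. Below is a minimal sketch assuming a ResNet-18 classifier and made-up class-level prompt strings; the paper's exact model, prompts, and byte counts may differ.

```python
import io
import torch
from torchvision.models import resnet18

# Hypothetical class-level prompts for a 2-class task (illustrative strings,
# not the authors' exact prompts).
prompts = [
    "a photo of tench, real-world images, high resolution",
    "a photo of goldfish, real-world images, high resolution",
]
prompt_bytes = sum(len(p.encode("utf-8")) for p in prompts)

# FedAvg-style payload: the full serialized model state for one round.
buffer = io.BytesIO()
torch.save(resnet18(num_classes=2).state_dict(), buffer)

print(f"prompt upload: {prompt_bytes} bytes")
print(f"model upload:  {buffer.getbuffer().nbytes / 1e6:.1f} MB")
```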
uP676dsarr
official_review
1,698,824,961,127
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_HdAm" ]
summary: This work introduces a novel federated learning framework called Federated Generative Learning, which addresses the inefficiency and privacy issues of existing solutions that transmit features, parameters, or gradients between clients and servers. In this framework, clients generate text prompts tailored to their local data and send them to the server, where informative training data is synthesized using Stable Diffusion. This approach offers enhanced communication efficiency, significant performance gains, and improved privacy protection, as demonstrated through extensive experiments on ImageNet and DomainNet datasets.

soundness: 3 good
presentation: 3 good
contribution: 3 good

strengths:
- This work proposes a novel learning framework to train on local data without accessing the raw data directly.
- Communication of prompts instead of model parameters addresses several issues of existing federated learning frameworks: high communication cost and potential privacy threats from attackers.

weaknesses:
- The proposed method may be highly dependent on the performance of both diffusion models and visual-captioning models.
- An ablation study varying the foundation models is needed.
- In a similar vein, the local training dataset should be unseen during pretraining of the foundation models and should be more difficult than ImageNet, which is a standard image classification dataset. As mentioned in the Introduction section, local training data are more likely to be privacy-sensitive, so they are more likely to be unseen by, or not contained in, the pretraining data of foundation models such as BLIPv2 and Stable Diffusion. Evaluation on ImageNet or DomainNet implicitly assumes that local data share a similar domain with, or are a subset of, the pretraining dataset of the foundation models, which is publicly accessible or has no privacy issue.
- Clients in federated learning are often assumed to have limited memory or computation capacity. Generating prompts using a large visual-captioning model on each client is impractical.

questions:
- The quality of synthetic data could differ greatly according to the domain discrepancy between the local training data and the pretraining data of the foundation model. Instead of using standard image classification datasets, does the proposed method work for federated learning on fine-grained classification such as CUB-200, Cars, and medical image datasets?

flag_for_ethics_review: ['No ethics review needed.']
rating: 6: marginally above the acceptance threshold
confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
code_of_conduct: Yes
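The instance-level prompts this review refers to come from a client-side captioning model (the review names BLIPv2). A minimal sketch of how a client might caption one local image with BLIP-2 via `transformers`; the checkpoint id, file name, and decoding settings are assumptions, not the authors' pipeline:

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Hypothetical local client image; in FGL only the resulting caption leaves the client.
image = Image.open("local_sample.jpg")
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
caption_ids = model.generate(**inputs, max_new_tokens=30)
prompt = processor.batch_decode(caption_ids, skip_special_tokens=True)[0].strip()
print(prompt)  # instance-level prompt to send to the server
```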
rBIpJVtNpZ
official_review
1,698,700,659,171
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_wnsW" ]
summary:
- The main idea of the paper is to use prompts to “summarize” the client-side data in federated learning. These prompts are then sent to the central server and fed to a foundation generative model, with the hope that the generated data distribution is close to the client data distribution.
- With this idea, federated learning can be made one-round or few-round to drastically reduce communication costs: clients can just send the prompts to the server one-shot, as the prompts and labels require very little communication.
- The paper then evaluates on several natural image datasets (subsets of ImageNet) and shows that the proposed technique can match FedAvg in performance.
- The paper also performs some privacy analysis and shows that by transmitting prompts instead of gradients/model updates/data, the membership inference attack success rate drops significantly.

soundness: 2 fair
presentation: 4 excellent
contribution: 2 fair

strengths:
- The proposed approach is interesting and novel to my understanding. Assuming the client data distributions can be well captured by the foundation generative model, the proposed technique offers clear benefits in simplicity and in reducing communication costs.
- Putting aside the underlying assumptions of the proposed technique (see weaknesses), the paper is overall well-executed in terms of the diversity of the experiments and visualizations.
- The paper is generally well-written and easy to follow.

weaknesses:
[W1] The main weakness of the proposed method is the underlying assumption that client data can, in fact, be generated by foundation models. This sounds obvious but is key to the applicability of the proposed approach in practice. To put it bluntly, is the proposed solution searching for a problem?
1. Settings where FL is helpful—such as medical images across hospitals [1] or user-generated text across mobile phones [2]—are often those where the data distributions aren’t covered by the pre-training data of foundation models. The datasets used in the experiments are all natural image datasets (ImageNette, ImageFruit, etc.), which can be well represented in the pre-training dataset of foundation generative models. I would appreciate results on non-natural image datasets.
2. In particular, if we consider horizontal FL settings (as in the paper), the server may even know the possible classes/labels (e.g., federating binary classifiers) without communicating with the clients, in which case the “class-level prompts” may not be needed at all, since the server can just generate images by itself.

[W2] More broadly, the threat model of the paper may need to be defined more clearly.
- What exactly is client privacy in this case? Can the client data still be considered “private” if you could already generate them with public foundation models (see also [3])? Does the privacy of the data lie in the pixels, or simply in the description of the pixels?
- In many cases, the descriptions of the images can already leak privacy. If we apply the proposed method to cross-device federated learning on users’ photo data, the server could already learn a lot about the user data distribution and preferences. For example, following Sec 5.4 and Figure 6, knowing that a user has lots of golf photos (without knowing the pixels of the photos) already allows the FL service provider (e.g., Google) to sell targeted ads.

[1] FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings. NeurIPS 2022 Datasets and Benchmarks. https://arxiv.org/abs/2210.04620
[2] https://research.google/pubs/pub47586/
[3] Considerations for Differentially Private Learning with Large-Scale Public Pretraining. https://arxiv.org/pdf/2212.06470.pdf

questions:
- [Intro section] Why exactly does the proposed method provide robustness to data heterogeneity? Heterogeneity can still surface in the (instance-level) client prompts and subsequently the generated images.
- Minor comment: consider using different citation commands `\citet`, `\cite`, etc. in LaTeX to make the formatting of the in-text references consistent.

flag_for_ethics_review: ['Yes, Privacy, security and safety']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes
VulmbS3YYc
official_review
1,698,549,593,720
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_ZNb2" ]
summary: The paper addresses efficiency and client-shift issues in federated learning by harnessing generative foundation models. Unlike traditional approaches that communicate model parameters, this work has clients send instance-level or class-level prompts, generated by a pre-trained captioning model, to the server. The server aggregates these prompts to produce a proxy dataset via a pre-trained generative model, enabling standard federated learning on this dataset. The server then dispatches the refined weights back to the clients. Empirical evaluations underscore the efficacy of the proposed approach.

soundness: 2 fair
presentation: 2 fair
contribution: 2 fair

strengths:
1. The proposed approach significantly reduces communication costs compared to traditional parameter transmission.
2. By leveraging foundation models to synthesize proxy data, the authors effectively mitigate the client-shift problem.
3. A variety of experimental settings across four datasets demonstrate the robustness and effectiveness of the proposed method.

weaknesses:
1. The training framework is predominantly tailored for image datasets, limiting its applicability.
2. The method heavily depends on the congruence between the captioning and generative models, making it challenging to ensure the proxy dataset's distribution aligns with the private data.
3. The experimental setup, with only five clients, may not adequately represent real-world scenarios; expanding the evaluation to include 50 or 100 clients could provide more insightful results.
4. The comparison to a single baseline, FedAvg, falls short; including comparisons to advanced Federated Learning frameworks could better highlight the proposed method's effectiveness.
5. Table 2 shows the proposed method outperforming centralized learning significantly; a thorough explanation of this phenomenon is warranted.

questions:
1. I wonder if the approach can be applied to other types of datasets besides image datasets.
2. What would the experimental results be when the number of clients becomes larger, e.g., 100?

flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes
2ZurMVHvCB
official_review
1,698,543,827,436
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_zu8q" ]
summary: The Federated Generative Learning (FGL) framework offers a novel approach to federated learning, leveraging foundation generative models like Stable Diffusion to generate training data from prompts shared by clients. Clients contribute class-level or instance-level prompts, encapsulating key features of their local data. The server, in turn, amalgamates these prompts and synthesizes corresponding training data for global model training. This approach trims down communication costs, since only concise prompts, and not bulky gradients or models, are transferred. The framework is also robust to data diversity and has demonstrated superior performance: with just one communication round, it outdid 200 rounds of FedAvg in accuracy. When trialed on skewed ImageNet100 distributions, FGL exceeded FedAvg's performance by 30% in just five communication rounds. Apart from being efficient, FGL also enhances privacy, as prompts reveal less private data than traditional methods. Evaluations confirmed no private-data memorization in the synthetic images and enhanced resilience against membership inference attacks. However, challenges persist with non-IID data, intricate domains, and the potential risks associated with prompts.

soundness: 2 fair
presentation: 3 good
contribution: 2 fair

strengths:
1. Novel idea of using foundation models to synthesize training data for federated learning, enabling low communication costs and better privacy.
2. Compelling experimental results demonstrating accuracy improvements over traditional FedAvg, especially with skewed data distributions.
3. Thorough analysis and quantification of privacy benefits, showing reduced memorization and vulnerability to membership inference attacks.

weaknesses:
1. The evaluation of the Federated Generative Learning (FGL) framework is limited to simpler domains like ImageNet and doesn't extend to other areas, casting doubt on whether prompts can encapsulate complexity.
2. While FGL aids in data generation for non-IID data, achieving congruence with a global distribution is yet to be addressed.
3. Security risks of prompts require more analysis. Could prompts be reverse-engineered to obtain private data?
4. The framework hasn't been benchmarked against other federated learning methods that employ generative models.

questions: please refer to the weaknesses

flag_for_ethics_review: ['No ethics review needed.']
rating: 5: marginally below the acceptance threshold
confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
code_of_conduct: Yes
YhuEd25YHs
official_comment
1,700,233,875,070
uP676dsarr
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response [1 / 2]

comment: We thank Reviewer HdAm for the valuable feedback and constructive comments. We have carefully answered all your questions and added extra experimental results in the following.

**Q1: The proposed method may be highly dependent on the performance of both diffusion models and visual-captioning models. An ablation study of varying the foundation models is needed.**

**A1**: Thanks for this insightful point. To investigate the impact of various generative models on the results, we followed the setting in [1]. Our experiments primarily focus on three prevalent conditional diffusion models: DiT [3], GLIDE [2], and Stable Diffusion. We use these off-the-shelf models to generate synthetic images. Specifically, for GLIDE and Stable Diffusion, the prompt was configured as "a photo of {label name}, real-world images, high resolution." For DiT, the input comprised the label ID corresponding to the ImageNet1k dataset. The images synthesized by DiT and GLIDE are of dimensions 256x256, whereas those produced by Stable Diffusion are of dimensions 512x512. As shown in the following table, even when we vary the foundation models used in our method, FGL consistently outperforms FedAvg by a significant margin. This observation serves as evidence for the generality of our approach. We have added these results in Appendix A.4.3.

| Method | one-shot | 5-round, $\beta$=0.01 | 5-round, $\beta$=0.5 | IID |
|:-------------------------:|:--------:|:------------------:|:-----------------:|:--------:|
| Ours w/ Stable Diffusion | **85.2** | **82.8** | **94.1** | **95.6** |
| Ours w/ GLIDE | 79.0 | 76.2 | 89.4 | 89.4 |
| Ours w/ DiT | 76.2 | 74.6 | 90.2 | 92.8 |
| FedAvg (120-round) | - | 51.6 | 75.1 | 79.2 |

[1] Li, Zheng, et al. "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" arXiv preprint arXiv:2305.12954 (2023).
[2] Nichol, Alex, et al. "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).
[3] Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.

**Q2: Clients in federated learning are often assumed to have limited capacity in memory or computation.**

**A2**:
- First, compared to FedAvg, our method introduces only one additional operation on the client side, i.e., prompt generation, which involves only forward propagation and does not impose significant computational costs. All heavy computational operations are executed on the server side during the initial communication. The server trains a model with an excellent initial state, and subsequently the clients perform regular model updates, which means no additional cost compared with FedAvg.
- Second, our method is particularly well suited for cross-silo FL, where the clients represent organizations or companies. In this context, the number of clients is typically small, but they possess substantial computational resources. Furthermore, this scenario emphasizes the importance of protecting clients' local data from potential leaks, which constitutes a significant contribution of our approach towards preserving privacy.
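For concreteness, the server-side synthesis step described in A1 can be approximated with off-the-shelf tooling; the sketch below pairs the quoted prompt template with the `diffusers` StableDiffusionPipeline. The checkpoint id, label names, and images-per-class count are illustrative assumptions, not the authors' exact configuration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical label names gathered from client prompts.
label_names = ["tench", "English springer", "golf ball"]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

synthetic = []
for name in label_names:
    # Prompt template quoted in the response above.
    prompt = f"a photo of {name}, real-world images, high resolution"
    for _ in range(4):  # images per class; the paper likely uses far more
        image = pipe(prompt).images[0]  # 512x512 PIL image
        synthetic.append((image, name))
```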
5pE0TtLbRY
official_comment
1,700,233,964,709
uP676dsarr
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response [2 / 2]

comment:

**Q3: Evaluation on ImageNet or DomainNet implicitly uses the assumption that local data have a similar or subset domain to the pretraining dataset of foundation models, which are publicly accessible or have no privacy issue.**

**A3**: Thanks for pointing this out. Here, we would like to address this from three aspects:
- Although the pretraining dataset and private dataset may have some domain similarities (e.g., both may contain common real-world scenes), the tasks of ImageSquawk (fine-grained bird classification) and QuickDraw (non-realistic domain) in DomainNet that we show in our experiments are challenging. It is a non-trivial task to train a model only on synthetic data generated by foundation models and achieve good accuracy on the ImageNet or DomainNet test sets.
- Furthermore, even when there are some domain similarities between the pretraining dataset and the private dataset, does that mean there is no need to discuss the privacy risk? Definitely not! Consider a scenario where a public dataset contains various images of cats, while a private dataset contains personal images of cats belonging to individual users. Although both datasets involve images of cats, the private dataset may contain users' personal information, such as their family photos or addresses. Therefore, even if the two datasets are similar in some aspects, the private data still carries privacy risks and needs to be properly protected. Taking the Membership Inference Attack (MIA) as an example, consider an adversary that wants to probe an ML model to test the membership of an individual's data in the model's training data. In this scenario, the adversary is more likely to have access to some representative images of the target individual, but not necessarily the ones used for training the model. As shown in Figure 8, we implemented the state-of-the-art LiRA algorithm for MIA. The experimental results demonstrate that our approach protects the sensitive information of the members in the clients' data (since the model training process has never been exposed to any private data). In contrast, traditional federated learning methods train directly on private data, posing a high risk of exposing the sensitive information of the members in the clients' data (i.e., for certain private data samples, attackers have high confidence in identifying the client from which the sample originates). To the best of our knowledge, prior to our proposed approach, no one has put forth a training paradigm that effectively defends against LiRA while concurrently maintaining utility (i.e., achieving high test accuracy).
- Finally, even for particularly challenging domains such as remote sensing images or fine-grained classification datasets, our method can easily adapt to these scenarios. We conducted experiments on several fine-grained image classification datasets, namely CUB-200 and Stanford Cars, and also the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. See more details in `Appendix A.4.2`.

| Prompt type | Training type | Dataset | FedAvg, $\beta$=0.01 | FedAvg, $\beta$=0.5 | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta$=0.01 | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |

**Q4: Does the proposed method work for federated learning on fine-grained classification such as CUB-200, Cars, and medical image datasets?**

**A4**: Please refer to the table in A3.
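As context for the LiRA results discussed in A3, here is a minimal sketch of the LiRA test statistic (Carlini et al., 2022): shadow-model confidences are logit-scaled, the in/out distributions are fit as Gaussians, and membership is scored by a log-likelihood ratio. Shadow-model training is omitted and the toy numbers are made up; this is an illustration of the attack being defended against, not the paper's evaluation code.

```python
import numpy as np
from scipy.stats import norm

def lira_score(conf_target, conf_in, conf_out, eps=1e-6):
    """Likelihood-ratio membership score for one example.

    conf_target: target model's softmax confidence on the true label.
    conf_in/out: shadow-model confidences trained with/without the example.
    """
    # Logit-scale the confidences so they are approximately Gaussian.
    phi = lambda p: np.log(np.clip(p, eps, 1 - eps) / np.clip(1 - p, eps, 1 - eps))
    z = phi(conf_target)
    z_in, z_out = phi(np.asarray(conf_in)), phi(np.asarray(conf_out))
    # Log-likelihood ratio: member hypothesis vs. non-member hypothesis.
    return (norm.logpdf(z, z_in.mean(), z_in.std() + eps)
            - norm.logpdf(z, z_out.mean(), z_out.std() + eps))

# Toy usage: a higher score means the attacker is more confident the
# example was in the training set.
score = lira_score(0.97, conf_in=[0.95, 0.98, 0.96], conf_out=[0.60, 0.72, 0.55])
print(score)
```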
fAydmWtaGN
official_comment
1,700,234,743,826
rBIpJVtNpZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to wnsW [1/2]

comment: We thank Reviewer wnsW for the valuable feedback and insightful comments. Here, we answer your questions and provide more experimental evidence.

**Q1: The datasets used by the experiments are all natural image datasets (ImageNette, ImageFruit, etc.), which can be well-represented in the pre-training dataset of foundation generative models. I would appreciate results on non-natural image datasets.**

**A1**: Although the pretraining dataset and the private dataset may exhibit some domain similarities (e.g., both may contain common real-world scenes), the tasks of ImageSquawk (fine-grained bird classification) and QuickDraw (non-realistic domain) in DomainNet that we demonstrate in our experiments are inherently challenging. Training a model solely on synthetic data generated by foundation models to achieve high accuracy on the ImageNet or DomainNet test sets is a non-trivial task. To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, including CUB-200 and Stanford Cars, as well as the satellite image dataset EuroSAT. As the official EuroSAT dataset did not provide predefined training and testing splits, we performed a split in an 8:2 ratio. The size of fine-grained recognition datasets is typically smaller compared to general image classification datasets, so a common practice in previous work is to utilize a model pretrained on the ImageNet dataset. In this study, we present two approaches: training the model from scratch and loading a pretrained ResNet34 model. As shown in the table, our method achieves excellent performance even in these challenging domains. Additionally, in the cross-silo federated learning scenario, when clients have strong computational capabilities, one can simply finetune the foundation models on these domains, achieving better performance than standard federated learning methods. We have added these results in the appendix.

| Prompt type | Training type | Dataset | FedAvg, $\beta$=0.01 | FedAvg, $\beta$=0.5 | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta$=0.01 | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |

**Q2: The “class-level prompts” may not be needed at all since the server can just generate images by itself.**

**A2**: Yes, if the server has knowledge of the specific labels of the classification task, it can generate the images directly. However, class-level prompts are just the simple case. We propose the instance-level approach to address more complex domains, where client-side customized prompt generation is more advantageous in improving the performance of the overall model.

**Q3: More broadly, the threat model of the paper may need to be defined more clearly.**

**A3**: Threat model: In traditional federated learning schemes that transmit model parameters/gradients, attackers can launch various attacks once they obtain the parameters/gradients, such as membership inference attacks, adversarial example attacks, and model inversion. In contrast, our approach significantly reduces potential security and privacy risks because users only transmit prompts in the first round of communication. To the best of our knowledge, there is no research indicating that prompts alone suffice to perfectly reconstruct private data. Therefore, our approach is more secure and privacy-preserving compared to FedAvg.
ZKUIxB4P6Z
official_comment
1,700,234,788,720
rBIpJVtNpZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to wnsW [2/2]

comment:

**Q4: What exactly is client privacy in this case? Can the client data still be considered “private” if you could already generate them with public foundation models?**

**A4**: Client privacy: In this paper, similar to differential privacy, we primarily focus on individual privacy, as it is more challenging for attackers. For instance, suppose Attack A perfectly targets a known subset of 0.1% of the users in a client but succeeds with a random 50% chance on the rest, while Attack B succeeds with a 50.05% probability on any given user in a client. On average, these two attacks have the same success rate. However, the second attack is practically useless, while the first attack is much more powerful in the real world. This is precisely what LiRA [1] emphasizes: it evaluates a privacy attack by computing its true-positive rate at very low (e.g., ≤ 0.1%) false-positive rates (as illustrated in our experimental results in Figure 8), demonstrating that our method can better defend against privacy attacks.

[1] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.

**Q5: In many cases, the descriptions of the images can already be leaking privacy.**

**A5**: This touches on the difference between individual privacy and group privacy. The majority of current papers on data protection focus on the individual 'user' or 'data subject', whose right to privacy will grow exponentially with the enforcement of the GDPR (General Data Protection Regulation). Group privacy, however, is not mentioned in the GDPR and remains ill-defined. Also, even if the server's knowledge of part of the user data distribution poses potential privacy risks, our proposed method does not introduce additional risk in this regard: in traditional gradient/parameter-based methods, the server can still infer this information using model inversion [1, 2]. And for individual privacy, this information does not increase the leakage of membership in private data.

[1] Geiping, Jonas, et al. "Inverting gradients - how easy is it to break privacy in federated learning?" Advances in Neural Information Processing Systems 33 (2020): 16937-16947.
[2] Hatamizadeh, Ali, et al. "Do gradient inversion attacks make federated learning unsafe?" IEEE Transactions on Medical Imaging (2023).

**Q6: Why exactly does the proposed method provide robustness to data heterogeneity? Heterogeneity can still surface in the (instance-level) client prompts and subsequently the generated images.**

**A6**: For one-shot Federated Learning (FL), regardless of how extreme the data distributions among clients are, the server can always collect prompts corresponding to all the data, thus obtaining a balanced synthetic dataset on the server. Therefore, compared to FedAvg, our method is not sensitive to data heterogeneity in the first round of communication. In the subsequent model updates, the clients are still affected by non-IID data. However, thanks to the well-trained initial model obtained in the first round of communication, only a few rounds of local updates are needed, making the method more robust to data heterogeneity. As shown in Table 1 in the main text, our method exhibits significantly smaller gaps than FedAvg under different non-IID scenarios.

**Q7: Minor comment: consider using different citation commands `\citet`, `\cite`.**

**A7**: Thanks for pointing this out. We will fix it in the updated version.
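The non-IID settings referenced throughout these responses (e.g., $\beta$=0.01 vs. $\beta$=0.5) are controlled by a Dirichlet concentration parameter. The sketch below shows the common label-based Dirichlet partitioner; this is a standard construction and an assumption about the paper's exact splitting code.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta, seed=0):
    """Split sample indices across clients with label-skewed Dirichlet sampling.

    Smaller beta (e.g., 0.01) -> more skewed per-client class distributions;
    larger beta (e.g., 0.5) -> closer to IID.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Fraction of class-c samples assigned to each client.
        proportions = rng.dirichlet(np.full(n_clients, beta))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Toy usage: 1000 samples, 10 classes, 5 clients, highly skewed split.
parts = dirichlet_partition(np.random.randint(0, 10, 1000), n_clients=5, beta=0.01)
```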
9l46wbVYfG
official_comment
1,700,235,412,781
VulmbS3YYc
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to Reviewer ZNb2 [1/2]

comment: Thank you for your valuable time in reviewing our paper. Below are responses to your concerns. Please let us know if you require any further information, or if anything is unclear.

**Q1: The training framework is predominantly tailored for image datasets, limiting its applicability.**

**A1**: Our approach is based on existing generative models that are widely used in various domains, such as Computer Vision with Stable Diffusion and Natural Language Processing with GPTs. This means that our framework can easily be applied to other domains, including NLP. However, due to time constraints, we were unable to conduct additional experiments on NLP tasks. We believe that further research in this area would be valuable and should be pursued in the future.

**Q2: The method heavily depends on the congruence between the captioning and generative models, making it challenging to ensure the proxy dataset's distribution aligns with the private data.**

**A2**: To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, namely CUB-200 and Stanford Cars, and also the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. As for EuroSAT, the official dataset did not provide predefined training and testing splits, so we performed a split in an 8:2 ratio. The size of fine-grained recognition datasets is typically smaller compared to general image classification datasets, so a common practice in previous work is to utilize a model pretrained on the ImageNet dataset. In this study, we present two approaches: training the model from scratch and loading a pretrained ResNet34 model. As shown in the table, our method achieves excellent performance even in these challenging domains. This can be attributed to the fact that regardless of the magnitude of domain differences, pretraining a well-performing model on our synthetic data is beneficial for the downstream federated tasks.

| Prompt type | Training type | Dataset | FedAvg, $\beta$=0.01 | FedAvg, $\beta$=0.5 | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta$=0.01 | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |

**Q3: The experimental setup, with only five clients, may not adequately represent real-world scenarios; expanding the evaluation to include 50 or 100 clients could provide more insightful results.**

**A3**: Thanks for your suggestion. We extended our analysis to include results on the ImageNette dataset with 50 and 100 clients. As depicted in the table, our method continues to exhibit superior performance compared to FedAvg across all scenarios, and the improvements achieved by our method remain significant. See more details in the Appendix.

| # Clients | FedAvg, $\beta$=0.5 | FedAvg, IID | Ours (one-shot) | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:--------:|:----------------:|:-----------:|:---------------:|:------------------------:|:-------------------:|:-----------:|
| 5 | 75.0 | 79.2 | 85.2 | 94.0 | 95.6 | 92.2 |
| 50 | 72.1 | 77.0 | 85.2 | 93.8 | 91.2 | 92.2 |
| 100 | 70.1 | 67.2 | 85.2 | 92.8 | 93.2 | 92.2 |
kIQR51kLqb
official_comment
1,700,235,455,263
VulmbS3YYc
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to Reviewer ZNb2 [2/2]

comment:

**Q4: The comparison to a single baseline, FedAvg, falls short; including comparisons to advanced Federated Learning frameworks could better highlight the proposed method's effectiveness.**

**A4**: We have compared against two popular FL methods, MOON [1] and FedOpt [2]. We conducted experiments on the ImageNette and ImageNet100 datasets, considering a scenario with 50 clients under non-IID settings ($\beta$=0.5). To the best of our knowledge, there is currently no federated learning method that surpasses centralized training, whereas our proposed method even outperforms centrally trained models in many scenarios (see Table 1 in the main text). As shown in the table below, our method still outperforms the other federated learning approaches.

| Method | FedAvg | FedOpt | MOON | Ours (one-shot) | Ours (5-round) |
|:----------------------:|:------:|:------:|:-----:|:---------------:|:--------------:|
| ImageNette ($\beta$=0.5) | 72.01 | 73.21 | 74.27 | 85.21 | 93.80 |
| ImageNet100 ($\beta$=0.5) | 40.13 | 41.25 | 41.43 | 48.31 | 72.67 |

[1] Li, Qinbin, Bingsheng He, and Dawn Song. "Model-contrastive federated learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2] Reddi, Sashank J., et al. "Adaptive Federated Optimization." International Conference on Learning Representations. 2020.

**Q5: Table 2 shows the proposed method outperforming centralized learning significantly; a thorough explanation of this phenomenon is warranted.**

**A5**: This is because our method synthesizes a balanced dataset using all collected prompts during the first round of communication and pretrains a well-initialized model on this dataset. Once we have this well-initialized model, a few rounds of communication quickly bring the model to good performance. In the first table, we present the results of directly loading a model pretrained on ImageNet. Doing so reduces the gap between our method and FedAvg, because the pretrained model provides a good starting point. However, pretraining on ImageNet requires a significant computational cost on a dataset of 1.3M samples. In contrast, our method only requires training on a small amount of synthesized data to provide a well-initialized model, hence achieving better performance than models trained in a centralized manner.
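A minimal sketch of the warm-start scheme described in A5: the server first pretrains the global model on the synthetic dataset, then runs only a few standard FedAvg rounds. The equal-weight averaging and the hypothetical `client_loaders` are simplifying assumptions, not the paper's exact training loop.

```python
import copy
import torch

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    """One FedAvg round: local SGD on each client, then weight averaging."""
    states = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                torch.nn.functional.cross_entropy(model(x), y).backward()
                opt.step()
        states.append(model.state_dict())
    # Equal-weight average (assumes equally sized client datasets).
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Warm start: pretrain `model` on the server-side synthetic dataset first,
# then run only a few rounds, e.g.:
#   for _ in range(5):
#       model = fedavg_round(model, client_loaders)
```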
mofNXJXZ39
official_comment
1,700,236,063,921
2ZurMVHvCB
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to Reviewer zu8q [1/2]

comment: Thank you for your constructive comments. We hope the following clarifications can address your concerns.

**Q1: The evaluation of the Federated Generative Learning (FGL) framework is limited to simpler domains like ImageNet and doesn't extend to other areas, casting doubt on whether prompts can encapsulate complexity.**

**A1**: To further validate the effectiveness of our method, we conducted experiments on several fine-grained image classification datasets, namely CUB-200 and Stanford Cars, and also the satellite image dataset EuroSAT. CUB-200 is a challenging dataset consisting of 200 bird species, while Stanford Cars contains 16,185 images belonging to 196 classes of cars. As for EuroSAT, the official dataset did not provide predefined training and testing splits, so we performed a split in an 8:2 ratio. The size of fine-grained recognition datasets is typically smaller compared to general image classification datasets, so a common practice in previous work is to utilize a model pretrained on the ImageNet dataset. In this study, we present two approaches: training the model from scratch and loading a pretrained ResNet34 model. As shown in the table, our method achieves excellent performance even in these challenging domains. This can be attributed to the fact that regardless of the magnitude of domain differences, pretraining a well-performing model on our synthetic data is beneficial for the downstream federated tasks. We have added these results in the appendix.

| Prompt type | Training type | Dataset | FedAvg, $\beta$=0.01 | FedAvg, $\beta$=0.5 | FedAvg (IID) | Ours (one-shot) | Ours (5-round), $\beta$=0.01 | Ours (5-round), $\beta$=0.5 | Ours (5-round), IID | Centralized |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| instance | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 44.17 | 64.53 | 69.19 | 71.01 | 48.31 |
| instance | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 54.02 | 75.13 | 78.96 | 80.72 | 81.77 |
| class | scratch | CUB-200 | 35.04 | 36.61 | 36.62 | 45.34 | 67.66 | 71.9 | 73.33 | 48.32 |
| class | pretrain | CUB-200 | 78.98 | 79.08 | 78.48 | 52.73 | 74.68 | 78.7 | 80.32 | 81.77 |
| class | scratch | Cars | 55.18 | 42.43 | 44.48 | 54.48 | 83.31 | 87.22 | 88.07 | 64.72 |
| class | pretrain | Cars | 87.71 | 88.91 | 88.96 | 60.55 | 87.31 | 90.05 | 90.73 | 91.21 |
| class | scratch | EuroSAT | 43.94 | 74.48 | 84.87 | 38.37 | 37.59 | 82.94 | 91.01 | 94.31 |

**Q2: While FGL aids in data generation for non-IID data, achieving congruence with a global distribution is yet to be addressed.**

**A2**: Thank you for your valuable feedback. FGL is effective at generating data in non-IID scenarios, and aligning with a global distribution (i.e., IID settings) also works in our experiments (see Table 1 for IID results).

**Q3: Security risks of prompts require more analysis. Could prompts be reverse-engineered to obtain private data?**

**A3**: During the communication phase, traditional Federated Learning (FL) methods typically transmit model parameters or gradients. However, these parameters can be vulnerable to adversarial attacks and model inversion attacks if intercepted by an adversary. To enhance security, prompts can be used for communication instead. The potential for attackers to reconstruct private data from prompts has received limited research attention, in both black-box and white-box scenarios. Recent work [1, 2] has identified risks associated with the reconstruction of pretraining data in diffusion models. Nevertheless, there is currently no available method that can reconstruct previously unseen private data from a diffusion model based solely on prompts. This presents an interesting and promising research direction for future investigations. Consequently, considering the lack of research in this area, our method can be regarded as relatively safe and privacy-preserving.

[1] Shen, Xinyue, et al. "Prompt Stealing Attacks Against Text-to-Image Generation Models." arXiv preprint arXiv:2302.09923 (2023).
[2] Carlini, Nicolas, et al. "Extracting training data from diffusion models." 32nd USENIX Security Symposium (USENIX Security 23). 2023.
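The "scratch" vs. "pretrain" regimes in the table above correspond to initializing ResNet34 randomly or from ImageNet weights. Below is a minimal torchvision sketch; the weight enum and head replacement are standard practice, assumed rather than taken from the paper's code.

```python
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights

def build_model(num_classes, pretrained=True):
    """ResNet34 either from scratch or ImageNet-pretrained, mirroring the two regimes above."""
    weights = ResNet34_Weights.IMAGENET1K_V1 if pretrained else None
    model = resnet34(weights=weights)
    # Replace the classification head for the fine-grained task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

scratch_model = build_model(num_classes=200, pretrained=False)  # CUB-200 from scratch
pretrain_model = build_model(num_classes=200, pretrained=True)  # CUB-200, pretrained backbone
```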
DMdGpc6aeM
official_comment
1,700,236,100,046
2ZurMVHvCB
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response to Reviewer zu8q [2/2] comment: **Q4: The framework hasn't been benchmarked against other federated learning methods that employ generative models.** **A4**: Unfortunately, we were unable to find any existing methods in the literature that directly address our specific setting, which makes a fair comparison difficult. In light of this, and following the suggestions of other reviewers, we conducted experiments using various types of generative models to demonstrate the applicability of our proposed method. To investigate the impact of different generative models on the results, we followed the setting in [1]. Our experiments primarily focus on three prevalent conditional diffusion models: DiT [3], GLIDE [2], and Stable Diffusion. We use these off-the-shelf models to generate synthetic images. Specifically, for GLIDE and Stable Diffusion, the prompt was configured as "a photo of {label name}, real-world images, high resolution." For DiT, the input comprised the label ID corresponding to the ImageNet1k dataset. The images synthesized by DiT and GLIDE are of dimensions 256x256, whereas those produced by Stable Diffusion are of dimensions 512x512. As shown in the following table, even when we vary the foundation models used in our method, FGL consistently outperforms FedAvg by a significant margin. This observation serves as evidence for the generality of our approach. We have added these results in the appendix.

| Method | one-shot | 5-round, $\beta=0.01$ | 5-round, $\beta=0.5$ | IID |
|:-------------------------:|:--------:|:------------------:|:-----------------:|:--------:|
| Ours w/ Stable Diffusion | **85.2** | **82.8** | **94.1** | **95.6** |
| Ours w/ GLIDE | 79.0 | 76.2 | 89.4 | 89.4 |
| Ours w/ DiT | 76.2 | 74.6 | 90.2 | 92.8 |
| FedAvg (120-round) | - | 51.6 | 75.1 | 79.2 |

[1] Li, Zheng, et al. "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" arXiv preprint arXiv:2305.12954 (2023).

[2] Nichol, Alex, et al. "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models." arXiv preprint arXiv:2112.10741 (2021).

[3] Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
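For concreteness, a minimal sketch of the server-side synthesis step with Stable Diffusion, using the prompt template quoted above; the checkpoint ID, device, and class name are illustrative assumptions, not the paper's exact setup:

```python
import torch
from diffusers import StableDiffusionPipeline

# checkpoint ID is an assumption; any Stable Diffusion checkpoint would do here
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def synthesize(label_name: str, n: int):
    # the prompt template quoted above, filled with a class label
    prompt = f"a photo of {label_name}, real-world images, high resolution"
    return [pipe(prompt).images[0] for _ in range(n)]

images = synthesize("sports car", n=4)  # hypothetical class name
```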
dFtawOvszo
official_comment
1,700,527,427,254
rBIpJVtNpZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_wnsW" ]
title: Response to author rebuttal comment: ### I appreciate the authors for providing a rebuttal. - A1: I appreciate the authors for putting effort into the new results. I also appreciate pointing to the results on QuickDraw. However, my concern is not fully addressed, since the datasets “CUB-200” (natural images of birds) and “Cars” (natural images of cars) are very much still in-distribution for the pre-trained generative vision models. - A2: The authors responded to my question by pointing to the use of instance-level prompts, but this didn’t quite address my concern that the significance of the class-level prompts is a bit overclaimed. Considering that the default implementation of your experiments uses class-level prompts (page 6), I would suggest clearly spelling out the assumptions and weaknesses of class-level prompts in the updated paper. - A4/A5: - (For clarity, the following discussion applies to “instance-level” prompts.) - By explaining the LiRA paper on membership inference in A4, the authors imply that the paper cares about instance-level privacy — i.e. image-level privacy, where an attacker cannot confidently tell whether one image is or isn’t used for training. - I’m definitely okay with the **privacy granularity** in this case; what I’m uncertain about (with Q5) is whether **all the information contained within a single example (i.e. image-label pair)** is protected. - A5 does not quite address my question. I do not agree that this is the difference between “group privacy” vs “individual privacy”; rather, it is that the instance-level prompts provide **side channels into learning about the information of a single image.** - Consider running local, image-level DP-SGD on a client when participating in a vanilla FedAvg task. All the information corresponding to a single example (pixel values and labels) is protected behind the “privacy barrier”, since privatized gradients are applied to the model. In contrast, instance-level prompts would leak information about the pixel values, and thus do not really satisfy instance-level privacy in the sense of the “attacker not being able to tell whether an image is used for training”. I do acknowledge, however, that there is value in providing empirical privacy for the pixel values. - A6: Thanks for the clarification that the server can select/curate prompts to essentially manually mitigate the data heterogeneity. I would suggest highlighting this in the updated version. Overall, the technique proposed in the paper is interesting, though I feel the assumptions on client data distributions and the privacy claims are too strong. Having read through other reviewers’ comments, I’m keeping my rating at 5.
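A minimal sketch of the local, image-level DP-SGD setup described in the last bullet, using Opacus; the toy model, data, and noise/clipping values are placeholders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# toy client data: 64 images, 10 classes (stand-ins for real local data)
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# per-example gradients are clipped and noised, so everything about a single
# image (pixels and label) sits behind the privacy barrier before FedAvg sees it
engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.1, max_grad_norm=1.0,  # placeholder values
)

criterion = torch.nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
```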
A2zDYe7XQ6
official_comment
1,700,539,395,845
dFtawOvszo
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Response comment: Thank you for your valuable time. A1: This overlooks `our results on QuickDraw and EuroSAT`, which also perform much better than traditional FL methods and refute this claim. A2: Thanks for your suggestion. The default implementation of our experiments utilizes class-level prompts, as they have been shown to provide sufficiently good performance while effectively protecting privacy. The choice of training method depends on the specific use case. It is important to note that there is no perfect approach to data privacy, as no method can guarantee zero information leakage while achieving a perfect model (`no free lunch in privacy`). A4/A5: - `Can you find any method in FL that outperforms our method and efficiently defends against the LiRA attack` (the most powerful membership inference attack)? To the best of our knowledge, no other method has been shown to achieve such performance. - We understand your concern about the potential risks associated with prompts. Consider the following: `Can you reconstruct any private data using only these prompts`? This task is extraordinarily challenging; even in the complete white-box setting, the prompt cannot perfectly reconstruct the private data. Moreover, in our scenario the generative model has never been trained on the private data. A6: Thank you for acknowledging the robustness of our method in handling non-IID data. We aim to motivate researchers to consider the effective integration of foundation models for downstream tasks in federated learning through our approach. Additionally, we encourage researchers to explore and identify viable attack strategies to demonstrate the method's potential lack of privacy preservation, either **theoretically or experimentally**.
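For reference, a minimal sketch of LiRA's per-example membership score (Carlini et al.), assuming summary statistics collected from shadow models trained with and without the target example; all values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_stat: float, in_stats: np.ndarray, out_stats: np.ndarray) -> float:
    # log-likelihood ratio between "member" and "non-member" Gaussians
    # fit to the shadow-model statistics of the target example
    mu_in, s_in = in_stats.mean(), in_stats.std() + 1e-8
    mu_out, s_out = out_stats.mean(), out_stats.std() + 1e-8
    return norm.logpdf(target_stat, mu_in, s_in) - norm.logpdf(target_stat, mu_out, s_out)

# toy shadow statistics (e.g., rescaled logits); a positive score suggests "member"
score = lira_score(2.0, np.array([2.1, 1.8, 2.4]), np.array([0.2, -0.1, 0.5]))
```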
6JeFYeADhe
official_comment
1,700,670,736,076
2ZurMVHvCB
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: With the hope that our response addresses your concerns comment: Dear Reviewer zu8q, As the discussion period is closing, we sincerely look forward to your feedback. The authors deeply appreciate your valuable time and efforts spent reviewing this paper and helping us improve it. Please also let us know if there are further questions or comments about this paper. We strive to improve the paper consistently, and it is our pleasure to have your feedback! Best regards, Authors
KddjiVwGeM
meta_review
1,701,786,518,795
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Area_Chair_8sJH" ]
metareview: This paper presented an interesting approach to federated learning that doesn't require each client to send parameters or gradients to the server but only a text prompt describing the client data. The server can then synthesize training data using the received prompts and train. While the reviewers appreciated the idea, a number of concerns were raised. Some of these were related to the fact that the datasets used in the experiments could have been present in the pre-training data of the foundation model, and to the applicability of the method to datasets more sophisticated than ImageNet. The authors provided a detailed rebuttal and the paper was discussed. However, apart from one reviewer who marginally leaned towards acceptance (though still had some concerns), the other three reviewers maintained their original assessment and their concerns remained. In the end, after taking into account the reviews, the discussion, and my own reading of the paper, the paper falls short of the acceptance threshold. Although the authors did respond to the reviewers' concerns with additional experimental results, the paper in its current form still does not seem ready for publication. It is advised that the authors properly address the concerns raised in the reviews and submit the work to another venue. justification_for_why_not_higher_score: The concerns of the reviewers still persisted after the author response and discussion justification_for_why_not_lower_score: N/A
GgDNjyva5t
decision
1,705,406,011,852
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Reject
eRhNDltiUV
official_comment
1,700,670,517,901
GpT9FR36vo
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Thanks for Response comment: Thank you for your response! 1. Since our method focuses on using foundation models in FL, it should be relatively easy to adapt it to NLP tasks as well, where foundation models are also used for data synthesis [1, 2]. [1] Yue, Xiang, et al. "Synthetic text generation with differential privacy: A simple and practical recipe." arXiv preprint arXiv:2210.14348 (2022). [2] Veselovsky, Veniamin, et al. "Generating Faithful Synthetic Data with Large Language Models: A Case Study in Computational Social Science." arXiv preprint arXiv:2305.15041 (2023). 2. Why should it not be possible for our method to perform better than centralized training on certain datasets? Consider this: when a model is pretrained on ImageNet and then used for certain downstream tasks, it often performs better than training from scratch, for example on the Cars and CUB-200 datasets. Our method provides a well-initialized model in the first round, hence achieving better performance than models trained in a centralized manner. Please feel free to let us know if there are any further questions.
GpT9FR36vo
official_comment
1,700,668,036,879
2VGRuF5mKB
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Reviewer_ZNb2" ]
title: Experimentation with other datasets and the experimental results comment: I appreciate the authors' rebuttal. However, I have two major concerns. 1. The authors mentioned that they had limited time and could not conduct NLP tasks. However, the authors argue that it is easy to apply the approach to NLP tasks. 2. It is hard to understand how the FL results could be better than centralized approaches. I wonder if the authors could explain the in-depth reason.
YlkSlYahCA
official_comment
1,700,472,328,177
2ZurMVHvCB
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: A friendly reminder that the discussion stage will be closed in 2 days comment: Dear Reviewer, Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have. Thank you in advance! Kind regards, Authors
2VGRuF5mKB
official_comment
1,700,472,292,614
VulmbS3YYc
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: A friendly reminder that the discussion stage will be closed in 2 days comment: Dear Reviewer, Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have. Thank you in advance! Kind regards, Authors
RiZgxxrdO0
official_comment
1,700,472,247,801
rBIpJVtNpZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: A friendly reminder that the discussion stage will be closed in 2 days comment: Dear Reviewer, Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have. Thank you in advance! Kind regards, Authors
ca7XdIlXCg
official_comment
1,700,472,178,098
uP676dsarr
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: A friendly reminder that the discussion stage will be closed in 2 days comment: Dear Reviewer, Thank you once again for your valuable comments. As the discussion stage is coming to a close in 2 days, we kindly request your feedback on whether our response adequately addresses your concerns. We would greatly appreciate any additional feedback you may have. Thank you in advance! Kind regards, Authors
ticwvWFRNm
official_comment
1,700,236,375,549
U0P622bfUN
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission1/Authors" ]
title: Summary comment: Dear Reviewers and ACs, Thank you all for your insightful reviews and constructive comments on our manuscript. We greatly appreciate your feedback, which has helped us improve our work. We have carefully considered all the suggestions and made the following changes: 1. We have included three additional challenging datasets from different domains: CUB-200 and Stanford Cars, which are fine-grained image classification datasets, and the EuroSAT satellite image dataset, whose images are known to be more difficult to generate. We have conducted experiments on these datasets to demonstrate the effectiveness of our method in various scenarios. 2. To showcase the versatility of our proposed approach, we have employed diverse generative models. By doing so, we aim to demonstrate that our method is not limited to a specific model but can be applied to different models with similar success. 3. In order to provide more robust evidence of the effectiveness and scalability of our proposed method, we have conducted experiments with an increased number of clients. Specifically, we have included experiments with 50 and 100 clients, which further support our findings and demonstrate the scalability of our approach. 4. We have included extensive discussions on the security and privacy aspects of our proposed method. We believe that addressing these concerns is crucial, and we have provided thorough explanations and considerations to ensure the privacy and security of the data used in our experiments. Thank you once again for your valuable feedback. Best Regards, All authors.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
LMXXBnRdjZ
official_review
1,698,413,312,588
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_GFiJ" ]
summary: This paper proposes a construction approach for positive and negative samples based on the quality of a MILP problem's feasible solutions. With the constructed samples, one can train a GNN model to predict good assignments for integer variables using a contrastive learning mechanism, which helps search for optimal solutions more quickly. Superior experimental results demonstrate the effectiveness and generalizability of the proposed approach. soundness: 2 fair presentation: 3 good contribution: 3 good strengths: The research topic is valuable and the paper is well written. Moreover, the designed method and its motivation are presented succinctly and clearly. The performance of the trained GNN is also impressive, which indicates the superiority of the proposed method. weaknesses: There are still some issues that need to be addressed for this paper to meet the requirements of ICLR: 1. The contribution and novelty are not summarized clearly and are relatively weak. The main contribution of this paper is applying contrastive learning to predict and search for optimal solutions. 2. The results of the empirical evaluation could be more solid and convincing. The experiments are conducted on only two generated datasets and one competition dataset, without the widely recognized MIPLIB2017 benchmark. Furthermore, only an open-source MILP solver, which is not well configured, is included in the baselines. Considering that different configurations can significantly affect a solver's performance, I would expect some further comparative experiments conducted on SCIP configured with tuned parameters or on more powerful commercial solvers (such as GUROBI and CPLEX). questions: I noticed that the effect of the hyperparameters k0 and k1 is evaluated. Of course, these hyperparameters are important, because they control the trade-off between the feasibility and quality of predicted solutions. However, considering that MILP instances generally have different scales of integer variables, a specific number of integer variables may not be a good choice. I was wondering whether it would be better to use the coverage rate (i.e., the ratio of fixed variables to the entire set of integer variables when using prediction methods like Neural Diving) to control the number of fixed integer variables. In addition, some studies indicate that each instance has a unique optimal coverage rate (https://arxiv.org/abs/2308.00327), so I think that evaluating the effect of k0 by just computing an average number on one dataset (CA) may not help readers configure their own prediction model properly. flag_for_ethics_review: ['No ethics review needed.'] rating: 5: marginally below the acceptance threshold confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
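A minimal sketch of the coverage-rate idea raised in the question above, assuming per-variable probabilities from the prediction model; the function and variable names are illustrative:

```python
import numpy as np

def counts_from_coverage(probs: np.ndarray, coverage: float) -> tuple[int, int]:
    # split a coverage budget into fix-to-0 / fix-to-1 counts by confidence
    k = int(coverage * len(probs))                 # total variables to fix
    order = np.argsort(np.abs(probs - 0.5))[::-1]  # most confident first
    chosen = order[:k]
    k1 = int((probs[chosen] >= 0.5).sum())         # those confidently 1
    return k - k1, k1                              # (k0, k1)

probs = np.random.rand(1000)  # stand-in for per-variable P(x_i = 1) from a GNN
k0, k1 = counts_from_coverage(probs, coverage=0.3)
```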
Uipxj4Qg21
official_review
1,697,774,654,807
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_KTmj" ]
summary: In this paper, the authors propose to integrate contrastive learning with the pipeline for solving mixed integer linear programming. They generate positive and negative samples during training: the positive samples are optimal or near-optimal solutions of the MILP, while the negative samples are infeasible or low-quality solutions. The model is then trained on these samples via supervised contrastive learning to predict better solutions. After this, the predicted solutions are improved by the PaS framework to turn them into valid high-quality solutions. Experiments on multiple datasets show the performance of the proposed ConPaS framework. soundness: 3 good presentation: 3 good contribution: 2 fair strengths: 1. The paper is well-written and easy to follow. 2. The idea of utilizing contrastive learning in MILP looks interesting to me. 3. The experiments contain various MILP datasets. weaknesses: 1. I find one work in the related work [1] very similar to this paper. Both papers propose to utilize contrastive learning for solving MILPs and share the core idea of generating positive and negative samples. The only difference is the operation after the contrastive learning part: the ICML paper [1] uses large neighborhood search (LNS) and this ICLR paper uses Predict and Search (PaS). Actually, I think this paper is covered by the ICML paper, as PaS could be regarded as a variant of LNS. Though the authors do mention this ICML paper in the related work, they do not discuss the difference between their work and the ICML paper, nor do they compare against it as a baseline. 2. Though the idea of utilizing contrastive learning in MILP looks interesting, I consider the current usage of contrastive learning to be more of an incremental part. In this work, solving the MILP basically relies on the performance of PaS. I am not sure if this contribution is good enough for ICLR. To me, this work is more like using contrastive learning to find a better initialization for PaS, whose application is limited. 3. The results of the experiments look good, but I think more datasets with hard cases are required. In my own experience of using SCIP, MVC and MIS are relatively easy for SCIP. In contrast, the datasets from NeurIPS 2021 ML4CO are difficult for SCIP, but it looks like the authors did not select the whole set of ML4CO datasets, as they said: "IP instances are taken from the NeurIPS 2021 ML4CO competition Gasse et al. (2022)." I wonder how the data was selected. In fact, there are 3 benchmarks in NeurIPS 2021 ML4CO [2]; I wonder why the authors neglected them. Besides, the common benchmark MIPLIB is also missing from the paper. [1] Huang, T., Ferber, A. M., Tian, Y., Dilkina, B. & Steiner, B. (2023). Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning. Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13869-13890. Available from https://proceedings.mlr.press/v202/huang23g.html. [2] https://www.ecole.ai/2021/ml4co-competition/ questions: 1. Please discuss the differences between your paper and the ICML paper I mentioned in the weaknesses. In my view, these two papers are very similar and the ICML paper seems to cover your work to some extent. A comparison in the experiments is also suggested if possible. 2. As I mentioned before, this work is more like using contrastive learning to find a better initialization for PaS. I wonder whether this work can be applied to methods other than PaS, e.g., the Neural Diving mentioned in the paper. 3. The datasets in the experiments require more improvement. flag_for_ethics_review: ['No ethics review needed.'] rating: 3: reject, not good enough confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
Ui2FqW5xoH
official_comment
1,700,367,461,894
YDnAiTaavU
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses in the novelties and the comparison to SCIP and Gurobi (weaknesses 1, 2 and 3), please kindly refer to the general responses to all reviewers. To address the other weaknesses: 2. ConPaS is a solution construction heuristic, and we acknowledge that our approach doesn't guarantee optimality or feasibility. This drawback also applies to Neural Diving (ND) [Nair et al., 2020] and PaS [Han et al., 2023]. However, in a distributional setting where one needs to solve similar MIP instances over and over again, approaches like ND and ConPaS can be particularly helpful if they are able to predict solutions that are empirically feasible and of high quality. This is indeed true according to our experiments: on the five MIP benchmarks (including one in the Appendix), we achieve a 100% feasibility rate using a consistent set of hyperparameters on each benchmark, confirming the applicability of these approaches. However, we also acknowledge that ConPaS (like ND and PaS) is not universally applicable to all MIP solving, especially on more constrained problems. For example, using MIP for scientific discovery where the solutions are sparse can be extra challenging [Deza et al., 2023], and often we need to design other approaches tailored to such problems. We have added this discussion in the conclusion section. 3. Thank you for this comment. We use Gurobi to collect data since Gurobi typically runs much faster than SCIP. For data collection, we set Gurobi's time limit to one hour. We could easily replace Gurobi with SCIP for data collection and obtain training data of the same quality, but this comes at the cost of a 4-8 times longer runtime (4-8 hours per instance) on average. Given our limited computational resources, using Gurobi for data collection is more practical. We have included results on Gurobi in Appendix Section D.2 of the updated draft. We show that ConPaS still outperforms Gurobi and PaS significantly in terms of both primal integral and primal gap. 4. The main motivation for designing negative samples this way is that we want them to be close to positive samples in the input space but of very different quality (i.e., near misses). From a theoretical point of view, the InfoNCE loss we use has the property that it automatically focuses on hard negative pairs (i.e., samples with similar representations but of very different quality) and learns representations that separate them (see, e.g., [Tian, 2022]). While our approach is built upon a theoretical understanding of contrastive learning, we acknowledge that our work designs the negative samples heuristically and does not aim for theoretical impact. On the other hand, we believe that our work contributes a new principled method that demonstrates strong empirical performance in challenging domains. 5. Regarding the accuracy of the predicted solutions, we would like to point out that the prediction accuracy doesn't strongly correlate with the performance of the downstream task where the predictions are used (in this paper, the search phase). The ML model is trained on multiple solution samples, and when deployed in the search, we use only a part of the predictions, controlled by the hyperparameters. Therefore, there is no standard way to quantify the accuracy of the ML predictions in this setting that captures the downstream performance.

[Nair et al., 2020] Solving mixed integer programs using neural networks. arXiv 2020.
[Han et al., 2023] A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR 2023.
[Deza et al., 2023] Fast Matrix Multiplication Without Tears: A Constraint Programming Approach. CP 2023.
[Tian, 2022] Understanding Deep Contrastive Learning via Coordinate-wise Optimization. NeurIPS 2022.
KR3sPVMqql
official_comment
1,700,367,348,197
OZRNv4Pv8d
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
title: General Response 2/2 comment: **Choice of MIP Problem Benchmark** We would like to respectfully argue that the benchmark problems used in our paper are already challenging enough for existing MIP solvers such as SCIP and Gurobi, as shown by the results reported in Sections 5.2 and D.2. These benchmarks have been used in various previous studies [Han et al., 2023; Huang et al., 2023; Wu et al., 2021]. We use even larger-scale instances for the combinatorial auction and independent set problems compared to the closely related recent work [Han et al., 2023]. We would like to clarify that we also use two problem domains (item placement and workload apportionment) from the NeurIPS 2021 ML4CO competition. The results for the workload apportionment problem are reported in the Appendix because it is not challenging enough for our setting. We use the same train/validation/test split as suggested by the organizers (we use only 400 instances from their training set, though 9,900 instances are given). We would also like to respectfully disagree with the claim that instances from the ML4CO competition are harder than the other benchmarks. In the competition, they are indeed hard, since the rules of the competition require all heuristics in SCIP (including restart and primal heuristics) to be turned off. However, in our paper, we allow all those options and fine-tune them for our SCIP baseline to maximize its performance. We also found that the workload apportionment problem is indeed too easy for approaches like PaS and ConPaS. We agree with reviewers KTmj and GFiJ that MIPLIB is indeed an important MIP benchmark. However, there are few successful cases of ML-based methods for MIP solving that learn heuristics able to generalize to heterogeneous collections of real-world instances like MIPLIB, which are diverse in their sizes, domains and structures. Following the majority of previous work, we focus on distributional settings for MIP solving, which are also important in real-world applications. However, we believe it is important for future work to develop methods that generalize to diverse MIP instances. [Nair et al., 2020] Solving mixed integer programs using neural networks. arXiv 2020. [Han et al., 2023] A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR 2023. [Huang et al., 2023] Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning. ICML 2023. [Wu et al., 2021] Learning Large Neighborhood Search Policy for Integer Programming. NeurIPS 2021.
50D8TvJWne
official_comment
1,700,367,506,949
LMXXBnRdjZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses, please refer to the discussions on the novelties of the work and the choice of benchmarks in the general response to all reviewers. Below are our responses to the question regarding hyperparameters and coverage rates: We agree that using coverage rates as an alternative way to set k0 and k1 would be more helpful when the instances are diverse in size. In our paper, we describe a systematic way in Section 5.1 "Hyperparameters" to tune both k0 and k1 as a percentage of the number of variables (10%-50%). We believe that this hyperparameter tuning method is easy to follow. We report the results of different k0 for CA to demonstrate how tuning could be done. Regarding the optimal coverage rate studied in [Yoon et al., 2023], it is important for methods like Neural Diving (ND) since ND requires training a separate model for each coverage rate. With an optimal coverage rate identified, the training inefficiency of ND can be overcome. In ConPaS, however, instead of fixing all variables according to the prediction, we let the MIP solver explore regions around the prediction, which allows more flexibility and room for inaccuracy in the prediction, therefore removing the need for an accurate coverage threshold. [Yoon et al., 2023] Threshold-aware Learning to Generate Feasible Solutions for Mixed Integer Programs. arXiv 2023.
0kNBOJMhsI
official_comment
1,700,367,634,501
Uipxj4Qg21
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses and questions concerning the novelties and the choice of MIP benchmarks, please kindly refer to the general responses. In addition, we would like to further discuss how this work could be applied beyond PaS to answer your 2nd question: ConPaS is more versatile since the prediction coming out of its ML model can be useful in different ways. An example is to warm start LNS as mentioned earlier. In addition, one could leverage the ML prediction from ConPaS to assign variable branching priorities and/or generate cuts in tree searches such as branch-and-bound (or branch-and-cut) search. We defer the deployment of ConPaS in different algorithms to future work. We also want to clarify that Neural Diving is a more restricted variant of ConPaS and PaS: it corresponds to setting $\Delta = 0$ in PaS, which allows no change to the assigned values once they are fixed in the search (see the sketch below).
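A minimal sketch of the search phase being discussed, assuming Gurobi as the underlying solver and per-variable probabilities from the ML model; file and function names are illustrative, and delta = 0 recovers Neural Diving-style hard fixing:

```python
import gurobipy as gp

def predict_and_search(path: str, probs: dict, k0: int, k1: int, delta: int):
    # probs maps a binary variable's name to its predicted P(x = 1)
    m = gp.read(path)  # e.g., "instance.mps" (illustrative)
    xs = [v for v in m.getVars() if v.VType == gp.GRB.BINARY]
    by_zero = sorted(xs, key=lambda v: probs[v.VarName])[:k0]   # most likely 0
    by_one = sorted(xs, key=lambda v: -probs[v.VarName])[:k1]   # most likely 1
    # trust region: at most delta of the tentatively fixed variables may flip;
    # delta = 0 hard-fixes them, as in Neural Diving
    flips = gp.quicksum(v for v in by_zero) + gp.quicksum(1 - v for v in by_one)
    m.addConstr(flips <= delta, name="trust_region")
    m.optimize()  # assumes k0 + k1 <= number of binary variables
    return m
```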
8Yg7Up2BrI
official_comment
1,700,367,555,183
PMdcjp79U4
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We thank the reviewer for the feedback and suggestions. Regarding the weaknesses concerning the novelties, the choice of MIP benchmarks, and using SCIP as a baseline (weaknesses 1, 2 and 3), please kindly refer to the general responses to all reviewers posted at the top. Regarding weaknesses 4 and 5: (4) We conduct an additional ablation study on ConPaS-LQ on the MVC and CA problems. (Due to limited computational resources, we are still in the process of obtaining results for ConPaS-inf and the other problems.) The initial results are shown in the table below, where ConPaS-LQ (unweighted) refers to training with the original InfoNCE loss function without considering the different qualities of the samples, and ConPaS-LQ (weighted) refers to training with the modified loss. When we use the original loss function, ConPaS is still able to outperform PaS. Its performance further improves when the modified loss function is used. | | MVC | | CA | | |------------------------|------------|-----------------|------------|-----------------| | | Primal Gap | Primal Integral | Primal Gap | Primal Integral | | PaS | 0.17% | 13.9 | 1.16% | 28.9 | | ConPaS-LQ (unweighted) | 0.12% | 3.3 | 0.57% | 24.3 | | ConPaS-LQ (weighted) | 0.10% | 2.8 | 0.16% | 19.7 | (5) We thank the reviewer for the suggestion of a more accurate description of solvers like Gurobi and CPLEX. We have updated the text accordingly in the new draft.
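One plausible form of a quality-weighted InfoNCE variant like the "weighted" row above (the paper's exact modified loss may differ); a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(anchor, pos, neg, pos_weights, tau=0.07):
    # anchor: (d,); pos: (P, d) positives; neg: (N, d) negatives;
    # pos_weights: (P,) quality weights -- all-ones recovers plain InfoNCE
    sim_p = F.cosine_similarity(anchor.unsqueeze(0), pos) / tau
    sim_n = F.cosine_similarity(anchor.unsqueeze(0), neg) / tau
    num = torch.logsumexp(sim_p + pos_weights.clamp_min(1e-8).log(), dim=0)
    den = torch.logsumexp(torch.cat([sim_p, sim_n]), dim=0)
    return den - num  # negative log of the (weighted) positive mass

d = 16
loss = weighted_info_nce(torch.randn(d), torch.randn(5, d), torch.randn(20, d),
                         pos_weights=torch.rand(5))
```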
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
OZRNv4Pv8d
official_comment
1,700,367,307,631
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
title: General Response comment: We are grateful to all reviewers for their time and helpful suggestions. In this general response, we address concerns and answer questions that come from multiple or all reviews. We summarize our responses and additional findings in the rebuttal text, and we encourage reviewers to look at the updated paper draft uploaded to OpenReview. The updated draft contains new experimental results comparing against Gurobi and a few edits to improve the clarity of the paper. The changes are highlighted in blue for visibility. **Novelties of ConPaS and its Differences from CL-LNS [Huang et al., ICML 2023]** **Differences**: We would like to clarify that our work ConPaS and the existing work CL-LNS published at ICML 2023 this year are complementary to each other. More specifically, ConPaS learns to construct a high-quality (partial) solution from scratch and then complete it, while CL-LNS learns to predict the part of a given solution that is not good enough and then improve it. One could apply ConPaS to warm-start CL-LNS (or any other Large Neighborhood Search (LNS) method). This is similar to the relationship between Neural Diving (a solution construction method) and ML-guided LNS (a solution improvement method) demonstrated in [Nair et al., 2020]. Furthermore, while CL-LNS applies only to Large Neighborhood Search, the prediction from ConPaS's ML model can be useful in different search algorithms for MIP. An example is to warm-start LNS as mentioned above. In addition, one could leverage the ML prediction from ConPaS to assign variable branching priorities and/or generate cuts to improve the performance of tree searches such as branch-and-bound (or branch-and-cut) search. We defer the deployment of ConPaS in those algorithms to future work. **Novelties**: While both ConPaS and CL-LNS use contrastive learning for MIP solving, we would like to point out our main novelties: (i) We design a novel data collection process that considers two types of negative samples. Finding the negative samples is not straightforward, especially when using low-quality solutions as negative samples. In that case, we leverage the techniques of local branching (which are more often used to find improved solutions) to find bad solutions that are similar to good ones, and we formulate this search as a nontrivial bilevel optimization problem; (ii) We design a novel contrastive loss function to take into account positive samples with different solution qualities; (iii) We demonstrate strong empirical performance of ConPaS measured by various metrics, and we also believe that our work contributes a new and valuable empirical method. **Comparisons with SCIP** Regarding comparisons with SCIP, we mentioned in the paper that we indeed fine-tuned SCIP's heuristic settings in our experiments. Specifically, we set SCIP's heuristic mode to AGGRESSIVE to focus on primal bound improvement, and we also allow presolving and restart heuristics in SCIP. We have made these details clear and highlighted them in the revised draft. It is a common practice in the MIP-solving community to present SCIP results for completeness. We want to be clear that we do not intend to make big statements about outperforming SCIP, since the main competitors of ConPaS are the other ML-based approaches - ND [Nair et al., 2020] and PaS [Han et al., 2023]. **Comparisons with Gurobi** We would like to point out that ConPaS is agnostic to the underlying MIP solver that is used in the Predict-and-Search phase. It could be applied to SCIP, Gurobi or CPLEX. In our paper, we demonstrate the effectiveness of ConPaS using SCIP as the solver, but it could also be built upon Gurobi. We have included results on Gurobi in Appendix Section D.2 in the updated draft. Due to limited computation resources, we run experiments with Gurobi, PaS [Han et al., 2023] and ConPaS on MVC, MIS and CA instances. **The results show that ConPaS outperforms Gurobi significantly in terms of both the primal gap and primal integral performance.**
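For readers unfamiliar with local branching, the sketch below illustrates, under our own naming and as a simplification of the paper's bilevel formulation, how one might collect a low-quality negative sample that stays within a small Hamming ball of a high-quality incumbent, using gurobipy.

```python
import gurobipy as gp

def collect_negative_sample(model, incumbent, radius):
    """incumbent: dict var_name -> 0/1 value of a high-quality solution."""
    m = model.copy()
    xs = [m.getVarByName(name) for name in incumbent]
    # Local-branching ball: Hamming distance to the incumbent at most `radius`.
    dist = gp.quicksum(1 - x if incumbent[x.VarName] > 0.5 else x for x in xs)
    m.addConstr(dist <= radius, name="lb_ball")
    # Flip the objective sense: the *worst* feasible solution in the ball
    # is a negative sample that looks similar to the positive one.
    m.ModelSense = -m.ModelSense
    m.optimize()
    return {x.VarName: round(x.X) for x in xs}
```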
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
2W4cMY9YtS
official_comment
1,700,552,524,218
Uipxj4Qg21
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_KTmj" ]
comment: Thank you very much for the detailed response and the improved quality of the paper. However, I think the main concern about this paper is still the difference between ConPaS and CL-LNS. I understand that they can be complementary to each other, but I still believe that the approaches of these two works are similar, or at least strongly correlated, as mentioned by other reviewers. Therefore, I think the authors should include the discussion of ConPaS versus CL-LNS in the **main paper**, instead of just mentioning it as related work; otherwise, the paper may be suspected of deliberately avoiding the comparison. Since the authors use a lot of space in the general response to describe the difference, they cannot assume that readers of the paper will understand it just from a mention and a citation. Given the similarity of ConPaS and CL-LNS, it is not an exaggeration to open a separate subsection, which could include a discussion of the differences or a comparison table. Only in this way can readers fully understand the novelty of this work.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
yfta5giyPF
official_comment
1,700,605,621,249
2W4cMY9YtS
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We would like to thank the reviewer for reading our rebuttal and for the valuable suggestion on addressing the differences between ConPaS and CL-LNS in the main paper. We agree that it is important for the readers to understand the differences. We have added a paragraph, highlighted in blue, at the end of the related work section to address this issue. Please kindly let us know if any concerns remain.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
5GrMsgr4HJ
official_comment
1,700,609,479,183
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: Dear reviewers, Thank you again for taking the time to review our paper. We would be grateful if you could kindly check whether our response has answered your questions, and let us know if any issues remain. In addition to our rebuttal response, we have worked to respond to the points raised by the reviewers and submitted a revision. Here is a summary of our efforts to improve the paper draft: 1. We added experimental results comparing against Gurobi, where ConPaS still shows significant improvement over the baselines. 2. We added a discussion to the end of Section 3 to address the concerns about the novelties and discuss the differences between CL-LNS and ConPaS. 3. We improved the writing of the paper by improving clarity as well as adding and highlighting some important details. If you find that the responses and revisions align well with the paper's objectives and address your initial concerns, we hope that an adjustment in the score could reflect these improvements. Please feel free to ask if you have more questions or if there's anything else we can provide to support your evaluation.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
qBOwsxbPuW
official_comment
1,700,664,835,850
8Yg7Up2BrI
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_nvLV" ]
comment: Dear authors, Thank you for these results. These results are very good to know. Let me emphasize that the statement in bold at the end of your general response does not interest me, and it shouldn't interest the other reviewers either. I now understand the novelty of the paper to be the way you compute the negative examples for the contrastive learning and that there are key differences to CL-LNS. This puts the paper in somewhat of a different light. I don't usually raise scores by this much, but actually the bilevel model for computing negative examples is rather clever and really works well. I encourage the other reviewers to take this into account. I will adjust my review.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
PMdcjp79U4
official_review
1,698,406,646,855
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_nvLV" ]
summary: The paper presents a method for finding primal solutions to mixed-integer programs using a graph neural network-based approach. The training and performance of the approach are improved through the use of contrastive learning, which has been gaining popularity in a variety of deep reinforcement learning applications because it does not require expensive labeling of data to "pre-train" networks. The approach is based on the "predict and search" method from a previous ICLR paper. The approach is evaluated experimentally on several relatively easy MIP problems and a dataset of integer programs from the NeurIPS 2021 ML4CO competition. soundness: 3 good presentation: 3 good contribution: 1 poor strengths: - Contrastive learning shows great promise in the space of combinatorial optimization; we see again and again that it is an effective mechanism for reducing training time and creating great models. - The empirical performance of the method on the datasets tested is quite strong. - (Updated) The novelty of the paper, while not huge, is sufficient for ICLR. The authors have indicated how it differs from CL-LNS, and the bilevel model is an interesting contribution that other groups solving MIPs will want to consider. weaknesses: - The instance dataset is not so great, but I admit there are not so many good public MIP problems out there. Simply put, to a MIP person, claiming that you can solve the CA dataset is just not that interesting. Since all the other MIP papers at ICLR/NeurIPS seem to have the same problem, I'll let it pass. - Using SCIP as a direct point of comparison is not really fair. SCIP is trying to prove optimality, while the method proposed in this work is just a primal heuristic. I appreciate, however, that the authors do not make big claims about beating SCIP the way some papers in this area do. They do seem to understand that beating SCIP is relatively meaningless. - I am a little surprised to not see an ablation study on the modified loss function. (Update: the authors have provided one; the modified loss works and is not the only reason the method outperforms previous work.) - The introduction's description of Gurobi and CPLEX is not complete. They are really branch-and-cut algorithms with (what CPLEX calls) "dynamic search" (and a whole bunch of other stuff, who knows what half of it is...). (Update: this seems to be fixed.) - (Update) I still feel there could be more experimentation regarding the negative examples (e.g., versus the strategy in the CL-LNS paper?). Since this is the main contribution, I wish it were more in focus throughout the paper. questions: All questions have been answered. flag_for_ethics_review: ['No ethics review needed.'] rating: 6: marginally above the acceptance threshold confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. code_of_conduct: Yes
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
IquAq42HFZ
official_comment
1,700,667,810,729
Ui2FqW5xoH
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_fyvF" ]
comment: I thank the authors for the detailed response and the new experimental results. However, some of my concerns remain. 1. Regarding the novelty over CL-LNS, I think "complementary" is not sufficient to justify the differences or novelty. Moreover, the authors claimed two novelties, the negative data collection method and the new loss function, but they do not provide any ablation study to support their advantages. 2. Regarding the prediction accuracy, I am not satisfied with the response. If the prediction has low impact on the downstream tasks, then why do you need a prediction at all? If a poor prediction can also lead to good final performance, then I question the meaning and usefulness of the ML part, and the performance improvement may come from tuning other hyperparameters. So accuracy is important, because it justifies your core contribution, which is the ML component. Also, I do not agree with the last statement in response 5. Prediction accuracy is very easy to quantify, and we do not need to involve the downstream tasks here. I will increase my score, but I still believe that this paper needs further improvement.
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
lLVDYQu4XO
official_comment
1,700,703,703,268
IquAq42HFZ
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Authors" ]
comment: We thank the reviewer for taking the time to read our rebuttal and for the follow-up feedback. Regarding the remaining concerns: 1. We propose novel data collection methods to find negative samples that are new and specifically designed for our solution prediction task. Existing methods for finding negative samples, such as the one proposed for CL-LNS, do not directly apply here. For the modified contrastive loss function, we conduct an additional ablation study on ConPaS-LQ on the MVC and CA problems. The initial results are shown in the table below, where ConPaS-LQ (unweighted) refers to training using the original InfoNCE loss function without considering the different qualities of the samples, and ConPaS-LQ (weighted) refers to training using the modified loss. When we use the original loss function, ConPaS is still able to outperform PaS. Its performance further improves when the modified loss function is used.

|                        | MVC        |                 | CA         |                 |
|------------------------|------------|-----------------|------------|-----------------|
|                        | Primal Gap | Primal Integral | Primal Gap | Primal Integral |
| PaS                    | 0.17%      | 13.9            | 1.16%      | 28.9            |
| ConPaS-LQ (unweighted) | 0.12%      | 3.3             | 0.57%      | 24.3            |
| ConPaS-LQ (weighted)   | 0.10%      | 2.8             | 0.16%      | 19.7            |

2. We report the prediction accuracy quantified by the classification accuracy over all binary variables (with the threshold set to 0.5) in the following table. We report it for both PaS and ConPaS-LQ on the MVC and CA problems on 100 validation instances. The accuracy is the fraction of correctly classified variables averaged over 50 positive samples for each instance, and we report the average accuracy over the 100 validation instances. Since the classification accuracy is sensitive to the threshold, we also report the AUROC. On the MVC instances, though ConPaS has a lower accuracy (w.r.t. threshold 0.5), it has a higher AUROC than PaS. On the CA instances, their accuracies and AUROCs are similar. We would like to again point out that a better accuracy/AUROC does not necessarily indicate better downstream-task performance, even though we believe they are correlated.

|           | MVC      |       | CA       |       |
|-----------|----------|-------|----------|-------|
|           | Accuracy | AUROC | Accuracy | AUROC |
| PaS       | 81.2%    | 0.88  | 88.3%    | 0.87  |
| ConPaS-LQ | 76.9%    | 0.91  | 86.9%    | 0.86  |
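For concreteness, the two reported metrics can be computed per instance as in the short sketch below (our own helper, assuming NumPy arrays and scikit-learn; each positive sample must contain both classes for the AUROC to be defined).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def variable_prediction_metrics(probs, solutions, threshold=0.5):
    """probs: (n_vars,) predicted P(x_i = 1) for one instance;
    solutions: (n_samples, n_vars) 0/1 values of collected positive samples."""
    preds = (probs >= threshold).astype(int)
    accuracy = np.mean([(preds == s).mean() for s in solutions])
    auroc = np.mean([roc_auc_score(s, probs) for s in solutions])
    return accuracy, auroc
```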
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
YDnAiTaavU
official_review
1,698,824,385,414
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Reviewer_fyvF" ]
summary: The authors propose a predict-and-search approach for solving mixed integer programming (MIP), building on the GNN-guided approach of [Han2022]. The algorithm collects high-quality solutions as positive samples and low-quality solutions as negative samples, and then trains the prediction model with contrastive learning. The authors demonstrate that the proposed method outperforms the baseline on four commonly used mixed-integer linear programming datasets. soundness: 3 good presentation: 3 good contribution: 2 fair strengths: 1. Improving the prediction model through contrastive learning is intuitive and effective. 2. The authors' experiments show that the proposed method has a significant improvement over the baseline. 3. The paper is mostly well-written and easy to follow. weaknesses: 1. The technical novelty is limited. First, it is a somewhat straightforward application of contrastive learning to predict-and-search. Second, the proposed method is essentially the same as the ICML 2023 paper [Huang2023] (Figure 1 of this paper almost coincides with Figure 1 in [Huang2023]), if we consider the procedure as a one-step LNS. 2. Since the proposed approach is based on predict-and-search, it cannot guarantee optimality or feasibility. This limitation is not discussed or analyzed properly in this paper. For example, there is no empirical study of the feasibility ratio on the test instances. The authors should also conduct experiments on more constrained problems. Furthermore, it is somewhat unfair to compare the anytime performance with SCIP, since the proposed method (as well as predict-and-search) essentially solves a much simpler problem than SCIP because some variables are fixed. 3. The authors collected training data using Gurobi, but only compared the test performance with SCIP. I cannot see any reason not to compare with Gurobi at test time. 4. The authors used two ways to collect negative samples, but only report their empirical performance, without a deeper analysis of which way is more reasonable. 5. The authors did not report how accurate the solution prediction is. questions: Please see the above weaknesses. flag_for_ethics_review: ['No ethics review needed.'] rating: 5: marginally below the acceptance threshold confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
sCtJyx7nrc
meta_review
1,701,823,479,144
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission2/Area_Chair_JTNe" ]
metareview: This paper proposes a neural model trained with contrastive learning for solving mixed integer linear programs. It obtains the solution using a recently proposed predict-and-search (PaS) strategy and empirically demonstrates the advantage of using contrastive learning on top of PaS. All reviewers agree on the good presentation of this paper and the soundness of this work. The experiments on four datasets also show good results. Nonetheless, the novelty of this work is rather limited, as it is merely an incremental change over PaS (fyvF, GFiJ, KTmj) and is similar to a recent paper on contrastive learning for large neighborhood search (fyvF, KTmj). Also, the lack of an ablation study (fyvF, nvLV) makes the effectiveness of the proposed components questionable. The authors have addressed part of the concerns during the rebuttal. However, the overall lack of novelty remains a major issue. Also, a better understanding of the source of the performance improvement is needed. A rejection is recommended. justification_for_why_not_higher_score: This paper lacks novelty and the ablation study is not well conducted. justification_for_why_not_lower_score: N/A
J2kRjUAOLh
Contrastive Predict-and-Search for Mixed Integer Linear Programs
[ "Taoan Huang", "Aaron M Ferber", "Arman Zharmagambetov", "Yuandong Tian", "Bistra Dilkina" ]
Mixed integer linear programs (MILP) are flexible and powerful tools for modeling and solving many difficult real-world combinatorial optimization problems. In this paper, we propose a novel machine learning (ML)-based framework ConPaS that learns to predict solutions to MILPs with contrastive learning. For training, we collect high-quality solutions as positive samples and low-quality or infeasible solutions as negative samples. We then learn to make discriminative predictions by contrasting the positive and negative samples. During test time, we predict assignments for a subset of integer variables of a MILP and then solve the resulting reduced MILP to construct high-quality solutions. Empirically, we show that ConPaS achieves state-of-the-art results compared to other ML-based approaches in terms of the quality of and the speed at which the solutions are found.
/pdf/b0dbd4b8099f6cdd01e7459b5b849a7e395b32d8.pdf
irRrnFXFjA
decision
1,705,406,011,934
J2kRjUAOLh
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Program_Chairs" ]
title: Paper Decision decision: Reject
ZGBOfAQrMl
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
[ "Xingyu Zhou", "Leheng Zhang", "Xiaorui Zhao", "Keze Wang", "Leida Li", "Shuhang Gu" ]
Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task. Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart phones and consumer electronic products. In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR). The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features. Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input features into consideration and only exploits previously enhanced features to provide supplementary information. In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames. We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches. The experimental results demonstrate that MIA-VSR improves the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy.
/pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf
SkOC9YExMC
official_review
1,698,846,203,387
ZGBOfAQrMl
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission3/Reviewer_kfYa" ]
summary: This paper presents a novel Transformer-based video super-resolution model called MIA-VSR (Masked Intra and Inter-frame Attention Video Super-Resolution). The model aims to improve the efficiency of video super-resolution by leveraging temporal continuity between adjacent frames and reducing redundant computations. The key components of MIA-VSR include an intra-frame and inter-frame attention block (IIAB) and an adaptive mask predicting module. soundness: 3 good presentation: 3 good contribution: 3 good strengths: 1. Improved efficiency: MIA-VSR reduces computational complexity and memory footprint without sacrificing video super-resolution performance. 2. Effective use of temporal information: The model leverages temporal continuity between frames to avoid unnecessary computations and provide better results. 3. Adaptive masking: The adaptive mask predicting module generates block-wise masks to skip unimportant computations, further improving efficiency. weaknesses: 1. Complexity: The model may be more complex to implement and train compared to simpler video super-resolution methods. 2. Limited applicability: The effectiveness of MIA-VSR may be limited to specific video super-resolution tasks and datasets. 3. Runtime: Although MIA-VSR reduces computational complexity, its runtime may still be slower than some other methods due to the Transformer architecture. questions: 1. In the comparison with state-of-the-art methods, you mentioned that MIA-VSR achieves better trade-offs between accuracy and efficiency. How does MIA-VSR handle the trade-off between model size and computational efficiency? Can you provide more quantitative analysis or visualizations to support this claim? 2. Can you provide some insights on the design choices for the Intra-frame and Inter-frame Attention Block (IIAB)? How does it differ from other attention mechanisms used in video super-resolution models? flag_for_ethics_review: ['No ethics review needed.'] rating: 6: marginally above the acceptance threshold confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
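To illustrate the adaptive-masking idea this review describes, here is a minimal PyTorch sketch of block-wise skipping based on feature similarity between adjacent frames. The paper's mask predicting module is learned, so this thresholded cosine-similarity version is only an illustrative simplification with names of our own choosing.

```python
import torch
import torch.nn.functional as F

def block_skip_mask(feat_prev, feat_cur, block=8, thresh=0.9):
    """feat_prev, feat_cur: (C, H, W) features of adjacent frames.
    Returns a (H // block, W // block) boolean mask that is True where the
    block changed enough that it must be recomputed."""
    p = F.avg_pool2d(feat_prev.unsqueeze(0), block).squeeze(0)  # (C, H/b, W/b)
    c = F.avg_pool2d(feat_cur.unsqueeze(0), block).squeeze(0)
    sim = F.cosine_similarity(p, c, dim=0)  # per-block similarity, (H/b, W/b)
    return sim < thresh  # skip computation where blocks are nearly identical
```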
ZGBOfAQrMl
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
[ "Xingyu Zhou", "Leheng Zhang", "Xiaorui Zhao", "Keze Wang", "Leida Li", "Shuhang Gu" ]
Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task. Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart phones and consumer electronic products. In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR). The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features. Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input features into consideration and only exploits previously enhanced features to provide supplementary information. In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames. We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches. The experimental results demonstrate that MIA-VSR improves the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy.
/pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf
UKSQxS1KNk
official_review
1,698,743,938,226
ZGBOfAQrMl
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission3/Reviewer_oDvo" ]
summary: To address the heavy computational burden and large memory footprint of Transformer-based video super-resolution (VSR), this paper proposes masked intra- and inter-frame attention (MIA-VSR). MIA-VSR uses feature-level temporal continuity between adjacent frames. The experiments demonstrate the effectiveness of the proposed method. soundness: 2 fair presentation: 2 fair contribution: 2 fair strengths: 1. This paper proposes an intra-frame and inter-frame attention block to enhance SR features, and proposes an adaptive mask predicting module to mask out unimportant regions between adjacent frames. 2. Compared with existing Transformer-based VSR methods, the proposed method has less computational cost and a smaller memory footprint. weaknesses: 1. The novelty of this paper is not clear. 2. The performance is not significant on benchmark datasets. Although the proposed method has less computational cost and a smaller memory footprint than existing Transformer-based VSR methods, it is still challenging to deploy on smartphones (the main issue that the authors aim to solve). questions: 1. The motivations of the paper are to reduce the computational burden and the large memory footprint, and to propose a VSR method for smart phones and consumer electronic products. However, the model size is large and the model is not very efficient. For real applications, BasicVSR++ has more advantages than the proposed MIA-VSR. Compared with MIA-VSR, RVRT has a smaller model size, less runtime and comparable PSNR. 2. Some details in Figure 1 are not clear. For example, the inputs of MPM are not specified. How is x_m^{t-2} in the orange block obtained? What are the blue dashed lines? Why are the output video results poor? 3. The performance is not significant under different metrics. In addition, in Figure 4, it would be better to provide BasicVSR++ results instead of BasicVSR or EDVR. flag_for_ethics_review: ['No ethics review needed.'] details_of_ethics_concerns: None rating: 3: reject, not good enough confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. code_of_conduct: Yes
ZGBOfAQrMl
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
[ "Xingyu Zhou", "Leheng Zhang", "Xiaorui Zhao", "Keze Wang", "Leida Li", "Shuhang Gu" ]
Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task. Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart phones and consumer electronic products. In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR). The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features. Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input features into consideration and only exploits previously enhanced features to provide supplementary information. In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames. We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches. The experimental results demonstrate that MIA-VSR improves the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy.
/pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf
XgK5VCdL4l
official_review
1,698,637,121,669
ZGBOfAQrMl
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission3/Reviewer_FTQC" ]
summary: The paper proposes a new framework called MIA-VSR for video super-resolution (VSR) tasks. The framework utilizes a feature-level masked processing approach to reduce the computational burden and memory footprint, making it more suitable for deployment on constrained devices. The key component of MIA-VSR is the intra-frame and inter-frame attention block, which considers the roles of past features and input features and only uses previously enhanced features to provide supplementary information. Additionally, an adaptive block-wise mask predicting module is developed to skip unimportant computations based on feature similarity between adjacent frames. Ablation studies and comparisons with state-of-the-art VSR methods demonstrate that MIA-VSR improves memory and computation efficiency without sacrificing PSNR accuracy. soundness: 2 fair presentation: 2 fair contribution: 2 fair strengths: 1. The authors try to accelerate Transformer-based VSR at multiple levels, and I think masked processing is reasonable. 2. The comparative experiments are objective and detailed. weaknesses: 1. My main concern is that the performance of this method is not significantly improved compared to previous work. 1.1 From Table 3, choosing a Transformer for this task does not introduce obvious benefits, especially since CNNs are compatible with more inference acceleration frameworks. Compared with BasicVSR++, subsequent work uses an order of magnitude more computational overhead but has not made an improvement that I think is worth it. The impression this paper gives me is that it hopes to improve the practicality of this type of method by improving the processing efficiency of VSR. However, the results of the paper do not seem to achieve this goal. After all, BasicVSR++ is already very slow for users. 1.2 In terms of visual comparisons, overall there seems to be no significant advantage over PSRT-recurrent. 2. About masked processing: 2.1 I'm worried it's not novel enough. In low-level vision, this kind of block-wise processing is not uncommon. Here are a few examples: * Image SR: Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks * Background Matting: Real-Time High-Resolution Background Matting Although this paper does so by considering temporal continuity, given the results shown in Table 1, I think this contribution is insufficient. questions: 1. How do the authors avoid the blocking artifacts that mask processing may introduce? 2. I think Figure 1 needs to be redrawn; what is the key message this figure is trying to highlight? 3. If generating 720p video requires one second per frame to process, in what scenario do we need video SR? flag_for_ethics_review: ['No ethics review needed.'] rating: 5: marginally below the acceptance threshold confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
ZGBOfAQrMl
Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention
[ "Xingyu Zhou", "Leheng Zhang", "Xiaorui Zhao", "Keze Wang", "Leida Li", "Shuhang Gu" ]
Recently, Vision Transformer has achieved great success in recovering missing details in low-resolution sequences, i.e. the video super-resolution (VSR) task. Despite its superior VSR accuracy, the heavy computational burden as well as the large memory footprint hinders the deployment of Transformer-based VSR models on constrained devices, e.g. smart phones and consumer electronic products. In this paper, we address the above issue by proposing a novel feature-level masked processing framework: VSR with Masked Intra and inter frame Attention (MIA-VSR). The core of MIA-VSR is leveraging feature-level temporal continuity between adjacent frames to reduce redundant computations and make more rational use of previously enhanced SR features. Concretely, we propose an intra-frame and inter-frame attention block which takes the respective roles of past features and input features into consideration and only exploits previously enhanced features to provide supplementary information. In addition, an adaptive block-wise mask predicting module is developed to skip unimportant computations according to feature similarity between adjacent frames. We conduct detailed ablation studies to validate our contributions and compare the proposed method with recent state-of-the-art VSR approaches. The experimental results demonstrate that MIA-VSR improves the memory and computation efficiency over state-of-the-art methods, without trading off PSNR accuracy.
/pdf/50d9a5757c70e20b8a2b2c3de85840db57e6d597.pdf
3zFTJy36YZ
official_review
1,698,502,710,527
ZGBOfAQrMl
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission3/Reviewer_FxaB" ]
summary: This paper proposes a Transformer-based recurrent video super-resolution model, termed MIA-VSR. The aim is to reduce the redundant computation in VSR models. To achieve this goal, the authors propose two components: an intra-frame and inter-frame attention block (IIAB or MIA) and an adaptive mask predicting module (MPM). MIA aims to provide supplementary information from the previously enhanced features (temporal information). MPM aims to generate block-wise masks to reduce the computation. Experiments show that MIA-VSR achieves good results on several datasets. soundness: 2 fair presentation: 2 fair contribution: 2 fair strengths: 1. In the field of video super-resolution, redundant computation is very worthy of study. The problem explored in this paper is meaningful, and reducing feature computation through a mask strategy sounds reasonable. 2. The key idea is simple and easy to understand. From the experimental results, the method in this paper seems to be effective. weaknesses: 1. The core problem to be solved in this paper is the redundant computation in VSR, so the mask strategy is proposed to reduce the computation of unimportant features. While designing attention mechanisms to take advantage of temporal information has been discussed in many previous works, the second contribution of this article (MIA) does not seem to differ from existing attention mechanisms. In other words, directly computing attention over the output features after the mask strategy is also computation-intensive. If the authors claim the attention mechanism as a contribution, it should be contrasted with this baseline. 2. In terms of reducing redundant computation through the mask strategy, the authors should discuss how it differs from other approaches such as Token Merging and TTVSR. TTVSR also reduces the computation by limiting the attention mechanism to trajectories derived from optical flow, exploiting the temporal relationships that optical flow captures. The authors should cite and discuss the differences. 3. Experiments. The mask has a binarization threshold, and the authors should perform ablation experiments on it, including the FLOPs, rather than just choosing 0.5. Many new VSR methods are not referenced and compared, such as TTVSR and FTVSR. 4. Writing. The second paragraph of the intro is written like related work. The figures in this paper are messy and not easy to understand, e.g., Fig. 1, where many arrows are easy to misread. For example, why do two arrows refer to mask M? Is it the output of MPM? 5. The authors claim to address the problem of heavy computational burden. According to Tab. 9, compared with BasicVSR++ (7.3M/92ms/32.39dB), MIA-VSR achieves 16.5M/822ms/32.78dB. These results do not show its advantages. To sum up, I think the innovation and contribution of this paper are not obvious enough, and the writing and experimentation are not sufficient. This paper is not sufficient for acceptance by ICLR. questions: see weaknesses flag_for_ethics_review: ['No ethics review needed.'] rating: 3: reject, not good enough confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
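On the mask binarization point raised in weakness 3: training a hard 0/1 mask with a fixed threshold is commonly done with a straight-through estimator. The sketch below is a generic PyTorch illustration of that trick, not the paper's actual MPM implementation.

```python
import torch

def binarize_mask(logits: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
    """Forward pass: hard 0/1 mask from thresholding sigmoid(logits).
    Backward pass: straight-through gradient flows via the soft sigmoid."""
    soft = torch.sigmoid(logits)
    hard = (soft >= thresh).float()
    return hard + (soft - soft.detach())
```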
Ny150AblPu
Exposing Text-Image Inconsistency Using Diffusion Models
[ "Mingzhen Huang", "Shan Jia", "Zhou Zhou", "Yan Ju", "Jialing Cai", "Siwei Lyu" ]
In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets, act as "omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation.
/pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf
5QZYQUEu7b
official_review
1,698,783,361,796
Ny150AblPu
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission5/Reviewer_qTYR" ]
summary: This paper studies how to detect image-text inconsistency with diffusion models. More specifically, the authors design a pipeline that iteratively uses diffusion models to edit the text and images in the image-text pairs and gradually optimize a mask that points out where the inconsistency comes from. This task is interesting and meaningful for misinformation detection, as it provides interpretable prediction results. To evaluate the proposed method, the authors collected a dataset containing image-text pairs and their inconsistency masks. Experiments show that the proposed method outperforms baselines and gives explainable predictions of the inconsistency. soundness: 3 good presentation: 2 fair contribution: 4 excellent strengths: 1. The task studied in this paper is meaningful. 2. The dataset that they collected is a contribution to the community. 3. The method is novel. weaknesses: 1. The writing is not very good. I spent several hours reading the methodology part to understand their pipeline. 2. The idea is well justified for the inconsistency of object alignment. But what if the predicate is not aligned, i.e., the person is correct but the action is not? questions: How do the annotation and the model handle predicates? flag_for_ethics_review: ['No ethics review needed.'] rating: 8: accept, good paper confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. code_of_conduct: Yes
Ny150AblPu
Exposing Text-Image Inconsistency Using Diffusion Models
[ "Mingzhen Huang", "Shan Jia", "Zhou Zhou", "Yan Ju", "Jialing Cai", "Siwei Lyu" ]
In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets, act as "omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation.
/pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf
BGAsoFzNFk
official_review
1,698,757,618,691
Ny150AblPu
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission5/Reviewer_LEKC" ]
summary: This paper presents D-TIIL for identifying and localizing inconsistencies between text and images. A new dataset, TIIL, containing 14K consistent and inconsistent text-image pairs, is introduced for evaluating the method. D-TIIL outperforms existing approaches in terms of accuracy and demonstrates more explainable results. In a nutshell, the paper offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency. However, it also acknowledges the potential misuse of the method for creating deceptive text-image pairs and suggests improving the algorithm and restricting access. soundness: 3 good presentation: 2 fair contribution: 2 fair strengths: 1. Originality: The paper introduces a novel method, D-TIIL, that exposes text-image inconsistency together with the location of inconsistent image regions and words. Also, the new TIIL dataset is the first with pixel-level and word-level inconsistency annotations, providing fine-grained and reliable inconsistency labels. 2. Quality: The D-TIIL method and the TIIL dataset generation are thoroughly described. The paper also provides a comprehensive comparison of the proposed method with existing approaches. 3. Clarity: The paper is well-structured and clearly written. The method is explained in detail and the experimental results are presented in an understandable manner. 4. Significance: The D-TIIL method improves the accuracy of inconsistency detection and provides more explainable results. The introduction of the diffusion model makes it possible to align text and images in a joint latent representation space, discounting irrelevant information and incorporating broader knowledge. weaknesses: 1. The paper acknowledges that D-TIIL may struggle with inconsistencies that depend on specific external knowledge, which could reduce the method's effectiveness in real-world applications. 2. The D-TIIL method relies heavily on text-to-image diffusion models and benefits substantially from their already well-aligned semantic space. This dependence could limit the generalizability of the proposed method. 3. There are some confusing details in the method description section. 4. In the comparison of methods, the reasons why D-TIIL is superior are not discussed and analyzed in detail, and potential solutions for the failure cases are not provided. 5. More specific discussion and concrete measures could be included to prevent potential abuse, rather than simply restricting access. questions: 1. Regarding Step 3 in Section 3 METHOD, the proposed E_{dnt} and descriptions like “include extra implicit information from the images and excludes additional implicit information that only appears in the text” raise doubts about the effectiveness of the “text denoising” process, which seems too idealistic. In Section 5.4, for example, there is the failure case of the word "office". This leads to the suspicion that the D-TIIL method is only valid for simple objects, not for backgrounds or objects with more complex semantics. 2. Also, the high dependency on the diffusion model affects the generalizability of the method. If text and image are not well aligned in the latent space, the validity of the method suffers further. Semantic entanglement can also occur. 3. Regarding Step 4 in Section 3 METHOD, descriptions like “We then compute the cosine similarity score between this image embedding and the input text embedding” are confusing to readers. 4. In the Data Generation part of Section 4 TIIL DATASET, it is unclear where T_{m} comes from; is it manually designed? flag_for_ethics_review: ['No ethics review needed.'] rating: 5: marginally below the acceptance threshold confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. code_of_conduct: Yes
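To make the step that question 3 quotes concrete, here is a minimal sketch of computing a cosine similarity between an image embedding and a text embedding with a CLIP-style encoder. The checkpoint, file name, and caption are illustrative assumptions, not details taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A generic CLIP encoder (an assumption; the paper may use a different
# encoder inside its diffusion pipeline).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("edited_image.png")   # hypothetical input image
text = "a dog sitting in an office"      # hypothetical caption

inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])

# After L2 normalization, the dot product is the cosine similarity in [-1, 1].
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
print(f"cosine similarity: {(img_emb * txt_emb).sum(dim=-1).item():.3f}")
```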
Ny150AblPu
Exposing Text-Image Inconsistency Using Diffusion Models
[ "Mingzhen Huang", "Shan Jia", "Zhou Zhou", "Yan Ju", "Jialing Cai", "Siwei Lyu" ]
In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets, act as "omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation.
/pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf
GRny6S4ecC
official_review
1,698,745,076,380
Ny150AblPu
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission5/Reviewer_4tYr" ]
summary: The authors propose D-TIIL (Diffusion-based Text-Image Inconsistency Localization), a system for automatically identifying and explaining text-image inconsistencies. D-TIIL uses text-to-image diffusion models to locate semantic inconsistencies in text-image pairs. Diffusion models trained on large datasets filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate the effectiveness of D-TIIL, the authors also introduce a new dataset (TIIL) with 14K consistent and inconsistent text-image pairs. soundness: 2 fair presentation: 3 good contribution: 2 fair strengths: • The paper is well written and well structured • The problem and the related work are well introduced • The framework is explained in detail • The idea of building consistency scores between Stable Diffusion outputs and the original image is interesting. weaknesses: • The general theoretical idea behind the approach lacks clarity • The real-world application is not very clear, e.g., wrong labels constitute a different type of mislabeling than objects that are simply swapped • Sensitivity to the threshold highly influences the mask M and the consistency score With D-TIIL, the authors have presented an interesting method for using diffusion models to evaluate the consistency of image-text pairs. However, the utility of the method is not evaluated in full detail. Deeper insights into why this approach works are lacking. In addition, it would be nice to see how the approach works on other datasets where the labeling is simply mixed up or misleading. For an ICLR submission, I would also recommend investigating the method in more detail in terms of learned representations. The paper is well written and has some interesting ideas, e.g., the use of diffusion models for detecting image-text inconsistency. Both the method and the dataset are valuable. However, for acceptance at ICLR I would expect deeper investigation of the method and the dataset: what is learned, and what are the shortcomings? There are some doubts, e.g., that the model could be sensitive to the DALL-E-generated part rather than to the text-image inconsistency itself. Experiments evaluating this underlying behavior are missing. Moreover, a second evaluation on another dataset with more established baselines would be preferable to substantiate some of the assumptions, advantages, and shortcomings of the method. questions: • How does the approach perform on completely wrong image descriptions? o Is the whole image masked? • Is the model sensitive to the image part generated by DALL-E rather than to the parts that do not correspond to the text? o Is there an experiment that can prove that? o Perhaps regenerate the images in the dataset with the correct semantic class as well? • Is there another dataset on which the method could also be compared to other baselines? flag_for_ethics_review: ['No ethics review needed.'] rating: 5: marginally below the acceptance threshold confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. code_of_conduct: Yes
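The threshold-sensitivity concern raised in the weaknesses can be made concrete with a small sketch: a soft per-pixel difference map is binarized into the mask M, and both M and any score derived from it shift with the chosen threshold. The difference map and threshold values below are synthetic assumptions, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a per-pixel difference map between the original image and
# a diffusion-regenerated version (purely synthetic here).
diff_map = rng.random((64, 64))

for tau in (0.3, 0.5, 0.7):
    M = diff_map > tau  # binary inconsistency mask
    print(f"tau={tau:.1f}: {M.mean():.1%} of pixels flagged inconsistent")
```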
Ny150AblPu
Exposing Text-Image Inconsistency Using Diffusion Models
[ "Mingzhen Huang", "Shan Jia", "Zhou Zhou", "Yan Ju", "Jialing Cai", "Siwei Lyu" ]
In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts with different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets, act as "omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation.
/pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf
wVFaJBiRpf
official_review
1,698,487,488,683
Ny150AblPu
[ "everyone" ]
[ "ICLR.cc/2024/Conference/Submission5/Reviewer_D7Cn" ]
summary: This paper develops a new method, D-TIIL, to expose text-image inconsistency together with the location of inconsistent image regions and words, a problem that quite commonly arises with text-to-image (T2I) generative diffusion models. To support this, they introduce a new dataset, TIIL, for evaluating text-image inconsistency localization, with pixel-level and word-level inconsistency annotations. soundness: 3 good presentation: 2 fair contribution: 3 good strengths: 1. The dataset's contribution is commendable. Existing datasets lack the capacity to furnish evidence regarding inconsistencies occurring at both the image-region and word levels, which is essential for evaluating D-TIIL (Diffusion-based Text-Image Inconsistency Localization). 2. The problem addressed in this research is of significant importance. Previous methods have primarily focused on determining the presence of inconsistencies, whereas this paper introduces a novel approach to pinpointing the specific locations where these inconsistencies occur. weaknesses: 1. It would be valuable to explore whether this method could be extended to evaluate other text-to-image (T2I) augmentation techniques (e.g., [1-3]). Given the abundance of research on generating images based on textual prompts, applying this method for evaluation purposes could have a broader impact and contribute significantly to the field. 2. Are there alternative evaluation metrics to assess the correspondence between text and images? In my experience, CLIP scores may not consistently capture performance accurately in various scenarios. [1] Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models. SIGGRAPH 2023 [2] Improving Sample Quality of Diffusion Models Using Self-Attention Guidance. ICCV 2023 [3] Expressive Text-to-Image Generation with Rich Text. ICCV 2023 questions: As mentioned in the weaknesses above, I would appreciate seeing the proposed method applied more extensively in evaluation. The inclusion of evaluation metrics beyond CLIP scores could enhance the robustness and credibility of this paper. flag_for_ethics_review: ['No ethics review needed.'] rating: 6: marginally above the acceptance threshold confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. code_of_conduct: Yes
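One way to probe the CLIP-score limitation this review raises is to use a relative score rather than a single absolute one: compare the image's similarity to the original caption against its similarity to a perturbed caption and inspect the margin. A hedged sketch follows; the checkpoint, captions, and file name are illustrative assumptions, not part of the paper's protocol.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")        # hypothetical image
captions = [
    "a basketball player dunking",       # assumed original caption
    "a soccer player kicking a ball",    # assumed perturbed caption
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    # logits_per_image holds the image's similarity to each caption.
    logits = model(**inputs).logits_per_image  # shape (1, 2)

# A large positive margin suggests the image is consistent with the
# original caption; a small or negative margin flags a possible mismatch.
margin = (logits[0, 0] - logits[0, 1]).item()
print(f"margin of original over perturbed caption: {margin:.2f}")
```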